Tag Archives: AWS Identity and Access Management (IAM)

How to centralize and automate IAM policy creation in sandbox, development, and test environments

Post Syndicated from Mahmoud ElZayet original https://aws.amazon.com/blogs/security/how-to-centralize-and-automate-iam-policy-creation-in-sandbox-development-and-test-environments/

To keep pace with AWS innovation, many customers allow their application teams to experiment with AWS services in sandbox environments as they move toward production-ready architecture. These teams need timely access to various sets of AWS services and resources, which means they also need a mechanism to help ensure least privilege is granted. In other words, your application team generally shouldn’t have access to administrative resources, such as an AWS Lambda function that takes periodic Amazon Elastic Block Store snapshot backups, or an Amazon CloudWatch Events rule that sends events to a centralized information security account managed by your security team.

In this blog post, I’ll show you how to create a centralized and automated workflow that creates and validates AWS Identity and Access Management (IAM) policies for application teams working in various sandbox, development, and test environments. Your security developers can customize this workflow according to the specific requirements of your security team. They can create logic to limit the allowed permission sets based on account type or owning team. I’ll use AWS CodePipeline to create and manage a workflow containing various stages and spanning multiple AWS accounts that I’ll describe in more detail in the next section.

Solution overview

I’ll start with this scenario: Alice is an administrator for an AWS sandbox account used by her organization’s data scientists to try out AWS analytics services such as Amazon Athena and Amazon EMR. The data scientists assess the suitability of these services for their production use cases by running sample analytics jobs on portions of real data sets after any sensitive information has been taken out. The data sets are stored in an existing Amazon Simple Storage Service (Amazon S3) bucket. For every new project, Alice authors a new IAM policy that allows the project team to access their requested Amazon S3 bucket and create their analytics clusters. However, Alice must follow a company guideline that sandbox accounts can only launch specific Amazon Elastic Compute Cloud (Amazon EC2) instance types. She must also restrict access to all administrative AWS Lambda functions and CloudWatch Events rules that the security team uses to monitor sandbox account compliance. Below is the solution that meets these requirements and makes it easier for Alice and other administrators to perform their tasks.
 

Figure 1: Solution architecture

  1. Alice uses the IAM visual editor to author a template that gives the data science team access to launch and manage EMR clusters that analyze S3-based data sets. She then uploads the IAM JSON policy document into an existing S3 bucket using an AWS Key Management Service (AWS KMS) key. The key and the S3 bucket are already created by the security team as part of account baselining, which I will detail later in this post.
  2. AWS CodePipeline automatically fetches the IAM JSON policy document and invokes a sequence of validation checks that use a single and central Lambda function hosted in an AWS account managed by the security team.
  3. If the IAM JSON policy adheres to all account and general security requirements coded by the security developers, the central Lambda function automatically creates the policy in Alice’s account and the pipeline will succeed. The central validation Lambda function will also attach a set of predefined explicit denies to the IAM policy to ensure that it limits undesired user capabilities in the sandbox account. If the IAM JSON policy fails the checks, the pipeline will fail and provide Alice the specific reason for non-compliance. Alice must then modify the policy and resubmit. When the policy has been successfully created, Alice will attach it to the right IAM user, group, or role.
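
Once the pipeline succeeds, attaching the validated policy is a single IAM call. Below is a minimal boto3 sketch of that final attachment step; the role name DataScienceTeamRole is a hypothetical placeholder, and Ec2RunTeamA matches the sample policy name used later in this post.

import boto3

iam = boto3.client('iam')
account_id = boto3.client('sts').get_caller_identity()['Account']

# ARN of the policy that the central validation Lambda function created
policy_arn = 'arn:aws:iam::{}:policy/{}'.format(account_id, 'Ec2RunTeamA')

# Attach the validated policy to an existing role used by the data science team
iam.attach_role_policy(
    RoleName='DataScienceTeamRole',   # hypothetical role name
    PolicyArn=policy_arn)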

Solution deployment

This solution includes the following three steps:

  • Step 1: Deploy the solution prerequisites
  • Step 2: Set up the policy validation Lambda function in the central information security account
  • Step 3: Test the sandbox account pipeline

Prerequisites

As this solution manages permissions granted to AWS services or IAM entities, I highly recommend that you try the solution first in an isolated test environment to make sure it meets all your security requirements.

  1. You’ll need administrator access in two AWS accounts to set up the solution. The deployment of this solution is typically done by one of your organization’s administrators while setting up new AWS accounts. These are the two account types you’ll need access to:
     

    • A sandbox account. This lets application teams experiment with various AWS architectures. This could be a development or test account, as mentioned earlier.
    • A central information security account. Typically, this is owned by an information security team who monitors and enforces security compliance within a multi-account structure.


Important: Because the Lambda function that you’ll create in the information security account has highly privileged permissions, it’s important to strictly follow best practices for securing the account. You need to limit account access to security team members. Sandbox account administrators should also not give this central Lambda function any IAM permissions in their sandbox account beyond IAM policy creation.

  2. Because you’ll use the AWS Management Console for both AWS accounts, I strongly recommend that you have roles in both AWS accounts and use the console’s Switch Role feature. You can attach an alias to each account and give each a different color code so that you always know which one you’re logged into.
  3. Make sure to use the same AWS region for all the resources that you create for this solution.

Step 1: Deploy the solution prerequisites

Before building the pipeline across the two AWS accounts, you must first configure the required resources in both accounts, such as IAM roles and encryption keys. This configuration is typically done according to your security team’s guidelines when your organization first sets up the sandbox, development, or test environment.

Important

  • In addition to the initial setup you’ll create in this section, your security team must explicitly deny sandbox, development, or test account administrators from attaching IAM policies that do not meet the allowed security policies for that account type, such as the AdministratorAccess IAM policy. Moreover, your security team must ensure that no current or future users, groups, or roles in the account have permissions to directly create or update IAM policies through actions such as CreatePolicy, CreatePolicyVersion, PutRolePolicy, PutUserPolicy, PutGroupPolicy, or UpdateAssumeRolePolicy. You want to ensure that permissions can only be created through the automation pipeline, which I’ll show you how to build shortly.
  • Because the solution I’ll be describing focuses on the creation of least privilege permissions, it’s highly advisable that your security team combines the solution with IAM permission boundaries to make sure that any permissions defined in this solution are scoped by a set of pre-defined permissions for every type of account in the organization. For example, your account administrators might only be allowed to create IAM users or roles with a pre-defined set of permission boundaries that limit the permissions attached to those principals. For more information about permission boundaries, please refer to this AWS Security blog post.
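
As an illustration of that pattern, here’s a minimal boto3 sketch that creates a role with a pre-defined permissions boundary attached. The boundary policy name, role name, and trust policy are hypothetical placeholders, not resources created by this solution.

import json
import boto3

iam = boto3.client('iam')
account_id = boto3.client('sts').get_caller_identity()['Account']

# Hypothetical pre-defined boundary maintained by the security team
boundary_arn = 'arn:aws:iam::{}:policy/sandbox-permission-boundary'.format(account_id)

assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}

# Any role created this way can never exceed the permissions allowed by the boundary
iam.create_role(
    RoleName='AnalyticsExperimentRole',          # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(assume_role_policy),
    PermissionsBoundary=boundary_arn)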

Create the sandbox account prerequisites

Follow the steps below to deploy an AWS CloudFormation template that will create the following resources in the sandbox account:

  • An S3 bucket where your sandbox administrators will upload IAM policies
  • An IAM role that your automated pipeline will use to access the S3 bucket that stores the IAM policies
  • An AWS KMS key that you will use to encrypt the IAM policies in your S3 bucket
  1. While logged in to your sandbox account in your default browser, select this link to launch an AWS stack with the sandbox environment prerequisites. You’ll be redirected to the CloudFormation console with the template URL already populated.
     
    Figure 2: CloudFormation console with prepopulated URL

  2. Select Next and, optionally, provide a name for your stack. A suggested stack name, Sandbox-Prerequisites, should already be populated.
  3. The template defines an input parameter called CentralAccount that you can populate with the AWS account ID of your security account. For more information on how to find the account ID of your security account, check here.
  4. Select Next, and then select Next again.
  5. To have the stack create the IAM roles that your pipeline will use, select the check box that says I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create Stack.
  6. Select the Stack info tab and refresh periodically while watching the Stack Status field value. After your stack reaches the state CREATE_COMPLETE, navigate to the CloudFormation Outputs tab and copy the following output values to the text editor of your choice. You’ll use these values in subsequent CloudFormation stacks.
     
    Figure 3: CloudFormation Outputs tab
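
If you prefer to capture these output values programmatically rather than copying them into a text editor, here’s a minimal boto3 sketch; it assumes you kept the suggested stack name, Sandbox-Prerequisites.

import boto3

cloudformation = boto3.client('cloudformation')

# Describe the prerequisites stack and print its outputs
stack = cloudformation.describe_stacks(StackName='Sandbox-Prerequisites')['Stacks'][0]
outputs = {o['OutputKey']: o['OutputValue'] for o in stack['Outputs']}

for key, value in outputs.items():
    print('{}: {}'.format(key, value))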

Create the information security account prerequisites

Follow the steps below to deploy a CloudFormation template that will create the following resources in your information security account:

  • An IAM role used by your automated pipeline to invoke your central Lambda function and to provide access to the sandbox account KMS key
  • An IAM role used by the central Lambda function to assume a role in the sandbox account and manage IAM policies
  1. While logged in to your security account in your default browser, select this link to launch an AWS stack with the security environment prerequisites. You’ll be redirected to the CloudFormation console with the template URL already populated.
  2. Select Next and, optionally, provide a name for your stack. A suggested stack name, Sandbox-Prerequisites, should already be populated.
  3. Populate the following input parameter fields:
    • SandboxAccount: The AWS account ID for the sandbox account.
    • ArtifactBucket: The bucket name that you noted in your text editor from the previous stack run in the sandbox account
    • CMKARN: The Amazon Resource Name (ARN) of the KMS key that you noted in your text editor from the previous stack run in the sandbox account
    • PolicyCheckerFunctionName: The name of the Lambda function to be created later. The default value is PolicyChecker
  4. Select Next, and then select Next again.
  5. To have the stack create the IAM roles used by your pipeline, select the box that reads I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create Stack.
  6. Wait for your stack to reach the CREATE_COMPLETE state.

Create the sandbox account pipeline

Now, switch back to your sandbox account and deploy the CloudFormation template that will create the following resources in the sandbox account:

  • An AWS CodePipeline automation pipeline that fetches the IAM policy document from S3 and sends it to the security account for centralized validation. If valid, a Lambda function in the information security account will also create the IAM policy in the sandbox account.
  • An S3 bucket policy to allow your central Lambda function to fetch the IAM policy JSON document from your bucket
  • An IAM role that will be assumed by the Lambda function in the central information security account and used to create IAM policies in the sandbox account. Sandbox account administrators can then attach those IAM policies to the required entities, such as an IAM user or role.
  1. While logged in to your sandbox account in your default browser, select this link to launch an AWS stack that creates the sandbox account pipeline. You’ll be redirected to the CloudFormation console with the template URL already populated.
  2. Click Next and, optionally, provide a name for your stack. A suggested stack name, Sandbox-Pipeline, should already be populated.
  3. Populate the following input parameter fields:
    • CentralAccount: The AWS account ID of the information security account, without hyphens.
    • ArtifactBucket: The same bucket name that you noted in your text editor earlier and used in the previous stack in the information security account.
    • CMKARN: The ARN of the KMS key that you noted in your text editor earlier and used in the previous stack in the information security account.
    • PolicyCheckerFunctionName: Again, the name of the Lambda function to be created later. It must be the same value you provided to the information security account template.
  4. Select Next, and then select Next again.
  5. To have the stack create the required IAM roles, select the box that reads I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create Stack.
  6. Wait for your stack to reach the CREATE_COMPLETE state.

Step 2: Set up the policy validation Lambda function in the central information security account

In the central information security account, create the Lambda function to validate the IAM policies created in the sandbox environment.

  1. In the AWS Lambda console, select Create Function and then select Author from scratch. Provide values for the following fields:
    • Name. This must be the same function name defined as input parameter PolicyCheckerFunctionName to CloudFormation in step 1, when you set up the information security account prerequisites. If you did not change the default value in step 1, the default is still PolicyChecker.
    • Runtime. Python 2.7.
    • Role. To set the role, select Choose an existing role, and then select the role named policy-checker-lambda-role. This is the role you created in step 1, when you set up the information security account prerequisites.

    Choose Create Function, scroll down to Function Code, and then paste the following code into the editor (replacing the existing code):

    
    #  Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    #  Licensed under the Apache License, Version 2.0 (the "License"). You may not
    #  use this file except in compliance with
    #  the License. A copy of the License is located at
    #      http://aws.amazon.com/apache2.0/
    #  or in the "license" file accompanying this file. This file is distributed
    #  on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
    #  either express or implied. See the License for the
    #  specific language governing permissions and
    #  limitations under the License.
    from __future__ import print_function
    import json
    import boto3
    import zipfile
    import tempfile
    import os
    
    print('Loading function')
    PERMISSIVE_ERROR_MSG = """Policy creation request rejected: * permissions not
                             allowed in both actions and resources"""
    GENERAL_ERROR_MSG = """An error has occurred while validating policy.
                            Please contact admin"""
    
    
    def get_template(event, s3, artifact, file_in_zip):
        # Download the pipeline artifact zip from S3 and return the requested file.
        # Note: the 'artifact' parameter is unused in this sample code.
        bucket = event['CodePipeline.job']['data']['inputArtifacts'][0]['location']['s3Location']['bucketName']
        key = event['CodePipeline.job']['data']['inputArtifacts'][0]['location']['s3Location']['objectKey']
    
        with tempfile.NamedTemporaryFile() as tmp_file:
            s3.download_file(bucket, key, tmp_file.name)
            with zipfile.ZipFile(tmp_file.name, 'r') as archive:
                return archive.read(file_in_zip)
    
    
    def get_sts_session(event, account, rolename):
        sts = boto3.client("sts")
        RoleArn = str("arn:aws:iam::" + account + ":role/" + rolename)
        response = sts.assume_role(
            RoleArn=RoleArn,
            RoleSessionName='SecurityManageAccountPermissions',
            DurationSeconds=900)
        sts_session = boto3.Session(
            aws_access_key_id=response['Credentials']['AccessKeyId'],
            aws_secret_access_key=response['Credentials']['SecretAccessKey'],
            aws_session_token=response['Credentials']['SessionToken'],
            region_name=os.environ['AWS_REGION'],
            botocore_session=None,
            profile_name=None)
        return (sts_session)
    
    
    def ManagePolicy(event, context):
        # Set boto session to get pipeline artifact from sandbox/dev/test account
        artifact_session = boto3.Session(
            aws_access_key_id=event['CodePipeline.job']['data']
                                   ['artifactCredentials']['accessKeyId'],
            aws_secret_access_key=event['CodePipeline.job']['data']
                                       ['artifactCredentials']['secretAccessKey'],
            aws_session_token=event['CodePipeline.job']['data']
                                   ['artifactCredentials']['sessionToken'],
            region_name=os.environ['AWS_REGION'],
            botocore_session=None,
            profile_name=None)
        # Fetch pipeline artifact from S3
        s3 = artifact_session.client('s3')
        permission_doc = get_template(event, s3, '', 'policy.json')
        metadata_doc = json.loads(get_template(event, s3, '', 'metadata.json'))
        permission_doc_json = json.loads(permission_doc)
        # Assume the central account role in sandbox/dev/test account
        global STS_SESSION
        STS_SESSION = ''  
        STS_SESSION = get_sts_session(
            event, event['CodePipeline.job']['accountId'], 'central-account-role')
        iam = STS_SESSION.client('iam')
        codepipeline = STS_SESSION.client('codepipeline')
        policy_arn = 'arn:aws:iam::' + event['CodePipeline.job']['accountId'] + ':policy/' + metadata_doc['PolicyName']
    
        try:
            # 1.Sample code - Validate policy sent from sandbox/dev/test account:
            # look for * actions and * resources
            for statement in permission_doc_json['Statement']:
                if statement['Action'] == '*' and statement['Resource'] == '*':
                    return codepipeline.put_job_failure_result(
                                        jobId=event['CodePipeline.job']['id'],
                                        failureDetails={
                                            'type': 'JobFailed',
                                            'message': PERMISSIVE_ERROR_MSG})
            # 2.Sample code - Attach any required denies from central
            # pre-defined policy
            iam_local = boto3.client('iam')
            account_id = context.invoked_function_arn.split(":")[4]
            local_policy_arn = 'arn:aws:iam::' + account_id + ':policy/central-deny-policy-sandbox'
            policy_response = iam_local.get_policy(PolicyArn=local_policy_arn)
            policy_version_id = policy_response['Policy']['DefaultVersionId']
            policy_version_doc = iam_local.get_policy_version(
                PolicyArn=local_policy_arn,
                VersionId=policy_version_id)
            for statement in policy_version_doc['PolicyVersion']['Document']['Statement']:
                permission_doc_json['Statement'].append(
                   statement
                )
            # 3. If validated successfully, create policy in
            # sandbox/dev/test account
            iam.create_policy(
                PolicyName=metadata_doc['PolicyName'],
                PolicyDocument=json.dumps(permission_doc_json),
                Description=metadata_doc['PolicyDescription'])
    
            # successful creation, put result back to
            # sandbox/dev/test account pipeline
            codepipeline.put_job_success_result(
                jobId=event['CodePipeline.job']['id'])
        except Exception as e:
            print('Error: ' + str(e))
            codepipeline.put_job_failure_result(
                jobId=event['CodePipeline.job']['id'],
                failureDetails={'type': 'JobFailed', 'message': GENERAL_ERROR_MSG})
    
    def lambda_handler(event, context):
        print(event)
        ManagePolicy(event, context)
    

    This sample code shows how the Lambda function checks the IAM JSON policy submitted by Alice for policies that are too permissive because they allow all IAM actions on all account resources. The sample code then appends the statements of a predefined deny policy (central-deny-policy-sandbox) maintained in the information security account; in this example, that policy contains an explicit deny that prevents the launch of Amazon EC2 instances outside the T2 instance family, which ensures that only T2 instances can be launched. Your security developers should author code similar to this sample code, in order to meet the security policies of every account type and control the IAM policies created in various sandbox, development, and test environments.

  2. Before saving your new Lambda function code, scroll further down to the Basic Settings section and increase the function timeout to 10 seconds.
  3. Select Save.

Step 3: Test the sandbox account pipeline

Now it’s time to deploy the solution in your sandbox account.

  1. Create the following files and compress them into an archive with the name policy.zip (this is the name expected by your created pipeline).
    • metadata.json: This file contains metadata like the name and description of the IAM policy to be created.
      
      {
          "PolicyDescription": "ec2 start permission policy",
          "PolicyName": "Ec2RunTeamA"
      }

    • policy.json: This file contains the JSON body of the IAM policy to be created.
      
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "EC2Run",
                  "Effect": "Allow",
                  "Action": "ec2:RunInstances",
                  "Resource": "*"
              }
          ]
      }

  2. To upload your policy.zip file to the bucket you created earlier, go to the Amazon S3 console in the sandbox account and, in the search box at the top of the page, search for the bucket you noted in your text editor earlier as ArtifactBucket.
  3. When you locate your bucket, select the bucket name, and then select Upload. The upload dialog will appear.
  4. Select Add Files and navigate to the folder with the policy.zip file. Select the file, select Open, select Next, and then select Next again.
     
    Figure 4: S3 upload dialog

  5. Select the AWS KMS master-key radio button, and then select the KMS key that has the alias codepipeline-policy-crossaccounts.
     
    Figure 5: Selecting the KMS key

  6. Select Next, and then select Upload.
  7. Go to the AWS CodePipeline console, select your sandbox pipeline, and wait for the pipeline to start running. It can take up to a minute for it to start.
     
    Figure 6: AWS CodePipeline console

  8. Wait for your pipeline to complete. There should be no validation errors for the IAM policy you just uploaded and your IAM policy should be successfully created. To view the newly created IAM policy, open the AWS IAM console.
  9. Select Policies on the left and search for the policy with the name defined in the metadata.json file.
     
    Figure 7: Viewing your new policy

  10. Select the policy name. Note the IAM deny that was automatically added to your defined policy.
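
The exact document of the central deny policy (central-deny-policy-sandbox) is created by the information security account prerequisites stack and isn’t shown in this post. As an illustration only, a deny that restricts sandbox EC2 launches to the T2 instance family could look like the Python dictionary below, which is the shape of statement the validation Lambda function appends to your submitted policy.

# Illustration only: the real statement comes from the central-deny-policy-sandbox
# managed policy in the information security account.
deny_non_t2_statement = {
    "Sid": "DenyNonT2Instances",
    "Effect": "Deny",
    "Action": "ec2:RunInstances",
    "Resource": "arn:aws:ec2:*:*:instance/*",
    "Condition": {
        "StringNotLike": {
            "ec2:InstanceType": "t2.*"
        }
    }
}

# Inside ManagePolicy, this is equivalent to what the loop over the central
# policy's statements does:
# permission_doc_json['Statement'].append(deny_non_t2_statement)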

If you’d like to test the pipeline further, you can modify the policy to permit all actions on all resources. When policy.zip is uploaded again, the pipeline should return the following error:


Policy creation request rejected: * permissions not allowed in both actions and resources
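
If you’d rather script the packaging and re-upload for this kind of testing instead of repeating the console steps, here’s a minimal boto3 sketch. It assumes the bucket name you noted as ArtifactBucket and the KMS key alias codepipeline-policy-crossaccounts created by the prerequisites stack.

import zipfile
import boto3

artifact_bucket = 'REPLACE-WITH-YOUR-ARTIFACT-BUCKET'  # value noted from the prerequisites stack outputs

# Package the policy and its metadata into the archive name the pipeline expects
with zipfile.ZipFile('policy.zip', 'w') as archive:
    archive.write('policy.json')
    archive.write('metadata.json')

# Upload with SSE-KMS using the cross-account pipeline key
s3 = boto3.client('s3')
s3.upload_file(
    'policy.zip', artifact_bucket, 'policy.zip',
    ExtraArgs={
        'ServerSideEncryption': 'aws:kms',
        'SSEKMSKeyId': 'alias/codepipeline-policy-crossaccounts'
    })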

If you encounter any errors as you modify your Lambda function code, you can always go back to the Lambda function logs in the central information security account. For more information on how to access Lambda function logs, please refer to the documentation.

The same logic used here can be extended to other sandbox, development, or test environments. However, for the central information security account, the existing roles will need to be updated to trust and have access to the resources in the newly added sandbox, development, or test account.

Summary

In this blog post, I showed you how to centralize the validation and creation of IAM policies across various AWS accounts. This allows your security developers to start coding your security best practices, enabling automatic creation and validation of IAM policies across your various sandbox, development, and test accounts. Account administrators can then attach those validated IAM policies to the required IAM users, groups, or roles. This process strikes a balance between agility and control: it empowers your account administrators to create compliant, least-privilege IAM policies, while allowing your application teams to keep experimenting and innovating quickly. If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Mahmoud ElZayet

Mahmoud is a Global Accounts Solutions Architect at AWS. He works with large enterprise customers providing guidance and technical assistance for building cloud solutions. Mahmoud is passionate about DevOps and Cloud Compliance topics. Outside of work, he enjoys exploring new places with his wife and two kids.

Automate analyzing your permissions using IAM access advisor APIs

Post Syndicated from Ujjwal Pugalia original https://aws.amazon.com/blogs/security/automate-analyzing-permissions-using-iam-access-advisor/

As an administrator that grants access to AWS, you might want to enable your developers to get started with AWS quickly by granting them broad access. However, as your developers gain experience and your applications stabilize, you want to limit permissions to only what they need. To do this, access advisor will determine the permissions your developers have used by analyzing the last timestamp when an IAM entity (for example, a user, role, or group) accessed an AWS service. This information helps you audit service access, remove unnecessary permissions, and set appropriate permissions across different environments. For example, you can grant broad access to services in development accounts and then reduce permissions for access to specific services in production accounts. Finally, as you manage more IAM entities and AWS accounts, you need a way to scale these processes through automation. To help you achieve this automation, you can now use IAM access advisor APIs with the AWS Command Line Interface (AWS CLI) or a programmatic client.

In this post, I first provide the details of the access advisor APIs. Next, I walk through an example to demonstrate how you can use the AWS CLI to create a report of the last-accessed timestamps for the services used by the roles in your account. In this post, I assume that you’re familiar with access advisor and how to Remove Unnecessary Permissions in Your IAM Policies by Using Service Last Accessed Data from the IAM console. Before I share an example, I’ll describe the new IAM access advisor APIs:

  • generate-service-last-accessed-details: Generates the service last accessed data for an IAM resource (user, role, group, or policy). You need to call this API first to start a job that generates the service last accessed data for the IAM resource. This API returns a JobId that you will use for the other APIs, such as get-service-last-accessed-details, to determine the status of the job completion.
  • get-service-last-accessed-details: Use this to retrieve the service last accessed data for an IAM resource based on the JobID you pass in.
  • get-service-last-accessed-details-with-entities: Use this to retrieve the service last accessed data for a specific AWS service. The API provides you with a list of all the IAM entities who have access to the service and includes the last accessed date for each IAM entity.
  • list-policies-granting-service-access: Use this to retrieve all the IAM policies that grant permissions to the services accessed for an IAM entity. This helps you identify the policies you need to modify to remove any unused permissions.
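
These APIs can also be called from a programmatic client. The sketch below uses boto3 rather than the AWS CLI and assumes the example role ARN used in this post; it starts a job, polls until the job completes, and prints each service’s last-accessed timestamp.

import time
import boto3

iam = boto3.client('iam')
role_arn = 'arn:aws:iam::123456789012:role/DBAdminRole'  # example ARN from this post

# Start the job that generates service last accessed data for the role
job_id = iam.generate_service_last_accessed_details(Arn=role_arn)['JobId']

# Poll until the job finishes
while True:
    details = iam.get_service_last_accessed_details(JobId=job_id)
    if details['JobStatus'] != 'IN_PROGRESS':
        break
    time.sleep(2)

# Print the last-accessed timestamp for every service the role has used
for service in details['ServicesLastAccessed']:
    if 'LastAuthenticated' in service:
        print(service['ServiceNamespace'], service['LastAuthenticated'])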

Now that you understand the different IAM access advisor APIs, I’ll walk through an example to demonstrate how to use them to set permissions based on service last accessed information.

Example use case: Setting permissions for IAM roles

Assume Arnav Desai is a security administrator for Example Corp. He works with several development teams and monitors their access across multiple accounts. To get his development teams up and running quickly, he initially created multiple roles with broad permissions that are based on job function in the development accounts. Now, his developers are ready to deploy workloads to production accounts. The developers need access to configure AWS; however, Arnav wants to grant them access only to what they need. To determine these permissions, he uses access advisor APIs to automate a process that helps him understand the services developers accessed in the last six months. Using this information, he authors policies to grant access to specific services in production. I’ll now show you an example to achieve this in one account using AWS CLI commands.

First, Arnav uses the list-roles command to list the IAM roles in his development account. For this example, there are two roles in his development account: DBAdminRole and NetworkAdminRole.

For each role, he uses the generate-service-last-accessed-details command to generate the service last accessed data for the role. Here’s an example of the command that he uses:


aws iam generate-service-last-accessed-details --arn arn:aws:iam::123456789012:role/DBAdminRole

The command above provides Arnav with a JobId for each role signaling that the job has started generating the service last accessed details. Arnav waits for the job to complete successfully to retrieve the access advisor information. In the meantime, he can call the get-service-last-accessed-details command to view the JobStatus of the job. Once the jobs for both roles are COMPLETED, Arnav can view the service last accessed report for both the roles, as shown below.

DBAdminRole


"ServicesLastAccessed": [
        {
            "LastAuthenticated": "2018-11-01T17:41:15Z",
            "LastAuthenticatedEntity": "arn:aws:iam::123456789012:role/ DBAdminRole",
            "ServiceName": "Amazon DynamoDB",
            "ServiceNamespace": "dynamodb",
            "TotalAuthenticatedEntities": 1
        },
        {
            "LastAuthenticated": "2018-08-25T17:41:15Z",
            "LastAuthenticatedEntity": "arn:aws:iam::123456789012:role/ DBAdminRole",
            "ServiceName": "Amazon S3",
            "ServiceNamespace": "s3",
            "TotalAuthenticatedEntities": 1
        },
	.
	.
	.
    ]

Note: I’ve truncated the output because the DBAdminRole doesn’t access other services.

NetworkAdminRole


"ServicesLastAccessed": [
        {
            "LastAuthenticated": "2018-11-21T17:41:15Z",
            "LastAuthenticatedEntity": "arn:aws:iam::123456789012:role/ NetworkAdminRole",
            "ServiceName": "Amazon EC2",
            "ServiceNamespace": "ec2",
            "TotalAuthenticatedEntities": 1
        },
	.
	.
	.
    ]

Note: I’ve truncated the output because the NetworkAdminRole doesn’t access other services.

Based on the output above, you can see that the two roles in development accessed Amazon DynamoDB, Amazon S3, and Amazon EC2 in the last six months. Using this information, Arnav can author a policy to grant access to these specific services for the production accounts.
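
To go one step further, Arnav could turn that report into the skeleton of a scoped-down policy. The sketch below is an illustration only; it assumes that granting broad service-level actions ("service:*") is an acceptable starting point before refining the statement down to specific actions and resources.

import json

# Service namespaces taken from the access advisor reports above
used_namespaces = ['dynamodb', 's3', 'ec2']

scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowUsedServicesOnly",
        "Effect": "Allow",
        # Starting point only: refine these to specific actions and resources
        "Action": ['{}:*'.format(ns) for ns in used_namespaces],
        "Resource": "*"
    }]
}

print(json.dumps(scoped_policy, indent=4))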

Conclusion

In this post, I reviewed the IAM access advisor APIs and showed how you can use them to determine service last accessed information programmatically. You can use this information to audit access, remove unused permissions, or grant appropriate permissions across your accounts.

If you have comments about retrieving Access Advisor service last accessed information programmatically, submit them in the Comments section below. If you have issues using access advisor commands, start a thread on the IAM forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Ujjwal Pugalia

Ujjwal is the product manager for the console sign-in and sign-up experience at AWS. He enjoys working in the customer-centric environment at Amazon because it aligns with his prior experience building an enterprise marketplace. Outside of work, Ujjwal enjoys watching crime dramas on Netflix. He holds an MBA from Carnegie Mellon University (CMU) in Pittsburgh.

Stream Amazon CloudWatch Logs to a Centralized Account for Audit and Analysis

Post Syndicated from David Bailey original https://aws.amazon.com/blogs/architecture/stream-amazon-cloudwatch-logs-to-a-centralized-account-for-audit-and-analysis/

A key component of enterprise multi-account environments is logging. Centralized logging provides a single point of access to all salient logs generated across accounts and regions, and is critical for auditing, security and compliance. While some customers use the built-in ability to push Amazon CloudWatch Logs directly into Amazon Elasticsearch Service for analysis, others would prefer to move all logs into a centralized Amazon Simple Storage Service (Amazon S3) bucket location for access by several custom and third-party tools. In this blog post, I will show you how to forward existing and any new CloudWatch Logs log groups created in the future to a cross-account centralized logging Amazon S3 bucket.

The streaming architecture I use in the destination logging account is a streamlined version of the architecture and AWS CloudFormation templates from the Central logging in Multi-Account Environments blog post by Mahmoud Matouk. This blog post assumes some knowledge of CloudFormation, Python3 and the boto3 AWS SDK. You will need to have or configure an AWS working account and logging account, an IAM access and secret key for those accounts, and a working environment containing Python and the boto3 SDK. (For assistance, see the Getting Started Resource Center and Start Building with SDKs and Tools.) All CloudFormation templates and Python code used in this article can be found in this GitHub Repository.

Setting Up the Solution

You need to create or use an existing S3 bucket for storing CloudFormation templates and Python code for an AWS Lambda function. This S3 bucket is referred to throughout the blog post as the <S3 infrastructure-bucket>. Ensure that the bucket does not block new bucket policies or cross-account access by checking the bucket’s Permissions tab and the Public access settings button.

You also need a bucket policy that allows each account that will stream logs to access the <S3 infrastructure-bucket> when you create the AWS Lambda function below. To create it, modify the following template, adding each new account you create and the <S3 infrastructure-bucket> ARN shown at the top of the Bucket policy editor page:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                  "03XXXXXXXX85",
                  "29XXXXXXXX02",
                  "13XXXXXXXX96",
                  "37XXXXXXXX30",
                  "86XXXXXXXX95"
                ]
            },
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::<S3 infrastructure-bucket>",
                "arn:aws:s3:::<S3 infrastructure-bucket>/*"
            ]
        }
    ]
}

Clone a local copy of the CloudFormation templates and Python code from the GitHub repository. Compress the CentralLogging.py and lambda.py into a .zip file for the lambda function we create below and name it AddSubscriptionFilter.zip. Load these local files into the <S3 infrastructure-bucket>. I recommend using folders called /python for the .py files, /lambdas for the AddSubscriptionFilter.zip file and /cfn for the CloudFormation templates.
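
If you want to script this packaging and upload step, here’s a minimal boto3 sketch that follows the folder layout suggested above; replace the bucket name placeholder with your own <S3 infrastructure-bucket> name.

import zipfile
import boto3

infrastructure_bucket = 'REPLACE-WITH-YOUR-S3-INFRASTRUCTURE-BUCKET'

# Package the Lambda code with the archive name the AddSubscriptionFilter stack expects
with zipfile.ZipFile('AddSubscriptionFilter.zip', 'w') as archive:
    archive.write('lambda.py')
    archive.write('CentralLogging.py')

s3 = boto3.client('s3')
s3.upload_file('AddSubscriptionFilter.zip', infrastructure_bucket,
               'lambdas/AddSubscriptionFilter.zip')
s3.upload_file('init_account_central_logging.py', infrastructure_bucket,
               'python/init_account_central_logging.py')
s3.upload_file('CentralLogging.py', infrastructure_bucket,
               'python/CentralLogging.py')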

Multi-Account Configuration and the Central Logging Account

One form of multi-account configuration is the Landing Zone offering, which provides a core logging account for storing all logs for auditing. I use this account configuration as an example in this blog post. Initially, the Landing Zone setup creates several stack sets and resources, including roles, security groups, alarms, Lambda functions, an AWS CloudTrail trail, and an S3 bucket.

If you are not using a Landing Zone, create an appropriately named S3 bucket in the account you have chosen as a logging account. This S3 bucket will be referred to later as the <LoggingS3Bucket>. To mimic what the Landing Zone calls its logging bucket, you can use the format aws-landing-zone-logs-<Account Number><Region>, or simply pick an appropriate name for the centralized logging location. In a production environment, remember that it is critical to lock down the access to logging resources and the permissions allowed within the account to prevent deletion or tampering with the logs.

Figure 1 – Initial Landing Zone logging account resources

The S3 bucket – aws-landing-zone-logs-<Account Number><Region> – is the most important resource created by the stack sets for logging purposes. It contains all of the logs streamed to it from all of the accounts. Initially, the Landing Zone only sends the AWS CloudTrail and AWS Config logs to this S3 bucket.

In order to send all of the other CloudWatch Logs that are necessary for auditing, we need to add a destination and streaming mechanism to the logging account.

Logging Account Infrastructure

The additional infrastructure required in the central logging account provides a destination for the log group subscription filters and a stream for log events that are sent from all accounts and appropriate regions to load them into the <LoggingS3Bucket> repository. The selection of these particular AWS resources is important, because Kinesis Data Streams is the only resource currently supported as a destination for cross-account CloudWatch Logs subscription filters.

The centralLogging.yml CloudFormation template automates the creation of the entire required infrastructure in the core logging account. Make sure to run it in each of the regions in which you need to centralize logs. The log group subscription filter and destination regions must match in order to successfully stream the logs.

Installation Instructions:

  1. Modify the centralLogging.yml template to add your account numbers for all of the accounts you want to stream logs from into the DestinationPolicy where you see the <AccountNumberHere> placeholders. Remove any unused placeholders.
  2. In the same DestinationPolicy, modify the final arn statement, replacing <region> with the region it will be run in (e.g., us-east-1), and the <logging account number> with the account number of the logging account where this template is to be run.
  3. Log in to the core logging account and access the AWS management console using administrator credentials.
  4. Navigate to CloudFormation and click the Create Stack button.
  5. Select Specify an Amazon S3 template URL and enter the Link for the centralLogging.yml template found in the <S3 infrastructure-bucket>.
  6. Enter a stack name, such as CentralizedLogging, and the one parameter called LoggingS3Bucket. Enter the ARN of the logging bucket: arn:aws:s3:::<LoggingS3Bucket>. This can be obtained by opening the S3 console, clicking the bucket icon next to this bucket, and then clicking the Copy Bucket ARN button.
  7. Skip the next page, acknowledge the creation of IAM resources, and Create the stack.
  8. When the stack completes, select the stack name to go to stack details and open the Outputs. Copy the value of the DestinationArnExport, which will be needed as a parameter for the script in the next section.

Upon successful creation of this CloudFormation stack, the following new resources will be created:

  • Amazon CloudWatch Logs Destination
  • Amazon Kinesis Stream
  • Amazon Kinesis Firehose Stream
  • Two AWS Identity and Access Management (IAM) Roles
Figure 2 – New infrastructure required in the centralized logging account

Because the Landing Zone is a multi-account offering, the Log Destination is required to be the destination for all subscription filters. The key feature of the destination is its DestinationPolicy. Whenever a new account is added to the environment, its account number needs to be added to this DestinationPolicy in order for logs to be sent to it from the new account. Add the new account number in the centralLogging.yml CloudFormation template, and run an update in CloudFormation to complete the addition. A sample Destination Policy looks like this:

{
  "Version" : "2012-10-17",
  "Statement" : [
    {
      "Effect" : "Allow",
      "Principal" : {
        "AWS" : [
          "03XXXXXXXX85",
          "29XXXXXXXX02",
          "13XXXXXXXX96",
          "37XXXXXXXX30",
          "86XXXXXXXX95"
        ]
      },
      "Action" : "logs:PutSubscriptionFilter",
      "Resource" : "arn:aws:logs:<Region>:<LoggingAccountNumber>:destination:CentralLogDestination"
    }
  ]
}

The Kinesis Stream gets records from the Logs Destination and holds them for 48 hours. Kinesis Streams scale by adding shards, and the CloudFormation template starts the stream with two shards. Because all CloudWatch log objects will flow through this stream, you need to monitor it as instances and applications are deployed into the accounts; it will need to be scaled up at some point. To scale, change the number of shards (ShardCount) in the Kinesis Stream resource (KinesisLoggingStream) to the required number. See the Amazon Kinesis Data Streams FAQ documentation to confirm the capacity and throughput of each shard.
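
The cleanest path is to change ShardCount in the template and update the stack, which keeps CloudFormation in sync. If you ever need to scale the live stream directly instead, the boto3 call looks like the sketch below; the stream name is a placeholder for the physical name created by your stack.

import boto3

kinesis = boto3.client('kinesis')

# Replace with the physical name of the stream created by centralLogging.yml
stream_name = 'REPLACE-WITH-YOUR-KINESIS-STREAM-NAME'

# Double the capacity of the stream; UNIFORM_SCALING splits shards evenly
kinesis.update_shard_count(
    StreamName=stream_name,
    TargetShardCount=4,
    ScalingType='UNIFORM_SCALING')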

Kinesis Firehose provides a simple and efficient mechanism to retrieve the records from the Kinesis Stream and load them into the <LoggingS3Bucket> repository. It uses the CloudFormation template parameter to know where to load the logs. All of the CloudWatch logs loaded by Firehose will be under the prefix /CentralizedAccountsLog. The buffering hints for Firehose suggest that the logs be loaded every 5 minutes or 50 MB. Leave the CompressionFormat UNCOMPRESSED, since the logs are already compressed.

There are two AWS Identity and Access Management (IAM) roles created for this infrastructure. The first, CWLtoKinesisRole, is used by the destination to allow CloudWatch Logs from all regions to put the log object records into the Kinesis Stream, as well as to pass the role. The second, FirehoseDeliveryRole, allows Firehose to get the log object records from the Kinesis Stream and load them into the S3 logging bucket.

Once you have successfully created this infrastructure, the next step is to add the subscription filters to existing log groups.

Adding Subscription Filters to Existing Log Groups

The next step in the process is to add subscription filters for the Log Destination in the core logging account to all existing log groups. Several log groups are created by the Landing Zone, or you may have created them by using various AWS services or by logging application events. For every new AWS account, you will need to run the init_account_central_logging.py Python script to add the subscription filters to all the existing log groups.

The init_account_central_logging.py script takes one parameter, which is the Log Destination ARN. Use the Destination ARN you copied from the stack details output in the previous section as the parameter to the script.

The init_account_central_logging.py script first adds this Destination ARN to the AWS Systems Manager Parameter Store so that the core logic that creates the subscription filter can use it. The script then gets a list of all existing log groups, iterates over them, deletes any existing subscription filters (because there can only be one subscription filter per log group and attempting to create another would cause an error), and then adds the new subscription filter to the centralized logging account to the Log Destination.
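
A condensed sketch of that logic is shown below. It is not the actual script from the repository, just an outline of the boto3 calls involved; the Parameter Store parameter name and filter name follow the conventions described in this post, but treat them as assumptions and refer to the repository code for the real implementation.

import boto3

log_destination_arn = 'REPLACE-WITH-YOUR-LOG-DESTINATION-ARN'

ssm = boto3.client('ssm')
logs = boto3.client('logs')

# Store the destination ARN so the AddSubscription Lambda can reuse it later
ssm.put_parameter(Name='LogDestination', Value=log_destination_arn,
                  Type='String', Overwrite=True)

# Iterate over every existing log group in the account
paginator = logs.get_paginator('describe_log_groups')
for page in paginator.paginate():
    for group in page['logGroups']:
        name = group['logGroupName']
        # Remove any existing subscription filter (only one is allowed per log group)
        for existing in logs.describe_subscription_filters(logGroupName=name)['subscriptionFilters']:
            logs.delete_subscription_filter(logGroupName=name,
                                            filterName=existing['filterName'])
        # Point the log group at the central destination; an empty pattern forwards everything
        logs.put_subscription_filter(logGroupName=name,
                                     filterName='Logs (CentralLogDestination)',
                                     filterPattern='',
                                     destinationArn=log_destination_arn)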

Figure 3 – Run script to add subscription filters to existing log groups

Installation Instructions:

  1. Make sure that Python and boto3 are installed and accessible in the client computer – consider loading into a virtual environment to keep dependencies separate.
  2. Set the AWS_PROFILE environment variable to the appropriate AWS account profile.
  3. Log in to the proper account, and obtain administrator or other credentials with appropriate permissions, and add the account access key and secret key to the AWS credentials file.
  4. Set the region and output in the AWS config file.
  5. Download and place two python files into a working directory: init_account_central_logging.py and CentralLogging.py.
  6. Run the script using the command python3 ./init_account_central_logging.py -d <LogDestinationArn>.

Use the AWS Management Console to validate the results. Navigate to CloudWatch Logs and view all of the log groups. Each one should now have a subscription filter named “Logs (CentralLogDestination).”

Automatically Adding Subscription Filters to New Log Groups

The final step to set up the centralized log streaming capability is to run a CloudFormation script to create resources that automatically add subscription filters to new log groups. New log groups are created in accounts by resources (e.g., Lambda functions) and by applications. A subscription filter must be added to every new log group in order to deliver its log events to the logging account.

The AddSubscriptionFilter.yml CloudFormation template contains resources to automatically add subscription filters.

First, it creates a role that allows it to access the lambda code that is stored in a centralized location – the <S3 infrastructure-bucket>. (Remember that its S3 bucket policy must contain this account number in order to access the lambda code.)

Second, the template creates the AddSubscriptionLambda, which reuses the core logic shared by the script in the last section. It retrieves the proper destination from the Parameter Store, deletes any existing subscription filter from the log group, and adds the new subscription filter to the newly created log group. This lambda function is triggered by a CloudWatch event rule.

Third, the CloudFormation creates a Lambda Permission, which allows the event trigger to invoke this particular lambda.

Finally, the CloudFormation template creates an Amazon CloudWatch Events Rule that acts as a trigger for the lambda. This rule looks for an event coming from CloudTrail that signals the creation of a new log group. For each create log group event found, it invokes the AddSubscriptionLambda.
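
The event pattern that the rule matches is the CloudTrail record for the CreateLogGroup API call. For illustration, here’s a boto3 sketch that creates an equivalent rule; the rule name is a hypothetical placeholder, and the template wires the Lambda target and permission for you.

import json
import boto3

events = boto3.client('events')

# Match the CloudTrail event emitted when any new log group is created
event_pattern = {
    "source": ["aws.logs"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["logs.amazonaws.com"],
        "eventName": ["CreateLogGroup"]
    }
}

events.put_rule(
    Name='NewLogGroupCreated',        # illustrative rule name
    EventPattern=json.dumps(event_pattern),
    State='ENABLED')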

Figure 4 – Infrastructure to automatically add a subscription filter to a new log group and the log flow to the centralized account

Installation Instructions:

(Important note: This functionality requires that the LogDestination parameter be properly set to the LogDestinationArn in the Parameter Store before the Lambda will run successfully. The script in the previous step sets this parameter, or it can be done manually. Make certain that the destination specified is in this same region.)

  1. Ensure that the <S3 infrastructure-bucket> has the AddSubscriptionFilter.zip file containing the Python code files lambda.py and CentralLogging.py.
  2. Log in to the appropriate account, and access using administrator credentials. Make sure that the region is set properly.
  3. Navigate to CloudFormation and click the Create Stack button.
  4. Select Specify an Amazon S3 template URL and enter the Link for the AddSubscriptionFilter.yml template found in the <S3 infrastructure-bucket>.
  5. Enter a stack name, such as AddSubscription.
  6. Enter the two parameters: the <S3 infrastructure-bucket> name (not ARN) and the folder and file name (e.g., lambdas/AddSubscriptionFilter.zip).
  7. Skip the next page, acknowledge the creation of IAM resources, and Create the stack.

In order to test that the automated addition of subscription filters is working properly, use the AWS Management Console to navigate to CloudWatch Logs and click the Actions button. Select Create New Log Group and enter a random log group name, such as “testLogGroup.” When first created, the log group will not have a subscription filter. After a few minutes, refresh the display and you should see the new subscription filter on the log group. At this point, you can delete the test log group.

New Account Setup

As a reminder, when you add new accounts that you want to have stream log events to the central logging account, you will need to configure the new accounts in two places in order for this functionality to work properly.

First, add the account number to the LoggingDestination property DestinationPolicy in the centralLogging.yml template. Then, update the CloudFormation stack.

Second, modify the bucket policy for the <S3 infrastructure-bucket>. Select the Permissions tab, then the Bucket Policy button. Add the new account to allow cross-account access to the lambda code by adding the line “arn:aws:iam::<new account number>:root” to the Principal.AWS list.

Conclusion

Centralized logging is a key component in enterprise multi-account architectures. In this blog post, I have built on the central logging in multi-account environments streaming architecture to automatically subscribe all CloudWatch Logs log groups to send all log events to an S3 bucket in a designated logging account. The solution uses a script to add subscription filters to existing log groups, and a lambda function to automatically place a subscription filter on all new log groups created within the account. This can be used to forward application logs, security logs, VPC flow logs, or any other important logs that are required for audit, security, or compliance purposes.

About the author

David Bailey is a Cloud Infrastructure Architect with AWS Professional Services specializing in serverless application architecture, IoT, and artificial intelligence. He has spent decades architecting and developing complex custom software applications, as well as teaching internationally on object-oriented design, expert systems, and neural networks.

 

 

Simplify granting access to your AWS resources by using tags on AWS IAM users and roles

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/simplify-granting-access-to-your-aws-resources-by-using-tags-on-aws-iam-users-and-roles/

Recently, AWS enabled tags on IAM principals (users and roles). With this update, you can now use attribute-based access control (ABAC) to simplify permissions management at scale. This means administrators can create a reusable policy that applies permissions based on the attributes of the IAM principal (such as tags). For example, you can use an IAM policy that grants developers access to resources that match their project tag. As the team adds resources to projects, permissions automatically apply based on attributes, so no policy update is required for each new resource.

In this blog post, I walk through three examples of how you can control access permissions by using tags on IAM principals and AWS resources. It’s important to note that you can use tags to control access to your AWS resources, but only if the AWS service in question supports tag-based permissions. To learn more about AWS services that support tag-based permissions, see AWS Services That Work with IAM.

As a reminder, I introduced the following tagging condition keys in my post about tagging. Adding tags to the Condition element of a policy tailors the policy’s permissions and limits its actions and resources.

  • aws:RequestTag: Tags that you request to be added or removed. Actions that support the condition key: iam:CreateUser, iam:CreateRole, iam:TagRole, iam:UntagRole, iam:TagUser, iam:UntagUser.
  • aws:TagKeys: Tag keys that are checked before the actions are executed. Actions that support the condition key: iam:CreateUser, iam:CreateRole, iam:TagRole, iam:UntagRole, iam:TagUser, iam:UntagUser.
  • aws:PrincipalTag: Tags that exist on the user or role making the call. This is a global condition key, so all actions across all services support it.
  • iam:ResourceTag: Tags that exist on an IAM resource. Supported by all IAM APIs that accept an IAM user or role, and by sts:AssumeRole.

Example 1: Grant IAM users access to your AWS resources by using tags

Assume that you have multiple teams of developers who need permissions to start and stop specific EC2 instances based on their cost center. In the following policy, I specify the EC2 actions ec2:StartInstances and ec2:StopInstances in the Action element and all resources in the Resource element of the policy. In the Condition element of the policy, I use the condition key aws:PrincipalTag. This helps ensure that the principal can start and stop an instance only if the value of the instance's CostCenter tag matches the value of the CostCenter tag on the principal. Attaching this policy to your developer roles or groups simplifies permissions management: you only need to manage a single policy for all your dev teams that require permissions to start and stop instances, and you rely on tag values to specify the resources.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/CostCenter": "${aws:PrincipalTag/CostCenter}"
                }
            }
        }
    ]
}

Example 2: Grant users in an IAM group access to your AWS resources by using tags

Assume there are database administrators in your account who need start, stop, and reboot permissions for specific Amazon RDS instances. In the following policy, I define the start, stop, and reboot actions for Amazon RDS in the Action element of the policy, and all resources in the Resource element of the policy. In the Condition element of the policy, I use the condition key, aws:PrincipalTag, to select users with the tag, CostCenter=0735. I use the StringEquals condition operator to check for an exact match of the value. I also use the condition key, rds:db-tag, to control access to databases tagged with Project=DataAnalytics. I attach this policy to an IAM group which contains all the database administrators in my account. Now, any database administrator in this group with tag CostCenter=0735 gets access to the RDS instance tagged Project=DataAnalytics.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "rds:DescribeDBInstances",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "rds:RebootDBInstance",
                "rds:StartDBInstance",
                "rds:StopDBInstance"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalTag/CostCenter": "0735",
                    "rds:db-tag/Project": "DataAnalytics"
                }
            }
        }
    ]
}

Example 3: Use tags to control access to IAM roles

Let’s say a user, Bob in Account A, needs to manage several applications and needs to assume specific roles in Account B. The following policy grants Bob’s IAM user permissions to assume all roles tagged with ExampleCorpABC. In the Action element of the policy, I define sts:AssumeRole, which grants permissions to assume roles. In the Resource element of the policy, I define a wildcard (*) to grant access to all roles, but use the condition key, iam:ResourceTag, in the Condition element to scope down the roles that Bob can assume. As with the previous policy, I use the StringEquals operator to ensure that Bob can assume roles that have the tag, Project=ExampleCorpABC. Now, whenever I create a role in Account B and trust Bob’s account in the role’s trust policy, Bob can only assume this role if it is tagged with Project=ExampleCorpABC.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "*",
      "Condition": {
	          "StringEquals": 
		    {"iam:ResourceTag/Project": "ExampleCorpABC"}
      }
    }
  ]
} 
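
For instance (the role name, trust policy file, and account placeholder below are hypothetical), an administrator in Account B could create a role that trusts Account A and carries the required tag, and Bob could then assume it:

aws iam create-role --role-name MyAppRole --assume-role-policy-document file://trust-account-a.json --tags Key=Project,Value=ExampleCorpABC
aws sts assume-role --role-arn arn:aws:iam::<ACCOUNT-B-ID>:role/MyAppRole --role-session-name bob-session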
 

Summary

You can now tag your IAM principals to control access to your AWS resources, and the three examples I’ve included in this post show how tags can help you simplify access management.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

Want more AWS Security news? Follow us on Twitter.

The author

Sulay Shah

Sulay is the product manager for Identity and Access Management service at AWS. He strongly believes in the customer first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from the North Carolina State University.

Add Tags to Manage Your AWS IAM Users and Roles

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/add-tags-to-manage-your-aws-iam-users-and-roles/

We made it easier for you to manage your AWS Identity and Access Management (IAM) resources by enabling you to add tags to your IAM users and roles (also known as IAM principals). Tags enable you to add customizable key-value pairs to resources, and many AWS services support tagging of AWS resources. Now, you can use tags to add custom attributes such as project name and cost center to your IAM principals. Additionally, tags on IAM principals simplify permissions management. For example, you can author a policy that allows a user to assume the roles for a specific project by using a tag. As you add roles with that tag, users gain permissions to assume those roles automatically. In a subsequent post, I will review how you can use tags on IAM principals to control access to your AWS resources.

In this blog post, I introduce the new APIs and condition keys you can use to tag IAM principals, show three example policies that address common tagging use cases, and demonstrate how to add tags to IAM principals by using the AWS Management Console and the AWS CLI. The first example policy grants permissions to tag principals. The second example policy requires specific tags for new users, and the third grants permissions to manage specific tags on principals.

Note: You must have the latest version of the AWS CLI to tag your IAM principals. Follow these instructions to update the AWS CLI.

New IAM APIs for tagging IAM principals

The following table lists the new IAM APIs that you must grant access to using an IAM policy so that you can view and modify tags on IAM principals. These APIs support resource-level permissions so that you can grant permissions to tag only specific principals.

Action | Description | Resource ARN for resource-level permissions
iam:ListUserTags | Lists the tags on an IAM user. | arn:aws:iam::<ACCOUNT-ID>:user/<USER-NAME>
iam:ListRoleTags | Lists the tags on an IAM role. | arn:aws:iam::<ACCOUNT-ID>:role/<ROLE-NAME>
iam:TagUser | Creates or modifies the tags on an IAM user. | arn:aws:iam::<ACCOUNT-ID>:user/<USER-NAME>
iam:TagRole | Creates or modifies the tags on an IAM role. | arn:aws:iam::<ACCOUNT-ID>:role/<ROLE-NAME>
iam:UntagUser | Removes the tags on an IAM user. | arn:aws:iam::<ACCOUNT-ID>:user/<USER-NAME>
iam:UntagRole | Removes the tags on an IAM role. | arn:aws:iam::<ACCOUNT-ID>:role/<ROLE-NAME>

In addition to the new APIs, tagging parameters are now available for the existing iam:CreateUser and iam:CreateRole APIs so that you can tag your users and roles when they are created. I show how you can add tags to a new user later in this blog post.

Now that you know the APIs you can use to tag IAM principals, let’s review an example of how to grant permissions to tag by using an IAM policy.

Example policy 1: Grant permissions to tag specific users and all roles

To get started using tags, you must first ensure you grant permissions to do so. The following policy grants permissions to tag one IAM user and all roles.

Note: Replace <ACCOUNT-ID> with your 12-digit account number.

        
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:ListUsers",
                "iam:ListRoles"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:ListUserTags",
                "iam:ListRoleTags",
                "iam:TagUser",
                "iam:TagRole",
                "iam:UntagUser",
                "iam:UntagRole"
            ],
            "Resource": [
                "arn:aws:iam:: <ACCOUNT-ID>:user/John",
                "arn:aws:iam:: <ACCOUNT-ID>:role/*"
            ]
        }
    ]
}

This policy lists all the actions required to see and modify tags for IAM principals. The Resource element of the policy grants permissions to tag one user, John, and all roles in the account by specifying the Amazon Resource Name (ARN).
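
For example (the role name below is hypothetical), a user with this policy attached could tag any role in the account and then review the result:

aws iam tag-role --role-name PaymentsAppRole --tags Key=Project,Value=Alpha
aws iam list-role-tags --role-name PaymentsAppRole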

Now that I have reviewed the new APIs you can use to view and modify tags on your IAM principals, let’s go over the new IAM condition keys you can use in policies.

New IAM condition keys for tagging IAM principals

The following table lists the condition keys you can use in your IAM policies to control access by using tags. The example policies that follow show how these condition keys help you grant more specific access for tagging IAM principals.

Condition key | Description | Actions that support the condition key
aws:RequestTag | Tags that you request to be added or removed from a user or role | iam:CreateUser, iam:CreateRole, iam:TagRole, iam:UntagRole, iam:TagUser, iam:UntagUser
aws:TagKeys | Tag keys that are checked before the actions are executed | iam:CreateUser, iam:CreateRole, iam:TagRole, iam:UntagRole, iam:TagUser, iam:UntagUser
aws:PrincipalTag | Tags that exist on the user or role making the call | Global condition key (all actions across all services support this condition key)
iam:ResourceTag | Tags that exist on the resource | Any IAM API that supports an IAM user or role, and sts:AssumeRole

Now that I have explained both the new APIs and condition keys for tagging IAM users and roles, let’s review two more use cases with tags.

Example policy 2: Require tags for new IAM users

Let’s say I want to apply the same tags to all new IAM users so that I can track them consistently along with my other AWS resources. Now, when you create a user, you can also pass in one or more tags. Let’s say I want to ensure that all the administrators on my team apply a CostCenter tag. I create an IAM policy that includes the actions required to create and tag users. I also use the Condition element to list the tags required to be added to each new user during creation. If an administrator forgets to add a tag, the administrator’s attempt to create the user fails.

Note: This example applies to creating new users by using the AWS CLI.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireSpecificTagsWhenYouCreateNewUsers",
            "Effect": "Allow",
            "Action": [
                "iam:CreateUser",
                "iam:TagUser"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "aws:RequestTag/CostCenter": "*"
                }
            }
        }
    ]
}

The preceding policy grants iam:CreateUser and iam:TagUser to allow creating and tagging IAM users from the AWS CLI. The Condition element uses the condition key aws:RequestTag to require that a CostCenter tag be supplied when the user is created.
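
As a sketch of the resulting behavior (the user names below are hypothetical), a create-user call that supplies the CostCenter tag succeeds, while one that omits it is denied:

aws iam create-user --user-name Maria --tags Key=CostCenter,Value=1234
aws iam create-user --user-name Raj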

Example policy 3: Grant permissions to manage specific tags on IAM principals

Let’s say I want an administrator on my team, Alice, to manage two tags, Project and CostCenter, for all IAM principals in our account. The following policy allows Alice to be able to assign any value to the Project tag, but limits the values she can assign to the CostCenter tag.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ViewAllTags",
            "Effect": "Allow",
            "Action": [
                "iam:ListUsers",
                "iam:ListRoles",
                "iam:ListUserTags",
                "iam:ListRoleTags"
            ],
            "Resource": "*"
        },
        {
            "Sid": "TagUserandRoleWithAnyProjectName",
            "Effect": "Allow",
            "Action": [
                "iam:TagUser",
                "iam:TagRole"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "aws:RequestTag/Project": "*"
                }
            }
        },
        {
            "Sid": "TagUserandRoleWithTwoCostCenterValues",
            "Effect": "Allow",
            "Action": [
                "iam:TagUser",
                "iam:TagRole"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "aws:RequestTag/CostCenter": [
                        "1234",
                        "5678"
                    ]
                }
            }
        },
        {
            "Sid": "UntagUserandRoleProjectCostCenter",
            "Effect": "Allow",
            "Action": [
                "iam:UntagUser",
                "iam:UntagRole"
            ],
            "Resource": "*",
            "Condition": {
                "ForAllValues:StringLike": {
                    "aws:TagKeys": [
                        "CostCenter",
                        "Project"
                    ]
                }
            }
        }
    ]
}

This policy permits Alice to view, add, and remove the Project and CostCenter tags for all principals in the account. In the Condition element of the second and third statements of the policy, I use the condition key aws:RequestTag to define the tags Alice is allowed to add, as well as the values she is able to assign to those tags. In the fourth statement, the condition key aws:TagKeys limits the tag keys she is allowed to remove. Alice can assign any value to the Project tag, but is limited to two values, 1234 and 5678, for the CostCenter tag.
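
To illustrate (the user name is hypothetical), the first two calls below should succeed under this policy, while the third is denied because 9999 is not an allowed CostCenter value:

aws iam tag-user --user-name John --tags Key=Project,Value=Phoenix
aws iam tag-user --user-name John --tags Key=CostCenter,Value=1234
aws iam tag-user --user-name John --tags Key=CostCenter,Value=9999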

Now that you understand how to grant permissions to tag IAM principals, I will show you how to run the commands to tag a new user and an existing role.

How to add tags to a new IAM user

Using the CLI
Let’s say that the IAM user John is a new team member and needs access to AWS. I use the following command to create John and add the CostCenter and EmailID tags.

aws iam create-user --user-name John --tags Key=CostCenter,Value=1234 Key=EmailID,Value=[email protected]

To give John access to the appropriate AWS actions and resources, you can use the CLI to attach policies to John.
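
For example (the managed policy below is only an illustration), you might attach a policy and then confirm the tags that were applied at creation time:

aws iam attach-user-policy --user-name John --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
aws iam list-user-tags --user-name John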

Using the console
You can also add tags to a user using the AWS console through the user creation flow as shown below.

  1. Sign in to the AWS Management Console and navigate to the IAM console.
  2. In the left navigation pane, select Users, and then select Add user.
  3. Type the user name for the new user.
  4. Select the type of access this user will have. You can select programmatic access, access to the AWS Management Console, or both.
  5. Select Next: Permissions.
  6. On the Set permissions page, specify how you want to assign permissions to this set of new users. You can choose between Add user to group, Copy permissions from existing user, or Attach existing policies to user directly.
  7. Select Next: Tags.
  8. On the Add tags (optional) page, add the tags you want to attach to this principal. I add the CostCenter tag key with a value of 1234 and the EmailID tag key with value of [email protected].
     
    Figure 1: Add tags

    Figure 1: Add tags

  9. Select Next: Review.
  10. Once you have reviewed all the information, select Create user. This action creates your user John with the permissions and tags you attached. You can navigate to the user Details page to view this user.

How to add tags to an existing IAM role

Using the CLI
To manage custom data for each role in my account, I need to add the following tags to all existing roles: Company, Project, Service, and CreationDate. The tag-role command applies tags to one role at a time, so I run it for each existing role. To be able to run these commands, you must have the appropriate permissions granted to you in an IAM policy.

For example, I set the values of the Project and Service tags on a specific role, Migration, by using the following command:

aws iam tag-role --role-name Migration --tags Key=Project,Value=IAM Key=Service,Value=S3
    

Using the console
You can use the console to add tags to roles individually. To do this, on the left side, select Roles, and then select the role you want to add tags to.
     

    Figure 2: Add tags to individual roles

    Figure 2: Add tags to individual roles

To view the existing tags on the role, select the Tags tab. The image shown below shows a Migration role in my account with two existing tags: Project with value IAM and Service with value S3. To add tags or edit the existing tags, select Edit tags.
     

    Figure 3: Edit tags

    Figure 3: Edit tags

Summary

When you tag IAM principals, you add custom attributes to the users and roles in your account to make it easier to manage your IAM resources. In this post, I reviewed the new APIs and condition keys and showed three policy examples that address use cases to grant permissions to tag your IAM principals. In a subsequent post, I will review how you can use tags on IAM principals to control access to AWS resources and other accounts.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

Want more AWS Security news? Follow us on Twitter.

The author

Sulay Shah

Sulay is the product manager for Identity and Access Management service at AWS. He strongly believes in the customer first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from the North Carolina State University.

Restrict access to your AWS Glue Data Catalog with resource-level IAM permissions and resource-based policies

Post Syndicated from Ben Snively original https://aws.amazon.com/blogs/big-data/restrict-access-to-your-aws-glue-data-catalog-with-resource-level-iam-permissions-and-resource-based-policies/

A data lake provides a centralized repository that you can use to store all your structured and unstructured data at any scale. A data lake can include both raw datasets and curated, query-optimized datasets. Raw datasets can be quickly ingested, in their original form, without having to force-fit them into a predefined schema. Using data lakes, you can run different types of analytics on both raw and curated datasets. By using Amazon S3 as the storage layer of your data lakes, you can have a set of rich controls at both the bucket and object level. You can use these to define access control policies for the datasets in your lake.

The AWS Glue Data Catalog is a persistent, fully managed metadata store for your data lake on AWS. Using the Glue Data Catalog, you can store, annotate, and share metadata in the AWS Cloud in the same way you do in an Apache Hive Metastore. The Glue Data Catalog also has seamless out-of-box integration with Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum.

Using AWS Glue, you can also create policies that restrict access to different portions of the catalog based on users and roles, or that apply at the level of individual resources. With these policies, you can provide granular control over which users can access the various metadata definitions in your data lake.

Important: The S3 and the AWS Glue Data Catalog policies define the access permissions to the data and metadata definitions respectively. In other words, the AWS Glue Data Catalog policies define the access to the metadata, and the S3 policies define the access to the content itself.

You can restrict which metadata operations can be performed, such as GetDatabases, GetTables, CreateTable, and others, using identity-based (IAM) policies. You can also restrict which Data Catalog objects those operations are performed on. Additionally, you can limit which catalog objects get returned in the resulting call. A Glue Data Catalog “object” here refers to a database, a table, a user-defined function, or a connection stored in the Glue Data Catalog.

Suppose that you have users that require read access to your production databases and tables in your data lake, and others have additional permissions to dev resources. Suppose also that you have a data lake storing both raw data feeds and curated datasets used by business intelligence, analytics, and machine learning applications. You can set these configurations easily, and many others, using the access control mechanisms in the AWS Glue Data Catalog.

Note: The following example shows how to set up a policy on the AWS Glue Data Catalog. It doesn’t set up the related S3 bucket or object level policies. Data Catalog policies determine whether the metadata is discoverable when using Athena, EMR, and other tools that integrate with the AWS Glue Data Catalog; S3 policy enforcement matters at the point when someone tries to access an S3 object directly. You should use Data Catalog and S3 bucket or object level policies together.

Fine-grained access control

You can define the access to the metadata using both resource-based and identity-based policies, depending on your organization’s needs. Resource-based policies list the principals that are allowed or denied access to your resources, allowing you to set up policies such as cross-account access. Identity policies are specifically attached to users, groups, and roles within IAM.

The fine-grained access portion of the policy is defined within the Resource clause. This portion defines both the AWS Glue Data Catalog object that the action can be performed on, and what resulting objects get returned by that operation.

Let’s run through an example. Suppose that you want to define a policy that allows a set of users to access only the finegrainaccess database. The policy also allows users to return all the tables listed within the database. For the GetTables action, the resource definitions include these resource statements:

"arn:aws:glue:us-east-1:123456789012:catalog",
"arn:aws:glue:us-east-1:123456789012:database/finegrainaccess",
"arn:aws:glue:us-east-1:123456789012:table/finegrainaccess/*"

The first resource statement, the catalog Amazon Resource Name (ARN), is required for Data Catalog calls. The second ARN allows the user to call the operation on the finegrainaccess database, and the third ARN allows all the tables within that database to be returned.

Now, what if we want to return only the tables that started with “dev_” from the “finegrainaccess” database? If so, this is how the policy changes:

"arn:aws:glue:us-east-1:123456789012:catalog",
"arn:aws:glue:us-east-1:123456789012:db/finegrainaccess",           
"arn:aws:glue:us-east-1:123456789012:tables/finegrainaccess/dev_*"  

Now, we are specifying dev_* as part of the table ARN in the third resource definition. This approach also works with actions for getting the list of databases, partitions for a table, connections, and other operations in the catalog.
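
Putting these pieces together, a minimal identity-based policy scoped this way might look like the following sketch (the account ID, region, and exact action list are illustrative, not taken from the walkthrough that follows):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "glue:GetDatabase",
                "glue:GetDatabases",
                "glue:GetTable",
                "glue:GetTables",
                "glue:GetPartitions"
            ],
            "Resource": [
                "arn:aws:glue:us-east-1:123456789012:catalog",
                "arn:aws:glue:us-east-1:123456789012:database/finegrainaccess",
                "arn:aws:glue:us-east-1:123456789012:table/finegrainaccess/dev_*"
            ]
        }
    ]
}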

Taking it for a spin

Note: This post focuses on the policies for AWS Glue Data Catalog. If you look closely, all of these datasets are pointing to the same S3 locations, which are world-readable. In a full example, you should also set the necessary S3 bucket or object level permissions, or both.

Next, we show an example you can do yourself. The next example creates the following in a Data Catalog.

We set up two users in the example, as shown following.

In the AWS Management Console, launch the AWS CloudFormation template.

Choose Next.

Important: Enter a password for the IAM users to be created. These users will have permissions to run Athena queries, access your Athena S3 results bucket, and see the AWS Glue databases and tables that the CloudFormation script creates. The password must meet the minimum requirements of the IAM password policy on the account that you run this example from.

Choose Next, and then on the next page choose Next again.

Lastly, acknowledge that the template will create IAM users and policies.

Then choose Create.

When you refresh your CloudFormation page, you can see the script creating the example resources.

Wait until it’s complete.

This script creates the necessary IAM users and the policies attached to them, along with the necessary databases and tables listed preceding.

After the CloudFormation script completes, you should see these tables if using an administrator user.

If you look on the Outputs tab, you can see the two IAM users that were created along with your IAM sign-in URL.

Note:

If you click the sign-in link in the same browser, the system logs you out. A nice trick is to right-click and open a private or incognito window.

If the provided IAM password doesn’t meet the minimum requirements, you see this message in the CloudFormation script event log:

The specified password is invalid … <why it was invalid>

Looking in the AWS Glue Data Catalog, you can see the tables that just got created by the script.

We can see the script created the structure that we outlined preceding.

Let’s check the two user profiles. If you go into IAM and look at the users, you can see that their permissions are set as inline policies. You should see the following for each user.

For the AWS Glue dev user, this section gives us full access to anything in the dev databases:

This section gives us the ability to query and see the prod database:

Lastly, this section gives us access to get tables and partitions from the prod database. You can structure this section so that it explicitly lists the blog_prod database in the resource and only allows that. The following lets someone query for database/* and return only the blog_prod tables. This, in fact, is the default behavior of the console.

Without this, you could still query those two databases explicitly, but the policy would not allow a wildcard query such as the following.

In contrast, the QA user doesn’t have access to the dev database and can only see the tables that start with prod_ in the prod database. So the following is what the QA user’s policy looks like.

The query for the prod database is as follows:

Only GetTables and GetPartitions are available for the tables starting with prod_.

Notice the “prod_*” in the resource definition following.

Querying based on the different users

Logging in as the two different IAM users listed on the AWS CloudFormation Outputs tab, with the password you provided, you can see some differences.

Notice that the QA user can’t see any metadata definitions for the blog_dev database, or the staging_yellow table in the blog_prod database.

Next, sign in as blog_dev_user and go to the Athena console. Notice that this user only sees the databases and tables that the user is permitted to see.

The dev user can create a table under blog_dev, but not the blog_prod database.

Now let’s look under dev_qa_user. Notice that we only see the blog_prod and prod_* tables in Athena.

The QA user can query the datasets that user can see, but the policy doesn’t let that user create a database or tables.

If the QA user tries to query through Athena, or manually pulls the metadata outside the console, that user can’t see any of the restricted information. You can test this by running the following.

select * from blog_dev.yellow limit 10;
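
As a rough equivalent outside Athena (assuming the QA user’s credentials are configured for the AWS CLI), calling the Data Catalog APIs directly is restricted in the same way; the first call below should be denied, while the second should return only the prod_ tables:

aws glue get-tables --database-name blog_dev
aws glue get-tables --database-name blog_prod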

Conclusion

Data cataloging is an important part of many analytical systems. The AWS Glue Data Catalog provides integration with a wide number of tools. Using the Data Catalog, you also can specify a policy that grants permissions to objects in the Data Catalog. Data lakes require detailed access control at both the content level and the level of the metadata describing the content. In this example, we show how you can define the access policies for the metadata in the catalog.


Additional Reading

Learn how to harmonize, search, and analyze loosely coupled datasets on AWS.



About the Author

Ben Snively is a Public Sector Specialist Solutions Architect. He works with government, non-profit and education customers on big data and analytical projects, helping them build solutions using AWS. In his spare time, he adds IoT sensors throughout his house and runs analytics on it.


Use YubiKey security key to sign into AWS Management Console with YubiKey for multi-factor authentication

Post Syndicated from Ujjwal Pugalia original https://aws.amazon.com/blogs/security/use-yubikey-security-key-sign-into-aws-management-console/

AWS Identity and Access Management (IAM) best practice is to require all IAM and root users in your account to sign into the AWS Management Console with multi-factor authentication (MFA). When MFA is enabled, AWS prompts users for their username and password (the first factor – what they know) and also presents an authentication challenge, such as a one-time passcode (OTP), for their MFA device (the second factor – what they have). Now you can enable a YubiKey security key (manufactured by Yubico, a third party provider) as your users’ MFA device.

YubiKey security keys use Universal 2nd Factor (U2F), an open authentication standard that enables users to easily and securely access multiple online services using a single security key, without needing to install drivers or client software. AWS allows you to enable a YubiKey security key as the MFA device for your IAM users. You can also enable a single key for multiple IAM and root users across AWS accounts, making it easier to manage MFA devices for multiple users. This means you can use the same key you already use with third-party applications, such as GitHub or Dropbox, to sign in to the AWS Management Console.

In this post, I demonstrate how to enable a YubiKey for your IAM users in the IAM console. I then demonstrate how to sign into the AWS Management Console as an IAM user using the YubiKey security key as your MFA device.

Note: You can enable a YubiKey security key as the MFA device for your root users from the Security Credentials page by following a similar setup process. Also, the AWS Console Mobile App and mobile browsers do not currently support YubiKey security keys as MFA for AWS. For more information, please review Supported Configurations for Using U2F Security Keys.

Enabling a YubiKey security key as MFA device for IAM users

To follow along, you must have a YubiKey security key that you want to associate with your IAM user. You can order a YubiKey security key using Amazon.com or other retailers.

Follow these steps to enable a YubiKey security key for your IAM user:

  1. Sign in to the IAM console.
  2. In the left navigation pane, select Users and then choose the name of the user for whom you want to enable a YubiKey.
  3. Select the Security Credentials tab, and then select the Manage link next to Assigned MFA device.
    Figure 1: Managing assigned MFA devices

    Figure 1: Managing assigned MFA devices

  4. In the Manage MFA Device wizard, select U2F security key and then select Continue.
    Figure 2: Selecting your U2F security key

    Figure 2: Selecting your U2F security key

  5. Insert the YubiKey security key into the USB port of your computer, wait for the key to blink, and then touch the button or gold disk on your key. If your key doesn’t blink, please select Troubleshoot U2F to review instructions to troubleshoot the issue.
    Image 3: Inserting the security key

    Figure 3: Inserting the security key

  6. You’ll receive a notification that the security key assignment was successful. The YubiKey security key is ready for use. Select Close.
    Figure 4: Notification of successful setup

    Figure 4: Notification of successful setup

    The Security Credentials tab will now display the U2F security key next to Assigned MFA device.

    Figure 5: Verifying your assigned MFA device

    Figure 5: Verifying your assigned MFA device

Now that you’ve successfully enabled a YubiKey security key as the MFA device for your IAM user (in this example, DBAdmin), I’ll demonstrate how your IAM user can use their YubiKey security key in addition to their username and password to sign into the AWS Management Console.

Using your YubiKey security key to sign into the AWS Management Console as an IAM user

As an IAM user with MFA enabled, you must use your MFA device to sign into the AWS Management Console. During sign-in, you first need to enter your username and password. Next, you need to complete the authentication challenge using your MFA device. Once you have successfully completed the MFA challenge, you can access the AWS Management Console.
Follow these steps to sign into the AWS Management Console using your YubiKey security key as the MFA device:

  1. Enter your AWS account ID or alias to sign in as an IAM user and select Next.
    Figure 6: Signing in as an IAM user

    Figure 6: Signing in as an IAM user

  2. From the IAM sign-in page, re-enter your AWS account ID or alias, plus the username and password for your IAM user. Then select Sign in.
    Figure 7: Entering your IAM account details

    Figure 7: Entering your IAM account details

  3. To authenticate with your YubiKey security key, insert your key into the USB port on your computer, wait for the key to blink, and then touch the button or gold disk on your YubiKey security key. If your key doesn’t blink, please select Troubleshoot MFA to review instructions to troubleshoot the issue.
    Figure 8: Completing sign-in with MFA

    Figure 8: Completing sign-in with MFA

Your IAM user has successfully completed the MFA challenge and signed into the AWS Management console.

Summary

In this blog post, I shared the benefits of using YubiKey security keys as your MFA device. I demonstrated how you can enable a YubiKey security key for your IAM users through the IAM console. I also showed you how to sign into the AWS Management Console using the YubiKey security key associated with your IAM user. You can also enable a U2F security key as an MFA device for root users by following a similar process.

If you have comments about enabling YubiKey or other MFA devices for your users, submit them in the Comments section below. If you have issues enabling YubiKey for your users, start a thread on the IAM forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Ujjwal Pugalia

Ujjwal is the product manager for the console sign-in and sign-up experience at AWS. He enjoys working in the customer-centric environment at Amazon because it aligns with his prior experience building an enterprise marketplace. Outside of work, Ujjwal enjoys watching crime dramas on Netflix. He holds an MBA from Carnegie Mellon University (CMU) in Pittsburgh.

AWS Organizations now requires email address verification in order to invite accounts to an organization

Post Syndicated from Raymond Ma original https://aws.amazon.com/blogs/security/aws-organizations-now-requires-email-address-verification/

AWS Organizations, the service for centrally managing multiple AWS accounts, enables you to invite existing accounts to join your organization. To provide additional assurance about your organization’s identity to AWS accounts that you invite, AWS Organizations is adding a new feature. Beginning on September 27, 2018, you’ll need to verify the email address associated with your organization’s master account before you invite existing accounts to join your organization. To prepare for this feature, you may verify the email address associated with your master account starting today.

If you need to change your master account email address prior to verifying it, please see Managing an AWS Account.

How to verify your master account email address

To verify your master account email address, follow these steps:

  1. Navigate to the Organizations console and choose the Settings tab.
  2. From the Organization details section of the settings page, choose Send verification request.
    Figure 1: Sending the verification request

    Figure 1: Sending the verification request

    You’ll receive a pop-up notification confirming that a verification email has been sent to the email address associated with your master account.

    Figure 2: Confirmation of verification email

    Figure 2: Notification of sent email

  3. Log into your email account and look for an email from AWS with the subject line AWS Organizations email verification request.
  4. Open the email and choose Verify your email address.
    Figure 3: Verification link in email

    Figure 3: Email verification link

  5. If you don’t have an active AWS session, log back into your AWS account. You should see a notification that your email address has been successfully verified. You’ll now be able to invite accounts to join your organization.
    Figure 4: Notification of successful verification

    Figure 4: Notification of successful verification

Summary

To provide additional security assurance to AWS accounts that are invited to join an organization, AWS Organizations will require email verification for organization master accounts starting September 27, 2018. To verify your master account email address and invite AWS accounts to join your organization, sign in to the Organizations console and follow the steps I’ve outlined in this post.

If you have comments about this post, submit them in the Comments section below. If you have questions about or issues implementing this solution, start a new thread on the Organizations forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Raymond Ma

Raymond is a Senior Technical Product Manager on the AWS Identity team covering AWS Organizations and Accounts. Outside of work, he enjoys taking care of his dog, Merlin, and volunteering with King County Search and Rescue.

Introducing private registry authentication support for AWS Fargate

Post Syndicated from tiffany jernigan (@tiffanyfayj) original https://aws.amazon.com/blogs/compute/introducing-private-registry-authentication-support-for-aws-fargate/

Private registry authentication support for Amazon Elastic Container Service (Amazon ECS) is now available with the AWS Fargate launch type! Now, in addition to Amazon Elastic Container Registry (Amazon ECR), you can use any private registry or repository of your choice for both EC2 and Fargate launch types.

For ECS to pull from a private repository, it needs a secret in AWS Secrets Manager with your registry credentials, an ECS task execution IAM role in AWS Identity and Access Management (IAM) with a policy granting access to the secret, and a task with the secret and task execution IAM role ARNs in the task definition.

Diagram of ECS Private Registry Authentication Architecture

Here’s how to use ECS with a private repository on Docker Hub via the AWS Management Console.

Registry

If you don’t already have a private repository (or account), you can create a free repo now. To follow along, run the following commands in a terminal to pull an image, get the image ID, and push it to your new repository:

docker pull tiffanyfay/space
docker images tiffanyfay/space --format {{.ID}}
docker tag <image-id> <your-username/repository-name>:latest
docker login
docker push <your-username/repository-name>

Secrets Manager

In the Secrets Manager console, store a new secret with your Docker Hub credentials, which is used to access your private repository.

By default, Secrets Manager creates an encryption key, DefaultEncryptionKey, on your behalf. You can instead use an existing key or add a new one with AWS Key Management Service (AWS KMS), if you would prefer.

Choose Other type of secrets and add secret keys and values for username and password.

Next, create a name, such as dockerhub, and description for your secret.

Because the keys are corresponding to your Docker Hub credentials, leave rotation disabled.

On the next page, you can review your settings and store your secret. Open your new secret to see the details. Write down the Secret ARN value and keep it handy, as it is used in the next step and later, in your task definition.

IAM

Now that you have a secret, you need to provide Fargate permissions to read it. This is done via a task execution IAM role.

In the IAM console, choose Policies, Create policy. Provide Secrets Manager with read access for secretsmanager:GetSecretValue, with your secret’s ARN as the resource.

Name your policy dockerhubsecret.
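
The policy that results from those settings should look something like the following sketch (the secret ARN placeholder stands for the value you copied earlier):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "<secret-ARN>"
        }
    ]
}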

If you chose to use your own encryption key, you also need to create a policy with kms:Decrypt permissions for KMS.

Next, choose Role to create an IAM role, which is used as your task execution role. Choose AWS service, Elastic Container Service, and Elastic Container Service Task.

Search for your dockerhubsecret policy and attach it to the role.

Lastly, give the role a name, such as ecsExecutionRoleDockerHub, and create it. Copy the role ARN value. Depending on how you create your task definition, you may need it.

ECS

While the mechanism to authenticate private registries is supported on both EC2 and Fargate launch types, for this example we will be launching a task on Fargate.

Before you can create a task, you need an ECS cluster, VPC, and subnets. If you don’t already have them, in the ECS console, choose Clusters, Get Started. Keep track of the cluster name, VPC ID, and subnet IDs, as you use them soon.

It’s time to create your task definition, which is used to create your task (grouping of up to ten containers that run on the same host). This is where you need your Secrets Manager ARN and IAM role name.

Choose Task Definitions, Create new Task Definition, and select the Fargate launch type. You can then configure your task definition via the wizard, or scroll down, choose Configure via JSON, and paste the following task definition after replacing the fields in angle brackets. This task definition also works with the EC2 launch type.

{
    "family": "space-td",
    "containerDefinitions": [
        {
            "name": "space",
            "image": "<your-username/repository-name>",
            "portMappings": [
                {
                    "protocol": "tcp",
                    "containerPort": 80
                }
            ],
            "cpu": 0,
            "repositoryCredentials": {
                "credentialsParameter": "<secret-ARN>"
            }
        }
    ],
    "memory": "512",
    "cpu": "256",
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "networkMode": "awsvpc",
    "executionRoleArn": "<execution-role-ARN>"
}

If you use the wizard, give your task a name, such as space-td, and specify your task execution IAM role (ecsExecutionRoleDockerHub), a task size of 0.5 GB of memory, and 0.25 vCPU.

Next, choose Container Definitions, Add container. Give the container a name, specify your image <your-username/repository-name>, check the box for private registry authentication, and add your secrets manager ARN and a container port 80. Choose Add.

After you create your task definition, choose Actions, Run Task, and specify the Fargate launch type, your cluster, cluster VPC, subnets, and a security group with inbound permissions for your container ports (the default one provides access to port 80). Enable auto-assigning a public IP address.
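
If you prefer the CLI for this step, a run-task call along these lines launches the same task (the cluster name, subnet ID, and security group ID below are placeholders for the values from your own setup):

aws ecs run-task --cluster <cluster-name> --launch-type FARGATE --task-definition space-td --network-configuration "awsvpcConfiguration={subnets=[<subnet-id>],securityGroups=[<security-group-id>],assignPublicIp=ENABLED}"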

Open the task from its ID to see the details:

When the Last status field is RUNNING, under Network, copy the public IP address and paste it in a browser.

If you pushed tiffanyfay/space to your repository, you should see the following:

I hope this post has helped you. If you have any questions, feel free to reach out!

-tiffany

Special thanks to Yuling Zhou, Deepak Dayama, Derek Petersen, Varun Iyer, Adnan Khan and several others for their insights in this blog.

tiffany jernigan

@tiffanyfayj
Tiffany is a developer advocate at Amazon for containers on AWS. Previously she worked at Docker and Intel in software engineering and as a hardware engineer after graduating from Georgia Tech in Electrical Engineering. In the majority of her free time she dabbles in photography and spends time with family and friends. You can find her on twitter/ig as tiffanyfayj.

Delegate permission management to developers by using IAM permissions boundaries

Post Syndicated from Apurv Awasthi original https://aws.amazon.com/blogs/security/delegate-permission-management-to-developers-using-iam-permissions-boundaries/

Today, AWS released a new IAM feature that makes it easier for you to delegate permissions management to trusted employees. As your organization grows, you might want to allow trusted employees to configure and manage IAM permissions to help your organization scale permission management and move workloads to AWS faster. For example, you might want to grant a developer the ability to create and manage permissions for an IAM role required to run an application on Amazon EC2. This ability is powerful and might be used inappropriately or accidentally to attach an administrator access policy to obtain full access to all resources in an account. Now, you can set a permissions boundary to control the maximum permissions employees can grant to the IAM principals (that is, users and roles) that they create and manage.

A permissions boundary is an advanced feature that allows you to limit the maximum permissions that a principal can have. Before we walk you through a specific example, here is an overview of how permissions boundaries work. As the IAM administrator, you can define one or more permissions boundaries using managed policies and allow your employee to create a principal with this boundary. The employee can then attach a permissions policy to this principal. However, the effective permissions of the principal are the intersection of the permissions boundary and permissions policy. As a result, the new principal cannot exceed the boundary that you defined. See the following diagram for a visual representation.
 

Figure 1: The intersection of permissions boundary and permissions policy

Figure 1: The intersection of permissions boundary and permissions policy

In this post, we’ll walk through an example that shows how to grant an employee permission to create IAM roles and assign permissions. We’ll also show how to ensure that these IAM roles can only access Amazon DynamoDB actions and resources in the AWS EU (Frankfurt) region. This solution requires the following steps.

IAM administrator tasks

  1. Define the permissions boundary by creating a customer-managed policy.
  2. Create and attach a permissions policy to allow an employee to create roles, but only with a permissions boundary and a name that meets a specific convention.
  3. Create and attach a permissions policy to allow an employee to pass this role to Amazon EC2.

Employee tasks

  1. Create a role with the required permissions boundary.
  2. Attach a permissions policy to the role.

Administrator step 1: Define the permissions boundary

As an IAM administrator, we’ll create a customer managed policy that grants permissions to put, update, and delete items on all DynamoDB tables in the AWS EU (Frankfurt) region. We’ll require employees to set this policy as the permissions boundary for the roles they create. To follow along, paste the following JSON policy in a file with the name DynamoDB_Boundary_Frankfurt_Text.json.


{
  "Version" : "2012-10-17",
  "Statement" : [
  {
    "Effect": "Allow",
    "Action": [
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem"
   ],
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "aws:RequestedRegion": "eu-central-1"
        }
    }
  }
]
}

Next, use the create-policy AWS CLI command to create the policy, DynamoDB_Boundary_Frankfurt.

$aws iam create-policy --policy-name DynamoDB_Boundary_Frankfurt --policy-document file://DynamoDB_Boundary_Frankfurt_Text.json

Note: You can also use an AWS managed policy as a permissions boundary.

Administrator step 2: Create and attach the permissions policy

Create a policy that grants permissions to create IAM roles with the DynamoDB_Boundary_Frankfurt permissions boundary, and a name that begins with the prefix MyTestApp. This policy also grants permissions to create and attach IAM policies to roles with this boundary and naming convention. The permissions boundary controls the maximum permissions these roles can have. The naming convention enables administrators to more effectively grant access to manage and use these roles, without updating the employee’s permissions when they create a role. The naming convention also makes it easier to audit and identify roles created by an employee. To create this policy, paste the following JSON policy document in a file with the name Permissions_Policy_For_Employee_Text.json. Make sure to replace the variable <ACCOUNT_NUMBER> with your own AWS account number. You can update the policy to grant additional permissions, such as launching EC2 instances in a specific subnet or allowing read-only access on items in a DynamoDB table.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SetPermissionsBoundary",
            "Effect": "Allow",
            "Action": [
                "iam:CreateRole",
                "iam:AttachRolePolicy",
                "iam:DetachRolePolicy"
            ],
            "Resource": "arn:aws:iam::<ACCOUNT_NUMBER>:role/MyTestApp*",
            "Condition": {
                "StringEquals": {
                    "iam:PermissionsBoundary": "arn:aws:iam::<ACCOUNT_NUMBER>:policy/DynamoDB_Boundary_Frankfurt"
                }
            }
        },
        {
            "Sid": "CreateAndEditPermissionsPolicy",
            "Effect": "Allow",
            "Action": [
                "iam:CreatePolicy",
                "iam:CreatePolicyVersion"
            ],
            "Resource": "*"
        }
    ]
}

Next, use the create-policy command to create the customer managed policy, Permissions_Policy_For_Employee, and use the attach-role-policy command to attach this policy to the principal, MyEmployeeRole, used by your employee.

$aws iam create-policy --policy-name Permissions_Policy_For_Employee --policy-document file://Permissions_Policy_For_Employee_Text.json

$aws iam attach-role-policy --policy-arn arn:aws:iam::<ACCOUNT_NUMBER>:policy/Permissions_Policy_For_Employee --role-name MyEmployeeRole

Administrator step 3: Create and attach the permissions policy for granting permissions to pass the role

Create a policy to allow the employee to pass the roles they created to AWS services, such as Amazon EC2, enabling these services to assume the roles and perform actions on the employee’s behalf. To do this, paste the following JSON policy document in a file with the name Pass_Role_Policy_Text.json.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::<ACCOUNT_NUMBER>:role/MyTestApp*"
        }
    ]
}

Then, use the create-policy command to create the policy, Pass_Role_Policy, and the attach-role-policy command to attach this policy to the principal, MyEmployeeRole.

$aws iam create-policy --policy-name Pass_Role_Policy --policy-document file://Pass_Role_Policy_Text.json

$aws iam attach-role-policy --policy-arn arn:aws:iam::<ACCOUNT_NUMBER>:policy/Pass_Role_Policy --role-name MyEmployeeRole

As the IAM administrator, we’ve successfully defined a permissions boundary. We’ve also granted our employee the ability to create IAM roles and attach permissions policies, while ensuring the permissions of the roles don’t exceed the boundary that we set.

Managing Permissions Boundaries

Changing and modifying a permissions boundary is a powerful permission. You should reserve this permission for full administrators in an account. You can do this by ensuring that policies you use as permissions boundaries don’t include the iam:DeleteUserPermissionsBoundary and iam:DeleteRolePermissionsBoundary actions. Or, if you allow "iam:*" actions, then you must explicitly deny those actions.
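
One way to express that explicit deny is a statement along the following lines in the permissions policy you grant to delegated administrators (a sketch, not taken from the example in this post):

{
    "Effect": "Deny",
    "Action": [
        "iam:DeleteUserPermissionsBoundary",
        "iam:DeleteRolePermissionsBoundary"
    ],
    "Resource": "*"
}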

Employee step 1: Create a role by providing the permissions boundary

Your employee can now use the create-role command to create a new IAM role with the DynamoDB_Boundary_Frankfurt permissions boundary and the attach-role-policy command to attach permissions policies to this role.

For this post, we assume that your employee operates an application, MyTestApp, on Amazon EC2 that requires access to the Amazon DynamoDB table, MyTestApp_DDB_Table. The employee can paste the following JSON policy document and save it as Role_Trust_Policy_Text.json to define the trust policy.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Then, the employee can use the create-role command to create the IAM role, MyTestAppRole, and define the permissions boundary as DynamoDB_Boundary_Frankfurt. The create-role command will fail if the employee doesn’t provide the appropriate permissions boundary. Make sure the <ACCOUNT_NUMBER> variable in the commands below is replaced with the employee’s account number.

$aws iam create-role --role-name MyTestAppRole \
--assume-role-policy-document file://Role_Trust_Policy_Text.json \
--permissions-boundary arn:aws:iam::<ACCOUNT_NUMBER>:policy/DynamoDB_Boundary_Frankfurt

Next, the employee creates a customer managed policy, MyTestApp_DDB_Permissions, from the following policy document and grants permissions to the role by attaching it with the attach-role-policy command. This policy grants the ability to perform all actions on the DynamoDB table, MyTestApp_DDB_Table.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "dynamodb:*",
            "Resource": [
                "arn:aws:dynamodb:eu-central-1:<ACCOUNT_NUMBER>:table/MyTestApp_DDB_Table"
            ]
        }
    ]
}

$aws iam attach-role-policy --policy-arn arn:aws:iam::<ACCOUNT_NUMBER>:policy/MyTestApp_DDB_Permissions \
--role-name MyTestAppRole

Although the employee granted full DynamoDB access, the effective permissions for this IAM role are the intersection of the permissions boundary, DynamoDB_Boundary_Frankfurt, and the permissions policy, MyTestApp_DDB_Permissions. This means the role only has access to put, update, and delete items on the MyTestApp_DDB_Table in the AWS EU (Frankfurt) region. See the following diagram for a visual representation.
 

Figure 2: Effective permissions for the IAM role

Figure 2: Effective permissions for the IAM role
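
As a quick sanity check from an EC2 instance that has assumed MyTestAppRole (the item key below is just an illustration), a write within the boundary succeeds while an action outside it, such as a scan, is denied:

aws dynamodb put-item --table-name MyTestApp_DDB_Table --item '{"Id": {"S": "item-1"}}' --region eu-central-1
aws dynamodb scan --table-name MyTestApp_DDB_Table --region eu-central-1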

Summary

We demonstrated how to use permissions boundaries to delegate IAM permission management. Using permissions boundaries can help you scale permission management in your organization and move workloads to AWS faster. To learn more, see the IAM documentation for permissions boundaries.

If you have comments about this post, submit them in the Comments section below. If you have questions or suggestions, please start a new thread on the IAM forum.

Want more AWS Security news? Follow us on Twitter.

How to connect to AWS Secrets Manager service within a Virtual Private Cloud

Post Syndicated from Divya Sridhar original https://aws.amazon.com/blogs/security/how-to-connect-to-aws-secrets-manager-service-within-a-virtual-private-cloud/

You can now use AWS Secrets Manager with Amazon Virtual Private Cloud (Amazon VPC) endpoints powered by AWS PrivateLink and keep traffic between your VPC and Secrets Manager within the AWS network.

AWS Secrets Manager is a secrets management service that helps you protect access to your applications, services, and IT resources. This service enables you to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. When your application running within an Amazon VPC communicates with Secrets Manager, this communication traverses the public internet. By using Secrets Manager with Amazon VPC endpoints, you can now keep this communication within the AWS network and help meet your compliance and regulatory requirements to limit public internet connectivity. You can start using Secrets Manager with Amazon VPC endpoints by creating an Amazon VPC endpoint for Secrets Manager with a few clicks on the VPC console or via AWS CLI. Once you create the VPC endpoint, you can start using it without making any code or configuration changes in your application.

The diagram demonstrates how Secrets Manager works with Amazon VPC endpoints. It shows how I retrieve a secret stored in Secrets Manager from an Amazon EC2 instance. When the request is sent to Secrets Manager, the entire data flow is contained within the VPC and the AWS network.

Figure 1: How Secrets Manager works with Amazon VPC endpoints

Figure 1: How Secrets Manager works with Amazon VPC endpoints

Solution overview

In this post, I show you how to use Secrets Manager with an Amazon VPC endpoint. In this example, we have an application running on an EC2 instance in a VPC named vpc-5ad42b3c. This application requires the database password for an RDS instance running in the same VPC. I have stored the database password in Secrets Manager. I will now show how to:

  1. Create an Amazon VPC endpoint for Secrets Manager using the VPC console.
  2. Use the Amazon VPC endpoint via AWS CLI to retrieve the RDS database secret stored in Secrets Manager from an application running on an EC2 instance.

Step 1: Create an Amazon VPC endpoint for Secrets Manager

  1. Open the Amazon VPC console, select Endpoints, and then select Create Endpoint.
  2. Select AWS Services as the Service category, and then, in the Service Name list, select the Secrets Manager endpoint service named com.amazonaws.us-west-2.secretsmanager.
     
    Figure 2: Options to select when creating an endpoint

    Figure 2: Options to select when creating an endpoint

  3. Specify the VPC you want to create the endpoint in. For this post, I chose the VPC named vpc-5ad42b3c where my RDS instance and application are running.
  4. To create a VPC endpoint, you need to specify the private IP address range in which the endpoint will be accessible. To do this, select the subnet for each Availability Zone (AZ). This restricts the VPC endpoint to the private IP address range specific to each AZ and also creates an AZ-specific VPC endpoint. Specifying more than one subnet-AZ combination helps improve fault tolerance and makes the endpoint accessible from a different AZ in case of an AZ failure. Here, I specify subnet IDs for availability zones us-west-2a, us-west-2b, and us-west-2c:
     
    Figure 3: Specifying subnet IDs

    Figure 3: Specifying subnet IDs

  5. Select the Enable Private DNS Name checkbox for the VPC endpoint. Private DNS resolves the standard Secrets Manager DNS hostname https://secretsmanager.<region>.amazonaws.com to the private IP addresses associated with the VPC endpoint. As a result, you can access the Secrets Manager VPC endpoint via the AWS Command Line Interface (AWS CLI) or AWS SDKs without making any code or configuration changes to update the Secrets Manager endpoint URL.
     
    Figure 4: The "Enable Private DNS Name" checkbox

    Figure 4: The “Enable Private DNS Name” checkbox

  6. Associate a security group with this endpoint. The security group enables you to control the traffic to the endpoint from resources in your VPC. For this post, I chose to associate the security group named sg-07e4197d that I created earlier. This security group has been set up to allow all instances running within VPC vpc-5ad42b3c to access the Secrets Manager VPC endpoint. Select Create endpoint to finish creating the endpoint.
     
    Figure 5: Associate a security group and create the endpoint

    Figure 5: Associate a security group and create the endpoint

  7. To view the details of the endpoint you created, select the link on the console.
     
    Figure 6: Viewing the endpoint details

    Figure 6: Viewing the endpoint details

  8. The Details tab shows all the DNS hostnames generated while creating the Amazon VPC endpoint that can be used to connect to Secrets Manager. I can now use the standard endpoint secretsmanager.us-west-2.amazonaws.com or one of the VPC-specific endpoints to connect to Secrets Manager within vpc-5ad42b3c where my RDS instance and application also resides.
     
    Figure 7: The "Details" tab

    Figure 7: The “Details” tab

Step 2: Access Secrets Manager through the VPC endpoint

Now that I have created the VPC endpoint, all traffic between my application running on an EC2 instance hosted within VPC named vpc-5ad42b3c and Secrets Manager will be within the AWS network. This connection will use the VPC endpoint and I can use it to retrieve my RDS database secret stored in Secrets Manager. I can retrieve the secret via the AWS SDK or CLI. As an example, I can use the CLI command shown below to retrieve the current version of my RDS database secret:

$ aws secretsmanager get-secret-value --secret-id MyDatabaseSecret --version-stage AWSCURRENT

Since my AWS CLI is configured for the us-west-2 region, it uses the standard Secrets Manager endpoint URL https://secretsmanager.us-west-2.amazonaws.com. This standard endpoint automatically routes to the VPC endpoint because I enabled the private DNS name while creating the VPC endpoint. The above command produces the following output:


{
  "ARN": "arn:aws:secretsmanager:us-west-2:123456789012:secret:MyDatabaseSecret-a1b2c3",
  "Name": "MyDatabaseSecret",
  "VersionId": "EXAMPLE1-90ab-cdef-fedc-ba987EXAMPLE",
  "SecretString": "{\n  \"username\":\"david\",\n  \"password\":\"BnQw&XDWgaEeT9XGTT29\"\n}\n",
  "VersionStages": [
    "AWSCURRENT"
  ],
  "CreatedDate": 1523477145.713
} 
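
If I want to target one of the endpoint-specific DNS hostnames from the Details tab instead of relying on private DNS, I can pass it explicitly using the AWS CLI's global --endpoint-url option (the hostname below is a placeholder; substitute one of the values shown in Figure 7):

$ aws secretsmanager get-secret-value --secret-id MyDatabaseSecret --version-stage AWSCURRENT --endpoint-url https://<vpc-endpoint-specific-dns-hostname>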

Summary

I’ve shown you how to create a VPC endpoint for AWS Secrets Manager and retrieve an RDS database secret using the VPC endpoint. Secrets Manager VPC endpoints help you meet compliance and regulatory requirements that limit public internet connectivity within your VPC. They enable your applications running within a VPC to use Secrets Manager while keeping traffic between the VPC and Secrets Manager within the AWS network. You can start using Amazon VPC endpoints for Secrets Manager by creating endpoints in the VPC console or with the AWS CLI. Once the endpoint is created, applications that interact with Secrets Manager do not require any code or configuration changes.

To learn more about connecting to Secrets Manager through a VPC endpoint, read the Secrets Manager documentation. For guidance about your overall VPC network structure, see Practical VPC Design.

If you have questions about this feature or anything else related to Secrets Manager, start a new thread in the Secrets Manager forum.

Want more AWS Security news? Follow us on Twitter.

Control access to your APIs using Amazon API Gateway resource policies

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/control-access-to-your-apis-using-amazon-api-gateway-resource-policies/

This post courtesy of Tapodipta Ghosh, AWS Solutions Architect

Amazon API Gateway provides you with a simple, flexible, secure, and fully managed service that lets you focus on building core business services. API Gateway supports multiple mechanisms of access control using AWS Identity and Access Management (IAM), AWS Lambda authorizers, and Amazon Cognito.

You may want to enforce strict control on the locations from which your APIs are invoked. For example, if you are an AWS Partner who offers APIs over a SaaS model, you can take advantage of the new Amazon API Gateway resource policies feature to control access to your APIs using predefined IP address ranges. API Gateway resource policies are JSON policy documents that you attach to an API to control whether a specified principal (typically, an IAM user or role) can invoke the API.

After a customer subscribes to your SaaS product in AWS Marketplace, you can ask for IP address ranges in the registration information. Then you can enable access to your API from only those IP addresses, restricting the integration to known networks. For example, if you know that your customers are concentrated in a certain geography, you could blacklist all other countries. Alternatively, if you have global customers, you can whitelist only specific IP address ranges.

What problems do resource policies solve?

In a distributed development team with separate AWS accounts, integration testing can be challenging. Allowing users from a different AWS account to access your API requires writing and maintaining code for assuming a role in the API owner’s account. Also, if you work with a third party, you have to write a Lambda authorizer to implement a bearer token–based authorization scheme.

Now, you can use resource policies, much like S3 bucket policies, to provide overarching controls on your APIs without writing custom authorizers or complicated application logic. In this post, I demonstrate how you can use API Gateway resource policies to enable users from a different AWS account to access your API securely. You can also allow the API to be invoked only from specified source IP address ranges or CIDR blocks, without writing any code.

Solution overview

Imagine a company has two teams, Team A and Team B. Team B has created an API that is backed by a Lambda function and a DynamoDB database. They want to make the API public to third parties. First, they want Team A to run integration tests. After the API goes live, Team B wants to allow only users who access the API from a known IP address range.

The following diagram shows the sequence:
Flow Diagram

Start with building an API. For this walkthrough, use a SAM template and the AWS CLI to create the API. For the code to create an API and attach the resource policy to it, see the Sam-moviesapi-resourcepolicy GitHub repo.

Here’s a walkthrough of the steps, so you can get a deeper understanding of what’s happening under the covers.

  • Create the API
  • Turn on IAM authentication
  • Grant user access
  • Test the access permissions

Create the API

Assume that you are hosting the API in AccountB. Run the following commands:

git clone https://github.com/aws-samples/aws-sam-movies-api-resource-policy.git
mkdir ./build

cp -p -r ./movies ./build/movies

pip install -r requirements.txt -t ./build

aws cloudformation package --template-file template.yaml --output-template-file template-out.yaml --s3-bucket $S3Bucket --profile AccountB

aws cloudformation deploy --template-file template-out.yaml --stack-name apigw-resource-policies-demo --capabilities CAPABILITY_IAM --profile AccountB

Note: You’ll need an S3 bucket to store your artifact for the “package” step.

Turn on IAM authentication

After the movie API is set up, turn on IAM authentication, so that it’s protected from unauthenticated attempts.
It should look like the following screenshot:
iam-auth-on
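
If you prefer to script this step, you can switch the method's authorization type to AWS_IAM with the AWS CLI and then redeploy the stage. The REST API ID, resource ID, and stage name below are placeholders for the values created by the SAM template:

aws apigateway update-method --rest-api-id <rest-api-id> --resource-id <resource-id> --http-method GET --patch-operations op=replace,path=/authorizationType,value=AWS_IAM --profile AccountB

aws apigateway create-deployment --rest-api-id <rest-api-id> --stage-name <stage-name> --profile AccountB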

Also, make sure that you are getting a valid response when you make a GET request, as shown in the following screenshot:

Grant user access

Now grant the AccountA user access to your API. In the API Gateway console, choose Movies API, Resource Policy.

Note: All the IP address ranges recorded in this post are for illustration purposes only.

Here is a screenshot of how it would look in the console:

The entire policy is listed here:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::<account_idA>:user/<user>",
                    "arn:aws:iam::<account_idA>:root"
                ]
            },
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-east-1:<account_idB>:qxz8y9c8a4/*/*/*"
        },
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-east-1:<account_idB>:qxz8y9c8a4/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": " 203.0.113.0/24"
                }
            }
        }
    ]
}

Here are a few points worth noting. The first policy statement shows how you could provide granular access to certain API IDs down to the specific resource paths in the resource section of the policy. To provide the AccountA user with access only to GET requests, change the resource line to the following:

"Resource": "arn:aws:execute-api:us-east-1:<account_idB>:qxz8y9c8a4/*/GET/*"

In the second statement, you are whitelisting the entire 203.0.113.0/24 network to make all calls to the API.

While whitelisting IP addresses is a good way to start when launching the API for the first time, keeping the list up to date can prove challenging. For a stable product, blacklisting bad actors might be more practical.

A blacklist implementation could look like the following:

{
	"Effect": "Deny",
	"Principal": "*",
	"Action": "execute-api:Invoke",
	"Resource": "arn:aws:execute-api:us-east-1:<account_idB>:qxz8y9c8a4/*",
	"Condition": {
		"IpAddress": {
			"aws:SourceIp": "203.0.113.0/24"
		}
	}
}

Suppose you have access logs turned on for the API and your log analysis tool has flagged bad actors from a particular IP address range, for example 203.0.113.0/24. You can then blacklist that IP address range in the resource policy, as shown above.

Test the access permissions

You can now test, using Postman, to ensure that the user from AccountA can indeed call the API hosted in AccountB. Also verify that attempts from other accounts are rejected.
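
Keep in mind that cross-account access requires permissions on both sides: in addition to the resource policy shown earlier, the AccountA user needs an identity-based policy in their own account that allows the invoke action. A minimal sketch, using the same example API ID and region, could look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-east-1:<account_idB>:qxz8y9c8a4/*"
        }
    ]
}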

In the following examples, the AWS Signature is configured with the AccessKey and SecretKey values from an AccountB user, who was granted access to the API.

Successful response from an authorized user from AccountB – Got a 200 OK

Failure from an unauthorized account/user: Got 401 Unauthorized

Summary

In this post, I showed you the different ways that you can use resource policies to lock down access to your API. Want to restrict a dev API endpoint to the office IP address range? Now you can. Cross-account API access is also made much simpler without having to write complex authentication/authorization schemes.

Easier way to control access to AWS regions using IAM policies

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/easier-way-to-control-access-to-aws-regions-using-iam-policies/

We made it easier for you to comply with regulatory standards by controlling access to AWS Regions using IAM policies. For example, if your company requires users to create resources in a specific AWS region, you can now add a new condition to the IAM policies you attach to your IAM principal (user or role) to enforce this for all AWS services. In this post, I review conditions in policies, introduce the new condition, and review a policy example to demonstrate how you can control access across multiple AWS services to a specific region.

Condition concepts

Before I introduce the new condition, let’s review the condition element of an IAM policy. A condition is an optional IAM policy element that lets you specify special circumstances under which the policy grants or denies permission. A condition includes a condition key, operator, and value for the condition. There are two types of conditions: service-specific conditions and global conditions. Service-specific conditions are specific to certain actions in an AWS service. For example, the condition key ec2:InstanceType supports specific EC2 actions. Global conditions support all actions across all AWS services.
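
For example, a statement that uses the service-specific condition key ec2:InstanceType to allow launching only t2.micro instances might look like this (a simplified sketch that covers only the instance resource):

{
    "Effect": "Allow",
    "Action": "ec2:RunInstances",
    "Resource": "arn:aws:ec2:*:*:instance/*",
    "Condition": {
        "StringEquals": {"ec2:InstanceType": "t2.micro"}
    }
}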

Now that I’ve reviewed the condition element in an IAM policy, let me introduce the new condition.

aws:RequestedRegion condition key

The new global condition key, aws:RequestedRegion, supports all actions across all AWS services. You can use any string operator and specify any AWS region for its value.

Condition key: aws:RequestedRegion
Description: Allows you to specify the region to which the IAM principal (user or role) can make API calls
Operator(s): All string operators (for example, StringEquals)
Value: Any AWS region (for example, us-east-1)

I’ll now demonstrate the use of the new global condition key.

Example: Policy with region-level control

Let’s say a group of software developers in my organization is working on a project using Amazon EC2 and Amazon RDS. The project requires a web server running on an EC2 instance using Amazon Linux and a MySQL database instance in RDS. The developers also want to test AWS Lambda, an event-driven platform, to retrieve data from the MySQL DB instance in RDS for future use.

My organization requires all AWS resources to remain in the Frankfurt (eu-central-1) region. To make sure this project follows that guideline, I create a single IAM policy for all the AWS services this group is going to use and apply the new global condition key aws:RequestedRegion to all of them. This way, I can ensure that any new EC2 instances launched or any database instances created using RDS are in Frankfurt. This policy also ensures that any Lambda functions this group creates for testing are in the Frankfurt region.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeAccountAttributes",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeInternetGateways",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeVpcAttribute",
                "ec2:DescribeVpcs",
                "ec2:DescribeInstances",
                "ec2:DescribeImages",
                "ec2:DescribeKeyPairs",
                "rds:Describe*",
                "iam:ListRolePolicies",
                "iam:ListRoles",
                "iam:GetRole",
                "iam:ListInstanceProfiles",
                "iam:AttachRolePolicy",
                "lambda:GetAccountSettings"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:RunInstances",
                "rds:CreateDBInstance",
                "rds:CreateDBCluster",
                "lambda:CreateFunction",
                "lambda:InvokeFunction"
            ],
            "Resource": "*",
      "Condition": {"StringEquals": {"aws:RequestedRegion": "eu-central-1"}}

        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": "arn:aws:iam::account-id:role/*"
        }
    ]
}

The first statement in the above example contains all the read-only actions that let my developers use the console for EC2, RDS, and Lambda. The IAM-related permissions are required to launch EC2 instances with a role, enable enhanced monitoring in RDS, and let AWS Lambda assume the IAM execution role that runs the Lambda function. I’ve combined all the read-only actions into a single statement for simplicity. The second statement is where I grant my developers write access to the three services and restrict that access to the Frankfurt region using the aws:RequestedRegion condition key. You can also list multiple AWS regions with the new condition key if your developers are allowed to create resources in more than one region, as shown in the snippet after this paragraph. The third statement grants permission for the IAM action iam:PassRole, which AWS Lambda requires. For more information on allowing users to create a Lambda function, see Using Identity-Based Policies for AWS Lambda.
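
For example, a condition that permits resource creation in both Frankfurt and Ireland could look like the following (eu-west-1 is included here purely for illustration):

      "Condition": {
          "StringEquals": {
              "aws:RequestedRegion": ["eu-central-1", "eu-west-1"]
          }
      }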

Summary

You can now use the aws:RequestedRegion global condition key in your IAM policies to specify the regions in which the IAM principal (user or role) can make API calls. This capability makes it easier for you to restrict the AWS regions your IAM principals can use, helping you comply with regulatory standards and improve account security. For more information about this global condition key and policy examples using aws:RequestedRegion, see the IAM documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

Want more AWS Security news? Follow us on Twitter.

New .BOT gTLD from Amazon

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-bot-gtld-from-amazon/

Today, I’m excited to announce the launch of .BOT, a new generic top-level domain (gTLD) from Amazon. Customers can use .BOT domains to provide an identity and portal for their bots. Fitness bots, Slack bots, e-commerce bots, and more can all benefit from an easy-to-access .BOT domain. The term “bot” was the 4th most-registered domain keyword within the .COM TLD in 2016, with more than 6000 domains per month. A .BOT domain allows customers to provide a definitive internet identity for their bots as well as enhance SEO performance.

At the time of this writing, .BOT domains start at $75 each and must be verified and published with a supported tool such as Amazon Lex, Botkit Studio, Dialogflow, Gupshup, Microsoft Bot Framework, or Pandorabots. You can expect support for more tools over time, and if your favorite bot framework isn’t supported, feel free to contact us here: [email protected].

Below, I’ll walk through the experience of registering and provisioning a domain for my bot, whereml.bot. Then we’ll look at setting up the domain as a hosted zone in Amazon Route 53. Let’s get started.

Registering a .BOT domain

First, I’ll head over to https://amazonregistry.com/bot, type in a new domain, and click the magnifying glass to make sure my domain is available and get taken to the registration wizard.

Next, I have the opportunity to choose how I want to verify my bot. I build all of my bots with Amazon Lex, so I’ll select that in the drop-down and get prompted for instructions specific to AWS. If I had my bot hosted somewhere else, I would need to follow the verification instructions for that particular framework.

To verify my Lex bot, I need to give the Amazon Registry permission to invoke the bot and verify its existence. I’ll do this by creating an AWS Identity and Access Management (IAM) cross-account role and attaching the AmazonLexReadOnly policy to that role. This is easily accomplished in the AWS Console. Be sure to provide the account number and external ID shown on the registration page.

Now I’ll add read-only permissions to our Amazon Lex bots.

I’ll give my role a fancy name like DotBotCrossAccountVerifyRole and a description so it’s easy to remember why I made it. Then I’ll click Create to create the role and be taken to the role summary page.

Finally, I’ll copy the ARN from the created role and save it for my next step.
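
For reference, the console steps above map roughly to the following CLI commands. The registry account ID and external ID are placeholders for the values shown on the registration page, and trust-policy.json is a file you create with that trust relationship:

trust-policy.json:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::<registry-account-id>:root"},
    "Action": "sts:AssumeRole",
    "Condition": {"StringEquals": {"sts:ExternalId": "<external-id>"}}
  }]
}

$ aws iam create-role --role-name DotBotCrossAccountVerifyRole --assume-role-policy-document file://trust-policy.json

$ aws iam attach-role-policy --role-name DotBotCrossAccountVerifyRole --policy-arn arn:aws:iam::aws:policy/AmazonLexReadOnly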

Here I’ll add all the details of my Amazon Lex bot. If you haven’t made a bot yet, you can follow the tutorial to build a basic bot. I can refer to any alias I’ve deployed, but if I just want to grab the latest published bot, I can pass in $LATEST as the alias. Finally, I’ll click Validate and proceed to registering my domain.

Amazon Registry works with a partner, EnCirca, to register our domains, so we’ll select them and optionally grab Site Builder. I know how to sling some HTML and JavaScript together, so I’ll pass on the Site Builder side of things.

 

After I click Continue, we’re taken to EnCirca’s website to finalize the registration. With any luck, within a few minutes of purchasing and completing the registration, we should receive an email with some good news:

Alright, now that we have a domain name let’s find out how to host things on it.

Using Amazon Route 53 with a .BOT domain

Amazon Route 53 is a highly available and scalable DNS service with robust APIs, health checks, service discovery, and many other features. I definitely want to use it to host my new domain. The first thing I’ll do is navigate to the Route 53 console and create a hosted zone with the same name as my domain.
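
If I’d rather script this step, a rough CLI equivalent is shown below; the caller reference just needs to be a unique string, and the response includes the delegation set with the NS records used in the next step:

$ aws route53 create-hosted-zone --name whereml.bot --caller-reference whereml-bot-$(date +%s)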


Great! Now I need to take the name server (NS) records that Route 53 created for me and use EnCirca’s portal to add them as the authoritative name servers for the domain.

Now I just add my records to my hosted zone and I should be able to serve traffic! Way cool, I’ve got my very own .bot domain for @WhereML.

Next Steps

  • I could and should improve the security of my site by creating TLS certificates for people who access my domain over TLS. Luckily, with AWS Certificate Manager (ACM), this is straightforward, and I can get my subdomains and root domain verified in just a few clicks.
  • I could create an Amazon CloudFront distribution to front an S3-hosted static single-page application that serves my entire chatbot and invokes Amazon Lex with an Amazon Cognito identity right from the browser.

Randall