Tag Archives: AWS Identity and Access Management (IAM)

Easily control the naming of individual IAM role sessions

Post Syndicated from Derrick Oigiagbe original https://aws.amazon.com/blogs/security/easily-control-naming-individual-iam-role-sessions/

AWS Identity and Access Management (IAM) now has a new sts:RoleSessionName condition element for the AWS Security Token Service (AWS STS) that makes it easier for AWS account administrators to control the naming of individual IAM role sessions. IAM roles help you grant access to AWS services and resources by using dynamically generated short-term credentials. Each instantiation of an IAM role, and the associated set of short-term credentials, is known as an IAM role session. Each IAM role session is uniquely identified by a role session name. You can now use the new condition to control how IAM principals and applications name their role sessions when they assume an IAM role, and rely on the role session name to easily track their actions when viewing AWS CloudTrail logs.

How do you name a role session?

There are different ways to name a role session, and the method depends on how you assume the IAM role. In some cases, AWS sets the role session name on your behalf. For example, for Amazon Elastic Compute Cloud (Amazon EC2) instance profiles, AWS sets the role session name to the instance profile ID. When you use the AssumeRoleWithSAML API to assume an IAM role, AWS sets the role session name value to the attribute provided by the identity provider, which your administrator defined. In other cases, you provide the role session name when assuming the IAM role. For example, when assuming an IAM role with APIs such as AssumeRole or AssumeRoleWithWebIdentity, the role session name is a required input parameter that you set when making the API request.

What is a condition element?

A condition is an optional IAM policy element. You can use a condition to specify the circumstances under which the IAM policy grants or denies permissions. A condition consists of a condition key, an operator, and a value.

There are two types of conditions: service-specific conditions and global conditions. Service-specific conditions are specific to certain actions in an AWS service. For example, specific EC2 actions support the ec2:InstanceType condition. All AWS service actions support global conditions.
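
For example, here is a minimal sketch of a policy statement that uses the service-specific ec2:InstanceType condition to allow launching only t2.micro instances. The wildcard resource is for illustration only; on its own, this statement isn't sufficient to launch an instance, it simply shows how the condition applies to the instance resource.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {
                    "ec2:InstanceType": "t2.micro"
                }
            }
        }
    ]
}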

Now that I’ve explained the meaning of a role session name and the meaning of a condition element in an IAM policy, let me introduce the new condition, sts:RoleSessionName.

sts:RoleSessionName condition

sts:RoleSessionName is a service-specific condition key that you use with the AssumeRole API action in an IAM policy to control what is set as the role session name. You can use any string operator, such as StringLike, with this condition.

Condition key: sts:RoleSessionName
Description: Uniquely identifies a session when IAM principals, federated identities, and applications assume the same IAM role.
Operator(s): All string operators
Value: String of upper-case and lower-case alphanumeric characters with no spaces. It can include underscores or any of the following characters: =,.@-. IAM policy element variables can be set as values.

In this post, I will walk you through two examples of how to use the sts:RoleSessionName condition. In the first example, you will learn how to require IAM users to set their aws:username as their role session name when they assume an IAM role in your AWS account. In the second example, you will learn how to require IAM principals to choose from a pre-selected set of role session names when they assume an IAM role in your AWS account.

The examples shared in this post describe a scenario in which you have pricing data that is stored in an Amazon DynamoDB database in your AWS account, and you want to share the pricing data with members from your marketing department, who are in a different AWS account. In addition, you want to use your AWS CloudTrail logs to track the activities of members from the marketing department whenever they access the pricing data. This post will show you how to achieve this by doing the following:

  1. Dedicate an IAM role in your AWS account for the marketing department.
  2. Define the role trust policy for the IAM role, to specify who can assume the IAM role.
  3. Use the new sts:RoleSessionName condition in the role trust policy to define the allowed role session name values for the dedicated IAM role.

When members from the marketing department attempt to assume the IAM role in your AWS account, AWS will verify that their role session name does not conflict with the IAM role trust policy, before authorizing the assume-role action. The new sts:RoleSessionName condition gives you control of the role session name. With this control, when you view the AWS CloudTrail logs, you can now rely on the role session name for any of the following information:

  • To identify the IAM principal or application that assumed an IAM role.
  • To understand the reason why the IAM principal or application assumed an IAM role.
  • To track the actions performed by the IAM principal or application with the assumed IAM role.

Example 1 – Require IAM users to set their aws:username as their role session name when they assume an IAM role in your AWS account

When an IAM user assumes an IAM role in your AWS account, you can require them to set their aws:username as the role session name. With this requirement, you can rely on the role session name to identify the IAM user who performed an action with the IAM role.

This example continues the scenario of sharing pricing data with members of the marketing department within your organization, who are in a different AWS account. John is a member of the marketing department; he is an IAM user in the marketing AWS account, and his aws:username is john_s. For John to access the pricing data in your AWS account, you first create a dedicated IAM role for the marketing department, called marketing. John will assume the marketing IAM role to access the pricing information in your AWS account.

Next, you establish a two-way trust between the marketing AWS account and your AWS account. The administrator of the marketing AWS account will need to grant John sts:AssumeRole permission with an IAM policy, so that John can assume the marketing IAM role in your AWS account. The following is a sample policy to grant John assume-role permission. Be sure to replace <AccountNumber> with your account number.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AssumeRole",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::<AccountNumber>:role/marketing"
        }
    ]
}

You then create a role trust policy for the marketing IAM role, which permits members of the marketing department to assume the IAM role. The following is a sample policy to create a role trust policy for the marketing IAM role. Be sure to replace <AccountNumber> with your account number.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<AccountNumber>:root"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringLike": {
                    "sts:RoleSessionName": "${aws:username}"
                }
            }
        }
    ]
}

In the role trust policy above, you use the sts:RoleSessionName condition to ensure that members of the marketing department set their aws:username as their role session name when they assume the marketing IAM role. If John attempts to assume the marketing IAM role and does not set his role session name to john_s, then AWS will not authorize the request. When John sets the role session name to his aws:username, AWS will permit him to assume the marketing IAM role. The following is a sample CLI command to assume an IAM role. Replace <AccountNumber> with your account number.


aws sts assume-role --role-arn arn:aws:iam::<AccountNumber>:role/marketing --role-session-name john_s

In the AWS CLI command above, John assumes the marketing IAM role and sets the role session name to john_s. John then calls the get-caller-identity API to verify that he assumed the marketing IAM role. The following is confirmation that John successfully assumed the marketing IAM role.


{
    "UserId": " AIDACKCEVSQ6C2EXAMPLE:john_s",
    "Account": "<AccountNumber>",
    "Arn": "arn:aws:sts::<AccountNumber>:assumed-role/marketing/john_s"
}

AWS CloudTrail captures any action that John performs with the marketing IAM role, and you can easily identify John’s sessions in your AWS CloudTrail logs by searching for any Amazon Resource Name (ARN) with John’s aws:username (which is john_s) as the role session name. The following is an example of AWS CloudTrail event details that shows the role session name. Replace <AccountNumber> with your account number.



"assumedRoleUser": {
            "assumedRoleId": "AIDACKCEVSQ6C2EXAMPLE:john_s",
            "arn": "arn:aws:sts::<AccountNumber>:assumed-role/marketing/john_s"
        }
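
For example, one way to find John's recent activity from the AWS CLI is to search CloudTrail events by user name. This is a sketch that assumes the session name john_s is recorded as the event's user name:

aws cloudtrail lookup-events --lookup-attributes AttributeKey=Username,AttributeValue=john_s --max-results 10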

Example 2 – Require IAM principals to choose from a pre-selected set of role session names when they assume an IAM role in your AWS account

You can also define the acceptable role session names that an IAM principal or application can use when they assume an IAM role in your AWS account. With this requirement, you ensure that IAM principals and applications that assume IAM roles in your AWS account use a pre-approved role session name that you can easily understand.

Expanding on the previous example, in the following scenario, you have a new AWS account with an Amazon DynamoDB database that stores competitive analysis data. You do not want members of the marketing department to have direct access to this new AWS account. You will achieve this by requesting your marketing partners to first assume the marketing IAM role in your other AWS account with pricing information, and from that AWS account, assume the Analyst IAM role in the new AWS account to access the competitive analysis data. Also, you want your marketing partners to select from a pre-defined set of role session names: “marketing-campaign”, “product-development” and “other”, which will identify their reason for accessing the competitive analysis data.

First, you establish a two-way trust. You grant the marketing IAM role sts:AssumeRole permission with an IAM policy. The following is a sample policy to grant the marketing IAM role assume-role permission. Be sure to replace <AccountNumber> with your account number.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AssumeRole",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::<AccountNumber>:role/Analyst"
        }
    ]
}

Next, you create a role trust policy for the Analyst IAM role. In the role trust policy, you set the marketing IAM role as the Principal, to restrict who can access the Analyst IAM role. Then you use the sts:RoleSessionName condition to define the acceptable role session names: marketing-campaign, product-development and other. The following is a role trust policy to limit the list of acceptable role session names. Replace <AccountNumber> with your account number.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<AccountNumber>:role/marketing"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringLike": {
                    "sts:RoleSessionName": [
                        "marketing-campaign",
                        "product-development",
                        "other"
                    ]
                }
            }
        }
    ]
}

If John from the marketing department wants to access the competitive analysis data, and he has assumed the marketing IAM role as shown in example #1, then he can assume the Analyst IAM role in the new AWS account by using the marketing IAM role. For AWS to authorize the assume-role request, when he assumes the Analyst IAM role, he must set the role session name to one of the pre-defined values. The following is a sample CLI command to assume the Analyst IAM role. Replace <AccountNumber> with your account number.


aws sts assume-role --role-arn arn:aws:iam::<AccountNumber>:role/Analyst --role-session-name marketing-campaign

In the CLI command above, John assumes the Analyst IAM role, using the marketing IAM role. He also sets the role session name to marketing-campaign, which is an allowed role session name. John then calls the get-caller-identity API to verify that he successfully assumed the Analyst IAM role. The following log results show the marketing IAM role successfully assumed the Analyst IAM role with the role session name as marketing-campaign.


{
    "UserId": " AIDACKCEVSQ6C2EXAMPLE:marketing-campaign",
    "Account": "<AccountNumber>",
    "Arn": "arn:aws:sts::<AccountNumber>:assumed-role/Analyst/marketing-campaign"
}

AWS CloudTrail captures any action performed with the Analyst IAM role. By viewing the role session names in your AWS CloudTrail logs, you can easily identify the reasons why your marketing partners accessed the competitive analysis data.


"assumedRoleUser": {
            "assumedRoleId": "AIDACKCEVSQ6C2EXAMPLE:marketing-campaign",
            "arn": "arn:aws:sts::<AccountNumber>:assumed-role/Analyst/marketing-campaign"
        }

Conclusion

In this post, I showed how AWS account administrators can use the sts:RoleSessionName condition to control how IAM principals name their sessions when they assume an IAM role. This control gives you increased confidence to rely on the role session name, when viewing AWS CloudTrail logs, to identify who performed an action with an IAM role, or to get additional context for why an IAM principal assumed an IAM role.

For more information about the sts:RoleSessionName condition, and for policy examples, see Available Keys for AWS STS in the AWS IAM User Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Identity and Access Management forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Derrick Oigiagbe

Derrick is a Senior Product Manager for the Identity and Access Management service at AWS. Prior to his career at Amazon, he received his MBA from Carnegie Mellon's Tepper School of Business. Derrick spent his early career as a technology consultant for Summa Technologies (recently acquired by CGI). In his spare time, Derrick enjoys playing soccer and travelling.

AWS IAM introduces updated policy defaults for IAM user passwords

Post Syndicated from Mark Burr original https://aws.amazon.com/blogs/security/aws-iam-introduces-updated-policy-defaults-for-iam-user-passwords/

To improve the default security for all AWS customers, we are adding a default password policy for AWS Identity and Access Management (IAM) users in AWS accounts. This update will be made globally to the IAM service on August 3rd, 2020. You can implement this change today by creating an IAM password policy in your AWS account. AWS accounts with an existing IAM password policy will not be affected by this change, but it is important to review the details below so you can evaluate any necessary changes to your environment.

What is an IAM password policy?

The IAM password policy is an account-level setting that applies to all IAM users, excluding the root user. You can create a policy to do things like require a minimum password length and specific character types, along with setting mandatory rotation periods. These password settings apply only to passwords assigned to IAM users and do not affect any access keys they might have.

What is the new default policy?

The new default IAM password policy has the following minimum requirements. Passwords must:

  • be a minimum of 8 characters
  • include a minimum of three of the following character types: uppercase, lowercase, numbers, and non-alphanumeric symbols, for example !@#$%^&*()_+-[]{}|'
  • not be identical to your AWS account name or email address

You can determine your own password requirements by setting a custom policy. Please note that this change does not apply to the root user, which has a separate password policy.
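
For example, here is a sketch of setting a stricter custom policy with the AWS CLI; the specific thresholds shown are illustrative, not recommendations:

aws iam update-account-password-policy \
    --minimum-password-length 14 \
    --require-uppercase-characters \
    --require-lowercase-characters \
    --require-numbers \
    --require-symbols \
    --max-password-age 90 \
    --password-reuse-prevention 24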

What should customers do to prepare for this update?

For AWS accounts with no password policy applied — the experience will be unchanged until you update user passwords. The new password will need to align with the minimum requirements of the default policy. Likewise, when you create new IAM users in these AWS accounts, the passwords must meet the new minimum requirements of the default policy. A default password policy will be set for all AWS accounts that do not currently have one.

For AWS accounts with an existing password policy — there is no change for any new and existing user passwords, and they will not be affected by this update. If you disable the existing password policy, then any new IAM users created from that point onward will require passwords that meet the minimum requirements of the default policy.

For AWS accounts using automation workflows which create IAM users — If you have implemented an automated user creation workflow that does not produce passwords that meet the new required complexity and have not implemented your own custom policy, you will be affected. You should inspect and evaluate your existing workflows, and they should either be updated to meet the default password policy or set with a custom policy prior to August 3rd to ensure continued operation.
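
For instance, if your workflow sets console passwords with calls like the following sketch (the user name and password are placeholders), the create-login-profile call will be rejected once the default policy applies if the generated password does not meet the new minimums:

aws iam create-user --user-name example-automation-user
aws iam create-login-profile --user-name example-automation-user --password 'Gen3rated-Passw0rd' --password-reset-required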

When will these changes happen?

To provide time for you to evaluate the potential impact of this change, AWS is updating the default password policy in 90 days, with the change taking effect at the beginning of August 2020. We encourage all customers to be proactive about assessing and modifying any automation workflows that create IAM users and passwords without a corresponding password policy.

How do I check if a policy is already set?

You can navigate to the AWS IAM console and choose Account settings, which states whether or not a password policy has been set for the account. You can also check this via the AWS Command Line Interface (AWS CLI) or the API; for further information, please refer to the documentation.
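
For example, the following AWS CLI command returns the current policy, or a NoSuchEntity error if no password policy has been set:

aws iam get-account-password-policy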

AWS Single Sign-On (AWS SSO)

Note: if you are primarily using IAM users as the source of your identities across multiple accounts, you may want to evaluate AWS SSO, which simplifies the user experience and improves security by eliminating individual passwords in each account. It also allows you to quickly and easily assign your employees access to AWS accounts managed with AWS Organizations, business cloud applications, and custom applications that support Security Assertion Markup Language (SAML) 2.0. To learn more, visit the AWS Single Sign-On page.

Need more assistance?

AWS IQ enables AWS customers to find, securely collaborate with, and pay AWS Certified third-party experts for on-demand project work. Visit the AWS IQ page for information about how to submit a request, get responses from experts, and choose the expert with the right skills and experience. Log into your console and select Get Started with AWS IQ to start a request.

The AWS Technical Support tiers cover development and production issues for AWS products and services, along with other key stack components. AWS Support does not include code development for client applications.

If you have any questions or issues, please start a new thread on the AWS IAM forum, or contact AWS Support or your Technical Account Manager (TAM). If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mark Burr

Mark is a Principal Consultant with the Worldwide Public Sector Professional Services team. He specializes in security, automation, large-scale migrations, enterprise transformation, and executive strategy. Mark enjoys helping global customers achieve amazing outcomes in AWS. When he’s not in the cloud, he’s on a bicycle or drinking a Belgian ale.

New – Use AWS IAM Access Analyzer in AWS Organizations

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-use-aws-iam-access-analyzer-in-aws-organizations/

Last year at AWS re:Invent 2019, we released AWS Identity and Access Management (IAM) Access Analyzer, which helps you understand who can access resources by analyzing permissions granted using policies for Amazon Simple Storage Service (S3) buckets, IAM roles, AWS Key Management Service (KMS) keys, AWS Lambda functions, and Amazon Simple Queue Service (SQS) queues.

AWS IAM Access Analyzer uses automated reasoning, a form of mathematical logic and inference, to determine all possible access paths allowed by a resource policy. We call these analytical results provable security, a higher level of assurance for security in the cloud.

Today I am pleased to announce that you can create an analyzer in the AWS Organizations master account or a delegated member account, with the entire organization as the zone of trust. Now, for each analyzer, you can set the zone of trust to be either a particular account or an entire organization, which defines the logical bounds the analyzer uses to base findings upon. This helps you quickly identify when resources in your organization can be accessed from outside of your AWS organization.

AWS IAM Access Analyzer for AWS Organizations – Getting started
You can enable IAM Access Analyzer in your organization with one click in the IAM console. Once enabled, IAM Access Analyzer analyzes policies and reports a list of findings for resources that grant public or cross-account access from outside your AWS organization, in the IAM console and through APIs.
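
As a sketch of the API route, enabling an organization-wide analyzer from the AWS CLI might look like the following; the analyzer name is a placeholder:

aws accessanalyzer create-analyzer --analyzer-name example-org-analyzer --type ORGANIZATION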

When you create an analyzer on your organization, it recognizes your organization as a zone of trust, meaning all accounts within the organization are trusted to have access to AWS resources. Access analyzer will generate a report that identifies access to your resources from outside of the organization.

For example, if you create an analyzer for your organization, it provides active findings for resources such as S3 buckets in your organization that are accessible publicly or from outside the organization.

When policies change, IAM Access Analyzer automatically triggers a new analysis and reports new findings based on the policy changes. You can also trigger a re-evaluation manually. You can download the details of findings into a report to support compliance audits.

Analyzers are specific to the region in which they are created. You need to create a unique analyzer for each region where you want to enable IAM Access Analyzer.

You can create multiple analyzers for your entire organization in your organization’s master account. Additionally, you can choose a member account in your organization as a delegated administrator for IAM Access Analyzer. When you choose a member account as the delegated administrator, the member account has permission to create analyzers within the organization. Additionally, individual accounts can create analyzers to identify resources accessible from outside those accounts.

IAM Access Analyzer sends an event to Amazon EventBridge for each generated finding, for a change to the status of an existing finding, and when a finding is deleted. You can monitor IAM Access Analyzer findings with EventBridge. Also, all IAM Access Analyzer actions are logged by AWS CloudTrail and AWS Security Hub. Using the information collected by CloudTrail, you can determine the request that was made to Access Analyzer, the IP address from which the request was made, who made the request, when it was made, and additional details.
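
For example, a sketch of an EventBridge event pattern that matches IAM Access Analyzer findings (the rule name and targets are up to you) could look like this:

{
  "source": ["aws.access-analyzer"],
  "detail-type": ["Access Analyzer Finding"]
}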

Now available!
This integration is available in all AWS Regions where IAM Access Analyzer is available. There is no extra cost for creating an analyzer with an organization as the zone of trust. You can learn more through the AWS re:Invent 2019 talks Dive Deep into IAM Access Analyzer and Automated Reasoning on AWS. Take a look at the feature page and the documentation to learn more.

Please send us feedback either in the AWS forum for IAM or through your usual AWS support contacts.

Channy;

How to define least-privileged permissions for actions called by AWS services

Post Syndicated from Michael Switzer original https://aws.amazon.com/blogs/security/how-to-define-least-privileged-permissions-for-actions-called-by-aws-services/

When you perform certain actions in AWS, the service you called sometimes takes additional actions in other AWS services on your behalf. AWS Identity and Access Management (IAM) now includes condition keys to make it easier to grant only the minimum level of access necessary for IAM principals (users and roles) and AWS services to take those actions. Using the aws:CalledVia condition key, you can create distinct access rules for the actions performed by your IAM principals, and for the subsequent actions taken by AWS services on your behalf.

For example, if you use AWS CloudFormation to launch an Amazon Elastic Compute Cloud (EC2) instance, the CloudFormation service independently uses your credentials to launch the instance in EC2. Your principals need permissions from both services. Based on the principle of granting least privileged permissions, you might want to prevent your principals from taking each of those actions independently. Using the new condition, you can grant your IAM principals the ability to launch EC2 instances only by using CloudFormation, without granting them direct access to EC2.

You can also use the aws:CalledVia condition key to define rules for the initial call made to AWS by your principals, without impacting the additional calls the service makes. For example, you can require that all initial calls to AWS services come from inside your Virtual Private Cloud (VPC) or your private IP subnet, but you will not impose the same rule for downstream requests to other services, since those requests come from AWS rather than your private network.

In this post, I explain the aws:CalledVia condition key and outline the context it provides during authorization. Then, I walk through a detailed use case with examples that show you how to secure access to a database managed in Amazon Athena behind a VPC. In the examples, I’ll cover how to grant access to execute queries in Athena without granting direct access to dependent services such as Amazon Simple Storage Service (S3). I will also explain how you can use the aws:CalledVia condition key to prevent access to your databases from outside your private networks.

How to use the aws:CalledVia condition key

The aws:CalledVia condition key contains an ordered list of each service principal that triggers the AWS action in another service using your credentials. It is a global condition key, meaning you can use it in combination with any AWS service action. For example, say that you use CloudFormation to read and write from an Amazon DynamoDB table that uses encryption supplied by the AWS Key Management Service (AWS KMS). When you call CloudFormation, it calls Amazon DynamoDB to read from the table, then Amazon DynamoDB calls AWS KMS to decrypt the data. Each call gets its own authorization check, and the aws:CalledVia key keeps track of who called whom.

  • For the call to CloudFormation, aws:CalledVia will be empty.
  • For the call from CloudFormation to Amazon DynamoDB, aws:CalledVia will contain [ “cloudformation.amazonaws.com” ]
  • For the call from Amazon DynamoDB to AWS KMS, aws:CalledVia will contain [ “cloudformation.amazonaws.com” , “dynamodb.amazonaws.com” ]

    Important: The aws:CalledVia context is only available when AWS reuses your credentials after you make a request. For example, if you provided a service role to CloudFormation instead of triggering the action directly, the aws:CalledVia context would be empty for a call from CloudFormation to Amazon DynamoDB.

To identify the service principal that aws:CalledVia returns for each AWS service, see the CalledVia Services table in the documentation. AWS will update this list as more services add support for this key.

You can use condition operators, such as StringLike and StringEquals, in your policies to check the contents of the key during authorization. Because aws:CalledVia is a multivalued condition key, you also need to use the ForAnyValue or ForAllValues set operators along with your comparison operators to check the content of the key. For more information, see Creating a Condition with Multiple Keys or Values in the IAM documentation.

Here’s an example policy that follows this use case.


{ 
   "Version":"2012-10-17",
   "Statement":[ 
      { 
         "Sid":"AllowCFNActions",
         "Effect":"Allow",
         "Action":[ 
            "cloudformation:CreateStack*"
         ],
         "Resource":"*"
      },
      { 
         "Sid":"AllowDDBActionsViaCFN",
         "Effect":"Allow",
         "Action":[ 
            "dynamodb:GetItem",
            "dynamodb:BatchGetItem",
            "dynamodb:PutItem",
            "dynamodb:UpdateItem",
            "dynamodb:DeleteItem"
         ],
         "Resource":"arn:aws:dynamodb:region:111122223333:table",
         "Condition":{ 
            "ForAnyValue:StringEquals":{ 
               "aws:CalledVia":[ 
                  "cloudformation.amazonaws.com"
               ]
            }
         }
      },
      { 
         "Sid":"AllowKMSActionsViaDDB",
         "Effect":"Allow",
         "Action":[ 
            "kms:Encrypt",
            "kms:Decrypt",
            "kms:ReEncrypt*",
            "kms:GenerateDataKey",
            "kms:DescribeKey"
         ],
         "Resource":[
            "arn:aws:kms:region:111122223333:key/example"
         ],
         "Condition":{ 
            "ForAnyValue:StringEquals":{ 
               "aws:CalledVia":[ 
                  "dynamodb.amazonaws.com"
               ]
            }
         }
      }
   ]
}

I’ll walk you through each statement in this policy. In the first statement, AllowCFNActions, I grant access to use CloudFormation to create stacks and stack sets. The second statement, AllowDDBActionsViaCFN, grants access to read and write from a specific DynamoDB table, but only if CloudFormation took the action. Finally, the third statement, AllowKMSActionsViaDDB, allows AWS KMS encrypt and decrypt operations that are triggered through Amazon DynamoDB, using the customer master key (CMK) specified in the Resource element. Each condition statement uses the ForAnyValue:StringEquals combination of operators to check for the existence of the CloudFormation or DynamoDB service principals. Putting it all together, the policy allows an IAM principal to do the following:

  1. Create stacks and stack sets by using CloudFormation.
  2. Read and write to Amazon DynamoDB tables via CloudFormation.
  3. Encrypt and decrypt by using AWS KMS actions via Amazon DynamoDB.

Controlling access based on the first and last requesting services

Along with aws:CalledVia, AWS has introduced two companion keys to make it easy to retrieve the first and last services in the chain of requests. The aws:CalledViaFirst condition key returns the first service principal in the chain, and aws:CalledViaLast returns the last service principal in the chain. For example, if aws:CalledVia contained [ “cloudformation.amazonaws.com” , “dynamodb.amazonaws.com” ], then aws:CalledViaFirst would contain “cloudformation.amazonaws.com” and aws:CalledViaLast would contain “dynamodb.amazonaws.com”.

Still following the previous example, here’s a policy that uses aws:CalledViaFirst to allow access to Amazon DynamoDB and AWS KMS, as long as the entry point is CloudFormation. You can use this policy to ensure that Amazon DynamoDB tables in your organization are accessed according to the best practices you’ve defined in your CloudFormation templates.


{ 
   "Version":"2012-10-17",
   "Statement":[ 
      { 
         "Sid":"AllowCFNActions",
         "Effect":"Allow",
         "Action":[ 
            "cloudformation:CreateStack*"
         ],
         "Resource":"*"
      },
      { 
         "Sid":"AllowDDBAndKMSActionsViaCFN",
         "Effect":"Allow",
         "Action":[ 
            "dynamodb:GetItem",
            "dynamodb:BatchGetItem",
            "dynamodb:PutItem",
            "dynamodb:UpdateItem",
            "dynamodb:DeleteItem",
            "kms:Encrypt",
            "kms:Decrypt",
            "kms:ReEncrypt*",
            "kms:GenerateDataKey",
            "kms:DescribeKey"
         ],
         "Resource":[
            "arn:aws:dynamodb:region:111122223333:table"
            "arn:aws:kms:region:111122223333:key/example"
         ],
         "Condition":{ 
            "StringEquals":{ 
               "aws:CalledViaFirst":[ 
                  "cloudformation.amazonaws.com"
               ]
            }
         }
      }
   ]
}

I’ll walk you through each statement in this policy. The first statement of the policy, AllowCFNActions, enables the principal to create stacks and stack sets through CloudFormation. The second statement, AllowDDBAndKMSActionsViaCFN, allows Amazon DynamoDB and AWS KMS actions for a specific CMK and table, but only under the condition that the first request made by the principal was to the CloudFormation service.

Now that I’ve described the conditions and their usage, I’ll share detailed examples in the use case that follows.

Use case: Permissions to use Athena inside your Virtual Private Cloud

Amazon Athena is a service you can use to analyze data in Amazon S3 by using standard SQL. Athena is serverless, which makes it cost-effective for operating a data lake. In this example, I define the IAM permissions for a role that manages and executes Athena queries from behind a secure network perimeter.

In this use case, my business metrics team needs an IAM role to execute queries against my data lake that is stored in Amazon S3. They’ll use the IAM role to manage the BizMetrics workgroup in Athena, which is responsible for managing exploratory queries and generating regular reports for key performance indicators within my company. Because my business metrics are internal to my organization, I want to ensure the data is only accessible inside my Amazon Virtual Private Cloud (VPC). VPCs let you create a logically-isolated section of the AWS Cloud and connect it to your private network. For more information about VPCs, see What is Amazon VPC? in the VPC documentation.

When an IAM role makes a call to Athena to execute a query inside a VPC, Athena makes subsequent calls to Amazon S3 and other services to complete the task. You can see the interaction on the following diagram.
 

Figure 1: IAM role makes a call to Athena to execute a query inside a VPC

The calls from the IAM role to Athena, and from Athena to Amazon S3, use the same role credentials. This means that the principal needs permissions for both Athena and Amazon S3 actions to accomplish the query. You can also see that the IAM role calls Athena through the VPC endpoint, rather than the public AWS endpoint. VPC endpoints allow you to communicate with AWS from your private network without a connection to the public internet. For more information about VPC endpoints, see Interface VPC Endpoints (AWS PrivateLink) in the VPC documentation.
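
As an illustration, creating an interface VPC endpoint for Athena with the AWS CLI might look like the following sketch; the VPC, subnet, and security group IDs are placeholders:

aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0example \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.athena \
    --subnet-ids subnet-0example \
    --security-group-ids sg-0example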

The subsequent calls between Athena and Amazon S3 don’t use the VPC endpoint. If I want to require that each call to Athena must use the VPC endpoint, I cannot apply the same restriction to Athena’s calls to Amazon S3. I will need to use aws:CalledVia to define distinct permissions for the initial call to Athena, and the call to Amazon S3 from Athena. I’ll apply these permissions to the IAM role I create, as well as the Amazon S3 bucket that contains the data.

Step 1: Define permissions for the IAM Role

For the purposes of this use case, my organization’s data lake has one database, revenuedata, in the bucket called examplecorp-business-data. When you follow along with these examples, you should replace the bucket and database names in the policies with your own resources.

  1. First, I create the IAM role BizMetricsQuery. I can create the role in the IAM console without any permission policies attached. If you haven’t created a role before, see Creating IAM Roles in the IAM documentation.
  2. Next, I create a managed policy that defines permissions for the BizMetricsQuery role. For more information about creating a managed policy or attaching it to the BizMetricsQuery role, see Managing IAM Policies in the IAM documentation.

    The following policy grants the permissions I need:

    
    	{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowAthenaReadActions",
                "Effect": "Allow",
                "Action": [
                    "athena:ListWorkGroups",
                    "athena:GetExecutionEngine",
                    "athena:GetExecutionEngines",
                    "athena:GetNamespace",
                    "athena:GetCatalogs",
                    "athena:GetNamespaces",
                    "athena:GetTables",
                    "athena:GetTable"
                ],
                "Resource": "*",
                "Condition":{ 
                   "StringEquals":{ 
                      "aws:SourceVpce":[ 
                         "vpce-0e880cb0a9EXAMPLE"
                      ]
                   }
                }
            },
            {
                "Sid": "AllowAthenaWorkgroupActions",
                "Effect": "Allow",
                "Action": [
                    "athena:StartQueryExecution",
                    "athena:GetQueryResults",
                    "athena:DeleteNamedQuery",
                    "athena:GetNamedQuery",
                    "athena:ListQueryExecutions",
                    "athena:StopQueryExecution",
                    "athena:GetQueryResultsStream",
                    "athena:ListNamedQueries",
                    "athena:CreateNamedQuery",
                    "athena:GetQueryExecution",
                    "athena:BatchGetNamedQuery",
                    "athena:BatchGetQueryExecution",
                    "athena:GetWorkGroup"
                ],
                "Resource": [
                    "arn:aws:athena:us-east-1:111122223333:workgroup/BizMetrics"
                ],
                "Condition":{ 
                   "StringEquals":{ 
                      "aws:SourceVpce":[ 
                         "vpce-0e880cb0a9EXAMPLE"
                      ]
                   }
                }
            },
            {
                "Sid": "AllowGlueActionsViaVPCE",
                "Effect": "Allow",
                "Action": [
                    "glue:GetDatabase",
                    "glue:GetDatabases",
                    "glue:CreateDatabase",
                    "glue:GetTables",
                    "glue:GetTable"
                ],
                "Resource": [
                    "arn:aws:glue:us-east-1:111122223333:catalog",
                    "arn:aws:glue:us-east-1:111122223333:database/default",
                    "arn:aws:glue:us-east-1:111122223333:database/revenuedata",
                    "arn:aws:glue:us-east-1:111122223333:table/revenuedata/*"
                ],
                "Condition":{ 
                   "StringEquals":{ 
                      "aws:SourceVpce":[ 
                         "vpce-0e880cb0a9EXAMPLE"
                      ]
                   }
                }
            },
            {
                "Sid": "AllowGlueActionsViaAthena",
                "Effect": "Allow",
                "Action": [
                    "glue:GetDatabase",
                    "glue:GetDatabases",
                    "glue:CreateDatabase",
                    "glue:GetTables",
                    "glue:GetTable"
                ],
                "Resource": [
                    "arn:aws:glue:us-east-1:111122223333:catalog",
                    "arn:aws:glue:us-east-1:111122223333:database/default",
                    "arn:aws:glue:us-east-1:111122223333:database/revenuedata",
                    "arn:aws:glue:us-east-1:111122223333:table/revenuedata/*"
                ],
                "Condition":{ 
                   "ForAnyValue:StringEquals":{ 
                      "aws:CalledVia":[ 
                         "athena.amazonaws.com"
                      ]
                   }
                }
            },
    
            {
                "Sid": "AllowS3ActionsViaAthena",
                "Effect": "Allow",
                "Action": [
                    "s3:GetBucketLocation",
                    "s3:GetObject",
                    "s3:ListBucket",
                    "s3:ListBucketMultipartUploads",
                    "s3:ListMultipartUploadParts",
                    "s3:AbortMultipartUpload",
                    "s3:CreateBucket",
                    "s3:PutObject"
                ],
                "Resource": [
                    "arn:aws:s3:::examplecorp-business-data/*",
                    "arn:aws:s3:::athena-examples-*"
                ],
                "Condition":{ 
                   "ForAnyValue:StringEquals":{ 
                      "aws:CalledVia":[ 
                         "athena.amazonaws.com"
                      ]
                   }
                }
            }
        ]
    }
    	

    I’ll walk you through each statement in this policy. First, the AllowAthenaReadActions and AllowAthenaWorkgroupActions statements allow the role to use Athena actions within the BizMetrics workgroup. The role calls Athena directly from inside the VPC, so there is a condition on these statements that requires that the calls must come from my VPC endpoint.

    Next, the AllowGlueActionsViaVPCE and AllowGlueActionsViaAthena statements allow the role to access the AWS Glue Data Catalog and view the tables within my data lake. AWS Glue makes it easy to catalog your data and make it searchable, queryable, and available for ETL operations. To view the database and tables in the Athena console, I need access to these AWS Glue actions. The role calls AWS Glue directly, and allows Athena to call AWS Glue, so the policy has two statements that allow both paths of communication respectively. For more information about required permissions for Athena and AWS Glue, see Fine-Grained Access to Databases and Tables in the AWS Glue Data Catalog in the Amazon Athena documentation.

    Finally, the AllowS3ActionsViaAthena statement enables Athena to call Amazon S3 on my behalf. I use the aws:CalledVia condition to require that these S3 actions are only available through Athena. In the resource section of the policy, I specify that the access is limited to the examplecorp-business-data bucket, as well as the Athena examples repository, which is required for using Athena with the AWS console.

  3. Next, I attach the policy to the BizMetricsQuery role. With this policy attached, I ensure the role can use Athena to access data in the Amazon S3 bucket without granting permissions to access the bucket directly. I also ensure that the role can only access the data through the VPC endpoint located within my private network.

Step 2: Define permissions on the S3 bucket

I need to make sure the BizMetricsQuery role is the only IAM identity that has access to the data in the examplecorp-business-data S3 bucket. To do this, I need to update the bucket policy that is attached to the bucket.

The following bucket policy grants the BizMetricsQuery role access to the data through the VPC endpoint.


{
	"Version": "2012-10-17",
	"Statement": [
    {
      "Sid": "AllowS3ActionsThroughAthena",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/BizMetricsQuery"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucketMultipartUploads",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::examplecorp-business-data/*",
        "arn:aws:s3:::examplecorp-business-data"
      ],
      "Condition": {
        "ForAnyValue:StringEquals": {
          "aws:CalledVia": [
            "athena.amazonaws.com"
          ]
        }
      }
    }
  ]
}

This policy is very similar to the statement in the role policy that grants permissions to Amazon S3. In this policy, I allow the role to perform Amazon S3 operations, but only if the aws:CalledVia context includes the Athena service. In the Principal element of the policy, I include the BizMetricsQuery role’s ARN. This is an important step because without this requirement, anyone could interact with the data in the bucket as long as they make the call using Athena. I want to require that the only role that can reach the data is BizMetricsQuery, which is also subject to the VPC rules.

With this policy attached to the examplecorp-business-data bucket, I ensure that access to the databases is only available through the BizMetricsQuery role. Because the BizMetricsQuery role can only access the data through Athena, and all calls to Athena must use my VPC endpoint, this also ensures the Amazon S3 data is only accessible from within my VPC.

Step 3: Run the Athena queries

I’ll try a simple query in Athena from inside the VPC to make sure everything works as expected. I use the BizMetricsQuery role to view the service_revenue table in the revenuedata database. If you are following along with this example, you should replace the database and table in the query with your own resource names.

  1. In the Athena console, on the Query Editor page, enter the following query in a new tab:
     
    SELECT * FROM "revenuedata"."service_revenue" limit 10;
     
    Then select Run query.
     
    Figure 2: Run an Athena test query

    When I run this test query, my role performs the StartQueryExecution action. For this request, the aws:CalledVia context is empty because the role takes the action directly. The role is allowed to use Athena actions, so it passes the first authorization check.

    To perform the query, Athena subsequently calls the Amazon S3 service. These requests have the aws:CalledVia context athena.amazonaws.com. The role has access to the Amazon S3 actions when aws:CalledVia is equal to this value, so it passes the second and final authorization check.

  2. When I try to access the data directly by using Amazon S3, I get an Access Denied error message.
     
    Figure 3: Error message attempting to access directly from Amazon S3

    In this case, the aws:CalledVia context is empty because I made the call directly to the Amazon S3 service. The policy requires me to call Amazon S3 only through Athena, so this request is denied.

  3. When I call Athena by using the same query from outside my VPC, I get the following error message.
     
    Figure 4: Error message attempting to call Athena from outside my VPC

In the Athena case, the aws:SourceVpce context is not present because I didn’t make the call using the VPC endpoint. The policy requires that calls to Athena can only happen from inside my private network, so, as in the Amazon S3 case, I get an access denied message. With this policy and example, I’ve demonstrated that I can only call Athena from within the VPC, and shown that calling the dependent services directly is not possible.
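
For reference, the same test query can also be issued from the AWS CLI. This sketch assumes the BizMetrics workgroup from the example and that the workgroup defines a query result location:

aws athena start-query-execution \
    --query-string 'SELECT * FROM "revenuedata"."service_revenue" limit 10;' \
    --query-execution-context Database=revenuedata \
    --work-group BizMetrics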

Next steps

By using the aws:CalledVia condition key, you can now define specific permissions for when AWS services make calls to other services on your behalf. This makes it easier to ensure that your company’s guidelines are followed consistently. For more information about aws:CalledVia and other IAM conditions, see AWS Global Condition Context Keys in the IAM documentation. If you have any questions, comments, or concerns, please reach out to AWS Support or our Identity and Access Management Forums.

If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Michael Switzer

Mike is the product manager for the Identity and Access Management service at AWS. He enjoys working directly with customers to identify solutions to their challenges, and using data-driven decision making to drive his work. Outside of work, Mike is an avid cyclist and outdoorsperson. He holds a master’s degree in computational mathematics from the University of Washington.

NextGen Healthcare: Build and Deployment Pipelines with AWS

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/nextgen-healthcare-build-and-deployment-pipelines-with-aws/

Owen Zacharias, Vice President of Application Delivery at NextGen Healthcare, explains to AWS Solutions Architect Andrea Sabet how his company developed a series of build and deployment pipelines using native AWS services in the highly regulated healthcare sector.

Learn how native AWS services can be used to build and deploy infrastructure and application code.

Discover how AWS resources can be rapidly created and updated as part of a CI/CD pipeline while ensuring HIPAA compliance through approved/vetted AWS Identity and Access Management (IAM) roles that AWS CloudFormation is permitted to assume.

February’s AWS Architecture Monthly magazine is all about healthcare. Check it out on Kindle Newsstand, download the PDF, or see it on Flipboard.

Check out more videos from the This Is My Architecture series.

Five Talent Collaborates with Customers Using the AWS Well-Architected Tool

Post Syndicated from Scott Sprinkel original https://aws.amazon.com/blogs/architecture/five-talent-collaborates-with-customers-using-the-aws-well-architected-tool/

Since its launch at re:Invent 2018, the AWS Well-Architected Tool (AWS W-A Tool) has provided a consistent process for documenting and measuring architecture workloads using the best practices from the AWS Well-Architected Framework. However, sharing workload reports for a collaborative work experience was time consuming.

Well-Architected Tool

The new workload sharing feature solves these issues by offering a simple way to share workloads with other AWS accounts and AWS Identity and Access Management (IAM) users. Companies can leverage workload sharing to securely and efficiently collaborate and provide feedback about architecture implementation and design without sharing confidential account details through emails and PDFs. Multiple people across multiple organizations can now review a workload simultaneously and provide feedback and input.

Five Talent, a partner on the AWS Partner Network (APN), uses the workload sharing feature for a more collaborative Well-Architected review experience. With workload sharing, the company provides its clients with improved efficiency, transparency, and security.

“The new sharing feature has increased the efficiency across client and partner teams, which decreases the average time to remediate the high risks. By sharing the reviews in the AWS Console, we can protect sensitive customer data while staying informed in real time.” – Ryan Comingdeer, Chief Talent Officer, Five Talent.

Previously, Five Talent defined workloads in its AWS account and asked its clients to submit their workload information in a custom-built webform. Five Talent then generated and sent PDF reports via email. This was problematic for several reasons: Five Talent and its clients couldn’t control who had access to the report PDFs, it had no way to expedite high risk issue (HRI) remediation, and its recommendations could easily get lost in email correspondence. The workload sharing feature solves these problems and builds customer confidence through the ability for multiple people to work on reviews collaboratively.

Five Talent added extra security by customizing its workload sharing access controls based on its customer needs. The company is using the notes sections in the workload for secure and accurate communication. It shares links to documentation that can help clients take initiative and remediate HRIs — enabling quicker remediation and more transparent review cycles. Five Talent also highlights milestones in the AWS W-A Tool, enabling customers to prioritize HRIs without sorting through lengthy PDFs and email threads, which ultimately expedites the review and revision process.

The workload sharing feature has helped Five Talent drive down HRIs without requiring direct access to the AWS account where the workload is defined. This transparency and ability to work simultaneously helps keep all teams accountable while reinforcing the principles of the Well-Architected Framework.

Sign in to the Console and check out the new Shares tab in the AWS Well-Architected Tool, or visit the workload shares documentation to learn more.

New for Amazon EFS – IAM Authorization and Access Points

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-for-amazon-efs-iam-authorization-and-access-points/

When building or migrating applications, we often need to share data across multiple compute nodes. Many applications use file APIs, and Amazon Elastic File System (EFS) makes it easy to use those applications on AWS, providing a scalable, fully managed Network File System (NFS) that you can access from other AWS services and on-premises resources.

EFS scales on demand from zero to petabytes with no disruptions, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity. By using it, you get strong file system consistency across 3 Availability Zones. EFS performance scales with the amount of data stored, with the option to provision the throughput you need.

Last year, the EFS team focused on optimizing costs with the introduction of the EFS Infrequent Access (IA) storage class, with storage prices up to 92% lower compared to EFS Standard. You can quickly start reducing your costs by setting a Lifecycle Management policy to move to EFS IA the files that haven’t been accessed for a certain amount of days.
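
For example, a sketch of enabling such a lifecycle policy with the AWS CLI might look like this; the file system ID shown is a placeholder:

$ aws efs put-lifecycle-configuration --file-system-id fs-d1188b58 --lifecycle-policies TransitionToIA=AFTER_30_DAYS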

Today, we are introducing two new features that simplify managing access, sharing data sets, and protecting your EFS file systems:

  • IAM authentication and authorization for NFS Clients, to identify clients and use IAM policies to manage client-specific permissions.
  • EFS access points, to enforce the use of an operating system user and group, optionally restricting access to a directory in the file system.

Using IAM Authentication and Authorization
In the EFS console, when creating or updating an EFS file system, I can now set up a file system policy. This is an IAM resource policy, similar to bucket policies for Amazon Simple Storage Service (S3), and can be used, for example, to disable root access, enforce read-only access, or enforce in-transit encryption for all clients.

Identity-based policies, such as those used by IAM users, groups, or roles, can override these default permissions. These new features work on top of EFS’s current network-based access using security groups.

I select the option to disable root access by default, click on Set policy, and then select the JSON tab. Here, I can review the policy generated based on my settings, or create a more advanced policy, for example to grant permissions to a different AWS account or a specific IAM role.

The following actions can be used in IAM policies to manage access permissions for NFS clients:

  • ClientMount to give permission to mount a file system with read-only access
  • ClientWrite to be able to write to the file system
  • ClientRootAccess to access files as root

I look at the policy JSON. I see that I can mount and read (ClientMount) the file system, and I can write (ClientWrite) in the file system, but since I selected the option to disable root access, I don’t have ClientRootAccess permissions.
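
Based on those settings, here is a rough sketch of a file system policy with that effect; the exact statement that the console generates may differ:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "AWS": "*" },
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientWrite"
            ],
            "Resource": "arn:aws:elasticfilesystem:us-east-2:123412341234:file-system/fs-d1188b58"
        }
    ]
}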

Similarly, I can attach a policy to an IAM user or role to give specific permissions. For example, I create an IAM role to give full access to this file system (including root access) with this policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientWrite",
                "elasticfilesystem:ClientRootAccess"
            ],
            "Resource": "arn:aws:elasticfilesystem:us-east-2:123412341234:file-system/fs-d1188b58"
        }
    ]
}

I start an Amazon Elastic Compute Cloud (EC2) instance in the same Amazon Virtual Private Cloud as the EFS file system, using Amazon Linux 2 and a security group that can connect to the file system. The EC2 instance is using the IAM role I just created.

The open-source efs-utils package is required to connect a client using IAM authentication, in-transit encryption, or both. Normally, on Amazon Linux 2, I would install efs-utils using yum, but the new version is still rolling out, so I am following the instructions to build the package from source in this repository. I’ll update this blog post when the updated package is available.

To mount the EFS file system, I use the mount command. To leverage in-transit encryption, I add the tls option. I am not using IAM authentication here, so the permissions I specified for the “*” principal in my file system policy apply to this connection.

$ sudo mkdir /mnt/shared
$ sudo mount -t efs -o tls fs-d1188b58 /mnt/shared

My file system policy disables root access by default, so I can’t create a new file as root.

$ sudo touch /mnt/shared/newfile
touch: cannot touch ‘/mnt/shared/newfile’: Permission denied

I now use IAM authentication adding the iam option to the mount command (tls is required for IAM authentication to work).

$ sudo mount -t efs -o iam,tls fs-d1188b58 /mnt/shared

When I use this mount option, the IAM role from my EC2 instance profile is used to connect, along with the permissions attached to that role, including root access:

$ sudo touch /mnt/shared/newfile
$ ls -la /mnt/shared/newfile
-rw-r--r-- 1 root root 0 Jan  8 09:52 /mnt/shared/newfile

Here I used the IAM role to have root access. Other common use cases are to enforce in-transit encryption (using the aws:SecureTransport condition key) or create different roles for clients needing write or read-only access.
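
For example, a deny statement along the following lines (a sketch, not the exact console output) can be added to the file system policy to reject any client connection that is not encrypted in transit:

{
    "Effect": "Deny",
    "Principal": { "AWS": "*" },
    "Action": "*",
    "Resource": "arn:aws:elasticfilesystem:us-east-2:123412341234:file-system/fs-d1188b58",
    "Condition": {
        "Bool": {
            "aws:SecureTransport": "false"
        }
    }
}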

EFS IAM permission checks are logged by AWS CloudTrail to audit client access to your file system. For example, when a client mounts a file system, a NewClientConnection event is shown in my CloudTrail console.

Using EFS Access Points
EFS access points allow you to easily manage application access to NFS environments, specifying a POSIX user and group to use when accessing the file system, and restricting access to a directory within a file system.

Use cases that can benefit from EFS access points include:

  • Container-based environments, where developers build and deploy their own containers (you can also see this blog post for using EFS for container storage).
  • Data science applications, that require read-only access to production data.
  • Sharing a specific directory in your file system with other AWS accounts.

In the EFS console, I create two access points for my file system, each using a different POSIX user and group:

  • /data – where I am sharing some data that must be read and updated by multiple clients.
  • /config – where I share some configuration files that must not be updated by clients using the /data access point.

I used file permissions 755 for both access points. That means that I am giving read and execute access to everyone and write access to the owner of the directory only. Permissions here are used when creating the directory. Within the directory, permissions are under full control of the user.
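
Access points can also be created from the AWS CLI. As a sketch (I created mine in the console, so the values below mirror the /data access point described above and are illustrative):

$ aws efs create-access-point \
    --file-system-id fs-d1188b58 \
    --posix-user Uid=1001,Gid=1001 \
    --root-directory 'Path=/data,CreationInfo={OwnerUid=1001,OwnerGid=1001,Permissions=755}'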

I mount the /data access point adding the accesspoint option to the mount command:

$ sudo mount -t efs -o tls,accesspoint=fsap-0204ce67a2208742e fs-d1188b58 /mnt/shared

I can now create a file, because I am not doing that as root, but I am automatically using the user and group ID of the access point:

$ sudo touch /mnt/shared/datafile
$ ls -la /mnt/shared/datafile
-rw-r--r-- 1 1001 1001 0 Jan  8 09:58 /mnt/shared/datafile

I mount the file system again, without specifying an access point. I see that datafile was created in the /data directory, as expected considering the access point configuration. When using the access point, I was unable to access any files that were in the root or other directories of my EFS file system.

$ sudo mount -t efs -o tls /mnt/shared/
$ ls -la /mnt/shared/data/datafile 
-rw-r--r-- 1 1001 1001 0 Jan  8 09:58 /mnt/shared/data/datafile

To use IAM authentication with access points, I add the iam option:

$ sudo mount -t efs -o iam,tls,accesspoint=fsap-0204ce67a2208742e fs-d1188b58 /mnt/shared

I can restrict an IAM role to a specific access point by adding a Condition on the AccessPointArn to the policy:

"Condition": {
    "StringEquals": {
        "elasticfilesystem:AccessPointArn" : "arn:aws:elasticfilesystem:us-east-2:123412341234:access-point/fsap-0204ce67a2208742e"
    }
}

Using IAM authentication and EFS access points together simplifies securely sharing data for container-based architectures and multi-tenant applications, because it ensures that every application automatically gets the right operating system user and group assigned to it, optionally limiting access to a specific directory, enforcing in-transit encryption, or giving read-only access to the file system.

Available Now
IAM authorization for NFS clients and EFS access points are available in all regions where EFS is offered, as described in the AWS Region Table. There is no additional cost for using them. You can learn more about using EFS with IAM and access points in the documentation.

It’s now easier to create scalable architectures sharing data and configurations. Let me know what you are going use these new features for!

Danilo

Orchestrating a security incident response with AWS Step Functions

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/orchestrating-a-security-incident-response-with-aws-step-functions/

In this post I will show how to implement the callback pattern of an AWS Step Functions Standard Workflow. This is used to add a manual approval step into an automated security incident response framework. The framework could be extended to remediate automatically, according to the individual policy actions defined. For example, applying alternative actions, or restricting actions to specific ARNs.

The application uses Amazon EventBridge to trigger a Step Functions Standard Workflow on an IAM policy creation event. The workflow compares the policy action against a customizable list of restricted actions. It uses AWS Lambda and Step Functions to roll back the policy temporarily, then notify an administrator and wait for them to approve or deny.

Figure 1: High-level architecture diagram.

Important: the application uses various AWS services, and there are costs associated with these services after the Free Tier usage. Please see the AWS pricing page for details.

You can deploy this application from the AWS Serverless Application Repository. You then create a new IAM Policy to trigger the rule and run the application.

Deploy the application from the Serverless Application Repository

  1. Find the “Automated-IAM-policy-alerts-and-approvals” app in the Serverless Application Repository.
  2. Complete the required application settings
    • Application name: an identifiable name for the application.
    • EmailAddress: an administrator’s email address for receiving approval requests.
    • restrictedActions: the IAM Policy actions you want to restrict.

      Figure 2 Deployment Fields

  3. Choose Deploy.

Once the deployment process is completed, 21 new resources are created. This includes:

  • Five Lambda functions that contain the business logic.
  • An Amazon EventBridge rule.
  • An Amazon SNS topic and subscription.
  • An Amazon API Gateway REST API with two resources.
  • An AWS Step Functions state machine.

To receive Amazon SNS notifications as the application administrator, you must confirm the subscription to the SNS topic. To do this, choose the Confirm subscription link in the verification email that was sent to you when deploying the application.

EventBridge receives new events in the default event bus. Here, the event is compared with associated rules. Each rule has an event pattern defined, which acts as a filter to match inbound events to their corresponding rules. In this application, a matching event rule triggers an AWS Step Functions execution, passing in the event payload from the policy creation event.
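
As a rough sketch (the rule deployed by the application may be more specific), an event pattern that matches IAM CreatePolicy calls recorded by CloudTrail could look like this:

{
    "source": ["aws.iam"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["iam.amazonaws.com"],
        "eventName": ["CreatePolicy"]
    }
}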

Running the application

Trigger the application by creating a policy either via the AWS Management Console or with the AWS Command Line Interface.

Using the AWS CLI

First install and configure the AWS CLI, then run the following command:

aws iam create-policy --policy-name my-bad-policy1234 --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketObjectLockConfiguration",
                "s3:DeleteObjectVersion",
                "s3:DeleteBucket"
            ],
            "Resource": "*"
        }
    ]
}'

Using the AWS Management Console

  1. Go to Services > Identity and Access Management (IAM) dashboard.
  2. Choose Create policy.
  3. Choose the JSON tab.
  4. Paste the following JSON:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "s3:GetBucketObjectLockConfiguration",
                    "s3:DeleteObjectVersion",
                    "s3:DeleteBucket"
                ],
                "Resource": "*"
            }
        ]
    }
  5. Choose Review policy.
  6. In the Name field, enter my-bad-policy.
  7. Choose Create policy.

Either of these methods creates a policy with the permissions required to delete Amazon S3 buckets. Deleting an S3 bucket is one of the restricted actions set when the application is deployed:

Figure 3 default restricted actions

This sends the event to EventBridge, which then triggers the Step Functions state machine. The Step Functions state machine holds each state object in the workflow. Some of the state objects use the Lambda functions created during deployment to process data.

Others use Amazon States Language (ASL) enabling the application to conditionally branch, wait, and transition to the next state. Using a state machine decouples the business logic from the compute functionality.

After triggering the application, go to the Step Functions dashboard and choose the newly created state machine. Choose the current running state machine from the executions table.

Figure 4 State machine executions.

You see a visual representation of the current execution, with the workflow paused at the AskUser state.

Figure 5 Workflow Paused

These are the states in the workflow:

ModifyData
State Type: Pass
Re-structures the input data into an object that is passed throughout the workflow.

ValidatePolicy
State type: Task. Services: AWS Lambda
Invokes the ValidatePolicy Lambda function that checks the new policy document against the restricted actions.

ChooseAction
State type: Choice
Branches depending on input from ValidatePolicy step.

TempRemove
State type: Task. Service: AWS Lambda
Creates a new default version of the policy with only permissions for Amazon CloudWatch Logs and deletes the previously created policy version.

AskUser
State type: Task. Service: AWS Lambda
Sends an approval email to the user via SNS, with the task token that initiates the callback pattern.

UsersChoice
State type: Choice
Branches based on the user action to approve or deny (a sketch of this state appears after this list).

Denied
State type: Pass
Ends the execution with no further action.

Approved
State type: Task. Service: AWS Lambda
Restores the initial policy document by creating it as a new version.

AllowWithNotification
State type: Task. Services: AWS Lambda
With no restricted actions detected, the user is still notified of change (via an email from SNS) before execution ends.
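
For reference, a minimal sketch of the UsersChoice state in Amazon States Language is shown below. The $.taskresult.action path follows from the ResultPath and sendTaskSuccess output shown later in this post, while the literal values allow and deny are assumptions about the output format:

"UsersChoice": {
    "Type": "Choice",
    "Choices": [
        {
            "Variable": "$.taskresult.action",
            "StringEquals": "allow",
            "Next": "Approved"
        },
        {
            "Variable": "$.taskresult.action",
            "StringEquals": "deny",
            "Next": "Denied"
        }
    ],
    "Default": "Denied"
}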

The callback pattern

An important feature of this application is the ability for an administrator to approve or deny a new policy. The Step Functions callback pattern makes this possible.

The callback pattern allows a workflow to pause during a task and wait for an external process to return a task token. The task token is generated when the task starts. When the AskUser function is invoked, it is passed a task token. The task token is published to the SNS topic along with the API resources for approval and denial. These API resources are created when the application is first deployed.

When the administrator clicks the approve or deny link, the token is passed with the API request to the receiveUser Lambda function. This Lambda function uses the incoming task token to resume the AskUser state.

The lifecycle of the task token as it transitions through each service is shown below:

Figure 6 Task token lifecycle

  1. To invoke this callback pattern, the askUser state definition is declared using the .waitForTaskToken identifier, with the task token passed into the Lambda function as a payload parameter:
    "AskUser":{
     "Type": "Task",
     "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
     "Parameters":{  
     "FunctionName": "${AskUser}",
     "Payload":{  
     "token.$":"$$.Task.Token"
      }
     },
      "ResultPath":"$.taskresult",
      "Next": "usersChoice"
      },
  2. The askUser Lambda function can then access this token within the event object:
    exports.handler = async (event, context) => {
        let approveLink = `${process.env.APIAllowEndpoint}?token=${JSON.stringify(event.token)}`
        let denyLink = `${process.env.APIDenyEndpoint}?token=${JSON.stringify(event.token)}`
    //code continues
  3. The task token is published to an SNS topic along with the message text parameter:
    let params = {
        TopicArn: process.env.Topic,
        Message: `A restricted Policy change has been detected Approve:${approveLink} Or Deny:${denyLink}`
    }
    let res = await sns.publish(params).promise()
    //code continues
  4. The administrator receives an email with two links, one to approve and one to deny. The task token is appended to these links as a request query string parameter named token:

    Figure 7 Approve / deny email.

  5. Using the Amazon API Gateway proxy integration, the task token is passed directly to the receiveUser Lambda function from the API resource, and is accessible from within the function code as part of the event’s queryStringParameters object:
    exports.handler = async(event, context) => {
    //some code
        let taskToken = event.queryStringParameters.token
    //more code
    
  6. The token is then sent back to the askUser state via an API call from within the receiveUser Lambda function. This API call also defines the next course of action for the workflow to take.
    //some code 
    let params = {
            output: JSON.stringify({"action":NextAction}),
            taskToken: taskTokenClean
        }
    let res = await stepfunctions.sendTaskSuccess(params).promise()
    //code continues
    

Each Step Functions execution can last for up to a year, allowing for long wait periods for the administrator to take action. There is no extra cost for a longer wait time as you pay for the number of state transitions, and not for the idle wait time.

Conclusion

Using EventBridge to route IAM policy creation events directly to AWS Step Functions reduces the need for unnecessary communication layers. It helps promote good use of compute resources, ensuring Lambda is used to transform data, and not transport or orchestrate.

Using Step Functions to invoke services sequentially has two important benefits for this application. First, you can identify the use of restricted policies quickly and automatically. Also, these policies can be removed and held in a ‘pending’ state until approved.

Step Functions Standard Workflow’s callback pattern can create a robust orchestration layer that allows administrators to review each change before approving or denying.

For the full code base see the GitHub repository https://github.com/bls20AWS/AutomatedPolicyOrchestrator.

For more information on other Step Functions patterns, see our documentation on integration patterns.

Identify Unintended Resource Access with AWS Identity and Access Management (IAM) Access Analyzer

Post Syndicated from Brandon West original https://aws.amazon.com/blogs/aws/identify-unintended-resource-access-with-aws-identity-and-access-management-iam-access-analyzer/

Today I get to share my favorite kind of announcement. It’s the sort of thing that will improve security for just about everyone that builds on AWS, it can be turned on with almost no configuration, and it costs nothing to use. We’re launching a new, first-of-its-kind capability called AWS Identity and Access Management (IAM) Access Analyzer. IAM Access Analyzer mathematically analyzes access control policies attached to resources and determines which resources can be accessed publicly or from other accounts. It continuously monitors all policies for Amazon Simple Storage Service (S3) buckets, IAM roles, AWS Key Management Service (KMS) keys, AWS Lambda functions, and Amazon Simple Queue Service (SQS) queues. With IAM Access Analyzer, you have visibility into the aggregate impact of your access controls, so you can be confident your resources are protected from unintended access from outside of your account.

Let’s look at a couple of examples. An IAM Access Analyzer finding might indicate an S3 bucket named my-bucket-1 is accessible to an AWS account with the ID 123456789012 when originating from the source IP 11.0.0.0/15. Or IAM Access Analyzer may detect a KMS key policy that allows users from another account to delete the key, identifying a data loss risk you can fix by adjusting the policy. If the findings show intentional access paths, they can be archived.

So how does it work? Using the kind of math that shows up on unexpected final exams in my nightmares, IAM Access Analyzer evaluates your policies to determine how a given resource can be accessed. Critically, this analysis is not based on historical events or pattern matching or brute force tests. Instead, IAM Access Analyzer understands your policies semantically. All possible access paths are verified by mathematical proofs, and thousands of policies can be analyzed in a few seconds. This is done using a type of cognitive science called automated reasoning. IAM Access Analyzer is the first service powered by automated reasoning available to builders everywhere, offering functionality unique to AWS. To start learning about automated reasoning, I highly recommend this short video explainer. If you are interested in diving a bit deeper, check out this re:Invent talk on automated reasoning from Byron Cook, Director of the AWS Automated Reasoning Group. And if you’re really interested in understanding the methodology, make yourself a nice cup of chamomile tea, grab a blanket, and get cozy with a copy of Semantic-based Automated Reasoning for AWS Access Policies using SMT.

Turning on IAM Access Analyzer is way less stressful than an unexpected nightmare final exam. There’s just one step. From the IAM Console, select Access analyzer from the menu on the left, then click Create analyzer.

Creating an Access Analyzer

Analyzers generate findings in the account from which they are created. Analyzers also work within the region defined when they are created, so create one in each region for which you’d like to see findings.
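
If you prefer the command line, an account analyzer can be created and its findings listed with commands along these lines (the analyzer name, Region, and account ID are placeholders):

$ aws accessanalyzer create-analyzer --analyzer-name my-analyzer --type ACCOUNT
$ aws accessanalyzer list-findings --analyzer-arn arn:aws:access-analyzer:us-east-2:123456789012:analyzer/my-analyzer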

Once our analyzer is created, findings that show accessible resources appear in the Console. My account has a few findings that are worth looking into, such as KMS keys and IAM roles that are accessible by other accounts and federated users.

Viewing Access Analyzer Findings

I’m going to click on the first finding and take a look at the access policy for this KMS key.

An Access Analyzer Finding

From here we can see the open access paths and details about the resources and principals involved. I went over to the KMS console and confirmed that this is intended access, so I archived this particular finding.

All IAM Access Analyzer findings are visible in the IAM Console, and can also be accessed using the IAM Access Analyzer API. Findings related to S3 buckets can be viewed directly in the S3 Console. Bucket policies can then be updated right in the S3 Console, closing the open access pathway.

An Access Analyzer finding in S3

You can also see high-priority findings generated by IAM Access Analyzer in AWS Security Hub, ensuring a comprehensive, single source of truth for your compliance and security-focused team members. IAM Access Analyzer also integrates with CloudWatch Events, making it easy to automatically respond to or send alerts regarding findings through the use of custom rules.

Now that you’ve seen how IAM Access Analyzer provides a comprehensive overview of cloud resource access, you should probably head over to IAM and turn it on. One of the great advantages of building in the cloud is that the infrastructure and tools continue to get stronger over time and IAM Access Analyzer is a great example. Did I mention that it’s free? Fire it up, then send me a tweet sharing some of the interesting things you find. As always, happy building!

— Brandon

Rely on employee attributes from your corporate directory to create fine-grained permissions in AWS

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/rely-employee-attributes-from-corporate-directory-create-fine-grained-permissions-aws/

In my earlier post Simplify granting access to your AWS resources by using tags on AWS IAM users and roles, I explained how to implement attribute-based access control (ABAC) in AWS to simplify permissions management at scale. In that scenario, I talked about relying on attributes on your IAM users and roles for access control in AWS. But more often, customers manage workforce user identities with an identity provider (IdP) and want to use identity attributes from their IdP for fine-grained permissions in AWS. In this post I introduce a new capability that enables you to do just that.

In AWS, you can configure your IdP to allow workforce users federated access to AWS resources using credentials from your corporate directory. Along with user credentials, your directory also stores user attributes such as cost center, department, and email address. Now you can configure your IdP to pass in user attributes as tags in federated AWS sessions. These are called session tags. You can then control access to AWS resources based on these session tags. Moreover, when user attributes change or new users are added to your directory, permissions automatically apply based on these attributes. For example, developers can federate into AWS using an IAM role, but can only access resources specific to their project. This is because you define permissions that require the project attribute from their IdP to match the project tag on AWS resources. Additionally, AWS logs these attributes in AWS CloudTrail, enabling security administrators to track the user identity for a given role session.

In this post, I introduce session tags and walk you through an example of how to use session tags for ABAC and tracking user activity.

What are session tags?

Session tags are attributes passed in the AWS session. You can use session tags for access control in IAM policies and for monitoring. These tags are not stored in AWS and are valid only for the duration of the session. You define session tags just like tags in AWS—consisting of a customer-defined key and an optional value.

How do you pass session tags in an AWS session?

One of the most widely used mechanisms for requesting a session in AWS is by assuming an IAM role. For user identities stored in an external directory, you can configure your SAML IdP in IAM to allow your users federated access to AWS using IAM roles. To understand how to set up SAML federation using an IdP, read AWS Federated Authentication with Active Directory Federation Services (ADFS). If you’re using IAM users, you can also request a session in AWS using the AssumeRole and GetFederationToken APIs, or the AssumeRoleWithWebIdentity API for applications that require access to AWS resources.

For session tags, you can use all of the above-mentioned APIs to pass tags into your AWS session based on your use case. For details on how to use these APIs to pass session tags, please visit Tags in AWS Sessions.
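
For example, a CLI call that passes session tags when assuming a role might look like the following sketch; the role ARN and tag values are illustrative, and the role’s trust policy must allow sts:TagSession, as described below:

$ aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/MyProjectResources \
    --role-session-name bob-session \
    --tags Key=project,Value=Automation Key=jobfunction,Value=SystemsEngineer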

What permissions do I need to use session tags?

To perform any action in AWS, developers need permissions. For example, to assume a role, your developers need sts:AssumeRole permission. Similarly with session tags, we’re introducing a new action, sts:TagSession, that is required to pass session tags in the session. Additionally, you can require and control session tags using existing AWS conditions:

Action: sts:TagSession
Use case: Required to pass attributes as session tags when using the AssumeRole, AssumeRoleWithSAML, AssumeRoleWithWebIdentity, or GetFederationToken API.
Where to add: The role’s trust policy or the IAM user’s permissions policy, based on the API you are using to pass session tags.

Condition key: aws:RequestTag
Use case: Use this condition to require specific tags in the session.
Actions that support the condition key: sts:TagSession

Condition key: aws:TagKeys
Use case: Use this condition key to control the tag keys that are allowed in the session.
Actions that support the condition key: sts:TagSession

Condition key: aws:PrincipalTag
Use case: Use this condition in IAM policies to compare tags on AWS resources.
Actions that support the condition key: AWS global condition keys (all actions across all services support this condition key)

Note: The tables above explain only the additional use cases that the keys now support. Support for existing use cases, such as IAM users and roles, remains unchanged. For details, please visit AWS Global Condition Keys.

Now, I’ll show you how to create fine-grained permissions based on user attributes from your directory and how permissions automatically apply based on attributes when employees switch projects within your organization.

Example: Grant employees access to their project resources in AWS based on their job function

Consider a scenario where your organization has deployed Amazon EC2 and Amazon RDS instances in your AWS account for your company’s production web applications. Your systems engineers manage the EC2 instances and database engineers manage the RDS instances. They both access AWS by federating into your AWS account from a SAML IdP. Your organization’s security policy requires employees to have access to manage only the resources related to their job function and the project they work on.

To meet these requirements, your cloud administrator, Michelle, implements attribute-based access control (ABAC) using the jobfunction and project attributes as session tags by following three steps:

  1. Michelle tags all existing EC2 and RDS instances with the corresponding project attribute.
  2. She creates a MyProjectResources IAM role and an IAM permission policy for this role such that employees can access resources with their jobfunction and project tags.
  3. She then configures your SAML IdP to pass the jobfunction and project attributes in the federated session when employees federate into AWS using the MyProjectResources role.

Let’s have a look at these steps in detail.

Step 1: Tag all the project resources

Michelle tags all the project resources with the appropriate project tag. This is important since she wants to create permission rules based on this tag to implement ABAC. To learn how to tag resources in EC2 and RDS, read tagging your Amazon EC2 resources and tagging Amazon RDS resources.
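
As a sketch of what this tagging could look like from the CLI (the instance ID and database ARN are placeholders):

$ aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=project,Value=Automation
$ aws rds add-tags-to-resource \
    --resource-name arn:aws:rds:us-east-2:123456789012:db:my-database \
    --tags Key=project,Value=Automation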

Step 2: Create an IAM role with permissions based on attributes

Next, Michelle creates an IAM role called MyProjectResources using the AWS Management Console or CLI. This is the role that your systems engineers and database engineers will assume when they federate into AWS to access and manage the EC2 and RDS instances respectively. To grant this role permissions, Michelle creates the following IAM policy and attaches it to the MyProjectResources role.

IAM Permissions Policy


{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Action": "rds:DescribeDBInstances",
			"Resource": "*"
		},
		{
			"Effect": "Allow",
			"Action": [
				"rds:RebootDBInstance",
				"rds:StartDBInstance",
				"rds:StopDBInstance"
			],
			"Resource": "*",
			"Condition": {
				"StringEquals": {
					"aws:PrincipalTag/jobfunction": "DatabaseEngineer",
					"rds:db-tag/project": "${aws:PrincipalTag/project}"
				}
			}
		},
		{
			"Effect": "Allow",
			"Action": "ec2:DescribeInstances",
			"Resource": "*"
		},
		{
			"Effect": "Allow",
			"Action": [
				"ec2:StartInstances",
				"ec2:StopInstances",
				"ec2:RebootInstances",
				"ec2:TerminateInstances"
			],
			"Resource": "*",
			"Condition": {
				"StringEquals": {
					"aws:PrincipalTag/jobfunction": "SystemsEngineer",
					"ec2:ResourceTag/project": "${aws:PrincipalTag/project}"
				}
			}
		}
	]
}

In the policy above, Michelle allows specific actions related to EC2 and RDS that the systems engineers and database engineers need to manage their project instances. In the condition element of the policy statements, Michelle adds a condition based on the jobfunction and project attributes to ensure engineers can access only the instances which belong to their jobfunction and have a matching project tag.

To ensure your systems engineers and database engineers can assume this role when they federate into AWS from your IdP, Michelle modifies the role’s trust policy to trust your SAML IdP as shown in the policy statement below. Since we also want to include session tags when engineers federate in, Michelle adds the new action sts:TagSession in the policy statement as shown below. She also adds a condition that requires the jobfunction and project attributes to be included as session tags when engineers assume this role.

Role Trust Policy


{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Principal": {
				"Federated": "arn:aws:iam::999999999999:saml-provider/ExampleCorpProvider"
			},
			"Action": [
				"sts:AssumeRoleWithSAML",
				"sts:TagSession"
			],
			"Condition": {
				"StringEquals": {
					"SAML:aud": "https://signin.aws.amazon.com/saml"
				},
				"StringLike": {
					"aws:RequestTag/project": "*",
					"aws:RequestTag/jobfunction": [
						"SystemsEngineer",
						"DatabaseEngineer"
					]
				}
			}
		}
	]
}

Step 3: Configuring your SAML IdP to pass the jobfunction and project attributes as session tags

Once Michelle creates the role and permissions policy in AWS, she configures her SAML IdP to include the jobfunction and project attributes as session tags in the SAML assertion when engineers federate into AWS using this role.

To pass attributes as session tags in the federated session, the SAML assertion must contain the attributes with the following prefix:

https://aws.amazon.com/SAML/Attributes/PrincipalTag

The example given below shows a part of the SAML assertion generated from my IdP with two attributes (project:Automation and jobfunction:SystemsEngineer) that we want to pass as session tags.


<Attribute Name="https://aws.amazon.com/SAML/Attributes/PrincipalTag:project">
		< AttributeValue >Automation<AttributeValue>
</ Attribute>
<Attribute Name="https://aws.amazon.com/SAML/Attributes/PrincipalTag:jobfunction">
		< AttributeValue >SystemsEngineer<AttributeValue>
</ Attribute>

Note: This sample only contains the new properties in the SAML assertion. There are additional required fields in the SAML assertion that must be present to successfully federate into AWS. To learn more about creating SAML assertions with session tags, visit configuring SAML assertions for the authentication response.

AWS identity partners such as Ping Identity, OneLogin, Auth0, ForgeRock, IBM, Okta, and RSA have validated the end-to-end experience for this new capability with their identity solutions, and we look forward to additional partners validating this capability. To learn more about how to use these identity providers for configuring session tags, please visit integrating third-party SAML solution providers with AWS. If you are using Active Directory Federation Services (ADFS) for SAML federation with AWS, then please visit Configuring ADFS to start using session tags for attribute-based access control.

Now, when your systems engineers and database engineers federate into AWS using the MyProjectResources role, they only get access to their project resources based on the project and jobfunction attributes passed in their federated session. Session tags enabled Michelle to define unique permissions based on user attributes without having to create and manage multiple roles and policies. This helps simplify permissions management in her company.

Permissions automatically apply when employees change projects

Consider the same example with a scenario where your systems engineer, Bob, switches from the automation project to the integration project. Due to this switch, Michelle sets Bob’s project attribute in the IdP to integration. Now, the next time Bob federates into AWS he automatically has access to resources in integration project. Using session tags, permissions automatically apply when you update attributes or create new AWS resources with appropriate attributes without requiring any permissions updates in AWS.

Track user identity using session tags

When developers federate into AWS with session tags, AWS CloudTrail logs these tags to make it easier for security administrators to track the user identity of the session. To view session tags in CloudTrail, your administrator Michelle looks for the AssumeRoleWithSAML event in the eventName filter of CloudTrail. In the example below, Michelle has configured the SAML IdP to pass three session tags: project, jobfunction, and userID. When developers federate into your account, Michelle views the AssumeRoleWithSAML event in CloudTrail to track the user identity of the session using the session tags project, jobfunction, and userID as shown below:
 

Figure 1: Search for the logged events

Note: You can use session tags in conjunction with the instructions to track account activity to its origin using AWS CloudTrail to trace the identity of the session.
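
As a rough, abbreviated sketch (the real event contains many more fields and the exact layout may differ), the relevant portion of such an AssumeRoleWithSAML event could look like this, with the userID value being purely illustrative:

{
    "eventName": "AssumeRoleWithSAML",
    "requestParameters": {
        "principalTags": {
            "project": "Automation",
            "jobfunction": "SystemsEngineer",
            "userID": "bob"
        }
    }
}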

Summary

You can use session tags to rely on your employee attributes from your corporate directory to create fine-grained permissions at scale in AWS and simplify your permissions management workflows. To learn more about session tags, please visit Tags in AWS Sessions.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon IAM forum.

Want more AWS Security news? Follow us on Twitter.

Sulay Shah

Sulay is a Senior Product Manager for the Identity and Access Management service at AWS. He strongly believes in the customer-first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from North Carolina State University.

Use attribute-based access control with AD FS to simplify IAM permissions management

Post Syndicated from Louay Shaat original https://aws.amazon.com/blogs/security/attribute-based-access-control-ad-fs-simplify-iam-permissions-management/

AWS Identity and Access Management (IAM) allows customers to provide granular access control to resources in AWS. One approach to granting access to resources is to use attribute-based access control (ABAC) to centrally govern and manage access to your AWS resources across accounts. Using ABAC enables you to scale your authorization strategy by granting access to groups of resources, as specified by tags, instead of managing long lists of individual resources. The new ability to include tags in sessions—combined with the ability to tag IAM users and roles—means that you can now incorporate user attributes from your AD FS environment as part of your tagging and authorization strategy.

In other words, you can use ABAC to simplify permissions management at scale. This means administrators can create a reusable policy that applies permissions based on the attributes of the IAM principal (such as tags). For example, as an administrator you can use a single IAM policy that grants developers in your organization access to AWS resources that match the developers’ project tag. As the team of developers adds resources to projects, permissions are automatically applied based on attributes (tags, in this case). As a result, each new resource that gets added requires no update to the IAM permissions policy.

In this blog post, I walk you through how to enable AD FS to pass tags as part of the SAML 2.0 token, so that you can enable ABAC for your AWS resources.

AD FS federated authentication process

The following diagram describes the process that a user follows to authenticate to AWS by using Active Directory and AD FS as the identity provider:
 

Figure 1: AD FS federation to AWS

  1. A corporate user accesses the corporate Active Directory Federation Services (AD FS) portal sign-in page and provides their Active Directory authentication credentials.
  2. AD FS authenticates the user against Active Directory.
  3. Active Directory returns the user’s information, including Active Directory group membership information.
  4. AD FS dynamically builds a list of Amazon Resource Names (ARNs) for IAM Roles in one or more AWS accounts; these mappings are defined in advance by the administrator and rely on user attributes and Active Directory group memberships.
  5. AD FS sends a signed SAML 2.0 token to the user’s browser with a redirect to post the token to AWS Security Token Service (STS), including the attributes that you define in the claim rules.
  6. Temporary credentials are returned using STS AssumeRoleWithSAML.
  7. The user is authenticated and provided access to the AWS Management Console.

Prerequisites

Attribute-based access control in AWS relies on the use of tags for access-control decisions. Therefore, it’s important to have in place a tagging strategy for your resources. Please see AWS Tagging Strategies.

Implementing ABAC enables organizations to enhance the use of tags from an operational and billing construct to a security construct. Ensuring that tagging is enforced and secure is essential to an enterprise-wide strategy.

For more information about enforcing a tagging policy, see the blog post Enforce Centralized Tag Compliance Using AWS Service Catalog, DynamoDB, Lambda, and CloudWatch Events.

AD FS session tagging setup

After you’ve set up AD FS federation to AWS, you can enable additional attributes to be sent as part of the SAML token. For information about how to enable AD FS, see the blog post AWS Federated Authentication with Active Directory Federation Services (AD FS).

Follow these steps to send standard Active Directory attributes to AWS in the SAML token:

  1. Open Server Manager, choose Tools, then choose AD FS Management.
  2. Under Relying Party Trusts, choose AWS.
  3. Choose Edit Claim Issuance Policy, choose Add Rule, choose Send LDAP Attributes as Claims, then choose Next.
  4. On the Edit Rule page, add the requested details.
    For example, to create a Department Attribute claim rule, add the following details:

    • Claim rule name: Department Attribute
    • Attribute Store: Active Directory
    • LDAP Attribute: Department
    • Outgoing Claim Type:
      https://aws.amazon.com/SAML/Attributes/PrincipalTag:<department> (where <department> is the tag that will be passed in the session)

     

    Figure 2: Claim Rule

  5. Repeat the previous step for each attribute you want to send, modifying the details as necessary. For more information about how to configure claim rules, see Configuring Claim Rules in the Windows Server 2012 AD FS Deployment Guide.

Send custom attributes to AWS as part of a federated session

Session tags in AWS can be derived from Active Directory custom attributes as well as standard attributes, as demonstrated in the example above. For more information on custom attributes, please see How to Create a Custom Attribute in Active Directory.

To ensure that your setup is correct, you should confirm that AD FS is configured correctly:

  1. Perform an identity provider (IdP) initiated authentication. Go to the following URL: https://<your.domain.name>/adfs/ls/idpinitiatedsignon.aspx, where <your.domain.name> is the DNS name of your AD FS server.
  2. Select Sign in to one of the following sites, then choose AWS from the dropdown:
     
    Figure 3: Choose AWS

  3. Choose Sign in, then enter your credentials.
  4. After you’ve successfully logged into AWS, navigate to AWS CloudTrail events within 15 minutes of your login event.
  5. Filter on AssumeRoleWithSAML, then locate your login event and look under principalTags to ensure you see the tags that you configured.
     
    Figure 4: CloudTrail – AssumeRoleWithSAML Event

    After you see the tags were sent, you’re ready to use the tags to build your IAM policies.

The following is an example policy that uses multiple tags.

Example: grant IAM users access to your AWS resources by using tags

In this example I will assume that two tags will be passed from AD FS to enable you to build your ABAC tagging strategy: Project and Department. This example assumes that you have multiple teams of developers who need permissions to start and stop specific Amazon Elastic Compute Cloud (Amazon EC2) instances, based on their project allocation. Because these EC2 instances are part of Auto Scaling groups, they come and go depending on scaling conditions; it would therefore be impractical to write policies that refer to lists of individual EC2 instances, and writing policies against the tags on the EC2 instances is a more manageable approach. In the following policy, I specify the EC2 actions ec2:StartInstances and ec2:StopInstances in the Action element, and all resources in the Resource element of the policy. In the Condition element of the policy, I use two conditions:

  • A matching statement where the resource tag ec2:ResourceTag/project matches the principal tag aws:PrincipalTag/project.
  • A matching statement where the resource tag ec2:ResourceTag/department matches the principal tag aws:PrincipalTag/department.

This ensures that the principal is able to start and stop an instance only if the project and department tags on the resource match the values of the corresponding tags on the principal. Attaching this policy to your developer roles or groups simplifies permissions management, because you only need to manage a single policy for all your development teams that require permissions to start and stop instances, and you can rely on tag values to specify the resources.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/project": "${aws:PrincipalTag/project}",
                    "ec2:ResourceTag/department": "${aws:PrincipalTag/department}"
                }
            }
        }
    ]
}

This policy will ensure that the user can only start and stop EC2 instances for the resources that are assigned to their department and project.

Conclusion

In this post, I’ve shown how you can enable AD FS to pass tags as part of the SAML token, so that you can enable ABAC for your AWS resources to simplify permissions management at scale.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Single Sign-On forum.

Want more AWS Security news? Follow us on Twitter.

Louay Shaat

Louay is a Solutions Architect with AWS, based out of Melbourne. He spends his days working with customers, from startups to the largest of enterprises, helping them build cool new capabilities and accelerating their cloud journey. He has a strong focus on security and automation, helping customers improve their security, risk, and compliance in the cloud.

New for Identity Federation – Use Employee Attributes for Access Control in AWS

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-for-identity-federation-use-employee-attributes-for-access-control-in-aws/

When you manage access to resources on AWS or many other systems, you most probably use Role-Based Access Control (RBAC). When you use RBAC, you define access permissions to resources, group these permissions in policies, assign policies to roles, assign roles to entities such as a person, a group of persons, a server, an application, etc. Many AWS customers told us they are doing so to simplify granting access permissions to related entities, such as persons sharing similar business functions in the organisation.

For example, you might create a role for a finance database administrator and give that role access to the tables and compute resources necessary for finance. When Alice, a database admin, moves into that department, you assign her the finance database administrator role.

On AWS, you use AWS Identity and Access Management (IAM) permissions policies and IAM roles to implement your RBAC strategy.

The multiplication of resources makes it difficult to scale. When a new resource is added to the system, system administrators must add permissions for that new resource to all relevant policies. How do you scale this to thousands of resources and thousands of policies? How do you verify that a change in one policy does not grant unnecessary privileges to a user or application?

Attribute-Based Access Control
To simplify the management of permissions, in the context of an ever-growing number of resources, a new paradigm emerged: Attribute-Based Access Control (ABAC). When you use ABAC, you define permissions based on matching attributes. You can use any type of attributes in the policies: user attributes, resource attributes, environment attributes. Policies are IF … THEN rules, for example: IF user attribute role == manager THEN she can access file resources having attribute sensitivity == confidential.

Using ABAC permission control allows you to scale your permission system, as you no longer need to update policies when adding resources. Instead, you ensure that resources have the proper attributes attached to them. ABAC allows you to manage fewer policies because you do not need to create policies per job role.

On AWS, attributes are called tags. You can attach tags to resources such as Amazon Elastic Compute Cloud (EC2) instance, Amazon Elastic Block Store (EBS) volumes, AWS Identity and Access Management (IAM) users and many others. Having the possibility to tag resources, combined with the possibility to define permissions conditions on tags, effectively allows you to adopt the ABAC paradigm to control access to your AWS resources.
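
For instance, a tag could be attached to an IAM user or role from the CLI with commands along these lines (the user name, role name, and tag value are only examples):

aws iam tag-user --user-name alice --tags Key=CostCenter,Value=blue
aws iam tag-role --role-name finance-database-administrator --tags Key=CostCenter,Value=blue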

You can learn more about how to use ABAC permissions on AWS by reading the new ABAC section of the documentation or taking the tutorial, or watching Brigid’s session at re:Inforce.

This was a big step, but it only worked if your user attributes were stored in AWS. Many AWS customers manage identities (and their attributes) in another source and use federation to manage AWS access for their users.

Pass in Attributes for Federated Users
We’re excited to announce that you can now pass user attributes in the AWS session when your users federate into AWS, using standards-based SAML. You can now use attributes defined in external identity systems as part of attributes-based access control decisions within AWS. Administrators of the external identity system manage user attributes and define attributes to pass in during federation. The attributes you pass in are called “session tags”. Session tags are temporary tags which are only valid for the duration of the federated session.

Granting access to cloud resources using ABAC has several advantages. One of them is you have fewer roles to manage. For example, imagine a situation where Bob and Alice share the same job function, but different cost centers; and you want to grant access only to resources belonging to each individual’s cost center. With ABAC, only one role is required, instead of two roles with RBAC. Alice and Bob assume the same role. The policy will grant access to resources where their cost center tag value matches the resource cost center tag value. Imagine now you have over 1,000 people across 20 cost centers. ABAC can reduce the cost center roles from 20 to 1.

Let us consider another example. Let’s say your systems engineer configures your external identity system to include CostCenter as a session tag when developers federate into AWS using an IAM role. All federated developers assume the same role, but are granted access only to AWS resources belonging to their cost center, because permissions apply based on the CostCenter tag included in their federated session and on the resources.

Let’s illustrate this example with the diagram below:


In the figure above, blue, yellow, and green represent the three cost centers my workforce users are attached to. To set up ABAC, I first tag all project resources with their respective CostCenter tags and configure my external identity system to include the CostCenter tag in the developer session. The IAM role in this scenario grants access to project resources based on the CostCenter tag. The IAM permissions might look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [ "ec2:DescribeInstances"],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances","ec2:StopInstances"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/CostCenter": "${aws:PrincipalTag/CostCenter}"
                }
            }
        }
    ]
}

The access will be granted (Allow) only when the condition matches: when the value of the resources’ CostCenter tag matches the value of the principal’s CostCenter tag. Now, whenever my workforce users federate into AWS using this role, they only get access to the resources belonging to their cost center based on the CostCenter tag included in the federated session.

If a user switches from cost center green to blue, your system administrator will update the external identity system with CostCenter = blue, and permissions in AWS automatically apply to grant access to the blue cost center AWS resources, without requiring permissions update in AWS. Similarly, when your system administrator adds a new workforce user in the external identity system, this user immediately gets access to the AWS resources belonging to her cost center.

We have worked with Auth0, ForgeRock, IBM, Okta, OneLogin, Ping Identity, and RSA to ensure the attributes defined in their systems are correctly propagated to AWS sessions. You can refer to their published guidelines on configuring session tags for AWS for more details. In case you are using other Identity Providers, you may still be able to configure session tags, if they support the industry standards SAML 2.0 or OpenID Connect (OIDC). We look forward to working with additional Identity Providers to certify Session Tags with their identity solutions.

Session Tags are available in all AWS Regions today at no additional cost. You can read our new session tags documentation page to follow step-by-step instructions to configure an ABAC-based permission system.

— seb

How to use CI/CD to deploy and configure AWS security services with Terraform

Post Syndicated from Jonathan Rau original https://aws.amazon.com/blogs/security/how-use-ci-cd-deploy-configure-aws-security-services-terraform/

Like the infrastructure your applications are built on, security infrastructure can be handled using infrastructure as code (IAC) and continuous integration/continuous deployment (CI/CD). In this post, I’ll show you how to build a CI/CD pipeline using AWS Developer Tools and HashiCorp’s Terraform platform as an IAC tool for AWS Web Application Firewall (WAF) deployments. AWS WAF is a web application firewall that helps protect your applications from common web exploits that could affect availability, compromise security, or consume excessive resources.

Terraform is an open-source tool for building, changing, and versioning infrastructure safely and efficiently. With Terraform, you can manage AWS services and custom defined provisioning logic. You create a configuration file that describes to Terraform the components needed to run a single application or your entire AWS footprint. When Terraform consumes the configuration file, it generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure.

In this solution, you’ll use Terraform configuration files to build your WAF, deploy it automatically through a CI/CD pipeline, and retain the WAF state files to be later referenced, changed, or destroyed through subsequent deployments in a durable backend. The CI/CD solution is flexible enough to deploy many other AWS services, security or otherwise, using Terraform. For a full list of supported services, see HashiCorp’s documentation.

Note: This post assumes you’re comfortable with Terraform and its core concepts, such as state management, syntax, and command terms. You can learn about Terraform here.

Solution Overview

Figure 1: Architecture diagram

For this solution, you’ll use AWS CodePipeline, an automated CD service to form the foundation of the CI/CD pipeline. CodePipeline helps us automate our release pipeline through build, test, and deployment. For the purpose of this post, I will not demonstrate how to configure any test or deployment stages.

The source stage uses AWS CodeCommit, the fully managed, Git-based source code management service from AWS that you can interact with through the console and CLI. CodeCommit encrypts the source at rest and in transit, and is integrated with AWS Identity and Access Management (IAM) to customize fine-grained access controls to the source.

Note: CodePipeline supports different sources, such as S3 or GitHub – if you’re comfortable with those services, feel free to substitute them as you walk through the solution.

For the build stage, you’ll use AWS CodeBuild, which is a fully managed CI service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild uses a build specification file, which is a collection of build commands, variables and related settings, in a YAML file, that CodeBuild uses to run a build.
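
As a rough sketch of what such a build specification could look like for this solution, assuming Terraform 0.12.x is installed from the HashiCorp releases site (the pinned version below is an example; pin the release you have tested):

version: 0.2

phases:
  install:
    commands:
      # Install a pinned Terraform release (version is an example)
      - wget -q https://releases.hashicorp.com/terraform/0.12.16/terraform_0.12.16_linux_amd64.zip
      - unzip terraform_0.12.16_linux_amd64.zip -d /usr/local/bin/
      - terraform --version
  build:
    commands:
      # Initialize the S3/DynamoDB backend, then plan and apply the committed configuration
      - terraform init
      - terraform plan -out=tfplan
      - terraform apply tfplan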

Finally, you’ll create a new Amazon Simple Storage Service (S3) bucket and Amazon DynamoDB table to durably store the Terraform state files outside of the CI/CD pipeline. These files are used by Terraform to map real world resources to your configuration, keep track of metadata, and to improve performance for large infrastructures.
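
A minimal sketch of the Terraform backend configuration that points at these resources might look like the following; the bucket name and Region are placeholders, and the table name matches the one suggested in the setup steps below:

terraform {
  backend "s3" {
    bucket         = "your-terraform-state-bucket"
    key            = "terraform.tfstate"
    region         = "us-east-2"
    dynamodb_table = "terraform-state-lock-dynamo"
    encrypt        = true
  }
}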

For the purpose of this post, the security infrastructure resource deployed through the pipeline will be an AWS WAF, specifically a Global Web ACL that can attach to an Amazon CloudFront distribution, with a sample SQL Injection and Blacklist filtering rule.

The deployment steps will be as shown in Figure 1:

  1. Push artifacts, Terraform configuration files and a build specification to a CodePipeline source.
  2. CodePipeline automatically invokes CodeBuild and downloads the source files.
  3. CodeBuild installs and executes Terraform according to your build specification.
  4. Terraform stores the state files in S3 and a record of the deployment in DynamoDB.
  5. The WAF Web ACL is deployed and ready for use by your application teams.

Step 1: Set-up

In this step, you’ll create a new CodeCommit repository, S3 bucket, and DynamoDB table.

Create a CodeCommit repository

  1. Navigate to the AWS CodeCommit console, and then choose Create repository.
  2. Enter a name, description, and then choose Create. You will be taken to your repository after creation.
  3. Scroll down, and then choose Create file, as shown in Figure 2:
     
    Figure 2: CodeCommit create file

  4. You will be taken to a new screen where you can create a sample file. Write readme into the text body, name the file readme.md, and then choose Commit changes, as shown in Figure 3:
     
    Figure 3: CodeCommit editing files

Note: You need to create a sample file to initialize your master branch; this file will not interfere with the build process, and you can safely delete it later.

Create a DynamoDB table

  1. Navigate to the Amazon DynamoDB console, and then choose Create table.
  2. Give your table a name like terraform-state-lock-dynamo.
  3. Enter LockID as your Primary key, keep the box checked for Use default settings, and then choose Create, as shown in Figure 4.

Note: Copy the name and ARN of the DynamoDB table because you will need it later when configuring your Terraform backend and CodeBuild service role.

 

Figure 4: Create DynamoDB table

Create an S3 bucket

  1. Navigate to the Amazon S3 console, and then choose Create bucket.
  2. Enter a unique name, choose the Region where you built the rest of your resources, and then choose Next.
  3. Enable Versioning and Default encryption, and then choose Next.
  4. Select Block all public access, choose Next, and then choose Create bucket.

Note: Copy the name and ARN of the S3 bucket because you will need it later when configuring your Terraform backend and CodeBuild service role.

Step 2: Create the CI/CD pipeline

In this step, you will create the rest of your pipeline using CodePipeline and CodeBuild. If you have decided to not use CodeCommit, read CodePipeline’s documentation here about other sources.

  1. Navigate to the AWS CodePipeline console, and then choose Create pipeline.
  2. Enter a Pipeline name, select New service role, and then choose Next, as shown in Figure 5:
     
    Figure 5: CodePipeline settings

  3. Select AWS CodeCommit as the Source provider, select the name of the repository you created, and then choose master as your Branch name.
  4. Choose Amazon CloudWatch Events (recommended) as your detection option, and then choose Next, as shown in Figure 6:
     
    Figure 6: CodePipeline source stage

  5. For Build provider, choose AWS CodeBuild and change your region as needed, and then choose Create project.

    Important: Selecting Create Project will open a new screen in your browser with the AWS CodeBuild console; do not close the browser because you will need it!

  6. Enter a Project name and description, and then scroll to the Environment section.
  7. For Environment image, choose Managed image, and then configure the following sub-selections, as shown in Figure 7:
    1. Operating system: Ubuntu
    2. Runtime(s): Standard
    3. Image: aws/codebuild/standard:1.0
    4. Image version: Always use the latest image for this runtime version
       
      Figure 7: CodeBuild environment image

  8. Select the checkbox under Privileged, select New service role, and take note of this Role name because you will be modifying it later.
     
    Figure 8: CodeBuild service role

  9. Choose the dropdown menu named Additional configuration (shown in Figure 8), scroll down to Environment variables, and then enter the following values, as shown in Figure 9:
    1. Name: TF_COMMAND
    2. Value: apply (this is case sensitive)
    3. Type: Plaintext
       
      Figure 9: CodeBuild variables

      Note: The build specification reads this environment variable at runtime to determine which Terraform command to run.

  10. In the Buildspec section, choose Use a buildspec file. You don’t need to provide a name because buildspec.yaml in your ZIP package is the default value CodeBuild will look for.
  11. In the Logs section, choose the checkbox next to CloudWatch logs – optional, and then choose Continue to CodePipeline (see Figure 10).
     
    Figure 10: CodeBuild logging

    Note: The separate window will close at this point and you will be back in the CodePipeline console.

  12. Now, back in the CodePipeline console, choose Next, choose Skip deploy stage, and then choose Skip when prompted, as shown in Figure 11.
     
    Figure 11: CodePipeline skip deploy stage

  13. Confirm your details are correct in the Review screen, and then choose Create pipeline.

After creation, you will be taken to the Pipeline Status view for the pipeline you just created. This interface allows you to monitor the status of CodePipeline in near real time. You can pivot to your Source repository and Build project by selecting the Details link, as shown in Figure 12.
 

Figure 12: CodePipeline status

You can also see previous CodePipeline runs by choosing the History view on the navigation pane on the left, as shown in Figure 13. This view is also useful for viewing multiple concurrent CodePipeline runs.
 

Figure 13: CodePipeline History

Step 3: Modify the CodeBuild service role

In this section, you will add an additional policy to your CodeBuild service role to allow Terraform to deploy your WAF and write state information to DynamoDB and S3.

  1. Navigate to the IAM Console, and then choose Roles from the navigation pane.
  2. Search for the CodeBuild service role, select it, and then choose Add inline policy.

    Note: An inline policy is used here to avoid accidental deletions or modifications, and to provide a one-to-one relationship between the permissions and the service role.

  3. Choose the JSON tab and paste in the following policy. Ensure you populate the Resource elements of the policy with the ARNs of the S3 bucket and DynamoDB table you created in Step 1, as shown in Figure 14.
    
    	{
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "WafSID",
            "Action": [
                "waf:CreateIPSet",
                "waf:CreateRule",
                "waf:CreateRuleGroup",
                "waf:CreateSqlInjectionMatchSet",
                "waf:CreateWebACL",
                "waf:DeleteIPSet",
                "waf:DeleteLoggingConfiguration",
                "waf:DeletePermissionPolicy",
                "waf:DeleteRule",
                "waf:DeleteRuleGroup",
                "waf:DeleteSqlInjectionMatchSet",
                "waf:DeleteWebACL",
                "waf:GetChangeToken",
                "waf:GetChangeTokenStatus",
                "waf:GetGeoMatchSet",
                "waf:GetIPSet",
                "waf:GetLoggingConfiguration",
                "waf:GetPermissionPolicy",
                "waf:GetRule",
                "waf:GetRuleGroup",
                "waf:GetSampledRequests",
                "waf:GetSqlInjectionMatchSet",
                "waf:GetWebACL",
                "waf:ListActivatedRulesInRuleGroup",
                "waf:ListGeoMatchSets",
                "waf:ListIPSets",
                "waf:ListLoggingConfigurations",
                "waf:ListRuleGroups",
                "waf:ListRules",
                "waf:ListSqlInjectionMatchSets",
                "waf:ListSubscribedRuleGroups",
                "waf:ListTagsForResource",
                "waf:ListWebACLs",
                "waf:PutLoggingConfiguration",
                "waf:PutPermissionPolicy",
                "waf:TagResource",
                "waf:UntagResource",
                "waf:UpdateIPSet",
                "waf:UpdateRule",
                "waf:UpdateRuleGroup",
                "waf:UpdateSqlInjectionMatchSet",
                "waf:UpdateWebACL"
              ],
            "Effect": "Allow",
            "Resource": "*"
          },
          {
            "Sid": "S3SID",
            "Action": [
              "s3:GetObject",
              "s3:ListBucket",
              "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": ""
          },
          {
            "Sid": "DDBSID",
            "Action": [
              "dynamodb:DeleteItem",
              "dynamodb:GetItem",
              "dynamodb:PutItem"
            ],
            "Effect": "Allow",
            "Resource": ""
          }
        ]
      }
    

     

    Figure 14: IAM resource edits

  4. Choose Review policy, enter a name for the inline policy, and then choose Create policy.

You now have the required permissions to deploy, modify, and delete your WAF, as needed. For pipelines that will be deploying multiple services, or using different backends for the state files, the permissions will need to be much more broadly defined.

Step 4: Deploy the WAF with CodePipeline

With all permissions and supporting infrastructure set up, you can now deploy your WAF. Navigate to this GitHub repository and clone it; there are five files you will need:

  • provider.tf
  • variables.tf
  • waf-conditions.tf
  • waf-rules.tf
  • buildspec.yaml
  1. Open the file named provider.tf in a text editor and modify the following values, as shown in Figure 15:
    1. region=: Enter your preferred AWS Region (on lines 3 & 13)
    2. bucket=: Name of your S3 bucket (on line 10)
    3. dynamodb_table=: Name of your DynamoDB table (on line 11)
       
      Figure 15: provider.tf modification

  2. Save and close this file, navigate to the AWS CodeCommit console, and then select your repository.
  3. Choose the drop-down menu named Add file, and then select Upload file (see Figure 16).
     
    Figure 16: CodeCommit Upload files

  4. Using the Console, upload all five files downloaded from GitHub. Alternatively, you can learn how to do this using the CLI in the AWS CodeCommit User Guide.
  5. After you’ve uploaded the last file, navigate to the CodePipeline console, and then select your pipeline.

    Note: If the source message in the UI doesn’t match the commit message of your last upload, use the History tab to find the execution that includes all of the added files; earlier executions will fail because some files were still missing.

  6. To access the Build project Build logs console, in the Build section, choose Details, as shown in Figure 17.
     
    Figure 17: CodePipeline status details

  7. Choose Tail logs to view logs in near real-time from the CodeBuild environment. You will be able to see the output from Terraform, as well as other information, such as errors and environmental logs, from the CodeBuild service, as shown in Figure 18. This view is useful for debugging missing Terraform permissions: a missing permission causes the build to fail, and Terraform logs which IAM actions were denied.
     
    Figure 18: CodeBuild tail logs

  8. After a successful deployment, navigate to the AWS WAF Web ACL Console, and then choose the Web ACL that was deployed.
  9. Choose the Rules tab, and then select the Rules’ hyperlinks to inspect how they were created, as shown in Figure 19.
     
    Figure 19: Web ACL views

From here, you can associate the global Web ACL with a CloudFront distribution to test its efficacy. This AWS Samples GitHub repository contains a more in-depth demo of how to effectively tune a WAF.

Important clean up

You will now clean up your deployed Web ACL. Doing this is important because you will be charged $5.00 USD per Web ACL, and $1.00 per rule per Web ACL, per month, on top of other related charges. Read the AWS WAF Pricing page for more details around AWS WAF pricing.

  1. Navigate to the AWS CodeBuild console, and then choose your CodeBuild project.
  2. Choose the Build details tab, scroll to the Environment section, and then choose Edit.
  3. Expand the Additional configuration drop-down menu, and then scroll to Environment variables.
  4. Under the Value of your previously created variable, replace the value with destroy, and then choose Update environment.
  5. Navigate back to the Pipelines menu in the AWS CodePipeline console, and then select your pipeline.
  6. Choose Release change, and then choose Release when prompted. Wait for the Build stage to report success to confirm deletion of your WAF resources.

Conclusion

In this post, you learned how to use AWS Developer Tools to create a serverless CI/CD pipeline that you can use to automate deployments of infrastructure with Terraform. By using Terraform and CI/CD, your security engineers can deploy security infrastructure services, such as AWS WAF, in a clearly defined and immutable process.

To further extend this solution, you can include manual approval stages via Amazon Simple Notification Service (SNS) to enforce approvals before your CI/CD pipelines deploy resources into your accounts. You can also choose to isolate your CI/CD pipelines by placing them in a VPC. Finally, you can use the WAF rules deployed by Terraform as the starting point for a rule group in AWS Firewall Manager (FMS), which allows you to define multi-account WAF deployments for accounts in AWS Organizations.

Jonathan Rau

Jonathan is the Senior TPM for AWS Security Hub. He holds an AWS Certified Specialty-Security certification and is extremely passionate about cyber security, data privacy, and new emerging technologies, such as blockchain. He devotes personal time into research and advocacy about those same topics.

Use IAM to share your AWS resources with groups of AWS accounts in AWS Organizations

Post Syndicated from Michael Switzer original https://aws.amazon.com/blogs/security/iam-share-aws-resources-groups-aws-accounts-aws-organizations/

You can now reference Organizational Units (OUs), which are groups of AWS accounts in AWS Organizations, in AWS Identity and Access Management (IAM) policies, making it easier to define access for your IAM principals (users and roles) to the AWS resources in your organization. AWS Organizations lets you organize your accounts into OUs to align them with your business or security purposes. Now, you can use a new condition key, aws:PrincipalOrgPaths, in your policies to allow or deny access based on a principal’s membership in an OU. This makes it easier than ever to share resources between accounts you own in your AWS environments.

For example, you might have an Amazon S3 bucket you need to share with developers and applications from accounts that are members of a specific OU. To accomplish this, you can specify the aws:PrincipalOrgPaths condition and set the value to the organizational unit ID of the caller in the resource-based policy attached to the bucket. When a principal tries to access the bucket, AWS verifies that their account’s OU matches the one specified in the policy. With this condition, permissions automatically apply when you add accounts to the OU without any additional updates to the policy.

In this post, I introduce the new condition key, and show you how to use it in two examples. In the first example you will see how to use the aws:PrincipalOrgPaths condition key to grant multiple AWS accounts access to a resource, without needing to maintain a list of account IDs in your policy. In the second example, you will see how to add a guardrail to your administrative roles that prevents access to powerful actions unless coming from a protected OU in your organization.

AWS Organizations Concepts

Before I walk through the condition, let’s review some important concepts from AWS Organizations.

AWS Organizations allows you to group a set of AWS accounts into an organization that you can manage centrally. Once the accounts have joined the organization, you can group them into organizational units (OUs), allowing you to set policies that help you meet your security and compliance requirements. You can create multiple OUs within a single organization, and you can create OUs within other OUs to form hierarchical relationships between your accounts. When you create an organization, AWS Organizations creates your first account container automatically. It has a special name, called a root. All OUs you create exist inside the root.

Organizations, roots, and OUs use a different format for their identifiers. You can see the differences in the table below:

Resource            | ID Format                     | Example Value     | Globally Unique
Organization        | o-exampleorgid                | o-p8iu8lkook      | Yes
Root                | r-examplerootid               | r-tkh7            | No
Organizational Unit | ou-examplerootid-exampleouid  | ou-tkh7-pbevdy6h  | No

Organization IDs are globally unique, meaning no organizations share Organization IDs. OU and Root IDs are not globally unique. This means another customer’s organization OU may have the same ID as those from your organization. OU and Root IDs are unique within an organization. Therefore, you should always include the organization identifier when specifying an OU to make sure it is unique to your organization.

Control access to resources based on OU

You use condition keys in the condition element of an IAM policy. A condition is an optional IAM policy element you can use to specify circumstances under which the policy grants or denies permission. A condition includes a condition key, operator, and value for the condition.

Condition key         | Description                                             | Operator(s)          | Value(s)
aws:PrincipalOrgPaths | The paths of the principals’ OU from AWS Organizations  | All string operators | Paths of AWS Organization IDs and organizational unit IDs

The aws:PrincipalOrgPaths condition key is a global condition, meaning you can use it in conjunction with any AWS action. When you use it in the condition element of your IAM policy, it validates the organization, root, and OUs of the principal performing the action on the resource. For example, let’s say a principal was a member of an OU with the id ou-abcd-zzyyxxww inside a root r-abcd in the organization o-1122334455. When the principal makes a request on the resource, its aws:PrincipalOrgPaths value is:

["o-1122334455/r-abcd/ou-abcd-zzyyxxww/"]

The path includes the organization ID to ensure global uniqueness. This ensures only principals from your organization can access your AWS resources. You can use any string operator, such as StringEquals, with the condition. You can also use the wildcard characters (* and ?) when providing a path.

aws:PrincipalOrgPaths is a multi-value condition key. Multi-value keys allow you to provide multiple values in a list format. Here’s a sample condition statement from a policy that uses the key to validate that a principal is from either ou-1 or ou-2:


"Condition":{
	"ForAnyValue:StringLike":{
		"aws:PrincipalOrgPaths":[
		  "o-1122334455/r-abcd/ou-1/",
		  "o-1122334455/r-abcd/ou-2/"
		]
	}
}

For all multi-value condition keys, you must provide the value as a JSON-formatted list as shown above, even if you’re only specifying one value. As shown in the example above, you also must use the ForAnyValue qualifier in your conditions to specify you’re checking membership of one OU path. For more information, see Creating a Condition That Tests Multiple Key Values in the IAM documentation.

In the next section, I’ll go over an example of how to use the new condition key to protect resources in your account from access outside of a given OU.

Example: Grant S3 bucket access to all principals in an OU in your organization

This example demonstrates how you can use the new condition key to share resources with groups of accounts. By placing the accounts into an OU and granting access based on membership, you can grant targeted access without having to list and maintain all the AWS account IDs in your permission policies.

Consider an example where I want to grant my Machine Learning team permissions to access an S3 bucket training-data that contains images that the team will use to train their machine learning models. I’ve set up my organization such that all AWS accounts owned by my Machine Learning team are part of a specific OU with the ID ou-machinelearn. For the purpose of this example, my organization ID is o-myorganization.

For this example, I want to allow users and applications from the Machine Learning OU or any OU beneath it to have permissions to read the training-data S3 bucket. Any other AWS accounts should not have the ability to view the resource.

To grant these permissions, I author an S3 bucket policy for my training-data resource as shown below.


{
	"Version":"2012-10-17",
	"Statement":{
		"Sid":"TrainingDataS3ReadOnly",
		"Effect":"Allow",
		"Principal": "*",
		"Action":"s3:GetObject",
		"Resource":"arn:aws:s3:::training-data/*",
		"Condition":{
			"ForAnyValue:StringLike":{
				"aws:PrincipalOrgPaths":["o-myorganization/*/ou-machinelearn/*"]
			}
		}
	}
}

In the policy above, I assert that principals trying to read the contents of the training-data bucket must be either a member of the OU that corresponds to the ou-machinelearn ID I provided (my Machine Learning OU Identifier), or a member of any OUs that are children of it. For the aws:PrincipalOrgPaths value, I used two asterisk (*) wildcards. I used the first asterisk (*) between my organization ID and my OU ID because OU IDs are unique within my organization. This means specifying the full path is not necessary to select the OU I need. The second asterisk (*), at the end of the path, is used to specify that I want to allow all child OUs to be included in my string comparison. If I didn’t want to include the child OUs, I could remove the wildcard character.

With this policy on the bucket, any principals in the Machine Learning OU may read objects inside the bucket if the user or role has the appropriate S3 permissions. Note that if this policy did not have the condition statement, it would be accessible by any AWS account. As a best practice, AWS recommends only granting access to the principals that need it. As for next steps, I could edit the Principal section of the policy to restrict access to specific principals in my Machine Learning accounts. For more information, see Specifying a Principal in a Policy in the S3 documentation.

Example: Restrict access to an IAM role to only accounts in an OU in my organization

The next example will show how to use aws:PrincipalOrgPaths to add another layer of security to your existing IAM role trust policies, ensuring only members of specific OUs may assume your roles.

For this example, say my company requires that only network security engineers can create or manage AWS Virtual Private Cloud (VPC) resources in my accounts. The network security team has a dedicated OU, ou-netsec, for their workloads. I have the same organization ID as the previous example, o-myorganization.

Each account in my organization has a dedicated IAM role, VPCManager, with the permissions needed to manage VPCs. I want to ensure that only my network security team, who use principals that are tagged as such, has access to the role. To do this, I edited the role trust policy for VPCManager, which defines who can access an IAM role. In this case, I added a condition to the policy to require that anyone assuming the role must come from an account in ou-netsec.

This is the trust policy I created for VPCManager:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "123456789012",
          "345678901234",
          "567890123456"
        ]
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalTag/JobRole": "NetworkAdmin"
        },
        "ForAnyValue:StringLike": {
          "aws:PrincipalOrgPaths": ["o-myorganization/*/ou-netsec/"]
        }
      }
    }
  ]
}

I started by adding the Effect, Principal, and Action to allow principals from three network security accounts to assume the role. To ensure they have the right job role, I added a condition to require the JobRole=NetworkAdmin tag must be applied to principals before they can assume the role. Finally, as an added layer of security, I added the second condition that requires anyone assuming the role must come from an account in the network security OU. This final step ensures that I specified the correct account IDs for my network security accounts—even if I accidentally provided an account that is not part of my organization, members of that account won’t be able to assume the role because they aren’t part of ou-netsec.

Though only members of the network security team may assume the role, it’s still possible for any principals with IAM permissions to modify it. As next steps, I could apply a Service Control Policy (SCP) that protects the role from modification and prevents other roles in the account from modifying VPCs. For more information, see How to use service control policies to set permission guardrails in the AWS Security Blog.

Summary

AWS offers tools to control access for individual principals, accounts, OUs, or entire organizations—this helps you manage permissions at the appropriate scale for your business. You can now use the aws:PrincipalOrgPaths condition key to control access to your resources based on OUs configured in AWS Organizations. For more information about these global condition keys and policy examples, read the IAM documentation.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon Identity and Access Management forum.

Want more AWS Security news? Follow us on Twitter.

Michael Switzer, Senior Product Manager AWS Identity

Michael Switzer

Mike Switzer is the product manager for the Identity and Access Management service at AWS. He enjoys working directly with customers to identify solutions to their challenges, and using data-driven decision making to drive his work. Outside of work, Mike is an avid cyclist and outdoorsperson. Mike holds a master’s degree in computational mathematics from the University of Washington.

Continuously monitor unused IAM roles with AWS Config

Post Syndicated from Michael Chan original https://aws.amazon.com/blogs/security/continuously-monitor-unused-iam-roles-aws-config/

Developing in the cloud encourages you to iterate frequently as your applications and resources evolve. You should also apply this iterative approach to the AWS Identity and Access Management (IAM) roles you create. Periodically ensuring that all the resources you’ve created are still being used can reduce operational complexity by eliminating the need to track unnecessary resources. It also improves security: identifying unused IAM roles helps reduce the potential for improper or unintended access to your critical infrastructure and workloads.

The IAM API now provides you with information about when a role has last been used to make an AWS request. In this post, I demonstrate how you can identify inactive roles using role last used information. Additionally, I’ll show you how to implement continuous monitoring of role activity using AWS Config.

AWS services and features used

This solution uses the following services and features:

  • AWS IAM: This service enables you to manage access to AWS services and resources securely. It provides an API to retrieve the timestamp of an IAM role’s last use when making an AWS request, and the region where the request was made.
  • AWS Config: This service allows you to continuously monitor and record your AWS resource configurations. It will periodically trigger your AWS Config rule (see next bullet) and will record compliance status.
  • AWS Config Rule: This resource represents your desired configuration settings for specific AWS resources or for an entire AWS account. This resource will check the compliance status of your AWS resources. You can provide the logic that determines compliance, which enables you to mark IAM roles in use as “compliant” and inactive roles as “non-compliant.”
  • AWS Lambda: This service lets you run code without provisioning or managing servers. Lambda will be used to execute API calls to retrieve role last used information and to provide compliance evaluations to AWS Config.
  • Amazon Simple Storage Service (Amazon S3): This is a highly available and durable object store. You’ll use it to store your Lambda code in .zip format prior to deploying your Lambda function.
  • AWS CloudFormation: This service provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. You’ll use it to provision all the resources described in this solution.

Solution logic

This solution identifies unused IAM roles within your account. First, you’ll identify unused roles based on a time window (last number of days) you set. I use 60 days in my example, but this range is configurable. Second, you’ll use AWS Lambda to process all the roles in your account. Third, you’ll determine if they’re compliant based on their creation time and role last used information. Last, you’ll send your evaluations to AWS Config, which records the results and reports if each role is compliant or not. If not, you can take steps to remediate, such as denying all actions that the role can perform.

Prerequisites

This solution has the following prerequisites:

Solution architecture

 

Figure 1: Solution architecture

As shown in the diagram, AWS Config (1) executes the AWS Config custom rule daily, and this frequency is configurable (2), which in turn invokes the Lambda function (3). The Lambda function enumerates each role and determines its creation date and role last used timestamp, both of which are provided via IAM’s GetAccountAuthorizationDetails API (4). When the Lambda function has determined the compliance of all your roles, the function returns the compliance results to AWS Config (5). AWS Config retains the history of compliance changes evaluated by the rule. If configured, compliance notifications can be sent to an Amazon Simple Notification Service (Amazon SNS) topic. Compliance status is viewable either in the AWS Management Console or through use of the AWS CLI or AWS SDK.

Deploying the solution

The resources for this solution are deployed through AWS CloudFormation. You must prepare the Lambda function’s source code for packaging before AWS CloudFormation can deploy the complete solution into your account.

Step 1: Prepare the Lambda deployment

First, make sure you’re running a *nix prompt (Linux, Mac, or Windows Subsystem for Linux). Follow the commands below to create an empty folder named iam-role-last-used where you’ll place your Lambda source code.


mkdir iam-role-last-used
cd iam-role-last-used
touch lambda_function.py

Note that the directory you create and the code it contains will later be compressed into a .zip file by the AWS CLI’s cloudformation package command. This command also uploads the deployment .zip file to your S3 bucket. The cloudformation deploy command will reference this bucket when deploying the solution.

Next, create a Lambda layer with the latest boto3 package. This ensures that your Lambda function is using an up-to-date boto3 SDK and allows you to control the dependencies in your function’s deployment package. You can do this by following steps 1 through 4 in these directions. Be sure to record the Lambda layer ARN that you create because you will use it later.
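If you prefer the CLI over the console steps linked above, the commands below are one way to package the latest boto3 as a layer and publish it. The layer name is a placeholder, and the directory layout (a top-level python/ folder) is what Lambda expects for Python layers; record the LayerVersionArn from the command output for later.

# Build a Lambda layer containing the latest boto3 (layer name is a placeholder)
mkdir -p boto3-layer/python
pip install boto3 -t boto3-layer/python
(cd boto3-layer && zip -r ../boto3-layer.zip python)

# Publish the layer and note the LayerVersionArn in the output
aws lambda publish-layer-version \
  --layer-name boto3-latest \
  --zip-file fileb://boto3-layer.zip \
  --compatible-runtimes python3.7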

Finally, open the lambda_function.py file in your favorite editor or integrated development environment (IDE), and place the following code into the lambda_function.py file:


import boto3
from botocore.exceptions import ClientError
from botocore.config import Config
import datetime
import fnmatch
import json
import os
import re
import logging


logger = logging.getLogger()
logging.basicConfig(
    format="[%(asctime)s] %(levelname)s [%(module)s.%(funcName)s:%(lineno)d] %(message)s", datefmt="%H:%M:%S"
)
logger.setLevel(os.getenv('log_level', logging.INFO))

# Configure boto retries
BOTO_CONFIG = Config(retries=dict(max_attempts=5))

# Define the default resource to report to Config Rules
DEFAULT_RESOURCE_TYPE = 'AWS::IAM::Role'

CONFIG_ROLE_TIMEOUT_SECONDS = 60

# Set to True to get the lambda to assume the Role attached on the Config service (useful for cross-account).
ASSUME_ROLE_MODE = False

# Evaluation strings for Config evaluations
COMPLIANT = 'COMPLIANT'
NON_COMPLIANT = 'NON_COMPLIANT'


# This gets the client after assuming the Config service role either in the same AWS account or cross-account.
def get_client(service, execution_role_arn):
    if not ASSUME_ROLE_MODE:
        return boto3.client(service)
    credentials = get_assume_role_credentials(execution_role_arn)
    return boto3.client(service, aws_access_key_id=credentials['AccessKeyId'],
                        aws_secret_access_key=credentials['SecretAccessKey'],
                        aws_session_token=credentials['SessionToken'],
                        config=BOTO_CONFIG
                        )


def get_assume_role_credentials(execution_role_arn):
    sts_client = boto3.client('sts')
    try:
        assume_role_response = sts_client.assume_role(RoleArn=execution_role_arn,
                                                      RoleSessionName="configLambdaExecution",
                                                      DurationSeconds=CONFIG_ROLE_TIMEOUT_SECONDS)
        return assume_role_response['Credentials']
    except ClientError as ex:
        if 'AccessDenied' in ex.response['Error']['Code']:
            ex.response['Error']['Message'] = "AWS Config does not have permission to assume the IAM role."
        else:
            ex.response['Error']['Message'] = "InternalError"
            ex.response['Error']['Code'] = "InternalError"
        raise ex


# Validates the role pathname whitelist passed via AWS Config parameters and returns a list of patterns (split on the pipe character).
def validate_whitelist(unvalidated_role_pattern_whitelist):
    # Names of users, groups, roles must be alphanumeric, including the following common
    # characters: plus (+), equal (=), comma (,), period (.), at (@), underscore (_), and hyphen (-).

    if not unvalidated_role_pattern_whitelist:
        return None

    # Search for any character outside the allowed set
    regex = re.compile('[^-a-zA-Z0-9+=,.@_/|*]')
    if regex.search(unvalidated_role_pattern_whitelist):
        raise ValueError("[Error] Provided whitelist has invalid characters")

    return unvalidated_role_pattern_whitelist.split('|')


# This uses Unix filename pattern matching (as opposed to regular expressions), as documented here:
# https://docs.python.org/3.7/library/fnmatch.html.  Please note that if using a wildcard, e.g. "*", you should use
# it sparingly/appropriately.
# If the rolename matches the pattern, then it is whitelisted
def is_whitelisted_role(role_pathname, pattern_list):
    if not pattern_list:
        return False

    # If role_pathname matches pattern, then return True, else False
    # eg. /service-role/aws-codestar-service-role matches pattern /service-role/*
    # https://docs.python.org/3.7/library/fnmatch.html
    for pattern in pattern_list:
        if fnmatch.fnmatch(role_pathname, pattern):
            # whitelisted
            return True

    # not whitelisted
    return False


# Form an evaluation as a dictionary. Suited to report on scheduled rules.  More info here:
#   https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/config.html#ConfigService.Client.put_evaluations
def build_evaluation(resource_id, compliance_type, notification_creation_time, resource_type=DEFAULT_RESOURCE_TYPE, annotation=None):
    evaluation = {}
    if annotation:
        evaluation['Annotation'] = annotation
    evaluation['ComplianceResourceType'] = resource_type
    evaluation['ComplianceResourceId'] = resource_id
    evaluation['ComplianceType'] = compliance_type
    evaluation['OrderingTimestamp'] = notification_creation_time
    return evaluation


# Determine if any roles were used to make an AWS request
def determine_last_used(role_name, role_last_used, max_age_in_days, notification_creation_time):

    last_used_date = role_last_used.get('LastUsedDate', None)
    used_region = role_last_used.get('Region', None)

    if not last_used_date:
        compliance_result = NON_COMPLIANT
        reason = "No record of usage"
        logger.info(f"NON_COMPLIANT: {role_name} has never been used")
        return build_evaluation(role_name, compliance_result, notification_creation_time, resource_type=DEFAULT_RESOURCE_TYPE, annotation=reason)


    days_unused = (datetime.datetime.now() - last_used_date.replace(tzinfo=None)).days

    if days_unused > max_age_in_days:
        compliance_result = NON_COMPLIANT
        reason = f"Was used {days_unused} days ago in {used_region}"
        logger.info(f"NON_COMPLIANT: {role_name} has not been used for {days_unused} days, last use in {used_region}")
        return build_evaluation(role_name, compliance_result, notification_creation_time, resource_type=DEFAULT_RESOURCE_TYPE, annotation=reason)

    compliance_result = COMPLIANT
    reason = f"Was used {days_unused} days ago in {used_region}"
    logger.info(f"COMPLIANT: {role_name} used {days_unused} days ago in {used_region}")
    return build_evaluation(role_name, compliance_result, notification_creation_time, resource_type=DEFAULT_RESOURCE_TYPE, annotation=reason)


# Returns a list of dicts, each of which has authorization details of each role.  More info here:
#   https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/iam.html#IAM.Client.get_account_authorization_details
def get_role_authorization_details(iam_client):

    roles_authorization_details = []
    roles_list = iam_client.get_account_authorization_details(Filter=['Role'])

    while True:
        roles_authorization_details += roles_list['RoleDetailList']
        if 'Marker' in roles_list:
            roles_list = iam_client.get_account_authorization_details(Filter=['Role'], MaxItems=100, Marker=roles_list['Marker'])
        else:
            break

    return roles_authorization_details


# Check the compliance of each role by determining if role last used is > than max_days_for_last_used
def evaluate_compliance(event, context):

    # Initialize our AWS clients
    iam_client = get_client('iam', event["executionRoleArn"])
    config_client = get_client('config', event["executionRoleArn"])

    # List of resource evaluations to return back to AWS Config
    evaluations = []

    # List of dicts of each role's authorization details as returned by boto3
    all_roles = get_role_authorization_details(iam_client)

    # Timestamp of when AWS Config triggered this evaluation
    notification_creation_time = str(json.loads(event['invokingEvent'])['notificationCreationTime'])

    # ruleParameters is received from AWS Config's user-defined parameters
    rule_parameters = json.loads(event["ruleParameters"])

    # Maximum allowed days that a role can be unused, or has been last used for an AWS request
    max_days_for_last_used = int(os.environ.get('max_days_for_last_used', '60'))
    if 'max_days_for_last_used' in rule_parameters:
        max_days_for_last_used = int(rule_parameters['max_days_for_last_used'])

    whitelisted_role_pattern_list = []
    # This key matches the "role-whitelist" key passed via the Config rule's InputParameters
    if 'role-whitelist' in rule_parameters:
        whitelisted_role_pattern_list = validate_whitelist(rule_parameters['role-whitelist'])

    # Iterate over all our roles.  If the creation date of a role is <= max_days_for_last_used, it is compliant
    for role in all_roles:

        role_name = role['RoleName']
        role_path = role['Path']
        role_creation_date = role['CreateDate']
        role_last_used = role['RoleLastUsed']
        role_age_in_days = (datetime.datetime.now() - role_creation_date.replace(tzinfo=None)).days

        if is_whitelisted_role(role_path + role_name, whitelisted_role_pattern_list):
            compliance_result = COMPLIANT
            reason = "Role is whitelisted"
            evaluations.append(
                build_evaluation(role_name, compliance_result, notification_creation_time, resource_type=DEFAULT_RESOURCE_TYPE, annotation=reason))
            logger.info(f"COMPLIANT: {role_name} is whitelisted")
            continue

        if role_age_in_days <= max_days_for_last_used:
            compliance_result = COMPLIANT
            reason = f"Role age is {role_age_in_days} days"
            evaluations.append(
                build_evaluation(role_name, compliance_result, notification_creation_time, resource_type=DEFAULT_RESOURCE_TYPE, annotation=reason))
            logger.info(f"COMPLIANT: {role_name} - {role_age_in_days} is newer or equal to {max_days_for_last_used} days")
            continue

        evaluation_result = determine_last_used(role_name, role_last_used, max_days_for_last_used, notification_creation_time)
        evaluations.append(evaluation_result)

    # Iterate over our evaluations 100 at a time, as put_evaluations only accepts a max of 100 evals.
    evaluations_copy = evaluations[:]
    while evaluations_copy:
        config_client.put_evaluations(Evaluations=evaluations_copy[:100], ResultToken=event['resultToken'])
        del evaluations_copy[:100]

Here’s how the above code works. The AWS Config custom rule invokes the Lambda function, calling the evaluate_compliance() method. evaluate_compliance() does the following:

  1. Retrieves information on all roles from IAM using the GetAccountAuthorizationDetails API as mentioned previously. This includes each role’s creation date and role last used timestamp.
  2. Marks each role as compliant if the role name matches one of the patterns in your whitelisted_role_pattern_list. This pattern list is passed to your rule via a user-configurable AWS CloudFormation parameter named RolePatternWhitelist. “Whitelisting roles,” below, provides instructions about how to do this.
  3. Marks each role as compliant if the age of the role in days (role_age_in_days) is less than or equal to the parameter MaxDaysForLastUsed (max_days_for_last_used). This is set via a user-configurable parameter in your CloudFormation stack. You’ll use this parameter to set the time window for how long a role can be inactive.
  4. If neither of the above conditions are met, then determine_last_used() is called, and each role will be marked as non-compliant if days_unused is greater than max_age_in_days.
  5. Finally, evaluate_compliance() calls put_evaluations() against AWS Config to store your evaluations of each role.

Step 2: Deploy the AWS CloudFormation template

Next, create an AWS CloudFormation template file named  iam-role-last-used.yml. This template uses the AWS Serverless Application Model (AWS SAM), which is an extension of CloudFormation. AWS SAM simplifies the deployment so that you don’t have to manually upload your deployment .zip file to your Amazon S3 bucket. To ensure that your template knows the location of your code .zip file, place the file on the same directory level as the iam-role-last-used directory that you created above. Then copy and paste the code below and save it to the iam-role-last-used.yml file.


AWSTemplateFormatVersion: '2010-09-09'
Description: "Creates an AWS Config rule and Lambda to check all roles' last used compliance"
Transform: 'AWS::Serverless-2016-10-31'
Parameters:

  MaxDaysForLastUsed:
    Description: Checks the number of days allowed for a role to not be used before being non-compliant
    Type: Number
    Default: 60
    MaxValue: 365

  NameOfSolution:
    Type: String
    Default: iam-role-last-used
    Description: The name of the solution - used for naming of created resources

  RolePatternWhitelist:
    Description: Pipe separated whitelist of role pathnames using simple pathname matching
    Type: String
    Default: ''
    AllowedPattern: '[-a-zA-Z0-9+=,.@_/|*]+|^$'

  LambdaLayerArn:
    Type: String
    Description: The ARN for the Lambda Layer you will use.
  
Resources:
  LambdaInvokePermission:
    Type: 'AWS::Lambda::Permission'
    DependsOn: CheckRoleLastUsedLambda
    Properties: 
      FunctionName: !GetAtt CheckRoleLastUsedLambda.Arn
      Action: lambda:InvokeFunction
      Principal: config.amazonaws.com
      SourceAccount: !Ref 'AWS::AccountId'

  LambdaExecutionRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: !Sub '${NameOfSolution}-${AWS::Region}'
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service: lambda.amazonaws.com
          Action:
          - sts:AssumeRole
      Path: /
      Policies:
      - PolicyName: !Sub '${NameOfSolution}'
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - config:PutEvaluations
            Resource: '*'
          - Effect: Allow
            Action:
            - iam:GetAccountAuthorizationDetails
            Resource: '*'
          - Effect: Allow
            Action:
            - logs:CreateLogStream
            - logs:PutLogEvents
            Resource:
            - !Sub 'arn:${AWS::Partition}:logs:${AWS::Region}:*:log-group:/aws/lambda/${NameOfSolution}:log-stream:*'

  CheckRoleLastUsedLambda:
    Type: 'AWS::Serverless::Function'
    Properties:
      Description: "Checks IAM roles' last used info for AWS Config"
      FunctionName: !Sub '${NameOfSolution}'
      Handler: lambda_function.evaluate_compliance
      MemorySize: 256
      Role: !GetAtt LambdaExecutionRole.Arn
      Runtime: python3.7
      Timeout: 300
      CodeUri: ./iam-role-last-used
      Layers:
      - !Ref LambdaLayerArn

  LambdaLogGroup:
    Type: 'AWS::Logs::LogGroup'
    Properties: 
      LogGroupName: !Sub '/aws/lambda/${NameOfSolution}'
      RetentionInDays: 30

  ConfigCustomRule:
    Type: 'AWS::Config::ConfigRule'
    DependsOn:
    - LambdaInvokePermission
    - LambdaExecutionRole
    Properties:
      ConfigRuleName: !Sub '${NameOfSolution}'
      Description: Checks the number of days that an IAM role has not been used to make a service request. If the number of days exceeds the specified threshold, it is marked as non-compliant.
      InputParameters: !Sub '{"role-whitelist":"${RolePatternWhitelist}","max_days_for_last_used":"${MaxDaysForLastUsed}"}'
      Source: 
        Owner: CUSTOM_LAMBDA
        SourceDetails: 
        - EventSource: aws.config
          MaximumExecutionFrequency: TwentyFour_Hours
          MessageType: ScheduledNotification
        SourceIdentifier: !GetAtt CheckRoleLastUsedLambda.Arn

For your reference, below is a summary of the template.

  • Parameters (these are user-configurable variables):
    • MaxDaysForLastUsed—maximum amount of days allowed for a role that has not been used to make an AWS request before becoming non-compliant
    • NameOfSolution—the name of the solution, used for naming of created resources
    • RolePatternWhitelist—a pipe (“|”) separated whitelist of role pathnames using simple pathname matching (see Whitelisting roles below)
    • LambdaLayerArn—the unique ARN for your Lambda layer
  • Resources (these are the AWS resources that will be created within your account):
    • LambdaInvokePermission—allows AWS Config to invoke your Lambda function
    • LambdaExecutionRole—the role and permissions that Lambda will assume to process your roles. The policies assigned to this role allow you to perform the iam:GetAccountAuthorizationDetails, config:PutEvaluations, logs:CreateLogStream, and logs:PutLogEvents actions. The PutEvaluations action allows you to send evaluation results back to AWS Config. The CreateLogStream and PutLogEvents actions allow you to write the Lambda execution logs to AWS CloudWatch Logs.
    • CheckRoleLastUsedLambda—defines your Lambda function and its attributes
    • LambdaLogGroup—logs from Lambda will be written to this CloudWatch Log Group
    • ConfigCustomRule—defines your custom AWS Config rule and its attributes

With the CloudFormation template you created above, use the AWS CLI’s cloudformation package command to zip the deployment package and upload it to the S3 bucket that you specify, as shown below. Make sure to replace <YOUR S3 BUCKET> with your bucket name only. Do not include the s3:// prefix:


aws cloudformation package --region <YOUR REGION> --template-file iam-role-last-used.yml \
--s3-bucket <YOUR S3 BUCKET> \
--output-template-file iam-role-last-used-transformed.yml

This will create the file iam-role-last-used-transformed.yml, which adds a reference to the S3 bucket and the pathname needed by CloudFormation to deploy your Lambda function.

Finally, deploy the solution into your AWS account using the cloudformation deploy command below. You can provide different values for NameOfSolution, MaxDaysForLastUsed, or RolePatternWhitelist by using the --parameter-overrides option. Otherwise, defaults will be used. These are specified at the top of the AWS CloudFormation template pasted above, under the Parameters section.


aws cloudformation deploy --region <YOUR REGION> --template-file iam-role-last-used-transformed.yml \
--stack-name iam-role-last-used \
--parameter-overrides NameOfSolution='iam-role-last-used' \
MaxDaysForLastUsed=60 \
RolePatternWhitelist='/breakglass-role|/security-*' \
LambdaLayerArn='<YOUR LAMBDA LAYER ARN>' \
--capabilities CAPABILITY_NAMED_IAM

The deployment is complete after the AWS CLI indicates success. This typically takes only a few minutes:


Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - iam-role-last-used

Step 3: View your findings

Now that your deployment is complete, you can view your compliance findings by going to the AWS Config console.

  1. Select the same region where you deployed the CloudFormation template.
  2. Select Rules in the left pane, which brings up the current list of rules in your account.
  3. Select the iam-role-last-used rule to view the rule’s details, as shown in Figure 2.

When a successful evaluation is indicated in the Overall rule status field, the compliance evaluation is complete. You may need to wait a few minutes for the function to complete successfully as results may not be available yet. You can periodically refresh your web browser to check for completion.
 

Figure 2: AWS Config custom rule details

After the rule completes its evaluations of your roles, you’ll be able to view your compliance results on the same page. In the screenshot below, you can see that there are multiple non-compliant roles. You can switch between viewing compliant and non-compliant resources by selecting the dropdown menu under Compliance status.
 

Figure 3: Viewing the compliance status

For more insight, you can hover over the “i” symbol, which provides additional information about the role’s non-compliant status (see Figure 4).
 

Figure 4: Hover over the information icon

Step 4: Export a report of your compliance

Once a successful evaluation has completed, you may want to create an exportable report of compliance. You can use the AWS CLI to programmatically script and automatically generate reports for your application, infrastructure, and security teams. They can use these reports to review non-compliant roles and take action if the role is no longer needed. The AWS CLI command below demonstrates how you can achieve this. Note that the command below encompasses a single line:

aws configservice get-compliance-details-by-config-rule --config-rule-name iam-role-last-used --output text --query 'EvaluationResults[*].{A:EvaluationResultIdentifier.EvaluationResultQualifier.ResourceId,B:ComplianceType,C:Annotation}'

The output is tab-delimited and will be similar to the lines below. The first column displays the role name. The second column states the compliance status. The last column explains the reason for the compliance status:

AdminRole   COMPLIANT      Was last used in us-west-2 46 days ago
Ec2DevRole  NON_COMPLIANT  No record of usage

Remediation

Now that you have a report of non-compliant roles, you must decide what to do with them. If your teams agree that a role is not necessary, the remediation can be to simply delete the role. If unsure, you can retain the role but deny it from performing any action. You can do this by attaching a new permissions policy that will deny all actions for all resources. Re-enabling the role would be as easy as removing the added policy. Otherwise, if the role is necessary but not frequently used, you can whitelist the role through the method below.
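As a rough sketch of that quarantine approach, the command below attaches an inline deny-all policy to a role. The role and policy names here are placeholders, and you would later remove the policy with aws iam delete-role-policy to re-enable the role.

# Quarantine an unused role by denying all actions (role and policy names are placeholders)
aws iam put-role-policy \
  --role-name Ec2DevRole \
  --policy-name DenyAllQuarantine \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Deny","Action":"*","Resource":"*"}]}'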

Whitelisting roles

Whitelisted roles will be reported as compliant by the custom rule even if left unused. You might have roles such as a security incident response or a break-glass role that require whitelisting.

The whitelist is supplied via the CloudFormation parameter RolePatternWhitelist and is stored as an AWS Config rule parameter. The syntax uses UNIX filename pattern matching. If you need to specify multiple patterns, you can use the | (pipe) character as a delimiter between each pattern. Each delimited pattern will then be matched against the role name, including the path. For example, if you wish to whitelist the breakglass-role, security-incident-response-role and security-audit-role roles, the whitelist patterns you provide to the AWS CloudFormation template might be:

/breakglass-role|/security-*

Important: The use of wildcards (*) should be used thoughtfully, as they will match anything.

Enhancements

In this walkthrough, I’ve kept the architecture and code simple to make the solution easier to follow. You can further customize the solution through the following enhancements:

Conclusion

In this post, I’ve shown you how to use AWS IAM and AWS Config to implement a detective security control that provides visibility into your IAM roles and their last time of use. I’ve also shown how you can view the results in the AWS Management Console and export them using the AWS CLI. Finally, I’ve presented different options for remediation and a means to whitelist roles that are necessary but infrequently used. These techniques can augment your security and compliance program by preventing unintended access through your IAM roles.

Additional resources

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the IAM forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Michael Chan

Michael Chan

Michael is a Developer Advocate for AWS Identity. Prior to this, he was a Professional Services Consultant who assisted customers with their journey to AWS. He enjoys understanding customer problems and working backwards to provide practical solutions.

Roland AbiHanna

Roland is a Sr. Solutions Architect with Amazon Web Services. He’s focused on helping enterprise customers realize their business needs through cloud solutions, specializing in DevOps and automation. Prior to AWS, Roland ran DevOps for a variety of start-ups in Europe and the Middle East. Outside of work, Roland enjoys hiking and searching for the perfect blend of hops, barley, and water.

Identify unused IAM roles and remove them confidently with the last used timestamp

Post Syndicated from Mathangi Ramesh original https://aws.amazon.com/blogs/security/identify-unused-iam-roles-remove-confidently-last-used-timestamp/

As you build on AWS, you create AWS Identity and Access Management (IAM) roles to enable teams and applications to use AWS services. As those teams and applications evolve, you might only rely on a subset of your original roles to meet your needs. This can leave unused roles in your AWS account. To help you identify these unused roles, IAM now reports the last-used timestamp that represents when a role was last used to make an AWS request. You or your security team can use this information to identify, analyze, and then confidently remove unused roles. This helps you improve the security posture of your AWS environments. Additionally, by removing unused roles, you can simplify your monitoring and auditing efforts by focusing only on roles that are in use. You can review when a role was last used to access your AWS environment in the IAM console, using the AWS Command Line Interface (AWS CLI), or the AWS SDK.

In this post, I demonstrate how to identify and remove roles that your team or applications don’t use by viewing the last-used timestamp in the IAM console.  Before I share an example, I’ll describe the existing IAM APIs where we now also report the last-used timestamp:

  • Get-role: Returns role details, including the path, ARN (Amazon Resource Name), and trust policy. You can now use this API to retrieve the last-used timestamp (see the sketch after this list).
  • Get-account-authorization-details: Retrieves information about all the IAM users, groups, roles, and policies in your AWS account. You can now view the last-used timestamp along with the other role details.

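As a quick illustration, here's a minimal boto3 sketch that reads the last-used timestamp returned by get-role; the role name (MigrationRole) is simply the example role discussed later in this post:

import boto3
from datetime import datetime, timezone

iam = boto3.client('iam')

# get-role now includes a RoleLastUsed element with the last-used date and Region
role = iam.get_role(RoleName='MigrationRole')['Role']
last_used = role.get('RoleLastUsed', {})

if 'LastUsedDate' in last_used:
    age = (datetime.now(timezone.utc) - last_used['LastUsedDate']).days
    print('Last used {} days ago in {}'.format(age, last_used.get('Region')))
else:
    print('No activity recorded within the 400-day tracking period')
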
How to use the AWS Management Console to view last-used information for roles

Imagine you’re a system administrator for Example Inc. and your development team is working on a new application. To enable them to get started with AWS quickly, you create roles for the team and their application. As the application goes through final review, you learn the team and application now rely on a smaller set of roles to access AWS services. This leaves unused roles in your AWS accounts that you might want to remove. You’re going to check the last time each role made a request to AWS and use this information to determine whether the team is using the role. If they aren’t, you plan to remove it knowing the team doesn’t need it for the application.

To view role-last-used information in the IAM Console, select Roles in the IAM navigation pane, then look for the Last activity column (see Figure 1 below). This displays the number of days that have passed since each role made an AWS service request. AWS records last-used information for the trailing 400 days. This is referred to as the tracking period. You can sort the column to identify the roles your team has not used recently.

In the case of Example Inc., let’s say you want to get rid of any roles that have been inactive for 90 days or more. From the information in Figure 1, you see that your team is using ApplicationEC2Access, TestRole, and CodeDeployRole. You also see they haven’t used AdminAccess, EC2FullAccess, and InfraSetupRole in the last 90 days. You can now delete these roles confidently. (Last activity “None”, as seen for the AdminAccess role, means that the role was not used within the trailing 400-day tracking period to make any service request.)
 

Figure 1: "Last activity" column in IAM console

Figure 1: “Last activity” column in IAM console

While analyzing the last-used timestamp for each role, you notice that the MigrationRole role was last active two months ago. You want to gather more information about the role’s access patterns to determine whether you ought to delete it. To do this, select the name of the role. From the role detail page, navigate to the Access Advisor tab, review the list of accessed services, and verify what the role was used for. Access Advisor provides a report that shows, for each service the selected IAM principal has permissions to, a timestamp indicating when the principal last accessed that service. Based on this report, you can decide whether to follow up with the development team to see if they still need this role. Thus, you have reduced the number of roles in your account from 9 to 6, making it easier to monitor active roles and restrict access to your AWS environments.
 

Figure 2: Access Advisor report

Summary

In this post, I showed you how to use role-last-used information to identify and remove unused roles. By removing unused roles, you can simplify monitoring and improve your security posture. To learn more about deleting roles, visit the deleting roles or instance profiles documentation.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the IAM forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Mathangi Ramesh

Mathangi is the product manager for AWS Identity and Access Management. She enjoys talking to customers and working with data to solve problems. Outside of work, Mathangi is a fitness enthusiast and a Bharatanatyam dancer. She holds an MBA degree from Carnegie Mellon University.

Working backward: From IAM policies and principal tags to standardized names and tags for your AWS resources

Post Syndicated from Michael Chan original https://aws.amazon.com/blogs/security/working-backward-from-iam-policies-and-principal-tags-to-standardized-names-and-tags-for-your-aws-resources/

When organizations first adopt AWS, they have to make many decisions that will lay the foundation for their future footprint in the cloud. Part of this includes making decisions about the number of AWS accounts you choose to operate, but another fundamental task is constructing practical access control policies so that your application teams can’t affect each other’s resources within the same account. With AWS Identity and Access Management (IAM), you can customize granular access control policies that are appropriate for your organization, helping you follow security best practices such as separation-of-duties and least-privilege. As every organization is different, you’ll need to carefully consider what your cloud security policies are and how they relate to your cloud engineering teams. Things to consider include who should be authorized to perform which actions, how your teams operate with one another, and which IAM mechanisms are suitable for ensuring that only authorized access is allowed.

In this blog post, I’ll show you an approach that works backwards, starting with a set of customer requirements, then utilizing AWS features such as IAM conditions and principal tagging. Combined with an AWS resource naming and tagging strategy, this approach can help you meet your access control objectives. AWS recently enabled tags on IAM principals (users and roles), which allows you to create a single reusable policy that provides access based on the tags of the IAM principal. When you combine this feature with a standardized resource naming and tagging convention, you can craft a set of IAM roles and policies suitable for your organization.

AWS features used in this approach

To follow along, you should have a working knowledge of IAM and tagging, and familiarity with IAM policy conditions and principal tags.

Introducing Example Corporation

To illustrate the strategies I discuss, I’ll refer to a fictitious customer throughout my post: Example Corporation is a large organization that wants to use their existing Microsoft Active Directory (AD) as their identity store, with Active Directory Federation Services (AD FS) as the means to federate into their AWS accounts. They also have multiple business projects, some of which will need their own AWS accounts, and others that will share AWS accounts due to the dependencies of the applications within those projects. Each project has multiple application teams who do not need to access each other’s AWS resources.

Example Corporation’s access control requirements

Example Corporation doesn’t always dedicate a single AWS account to one team or one environment. Sometimes, multiple project teams work within the same account, and sometimes they have more than one environment in an account. Figure 1 shows how the Website Marketing and Customer Marketing project teams (each of which has multiple application teams) share two AWS accounts: a development and staging AWS account and a production AWS account. Although production has a dedicated AWS account, Example Corporation has decided that a shared development and staging account is acceptable.
 

Figure 1: AWS accounts shared by Example Corp’s teams

The development and staging environments share an AWS account, and the two teams do work closely together. All projects within an account will be allowed access to the read-only metadata of other resources, such as EC2 instance names, tags, and IAM information. However, each project team wants to prevent their application resources from being modified by the other team’s members.

Initial decisions for supporting shared account access control

Example Corporation decides to continue using their existing identity federation solution for access to AWS, as the existing processes for handling joiners, movers, and leavers can be extended to manage identities within AWS. They will enable this via Security Assertion Markup Language (SAML) provided by AD FS to allow Example Corporation’s AD users to access AWS by assuming IAM roles. Initially, they will create three IAM roles—project administrator, application administrator, and application operator—with additional roles to come later.

The company knows they need to implement access controls through IAM, and they’ve created an initial list of AWS services (EC2, RDS, S3, SNS, and Amazon CloudWatch) to secure. Infrastructure as code (IaC) is a new concept at Example Corporation, so they want to keep initial IAM roles and policies as simple as possible. IAM principal tags will help them reuse standard policies across accounts. Principal tags are global condition keys assigned to a user or role. They can be used within a condition to ensure that a new resource is tagged on creation with a value that matches your principal. They can also be used to verify that an existing resource has a matching tag prior to allowing an action against that resource.

Many, but not all, AWS services support tag-based authorization of AWS resources. For services that don’t support tag-based authorization, Example Corporation will enable access control by utilizing ARN paths with wildcards (ARN matching). The name of the resource and its ARN path will explicitly state which projects, applications, and operators have access to that resource. This will require the company to design and enforce a mandatory naming convention.

Please see the IAM user guide for an up-to-date list of resources that support tag-based authorization.

Using multiple tags to meet access control requirements

The web and marketing teams have settled on three common roles and have decided their access levels as follows:

  • Project administrator: Able to access and modify all resources for a specific project, including all the resources belonging to application teams under the project.
  • Application administrator: Able to access and modify only the resources owned by a particular application team.
  • Application operator: Able to access and modify only the resources owned by a specific application team, plus those that reside within one of three environments: development, staging, or production.

 

Figure 2: Example Corp’s teams—administrators and operators with AWS access

As for the principal tags, there will be three unique tags named with the prefix access-, with tag values that differentiate the roles and their resources from other projects, applications, and environments.

Finally, because the AWS account is shared, Example Corporation needs to account for the service usage costs of the two teams. By adding a mandatory tag for “cost center,” they can determine the costs of the web team’s resources versus the marketing team’s resources in AWS Cost Explorer and AWS Cost and Usage Report.

Below is an example of the web team’s tags.

IAM principal tags used for the website project administrator role:

Tag name            Tag value
access-project      web
cost-center         123456

Tags for the website application administrator role:

Tag name            Tag value
access-project      web
access-application  nginx
cost-center         123456

Tags for the website application operator role—specifically for developer access to the dev environment:

Tag name            Tag value
access-project      web
access-application  nginx
access-environment  dev
cost-center         123456

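As an illustration, the website application operator tags above could be attached to the corresponding role with a call such as the following boto3 sketch; the role name used here is a placeholder, not one defined in this post:

import boto3

iam = boto3.client('iam')

# Attach the access tags and the cost allocation tag to the operator role
# (the role name is hypothetical).
iam.tag_role(
    RoleName='website-application-operator-dev',
    Tags=[
        {'Key': 'access-project', 'Value': 'web'},
        {'Key': 'access-application', 'Value': 'nginx'},
        {'Key': 'access-environment', 'Value': 'dev'},
        {'Key': 'cost-center', 'Value': '123456'},
    ],
)
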
Access control for AWS services and resources that support tag-based authorization

Example Corporation now needs to write IAM policies for their targeted resources. They begin with EC2, as that will be their most widely used service. The IAM documentation for EC2 shows that most write actions (create, modify, delete) support tag-based authorization, allowing the principal to execute the action only if the resource’s tag matches a predefined value.

For example, the following policy statement will only allow EC2 instances to be started or stopped if the resource tag value matches the “web” project name:


{
    "Action":[
        "ec2:StartInstances",
        "ec2:StopInstances"
    ],
    "Resource":[
        "arn:aws:ec2:*:*:instance/*"
    ],
    "Effect":"Allow",
    "Condition":{
        "StringEquals":{
            "ec2:ResourceTag/access-project":"web"
        }
    }
}         

However, if Example Corporation uses a policy variable instead of hardcoding the project name, the company can reuse the policy by taking advantage of the aws:PrincipalTag condition key:


{
    "Action":[
        "ec2:StartInstances",
        "ec2:StopInstances"
    ],
    "Resource":[
        "arn:aws:ec2:*:*:instance/*"
    ],
    "Effect":"Allow",
    "Condition":{
        "StringEquals":{
            "ec2:ResourceTag/access-project":"${aws:PrincipalTag/access-project}"
        }
    }
}    

Without policy variables, every IAM policy for every project would need a unique value to control access to the resource. Because the text of every policy document would be different, Example Corporation wouldn’t be able to reuse policies from one account to another or from one environment to another. Variables allow them to deploy the same policy file to all of their accounts, while allowing the effect of the policy to differ based on the tags that are used in each account.

As a result, Example Corporation will base the right to manipulate resources like EC2 on resource tags as much as possible. It is important, then, for their teams to tag each resource at the time of creation, if the resource supports it. Untagged resources won’t be manageable, but resources tagged properly will become automatically manageable. The company will use the aws:RequestTag IAM condition key to ensure that the requested access tags and cost allocation tags are assigned at the time of EC2 creation. The IAM policy associated with the application-operator role will therefore be:


{       
    "Sid": "AllowEC2ResourceCreationWithRequiredTags",
    "Action": [
        "ec2:CreateVolume",
        "ec2:RunInstances"
    ],      
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],      
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": "${aws:PrincipalTag/access-environment}",
            "aws:RequestTag/cost-center": "${aws:PrincipalTag/cost-center}"
        }
    }
},
{       
    "Sid": "AllowCreateTagsIfRequestingValidTags",
    "Action": [
        "ec2:CreateTags"
    ],
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": "${aws:PrincipalTag/access-environment}",
            "ec2:CreateAction": "RunInstances"
        }
    }
}

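To illustrate how a request satisfies these conditions, here's a minimal boto3 sketch of a RunInstances call made by an operator whose principal tags are web, nginx, and dev; the AMI ID and instance type are placeholders:

import boto3

ec2 = boto3.client('ec2')

# The request tags must match the caller's principal tags (plus cost-center),
# or the policy above denies the call.
required_tags = [
    {'Key': 'access-project', 'Value': 'web'},
    {'Key': 'access-application', 'Value': 'nginx'},
    {'Key': 'access-environment', 'Value': 'dev'},
    {'Key': 'cost-center', 'Value': '123456'},
]

ec2.run_instances(
    ImageId='ami-0123456789abcdef0',  # placeholder AMI ID
    InstanceType='t3.micro',
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[
        {'ResourceType': 'instance', 'Tags': required_tags},
        {'ResourceType': 'volume', 'Tags': required_tags},
    ],
)
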
If someone tries to create an EC2 instance without setting proper tags, the RunInstances API call will fail. The application-administrator policy will be similar, with the added ability to create a resource in any environment:


{       
    "Sid": "AllowEC2ResourceCreationWithRequiredTags",
    "Action": [
        "ec2:CreateVolume",
        "ec2:RunInstances"
    ],      
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],      
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-zone": [ "dev", "stg", "prd" ],   
            "aws:RequestTag/cost-center": "${aws:PrincipalTag/cost-center}"
        }
    }
},
{       
    "Sid": "AllowCreateTagsIfRequestingValidTags",
    "Action": [
        "ec2:CreateTags"
    ],
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ],
            "ec2:CreateAction": "RunInstances"  
        }
    }
}    

And finally, the project-administrator policy will have the most access. Note that even though this policy is for a project administrator, the user is still limited to modifying resources only within three environments. In addition, to ensure that all resources have the required access-application tag, Example Corporation has added a null condition to verify that the tag value is non-empty:


{       
    "Sid": "AllowEC2ResourceCreationWithRequiredTags",
    "Action": [
        "ec2:CreateVolume",
        "ec2:RunInstances"
    ],      
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],      
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ],
            "aws:RequestTag/cost-center": "${aws:PrincipalTag/cost-center}"
        },
        "Null": {
            "aws:RequestTag/access-application": false
        }
    }
},
{       
    "Sid": "AllowCreateTagsIfRequestingValidTags",
    "Action": [
        "ec2:CreateTags"
    ],
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ],
            "ec2:CreateAction": "RunInstances"  
        }
    }
}

Access control for AWS services and resources without tag-based authorization

Some services don’t support tag-based authorization. In those cases, Example Corporation will use ARN pattern matching. Many AWS resources use ARNs that contain a user-created name. Therefore, the company’s proposal is to name resources following a naming convention. A name will look like: [project]-[application]-[environment]-myresourcename. For resources that are globally unique, such as S3, Example Corporation additionally requires its abbreviated name, “exco,” to be at the beginning of the resource name to avoid a naming collision with another corporation’s buckets:


arn:aws:s3:::exco-web-nginx-dev-staticassets

To enforce this naming convention, they craft a reusable IAM policy that ensures that only intended users with matching access-project, access-application, and access-environment tag values can modify their resources. In addition, using * wildcard matches, they are able to allow for custom resource name suffixes such as staticassets in the above example. Using an AWS SNS topic as an example, a snippet of the IAM policy associated with the application-operator role will look like this:


{       
    "Sid": "AllowSNSListAccess",
    "Effect": "Allow",
    "Action": [
        "sns:ListTopics",
        "sns:ListSubscriptions*",
        ...
    ],      
    "Resource": "*"
},
{       
    "Sid": "AllowSNSAccessBasedOnArnMatching",
    "Effect": "Allow",
    "Action": [
        "sns:CreateTopic",
        "sns:DeleteTopic",
        ...
    ],      
    "Resource": [
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-${aws:PrincipalTag/access-environment}-*"
    ]
} 

And here’s an IAM policy for an application-admin:


{       
    "Sid": "AllowSNSListAccess",
    "Effect": "Allow",
    "Action": [
        "sns:ListTopics",
        "sns:ListSubscriptions*",
        ...
    ],      
    "Resource": "*"
},
{       
    "Sid": "AllowSNSAccessBasedOnArnMatching",
    "Effect": "Allow",
    "Action": [
        "sns:CreateTopic",
        "sns:DeleteTopic",
        ...
    ],            
    "Resource": [
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-dev-*",
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-stg-*",
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-prd-*"
    ]
}

And finally, here’s the IAM policy for a project-admin:


{       
    "Sid": "AllowSNSListAccess",
    "Effect": "Allow",
    "Action": [
        "sns:ListTopics",
        "sns:ListSubscriptions*",
        ...
    ],      
    "Resource": "*"
},
{       
    "Sid": "AllowSNSAccessBasedOnArnMatching",
    "Effect": "Allow",
    "Action": [
        "sns:*" 
    ],      
    "Resource": [
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-*"
    ]
}

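To show the naming convention at work, here's a small boto3 sketch of an application operator on the web team creating a topic that carries the required prefix; the topic suffix is made up, and the cost-center tag is applied after creation, in line with the decisions summarized later in this post:

import boto3

sns = boto3.client('sns')

# The topic name must start with <project>-<application>-<environment>- so
# that it matches the ARN pattern in the operator policy above.
topic = sns.create_topic(Name='web-nginx-dev-deployment-events')

# Cost allocation tag applied after the resource is created.
sns.tag_resource(
    ResourceArn=topic['TopicArn'],
    Tags=[{'Key': 'cost-center', 'Value': '123456'}],
)
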
The above policies have two caveats, however. First, they require that the principal tag values do not include a hyphen, because the hyphen is used as the delimiter in Example Corporation’s new convention for access control. In addition, a forward slash cannot be used, as it appears within the ARNs of many AWS resources, such as S3 objects:


arn:aws:s3:::awsexamplebucket/exampleobject.png

It is important that the company doesn’t let users create resources with disallowed or invalid tags. The following application admin permissions boundary policy uses a condition to permit IAM roles to be created, but only if they are tagged appropriately. Please note that these are just snippets of the boundary policy for the sake of illustration:


{       
    "Sid": "AllowIamCreateTagsOnUserOrRole",
    "Action": [
        "iam:TagUser",
        "iam:TagRole"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ]
        },      
        "StringNotLike": {
            "aws:RequestTag/access-project": [ "*-*", "*/*" ],
            "aws:RequestTag/access-application": [ "*-*", "*/*" ]            
        }       
    },      
    "Resource": [
        "arn:aws:iam::*:user/${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-*",
        "arn:aws:iam::*:role/${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-*"
    ]
}

And likewise, this permissions boundary policy attached to the project admin will do the same:


{       
    "Sid": "AllowIamCreateTagsOnUserOrRole",
    "Action": [
        "iam:TagUser",
        "iam:TagRole"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ]
        },      
        "StringNotLike": {
            "aws:RequestTag/access-project": [ "*-*", "*/*" ],
            "aws:RequestTag/access-application": [ "*-*", "*/*" ]            
        }       
    },      
    "Resource": [
        "arn:aws:iam::*:user/${aws:PrincipalTag/access-project}-*",
        "arn:aws:iam::*:role/${aws:PrincipalTag/access-project}-*"
    ]
}

Note that the above boundary policies can also be crafted using allow statements and multiple explicit deny statements.

Example Corporation’s resource naming convention requirements

As shown in the above examples, Example Corporation has given project teams the ability to create resources with name-based access control for services that currently do not support tag-based authorization (such as SQS and S3). Through the use of wildcards, teams can still provide custom names to their resources to differentiate from other resources created within the same team.

AWS resources have various limits on the structure and composition of names, so the company restricts the character length on access tags. For example, Amazon ElastiCache cluster names must be 20 alphanumeric characters or less, including hyphens. Most AWS resources have higher character limits, but Example Corporation limits their [project]-[application]-[environment] prefix to a 3-character project ID, 5-character application ID, and 3-character maximum environment name to satisfy their requirements, as this will equal a total of 14 characters (for example, web-nginx-prd-), which leaves 6 characters remaining for the user-specified cluster name.
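
A small, hypothetical helper like the following makes the length budget explicit; the 20-character cap reflects the ElastiCache example above, and the function is only a sketch of Example Corporation's convention:

def build_resource_name(project, application, environment, suffix, max_length=20):
    # Enforce the agreed prefix sizes: 3-character project ID, 5-character
    # application ID, and 3-character environment name.
    if len(project) > 3 or len(application) > 5 or len(environment) > 3:
        raise ValueError('project/application/environment exceed the agreed sizes')
    name = '-'.join([project, application, environment, suffix])
    if len(name) > max_length:
        raise ValueError('name {} exceeds {} characters'.format(name, max_length))
    return name

print(build_resource_name('web', 'nginx', 'prd', 'cache1'))  # web-nginx-prd-cache1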

Summary of key decisions

  • Services that support tag-based authorization (TBA) must have resources that follow a tagging convention for access control. Tagging on resource creation will be enforced where possible.
  • Services that do not support TBA must have resources that follow a naming convention. The cost center tag will still be required and will be applied after resource creation.
  • Services that do not support TBA, and cannot have user-specified names in their ARN (less common), will be addressed on a case-by-case basis. They will either need to allow access for all projects and application teams sharing the same account, or allow access via a custom IAM policy created on a case-by-case basis so that only the desired team can access the resource. Each IAM role should stay a few policies short of the maximum number of policies allowed per role, in order to leave room for these custom policies.
  • It is acceptable to allow basic List* and Describe* IAM permissions for AWS resources for all users who log in to the account, as the company’s project teams work closely together.
  • IAM user and role names created by project and application admins must adhere to the approved resource naming conventions. Admins themselves will have a permissions boundary policy applied to their roles. This policy, in turn, will require that all users and roles the admins create have a permissions boundary policy. This is especially important for roles associated with resources that can potentially create or modify IAM resources, such as EC2 and Lambda.
  • Active Directory users who need access to AWS resources must assume different IAM roles in order to utilize the different levels of access that the project admin, application admin, and application operator each provide. Users must also assume a different role if they need access to a different project. This is because each role’s tag has a single value. In this scheme, a single role cannot be assigned to multiple projects or application teams.

Conclusion

Example Corporation was able to allow their project teams to share the same AWS account while still limiting access to a majority of the account’s AWS resources. Through the use of IAM principal tagging, combined with a resource naming and tagging convention, they created a reusable set of IAM policies that restricted access not only between project and application admins and users, but also between different development, stage, and production users.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, please start a new thread on the IAM forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Michael Chan

Michael is a Professional Services Consultant who has assisted commercial and Federal customers with their journey to AWS. He enjoys understanding customer problems and working backwards to provide practical solutions.

Create fine-grained session permissions using IAM managed policies

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/create-fine-grained-session-permissions-using-iam-managed-policies/

As a security best practice, AWS Identity and Access Management (IAM) recommends that you use temporary security credentials from AWS Security Token Service (STS) when you access your AWS resources. Temporary credentials are short-term credentials generated dynamically and provided to the user upon request. Today, one of the most widely used mechanisms for requesting temporary credentials in AWS is an IAM role. The advantage of using an IAM role is that multiple users in your organization can assume the same IAM role. By default, all users assuming the same role get the same permissions for their role session. To create distinctive role session permissions or to further restrict session permissions, users or systems can set a session policy when assuming a role. A session policy is an inline permissions policy which users pass in the session when they assume the role. You can pass the policy yourself, or you can configure your broker to insert the policy when your identities federate in to AWS (if you have an identity broker configured in your environment). This allows your administrators to reduce the number of roles they need to create, since multiple users can assume the same role yet have unique session permissions. If users don’t require all the permissions associated to the role to perform a specific action in a given session, your administrator can configure the identity broker to pass a session policy to reduce the scope of session permissions when users assume the role. This helps administrators set permissions for users to perform only those specific actions for that session.

With today’s launch, AWS now enables you to specify multiple IAM managed policies as session policies when users assume a role. This means you can use multiple IAM managed policies to create fine-grained session permissions for your user’s sessions. Additionally, you can centrally manage session permissions using IAM managed policies.

In this post, I review session policies and their current capabilities, introduce the concept of using IAM managed policies as session policies to control session permissions, and show you how to use managed policies to create fine-grained session permissions in AWS.

How do session policies work?

Before I walk through an example, I’ll review session policies.

A session policy is an inline policy that you can create on the fly and pass in the session during role assumption to further scope the permissions of the role session. The effective permissions of the session are the intersection of the role’s identity-based policies and the session policy. The maximum permissions that a session can have are the permissions that are allowed by the role’s identity-based policies. You can pass a single inline session policy programmatically by using the policy parameter with the AssumeRole, AssumeRoleWithSAML, AssumeRoleWithWebIdentity, and GetFederationToken API operations.

Next, I’ll provide an example with an inline session policy to demonstrate how you can restrict session permissions.

Example: Passing a session policy with AssumeRole API to restrict session permissions

Consider a scenario where security administrator John has administrative privileges when he assumes the role SecurityAdminAccess in the organization’s AWS account. When John assumes this role, he knows the specific actions he’ll perform using this role. John is cautious of the role permissions and follows the practice of restricting his own permissions by using a session policy when assuming the role. This way, John ensures that at any given point in time, his role session can only perform the specific action for which he assumed the SecurityAdminAccess role.

In my example, John only needs permissions to access an Amazon Simple Storage Service (S3) bucket called NewHireOrientation in the same account. He passes a session policy using the policy.json file below to reduce his session permissions when assuming the role SecurityAdminAccess.


{
"Version":"2012-10-17",
"Statement":[{
    "Sid":"Statement1",
    "Effect":"Allow",
    "Action":["s3:GetBucket", "s3:GetObject"],
    "Resource": ["arn:aws:s3:::NewHireOrientation", "arn:aws:s3:::NewHireOrientation/*"]
    }]
}  

In this example, the action and resources elements of the policy statement allow access only to the NewHireOrientation bucket and all the objects inside this bucket.

Using the AWS Command Line Interface (AWS CLI), John can pass the session policy’s file path (that is, file://policy.json) while calling the AssumeRole API with the following commands:


aws sts assume-role \
--role-arn "arn:aws:iam::111122223333:role/SecurityAdminAccess" \
--role-session-name "s3-session" \
--policy file://policy.json

When John assumes the SecurityAdminAccess role using the above command, his effective session permissions are the intersection of the permissions on the role and the session policy. This means that although the SecurityAdminAccess role had administrative privileges, John’s resulting session permissions are s3:ListBucket and s3:GetObject on the NewHireOrientation bucket. This way, John can ensure he only has access to the NewHireOrientation bucket for this session.
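
The same flow in boto3 might look like the following sketch, which also shows the resulting credentials being used; the account ID and bucket name come from the example above:

import json
import boto3

sts = boto3.client('sts')

session_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Statement1",
        "Effect": "Allow",
        "Action": ["s3:ListBucket", "s3:GetObject"],
        "Resource": ["arn:aws:s3:::NewHireOrientation",
                     "arn:aws:s3:::NewHireOrientation/*"]
    }]
}

response = sts.assume_role(
    RoleArn='arn:aws:iam::111122223333:role/SecurityAdminAccess',
    RoleSessionName='s3-session',
    Policy=json.dumps(session_policy),
)
creds = response['Credentials']

# Clients built from these credentials are limited to the intersection of the
# role's permissions and the session policy above.
s3 = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
print(s3.list_objects_v2(Bucket='NewHireOrientation').get('KeyCount'))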

Using IAM managed policies as session policies

You can now pass up to 10 IAM managed policies as session policies. This gives you the ability to further restrict session permissions. The managed policy you pass can be AWS managed or customer managed. To pass managed policies as session policies, you need to specify the Amazon Resource Name (ARN) of the IAM policies using the new policy-arns parameter in the AssumeRole, AssumeRoleWithSAML, AssumeRoleWithWebIdentity, or GetFederationToken API operations. You can use existing managed policies or create new policies in your account and pass them as session policies with any of the aforementioned APIs. The managed policies passed in the role session must be in the same account as that of the role. Additionally, you can pass an inline session policy and ARNs of managed policies in the same role session. To learn more about the sizing guidelines for session policies, please review the STS documentation.

Next, I’ll provide an example using IAM managed policies as session policies to help you understand how you can use multiple managed policies to create fine-grained session permissions.

Example: Passing IAM managed policies in a role session

Consider an example where Mary has a software development team in California (us-west-1) working on a project using Amazon Elastic Compute Cloud (EC2). This team needs permissions to spin up new EC2 instances to meet the project’s scalability requirements. Mary’s organization has a security policy that requires developers to create and manage AWS resources in their respective geographic locations only. This means a developer from California should have permissions to launch new EC2 instances only in California. Now, Mary’s organization has an identity and authentication system such as Active Directory, for which all employees already have identities created. Additionally, there is a custom identity broker application which verifies that employees are signed into the existing identity and authentication system. This broker application is configured to obtain temporary security credentials for the employees using the AssumeRole API. (To learn more about using an identity provider and identity broker with AWS, please see AWS Federated Authentication with Active Directory Federation Services.)

Mary creates a managed policy called DevCalifornia and adds a region restriction for California using the aws:RequestedRegion condition key. Following the best practice of granting least privilege, Mary lists out the specific actions the developers would need for spinning up EC2 instances:


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeAccountAttributes",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeInternetGateways",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeVpcAttribute",
                "ec2:DescribeVpcs",
                "ec2:DescribeInstances",
                "ec2:DescribeImages",
                "ec2:DescribeKeyPairs",
                "ec2:RunInstances"                               
            ],
            "Resource": "*",
                    "Condition": {
                "StringEquals": {
                    "aws:RequestedRegion": "us-west-1"
                }
            }
        }
        
    ]
}    

The above policy grants specific permissions to launch EC2 instances. The condition element of the policy sets a restriction on the Region where these actions can be performed. The condition key aws:RequestedRegion ensures that these service-specific actions can only be performed in California.

For Mary’s team’s use case, instead of creating a new role, Mary uses an existing role in her account called EC2Admin, which has the AmazonEC2FullAccess AWS managed policy attached to it, granting full access to Amazon EC2. Next, Mary configures the identity broker in such a way that the developers from the team in California can assume the EC2Admin role but with reduced session permissions. The broker passes the DevCalifornia managed policy as a session policy to reduce the scope of the session permissions when a developer from Mary’s team assumes the role. This way, Mary can ensure the team remains compliant with her organization’s security policy.

If performed using the AWS CLI, the command would look like this:

aws sts assume-role --role-arn "arn:aws:iam::444455556666:role/EC2Admin" --role-session-name "teamCalifornia-session" --policy-arns arn="arn:aws:iam::444455556666:policy/DevCalifornia"

If you want to pass multiple managed policies as session policies, then the command would look like this:

aws sts assume-role --role-arn "arn:aws:iam::<accountID>:role/<RoleName>" --role-session-name "<example-session>" --policy-arns arn="arn:aws:iam::<accountID>:policy/<PolicyName1>" arn="arn:aws:iam::<accountID>:policy/<PolicyName2>"

In the above example, PolicyName1 and PolicyName2 can be AWS managed or customer managed policies. You can also use them in conjunction, where PolicyName1 is an AWS managed policy and PolicyName2 a customer managed policy.
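
If you use boto3 rather than the CLI, the equivalent call might look like this sketch; the account ID, role name, and policy name are placeholders matching the example above:

import boto3

sts = boto3.client('sts')

response = sts.assume_role(
    RoleArn='arn:aws:iam::444455556666:role/EC2Admin',
    RoleSessionName='teamCalifornia-session',
    # Up to 10 managed policy ARNs can be passed as session policies.
    PolicyArns=[
        {'arn': 'arn:aws:iam::444455556666:policy/DevCalifornia'},
    ],
)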

Conclusion

You can now use IAM managed policies as session policies in role sessions and federated sessions to create fine-grained session permissions. You can use this functionality today by creating IAM managed policies using your existing inline session policies and referencing their policy ARNs in your role sessions. You can also keep using your existing session policy and pass the ARNs of IAM managed policies using the new policy-arn parameter to further scope your session permissions.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

Sulay Shah

Sulay is the product manager for the Identity and Access Management service at AWS. He strongly believes in the customer first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from North Carolina State University.

How to centralize and automate IAM policy creation in sandbox, development, and test environments

Post Syndicated from Mahmoud ElZayet original https://aws.amazon.com/blogs/security/how-to-centralize-and-automate-iam-policy-creation-in-sandbox-development-and-test-environments/

To keep pace with AWS innovation, many customers allow their application teams to experiment with AWS services in sandbox environments as they move toward production-ready architecture. These teams need timely access to various sets of AWS services and resources, which means they also need a mechanism to help ensure least privilege is granted. In other words, your application team generally shouldn’t have access to administrative resources, such as an AWS Lambda function that takes periodic Amazon Elastic Block Store snapshot backups, or an Amazon CloudWatch Events rule that sends events to a centralized information security account managed by your security team.

In this blog post, I’ll show you how to create a centralized and automated workflow that creates and validates AWS Identity and Access Management (IAM) policies for application teams working in various sandbox, development, and test environments. Your security developers can customize this workflow according to the specific requirements of your security team. They can create logic to limit the allowed permission sets based on account type or owning team. I’ll use AWS CodePipeline to create and manage a workflow containing various stages and spanning multiple AWS accounts that I’ll describe in more detail in the next section.

Solution overview

I’ll start with this scenario: Alice is an administrator for an AWS sandbox account used by her organization’s data scientists to try out AWS analytics services such as Amazon Athena and Amazon EMR. The data scientists assess the suitability of these services for their production use cases by running sample analytics jobs on portions of real data sets after any sensitive information has been taken out. The data sets are stored in an existing Amazon Simple Storage Service (Amazon S3) bucket. For every new project, Alice authors a new IAM policy that allows the project team to access their requested Amazon S3 bucket and create their analytics clusters. However, Alice must follow a company guideline that sandbox accounts can only launch specific Amazon Elastic Compute Cloud (Amazon EC2) instance types. She must also restrict access to all administrative AWS Lambda functions and CloudWatch Events rules that the security team use to monitor sandbox account compliance. Below is the solution that meets these requirements and makes it easier for Alice and other administrators to perform their tasks.
 

Figure 1: Solution architecture

  1. Alice uses the IAM visual editor to author a template that gives the data science team access to launch and manage EMR clusters that analyze S3-based data sets. She then uploads the IAM JSON policy document into an existing S3 bucket using an AWS Key Management Service (AWS KMS) key. The key and the S3 bucket were already created by the security team as part of account baselining, which I will detail later in this post. (A sketch of this packaging and upload step follows this list.)
  2. AWS CodePipeline automatically fetches the IAM JSON policy document and invokes a sequence of validation checks that use a single and central Lambda function hosted in an AWS account managed by the security team.
  3. If the IAM JSON policy adheres to all account and general security requirements coded by the security developers, the central Lambda function automatically creates the policy in Alice’s account and the pipeline will succeed. The central validation Lambda function will also attach a set of predefined explicit denies to the IAM policy to ensure that it limits undesired user capabilities in the sandbox account. If the IAM JSON policy fails the checks, the pipeline will fail and provide Alice the specific reason for non-compliance. Alice must then modify the policy and resubmit. When the policy has been successfully created, Alice will attach it to the right IAM user, group, or role.

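As a rough sketch of step 1, the policy and its metadata could be packaged and uploaded as follows; the policy statement, bucket name, object key, and KMS key ARN are all placeholders, and the object key must match whatever key the pipeline's source stage is configured to watch:

import json
import zipfile
import boto3

# The validation Lambda expects a zip artifact containing policy.json and
# metadata.json (with PolicyName and PolicyDescription).
metadata = {
    'PolicyName': 'DataScienceEMRAccess',
    'PolicyDescription': 'EMR and S3 access for the data science sandbox project'
}
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': ['s3:GetObject'],
        'Resource': ['arn:aws:s3:::example-datasets-bucket/*']
    }]
}

with zipfile.ZipFile('policy_bundle.zip', 'w') as bundle:
    bundle.writestr('policy.json', json.dumps(policy))
    bundle.writestr('metadata.json', json.dumps(metadata))

# Upload to the artifact bucket created in the prerequisites stack, encrypting
# the object with the account's KMS key.
s3 = boto3.client('s3')
with open('policy_bundle.zip', 'rb') as f:
    s3.put_object(
        Bucket='example-sandbox-artifact-bucket',
        Key='policy_bundle.zip',
        Body=f,
        ServerSideEncryption='aws:kms',
        SSEKMSKeyId='arn:aws:kms:us-east-1:111122223333:key/example-key-id',
    )
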
Solution deployment

This solution includes three steps, which I describe in the sections that follow.

Prerequisites

As this solution manages permissions granted to AWS services or IAM entities, I highly recommend that you try the solution first in an isolated test environment to make sure it meets all your security requirements.

  1. You’ll need administrator access in two AWS accounts to set up the solution. The deployment of this solution is typically done by one of your organization’s administrators while setting up new AWS accounts. These are the two account types you’ll need access to:
     

    • A sandbox account. This lets application teams experiment with various AWS architectures. This could be a development or test account, as mentioned earlier.
    • A central information security account. Typically, this is owned by an information security team who monitors and enforces security compliance within a multi-account structure.


Important: Because the Lambda function that you’ll create in the information security account has highly privileged permissions, it’s important to strictly follow best practices for securing the account. You need to limit account access to security team members. Sandbox account administrators should also not give this central Lambda function any IAM permissions in their sandbox account beyond IAM policy creation.

  2. Because you’ll use the AWS Management Console for both AWS accounts, I strongly recommend that you have roles in both AWS accounts and use the console’s Switch Role feature. You can attach an alias to each account and give each a different color code so that you always know which one you’re logged into.
  3. Make sure to use the same AWS region for all the resources that you create for this solution.

Step 1: Deploy the solution prerequisites

Before building the pipeline across the two AWS accounts, you must first configure the required resources in both accounts, such as IAM roles and encryption keys. This configuration is typically done according to your security team’s guidelines when your organization first sets up the sandbox, development, or test environment.

Important

  • In addition to the initial setup you’ll create in this section, your security team must explicitly deny sandbox, development, or test account administrators from attaching IAM policies that do not meet the allowed security policies for that account type, such as the AdministratorAccess IAM policy. Moreover, your security team must ensure any current or future users, groups, or roles in the account have no permissions to directly set or update IAM policies, such as CreatePolicy, CreatePolicyVersion, PutRolePolicy, PutUserPolicy, PutGroupPolicy, or UpdateAssumeRolePolicy. You want to ensure that policy creation can only be done through the automation pipeline, which I’ll show you how to build shortly. (A sketch of such an explicit-deny guardrail follows this list.)
  • Because the solution I’ll be describing focuses on the creation of least privilege permissions, it’s highly advisable that your security team combines the solution with IAM permission boundaries to make sure that any permissions defined in this solution are scoped by a set of pre-defined permissions for every type of account in the organization. For example, your account administrators might only be allowed to create IAM users or roles with a pre-defined set of permission boundaries that limit the permissions attached to those principals. For more information about permission boundaries, please refer to this AWS Security blog post.
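
For illustration, a guardrail of the kind described in the first bullet could look like the following boto3 sketch, which creates an explicit-deny policy that the security team can attach to sandbox principals or fold into a permissions boundary; the policy name is hypothetical:

import json
import boto3

iam = boto3.client('iam')

deny_direct_policy_changes = {
    'Version': '2012-10-17',
    'Statement': [{
        'Sid': 'DenyDirectIamPolicyChanges',
        'Effect': 'Deny',
        'Action': [
            'iam:CreatePolicy',
            'iam:CreatePolicyVersion',
            'iam:PutRolePolicy',
            'iam:PutUserPolicy',
            'iam:PutGroupPolicy',
            'iam:UpdateAssumeRolePolicy'
        ],
        'Resource': '*'
    }]
}

iam.create_policy(
    PolicyName='deny-direct-iam-policy-changes',
    PolicyDocument=json.dumps(deny_direct_policy_changes),
    Description='Forces IAM policy creation to go through the automation pipeline'
)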

Create the sandbox account prerequisites

Follow the steps below to deploy an AWS CloudFormation template that will create the following resources in the sandbox account:

  • An S3 bucket where your sandbox administrators will upload IAM policies
  • An IAM role that your automated pipeline will use to access the S3 bucket that stores the IAM policies
  • An AWS KMS key that you will use to encrypt the IAM policies in your S3 bucket
  1. While logged in to your sandbox account in your default browser, select this link to launch an AWS stack with the sandbox environment prerequisites. You’ll be redirected to the CloudFormation console with the template URL already populated.
     
    Figure 2: CloudFormation console with prepopulated URL

  2. Select Next and, optionally, provide a name for your stack. A suggested stack name, Sandbox-Prerequisites, should already be populated.
  3. The template defines an input parameter called CentralAccount that you can populate with the AWS account ID of your security account. For more information on how to find the account ID of your security account, check here.
  4. Select Next, and then select Next again.
  5. To have the stack create the IAM roles that your pipeline will use, select the check box that says I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create Stack.
  6. Select the Stack info tab and refresh periodically while watching the Stack Status field value. After your stack reaches the state CREATE_COMPLETE, navigate to the CloudFormation Outputs tab and copy the following output values to the text editor of your choice. You’ll use these values in subsequent CloudFormation stacks.
     
    Figure 3: CloudFormation Outputs tab

Create the information security account prerequisites

Follow the steps below to deploy a CloudFormation template that will create the following resources in your information security account:

  • An IAM role used by your automated pipeline to invoke your central Lambda function and to provide access to the sandbox account KMS key
  • An IAM role used by the central Lambda function to assume a role in the sandbox account and manage IAM policies
  1. While logged in to your security account in your default browser, select this link to launch an AWS stack with the security environment prerequisites. You’ll be redirected to the CloudFormation console with the template URL already populated.
  2. Select Next and, optionally, provide a name for your stack. A suggested stack name, Sandbox-Prerequisites, should already be populated.
  3. Populate the following input parameter fields:
    • SandboxAccount: The AWS account ID for the sandbox account.
    • ArtifactBucket: The bucket name that you noted in your text editor from the previous stack run in the sandbox account
    • CMKARN: The Amazon Resource Name (ARN) of the KMS key that you noted in your text editor from the previous stack run in the sandbox account
    • PolicyCheckerFunctionName: The name of the Lambda function to be created later. The default value is PolicyChecker
  4. Select Next, and then select Next again.
  5. To have the stack create the IAM roles used by your pipeline, select the box that reads I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create Stack.
  6. Wait for your stack until it reaches the state CREATE_COMPLETE.

Create the sandbox account pipeline

Now, switch back to your sandbox account and deploy the CloudFormation template that will create the following resources in the sandbox account:

  • An AWS CodePipeline automation pipeline that fetches the IAM policy document from S3 and sends it to the security account for centralized validation. If valid, a Lambda function in the information security account will also create the IAM policy in the sandbox account.
  • An S3 bucket policy to allow your central Lambda function to fetch the IAM policy JSON document from your bucket
  • An IAM role that will be assumed by the Lambda function in the central information security account and used to create IAM policies in the sandbox account. Sandbox account administrators can then attach those IAM policies to the required entities, such as an IAM user or role.
  1. While logged in to your sandbox account in your default browser, select this link to launch an AWS stack with the sandbox account pipeline. You’ll be redirected to the CloudFormation console with the template URL already populated.
  2. Click Next and, optionally, provide a name for your stack. A suggested stack name, Sandbox-Pipeline, should already be populated.
  3. Populate the following input parameter fields:
    • CentralAccount: The AWS account ID of the information security account, without hyphens.
    • ArtifactBucket: The same bucket name that you noted in your text editor earlier and used in the previous stack in the information security account.
    • CMKARN: The ARN of the KMS key that you noted in your text editor earlier and used in the previous stack in the information security account.
    • PolicyCheckerFunctionName: Again, the name of the Lambda function to be created later. It must be the same value you provided to the information security account template.
  4. Select Next, and then select Next again.
  5. To have the stack create the required IAM roles, select the box that reads I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create Stack.
  6. Wait for your stack until it reaches the state CREATE_COMPLETE.

Step 2: Set up the policy validation Lambda function in the central information security account

In the central information security account, create the Lambda function to validate the IAM policies created in sandbox environment.

  1. In the AWS Lambda console, select Create Function and then select Author from scratch. Provide values for the following fields:
    • Name. This must be the same function name defined as input parameter PolicyCheckerFunctionName to CloudFormation in step 1, when you set up the information security account prerequisites. If you did not change the default value in step 1, the default is still PolicyChecker.
    • Runtime. Python 2.7.
    • Role. To set the role, select Choose an existing role, and then select the role named policy-checker-lambda-role. This is the role you created in step 1, when you set up the information security account prerequisites.

    Choose Create Function, scroll down to Function Code, and then paste the following code into the editor (replacing the existing code):

    
    #  Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    #  Licensed under the Apache License, Version 2.0 (the "License"). You may not
    #  use this file except in compliance with
    #  the License. A copy of the License is located at
    #      http://aws.amazon.com/apache2.0/
    #  or in the "license" file accompanying this file. This file is distributed
    #  on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
    #  either express or implied. See the License for the
    #  specific language governing permissions and
    #  limitations under the License.
    from __future__ import print_function
    import json
    import boto3
    import zipfile
    import tempfile
    import os
    
    print('Loading function')
    PERMISSIVE_ERROR_MSG = """Policy creation request rejected: * permissions not
                             allowed in both actions and resources"""
    GENERAL_ERROR_MSG = """An error has occurred while validating policy.
                            Please contact admin"""
    
    
    def get_template(event, s3, artifact, file_in_zip):
        bucket = event['CodePipeline.job']['data']['inputArtifacts'][0]['location']['s3Location']['bucketName']
        key = event['CodePipeline.job']['data']['inputArtifacts'][0]['location']['s3Location']['objectKey']
    
        with tempfile.NamedTemporaryFile() as tmp_file:
            s3.download_file(bucket, key, tmp_file.name)
            with zipfile.ZipFile(tmp_file.name, 'r') as zip:
                return zip.read(file_in_zip)
    
    
    def get_sts_session(event, account, rolename):
        sts = boto3.client("sts")
        RoleArn = str("arn:aws:iam::" + account + ":role/" + rolename)
        response = sts.assume_role(
            RoleArn=RoleArn,
            RoleSessionName='SecurityManageAccountPermissions',
            DurationSeconds=900)
        sts_session = boto3.Session(
            aws_access_key_id=response['Credentials']['AccessKeyId'],
            aws_secret_access_key=response['Credentials']['SecretAccessKey'],
            aws_session_token=response['Credentials']['SessionToken'],
            region_name=os.environ['AWS_REGION'],
            botocore_session=None,
            profile_name=None)
        return (sts_session)
    
    
    def ManagePolicy(event, context):
        # Set boto session to get pipeline artifact from sandbox/dev/test account
        artifact_session = boto3.Session(
            aws_access_key_id=event['CodePipeline.job']['data']
                                   ['artifactCredentials']['accessKeyId'],
            aws_secret_access_key=event['CodePipeline.job']['data']
                                       ['artifactCredentials']['secretAccessKey'],
            aws_session_token=event['CodePipeline.job']['data']
                                   ['artifactCredentials']['sessionToken'],
            region_name=os.environ['AWS_REGION'],
            botocore_session=None,
            profile_name=None)
        # Fetch pipeline artifact from S3
        s3 = artifact_session.client('s3')
        permission_doc = get_template(event, s3, '', 'policy.json')
        metadata_doc = json.loads(get_template(event, s3, '', 'metadata.json'))
        permission_doc_json = json.loads(permission_doc)
        # Assume the central account role in the sandbox/dev/test account
        sts_session = get_sts_session(
            event, event['CodePipeline.job']['accountId'], 'central-account-role')
        iam = sts_session.client('iam')
        codepipeline = sts_session.client('codepipeline')
        policy_arn = 'arn:aws:iam::' + event['CodePipeline.job']['accountId'] + ':policy/' + metadata_doc['PolicyName']
    
        try:
            # 1.Sample code - Validate policy sent from sandbox/dev/test account:
            # look for * actions and * resources
            for statement in permission_doc_json['Statement']:
                if statement['Action'] == '*' and statement['Resource'] == '*':
                    return codepipeline.put_job_failure_result(
                                        jobId=event['CodePipeline.job']['id'],
                                        failureDetails={
                                            'type': 'JobFailed',
                                            'message': PERMISSIVE_ERROR_MSG})
            # 2.Sample code - Attach any required denies from central
            # pre-defined policy
            iam_local = boto3.client('iam')
            account_id = context.invoked_function_arn.split(":")[4]
            local_policy_arn = 'arn:aws:iam::' + account_id + ':policy/central-deny-policy-sandbox'
            policy_response = iam_local.get_policy(PolicyArn=local_policy_arn)
            policy_version_id = policy_response['Policy']['DefaultVersionId']
            policy_version_doc = iam_local.get_policy_version(
                PolicyArn=local_policy_arn,
                VersionId=policy_version_id)
            for statement in policy_version_doc['PolicyVersion']['Document']['Statement']:
                permission_doc_json['Statement'].append(
                   statement
                )
            # 3. If validated successfully, create policy in
            # sandbox/dev/test account
            iam.create_policy(
                PolicyName=metadata_doc['PolicyName'],
                PolicyDocument=json.dumps(permission_doc_json),
                Description=metadata_doc['PolicyDescription'])
    
            # successful creation, put result back to
            # sandbox/dev/test account pipeline
            codepipeline.put_job_success_result(
                jobId=event['CodePipeline.job']['id'])
        except Exception as e:
            print('Error: ' + str(e))
            codepipeline.put_job_failure_result(
                jobId=event['CodePipeline.job']['id'],
                failureDetails={'type': 'JobFailed', 'message': GENERAL_ERROR_MSG})
    
    def lambda_handler(event, context):
        print(event)
        ManagePolicy(event, context)
    

    This sample code shows how the Lambda function checks the IAM JSON policy submitted by Alice and rejects policies that are too permissive because they allow all IAM actions on all account resources. The sample code also appends the statements of a central, pre-defined deny policy, such as an explicit deny that prevents launching Amazon EC2 instances outside the T2 instance family, ensuring that only T2 instances can be launched. Your security developers should author similar code to meet the security policies of each account type and to control the IAM policies created in your various sandbox, development, and test environments. An illustrative example of such a deny policy is shown after this procedure.

  2. Before saving your new Lambda function code, scroll further down to the Basic Settings section and increase the function timeout to 10 seconds.
  3. Select Save.
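
For reference, a central deny policy such as the central-deny-policy-sandbox referenced in the sample code might look similar to the following sketch. The exact statements depend on what you defined in step 1; the Sid and the t2.* instance-type pattern below are illustrative assumptions, not a definitive implementation.

    {
    "Version": "2012-10-17",
    "Statement": [
            {
            "Sid": "DenyNonT2Instances",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                    "StringNotLike": {
                            "ec2:InstanceType": "t2.*"
                            }
                    }
            }
    ]
    }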

Step 3: Test the sandbox account pipeline

Now it’s time to deploy the solution in your sandbox account.

  1. Create the following files and compress them into an archive named policy.zip (the name expected by the pipeline you created). If you prefer to script the zip and upload steps, see the optional sketch after this procedure.
    • metadata.json: This file contains metadata like the name and description of the IAM policy to be created.
      
                      {
                      "PolicyDescription": "ec2 start permission policy",
                      "PolicyName": "Ec2RunTeamA"
                      }
                      

    • policy.json: This file contains the JSON body of the IAM policy to be created.
      
                      {
                      "Version": "2012-10-17",
                      "Statement": [
                              {
                              "Sid": "EC2Run",
                              "Effect": "Allow",
                              "Action": "ec2:RunInstances",
                              "Resource": "*"
                              }
                      ]
                      }
                      

  2. To upload your policy.zip file to the bucket you created earlier, go to the Amazon S3 console in the sandbox account and, in the search box at the top of the page, search for the bucket you noted in your text editor earlier as ArtifactBucket.
  3. When you locate your bucket, select the bucket name, and then select Upload. The upload dialog will appear.
  4. Select Add Files and navigate to the folder with the policy.zip file. Select the file, select Open, select Next, and then select Next again.
     
    Figure 4: S3 upload dialog

  5. Select the AWS KMS master-key radio button, and then select the KMS key that has the alias codepipeline-policy-crossaccounts.
     
    Figure 5: Selecting the KMS key

  6. Select Next, and then select Upload.
  7. Go to the AWS CodePipeline console, select your sandbox pipeline, and wait for it to start running. It can take up to a minute to start.
     
    Figure 6: AWS CodePipeline console

  8. Wait for your pipeline to complete. There should be no validation errors for the IAM policy you just uploaded and your IAM policy should be successfully created. To view the newly created IAM policy, open the AWS IAM console.
  9. Select Policies on the left and search for the policy with the name defined in the metadata.json file.
     
    Figure 7: Viewing your new policy

  10. Select the policy name. Note the deny statements from the central deny policy that were automatically appended to the policy you defined.
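
If you prefer to script steps 1 through 6 instead of using the console, the following sketch (not part of the original walkthrough) builds policy.zip from the two files and uploads it with the cross-account KMS key. Run it with credentials for the sandbox account; the bucket name below is a placeholder that you should replace with the value you noted as ArtifactBucket.

    # A minimal sketch: build policy.zip and upload it to the artifact bucket,
    # encrypting the object with the codepipeline-policy-crossaccounts KMS key.
    import zipfile

    import boto3

    ARTIFACT_BUCKET = 'your-artifact-bucket-name'  # placeholder: your ArtifactBucket value
    KMS_KEY_ALIAS = 'alias/codepipeline-policy-crossaccounts'

    # Create policy.zip containing the two files the pipeline expects
    with zipfile.ZipFile('policy.zip', 'w') as archive:
        archive.write('metadata.json')
        archive.write('policy.json')

    # Upload the archive with server-side encryption using the cross-account KMS key
    s3 = boto3.client('s3')
    s3.upload_file(
        'policy.zip', ARTIFACT_BUCKET, 'policy.zip',
        ExtraArgs={
            'ServerSideEncryption': 'aws:kms',
            'SSEKMSKeyId': KMS_KEY_ALIAS})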

If you’d like to test the pipeline further, you can modify the policy to permit all actions on all resources. When policy.zip is uploaded again, the pipeline should return the following error:


Policy creation request rejected: * permissions not allowed in both actions and resources
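
For example, replacing the contents of policy.json with the following over-permissive document (a test-only example; do not attach it to any principal) triggers that rejection, because the sample validation code looks for a statement with both Action and Resource set to *:

    {
    "Version": "2012-10-17",
    "Statement": [
            {
            "Sid": "TooPermissive",
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
            }
    ]
    }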

If you encounter any errors as you modify your Lambda function code, you can always review the Lambda function's logs in Amazon CloudWatch Logs in the central information security account. For more information on how to access Lambda function logs, refer to the AWS Lambda documentation.

The same logic can be extended to additional sandbox, development, or test environments. To onboard a newly added account, however, you need to update the existing roles: the new account must have a role that trusts the central information security account and grants it access to the account's resources, and the central information security account must be allowed to assume that role. An illustrative trust policy for such a role is shown below.
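
As a minimal sketch, the role in the newly added account (named central-account-role in the sample Lambda code) would have a trust policy similar to the following, where 111111111111 is a placeholder for your central information security account ID. The Lambda function's execution role in the central account must also be allowed to call sts:AssumeRole on this role's ARN.

    {
    "Version": "2012-10-17",
    "Statement": [
            {
            "Effect": "Allow",
            "Principal": {
                    "AWS": "arn:aws:iam::111111111111:root"
                    },
            "Action": "sts:AssumeRole"
            }
    ]
    }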

Summary

In this blog post, I showed you how to centralize the validation and creation of IAM policies across various AWS accounts. This allows your security developers to codify your security best practices, permitting automatic validation and creation of IAM policies across your various sandbox, development, and test accounts. Account administrators can then attach those validated IAM policies to the required IAM users, groups, or roles. This process strikes a balance between agility and control: it empowers your account administrators to create compliant, least-privilege IAM policies, while allowing your application teams to keep experimenting and innovating quickly. If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Mahmoud ElZayet

Mahmoud is a Global Accounts Solutions Architect at AWS. He works with large enterprise customers providing guidance and technical assistance for building cloud solutions. Mahmoud is passionate about DevOps and Cloud Compliance topics. Outside of work, he enjoys exploring new places with his wife and two kids.