Tag Archives: IAM policies

Setting permissions to enable accounts for upcoming AWS Regions

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/setting-permissions-to-enable-accounts-for-upcoming-aws-regions/

The AWS Cloud spans 61 Availability Zones within 20 geographic regions around the world, and has announced plans to expand to 12 more Availability Zones and four more Regions: Hong Kong, Bahrain, Cape Town, and Milan. Customers have told us that they want an easier way to control the Regions where their AWS accounts operate. Based on this feedback, AWS is changing the default behavior for these four and all future Regions so customers will opt in the accounts they want to operate in each new Region. For new AWS Regions, Identity and Access Management (IAM) resources such as users and roles will only be propagated to the Regions that you enable. When the next Region launches, you can enable this Region for your account using the AWS Regions setting under My Account in the AWS Management Console. You will need to enable a new Region for your account before you can create and manage resources in that Region. At this time, there are no changes to existing AWS Regions.

We recommend that you review who in your account will have access to enable and disable AWS Regions. Additionally, you can prepare for this change by setting permissions so that only approved account administrators can enable and disable AWS Regions. Starting today, you can use IAM permissions policies to control which IAM principals (users and roles) can perform these actions.

In this post, I describe the new account permissions for enabling and disabling new AWS Regions. I also describe the updates we’ve made to deny these permissions in the AWS-managed PowerUserAccess policy that many customers use to restrict access to administrative actions. For customers who use custom policies to manage administrative access, I show how to secure access to enable and disable new AWS Regions using IAM permissions policies and Service Control Policies in AWS Organizations. Finally, I explain the compatibility of Security Token Service (STS) session tokens with Regions.

IAM Permissions to enable and disable new AWS Regions for your account

To control access to enable and disable new AWS Regions for your account, you can set IAM permissions using two new account actions. By default, IAM denies access to new actions unless you have explicitly allowed these permissions in an existing policy. You can use IAM permissions policies to allow or deny IAM principals in your account the ability to enable and disable AWS Regions. The new actions are:

  • account:EnableRegion: Allows you to opt in an account to a new AWS Region (for Regions launched after March 20, 2019). This action propagates your IAM resources, such as users and roles, to the Region.
  • account:DisableRegion: Allows you to opt out an account from a new AWS Region (for Regions launched after March 20, 2019). This action removes your IAM resources, such as users and roles, from the Region.

When granting permissions using IAM policies, some administrators may have granted full access to AWS services except for administrative services such as IAM and Organizations. These IAM principals will automatically get access to the new administrative actions in your account to enable and disable AWS Regions. If you prefer not to provide account permissions to enable or disable AWS Regions to these principals, we recommend that you add a statement to your policies to deny access to account permissions. To do this, you can add a deny statement for account:*. As new Regions launch, you will be able to specify the Regions where these permissions are granted or denied.
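As a minimal sketch, a policy with such a deny statement could look like the following. Note that account:* also covers read-only actions such as account:ListRegions, so scope the deny more narrowly if your principals still need to list Regions.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "account:*",
            "Resource": "*"
        }
    ]
}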

At this time, the account actions to enable and disable AWS Regions apply to all upcoming AWS Regions launched after March 20, 2019. To learn more about managing access to existing AWS Regions, review my post, Easier way to control access to AWS regions using IAM policies.

Updates to AWS managed PowerUserAccess Policy

If you’re using the AWS managed PowerUserAccess policy to grant permissions to AWS services without granting access to administrative actions for IAM and Organizations, we have updated this policy as shown below to exclude access to the account actions that enable and disable new AWS Regions. You do not need to take further action to restrict these actions for any IAM principals to which this policy applies. We updated the first policy statement, which now allows access to all existing and future AWS service actions except for IAM, AWS Organizations, and account. We also updated the second policy statement to allow the read-only action for listing Regions. The rest of the policy remains unchanged.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "NotAction": [
                "iam:*",
                "organizations:*",
                "account:*"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:CreateServiceLinkedRole",
                "iam:DeleteServiceLinkedRole",
                "iam:ListRoles",
                "organizations:DescribeOrganization",
                "account:ListRegions"
            ],
            "Resource": "*"
        }
    ]
}

Restrict Region permissions across multiple accounts using Service Control Policies in AWS Organizations

You can also centrally restrict access to enable and disable Regions for all principals across all accounts in AWS Organizations by using Service Control Policies (SCPs). You would use SCPs to restrict this access if you do not anticipate using new Regions. SCPs enable administrators to set permission guardrails that apply to accounts in your organization or an organizational unit. To learn more about SCPs and how to create and attach them, read About Service Control Policies.

Next, I show how to restrict the Region enable and disable actions for accounts in an AWS organization by using an SCP. In the policy below, I set the Effect element of the policy statement to Deny, and in the Action element I add the new permissions account:EnableRegion and account:DisableRegion.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "account:EnableRegion",
                "account:DisableRegion"
            ],
            "Resource": "*"
        }
    ]
}

Once you create the policy, you can attach this policy to the root of your organization. This will restrict permissions across all accounts in your organization.
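For reference, here is a sketch of attaching the SCP by using the AWS CLI; the policy ID and root ID below are hypothetical placeholders:

aws organizations attach-policy --policy-id p-examplepolicyid --target-id r-examplerootid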

Check if users have permissions to enable or disable new AWS Regions in your account

You can use the IAM Policy Simulator to check if any IAM principal in your account has access to the new account actions for enabling and disabling Regions. The simulator evaluates the policies that you choose for a user or role and determines the effective permissions for each of the actions that you specify. Learn more about using the IAM Policy Simulator.
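You can also run the same check from the AWS CLI with the simulate-principal-policy command; the user ARN below is a hypothetical placeholder:

aws iam simulate-principal-policy \
    --policy-source-arn arn:aws:iam::111122223333:user/ExampleAdmin \
    --action-names account:EnableRegion account:DisableRegion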

Region compatibility of AWS STS session tokens

For new AWS Regions, we’re also changing region compatibility for session tokens from the AWS Security Token Service (STS) global endpoint. As a best practice, we recommend using the regional STS endpoints to reduce latency. If you’re using regional STS endpoints or don’t plan to operate in new AWS Regions, then the following change doesn’t apply to you and no action is required.

If you’re using the global STS endpoint (https://sts.amazonaws.com) for session tokens and plan to operate in new AWS Regions, the session token size is going to increase. This may impact functionality if you store session tokens in any of your systems. To ensure your systems work with this change, we recommend that you update your existing systems to use regional STS endpoints using the AWS SDK.
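For example, here is a minimal sketch using the AWS SDK for Python (Boto3) to call a regional STS endpoint instead of the global one; the Region shown is an arbitrary example:

import boto3

# Create an STS client against a regional endpoint instead of the
# global https://sts.amazonaws.com endpoint
sts = boto3.client(
    'sts',
    region_name='eu-west-1',
    endpoint_url='https://sts.eu-west-1.amazonaws.com'
)

# Session tokens issued by regional endpoints work in all Regions
response = sts.get_session_token()
print(response['Credentials']['Expiration'])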

Summary

AWS is changing the default behavior for all new Regions going forward. For new AWS Regions, you will opt in to enable your account to operate in those Regions. This makes it easier for you to select the Regions where you can create and manage AWS resources. To prepare for upcoming Region launches, we recommend that you review who in your account can enable and disable AWS Regions and set permissions so that only approved IAM principals can do so.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the AWS forums.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

The author

Sulay Shah

Sulay is the product manager for Identity and Access Management service at AWS. He strongly believes in the customer first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from North Carolina State University.

Simplify granting access to your AWS resources by using tags on AWS IAM users and roles

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/simplify-granting-access-to-your-aws-resources-by-using-tags-on-aws-iam-users-and-roles/

Recently, AWS enabled tags on IAM principals (users and roles). With this update, you can now use attribute-based access control (ABAC) to simplify permissions management at scale. This means administrators can create a reusable policy that applies permissions based on the attributes of the IAM principal (such as tags). For example, you can use an IAM policy that grants developers access to resources that match their project tag. As the team adds resources to projects, permissions automatically apply based on attributes. No policy update required for each new resource.

In this blog post, I walk through three examples of how you can control access permissions by using tags on IAM principals and AWS resources. It’s important to note that you can use tags to control access to your AWS resources, but only if the AWS service in question supports tag-based permissions. To learn more about AWS services that support tag-based permissions, see AWS Services That Work with IAM.

As a reminder, I introduced the following tagging condition keys in my post about tagging. Adding tags to the Condition element of a policy tailors the policy’s permissions and limits its actions and resources.

  • aws:RequestTag: Tags that you request to be added or removed. Supported by iam:CreateUser, iam:CreateRole, iam:TagRole, iam:UntagRole, iam:TagUser, and iam:UntagUser.
  • aws:TagKeys: Tag keys that are checked before the actions are executed. Supported by iam:CreateUser, iam:CreateRole, iam:TagRole, iam:UntagRole, iam:TagUser, and iam:UntagUser.
  • aws:PrincipalTag: Tags that exist on the user or role making the call. This is a global condition key (all actions across all services support it).
  • iam:ResourceTag: Tags that exist on an IAM resource. Supported by all IAM APIs that accept an IAM user or role, and by sts:AssumeRole.

Example 1: Grant IAM users access to your AWS resources by using tags

Assume that you have multiple teams of developers who need permissions to start and stop specific EC2 instances based on their cost center. In the following policy, I specify the EC2 actions ec2:StartInstances and ec2:StopInstances in the Action element and all resources in the Resource element of the policy. In the Condition element of the policy, I use the condition key aws:PrincipalTag. This helps ensure that the principal can start and stop an instance only if the value of the instance’s CostCenter tag matches the value of the CostCenter tag on the principal. Attaching this policy to your developer roles or groups simplifies permissions management: you maintain a single policy for all your dev teams that need permissions to start and stop instances, and you rely on tag values to specify the resources.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/CostCenter": "${aws:PrincipalTag/CostCenter}"
                }
            }
        }
    ]
}
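For the condition to match, the CostCenter tag must be present with the same value on both the principal and the instance. As a sketch, assuming a hypothetical user name and instance ID, you could apply the tags like this:

aws iam tag-user --user-name ExampleDeveloper --tags Key=CostCenter,Value=0735
aws ec2 create-tags --resources i-1234567890abcdef0 --tags Key=CostCenter,Value=0735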

Example 2: Grant users in an IAM group access to your AWS resources by using tags

Assume there are database administrators in your account who need start, stop, and reboot permissions for specific Amazon RDS instances. In the following policy, I define the start, stop, and reboot actions for Amazon RDS in the Action element of the policy, and all resources in the Resource element of the policy. In the Condition element of the policy, I use the condition key, aws:PrincipalTag, to select users with the tag, CostCenter=0735. I use the StringEquals condition operator to check for an exact match of the value. I also use the condition key, rds:db-tag, to control access to databases tagged with Project=DataAnalytics. I attach this policy to an IAM group which contains all the database administrators in my account. Now, any database administrator in this group with tag CostCenter=0735 gets access to the RDS instance tagged Project=DataAnalytics.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "rds:DescribeDBInstances",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "rds:RebootDBInstance",
                "rds:StartDBInstance",
                "rds:StopDBInstance"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalTag/CostCenter": "0735",
                    "rds:db-tag/Project": "DataAnalytics"
                }
            }
        }
    ]
}

Example 3: Use tags to control access to IAM roles

Let’s say a user, Bob in Account A, manages several applications and needs to assume specific roles in Account B. The following policy grants Bob’s IAM user permissions to assume all roles tagged with Project=ExampleCorpABC. In the Action element of the policy, I define sts:AssumeRole, which grants permissions to assume roles. In the Resource element of the policy, I define a wildcard (*) to grant access to all roles, but use the condition key, iam:ResourceTag, in the Condition element to scope down the roles that Bob can assume. As with the previous policy, I use the StringEquals operator to ensure that Bob can assume roles that have the tag, Project=ExampleCorpABC. Now, whenever I create a role in Account B and trust Bob’s account in the role’s trust policy, Bob can only assume this role if it is tagged with Project=ExampleCorpABC.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "iam:ResourceTag/Project": "ExampleCorpABC"
                }
            }
        }
    ]
}

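Because the roles live in Account B, each role also needs a trust policy that trusts Bob. A minimal sketch of such a trust policy, assuming a hypothetical Account A ID of 111122223333, might look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:user/Bob"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}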
Summary

You now can tag your IAM principals to control access to your AWS resources, and the three examples I’ve included in this post show how tags can help you simplify access management.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

Want more AWS Security news? Follow us on Twitter.

The author

Sulay Shah

Sulay is the product manager for Identity and Access Management service at AWS. He strongly believes in the customer first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from North Carolina State University.

Add Tags to Manage Your AWS IAM Users and Roles

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/add-tags-to-manage-your-aws-iam-users-and-roles/

We made it easier for you to manage your AWS Identity and Access Management (IAM) resources by enabling you to add tags to your IAM users and roles (also known as IAM principals). Tags enable you to add customizable key-value pairs to resources, and many AWS services support tagging of AWS resources. Now, you can use tags to add custom attributes such as project name and cost center to your IAM principals. Additionally, tags on IAM principals simplify permissions management. For example, you can author a policy that allows a user to assume the roles for a specific project by using a tag. As you add roles with that tag, users gain permissions to assume those roles automatically. In a subsequent post, I will review how you can use tags on IAM principals to control access to your AWS resources.

In this blog post, I introduce the new APIs and conditions you can use to tag IAM principals, show three example policies that address three tagging use cases, and show how to add tags to IAM principals by using the AWS Console and CLI. The first example policy grants permissions to tag principals. The second example policy requires specific tags for new users, and the third grants permissions to manage specific tags on principals.

Note: You must have the latest version of the AWS CLI to tag your IAM principals. Follow these instructions to update the AWS CLI.

New IAM APIs for tagging IAM principals

The following table lists the new IAM APIs that you must grant access to using an IAM policy so that you can view and modify tags on IAM principals. These APIs support resource-level permissions so that you can grant permissions to tag only specific principals.

  • iam:ListUserTags: Lists the tags on an IAM user. Resource-level permissions: arn:aws:iam::<ACCOUNT-ID>:user/<USER-NAME>
  • iam:ListRoleTags: Lists the tags on an IAM role. Resource-level permissions: arn:aws:iam::<ACCOUNT-ID>:role/<ROLE-NAME>
  • iam:TagUser: Creates or modifies the tags on an IAM user. Resource-level permissions: arn:aws:iam::<ACCOUNT-ID>:user/<USER-NAME>
  • iam:TagRole: Creates or modifies the tags on an IAM role. Resource-level permissions: arn:aws:iam::<ACCOUNT-ID>:role/<ROLE-NAME>
  • iam:UntagUser: Removes the tags on an IAM user. Resource-level permissions: arn:aws:iam::<ACCOUNT-ID>:user/<USER-NAME>
  • iam:UntagRole: Removes the tags on an IAM role. Resource-level permissions: arn:aws:iam::<ACCOUNT-ID>:role/<ROLE-NAME>

In addition to the new APIs, tagging parameters now are available for the existing iam:CreateUser and iam:CreateRole APIs to enable you to tag your users and roles when they are created. I show how you can add tags to a new user later in this blog post.

Now that you know the APIs you can use to tag IAM principals, let’s review an example of how to grant permissions to tag by using an IAM policy.

Example policy 1: Grant permissions to tag specific users and all roles

To get started using tags, you must first ensure you grant permissions to do so. The following policy grants permissions to tag one IAM user and all roles.

Note: Replace <ACCOUNT-ID> with your 12-digit account number.

        
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:ListUsers",
                "iam:ListRoles"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:ListUserTags",
                "iam:ListRoleTags",
                "iam:TagUser",
                "iam:TagRole",
                "iam:UntagUser",
                "iam:UntagRole"
            ],
            "Resource": [
                "arn:aws:iam::<ACCOUNT-ID>:user/John",
                "arn:aws:iam::<ACCOUNT-ID>:role/*"
            ]
        }
    ]
}

This policy lists all the actions required to see and modify tags for IAM principals. The Resource element of the policy grants permissions to tag one user, John, and all roles in the account by specifying the Amazon Resource Name (ARN).

Now that I have reviewed the new APIs you can use to view and modify tags on your IAM principals, let’s go over the new IAM condition keys you can use in policies.

New IAM condition keys for tagging IAM principals

The following table lists the condition keys you can use in your IAM policies to control access by using tags. In this section, I also show examples of how condition keys in policies can help you grant more specific access for tagging IAM principals.

  • aws:RequestTag: Tags that you request to be added or removed from a user or role. Supported by iam:CreateUser, iam:CreateRole, iam:TagRole, iam:UntagRole, iam:TagUser, and iam:UntagUser.
  • aws:TagKeys: Tag keys that are checked before the actions are executed. Supported by iam:CreateUser, iam:CreateRole, iam:TagRole, iam:UntagRole, iam:TagUser, and iam:UntagUser.
  • aws:PrincipalTag: Tags that exist on the user or role making the call. This is a global condition key (all actions across all services support it).
  • iam:ResourceTag: Tags that exist on the resource. Supported by any IAM API that accepts an IAM user or role, and by sts:AssumeRole.

Now that I have explained both the new APIs and condition keys for tagging IAM users and roles, let’s review two more use cases with tags.

Example policy 2: Require tags for new IAM users

Let’s say I want to apply the same tags to all new IAM users so that I can track them consistently along with my other AWS resources. Now, when you create a user, you can also pass in one or more tags. Let’s say I want to ensure that all the administrators on my team apply a CostCenter tag. I create an IAM policy that includes the actions required to create and tag users. I also use the Condition element to list the tags required to be added to each new user during creation. If an administrator forgets to add a tag, the administrator’s attempt to create the user fails.

Note: This example applies to creating new users by using the AWS CLI.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ThisRequiresSpecificTagsWhenYouCreateANewUsers",
            "Effect": "Allow",
            "Action": [
                "iam:CreateUser",
                "iam:TagUser"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "aws:RequestTag/CostCenter": "*"
                }
            }
        }
    ]
}

The preceding policy grants iam:CreateUser and iam:TagUser to allow creating and tagging IAM users in the AWS CLI. The Condition element uses the condition key aws:RequestTag to specify that the CostCenter tag is required during creation.
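As a sketch of the policy’s effect, assuming a hypothetical user name, the first call below succeeds because it supplies the required tag, while the second is denied because the tag is missing:

# Succeeds: the CostCenter tag is supplied at creation time
aws iam create-user --user-name ExampleUser --tags Key=CostCenter,Value=1234

# Denied: the request does not include the CostCenter tag
aws iam create-user --user-name ExampleUser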

Example policy 3: Grant permissions to manage specific tags on IAM principals

Let’s say I want an administrator on my team, Alice, to manage two tags, Project and CostCenter, for all IAM principals in our account. The following policy allows Alice to be able to assign any value to the Project tag, but limits the values she can assign to the CostCenter tag.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ViewAllTags",
            "Effect": "Allow",
            "Action": [
                "iam:ListUsers",
                "iam:ListRoles",
                "iam:ListUserTags",
                "iam:ListRoleTags"
            ],
            "Resource": "*"
        },
        {
            "Sid": "TagUserandRoleWithAnyProjectName",
            "Effect": "Allow",
            "Action": [
                "iam:TagUser",
                "iam:TagRole"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "aws:RequestTag/Project": "*"
                }
            }
        },
        {
            "Sid": "TagUserandRoleWithTwoCostCenterValues",
            "Effect": "Allow",
            "Action": [
                "iam:TagUser",
                "iam:TagRole"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "aws:RequestTag/CostCenter": [
                        "1234",
                        "5678"
                    ]
                }
            }
        },
        {
            "Sid": "UntagUserandRoleProjectCostCenter",
            "Effect": "Allow",
            "Action": [
                "iam:UntagUser",
                "iam:UntagRole"
            ],
            "Resource": "*",
            "Condition": {
                "ForAllValues:StringLike": {
                    "aws:TagKeys": [
                        "CostCenter",
                        "Project"
                    ]
                }
            }
        }
    ]
}

This policy permits Alice to view, add, and remove the Project and CostCenter tags for all principals in the account. In the Condition element of the second and third statements of the policy, I use the condition, aws:RequestTag, to define the tags Alice is allowed to add or remove as well as the values she is able to assign to those tags. Alice can assign any value to the tag, Project, but is limited to two values, 1234 and 5678, for the tag, CostCenter.
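As a sketch of the boundary of Alice’s permissions, assuming a hypothetical role name:

# Allowed: 5678 is one of the two permitted CostCenter values
aws iam tag-role --role-name ExampleRole --tags Key=CostCenter,Value=5678

# Denied: 9999 is not a permitted CostCenter value
aws iam tag-role --role-name ExampleRole --tags Key=CostCenter,Value=9999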

Now that you understand how to grant permissions to tag IAM principals, I will show you how to run the commands to tag a new user and an existing role.

How to add tags to a new IAM user

Using the CLI
Let’s say that IAM user, John, is a new team member and needs access to AWS. To manage resources, I use the following command to create John and add the CostCenter and EmailID tags.

aws iam create-user --user-name John --tags Key=CostCenter,Value=1234 Key=EmailID,Value=[email protected]

To give John access to the appropriate AWS actions and resources, you can use the CLI to attach policies to John.

Using the console
You can also add tags to a user using the AWS console through the user creation flow as shown below.

  1. Sign in to the AWS Management Console and navigate to the IAM console.
  2. In the left navigation pane, select Users, and then select Add user.
  3. Type the user name for the new user.
  4. Select the type of access this user will have. You can select programmatic access, access to the AWS Management Console, or both.
  5. Select Next: Permissions.
  6. On the Set permissions page, specify how you want to assign permissions to this set of new users. You can choose between Add user to group, Copy permissions from existing user, or Attach existing policies to user directly.
  7. Select Next: Tags.
  8. On the Add tags (optional) page, add the tags you want to attach to this principal. I add the CostCenter tag key with a value of 1234 and the EmailID tag key with a value of [email protected].
     
    Figure 1: Add tags

  9. Select Next: Review.
  10. Once you’ve reviewed all the information, select Create user. This action creates your user John with the permissions and tags you attached. You can navigate to the user Details page to view this user.

How to add tags to an existing IAM role

Using the CLI
To manage custom data for the roles in my account, I want to add tags such as Company, Project, Service, and CreationDate to my existing roles. The tag-role command adds tags to one role at a time, so I run it for each role I want to tag. For example, the following command adds the Project and Service tags with their values to a specific role, Migration:

aws iam tag-role --role-name Migration --tags Key=Project,Value=IAM Key=Service,Value=S3

To be able to run the commands I just demonstrated, you must have permissions granted to you in an IAM policy.

Using the console
You can use the console to add tags to roles individually. To do this, on the left side, select Roles, and then select the role you want to add tags to.

Figure 2: Add tags to individual roles

To view the existing tags on the role, select the Tags tab. The image below shows a Migration role in my account with two existing tags: Project with value IAM and Service with value S3. To add tags or edit the existing tags, select Edit tags.

Figure 3: Edit tags

Summary

When you tag IAM principals, you add custom attributes to the users and roles in your account to make it easier to manage your IAM resources. In this post, I reviewed the new APIs and condition keys and showed three policy examples that address use cases to grant permissions to tag your IAM principals. In a subsequent post, I will review how you can use tags on IAM principals to control access to AWS resources and other accounts.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

Want more AWS Security news? Follow us on Twitter.

The author

Sulay Shah

Sulay is the product manager for Identity and Access Management service at AWS. He strongly believes in the customer first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from North Carolina State University.

Delegate permission management to developers by using IAM permissions boundaries

Post Syndicated from Apurv Awasthi original https://aws.amazon.com/blogs/security/delegate-permission-management-to-developers-using-iam-permissions-boundaries/

Today, AWS released a new IAM feature that makes it easier for you to delegate permissions management to trusted employees. As your organization grows, you might want to allow trusted employees to configure and manage IAM permissions to help your organization scale permission management and move workloads to AWS faster. For example, you might want to grant a developer the ability to create and manage permissions for an IAM role required to run an application on Amazon EC2. This ability is powerful and might be used inappropriately or accidentally to attach an administrator access policy to obtain full access to all resources in an account. Now, you can set a permissions boundary to control the maximum permissions employees can grant to the IAM principals (that is, users and roles) that they create and manage.

A permissions boundary is an advanced feature that allows you to limit the maximum permissions that a principal can have. Before we walk you through a specific example, here is an overview of how permissions boundaries work. As the IAM administrator, you can define one or more permissions boundaries using managed policies and allow your employee to create a principal with this boundary. The employee can then attach a permissions policy to this principal. However, the effective permissions of the principal are the intersection of the permissions boundary and permissions policy. As a result, the new principal cannot exceed the boundary that you defined. See the following diagram for a visual representation.
 

Figure 1: The intersection of permissions boundary and permissions policy

In this post, we’ll walk through an example that shows how to grant an employee permission to create IAM roles and assign permissions. We’ll also show how to ensure that these IAM roles can only access Amazon DynamoDB actions and resources in the AWS EU (Frankfurt) region. This solution requires the following steps.

IAM administrator tasks

  1. Define the permissions boundary by creating a customer-managed policy.
  2. Create and attach a permissions policy to allow an employee to create roles, but only with a permissions boundary and a name that meets a specific convention.
  3. Create and attach a permissions policy to allow an employee to pass this role to Amazon EC2.

Employee tasks

  1. Create a role with the required permissions boundary.
  2. Attach a permissions policy to the role.

Administrator step 1: Define the permissions boundary

As an IAM administrator, we’ll create a customer managed policy that grants permissions to put, update, and delete items on all DynamoDB tables in the AWS EU (Frankfurt) region. We’ll require employees to set this policy as the permissions boundary for the roles they create. To follow along, paste the following JSON policy in a file with the name DynamoDB_Boundary_Frankfurt_Text.json.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:RequestedRegion": "eu-central-1"
                }
            }
        }
    ]
}

Next, use the create-policy AWS CLI command to create the policy, DynamoDB_Boundary_Frankfurt.

$aws iam create-policy --policy-name DynamoDB_Boundary_Frankfurt --policy-document file://DynamoDB_Boundary_Frankfurt_Text.json

Note: You can also use an AWS managed policy as a permissions boundary.

Administrator step 2: Create and attach the permissions policy

Create a policy that grants permissions to create IAM roles with the DynamoDB_Boundary_Frankfurt permissions boundary, and a name that begins with the prefix MyTestApp. This policy also grants permissions to create and attach IAM policies to roles with this boundary and naming convention. The permissions boundary controls the maximum permissions these roles can have. The naming convention enables administrators to more effectively grant access to manage and use these roles, without updating the employee’s permissions when they create a role. The naming convention also makes it easier to audit and identify roles created by an employee. To create this policy, paste the following JSON policy document in a file with the name Permissions_Policy_For_Employee_Text.json. Make sure to replace the variable <ACCOUNT_NUMBER> with your own AWS account number. You can update the policy to grant additional permissions, such as launching EC2 instances in a specific subnet or allowing read-only access on items in a DynamoDB table.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SetPermissionsBoundary",
            "Effect": "Allow",
            "Action": [
                "iam:CreateRole",
                "iam:AttachRolePolicy",
                "iam:DetachRolePolicy"
            ],
            "Resource": "arn:aws:iam::<ACCOUNT_NUMBER>:role/MyTestApp*",
            "Condition": {
                "StringEquals": {
                    "iam:PermissionsBoundary": "arn:aws:iam::<ACCOUNT_NUMBER>:policy/DynamoDB_Boundary_Frankfurt"
                }
            }
        },
        {
            "Sid": "CreateAndEditPermissionsPolicy",
            "Effect": "Allow",
            "Action": [
                "iam:CreatePolicy",
                "iam:CreatePolicyVersion"
            ],
            "Resource": "*"
        }
    ]
}

Next, use the create-policy command to create the customer managed policy, Permissions_Policy_For_Employee, and use the attach-role-policy command to attach this policy to the principal, MyEmployeeRole, used by your employee.

$aws iam create-policy --policy-name Permissions_Policy_For_Employee --policy-document file://Permissions_Policy_For_Employee_Text.json

$aws iam attach-role-policy --policy-arn arn:aws:iam::<ACCOUNT_NUMBER>:policy/Permissions_Policy_For_Employee --role-name MyEmployeeRole

Administrator step 3: Create and attach the permissions policy for granting permissions to pass the role

Create a policy to allow the employee to pass the roles they created to AWS services, such as Amazon EC2, enabling these services to assume the roles and perform actions on the employee’s behalf. To do this, paste the following JSON policy document in a file with the name Pass_Role_Policy_Text.json.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::<ACCOUNT_NUMBER>:role/MyTestApp*"
        }
    ]
}

Then, use the create-policy command to create the policy, Pass_Role_Policy, and the attach-role-policy command to attach this policy to the principal, MyEmployeeRole.

$aws iam create-policy --policy-name Pass_Role_Policy --policy-document file://Pass_Role_Policy_Text.json

$aws iam attach-role-policy --policy-arn arn:aws:iam::<ACCOUNT_NUMBER>:policy/Pass_Role_Policy --role-name MyEmployeeRole

As the IAM administrator, we’ve successfully defined a permissions boundary. We’ve also granted our employee the ability to create IAM roles and attach permissions policies, while ensuring the permissions of the roles don’t exceed the boundary that we set.

Managing Permissions Boundaries

Changing and modifying a permissions boundary is a powerful permission. You should reserve this permission for full administrators in an account. You can do this by ensuring that policies you use as permissions boundaries don’t include the iam:DeleteUserPermissionsBoundary and iam:DeleteRolePermissionsBoundary actions. Or, if you allow "iam:*" actions, then you must explicitly deny those actions.
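As a minimal sketch, if a boundary policy allows "iam:*", you can add an explicit deny statement along these lines so the boundary cannot be removed from the principals it constrains:

{
    "Effect": "Deny",
    "Action": [
        "iam:DeleteUserPermissionsBoundary",
        "iam:DeleteRolePermissionsBoundary"
    ],
    "Resource": "*"
}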

Employee step 1: Create a role by providing the permissions boundary

Your employee can now use the create-role command to create a new IAM role with the DynamoDB_Boundary_Frankfurt permissions boundary and the attach-role-policy command to attach permissions policies to this role.

For this post, we assume that your employee operates an application, MyTestApp, on Amazon EC2 that requires access to the Amazon DynamoDB table, MyTestApp_DDB_Table. The employee can paste the following JSON policy document and save it as Role_Trust_Policy_Text.json to define the trust policy.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Then, the employee can use the create-role command to create the IAM role, MyTestAppRole, and define the permissions boundary as DynamoDB_Boundary_Frankfurt. The create-role command will fail if the employee doesn’t provide the appropriate permissions boundary. Make sure the <ACCOUNT_NUMBER> variable is replaced with the account number in the command below.

$aws iam create-role --role-name MyTestAppRole
--assume-role-policy-document file://Role_Trust_Policy_Text.json
--permissions-boundary arn:aws:iam::<ACCOUNT_NUMBER>:policy/DynamoDB_Boundary_Frankfurt

Next, the employee grants permissions to this role by using the attach-role-policy command to attach the following policy, MyTestApp_DDB_Permissions. This policy grants the ability to perform all actions on the DynamoDB table, MyTestApp_DDB_Table.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "dynamodb:*",
            "Resource": [
                "arn:aws:dynamodb:eu-central-1:<ACCOUNT_NUMBER>:table/MyTestApp_DDB_Table"
            ]
        }
    ]
}

$aws iam attach-role-policy --policy-arn arn:aws:iam::<ACCOUNT_NUMBER>:policy/MyTestApp_DDB_Permissions
--role-name MyTestAppRole

Although the employee granted full DynamoDB access, the effective permissions for this IAM role are the intersection of the permissions boundary, DynamoDB_Boundary_Frankfurt, and the permissions policy, MyTestApp_DDB_Permissions. This means the role only has access to put, update, and delete items on the MyTestApp_DDB_Table in the AWS EU (Frankfurt) region. See the following diagram for a visual representation.
 

Figure 2: Effective permissions for the IAM role

Summary

We demonstrated how to use permissions boundaries to delegate IAM permission management. Using permissions boundaries can help you scale permission management in your organization and move workloads to AWS faster. To learn more, see the IAM documentation for permissions boundaries.

If you have comments about this post, submit them in the Comments section below. If you have questions or suggestions, please start a new thread on the IAM forum.

Want more AWS Security news? Follow us on Twitter.

Easier way to control access to AWS regions using IAM policies

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/easier-way-to-control-access-to-aws-regions-using-iam-policies/

We made it easier for you to comply with regulatory standards by controlling access to AWS Regions using IAM policies. For example, if your company requires users to create resources in a specific AWS region, you can now add a new condition to the IAM policies you attach to your IAM principal (user or role) to enforce this for all AWS services. In this post, I review conditions in policies, introduce the new condition, and review a policy example to demonstrate how you can control access across multiple AWS services to a specific region.

Condition concepts

Before I introduce the new condition, let’s review the condition element of an IAM policy. A condition is an optional IAM policy element that lets you specify special circumstances under which the policy grants or denies permission. A condition includes a condition key, operator, and value for the condition. There are two types of conditions: service-specific conditions and global conditions. Service-specific conditions are specific to certain actions in an AWS service. For example, the condition key ec2:InstanceType supports specific EC2 actions. Global conditions support all actions across all AWS services.

Now that I’ve reviewed the condition element in an IAM policy, let me introduce the new condition.

aws:RequestedRegion condition key

The new global condition key, aws:RequestedRegion, supports all actions across all AWS services. You can use any string operator and specify any AWS region for its value.

  • aws:RequestedRegion: Allows you to specify the region to which the IAM principal (user or role) can make API calls. Operators: all string operators (for example, StringEquals). Value: any AWS region (for example, us-east-1).

I’ll now demonstrate the use of the new global condition key.

Example: Policy with region-level control

Let’s say a group of software developers in my organization is working on a project using Amazon EC2 and Amazon RDS. The project requires a web server running on an EC2 instance using Amazon Linux and a MySQL database instance in RDS. The developers also want to test Amazon Lambda, an event-driven platform, to retrieve data from the MySQL DB instance in RDS for future use.

My organization requires all the AWS resources to remain in the Frankfurt, eu-central-1, region. To make sure this project follows these guidelines, I create a single IAM policy for all the AWS services that this group is going to use and apply the new global condition key aws:RequestedRegion for all the services. This way I can ensure that any new EC2 instances launched or any database instances created using RDS are in Frankfurt. This policy also ensures that any Lambda functions this group creates for testing are also in the Frankfurt region.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeAccountAttributes",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeInternetGateways",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeVpcAttribute",
                "ec2:DescribeVpcs",
                "ec2:DescribeInstances",
                "ec2:DescribeImages",
                "ec2:DescribeKeyPairs",
                "rds:Describe*",
                "iam:ListRolePolicies",
                "iam:ListRoles",
                "iam:GetRole",
                "iam:ListInstanceProfiles",
                "iam:AttachRolePolicy",
                "lambda:GetAccountSettings"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:RunInstances",
                "rds:CreateDBInstance",
                "rds:CreateDBCluster",
                "lambda:CreateFunction",
                "lambda:InvokeFunction"
            ],
            "Resource": "*",
      "Condition": {"StringEquals": {"aws:RequestedRegion": "eu-central-1"}}

        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": "arn:aws:iam::account-id:role/*"
        }
    ]
}

The first statement in the above example contains all the read-only actions that let my developers use the console for EC2, RDS, and Lambda. The permissions for IAM-related actions are required to launch EC2 instances with a role, enable enhanced monitoring in RDS, and for AWS Lambda to assume the IAM execution role to execute the Lambda function. I’ve combined all the read-only actions into a single statement for simplicity. The second statement is where I give write access to my developers for the three services and restrict the write access to the Frankfurt region using the aws:RequestedRegion condition key. You can also list multiple AWS regions with the new condition key if your developers are allowed to create resources in multiple regions. The third statement grants permissions for the IAM action iam:PassRole required by AWS Lambda. For more information on allowing users to create a Lambda function, see Using Identity-Based Policies for AWS Lambda.
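For example, a sketch of the same Condition element listing multiple Regions (the second Region is an arbitrary example):

"Condition": {
    "StringEquals": {
        "aws:RequestedRegion": ["eu-central-1", "eu-west-1"]
    }
}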

Summary

You can now use the aws:RequestedRegion global condition key in your IAM policies to specify the Regions in which the IAM principal (user or role) can make API calls. This capability makes it easier for you to restrict the AWS regions your IAM principals can use to comply with regulatory standards and improve account security. For more information about this global condition key and policy examples using aws:RequestedRegion, see the IAM documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

Want more AWS Security news? Follow us on Twitter.

Rotate Amazon RDS database credentials automatically with AWS Secrets Manager

Post Syndicated from Apurv Awasthi original https://aws.amazon.com/blogs/security/rotate-amazon-rds-database-credentials-automatically-with-aws-secrets-manager/

Recently, we launched AWS Secrets Manager, a service that makes it easier to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. You can configure Secrets Manager to rotate secrets automatically, which can help you meet your security and compliance needs. Secrets Manager offers built-in integrations for MySQL, PostgreSQL, and Amazon Aurora on Amazon RDS, and can rotate credentials for these databases natively. You can control access to your secrets by using fine-grained AWS Identity and Access Management (IAM) policies. To retrieve secrets, employees replace plaintext secrets with a call to Secrets Manager APIs, eliminating the need to hard-code secrets in source code or update configuration files and redeploy code when secrets are rotated.

In this post, I introduce the key features of Secrets Manager. I then show you how to store a database credential for a MySQL database hosted on Amazon RDS and how your applications can access this secret. Finally, I show you how to configure Secrets Manager to rotate this secret automatically.

Key features of Secrets Manager

These features include the ability to:

  • Rotate secrets safely. You can configure Secrets Manager to rotate secrets automatically without disrupting your applications. Secrets Manager offers built-in integrations for rotating credentials for Amazon RDS databases for MySQL, PostgreSQL, and Amazon Aurora. You can extend Secrets Manager to meet your custom rotation requirements by creating an AWS Lambda function to rotate other types of secrets. For example, you can create an AWS Lambda function to rotate OAuth tokens used in a mobile application. Users and applications retrieve the secret from Secrets Manager, eliminating the need to email secrets to developers or update and redeploy applications after AWS Secrets Manager rotates a secret.
  • Secure and manage secrets centrally. You can store, view, and manage all your secrets. By default, Secrets Manager encrypts these secrets with encryption keys that you own and control. Using fine-grained IAM policies, you can control access to secrets. For example, you can require developers to provide a second factor of authentication when they attempt to retrieve a production database credential. You can also tag secrets to help you discover, organize, and control access to secrets used throughout your organization.
  • Monitor and audit easily. Secrets Manager integrates with AWS logging and monitoring services to enable you to meet your security and compliance requirements. For example, you can audit AWS CloudTrail logs to see when Secrets Manager rotated a secret or configure AWS CloudWatch Events to alert you when an administrator deletes a secret.
  • Pay as you go. Pay for the secrets you store in Secrets Manager and for the use of these secrets; there are no long-term contracts or licensing fees.

Get started with Secrets Manager

Now that you’re familiar with the key features, I’ll show you how to store the credential for a MySQL database hosted on Amazon RDS. To demonstrate how to retrieve and use the secret, I use a Python application running on Amazon EC2 that requires this database credential to access the MySQL instance. Finally, I show how to configure Secrets Manager to rotate this database credential automatically. Let’s get started.

Phase 1: Store a secret in Secrets Manager

  1. Open the Secrets Manager console and select Store a new secret.
     
    Secrets Manager console interface
     
  2. I select Credentials for RDS database because I’m storing credentials for a MySQL database hosted on Amazon RDS. For this example, I store the credentials for the database superuser. I start by securing the superuser because it’s the most powerful database credential and has full access over the database.
     
    Store a new secret interface with Credentials for RDS database selected
     

    Note: For this example, you need permissions to store secrets in Secrets Manager. To grant these permissions, you can use the AWSSecretsManagerReadWriteAccess managed policy. Read the AWS Secrets Manager Documentation for more information about the minimum IAM permissions required to store a secret.

  3. Next, I review the encryption setting and choose to use the default encryption settings. Secrets Manager will encrypt this secret using the Secrets Manager DefaultEncryptionKey in this account. Alternatively, I can choose to encrypt using a customer master key (CMK) that I have stored in AWS KMS.
     
    Select the encryption key interface
     
  4. Next, I view the list of Amazon RDS instances in my account and select the database this credential accesses. For this example, I select the DB instance mysql-rds-database, and then I select Next.
     
    Select the RDS database interface
     
  5. In this step, I specify values for Secret Name and Description. For this example, I use Applications/MyApp/MySQL-RDS-Database as the name and enter a description of this secret, and then select Next.
     
    Secret Name and description interface
     
  6. For the next step, I keep the default setting Disable automatic rotation because my secret is used by my application running on Amazon EC2. I’ll enable rotation after I’ve updated my application (see Phase 2 below) to use Secrets Manager APIs to retrieve secrets. I then select Next.

    Note: If you’re storing a secret that you’re not using in your application, select Enable automatic rotation. See our AWS Secrets Manager getting started guide on rotation for details.

     
    Configure automatic rotation interface
     

  7. Review the information on the next screen and, if everything looks correct, select Store. We’ve now successfully stored a secret in Secrets Manager.
  8. Next, I select See sample code.
     
    The See sample code button
     
  9. Take note of the code samples provided. I will use this code to update my application to retrieve the secret using Secrets Manager APIs.
     
    Python sample code
     

Phase 2: Update an application to retrieve secret from Secrets Manager

Now that I have stored the secret in Secrets Manager, I update my application to retrieve the database credential from Secrets Manager instead of hard coding this information in a configuration file or source code. For this example, I show how to configure a Python application to retrieve this secret from Secrets Manager.

  1. I connect to my Amazon EC2 instance via Secure Shell (SSH).
  2. Previously, I configured my application to retrieve the database user name and password from the configuration file. Below is the source code for my application.
    import MySQLdb
    import config

    def no_secrets_manager_sample():

        # Get the user name, password, and database connection information from a config file.
        database = config.database
        user_name = config.user_name
        password = config.password

        # Use the user name, password, and database connection information to connect to the database
        db = MySQLdb.connect(database.endpoint, user_name, password, database.db_name, database.port)

  3. I use the sample code from Phase 1 above and update my application to retrieve the user name and password from Secrets Manager. This code sets up the client and retrieves and decrypts the secret Applications/MyApp/MySQL-RDS-Database. I’ve added comments to the code to make the code easier to understand.
    # Use the code snippet provided by Secrets Manager.
    import boto3
    from botocore.exceptions import ClientError

    def get_secret():
        # Define the secret you want to retrieve
        secret_name = "Applications/MyApp/MySQL-RDS-Database"
        # Define the Secrets Manager endpoint your code should use
        endpoint_url = "https://secretsmanager.us-east-1.amazonaws.com"
        region_name = "us-east-1"

        # Set up the client
        session = boto3.session.Session()
        client = session.client(
            service_name='secretsmanager',
            region_name=region_name,
            endpoint_url=endpoint_url
        )

        # Use the client to retrieve the secret
        try:
            get_secret_value_response = client.get_secret_value(
                SecretId=secret_name
            )
        # Error handling to make it easier for your code to tolerate faults
        except ClientError as e:
            if e.response['Error']['Code'] == 'ResourceNotFoundException':
                print("The requested secret " + secret_name + " was not found")
            elif e.response['Error']['Code'] == 'InvalidRequestException':
                print("The request was invalid due to:", e)
            elif e.response['Error']['Code'] == 'InvalidParameterException':
                print("The request had invalid params:", e)
        else:
            # Decrypted secret using the associated KMS CMK
            # Depending on whether the secret was a string or binary, one of these fields will be populated
            if 'SecretString' in get_secret_value_response:
                secret = get_secret_value_response['SecretString']
            else:
                binary_secret_data = get_secret_value_response['SecretBinary']

        # Your code goes here. See the connection sketch after this list.

  4. Applications require permissions to access Secrets Manager. My application runs on Amazon EC2 and uses an IAM role to obtain access to AWS services. I will attach the following policy to my IAM role. This policy uses the GetSecretValue action to grant my application permissions to read the secret from Secrets Manager. This policy also uses the Resource element to limit my application to read only the Applications/MyApp/MySQL-RDS-Database secret from Secrets Manager. You can visit the AWS Secrets Manager Documentation to understand the minimum IAM permissions required to retrieve a secret.
    {
        "Version": "2012-10-17",
        "Statement": {
            "Sid": "RetrieveDbCredentialFromSecretsManager",
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "arn:aws:secretsmanager:::secret:Applications/MyApp/MySQL-RDS-Database"
        }
    }
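With the secret stored and the permissions in place, the application can parse the retrieved secret and connect to the database. The following is a minimal sketch, assuming the SecretString returned for an RDS secret is a JSON document with username, password, host, port, and dbname keys:

import json
import MySQLdb

# 'secret' is the SecretString value retrieved by get_secret() above
secret_dict = json.loads(secret)

# Connect by using the values stored in the secret instead of a config file
db = MySQLdb.connect(
    host=secret_dict['host'],
    user=secret_dict['username'],
    passwd=secret_dict['password'],
    db=secret_dict['dbname'],
    port=int(secret_dict['port'])
)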

Phase 3: Enable Rotation for Your Secret

Rotating secrets periodically is a security best practice because it reduces the risk of misuse of secrets. Secrets Manager makes it easy to follow this security best practice and offers built-in integrations for rotating credentials for MySQL, PostgreSQL, and Amazon Aurora databases hosted on Amazon RDS. When you enable rotation, Secrets Manager creates a Lambda function and attaches an IAM role to this function to execute rotations on a schedule you define.

Note: Configuring rotation is a privileged action that requires several IAM permissions, and you should only grant this access to trusted individuals. To grant these permissions, you can use the IAMFullAccess AWS managed policy.

Next, I show you how to configure Secrets Manager to rotate the secret Applications/MyApp/MySQL-RDS-Database automatically.

  1. From the Secrets Manager console, I go to the list of secrets and choose the secret I created in the first step, Applications/MyApp/MySQL-RDS-Database.
     
    List of secrets in the Secrets Manager console
     
  2. I scroll to Rotation configuration, and then select Edit rotation.
     
    Rotation configuration interface
     
  3. To enable rotation, I select Enable automatic rotation. I then choose how frequently I want Secrets Manager to rotate this secret. For this example, I set the rotation interval to 60 days.
     
    Edit rotation configuration interface
     
  4. Next, Secrets Manager requires permissions to rotate this secret on your behalf. Because I’m storing the superuser database credential, Secrets Manager can use this credential to perform rotations. Therefore, I select Use the secret that I provided in step 1, and then select Next.
     
    Select which secret to use in the Edit rotation configuration interface
     
  5. The banner on the next screen confirms that I have successfully configured rotation and that the first rotation is in progress, which enables me to verify that rotation is functioning as expected. Secrets Manager will rotate this credential automatically every 60 days.
     
    Confirmation banner message
     
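Rotation can also be enabled programmatically. Here’s a minimal boto3 sketch, assuming a rotation Lambda function already exists (the function ARN below is a placeholder; when you configure rotation for an RDS database secret through the console, Secrets Manager creates this function for you):

import boto3

secrets_client = boto3.client('secretsmanager')

# Rotate the secret every 60 days using an existing rotation function.
# The Lambda ARN below is a placeholder.
secrets_client.rotate_secret(
    SecretId='Applications/MyApp/MySQL-RDS-Database',
    RotationLambdaARN='arn:aws:lambda:us-east-1:123456789012:function:MyRotationFunction',
    RotationRules={'AutomaticallyAfterDays': 60}
)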

Summary

I introduced AWS Secrets Manager, explained the key benefits, and showed you how to help meet your compliance requirements by configuring AWS Secrets Manager to rotate database credentials automatically on your behalf. Secrets Manager helps you protect access to your applications, services, and IT resources without the upfront investment and ongoing maintenance costs of operating your own secrets management infrastructure. To get started, visit the Secrets Manager console. To learn more, visit the Secrets Manager documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Secrets Manager forum.

Want more AWS Security news? Follow us on Twitter.

AWS Certificate Manager Launches Private Certificate Authority

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/aws-certificate-manager-launches-private-certificate-authority/

Today we’re launching a new feature for AWS Certificate Manager (ACM), Private Certificate Authority (CA). This new service allows ACM to act as a private subordinate CA. Previously, if a customer wanted to use private certificates, they needed specialized infrastructure and security expertise that could be expensive to maintain and operate. ACM Private CA builds on ACM’s existing certificate capabilities to help you easily and securely manage the lifecycle of your private certificates with pay-as-you-go pricing. This enables developers to provision certificates in just a few simple API calls, while administrators have a central CA management console and fine-grained access control through granular IAM policies. ACM Private CA keys are stored securely in AWS managed hardware security modules (HSMs) that adhere to FIPS 140-2 Level 3 security standards. ACM Private CA automatically maintains certificate revocation lists (CRLs) in Amazon Simple Storage Service (S3) and lets administrators generate audit reports of certificate creation with the API or console. This service is packed full of features, so let’s jump in and provision a CA.

Provisioning a Private Certificate Authority (CA)

First, I’ll navigate to the ACM console in my region and select the new Private CAs section in the sidebar. From there I’ll click Get Started to start the CA wizard. For now, I only have the option to provision a subordinate CA, so I’ll select that, use my super secure desktop as the root CA, and click Next. This isn’t what I would do in a production setting, but it will work for testing out our private CA.

Now, I’ll configure the CA with some common details. The most important thing here is the Common Name, which I’ll set as secure.internal to represent my internal domain.

Now I need to choose my key algorithm. You should choose the best algorithm for your needs, but know that ACM has a limitation today in that it can only manage certificates that chain up to RSA CAs. For now, I’ll go with RSA 2048 bit and click Next.

In this next screen, I’m able to configure my certificate revocation list (CRL). CRLs are essential for notifying clients in the case that a certificate has been compromised before certificate expiration. ACM will maintain the revocation list for me, and I have the option of routing my S3 bucket to a custom domain. In this case I’ll create a new S3 bucket to store my CRL in and click Next.

Finally, I’ll review all the details to make sure I didn’t make any typos and click Confirm and create.

A few seconds later and I’m greeted with a fancy screen saying I successfully provisioned a certificate authority. Hooray! I’m not done yet though. I still need to activate my CA by creating a certificate signing request (CSR) and signing that with my root CA. I’ll click Get started to begin that process.

Now I’ll copy the CSR or download it to a server or desktop that has access to my root CA (or potentially another subordinate – so long as it chains to a trusted root for my clients).

Now I can use a tool like openssl to sign my cert and generate the certificate chain.


$ openssl ca -config openssl_root.cnf -extensions v3_intermediate_ca -days 3650 -notext -md sha256 -in csr/CSR.pem -out certs/subordinate_cert.pem
Using configuration from openssl_root.cnf
Enter pass phrase for /Users/randhunt/dev/amzn/ca/private/root_private_key.pem:
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
stateOrProvinceName   :ASN.1 12:'Washington'
localityName          :ASN.1 12:'Seattle'
organizationName      :ASN.1 12:'Amazon'
organizationalUnitName:ASN.1 12:'Engineering'
commonName            :ASN.1 12:'secure.internal'
Certificate is to be certified until Mar 31 06:05:30 2028 GMT (3650 days)
Sign the certificate? [y/n]:y


1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated

After that I’ll copy my subordinate_cert.pem and certificate chain back into the console and click Next.

Finally, I’ll review all the information and click Confirm and import. I should see a screen like the one below that shows my CA has been activated successfully.

Now that I have a private CA, we can provision private certificates by hopping back to the ACM console and creating a new certificate. After clicking Create a new certificate, I’ll select the Request a private certificate radio button, and then click Request a certificate.

From there, the process is similar to provisioning a normal certificate in ACM.

Now I have a private certificate that I can bind to my ELBs, CloudFront Distributions, API Gateways, and more. I can also export the certificate for use on embedded devices or outside of ACM managed environments.
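To automate issuance, a minimal boto3 sketch might look like the following (the domain name and CA ARN are placeholders standing in for the internal domain and private CA created above):

import boto3

acm = boto3.client('acm')

# Request a private certificate from the private CA.
# Both the domain name and the CA ARN below are placeholders.
response = acm.request_certificate(
    DomainName='app.secure.internal',
    CertificateAuthorityArn='arn:aws:acm-pca:us-east-1:123456789012:certificate-authority/EXAMPLE'
)
print(response['CertificateArn'])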

Available Now
ACM Private CA is a service in and of itself, and it is packed full of features that won’t fit into a blog post. I strongly encourage interested readers to go through the developer guide and familiarize themselves with certificate-based security. ACM Private CA is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt) and EU (Ireland). Private CAs cost $400 per month (prorated) for each private CA. You are not charged for certificates created and maintained in ACM, but you are charged for certificates where you have access to the private key (exported or created outside of ACM). The pricing per certificate is tiered, starting at $0.75 per certificate for the first 1,000 certificates and going down to $0.001 per certificate after 10,000 certificates.

I’m excited to see administrators and developers take advantage of this new service. As always please let us know what you think of this service on Twitter or in the comments below.

Randall

AWS Secrets Manager: Store, Distribute, and Rotate Credentials Securely

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/aws-secrets-manager-store-distribute-and-rotate-credentials-securely/

Today we’re launching AWS Secrets Manager, which makes it easy to store and retrieve your secrets via API or the AWS Command Line Interface (CLI) and rotate your credentials with built-in or custom AWS Lambda functions. Managing application secrets like database credentials, passwords, or API keys is easy when you’re working locally with one machine and one application. As you grow and scale to many distributed microservices, it becomes a daunting task to securely store, distribute, rotate, and consume secrets. Previously, customers needed to provision and maintain additional infrastructure solely for secrets management, which could incur costs and introduce unneeded complexity into systems.

AWS Secrets Manager

Imagine that I have an application that takes incoming tweets from Twitter and stores them in an Amazon Aurora database. Previously, I would have had to request a username and password from my database administrator and embed those credentials in environment variables or, in my race to production, even in the application itself. I would also need to have our social media manager create the Twitter API credentials and figure out how to store those. This is a fairly manual process, involving multiple people, that I have to restart every time I want to rotate these credentials. With Secrets Manager, my database administrator can provide the credentials in Secrets Manager once and subsequently rely on a Secrets Manager-provided Lambda function to automatically update and rotate those credentials. My social media manager can put the Twitter API keys in Secrets Manager, which I can then access with a simple API call, and I can even rotate these programmatically with a custom Lambda function calling out to the Twitter API. My secrets are encrypted with the KMS key of my choice, and each of these administrators can explicitly grant access to these secrets with granular IAM policies for individual roles or users.

Let’s take a look at how I would store a secret using the AWS Secrets Manager console. First, I’ll click Store a new secret to get to the new secrets wizard. For my RDS Aurora instance it’s straightforward to simply select the instance and provide the initial username and password to connect to the database.

Next, I’ll fill in a quick description and a name to access my secret by. You can use whatever naming scheme you want here.

Next, we’ll configure rotation to use the Secrets Manager-provided Lambda function to rotate our password every 10 days.

Finally, we’ll review all the details and check out our sample code for storing and retrieving our secret!

Finally I can review the secrets in the console.

Now, if I needed to access these secrets I’d simply call the API.

import json
import boto3
secrets = boto3.client("secretsmanager")
rds = json.loads(secrets.get_secret_value(SecretId="prod/TwitterApp/Database")['SecretString'])
print(rds)

Which would give me the following values:


{'engine': 'mysql',
 'host': 'twitterapp2.abcdefg.us-east-1.rds.amazonaws.com',
 'password': '-)Kw>THISISAFAKEPASSWORD:lg{&sad+Canr',
 'port': 3306,
 'username': 'ranman'}

More than passwords

AWS Secrets Manager works for more than just passwords. I can store OAuth credentials, binary data, and more. Let’s look at storing my Twitter OAuth application keys.
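A minimal boto3 sketch of the same operation, with a hypothetical secret name and hypothetical key values:

import json
import boto3

secrets = boto3.client('secretsmanager')

# Store Twitter OAuth credentials as a JSON secret.
# The secret name and values here are hypothetical.
secrets.create_secret(
    Name='prod/TwitterApp/OAuth',
    Description='Twitter API keys for the tweet-ingestion app',
    SecretString=json.dumps({
        'consumer_key': 'EXAMPLE_KEY',
        'consumer_secret': 'EXAMPLE_SECRET'
    })
)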

Now, I can define the rotation for these third-party OAuth credentials with a custom AWS Lambda function that can call out to Twitter whenever we need to rotate our credentials.

Custom Rotation

One of the niftiest features of AWS Secrets Manager is custom AWS Lambda functions for credential rotation. This allows you to define completely custom workflows for credentials. Secrets Manager will call your Lambda function with a payload that includes a Step which specifies which step of the rotation you’re in, a SecretId which specifies which secret the rotation is for, and importantly a ClientRequestToken which is used to ensure idempotency in any changes to the underlying secret.

When you’re rotating secrets you go through a few different steps:

  1. createSecret
  2. setSecret
  3. testSecret
  4. finishSecret

The advantage of these steps is that you can add any kind of approval steps you want for each phase of the rotation. For more details on custom rotation check out the documentation.
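As an illustration, a skeleton rotation handler might dispatch on these steps as follows (a minimal sketch, not a complete rotation implementation):

def lambda_handler(event, context):
    # Secrets Manager invokes the rotation function once per step.
    step = event['Step']                  # createSecret | setSecret | testSecret | finishSecret
    secret_id = event['SecretId']         # ARN of the secret being rotated
    token = event['ClientRequestToken']   # ensures idempotency across retries

    if step == 'createSecret':
        pass  # generate and stage the new secret value
    elif step == 'setSecret':
        pass  # update the credential in the downstream service (e.g., Twitter)
    elif step == 'testSecret':
        pass  # verify that the new credential works
    elif step == 'finishSecret':
        pass  # promote the staged version to current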

Available Now
AWS Secrets Manager is available today in US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), and South America (São Paulo). Secrets are priced at $0.40 per month per secret and $0.05 per 10,000 API calls. I’m looking forward to seeing more users adopt rotating credentials to secure their applications!

Randall

Tag Amazon EBS Snapshots on Creation and Implement Stronger Security Policies

Post Syndicated from Woo Kim original https://aws.amazon.com/blogs/compute/tag-amazon-ebs-snapshots-on-creation-and-implement-stronger-security-policies/

This blog was contributed by Rucha Nene, Sr. Product Manager for Amazon EBS

AWS customers use tags to track ownership of resources, implement compliance protocols, control access to resources via IAM policies, and drive their cost accounting processes. Last year, we made tagging for Amazon EC2 instances and Amazon EBS volumes easier by adding the ability to tag these resources upon creation. We are now extending this capability to EBS snapshots.

Earlier, you could tag your EBS snapshots only after the resource had been created, and you sometimes ended up with EBS snapshots in an untagged state if tagging failed. You also could not control the actions that users and groups could take over specific snapshots, or enforce tighter security policies.

To address these issues, we are making tagging for EBS snapshots more flexible and giving customers more control over EBS snapshots by introducing two new capabilities:

  • Tag on creation for EBS snapshots – You can now specify tags for EBS snapshots as part of the API call that creates the resource or via the Amazon EC2 Console when creating an EBS snapshot.
  • Resource-level permissions and enforced tag usage – The CreateSnapshot, DeleteSnapshot, and ModifySnapshotAttribute API actions now support IAM resource-level permissions. You can now write IAM policies that mandate the use of specific tags when taking actions on EBS snapshots.

Tag on creation

You can now specify tags for EBS snapshots as part of the API call that creates the resources. The resource creation and the tagging are performed atomically; both must succeed in order for the operation CreateSnapshot to succeed. You no longer need to build tagging scripts that run after EBS snapshots have been created.

Here’s how you specify tags when you create an EBS snapshot, using the console:

  1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
  2. In the navigation pane, choose Snapshots, Create Snapshot.
  3. On the Create Snapshot page, select the volume for which to create a snapshot.
  4. (Optional) Choose Add tags to your snapshot. For each tag, provide a tag key and a tag value.
  5. Choose Create Snapshot.

Using the AWS CLI:

aws ec2 create-snapshot --volume-id vol-0c0e757e277111f3c --description 'Prod_Backup' --tag-specifications \
'ResourceType=snapshot,Tags=[{Key=costcenter,Value=115},{Key=IsProd,Value=Yes}]'

To learn more, see Using Tags.
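If you prefer boto3 to the CLI, the equivalent call might look like this (a sketch using the same placeholder volume ID and tags as the CLI example above):

import boto3

ec2 = boto3.client('ec2')

# Create a snapshot and tag it atomically in the same API call.
ec2.create_snapshot(
    VolumeId='vol-0c0e757e277111f3c',
    Description='Prod_Backup',
    TagSpecifications=[{
        'ResourceType': 'snapshot',
        'Tags': [
            {'Key': 'costcenter', 'Value': '115'},
            {'Key': 'IsProd', 'Value': 'Yes'}
        ]
    }]
)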

Resource-level permissions and enforced tag usage

CreateSnapshot, DeleteSnapshot, and ModifySnapshotAttribute now support resource-level permissions, which allow you to exercise more control over EBS snapshots. You can write IAM policies that give you precise control over access to resources and let you specify which users are able to create snapshots for a given set of volumes. You can also enforce the use of specific tags to help track resources and achieve more accurate cost allocation reporting.

For example, here’s a statement that requires that the costcenter tag (with a value of “115”) be present on the volume from which snapshots are being created. It also requires that this tag, along with a User tag set to the requester’s user name, be applied to all newly created snapshots.

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":"ec2:CreateSnapshot",
         "Resource":"arn:aws:ec2:us-east-1:123456789012:volume/*",
         "Condition":{
            "StringEquals":{
               "ec2:ResourceTag/costcenter":"115"
            }
         }
      },
      {
         "Sid":"AllowCreateTaggedSnapshots",
         "Effect":"Allow",
         "Action":"ec2:CreateSnapshot",
         "Resource":"arn:aws:ec2:us-east-1::snapshot/*",
         "Condition":{
            "StringEquals":{
               "aws:RequestTag/costcenter":"115",
               "aws:RequestTag/User":"${aws:username}"
            },
            "ForAllValues:StringEquals":{
               "aws:TagKeys":[
                  "costcenter",
                  "User"
               ]
            }
         }
      },
      {
         "Effect":"Allow",
         "Action":"ec2:CreateTags",
         "Resource":"arn:aws:ec2:us-east-1::snapshot/*",
         "Condition":{
            "StringEquals":{
               "ec2:CreateAction":"CreateSnapshot"
            }
         }
      }
   ]
}

To implement stronger compliance and security policies, you could also restrict access to DeleteSnapshot if the resource is not tagged with the user’s name. Here’s a statement that allows the deletion of a snapshot only if the snapshot is tagged with the requesting user’s name.

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":"ec2:DeleteSnapshot",
         "Resource":"arn:aws:ec2:us-east-1::snapshot/*",
         "Condition":{
            "StringEquals":{
               "ec2:ResourceTag/User":"${aws:username}"
            }
         }
      }
   ]
}

To learn more and to see some sample policies, see IAM Policies for Amazon EC2 and Working with Snapshots.

Available Now

These new features are available now in all AWS Regions. You can start using them today from the Amazon EC2 Console, AWS Command Line Interface (CLI), or the AWS APIs.

Now Available – AWS Serverless Application Repository

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-aws-serverless-application-repository/

Last year I suggested that you Get Ready for the AWS Serverless Application Repository and gave you a sneak peek. The Repository is designed to make it as easy as possible for you to discover, configure, and deploy serverless applications and components on AWS. It is also an ideal venue for AWS partners, enterprise customers, and independent developers to share their serverless creations.

Now Available
After a well-received public preview, the AWS Serverless Application Repository is now generally available and you can start using it today!

As a consumer, you will be able to tap into a thriving ecosystem of serverless applications and components that will be a perfect complement to your machine learning, image processing, IoT, and general-purpose work. You can configure and consume them as-is, or you can take them apart, add features, and submit pull requests to the author.

As a publisher, you can publish your contribution in the Serverless Application Repository with ease. You simply enter a name and a description, choose some labels to increase discoverability, select an appropriate open source license from a menu, and supply a README to help users get started. Then you enter a link to your existing source code repo, choose a SAM template, and designate a semantic version.

Let’s take a look at both operations…

Consuming a Serverless Application
The Serverless Application Repository is accessible from the Lambda Console. I can page through the existing applications or I can initiate a search:

A search for “todo” returns some interesting results:

I simply click on an application to learn more:

I can configure the application and deploy it right away if I am already familiar with the application:

I can expand each of the sections to learn more. The Permissions section tells me which IAM policies will be used:

And the Template section displays the SAM template that will be used to deploy the application:

I can inspect the template to learn more about the AWS resources that will be created when the template is deployed. I can also use the templates as a learning resource in preparation for creating and publishing my own application.

The License section displays the application’s license:

To deploy todo, I name the application and click Deploy:

Deployment starts immediately and is done within a minute (application deployment time will vary, depending on the number and type of resources to be created):

I can see all of my deployed applications in the Lambda Console:

There’s currently no way for a SAM template to indicate that an API Gateway function returns binary media types, so I set this up by hand and then re-deploy the API:
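One way to script this manual step is with a pair of boto3 calls (a sketch; the REST API ID and stage name are placeholders, and the '~1' in the patch path is the JSON-pointer escape for '/'):

import boto3

apigw = boto3.client('apigateway')

# Register a binary media type on the API, then redeploy the stage.
# The REST API ID and stage name below are placeholders.
apigw.update_rest_api(
    restApiId='abcde12345',
    patchOperations=[{'op': 'add', 'path': '/binaryMediaTypes/image~1png'}]
)
apigw.create_deployment(restApiId='abcde12345', stageName='Prod')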

Following the directions in the Readme, I open the API Gateway Console and find the URL for the app in the API Gateway Dashboard:

I visit the URL and enter some items into my list:

Publishing a Serverless Application
Publishing applications is a breeze! I visit the Serverless App Repository page and click on Publish application to get started:

Then I assign a name to my application, enter my own name, and so forth:

I can choose from a long list of open-source friendly SPDX licenses:

I can create an initial version of my application at this point, or I can do it later. Either way, I simply provide a version number, a URL to a public repository containing my code, and a SAM template:
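Publishing is also available programmatically. A minimal boto3 sketch, with placeholder values throughout:

import boto3

repo = boto3.client('serverlessrepo')

# Publish an application with an initial version; every value is a placeholder.
repo.create_application(
    Author='Jane Doe',
    Name='my-todo-app',
    Description='A sample serverless to-do application',
    SpdxLicenseId='MIT',
    SemanticVersion='1.0.0',
    SourceCodeUrl='https://github.com/example/my-todo-app',
    TemplateBody=open('template.yaml').read()
)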

Available Now
The AWS Serverless Application Repository is available now and you can start using it today, paying only for the AWS resources consumed by the serverless applications that you deploy.

You can deploy applications in the US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), and South America (São Paulo) Regions. You can publish from the US East (N. Virginia) or US East (Ohio) Regions for global availability.

Jeff;


Sharing Secrets with AWS Lambda Using AWS Systems Manager Parameter Store

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/sharing-secrets-with-aws-lambda-using-aws-systems-manager-parameter-store/

This post courtesy of Roberto Iturralde, Sr. Application Developer- AWS Professional Services

Application architects are faced with key decisions throughout the process of designing and implementing their systems. One decision common to nearly all solutions is how to manage the storage and access rights of application configuration. Shared configuration should be stored centrally and securely with each system component having access only to the properties that it needs for functioning.

With AWS Systems Manager Parameter Store, developers have access to central, secure, durable, and highly available storage for application configuration and secrets. Parameter Store also integrates with AWS Identity and Access Management (IAM), allowing fine-grained access control to individual parameters or branches of a hierarchical tree.

This post demonstrates how to create and access shared configurations in Parameter Store from AWS Lambda. Both encrypted and plaintext parameter values are stored with only the Lambda function having permissions to decrypt the secrets. You also use AWS X-Ray to profile the function.

Solution overview

This example is made up of the following components:

  • An AWS SAM template that defines:
    • A Lambda function and its permissions
    • An unencrypted Parameter Store parameter that the Lambda function loads
    • A KMS key that only the Lambda function can access. You use this key to create an encrypted parameter later.
  • Lambda function code in Python 3.6 that demonstrates how to load values from Parameter Store at function initialization for reuse across invocations.

Launch the AWS SAM template

To create the resources shown in this post, you can download the SAM template or choose the button to launch the stack. The template requires one parameter, an IAM user name, which is the name of the IAM user to be the admin of the KMS key that you create. In order to perform the steps listed in this post, this IAM user will need permissions to execute Lambda functions, create Parameter Store parameters, administer keys in KMS, and view the X-Ray console. If you have these privileges in your IAM user account, you can use your own account to complete the walkthrough. You cannot use the root user to administer the KMS keys.

SAM template resources

The following sections show the code for the resources defined in the template.
Lambda function

ParameterStoreBlogFunctionDev:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: 'ParameterStoreBlogFunctionDev'
      Description: 'Integrating lambda with Parameter Store'
      Handler: 'lambda_function.lambda_handler'
      Role: !GetAtt ParameterStoreBlogFunctionRoleDev.Arn
      CodeUri: './code'
      Environment:
        Variables:
          ENV: 'dev'
          APP_CONFIG_PATH: 'parameterStoreBlog'
          AWS_XRAY_TRACING_NAME: 'ParameterStoreBlogFunctionDev'
      Runtime: 'python3.6'
      Timeout: 5
      Tracing: 'Active'

  ParameterStoreBlogFunctionRoleDev:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          -
            Effect: Allow
            Principal:
              Service:
                - 'lambda.amazonaws.com'
            Action:
              - 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'
      Policies:
        -
          PolicyName: 'ParameterStoreBlogDevParameterAccess'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              -
                Effect: Allow
                Action:
                  - 'ssm:GetParameter*'
                Resource: !Sub 'arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/dev/parameterStoreBlog*'
        -
          PolicyName: 'ParameterStoreBlogDevXRayAccess'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              -
                Effect: Allow
                Action:
                  - 'xray:PutTraceSegments'
                  - 'xray:PutTelemetryRecords'
                Resource: '*'

In this YAML code, you define a Lambda function named ParameterStoreBlogFunctionDev using the SAM AWS::Serverless::Function type. The environment variables for this function include the ENV (dev) and the APP_CONFIG_PATH where you find the configuration for this app in Parameter Store. X-Ray tracing is also enabled for profiling later.

The IAM role for this function extends the AWSLambdaBasicExecutionRole by adding IAM policies that grant the function permissions to write to X-Ray and get parameters from Parameter Store, limited to paths under /dev/parameterStoreBlog*.
Parameter Store parameter

SimpleParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Name: '/dev/parameterStoreBlog/appConfig'
      Description: 'Sample dev config values for my app'
      Type: String
      Value: '{"key1": "value1","key2": "value2","key3": "value3"}'

This YAML code creates a plaintext string parameter in Parameter Store in a path that your Lambda function can access.
KMS encryption key

ParameterStoreBlogDevEncryptionKeyAlias:
    Type: AWS::KMS::Alias
    Properties:
      AliasName: 'alias/ParameterStoreBlogKeyDev'
      TargetKeyId: !Ref ParameterStoreBlogDevEncryptionKey

  ParameterStoreBlogDevEncryptionKey:
    Type: AWS::KMS::Key
    Properties:
      Description: 'Encryption key for secret config values for the Parameter Store blog post'
      Enabled: True
      EnableKeyRotation: False
      KeyPolicy:
        Version: '2012-10-17'
        Id: 'key-default-1'
        Statement:
          -
            Sid: 'Allow administration of the key & encryption of new values'
            Effect: Allow
            Principal:
              AWS:
                - !Sub 'arn:aws:iam::${AWS::AccountId}:user/${IAMUsername}'
            Action:
              - 'kms:Create*'
              - 'kms:Encrypt'
              - 'kms:Describe*'
              - 'kms:Enable*'
              - 'kms:List*'
              - 'kms:Put*'
              - 'kms:Update*'
              - 'kms:Revoke*'
              - 'kms:Disable*'
              - 'kms:Get*'
              - 'kms:Delete*'
              - 'kms:ScheduleKeyDeletion'
              - 'kms:CancelKeyDeletion'
            Resource: '*'
          -
            Sid: 'Allow use of the key'
            Effect: Allow
            Principal:
              AWS: !GetAtt ParameterStoreBlogFunctionRoleDev.Arn
            Action:
              - 'kms:Encrypt'
              - 'kms:Decrypt'
              - 'kms:ReEncrypt*'
              - 'kms:GenerateDataKey*'
              - 'kms:DescribeKey'
            Resource: '*'

This YAML code creates an encryption key with a key policy with two statements.

The first statement allows a given user (${IAMUsername}) to administer the key. Importantly, this includes the ability to encrypt values using this key and disable or delete this key, but does not allow the administrator to decrypt values that were encrypted with this key.

The second statement grants your Lambda function permission to encrypt and decrypt values using this key. The alias for this key in KMS is ParameterStoreBlogKeyDev, which is how you reference it later.

Lambda function

Here I walk you through the Lambda function code.

import os, traceback, json, configparser, boto3
from aws_xray_sdk.core import patch_all
patch_all()

# Initialize boto3 client at global scope for connection reuse
client = boto3.client('ssm')
env = os.environ['ENV']
app_config_path = os.environ['APP_CONFIG_PATH']
full_config_path = '/' + env + '/' + app_config_path
# Initialize app at global scope for reuse across invocations
app = None

class MyApp:
    def __init__(self, config):
        """
        Construct new MyApp with configuration
        :param config: application configuration
        """
        self.config = config

    def get_config(self):
        return self.config

def load_config(ssm_parameter_path):
    """
    Load configparser from config stored in SSM Parameter Store
    :param ssm_parameter_path: Path to app config in SSM Parameter Store
    :return: ConfigParser holding loaded config
    """
    configuration = configparser.ConfigParser()
    try:
        # Get all parameters for this app
        param_details = client.get_parameters_by_path(
            Path=ssm_parameter_path,
            Recursive=False,
            WithDecryption=True
        )

        # Loop through the returned parameters and populate the ConfigParser
        if 'Parameters' in param_details and len(param_details.get('Parameters')) > 0:
            for param in param_details.get('Parameters'):
                param_path_array = param.get('Name').split("/")
                section_position = len(param_path_array) - 1
                section_name = param_path_array[section_position]
                config_values = json.loads(param.get('Value'))
                config_dict = {section_name: config_values}
                print("Found configuration: " + str(config_dict))
                configuration.read_dict(config_dict)

    except:
        print("Encountered an error loading config from SSM.")
        traceback.print_exc()
    finally:
        return configuration

def lambda_handler(event, context):
    global app
    # Initialize app if it doesn't yet exist
    if app is None:
        print("Loading config and creating new MyApp...")
        config = load_config(full_config_path)
        app = MyApp(config)

    return "MyApp config is " + str(app.get_config()._sections)

Beneath the import statements, you import the patch_all function from the AWS X-Ray library, which you use to patch boto3 to create X-Ray segments for all your boto3 operations.

Next, you create a boto3 SSM client at the global scope for reuse across function invocations, following Lambda best practices. Using the function environment variables, you assemble the path where you expect to find your configuration in Parameter Store. The class MyApp is meant to serve as an example of an application that would need its configuration injected at construction. In this example, you create an instance of ConfigParser, a class in Python’s standard library for handling basic configurations, to give to MyApp.

The load_config function loads all the parameters from Parameter Store at the level immediately beneath the path provided in the Lambda function environment variables. Each parameter found is put into a new section in ConfigParser. The name of the section is the name of the parameter, less the base path. In this example, the full parameter name is /dev/parameterStoreBlog/appConfig, which is put in a section named appConfig.

Finally, the lambda_handler function initializes an instance of MyApp if it doesn’t already exist, constructing it with the loaded configuration from Parameter Store. Then it simply returns the currently loaded configuration in MyApp. The impact of this design is that the configuration is only loaded from Parameter Store the first time that the Lambda function execution environment is initialized. Subsequent invocations reuse the existing instance of MyApp, resulting in improved performance. You see this in the X-Ray traces later in this post. For more advanced use cases where configuration changes need to be received immediately, you could implement an expiry policy for your configuration entries or push notifications to your function.
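For example, a simple expiry policy could track when the configuration was loaded and reload it after a fixed age. A minimal sketch building on the globals in the sample above (MAX_CONFIG_AGE is an arbitrary illustrative value, not part of the original sample):

import time

MAX_CONFIG_AGE = 300   # seconds; arbitrary value for illustration
config_loaded_at = 0   # epoch time at which config was last loaded

def get_app():
    """Return the cached MyApp, reloading its config if it has expired."""
    global app, config_loaded_at
    if app is None or time.time() - config_loaded_at > MAX_CONFIG_AGE:
        app = MyApp(load_config(full_config_path))
        config_loaded_at = time.time()
    return app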

To confirm that everything was created successfully, test the function in the Lambda console.

  1. Open the Lambda console.
  2. In the navigation pane, choose Functions.
  3. In the Functions pane, filter to ParameterStoreBlogFunctionDev to find the function created by the SAM template earlier. Open the function name to view its details.
  4. On the top right of the function detail page, choose Test. You may need to create a new test event. The input JSON doesn’t matter as this function ignores the input.

After running the test, you should see output similar to the following. This demonstrates that the function successfully fetched the unencrypted configuration from Parameter Store.

Create an encrypted parameter

You currently have a simple, unencrypted parameter and a Lambda function that can access it.

Next, you create an encrypted parameter that only your Lambda function has permission to use for decryption. This limits read access for this parameter to only this Lambda function.

To follow along with this section, deploy the SAM template for this post in your account and make your IAM user name the KMS key admin mentioned earlier.

  1. In the Systems Manager console, under Shared Resources, choose Parameter Store.
  2. Choose Create Parameter.
    • For Name, enter /dev/parameterStoreBlog/appSecrets.
    • For Type, select Secure String.
    • For KMS Key ID, choose alias/ParameterStoreBlogKeyDev, which is the key that your SAM template created.
    • For Value, enter {"secretKey": "secretValue"}.
    • Choose Create Parameter.
  3. If you now try to view the value of this parameter by choosing the name of the parameter in the parameters list and then choosing Show next to the Value field, you won’t see the value appear. This is because, even though you have permission to encrypt values using this KMS key, you do not have permissions to decrypt values.
  4. In the Lambda console, run another test of your function. You now also see the secret parameter that you created and its decrypted value.

If you do not see the new parameter in the Lambda output, this may be because the Lambda execution environment is still warm from the previous test. Because the parameters are loaded at Lambda startup, you need a fresh execution environment to refresh the values.

Adjust the function timeout to a different value in the Advanced Settings at the bottom of the Lambda Configuration tab. Choose Save and test to trigger the creation of a new Lambda execution environment.
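The encrypted parameter can also be created programmatically. A minimal boto3 sketch using the parameter name and key alias from this walkthrough:

import boto3

ssm = boto3.client('ssm')

# Create a SecureString parameter encrypted with the key from the SAM template.
ssm.put_parameter(
    Name='/dev/parameterStoreBlog/appSecrets',
    Description='Sample secret config values for my app',
    Value='{"secretKey": "secretValue"}',
    Type='SecureString',
    KeyId='alias/ParameterStoreBlogKeyDev'
)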

Profiling the impact of querying Parameter Store using AWS X-Ray

By using the AWS X-Ray SDK to patch boto3 in your Lambda function code, each invocation of the function creates traces in X-Ray. In this example, you can use these traces to validate the performance impact of your design decision to only load configuration from Parameter Store on the first invocation of the function in a new execution environment.

From the Lambda function details page where you tested the function earlier, under the function name, choose Monitoring. Choose View traces in X-Ray.

This opens the X-Ray console in a new window filtered to your function. Be aware of the time range field next to the search bar if you don’t see any search results.
In this screenshot, I’ve invoked the Lambda function twice, one time 10.3 minutes ago with a response time of 1.1 seconds and again 9.8 minutes ago with a response time of 8 milliseconds.

Looking at the details of the longer running trace by clicking the trace ID, you can see that the Lambda function spent the first ~350 ms of the full 1.1 sec routing the request through Lambda and creating a new execution environment for this function, as this was the first invocation with this code. This is the portion of time before the initialization subsegment.

Next, it took 725 ms to initialize the function, which includes executing the code at the global scope (including creating the boto3 client). This is also a one-time cost for a fresh execution environment.

Finally, the function executed for 65 ms, of which 63.5 ms was the GetParametersByPath call to Parameter Store.

Looking at the trace for the second, much faster function invocation, you see that the majority of the 8 ms execution time was Lambda routing the request to the function and returning the response. Only 1 ms of the overall execution time was attributed to the execution of the function, which makes sense given that after the first invocation you’re simply returning the config stored in MyApp.

While the Traces screen allows you to view the details of individual traces, the X-Ray Service Map screen allows you to view aggregate performance data for all traced services over a period of time.

In the X-Ray console navigation pane, choose Service map. Selecting a service node shows the metrics for node-specific requests. Selecting an edge between two nodes shows the metrics for requests that traveled that connection. Again, be aware of the time range field next to the search bar if you don’t see any search results.

After invoking your Lambda function several more times by testing it from the Lambda console, you can view some aggregate performance metrics. Look at the following:

  • From the client perspective, requests to the Lambda service for the function are taking an average of 50 ms to respond. The function is generating ~1 trace per minute.
  • The function itself is responding in an average of 3 ms. In the following screenshot, I’ve clicked on this node, which reveals a latency histogram of the traced requests showing that over 95% of requests return in under 5 ms.
  • Parameter Store is responding to requests in an average of 64 ms, but note the much lower trace rate in the node. This is because you only fetch data from Parameter Store on the initialization of the Lambda execution environment.

Conclusion

Deduplication, encryption, and restricted access to shared configuration and secrets are key components of any mature architecture. Serverless architectures designed using event-driven, on-demand compute services like Lambda are no different.

In this post, I walked you through a sample application accessing unencrypted and encrypted values in Parameter Store. These values were created in a hierarchy by application environment and component name, with the permissions to decrypt secret values restricted to only the function needing access. The techniques used here can become the foundation of secure, robust configuration management in your enterprise serverless applications.