Tag Archives: IAM policies

Now Use AWS IAM to Delete a Service-Linked Role When You No Longer Require an AWS Service to Perform Actions on Your Behalf

Post Syndicated from Ujjwal Pugalia original https://aws.amazon.com/blogs/security/now-use-aws-iam-to-delete-a-service-linked-role-when-you-no-longer-require-an-aws-service-to-perform-actions-on-your-behalf/

Earlier this year, AWS Identity and Access Management (IAM) introduced service-linked roles, which provide an easy and secure way to delegate permissions to AWS services. Each service-linked role delegates permissions to an AWS service, which is called its linked service. Service-linked roles also help you meet monitoring and auditing requirements because AWS CloudTrail logs every action the linked service performs using a service-linked role, giving you a transparent record of the activity performed on your behalf. For information about which services support service-linked roles, see AWS Services That Work with IAM. Over time, more AWS services will support service-linked roles.

Today, IAM added support for deleting service-linked roles through the IAM console and the IAM API/CLI. This means you can now revoke a linked service's permissions to create and manage AWS resources in your account. To ensure your AWS services continue to function as expected, IAM first validates that you no longer have resources that require the service-linked role. This check prevents you from inadvertently revoking permissions an AWS service needs to manage your existing resources and helps keep those resources in a consistent state. If any resources in your account still require the service-linked role, the deletion attempt fails with an error and the role remains in your account. If no resources require it, IAM deletes the service-linked role from your account.

In this blog post, I show how to delete a service-linked role by using the IAM console. To learn more about how to delete service-linked roles by using the IAM API/CLI, see the DeleteServiceLinkedRole API documentation.
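If you prefer the API/CLI, deletion is asynchronous: DeleteServiceLinkedRole returns a deletion task ID that you then poll for status. A minimal sketch with the AWS CLI, using the Amazon Redshift role from the example later in this post:

# Submit the deletion request; IAM returns a DeletionTaskId
aws iam delete-service-linked-role --role-name AWSServiceRoleForRedshift

# Poll the status (NOT_STARTED, IN_PROGRESS, SUCCEEDED, or FAILED)
aws iam get-service-linked-role-deletion-status --deletion-task-id <DeletionTaskId>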

Note: The IAM console does not currently support service-linked role deletion for Amazon Lex, but you can delete your service-linked role by using the Amazon Lex console. To learn more, see Service Permissions.

How to delete a service-linked role by using the IAM console

If you no longer need to use an AWS service that uses a service-linked role, you can remove permissions from that service by deleting the service-linked role through the IAM console. To delete a service-linked role, you must have permissions for the iam:DeleteServiceLinkedRole action. For example, the following IAM policy grants the permission to delete service-linked roles used by Amazon Redshift. To learn more about working with IAM policies, see Working with Policies.

{ 
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowDeletionOfServiceLinkedRolesForRedshift",
            "Effect": "Allow",
            "Action": ["iam:DeleteServiceLinkedRole"],
            "Resource": ["arn:aws:iam::*:role/aws-service-role/redshift.amazonaws.com/AWSServiceRoleForRedshift*"]
        }
    ]
}

To delete a service-linked role by using the IAM console:

  1. Navigate to the IAM console and choose Roles from the navigation pane.

Screenshot of the Roles page in the IAM console

  2. Choose the service-linked role you want to delete and then choose Delete role. In this example, I choose the AWSServiceRoleForRedshift service-linked role.

Screenshot of the AWSServiceRoleForRedshift service-linked role

  3. A dialog box asks you to confirm that you want to delete the chosen service-linked role. The Last activity column shows when the linked service last used the service-linked role to perform an action on your behalf. To continue, choose Yes, delete.

Screenshot of the "Delete role" window

  4. IAM then checks whether you have any resources that require the service-linked role you are trying to delete. While IAM checks, you will see the status message, Deletion in progress, below the role name.

Screenshot showing "Deletion in progress"
  5. If no resources require the service-linked role, IAM deletes the role from your account and displays a success message on the console.

Screenshot of the success message

  6. If there are AWS resources that require the service-linked role you are trying to delete, you will see the status message, Deletion failed, below the role name.

Screenshot showing the "Deletion failed" status message

  7. If you choose View details, you will see a message that explains the deletion failed because there are resources that use the service-linked role.
    Screenshot showing details about why the role deletion failed
  8. Choose View Resources to view the Amazon Resource Names (ARNs) of the first five resources that require the service-linked role. You can delete the service-linked role only after you delete all resources that require the service-linked role. In this example, only one resource requires the service-linked role.

Conclusion

Service-linked roles make it easier for you to delegate permissions to AWS services to create and manage AWS resources on your behalf, and to understand all actions a linked service will perform. If you no longer need to use an AWS service that uses a service-linked role, you can remove permissions from that service by deleting the service-linked role through the IAM console. However, before you delete a service-linked role, you must delete all the resources associated with that role to ensure that your resources remain in a consistent state.

If you have any questions, submit a comment in the “Comments” section below. If you need help working with service-linked roles, start a new thread on the IAM forum or contact AWS Support.

– Ujjwal

Simplify Your Jenkins Builds with AWS CodeBuild

Post Syndicated from Paul Roberts original https://aws.amazon.com/blogs/devops/simplify-your-jenkins-builds-with-aws-codebuild/

Jeff Bezos famously said, “There’s a lot of undifferentiated heavy lifting that stands between your idea and that success.” He went on to say, “…70% of your time, energy, and dollars go into the undifferentiated heavy lifting and only 30% of your energy, time, and dollars gets to go into the core kernel of your idea.”

If you subscribe to this maxim, you should not be spending valuable time on operational issues related to maintaining Jenkins build infrastructure. Companies such as Riot Games run over 1.25 million builds per year and have written several lengthy blog posts about their experiences designing a complex, custom Docker-powered Jenkins build farm. Dealing with Jenkins slaves at scale is a job in itself, and Riot has engineers focused on managing the build infrastructure.

Typical Jenkins Build Farm

 

As with all technology, the Jenkins build farm architectures have evolved. Today, instead of manually building your own container infrastructure, there are Jenkins Docker plugins available to help reduce the operational burden of maintaining these environments. There is also a community-contributed Amazon EC2 Container Service (Amazon ECS) plugin that helps remove some of the overhead, but you still need to configure and manage the overall Amazon ECS environment.

There are various ways to create and manage your Jenkins build farm, but there has to be a way that significantly reduces your operational overhead.

Introducing AWS CodeBuild

AWS CodeBuild is a fully managed build service that removes the undifferentiated heavy lifting of provisioning, managing, and scaling your own build servers. With CodeBuild, there is no software to install, patch, or update. CodeBuild scales up automatically to meet the needs of your development teams. In addition, CodeBuild is an on-demand service where you pay as you go. You are charged based only on the number of minutes it takes to complete your build.

One AWS customer, Recruiterbox, helps companies hire simply and predictably through their software platform. Two years ago, they began feeling the operational pain of maintaining their own Jenkins build farms. They briefly considered moving to Amazon ECS, but chose an even easier path forward instead. Recruiterbox transitioned to using Jenkins with CodeBuild and they are very happy with the results. You can read more about their journey here.

Solution Overview: Jenkins and CodeBuild

To remove the heavy lifting from managing your Jenkins build farm, AWS has developed a Jenkins AWS CodeBuild plugin. After the plugin has been enabled, a developer can configure a Jenkins project to pick up new commits from their chosen source code repository and automatically run the associated builds. After a successful build, the plugin creates an artifact and stores it in an S3 bucket that you have configured. If an error is detected, CodeBuild captures the output and sends it to Amazon CloudWatch Logs. In addition to storing the logs in CloudWatch, Jenkins also captures the error so you do not have to go hunting for log files for your build.

AWS CodeBuild with Jenkins Plugin

The following example uses AWS CodeCommit (Git) as the source control management (SCM) and Amazon S3 for build artifact storage. Logs are stored in CloudWatch. A development pipeline that uses Jenkins with CodeBuild plugin architecture looks something like this:

 

AWS CodeBuild Diagram
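The plugin drives CodeBuild for you, but the same build cycle can be sketched with the CLI. Roughly (the project name my-jenkins-project is a placeholder):

# Start a build for an existing CodeBuild project
aws codebuild start-build --project-name my-jenkins-project

# Check the status of the build using the ID returned by start-build
aws codebuild batch-get-builds --ids <build-id-from-start-build>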

Initial Solution Setup

To keep this blog post succinct, I assume that you are using the following components on AWS already and have applied the appropriate IAM policies:

·         AWS CodeCommit repo.

·         Amazon S3 bucket for CodeBuild artifacts.

·         SNS notification for text messaging of the Jenkins admin password.

·         IAM user’s key and secret.

·         A role that has a policy with these permissions. Be sure to edit the ARNs with your region, account, and resource name. Use this role in the AWS CloudFormation template referred to later in this post.

 

Jenkins Installation with CodeBuild Plugin Enabled

To make the integration with Jenkins as frictionless as possible, I have created an AWS CloudFormation template here: https://s3.amazonaws.com/proberts-public/jenkins.yaml. Download the template, sign in to the AWS CloudFormation console, and then use the template to create a stack.

 

CloudFormation Inputs

Jenkins Project Configuration

After the stack is complete, log in to the Jenkins EC2 instance using the user name “admin” and the password sent to your mobile device. Now that you have logged in to Jenkins, you need to create your first project. Start with a Freestyle project and configure the parameters based on your CodeBuild and CodeCommit settings.

 

AWS CodeBuild Plugin Configuration in Jenkins

 

Additional Jenkins AWS CodeBuild Plugin Configuration

 

After you have configured the Jenkins project appropriately, you should be able to check your build status on the Jenkins polling log under your project settings:

 

Jenkins Polling Log

 

Now that Jenkins is polling CodeCommit, you can check the CodeBuild dashboard under your Jenkins project to confirm your build was successful:

Jenkins AWS CodeBuild Dashboard

Wrapping Up

In a matter of minutes, you have been able to provision Jenkins with the AWS CodeBuild plugin. This will greatly simplify your build infrastructure management. Now kick back and relax while CodeBuild does all the heavy lifting!


About the Author

Paul Roberts is a Strategic Solutions Architect for Amazon Web Services. When he is not working on Serverless, DevOps, or Artificial Intelligence, he is often found in Lake Tahoe exploring the various mountain ranges with his family.

AWS IAM Policy Summaries Now Help You Identify Errors and Correct Permissions in Your IAM Policies

Post Syndicated from Joy Chatterjee original https://aws.amazon.com/blogs/security/iam-policy-summaries-now-help-you-identify-errors-and-correct-permissions-in-your-iam-policies/

In March, we made it easier to view and understand the permissions in your AWS Identity and Access Management (IAM) policies by using IAM policy summaries. Today, we updated policy summaries to help you identify and correct errors in your IAM policies. When you set permissions using IAM policies, for each action you specify, you must match that action to supported resources or conditions. Now, you will see a warning if these policy elements (Actions, Resources, and Conditions) defined in your IAM policy do not match.

When working with policies, you may find that although the policy has valid JSON syntax, it does not grant or deny the desired permissions because the Action element does not have an applicable Resource element or Condition element defined in the policy. For example, you may want to create a policy that allows users to view a specific Amazon EC2 instance. To do this, you create a policy that specifies ec2:DescribeInstances for the Action element and the Amazon Resource Name (ARN) of the instance for the Resource element. When testing this policy, you find AWS denies this access because ec2:DescribeInstances does not support resource-level permissions and requires access to list all instances. Therefore, to grant access to this Action element, you need to specify a wildcard (*) in the Resource element for the policy to function correctly.
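One way to confirm this behavior before rolling a policy out is the IAM policy simulator. A hedged sketch with the CLI (the policy file name and instance ARN are placeholders); because ec2:DescribeInstances does not support resource-level permissions, the simulator reports an implicit deny for the instance-scoped policy:

# Simulate ec2:DescribeInstances against a single instance ARN
aws iam simulate-custom-policy \
    --policy-input-list file://describe-instance-policy.json \
    --action-names ec2:DescribeInstances \
    --resource-arns arn:aws:ec2:us-east-1:123456789012:instance/i-1234567890abcdef0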

To help you identify and correct permissions, you will now see a warning in a policy summary if the policy has either of the following:

  • An action that does not support the resource specified in a policy.
  • An action that does not support the condition specified in a policy.

In this blog post, I walk through two examples of how you can use policy summaries to help identify and correct these types of errors in your IAM policies.

How to use IAM policy summaries to debug your policies

Example 1: An action does not support the resource specified in a policy

Let’s say a human resources (HR) representative, Casey, needs access to the personnel files stored in HR’s Amazon S3 bucket. To do this, I create the following policy to grant all actions that begin with s3:List. In addition, I grant access to s3:GetObject in the Action element of the policy. To ensure that Casey has access only to a specific bucket and not others, I specify the bucket ARN in the Resource element of the policy.

Note: This policy does not grant the desired permissions.

This policy does not work. Do not copy.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ThisPolicyDoesNotGrantAllListandGetActions",
            "Effect": "Allow",
            "Action": ["s3:List*",
                       "s3:GetObject"],
            "Resource": ["arn:aws:s3:::HumanResources"]
        }
    ]
}

After I create the policy, HRBucketPermissions, I select this policy from the Policies page to view the policy summary. From here, I check to see if there are any warnings or typos in the policy. I see a warning at the top of the policy detail page because the policy does not grant some permissions specified in the policy, which is caused by a mismatch among the actions, resources, or conditions.

Screenshot showing the warning at the top of the policy

To view more details about the warning, I choose Show remaining so that I can understand why the permissions do not appear in the policy summary. As shown in the following screenshot, I see that the services not granted by the IAM policy show no access, which is expected. However, next to S3, I see a warning that one or more S3 actions do not have an applicable resource.

Screenshot showing that one or more S3 actions do not have an applicable resource

To understand why the specific actions do not have a supported resource, I choose S3 from the list of services and choose Show remaining. I type List in the filter to understand why some of the list actions are not granted by the policy. As shown in the following screenshot, I see these warnings:

  • This action does not support resource-level permissions. This means the action does not support resource-level permissions and requires a wildcard (*) in the Resource element of the policy.
  • This action does not have an applicable resource. This means the action supports resource-level permissions, but not the resource type defined in the policy. In this example, I specified an S3 bucket for an action that supports only an S3 object resource type.

From these warnings, I see that s3:ListAllMyBuckets, s3:ListMultipartUploadParts, s3:ListObjects, and s3:GetObject do not support an S3 bucket resource type, which results in Casey not having access to the S3 bucket. To correct the policy, I choose Edit policy and update the policy with three statements based on the resources that the S3 actions support. Because Casey needs access to view and read all of the objects in the HumanResources bucket, I add a wildcard (*) for the S3 object path in the Resource ARN.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TheseActionsSupportBucketResourceType",
            "Effect": "Allow",
            "Action": ["s3:ListBucket",
                       "s3:ListBucketByTags",
                       "s3:ListBucketMultipartUploads",
                       "s3:ListBucketVersions"],
            "Resource": ["arn:aws:s3:::HumanResources"]
        },{
            "Sid": "TheseActionsRequireAllResources",
            "Effect": "Allow",
            "Action": ["s3:ListAllMyBuckets",
                       "s3:ListMultipartUploadParts",
                       "s3:ListObjects"],
            "Resource": [ "*"]
        },{
            "Sid": "TheseActionsRequireSupportsObjectResourceType",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::HumanResources/*"]
        }
    ]
}

After I make these changes, I see the updated policy summary and see that warnings are no longer displayed.

Screenshot of the updated policy summary that no longer shows warnings

In the previous example, I showed how to identify and correct permissions errors that include actions that do not support a specified resource. In the next example, I show how to use policy summaries to identify and correct a policy that includes actions that do not support a specified condition.

Example 2: An action does not support the condition specified in a policy

For this example, let’s assume Bob is a project manager who requires view and read access to all the code builds for his team. To grant him this access, I create the following JSON policy that specifies all list and read actions to AWS CodeBuild and defines a condition to limit access to resources in the us-west-2 Region in which Bob’s team develops.

This policy does not work. Do not copy. 
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListReadAccesstoCodeServices",
            "Effect": "Allow",
            "Action": [
                "codebuild:List*",
                "codebuild:BatchGet*"
            ],
            "Resource": ["*"], 
             "Condition": {
                "StringEquals": {
                    "ec2:Region": "us-west-2"
                }
            }
        }
    ]	
}

After I create the policy, PMCodeBuildAccess, I select this policy from the Policies page to view the policy summary in the IAM console. From here, I check to see if the policy has any warnings or typos. I see an error at the top of the policy detail page because the policy does not grant any permissions.

Screenshot with an error showing the policy does not grant any permissions

To view more details about the error, I choose Show remaining to understand why no permissions result from the policy. I see this warning: One or more conditions do not have an applicable action. This means that the condition is not supported by any of the actions defined in the policy.

From the warning message (see preceding screenshot), I realize that ec2:Region is not a supported condition for any actions in CodeBuild. To correct the policy, I separate the list actions that do not support resource-level permissions into a separate Statement element and specify * as the resource. For the remaining CodeBuild actions that support resource-level permissions, I use the ARN to specify the us-west-2 Region in the project resource type.

CORRECT POLICY 
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TheseActionsSupportAllResources",
            "Effect": "Allow",
            "Action": [
                "codebuild:ListBuilds",
                "codebuild:ListProjects",
                "codebuild:ListRepositories",
                "codebuild:ListCuratedEnvironmentImages",
                "codebuild:ListConnectedOAuthAccounts"
            ],
            "Resource": ["*"] 
        }, {
            "Sid": "TheseActionsSupportAResource",
            "Effect": "Allow",
            "Action": [
                "codebuild:ListBuildsForProject",
                "codebuild:BatchGet*"
            ],
            "Resource": ["arn:aws:codebuild:us-west-2:123456789012:project/*"] 
        }

    ]	
}

After I make the changes, I view the updated policy summary and see that no warnings are displayed.

Screenshot showing the updated policy summary with no warnings

When I choose CodeBuild from the list of services, I also see that for the actions that support resource-level permissions, the access is limited to the us-west-2 Region.

Screenshot showing that for the actions that support resource-level permissions, access is limited to the us-west-2 Region

Conclusion

Policy summaries make it easier to view and understand the permissions and resources in your IAM policies by displaying the permissions granted by the policies. As I’ve demonstrated in this post, you can also use policy summaries to help you identify and correct your IAM policies. To understand the types of warnings that policy summaries support, you can visit Troubleshoot IAM Policies. To view policy summaries in your AWS account, sign in to the IAM console and navigate to any policy on the Policies page of the IAM console or the Permissions tab on a user’s page.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum or contact AWS Support.

– Joy

New – VPC Endpoints for DynamoDB

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-vpc-endpoints-for-dynamodb/

Starting today, Amazon Virtual Private Cloud (VPC) Endpoints for Amazon DynamoDB are available in all public AWS regions. You can provision an endpoint right away using the AWS Management Console or the AWS Command Line Interface (CLI). There are no additional costs for a VPC Endpoint for DynamoDB.

Many AWS customers run their applications within an Amazon Virtual Private Cloud (VPC) for security or isolation reasons. Previously, if you wanted your EC2 instances in your VPC to be able to access DynamoDB, you had two options: you could use an Internet Gateway (with a NAT Gateway or by assigning your instances public IPs), or you could route all of your traffic to your local infrastructure via VPN or AWS Direct Connect and then back to DynamoDB. Both of these solutions had security and throughput implications, and it could be difficult to configure NACLs or security groups to restrict access to just DynamoDB. Here is a picture of the old infrastructure.

Creating an Endpoint

Let’s create a VPC Endpoint for DynamoDB. We can make sure our region supports the endpoint with the DescribeVpcEndpointServices API call.


aws ec2 describe-vpc-endpoint-services --region us-east-1
{
    "ServiceNames": [
        "com.amazonaws.us-east-1.dynamodb",
        "com.amazonaws.us-east-1.s3"
    ]
}

Great, so I know my region supports these endpoints and I know what my regional endpoint is. I can grab one of my VPCs and provision an endpoint with a quick call to the CLI or through the console. Let me show you how to use the console.

First I’ll navigate to the VPC console and select “Endpoints” in the sidebar. From there I’ll click “Create Endpoint” which brings me to this handy console.

You’ll notice the AWS Identity and Access Management (IAM) policy section for the endpoint. This supports all of the fine-grained access control that DynamoDB supports in regular IAM policies, and you can restrict access based on IAM policy conditions.

For now I’ll give full access to my instances within this VPC and click “Next Step”.

This brings me to a list of route tables in my VPC and asks me which of these route tables I want to assign my endpoint to. I’ll select one of them and click “Create Endpoint”.
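If you would rather script it, the same endpoint can be created with a single CLI call (the VPC and route table IDs below are placeholders):

aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0a1b2c3d \
    --service-name com.amazonaws.us-east-1.dynamodb \
    --route-table-ids rtb-4e5f6a7b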

Keep in mind the note of warning in the console: if you have source restrictions to DynamoDB based on public IP addresses the source IP of your instances accessing DynamoDB will now be their private IP addresses.

After adding the VPC Endpoint for DynamoDB to our VPC our infrastructure looks like this.

That’s it folks! It’s that easy. It’s provided at no cost. Go ahead and start using it today. If you need more details, you can read the docs here.

Newly Updated: Example AWS IAM Policies for You to Use and Customize

Post Syndicated from Deren Smith original https://aws.amazon.com/blogs/security/newly-updated-example-policies-for-you-to-use-and-customize/

To help you grant access to specific resources and conditions, the Example Policies page in the AWS Identity and Access Management (IAM) documentation now includes more than thirty policies for you to use or customize to meet your permissions requirements. The AWS Support team developed these policies from their experiences working with AWS customers over the years. The example policies cover common permissions use cases you might encounter across services such as Amazon DynamoDB, Amazon EC2, AWS Elastic Beanstalk, Amazon RDS, Amazon S3, and IAM.

In this blog post, I introduce the updated Example Policies page and explain how to use and customize these policies for your needs.

The new Example Policies page

The Example Policies page in the IAM User Guide now provides an overview of the example policies and includes a link to view each policy on a separate page. Note that each of these policies has been reviewed and approved by AWS Support. If you would like to submit a policy that you have found to be particularly useful, post it on the IAM forum.

To give you an idea of the policies we have included on this page, the following are a few of the EC2 policies on the page:

To see the full list of available policies, see the Example Policies page.

In the following section, I demonstrate how to use a policy from the Example Policies page and customize it for your needs.

How to customize an example policy for your needs

Suppose you want to allow an IAM user, Bob, to start and stop EC2 instances with a specific resource tag. After looking through the Example Policies page, you see the policy, Allows Starting or Stopping EC2 Instances a User Has Tagged, Programmatically and in the Console.

To apply this policy to your specific use case:

  1. Navigate to the Policies section of the IAM console.
  2. Choose Create policy.
    Screenshot of choosing "Create policy"
  3. Choose the Select button next to Create Your Own Policy. You will see an empty policy document with boxes for Policy Name, Description, and Policy Document, as shown in the following screenshot.
  4. Type a name for the policy, copy the policy from the Example Policies page, and paste the policy in the Policy Document box. In this example, I use “start-stop-instances-for-owner-tag” as the policy name and “Allows users to start or stop instances if the instance tag Owner has the value of their user name” as the description.
  5. Update the placeholder text in the policy (see the full policy that follows this step). For example, replace <REGION> with a region from AWS Regions and Endpoints and <ACCOUNTNUMBER> with your 12-digit account number. The IAM policy variable, ${aws:username}, is a dynamic property in the policy that automatically applies to the user to which it is attached. For example, when the policy is attached to Bob, the policy replaces ${aws:username} with Bob. If you do not want to use the key value pair of Owner and ${aws:username}, you can edit the policy to include your desired key value pair. For example, if you want to use the key value pair, CostCenter:1234, you can modify “ec2:ResourceTag/Owner”: “${aws:username}” to “ec2:ResourceTag/CostCenter”: “1234”.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:StartInstances",
                    "ec2:StopInstances"
                ],
                "Resource": "arn:aws:ec2:<REGION>:<ACCOUNTNUMBER>:instance/*",
                "Condition": {
                    "StringEquals": {
                        "ec2:ResourceTag/Owner": "${aws:username}"
                    }
                }
            },
            {
                "Effect": "Allow",
                "Action": "ec2:DescribeInstances",
                "Resource": "*"
            }
        ]
    }

  6. After you have edited the policy, choose Create policy.

You have created a policy that allows an IAM user to stop and start EC2 instances in your account, as long as these instances have the correct resource tag and the policy is attached to your IAM users. You also can attach this policy to an IAM group and apply the policy to users by adding them to that group.
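As a quick sketch, the same attachment can be done from the CLI; the account ID and group name here are placeholders:

# Attach the policy directly to the user Bob
aws iam attach-user-policy \
    --user-name Bob \
    --policy-arn arn:aws:iam::123456789012:policy/start-stop-instances-for-owner-tag

# Or attach it to a group so that all members receive the permissions
aws iam attach-group-policy \
    --group-name ec2-operators \
    --policy-arn arn:aws:iam::123456789012:policy/start-stop-instances-for-owner-tag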

Summary

We updated the Example Policies page in the IAM User Guide so that you have a central location where you can find examples of the most commonly requested and used IAM policies. In addition to these example policies, we recommend that you review the list of AWS managed policies, including the AWS managed policies for job functions. You can choose these predefined policies from the IAM console and associate them with your IAM users, groups, and roles.

We will add more IAM policies to the Example Policies page over time. If you have a useful policy you would like to share with others, post it on the IAM forum. If you have comments about this post, submit them in the “Comments” section below.

– Deren

Manage Instances at Scale without SSH Access Using EC2 Run Command

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/manage-instances-at-scale-without-ssh-access-using-ec2-run-command/

The guest post below, written by Ananth Vaidyanathan (Senior Product Manager for EC2 Systems Manager) and Rich Urmston (Senior Director of Cloud Architecture at Pegasystems) shows you how to use EC2 Run Command to manage a large collection of EC2 instances without having to resort to SSH.

Jeff;


Enterprises often have several managed environments and thousands of Amazon EC2 instances. It’s important to manage systems securely, without the headaches of Secure Shell (SSH). Run Command, part of Amazon EC2 Systems Manager, allows you to run remote commands on instances (or groups of instances using tags) in a controlled and auditable manner. It’s been a nice added productivity boost for Pega Cloud operations, which rely daily on Run Command services.

You can control Run Command access through standard IAM roles and policies, define documents to take input parameters, and control the S3 bucket used to return command output. You can also share your documents with other AWS accounts, or with the public. All in all, Run Command provides a nice set of remote management features.

Better than SSH
Here’s why Run Command is a better option than SSH and why Pegasystems has adopted it as their primary remote management tool:

Run Command Takes Less Time – Securely connecting to an instance requires a few steps: finding jump boxes to connect to, whitelisting IP addresses, and so on. With Run Command, cloud ops engineers can invoke commands directly from their laptops and never have to find keys or even instance IDs. Instead, system security relies on AWS authentication, IAM roles, and policies.

Run Command Operations are Fully Audited – With SSH, there is no real control over what engineers can do, nor is there an audit trail. With Run Command, every invoked operation is audited in CloudTrail, including information on the invoking user, the instances on which the command was run, the parameters, and the operation status. You have full control and the ability to restrict what functions engineers can perform on a system.

Run Command has no SSH keys to Manage – Run Command leverages standard AWS credentials, API keys, and IAM policies. Through integration with a corporate auth system, engineers can interact with systems based on their corporate credentials and identity.

Run Command can Manage Multiple Systems at the Same Time – Simple tasks such as looking at the status of a Linux service or retrieving a log file across a fleet of managed instances is cumbersome using SSH. Run Command allows you to specify a list of instances by IDs or tags, and invokes your command, in parallel, across the specified fleet. This provides great leverage when troubleshooting or managing more than the smallest Pega clusters.

Run Command Makes Automating Complex Tasks Easier – Standardizing operational tasks requires detailed procedure documents or scripts describing the exact commands. Managing or deploying these scripts across the fleet is cumbersome. Run Command documents provide an easy way to encapsulate complex functions, and handle document management and access controls. When combined with AWS Lambda, documents provide a powerful automation platform to handle any complex task.

Example – Restarting a Docker Container
Here is an example of a simple document used to restart a Docker container. It takes one parameter: the name of the Docker container to restart. It uses the AWS-RunShellScript method to invoke the command. The output is collected automatically by the service and returned to the caller. For an example of the latest document schema, see Creating Systems Manager Documents.

{
  "schemaVersion":"1.2",
  "description":"Restart the specified docker container.",
  "parameters":{
    "param":{
      "type":"String",
      "description":"(Required) name of the container to restart.",
      "maxChars":1024
    }
  },
  "runtimeConfig":{
    "aws:runShellScript":{
      "properties":[
        {
          "id":"0.aws:runShellScript",
          "runCommand":[
            "docker restart {{param}}"
          ]
        }
      ]
    }
  }
}
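Outside of cuttysh, the same document can be invoked with the standard CLI. A hedged sketch (the document name and instance ID are examples):

# Run the document on one instance, passing the container name as the parameter
aws ssm send-command \
    --document-name "restart-docker-container" \
    --instance-ids "i-0cf0e84" \
    --parameters '{"param":["pega-web"]}'

# Fetch the output using the CommandId returned by send-command
aws ssm list-command-invocations --command-id <CommandId> --details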

Putting Run Command into practice at Pegasystems
The Pegasystems provisioning system sits on AWS CloudFormation, which is used to deploy and update Pega Cloud resources. Layered on top of it is the Pega Provisioning Engine, a serverless, Lambda-based service that manages a library of CloudFormation templates and Ansible playbooks.

A Configuration Management Database (CMDB) tracks all the configuration details and history of every deployment and update, and lays out its data using a hierarchical directory naming convention. The following diagram shows how the various systems are integrated:

For cloud system management, Pega operations uses a command line version called cuttysh and a graphical version based on the Pega 7 platform, called the Pega Operations Portal. Both tools allow you to browse the CMDB of deployed environments, view configuration settings, and interact with deployed EC2 instances through Run Command.

CLI Walkthrough
Here is a CLI walkthrough for looking into a customer deployment and interacting with instances using Run Command.

Launching the cuttysh tool brings you to the root of the CMDB and a list of the provisioned customers:

% cuttysh
d CUSTA
d CUSTB
d CUSTC
d CUSTD

You interact with the CMDB using standard Linux shell commands, such as cd, ls, cat, and grep. Items prefixed with s are services that have viewable properties. Items prefixed with d are navigable subdirectories in the CMDB hierarchy.

In this example, change directories into customer CUSTB’s portion of the CMDB hierarchy, and then further into a provisioned Pega environment called env1, under the Dev network. The tool displays the artifacts that are provisioned for that environment. These entries map to provisioned CloudFormation templates.

> cd CUSTB
/ROOT/CUSTB/us-east-1 > cd DEV/env1

The ls -l command shows the version of the provisioned resources. These version numbers map back to source control–managed artifacts for the CloudFormation, Ansible, and other components that compose a version of the Pega Cloud.

/ROOT/CUSTB/us-east-1/DEV/env1 > ls -l
s 1.2.5 RDSDatabase 
s 1.2.5 PegaAppTier 
s 7.2.1 Pega7 

Now, use Run Command to interact with the deployed environments. To do this, use the attach command and specify the service with which to interact. In the following example, you attach to the Pega Web Tier. Using the information in the CMDB and instance tags, the CLI finds the corresponding EC2 instances and displays some basic information about them. This deployment has three instances.

/ROOT/CUSTB/us-east-1/DEV/env1 > attach PegaWebTier
 # ID         State  Public Ip    Private Ip  Launch Time
 0 i-0cf0e84 running 52.63.216.42 10.96.15.70 2017-01-16 
 1 i-0043c1d running 53.47.191.22 10.96.15.43 2017-01-16 
 2 i-09b879e running 55.93.118.27 10.96.15.19 2017-01-16 

From here, you can use the run command to invoke Run Command documents. In the following example, you run the docker-ps document against instance 0 (the first one on the list). EC2 executes the command and returns the output to the CLI, which in turn shows it.

/ROOT/CUSTB/us-east-1/DEV/env1 > run 0 docker-ps
. . 
CONTAINER ID IMAGE             CREATED      STATUS        NAMES
2f187cc38c1  pega-7.2         10 weeks ago  Up 8 weeks    pega-web

Using the same command and some of the other documents that have been defined, you can restart a Docker container or even pull back the contents of a file to your local system. When you get a file, Run Command also leaves a copy in an S3 bucket in case you want to pass the link along to a colleague.

/ROOT/CUSTB/us-east-1/DEV/env1 > run 0 docker-restart pega-web
..
pega-web

/ROOT/CUSTB/us-east-1/DEV/env1 > run 0 get-file /var/log/cfn-init-cmd.log
. . . . . 
get-file

Data has been copied locally to: /tmp/get-file/i-0563c9e/data
Data is also available in S3 at: s3://my-bucket/CUSTB/cuttysh/get-file/data

Now, leverage the Run Command ability to do more than one thing at a time. In the following example, you attach to a deployment with three running instances and want to see the uptime for each instance. Using the par (parallel) option for run, the CLI tells Run Command to execute the uptime document on all instances in parallel.

/ROOT/CUSTB/us-east-1/DEV/env1 > run par uptime
 …
Output for: i-006bdc991385c33
 20:39:12 up 15 days, 3:54, 0 users, load average: 0.42, 0.32, 0.30

Output for: i-09390dbff062618
 20:39:12 up 15 days, 3:54, 0 users, load average: 0.08, 0.19, 0.22

Output for: i-08367d0114c94f1
 20:39:12 up 15 days, 3:54, 0 users, load average: 0.36, 0.40, 0.40

Commands are complete.
/ROOT/PEGACLOUD/CUSTB/us-east-1/PROD/prod1 > 

Summary
Run Command improves productivity by giving you faster access to systems and the ability to run operations across a group of instances. Pega Cloud operations has integrated Run Command with other operational tools to provide a clean and secure method for managing systems. This greatly improves operational efficiency, and gives greater control over who can do what in managed deployments. The Pega continual improvement process regularly assesses why operators need access, and turns those operations into new Run Command documents to be added to the library. In fact, their long-term goal is to stop deploying cloud systems with SSH enabled.

If you have any questions or suggestions, please leave a comment for us!

— Ananth and Rich

Secure API Access with Amazon Cognito Federated Identities, Amazon Cognito User Pools, and Amazon API Gateway

Post Syndicated from Ed Lima original https://aws.amazon.com/blogs/compute/secure-api-access-with-amazon-cognito-federated-identities-amazon-cognito-user-pools-and-amazon-api-gateway/

Ed Lima, Solutions Architect

 

Our identities are what define us as human beings. Philosophical discussions aside, it also applies to our day-to-day lives. For instance, I need my work badge to get access to my office building or my passport to travel overseas. My identity in this case is attached to my work badge or passport. As part of the system that checks my access, these documents or objects help define whether I have access to get into the office building or travel internationally.

This exact same concept can also be applied to cloud applications and APIs. To provide secure access to your application users, you define who can access the application resources and what kind of access can be granted. Access is based on identity controls that can confirm authentication (AuthN) and authorization (AuthZ), which are different concepts. According to Wikipedia:

 

The process of authorization is distinct from that of authentication. Whereas authentication is the process of verifying that “you are who you say you are,” authorization is the process of verifying that “you are permitted to do what you are trying to do.” This does not mean authorization presupposes authentication; an anonymous agent could be authorized to a limited action set.

Amazon Cognito lets you build, secure, and scale a solution to handle user management and authentication, and to sync data across platforms and devices. In this post, I discuss the different ways that you can use Amazon Cognito to authenticate API calls to Amazon API Gateway and secure access to your own API resources.

 

Amazon Cognito Concepts

 

It’s important to understand that Amazon Cognito provides three different services:

  • Amazon Cognito Federated Identities
  • Amazon Cognito User Pools
  • Amazon Cognito Sync

Today, I discuss the use of the first two. One service doesn’t need the other to work; however, they can be configured to work together.
 

Amazon Cognito Federated Identities

 
To use Amazon Cognito Federated Identities in your application, create an identity pool. An identity pool is a store of user data specific to your account. It can be configured to require an identity provider (IdP) for user authentication, after you enter details such as app IDs or keys related to that specific provider.

After the user is validated, the provider sends an identity token to Amazon Cognito Federated Identities. In turn, Amazon Cognito Federated Identities contacts the AWS Security Token Service (AWS STS) to retrieve temporary AWS credentials based on a configured, authenticated IAM role linked to the identity pool. The role has appropriate IAM policies attached to it and uses these policies to provide access to other AWS services.
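To make that flow concrete, here is a rough sketch of the two calls involved, using the AWS CLI; the identity pool ID and IdP token are placeholders, and in practice an SDK makes these calls for you:

# Exchange the IdP token for a Cognito identity ID
aws cognito-identity get-id \
    --identity-pool-id us-east-1:11111111-2222-3333-4444-555555555555 \
    --logins graph.facebook.com=<idp-access-token>

# Exchange the identity ID (and the same token) for temporary AWS credentials
aws cognito-identity get-credentials-for-identity \
    --identity-id <IdentityId-from-get-id> \
    --logins graph.facebook.com=<idp-access-token>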

Amazon Cognito Federated Identities currently supports the IdPs listed in the following graphic.

Continue reading Secure API Access with Amazon Cognito Federated Identities, Amazon Cognito User Pools, and Amazon API Gateway

New Features for IAM Policy Summaries – An Easier Way to Detect Potential Typos in Your IAM Policies

Post Syndicated from Joy Chatterjee original https://aws.amazon.com/blogs/security/new-features-for-iam-policy-summaries-an-easier-way-to-detect-potential-typos-in-your-iam-policies/

Last month, we introduced policy summaries to make it easier for you to understand the permissions in your AWS Identity and Access Management (IAM) policies. On Thursday, May 25, I announced three new features that have been added to policy summaries and reviewed resource summaries. Yesterday, I reviewed the benefits of being able to view services and actions that are implicitly denied by a policy.

Today, I demonstrate how policy summaries make it easier for you to detect potential typos in your policies by showing you unrecognized services and actions, and how you can use this information to find and fix those typos.

Unrecognized services and actions

You can now use policy summaries to see unrecognized services and actions. One key benefit of this feature is that it helps you find possible typos in a policy. Let’s say your developer, Bob, creates a policy granting full List and Read permissions to some Amazon S3 buckets and full access to Amazon DynamoDB. Unfortunately, when testing the policy, Bob sees “Access denied” messages when he tries to use those services. To troubleshoot, Bob returns to the IAM console to review the policy summary. Bob sees that he inadvertently misspelled “DynamoDB” as “DynamoBD” (reversing the position of the last two letters) in the policy and notices that he does not have all of the list permissions for S3.

Screenshot showing the "dynamobd" typo
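For illustration only, a statement with this kind of typo might look something like the following sketch (do not copy); the transposed letters in the service prefix and the extra "s" in the S3 list action are the errors Bob needs to fix:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TypoExampleDoNotCopy",
            "Effect": "Allow",
            "Action": ["dynamobd:*",
                       "s3:ListBuckets",
                       "s3:Get*"],
            "Resource": ["*"]
        }
    ]
}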

When Bob chooses S3, he sees that ListBuckets is an unrecognized action, as shown in the following screenshot.

Screenshot showing that ListBuckets is not recognized by IAM

Bob chooses Show remaining 26 and realizes the correct action is s3:ListBucket and not s3:ListBuckets. He also confirms this by looking at the list of actions for S3.

Screenshot showing the true action name

Bob fixes the mistakes by choosing the Edit policy button, making the necessary updates, and saving the changes. He returns to the policy summary and sees that the policy no longer has unrecognized services and actions.

Exceptions

If you have a service or action that appears in the Unrecognized services or Unrecognized actions section of the policy summary, it may be because the service is in preview mode. If you think a service or action should be recognized, please submit feedback by choosing the Feedback link located in the bottom left corner of the IAM console.

Summary

Policy summaries make it easier to troubleshoot possible errors in policies. The newest updates I have explored this week on the AWS Security Blog make it easy to understand the resources defined in a policy, show the services and actions that are implicitly denied by a policy, and help you troubleshoot possible typos in a policy. To see policy summaries in your AWS account, sign in to the IAM console and navigate to any policy on the Policies page of the IAM console or the Permissions tab on a user’s page.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

– Joy

New Features for IAM Policy Summaries – Services and Actions Not Granted by a Policy

Post Syndicated from Joy Chatterjee original https://aws.amazon.com/blogs/security/new-features-for-iam-policy-summaries-services-and-actions-not-granted-by-a-policy/

Last month, we introduced policy summaries to make it easier for you to understand the permissions in your AWS Identity and Access Management (IAM) policies. On Thursday, May 25, I announced three new features that have been added to policy summaries and reviewed one of those features: resource summaries. Tomorrow, I will discuss how policy summaries can help you find potential typos in your IAM policies.

Today, I describe how you can view the services and actions that are implicitly denied, that is, the services and actions not granted by an IAM policy. This feature allows you to see which actions are not included at each access level for a service that has limited access, which can help you pinpoint the actions that are necessary to grant, for example, Full: List and Read permissions to a specific service. In this blog post, I cover two examples that show how you can use this feature to see which services and actions are not granted by a policy.

Show remaining services and actions

From the policy summary in the IAM console, you can now see the services and actions that are not granted by a policy by choosing the link next to the Allow heading (see the following screenshot). This enables you to view the remaining services or actions in a service with partial access, without having to go to the documentation.

Let’s look at the AWS managed policy for the Developer Power User. This policy grants access to 99 out of 100 services, as shown in the following screenshot. You might want to view the remaining service to determine if you should grant access to it, or you might want to confirm that this policy does not grant access to IAM. To see which service is missing from the policy, I choose the Show remaining 1 link.

Screenshot showing the "Show remaining 1" link

I then scroll down and look for the service that has None as the access level. I see that IAM is not included for this policy.

Screenshot showing that the policy does not grant access to IAM

To go back to the original view, I choose Hide remaining 1.

Screenshot showing the "Hide remaining 1" link

Let’s look at how this feature can help you pinpoint which actions you need to grant for a specific access level. For policies that grant limited access to a service, this link shows, in the service details summary, the actions that are not granted by the policy. Let’s say I created a policy that grants full Amazon S3 list and read access. After creating the policy, I realize I did not grant all the list actions because I see Limited: List in the policy summary, as shown in the following screenshot.

Screenshot showing Limited: List in the policy summary

Rather than going to the documentation to find out which actions I am missing, I review the policy summary to determine what I forgot to include. When I choose S3, I see that only 3 out of 4 actions are granted. When I choose Show remaining 27, I see the list action I might have forgotten to include at the List access level.

Screenshot showing the "Show remaining 27" link

The following screenshot shows I forgot to include s3:ListObjects in the policy. I choose Edit policy and add this action to the IAM policy to ensure I have Full: List and Read access to S3.

Screenshot showing the action left out of the policy

For some policies, you will not see the links shown in this post. This is because the policy grants full access to the services and there are no remaining services to be granted.

Summary

Policy summaries make it easy to view and understand permissions and resources defined in a policy without having to view the associated JSON. You can now view services and actions not included in a policy to see what was omitted by the policy without having to refer to the related documentation. To see policy summaries in your AWS account, sign in to the IAM console and navigate to any policy on the Policies page of the IAM console or the Permissions tab on a user’s page. Tomorrow, I will explain how policy summaries can help you find and troubleshoot typos in IAM policies.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

– Joy

New Features for IAM Policy Summaries – Resource Summaries

Post Syndicated from Joy Chatterjee original https://aws.amazon.com/blogs/security/new-features-for-iam-policy-summaries-resource-summaries/

In March, we introduced policy summaries, which make it easier for you to understand the permissions in your AWS Identity and Access Management (IAM) policies. Today, we added three new features to policy summaries to improve the experience of understanding and troubleshooting your policies. First, we added resource summaries for you to see the resources defined in your policies. Second, you can now see which services and actions are implicitly denied by a policy. This allows you to see the remaining actions available for a service with limited access. Third, it is now easier for you to identify potential typos in your policies because you can now see which services and actions are unrecognized by IAM. Today, Tuesday, and Wednesday, I will demonstrate these three new features. In today’s post, I review resource summaries.

Resource summaries

Policy summaries now show you the resources defined in a policy. Previously, policy summaries displayed either All for all resources, the Amazon Resource Name (ARN) for one resource, or Multiple for multiple resources specified in the policy. Starting today, you can see the resource type, region, and account ID to summarize the list of resources defined for each action in a policy. Let’s review a policy summary that specifies multiple resources.

The following policy grants access to three Amazon S3 buckets with multiple conditions.

{
 "Version":"2012-10-17",
 "Statement":[
   {
     "Effect":"Allow",
     "Action":["s3:PutObject","s3:PutObjectAcl"],
     "Resource":["arn:aws:s3:::Apple_bucket"],
     "Condition":{"StringEquals":{"s3:x-amz-acl":["public-read"]}}
   },{
     "Effect":"Allow",
     "Action":["s3:PutObject","s3:PutObjectAcl"],
     "Resource":["arn:aws:s3:::Orange_bucket"],
     "Condition":{"StringEquals":{"s3:prefix":["custom", "test"]}}
   },{
     "Effect":"Allow",
     "Action":["s3:PutObject","s3:PutObjectAcl"],
     "Resource":["arn:aws:s3:::Purple_bucket"],
     "Condition":{"DateGreaterThan":{"aws:CurrentTime":"2016-10-31T05:00:00Z"}}
   }
 ]
}

The policy summary (see the following screenshot) shows Limited: Write, Permissions management actions for S3 on Multiple resources and request conditions. Limited means that some but not all of the actions in the Write and Permissions management access levels are granted by the policy.

Screenshot of the policy summary

If I choose S3, I see that the actions defined in the policy grant access to multiple resources, as shown in the following screenshot. To see the resource summary, I can choose either PutObject or PutObjectAcl.

Screenshot showing that the actions defined in the policy grant access to multiple resources

I choose PutObjectAcl to see the resources and conditions defined in the policy for this S3 action. If the policy has one condition, I see it in the policy summary. I can view multiple conditions in the JSON.

Screenshot showing the resources and the conditions defined in the policy for this S3 action

As the preceding screenshot shows, the PutObjectAcl action has access to three S3 buckets with respective request conditions.

Summary

Policy summaries make it easy to view and understand the permissions and resources defined in a policy without having to view the associated JSON. To see policy summaries in your AWS account, sign in to the IAM console and navigate to any policy on the Policies page of the IAM console or the Permissions tab on a user’s page. On Tuesday, I will review the benefits of viewing the services and actions not granted in a policy.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

– Joy

Introducing an Easier Way to Delegate Permissions to AWS Services: Service-Linked Roles

Post Syndicated from Abhishek Pandey original https://aws.amazon.com/blogs/security/introducing-an-easier-way-to-delegate-permissions-to-aws-services-service-linked-roles/

Some AWS services create and manage AWS resources on your behalf. To do this, these services require you to delegate permissions to them by using AWS Identity and Access Management (IAM) roles. Today, AWS IAM introduces service-linked roles, which give you an easier and more secure way to delegate permissions to AWS services. To start, you can use service-linked roles with Amazon Lex, a service that enables you to build conversational interfaces in any application by using voice and text. Over time, more AWS services will use service-linked roles as a way for you to delegate permissions to them to create and manage AWS resources on your behalf. In this blog post, I walk through the details of service-linked roles and show how to use them.

Creation and management of service-linked roles

Each service-linked role links to an AWS service, which is called the linked service. Service-linked roles provide a secure way to delegate permissions to AWS services because only the linked service can assume a service-linked role. Additionally, AWS automatically defines and sets the permissions of service-linked roles, depending on the actions that the linked service performs on your behalf. This makes it easier for you to manage the permissions you delegate to AWS services. AWS allows only those changes to service-linked roles that do not remove the permissions required by the linked service to manage your resources, preventing you from making any changes that would leave your AWS resources in an inconsistent state. Service-linked roles also help you meet your monitoring and auditing requirements because all actions performed on your behalf by an AWS service using a service-linked role appear in your AWS CloudTrail logs.

When you work with an AWS service that uses service-linked roles, the service automatically creates a service-linked role for you. After that, whenever the service must act on your behalf to manage your resources, it assumes the service-linked role. You can view the details of the service-linked roles in your account by using the IAM console, IAM APIs, or the AWS CLI.
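
Because service-linked roles are created under the reserved /aws-service-role/ path, one quick way to list them from the CLI is a sketch like this:

aws iam list-roles --path-prefix /aws-service-role/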

Service-linked roles follow a specific naming convention that includes a mandatory prefix that is defined by AWS and an optional suffix defined by you. The examples in the following table show how the role names of service-linked roles may appear.

Service-linked role name                    Prefix                               Optional suffix
AWSServiceRoleForLexBots                    AWSServiceRoleForLexBots             Not set
AWSServiceRoleForElasticBeanstalk_myRole    AWSServiceRoleForElasticBeanstalk    myRole

If you are the administrator of your account and you do not want to grant permissions to other users to create roles or delegate permissions to AWS services, you can create service-linked roles for users in your account by using the IAM console, IAM APIs, or the AWS CLI. For more information about how to create service-linked roles through IAM, see the IAM documentation about creating a role to delegate permissions to an AWS service.
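
A sketch of the equivalent AWS CLI call for Amazon Lex; for a service that supports custom suffixes, you would also pass the optional --custom-suffix parameter:

aws iam create-service-linked-role --aws-service-name lex.amazonaws.com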

To create a service-linked role or to enable an AWS service to create one on your behalf, you must have permission for the iam:CreateServiceLinkedRole action. The following IAM policy grants the permission to create service-linked roles for Amazon Lex.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCreationOfServiceLinkedRoleForLex",
            "Effect": "Allow",
            "Action": ["iam:CreateServiceLinkedRole"],
            "Resource": ["arn:aws:iam::*:role/aws-service-role/lex.amazonaws.com/AWSServiceRoleForLex*"],
            "Condition": {
                "StringLike": {
                    "iam:AWSServiceName": "lex.amazonaws.com"
                }
            }
        }
    ]
}

The preceding policy allows the iam:CreateServiceLinkedRole action when the linked service is Amazon Lex, and the name of the service-linked role starts with AWSServiceRoleForLex. For more information, see Working with Policies.

If you no longer wish to use a specific AWS service, you can revoke permissions for that service by deleting the service-linked role. You can do this from the linked service, and the service might require you to delete the resources that depend on the service-linked role. This helps ensure that you do not inadvertently delete a role that is required for your AWS resources to function properly. To learn more about how to delete a service-linked role, see the linked service’s documentation.
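
If you delete the role through IAM itself, the API/CLI flow is asynchronous; a sketch of the two calls, where the deletion task ID in the second call is the value returned by the first:

# Request deletion of the service-linked role
aws iam delete-service-linked-role --role-name AWSServiceRoleForLexBots

# Check whether the deletion succeeded or failed
aws iam get-service-linked-role-deletion-status --deletion-task-id <deletion-task-id>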

Permissions of service-linked roles

Just like existing IAM roles, the permissions of service-linked roles come from two policies: a permission policy and a trust policy. The permission policy determines what the role can and cannot do, and the trust policy defines who can assume the role. AWS automatically sets the permission and trust policies of service-linked roles.

For the permission policy, service-linked roles use an AWS managed policy. This means that when a service adds a new feature, AWS automatically updates the managed policy to enable the new functionality without requiring you to change the policy. In most cases, you do not have to update the permission policy of a service-linked role. However, some services may require you to add specific permissions to the role such as access to a specific Amazon S3 bucket. To learn more about how to add permissions if a service requires specific permissions, see the linked service’s documentation.

Only the linked AWS service can assume a service-linked role, which is why you cannot modify the trust policy of a service-linked role. You can allow your users to create service-linked roles for AWS services while not permitting them to escalate their own privileges. For example, imagine Alice is a developer on your team and she wants to delegate permissions to Amazon Lex. When Alice creates a service-linked role for Amazon Lex, AWS automatically attaches the permission and trust policies. The permission policy includes only the permissions that Amazon Lex needs to manage your resources (this best practice is known as least privilege), and the trust policy defines Amazon Lex as the trusted entity. As a result, Alice is able to create a service-linked role to delegate permissions to Amazon Lex. However, she is unable to edit the trust policy to include additional trusted entities. This prevents her from granting unapproved access to other users or escalating her own privileges, while still having the necessary permissions to create service-linked roles for Amazon Lex.
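
For illustration, the trust policy that AWS attaches to the Amazon Lex service-linked role would look like the following, with only the linked service named as the trusted principal:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "lex.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}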

How to create a service-linked role by using the IAM console

The following steps lead you through creating a service-linked role by using the IAM console. However, before you create a service-linked role, make sure you have the right permissions to do so.

To create a service-linked role by using the IAM console:

  1. Navigate to the IAM console and choose Roles in the navigation pane.
    Screenshot of the IAM console
  2. Choose Create new role.
    Screenshot showing the "Create new role" button
  3. On the Select role type page, in the AWS service-linked role section, choose the AWS service for which you want to create the role. For this example, I choose Amazon Lex – Bots.
    Screenshot of choosing "Amazon Lex - Bots"
  4. Notice that the role name prefix is automatically populated. Type the role name suffix for the service-linked role. Some AWS services, such as Amazon Lex, do not support custom suffixes, in which case you should leave the role name suffix box blank.
    Screenshot of the role name suffix box
  5. Include a description of the new role. Notice that IAM automatically suggests a description for this role, which you can edit. For this example, I keep the suggested description.
    Screenshot of the "Role description" box
  6. Choose Create role. After the role is created, you can view it in the IAM console. Service-linked roles are marked with a cube-shaped icon in the console to help you distinguish these roles from other roles in your account.

Conclusion

With service-linked roles, delegating permissions to AWS services is easier because when you work with an AWS service that uses these roles, the service creates the role for you. You do not have to create IAM policies for delegating permissions to AWS services, and IAM rejects any changes to these roles that would interfere with your AWS resources. Delegation of permissions is secure because only the linked service is able to use these roles.

To see which AWS services support service-linked roles, see AWS services that work with IAM. If you have any comments about service-linked roles, submit a comment in the “Comments” section below. If you have questions about working with service-linked roles, please start a new thread on the IAM forum.

– Abhishek

Scaling Your Desktop Application Streams with Amazon AppStream 2.0

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/scaling-your-desktop-application-streams-with-amazon-appstream-2-0/

Want to stream desktop applications to a web browser, without rewriting them? Amazon AppStream 2.0 is a fully managed, secure, application streaming service. An easy way to learn what the service does is to try out the end-user experience, at no cost.

In this post, I describe how you can scale your AppStream 2.0 environment, and achieve some cost optimizations. I also add some setup and monitoring tips.

AppStream 2.0 workflow

You import your applications into AppStream 2.0 using an image builder. The image builder allows you to connect to a desktop experience from within the AWS Management Console, and then install and test your apps. Then, create an image that is a snapshot of the image builder.

After you have an image containing your applications, select an instance type and launch a fleet of streaming instances. Each instance in the fleet is used by only one user at a time, so choose an instance type for the fleet that matches the performance that your applications need. Finally, attach the fleet to a stack to set up user access. The following diagram shows the role of each resource in the workflow.

Figure 1: Describing an AppStream 2.0 workflow


Setting up AppStream 2.0

To get started, set up an example AppStream 2.0 stack or use the Quick Links on the console. For this example, I named my stack ds-sample, selected a sample image, and chose the stream.standard.medium instance type. You can explore the resources that you set up in the AWS console, or use the describe-stacks and describe-fleets commands as follows:

Figure 2: Describing an AppStream 2.0 stack


Figure 3: Describing an AppStream 2.0 fleet

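Assuming the stack and fleet names used in this example, the two calls would look like this sketch:

aws appstream describe-stacks --names ds-sample
aws appstream describe-fleets --names ds-sample-fleet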

To set up user access to your streaming environment, you can use your existing SAML 2.0 compliant directory. Your users can then use their existing credentials to log in. Alternatively, to quickly test a streaming connection, or to start a streaming session from your own website, you can create a streaming URL. In the console, choose Stacks, Actions, Create URL, or call create-streaming-url as follows:

Figure 4: Creating a streaming URL

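A sketch of the same call from the CLI, assuming the stack and fleet from this example; the user ID is an arbitrary identifier you choose, and --validity (in seconds) is optional:

aws appstream create-streaming-url --stack-name ds-sample --fleet-name ds-sample-fleet --user-id test-user --validity 3600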

You can paste the streaming URL into a browser, and open any of the displayed applications.


Now that you have a sample environment set up, here are a few tips on scaling.

Scaling and cost optimization for AppStream 2.0

To provide an instant-on streaming connection, the instances in an AppStream 2.0 fleet are always running. You are charged for running instances, and each running instance can serve exactly one user at any time. To optimize your costs, match the number of running instances to the number of users who want to stream apps concurrently. This section walks through three options for doing this:

  • Fleet Auto Scaling
  • Fixed fleets based on a schedule
  • Fleet Auto Scaling with schedules

Fleet Auto Scaling

To dynamically update the number of running instances, you can use Fleet Auto Scaling. This feature allows you to scale the size of the fleet automatically between a minimum and maximum value based on demand. This is useful if you have user demand that changes constantly, and you want to scale your fleet automatically to match this demand. For examples about setting up and managing scaling policies, see Fleet Auto Scaling.

You can trigger changes to the fleet through the available Amazon CloudWatch metrics:

  • CapacityUtilization – the percentage of running instances already used.
  • AvailableCapacity – the number of instances that are unused and can receive connections from users.
  • InsufficientCapacityError – an error that is triggered when there is no available running instance to match a user’s request.

You can create and attach scaling policies using the AWS SDK or AWS Management Console. I find it convenient to set up the policies using the console. Use the following steps:

  1. In the AWS Management Console, open AppStream 2.0.
  2. Choose Fleets, select a fleet, and choose Scaling Policies.
  3. For Minimum capacity and Maximum capacity, enter values for the fleet.

Figure 5: Fleets tab for setting scaling policies


  4. Create scale out and scale in policies by choosing Add Policy in each section.

Figure 6: Adding a scale out policy


Figure 7: Adding a scale in policy


After you create the policies, they are displayed as part of your fleet details.


The scaling policies are triggered by CloudWatch alarms. These alarms are automatically created on your behalf when you create the scaling policies using the console. You can view and modify the alarms via the CloudWatch console.

Figure 8: CloudWatch alarms for triggering fleet scaling

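If you prefer to script this setup instead of using the console, the same configuration can be sketched with the Application Auto Scaling CLI; the role ARN, capacities, thresholds, and step adjustments below are placeholders:

# Register the fleet as a scalable target with minimum and maximum bounds
aws application-autoscaling register-scalable-target \
    --service-namespace appstream \
    --resource-id fleet/ds-sample-fleet \
    --scalable-dimension appstream:fleet:DesiredCapacity \
    --min-capacity 1 --max-capacity 10 \
    --role-arn arn:aws:iam::123456789012:role/service-role/ApplicationAutoScalingForAmazonAppStreamAccess

# Create a scale-out policy that adds two instances when triggered
aws application-autoscaling put-scaling-policy \
    --service-namespace appstream \
    --resource-id fleet/ds-sample-fleet \
    --scalable-dimension appstream:fleet:DesiredCapacity \
    --policy-name scale-out-on-high-utilization \
    --policy-type StepScaling \
    --step-scaling-policy-configuration 'AdjustmentType=ChangeInCapacity,StepAdjustments=[{MetricIntervalLowerBound=0,ScalingAdjustment=2}],Cooldown=120'

The put-scaling-policy call returns a policy ARN; attaching that ARN as the action of a CloudWatch alarm (for example, one on CapacityUtilization) is what actually triggers the scaling.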

Fixed fleets based on a schedule

An alternative option to optimize costs and respond to predictable demand is to fix the number of running instances based on the time of day or day of the week. This is useful if you have a fixed number of users signing in at different times of the day, in scenarios such as training classes, call center shifts, or school computer labs. You can easily set the number of running instances by using the AppStream 2.0 update-fleet command: update the Desired value for the compute capacity of your fleet, and the number of Running instances changes to match the Desired value that you set, as follows:

Figure 9: Updating desired capacity for your fleet

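A sketch of the call, using the fleet from this example:

aws appstream update-fleet --name ds-sample-fleet --compute-capacity DesiredInstances=3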

Set up a Lambda function to update your fleet size automatically. Follow the example below to set up your own functions. If you haven’t used Lambda before, see Step 2: Create a HelloWorld Lambda Function and Explore the Console.

To create a function to change the fleet size

  1. In the Lambda console, choose Create a Lambda function.
  2. Choose the Blank Function blueprint. This gives you an empty blueprint to which you can add your code.
  3. Skip the trigger section for now. Later on, you can add a trigger based on time, or any other input.
  4. In the Configure function section:
    1. Provide a name and description.
    2. For Runtime, choose Node.js 4.3.
    3. Under Lambda function handler and role, choose Create a custom role.
    4. In the IAM wizard, enter a role name, for example Lambda-AppStream-Admin. Leave the defaults as is.
    5. After the IAM role is created, attach an AppStream 2.0 managed policy “AmazonAppStreamFullAccess” to the role. For more information, see Working with Managed Policies. This allows Lambda to call the AppStream 2.0 API on your behalf. You can edit and attach your own IAM policy, to limit access to only actions you would like to permit. To learn more, see Controlling Access to Amazon AppStream 2.0.
    6. Leave the default values for the rest of the fields, and choose Next, Create function.
  5. To change the AppStream 2.0 fleet size, choose Code and add some sample code, as follows:
    'use strict';
    
    /**
    This AppStream2 Update-Fleet blueprint sets up a schedule for a streaming fleet
    **/
    
    const AWS = require('aws-sdk');
    const appstream = new AWS.AppStream();
    const fleetParams = {
      Name: 'ds-sample-fleet', /* required */
      ComputeCapacity: {
        DesiredInstances: 1 /* required */
      }
    };
    
    exports.handler = (event, context, callback) => {
        console.log('Received event:', JSON.stringify(event, null, 2));
    
        var resource = event.resources[0];
        var increase = resource.includes('weekday-9am-increase-capacity')
    
        try {
            if (increase) {
                fleetParams.ComputeCapacity.DesiredInstances = 3
            } else {
                fleetParams.ComputeCapacity.DesiredInstances = 1
            }
            appstream.updateFleet(fleetParams, (error, data) => {
                if (error) {
                    console.log(error, error.stack);
                    return callback(error);
                }
                console.log(data);
                return callback(null, data);
            });
        } catch (error) {
            console.log('Caught Error: ', error);
            callback(error);
        }
    };

  6. Test the code. Choose Test and use the “Hello World” test template. The first time you do this, choose Save and Test. Create a test input like the sample event shown after this list to trigger the scaling update.

  7. You see output text showing the result of the update-fleet call. You can also use the CLI to check the effect of executing the Lambda function.
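
For reference, here is a minimal test event of the kind CloudWatch Events delivers for a scheduled rule; the account ID, region, and timestamp are placeholders:

{
  "version": "0",
  "id": "cdc73f9d-aea9-11e3-9d5a-835b769c0d9c",
  "detail-type": "Scheduled Event",
  "source": "aws.events",
  "account": "123456789012",
  "time": "2017-05-15T17:00:00Z",
  "region": "us-west-2",
  "resources": [
    "arn:aws:events:us-west-2:123456789012:rule/weekday-9am-increase-capacity"
  ],
  "detail": {}
}

The function inspects only resources[0], so the rule name embedded in the ARN is what determines whether the fleet scales out or in.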

Next, to set up a time-based schedule, set a trigger for invoking the Lambda function.

To set a trigger for the Lambda function

  1. Choose Triggers, Add trigger.
  2. Choose CloudWatch Events – Schedule.
  3. Enter a rule name, such as “weekday-9am-increase-capacity”, and a description. For Schedule expression, choose cron. You can edit the value for the cron later.
  4. After the trigger is created, open the event weekday-9am-increase-capacity.
  5. In the CloudWatch console, edit the event details. CloudWatch Events evaluates cron expressions in UTC, so to scale out the fleet at 9:00 AM Pacific Time on weekdays, set the expression to 00 17 ? * MON-FRI *. Adjust the hour if you are in a different time zone.
  6. You can also add another event that triggers at the end of a weekday.


This setup now triggers scale-out and scale-in automatically, based on the time schedule that you set.

Fleet Auto Scaling with schedules

You can choose to combine both the fleet scaling and time-based schedule approaches to manage more complex scenarios. This is useful to manage the number of running instances based on business and non-business hours, and still respond to changes in demand. You could programmatically change the minimum and maximum sizes for your fleet based on time of day or day of week, and apply the default scale-out or scale-in policies. This allows you to respond to predictable minimum demand based on a schedule.

For example, at the start of a work day, you might expect a certain number of users to request streaming connections at one time. You wouldn’t want to wait for the fleet to scale out and meet this requirement. However, during the course of the day, you might expect the demand to scale in or out, and would want to match the fleet size to this demand.

To achieve this, set up the scaling polices via the console, and create a Lambda function to trigger changes to the minimum, maximum, and desired capacity for your fleet based on a schedule. Replace the code for the Lambda function that you created earlier with the following code:

'use strict';

/**
This AppStream2 Update-Fleet function sets up a schedule for a streaming fleet
**/

const AWS = require('aws-sdk');
const appstream = new AWS.AppStream();
const applicationAutoScaling = new AWS.ApplicationAutoScaling();

const fleetParams = {
  Name: 'ds-sample-fleet', /* required */
  ComputeCapacity: {
    DesiredInstances: 1 /* required */
  }
};

var scalingParams = {
  ResourceId: 'fleet/ds-sample-fleet', /* required - fleet name*/
  ScalableDimension: 'appstream:fleet:DesiredCapacity', /* required */
  ServiceNamespace: 'appstream', /* required */
  MinCapacity: 1, /* placeholder defaults; the handler overwrites these values below */
  MaxCapacity: 6,
  RoleARN: 'arn:aws:iam::659382443255:role/service-role/ApplicationAutoScalingForAmazonAppStreamAccess' /* replace with the corresponding role ARN in your account */
};

exports.handler = (event, context, callback) => {
    
    console.log('Received this event now:', JSON.stringify(event, null, 2));
    
    var resource = event.resources[0];
    var increase = resource.includes('weekday-9am-increase-capacity')

    try {
        if (increase) {
            //usage during business hours - start at capacity of 10 and scale
            //if required. This implies at least 10 users can connect instantly. 
            //More users can connect as the scaling policy triggers addition of
            //more instances. Maximum cap is 20 instances - fleet will not scale
            //beyond 20. This is the cap for number of users.
            fleetParams.ComputeCapacity.DesiredInstances = 10
            scalingParams.MinCapacity = 10
            scalingParams.MaxCapacity = 20
        } else {
            //usage during non-business hours - start at capacity of 1 and scale
            //if required. This implies only 1 user can connect instantly. 
            //More users can connect as the scaling policy triggers addition of
            //more instances. 
            fleetParams.ComputeCapacity.DesiredInstances = 1
            scalingParams.MinCapacity = 1
            scalingParams.MaxCapacity = 10
        }
        
        //Update minimum and maximum capacity used by the scaling policies
        applicationAutoScaling.registerScalableTarget(scalingParams, (error, data) => {
             if (error) console.log(error, error.stack); 
             else console.log(data);                     
            });
            
        //Update the desired capacity for the fleet. This sets 
        //the number of running instances to desired number of instances
        appstream.updateFleet(fleetParams, (error, data) => {
            if (error) {
                console.log(error, error.stack);
                return callback(error);
            }

            console.log(data);
            return callback(null, data);
        });
            
    } catch (error) {
        console.log('Caught Error: ', error);
        callback(error);
    }
};

Note: To successfully execute this code, you need to add IAM policies to the role used by the Lambda function. The policies allow Lambda to call the Application Auto Scaling service on your behalf.

Figure 10: Inline policies for using Application Auto Scaling with Lambda

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": "*"
        }
    ]
}
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "application-autoscaling:*"
            ],
            "Resource": "*"
        }
    ]
}

Monitoring usage

After you have set up scaling for your fleet, you can use CloudWatch metrics with AppStream 2.0, and create a dashboard for monitoring. This helps optimize your scaling policies over time based on the amount of usage that you see.

For example, if you were very conservative with your initial set up and over-provisioned resources, you might see long periods of low fleet utilization. On the other hand, if you set the fleet size too low, you would see high utilization or errors from insufficient capacity, which would block users’ connections. You can view CloudWatch metrics for up to 15 months, and drive adjustments to your fleet scaling policy.
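
AppStream 2.0 publishes these metrics in the AWS/AppStream namespace with a Fleet dimension, so you can also pull utilization data from the CLI; a sketch for one day of hourly averages:

aws cloudwatch get-metric-statistics \
    --namespace AWS/AppStream \
    --metric-name CapacityUtilization \
    --dimensions Name=Fleet,Value=ds-sample-fleet \
    --start-time 2017-05-14T00:00:00Z \
    --end-time 2017-05-15T00:00:00Z \
    --period 3600 \
    --statistics Average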

Figure 11: Dashboard with custom Amazon CloudWatch metrics


Summary

These are just a few ideas for scaling AppStream 2.0 and optimizing your costs. Let us know if these are useful, and if you would like to see similar posts. If you have comments about the service, please post your feedback on the AWS forum for AppStream 2.0.

Easily Tag Amazon EC2 Instances and Amazon EBS Volumes on Creation

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/easily-tag-amazon-ec2-instances-and-amazon-ebs-volumes-on-creation/

In 2010, AWS launched resource tagging for Amazon EC2 instances and other EC2 resources. Since that launch, we have raised the allowable number of tags per resource from 10 to 50 and made tags more useful with the introduction of resource groups and Tag Editor. AWS customers use tags to track ownership, drive their cost accounting processes, implement compliance protocols, and control access to resources via AWS Identity and Access Management (IAM) policies.

The AWS tagging model provides separate functions for resource creation and resource tagging. Though this is flexible and has worked well for many of our users, it does result in a small time window when the resources exist in an untagged state. Using two separate functions means that it is possible for resource creation to succeed and tagging to fail, which would leave resources in an untagged state.

New this week, we have made tagging more flexible and more useful, with four new features:

  • Tag on creation – You can now specify tags for EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes as part of the API call that creates the resources.
  • Enforced tag usage – You can now write IAM policies that mandate the use of specific tags on EC2 instances and EBS volumes.
  • Resource-level permissions – By popular request, the CreateTags and DeleteTags functions now support IAM’s resource-level permissions.
  • Enforced volume encryption – You can now write IAM policies that mandate the use of encryption for newly created EBS volumes.

To learn more, see the full blog post on the AWS Blog.

– Craig

New – Tag EC2 Instances & EBS Volumes on Creation

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-tag-ec2-instances-ebs-volumes-on-creation/

Way back in 2010, we launched Resource Tagging for EC2 instances and other EC2 resources. Since that launch, we have raised the allowable number of tags per resource from 10 to 50, and we have made tags more useful with the introduction of resource groups and a tag editor. Our customers use tags to track ownership, drive their cost accounting processes, implement compliance protocols, and to control access to resources via IAM policies.

The AWS tagging model provides separate functions for resource creation and resource tagging. While this is flexible and has worked well for many of our users, it does result in a small time window where the resources exist in an untagged state. Using two separate functions means that resource creation can succeed while tagging fails, again leaving resources in an untagged state.

Today we are making tagging more flexible and more useful, with four new features:

Tag on Creation – You can now specify tags for EC2 instances and EBS volumes as part of the API call that creates the resources.

Enforced Tag Usage – You can now write IAM policies that mandate the use of specific tags on EC2 instances or EBS volumes.

Resource-Level Permissions – By popular request, the CreateTags and DeleteTags functions now support IAM’s resource-level permissions.

Enforced Volume Encryption – You can now write IAM policies that mandate the use of encryption for newly created EBS volumes.

Tag on Creation
You now have the ability to specify tags for EC2 instances and EBS volumes as part of the API call that creates the resources (if the call creates both instances and volumes, you can specify distinct tags for the instance and for each volume). The resource creation and the tagging are performed atomically; both must succeed in order for the operation (RunInstances, CreateVolume, and other functions that create resources) to succeed. You no longer need to build tagging scripts that run after instances or volumes have been created.

Here’s how you specify tags when you launch an EC2 instance (the CostCenter and SaveSnapshotFlag tags are also set on any EBS volumes created when the instance is launched):
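
A sketch of the equivalent AWS CLI call using the new TagSpecifications parameter; the AMI ID and tag values here are placeholders:

aws ec2 run-instances --image-id ami-0abcd1234 --count 1 --instance-type t2.micro \
    --tag-specifications \
    'ResourceType=instance,Tags=[{Key=CostCenter,Value=115},{Key=SaveSnapshotFlag,Value=true}]' \
    'ResourceType=volume,Tags=[{Key=CostCenter,Value=115},{Key=SaveSnapshotFlag,Value=true}]'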

To learn more, read Using Tags.

Resource-Level Permissions
CreateTags and DeleteTags now support IAM’s resource-level permissions, as requested by many customers. This gives you additional control over the tag keys and values on existing resources.

Also, RunInstances and CreateVolume now support additional resource-level permissions. This allows you to exercise control over the users and groups that can tag resources on creation.

To learn more, see Example Policies for Working with the AWS CLI or an AWS SDK.

Enforced Tag Usage
You can now write IAM policies that enforce the use of specific tags. For example, you could write a policy that blocks the deletion of tags named Owner or Account. Or, you could write a “Deny” policy that disallows the creation of new tags for specific existing resources. You could also use an IAM policy to enforce the use of Department and CostCenter tags to help you achieve more accurate cost allocation reporting. In order to implement stronger compliance and security policies, you could also restrict access to DeleteTags if the resource is not tagged with the user’s name. The ability to enforce tag usage gives you precise control over access to resources, ownership, and cost allocation.

Here’s a statement that requires the use of costcenter and stack tags (with values of “115” and “prod,” respectively) for all newly created volumes:

"Statement": [
    {
      "Sid": "AllowCreateTaggedVolumes",
      "Effect": "Allow",
      "Action": "ec2:CreateVolume",
      "Resource": "arn:aws:ec2:us-east-1:123456789012:volume/*",
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/costcenter": "115",
          "aws:RequestTag/stack": "prod"
         },
         "ForAllValues:StringEquals": {
             "aws:TagKeys": ["costcenter","stack"]
         }
       }
     },
     {
       "Effect": "Allow",
       "Action": [
         "ec2:CreateTags"
       ],
       "Resource": "arn:aws:ec2:us-east-1:123456789012:volume/*",
       "Condition": {
         "StringEquals": {
             "ec2:CreateAction" : "CreateVolume"
        }
      }
    }
  ]

Enforced Volume Encryption
Using the additional IAM resource-level permissions now supported by RunInstances and CreateVolume, you can now write IAM policies that mandate the use of encryption for any EBS boot or data volumes created. You can use this to comply with regulatory requirements, enforce enterprise security policies, and protect your data in compliance with applicable auditing requirements.

Here’s a sample statement that you can incorporate into an IAM policy for RunInstances and CreateVolume to enforce EBS volume encryption:

"Statement": [
        {
            "Effect": "Deny",
            "Action": [
                       "ec2:RunInstances",
                       "ec2:CreateVolume"
            ],
            "Resource": [
                "arn:aws:ec2:*:*:volume/*"
            ],
            "Condition": {
                "Bool": {
                    "ec2:Encrypted": "false"
                }
            }
        },

To learn more and to see some sample policies, take a look at Example Policies for Working with the AWS CLI or an AWS SDK and IAM Policies for Amazon EC2.

Available Now
As you can see, the combination of tagging and the new resource-level permissions on the resource creation and tag manipulation functions gives you the ability to track and control access to your EC2 resources.

This new feature is available now in all regions except AWS GovCloud (US) and China (Beijing). You can start using it today from the AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, or the AWS APIs.

We are planning to add support for additional EC2 resource types over time; stay tuned for more information!

Jeff;

Move Over JSON – Policy Summaries Make Understanding IAM Policies Easier

Post Syndicated from Joy Chatterjee original https://aws.amazon.com/blogs/security/move-over-json-policy-summaries-make-understanding-iam-policies-easier/

Today, we added policy summaries to the IAM console, making it easier for you to understand the permissions in your AWS Identity and Access Management (IAM) policies. Instead of reading JSON policy documents, you can scan a table that summarizes services, actions, resources, and conditions for each policy. You can find this summary on the policy detail page or the Permissions tab on an individual IAM user’s page.

In this blog post, I introduce policy summaries and review the details of a policy summary.

How to read a policy summary

The following screenshot shows an example policy summary. The table provides you with an at-a-glance view of each service’s granted access level, resources, and conditions.

The columns in a policy summary are defined this way:

  • Service – The Amazon services defined in the policy. Click each service name to see the specific actions granted for the service.
  • Access level – Actions defined for each service in the policy (I provide more details below).
  • Resource – The resources defined for each service in the policy. This column displays one of the following values:
    • All resources – Access is granted or denied to all resources in the service.
    • Multiple – Some but not all of the resources are granted or denied in the service.
    • Amazon Resource Name (ARN) – The policy defines one resource in the service. You will see the actual ARN displayed for one resource.
  • Request condition – The conditions defined for each service. Conditions can be global conditions or conditions specific to the service. This column displays one of the following values:
    • None – No conditions are defined for the service.
    • Multiple – Multiple conditions are defined for the service.
    • Condition – One condition is defined for the service and applies to all actions defined in the policy for the service. You will see the condition defined in the policy in the table. For example, the preceding screenshot shows a condition for Amazon Elastic Beanstalk.

If you prefer reading and managing policies in JSON, choose View and edit JSON above the policy summary to see the policy in JSON.

Before I go over an example of a policy summary, I will explain access levels in more detail, a new concept we introduced with policy summaries.

Access levels in policy summaries

To help you understand the permissions defined in a policy, each AWS service’s actions are categorized into four access levels: List, Read, Write, and Permissions management. The following table defines the access levels and provides examples using Amazon S3 actions. Full and Limited further qualify the access levels for each service: Full refers to all the actions within an access level, and Limited refers to at least one but not all actions in an access level. Note: You can see the complete list of actions and access levels for all services in the AWS IAM Policy Actions Grouped by Access Level documentation.

Access level             Description                                                           Example
List                     Actions that allow you to see a list of resources                    s3:ListBucket, s3:ListAllMyBuckets
Read                     Actions that allow you to read the content in resources              s3:GetObject, s3:GetBucketTagging
Write                    Actions that allow you to create, delete, or modify resources        s3:PutObject, s3:DeleteBucket
Permissions management   Actions that allow you to grant or modify permissions to resources   s3:PutBucketPolicy

Note: Not all AWS services have actions in all access levels.

In the following screenshot, the access level for S3 is Full access, which means the policy permits all actions of the S3 List, Read, Write, and Permissions management access levels. The access level for EC2 is Full: List, Read and Limited: Write, meaning that the policy grants all actions of the List and Read access levels, but only a portion of the actions of the Write access level. You can view the specific actions defined in the policy by choosing the service in the policy summary.

Reviewing a policy summary in detail

Let’s look at a policy summary in the IAM console. Imagine that Alice is a developer on my team who analyzes data and generates quarterly reports for our finance team. To grant her the permissions she needs, I have added her to the Data_Analytics IAM group.

To see the policies attached to user Alice, I navigate to her user page by choosing her user name on the Users page of the IAM console. The following screenshot shows that Alice has three policies attached to her.

I will review the permissions defined in the Data_Analytics policy, but first, let’s look at the JSON syntax for the policy so that you can compare the different views.

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "autoscaling:*",
            "ec2:CancelSpotInstanceRequests",
            "ec2:CancelSpotFleetRequests",
            "ec2:CreateTags",
            "ec2:DeleteTags",
            "ec2:Describe*",
            "ec2:ModifyImageAttribute",
            "ec2:ModifyInstanceAttribute",
            "ec2:ModifySpotFleetRequest",
            "ec2:RequestSpotInstances",
            "ec2:RequestSpotFleet",
            "elasticmapreduce:*",
            "es:Describe*",
            "es:List*",
            "es:Update*",
            "es:Add*",
            "es:Create*",
            "es:Delete*",
            "es:Remove*",
            "lambda:Create*",
            "lambda:Delete*",
            "lambda:Get*",
            "lambda:InvokeFunction",
            "lambda:Update*",
            "lambda:List*"
        ],
        "Resource": "*"
    },

    {
        "Effect": "Allow",
        "Action": [
            "s3:ListBucket",
            "s3:GetObject",
            "s3:PutObject",
            "s3:PutBucketPolicy"
        ],
        "Resource": [
            "arn:aws:s3:::DataTeam"
        ],
        "Condition": {
            "StringLike": {
                "s3:prefix": [
                    "Sales/*"
                ]
            }
        }
    },
    {
        "Effect": "Allow",
        "Action": [
            "elasticfilesystem:*"
        ],
        "Resource": [
            "arn:aws:elasticfilesystem:*:111122223333:file-system/2017sales"
        ]
    },
    {
        "Effect": "Allow",
        "Action": [
            "rds:*"
        ],
        "Resource": [
            "arn:aws:rds:*:111122223333:db/mySQL_Instance"
        ]
    },
    {
        "Effect": "Allow",
        "Action": [
            "dynamodb:*"
        ],
        "Resource": [
            "arn:aws:dynamodb:*:111122223333:table/Sales_2017Annual"
        ]
    },
    {
        "Effect": "Allow",
        "Action": [
            "iam:GetInstanceProfile",
            "iam:GetRole",
            "iam:GetPolicy",
            "iam:GetPolicyVersion",
            "iam:ListRoles"
        ],
        "Condition": {
            "IpAddress": {
                "aws:SourceIp": "192.0.0.8"
            }
        },
        "Resource": [
            "*"
        ]
    }
  ]
}

To view the policy summary, I can either choose the policy name, which takes me to the policy’s page, or I can choose the arrow next to the policy name, which expands the policy summary on Alice’s user page. The following screenshot shows the policy summary of the Data_Analytics policy that is attached to Alice.

Looking at this policy summary, I can see that Alice has access to multiple services with different access levels. She has Full access to Amazon EMR, but only Limited List and Limited Read access to IAM. I can also see the high-level summary of resources and conditions granted for each service. In this policy, Alice can access only the 2017sales file system in Amazon EFS and a single Amazon RDS instance. She has access to Multiple Amazon S3 buckets and Amazon DynamoDB tables. Looking at the Request condition column, I see that Alice can access IAM only from a specific IP range. To learn more about the details for resources and request conditions, see the IAM documentation on Understanding Policy Summaries in the AWS Management Console.

In the policy summary, to see the specific actions granted for a service, I choose a service name. For example, when I choose Elasticsearch, I see all the actions organized by access level, as shown in the following screenshot. In this case, Alice has access to all Amazon ES resources and has no request conditions.

Some exceptions

For policies that are complex or contain unrecognized actions, the policy summary may not be able to generate a simple, human-readable table. For these edge cases, we will continue to show the JSON policy without the policy summary.

For policies that include Deny statements, you will see a separate table that shows the permissions that the policy explicitly denies. You can see an example of a policy summary that includes both an Allow statement and a Deny statement in our documentation.

Conclusion

To see policy summaries in your AWS account, sign in to the IAM console and navigate to any managed policy on the Policies page of the IAM console or the Permissions tab on a user’s page. Policy summaries make it easy to scan for certain permissions, such as quickly identifying who has Full access or Permissions management privileges. You can also compare policies to determine which policies define conditions or specify resources for better security posture.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or suggestions for this solution, please start a new thread on the IAM forum.

– Joy