Tag Archives: AWS IAM

How to Detect and Automatically Remediate Unintended Permissions in Amazon S3 Object ACLs with CloudWatch Events

Post Syndicated from Mustafa Torun original https://aws.amazon.com/blogs/security/how-to-detect-and-automatically-remediate-unintended-permissions-in-amazon-s3-object-acls-with-cloudwatch-events/

Amazon S3 Access Control Lists (ACLs) enable you to specify permissions that grant access to S3 buckets and objects. When S3 receives a request for an object, it verifies whether the requester has the necessary access permissions in the associated ACL. For example, you could set up an ACL for an object so that only the users in your account can access it, or you could make an object public so that it can be accessed by anyone.

If the number of objects and users in your AWS account is large, ensuring that you have attached correctly configured ACLs to your objects can be a challenge. For example, what if a user were to make a PutObjectAcl API call on an object that is supposed to be private and make it public? Or, what if a user were to call PutObject with the optional ACL parameter set to public-read, thereby uploading a confidential file as publicly readable? In this blog post, I show a solution that uses Amazon CloudWatch Events to detect PutObject and PutObjectAcl API calls in near real time and helps ensure that the objects remain private by making automatic PutObjectAcl calls, when necessary.

Note that this process is a reactive approach, a complement to the proactive approach in which you would use the AWS Identity and Access Management (IAM) policy conditions to force your users to put objects with private access (see Specifying Conditions in a Policy for more information). The reactive approach I present in this post is for “just in case” situations in which the change on the ACL is accidental and must be fixed.
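
To make the proactive approach concrete, the following is a minimal sketch of a bucket policy (an IAM policy can express the same condition) that denies requests that explicitly set a public canned ACL. The bucket name reuses the example bucket from later in this post, and the list of denied ACL values is illustrative rather than exhaustive (for example, it does not cover explicit grant headers).

{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyPublicCannedACLs",
        "Effect": "Deny",
        "Principal": "*",
        "Action": [
            "s3:PutObject",
            "s3:PutObjectAcl"
        ],
        "Resource": "arn:aws:s3:::everything-must-be-private/*",
        "Condition": {
            "StringEquals": {
                "s3:x-amz-acl": [
                    "public-read",
                    "public-read-write",
                    "authenticated-read"
                ]
            }
        }
    }]
}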

Solution overview

The following diagram illustrates this post’s solution:

  1. An IAM or root user in your account makes a PutObjectAcl or PutObject call.
  2. S3 sends the corresponding API call event to both AWS CloudTrail and CloudWatch Events in near real time.
  3. A CloudWatch Events rule delivers the event to an AWS Lambda function.
  4. If the object is in a bucket in which all the objects need to be private and the object is not private anymore, the Lambda function makes a PutObjectAcl call to S3 to make the object private.

Solution diagram

To detect the PutObjectAcl call and modify the ACL on the object, I:

  1. Turn on object-level logging in CloudTrail for the buckets I want to monitor.
  2. Create an IAM execution role to be used when the Lambda function is being executed so that Lambda can make API calls to S3 on my behalf.
  3. Create a Lambda function that receives the PutObjectAcl API call event, checks whether the call is for a monitored bucket, and, if so, ensures the object is private.
  4. Create a CloudWatch Events rule that matches the PutObjectAcl API call event and invokes the Lambda function created in the previous step.

The remainder of this blog post details the steps of this solution’s deployment and the testing of its setup.

Deploying the solution

In this section, I follow the four solution steps outlined in the previous section to use CloudWatch Events to detect and fix unintended access permissions in S3 object ACLs automatically. I start with turning on object-level logging in CloudTrail for the buckets of interest.

I use the AWS CLI in this section. (To learn more about setting up the AWS CLI, see Getting Set Up with the AWS Command Line Interface.) Before you start, make sure your installed AWS CLI is up to date (specifically, you must use version 1.11.28 or newer). You can check the version of your CLI as follows.

$ aws --version

I run the following AWS CLI command to create an S3 bucket named everything-must-be-private. Remember to replace the placeholder bucket names with your own bucket names, in this instance and throughout the post (bucket names are global in S3).

$ aws s3api create-bucket \
--bucket everything-must-be-private

As the bucket name suggests, I want all the files in this bucket to be private. In the rest of this section, I walk through the four steps outlined above to deploy the solution.

Step 1: Turn on object-level logging in CloudTrail for the S3 bucket

In this step, I create a CloudTrail trail and turn on object-level logging for the bucket, everything-must-be-private. My goal is to achieve the following setup, which is also illustrated in the diagram that follows:

  1. A requester makes an API call against this bucket.
  2. I let a CloudTrail trail named my-object-level-s3-trail receive the corresponding events.
  3. I log the events to a bucket named bucket-for-my-object-level-s3-trail and deliver them to CloudWatch Events.

Diagram of the object-level logging setup

First, I create a file named bucket_policy.json and populate it with the following policy. Remember to replace the placeholder account ID with your own account ID, in this instance and throughout the post.

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "Service": "cloudtrail.amazonaws.com"
        },
        "Action": "s3:GetBucketAcl",
        "Resource": "arn:aws:s3:::bucket-for-my-object-level-s3-trail"
    }, {
        "Effect": "Allow",
        "Principal": {
            "Service": "cloudtrail.amazonaws.com"
        },
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::bucket-for-my-object-level-s3-trail/AWSLogs/123456789012/*",
        "Condition": {
            "StringEquals": {
                "s3:x-amz-acl": "bucket-owner-full-control"
            }
        }
    }]
}

Then, I run the following commands to create the bucket for logging with the preceding policy so that CloudTrail can deliver log files to the bucket.

$ aws s3api create-bucket \
--bucket bucket-for-my-object-level-s3-trail

$ aws s3api put-bucket-policy \
--bucket bucket-for-my-object-level-s3-trail \
--policy file://bucket_policy.json

Next, I create a trail and start logging on the trail.

$ aws cloudtrail create-trail \
--name my-object-level-s3-trail \
--s3-bucket-name bucket-for-my-object-level-s3-trail

$ aws cloudtrail start-logging \
--name my-object-level-s3-trail

I then create a file named my_event_selectors.json and populate it with the following content.

[{
        "IncludeManagementEvents": false,
        "DataResources": [{
            "Values": [
                 "arn:aws:s3:::everything-must-be-private/"
            ],
            "Type": "AWS::S3::Object"
        }],
        "ReadWriteType": "All"
}]

By default, S3 object-level operations are not logged in CloudTrail; only bucket-level operations are logged. As a result, I finish my trail setup by creating the preceding event selector so that object-level operations on the monitored bucket are logged. Note that I explicitly set IncludeManagementEvents to false because I want only object-level operations to be logged.

$ aws cloudtrail put-event-selectors \
--trail-name my-object-level-s3-trail \
--event-selectors file://my_event_selectors.json

Step 2: Create the IAM execution role for the Lambda function

In this step, I create an IAM execution role for my Lambda function. The role allows Lambda to perform S3 actions on my behalf while the function is being executed. The role also allows CreateLogGroup, CreateLogStream, and PutLogEvents CloudWatch Logs APIs so that the Lambda function can write logs for debugging.

I start with putting the following trust policy document in a file named trust_policy.json.

{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }]
}

Next, I run the following command to create the IAM execution role.

$ aws iam create-role \
--role-name AllowLogsAndS3ACL \
--assume-role-policy-document file://trust_policy.json

I continue by putting the following access policy document in a file named access_policy.json.

{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectAcl",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::everything-must-be-private/*"
        }, {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        }]
}

Finally, I run the following command to define the access policy for the IAM execution role I have just created.

$ aws iam put-role-policy \
--role-name AllowLogsAndS3ACL \
--policy-name AllowLogsAndS3ACL \
--policy-document file://access_policy.json

Step 3: Create a Lambda function that processes the PutObjectAcl API call event

In this step, I create the Lambda function that processes the event. It also decides whether the ACL on the object needs to be changed, and, if so, makes a PutObjectAcl call to make it private. I start with adding the following code to a file named lambda_function.py.

from __future__ import print_function

import json
import boto3

print('Loading function')

s3 = boto3.client('s3')

bucket_of_interest = "everything-must-be-private"

# For a PutObjectAcl API Event, gets the bucket and key name from the event
# If the object is not private, then it makes the object private by making a
# PutObjectAcl call.
def lambda_handler(event, context):
    # Get bucket name from the event
    bucket = event['detail']['requestParameters']['bucketName']
    if (bucket != bucket_of_interest):
        print("Doing nothing for bucket = " + bucket)
        return
    
    # Get key name from the event
    key = event['detail']['requestParameters']['key']
    
    # If object is not private then make it private
    if not (is_private(bucket, key)):
        print("Object with key=" + key + " in bucket=" + bucket + " is not private!")
        make_private(bucket, key)
    else:
        print("Object with key=" + key + " in bucket=" + bucket + " is already private.")
    
# Checks whether an object with the given bucket and key is private
def is_private(bucket, key):
    # Get the object ACL from S3
    acl = s3.get_object_acl(Bucket=bucket, Key=key)
    
    # A private object should have only one grant, which is the owner of the object
    if (len(acl['Grants']) > 1):
        return False
    
    # If the canonical owner and grantee IDs do not match, then conclude that the
    # object is not private
    owner_id = acl['Owner']['ID']
    grantee_id = acl['Grants'][0]['Grantee']['ID']
    if (owner_id != grantee_id):
        return False
    return True

# Makes an object with given bucket and key private by calling the PutObjectAcl API.
def make_private(bucket, key):
    s3.put_object_acl(Bucket=bucket, Key=key, ACL="private")
    print("Object with key=" + key + " in bucket=" + bucket + " is marked as private.")

Next, I zip the lambda_function.py file into an archive named CheckAndCorrectObjectACL.zip and run the following command to create the Lambda function.

$ aws lambda create-function \
--function-name CheckAndCorrectObjectACL \
--zip-file fileb://CheckAndCorrectObjectACL.zip \
--role arn:aws:iam::123456789012:role/AllowLogsAndS3ACL \
--handler lambda_function.lambda_handler \
--runtime python2.7

Finally, I run the following command to allow CloudWatch Events to invoke my Lambda function for me.

$ aws lambda add-permission \
--function-name CheckAndCorrectObjectACL \
--statement-id AllowCloudWatchEventsToInvoke \
--action 'lambda:InvokeFunction' \
--principal events.amazonaws.com \
--source-arn arn:aws:events:us-east-1:123456789012:rule/S3ObjectACLAutoRemediate

Step 4: Create the CloudWatch Events rule

Now, I create the CloudWatch Events rule that is triggered when an event is received from S3. I start with defining the event pattern for my rule. I create a file named event_pattern.json and populate it with the following code.

{
	"detail-type": [
		"AWS API Call via CloudTrail"
	],
	"detail": {
		"eventSource": [
			"s3.amazonaws.com"
		],
		"eventName": [
			"PutObjectAcl",
			"PutObject"
		],
		"requestParameters": {
			"bucketName": [
				"everything-must-be-private"
			]
		}
	}
}

With this event pattern, I am configuring my rule so that it is triggered only when a PutObjectAcl or a PutObject API call event from S3 via CloudTrail is delivered for the objects in the bucket I choose. I finish my setup by running the following commands to create the CloudWatch Events rule and adding to it as a target the Lambda function I created in the previous step.

$ aws events put-rule \
--name S3ObjectACLAutoRemediate \
--event-pattern file://event_pattern.json

$ aws events put-targets \
--rule S3ObjectACLAutoRemediate \
--targets Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:CheckAndCorrectObjectACL

Test the setup

From now on, whenever a user in my account makes a PutObjectAcl call against the bucket everything-must-be-private, S3 will deliver the corresponding event to CloudWatch Events via CloudTrail. The event must match the CloudWatch Events rule in order to be delivered to the Lambda function. Finally, the function checks whether the object’s ACL is as expected and, if not, makes the object private.

For testing the setup, I create an empty file named MyCreditInfo in the bucket everything-must-be-private, and I check its ACL.

$ aws s3api put-object \
--bucket everything-must-be-private \
--key MyCreditInfo

$ aws s3api get-object-acl \
--bucket everything-must-be-private \
--key MyCreditInfo

In the response to my command, I see only one grantee, which is the owner (me). This means the object is private. Now, I add public read access to this object (which is supposed to stay private).

$ aws s3api put-object-acl \
--bucket everything-must-be-private \
--key MyCreditInfo \
--acl public-read

If I act quickly and describe the ACL on the object again by calling the GetObjectAcl API, I see another grantee that allows everybody to read this object.

{
	"Grantee": {
		"Type": "Group",
		"URI": "http://acs.amazonaws.com/groups/global/AllUsers"
	},
	"Permission": "READ"
}

When I describe the ACL again after the Lambda function has been invoked, I see that the grantee for public read access has been removed. Therefore, the file is private again, as it should be. You can also test the PutObject API call by putting another object in this bucket with public read access.

$ aws s3api put-object \
--bucket everything-must-be-private \
--key MyDNASequence \
--acl public-read

Conclusion

In this post, I showed how you can detect unintended public access permissions in the ACL of an S3 object and how to revoke them automatically with the help of CloudWatch Events. Keep in mind that object-level S3 API call events and Lambda functions are only a small set of the events and targets that are available in CloudWatch Events. To learn more, see Using CloudWatch Events.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions about this post or how to implement the solution described, please start a new thread on the CloudWatch forum.

– Mustafa

The Top 20 Most Viewed AWS IAM Documentation Pages in 2016

Post Syndicated from Dave Bishop original https://aws.amazon.com/blogs/security/the-top-20-most-viewed-aws-iam-documentation-pages-in-2016/

The following 20 pages were the most viewed AWS Identity and Access Management (IAM) documentation pages in 2016. I have included a brief description with each link to give you a clearer idea of what each page covers. Use this list to see what other people have been viewing and perhaps to pique your own interest about a topic you’ve been meaning to research.

  1. What Is IAM?
    IAM is a web service that helps you securely control access to AWS resources for your users. You use IAM to control who can use your AWS resources (authentication) and what resources they can use and in what ways (authorization).
  2. Creating an IAM User in Your AWS Account
    You can create one or more IAM users in your AWS account. You might create an IAM user when someone joins your organization, or when you have a new application that needs to make API calls to AWS.
  3. The IAM Console and the Sign-in Page
    This page provides information about the IAM-enabled AWS Management Console sign-in page and explains how to create a unique sign-in URL for your account.
  4. How Users Sign In to Your Account
    After you create IAM users and passwords for each, users can sign in to the AWS Management Console for your AWS account with a special URL.
  5. IAM Best Practices
    To help secure your AWS resources, follow these recommendations for IAM.
  6. IAM Policy Elements Reference
    This page describes the elements that you can use in an IAM policy. The elements are listed here in the general order you use them in a policy.
  7. Managing Access Keys for IAM Users
    Users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), Tools for Windows PowerShell, the AWS SDKs, or direct HTTP calls using the APIs for individual AWS services. To fill this need, you can create, modify, view, or rotate access keys (access key IDs and secret access keys) for IAM users.
  8. Working with Server Certificates
    Some AWS services can use server certificates that you manage with IAM or AWS Certificate Manager (ACM). In many cases, we recommend that you use ACM to provision, manage, and deploy your SSL/TLS certificates.
  9. Your AWS Account ID and Its Alias
    Learn how to find your AWS account ID number and its alias.
  10. Overview of IAM Policies
    This page provides an overview of IAM policies. A policy is a document that formally states one or more permissions.
  11. Using Multi-Factor Authentication (MFA) in AWS
    For increased security, we recommend that you configure MFA to help protect your AWS resources. MFA adds extra security because it requires users to enter a unique authentication code from an approved authentication device or SMS text message when they access AWS websites or services.
  12. Example Policies for Administering AWS Resources
    This page shows some examples of policies that control access to resources in AWS services.
  13. Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances
    Use an IAM role to manage temporary credentials for applications that run on an EC2 instance. When you use a role, you do not have to distribute long-term credentials to an EC2 instance. Instead, the role supplies temporary permissions that applications can use when they make calls to other AWS resources.
  14. IAM Roles
    An IAM role is similar to a user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it.
  15. Enabling a Virtual MFA Device
    A virtual MFA device uses a software application to generate a six-digit authentication code that is compatible with the time-based one-time password (TOTP) standard, as described in RFC 6238. The app can run on mobile hardware devices, including smartphones.
  16. Creating Your First IAM Admin User and Group
    This procedure describes how to create an IAM group named Administrators, grant the group full permissions for all AWS services, and then create an administrative IAM user for yourself by adding the user to the Administrators group.
  17. Using Instance Profiles
    An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts.
  18. Working with Server Certificates
    After you obtain or create a server certificate, you upload it to IAM so that other AWS services can use it. You might also need to get certificate information, rename or delete a certificate, or perform other management tasks.
  19. Temporary Security Credentials
    You can use the AWS Security Token Service (AWS STS) to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM users can use.
  20. Setting an Account Password Policy for IAM Users
    You can set a password policy on your AWS account to specify complexity requirements and mandatory rotation periods for your IAM users’ passwords.

In the “Comments” section below, let us know if you would like to see anything on these or other IAM documentation pages expanded or updated to make the documentation more useful for you.

– Dave

The Most Viewed AWS Security Blog Posts in 2016

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/the-most-viewed-aws-security-blog-posts-in-2016/

The following 10 posts were the most viewed AWS Security Blog posts that we published during 2016. You can use this list as a guide to catch up on your blog reading or even read a post again that you found particularly useful.

  1. How to Set Up DNS Resolution Between On-Premises Networks and AWS Using AWS Directory Service and Amazon Route 53
  2. How to Control Access to Your Amazon Elasticsearch Service Domain
  3. How to Restrict Amazon S3 Bucket Access to a Specific IAM Role
  4. Announcing AWS Organizations: Centrally Manage Multiple AWS Accounts
  5. How to Configure Rate-Based Blacklisting with AWS WAF and AWS Lambda
  6. How to Use AWS WAF to Block IP Addresses That Generate Bad Requests
  7. How to Record SSH Sessions Established Through a Bastion Host
  8. How to Manage Secrets for Amazon EC2 Container Service–Based Applications by Using Amazon S3 and Docker
  9. Announcing Industry Best Practices for Securing AWS Resources
  10. How to Set Up DNS Resolution Between On-Premises Networks and AWS Using AWS Directory Service and Microsoft Active Directory

The following 10 posts published since the blog’s inception in April 2013 were the most viewed AWS Security Blog posts in 2016.

  1. Writing IAM Policies: How to Grant Access to an Amazon S3 Bucket
  2. Securely Connect to Linux Instances Running in a Private Amazon VPC
  3. A New and Standardized Way to Manage Credentials in the AWS SDKs
  4. Where’s My Secret Access Key?
  5. Enabling Federation to AWS Using Windows Active Directory, ADFS, and SAML 2.0
  6. IAM Policies and Bucket Policies and ACLs! Oh, My! (Controlling Access to S3 Resources)
  7. How to Connect Your On-Premises Active Directory to AWS Using AD Connector
  8. Writing IAM Policies: Grant Access to User-Specific Folders in an Amazon S3 Bucket
  9. How to Help Prepare for DDoS Attacks by Reducing Your Attack Surface
  10. How to Set Up DNS Resolution Between On-Premises Networks and AWS Using AWS Directory Service and Amazon Route 53

Let us know in the comments section below if there is a specific security or compliance topic you would like us to cover on the Security Blog in 2017.

– Craig

SAML Identity Federation: Follow-Up Questions, Materials, Guides, and Templates from an AWS re:Invent 2016 Workshop (SEC306)

Post Syndicated from Quint Van Deman original https://aws.amazon.com/blogs/security/saml-identity-federation-follow-up-questions-materials-guides-and-templates-from-an-aws-reinvent-2016-workshop-sec306/

As part of the re:Source Mini Con for Security Services at AWS re:Invent 2016, we conducted a workshop focused on Security Assertion Markup Language (SAML) identity federation: Choose Your Own SAML Adventure: A Self-Directed Journey to AWS Identity Federation Mastery. As part of this workshop, attendees were able to submit their own federation-focused questions to a panel of AWS experts. In this post, I share the questions and answers from that workshop because this information can benefit any AWS customer interested in identity federation.

I have also made available the full set of workshop materials, lab guides, and AWS CloudFormation templates. I encourage you to use these materials to enrich your exploration of SAML for use with AWS.

Q: SAML assertions are limited to 50,000 characters. We often hit this limit by being in too many groups. What can AWS do to resolve this size-limit problem?

A: Because the SAML assertion is ultimately part of an API call, an upper bound must be in place for the assertion size.

On the AWS side, your AWS solution architect can log a feature request on your behalf to increase the maximum size of the assertion in a future release. The AWS service teams use these feature requests, in conjunction with other avenues of customer feedback, to plan and prioritize the features they deliver. To facilitate this process you need two things: the proposed higher value to which you’d like to see the maximum size raised, and a short written description that would help us understand what this increased limit would enable you to do.

On the AWS customer’s side, we often find that these cases are most relevant to centralized cloud teams that have broad, persistent access across many roles and accounts. This access is often necessary to support troubleshooting or simply as part of an individual’s job function. However, in many cases, exchanging persistent access for just-in-time access enables the same level of access but with better levels of visibility, reduced blast radius, and better adherence to the principle of least privilege. For example, you might implement a fast, efficient, and monitored workflow that allows you to provision a user into the necessary backend directory group for a short duration when needed in lieu of that user maintaining all of those group memberships on a persistent basis. This approach could effectively resolve the limit issue you are facing.

Q: Can we use OpenID Connect (OIDC) for federated authentication and authorization into the AWS Management Console? If so, does it have a similar size limit?

A: Currently, AWS support for OIDC is oriented around providing access to AWS resources from mobile or web applications, not access to the AWS Management Console. This is possible to do, but it requires the construction of a custom identity broker. In this solution, this broker would consume the OIDC identity, use its own logic to authenticate and authorize the user (thus being subject only to any size limits you enforce on the OIDC side), and use the sts:GetFederatedToken call to vend the user an AWS Security Token Service (STS) token for either AWS Management Console use or API/CLI use. During this sts:GetFederatedToken call, you attach a scoping policy with a limit of 2,048 characters. See Creating a URL that Enables Federated Users to Access the AWS Management Console (Custom Federation Broker) for additional details about custom identity brokers.
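
As a rough illustration of the token-vending step in such a custom broker, the sketch below shows a single GetFederatedToken call with an inline scoping policy. The user name, policy contents, and duration are placeholder assumptions, and the call is typically made with the long-term credentials of an IAM user that the broker itself holds.

# Minimal sketch of the token-vending step in a custom identity broker.
# Assumes the broker has already authenticated and authorized the OIDC user.
import json
import boto3

sts = boto3.client('sts')

# Placeholder scoping policy; it must fit within the 2,048-character limit.
scoping_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:ListBucket",
        "Resource": "arn:aws:s3:::example-bucket"
    }]
}

response = sts.get_federated_token(
    Name='example-federated-user',      # placeholder identity derived from the OIDC layer
    Policy=json.dumps(scoping_policy),
    DurationSeconds=3600
)
credentials = response['Credentials']   # AccessKeyId, SecretAccessKey, SessionToken, Expiration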

Q: We want to eliminate permanent AWS Identity and Access Management (IAM) access keys, but we cannot do so because of third-party tools. We are contemplating using HashiCorp Vault to vend permanent keys. Vault lets us tie keys to LDAP identities. Have you seen this work elsewhere? Do you think it will work for us?

A: For third-party tools that can run within Amazon EC2, you should use EC2 instance profiles to eliminate long-term credentials and their associated management (distribution, rotation, etc.). For third-party tools that cannot run within EC2, most customers opt to leverage their existing secrets-management tools and processes for the long-lived keys. These tools are often enhanced to make use of AWS APIs such as iam:GetCredentialReport (rotation information) and iam:{Create,Update,Delete}AccessKey (rotation operations). HashiCorp Vault is a popular tool with an available AWS Quick Start Reference Deployment, but any secrets management platform that is able to efficiently fingerprint the authorized resources and is extensible to work with the previously mentioned APIs will fill this need for you nicely.

Q: Currently, we use an Object Graph Navigation Library (OGNL) script in our identity provider (IdP) to build role Amazon Resource Names (ARNs) for the role attribute in the SAML assertion. The script consumes a list of distribution list display names from our identity management platform of which the user is a member. There is a 60-character limit on display names, which leaves no room for IAM pathing (which has a 512-character limit). We are contemplating a change. The proposed solution would make AWS API calls to get role ARNs from the AWS APIs. Have you seen this before? Do you think it will work? Does the AWS SAML integration support full-length role ARNs that would include up to 512 characters for the IAM path?

A: In most cases, we recommend that you use regular expression-based transformations within your IdP to translate a list of group names to a list of role ARNs for inclusion in the SAML assertion. Without pathing, you need to know the 12-digit AWS account number and the role name in order to be able to do so, which is accomplished using our recommended group-naming convention (AWS-Acct#-RoleName). With pathing (because “/” is not a valid character for group names within most directories), you need an additional source from which to pull this third data element. This could be as simple as an extra dimension in the group name (such as AWS-Acct#-Path-RoleName); however, that would multiply the number of groups required to support the solution. Instead, you would most likely derive the path element from a user attribute, dynamic group information, or even an external information store. It should work as long as you can reliably determine all three data elements for the user.

We do not recommend drawing the path information from AWS APIs, because the logic within the IdP is authoritative for the user’s authorization. In other words, the IdP should know for which full path role ARNs the user is authorized without asking AWS. You might consider using the AWS APIs to validate that the role actually exists, but that should really be an edge case. This is because you should integrate any automation that builds and provisions the roles with the frontend authorization layer. This way, there would never be a case in which the IdP authorizes a user for a role that doesn’t exist. AWS SAML integration supports full-length role ARNs.

Q: Why do AWS STS tokens contain a session token? This makes them incompatible with third-party tools that only support permanent keys. Is there a way to get rid of the session token to make temporary keys that contain only the access_key and the secret_key components?

A: The session token contains information that AWS uses to confirm that the AWS STS token is valid. There is no way to create a token that does not contain this third component. Instead, the preferred AWS mechanisms for distributing AWS credentials to third-party tools are EC2 instance profiles (in EC2), IAM cross-account trust (SaaS), or IAM access keys with secrets management (on-premises). Using IAM access keys, you can rotate the access keys as often as the third-party tool and your secrets management platform allow.

Q: How can we use SAML to authenticate and authorize code instead of having humans do the work? One proposed solution is to use our identity management (IDM) platform to generate X.509 certificates for identities, and then present these certificates to our IdP in order to get valid SAML assertions. This could then be included in an sts:AssumeRoleWithSAML call. Have you seen this working before? Do you think it will work for us?

A: Yes, when you receive a SAML assertion from your IdP by using your desired credential form (user name/password, X.509, etc.), you can use the sts:AssumeRoleWithSAML call to retrieve an AWS STS token. See How to Implement a General Solution for Federated API/CLI Access Using SAML 2.0 for a reference implementation.
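
In code, that exchange looks roughly like the following; the role ARN, provider ARN, and assertion variable are placeholders, and in practice the base64-encoded assertion is what comes back from your IdP's authentication flow.

# Minimal sketch: exchange a SAML assertion for temporary AWS credentials.
import boto3

sts = boto3.client('sts')

saml_assertion = "..."  # base64-encoded SAML response returned by your IdP (placeholder)

response = sts.assume_role_with_saml(
    RoleArn='arn:aws:iam::123456789012:role/ExampleFederatedRole',      # placeholder
    PrincipalArn='arn:aws:iam::123456789012:saml-provider/ExampleIdP',  # placeholder
    SAMLAssertion=saml_assertion,
    DurationSeconds=3600
)
credentials = response['Credentials']   # use these for subsequent API/CLI calls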

Q: As a follow-up to the previous question, how can we get code using multi-factor authentication (MFA)? There is a gauth project that uses NodeJS to generate virtual MFAs. Code could theoretically get MFA codes from the NodeJS gauth server.

A: The answer depends on your choice of IdP and MFA provider. Generically speaking, you need to authenticate (either web-based or code-based) to the IdP using all of your desired factors before the SAML assertion is generated. This assertion can then include details of the authentication mechanism used as an additional attribute in role-assumption conditions within the trust policy in AWS. This lab guide from the workshop provides further details and a how-to guide for MFA-for-SAML.

This blog post clarifies some re:Invent 2016 attendees’ questions about SAML-based federation with AWS. For more information presented in the workshop, see the full set of workshop materials, lab guides, and CloudFormation templates. If you have follow-up questions, start a new thread in the IAM forum.

– Quint

New – Amazon Cognito Groups and Fine-Grained Role-Based Access Control

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/new-amazon-cognito-groups-and-fine-grained-role-based-access-control-2/

One of the challenges in building applications has been around user authentication and management.  Let’s face it, not many developers want to build yet another user identification and authentication system for their application nor would they want to cause a user to create yet another account unless needed.  Amazon Cognito makes it simpler for developers to manage user identities, authentication, and permissions in order to access their applications’ data and back end systems.  Now, if only there were a service feature to make it even easier for developers to assign different permissions to different users of their applications.

Today we are excited to announce Cognito User Pools support for groups and Cognito Federated Identities support for fine-grained Role-Based Access Control (RBAC).  With Groups support in Cognito, developers can easily customize users’ app experience by creating groups which represent different user types and app usage permissions.  Developers have the ability to add users and remove users from groups and manage group permissions for sets of users.

Speaking of permissions, support for fine-grained Role-Based Access Control (RBAC) in Cognito Federated Identities now allows developers to assign different IAM roles to different authenticated users.  Previously, Amazon Cognito supported only one IAM role for all authenticated users.  With fine-grained RBAC, a developer can map federated users to different IAM roles; this functionality is available both for users authenticated through existing identity providers like Facebook or Active Directory and for users authenticated through Cognito User Pools.

Groups in Cognito User Pools

The best way to examine the new Cognito group feature is to walk through creating a new group in the Amazon Cognito console and adding users to the different group types.

Cognito - UserPoolCreateGroup

 

After selecting my user pool, TestAppPool, I see the updated menu item, Users and groups.  Upon selecting this menu option, a panel is presented with tabs for both Users and Groups.  To create my new group, I select the Create group button.

A dialog box opens to allow for the creation of my group.  Here I will create a group for admin users named AdminGroup.  After I fill in the name for the group, provide a description, and set the order of precedence, the group is ready to be created.  Note that a group's numerical precedence determines which group's permissions are prioritized, and therefore used, for users that have been assigned to multiple groups.  The lower the numerical precedence, the higher the priority of the group for that user.  Since this is my AdminGroup, I will give this group a precedence of zero (0).  After I click the Create group button, I have successfully created my user pool group.

Cognito - CreateGroupDialog-s

Now all that is left to do is add my user(s) to the group. In my test app pool, I have two users, TestAdminUser and TestUnregisteredUser, as shown below. I will add my TestAdminUser to my newly created group.

Cognito - Users

To add my user to the AdminGroup user pool group, I simply go into the Groups tab and select my AdminGroup.  Once the AdminGroup details screen is shown, a click of the Add users button brings up a dialog box displaying users within my user pool.  Adding a user to this group is a straightforward process, which only requires me to select the plus symbol next to the desired username.  Once I receive the confirmation that the user has been added to the group, the process is complete.

As you can see from the walkthrough, it is easy for a developer to create groups in user pools. Groups in a user pool can be created and managed from the AWS Management Console, the APIs, and the CLI. As a developer, you can create, read, update, delete, and list the groups for a user pool using AWS credentials. Each user pool can contain up to 25 groups.  Additionally, you can add users to and remove users from groups within a user pool, and you can use groups to control access to your resources in AWS by assigning an IAM role to each group.  You can also use Amazon Cognito combined with Amazon API Gateway to control permissions to your own back-end resources.
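
For example, assuming a placeholder user pool ID, the console steps above map to two API calls, sketched here with boto3; adjust the names and precedence to your own setup.

# Minimal sketch: create a user pool group and add a user to it through the API.
import boto3

idp = boto3.client('cognito-idp')

USER_POOL_ID = 'us-east-1_EXAMPLE'   # placeholder user pool ID

idp.create_group(
    GroupName='AdminGroup',
    UserPoolId=USER_POOL_ID,
    Description='Administrators of the test app',
    Precedence=0    # lower value = higher priority when a user belongs to several groups
)

idp.admin_add_user_to_group(
    UserPoolId=USER_POOL_ID,
    Username='TestAdminUser',
    GroupName='AdminGroup'
)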

Cognito - AddUsersAdminGroup - small

 

Fine-grained Role-Based Access Control in Cognito Federated Identities

Let’s now dig into the Cognito Federated Identities feature, fine-grained Role-Based Access Control, which we will refer to going forward as RBAC. Before we dive into RBAC, let’s do a quick review of the features of Cognito Federated Identities.  Cognito Identity assigns users a set of temporary, limited-privilege credentials to access AWS resources from your application without having to use AWS account credentials. The permissions for each user are controlled through IAM roles that you create.

At this point, let’s journey into RBAC by doing another walkthrough in the management console.  Once in the console, with the Cognito service selected, we will select Federated Identities.  I think it would be best to show Cognito user pools and Federated Identities in action while examining RBAC, so I am going to create a new identity pool that utilizes Cognito user pools as its authentication provider.  To create a new pool, I will first enter a name for my identity pool and select the Enable access to unauthenticated identities checkbox.  Then, under Authentication Providers, I will select the Cognito tab so that I can enter my TestAppPool user pool ID and the app client ID.  Please note that you must have created an app client within your Cognito user pool in order to obtain the app client ID and to allow the app that leverages the Cognito identity pool to access the associated user pool.
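
If you prefer to script this step, the equivalent API call looks roughly like the following; the pool name, region, user pool ID, and app client ID are placeholders for your own values.

# Minimal sketch: create an identity pool backed by a Cognito user pool authentication provider.
import boto3

identity = boto3.client('cognito-identity')

response = identity.create_identity_pool(
    IdentityPoolName='TestAppIdentityPool',   # placeholder name
    AllowUnauthenticatedIdentities=True,
    CognitoIdentityProviders=[{
        'ProviderName': 'cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE',  # placeholder user pool
        'ClientId': 'example-app-client-id'                                       # placeholder app client ID
    }]
)
identity_pool_id = response['IdentityPoolId']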

Cognito - CreateIdentityPool

Now that we have created our identity pool, let’s assign role-based access for the Cognito user pool authentication method.  The simplest way to assign different roles is by defining rules in a Cognito identity pool.  Each rule specifies a user attribute or, as noted in the console, a claim.  A claim is simply the value in a token for that attribute; the rule matches that value and associates it with a specific IAM role.

In order to truly show the benefit of RBAC, I will need a role for our Test App that gives users in the Engineering department access to put objects in S3 and to access DynamoDB.  In order to create this role, I first have to create a policy with PutObject access to S3 and GetItem, Query, Scan, and BatchGetItem access to DynamoDB. Let’s call this policy TestAppEngineerPolicy.  After constructing the aforementioned policy, I will create an IAM role named EngineersRole that uses this policy.

At this point we have a role with fine-grained access to AWS resources, so let’s return to our Cognito identity pool.  Click Edit identity pool and expand the Authentication providers section.  Since the authentication provider for our identity pool is a Cognito user pool, we will select the Cognito tab.  Since we are establishing fine-grained RBAC for the federated identity, I will focus my attention on the Authenticated role selection section of the authentication provider to define a rule.  In this section, click the drop-down list and select the option Choose role with rules.

Cognito - AuthenticationProvider

We will now set up a rule with a claim (an attribute), a value to match, and the specific IAM role, EngineersRole. For our example, the rule I am creating will assign our specific IAM role (EngineersRole) to any user authenticated in our Cognito user pool whose department attribute is set to ‘Engineering’. Please note: the department attribute that we are basing our rule on is a custom attribute that I created in our user pool, TestAppPool, as shown in the graphic below.

Cognito - CustomAttributes - small

Now that we have that cleared up, let’s focus back on the creation of our rule.  For the claim, I will type the aforementioned custom attribute, department.  This rule will be applicable when the value of department is equal to the string “Engineering”; therefore, in the Match Type field I will select the Equals match type.  Finally, I will type the actual string value, “Engineering”, for the attribute value that should be matched in the rule.  If a user has a matching value for the department attribute, they can assume the EngineersRole IAM role when they get credentials. After completing this and clicking the Save Changes button, I have successfully created a rule that gives users who are authenticated with our Cognito user pool and are in the Engineering department different permissions than other authenticated users of the application.

Cognito - AuthenticationRoleSelection

Now that we’ve completed our walkthrough of setting up a rule to assign different roles in a Cognito identity pool, let’s discuss some key points to remember about fine-grained RBAC.  First, rules are defined in order, and the IAM role for the first matching rule is applied. Second, to set up RBAC, you can define rules or leverage the roles passed via the ID token that was assigned by the user pool. For each authentication provider configured in your identity pool, a maximum of 25 rules can be created. Additionally, user permissions are controlled through the IAM roles that you create.
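
If you would rather define the same rule programmatically, the call is sketched below; the identity pool ID, provider key, and role ARNs are placeholders, and the claim name should match how the attribute appears in the ID token (custom attributes typically surface with a custom: prefix).

# Minimal sketch: rules-based role mapping (RBAC) for a Cognito identity pool.
import boto3

identity = boto3.client('cognito-identity')

identity.set_identity_pool_roles(
    IdentityPoolId='us-east-1:00000000-0000-0000-0000-000000000000',    # placeholder
    Roles={
        'authenticated': 'arn:aws:iam::123456789012:role/DefaultAuthRole',       # placeholder
        'unauthenticated': 'arn:aws:iam::123456789012:role/DefaultUnauthRole'    # placeholder
    },
    RoleMappings={
        # Key format: <user pool provider name>:<app client ID> (placeholder values)
        'cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE:example-app-client-id': {
            'Type': 'Rules',
            'AmbiguousRoleResolution': 'AuthenticatedRole',
            'RulesConfiguration': {
                'Rules': [{
                    'Claim': 'custom:department',   # assumes the custom attribute appears as custom:department
                    'MatchType': 'Equals',
                    'Value': 'Engineering',
                    'RoleARN': 'arn:aws:iam::123456789012:role/EngineersRole'    # placeholder
                }]
            }
        }
    }
)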

Pricing and Availability

Developers can get started right away to take advantage of these exciting new features.  Learn more about these new features and the other benefits of leveraging the Amazon Cognito service by visiting our developer resources page.

The great news is that there is no additional cost for using groups within a user pool. You pay only for Monthly Active Users (MAUs) after the free tier. Also remember that using the Cognito Federated Identities feature for controlling user permissions and generating unique identifiers is always free with Amazon Cognito.  See the Amazon Cognito pricing page for more information.

Tara

IT Governance in a Dynamic DevOps Environment

Post Syndicated from Shashi Prabhakar original https://aws.amazon.com/blogs/devops/it-governance-in-a-dynamic-devops-environment/

Governance involves the alignment of security and operations with productivity to ensure a company achieves its business goals. Customers who are migrating to the cloud might be in various stages of implementing governance. Each stage poses its own challenges. In this blog post, the first in a series, I will discuss a four-step approach to automating governance with AWS services.

Governance and the DevOps Environment
Developers with a DevOps and agile mindset are responsible for building and operating services. They often rely on a central security team to develop and apply policies, seek security reviews and approvals, or implement best practices.

These policies and rules are not strictly enforced by the security team. They are treated as guidelines that developers can follow to get the much-desired flexibility from using AWS. However, due to time constraints or lack of awareness, developers may not always follow best practices and standards. If these best practices and rules were strictly enforced, the security team could become a bottleneck.

For customers migrating to AWS, the automated governance mechanisms described in this post will preserve flexibility for developers while providing controls for the security team.

These are some common challenges in a dynamic development environment:

  • Taking a quick or short path to accomplish tasks, such as hardcoding credentials in code.
  • Cost management (for example, controlling the type of instance launched).
  • Knowledge transfer.
  • Manual processes.

Steps to Governance
Here is a four-step approach to automating governance:

At initial setup, you want to implement some (1) controls for high-risk actions. After they are in place, you need to (2) monitor your environment to make sure you have configured resources correctly. Monitoring will help you discover issues you want to (3) fix as soon as possible. You’ll also want to regularly produce an (4) audit report that shows everything is compliant.

The example in this post helps illustrate the four-step approach: A central IT team allows its Big Data team to run a test environment of Amazon EMR clusters. The team runs the EMR job with 100 t2.medium instances, but when a team member spins up 100 r3.8xlarge instances to complete the job more quickly, the business incurs an unexpected expense.

The central IT team cares about governance and implements a few measures to prevent this from happening again:

  • Control elements: The team uses CloudFormation to restrict the number and type of instances and AWS Identity and Access Management to allow only a certain group to modify the EMR cluster.
  • Monitor elements: The team uses tagging, AWS Config, and AWS Trusted Advisor to monitor the instance limit and determine if anyone exceeded the number of allowed instances.
  • Fix: The team creates a custom Config rule to terminate instances that are not of the type specified.
  • Audit: The team reviews the lifecycle of the EMR instance in AWS Config.

 

Control

You can prevent mistakes by standardizing configurations (through AWS CloudFormation), restricting configuration options (through AWS Service Catalog), and controlling permissions (through IAM).

AWS CloudFormation helps you control the workflow environment in a single package. In this example, we use a CloudFormation template to restrict the number and type of instances and tagging to control the environment.

For example, the team can prevent the choice of r3.8xlarge instances by using CloudFormation with a fixed instance type and a fixed number of instances (100).

CloudFormation Template Sample

EMR cluster with tag:

{
  "Type" : "AWS::EMR::Cluster",
  "Properties" : {
    "AdditionalInfo" : JSON object,
    "Applications" : [ Applications, … ],
    "BootstrapActions" : [ Bootstrap Actions, … ],
    "Configurations" : [ Configurations, … ],
    "Instances" : JobFlowInstancesConfig,
    "JobFlowRole" : String,
    "LogUri" : String,
    "Name" : String,
    "ReleaseLabel" : String,
    "ServiceRole" : String,
    "Tags" : [ Resource Tag, … ],
    "VisibleToAllUsers" : Boolean
  }
}
EMR cluster JobFlowInstancesConfig InstanceGroupConfig with fixed instance type and number:

{
  "BidPrice" : String,
  "Configurations" : [ Configuration, … ],
  "EbsConfiguration" : EBSConfiguration,
  "InstanceCount" : Integer,
  "InstanceType" : String,
  "Market" : String,
  "Name" : String
}
AWS Service Catalog can be used to distribute approved products (servers, databases, websites) in AWS. This gives IT administrators more flexibility in terms of which user can access which products. It also gives them the ability to enforce compliance based on business standards.

AWS IAM is used to control which users can access which AWS services and resources. By using IAM roles, you can avoid the use of root credentials in your code to access AWS resources.

In this example, we give the team lead full EMR access, including console and API access (not covered here), and give developers read-only access with no console access. If a developer wants to run the job, the developer just needs PEM files.

IAM Policy
This policy is for the team lead with full EMR access:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:*",
        "cloudformation:CreateStack",
        "cloudformation:DescribeStackEvents",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:AuthorizeSecurityGroupEgress",
        "ec2:CancelSpotInstanceRequests",
        "ec2:CreateRoute",
        "ec2:CreateSecurityGroup",
        "ec2:CreateTags",
        "ec2:DeleteRoute",
        "ec2:DeleteTags",
        "ec2:DeleteSecurityGroup",
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeAccountAttributes",
        "ec2:DescribeInstances",
        "ec2:DescribeKeyPairs",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSpotInstanceRequests",
        "ec2:DescribeSpotPriceHistory",
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcAttribute",
        "ec2:DescribeVpcs",
        "ec2:DescribeNetworkAcls",
        "ec2:CreateVpcEndpoint",
        "ec2:ModifyImageAttribute",
        "ec2:ModifyInstanceAttribute",
        "ec2:RequestSpotInstances",
        "ec2:RevokeSecurityGroupEgress",
        "ec2:RunInstances",
        "ec2:TerminateInstances",
        "elasticmapreduce:*",
        "iam:GetPolicy",
        "iam:GetPolicyVersion",
        "iam:ListRoles",
        "iam:PassRole",
        "kms:List*",
        "s3:*",
        "sdb:*",
        "support:CreateCase",
        "support:DescribeServices",
        "support:DescribeSeverityLevels"
      ],
      "Resource": "*"
    }
  ]
}
This policy is for developers with read-only access:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticmapreduce:Describe*",
        "elasticmapreduce:List*",
        "s3:GetObject",
        "s3:ListAllMyBuckets",
        "s3:ListBucket",
        "sdb:Select",
        "cloudwatch:GetMetricStatistics"
      ],
      "Resource": "*"
    }
  ]
}
These are IAM managed policies. If you want to change the permissions, you can create your own IAM custom policy.

 

Monitor

Use logs available from AWS CloudTrail, Amazon CloudWatch, Amazon VPC, Amazon S3, and Elastic Load Balancing as much as possible. You can use AWS Config, Trusted Advisor, and CloudWatch Events and alarms to monitor these logs.

AWS CloudTrail can be used to log API calls in AWS. It helps you fix problems, secure your environment, and produce audit reports. For example, you could use CloudTrail logs to identify who launched those r3.8xlarge instances.
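
For example, a quick way to answer that question from code is the CloudTrail LookupEvents API; the sketch below filters on the RunInstances event name and prints who made each call and when.

# Minimal sketch: find recent RunInstances calls (for example, to see who launched the r3.8xlarge instances).
import boto3

cloudtrail = boto3.client('cloudtrail')

response = cloudtrail.lookup_events(
    LookupAttributes=[{'AttributeKey': 'EventName', 'AttributeValue': 'RunInstances'}],
    MaxResults=50
)

for event in response['Events']:
    # Username and EventTime identify who launched instances and when.
    print(event['EventTime'], event.get('Username'), event['EventName'])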


AWS Config can be used to keep track of and act on rules. Config rules check the configuration of your AWS resources for compliance. You’ll also get, at a glance, the compliance status of your environment based on the rules you configured.

Amazon CloudWatch can be used to monitor and alarm on incorrectly configured resources. CloudWatch entities (metrics, alarms, logs, and events) help you monitor your AWS resources. Using metrics (including custom metrics), you can monitor resources and get a dashboard with customizable widgets. CloudWatch Logs can be used to stream data from AWS-provided logs in addition to your system logs, which is helpful for fixing and auditing.

CloudWatch Events help you take actions on changes. VPC flow, S3, and ELB logs provide you with data to make smarter decisions when fixing problems or optimizing your environment.

AWS Trusted Advisor analyzes your AWS environment and provides best practice recommendations in four categories: cost, performance, security, and fault tolerance. This online resource optimization tool also includes AWS limit warnings.

We will use Trusted Advisor to make sure a service limit is not going to become a bottleneck in launching 100 instances:

Trusted Advisor


Fix
Depending on the violation and your ability to monitor and view the resource configuration, you might want to take action when you find an incorrectly configured resource that will lead to a security violation. It's important that the fix doesn't result in unwanted consequences and that you maintain an auditable record of the actions you performed.


You can use AWS Lambda to automate everything. When you use Lambda with Amazon CloudWatch Events to fix issues, you can take action on an instance termination event or the addition of a new instance to an Auto Scaling group. You can take action on any AWS API call by selecting it as the event source. You can also use AWS Config managed rules and custom rules with remediation. While AWS Config rules keep you informed about the state of your environment, you can use AWS Lambda to take action on top of these rules, which helps automate the fixes.

AWS Config to Find Running Instance Type

To fix the problem in our use case, you can implement a custom Config rule and trigger an action (for example, shutting down the instances if the instance type is larger than xlarge, or tearing down the EMR cluster).
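
A custom Config rule of that kind is itself a Lambda function that reports each instance's compliance. The following minimal sketch only reports compliance (it assumes you would add the termination or tear-down action separately), and the allowed instance type is a placeholder.

# Minimal sketch of a custom AWS Config rule: flag EC2 instances whose type is not in the allowed list.
import json
import boto3

ALLOWED_INSTANCE_TYPES = {'t2.medium'}   # placeholder list of allowed types

config = boto3.client('config')

def lambda_handler(event, context):
    invoking_event = json.loads(event['invokingEvent'])
    item = invoking_event['configurationItem']

    # Only evaluate EC2 instances; everything else is not applicable.
    # (A production rule would also handle deleted resources and oversized configuration items.)
    if item['resourceType'] != 'AWS::EC2::Instance':
        compliance = 'NOT_APPLICABLE'
    elif item['configuration']['instanceType'] in ALLOWED_INSTANCE_TYPES:
        compliance = 'COMPLIANT'
    else:
        compliance = 'NON_COMPLIANT'
        # A remediation step (for example, ec2.terminate_instances) could be added here.

    config.put_evaluations(
        Evaluations=[{
            'ComplianceResourceType': item['resourceType'],
            'ComplianceResourceId': item['resourceId'],
            'ComplianceType': compliance,
            'OrderingTimestamp': item['configurationItemCaptureTime']
        }],
        ResultToken=event['resultToken']
    )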

 

Audit

You’ll want to have a report ready for the auditor at the end of the year or quarter. You can automate your reporting system using AWS Config resources.

You can view AWS resource configurations and history so you can see when the r3.8xlarge instance cluster was launched or which security group was attached. You can even search for deleted or terminated instances.
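
If you want to pull that history programmatically rather than from the console, the GetResourceConfigHistory API returns the configuration timeline for a single resource; the instance ID below is a placeholder.

# Minimal sketch: retrieve the configuration timeline for one EC2 instance from AWS Config.
import boto3

config = boto3.client('config')

response = config.get_resource_config_history(
    resourceType='AWS::EC2::Instance',
    resourceId='i-0123456789abcdef0'   # placeholder instance ID
)

for item in response['configurationItems']:
    # Shows when the instance's configuration changed and its capture status at that time.
    print(item['configurationItemCaptureTime'], item['configurationItemStatus'])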

AWS Config Resources


More Control, Monitor, and Fix Examples
Armando Leite from AWS Professional Services has created a sample governance framework that leverages CloudWatch Events and AWS Lambda to enforce a set of controls (flows between layers, no OS root access, no remote logins). When a deviation is noted (monitoring), automated action is taken to respond to an event and, if necessary, recover to a known good state (fix).

  • Remediate (for example, shut down the instance) through custom Config rules or a CloudWatch event to trigger the workflow.
  • Monitor a user’s OS activity and escalation to root access. As events unfold, new Lambda functions dynamically enable more logs and subscribe to log data for further live analysis.
  • If the telemetry indicates it’s appropriate, restore the system to a known good state.

Wednesday, November 30: Security and Compliance Sessions Today at re:Invent

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/wednesday-november-30-security-and-compliance-sessions-today-at-reinvent/

re:Invent stage photo

Today, the following security and compliance sessions will be presented at AWS re:Invent 2016 in Las Vegas (all times local). See the re:Invent Session Catalog for complete information about every session. You can also download the AWS re:Invent 2016 Event App for the latest updates and information.

If you are not attending re:Invent 2016, keep in mind that all videos of and slide decks from these sessions will be made available next week. We will publish a post on the Security Blog next week that links to all videos and slide decks from security and compliance sessions.

11:00 A.M.

2:30 P.M.

3:30 P.M.

5:30 P.M.

– Craig

Automated Cleanup of Unused Images in Amazon ECR

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/automated-cleanup-of-unused-images-in-amazon-ecr/

Thanks to my colleague Anuj Sharma for a great blog on cleaning up old images in Amazon ECR using AWS Lambda.
—-

When you use Amazon ECR as part of a container build and deployment pipeline, a new image is created and pushed to ECR on every code change. As a result, repositories tend to quickly fill up with new revisions. Each Docker push command creates a version that may or may not be in use by Amazon ECS tasks, depending on which version of the image is actually used to run the task.

It’s important to have an automated way to discover the image that is in use by running tasks and to delete the stale images. You may also want to keep a predetermined number of images in your repositories for easy fallback on a previous version. In this way, repositories can be kept clean and manageable, and save costs by cleaning undesired images.

In this post, I show you how to query running ECS tasks, identify the images that are in use, and delete the remaining images in ECR. To keep a predetermined number of images irrespective of the images in use by tasks, the solution should keep the latest n images. Because ECR repositories may be hosted in multiple regions, the solution should query either all regions where ECR is available or only a specific region. You should be able to identify and print the images to be deleted, so that extra caution can be exercised. Finally, the solution should run at a scheduled time on a recurring basis, if required.

Cleaning solution

The solution contains two components:

Python script

The Python script is available from the AWS Labs repository. The logic coded is as follows (a condensed boto3 sketch appears after this list):

  • Get a list of all the ECS clusters using ListClusters API operation.
  • For each cluster, get a list of all the running tasks using ListTasks.
  • Describe each running task with DescribeTasks to get its task definition ARN.
  • Get the container image for each running task using DescribeTaskDefinition.
  • Keep only those container images that contain “.dkr.ecr.” and “:”. This ensures that you get a list of all the tagged, ECR-hosted images that are in use.
  • Get a list of all the repositories using DescribeRepositories.
  • For each repository, get the imagePushedAt value, tags, and SHA for every image using DescribeImages.
  • Ignore images that have a “latest” tag or that are currently in use (as discovered in the earlier steps).
  • Delete the remaining images using BatchDeleteImage.
    • The -dryrun flag prints out the images marked for deletion, without deleting them. Defaults to True, which only prints the images to be deleted.
    • The -region flag deletes images only in the specified region. If not specified, all regions where ECR is available are queried and images are deleted. Defaults to None, which queries all regions.
    • The -imagestokeep flag keeps the latest n images and does not delete them. Defaults to 100. For example, if -imagestokeep 20 is specified, the last 20 images in each repository are not deleted and the remaining images that satisfy the logic above are deleted.
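
The following condensed boto3 sketch illustrates this discovery-and-delete flow. It is an illustration only, not the full script: pagination, the region loop, and the dry-run handling are omitted, and names such as REGION and IMAGES_TO_KEEP are assumptions.

import boto3

REGION = "us-west-2"          # assumed region; the real script can loop over all regions
IMAGES_TO_KEEP = 100          # mirrors the -imagestokeep default

ecs = boto3.client("ecs", region_name=REGION)
ecr = boto3.client("ecr", region_name=REGION)

# 1. Collect the image URIs referenced by running tasks.
running_images = set()
for cluster in ecs.list_clusters()["clusterArns"]:
    task_arns = ecs.list_tasks(cluster=cluster, desiredStatus="RUNNING")["taskArns"]
    if not task_arns:
        continue
    for task in ecs.describe_tasks(cluster=cluster, tasks=task_arns)["tasks"]:
        task_def = ecs.describe_task_definition(taskDefinition=task["taskDefinitionArn"])
        for container in task_def["taskDefinition"]["containerDefinitions"]:
            image = container["image"]
            if ".dkr.ecr." in image and ":" in image:   # ECR-hosted, tagged images only
                running_images.add(image)

# 2. For each repository, delete images that are not running, not "latest",
#    and older than the newest IMAGES_TO_KEEP images.
for repo in ecr.describe_repositories()["repositories"]:
    images = ecr.describe_images(repositoryName=repo["repositoryName"])["imageDetails"]
    images.sort(key=lambda d: d["imagePushedAt"], reverse=True)
    for detail in images[IMAGES_TO_KEEP:]:
        tags = detail.get("imageTags", [])
        uri_tags = {"{}:{}".format(repo["repositoryUri"], t) for t in tags}
        if "latest" in tags or uri_tags & running_images:
            continue
        print("Deleting {}".format(detail["imageDigest"]))   # dry-run style output
        ecr.batch_delete_image(
            repositoryName=repo["repositoryName"],
            imageIds=[{"imageDigest": detail["imageDigest"]}],
        )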

Lambda function

The Lambda function runs the Python script, which is executed using CloudWatch scheduled events. Here’s a diagram of the workflow:

(1) The build system builds and pushes the container image. The build system also tags the image being pushed, either with the source control commit SHA hash value or with an incremental package version. This gives you full control over which version of the container image a task definition runs. These versioned images are then used to run tasks, either with your own scheduler or with the ECS scheduling capabilities.
(2) The Lambda function gets invoked by CloudWatch Events at the specified time and queries all available clusters.
(3) Based on the coded Python logic, the container image tags being used by running tasks are discovered.
(4) The Lambda function also queries all images available in ECR and, based on the coded logic, decides on the image tags to be cleaned.
(5) The Lambda function deletes the images as discovered by the coded logic.

Important: It’s a good practice to run the task definition with an image that has a version in its image tag, so that you have better visibility into and control over the version of the running container. The build system can tag the image with the source code version or package version before pushing it, so that every pushed image carries a version tag that can then be used to run the task. The script assumes that the task definitions specify a versioned container image. I recommend running the script first with the -dryrun flag to identify the images to be deleted.

Usage examples

Prints the images that are not used by running tasks and which are older than the last 100 versions, in all regions: python main.py

Deletes the images that are not used by running tasks and which are older than the last 100 versions, in all regions: python main.py -dryrun False

Deletes the images that are not used by running tasks and which are older than the last 20 versions (in each repository), in all regions: python main.py -dryrun False -imagestokeep 20

Deletes the images that are not used by running tasks and which are older than the last 20 versions (in each repository), in Oregon only: python main.py -dryrun False -imagestokeep 20 -region us-west-2

Assumptions

I’ve made the following assumptions for this solution:

  • Your ECR repository is in the same account as the running ECS clusters.
  • Each container image that has been pushed is tagged with a unique tag.

Docker tags and container image names are mutable. Multiple names can point to the same underlying image and the same image name can point to different underlying images at different points in time.

To trace the version of running code in a container to the source code, it’s a good practice to push the container image with a unique tag, which could be the source control commit SHA hash value or a package’s incremental version number; never reuse the same tag. In this way, you have better control over identifying the version of code to run in the container image, with no ambiguity.

Lambda function deployment

Now it’s time to create and deploy the Lambda function. You can do this through the console or the AWS CLI.

Using the AWS Management Console

Use the console to create a policy, role, and function.

Create a policy

Create a policy with the following policy text. This defines the access for the necessary APIs.

Sign in to the IAM console and choose Policies, Create Policy, Create Your Own Policy. Copy and paste the following policy, type a name, and choose Create.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1476919244000",
            "Effect": "Allow",
            "Action": [
                "ecr:BatchDeleteImage",
                "ecr:DescribeRepositories",
                "ecr:ListImages",
                "ecr:DescribeImages"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Sid": "Stmt1476919353000",
            "Effect": "Allow",
            "Action": [
                "ecs:DescribeClusters",
                "ecs:DescribeTaskDefinition",
                "ecs:DescribeTasks",
                "ecs:ListClusters",
                "ecs:ListTaskDefinitions",
                "ecs:ListTasks"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

Create a Lambda role

Create a role for the Lambda function, using the policy that you just created.

In the IAM console, choose Roles and enter a name, such as LAMBDA_ECR_CLEANUP. Choose AWS Lambda, select your custom policy, and choose Create Role.

Create a Lambda function

Create a Lambda function, with the role that you just created and the Python code available from the Lambda_ECR_Cleanup repository. For more information, see the Lambda function handler page in the Lambda documentation.

Schedule the Lambda function

Add the trigger for your Lambda function.

In the CloudWatch console, choose Events, Create rule, and then under Event selector, choose Schedule. For example, you can enter the cron expression 0 22 * * ? * to run the function every day at 10 PM UTC (CloudWatch Events cron expressions use six fields). For more information, see Scenario 2: Take Scheduled EBS Snapshots.
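
If you prefer to script this trigger, the following boto3 sketch creates an equivalent scheduled rule and wires it to the function. The rule name DailyECRCleanup and the function name are placeholders, not values defined in this post.

import boto3

events = boto3.client("events")
lam = boto3.client("lambda")

FUNCTION_NAME = "ecr-cleanup"          # assumed Lambda function name
FUNCTION_ARN = lam.get_function(FunctionName=FUNCTION_NAME)["Configuration"]["FunctionArn"]

# Scheduled rule: every day at 22:00 UTC (CloudWatch Events cron has six fields).
rule_arn = events.put_rule(
    Name="DailyECRCleanup",
    ScheduleExpression="cron(0 22 * * ? *)",
    State="ENABLED",
)["RuleArn"]

# Allow CloudWatch Events to invoke the function, then add it as a target.
lam.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId="AllowCloudWatchEventsInvoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)
events.put_targets(
    Rule="DailyECRCleanup",
    Targets=[{"Id": "1", "Arn": FUNCTION_ARN}],
)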

Using the AWS CLI

Use the following code examples to create the Lambda function using the AWS CLI. Replace values as needed.

Create a policy

Create a file called LAMBDA_ECR_DELETE.json with the following trust policy, which allows Lambda to assume the role. You reference this file when you create the IAM role.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "lambda.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Also create a file called LAMBDA_ECR_PERMISSIONS.json with the ECR and ECS permissions the function needs:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchDeleteImage",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:DescribeImages",
        "ecs:DescribeClusters",
        "ecs:DescribeTaskDefinition",
        "ecs:DescribeTasks",
        "ecs:ListClusters",
        "ecs:ListTaskDefinitions",
        "ecs:ListTasks"
      ],
      "Resource": ["*"]
    }
  ]
}

Create a Lambda role

aws iam create-role --role-name LAMBDA_ECR_DELETE --assume-role-policy-document file://LAMBDA_ECR_DELETE.json

aws iam put-role-policy --role-name LAMBDA_ECR_DELETE --policy-name LAMBDA_ECR_PERMISSIONS --policy-document file://LAMBDA_ECR_PERMISSIONS.json

Create a Lambda function

aws lambda create-function --function-name {NAME_OF_FUNCTION} --runtime python2.7 \
    --role {ARN_NUMBER} --handler main.handler --timeout 15 \
    --zip-file fileb://{ZIP_FILE_PATH}

Schedule a Lambda function

For more information, see Scenario 6: Run an AWS Lambda Function on a Schedule Using the AWS CLI.

Conclusion

The SDK for Python provides methods and access to ECS and ECR APIs, which can be leveraged to write logic to identify stale container images. Lambda can be used to run the code and avoid provisioning any servers. CloudWatch Events can be used to trigger the Lambda code on a recurring schedule, to keep your repositories clean and manageable and control costs.

If you have questions or suggestions, please comment below.

Dates, Times, and Locations of All Security and Compliance Sessions Taking Place at AWS re:Invent 2016

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/all-security-and-compliance-sessions-at-reinvent-2016/

re:Invent stage photo

AWS re:Invent 2016 will take place November 28 through December 2 in Las Vegas, Nevada, and the following security and compliance sessions will be presented. See the re:Invent Session Catalog for complete information about these sessions.

If you are not attending re:Invent 2016, keep in mind that we will publish a post on the Security Blog the week after re:Invent that links to all videos and slide decks from these sessions.

Tuesday, November 29

9:30 A.M.

10:00 A.M.

11:00 A.M.

12:30 P.M.

1:00 P.M.

2:00 P.M.

3:30 P.M.

5:00 P.M.

Wednesday, November 30

11:00 A.M.

2:30 P.M.

3:30 P.M.

5:30 P.M.

Thursday, December 1

11:00 A.M.

11:30 A.M.

12:30 P.M.

1:00 P.M.

2:00 P.M.

2:30 P.M.

3:30 P.M.

4:00 P.M.

Friday, December 2

9:00 A.M.

9:30 A.M.

11:00 A.M.

12:30 P.M.

– Craig

Now Create and Manage Users More Easily with the AWS IAM Console

Post Syndicated from Rob Moncur original https://aws.amazon.com/blogs/security/now-create-and-manage-users-more-easily-with-the-aws-iam-console/

Today, we updated the AWS Identity and Access Management (IAM) console to make it easier for you to create and manage your IAM users. These improvements include an updated user creation workflow and new ways to assign and manage permissions. The new user workflow guides you through the process of setting user details, including enabling programmatic access (via access key) and console access (via password). In addition, you can assign permissions by adding users to a group, copying permissions from an existing user, and attaching policies directly to users. We have also updated the tools to view details and manage permissions for existing users. Finally, we’ve added 10 new AWS managed policies for job functions that you can use when assigning permissions.
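
If you manage users with the APIs rather than the console, the same workflow looks roughly like the following boto3 sketch. The group name Developers and the temporary password are placeholders, not values from this post.

import boto3

iam = boto3.client("iam")
user_name = "arthur"

# Create the user.
iam.create_user(UserName=user_name)

# Console access: an initial password that must be changed at first sign-in.
iam.create_login_profile(
    UserName=user_name,
    Password="Temp-Password-123!",        # placeholder; generate a strong one in practice
    PasswordResetRequired=True,
)

# Programmatic access: an access key ID and secret access key.
keys = iam.create_access_key(UserName=user_name)["AccessKey"]
print("Access key ID: {}".format(keys["AccessKeyId"]))   # the secret is shown only once; store it securely

# Assign permissions by adding the user to an existing group.
iam.add_user_to_group(GroupName="Developers", UserName=user_name)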

In this post, I show how to use the updated user creation workflow and introduce changes to the user details pages. If you want to learn more about the new AWS managed policies for job functions, see How to Assign Permissions Using New AWS Managed Policies for Job Functions.

The new user creation workflow

Let’s imagine a new database administrator, Arthur, has just joined your team and will need access to your AWS account. To give Arthur access to your account, you must create a new IAM user for Arthur and assign relevant permissions so that Arthur can do his job.

To create a new IAM user:

  1. Navigate to the IAM console.
  2. To create the new user for Arthur, choose Users in the left pane, and then choose Add user.
    Screenshot of creating new user

Set user details
The first step in creating the user arthur is to enter a user name for Arthur and assign his access type:

  1. Type arthur in the User name box. (Note that this workflow allows you to create multiple users at a time. If you create more than one user in a single workflow, all users will have the same access type and permissions.)
    Screenshot of establishing the user name
  2. In addition to using the AWS Management Console, Arthur needs to use the AWS CLI and make API calls using the AWS SDK; therefore, you have to configure both programmatic and AWS Management Console access for him (see the following screenshot). If you select AWS Management Console access, you also have the option to use either an Autogenerated password, which will generate a random password retrievable after the user has been created, or a Custom password, which you can define yourself. In this case, choose Autogenerated password.

By enabling the option User must change password at next sign-in, arthur will be required to change his password the first time he signs in. If you do not have the account-wide setting enabled that allows users to change their own passwords, the workflow will automatically add the IAMUserChangePassword policy to arthur, giving him the ability to change his own password (and no one else’s).
Screenshot of configuring both programmatic and AWS Management Console access for Arthur

You can see the contents of the policy by clicking IAMUserChangePassword. This policy grants access to the IAM action iam:ChangePassword and leverages an IAM policy variable, ${aws:username}, which resolves to the user name of the authenticating user. As a result, any user the policy is applied to can change only their own password. The policy also grants access to the IAM action iam:GetAccountPasswordPolicy, which lets a user see the account password policy details so that they can set a password that conforms to it.

{
    "Version": "2012-10-17",
    "Statement": [
        {
           "Effect": "Allow",
           "Action": [
               "iam:ChangePassword"
           ],
           "Resource": [
               "arn:aws:iam::*:user/${aws:username}"
           ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:GetAccountPasswordPolicy"
            ],
            "Resource": "*"
        }

    ]
}

Assign permissions

Arthur needs the necessary permissions to do his job as a database administrator. Because you do not have an IAM group set up for database administrators yet, you will create a new group and assign the proper permissions to it:

  1. Knowing that you may grow the database administrator team, using a group to assign permissions will make it easy to assign permissions to additional team members in the future. Choose Add user to group.
  2. Choose Create group. This opens a new window where you can create a new group.
    Screenshot of creating the group
  3. Call the new group DatabaseAdmins and attach the DatabaseAdministrator policy from the new AWS managed policies for job functions, as shown in the following screenshot. This policy enables Arthur to use AWS database services and other supporting services to do his job as a database administrator.

Note: This policy enables you to use additional features in other AWS services (such as Amazon CloudWatch and AWS Data Pipeline). In order to do so, you must create one or more IAM service roles. To understand the different features and the service roles required, see our documentation.
Screenshot of creating DatabaseAdmins group

Review the permissions you assigned to arthur

After creating the group, move to the review step to verify the details and permissions you assigned to arthur (see the following screenshot). If you decide you need to adjust the permissions, you can choose Previous to add or remove assigned permissions. After confirming that arthur has the correct access type and permissions, you can proceed to the next step by choosing Create user.

Screenshot of reviewing the permissions

Retrieve security credentials and email sign-in instructions

The user arthur has now been created with the appropriate permissions.

Screenshot of the "Success" message

You can now retrieve security credentials (access key ID, secret access key, and console password). By choosing Send email, an email with instructions about how to sign in to the AWS Management Console will be generated in your default email application.

Screenshot of the "Send email" link

This email provides a convenient way to send instructions about how to sign in to the AWS Management Console. The email does not include access keys or passwords, so to enable users to get started, you also will need to securely transmit those credentials according to your organization’s security policies.

Screenshot of email with sign-in instructions

Updates to the user details pages

We have also refreshed the user details pages. On the Permissions tab, you will see that the previous Attach policy button is now called Add permissions. This will launch you into the same permissions assignment workflow used in the user creation process. We’ve also changed the way that policies attached to a user are displayed and have added the count of groups attached to the user in the label of the Groups tab.

Screenshot of the changed user details page

On the Security credentials tab, we’ve updated a few items as well. We’ve updated the Sign-in credentials section and added Console password, which shows if AWS Management Console access is enabled or disabled. We’ve also added the Console login link to make it easier to find. We have also updated the Console password, Create access key, and Upload SSH public key workflows so that they are easier to use.

Screenshot of updates made to the "Security credentials" tab

Conclusion

We made these improvements to make it easier for you to create and manage permissions for your IAM users. As you are using these new tools, make sure to review and follow IAM best practices, which can help you improve your security posture and make your account easier to manage.

If you have any feedback or questions about these new features, submit a comment below or start a new thread on the IAM forum.

– Rob

Building a Backup System for Scaled Instances using AWS Lambda and Amazon EC2 Run Command

Post Syndicated from Vyom Nagrani original https://aws.amazon.com/blogs/compute/building-a-backup-system-for-scaled-instances-using-aws-lambda-and-amazon-ec2-run-command/

Diego Natali Diego Natali, AWS Cloud Support Engineer

When an Auto Scaling group needs to scale in, replace an unhealthy instance, or re-balance Availability Zones, the instance is terminated, data on the instance is lost, and any ongoing tasks are interrupted. This is normal behavior, but sometimes there are use cases in which you might need to run some commands, wait for a task to complete, or execute some operations (for example, backing up logs) before the instance is terminated. For this reason, Auto Scaling introduced lifecycle hooks, which give you more control over timing after an instance is marked for termination.

In this post, I explore how you can leverage Auto Scaling lifecycle hooks, AWS Lambda, and Amazon EC2 Run Command to back up your data automatically before the instance is terminated. The solution illustrated allows you to back up your data to an S3 bucket; however, with minimal changes, it is possible to adapt this design to carry out any task that you prefer before the instance gets terminated, for example, waiting for a worker to complete a task before terminating the instance.

 

Using Auto Scaling lifecycle hooks, Lambda, and EC2 Run Command

You can configure your Auto Scaling group to add a lifecycle hook when an instance is selected for termination. The lifecycle hook enables you to perform custom actions as Auto Scaling launches or terminates instances. In order to perform these actions automatically, you can leverage Lambda and EC2 Run Command to allow you to avoid the use of additional software and to rely completely on AWS resources.

For example, when an instance is marked for termination, Amazon CloudWatch Events can execute an action based on that. This action can be a Lambda function to execute a remote command on the machine and upload your logs to your S3 bucket.

EC2 Run Command enables you to run remote scripts through the agent running within the instance. You use this feature to back up the instance logs and to complete the lifecycle hook so that the instance is terminated.

The example provided in this post works precisely this way. Lambda gathers the instance ID from CloudWatch Events and then triggers a remote command to back up the instance logs.

Architecture Graph

 

Set up the environment

Make sure that you have the latest version of the AWS CLI installed locally. For more information, see Getting Set Up with the AWS Command Line Interface.

Step 1 – Create an SNS topic to receive the result of the backup

In this step, you create an Amazon SNS topic in the region in which to run your Auto Scaling group. This topic allows EC2 Run Command to send you the outcome of the backup. The output of the aws sns create-topic command includes the topic ARN. Save the ARN, as you need it for future steps.

aws sns create-topic --name backupoutcome

Now subscribe your email address as the endpoint for SNS to receive messages.

aws sns subscribe --topic-arn <enter-your-sns-arn-here> --protocol email --notification-endpoint <your_email>

Step 2 – Create an IAM role for your instances and your Lambda function

In this step, you use the AWS console to create the AWS Identity and Access Management (IAM) role for your instances and Lambda to enable them to run the SSM agent, upload your files to your S3 bucket, and complete the lifecycle hook.

First, you need to create a custom policy to allow your instances and Lambda function to complete lifecycle hooks and publish to the SNS topic set up in Step 1.

  1. Log into the IAM console.
  2. Choose Policies, Create Policy
  3. For Create Your Own Policy, choose Select.
  4. For Policy Name, type “ASGBackupPolicy”.
  5. For Policy Document, paste the following policy, which allows the instances and the Lambda function to complete a lifecycle hook and publish to the SNS topic:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "autoscaling:CompleteLifecycleAction",
        "sns:Publish"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

Create the role for EC2.

  1. In the left navigation pane, choose Roles, Create New Role.
  2. For Role Name, type “instance-role” and choose Next Step.
  3. Choose Amazon EC2 and choose Next Step.
  4. Add the policies AmazonEC2RoleforSSM and ASGBackupPolicy.
  5. Choose Next Step, Create Role.

Create the role for the Lambda function.

  1. In the left navigation pane, choose Roles, Create New Role.
  2. For Role Name, type “lambda-role” and choose Next Step.
  3. Choose AWS Lambda and choose Next Step.
  4. Add the policies AmazonSSMFullAccess, ASGBackupPolicy, and AWSLambdaBasicExecutionRole.
  5. Choose Next Step, Create Role.

Step 3 – Create an Auto Scaling group and configure the lifecycle hook

In this step, you create the Auto Scaling group and configure the lifecycle hook.

  1. Log into the EC2 console.
  2. Choose Launch Configurations, Create launch configuration.
  3. Select the latest Amazon Linux AMI and whatever instance type you prefer, and choose Next: Configuration details.
  4. For Name, type “ASGBackupLaunchConfiguration”.
  5. For IAM role, choose “instance-role” and expand Advanced Details.
  6. For User data, add the following lines to install and launch the SSM agent at instance boot:
    #!/bin/bash
    sudo yum install amazon-ssm-agent -y
    sudo /sbin/start amazon-ssm-agent
  7. Choose Skip to review, Create launch configuration, select your key pair, and then choose Create launch configuration.
  8. Choose Create an Auto Scaling group using this launch configuration.
  9. For Group name, type “ASGBackup”.
  10. Select your VPC and at least one subnet and then choose Next: Configuration scaling policies, Review, and Create Auto Scaling group.

Your Auto Scaling group is now created and you need to add the lifecycle hook named “ASGBackup” by using the AWS CLI:

aws autoscaling put-lifecycle-hook --lifecycle-hook-name ASGBackup --auto-scaling-group-name ASGBackup --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING --heartbeat-timeout 3600

Step 4 – Create an S3 bucket for files

Create an S3 bucket where your data will be saved, or use an existing one. To create a new one, you can use this AWS CLI command:

aws s3api create-bucket --bucket <your_bucket_name>

Step 5 – Create the SSM document

The following JSON document archives the files in “BACKUPDIRECTORY” and then copies them to your S3 bucket “S3BUCKET”. Every time this command completes its execution, an SNS message is sent to the SNS topic specified by the “SNSTARGET” variable, and the lifecycle hook is completed.

In your JSON document, you need to make a few changes according to your environment:

  • Auto Scaling group name (line 12): ASGNAME='ASGBackup'
  • Lifecycle hook name (line 13): LIFECYCLEHOOKNAME='ASGBackup'
  • Directory to back up (line 14): BACKUPDIRECTORY='/var/log'
  • S3 bucket (line 15): S3BUCKET='<your_bucket_name>'
  • SNS target (line 19; defined after the region lookup so that ${REGION} resolves): SNSTARGET='arn:aws:sns:'${REGION}':<your_account_id>:<your_sns_backupoutcome_topic>'

Here is the document:

{
  "schemaVersion": "1.2",
  "description": "Backup logs to S3",
  "parameters": {},
  "runtimeConfig": {
    "aws:runShellScript": {
      "properties": [
        {
          "id": "0.aws:runShellScript",
          "runCommand": [
            "",
            "ASGNAME='ASGBackup'",
            "LIFECYCLEHOOKNAME='ASGBackup'",
            "BACKUPDIRECTORY='/var/log'",
            "S3BUCKET='<your_bucket_name>'",
            "INSTANCEID=$(curl http://169.254.169.254/latest/meta-data/instance-id)",
            "REGION=$(curl http://169.254.169.254/latest/meta-data/placement/availability-zone)",
            "REGION=${REGION::-1}",
            "SNSTARGET='arn:aws:sns:'${REGION}':<your_account_id>:<your_sns_backupoutcome_topic>'",
            "HOOKRESULT='CONTINUE'",
            "MESSAGE=''",
            "",
            "tar -cf /tmp/${INSTANCEID}.tar $BACKUPDIRECTORY &> /tmp/backup",
            "if [ $? -ne 0 ]",
            "then",
            "   MESSAGE=$(cat /tmp/backup)",
            "else",
            "   aws s3 cp /tmp/${INSTANCEID}.tar s3://${S3BUCKET}/${INSTANCEID}/ &> /tmp/backup",
            "       MESSAGE=$(cat /tmp/backup)",
            "fi",
            "",
            "aws sns publish --subject 'ASG Backup' --message \"$MESSAGE\"  --target-arn ${SNSTARGET} --region ${REGION}",
            "aws autoscaling complete-lifecycle-action --lifecycle-hook-name ${LIFECYCLEHOOKNAME} --auto-scaling-group-name ${ASGNAME} --lifecycle-action-result ${HOOKRESULT} --instance-id ${INSTANCEID}  --region ${REGION}"
          ]
        }
      ]
    }
  }
}
  1. Log into the EC2 console.
  2. Choose Command History, Documents, Create document.
  3. For Document name, enter “ASGLogBackup”.
  4. For Content, add the above JSON, modified for your environment.
  5. Choose Create document.

Step 6 – Create the Lambda function

The Lambda function uses modules included in the Python 2.7 Standard Library and the AWS SDK for Python module (boto3), which is preinstalled as part of Lambda. The function code performs the following (a minimal handler sketch appears at the end of this step):

  • Checks to see whether the SSM document exists. This document is the script that your instance runs.
  • Sends the command to the instance that is being terminated. It checks for the status of EC2 Run Command and if it fails, the Lambda function completes the lifecycle hook.
  1. Log in to the Lambda console.
  2. Choose Create Lambda function.
  3. For Select blueprint, choose Skip, Next.
  4. For Name, type “lambda_backup” and for Runtime, choose Python 2.7.
  5. For Lambda function code, paste the Lambda function from the GitHub repository.
  6. Choose Choose an existing role.
  7. For Role, choose lambda-role (previously created).
  8. In Advanced settings, configure Timeout for 5 minutes.
  9. Choose Next, Create function.

Your Lambda function is now created.
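
The complete function is in the GitHub repository. As a rough guide to its shape, here is a minimal handler sketch; the document name matches Step 5, and treating a failed send as ABANDON is an assumption you may change to CONTINUE.

import boto3
from botocore.exceptions import ClientError

DOCUMENT_NAME = "ASGLogBackup"   # the SSM document created in Step 5

ssm = boto3.client("ssm")
autoscaling = boto3.client("autoscaling")

def handler(event, context):
    detail = event["detail"]
    instance_id = detail["EC2InstanceId"]

    try:
        # Confirm the SSM document exists, then run it on the terminating instance.
        ssm.describe_document(Name=DOCUMENT_NAME)
        ssm.send_command(
            InstanceIds=[instance_id],
            DocumentName=DOCUMENT_NAME,
            TimeoutSeconds=120,
        )
    except ClientError as err:
        print("Could not send command: {}".format(err))
        # If the backup cannot even start, release the instance so that
        # Auto Scaling is not left waiting for the heartbeat timeout.
        autoscaling.complete_lifecycle_action(
            LifecycleHookName=detail["LifecycleHookName"],
            AutoScalingGroupName=detail["AutoScalingGroupName"],
            LifecycleActionResult="ABANDON",     # assumption; CONTINUE also works here
            InstanceId=instance_id,
        )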

Step 7 – Configure CloudWatch Events to trigger the Lambda function

Create an event rule to trigger the Lambda function.

  1. Log in to the CloudWatch console.
  2. Choose Events, Create rule.
  3. For Select event source, choose Auto Scaling.
  4. For Specific instance event(s), choose EC2 Instance-terminate Lifecycle Action and for Specific group name(s), choose ASGBackup.
  5. For Targets, choose Lambda function and for Function, select the Lambda function that you previously created, “lambda_backup”.
  6. Choose Configure details.
  7. In Rule definition, type a name and choose Create rule.

Your event rule is now created; whenever your Auto Scaling group “ASGBackup” starts terminating an instance, your Lambda function will be triggered.

Step 8 – Test the environment

From the Auto Scaling console, you can change the desired capacity and the minimum for your Auto Scaling group to 0 so that the running instance starts terminating. After the termination starts, you can see from the Instances tab that the instance lifecycle status has changed to Termination:Wait. While the instance is in this state, the Lambda function and the command are executed.

You can review your CloudWatch logs to see the Lambda output. In the CloudWatch console, choose Logs and /aws/lambda/lambda_backup to see the execution output.

You can go to your S3 bucket and check that the files were uploaded. You can also check Command History in the EC2 console to see if the command was executed correctly.

Conclusion

Now that you’ve seen an example of how you can combine various AWS services to automate the backup of your files by relying only on AWS services, I hope you are inspired to create your own solutions.

Auto Scaling lifecycle hooks, Lambda, and EC2 Run Command are powerful tools because they allow you to respond to Auto Scaling events automatically, such as when an instance is terminated. However, you can also use the same idea for other solutions like exiting processes gracefully before an instance is terminated, deregistering your instance from service managers, and scaling stateful services by moving state to other running instances. The possible use cases are only limited by your imagination.

I’ve open-sourced the code in this example in the awslabs GitHub repo; I can’t wait to see your feedback and your ideas about how to improve the solution.

How to Assign Permissions Using New AWS Managed Policies for Job Functions

Post Syndicated from Joy Chatterjee original https://aws.amazon.com/blogs/security/how-to-assign-permissions-using-new-aws-managed-policies-for-job-functions/

Today, AWS Identity and Access Management (IAM) made 10 AWS managed policies available that align with common job functions in organizations. AWS managed policies enable you to set permissions using policies that AWS creates and manages, and with a single AWS managed policy for job functions, you can grant the permissions necessary for network or database administrators, for example.

You can attach multiple AWS managed policies to your users, roles, or groups if they span multiple job functions. As with all AWS managed policies, AWS will keep these policies up to date as we introduce new services or actions. You can use any AWS managed policy as a template or starting point for your own custom policy if the policy does not fully meet your needs (however, AWS will not automatically update any of your custom policies). In this blog post, I introduce the new AWS managed policies for job functions and show you how to use them to assign permissions.

The following list shows the new AWS managed policies for job functions and their descriptions.

  • Administrator: This policy grants full access to all AWS services.
  • Billing: This policy grants permissions for billing and cost management. These permissions include viewing and modifying budgets and payment methods. An additional step is required to access the AWS Billing and Cost Management pages after assigning this policy.
  • Data Scientist: This policy grants permissions for data analytics and analysis. Access to the following AWS services is a part of this policy: Amazon Elastic Map Reduce, Amazon Redshift, Amazon Kinesis, Amazon Machine Learning, Amazon Data Pipeline, Amazon S3, and Amazon Elastic File System. This policy additionally enables you to use optional IAM service roles to leverage features in other AWS services. To grant such access, you must create a role for each of these services.
  • Database Administrator: This policy grants permissions to all AWS database services. Access to the following AWS services is a part of this policy: Amazon DynamoDB, Amazon ElastiCache, Amazon RDS, and Amazon Redshift. This policy additionally enables you to use optional IAM service roles to leverage features in other AWS services. To grant such access, you must create a role for each of these services.
  • Developer Power User: This policy grants full access to all AWS services except for IAM.
  • Network Administrator: This policy grants permissions required for setting up and configuring network resources. Access to the following AWS services is included in this policy: Amazon Route 53, Route 53 Domains, Amazon VPC, and AWS Direct Connect. This policy grants access to actions that require delegating permissions to CloudWatch Logs. This policy additionally enables you to use optional IAM service roles to leverage features in other AWS services. To grant such access, you must create a role for this service.
  • Security Auditor: This policy grants permissions required for configuring security settings and for monitoring events and logs in the account.
  • Support User: This policy grants permissions to troubleshoot and resolve issues in an AWS account. This policy also enables the user to contact AWS support to create and manage cases.
  • System Administrator: This policy grants permissions needed to support system and development operations. Access to the following AWS services is included in this policy: AWS CloudTrail, Amazon CloudWatch, CodeCommit, CodeDeploy, AWS Config, AWS Directory Service, EC2, IAM, AWS KMS, Lambda, RDS, Route 53, S3, SES, SQS, AWS Trusted Advisor, and Amazon VPC. This policy grants access to actions that require delegating permissions to EC2, CloudWatch, Lambda, and RDS. This policy additionally enables you to use optional IAM service roles to leverage features in other AWS services. To grant such access, you must create a role for each of these services.
  • View Only User: This policy grants permissions to view existing resources across all AWS services within an account.

Some of the policies in the preceding list enable you to take advantage of additional features that are optional. These policies grant access to iam:PassRole, which passes a role to delegate permissions to an AWS service to carry out actions on your behalf. For example, the Network Administrator policy passes a role to CloudWatch’s flow-logs-vpc so that a network administrator can log and capture IP traffic for all the Amazon VPCs they create. You must create IAM service roles to take advantage of the optional features. To follow security best practices, the policies already include permissions to pass the optional service roles with a naming convention. This avoids escalating or granting unnecessary permissions if there are other service roles in the AWS account. If your users require the optional service roles, you must create a role that follows the naming conventions specified in the policy and then grant permissions to the role.

For example, your system administrator may want to run an application on an EC2 instance, which requires passing a role to Amazon EC2. The system administrator policy already has permissions to pass a role named ec2-sysadmin-*. When you create a role called ec2-sysadmin-example-application, for example, and assign the necessary permissions to the role, the role is passed automatically to the service and the system administrator can start using the features. The documentation summarizes each of the use cases for each job function that requires delegating permissions to another AWS service.

How to assign permissions by using an AWS managed policy for job functions

Let’s say that your company is new to AWS, and you have an employee, Alice, who is a database administrator. You want Alice to be able to use and manage all the database services while not giving her full administrative permissions. In this scenario, you can use the Database Administrator AWS managed policy. This policy grants view, read, write, and admin permissions for RDS, DynamoDB, Amazon Redshift, ElastiCache, and other AWS services a database administrator might need.

The Database Administrator policy passes several roles for various use cases. The following policy shows the different service roles that are applicable to a database administrator.

{
            "Effect": "Allow",
            "Action": [
                "iam:GetRole",
                "iam:PassRole"
            ],
            "Resource": [
                "arn:aws:iam::*:role/rds-monitoring-role",
                "arn:aws:iam::*:role/rdbms-lambda-access",
                "arn:aws:iam::*:role/lambda_exec_role",
                "arn:aws:iam::*:role/lambda-dynamodb-*",
                "arn:aws:iam::*:role/lambda-vpc-execution-role",
                "arn:aws:iam::*:role/DataPipelineDefaultRole",
                "arn:aws:iam::*:role/DataPipelineDefaultResourceRole"
            ]
        }

However, in order for user Alice to be able to leverage any of the features that require another service, you must create a service role and grant it permissions. In this scenario, Alice wants only to monitor RDS databases. To enable Alice to monitor RDS databases, you must create a role called rds-monitoring-role and assign the necessary permissions to the role.
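
If you script permissions instead of using the console, the equivalent setup looks roughly like the following boto3 sketch. The policy ARNs follow the standard job-function and service-role paths, and the monitoring.rds.amazonaws.com trust principal is an assumption to confirm against the RDS Enhanced Monitoring documentation.

import json
import boto3

iam = boto3.client("iam")

# Attach the Database Administrator job-function policy to Alice.
iam.attach_user_policy(
    UserName="Alice",
    PolicyArn="arn:aws:iam::aws:policy/job-function/DatabaseAdministrator",
)

# Create the service role the policy expects for RDS Enhanced Monitoring.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "monitoring.rds.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(
    RoleName="rds-monitoring-role",                 # must match the name in the policy
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="rds-monitoring-role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AmazonRDSEnhancedMonitoringRole",
)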

Steps to assign permissions to the Database Administrator policy in the IAM console

  1. Sign in to the IAM console.
  2. In the left pane, choose Policies and type database in the Filter box.
    Screenshot of typing "database" in filter box
  3. Choose the Database Administrator policy, and then choose Attach in the Policy Actions drop-down menu.
    Screenshot of choosing "Attach" from drop-down list
  4. Choose the user (in this case, I choose Alice), and then choose Attach Policy.
    Screenshot of selecting the user
  5. User Alice now has Database Administrator permissions. However, for Alice to monitor RDS databases, you must create a role called rds-monitoring-role. To do this, in the left pane, choose Roles, and then choose Create New Role.
  6. For the Role Name, type rds-monitoring-role to match the name that is specified in the Database Administrator policy. Choose Next Step.
    Screenshot of creating the role called rds-monitoring-role
  7. In the AWS Service Roles section, scroll down and choose Amazon RDS Role for Enhanced Monitoring.
    Screenshot of selecting "Amazon RDS Role for Enhanced Monitoring"
  8. Choose the AmazonRDSEnhancedMonitoringRole policy and then choose Select.
    Screenshot of choosing AmazonRDSEnhancedMonitoringRole
  9. After reviewing the role details, choose Create Role.

User Alice now has Database Administrator permissions and can now monitor RDS databases. To use other roles for the Database Administrator AWS managed policy, see the documentation.

To learn more about AWS managed policies for job functions, see the IAM documentation. If you have comments about this post, submit them in the “Comments” section below. If you have questions about these new policies, please start a new thread on the IAM forum.

– Joy

re:Invent 2016 Security and Compliance Sessions with Seats Remaining

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/reinvent-2016-security-and-compliance-sessions-with-seats-remaining/

AWS re:Invent 2016 banner

Some sessions are already full in the re:Invent 2016 Security & Compliance track and the re:Source Mini Con for Security Services. However, the following sessions have seats remaining. If you have already registered for re:Invent, we encourage you to reserve your seats in these sessions today.

We will update this page as these sessions reach capacity and any new repeat sessions are added.

Security & Compliance Track sessions with seats remaining

re:Source Mini Con for Security Services sessions with seats remaining

– Craig

Running Swift Web Applications with Amazon ECS

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/running-swift-web-applications-with-amazon-ecs/

This is a guest post from Asif Khan about how to run Swift applications on Amazon ECS.

—–

Swift is a general-purpose programming language built using a modern approach to safety, performance, and software design patterns. A goal for Swift is to be the best language for uses ranging from systems programming, to mobile and desktop applications, scaling up to cloud services. As a developer, I am thrilled with the possibility of a homogeneous application stack and being able to leverage the benefits of Swift both on the client and server side. My code becomes more concise, and is more tightly integrated to the iOS environment.

In this post, I provide a walkthrough on building a web application using Swift and deploying it to Amazon ECS with an Ubuntu Linux image and Amazon ECR.

Overview of container deployment

Swift provides an Ubuntu version of the compiler that you can use. You still need a web server, a container strategy, and an automated cluster management with automatic scaling for traffic peaks.

There are some decisions to make in your approach to deploy services to the cloud:

  • HTTP server
    Choose an HTTP server that supports Swift. I found Vapor to be the easiest. Vapor is a type-safe web framework for Swift 3.0 that works on iOS, macOS, and Ubuntu, and it makes deploying a Swift application very simple. Vapor comes with a CLI that helps you create new Vapor applications, generate Xcode projects and build them, and deploy your applications to Heroku or Docker. Another Swift web server is Perfect. In this post, I use Vapor as I found it easier to get started with.

Tip: Join the Vapor slack group; it is super helpful. I got answers on a long weekend which was super cool.

  • Container model
    Docker is an open-source technology that allows you to build, run, test, and deploy distributed applications inside software containers. It allows you to package a piece of software in a standardized unit for software development, containing everything the software needs to run: code, runtime, system tools, system libraries, etc. Docker enables you to quickly, reliably, and consistently deploy applications regardless of environment.
    In this post, you’ll use Docker, but if you prefer Heroku, Vapor is compatible with Heroku too.
  • Image repository
    After you choose Docker as the container deployment unit, you need to store your Docker image in a repository to automate the deployment at scale. Amazon ECR is a fully-managed Docker registry and you can employ AWS IAM policies to secure your repositories.
  • Cluster management solution
    Amazon ECS is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure.

With ECS, it is very easy to adopt containers as a building block for your applications (distributed or otherwise) by skipping the need for you to install, operate, and scale your own cluster infrastructure. Using Docker containers within ECS provides the flexibility to schedule long-running applications, services, and batch processes. ECS maintains application availability and allows you to scale containers.

To put it all together, you have your Swift web application running in an HTTP server (Vapor), deployed in containers (Docker) whose images are stored in a secure repository (ECR), with automated cluster management (ECS) to scale horizontally.

Prepare an AWS Account

  1. If you don’t already have an AWS account, create one at http://aws.amazon.com by following the on-screen instructions.
  2. Use the region selector in the navigation bar to choose the AWS Region where you want to deploy Swift web applications on AWS.
  3. Create a key pair in your preferred region.

Walkthrough

The following steps are required to set up your first web application written in Swift and deploy it to ECS:

  1. Download and launch an instance of the AWS CloudFormation template. The CloudFormation template installs Swift, Vapor, Docker, and the AWS CLI.
  2. SSH into the instance.
  3. Download the vapor example code
  4. Test the Vapor web application locally.
  5. Enhance the Vapor example code to include a new API.
  6. Push your code to a code repository
  7. Create a Docker image of your code.
  8. Push your image to Amazon ECR.
  9. Deploy your Swift web application to Amazon ECS.

Detailed steps

  1. Download the CloudFormation template and spin up an EC2 instance. The CloudFormation template has Swift, Vapor, Docker, and git installed and configured. To launch an instance, launch the CloudFormation template from here.
  2. SSH into your instance:
    ssh -i <your_key_pair.pem> <user_name>@<instance_public_dns>
  3. Download the Vapor example code – this code helps deploy the example you are using for your web application:
    git clone https://github.com/awslabs/ecs-swift-sample-app.git
  4. Test the Vapor application locally:
    1. Build a Vapor project:
      cd ~/ecs-swift-sample-app/example \
      vapor build
    2. Run the Vapor project:
      vapor run serve --port=8080
    3. Validate that server is running (in a new terminal window):
      ssh -i <your_key_pair.pem> <user_name>@<instance_public_dns> curl localhost:8080
  5. Enhance the Vapor code:
    1. Follow the guide to add a new route to the sample application: https://Vapor.readme.io/docs/hello-world
    2. Test your web application locally:
      vapor run serve --port=8080
      curl http://localhost:8080/hello
  6. Commit your changes and push this change to your GitHub repository:
    git add --all
    git commit -m "<commit message>"
    git push
  7. Build a new Docker image with your code:
    docker build -t swift-on-ecs \
    --build-arg SWIFT_VERSION=DEVELOPMENT-SNAPSHOT-2016-06-06-a \
    --build-arg REPO_CLONE_URL=<your_repo_clone_url> \
    ~/ecs-swift-sample-app/example
  8. Upload to ECR: Create an ECR repository and push the image following the steps in Getting Started with Amazon ECR.
  9. Create a ECS cluster and run tasks following the steps in Getting Started with Amazon ECS:
    1. Be sure to use the full registry/repository:tag naming for your ECR images when creating your task. For example, aws_account_id.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest.
    2. Ensure that you have port forwarding 8080 set up.
  10. You can now go to the container, get the public IP address, and try to access it to see the result.
    1. Open your running task and get the URL:
    2. Open the public URL in a browser:

Your first Swift web application is now running.

At this point, you can use ECS with Auto Scaling to scale your services and also monitor them using CloudWatch metrics and events.

Conclusion

If you want to leverage the benefits of Swift, you can use Vapor as the web container with Amazon ECS and Amazon ECR to deploy Swift web applications at scale and delegate the cluster management to Amazon ECS.

There are many interesting things you could do with Swift beyond this post. To learn more about Swift, see the additional Swift libraries and read the Swift documentation.

If you have questions or suggestions, please comment below.

In Case You Missed These: AWS Security Blog Posts from September and October

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/in-case-you-missed-these-aws-security-blog-posts-from-september-and-october/

In case you missed any AWS Security Blog posts from September and October, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from enabling multi-factor authentication on your AWS API calls to using Amazon CloudWatch Events to monitor application health.

October

October 30: Register for and Attend This November 10 Webinar—Introduction to Three AWS Security Services
As part of the AWS Webinar Series, AWS will present Introduction to Three AWS Security Services on Thursday, November 10. This webinar will start at 10:30 A.M. and end at 11:30 A.M. Pacific Time. AWS Solutions Architect Pierre Liddle shows how AWS Identity and Access Management (IAM), AWS Config Rules, and AWS CloudTrail can help you maintain control of your environment. In a live demo, Pierre shows you how to track changes, monitor compliance, and keep an audit record of API requests.

October 26: How to Enable MFA Protection on Your AWS API Calls
Multi-factor authentication (MFA) provides an additional layer of security for sensitive API calls, such as terminating Amazon EC2 instances or deleting important objects stored in an Amazon S3 bucket. In some cases, you may want to require users to authenticate with an MFA code before performing specific API requests, and by using AWS Identity and Access Management (IAM) policies, you can specify which API actions a user is allowed to access. In this blog post, I show how to enable an MFA device for an IAM user and author IAM policies that require MFA to perform certain API actions such as EC2’s TerminateInstances.

October 19: Reserved Seating Now Open for AWS re:Invent 2016 Sessions
Reserved seating is new to re:Invent this year and is now open! Some important things you should know about reserved seating:

  1. All sessions have a predetermined number of seats available and must be reserved ahead of time.
  2. If a session is full, you can join a waitlist.
  3. Waitlisted attendees will receive a seat in the order in which they were added to the waitlist and will be notified via email if and when a seat is reserved.
  4. Only one session can be reserved for any given time slot (in other words, you cannot double-book a time slot on your re:Invent calendar).
  5. Don’t be late! The minute the session begins, if you have not badged in, attendees waiting in line at the door might receive your seat.
  6. Waitlisting will not be supported onsite and will be turned off 7-14 days before the beginning of the conference.

October 17: How to Help Achieve Mobile App Transport Security (ATS) Compliance by Using Amazon CloudFront and AWS Certificate Manager
Web and application users and organizations have expressed a growing desire to conduct most of their HTTP communication securely by using HTTPS. At its 2016 Worldwide Developers Conference, Apple announced that starting in January 2017, apps submitted to its App Store will be required to support App Transport Security (ATS). ATS requires all connections to web services to use HTTPS and TLS version 1.2. In addition, Google has announced that starting in January 2017, new versions of its Chrome web browser will mark HTTP websites as being “not secure.” In this post, I show how you can generate Secure Sockets Layer (SSL) or Transport Layer Security (TLS) certificates by using AWS Certificate Manager (ACM), apply the certificates to your Amazon CloudFront distributions, and deliver your websites and APIs over HTTPS.

October 5: Meet AWS Security Team Members at Grace Hopper 2016
For those of you joining this year’s Grace Hopper Celebration of Women in Computing in Houston, you may already know the conference will have a number of security-specific sessions. A group of women from AWS Security will be at the conference, and we would love to meet you to talk about your cloud security and compliance questions. Are you a student, an IT security veteran, or an experienced techie looking to move into security? Make sure to find us to talk about career opportunities.

September

September 29: How to Create a Custom AMI with Encrypted Amazon EBS Snapshots and Share It with Other Accounts and Regions
An Amazon Machine Image (AMI) provides the information required to launch an instance (a virtual server) in your AWS environment. You can launch an instance from a public AMI, customize the instance to meet your security and business needs, and save configurations as a custom AMI. With the recent release of the ability to copy encrypted Amazon Elastic Block Store (Amazon EBS) snapshots between accounts, you now can create AMIs with encrypted snapshots by using AWS Key Management Service (KMS) and make your AMIs available to users across accounts and regions. This allows you to create your AMIs with required hardening and configurations, launch consistent instances globally based on the custom AMI, and increase performance and availability by distributing your workload while meeting your security and compliance requirements to protect your data.

September 19: 32 Security and Compliance Sessions Now Live in the re:Invent 2016 Session Catalog
AWS re:Invent 2016 begins November 28, and now, the live session catalog includes 32 security and compliance sessions. 19 of these sessions are in the Security & Compliance track and 13 are in the re:Source Mini Con for Security Services. All 32 session titles and abstracts are included below.

September 8: Automated Reasoning and Amazon s2n
In June 2015, AWS Chief Information Security Officer Stephen Schmidt introduced AWS’s new Open Source implementation of the SSL/TLS network encryption protocols, Amazon s2n. s2n is a library that has been designed to be small and fast, with the goal of providing you with network encryption that is more easily understood and fully auditable. In the 14 months since that announcement, development on s2n has continued, and we have merged more than 100 pull requests from 15 contributors on GitHub. Those active contributors include members of the Amazon S3, Amazon CloudFront, Elastic Load Balancing, AWS Cryptography Engineering, Kernel and OS, and Automated Reasoning teams, as well as 8 external, non-Amazon Open Source contributors.

September 6: IAM Service Last Accessed Data Now Available for the Asia Pacific (Mumbai) Region
In December, AWS Identity and Access Management (IAM) released service last accessed data, which helps you identify overly permissive policies attached to an IAM entity (a user, group, or role). Today, we have extended service last accessed data to support the recently launched Asia Pacific (Mumbai) Region. With this release, you can now view the date when an IAM entity last accessed an AWS service in this region. You can use this information to identify unnecessary permissions and update policies to remove access to unused services.

If you have questions about or issues with implementing the solutions in any of these posts, please start a new thread on the AWS IAM forum.

– Craig

AWS Config: Checking for Compliance with New Managed Rule Options

Post Syndicated from Shawn O'Connor original https://aws.amazon.com/blogs/devops/aws-config-checking-for-compliance-with-new-managed-rule-options/

Change is the inherent nature of the Cloud. New ideas can immediately take shape through the provisioning of resources, projects will iterate and evolve, and sometimes we must go back to the drawing board. Amazon Web Services accelerates this process by providing the ability to programmatically provision and de-provision resources. However, this freedom requires some responsibility on the part of the organization in implementing consistent business and security policies.

AWS Config enables AWS resource inventory and change management as well as Config Rules to confirm that resources are configured in compliance with policies that you define. New options are now available for AWS Config managed rules, providing additional flexibility with regards to the type of rules and policies that can be created.

  • AWS Config rules can now check that running instances are using approved Amazon Machine Images, or AMIs. You can specify a list of approved AMIs by ID or provide a tag to specify the list of AMI IDs.
  • The Required-Tag managed rule can now require up to 6 tags with optional values in a single rule. Previously, each rule accepted only a single tag/value combo.
  • Additionally, the Required-Tag managed rule now accepts a comma-separated list of values for each checked tag. This allows a resource to be compliant if its tag matches any one of the supplied values.

A Sample Policy

Let’s explore how these new rule options can be incorporated into a broader compliance policy. A typical company’s policy might state that:

  • All data must be encrypted at rest.
  • The AWS IAM password policy must meet the corporate standard.
  • Resources must be billed to the correct cost center.
  • We want to know who owns a resource in case there is an issue or question about the resource.
  • We want to identify whether a resource is a part of Dev, QA, Production, or Staging so that we can apply the correct SLAs and make sure the appropriate teams have access.
  • We need to identify any systems that handled privileged information such as Personally Identifiable Information (PII) or credit card data (PCI) as a part of our compliance requirements.
  • Our Ops team regularly provides hardened images with the latest patches and required software (e.g., security and/or configuration management agents). We want to ensure that all Linux instances are built with these images. We do regular reviews to ensure our images are up to date.

We see organizations implementing a variety of strategies like the one we have defined. Tagging is typically used to categorize and identify resources. We will be using tags and AWS Config managed rules to satisfy each of our policy requirements. In addition to the AWS Config managed rules, custom compliance rules can be developed using AWS Lambda functions.
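To make that last point concrete, the following is a minimal sketch of a custom rule handler written as an AWS Lambda function in Python. The tag it checks (AuditLevel) and the handler details are assumptions; it simply follows the general custom-rule pattern in which AWS Config invokes the function with a configuration item and the function reports a result back with PutEvaluations.

# Minimal sketch of a custom AWS Config rule handler (assumes the rule is
# scoped to EC2 instances and checks for the presence of an AuditLevel tag).
import json
import boto3

config = boto3.client("config")

def handler(event, context):
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]

    tags = item.get("tags") or {}
    compliance = "COMPLIANT" if "AuditLevel" in tags else "NON_COMPLIANT"

    # Report the evaluation result back to AWS Config.
    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": compliance,
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )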

Implementation

In the AWS Management Console, we have built five rules.

  • strong_password_policy – Checks that we have a strong password policy for AWS IAM users
  • only_encrypted_volumes – Checks that all attached EBS volumes are encrypted
  • approved_ami_by_id – Checks that approved AMI IDs have been used
  • approved_ami_by_tag – Checks that AMIs tagged as sec_approved have been used
  • ec2_required_tags – Checks that all required tags have been applied to EC2 instances (see the following table):

    Tag Name (Key)    Tag Value(s)                           Description
    Environment       Prod / Dev / QA / Staging              The environment in which the resource is deployed
    Owner             Joe, Steve, Anne                       Who is responsible for the resource?
    CostCenter        General / Ops / Partner / Marketing    Who is paying for the resource?
    LastReviewed      Date (e.g., 09272016)                  Last time this instance was reviewed for compliance
    AuditLevel        Normal / PII / PCI                     The required audit level for the instance

Strong Password Policy

AWS Config has pre-built rules to check against the configured IAM password policy. We have implemented this rule using the default configurations.

We can see that our IAM password policy is Compliant with our managed rule.


Approved AMIs

We have our hardened AMI from the Ops team. The image also includes encrypted EBS volumes. We have added a tag called sec_approved that we will use to test our image.


We will create two additional rules to check that EC2 instances are launched from our approved AMIs: one checks that our sec_approved tag is present on the AMI, and the other checks that the AMI ID (ami-1480c503) is included in a comma-separated list of approved images. Then we will launch two instances: one from a standard Amazon Linux AMI (instance i-019b0e790a67dd7f1) and one from our approved AMI (instance i-0caf40fcb4f48517c).


If we look at the details of the approved_amis_by_tag and approved_amis_by_id rules, we see that the instance that was launched with the unapproved image (i-019b0e790a67dd7f1) has been marked Noncompliant. Our hardened instance (i-0caf40fcb4f48517c) is Compliant.
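The same per-resource results can be retrieved programmatically. Here is a minimal boto3 sketch that lists each evaluated resource and its compliance status for the approved_ami_by_id rule; the output handling is illustrative only.

# Minimal sketch: list per-resource compliance for the approved-AMI rule.
import boto3

config = boto3.client("config")

response = config.get_compliance_details_by_config_rule(
    ConfigRuleName="approved_ami_by_id",
    ComplianceTypes=["COMPLIANT", "NON_COMPLIANT"],
)

for result in response["EvaluationResults"]:
    qualifier = result["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
    print(qualifier["ResourceId"], result["ComplianceType"])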


Encrypted EBS Volumes

In the EC2 Console we see the volumes attached to our instances. We can see that the EBS volumes created by our hardened AMI have been encrypted as configured. They are attached to instance i-0caf40fcb4f48517c.
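The same check can be made outside the console. Here is a minimal boto3 sketch that lists the volumes attached to the hardened instance and their encryption status; the instance ID is the one used in this walkthrough.

# Minimal sketch: report encryption status of volumes attached to an instance.
import boto3

ec2 = boto3.client("ec2")

response = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": ["i-0caf40fcb4f48517c"]}]
)

for volume in response["Volumes"]:
    status = "encrypted" if volume["Encrypted"] else "NOT encrypted"
    print(volume["VolumeId"], status)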


If we look at the details of the only_encrypted_volumes rule, we see that the instance that was launched with the unapproved image (i-019b0e790a67dd7f1) has been marked Noncompliant. Our hardened instance is Compliant.


Required Tags

Finally, let’s take a look at the ec2_required_tags rule. We have defined our required tags in our AWS Config rule. Tag values are optional; if a value (or list of values) is specified for a tag, it is enforced. Our hardened instance was configured with all of the required tags.
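For reference, here is a minimal sketch of how a rule like ec2_required_tags could be created with boto3, mirroring the tag table above. The tagNKey/tagNValue input parameter names and the specific values are assumptions for illustration; consult the REQUIRED_TAGS managed rule documentation for the exact parameter format.

# Minimal sketch: REQUIRED_TAGS managed rule mirroring the tag table above.
# Comma-separated values mean any one of the listed values satisfies the tag.
import json
import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ec2_required_tags",
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        "InputParameters": json.dumps({
            "tag1Key": "Environment", "tag1Value": "Prod,Dev,QA,Staging",
            "tag2Key": "Owner",       "tag2Value": "Joe,Steve,Anne",
            "tag3Key": "CostCenter",  "tag3Value": "General,Ops,Partner,Marketing",
            "tag4Key": "LastReviewed",  # any value accepted when no value is given
            "tag5Key": "AuditLevel",  "tag5Value": "Normal,PII,PCI",
        }),
    }
)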


We can see that our tagged instance is compliant with our Required Tags Rule as well.


How to Enable MFA Protection on Your AWS API Calls

Post Syndicated from Zaher Dannawi original https://aws.amazon.com/blogs/security/how-to-enable-mfa-protection-on-your-aws-api-calls/

Multi-factor authentication (MFA) provides an additional layer of security for sensitive API calls, such as terminating Amazon EC2 instances or deleting important objects stored in an Amazon S3 bucket. In some cases, you may want to require users to authenticate with an MFA code before performing specific API requests, and by using AWS Identity and Access Management (IAM) policies, you can specify which API actions a user is allowed to access. In this blog post, I show how to enable an MFA device for an IAM user and author IAM policies that require MFA to perform certain API actions such as EC2’s TerminateInstances.

Let’s say Alice, an AWS account administrator, wants to add another layer of protection over her users’ access to EC2. Alice wants to allow IAM users to perform RunInstances, DescribeInstances, and StopInstances actions. However, Alice also wants to restrict actions such as TerminateInstances to ensure that users can only perform such API calls if they authenticate with MFA. To accomplish this, Alice completes the following two-part process.

Part 1: Assign an approved authentication device to IAM users

  1. Get an MFA token. Alice can purchase a hardware MFA key fob from Gemalto, a third-party provider. Alternatively, she can install a virtual MFA app for no additional cost, which enables IAM users to use any OATH TOTP–compatible application on their smartphone, tablet, or computer.
  2. Activate the MFA token. Alice must then sign in to the IAM console and follow these steps to activate an MFA device for a user (note that Alice can delegate MFA device activation to her users by granting them access to provision and manage MFA devices):
    1. Click Users in the left pane of the IAM console.
    2. Select a user.
    3. On the Security Credentials tab, choose Manage MFA Device, as shown in the following screenshot.
      Screenshot of Manage MFA Device button
For Virtual MFA devices:
      1. Select A virtual MFA device and choose Next Step.
        Screenshot showing the selection of "A virtual MFA device"
      2. Open the virtual MFA app and choose the option to create a new account.
      3. Use the app to scan the generated QR code. Alternatively, you can select Show secret key for manual configuration (as shown the following screenshot), and then type the secret configuration key in the MFA application.
        Screenshot showing selection of "Show secret key for manual configuration"
      4. Finally, in the Authentication Code 1 box, type the one-time password that currently appears in the virtual MFA device. Wait up to 30 seconds for the device to generate a new one-time password, and then type that second one-time password in the Authentication Code 2 box. Choose Activate Virtual MFA.
For Hardware MFA devices:
      1. Select A hardware MFA device and choose Next Step.
        Screenshot of choosing "A hardware MFA device"
      2. Type the device serial number. The serial number is usually on the back of the device.
        Screenshot of the "Serial Number" box
      3. In the Authentication Code 1 box, type the six-digit number displayed by the MFA device. Wait up to 30 seconds for the device to generate a new one-time password, and then type that second one-time password in the Authentication Code 2 box. Choose Next Step. (For a scripted alternative to these console steps, see the sketch that follows.)
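The console steps above can also be scripted. The following is a minimal boto3 sketch of activating a virtual MFA device for an IAM user; the user name, device name, and the two authentication codes are placeholders, and in practice the seed (or QR code) returned by CreateVirtualMFADevice must first be loaded into the user's MFA app so that it can generate those codes.

# Minimal sketch: create and activate a virtual MFA device for an IAM user.
# User name, device name, and authentication codes are placeholders.
import boto3

iam = boto3.client("iam")

device = iam.create_virtual_mfa_device(VirtualMFADeviceName="alice-mfa")
serial = device["VirtualMFADevice"]["SerialNumber"]
# The Base32StringSeed (or QRCodePNG) in the response must be imported into
# the virtual MFA app before it can generate one-time passwords.

iam.enable_mfa_device(
    UserName="alice",
    SerialNumber=serial,
    AuthenticationCode1="123456",  # first code shown by the app
    AuthenticationCode2="654321",  # next code, after the app refreshes
)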

Part 2: Author an IAM policy to grant “Allow” access for MFA-authenticated users

Now, Alice has to author an IAM policy that requires MFA authentication to access EC2’s TerminateInstances API action and attach it to her users. To do that, Alice has to insert a condition key in the IAM policy statement. Alice can choose one of the following condition element keys:

  1. aws:MultiFactorAuthPresent

Alice can select this condition key to verify that the user did authenticate with MFA. This key is present only when the user authenticates with short-term credentials. The following example policy demonstrates how to use this condition key to require MFA authentication for users attempting to call the EC2 TerminateInstances action.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowActionsForEC2",
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances",
        "ec2:DescribeInstances",
        "ec2:StopInstances"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AllowActionsForEC2WhenMFAIsPresent",
      "Effect": "Allow",
      "Action": "ec2:TerminateInstances",
      "Resource": "*",
      "Condition": {
        "Bool": {"aws:MultiFactorAuthPresent": "true"}
      }
    }
  ]
}

  2. aws:MultiFactorAuthAge

Alice can select this condition key if she wants to grant access to users only within a specific time period after they authenticate with MFA. For instance, Alice can restrict access to EC2’s TerminateInstances API to users who authenticated with an MFA device within the past 300 seconds (5 minutes). Users with short-term credentials older than 300 seconds must reauthenticate to gain access to this API. The following example policy demonstrates how to use this condition key.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowActionsForEC2",
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances",
        "ec2:DescribeInstances",
        "ec2:StopInstances"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AllowActionsForEC2WhenMFAIsPresent",
      "Effect": "Allow",
      "Action": "ec2:TerminateInstances",
      "Resource": "*",
      "Condition": {
        "NumericLessThan": {"aws:MultiFactorAuthAge": "300"}
      }
    }
  ]
}
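Both condition keys evaluate against the credentials used for the request, so Alice's users must call TerminateInstances with temporary credentials obtained after authenticating with MFA. Here is a minimal boto3 sketch of obtaining and using such credentials; the MFA device ARN, token code, and instance ID are placeholders.

# Minimal sketch: obtain MFA-authenticated temporary credentials and use them.
# The MFA serial number and token code are placeholders.
import boto3

sts = boto3.client("sts")

creds = sts.get_session_token(
    SerialNumber="arn:aws:iam::111122223333:mfa/alice",  # user's MFA device ARN
    TokenCode="123456",                                  # current code from the device
    DurationSeconds=3600,
)["Credentials"]

# Requests made with this session carry aws:MultiFactorAuthPresent = true,
# and aws:MultiFactorAuthAge measures the seconds since MFA authentication.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
ec2.terminate_instances(InstanceIds=["i-0123456789abcdef0"])  # placeholder instance ID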

Summary

In this blog post, I showed how you can use MFA to protect access to AWS API actions and resources. I first showed how to assign a virtual or hardware MFA device to an IAM user and then how to author IAM policies with a condition element key that requires MFA authentication before granting access to AWS API actions. I covered two condition element keys: the MultiFactorAuthPresent condition key, which matches when a user authenticates with an MFA device, and the MultiFactorAuthAge condition key, which matches when a user authenticates with an MFA device within a given time interval.

For more information about MFA-protected API access, see Configuring MFA-Protected API Access. If you have a comment about this post, submit it in the “Comments” section below. If you have implementation questions, please start a new thread on the IAM forum.

– Zaher

AWS Week in Review – October 17, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-october-17-2016/

Wow, a lot is happening in AWS-land these days! Today’s post included submissions from several dozen internal and external contributors, along with material from my RSS feeds, my inbox, and other things that come my way. To join in the fun, create (or find) some awesome AWS-related content and submit a pull request!

Monday

October 17

Tuesday

October 18

Wednesday

October 19

Thursday

October 20

Friday

October 21

Saturday

October 22

Sunday

October 23

New & Notable Open Source

New SlideShare Presentations

Upcoming Events

New AWS Marketplace Listings

  • Application Development
    • Joomla 3.6.0 + Apache + MySQL + AMAZONLINUX AMI by MIRI Infotech Inc, sold by Miri Infotech.
    • LAMP 5 MariaDB and LAMP 7 MariaDB, sold by Jetware.
    • Secured Acquia Drupal on Windows 2008 R2, sold by Cognosys Inc.
    • Secured BugNet on Windows 2008 R2, sold by Cognosys Inc.
    • Secured CMS Gallery on Windows 2008 R2, sold by Cognosys Inc.
    • Secured Kooboo CMS on Windows 2008 R2, sold by Cognosys Inc.
    • Secured Lemoon on Windows 2008 R2, sold by Cognosys Inc.
    • Secured Magento on Windows 2008 R2, sold by Cognosys Inc.
    • Secured MyCV on Windows 2012 R2, sold by Cognosys Inc.
    • Secured nService on Windows 2012 R2, sold by Cognosys Inc.
    • Secured Orchard CMS on Windows 2008 R2, sold by Cognosys Inc.
  • Application Servers
    • Microsoft Dynamics NAV 2016 for Business, sold by Data Resolution.
    • Microsoft Dynamics GP 2015 for Business, sold by Data Resolution.
    • Microsoft Dynamics AX 2012 for Business, sold by Data Resolution.
    • Microsoft Dynamics SL 2015 for Business, sold by Data Resolution.
    • Redis 3.0, sold by Jetware.
  • Application Stacks
    • LAMP 5 Percona and LAMP 7 Percona, sold by Jetware.
    • MySQL 5.1, MySQL 5.6, and MySQL 5.7, sold by Jetware.
    • Percona Server for MySQL 5.7, sold by Jetware.
    • Perfect7 LAMP v1.1 Multi-PHP w/Security (HVM), sold by Archisoft.
    • Perfect7 LAMP v1.1 Multi-PHP Base (HVM), sold by Archisoft.
  • Content Management
    • DNN Platform 9 Sandbox – SQL 2016, IIS 8.5, .Net 4.6, W2K12R2, sold by Benjamin Hermann.
    • iBase 7, sold by iBase.
    • MediaWiki powered by Symetricore (Plus Edition), sold by Symetricore.
    • Secured CompositeC1 on Windows 2008 R2, sold by Cognosys Inc.
    • Secured Dot Net CMS on Windows 2008 R2, sold by Cognosys Inc.
    • Secured Gallery Server on Windows 2008 R2, sold by Cognosys Inc.
    • Secured Joomla on Windows 2008 R2, sold by Cognosys Inc.
    • Secured Mayando on Windows 2008 R2, sold by Cognosys Inc.
    • Secured phpBB on Windows 2008 R2, sold by Cognosys Inc.
    • Secured Wiki Asp.net on Windows 2008 R2, sold by Cognosys Inc.
    • SharePoint 2016 Enterprise bYOL with paid support, sold by Data Resolution.
    • WordPress Powered by AMIMOTO (Auto-Scaling ready), sold by DigitalCube Co. Ltd.
  • Databases
    • MariaDB 5.5, 10.0, and 10.1, sold by Jetware.
    • Redis 3.2, sold by Jetware.
  • eCommerce
    • Secured AspxCommerce on Windows 2008 R2, sold by Cognosys Inc.
    • Secured BeYourMarket on Windows 2008 R2, sold by Cognosys Inc.
    • Secured DashComerce on Windows 2008 R2, sold by Cognosys Inc.
    • Vikrio, sold by Vikrio.
  • Issue & Bug Tracking
    • Redmine 2.6 and Redmine 3.3, sold by Jetware.
  • Monitoring
    • Memcached 1.4, sold by Jetware.
  • Network Infrastructure
    • 500 Mbps Load Balancer with Commercial WAF Subscription, sold by KEMP Technologies.
  • Operating System
    • Ubuntu Desktop 16.04 LTS (HVM), sold by Netspectrum Inc.
  • Security
    • AlienVault USM (Unified Security Management) Anywhere, sold by AlienVault.
    • Armor Anywhere CORE, sold by Armor Defense.
    • Hillstone CloudEdge Virtual-Firewall Advanced Edition, sold by Hillstone Networks.
    • Negative SEO Monitoring, sold by SEO Defend.

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

IAM Service Last Accessed Data Now Available for the Asia Pacific (Mumbai) Region

Post Syndicated from Zaher Dannawi original https://aws.amazon.com/blogs/security/iam-service-last-accessed-data-now-available-for-the-asia-pacific-mumbai-region-2/

In December, AWS Identity and Access Management (IAM) released service last accessed data, which helps you identify overly permissive policies attached to an IAM entity (a user, group, or role). Today, we have extended service last accessed data to support the recently launched Asia Pacific (Mumbai) Region. With this release, you can now view the date when an IAM entity last accessed an AWS service in this region. You can use this information to identify unnecessary permissions and update policies to remove access to unused services.

The IAM console now shows service last accessed data in 11 regions: US East (N. Virginia), US West (Oregon), US West (N. California), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Mumbai), and South America (Sao Paulo).

Note: IAM began collecting service last accessed data in most regions on October 1, 2015. Information about AWS services accessed before this date is not included in service last accessed data. If you need historical access information about your IAM entities before this date, see the AWS CloudTrail documentation. Also, see Tracking Period Regional Differences to learn the start date of service last accessed data for supported regions.

For more information about IAM and service last accessed data, see Service Last Accessed Data. If you have a comment about service last accessed data, submit it below. If you have a question, please start a new thread on the IAM forum.

– Zaher