Tag Archives: AWS IAM

New! Set permission guardrails confidently by using IAM access advisor to analyze service-last-accessed information for accounts in your AWS organization

Post Syndicated from Ujjwal Pugalia original https://aws.amazon.com/blogs/security/set-permission-guardrails-using-iam-access-advisor-analyze-service-last-accessed-information-aws-organization/

You can use AWS Organizations to centrally govern and manage multiple accounts as you scale your AWS workloads. With AWS Organizations, central security administrators can use service control policies (SCPs) to establish permission guardrails that all IAM users and roles in the organization’s accounts adhere to. When teams and projects are just getting started, administrators may allow access to a broader range of AWS services to inspire innovation and agility. However, as developers and applications settle into common access patterns, administrators need to set permission guardrails to remove permissions for services that have not been or should not be accessed by their accounts. Whether you’re just getting started with SCPs or have existing SCPs, you can now use AWS Identity and Access Management (IAM) access advisor to help you restrict permissions confidently.

IAM access advisor uses data analysis to help you set permission guardrails confidently by providing you service-last-accessed information for accounts in your organization. By analyzing last-accessed information, you can determine the services not used by IAM users and roles. You can then implement permission guardrails using SCPs that restrict access to those services. For example, you can identify services not accessed in an organizational unit (OU) for the last 90 days, create an SCP that denies access to these services, and attach it to the OU to restrict access to all IAM users and roles across the accounts in the OU. You can view service-last-accessed information for your accounts, OUs, and your organization using the IAM console in the account you used to create your organization. You can access this information programmatically using IAM access advisor APIs with the AWS Command Line Interface (AWS CLI) or a programmatic client.
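For readers who prefer the programmatic route, here is a minimal boto3 sketch of generating and reading the organization-wide access report for an OU. It must run from the account you used to create your organization, and the entity path shown is a hypothetical placeholder.

import time
import boto3

iam = boto3.client("iam")

# Hypothetical entity path for an OU: o-<org-id>/r-<root-id>/ou-<ou-id>
entity_path = "o-exampleorgid/r-exampleroot/ou-exampleouid"

# Kick off report generation for the OU, then poll until the job completes.
job_id = iam.generate_organizations_access_report(EntityPath=entity_path)["JobId"]

while True:
    report = iam.get_organizations_access_report(JobId=job_id)
    if report["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(5)

# Print the services that no principal in the OU's accounts has accessed.
for detail in report.get("AccessDetails", []):
    if "LastAuthenticatedTime" not in detail:
        print(detail["ServiceNamespace"], "has not been accessed")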

In this post, I first review the service-last-accessed information provided by IAM access advisor using the IAM console. Next, I walk through an example to demonstrate how you can use this information to remove permissions for services not accessed by IAM users and roles within your production OU by creating an SCP.

Use IAM access advisor to view service-last-accessed information using the AWS Management Console

Access advisor provides an access report that displays a list of services and the last-accessed timestamps for when an IAM principal accessed each service. To view the access report in the console, sign in to the IAM console using the account you used to create your organization. Additionally, you need to enable SCPs on your organization root to view the access report. You can view the service-last-accessed information in two ways. First, you can use the Organization activity view to review the service-last-accessed information for an organizational entity such as an account or OU. Second, you can use the SCP view to review the service-last-accessed information for services allowed by existing SCPs attached to your organizational entities.

The Organization activity view lists your OUs and accounts. You can select an OU or account to view the services that the entity is allowed to access and the service-last-accessed information for those services. This tells you which services have not been accessed in an organizational entity. Using this information, you can remove permissions for these services by creating a new SCP and attaching it to the organizational entity, or by updating an existing SCP attached to the entity.

The SCP view lists all the SCPs in your organization. You can select an SCP to view the services allowed by the SCP and the service-last-accessed information for those services. The service-last-accessed information is the last-accessed timestamp across all the organizational entities that the SCP is attached to. This tells you which services the SCP allows but that have not been accessed. Using this information, you can refine your existing permission guardrails by removing permissions for services that your existing SCPs allow but that are never accessed.

Figure 1 shows an example of the access report for an OU. You can see the service-last-accessed information for all services that IAM users and roles can access in all the accounts in ProductionOU. You can see that services such as AWS Ground Station and Amazon GameLift have not been used in the last year. You can also see that Amazon DynamoDB was last accessed in the Application1 account 10 days ago.
 

Figure 1: An example access report for an OU

Now that I’ve described how to view service-last-accessed information, I will walk through an example.

Example: Restrict access to services not accessed in production by creating an SCP

For this example, assume ExampleCorp uses AWS Organizations to organize their development, test, and production environments into organizational units (OUs). Alice is a central security administrator responsible for managing the accounts in the production OU for ExampleCorp. She wants to ensure that her production OU, called ProductionOU, has permissions to only the services that are required to run existing workloads. Currently, Alice hasn’t set any permission guardrails on her production OU. I will show you how you can help Alice review the service-last-accessed information for her production OU and set a permission guardrail confidently using an SCP to restrict access to services not accessed by ExampleCorp developers and applications in production.

Prerequisites

  1. Ensure that the SCP policy type is enabled for the organization. If you haven’t enabled SCPs, you can enable them for your organization root by following the steps in Enabling and Disabling a Policy Type on a Root.
  2. Ensure that your IAM users or roles have the permissions required to view the access report. You can grant these permissions by attaching the IAMAccessAdvisorReadOnly managed policy (see the sketch after this list).
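If you manage permissions with code, here is a minimal boto3 sketch of the second prerequisite. The role name is hypothetical, and it assumes the policy is available under the standard AWS managed policy ARN.

import boto3

iam = boto3.client("iam")

# Hypothetical administrator role that needs to view the access report.
iam.attach_role_policy(
    RoleName="OrgSecurityAuditor",
    PolicyArn="arn:aws:iam::aws:policy/IAMAccessAdvisorReadOnly",
)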

How to review service-last-accessed information for ProductionOU in the IAM console

In this section, you’ll review the service-last-accessed information using IAM access advisor to determine the services that have not been accessed across all the accounts in ProductionOU.

  1. Start by signing in to the IAM console in the account that you used to create the organization.
  2. In the left navigation pane, under the AWS Organizations section, select the Organization activity view.

    Note: Enabling the SCP policy type does not set any permission guardrails for your organization unless you start attaching SCPs to accounts and OUs in your organization.

  3. In the Organization activity view, select ProductionOU from the organization structure displayed on the console so you can review the service-last-accessed information across all accounts in that OU.
     
    Figure 2: Select “ProductionOU” from the organizational structure

  4. Selecting ProductionOU opens the Details and activity tab, which displays the access report for this OU. In this example, I have no permission guardrail set on the ProductionOU, so the default FullAWSAccess SCP is attached, allowing the ProductionOU to have access to all services. The access report displays all AWS services along with their last-accessed timestamps across accounts in the OU.
     
    Figure 3: The service access report

  5. Review the access report for ProductionOU to determine services that have not been accessed across accounts in this OU. In this example, there are multiple accounts in ProductionOU. Based on the report, you can identify that AWS Ground Station and Amazon GameLift have not been used in the last 365 days. Using this information, you can confidently set a permission guardrail by creating and attaching a new SCP that removes permissions for these services from ProductionOU. Depending on your preference, you can use a different time period, such as 90 days or 6 months, to determine whether a service is unused.
     
    Figure 4: Amazon GameLift and AWS Ground Station are not accessed

Create and attach a new SCP to ProductionOU in the AWS Organizations console

In this section, you’ll use the access insights you gained from IAM access advisor to create and attach a new SCP to ProductionOU that removes permissions for AWS Ground Station and Amazon GameLift.

  1. In the AWS Organizations console, select the Policies tab, and then select Create policy.
  2. In the Create new policy window, give your policy a name and description that will help you quickly identify it. For this example, I use the following name and description.
    • Name: ProductionGuardrail
    • Description: Restricts permissions to services not accessed in ProductionOU.
  3. The policy editor provides you with an empty statement in the text editor to get started. Position your cursor inside the policy statement. The editor detects the content of the policy statement you selected, and allows you to add relevant Actions, Resources, and Conditions to it using the left panel.
     
    Figure 5: SCP editor tool

  4. Next, add the services you want to restrict. Using the left panel, select the Ground Station and GameLift services. Denying access to services with SCPs is a powerful action, so confirm that the services aren’t in use before you do so. From the service-last-accessed information I reviewed in step 5 of the previous section, I know these services haven’t been used for more than 365 days, so it is safe to remove access to them. In this example, I’m not adding any resource or condition to my policy statement.
     
    Figure 6: Add the services you want to restrict

  5. Next, use the Resource policy element, which allows you to specify the resources the statement applies to. In this example, I select All Resources as the resource type.
     
    Figure 7: Select resource type as “All Resources”

  6. Select the Create Policy button to create your policy. You can see the new policy in the Policies tab.
     
    Figure 8: The new policy on the “Policies” tab

  7. Finally, attach the policy to ProductionOU, where you want to apply the permission guardrail. (If you prefer to create and attach the SCP programmatically, see the sketch after these steps.)
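For teams that script their guardrails, here is a minimal boto3 sketch of the same create-and-attach flow. The OU ID is a hypothetical placeholder, and the groundstation and gamelift service prefixes are assumptions for illustration.

import json
import boto3

orgs = boto3.client("organizations")

# Deny the two services that the access report showed as unused.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["groundstation:*", "gamelift:*"],
            "Resource": "*",
        }
    ],
}

policy = orgs.create_policy(
    Name="ProductionGuardrail",
    Description="Restricts permissions to services not accessed in ProductionOU.",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Hypothetical OU ID for ProductionOU.
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-exampleouid",
)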

Alice can now review the service-last-accessed information for ProductionOU and confidently set a permission guardrail that grants her production accounts permissions to only the services required to run existing workloads.

Summary

In this post, I reviewed how IAM access advisor provides service-last-accessed information for accounts in your AWS organization. Then, I demonstrated how you can use the Organization activity view to review service-last-accessed information and set permission guardrails to restrict access to only the services that are required to run existing workloads. You can also retrieve service-last-accessed information programmatically. To learn more, visit the documentation for retrieving service-last-accessed information using APIs.

If you have comments about using IAM access advisor for your organization, submit them in the Comments section below. For questions related to reviewing service-last-accessed information through the console or programmatically, start a thread on the IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Ujjwal Pugalia

Ujjwal is the product manager for the console sign-in and sign-up experience at AWS. He enjoys working in the customer-centric environment at Amazon because it aligns with his prior experience building an enterprise marketplace. Outside of work, Ujjwal enjoys watching crime dramas on Netflix. He holds an MBA from Carnegie Mellon University (CMU) in Pittsburgh.

Working backward: From IAM policies and principal tags to standardized names and tags for your AWS resources

Post Syndicated from Michael Chan original https://aws.amazon.com/blogs/security/working-backward-from-iam-policies-and-principal-tags-to-standardized-names-and-tags-for-your-aws-resources/

When organizations first adopt AWS, they have to make many decisions that will lay the foundation for their future footprint in the cloud. Part of this includes making decisions about the number of AWS accounts you choose to operate, but another fundamental task is constructing practical access control policies so that your application teams can’t affect each other’s resources within the same account. With AWS Identity and Access Management (IAM), you can customize granular access control policies that are appropriate for your organization, helping you follow security best practices such as separation of duties and least privilege. As every organization is different, you’ll need to carefully consider what your cloud security policies are and how they relate to your cloud engineering teams. Things to consider include who should be authorized to perform which actions, how your teams operate with one another, and which IAM mechanisms are suitable for ensuring that only authorized access is allowed.

In this blog post, I’ll show you an approach that works backwards, starting with a set of customer requirements, then utilizing AWS features such as IAM conditions and principal tagging. Combined with an AWS resource naming and tagging strategy, this approach can help you meet your access control objectives. AWS recently enabled tags on IAM principals (users and roles), which allows you to create a single reusable policy that provides access based on the tags of the IAM principal. When you combine this feature with a standardized resource naming and tagging convention, you can craft a set of IAM roles and policies suitable for your organization.

AWS features used in this approach

To follow along, you should have a working knowledge of IAM and tagging, and familiarity with the following concepts: IAM policy conditions, the aws:PrincipalTag and aws:RequestTag global condition keys, tag-based authorization, and IAM permissions boundaries.

Introducing Example Corporation

To illustrate the strategies I discuss, I’ll refer to a fictitious customer throughout my post: Example Corporation is a large organization that wants to use their existing Microsoft Active Directory (AD) as their identity store, with Active Directory Federation Services (AD FS) as the means to federate into their AWS accounts. They also have multiple business projects, some of which will need their own AWS accounts, and others that will share AWS accounts due to the dependencies of the applications within those projects. Each project has multiple application teams who do not need to access each other’s AWS resources.

Example Corporation’s access control requirements

Example Corporation doesn’t always dedicate a single AWS account to one team or one environment. Sometimes, multiple project teams work within the same account, and sometimes they have more than one environment in an account. Figure 1 shows how the Website Marketing and Customer Marketing project teams (each of which has multiple application teams) share two AWS accounts: a development and staging AWS account and a production AWS account. Although production has a dedicated AWS account, Example Corporation has decided that a shared development and staging account is acceptable.
 

Figure 1: AWS accounts shared by Example Corp’s teams

The development and staging environments share an AWS account, and the two teams do work closely together. All projects within an account will be allowed access to the read-only metadata of other resources, such as EC2 instance names, tags, and IAM information. However, each project team wants to prevent their application resources from being modified by the other team’s members.

Initial decisions for supporting shared account access control

Example Corporation decides to continue using their existing identity federation solution for access to AWS, as the existing processes for handling joiners, movers, and leavers can be extended to manage identities within AWS. They will enable this via Security Assertion Markup Language (SAML) provided by AD FS to allow Example Corporation’s AD users to access AWS by assuming IAM roles. Initially, they will create three IAM roles (project administrator, application administrator, and application operator), with additional roles to come later.

The company knows they need to implement access controls through IAM, and they’ve created an initial list of AWS services (EC2, RDS, S3, SNS, and Amazon CloudWatch) to secure. Infrastructure as code (IaC) is a new concept at Example Corporation, so they want to keep initial IAM roles and policies as simple as possible. IAM principal tags will help them reuse standard policies across accounts. Principal tags are key-value pairs attached to an IAM user or role, and they can be referenced in policies through the aws:PrincipalTag global condition key. They can be used within a condition to ensure that a new resource is tagged on creation with a value that matches your principal. They can also be used to verify that an existing resource has a matching tag prior to allowing an action against that resource.
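To make the mechanics concrete, here is a minimal boto3 sketch that attaches the access and cost tags described later in this post to a role; the role name is hypothetical.

import boto3

iam = boto3.client("iam")

# Hypothetical application-operator role for the web project's nginx application.
iam.tag_role(
    RoleName="web-nginx-dev-operator",
    Tags=[
        {"Key": "access-project", "Value": "web"},
        {"Key": "access-application", "Value": "nginx"},
        {"Key": "access-environment", "Value": "dev"},
        {"Key": "cost-center", "Value": "123456"},
    ],
)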

Many, but not all, AWS services support tag-based authorization of AWS resources. For services that don’t support tag-based authorization, Example Corporation will enable access control by utilizing ARN paths with wildcards (ARN matching). The name of the resource and its ARN path will explicitly state which projects, applications, and operators have access to that resource. This will require the company to design and enforce a mandatory naming convention.

Please see the IAM user guide for an up-to-date list of resources that support tag-based authorization.

Using multiple tags to meet access control requirements

The web and marketing teams have settled on three common roles and have decided their access levels as follows:

  • Project administrator: Able to access and modify all resources for a specific project, including all the resources belonging to application teams under the project.
  • Application administrator: Able to access and modify only the resources owned by a particular application team.
  • Application operator: Able to access and modify only the resources owned by a specific application team that reside within a specific environment: development, staging, or production.

 

Figure 2: Example Corp's teams - administrators and operators with AWS access

As for the principal tags, there will be three unique tags named with the prefix access-, with tag values that differentiate the roles and their resources from other projects, applications, and environments.

Finally, because the AWS account is shared, Example Corporation needs to account for the service usage costs of the two teams. By adding a mandatory cost-center tag, they can determine the costs of the web team’s resources versus the marketing team’s resources in AWS Cost Explorer and the AWS Cost and Usage Report.
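As an aside, once cost-center is activated as a cost allocation tag, the per-team breakdown can also be pulled programmatically. A minimal boto3 sketch, with an arbitrary date range:

import boto3

ce = boto3.client("ce")

# Group monthly unblended cost by the cost-center cost allocation tag.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-05-01", "End": "2019-06-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "cost-center"}],
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])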

Below is an example of the web team’s tags.

IAM principal tags used for the website project administrator role:

Tag name           Tag value
access-project     web
cost-center        123456

Tags for the website application administrator role:

Tag name             Tag value
access-project       web
access-application   nginx
cost-center          123456

Tags for the website application operator role—specifically for developer access to the dev environment:

Tag name             Tag value
access-project       web
access-application   nginx
access-environment   dev
cost-center          123456

Access control for AWS services and resources that support tag-based authorization

Example Corporation now needs to write IAM policies for their targeted resources. They begin with EC2, as that will be their most widely used service. The IAM documentation for EC2 shows that most write actions (create, modify, delete) support tag-based authorization, allowing the principal to execute the action only if the resource’s tag matches a predefined value.

For example, the following policy statement will only allow EC2 instances to be started or stopped if the resource tag value matches the “web” project name:


{
    "Action":[
        "ec2:StartInstances",
        "ec2:StopInstances"
    ],
    "Resource":[
        "arn:aws:ec2:*:*:instance/*"
    ],
    "Effect":"Allow",
    "Condition":{
        "StringEquals":{
            "ec2:ResourceTag/access-project":"web"
        }
    }
}         

However, if Example Corporation uses a policy variable instead of hardcoding the project name, the company can reuse the policy by taking advantage of the aws:PrincipalTag condition key:


{
    "Action":[
        "ec2:StartInstances",
        "ec2:StopInstances"
    ],
    "Resource":[
        "arn:aws:ec2:*:*:instance/*"
    ],
    "Effect":"Allow",
    "Condition":{
        "StringEquals":{
            "ec2:ResourceTag/access-project":"${aws:PrincipalTag/access-project}"
        }
    }
}    

Without policy variables, every IAM policy for every project would need a unique value to control access to the resource. Because the text of every policy document would be different, Example Corporation wouldn’t be able to reuse policies from one account to another or from one environment to another. Variables allow them to deploy the same policy file to all of their accounts, while allowing the effect of the policy to differ based on the tags that are used in each account.

As a result, Example Corporation will base the right to manipulate resources like EC2 on resource tags as much as possible. It is important, then, for their teams to tag each resource at the time of creation, if the resource supports it. Untagged resources won’t be manageable by these roles, while properly tagged resources will be. The company will use the aws:RequestTag IAM condition key to ensure that the required access tags and cost allocation tags are assigned at the time of EC2 creation. The IAM policy associated with the application-operator role will therefore be:


{       
    "Sid": "AllowEC2ResourceCreationWithRequiredTags",
    "Action": [
        "ec2:CreateVolume",
        "ec2:RunInstances"
    ],      
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],      
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": "${aws:PrincipalTag/access-environment}",
            "aws:RequestTag/cost-center": "${aws:PrincipalTag/cost-center}"
        }
    }
},
{       
    "Sid": "AllowCreateTagsIfRequestingValidTags",
    "Action": [
        "ec2:CreateTags"
    ],
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": "${aws:PrincipalTag/access-environment}",
            "ec2:CreateAction": "RunInstances"
        }
    }
}

If someone tries to create an EC2 instance without setting proper tags, the RunInstances API call will fail. The application-administrator policy will be similar, with the added ability to create a resource in any environment:


{       
    "Sid": "AllowEC2ResourceCreationWithRequiredTags",
    "Action": [
        "ec2:CreateVolume",
        "ec2:RunInstances"
    ],      
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],      
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-zone": [ "dev", "stg", "prd" ],   
            "aws:RequestTag/cost-center": "${aws:PrincipalTag/cost-center}"
        }
    }
},
{       
    "Sid": "AllowCreateTagsIfRequestingValidTags",
    "Action": [
        "ec2:CreateTags"
    ],
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ],
            "ec2:CreateAction": "RunInstances"  
        }
    }
}    

And finally, the project-administrator policy will have the most access. Note that even though this policy is for a project administrator, the user is still limited to modifying resources only within the three environments. In addition, to ensure that all resources have the required access-application tag, Example Corporation has added a Null condition that requires the tag to be present in the request:


{       
    "Sid": "AllowEC2ResourceCreationWithRequiredTags",
    "Action": [
        "ec2:CreateVolume",
        "ec2:RunInstances"
    ],      
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],      
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ],
            "aws:RequestTag/cost-center": "${aws:PrincipalTag/cost-center}"
        },
        "Null": {
            "aws:RequestTag/access-application": false
        }
    }
},
{       
    "Sid": "AllowCreateTagsIfRequestingValidTags",
    "Action": [
        "ec2:CreateTags"
    ],
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ],
            "ec2:CreateAction": "RunInstances"  
        }
    }
}

Access control for AWS services and resources without tag-based authorization

Some services don’t support tag-based authorization. In those cases, Example Corporation will use ARN pattern matching. Many AWS resources use ARNs that contain a user-created name. Therefore, the company’s proposal is to name resources following a naming convention. A name will look like: [project]-[application]-[environment]-myresourcename. For resources that are globally unique, such as S3, Example Corporation additionally requires its abbreviated name, “exco,” to be at the beginning of the resource so as to avoid a naming collision with another corporation’s buckets:


arn:aws:s3:::exco-web-nginx-dev-staticassets

To enforce this naming convention, they craft a reusable IAM policy that ensures that only intended users with matching access-project, access-application, and access-environment tag values can modify their resources. In addition, using * wildcard matches, they are able to allow for custom resource name suffixes such as staticassets in the above example. Using an Amazon SNS topic as an example, a snippet of the IAM policy associated with the application-operator role will look like this:


{       
    "Sid": "AllowSNSListAccess",
    "Effect": "Allow",
    "Action": [
        "sns:ListTopics",
        "sns:ListSubscriptions*",
        ...
    ],      
    "Resource": "*"
},
{       
    "Sid": "AllowSNSAccessBasedOnArnMatching",
    "Effect": "Allow",
    "Action": [
        "sns:CreateTopic",
        "sns:DeleteTopic",
        ...
    ],      
    "Resource": [
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-${aws:PrincipalTag/access-environment}-*"
    ]
} 

And here’s an IAM policy for an application-admin:


{       
    "Sid": "AllowSNSListAccess",
    "Effect": "Allow",
    "Action": [
        "sns:ListTopics",
        "sns:ListSubscriptions*",
        ...
    ],      
    "Resource": "*"
},
{       
    "Sid": "AllowSNSAccessBasedOnArnMatching",
    "Effect": "Allow",
    "Action": [
        "sns:CreateTopic",
        "sns:DeleteTopic",
        ...
    ],            
    "Resource": [
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-dev-*",
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-stg-*",
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-prd-*"
    ]
}

And finally, here’s the IAM policy for a project-admin:


{       
    "Sid": "AllowSNSListAccess",
    "Effect": "Allow",
    "Action": [
        "sns:ListTopics",
        "sns:ListSubscriptions*",
        ...
    ],      
    "Resource": "*"
},
{       
    "Sid": "AllowSNSAccessBasedOnArnMatching",
    "Effect": "Allow",
    "Action": [
        "sns:*" 
    ],      
    "Resource": [
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-*"
    ]
}

The above policies have two caveats, however. First, they require that the principal tags have values that do not include a hyphen, as it is used as a delimiter according to Example Corporation’s new tag-based convention for access control. In addition, a forward slash cannot be used, as it is in use within ARNs by many AWS resources, such as S3 buckets:


arn:aws:s3:::awsexamplebucket/exampleobject.png

It is important that the company doesn’t let users create resources with disallowed or invalid tags. The following snippet from the application admin’s permissions boundary policy uses a condition to permit IAM users and roles to be tagged, but only if the tag values are valid. Please note that these are just snippets of the boundary policy for the sake of illustration:


{       
    "Sid": "AllowIamCreateTagsOnUserOrRole",
    "Action": [
        "iam:TagUser",
        "iam:TagRole"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ]
        },      
        "StringNotLike": {
            "aws:RequestTag/access-project": [ "*-*", "*/*" ],
            "aws:RequestTag/access-application": [ "*-*", "*/*" ]            
        }       
    },      
    "Resource": [
        "arn:aws:iam::*:user/${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-*",
        "arn:aws:iam::*:role/${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-*"
    ]
}

And likewise, this permissions boundary policy attached to the project admin will do the same:


{       
    "Sid": "AllowIamCreateTagsOnUserOrRole",
    "Action": [
        "iam:TagUser",
        "iam:TagRole"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ]
        },      
        "StringNotLike": {
            "aws:RequestTag/access-project": [ "*-*", "*/*" ],
            "aws:RequestTag/access-application": [ "*-*", "*/*" ]            
        }       
    },      
    "Resource": [
        "arn:aws:iam::*:user/${aws:PrincipalTag/access-project}-*",
        "arn:aws:iam::*:role/${aws:PrincipalTag/access-project}-*"
    ]
}

Note that the above boundary policies can also be crafted using allow statements and multiple explicit deny statements.

Example Corporation’s resource naming convention requirements

As shown in the above examples, Example Corporation has given project teams the ability to create resources with name-based access control for services that currently do not support tag-based authorization (such as SQS and S3). Through the use of wildcards, teams can still provide custom names to their resources to differentiate from other resources created within the same team.

AWS resources have various limits on the structure and composition of names, so the company restricts the character length on access tags. For example, Amazon ElastiCache cluster names must be 20 alphanumeric characters or less, including hyphens. Most AWS resources have higher character limits, but Example Corporation limits their [project]-[application]-[environment] prefix to a 3-character project ID, 5-character application ID, and 3-character maximum environment name to satisfy their requirements, as this will equal a total of 14 characters (for example, web-nginx-prd-), which leaves 6 characters remaining for the user-specified cluster name.

Summary of Key Decisions

  • Services that support tag-based authorization (TBA) must have resources that follow a tagging convention for access control. Tagging on resource creation will be enforced where possible.
  • Services that do not support TBA must have resources that follow a naming convention. The cost center tag will still be required and will be applied after resource creation.
  • Services that do not support TBA, and cannot have user-specified names in their ARN (less common), will be addressed on a case-by-case basis. They will either need to allow access for all projects and application teams sharing the same account, or allow access via a custom IAM policy created on a case-by-case basis so that only the desired team can access the resource. Each IAM role should leave a few unused slots short of the maximum number of policies allowed per role in order to accommodate custom policies.
  • It is acceptable to allow basic List* and Describe* IAM permissions for AWS resources for all users who log in to the account, as the company’s project teams work closely together.
  • IAM user and role names created by project and application admins must adhere to the approved resource naming conventions. Admins themselves will have a permissions boundary policy applied to their roles. This policy, in turn, will require that all users and roles the admins create have a permissions boundary policy. This is especially important for roles associated with resources that can potentially create or modify IAM resources, such as EC2 and Lambda.
  • Active Directory users who need access to AWS resources must assume different IAM roles in order to utilize the different levels of access that the project admin, application admin, and application operator each provide. Users must also assume a different role if they need access to a different project. This is because each role’s tag has a single value. In this scheme, a single role cannot be assigned to multiple projects or application teams.

Conclusion

Example Corporation was able to allow their project teams to share the same AWS account while still limiting access to a majority of the account’s AWS resources. Through the use of IAM principal tagging, combined with a resource naming and tagging convention, they created a reusable set of IAM policies that restricted access not only between project and application admins and users, but also between development, staging, and production users.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, please start a new thread on the IAM forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Michael Chan

Michael is a Professional Services Consultant who has assisted commercial and Federal customers with their journey to AWS. He enjoys understanding customer problems and working backwards to provide practical solutions.

Setting permissions to enable accounts for upcoming AWS Regions

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/setting-permissions-to-enable-accounts-for-upcoming-aws-regions/

The AWS Cloud spans 61 Availability Zones within 20 geographic regions around the world, and has announced plans to expand to 12 more Availability Zones and four more Regions: Hong Kong, Bahrain, Cape Town, and Milan. Customers have told us that they want an easier way to control the Regions where their AWS accounts operate. Based on this feedback, AWS is changing the default behavior for these four and all future Regions so customers will opt in the accounts they want to operate in each new Region. For new AWS Regions, Identity and Access Management (IAM) resources such as users and roles will only be propagated to the Regions that you enable. When the next Region launches, you can enable this Region for your account using the AWS Regions setting under My Account in the AWS Management Console. You will need to enable a new Region for your account before you can create and manage resources in that Region. At this time, there are no changes to existing AWS Regions.

We recommend that you review who in your account will have access to enable and disable AWS Regions. Additionally, you can prepare for this change by setting permissions so that only approved account administrators can enable and disable AWS Regions. Starting today, you can use IAM permissions policies to control which IAM principals (users and roles) can perform these actions.

In this post, I describe the new account permissions for enabling and disabling new AWS Regions. I also describe the updates we’ve made to deny these permissions in the AWS-managed PowerUserAccess policy that many customers use to restrict access to administrative actions. For customers who use custom policies to manage administrative access, I show how to secure access to enable and disable new AWS Regions using IAM permissions policies and Service Control Policies in AWS Organizations. Finally, I explain the compatibility of Security Token Service (STS) session tokens with Regions.

IAM Permissions to enable and disable new AWS Regions for your account

To control access to enable and disable new AWS Regions for your account, you can set IAM permissions using two new account actions. By default, IAM denies access to new actions unless you have explicitly allowed these permissions in an existing policy. You can use IAM permissions policies to allow or deny IAM principals in your account the ability to enable and disable AWS Regions. The new actions are:

  • account:EnableRegion: Allows you to opt in an account to a new AWS Region (for Regions launched after March 20, 2019). This action propagates your IAM resources, such as users and roles, to the Region.
  • account:DisableRegion: Allows you to opt out an account from a new AWS Region (for Regions launched after March 20, 2019). This action removes your IAM resources, such as users and roles, from the Region.

When granting permissions using IAM policies, some administrators may have granted full access to AWS services except for administrative services such as IAM and Organizations. These IAM principals will automatically get access to the new administrative actions in your account to enable and disable AWS Regions. If you prefer not to grant these principals the account permissions to enable or disable AWS Regions, we recommend that you add a statement to your policies that denies access to account permissions. To do this, you can add a deny statement for account:*. As new Regions launch, you will be able to specify the Regions where these permissions are granted or denied.

At this time, the account actions to enable and disable AWS Regions apply to all upcoming AWS Regions launched after March 20, 2019. To learn more about managing access to existing AWS Regions, review my post, Easier way to control access to AWS regions using IAM policies.

Updates to AWS managed PowerUserAccess Policy

If you’re using the AWS managed PowerUserAccess policy to grant permissions to AWS services without granting access to administrative actions for IAM and Organizations, we have updated this policy as shown below to exclude access to account actions to enable and disable new AWS Regions. You do not need to take further action to restrict these actions for any IAM principals for which this policy applies. We updated the first policy statement, which now allows access to all existing and future AWS service actions except for IAM, AWS Organizations, and account. We also updated the second policy statement to allow the read-only action for listing Regions. The rest of the policy remains unchanged.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "NotAction": [
                "iam:*",
                "organizations:*",
                "account:*"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:CreateServiceLinkedRole",
                "iam:DeleteServiceLinkedRole",
                "iam:ListRoles",
                "organizations:DescribeOrganization",
                "account:ListRegions"
            ],
            "Resource": "*"
        }
    ]
}

Restrict Region permissions across multiple accounts using Service Control Policies in AWS Organizations

You can also centrally restrict access to enable and disable Regions for all principals across all accounts in AWS Organizations using Service Control Policies (SCPs). You would use SCPs to restrict this access if you do not anticipate using new Regions. SCPs enable administrators to set permission guardrails that apply to accounts in your organization or an organization unit. To learn more about SCPs and how to create and attach them, read About Service Control Policies.

Next, I show how to restrict the Region enable and disable actions for accounts in an AWS organization using an SCP. In the policy below, I use the Effect element of the policy statement to explicitly deny these actions, and in the Action element I add the new account:EnableRegion and account:DisableRegion permissions.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "account:EnableRegion",
                "account:DisableRegion"
            ],
            "Resource": "*"
        }
    ]
}

Once you create the policy, you can attach this policy to the root of your organization. This will restrict permissions across all accounts in your organization.

Check if users have permissions to enable or disable new AWS Regions in your account

You can use the IAM Policy Simulator to check if any IAM principal in your account has access to the new account actions for enabling and disabling Regions. The simulator evaluates the policies that you choose for a user or role and determines the effective permissions for each of the actions that you specify. Learn more about using the IAM Policy Simulator.
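You can run the same check programmatically with the policy simulation API. A minimal boto3 sketch, using a hypothetical role ARN and account ID:

import boto3

iam = boto3.client("iam")

# Hypothetical principal to evaluate.
role_arn = "arn:aws:iam::111122223333:role/ExampleAdminRole"

response = iam.simulate_principal_policy(
    PolicySourceArn=role_arn,
    ActionNames=["account:EnableRegion", "account:DisableRegion"],
)

for result in response["EvaluationResults"]:
    print(result["EvalActionName"], result["EvalDecision"])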

Region compatibility of AWS STS session tokens

For new AWS Regions, we’re also changing region compatibility for session tokens from the AWS Security Token Service (STS) global endpoint. As a best practice, we recommend using the regional STS endpoints to reduce latency. If you’re using regional STS endpoints or don’t plan to operate in new AWS Regions, then the following change doesn’t apply to you and no action is required.

If you’re using the global STS endpoint (https://sts.amazonaws.com) for session tokens and plan to operate in new AWS Regions, the session token size is going to increase. This may impact functionality if you store session tokens in any of your systems. To ensure your systems work with this change, we recommend that you update your existing systems to use regional STS endpoints using the AWS SDK.
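For reference, here is a minimal boto3 sketch of switching from the global endpoint to a regional STS endpoint. The Region and role ARN are hypothetical placeholders.

import boto3

# Regional STS client; calls are routed to the us-west-2 STS endpoint.
sts = boto3.client(
    "sts",
    region_name="us-west-2",
    endpoint_url="https://sts.us-west-2.amazonaws.com",
)

# Session tokens from regional endpoints are valid in all Regions enabled
# for the account, including Regions you opt in to later.
credentials = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ExampleRole",  # hypothetical
    RoleSessionName="regional-endpoint-example",
)["Credentials"]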

Summary

AWS is changing the default behavior for all new Regions going forward. For new AWS Regions, you will opt in to enable your account to operate in those Regions. This makes it easier for you to select the Regions where you can create and manage AWS resources. To prepare for upcoming Region launches, we recommend that you validate the capability to enable and disable AWS Regions to ensure that only approved IAM principals can enable and disable AWS Regions for your account.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the AWS forums.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

The author

Sulay Shah

Sulay is the product manager for the Identity and Access Management service at AWS. He strongly believes in the customer-first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from North Carolina State University.

How to rotate Amazon DocumentDB and Amazon Redshift credentials in AWS Secrets Manager

Post Syndicated from Apurv Awasthi original https://aws.amazon.com/blogs/security/how-to-rotate-amazon-documentdb-and-amazon-redshift-credentials-in-aws-secrets-manager/

Using temporary credentials is an AWS Identity and Access Management (IAM) best practice. Even Dilbert is learning to set up temporary credentials. Today, AWS Secrets Manager made it easier to follow this best practice by launching support for rotating credentials for Amazon DocumentDB and Amazon Redshift automatically. Now, with a few clicks, you can configure Secrets Manager to rotate these credentials automatically, turning a typical, long-term credential into a temporary credential.

In this post, I summarize the key features of AWS Secrets Manager. Then, I show you how to store a database credential for an Amazon DocumentDB cluster and how your applications can access this secret. Finally, I show you how to configure AWS Secrets Manager to rotate this secret automatically.

Key features of Secrets Manager

Key features of Secrets Manager include the ability to:

  • Rotate secrets safely. You can configure Secrets Manager to rotate secrets automatically without disrupting your applications, turning long-term secrets into temporary secrets. Secrets Manager natively supports rotating secrets for all Amazon database services—Amazon RDS, Amazon DocumentDB, and Amazon Redshift—that require a user name and password. You can extend Secrets Manager to meet your custom rotation requirements by creating an AWS Lambda function to rotate other types of secrets.
  • Manage access with fine-grained policies. You can store all your secrets centrally and control access to these securely using fine-grained AWS Identity and Access Management (IAM) policies and resource-based policies. You can also tag secrets to help you discover, organize, and control access to secrets used throughout your organization.
  • Audit and monitor secrets centrally. Secrets Manager integrates with AWS logging and monitoring services to enable you to meet your security and compliance requirements. For example, you can audit AWS CloudTrail logs to see when Secrets Manager rotated a secret, or configure Amazon CloudWatch Events to alert you when an administrator deletes a secret.
  • Pay as you go. Pay for the secrets you store in Secrets Manager and for the use of these secrets; there are no long-term contracts or licensing fees.
  • Compliance. You can use AWS Secrets Manager to manage secrets for workloads that are subject to U.S. Health Insurance Portability and Accountability Act (HIPAA), Payment Card Industry Data Security Standard (PCI-DSS), and ISO/IEC 27001, ISO/IEC 27017, ISO/IEC 27018, or ISO 9001.

Phase 1: Store a secret in Secrets Manager

Now that you’re familiar with the key features, I’ll show you how to store the credential for a DocumentDB cluster. To demonstrate how to retrieve and use the secret, I use a Python application running on Amazon EC2 that requires this database credential to access the DocumentDB cluster. Finally, I show how to configure Secrets Manager to rotate this database credential automatically.

  1. In the Secrets Manager console, select Store a new secret.
     
    Figure 1: Select “Store a new secret”

  2. Next, select Credentials for DocumentDB database. For this example, I store the credentials for the database masteruser. I start by securing the masteruser because it’s the most powerful database credential and has full access over the database.
     
    Figure 2: Select “Credentials for DocumentDB database”

    Note: To follow along, you need the AWSSecretsManagerReadWriteAccess managed policy because this policy grants permissions to store secrets in Secrets Manager. Read the AWS Secrets Manager Documentation for more information about the minimum IAM permissions required to store a secret.

  3. By default, Secrets Manager creates a unique encryption key for each AWS Region and AWS account where you use Secrets Manager. I chose to encrypt this secret with the default encryption key.
     
    Figure 3: Select the default or your CMK

  4. Next, I view the list of DocumentDB clusters in my account and select the database this credential accesses. For this example, I select the DB instance documentdb-instance, and then select Next.
     
    Figure 4: Select the instance you created

  5. In this step, specify values for Secret Name and Description. Based on where you will use this secret, give it a hierarchical name, such as Applications/MyApp/Documentdb-instance, and then select Next.
     
    Figure 5: Provide a name and description

  6. For the next step, I keep the default Disable automatic rotation setting because, in this example, the application that uses the secret runs on Amazon EC2 and hasn’t yet been updated to retrieve secrets through the Secrets Manager APIs. I’ll enable rotation after I’ve updated my application (see Phase 2 below). Select Next.
     
    Figure 6: Choose to either enable or disable automatic rotation

    Note: If you’re storing a secret that you’re not using in your application, select Enable automatic rotation. See the AWS Secrets Manager getting started guide on rotation for details.

  7. Review the information on the next screen and, if everything looks correct, select Store. You’ve now successfully stored a secret in Secrets Manager. (To create the same secret programmatically, see the sketch after these steps.)
  8. Next, select See sample code in Python.
     
    Figure 7: Select the “See sample code” button

  9. Finally, take note of the code samples provided. You will use this code to update your application to retrieve the secret using Secrets Manager APIs.
     
    Figure 8: Copy the code sample for use in your application
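If you’d rather store the secret programmatically than click through the console, here is a minimal boto3 sketch. The connection details are placeholders, and the key names in the secret value are illustrative rather than the exact structure the console-managed rotation function expects.

import json
import boto3

secretsmanager = boto3.client("secretsmanager", region_name="us-west-2")

# Placeholder credential material; in practice, pull these values from a secure source.
secret_value = {
    "username": "masteruser",
    "password": "EXAMPLE-PASSWORD",
    "host": "documentdb-instance.example.us-west-2.docdb.amazonaws.com",
    "port": 27017,
}

secretsmanager.create_secret(
    Name="Applications/MyApp/Documentdb-instance",
    Description="Master credential for the documentdb-instance cluster",
    SecretString=json.dumps(secret_value),
)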

Phase 2: Update an application to retrieve a secret from Secrets Manager

Now that you’ve stored the secret in Secrets Manager, you can update your application to retrieve the database credential from Secrets Manager instead of hard-coding this information in a configuration file or source code. For this example, I show how to configure a Python application to retrieve this secret from Secrets Manager.

  1. I connect to my Amazon EC2 instance via Secure Shell (SSH).
  2. Previously, I configured my application to retrieve the database user name and password from a configuration file. Below is the source code for my application.
    
        import DocumentDB
        import config
        
        def no_secrets_manager_sample():
        
            # Get the user name, password, and database connection information from a config file.
            database = config.database
            user_name = config.user_name
            password = config.password
        
            # Use the user name, password, and database connection information to connect to the database
            db = DocumentDB.connect(database.endpoint, user_name, password, database.db_name, database.port)
        

  3. I use the sample code from Phase 1 above and update my application to retrieve the user name and password from Secrets Manager. This code sets up the client, then retrieves and decrypts the secret Applications/MyApp/Documentdb-instance. I’ve added comments to the code to make the code easier to understand.
    
        # Use this code snippet in your app.
        # If you need more information about configurations or implementing the sample code, visit the AWS docs:   
        # https://aws.amazon.com/developers/getting-started/python/
        
        import boto3
        import base64
        from botocore.exceptions import ClientError
        
        
        def get_secret():
        
            secret_name = "Applications/MyApp/Documentdb-instance"
            region_name = "us-west-2"
        
            # Create a Secrets Manager client
            session = boto3.session.Session()
            client = session.client(
                service_name='secretsmanager',
                region_name=region_name
            )
        
            # In this sample we only handle the specific exceptions for the 'GetSecretValue' API.
            # See https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html
            # We rethrow the exception by default.
        
            try:
                get_secret_value_response = client.get_secret_value(
                    SecretId=secret_name
                )
            except ClientError as e:
                if e.response['Error']['Code'] == 'DecryptionFailureException':
                    # Secrets Manager can't decrypt the protected secret text using the provided KMS key.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'InternalServiceErrorException':
                    # An error occurred on the server side.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'InvalidParameterException':
                    # You provided an invalid value for a parameter.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'InvalidRequestException':
                    # You provided a parameter value that is not valid for the current state of the resource.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'ResourceNotFoundException':
                    # We can't find the resource that you asked for.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
            else:
                # Decrypts secret using the associated KMS CMK.
                # Depending on whether the secret is a string or binary, one of these fields will be populated.
                if 'SecretString' in get_secret_value_response:
                    secret = get_secret_value_response['SecretString']
                else:
                    decoded_binary_secret = base64.b64decode(get_secret_value_response['SecretBinary'])
                    
            # Your code goes here.                          
        

  4. Applications require permissions to access Secrets Manager. My application runs on Amazon EC2 and uses an IAM role to obtain access to AWS services. I will attach the following policy to my IAM role. This policy uses the GetSecretValue action to grant my application permissions to read a secret from Secrets Manager. This policy also uses the Resource element to limit my application to read only the Applications/MyApp/Documentdb-instance secret from Secrets Manager. You can visit the AWS Secrets Manager documentation to understand the minimum IAM permissions required to retrieve a secret.
    
        {
          "Version": "2012-10-17",
          "Statement": {
            "Sid": "RetrieveDbCredentialFromSecretsManager",
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "arn:aws:secretsmanager:::secret:Applications/MyApp/Documentdb-instance"
          }
        }
        

Phase 3: Enable rotation for your secret

Rotating secrets regularly is a security best practice. Secrets Manager makes it easier to follow this security best practice by offering built-in integrations and supporting extensibility with Lambda. When you enable rotation, Secrets Manager creates a Lambda function and attaches an IAM role to this function to execute rotations on a schedule you define.
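If you prefer to configure rotation programmatically instead of through the console, the equivalent call is available in the AWS SDKs. The following is a minimal sketch using boto3; the rotation Lambda ARN is a placeholder that you would replace with the ARN of your own rotation function (for example, the one Secrets Manager creates for you in the steps below).

    import boto3

    secretsmanager = boto3.client('secretsmanager', region_name='us-west-2')

    # Enable a 30-day rotation schedule for the DocumentDB secret.
    # The Lambda ARN is a placeholder; use the ARN of your rotation function.
    secretsmanager.rotate_secret(
        SecretId='Applications/MyApp/Documentdb-instance',
        RotationLambdaARN='arn:aws:lambda:us-west-2:111122223333:function:RotationFunctionforDocumentDB',
        RotationRules={'AutomaticallyAfterDays': 30}
    )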

Note: Configuring rotation is a privileged action that requires several IAM permissions, and you should only grant this access to trusted individuals. To grant these permissions, you can use the IAMFullAccess AWS managed policy.

Now, I show you how to configure Secrets Manager to rotate the secret
Applications/MyApp/Documentdb-instance automatically.

  1. From the Secrets Manager console, I go to the list of secrets and choose the secret I created in phase 1, Applications/MyApp/Documentdb-instance.
     
    Figure 9: Choose the secret from Phase 1

  2. Scroll to Rotation configuration, and then select Edit rotation.
     
    Figure 10: Select the Edit rotation configuration

  3. To enable rotation, select Enable automatic rotation, and then choose how frequently Secrets Manager rotates this secret. For this example, I set the rotation interval to 30 days. Then, choose Create a new Lambda function to perform rotation and give the function an easy-to-remember name. For this example, I choose the name RotationFunctionforDocumentDB.
     
    Figure 11: Choose to enable automatic rotation, select a rotation interval, create a new Lambda function, and give it a name

  4. Next, Secrets Manager requires permissions to rotate this secret on your behalf. Because I’m storing the master user database credential, Secrets Manager can use this credential to perform rotations. Therefore, I select Use this secret, and then select Save.
     
    Figure 12: Select credentials for Secrets Manager to use

  5. The banner on the next screen confirms that I successfully configured rotation and the first rotation is in progress, which enables you to verify that rotation is functioning as expected. Secrets Manager will rotate this credential automatically every 30 days.
     
    Figure 13: The banner at the top of the screen will show the status of the rotation

Summary

I explained the key benefits of AWS Secrets Manager and showed how you can use it to store and automatically rotate the credentials that your applications use to access Amazon DocumentDB clusters securely. You can follow similar steps to rotate credentials for Amazon Redshift.

Secrets Manager helps you protect access to your applications, services, and IT resources without the upfront investment and ongoing maintenance costs of operating your own secrets management infrastructure. To get started, visit the Secrets Manager console. To learn more, read the Secrets Manager documentation. If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Secrets Manager forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Apurv Awasthi

Apurv is the product manager for credentials management services at AWS, including AWS Secrets Manager and IAM Roles. He enjoys the “Day 1” culture at Amazon because it aligns with his experience building startups in the sports and recruiting industries. Outside of work, Apurv enjoys hiking. He holds an MBA from UCLA and an MS in computer science from University of Kentucky.

How to centralize and automate IAM policy creation in sandbox, development, and test environments

Post Syndicated from Mahmoud ElZayet original https://aws.amazon.com/blogs/security/how-to-centralize-and-automate-iam-policy-creation-in-sandbox-development-and-test-environments/

To keep pace with AWS innovation, many customers allow their application teams to experiment with AWS services in sandbox environments as they move toward production-ready architecture. These teams need timely access to various sets of AWS services and resources, which means they also need a mechanism to help ensure least privilege is granted. In other words, your application team generally shouldn’t have access to administrative resources, such as an AWS Lambda function that takes periodic Amazon Elastic Block Store snapshot backups, or an Amazon CloudWatch Events rule that sends events to a centralized information security account managed by your security team.

In this blog post, I’ll show you how to create a centralized and automated workflow that creates and validates AWS Identity and Access Management (IAM) policies for application teams working in various sandbox, development, and test environments. Your security developers can customize this workflow according to the specific requirements of your security team. They can create logic to limit the allowed permission sets based on account type or owning team. I’ll use AWS CodePipeline to create and manage a workflow containing various stages and spanning multiple AWS accounts that I’ll describe in more detail in the next section.

Solution overview

I’ll start with this scenario: Alice is an administrator for an AWS sandbox account used by her organization’s data scientists to try out AWS analytics services such as Amazon Athena and Amazon EMR. The data scientists assess the suitability of these services for their production use cases by running sample analytics jobs on portions of real data sets after any sensitive information has been taken out. The data sets are stored in an existing Amazon Simple Storage Service (Amazon S3) bucket. For every new project, Alice authors a new IAM policy that allows the project team to access their requested Amazon S3 bucket and create their analytics clusters. However, Alice must follow a company guideline that sandbox accounts can only launch specific Amazon Elastic Compute Cloud (Amazon EC2) instance types. She must also restrict access to all administrative AWS Lambda functions and CloudWatch Events rules that the security team uses to monitor sandbox account compliance. Below is the solution that meets these requirements and makes it easier for Alice and other administrators to perform their tasks.
 

Figure 1: Solution architecture

  1. Alice uses the IAM visual editor to author a template that gives the data science team access to launch and manage EMR clusters that analyze S3-based data sets. She then uploads the IAM JSON policy document into an existing S3 bucket using an AWS Key Management Service (AWS KMS) key. The key and the S3 bucket are already created by the security team as part of account baselining, which I will detail later in this post.
  2. AWS CodePipeline automatically fetches the IAM JSON policy document and invokes a sequence of validation checks that use a single and central Lambda function hosted in an AWS account managed by the security team.
  3. If the IAM JSON policy adheres to all account and general security requirements coded by the security developers, the central Lambda function automatically creates the policy in Alice’s account and the pipeline will succeed. The central validation Lambda function will also attach a set of predefined explicit denies to the IAM policy to ensure that it limits undesired user capabilities in the sandbox account. If the IAM JSON policy fails the checks, the pipeline will fail and provide Alice the specific reason for non-compliance. Alice must then modify the policy and resubmit. When the policy has been successfully created, Alice will attach it to the right IAM user, group, or role.

Solution deployment

This solution includes the following three steps:

  1. Deploy the solution prerequisites.
  2. Set up the policy validation Lambda function in the central information security account.
  3. Test the sandbox account pipeline.

Prerequisites

As this solution manages permissions granted to AWS services or IAM entities, I highly recommend that you try the solution first in an isolated test environment to make sure it meets all your security requirements.

  1. You’ll need administrator access in two AWS accounts to set up the solution. The deployment of this solution is typically done by one of your organization’s administrators while setting up new AWS accounts. These are the two account types you’ll need access to:
     

    • A sandbox account. This lets application teams experiment with various AWS architectures. This could be a development or test account, as mentioned earlier.
    • A central information security account. Typically, this is owned by an information security team who monitors and enforces security compliance within a multi-account structure.


Important: Because the Lambda function that you’ll create in the information security account has highly privileged permissions, it’s important to strictly follow best practices for securing the account. You need to limit account access to security team members. Sandbox account administrators should also not give this central Lambda function any IAM permissions in their sandbox account beyond IAM policy creation.

  2. Because you’ll use the AWS Management Console for both AWS accounts, I strongly recommend that you have roles in both AWS accounts and use the console’s Switch Role feature. You can attach an alias to each account and give each a different color code so that you always know which one you’re logged into.
  3. Make sure to use the same AWS region for all the resources that you create for this solution.

Step 1: Deploy the solution prerequisites

Before building the pipeline across the two AWS accounts, you must first configure the required resources in both accounts, such as IAM roles and encryption keys. This configuration is typically done according to your security team’s guidelines when your organization first sets up the sandbox, development, or test environment.

Important

  • In addition to the initial setup you’ll create in this section, your security team must explicitly deny sandbox, development, or test account administrators from attaching IAM policies that do not meet the allowed security policies for that account type, such as the AdministratorAccess IAM policy. Moreover, your security team must ensure that any current or future users, groups, or roles in the account have no permissions to directly set or update IAM policies (for example, CreatePolicy, CreatePolicyVersion, PutRolePolicy, PutUserPolicy, PutGroupPolicy, or UpdateAssumeRolePolicy). You want to ensure that IAM policies can only be created through the automation pipeline, which I’ll show you how to build shortly. A hypothetical sketch of such a deny statement follows this list.
  • Because the solution I’ll be describing focuses on the creation of least privilege permissions, it’s highly advisable that your security team combines the solution with IAM permission boundaries to make sure that any permissions defined in this solution are scoped by a set of pre-defined permissions for every type of account in the organization. For example, your account administrators might only be allowed to create IAM users or roles with a pre-defined set of permission boundaries that limit the permissions attached to those principals. For more information about permission boundaries, please refer to this AWS Security blog post.
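As an illustration of the first point above, the following is a hypothetical sketch of an explicit deny (attached, for example, through an SCP or a permissions boundary) that blocks direct IAM policy writes for every principal except the pipeline’s cross-account role. The account ID is a placeholder, and central-account-role is the role name used later in this walkthrough.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DenyDirectIamPolicyWrites",
          "Effect": "Deny",
          "Action": [
            "iam:CreatePolicy",
            "iam:CreatePolicyVersion",
            "iam:PutRolePolicy",
            "iam:PutUserPolicy",
            "iam:PutGroupPolicy",
            "iam:UpdateAssumeRolePolicy"
          ],
          "Resource": "*",
          "Condition": {
            "StringNotLike": {
              "aws:PrincipalArn": "arn:aws:iam::111122223333:role/central-account-role"
            }
          }
        }
      ]
    }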

Create the sandbox account prerequisites

Follow the steps below to deploy an AWS CloudFormation template that will create the following resources in the sandbox account:

  • An S3 bucket where your sandbox administrators will upload IAM policies
  • An IAM role that your automated pipeline will use to access the S3 bucket that stores the IAM policies
  • An AWS KMS key that you will use to encrypt the IAM policies in your S3 bucket
  1. While logged in to your sandbox account in your default browser, select this link to launch an AWS stack with the sandbox environment prerequisites. You’ll be redirected to the CloudFormation console with the template URL already populated.
     
    Figure 2: CloudFormation console with prepopulated URL

  2. Select Next and, optionally, provide a name for your stack. A suggested stack name, Sandbox-Prerequisites, should already be populated.
  3. The template defines an input parameter called CentralAccount that you can populate with the AWS account ID of your security account. For more information on how to find the account ID of your security account, see the AWS documentation on finding your AWS account ID.
  4. Select Next, and then select Next again.
  5. To have the stack create the IAM roles that your pipeline will use, select the check box that says I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create Stack.
  6. Select the Stack info tab and refresh periodically while watching the Stack Status field value. After your stack reaches the state CREATE_COMPLETE, navigate to the CloudFormation Outputs tab and copy the following output values to the text editor of your choice. You’ll use these values in subsequent CloudFormation stacks.
     
    Figure 3: CloudFormation Outputs tab

Create the information security account prerequisites

Follow the steps below to deploy a CloudFormation template that will create the following resources in your information security account:

  • An IAM role used by your automated pipeline to invoke your central Lambda function and to provide access to the sandbox account KMS key
  • An IAM role used by the central Lambda function to assume a role in the sandbox account and manage IAM policies
  1. While logged in to your security account in your default browser, select this link to launch an AWS stack with the security environment prerequisites. You’ll be redirected to the CloudFormation console with the template URL already populated.
  2. Select Next and, optionally, provide a name for your stack. A suggested stack name, Sandbox-Prerequisites, should already be populated.
  3. Populate the following input parameter fields:
    • SandboxAccount: The AWS account ID for the sandbox account.
    • ArtifactBucket: The bucket name that you noted in your text editor from the previous stack run in the sandbox account
    • CMKARN: The Amazon Resource Name (ARN) of the KMS key that you noted in your text editor from the previous stack run in the sandbox account
    • PolicyCheckerFunctionName: The name of the Lambda function to be created later. The default value is PolicyChecker
  4. Select Next, and then select Next again.
  5. To have the stack create the IAM roles used by your pipeline, select the box that reads I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create Stack.
  6. Wait for your stack until it reaches the state CREATE_COMPLETE.

Create the sandbox account pipeline

Now, switch back to your sandbox account and deploy the CloudFormation template that will create the following resources in the sandbox account:

  • An AWS CodePipeline automation pipeline that fetches the IAM policy document from S3 and sends it to the security account for centralized validation. If valid, a Lambda function in the information security account will also create the IAM policy in the sandbox account.
  • An S3 bucket policy to allow your central Lambda function to fetch the IAM policy JSON document from your bucket
  • An IAM role that will be assumed by the Lambda function in the central information security account and used to create IAM policies in the sandbox account. Sandbox account administrators can then attach those IAM policies to the required entities, such as an IAM user or role.
  1. While logged in to your sandbox account in your default browser, select this link to launch an AWS stack with the sandbox pipeline resources. You’ll be redirected to the CloudFormation console with the template URL already populated.
  2. Select Next and, optionally, provide a name for your stack. A suggested stack name, Sandbox-Pipeline, should already be populated.
  3. Populate the following input parameter fields:
    • CentralAccount: The AWS account ID of the information security account, without hyphens.
    • ArtifactBucket: The same bucket name that you noted in your text editor earlier and used in the previous stack in the information security account.
    • CMKARN: The ARN of the KMS key that you noted in your text editor earlier and used in the previous stack in the information security account.
    • PolicyCheckerFunctionName: Again, the name of the Lambda function to be created later. It must be the same value you provided to the information security account template.
  4. Select Next, and then select Next again.
  5. To have the stack create the required IAM roles, select the box that reads I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create Stack.
  6. Wait for your stack until it reaches the state CREATE_COMPLETE.

Step 2: Set up the policy validation Lambda function in the central information security account

In the central information security account, create the Lambda function to validate the IAM policies created in the sandbox environment.

  1. In the AWS Lambda console, select Create Function and then select Author from scratch. Provide values for the following fields:
    • Name. This must be the same function name defined as input parameter PolicyCheckerFunctionName to CloudFormation in step 1, when you set up the information security account prerequisites. If you did not change the default value in step 1, the default is still PolicyChecker.
    • Runtime. Python 2.7.
    • Role. To set the role, select Choose an existing role, and then select the role named policy-checker-lambda-role. This is the role you created in step 1, when you set up the information security account prerequisites.

    Choose Create Function, scroll down to Function Code, and then paste the following code into the editor (replacing the existing code):

    
    #  Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    #  Licensed under the Apache License, Version 2.0 (the "License"). You may not
    #  use this file except in compliance with
    #  the License. A copy of the License is located at
    #      http://aws.amazon.com/apache2.0/
    #  or in the "license" file accompanying this file. This file is distributed
    #  on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
    #  either express or implied. See the License for the
    #  specific language governing permissions and
    #  limitations under the License.
    from __future__ import print_function
    import json
    import boto3
    import zipfile
    import tempfile
    import os
    
    print('Loading function')
    PERMISSIVE_ERROR_MSG = """Policy creation request rejected: * permissions not
                             allowed in both actions and resources"""
    GENERAL_ERROR_MSG = """An error has occurred while validating policy.
                            Please contact admin"""
    
    
    def get_template(event, s3, artifact, file_in_zip):
        # Download the pipeline artifact (a zip file) from S3 and return the
        # contents of the requested file from inside the archive.
        bucket = event['CodePipeline.job']['data']['inputArtifacts'][0]['location']['s3Location']['bucketName']
        key = event['CodePipeline.job']['data']['inputArtifacts'][0]['location']['s3Location']['objectKey']
    
        with tempfile.NamedTemporaryFile() as tmp_file:
            s3.download_file(bucket, key, tmp_file.name)
            with zipfile.ZipFile(tmp_file.name, 'r') as zip_file:
                return zip_file.read(file_in_zip)
    
    
    def get_sts_session(event, account, rolename):
        sts = boto3.client("sts")
        RoleArn = str("arn:aws:iam::" + account + ":role/" + rolename)
        response = sts.assume_role(
            RoleArn=RoleArn,
            RoleSessionName='SecurityManageAccountPermissions',
            DurationSeconds=900)
        sts_session = boto3.Session(
            aws_access_key_id=response['Credentials']['AccessKeyId'],
            aws_secret_access_key=response['Credentials']['SecretAccessKey'],
            aws_session_token=response['Credentials']['SessionToken'],
            region_name=os.environ['AWS_REGION'],
            botocore_session=None,
            profile_name=None)
        return (sts_session)
    
    
    def ManagePolicy(event, context):
        # Set boto session to get pipeline artifact from sandbox/dev/test account
        artifact_session = boto3.Session(
            aws_access_key_id=event['CodePipeline.job']['data']
                                   ['artifactCredentials']['accessKeyId'],
            aws_secret_access_key=event['CodePipeline.job']['data']
                                       ['artifactCredentials']['secretAccessKey'],
            aws_session_token=event['CodePipeline.job']['data']
                                   ['artifactCredentials']['sessionToken'],
            region_name=os.environ['AWS_REGION'],
            botocore_session=None,
            profile_name=None)
        # Fetch pipeline artifact from S3
        s3 = artifact_session.client('s3')
        permission_doc = get_template(event, s3, '', 'policy.json')
        metadata_doc = json.loads(get_template(event, s3, '', 'metadata.json'))
        permission_doc_json = json.loads(permission_doc)
        # Assume the central account role in sandbox/dev/test account
        global STS_SESSION
        STS_SESSION = ''  
        STS_SESSION = get_sts_session(
            event, event['CodePipeline.job']['accountId'], 'central-account-role')
        iam = STS_SESSION.client('iam')
        codepipeline = STS_SESSION.client('codepipeline')
        policy_arn = 'arn:aws:iam::' + event['CodePipeline.job']['accountId'] + ':policy/' + metadata_doc['PolicyName']
    
        try:
            # 1.Sample code - Validate policy sent from sandbox/dev/test account:
            # look for * actions and * resources
            for statement in permission_doc_json['Statement']:
                if statement['Action'] == '*' and statement['Resource'] == '*':
                    return codepipeline.put_job_failure_result(
                                        jobId=event['CodePipeline.job']['id'],
                                        failureDetails={
                                            'type': 'JobFailed',
                                            'message': PERMISSIVE_ERROR_MSG})
            # 2.Sample code - Attach any required denies from central
            # pre-defined policy
            iam_local = boto3.client('iam')
            account_id = context.invoked_function_arn.split(":")[4]
            local_policy_arn = 'arn:aws:iam::' + account_id + ':policy/central-deny-policy-sandbox'
            policy_response = iam_local.get_policy(PolicyArn=local_policy_arn)
            policy_version_id = policy_response['Policy']['DefaultVersionId']
            policy_version_doc = iam_local.get_policy_version(
                PolicyArn=local_policy_arn,
                VersionId=policy_version_id)
            for statement in policy_version_doc['PolicyVersion']['Document']['Statement']:
                permission_doc_json['Statement'].append(
                   statement
                )
            # 3. If validated successfully, create policy in
            # sandbox/dev/test account
            iam.create_policy(
                PolicyName=metadata_doc['PolicyName'],
                PolicyDocument=json.dumps(permission_doc_json),
                Description=metadata_doc['PolicyDescription'])
    
            # successful creation, put result back to
            # sandbox/dev/test account pipeline
            codepipeline.put_job_success_result(
                jobId=event['CodePipeline.job']['id'])
        except Exception as e:
            print('Error: ' + str(e))
            codepipeline.put_job_failure_result(
                jobId=event['CodePipeline.job']['id'],
                failureDetails={'type': 'JobFailed', 'message': GENERAL_ERROR_MSG})
    
    def lambda_handler(event, context):
        print(event)
        ManagePolicy(event, context)
    

    This sample code shows how the Lambda function checks the IAM JSON policy submitted by Alice for policies that are too permissive because they allow all IAM actions on all account resources. The sample code also appends the statements of a predefined central deny policy (central-deny-policy-sandbox) to the submitted policy; in this example, that deny prevents the launch of Amazon EC2 instances that are not part of the T2 instance family, ensuring that only T2 instances can be launched. Your security developers should author code similar to this sample code in order to meet the security policies of every account type and control the IAM policies created in various sandbox, development, and test environments. A hypothetical sketch of such a deny policy appears after this procedure.

  2. Before saving your new Lambda function code, scroll further down to the Basic Settings section and increase the function timeout to 10 seconds.
  3. Select Save.
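For reference, the central deny policy described above might contain a statement like the following. This is a hypothetical sketch rather than the exact central-deny-policy-sandbox created by the CloudFormation template; it denies launching any EC2 instance whose type is outside the T2 family.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DenyNonT2Instances",
          "Effect": "Deny",
          "Action": "ec2:RunInstances",
          "Resource": "arn:aws:ec2:*:*:instance/*",
          "Condition": {
            "StringNotLike": {
              "ec2:InstanceType": "t2.*"
            }
          }
        }
      ]
    }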

Step 3: Test the sandbox account pipeline

Now it’s time to deploy the solution in your sandbox account.

  1. Create the following files and compress them into an archive with the name policy.zip (this is the name expected by your created pipeline).
    • metadata.json: This file contains metadata like the name and description of the IAM policy to be created.
      
                      {
                      "PolicyDescription": "ec2 start permission policy",
                      "PolicyName": "Ec2RunTeamA"
                      }
                      

    • policy.json: This file contains the JSON body of the IAM policy to be created.
      
                      {
                      "Version": "2012-10-17",
                      "Statement": [
                              {
                              "Sid": "EC2Run",
                              "Effect": "Allow",
                              "Action": "ec2:RunInstances",
                              "Resource": "*"
                              }
                      ]
                      }
                      

  2. To upload your policy.zip file to the bucket you created earlier, go to the Amazon S3 console in the sandbox account and, in the search box at the top of the page, search for the bucket you noted in your text editor earlier as ArtifactBucket.
  3. When you locate your bucket, select the bucket name, and then select Upload. The upload dialog will appear.
  4. Select Add Files and navigate to the folder with the policy.zip file. Select the file, select Open, select Next, and then select Next again.
     
    Figure 4: S3 upload dialog

  5. Select the AWS KMS master-key radio button, and then select the KMS key that has the alias codepipeline-policy-crossaccounts.
     
    Figure 5: Selecting the KMS key

  6. Select Next, and then select Upload.
  7. Go to AWS CodePipeline console, select your sandbox pipeline, and wait for the pipeline to start running. It can take up to a minute for it to start.
     
    Figure 6: AWS CodePipeline console

  8. Wait for your pipeline to complete. There should be no validation errors for the IAM policy you just uploaded and your IAM policy should be successfully created. To view the newly created IAM policy, open the AWS IAM console.
  9. Select Policies on the left and search for the policy with the name defined in the metadata.json file.
     
    Figure 7: Viewing your new policy

  10. Select the policy name. Note the IAM deny that was automatically added to your defined policy.

If you’d like to test the pipeline further, you can modify the policy to permit all actions on all resources. When policy.zip is uploaded again, the pipeline should return the following error:


Policy creation request rejected: * permissions not allowed in both actions and resources
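For example, replacing the statement in policy.json with the wildcard statement below, re-creating policy.zip, and uploading it again should produce that rejection, because the validation Lambda function flags any statement whose Action and Resource are both "*".

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "TooPermissive",
          "Effect": "Allow",
          "Action": "*",
          "Resource": "*"
        }
      ]
    }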

If you encounter any errors as you modify your Lambda function code, you can always go back to the Lambda function logs in the central information security account. For more information on how to access Lambda function logs, please refer to the documentation.

The same logic used here can be extended to other sandbox, development, or test environments. However, for the central information security account, the existing roles will need to be updated to trust and have access to the resources in the newly added sandbox, development, or test account.

Summary

In this blog post, I showed you how to centralize the validation and creation of IAM policies across various AWS accounts. This allows your security developers to start coding your security best practices, permitting automatic creation and validation of IAM policies across your various sandbox, development, and test accounts. Account administrators can then attach those validated IAM policies to the required IAM users, groups, or roles. This process strikes a balance between agility and control. It empowers your account administrators to create compliant, least-privilege IAM policies, while also allowing your application teams to keep experimenting and innovating quickly. If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Mahmoud ElZayet

Mahmoud is a Global Accounts Solutions Architect at AWS. He works with large enterprise customers providing guidance and technical assistance for building cloud solutions. Mahmoud is passionate about DevOps and Cloud Compliance topics. Outside of work, he enjoys exploring new places with his wife and two kids.

How to create and manage users within AWS Single Sign-On

Post Syndicated from Vijay Sharma original https://aws.amazon.com/blogs/security/how-to-create-and-manage-users-within-aws-sso/

AWS Single Sign-On (AWS SSO) is a cloud service that allows you to grant your users access to AWS resources, such as Amazon EC2 instances, across multiple AWS accounts. By default, AWS SSO now provides a directory that you can use to create users, organize them in groups, and set permissions across those groups. You can also grant the users that you create in AWS SSO permissions to applications such as Salesforce, Box, and Office 365. AWS SSO and its directory are available at no additional cost to you.

A directory is a key building block that allows you to manage the users to whom you want to grant access to AWS resources and applications. AWS Identity and Access Management (IAM) provides a way to create users that can be used to access AWS resources within one AWS account. However, many businesses prefer an approach that enables users to sign in once with a single credential and access multiple AWS accounts and applications. You can now create your users centrally in AWS SSO and manage user access to all your AWS accounts and applications. Your users sign in to a user portal with a single set of credentials configured in AWS SSO, allowing them to access all of their assigned accounts and applications in a single place.

Note: If you manage your users in a Microsoft Active Directory (Microsoft AD) directory, AWS SSO already provides you with an option to connect to a Microsoft AD directory. By connecting your Microsoft AD directory once with AWS SSO, you can assign permissions for AWS accounts and applications directly to your users by easily looking up users and groups from your Microsoft AD directory. Your users can then use their existing Microsoft AD credentials to sign into the AWS SSO user portal and access their assigned accounts and applications in a single place. Customers who manage their users in an existing Lightweight Directory Access Protocol (LDAP) directory or through a cloud identity provider such as Microsoft Azure AD can continue to use IAM federation to enable their users’ access to AWS resources.

How to create users and groups in AWS SSO

You can create users in AWS SSO by configuring their email address and name. When you create a user, AWS SSO sends an email to the user by default so that they can set their own password. Your user will use their email address and a password they configure in AWS SSO to sign into the user portal and access all of their assigned accounts and applications in a single place.

You can also add the users that you create in AWS SSO to groups you create in AWS SSO. In addition, you can create permission sets that define permitted actions on an AWS resource, and assign them to your users and groups. For example, you can grant the DevOps group permissions to your production AWS accounts. When you add users to the DevOps group, they get access to your production AWS accounts automatically.

In this post, I will show you how to create users and groups in AWS SSO, how to create permission sets, how to assign your groups and users to permission sets and AWS accounts, and how your users can sign into the AWS SSO user portal to access AWS accounts. To learn more about how to grant users that you create in AWS SSO permissions to business applications such as Office 365 and Salesforce, see Manage SSO to Your Applications.

Walk-through prerequisites

For this walk-through, I assume the following:

Overview

To illustrate how to add users in AWS SSO and how to grant permissions to multiple AWS accounts, imagine that you’re the IT manager for a company, Example.com, that wants to make it easy for its users to access resources in multiple AWS accounts. Example.com has five AWS accounts: a master account (called MasterAcct), two developer accounts (DevAccount1 and DevAccount2), and two production accounts (ProdAccount1 and ProdAccount2). Example.com uses AWS Organizations to manage these accounts and has already enabled AWS SSO.

Example.com has two developers, Martha and Richard, who need full access to Amazon EC2 and Amazon S3 in the developer accounts (DevAccount1 and DevAccount2) and read-only access to EC2 and S3 resources in the production accounts (ProdAccount1 and ProdAccount2).

The following diagram illustrates how you can grant Martha and Richard permissions to the developer and production accounts in four steps:

  1. Add users and groups in AWS SSO: Add users Martha and Richard in AWS SSO by configuring their names and email addresses. Add a group called Developers in AWS SSO and add Martha and Richard to the Developers group.
  2. Create permission sets: Create two permission sets. In the first permission set, include policies that give full access to Amazon EC2 and Amazon S3. In the second permission set, include policies that give read-only access to Amazon EC2 and Amazon S3.
  3. Assign groups to accounts and permission sets: Assign the Developers group to your developer accounts and assign the permission set that gives full access to Amazon EC2 and Amazon S3. Assign the Developers group to your production accounts, too, and assign the permission set that gives read-only access to Amazon EC2 and Amazon S3. Martha and Richard now have full access to Amazon EC2 and Amazon S3 in the developer accounts and read-only access in the production accounts.
  4. Users sign into the User Portal to access accounts: Martha and Richard receive email from AWS to set their passwords with AWS SSO. Martha and Richard can now sign into the AWS SSO User Portal using their email addresses and the passwords they set with AWS SSO, allowing them to access their assigned AWS accounts.
Figure 1: Architecture diagram

Step 1: Add users and groups in AWS SSO

To add users in AWS SSO, navigate to the AWS SSO Console. Then, follow the steps below to add Martha as a user, to create a group called Developers, and to add Martha to the Developers group in AWS SSO.

  1. In the AWS SSO Dashboard, choose Manage your directory to navigate to the Directory tab.
    Figure 2: Navigating to the “Manage your directory” page

  2. By default, AWS SSO provides you a directory that you can use to manage users and groups in AWS SSO. To add a user in AWS SSO, choose Add user. If you previously connected a Microsoft AD directory with AWS SSO, you can switch to using the directory that AWS SSO now provides by default by following the steps in Change Directory.
    Figure 3: Adding new users to your directory

  3. On the Add User page, enter an email address, first name, and last name for the user, then create a display name. In this example, you’re adding “Martha Rivera” as a user. For the password, choose Send an email to the user with password instructions. This allows users to set their own passwords.

    Optionally, you can also set a mobile phone number and add additional user attributes.

    Figure 4: Adding user details

  4. Next, you’re ready to add the user to groups. First, you need to create a group. Later, in Step 3, you can grant your group permissions to an AWS account so that any users added to the group will inherit the group’s permissions automatically. In this example, you will create a group called Developers and add Martha to the group. To do so, from the Add user to groups page, choose Create group.
    Figure 5: Creating a new group

  5. In the Create group window, title your group by filling out the Group name field. For this example, enter Developers. Optionally, you can also enter a description of the group in the Description field. Choose Create to create the group.
    Figure 6: Adding a name and description to your new group

  6. On the Add users to group page, check the box next to the group you just created, and then choose Add user. Following this process will allow you to add Martha to the Developers group.
    Figure 7: Adding a user to your new group

You’ve successfully created the user Martha and added her to the Developers group. You can repeat sub-steps 2, 3, and 6 above to create more users and add them to the group. This is the process you should follow to create the user Richard and add him to the Developers group.

Next, you’ll grant the Developers group permissions to AWS resources within multiple AWS accounts. To follow along, you’ll first need to create permission sets.

Step 2: Create permission sets

To grant user permissions to AWS resources, you must create permission sets. A permission set is a collection of administrator-defined policies that AWS SSO uses to determine a user’s permissions for any given AWS account. Permission sets can contain either AWS managed policies or custom policies that are stored in AWS SSO. Policies contain statements that represent individual access controls (allow or deny) for various tasks. This determines what tasks users can or cannot perform within the AWS account. To learn more about permission sets, see Permission Sets.

For this use case, you’ll create two permission sets: 1) EC2AndS3FullAccess, which has the AmazonEC2FullAccess and AmazonS3FullAccess managed policies attached, and 2) EC2AndS3ReadAccess, which has the AmazonEC2ReadOnlyAccess and AmazonS3ReadOnlyAccess managed policies attached. Later, in Step 3, you can assign groups to these permission sets and AWS accounts, so that your users have access to these resources. To learn more about creating permission sets with different levels of access, see Create Permission Set.
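If you prefer to script these steps, the AWS SSO administration APIs expose the same operations. The following is a minimal sketch using boto3’s sso-admin client (assuming a recent SDK version that includes this client); it creates the EC2AndS3FullAccess permission set and attaches the two AWS managed policies named above.

    import boto3

    sso_admin = boto3.client('sso-admin')

    # Assumption: a single AWS SSO instance exists in this account and Region.
    instance_arn = sso_admin.list_instances()['Instances'][0]['InstanceArn']

    # Create the full-access permission set.
    permission_set_arn = sso_admin.create_permission_set(
        Name='EC2AndS3FullAccess',
        Description='Full access to Amazon EC2 and Amazon S3',
        InstanceArn=instance_arn
    )['PermissionSet']['PermissionSetArn']

    # Attach the AWS managed policies used in this walkthrough.
    for policy_arn in ['arn:aws:iam::aws:policy/AmazonEC2FullAccess',
                       'arn:aws:iam::aws:policy/AmazonS3FullAccess']:
        sso_admin.attach_managed_policy_to_permission_set(
            InstanceArn=instance_arn,
            PermissionSetArn=permission_set_arn,
            ManagedPolicyArn=policy_arn
        )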

Follow the steps below to create permission sets:

  1. Navigate to the AWS SSO Console and choose AWS accounts in the left-hand navigation menu.
  2. Switch to the Permission sets tab on the AWS Accounts page, and then choose Create permissions set.
    Figure 8: Creating a permission set

  3. On the Create new permissions set page, choose Create a custom permission set. To learn more about choosing between an existing job function policy and a custom permission set, see Create Permission Set.
    Figure 9: Customizing a permission set

  4. Enter EC2AndS3FullAccess in the Name field and choose Attach AWS managed policies. Then choose AmazonEC2FullAccess and AmazonS3FullAccess. Choose Create to create the permission set.
    Figure 10: Attaching AWS managed policies to your permission set

You’ve successfully created a permission set. You can use the steps above to create another permission set, called EC2AndS3ReadAccess, by attaching the AmazonEC2ReadOnlyAccess and AmazonS3ReadOnlyAccess managed policies. Now you’re ready to assign your groups to accounts and permission sets.

Step 3: Assign groups to accounts and permission sets

In this step, you’ll assign your Developers group full access to Amazon EC2 and Amazon S3 in the developer accounts and read-only access to these resources in the production accounts. To do so, you’ll assign the Developers group to the EC2AndS3FullAccess permission set and to the two developer accounts (DevAccount1 and DevAccount2). Similarly, you’ll assign the Developers group to the EC2AndS3ReadAccess permission set and to the production AWS accounts (ProdAccount1 and ProdAccount2).
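These assignments can also be scripted with the same sso-admin client shown earlier. The sketch below is illustrative only; the instance ARN, permission set ARN, group ID, and account ID are placeholders you would look up in your own environment (the group ID comes from the AWS SSO identity store, not the group name).

    import boto3

    sso_admin = boto3.client('sso-admin')

    # Placeholders: look these values up in your own environment.
    instance_arn = 'arn:aws:sso:::instance/ssoins-EXAMPLE'
    permission_set_arn = 'arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE'
    developers_group_id = 'IDENTITY-STORE-GROUP-ID'

    # Assign the Developers group to DevAccount1 (placeholder account ID)
    # with the EC2AndS3FullAccess permission set.
    sso_admin.create_account_assignment(
        InstanceArn=instance_arn,
        TargetId='111122223333',
        TargetType='AWS_ACCOUNT',
        PermissionSetArn=permission_set_arn,
        PrincipalType='GROUP',
        PrincipalId=developers_group_id
    )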

Follow the steps below to assign the Developers group to the EC2AndS3FullAccess permission set and developer accounts (DevAccount1 and DevAccount2). To learn more about how to manage access to your AWS accounts, see Manage SSO to Your AWS Accounts.

  1. Navigate to the AWS SSO Console and choose AWS Accounts in the left-hand navigation menu.
  2. Switch to the AWS organization tab and choose the accounts to which you want to assign your group. For this example, select accounts DevAccount1 and DevAccount2 from the list of AWS accounts. Next, choose Assign users.
    Figure 11: Assigning users to your accounts

  3. On the Select users and groups page, type the name of the group you want to add into the search box and choose Search. For this example, you will be looking for the group called Developers. Check the box next to the correct group and choose Next: Permission Sets.
    Figure 12: Setting permissions for the “Developers” group

  4. On the Select permissions sets page, select the permission sets that you want to assign to your group. For this use case, you’ll select the EC2AndS3FullAccess permission set. Then choose Finish.
    Figure 13: Choosing permission sets

You’ve successfully granted users in the Developers group access to accounts DevAccount1 and DevAccount2, with full access to Amazon EC2 and Amazon S3.

You can follow the same steps above to grant users in the Developers group access to accounts ProdAccount1 and ProdAccount2 with the permissions in the EC2AndS3ReadAccess permission set. This will grant the users in the Developers group read-only access to Amazon EC2 and Amazon S3 in the production accounts.

Figure 14: Notification of successful account configuration

Step 4: Users sign into User Portal to access accounts

Your users can now sign into the AWS SSO User Portal to manage resources in their assigned AWS accounts. The user portal provides your users with single sign-on access to all their assigned accounts and business applications. From the user portal, your users can sign into multiple AWS accounts by choosing the AWS account icon in the portal and selecting the account that they want to access.

You can follow the steps below to see how Martha signs into the user portal to access her assigned AWS accounts.

  1. When you added Martha as a user in Step 1, you selected the option Send the user an email with password setup instructions. AWS SSO sent instructions to set a password to Martha at the email that you configured when creating the user. This is the email that Martha received:
    Figure 15: AWS SSO password configuration email

  2. To set her password, Martha will select Accept invitation in the email that she received from AWS SSO. Selecting Accept invitation will take Martha to a page where she can set her password. After Martha sets her password, she can navigate to the User Portal.
    Figure 16: User Portal sign-in

  3. In the User Portal, Martha can select the AWS Account icon to view all the AWS accounts to which she has permissions.
    Figure 17: View of AWS Account icon from User Portal

  4. Martha can now see the developer and production accounts that you granted her permissions to in previous steps. For each account, she can also see the list of roles that she can assume within the account. For example, for DevAccount1 and DevAccount2, Martha can assume the EC2AndS3FullAccess role that gives her full access to manage Amazon EC2 and Amazon S3. Similarly, for ProdAccount1 and ProdAccount2, Martha can assume the EC2AndS3ReadAccess role that gives her read-only access to Amazon EC2 and Amazon S3. Martha can select accounts and choose Management Console next to the role she wants to assume, letting her sign into the AWS Management Console to manage AWS resources. To switch to a different account, Martha can navigate to the User Portal and select a different account. From the User Portal, Martha can also get temporary security credentials for short-term access to resources in an AWS account using AWS Command Line Interface (CLI). To learn more, see How to Get Credentials of an IAM Role for Use with CLI Access to an AWS Account.
    Figure 18: Switching accounts from the User Portal

  5. Martha bookmarks the user portal URL in her browser so that she can quickly access the user portal the next time she wants to access AWS accounts.

Summary

By default, AWS now provides you with a directory that you can use to manage users and groups within AWS SSO and to grant user permissions to resources in multiple AWS accounts and business applications. In this blog post, I showed you how to manage users and groups within AWS SSO and grant them permissions to multiple AWS accounts. I also showed how your users sign into the user portal to access their assigned AWS accounts.

If you have feedback or questions about AWS SSO, start a new thread on the AWS SSO forum or contact AWS Support. If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Vijay Sharma

Vijay is a Senior Product Manager with AWS Identity.

Use YubiKey security key to sign into AWS Management Console with YubiKey for multi-factor authentication

Post Syndicated from Ujjwal Pugalia original https://aws.amazon.com/blogs/security/use-yubikey-security-key-sign-into-aws-management-console/

AWS Identity and Access Management (IAM) best practice is to require all IAM and root users in your account to sign into the AWS Management Console with multi-factor authentication (MFA). When MFA is enabled, AWS prompts users for their username and password (the first factor – what they know) and also provides an authentication challenge such as one-time passcode (OTP) to their MFA device (the second factor – what they have). Now you can enable a YubiKey security key (manufactured by Yubico, a third party provider) as your users’ MFA device.

YubiKey security keys use Universal 2nd Factor (U2F), an open authentication standard that enables users to easily and securely access multiple online services using a single security key, without needing to install drivers or client software. AWS allows you to enable a YubiKey security key as the MFA device for your IAM users. You can also enable a single key for multiple IAM and root users across AWS accounts, making it easier to manage MFA devices for multiple users. Now, you can use the same key that you already use to authenticate to third-party applications, such as GitHub or Dropbox, to sign in to the AWS Management Console.

In this post, I demonstrate how to enable a YubiKey for your IAM users in the IAM console. I then demonstrate how to sign into the AWS Management Console as an IAM user using the YubiKey security key as your MFA device.

Note: You can enable a YubiKey security key as the MFA device for your root users from the Security Credentials page by following a similar setup process. Also, the AWS Console Mobile App and mobile browsers do not currently support YubiKey security keys as MFA for AWS. For more information, please review Supported Configurations for Using U2F Security Keys.

Enabling a YubiKey security key as MFA device for IAM users

To follow along, you must have a YubiKey security key that you want to associate with your IAM user. You can order a YubiKey security key using Amazon.com or other retailers.

Follow these steps to enable a YubiKey security key for your IAM user:

  1. Sign in to the IAM console.
  2. In the left navigation pane, select Users and then choose the name of the user for whom you want to enable a YubiKey.
  3. Select the Security Credentials tab, and then select the Manage link next to Assigned MFA device.
    Figure 1: Managing assigned MFA devices

  4. In the Manage MFA Device wizard, select U2F security key and then select Continue.
    Figure 2: Selecting your U2F security key

  5. Insert the YubiKey security key into the USB port of your computer, wait for the key to blink, and then touch the button or gold disk on your key. If your key doesn’t blink, please select Troubleshoot U2F to review instructions to troubleshoot the issue.
    Figure 3: Inserting the security key

  6. You’ll receive a notification that the security key assignment was successful. The YubiKey security key is ready for use. Select Close.
    Figure 4: Notification of successful setup

    The Security Credentials tab will now display the U2F security key next to Assigned MFA device.

    Figure 5: Verifying your assigned MFA device

Now that you’ve successfully enabled a YubiKey security key as the MFA device for your IAM user (in this example, DBAdmin), I’ll demonstrate how your IAM user can use their YubiKey security key in addition to their username and password to sign into the AWS Management Console.

Using your YubiKey security key to sign into the AWS Management Console as an IAM user

As an IAM user with MFA enabled, you must use your MFA device to sign into the AWS Management Console. During sign-in, you first need to enter your username and password. Next, you need to complete the authentication challenge using your MFA device. Once you have successfully completed the MFA challenge, you can access the AWS Management Console.
Follow these steps to sign into the AWS Management Console using your YubiKey security key as the MFA device:

  1. Enter your AWS account ID or alias to sign in as an IAM user and select Next.
    Figure 6: Signing in as an IAM user

  2. From the IAM sign-in page, re-enter your AWS account ID or alias, plus the username and password for your IAM user. Then select Sign in.
    Figure 7: Entering your IAM account details

  3. To authenticate with your YubiKey security key, insert your key into the USB port on your computer, wait for the key to blink, and then touch the button or gold disk on your YubiKey security key. If your key doesn’t blink, please select Troubleshoot MFA to review instructions to troubleshoot the issue.
    Figure 8: Completing sign-in with MFA

Your IAM user has successfully completed the MFA challenge and signed into the AWS Management console.

Summary

In this blog post, I shared the benefits of using YubiKey security keys as your MFA device. I demonstrated how you can enable a YubiKey security key for your IAM users through the IAM console. I also showed you how to sign into the AWS Management Console using the YubiKey security key associated with your IAM user. You can also enable a U2F security key as an MFA device for root users by following a similar process.

If you have comments about enabling YubiKey or other MFA devices for your users, submit them in the Comments section below. If you have issues enabling YubiKey for your users, start a thread on the IAM forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Ujjwal Pugalia

Ujjwal is the product manager for the console sign-in and sign-up experience at AWS. He enjoys working in the customer-centric environment at Amazon because it aligns with his prior experience building an enterprise marketplace. Outside of work, Ujjwal enjoys watching crime dramas on Netflix. He holds an MBA from Carnegie Mellon University (CMU) in Pittsburgh.

Rotate Amazon RDS database credentials automatically with AWS Secrets Manager

Post Syndicated from Apurv Awasthi original https://aws.amazon.com/blogs/security/rotate-amazon-rds-database-credentials-automatically-with-aws-secrets-manager/

Recently, we launched AWS Secrets Manager, a service that makes it easier to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. You can configure Secrets Manager to rotate secrets automatically, which can help you meet your security and compliance needs. Secrets Manager offers built-in integrations for MySQL, PostgreSQL, and Amazon Aurora on Amazon RDS, and can rotate credentials for these databases natively. You can control access to your secrets by using fine-grained AWS Identity and Access Management (IAM) policies. To retrieve secrets, employees replace plaintext secrets with a call to Secrets Manager APIs, eliminating the need to hard-code secrets in source code or update configuration files and redeploy code when secrets are rotated.
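For example, instead of reading a database password from a configuration file, an application can make a call like the following. This is a minimal sketch; the secret name matches the one created later in this post, and the Region shown is illustrative.

    import boto3

    client = boto3.client('secretsmanager', region_name='us-west-2')

    # Retrieve the secret by name. For an RDS credential stored through the
    # console, SecretString is a JSON string containing the username, password,
    # host, port, and related connection details.
    response = client.get_secret_value(SecretId='Applications/MyApp/MySQL-RDS-Database')
    credentials = response['SecretString']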

In this post, I introduce the key features of Secrets Manager. I then show you how to store a database credential for a MySQL database hosted on Amazon RDS and how your applications can access this secret. Finally, I show you how to configure Secrets Manager to rotate this secret automatically.

Key features of Secrets Manager

These features include the ability to:

  • Rotate secrets safely. You can configure Secrets Manager to rotate secrets automatically without disrupting your applications. Secrets Manager offers built-in integrations for rotating credentials for Amazon RDS databases for MySQL, PostgreSQL, and Amazon Aurora. You can extend Secrets Manager to meet your custom rotation requirements by creating an AWS Lambda function to rotate other types of secrets. For example, you can create an AWS Lambda function to rotate OAuth tokens used in a mobile application; a skeleton of such a rotation function follows this list. Users and applications retrieve the secret from Secrets Manager, eliminating the need to email secrets to developers or update and redeploy applications after AWS Secrets Manager rotates a secret.
  • Secure and manage secrets centrally. You can store, view, and manage all your secrets. By default, Secrets Manager encrypts these secrets with encryption keys that you own and control. Using fine-grained IAM policies, you can control access to secrets. For example, you can require developers to provide a second factor of authentication when they attempt to retrieve a production database credential. You can also tag secrets to help you discover, organize, and control access to secrets used throughout your organization.
  • Monitor and audit easily. Secrets Manager integrates with AWS logging and monitoring services to enable you to meet your security and compliance requirements. For example, you can audit AWS CloudTrail logs to see when Secrets Manager rotated a secret or configure AWS CloudWatch Events to alert you when an administrator deletes a secret.
  • Pay as you go. Pay for the secrets you store in Secrets Manager and for the use of these secrets; there are no long-term contracts or licensing fees.

Get started with Secrets Manager

Now that you’re familiar with the key features, I’ll show you how to store the credential for a MySQL database hosted on Amazon RDS. To demonstrate how to retrieve and use the secret, I use a Python application running on Amazon EC2 that requires this database credential to access the MySQL instance. Finally, I show how to configure Secrets Manager to rotate this database credential automatically. Let’s get started.

Phase 1: Store a secret in Secrets Manager

  1. Open the Secrets Manager console and select Store a new secret.
     
    Secrets Manager console interface
     
  2. I select Credentials for RDS database because I’m storing credentials for a MySQL database hosted on Amazon RDS. For this example, I store the credentials for the database superuser. I start by securing the superuser because it’s the most powerful database credential and has full access to the database.
     
    Store a new secret interface with Credentials for RDS database selected
     

    Note: For this example, you need permissions to store secrets in Secrets Manager. To grant these permissions, you can use the AWSSecretsManagerReadWriteAccess managed policy. Read the AWS Secrets Manager Documentation for more information about the minimum IAM permissions required to store a secret.

  3. Next, I review the encryption setting and choose to use the default encryption settings. Secrets Manager will encrypt this secret using the Secrets Manager DefaultEncryptionKey in this account. Alternatively, I can choose to encrypt using a customer master key (CMK) that I have stored in AWS KMS.
     
    Select the encryption key interface
     
  4. Next, I view the list of Amazon RDS instances in my account and select the database this credential accesses. For this example, I select the DB instance mysql-rds-database, and then I select Next.
     
    Select the RDS database interface
     
  5. In this step, I specify values for Secret Name and Description. For this example, I use Applications/MyApp/MySQL-RDS-Database as the name and enter a description of this secret, and then select Next.
     
    Secret Name and description interface
     
  6. For the next step, I keep the default setting Disable automatic rotation because my secret is used by my application running on Amazon EC2. I’ll enable rotation after I’ve updated my application (see Phase 2 below) to use Secrets Manager APIs to retrieve secrets. I then select Next.

    Note: If you’re storing a secret that you’re not using in your application, select Enable automatic rotation. See our AWS Secrets Manager getting started guide on rotation for details.

     
    Configure automatic rotation interface
     

  7. Review the information on the next screen and, if everything looks correct, select Store. We’ve now successfully stored a secret in Secrets Manager.
  8. Next, I select See sample code.
     
    The See sample code button
     
  9. Take note of the code samples provided. I will use this code to update my application to retrieve the secret using Secrets Manager APIs.
     
    Python sample code
     

Phase 2: Update an application to retrieve a secret from Secrets Manager

Now that I have stored the secret in Secrets Manager, I update my application to retrieve the database credential from Secrets Manager instead of hard coding this information in a configuration file or source code. For this example, I show how to configure a Python application to retrieve this secret from Secrets Manager.

  1. I connect to my Amazon EC2 instance via Secure Shell (SSH).
  2. Previously, I configured my application to retrieve the database user name and password from the configuration file. Below is the source code for my application.
    import MySQLdb
    import config

    def no_secrets_manager_sample():

        # Get the user name, password, and database connection information from a config file.
        database = config.database
        user_name = config.user_name
        password = config.password

        # Use the user name, password, and database connection information to connect to the database.
        db = MySQLdb.connect(database.endpoint, user_name, password, database.db_name, database.port)

  3. I use the sample code from Phase 1 above and update my application to retrieve the user name and password from Secrets Manager. This code sets up the client, then retrieves and decrypts the secret Applications/MyApp/MySQL-RDS-Database. I’ve added comments to make the code easier to understand. A short sketch showing how the parsed secret might then be used to open the database connection follows the end of this list.
    # Use the code snippet provided by Secrets Manager.
    import boto3
    from botocore.exceptions import ClientError

    def get_secret():
        # Define the secret you want to retrieve.
        secret_name = "Applications/MyApp/MySQL-RDS-Database"
        # Define the Secrets Manager endpoint your code should use.
        endpoint_url = "https://secretsmanager.us-east-1.amazonaws.com"
        region_name = "us-east-1"

        # Set up the client.
        session = boto3.session.Session()
        client = session.client(
            service_name='secretsmanager',
            region_name=region_name,
            endpoint_url=endpoint_url
        )

        # Use the client to retrieve the secret.
        try:
            get_secret_value_response = client.get_secret_value(
                SecretId=secret_name
            )
        # Error handling to make it easier for your code to tolerate faults.
        except ClientError as e:
            if e.response['Error']['Code'] == 'ResourceNotFoundException':
                print("The requested secret " + secret_name + " was not found")
            elif e.response['Error']['Code'] == 'InvalidRequestException':
                print("The request was invalid due to:", e)
            elif e.response['Error']['Code'] == 'InvalidParameterException':
                print("The request had invalid params:", e)
        else:
            # The decrypted secret, using the associated KMS CMK.
            # Depending on whether the secret was a string or binary, one of these fields will be populated.
            if 'SecretString' in get_secret_value_response:
                secret = get_secret_value_response['SecretString']
            else:
                binary_secret_data = get_secret_value_response['SecretBinary']

        # Your code goes here.

  4. Applications require permissions to access Secrets Manager. My application runs on Amazon EC2 and uses an IAM role to obtain access to AWS services. I will attach the following policy to my IAM role. This policy uses the GetSecretValue action to grant my application permission to read the secret from Secrets Manager. This policy also uses the resource element to limit my application to reading only the Applications/MyApp/MySQL-RDS-Database secret. You can visit the AWS Secrets Manager Documentation to understand the minimum IAM permissions required to retrieve a secret.
    {
      "Version": "2012-10-17",
      "Statement": {
        "Sid": "RetrieveDbCredentialFromSecretsManager",
        "Effect": "Allow",
        "Action": "secretsmanager:GetSecretValue",
        "Resource": "arn:aws:secretsmanager:::secret:Applications/MyApp/MySQL-RDS-Database"
      }
    }
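
Once the application can retrieve the secret (step 3) and has permission to read it (step 4), the parsed secret can replace the values that previously came from the config file. The following is a minimal sketch, not part of the original sample code; the key names (username, password, host, port, dbname) assume the default JSON structure Secrets Manager uses for RDS credentials, so adjust them if your secret is shaped differently.

# A minimal sketch: parse the secret and use it to open the MySQL connection
# from Phase 2, step 2. Key names assume the default RDS secret format.
import json

import boto3
import MySQLdb

def connect_using_secrets_manager(secret_name="Applications/MyApp/MySQL-RDS-Database",
                                  region_name="us-east-1"):
    client = boto3.session.Session().client(
        service_name="secretsmanager", region_name=region_name)
    # Retrieve and parse the secret value.
    creds = json.loads(client.get_secret_value(SecretId=secret_name)["SecretString"])
    # Use the retrieved credentials instead of values from a config file.
    return MySQLdb.connect(
        host=creds["host"],
        user=creds["username"],
        passwd=creds["password"],
        db=creds["dbname"],
        port=int(creds["port"]),
    )

Calling connect_using_secrets_manager() returns a ready-to-use MySQLdb connection, with no credentials stored in the application itself.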

Phase 3: Enable rotation for your secret

Rotating secrets periodically is a security best practice because it reduces the risk of misuse of secrets. Secrets Manager makes it easy to follow this security best practice and offers built-in integrations for rotating credentials for MySQL, PostgreSQL, and Amazon Aurora databases hosted on Amazon RDS. When you enable rotation, Secrets Manager creates a Lambda function and attaches an IAM role to this function to execute rotations on a schedule you define.

Note: Configuring rotation is a privileged action that requires several IAM permissions, and you should grant this access only to trusted individuals. To grant these permissions, you can use the IAMFullAccess managed policy.

Next, I show you how to configure Secrets Manager to rotate the secret Applications/MyApp/MySQL-RDS-Database automatically.

  1. From the Secrets Manager console, I go to the list of secrets and choose the secret I created in Phase 1, Applications/MyApp/MySQL-RDS-Database.
     
    List of secrets in the Secrets Manager console
     
  2. I scroll to Rotation configuration, and then select Edit rotation.
     
    Rotation configuration interface
     
  3. To enable rotation, I select Enable automatic rotation. I then choose how frequently I want Secrets Manager to rotate this secret. For this example, I set the rotation interval to 60 days.
     
    Edit rotation configuration interface
     
  4. Next, Secrets Manager requires permissions to rotate this secret on your behalf. Because I’m storing the superuser database credential, Secrets Manager can use this credential to perform rotations. Therefore, I select Use the secret that I provided in step 1, and then select Next.
     
    Select which secret to use in the Edit rotation configuration interface
     
  5. The banner on the next screen confirms that I have successfully configured rotation and the first rotation is in progress, which enables you to verify that rotation is functioning as expected. Secrets Manager will rotate this credential automatically every 60 days.
     
    Confirmation banner message
     
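If you prefer to configure and verify rotation programmatically rather than through the console, the following is a minimal sketch using the Secrets Manager APIs. The Lambda ARN is a hypothetical placeholder for a rotation function; when you enable rotation for an RDS secret through the console, Secrets Manager creates that function for you.

# A minimal sketch: enable rotation every 60 days, then check rotation status.
import boto3

secretsmanager = boto3.client("secretsmanager", region_name="us-east-1")

# The RotationLambdaARN below is a hypothetical placeholder, not a function
# created in this post.
secretsmanager.rotate_secret(
    SecretId="Applications/MyApp/MySQL-RDS-Database",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:MyRotationFunction",
    RotationRules={"AutomaticallyAfterDays": 60},
)

# Confirm that rotation is enabled and see when the secret was last rotated.
details = secretsmanager.describe_secret(SecretId="Applications/MyApp/MySQL-RDS-Database")
print(details.get("RotationEnabled"), details.get("LastRotatedDate"))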

Summary

I introduced AWS Secrets Manager, explained the key benefits, and showed you how to help meet your compliance requirements by configuring AWS Secrets Manager to rotate database credentials automatically on your behalf. Secrets Manager helps you protect access to your applications, services, and IT resources without the upfront investment and ongoing maintenance costs of operating your own secrets management infrastructure. To get started, visit the Secrets Manager console. To learn more, visit the Secrets Manager documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Secrets Manager forum.

Want more AWS Security news? Follow us on Twitter.

How to Patch Linux Workloads on AWS

Post Syndicated from Koen van Blijderveen original https://aws.amazon.com/blogs/security/how-to-patch-linux-workloads-on-aws/

Most malware tries to compromise your systems by using a known vulnerability that the operating system maker has already patched. As best practices to help prevent malware from affecting your systems, you should apply all operating system patches and actively monitor your systems for missing patches.

In this blog post, I show you how to patch Linux workloads using AWS Systems Manager. To accomplish this, I will show you how to use the AWS Command Line Interface (AWS CLI) to:

  1. Launch an Amazon EC2 instance for use with Systems Manager.
  2. Configure Systems Manager to patch your Amazon EC2 Linux instances.

In two previous blog posts (Part 1 and Part 2), I showed how to use the AWS Management Console to perform the necessary steps to patch, inspect, and protect Microsoft Windows workloads. You can implement those same processes for your Linux instances running in AWS by changing the instance tags and types shown in the previous blog posts.

Because most Linux system administrators are more familiar with using a command line, I show how to patch Linux workloads by using the AWS CLI in this blog post. The steps to use the Amazon EBS Snapshot Scheduler and Amazon Inspector are identical for both Microsoft Windows and Linux.

What you should know first

To follow along with the solution in this post, you need one or more Amazon EC2 instances. You may use existing instances or create new instances. For this post, I assume you are using an Amazon EC2 instance running Amazon Linux, launched from an Amazon Machine Image (AMI).

Systems Manager is a collection of capabilities that helps you automate management tasks for AWS-hosted instances on Amazon EC2 and your on-premises servers. In this post, I use Systems Manager for two purposes: to run remote commands and apply operating system patches. To learn about the full capabilities of Systems Manager, see What Is AWS Systems Manager?

As of Amazon Linux 2017.09, the AMI comes preinstalled with the Systems Manager agent. Systems Manager Patch Manager also supports Red Hat and Ubuntu. To install the agent on these Linux distributions or an older version of Amazon Linux, see Installing and Configuring SSM Agent on Linux Instances.

If you are not familiar with how to launch an Amazon EC2 instance, see Launching an Instance. I also assume you launched or will launch your instance in a private subnet. You must make sure that the Amazon EC2 instance can connect to the internet using a network address translation (NAT) instance or NAT gateway to communicate with Systems Manager. The following diagram shows how you should structure your VPC.

Diagram showing how to structure your VPC

Later in this post, you will assign tasks to a maintenance window to patch your instances with Systems Manager. To do this, the IAM user you are using for this post must have the iam:PassRole permission. This permission allows the IAM user assigning tasks to pass their own IAM permissions to the AWS service. In this example, when you assign a task to a maintenance window, IAM passes your credentials to Systems Manager. You also should authorize your IAM user to use Amazon EC2 and Systems Manager. As mentioned before, you will be using the AWS CLI for most of the steps in this blog post. Our documentation shows you how to get started with the AWS CLI. Make sure the AWS CLI is installed and configured with an AWS access key and secret access key that belong to an IAM user that has the following AWS managed policies attached: AmazonEC2FullAccess and AmazonSSMFullAccess.
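
If the IAM user does not yet have these managed policies, the following is a minimal sketch of how you might attach them programmatically; the user name patch-admin is a hypothetical placeholder.

# A minimal sketch: attach the two managed policies mentioned above to the IAM
# user you plan to use. "patch-admin" is a hypothetical user name.
import boto3

iam = boto3.client("iam")
for policy_arn in (
    "arn:aws:iam::aws:policy/AmazonEC2FullAccess",
    "arn:aws:iam::aws:policy/AmazonSSMFullAccess",
):
    iam.attach_user_policy(UserName="patch-admin", PolicyArn=policy_arn)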

Step 1: Launch an Amazon EC2 Linux instance

In this section, I show you how to launch an Amazon EC2 instance so that you can use Systems Manager with the instance. This step requires you to do three things:

  1. Create an IAM role for Systems Manager before launching your Amazon EC2 instance.
  2. Launch your Amazon EC2 instance with Amazon EBS and the IAM role for Systems Manager.
  3. Add tags to the instances so that you can add your instances to a Systems Manager maintenance window based on tags.

A. Create an IAM role for Systems Manager

Before launching an Amazon EC2 instance, I recommend that you first create an IAM role for Systems Manager, which you will use to update the Amazon EC2 instance. AWS already provides a preconfigured managed policy that you can use for the new role; it is called AmazonEC2RoleforSSM.

  1. Create a JSON file named trustpolicy-ec2ssm.json that contains the following trust policy. This policy describes which principal (an entity that can take action on an AWS resource) is allowed to assume the role we are going to create. In this example, the principal is the Amazon EC2 service.
    {
      "Version": "2012-10-17",
      "Statement": {
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole"
      }
    }

  2. Use the following command to create a role named EC2SSM with the trust policy you just created. If the command is successful, it generates JSON-based output that describes the role and its parameters.
    $ aws iam create-role --role-name EC2SSM --assume-role-policy-document file://trustpolicy-ec2ssm.json

  3. Use the following command to attach the AWS managed IAM policy (AmazonEC2RoleforSSM) to your newly created role.
    $ aws iam attach-role-policy --role-name EC2SSM --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM

  4. Use the following commands to create the IAM instance profile and add the role to the instance profile. The instance profile is needed to attach the role we created earlier to your Amazon EC2 instance.
    $ aws iam create-instance-profile --instance-profile-name EC2SSM-IP
    $ aws iam add-role-to-instance-profile --instance-profile-name EC2SSM-IP --role-name EC2SSM

B. Launch your Amazon EC2 instance

To follow along, you need an Amazon EC2 instance that is running Amazon Linux. You can use any existing instance you may have or create a new instance.

Depending on whether you are launching a new instance or using an existing one, do one of the following:

  1. Use the following command to launch a new Amazon EC2 instance using an Amazon Linux AMI available in the US East (N. Virginia) Region (also known as us-east-1). Replace YourKeyPair and YourSubnetId with your information. For more information about creating a key pair, see the create-key-pair documentation. Write down the InstanceId that is in the output because you will need it later in this post.
    $ aws ec2 run-instances --image-id ami-cb9ec1b1 --instance-type t2.micro --key-name YourKeyPair --subnet-id YourSubnetId --iam-instance-profile Name=EC2SSM-IP

  2. If you are using an existing Amazon EC2 instance, you can use the following command to attach the instance profile you created earlier to your instance.
    $ aws ec2 associate-iam-instance-profile --instance-id YourInstanceId --iam-instance-profile Name=EC2SSM-IP

C. Add tags

The final step of configuring your Amazon EC2 instances is to add tags. You will use these tags to configure Systems Manager in Step 2 of this post. For this example, I add a tag named Patch Group and set the value to Linux Servers. I could have other groups of Amazon EC2 instances that I treat differently by having the same tag name but a different tag value. For example, I might have a collection of other servers with the tag name Patch Group with a value of Web Servers.

  • Use the following command to add the Patch Group tag to your Amazon EC2 instance.
    $ aws ec2 create-tags --resources YourInstanceId --tags Key="Patch Group",Value="Linux Servers"

Note: You must wait a few minutes until the Amazon EC2 instance is available before you can proceed to the next section. To make sure your Amazon EC2 instance is online and ready, you can use the following AWS CLI command:

$ aws ec2 describe-instance-status --instance-ids YourInstanceId

At this point, you now have at least one Amazon EC2 instance you can use to configure Systems Manager.
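
If you script this check, you can let a boto3 waiter poll the instance status for you instead of re-running the CLI command. This is a minimal sketch; replace YourInstanceId and the region with your own values.

# A minimal sketch: block until the instance passes its status checks.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
waiter = ec2.get_waiter("instance_status_ok")
# Polls describe-instance-status until the instance reports "ok" or the waiter times out.
waiter.wait(InstanceIds=["YourInstanceId"])
print("Instance is online and ready for Systems Manager.")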

Step 2: Configure Systems Manager

In this section, I show you how to configure and use Systems Manager to apply operating system patches to your Amazon EC2 instances, and how to manage patch compliance.

To start, I provide some background information about Systems Manager. Then, I cover how to:

  1. Create the Systems Manager IAM role so that Systems Manager is able to perform patch operations.
  2. Create a Systems Manager patch baseline and associate it with your instance to define which patches Systems Manager should apply.
  3. Define a maintenance window to make sure Systems Manager patches your instance when you tell it to.
  4. Monitor patch compliance to verify the patch state of your instances.

You must meet two prerequisites to use Systems Manager to apply operating system patches. First, you must attach the IAM role you created in the previous section, EC2SSM, to your Amazon EC2 instance. Second, you must install the Systems Manager agent on your Amazon EC2 instance. If you have used a recent Amazon Linux AMI, Amazon has already installed the Systems Manager agent on your Amazon EC2 instance. You can confirm this by logging in to an Amazon EC2 instance and checking the Systems Manager agent log files that are located at /var/log/amazon/ssm/.

To install the Systems Manager agent on an instance that does not have the agent preinstalled or if you want to use the Systems Manager agent on your on-premises servers, see Installing and Configuring the Systems Manager Agent on Linux Instances. If you forgot to attach the newly created role when launching your Amazon EC2 instance or if you want to attach the role to already running Amazon EC2 instances, see Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI or use the AWS Management Console.

A. Create the Systems Manager IAM role

For a maintenance window to be able to run any tasks, you must create a new role for Systems Manager. This role is a different kind of role than the one you created earlier: this role will be used by Systems Manager instead of Amazon EC2. Earlier, you created the role, EC2SSM, with the policy, AmazonEC2RoleforSSM, which allowed the Systems Manager agent on your instance to communicate with Systems Manager. In this section, you need a new role with the policy, AmazonSSMMaintenanceWindowRole, so that the Systems Manager service can execute commands on your instance.

To create the new IAM role for Systems Manager:

  1. Create a JSON file named trustpolicy-maintenancewindowrole.json that contains the following trust policy. This policy describes which principal is allowed to assume the role you are going to create. This trust policy allows not only Amazon EC2 to assume this role, but also Systems Manager.
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Sid":"",
             "Effect":"Allow",
             "Principal":{
                "Service":[
                   "ec2.amazonaws.com",
                   "ssm.amazonaws.com"
               ]
             },
             "Action":"sts:AssumeRole"
          }
       ]
    }

  2. Use the following command to create a role named MaintenanceWindowRole with the trust policy you just created. If the command is successful, it generates JSON-based output that describes the role and its parameters.
    $ aws iam create-role --role-name MaintenanceWindowRole --assume-role-policy-document file://trustpolicy-maintenancewindowrole.json

  3. Use the following command to attach the AWS managed IAM policy (AmazonSSMMaintenanceWindowRole) to your newly created role.
    $ aws iam attach-role-policy --role-name MaintenanceWindowRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonSSMMaintenanceWindowRole

B. Create a Systems Manager patch baseline and associate it with your instance

Next, you will create a Systems Manager patch baseline and associate it with your Amazon EC2 instance. A patch baseline defines which patches Systems Manager should apply to your instance. Before you can associate the patch baseline with your instance, though, you must determine if Systems Manager recognizes your Amazon EC2 instance. Use the following command to list all instances managed by Systems Manager. The --filters option ensures you look only for your newly created Amazon EC2 instance.

$ aws ssm describe-instance-information --filters Key=InstanceIds,Values=YourInstanceId

{
    "InstanceInformationList": [
        {
            "IsLatestVersion": true,
            "ComputerName": "ip-10-50-2-245",
            "PingStatus": "Online",
            "InstanceId": "YourInstanceId",
            "IPAddress": "10.50.2.245",
            "ResourceType": "EC2Instance",
            "AgentVersion": "2.2.120.0",
            "PlatformVersion": "2017.09",
            "PlatformName": "Amazon Linux AMI",
            "PlatformType": "Linux",
            "LastPingDateTime": 1515759143.826
        }
    ]
}

If your instance is missing from the list, verify that:

  1. Your instance is running.
  2. You attached the Systems Manager IAM role, EC2SSM.
  3. You deployed a NAT gateway in your public subnet to ensure your VPC reflects the diagram shown earlier in this post so that the Systems Manager agent can connect to the Systems Manager internet endpoint.
  4. The Systems Manager agent logs don’t include any unaddressed errors.

Now that you have checked that Systems Manager can manage your Amazon EC2 instance, it is time to create a patch baseline. With a patch baseline, you define which patches are approved to be installed on all Amazon EC2 instances associated with the patch baseline. The Patch Group resource tag you defined earlier will determine to which patch group an instance belongs. If you do not specifically define a patch baseline, the default AWS-managed patch baseline is used.

To create a patch baseline:

  1. Use the following command to create a patch baseline named AmazonLinuxServers. With approval rules, you can determine the approved patches that will be included in your patch baseline. In this example, you add all Critical severity patches to the patch baseline as soon as they are released, by setting the Auto approval delay to 0 days. By setting the Auto approval delay to 2 days, you add to this patch baseline the Important, Medium, and Low severity patches two days after they are released.
    $ aws ssm create-patch-baseline --name "AmazonLinuxServers" --description "Baseline containing all updates for Amazon Linux" --operating-system AMAZON_LINUX --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Values=[Critical],Key=SEVERITY}]},ApproveAfterDays=0,ComplianceLevel=CRITICAL},{PatchFilterGroup={PatchFilters=[{Values=[Important,Medium,Low],Key=SEVERITY}]},ApproveAfterDays=2,ComplianceLevel=HIGH}]"
    
    {
        "BaselineId": "YourBaselineId"
    }

  2. Use the following command to register the patch baseline you created with your instance. To do so, you use the Patch Group tag that you added to your Amazon EC2 instance.
    $ aws ssm register-patch-baseline-for-patch-group --baseline-id YourPatchBaselineId --patch-group "Linux Servers"
    
    {
        "PatchGroup": "Linux Servers",
        "BaselineId": "YourBaselineId"
    }

C. Define a maintenance window

Now that you have successfully set up a role, created a patch baseline, and registered your Amazon EC2 instance with your patch baseline, you will define a maintenance window so that you can control when your Amazon EC2 instances will receive patches. By creating multiple maintenance windows and assigning them to different patch groups, you can make sure your Amazon EC2 instances do not all reboot at the same time.

To define a maintenance window:

  1. Use the following command to define a maintenance window. In this example command, the maintenance window will start every Saturday at 10:00 P.M. UTC. It will have a duration of 4 hours and will not start any new tasks 1 hour before the end of the maintenance window.
    $ aws ssm create-maintenance-window --name SaturdayNight --schedule "cron(0 0 22 ? * SAT *)" --duration 4 --cutoff 1 --allow-unassociated-targets
    
    {
        "WindowId": "YourMaintenanceWindowId"
    }

For more information about defining a cron-based schedule for maintenance windows, see Cron and Rate Expressions for Maintenance Windows.

  2. After defining the maintenance window, you must register the Amazon EC2 instance with the maintenance window so that Systems Manager knows which Amazon EC2 instance it should patch in this maintenance window. You can register the instance by using the same Patch Group tag you used to associate the Amazon EC2 instance with the AWS-provided patch baseline, as shown in the following command.
    $ aws ssm register-target-with-maintenance-window --window-id YourMaintenanceWindowId --resource-type INSTANCE --targets "Key=tag:Patch Group,Values=Linux Servers"
    
    {
        "WindowTargetId": "YourWindowTargetId"
    }

  3. Assign a task to the maintenance window that will install the operating system patches on your Amazon EC2 instance. The command shown after this list of options registers the task.
    1. name is the name of your task and is optional. I named mine Patching.
    2. task-arn is the name of the task document you want to run.
    3. max-concurrency allows you to specify how many of your Amazon EC2 instances Systems Manager should patch at the same time. max-errors determines when Systems Manager should abort the task. For patching, this number should not be too low, because you do not want your entire patch task to stop on all instances if one instance fails. You can set this, for example, to 20%.
    4. service-role-arn is the Amazon Resource Name (ARN) of the AmazonSSMMaintenanceWindowRole role you created earlier in this blog post.
    5. task-invocation-parameters defines the parameters that are specific to the AWS-RunPatchBaseline task document and tells Systems Manager that you want to install patches with a timeout of 600 seconds (10 minutes).
      $ aws ssm register-task-with-maintenance-window --name "Patching" --window-id "YourMaintenanceWindowId" --targets "Key=WindowTargetIds,Values=YourWindowTargetId" --task-arn AWS-RunPatchBaseline --service-role-arn "arn:aws:iam::123456789012:role/MaintenanceWindowRole" --task-type "RUN_COMMAND" --task-invocation-parameters "RunCommand={Comment=,TimeoutSeconds=600,Parameters={SnapshotId=[''],Operation=[Install]}}" --max-concurrency "500" --max-errors "20%"
      
      {
          "WindowTaskId": "YourWindowTaskId"
      }

Now, you must wait for the maintenance window to run at least once according to the schedule you defined earlier. If your maintenance window has expired, you can check the status of any maintenance tasks Systems Manager has performed by using the following command.

$ aws ssm describe-maintenance-window-executions --window-id "YourMaintenanceWindowId"

{
    "WindowExecutions": [
        {
            "Status": "SUCCESS",
            "WindowId": "YourMaintenanceWindowId",
            "WindowExecutionId": "b594984b-430e-4ffa-a44c-a2e171de9dd3",
            "EndTime": 1515766467.487,
            "StartTime": 1515766457.691
        }
    ]
}
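
To drill down from a window execution into the status of the individual tasks that ran in it, you could also query the Systems Manager API directly. The following is a minimal sketch that assumes the maintenance window ID used earlier in this post; the response field names reflect the DescribeMaintenanceWindowExecutionTasks output, so verify them against the API reference.

# A minimal sketch: list recent window executions and the status of each task
# that ran in them. Replace YourMaintenanceWindowId with your value.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")
executions = ssm.describe_maintenance_window_executions(
    WindowId="YourMaintenanceWindowId")["WindowExecutions"]

for execution in executions:
    print(execution["WindowExecutionId"], execution["Status"])
    tasks = ssm.describe_maintenance_window_execution_tasks(
        WindowExecutionId=execution["WindowExecutionId"])["WindowExecutionTaskIdentities"]
    for task in tasks:
        # Each entry describes one task run, for example the AWS-RunPatchBaseline task.
        print("  ", task["TaskArn"], task["Status"])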

D. Monitor patch compliance

You also can see the overall patch compliance of all Amazon EC2 instances using the following command in the AWS CLI.

$ aws ssm list-compliance-summaries

This command returns, in JSON format, the number of instances that are compliant and the number that are noncompliant for each compliance category.
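
If you want to consume these numbers in a script or dashboard, a minimal sketch using the same API through boto3 might look like the following; the response field names reflect my reading of the ListComplianceSummaries output and are worth confirming against the Systems Manager API reference.

# A minimal sketch: print compliant and noncompliant instance counts per
# compliance type (for patching, look for the "Patch" compliance type).
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")
summaries = ssm.list_compliance_summaries()["ComplianceSummaryItems"]

for item in summaries:
    compliant = item.get("CompliantSummary", {}).get("CompliantCount", 0)
    noncompliant = item.get("NonCompliantSummary", {}).get("NonCompliantCount", 0)
    print(item["ComplianceType"], "compliant:", compliant, "noncompliant:", noncompliant)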

You also can see overall patch compliance by choosing Compliance under Insights in the navigation pane of the Systems Manager console. You will see a visual representation of how many Amazon EC2 instances are up to date, how many Amazon EC2 instances are noncompliant, and how many Amazon EC2 instances are compliant in relation to the earlier defined patch baseline.

Screenshot of the Compliance page of the Systems Manager console

In this section, you have set everything up for patch management on your instance. Now you know how to patch your Amazon EC2 instance in a controlled manner and how to check if your Amazon EC2 instance is compliant with the patch baseline you have defined. Of course, I recommend that you apply these steps to all Amazon EC2 instances you manage.

Summary

In this blog post, I showed how to use Systems Manager to create a patch baseline and maintenance window to keep your Amazon EC2 Linux instances up to date with the latest security patches. Remember that by creating multiple maintenance windows and assigning them to different patch groups, you can make sure your Amazon EC2 instances do not all reboot at the same time.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing any part of this solution, start a new thread on the Amazon EC2 forum or contact AWS Support.

– Koen

Power data ingestion into Splunk using Amazon Kinesis Data Firehose

Post Syndicated from Tarik Makota original https://aws.amazon.com/blogs/big-data/power-data-ingestion-into-splunk-using-amazon-kinesis-data-firehose/

In late September, during the annual Splunk .conf, Splunk and Amazon Web Services (AWS) jointly announced that Amazon Kinesis Data Firehose now supports Splunk Enterprise and Splunk Cloud as a delivery destination. This native integration between Splunk Enterprise, Splunk Cloud, and Amazon Kinesis Data Firehose is designed to make AWS data ingestion setup seamless, while offering a secure and fault-tolerant delivery mechanism. We want to enable customers to monitor and analyze machine data from any source and use it to deliver operational intelligence and optimize IT, security, and business performance.

With Kinesis Data Firehose, customers can use a fully managed, reliable, and scalable data streaming solution to Splunk. In this post, we tell you a bit more about the Kinesis Data Firehose and Splunk integration. We also show you how to ingest large amounts of data into Splunk using Kinesis Data Firehose.

Push vs. Pull data ingestion

Presently, customers use a combination of two ingestion patterns, primarily based on data source and volume, in addition to existing company infrastructure and expertise:

  1. Pull-based approach: Using dedicated pollers running the popular Splunk Add-on for AWS to pull data from various AWS services such as Amazon CloudWatch or Amazon S3.
  2. Push-based approach: Streaming data directly from AWS to Splunk HTTP Event Collector (HEC) by using AWS Lambda. Examples of applicable data sources include CloudWatch Logs and Amazon Kinesis Data Streams.

The pull-based approach offers data delivery guarantees such as retries and checkpointing out of the box. However, it requires more operational overhead to manage and orchestrate the dedicated pollers, which commonly run on Amazon EC2 instances. With this setup, you pay for the infrastructure even when it’s idle.

On the other hand, the push-based approach offers a low-latency scalable data pipeline made up of serverless resources like AWS Lambda sending directly to Splunk indexers (by using Splunk HEC). This approach translates into lower operational complexity and cost. However, if you need guaranteed data delivery then you have to design your solution to handle issues such as a Splunk connection failure or Lambda execution failure. To do so, you might use, for example, AWS Lambda Dead Letter Queues.

How about getting the best of both worlds?

Let’s go over the new integration’s end-to-end solution and examine how Kinesis Data Firehose and Splunk together expand the push-based approach into a native AWS solution for applicable data sources.

By using a managed service like Kinesis Data Firehose for data ingestion into Splunk, we provide out-of-the-box reliability and scalability. One of the pain points of the old approach was the overhead of managing the data collection nodes (Splunk heavy forwarders). With the new Kinesis Data Firehose to Splunk integration, there are no forwarders to manage or set up. Data producers (1) are configured through the AWS Management Console to drop data into Kinesis Data Firehose.

You can also create your own data producers. For example, you can drop data into a Firehose delivery stream by using Amazon Kinesis Agent, or by using the Firehose API (PutRecord(), PutRecordBatch()), or by writing to a Kinesis Data Stream configured to be the data source of a Firehose delivery stream. For more details, refer to Sending Data to an Amazon Kinesis Data Firehose Delivery Stream.
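
As an illustration, a simple custom producer might drop a test event into the delivery stream with just a few lines of code. This is a minimal sketch; FirehoseSplunkDeliveryStream matches the delivery stream name used later in this walkthrough, and the event payload is made up for the example.

# A minimal sketch: write a single test record to a Firehose delivery stream.
import json

import boto3

firehose = boto3.client("firehose", region_name="us-east-1")
firehose.put_record(
    DeliveryStreamName="FirehoseSplunkDeliveryStream",
    # Firehose delivers raw bytes; a trailing newline keeps events separable downstream.
    Record={"Data": (json.dumps({"event": "test", "source": "custom-producer"}) + "\n").encode("utf-8")},
)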

You might need to transform the data before it goes into Splunk for analysis. For example, you might want to enrich it or filter or anonymize sensitive data. You can do so using AWS Lambda. In this scenario, Kinesis Data Firehose buffers data from the incoming source data, sends it to the specified Lambda function (2), and then rebuffers the transformed data to the Splunk Cluster. Kinesis Data Firehose provides the Lambda blueprints that you can use to create a Lambda function for data transformation.

Systems fail all the time. Let’s see how this integration handles outside failures to guarantee data durability. In cases when Kinesis Data Firehose can’t deliver data to the Splunk Cluster, data is automatically backed up to an S3 bucket. You can configure this feature while creating the Firehose delivery stream (3). You can choose to back up all data or only the data that’s failed during delivery to Splunk.

In addition to using S3 for data backup, this Firehose integration with Splunk supports Splunk Indexer Acknowledgments to guarantee event delivery. This feature is configured on Splunk’s HTTP Event Collector (HEC) (4). It ensures that HEC returns an acknowledgment to Kinesis Data Firehose only after data has been indexed and is available in the Splunk cluster (5).

Now let’s look at a hands-on exercise that shows how to forward VPC flow logs to Splunk.

How-to guide

To process VPC flow logs, we implement the following architecture.

Amazon Virtual Private Cloud (Amazon VPC) delivers flow log files into an Amazon CloudWatch Logs group. Using a CloudWatch Logs subscription filter, we set up real-time delivery of CloudWatch Logs to a Kinesis Data Firehose stream.

Data coming from CloudWatch Logs is compressed with gzip compression. To work with this compression, we need to configure a Lambda-based data transformation in Kinesis Data Firehose to decompress the data and deposit it back into the stream. Firehose then delivers the raw logs to the Splunk HTTP Event Collector (HEC).
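
A stripped-down version of such a transformation function might look like the following sketch. It is not the exact Kinesis Data Firehose CloudWatch Logs Processor blueprint, but it illustrates the pattern: gunzip each record, drop control messages, and return the log event messages in the recordId/result/data format that Firehose expects from a transformation Lambda.

# A minimal sketch of a Firehose data-transformation Lambda (not the exact
# blueprint code): it gunzips each CloudWatch Logs record, drops control
# messages, and re-emits the log event messages.
import base64
import gzip
import json

def lambda_handler(event, context):
    output = []
    for record in event["records"]:
        payload = json.loads(gzip.decompress(base64.b64decode(record["data"])))
        if payload.get("messageType") != "DATA_MESSAGE":
            # CONTROL_MESSAGE records (and anything unexpected) are dropped.
            output.append({"recordId": record["recordId"], "result": "Dropped", "data": record["data"]})
            continue
        # Re-emit the raw log messages, one per line, as uncompressed data.
        messages = "\n".join(e["message"] for e in payload["logEvents"]) + "\n"
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(messages.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}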

If delivery to the Splunk HEC fails, Firehose deposits the logs into an Amazon S3 bucket. You can then ingest the events from S3 using an alternate mechanism such as a Lambda function.

When data reaches Splunk (Enterprise or Cloud), Splunk parsing configurations (packaged in the Splunk Add-on for Kinesis Data Firehose) extract and parse all fields. They make data ready for querying and visualization using Splunk Enterprise and Splunk Cloud.

Walkthrough

Install the Splunk Add-on for Amazon Kinesis Data Firehose

The Splunk Add-on for Amazon Kinesis Data Firehose enables Splunk (be it Splunk Enterprise, Splunk App for AWS, or Splunk Enterprise Security) to use data ingested from Amazon Kinesis Data Firehose. Install the Add-on on all the indexers with an HTTP Event Collector (HEC). The Add-on is available for download from Splunkbase.

HTTP Event Collector (HEC)

Before you can use Kinesis Data Firehose to deliver data to Splunk, set up the Splunk HEC to receive the data. From Splunk web, go to the Settings menu, choose Data Inputs, and choose HTTP Event Collector. Choose Global Settings, ensure All tokens is enabled, and then choose Save. Then choose New Token to create a new HEC endpoint and token. When you create a new token, make sure that Enable indexer acknowledgment is checked.

When prompted to select a source type, select aws:cloudwatch:vpcflow.

Create an S3 backsplash bucket

To provide for situations in which Kinesis Data Firehose can’t deliver data to the Splunk Cluster, we use an S3 bucket to back up the data. You can configure this feature to back up all data or only the data that’s failed during delivery to Splunk.

Note: Bucket names are unique. Thus, you can’t use tmak-backsplash-bucket.

aws s3api create-bucket --bucket tmak-backsplash-bucket --create-bucket-configuration LocationConstraint=ap-northeast-1

Create an IAM role for the Lambda transform function

Firehose triggers an AWS Lambda function that transforms the data in the delivery stream. Let’s first create a role for the Lambda function called LambdaBasicRole.

Note: You can also set this role up when creating your Lambda function.

$ aws iam create-role --role-name LambdaBasicRole --assume-role-policy-document file://TrustPolicyForLambda.json

Here is TrustPolicyForLambda.json.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

After the role is created, attach the managed Lambda basic execution policy to it.

$ aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole \
  --role-name LambdaBasicRole

Create a Firehose Stream

On the AWS console, open the Amazon Kinesis service, go to the Firehose console, and choose Create Delivery Stream.

In the next section, you can specify whether you want to use an inline Lambda function for transformation. Because incoming CloudWatch Logs are gzip compressed, choose Enabled for Record transformation, and then choose Create new.

From the list of the available blueprint functions, choose Kinesis Data Firehose CloudWatch Logs Processor. This function unzips the data and places it back into the Firehose stream in compliance with the record transformation output model.

Enter a name for the Lambda function, choose Choose an existing role, and then choose the role you created earlier. Then choose Create Function.

Go back to the Firehose Stream wizard, choose the Lambda function you just created, and then choose Next.

Select Splunk as the destination, and enter your Splunk HTTP Event Collector information.

Note: Amazon Kinesis Data Firehose requires the Splunk HTTP Event Collector (HEC) endpoint to be terminated with a valid CA-signed certificate matching the DNS hostname used to connect to your HEC endpoint. You receive delivery errors if you are using a self-signed certificate.

In this example, we only back up logs that fail during delivery.

To monitor your Firehose delivery stream, enable error logging. Doing this means that you can monitor record delivery errors.

Create an IAM role for the Firehose stream by choosing Create new, or Choose. Doing this brings you to a new screen. Choose Create a new IAM role, give the role a name, and then choose Allow.

If you look at the policy document, you can see that the role gives Kinesis Data Firehose permission to publish error logs to CloudWatch, execute your Lambda function, and put records into your S3 backup bucket.

You now get a chance to review and adjust the Firehose stream settings. When you are satisfied, choose Create Stream. You get a confirmation once the stream is created and active.

Create a VPC Flow Log

To send events from Amazon VPC, you need to set up a VPC flow log. If you already have a VPC flow log you want to use, you can skip to the “Publish CloudWatch to Kinesis Data Firehose” section.

On the AWS console, open the Amazon VPC service. Then choose VPC, Your VPC, and choose the VPC you want to send flow logs from. Choose Flow Logs, and then choose Create Flow Log. If you don’t have an IAM role that allows your VPC to publish logs to CloudWatch, choose Set Up Permissions and Create new role. Use the defaults when presented with the screen to create the new IAM role.

Once active, your VPC flow log should look like the following.

Publish CloudWatch to Kinesis Data Firehose

When you generate traffic to or from your VPC, the log group is created in Amazon CloudWatch. The new log group has no subscription filter, so set up a subscription filter. Setting this up establishes a real-time data feed from the log group to your Firehose delivery stream.

At present, you have to use the AWS Command Line Interface (AWS CLI) to create a CloudWatch Logs subscription to a Kinesis Data Firehose stream. However, you can use the AWS console to create subscriptions to Lambda and Amazon Elasticsearch Service.

To allow CloudWatch to publish to your Firehose stream, you need to give it permissions.

$ aws iam create-role --role-name CWLtoKinesisFirehoseRole --assume-role-policy-document file://TrustPolicyForCWLToFireHose.json


Here is the content for TrustPolicyForCWLToFireHose.json.

{
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "logs.us-east-1.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }
}

Attach the policy to the newly created role.

$ aws iam put-role-policy \
    --role-name CWLtoKinesisFirehoseRole \
    --policy-name Permissions-Policy-For-CWL \
    --policy-document file://PermissionPolicyForCWLToFireHose.json

Here is the content for PermissionPolicyForCWLToFireHose.json.

{
    "Statement":[
      {
        "Effect":"Allow",
        "Action":["firehose:*"],
        "Resource":["arn:aws:firehose:us-east-1:YOUR-AWS-ACCT-NUM:deliverystream/ FirehoseSplunkDeliveryStream"]
      },
      {
        "Effect":"Allow",
        "Action":["iam:PassRole"],
        "Resource":["arn:aws:iam::YOUR-AWS-ACCT-NUM:role/CWLtoKinesisFirehoseRole"]
      }
    ]
}

Finally, create a subscription filter.

$ aws logs put-subscription-filter \
   --log-group-name "/vpc/flowlog/FirehoseSplunkDemo" \
   --filter-name "Destination" \
   --filter-pattern "" \
   --destination-arn "arn:aws:firehose:us-east-1:YOUR-AWS-ACCT-NUM:deliverystream/FirehoseSplunkDeliveryStream" \
   --role-arn "arn:aws:iam::YOUR-AWS-ACCT-NUM:role/CWLtoKinesisFirehoseRole"

When you run the preceding AWS CLI command, you don’t get any acknowledgment. To validate that your CloudWatch Log Group is subscribed to your Firehose stream, check the CloudWatch console.
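
If you prefer to validate from code instead of the console, a quick check might look like the following sketch; the log group name matches the one used in the CLI example above.

# A minimal sketch: confirm the log group has a subscription filter pointing at
# the Firehose delivery stream.
import boto3

logs = boto3.client("logs", region_name="us-east-1")
filters = logs.describe_subscription_filters(
    logGroupName="/vpc/flowlog/FirehoseSplunkDemo")["subscriptionFilters"]

for f in filters:
    print(f["filterName"], "->", f["destinationArn"])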

As soon as the subscription filter is created, the real-time log data from the log group goes into your Firehose delivery stream. Your stream then delivers it to your Splunk Enterprise or Splunk Cloud environment for querying and visualization. The screenshot following is from Splunk Enterprise.

In addition, you can monitor and view metrics associated with your delivery stream using the AWS console.
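
As one programmatic option, you might pull a delivery metric from CloudWatch. In the sketch below, the AWS/Firehose namespace and the DeliveryToSplunk.Success metric name are my assumptions about the relevant Firehose metric, so confirm them against the Kinesis Data Firehose monitoring documentation before relying on them.

# A minimal sketch: check the delivery-success metric for the stream over the
# last hour. The namespace and metric name are assumptions; verify them in the
# Kinesis Data Firehose monitoring documentation.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Firehose",
    MetricName="DeliveryToSplunk.Success",
    Dimensions=[{"Name": "DeliveryStreamName", "Value": "FirehoseSplunkDeliveryStream"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
print(stats["Datapoints"])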

Conclusion

Although our walkthrough uses VPC Flow Logs, the pattern can be used in many other scenarios. These include ingesting data from AWS IoT, other CloudWatch logs and events, Kinesis Streams, or other data sources using the Kinesis Agent or Kinesis Producer Library. We also used the Lambda blueprint Kinesis Data Firehose CloudWatch Logs Processor to transform streaming records from Kinesis Data Firehose. However, you might need to use a different Lambda blueprint or disable record transformation entirely depending on your use case. For an additional use case using Kinesis Data Firehose, check out the This Is My Architecture video, which discusses how to securely centralize cross-account data analytics using Kinesis and Splunk.


Additional Reading

If you found this post useful, be sure to check out Integrating Splunk with Amazon Kinesis Streams and Using Amazon EMR and Hunk for Rapid Response Log Analysis and Review.


About the Authors

Tarik Makota is a solutions architect with the Amazon Web Services Partner Network. He provides technical guidance, design advice, and thought leadership to AWS’ most strategic software partners. His career includes work in an extremely broad range of software development and architecture roles across ERP, financial printing, benefit delivery and administration, and financial services. He holds an M.S. in Software Development and Management from Rochester Institute of Technology.


Roy Arsan is a solutions architect in the Splunk Partner Integrations team. He has a background in product development, cloud architecture, and building consumer and enterprise cloud applications. More recently, he has architected Splunk solutions on major cloud providers, including an AWS Quick Start for Splunk that enables AWS users to easily deploy distributed Splunk Enterprise straight from their AWS console. He’s also the co-author of the AWS Lambda blueprints for Splunk. He holds an M.S. in Computer Science Engineering from the University of Michigan.