Tag Archives: AWS Organizations

Architecture Patterns for Red Hat OpenShift on AWS

Post Syndicated from Ryan Niksch original https://aws.amazon.com/blogs/architecture/architecture-patterns-for-red-hat-openshift-on-aws/

Editor’s note: Although this blog post and its accompanying code make use of the word “Master,” Red Hat is making open source code more inclusive by eradicating “problematic language.” Read more about this.

Introduction

Red Hat OpenShift is a turnkey application platform that provides customers with much more than simple Kubernetes orchestration.

OpenShift customers choose AWS as their cloud of choice because of the efficiency, security, reliability, scalability, and elasticity it provides. Customers seeking to modernize their business processes and application stacks are drawn to the rich AWS service and feature sets.

As such, we see some customers migrate from on-premises to AWS, while others operate in a hybrid context with application workloads running in various locations. For OpenShift customers, this poses a few questions and considerations:

  • What are the recommendations for the best way to deploy OpenShift on AWS?
  • How is this different from what customers were used to on-premises?
  • How does this ensure resilience and availability?
  • Do customers need a multi-region, multi-account approach?

For hybrid customers, there are assumptions and misconceptions:

  • Where does the control plane exist?
  • Is there replication, and if so, what are the considerations and ramifications?

In this post, I will run through some of the more common questions and patterns for OpenShift on AWS, while looking at some of the terminology and conceptual differences of AWS. I’ll explore migration and hybrid use cases and address some misconceptions.

OpenShift building blocks

On AWS, OpenShift 4.x is the norm. To that effect, I will focus on OpenShift 4, but many of the considerations apply to both OpenShift 3 and OpenShift 4.

Let’s unpack some of the OpenShift building blocks. An OpenShift cluster consists of Master, infrastructure, and worker nodes. The Masters form the control plane, and the infrastructure nodes cater to a routing layer and additional functions such as logging and monitoring. Worker nodes are where customer application container workloads run.

When deployed on-premises, OpenShift nodes are placed in separate network subnets. Depending on distance, latency, and similar factors, a single OpenShift cluster may span two data centers, with some nodes in a subnet in one data center and other nodes in subnets in a different data center. This applies to customers with data centers within a few miles of each other and high-speed connectivity between them. The alternative is an OpenShift cluster in each data center.

AWS concepts and terminology

At AWS, a Region is a geographic location, such as those in EMEA (Europe, Middle East, and Africa) or APAC (Asia Pacific), rather than a data center or specific building. An Availability Zone (AZ) is the AWS construct that maps most closely to a physical data center. Within each Region you will find multiple (typically three or more) AZs. Note that a single AZ can contain multiple physical data centers, but we treat it as a single point of failure: an event that impacts an AZ should be expected to impact all the data centers within that AZ. To this effect, customers should deploy workloads spanning multiple AZs to protect against any event that would impact a single AZ.

Read more about Regions, Availability Zones, and Edge Locations.

Deploying OpenShift

When deploying an OpenShift cluster on AWS, we recommend starting with three Master nodes and three worker nodes, each set spread across three AWS AZs. This combines the resilience and availability constructs provided by AWS with those of Red Hat OpenShift. The OpenShift installer provides a means of deploying the underlying AWS infrastructure in two ways: installer-provisioned infrastructure (IPI) and user-provisioned infrastructure (UPI). Both Red Hat and AWS collect customer feedback and use it to drive the recommended patterns that are then included in the OpenShift installer. As such, the installer’s IPI mode becomes a living reference architecture for deploying OpenShift on AWS.


The installer requires inputs for the environment on which it’s being deployed. In this case, since I am deploying on AWS, I need to provide the AWS Region, the AZs or the subnets that relate to those AZs, and the EC2 instance types. The installer captures these inputs in an install-config.yaml file, from which it generates the ignition files used during the deployment of OpenShift:

apiVersion: v1
baseDomain: example.com 
controlPlane: 
  hyperthreading: Enabled   
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      - us-west-2c
      rootVolume:
        iops: 4000
        size: 500
        type: io1
      type: m5.xlarge 
  replicas: 3
compute: 
- hyperthreading: Enabled 
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 
      type: m5.xlarge
      zones:
      - us-west-2a
      - us-west-2b
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 
    userTags:
      adminContact: jdoe
      costCenter: 7536
pullSecret: '{"auths": ...}' 
fips: false 
sshKey: ssh-ed25519 AAAA... 

What does this look like at scale?

For larger implementations, we would see additional worker nodes spread across three or more AZs. As more worker nodes are added, load on the control plane increases. Initially, scaling the Amazon Elastic Compute Cloud (Amazon EC2) instances up to a larger instance type is an effective way of addressing this. It’s possible to add more Master nodes, and we recommend maintaining an odd number of them, but it is more common to scale out the infrastructure nodes before there is a need to scale Masters. For large-scale implementations, infrastructure functions such as the router, monitoring, and logging can be moved to EC2 instances separate from the Master nodes, as well as from each other. Spreading the routing layer across multiple AZs is critical to maintaining availability and resilience.

Resource separation is now controlled by infrastructure machine sets within OpenShift. You define an infrastructure machine set, then edit the infrastructure role so that it moves from the default to this new infrastructure machine set. Read about this in greater detail.
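As a rough sketch (the cluster name test-cluster is reused from the install-config example above; the replica count and labels are illustrative, and AWS provider details such as the AMI, subnet, security groups, and IAM instance profile are omitted), an infrastructure machine set for a single AZ looks something like this:

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: test-cluster-infra-us-west-2a
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: test-cluster
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: test-cluster
      machine.openshift.io/cluster-api-machineset: test-cluster-infra-us-west-2a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: test-cluster
        machine.openshift.io/cluster-api-machine-role: infra
        machine.openshift.io/cluster-api-machine-type: infra
        machine.openshift.io/cluster-api-machineset: test-cluster-infra-us-west-2a
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""
      providerSpec:
        value:
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          kind: AWSMachineProviderConfig
          instanceType: m5.xlarge
          placement:
            availabilityZone: us-west-2a
            region: us-west-2
          # AMI, IAM instance profile, subnet, and security groups omitted for brevity

You would create one such machine set per AZ, then point infrastructure workloads (for example, the ingress router’s node placement) at the node-role.kubernetes.io/infra label.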

OpenShift in a multi-account context

Using AWS accounts as a means of separation is a common, well-architected pattern. AWS Organizations and AWS Control Tower are services that are commonly adopted as part of a multi-account strategy. This is very much the case when teams need to use their own accounts and when an account vending process is needed to cater for self-service account provisioning.


OpenShift clusters are deployed into multiple accounts. An OpenShift dev cluster is deployed into an AWS dev account, which would typically have AWS Developer Support associated with it. A separate production OpenShift cluster is provisioned into an AWS production account with AWS Enterprise Support. Enterprise Support provides faster support case response times, along with the benefit of dedicated resources such as a technical account manager and solutions architect.

CI/CD pipelines and processes are then used to control the application life cycle from code to dev to production. The pipelines push the code to different OpenShift cluster endpoints at different stages of the life cycle.

Hybrid use case implementation

A common misconception of hybrid implementations is that there is a single cluster or control plane with worker nodes in various locations: for example, a cluster whose Master and infrastructure nodes are deployed in one location, with worker nodes registered to that cluster existing on-premises as well as in the cloud.

Having a single control plane span a hybrid implementation, even if technically possible, introduces undesired risks.

There is the potential to take multiple environments with very different resilience characteristics and make them interdependent. This can result in performance and reliability issues, increasing not only the possibility of a risk manifesting, but also its impact or blast radius.

Instead, hybrid implementations see separate OpenShift clusters deployed into various locations. A customer may deploy clusters on-premises to cater for workloads that can’t be migrated to the cloud in the short term. Separate OpenShift clusters can then be deployed into accounts in AWS for workloads in the cloud. Customers can also deploy separate OpenShift clusters in different AWS Regions to cater for proximity to the consuming customer.

Though adding multiple clusters doesn’t add significant administrative overhead, there is a desire to gain visibility and telemetry for all the deployed clusters from a central location. This may see the OpenShift clusters registered with Red Hat Advanced Cluster Management for Kubernetes.

Summary

Take advantage of the IPI model, not only as a guide but also to save time. Make AWS Organizations, AWS Control Tower, and AWS Service Catalog part of your cloud and hybrid strategies. These will not only speed up migrations but also form building blocks for a modernized business with a focus on enabling prescriptive self-service. Consider Red Hat Advanced Cluster Management for multi-cluster management.

Field Notes: Building a Shared Account Structure Using AWS Organizations

Post Syndicated from Abhijit Vaidya original https://aws.amazon.com/blogs/architecture/field-notes-building-a-shared-account-structure-using-aws-organizations/

For customers considering the AWS Solution Provider Program, there are challenges to mitigate when building a shared account model with systems integrator (SI) partners. AWS Organizations makes it possible to build the right account structure to support a resale arrangement. In this engagement model, the end customer receives an AWS invoice from an AWS authorized partner instead of from AWS directly.

Partners and customers who want to engage in this service resale arrangement need to build a new account structure. This process includes linking or transferring existing customer accounts to the partner master account, so that all the billing data from customer accounts is consolidated into the partner master account.

While linking or transferring existing customer accounts to the new master account, the partner must ensure that the new master account structure does not compromise any customer security controls and continues to give the customer full control of the linked accounts. The new account structure must fulfill the following requirements for both the AWS customer and the partner:

  • The customer maintains full access to the AWS organization and is able to perform all critical security-related tasks, except access to billing data.
  • The partner is able to control only billing information and is not able to perform any other task in the root account (master payer account) without approval from the customer.
  • In case of contract breach or termination, the customer is able to regain full control of all accounts, including the master.

In this post, you will learn how partners can create a master account with shared ownership. We also show how to link or transfer customer organization accounts to the new organization master account, and how to set up policies that provide appropriate access to both partner and customer.

Account Structure

The following architecture represents the account structure setup that fulfills customer and partner requirements as part of a service resale arrangement.

Architectural Components

As a part of the resale arrangement, the customer’s existing AWS organization and related accounts are linked to the partner’s master payer account. The customer continues to maintain their existing master root account, while all child accounts are linked to the new master account.

Customers may have valid concerns about linking or transferring accounts to a partner-owned master payer account, and may raise “what-if” scenarios, for example: “What if the partner shuts down our environment or servers?” or “What if the partner blocks access to our child accounts?”

This account structure provides the right controls to both customer and partner to address customer concerns about the security of their accounts. This includes the following benefits:

  • Starting with access to the root account, neither customer nor partner can access the root account without the other party’s involvement.
  • The partner controls the ID and password for the root account, while the customer maintains the MFA token for the account. The customer also controls the phone number and security questions associated with the root account. That way, the partner cannot replace the MFA token on their own.
  • The partner only has billing access and does not control any other part of the account, including child accounts. Any time root account access is needed, the customer and partner teams must collaborate to access the root account.
  • Neither the customer nor the partner can assign new IAM roles to themselves, protecting the initial account setup.

Security considerations in the shared account setup

The following points highlight customer and partner responsibilities, the access controls provided by the architecture in the previous section, and security recommendations that give adequate access rights to both parties.

  • The new master payer (root) account is jointly owned by the partner and the customer.
  • The AWS account root user credentials (user ID and password) are held by the partner, and the MFA (multi-factor authentication) device by the customer.
  • An IAM (AWS Identity and Access Management) role is created under the master payer account with the “FullOrganizationAccess”, “Amazon S3” (Amazon Simple Storage Service), “CloudTrail” (AWS CloudTrail), and “CloudWatch” (Amazon CloudWatch) policies, so that the customer’s security team can log in and manage the security of the account. Additional permissions can be added to this role as needed in the future. This role does not have ANY billing permissions.
  • Only the partner has access to AWS billing and usage data.
  • An IAM role or user is created under the master payer account with only billing permissions, so that the partner team can log in and download all invoices. This role does not have any permissions other than billing and usage reports.
  • Any root login attempt on the master payer account triggers a notification to the customer’s SOC team and the partner’s customer account management team.
  • The partner’s email address is used to create the account, so that invoices can be emailed to the partner. The customer cannot see these invoices.
  • The customer’s phone number is used to create the master account, and the customer maintains the security questions and answers. This prevents replacement of the MFA token by the partner team without informing the customer. The customer doesn’t need the partner’s help or permission to log in and manage security.
  • No one from the partner team can log in to the master payer (root) account without the customer providing an MFA token.

Setting up the shared master AWS account structure

Create a playbook for the account transfer activity based on the following tasks. For each task, identify the owner. Make sure that owners have the right permissions to perform the tasks.

Part I – Setting up new partner master account

  1. Create the new Partner Master Payer Account.
  2. Update the payment details section of the Partner Master Payer Account with the required payment details.
  3. Enable MFA in the Partner Master Payer Account.
  4. Update the contacts for security and operations in the Partner Master Payer Account.
  5. Update demographics: address and contact details in the Partner Master Payer Account.
  6. Create an IAM role for the customer team in the Partner Master Payer Account. The IAM role is created under the master payer account with the “FullOrganizationAccess”, “Amazon S3”, “CloudTrail”, “CloudWatch”, and “CloudFormationFullAccess” policies, so that the customer SOC team can log in and manage the security of the account. Additional permissions can be added to this role in the future if needed.


7. Create an IAM role or user for the partner billing role in the Partner Master Payer Account.
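A minimal sketch of the billing-only policy for this role might look like the following (the Sid is illustrative, and this assumes the legacy aws-portal billing actions that were current when this post was written):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PartnerBillingAccessOnly",
            "Effect": "Allow",
            "Action": [
                "aws-portal:ViewBilling",
                "aws-portal:ViewUsage"
            ],
            "Resource": "*"
        }
    ]
}

Because nothing else is allowed, the partner team can view and download invoices and usage reports but cannot act on the organization or its member accounts.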

Part II – Setting up customer master account

1. Create an IAM user in the customer’s master account. This user assumes a role in the new master payer (root) account.


 

2. Confirm that when the IAM user from the customer account assumes a role in the new master account, the user does not have billing access.

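The cross-account access set up in the two steps above relies on the trust policy of the customer role created in Part I. A minimal sketch, in which the account ID 111111111111 and the user name customer-soc-admin are placeholders for the customer’s master account and IAM user, might look like:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCustomerAssumeRole",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111111111111:user/customer-soc-admin"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "Bool": {
                    "aws:MultiFactorAuthPresent": "true"
                }
            }
        }
    ]
}

Requiring MFA on the assume-role call keeps the cross-account path consistent with the MFA-protected root login described earlier.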

Part III – Creating an organization structure in partner account

  1. Create an organization in the Partner Master Payer Account
  2. Create multiple organizational units (OUs) in the Partner Master Payer Account

 

3. Enable service control policies from the AWS Organizations Policies menu.


4. Copy service control policies from the customer root account into the Partner Master Payer Account. Any service control policies from the customer root account must be manually recreated in the new partner account.

5. If the customer root account has any special software installed (for example, security tooling), install the same software in the Partner Master Payer Account.

6. Set alerts in the Partner Master Payer root account, so that any login to the root account sends alerts to the customer and partner teams (a sample event pattern follows this list).

7. It is recommended to keep a copy of all billing history invoices for all accounts to be transferred to the partner organization. You can do this by downloading CSVs or printing all invoices, and storing the files in Amazon S3 for long-term archival. Billing history and invoices are found by choosing Orders and Invoices on the Billing & Cost Management dashboard. After accounts are transferred to the new organization, historic billing data will not be available for those accounts.

8. Remove all the member accounts from the current customer root account and organization. This step is performed by the customer account admin and is required before accounts can be transferred to the partner account organization.

9. Send an invite from the Partner Master Payer Account to the delinked member account.


10. The member accounts accept the invite from the Partner Master Payer Account.


11. Move the customer member account to the appropriate OU in the Partner Master Payer Account.
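Step 6 above calls for alerting on any root login. One way to implement this, sketched here on the assumption that you deliver the notification through an Amazon SNS topic you have created, is an Amazon EventBridge (CloudWatch Events) rule in the master payer account whose event pattern matches console sign-ins by the root user:

{
    "source": ["aws.signin"],
    "detail-type": ["AWS Console Sign In via CloudTrail"],
    "detail": {
        "userIdentity": {
            "type": ["Root"]
        }
    }
}

Pointing the rule at an SNS topic subscribed to by both the customer’s SOC team and the partner’s account management team satisfies the dual-notification requirement described earlier.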

Setting up the shared security model between partner and customer

While setting up the master account, three contacts need to be updated for notifications:

  • Billing – owned by the partner
  • Operations – owned by the customer
  • Security – owned by the customer

These contacts receive notification of any activity on the root account. The contact details contain name, title, email address, and phone number. It is recommended to use the customer’s SOC team distribution email for security and operations, and a phone number that belongs to the organization rather than an individual.


Additionally, before any root account activity takes place, AWS Support will verify the caller using the security challenge questions. These questions and answers are owned by the customer’s SOC team.


If a customer is not able to access the AWS account, alternate support options are available at Contact Us by expanding the “I’m an AWS customer and I’m looking for billing or account support” menu. When contacting AWS Support, all the details listed on the account are needed, including full name, phone number, address, email address, and the last four digits of the credit card.

Clean Up

After recovering the account, the customer should close any accounts that are not in use. It’s a good idea not to have open accounts in your name that could result in charges. For more information, review Closing an Account in the Billing and Cost Management User Guide.

The shared master root account should be used only for the selected activities that require root credentials, which are listed in the AWS documentation.

Conclusion

In this post, you learned how AWS Organizations features can be used to create a shared master account structure that helps customers and partners engage in a service resale arrangement. Using AWS Organizations and cross-account access, this solution allows the customer to control all key aspects of managing the AWS organization (security, logging, and monitoring) while allowing the partner to control billing-related data.

Additional Resources

Cross Account Access

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

Securing resource tags used for authorization using a service control policy in AWS Organizations

Post Syndicated from Michael Chan original https://aws.amazon.com/blogs/security/securing-resource-tags-used-for-authorization-using-service-control-policy-in-aws-organizations/

In this post, I explain how you can use attribute-based access controls (ABAC) in Amazon Web Services (AWS) to help provision simple, maintainable access controls to different projects, teams, and workloads as your organization grows. ABAC gives you access to granular permissions and employee-attribute based authorization. By using ABAC, you need fewer AWS Identity and Access Management (IAM) roles and have less administrative overhead. The attributes used by ABAC in AWS are called tags, which are metadata associated with the principals (users or roles) and resources within your AWS account. Securing tags designated for authorization is important because they’re used to grant access to your resources. This post describes an approach to secure these tags across all of your AWS accounts through the use of AWS Organizations service control policies (SCPs) so that only authorized users can modify a resource’s tags. Organizations helps you centrally govern your accounts as you grow and scale your workloads on AWS; SCPs are a feature of Organizations that restricts what services and actions are allowed in your accounts. Together they provide flexible account management and security features that help you centrally secure your business.

Let’s say you want to give users a standard way to access their accounts and resources in AWS. For authentication, you want to continue using your existing identity provider (IdP). For authorization, you’ll need a method that can be scaled to meet the needs of your organization. To accomplish this, you can use ABAC in AWS, which requires fewer roles, helps you control permissions for many resources at once, and allows for simple access rules to different projects and teams. For example, all developers in your organization can have a project name assigned to their identities as a tag. The tag will be used as the basis for access to AWS resources via ABAC. Because access is granted based on these tags, they need to be secured from unauthorized changes.

Securing authorization tags

Using ABAC as your access control strategy for AWS depends on securing the tags associated with identities in your AWS organization’s accounts as well as with the resources those identities need to access. When users sign in, the authentication process looks something like this:
 

Figure 1: Using tags for secure access to resources


  1. Their identities are passed into AWS through federation.
  2. Federation is implemented through SAML assertions.
  3. Their identities each assume an AWS IAM role (via AWS Security Token Service).
  4. The project attribute is passed to AWS as a session tag named access-project (the post “Rely on employee attributes from your corporate directory to create fine-grained permissions in AWS” has full details).
  5. The session tag acts as a principal tag, and if its value matches the access-project authorization tag of the resource, the user is given access to that resource.

To ensure that the role’s session tags persist even after assuming subsequent roles, you also need to set the TransitiveTagKeys SAML attribute.
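The SAML attributes carry these settings during federated sign-in; the same effect is easier to see in a direct sts:AssumeRole call. Expressed as the API’s JSON request parameters (the role ARN, session name, and tag value below are placeholders), the request would look roughly like this:

{
    "RoleArn": "arn:aws:iam::123456789012:role/access-project-dev",
    "RoleSessionName": "jdoe",
    "Tags": [
        {
            "Key": "access-project",
            "Value": "peg"
        }
    ],
    "TransitiveTagKeys": ["access-project"]
}

Because access-project is listed in TransitiveTagKeys, the tag travels with the session into any roles assumed subsequently.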

Now that identity tags are persisted, you need to ensure that the resource tags used for authorization are secured. For the sake of brevity, let’s call these authorization tags. Let’s assume that you intend to use ABAC for all the accounts in your AWS organization and to control access through an SCP. The SCP will ensure that resource authorization tags are initially set to an authorized value, and will secure the tags against modification once set.

Authorization tag access control requirements

Before you implement a policy to secure your resource’s authorization tag from unintended modification, you need to define the requirements that the policy will enforce:

  • After the authorization tag is set on a resource:
    • It cannot be modified except by an IAM administrator.
    • Only a principal whose authorization tag key and value pair exactly match the key and value pair of the resource can modify the non-authorization tags for that resource.
  • If no authorization tag has been set on a resource, and if the authorization tag is present on the principal:
    • The resource’s authorization tag key and value pair can only be set if exactly matching the key and value tag of the principal.
    • All other non-authorization tags can be modified.
  • If any authorization tags are missing from a principal, the principal shouldn’t be allowed to perform any tag operations.

Review an SCP for securing resource tags

Let’s take a look at an SCP that fulfills the access control requirements listed above. SCPs are similar to IAM permission policies; however, an SCP never grants permissions. Instead, SCPs are IAM policies that specify the maximum permissions for an organization, organizational unit (OU), or account. The recommended solution is therefore to also assign identity policies that allow modification of tags to all roles that need the ability to create and delete tags. You can then assign your SCP to an OU that includes all of the accounts that need resource authorization tags secured. In the example that follows, the AWS Services That Work with IAM documentation page would be used to identify the first set of resources with ABAC support that your organization’s developers will use. These resources include Amazon Elastic Compute Cloud (Amazon EC2), IAM users and roles, Amazon Relational Database Service (Amazon RDS), and Amazon Elastic File System (Amazon EFS). The SCP itself consists of three statements, which I review in the following section.

Deny modification and deletion of tags if a resource’s authorization tags don’t match the principal’s

The first policy statement, shown below, addresses what needs to be enforced after the authorization tag is set on a resource. It denies modification of any tag (including the resource’s authorization tag) if the resource’s authorization tag doesn’t match the principal’s tag. Note that the resource’s authorization tag must exist for this statement to apply. Let’s review the components of the statement.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatchedEc2",
            "Effect": "Deny",
            "Action": [
                "ec2:CreateTags",
                "ec2:DeleteTags"
            ],
            "Resource": [
                "*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "ec2:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
                    "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
                },
                "Null": {
                    "ec2:ResourceTag/access-project": false
                }
            }
        },
  • Line 6:
    
    "Effect": "Deny",
    

    First, specify the Effect to be Deny to deny the following actions under certain conditions.

  • Lines 7-10:
    
    "Action": [
    	"ec2:CreateTags",
    	"ec2:DeleteTags"
    ],
    

    Specify the policy actions to be denied, which are ec2:CreateTags and ec2:DeleteTags. When combined with line 6, this denies modification of tags. The ec2:CreateTags action is included as it also grants the permission to overwrite tags, and we want to prevent that.

  • Lines 14-22:
    
    "Condition": {
    	"StringNotEquals": {
    		"ec2:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
    		"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
    	},
    	"Null": {
    		"ec2:ResourceTag/access-project": false
    	}
    }
    

    Specify the conditions for which this statement—to deny policy actions—is in effect.

    The first condition operator is StringNotEquals. According to policy evaluation logic, if multiple condition keys are attached to an operator, each needs to evaluate as true in order for StringNotEquals to also evaluate as true.

  • Line 16:
    
    "ec2:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
    

    Evaluate if the access-project EC2 resource tag’s key and value pairs don’t match the principal’s.

  • Line 17:
    
    "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
    

    Evaluate if the principal is not the iam-admin role for the account. aws:PrincipalArn is a global condition key for comparing the Amazon Resource Name (ARN) of the principal that made the request with the ARN that you specify in the policy.

  • Line 19:
    
    "Null": {
    

    The second condition operator is a Null condition. It evaluates whether the attached condition key’s tag exists in the request context. If the key’s right-hand side value is set to false, the expression returns true if the tag exists and has a value. The following points illustrate the evaluation logic:

    • ec2:ResourceTag/access-project: true (“is my access-project resource tag null?”): evaluates to FALSE if the tag value exists, and to TRUE if it doesn’t.
    • ec2:ResourceTag/access-project: false (“is my access-project resource tag not null?”): evaluates to TRUE if the tag value exists, and to FALSE if it doesn’t.
  • Line 20:
    
    "ec2:ResourceTag/access-project": false
    

    Deny only when the access-project tag is present on the resource. This is needed because tagging actions shouldn’t be denied for new resources that might not yet have the access-project authorization tag set; that case is covered next in the DenyModifyResAuthzTagIfPrinTagNotMatched statement.

Note that all elements of this condition are evaluated with a logical AND. For a deny to occur, both the StringNotEquals and Null operators must evaluate as true.

Deny modification and deletion of tags if requesting that a resource’s authorization tag be set to any value other than the principal’s

This second statement addresses the case where the resource has no authorization tag yet, that is, the tag is present on the principal but not on the resource. It denies modification of any tag, including the resource’s authorization tag, if the resource’s access-project tag is among the tags requested to be modified and its requested value doesn’t match the principal’s access-project tag. The statement is needed because the resource might not yet have an authorization tag, and access shouldn’t be granted without one. Fortunately, access can be based on what the principal requests the resource’s authorization tag to be set to, which must equal the principal’s tag value.


{   
    "Sid": "DenyModifyResAuthzTagIfPrinTagNotMatched",
    "Effect": "Deny",
    "Action": [
        "ec2:CreateTags",
        "ec2:DeleteTags"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "ForAnyValue:StringEquals": {
            "aws:TagKeys": [
                "access-project"
            ]   
        }   
    }       
},
  • Line 36:
    
    "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
    

    Check to see if the desired access-project tag value for your resource, aws:RequestTag/access-project, isn’t equal to the principal’s access-project tag value. aws:RequestTag is a global condition key that can be used for tag actions to compare the tag key-value pair that was passed in the request with the tag pair that you specify in the policy.

  • Lines 39-43:
    
    "ForAnyValue:StringEquals": {
         "aws:TagKeys": [
             "access-project"
         ]               
    }          
    

    This ensures that a deny can only occur if the access-project tag is among the tags in the request context, which would be the case if the resource’s authorization tag was one of the requested tags to be modified.

Deny modification and deletion of tags if the principal’s access-project tag does not exist

The third statement addresses the requirement that if any authorization tags are missing from a principal, the principal shouldn’t be allowed to perform any tag operations. This policy statement will deny modification of any tag if the principal’s access-project tag doesn’t exist. The reason for this is that if authorization is intended to be based on the principal’s authorization tag, it must be present for any tag modification operation to be allowed.


{       
    "Sid": "DenyModifyTagsIfPrinTagNotExists",
    "Effect": "Deny", 
    "Action": [
        "ec2:CreateTags",
        "ec2:DeleteTags"
    ],      
    "Resource": [
        "*"     
    ],      
    "Condition": {
        "StringNotEquals": {
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },      
        "Null": {
            "aws:PrincipalTag/access-project": true
        }       
    }       
}

You’ve now created a basic SCP that protects against unintended modification of Amazon EC2 resource tags used for authorization. You will also need to create identity policies for your project’s roles so that users can tag resources, and you should test your EC2 instances to ensure that their tags cannot be modified except by authorized principals.
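Since an SCP never grants permissions, each account also needs identity policies that allow the tagging actions in the first place. A minimal sketch of such a policy, scoped here to the EC2 actions used above (the Sid is illustrative), might be:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowEc2Tagging",
            "Effect": "Allow",
            "Action": [
                "ec2:CreateTags",
                "ec2:DeleteTags"
            ],
            "Resource": "*"
        }
    ]
}

The SCP’s deny statements then carve the unsafe cases out of this allow. The next step is to add IAM, Amazon RDS, and Amazon EFS resources to this SCP.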

Adding IAM users and roles to the SCP

Now that you’re confident you’ve done what you can to secure your Amazon EC2 tags, you can do the same for your IAM resources, specifically users and roles. This is important because user and role resources can also be principals and have authorization based on their tags. For example, not all roles will be assumed by employees and receive session tags; applications can also assume a role and thus have authorization based on a non-session tag. To account for that possibility, you can add the following statement, which mirrors the earlier EC2 statement but uses the IAM tagging actions and the iam:ResourceTag condition key. It denies modification and deletion of tags if the resource’s authorization tag doesn’t match the principal’s:


{
    "Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatchedIam",
    "Effect": "Deny",
    "Action": [
        "iam:TagRole",
        "iam:TagUser",
        "iam:UntagRole",
        "iam:UntagUser"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "iam:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "Null": {
            "iam:ResourceTag/access-project": false
        }
    }
}

You can use the following statements to add the IAM tagging actions to the previous statements that (1) deny modification and deletion of tags if the request would set a resource’s authorization tag to a value other than the principal’s, and (2) deny modification and deletion of tags if the principal’s project tag doesn’t exist.


{
    "Sid": "DenyModifyResAuthzTagIfPrinTagNotMatched",
    "Effect": "Deny",
    "Action": [
        "ec2:CreateTags",
        "ec2:DeleteTags",
        "iam:TagRole",
        "iam:TagUser",
        "iam:UntagRole",
        "iam:UntagUser"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "ForAnyValue:StringEquals": {
            "aws:TagKeys": [
                "access-project"
            ]
        }
    }
},
{
    "Sid": "DenyModifyTagsIfPrinTagNotExists",
    "Effect": "Deny",
    "Action": [
        "ec2:CreateTags",
        "ec2:DeleteTags",
        "iam:TagRole",
        "iam:TagUser",
        "iam:UntagRole",
        "iam:UntagUser"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "Null": {
            "aws:PrincipalTag/access-project": true
        }
    }
} 

Adding resources that support the global aws:ResourceTag to the SCP

Now we will add Amazon RDS and Amazon EFS resources to complete the SCP. RDS and EFS differ with regard to tagging because they support the global aws:ResourceTag condition key instead of a service-specific key such as ec2:ResourceTag. Instead of creating multiple statements similar to the code used above to deny modification and deletion of tags when a resource’s authorization tag doesn’t match the principal’s, you can create a single reusable policy statement and keep adding actions to it, as shown in the following code.


{
    "Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatched",
    "Effect": "Deny",
    "Action": [
        "elasticfilesystem:TagResource",
        "elasticfilesystem:UntagResource",
        "rds:AddTagsToResource",
        "rds:RemoveTagsFromResource"
    ],
    "Resource": [
        "*"
    ],
    "Condition": {
        "StringNotEquals": {
            "aws:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
        },
        "Null": {
            "aws:ResourceTag/access-project": false
        }
    }
}, 

Review the full SCP to secure authorization tags

Let’s review the final SCP, which includes all the services you’ve targeted, as shown in the code sample below. This policy is 2,859 characters in length and uses only single-byte characters, so it consumes 2,859 bytes of the current 5,120-byte quota for an SCP. That leaves 2,261 bytes, enough for roughly 4 to 18 more services, depending on whether a service requires its own service-specific resource tag condition key (and therefore its own statements) or supports the global aws:ResourceTag key (and therefore only adds actions to the existing statements). Keep these numbers and limits in mind when adding additional resources, and remember that a maximum of five SCPs can be associated with an OU. It is unwise to consume the entirety of your available SCP policy space, as you’ll want to ensure you can make later changes and have growing room for your future business requirements.

This SCP doesn’t enforce tag-on-create, which lets you require that tags be applied when a resource is created, as shown in Example service control policies. Enforcing tag-on-create is necessary if you need resources to be accessible only by appropriately tagged identities from the moment the resources are created. Enforcement can be done through an SCP or by creating identity policies for the roles in each account that require it.
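As a sketch of what tag-on-create enforcement could look like for EC2 (this statement is illustrative and adapted from the documented example pattern; it is not part of the SCP built above), you could deny launching instances whose creation request doesn’t carry the authorization tag:

{
    "Sid": "DenyRunInstancesWithoutProjectTag",
    "Effect": "Deny",
    "Action": "ec2:RunInstances",
    "Resource": "arn:aws:ec2:*:*:instance/*",
    "Condition": {
        "Null": {
            "aws:RequestTag/access-project": "true"
        }
    }
}

Because the Null operator evaluates to true when the tag is absent from the request, the launch is denied unless access-project is supplied at creation time.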

If necessary, you can use another employee attribute in addition to access-project, but that will consume more of the available byte quota. Adding another attribute can potentially double the bytes used in a policy, especially for services that require a service-specific resource tag condition key. Using Amazon EC2 as an example, you would need to create separate statements just for the added employee attribute, duplicating the policy statements in the first three code samples.

Here’s the final SCP in its entirety:


{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatchedEc2",
			"Effect": "Deny",
			"Action": [
				"ec2:CreateTags",
				"ec2:DeleteTags"
			],
			"Resource": [
				"*"
			],
			"Condition": {
				"StringNotEquals": {
					"ec2:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
					"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
				},
				"Null": {
					"ec2:ResourceTag/access-project": false
				}
			}
		},
		{
			"Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatchedIam",
			"Effect": "Deny",
			"Action": [
				"iam:TagRole",
				"iam:TagUser",
				"iam:UntagRole",
				"iam:UntagUser"
			],
			"Resource": [
				"*"
			],
			"Condition": {
				"StringNotEquals": {
					"iam:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
					"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
				},
				"Null": {
					"iam:ResourceTag/access-project": false
				}
			}
		},
		{
			"Sid": "DenyModifyTagsIfResAuthzTagAndPrinTagNotMatched",
			"Effect": "Deny",
			"Action": [
				"elasticfilesystem:TagResource",
				"elasticfilesystem:UntagResource",
				"rds:AddTagsToResource",
				"rds:RemoveTagsFromResource"
			],
			"Resource": [
				"*"
			],
			"Condition": {
				"StringNotEquals": {
					"aws:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
					"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
				},
				"Null": {
					"aws:ResourceTag/access-project": false
				}
			}
		},
		{
			"Sid": "DenyModifyResAuthzTagIfPrinTagNotMatched",
			"Effect": "Deny",
			"Action": [
				"elasticfilesystem:TagResource",
				"elasticfilesystem:UntagResource",
				"ec2:CreateTags",
				"ec2:DeleteTags",
				"iam:TagRole",
				"iam:TagUser",
				"iam:UntagRole",
				"iam:UntagUser",
				"rds:AddTagsToResource",
				"rds:RemoveTagsFromResource"
			],
			"Resource": [
				"*"
			],
			"Condition": {
				"StringNotEquals": {
					"aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
					"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
				},
				"ForAnyValue:StringEquals": {
					"aws:TagKeys": [
						"access-project"
					]
				}
			}
		},
		{
			"Sid": "DenyModifyTagsIfPrinTagNotExists",
			"Effect": "Deny",
			"Action": [
				"elasticfilesystem:TagResource",
				"elasticfilesystem:UntagResource",
				"ec2:CreateTags",
				"ec2:DeleteTags",
				"iam:TagRole",
				"iam:TagUser",
				"iam:UntagRole",
				"iam:UntagUser",
				"rds:AddTagsToResource",
				"rds:RemoveTagsFromResource"
			],
			"Resource": [
				"*"
			],
			"Condition": {
				"StringNotEquals": {
					"aws:PrincipalArn": "arn:aws:iam::123456789012:role/org-admins/iam-admin"
				},
				"Null": {
					"aws:PrincipalTag/access-project": true
				}
			}
		}
	]
}

Next Steps

Now that you have an SCP that protects your resources’ authorization tags, consider also ensuring that identities cannot be assigned unauthorized tags when assuming roles, as shown in Using SAML Session Tags for ABAC. You’ll also want detective and responsive controls to verify that identities can only access their own resources, and to take responsive action if unintended access occurs.

Summary

You’ve created a multi-account organizational guardrail to protect the tags used for your ABAC implementation. Through the use of employee attributes, transitive session tags, and a service control policy you’ve created a preventive control that will deny anyone except an IAM administrator from modifying tags used for authorization once set on a resource.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Michael Chan

Michael is a Developer Advocate for AWS Identity. Prior to this, he was a Professional Services Consultant who assisted customers with their journey to AWS. He enjoys understanding customer problems and working backwards to provide practical solutions.

How to use AWS Organizations to simplify security at enormous scale

Post Syndicated from Pradeep Ramarao original https://aws.amazon.com/blogs/security/how-to-use-aws-organizations-to-simplify-security-at-enormous-scale/

AWS Organizations provides central governance and management across AWS accounts, and AWS Control Tower is the easiest way to set up and govern a new, secure multi-account AWS environment. In this post, we explain how AWS Organizations and AWS Control Tower can make the lives of your Information Security engineers easier, based on our experience in the Information Security team at Amazon. The service control policies (SCPs) feature in AWS Organizations offers you central control over permissions for all accounts in your organization, which helps you to ensure that your accounts stay within your organization’s access control guidelines. This can give your Information Security team an effective mechanism to prevent unsafe actions in your AWS accounts. AWS Control Tower sets up many of these best-practice SCPs automatically, and you can modify or add more SCPs if needed based on your specific business requirements.

In this post, we show you how to implement SCPs to restrict the types of actions that can be performed by your employees within your company’s AWS accounts, and we provide example SCPs that are used by Amazon’s Information Security team for the following restrictions:

  1. Prevent deletion or modification of AWS Identity and Access Management (IAM) roles created by the Information Security team
  2. Prevent creation of IAM users with a login profile
  3. Prevent tampering with AWS CloudTrail
  4. Prevent public Amazon Simple Storage Service (Amazon S3) access
  5. Prevent users from performing any action related to AWS Organizations

All of these rules, and the reasons for implementing them, are described in detail in this post. Note that Amazon uses many more SCPs than those listed above, but these are some examples that we consider to be best practices and that can be shared publicly.

Scope of problem

At Amazon, security is the number one priority, and it’s a common challenge in the technology industry to keep pace with emerging security threats while still encouraging experimentation and innovation. The Information Security team at Amazon is responsible for ensuring that employees can rapidly develop new products and services built on AWS technologies, while enforcing robust security standards. How do we do this? Among other things, we use AWS Organizations and SCPs to uphold our security requirements.

Amazon has thousands of AWS accounts, and before the launch of AWS Organizations, the Information Security team at Amazon had to implement custom-built reactive rules in each account by using AWS Config, AWS CloudTrail, and Amazon CloudWatch to detect and mitigate any possible security concerns. This was effective, but managing all of the rules for thousands of AWS accounts was challenging. The Information Security team needed to tackle two specific challenges:

  1. How can we shift to a preventive approach—that is, blocking unsafe actions from happening in the first place—and then use reactive controls solely as a fallback mechanism?
  2. How can we continue to ensure that our critical infrastructure running in AWS accounts is tamper-proof, without building custom monitoring and remediation infrastructure? For example, we create custom IAM roles to broker single sign-on access to our AWS accounts. We need to ensure that employees cannot delete these roles accidentally, and that unauthorized users cannot delete them maliciously.

The launch of AWS Organizations provided the Information Security team at Amazon with a way to tackle both challenges.

Replace reactive controls with preventive controls

Let’s take a look at some of the key rules that the Information Security team at Amazon has in place for managing security posture across AWS accounts. These are company-wide security controls, so we defined them in the organization root so that they are enforced across all AWS accounts in the organization. You can also implement these security controls at the organizational unit (OU) or individual account level, which gives you flexibility to implement different sets of guardrails for different parts of your organization.

Rule 1: Prevent deletion or modification of IAM roles created by the Information Security team

The Information Security team at Amazon creates IAM roles in every AWS account that is used by Amazon, and we use these IAM roles to centrally monitor and control access in accordance with Amazon security policies. Accordingly, no one other than the Information Security team should be able to delete or modify these roles.

Before the use of AWS Organizations and SCPs, the Information Security team at Amazon used reactive controls to scan every account, to make sure that nobody had deleted these important roles. Reactive controls detect whether someone has made a configuration update that does not comply with company security policies.

What are some possible drawbacks to this approach?

  1. This is a reactive approach that detects a problem only after it has occurred—in this case, after somebody has deleted one of the important roles.
  2. It puts the responsibility for security in the hands of your employees, and you rely on their goodwill and expertise to ensure that they uphold the rules.

Your employees want to focus on their core competencies and on getting their jobs done effectively. Requiring that they also be information security experts is not an effective strategy; it’s much easier for everyone in your organization if you simply remove this burden in the first place. SCPs allow Amazon to do this. With SCPs, you can help avoid giving unwanted access to employees outside your security team. As you will see in the following example, you can ensure that all roles required by the Information Security team begin with the string InfoSec, and you can create an SCP that prevents other employees from deleting or modifying these roles.

The following is an example of an SCP that prevents the deletion or modification of any IAM role that begins with InfoSec in all accounts, unless the initiator uses an InfoSec role. This example demonstrates how you can use condition statements in your SCPs, which give you a lot of flexibility in determining when your SCP should be applied.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PreventInfoSecRoleActions",
            "Effect": "Deny",
            "Action": [
                "*"
            ],
            "Resource": [
                "arn:aws:iam::*:role/InfoSec*"
            ],
            "Condition": {
                "StringNotLike": {
                    "aws:PrincipalARN": "arn:aws:iam::*:role/InfoSec*"
                }
            }
        }
    ]
}

Rule 2: Prevent creation of IAM users with a login profile

When your employees are authenticated in your corporate network, it’s a security best practice to use identity federation to allow them to sign in to your company-owned AWS accounts, rather than creating individual IAM users for all of your employees, each with their own password.

The following are some of the reasons that it’s a best practice to use identity federation instead of creating individual IAM users for all of your employees:

  1. Using identity federation makes things easier for the Information Security team, because they don’t have to manage individual credentials across thousands of accounts and users.
  2. When using identity federation, users log in to your AWS accounts with temporary credentials, and there is no need to store long-term, persistent passwords, reducing the possibility of an external entity guessing a user’s password.

The following is an example of an SCP you can use to prevent employees from creating individual IAM users.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PreventIAMUserWithLoginprofile",
            "Effect": "Deny",
            "Action": [
                "iam:ChangePassword",
                "iam:CreateLoginProfile",
                "iam:UpdateLoginProfile",
                "iam:UpdateAccountPasswordPolicy"
            ],
            "Resource": [
                "*"
            ]
     }]
}

Rule 3: Prevent tampering with AWS CloudTrail

It’s very important for your Information Security team to be able to track the activities performed by all users in all accounts. In addition to this being a best practice, many compliance standards require you to maintain an accurate history of all user activity in your AWS accounts. You can use AWS CloudTrail to keep this history, and use an SCP to help ensure that unauthorized users cannot disable AWS CloudTrail or modify any of its outputs.

The following is an example of an SCP to prevent tampering with CloudTrail. To use the example, replace exampletrail with the name of the trail that you want to protect.


{
            "Sid": "PreventTamperingWithCloudTrail",
            "Effect": "Deny",
            "Action": [
                "cloudtrail:DeleteTrail",
                "cloudtrail:StopLogging",
                "cloudtrail:UpdateTrail"
            ],
            "Resource": [
                "arn:aws:cloudtrail:*:*:trail/exampletrail"
            ]
}

Rule 4: Prevent public S3 access

It’s a security best practice to allow public access to an S3 bucket and its contents only if you specifically want external entities to be able to access those items. However, you need to make sure that information that might be sensitive is securely protected and not made publicly available. For this reason, the Information Security team at Amazon prevents Amazon employees from making S3 buckets or content publicly available, unless they first go through a security review and get explicit approval from the Information Security team. We use the Amazon S3 Block Public Access feature in all of our accounts, and the following is an example of an SCP that prevents employees from changing those settings.


{
            "Sid": "PreventS3PublicAccess",
            "Action": [
                "s3:PutAccountPublicAccessBlock"
            ],
            "Resource": "*",
            "Effect": "Deny"
}

Rule 5: Prevent users from performing any action related to AWS Organizations

We also want to help ensure that only the Information Security team can perform actions related to AWS Organizations, and to prevent other users from performing activities such as leaving the organization or discovering other accounts in the organization. The following SCP prevents users from taking actions in AWS Organizations.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PreventOrgsActions",
            "Effect": "Deny",
            "Action": [
                "organizations:*"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

The preceding SCP also prevents the use of AWS Organizations-related features in some AWS services that rely on read permissions, such as AWS Resource Access Manager (AWS RAM). To allow the read-only AWS Organizations permissions that those services rely on, you can modify the SCP to specify more granular actions in the Action section of the SCP definition. For example, you can modify the SCP to prevent member accounts from leaving the organization while still allowing the use of features such as AWS RAM, as demonstrated in the following SCP. For more information about the AWS Organizations API actions available in each account, see AWS Organizations API Reference: API operations by account.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PreventLeavingOrganization",
            "Effect": "Deny",
            "Action": [
                "organizations:LeaveOrganization"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

Conclusion

Before the launch of AWS Organizations, the Information Security team at Amazon had to implement custom-built reactive rules to manage security controls across thousands of AWS accounts, and to ensure that all employees were managing their AWS resources in alignment with company policies. AWS Organizations has dramatically simplified these tasks, making it easier to control what Amazon employees are allowed to do in their AWS accounts. We’ve shared this information and examples of SCPs to help you implement similar controls.

Further reading

Learn more about Policy Evaluation Logic in the IAM User Guide.

Try it out for yourself. AWS has published information on how to create an AWS Organization and how to create SCPs.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Organizations forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Pradeep Ramarao

Pradeep is a Senior Manager of Software Development at Amazon. He enables Amazon developers to rapidly innovate on AWS technologies, while enforcing robust security standards. Outside of work, he enjoys watching movies and improving his soccer skills.

Author

Kieran Kavanagh

Kieran is a Principal Solutions Architect at AWS. He works with customers to design and build solutions on AWS across a broad range of technology domains including security, data analytics, and machine learning. In his spare time, he likes hiking, snowboarding, and practicing martial arts.

Author

Casey Falk

Casey is a Software Development Manager at Amazon. His team builds governance and security tooling that protects Amazon’s customer data. Outside of work he enjoys banjo, philosophy, and Dungeons & Dragons.

How to use G Suite as an external identity provider for AWS SSO

Post Syndicated from Yegor Tokmakov original https://aws.amazon.com/blogs/security/how-to-use-g-suite-as-external-identity-provider-aws-sso/

Do you want to control access to your Amazon Web Services (AWS) accounts with G Suite? In this post, we show you how to set up G Suite as an external identity provider in AWS Single Sign-On (SSO). We also show you how to configure permissions for your users, and how they can access different accounts.

Introduction

G Suite is used for common business functions like email, calendar, and document sharing. If your organization is using AWS and G Suite, you can use G Suite as an identity provider (IdP) for AWS. You can connect AWS SSO to G Suite, allowing your users to access AWS accounts with their G Suite credentials.

You can grant access by assigning G Suite users to accounts governed by AWS Organizations. The user’s effective permissions in an account are determined by permission sets defined in AWS SSO. They allow you to define and grant permissions based on the user’s job function (such as administrator, data scientist, or developer). These should follow the least privilege principle, granting only permissions that are necessary to perform the job. This way, you can centrally manage user accounts for your employees in the Google Admin console and have fine-grained control over the access permissions of individual users to AWS resources.

In this post, we walk you through the process of setting up G Suite as an external IdP in AWS SSO.

How it works

AWS SSO authenticates your G Suite users by using Security Assertion Markup Language (SAML) 2.0 authentication. SAML is an open standard for secure exchange of authentication and authorization data between IdPs and service providers without exposing users’ credentials. When you use AWS as a service provider and G Suite as an external IdP, the login process is as follows:

  1. A user with a G Suite account opens the link to the AWS SSO user portal of your AWS Organizations.
  2. If the user isn’t already authenticated, they will be redirected to the G Suite account login. The user will log in using their G Suite credentials.
  3. If the login is successful, a response is created and sent to AWS SSO. It contains three different types of SAML assertions: authentication, authorization, and user attributes.
  4. When AWS SSO receives the response, the user’s access to the AWS SSO user portal is determined. A successful login shows accessible AWS accounts.
  5. The user selects the account to access and is redirected to the AWS Management Console.

This authentication flow is shown in the following diagram.

Figure 1: AWS SSO authentication flow

The user journey starts at the AWS SSO user portal and ends with the access to the AWS Management Console. Your users experience a unified access to the AWS Cloud, and you don’t have to manage user accounts in AWS Identity and Access Management (IAM) or AWS Directory Service.

User permissions in an AWS account are controlled by permission sets and groups in AWS SSO. A permission set is a collection of administrator-defined policies that determine a user’s effective permissions in an account. Permission sets can contain AWS managed policies or custom policies that are stored in AWS SSO, and are ultimately created as IAM roles in a given AWS account. Your users assume these roles when they access a given AWS account and receive their effective permissions. This gives you fine-grained control over access to your accounts, in keeping with the shared-responsibility model of the cloud.

When you use G Suite to authenticate and manage your users, you have to create a user entity in AWS SSO. The user entity is not a user account, but a logical object that maps a G Suite user, via their primary email address as the username, to the user account in AWS SSO. The user entity in AWS SSO allows you to grant a G Suite user access to AWS accounts and define their permissions in those accounts.

AWS SSO initial setup

The AWS SSO service has some prerequisites. You need to first set up AWS Organizations with All features set to enabled, and then sign in with the AWS Organization’s master account credentials. You also need super administrator privileges in G Suite and access to the Google Admin console.

If you’re already using AWS SSO in your account, refer to Considerations for Changing Your Identity Source before making changes.

To set up an external identity provider in AWS SSO

  1. Open the service page in the AWS Management Console. Then choose Enable AWS SSO.

    Figure 2: AWS SSO service welcome page

  2. After AWS SSO is enabled, you can connect an identity source. On the overview page of the service, select Choose your identity source.

    Figure 3: Choose your identity source

  3. In the Settings, look for Identity source and choose Change.

    Figure 4: Settings

  4. By default, AWS SSO uses its own directory as the identity provider. To use G Suite as your identity provider, you have to switch to an external identity provider. Select External identity provider from the available identity sources.

    Figure 5: AWS SSO identity provider options

  5. Choosing the External identity provider option reveals additional information needed to configure it. Choose Show individual metadata values to show the information you need to configure a custom SAML application.

    Figure 6: AWS SSO SAML metadata

For the next steps, you need to switch to your Google Admin console and use the service provider metadata information to configure AWS SSO as a custom SAML application.

G Suite SAML application setup

Open your Google Admin console in a new browser tab, so that you can easily copy the metadata information from the previous step. Now use the information to configure a custom SAML application.

To configure a custom SAML application in G Suite

  1. Navigate to the SAML Applications section in the Admin console and choose Add a service/App to your domain.

    Figure 7: Add a service or app

  2. In the modal dialog that opens, choose SETUP MY OWN CUSTOM APP.

    Figure 8: Set up a custom app

  3. Go to Option 2 and choose Download to download the Google IdP metadata. It downloads an XML file named GoogleIDPMetadata-your_domain.xml, which you will use to configure G Suite as the IdP in AWS SSO. Choose Next.

    Figure 9: Download IdP metadata

  4. Configure the name and description of the application. Enter AWS SSO as the application name or use a name that clearly identifies this application for your users. Choose Next to continue.

    Figure 10: Name and describe your app

  5. Fill in the Service Provider Details using the metadata information from AWS SSO, then choose Next to create your custom application. The mapping for the metadata is:
    • Enter the AWS SSO Sign in URL as the Start URL
    • Enter the AWS SSO ACS URL as the ACS URL
    • Enter the AWS SSO Issue URL as the Entity ID

     

    Figure 11: Add service provider details

  6. Next is a confirmation screen with a reminder that you have some steps still to do. Choose OK to continue.

    Figure 12: Reminder of remaining steps

  7. The final steps enable the application for your users. Select the application from the list and choose EDIT SERVICE from the top corner.

    Figure 13: Edit service

  8. Change the service status to ON for everyone and choose SAVE. If you want to manage access for particular users you can do this via organizational units (for example, you can enable the AWS SSO application for your engineering department). This doesn’t give access to any resources inside of your AWS accounts. Permissions are granted in AWS SSO.

    Figure 14: Service on for everyone

You’re done configuring AWS SSO in G Suite. Return to the browser tab with the AWS SSO configuration.

AWS SSO configuration

After creating the G Suite application, you can finish SSO setup by uploading Google IdP metadata in the AWS Management Console.

To add identity provider metadata in AWS SSO

  1. When you configured the custom application in G Suite, you downloaded the GoogleIDPMetadata-your_domain.xml file. Choose Browse… on the configuration page and select this file from your download folder. Finish this step by choosing Next: Review.

    Figure 15: Upload IdP SAML metadata file

  2. Type CONFIRM at the bottom of the list of changes and choose Change identity source to complete the setup.

    Figure 16: Confirm changes

  3. Next is a message that your change to the configuration is complete. At this point, you can choose Return to settings and proceed to user provisioning.

    Figure 17: Settings complete

Manage Users and Permissions

AWS SSO supports automatic user provisioning via the System for Cross-domain Identity Management (SCIM). However, this is not yet supported for G Suite custom SAML applications. To add a user to AWS SSO, you have to add the user manually. The username in AWS SSO must be the primary email address of that user, and it must follow the pattern username@gsuite_domain.com.

To add a user to AWS SSO

  1. Select Users from the sidebar of the AWS SSO overview and then choose Add user.

    Figure 18: Add user

  2. Enter the user details and use your user’s primary email address as the username. Choose Next: Groups to add the user to a group.

    Figure 19: Add user

  3. We aren’t going to create user groups in this walkthrough. Skip the Add user to groups step by choosing Add user. You will reach the user list page, which displays your newly created user with a status of Enabled.

    Figure 20: User list

  4. The next step is to assign the user to a particular AWS account in your AWS Organization. This allows the user to access the assigned account. Select the account you want to assign your user to and choose Assign users.

    Figure 21: Assign users

  5. Select the user you just added, then choose Next: Permission sets to continue configuring the effective permissions of the user in the assigned account.

    Figure 22: Select user

  6. Since you didn’t configure a permission set before, you need to configure one now. Choose Create new permission set.

    Figure 23: Create new permission set

  7. AWS SSO has managed permission sets that are similar to the AWS managed policies you already know. Make sure that Use an existing job function policy is selected, then select PowerUserAccess from the list of existing job function policies and choose Create.

    Figure 24: Create permission set

  8. You can now select the created permission set from the list of available sets for the user. Select the PowerUserAccess permission set and choose Finish to assign the user to the account.

    Figure 25: Select permission set

  9. You see a message that the assignment has been successful.

    Figure 26: Assignment complete

Access an AWS Account with G Suite

You can find your user portal URL in the AWS SSO settings, as shown in the following screenshot. Unauthenticated users who use the link will be redirected to the Google account login page and use their G Suite credentials to log in.

Figure 27: The user portal URL

After authenticating, users are redirected to the user portal. They can select from the list of assigned accounts, as shown in the following example, and access the AWS Management Console of these accounts.

Figure 28: Select an assigned account

You’ve successfully set up G Suite as an external identity provider for AWS SSO. Your users can access your AWS accounts using the credentials they already use.

Another way your users can access AWS SSO is by selecting it from their Google Apps list, which redirects them to the user portal, as shown in the following screenshot. This is one of the quickest ways for users to access their accounts.

Figure 29: Apps in the user portal

Using AWS CLI with SSO

You can use the AWS Command Line Interface (CLI) to access AWS resources. AWS CLI version 2 supports access via AWS SSO. You can automatically or manually configure a profile for the CLI to access resources in your AWS accounts. To authenticate your user, it opens the user portal in your default browser. If you aren’t authenticated, you’re redirected to the G Suite login page. After a successful login, you can select the AWS account you want to access from the terminal.
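For example, the aws configure sso wizard in AWS CLI version 2 writes a named profile to your ~/.aws/config file. The following is a sketch; the start URL, account ID, and role name are placeholder values:

[profile my-dev-profile]
sso_start_url = https://my-sso-portal.awsapps.com/start
sso_region = us-east-1
sso_account_id = 123456789012
sso_role_name = PowerUserAccess
region = us-west-2

You can then run aws sso login --profile my-dev-profile to authenticate through the browser, and add --profile my-dev-profile to subsequent CLI commands.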

To upgrade to AWS CLI version 2, follow the instructions in the AWS CLI user guide.

Conclusion

You’ve set up G Suite as an external IdP for AWS SSO, granted access to an AWS account for a G Suite user, and enforced fine-grained permission controls for this user. This enables your business to have easy access to the AWS Cloud.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Single Sign-on forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Yegor Tokmakov

Yegor is a solutions architect at AWS, working with startups. Previously, he was Chief Technology Officer at a healthcare startup in Berlin and was responsible for architecture and operations, as well as product development and growth of the tech team. Yegor is passionate about novel AI applications and data analytics. In his free time, he enjoys traveling, surfing, and cycling.

Sebastian Doell

Sebastian is a solutions architect at AWS. He helps startups to execute on their ideas at speed and at scale. Sebastian maintains a number of open source projects and is an advocate of Dart and Flutter.

Best practices for organizing larger serverless applications

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/best-practices-for-organizing-larger-serverless-applications/

Well-designed serverless applications are decoupled, stateless, and use minimal code. As projects grow, a goal for development managers is to maintain the simplicity of design and low-code implementation. This blog post provides recommendations for designing and managing code repositories in larger serverless projects, and best practices for deploying releases of production systems.

Organizing your code repositories

Many serverless applications begin as monolithic applications. This can occur either because a simple application has grown more complex over time, or because developers are following existing development practices. A monolithic application is represented by a single AWS Lambda function performing multiple tasks, and a mono-repo is a single repository containing the entire application logic.

Monoliths work well for the simplest serverless applications that perform single-purpose functions. These are small applications such as cron jobs, data processing tasks, and some asynchronous processes. As those applications evolve into workflows or develop new features, it becomes important to refactor the code into smaller services.

Using frameworks such as the AWS Serverless Application Model (SAM) or the Serverless Framework can make it easier to group common pieces of functionality into smaller services. Each of these can have a separate code repository. For SAM, the template.yaml file contains all the resources and function definitions needed for an application. Consequently, breaking an application into microservices with separate templates is a simple way to split repos and resource groups.

Separate templates for microservices

In the smallest unit of a serverless application, it’s also possible to create one repository per function. If these functions are independent and do not share other AWS resources, this may be appropriate. Helper functions and simple event processing code are examples of candidates for this kind of repo structure.

In most cases, it makes sense to create repos around groups of functions and resources that define a microservice. In an ecommerce example, “Payment processing” is a microservice with multiple smaller related functions that share common resources.

As with any software, the repo design depends upon the use-case and structure of development teams. One large repo makes it harder for developer teams to work on different features, and test and deploy. Having too many repos can create duplicate code, and difficulty in sharing resources across repos. Finding the balance for your project is an important step in designing your application architecture.

Using AWS services instead of code libraries

AWS services are important building blocks for your serverless applications. These can frequently provide greater scale, performance, and reliability than bundled code packages with similar functionality.

For example, many web applications that are migrated to Lambda use web frameworks like Flask (for Python) or Express (for Node.js). Both packages support routing and separate user contexts that are well suited if the application is running on a web server. Using these packages in Lambda functions results in architectures like this:

Web servers in Lambda functions

In this case, Amazon API Gateway proxies all requests to the Lambda function to handle routing. As the application develops more routes, the Lambda function grows in size and deployments of new versions replace the entire function. It becomes harder for multiple developers to work on the same project in this context.

This approach is generally unnecessary, and it’s often better to take advantage of the native routing functionality available in API Gateway. In many cases, there is no need for the web framework in the Lambda function, which increases the size of the deployment package. API Gateway is also capable of validating parameters, reducing the need for checking parameters with custom code. It can also provide protection against unauthorized access, and a range of other features more suited to be handled at the service level. When using API Gateway this way, the new architecture looks like this:

Using API Gateway for routing

Additionally, the Lambda functions consist of less code and fewer package dependencies. This makes testing easier and reduces the need to maintain code library versions. Different developers in a team can work on separate routing functions independently, and it becomes simpler to reuse code in future projects. You can configure routes in API Gateway in the application’s SAM template:

Resources:
  GetProducts:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: getProducts/
      Handler: app.handler
      Runtime: nodejs12.x
      Events:
        GetProductsAPI:
          Type: Api 
          Properties:
            Path: /getProducts
            Method: get

Similarly, you should usually avoid performing workflow orchestrations within Lambda functions. These are sections of code that call out to other services and functions, and perform subsequent actions based on successful execution or failure.

Lambda functions with embedded workflow orchestrations

These workflows quickly become fragile and difficult to modify for new requirements. They can cause idling in the Lambda function, meaning that the function is waiting for return values from external sources, increasing the cost of execution.

Often, a better approach is to use AWS Step Functions, which can represent complex workflows as JSON definitions in the application’s SAM template. This service reduces the amount of custom code required, and enables long-lived workflows that minimize idling in Lambda functions. It also manages in-flight executions as workflows are upgraded. The example above, rearchitected with a Step Functions workflow, looks like this:

Using Step Functions for orchestration
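As an illustration, the rearchitected workflow might be expressed in Amazon States Language, the JSON-based format that Step Functions uses for state machine definitions. The state names and Lambda ARNs below are hypothetical:

{
  "Comment": "Hypothetical payment workflow sketch",
  "StartAt": "ProcessPayment",
  "States": {
    "ProcessPayment": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:processPayment",
      "Catch": [
        {
          "ErrorEquals": ["States.ALL"],
          "Next": "RefundCustomer"
        }
      ],
      "Next": "SendReceipt"
    },
    "SendReceipt": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:sendReceipt",
      "End": true
    },
    "RefundCustomer": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:refundCustomer",
      "End": true
    }
  }
}

Because the branching and error handling live in the state machine definition, each Lambda function stays small and single-purpose.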

Using multiple AWS accounts for development teams

There are many ways to deploy serverless applications to production. As applications grow and become more important to your business, development managers generally want to improve the robustness of the deployment process. You have a number of options within AWS for managing the development and deployment of serverless applications.

First, it is highly recommended to use more than one AWS account. Using AWS Organizations, you can centrally manage the billing, compliance, and security of these accounts. You can attach policies to groups of accounts to avoid custom scripts and manual processes. One simple approach is to provide each developer with an AWS account, and then use separate accounts for a beta deployment stage and production:

Multiple AWS accounts in a deployment pipeline

The developer accounts can contain copies of production resources and provide the developer with admin-level permissions to these resources. Each developer has their own set of limits for the account, so their usage does not impact your production environment. Individual developers can deploy CloudFormation stacks and SAM templates into these accounts with minimal risk to production assets.

This approach allows developers to test Lambda functions locally on their development machines against live cloud resources in their individual accounts. It can help create a robust unit testing process, and developers can then push code to a repository like AWS CodeCommit when ready.

By integrating with AWS Secrets Manager, you can store different sets of secrets in each environment and eliminate any need for credentials stored in code. As code is promoted from developer account through to the beta and production accounts, the correct set of credentials is automatically used. You do not need to share environment-level credentials with individual developers.

It’s also possible to implement a CI/CD process to start build pipelines when code is deployed. To deploy a sample application using a multi-account deployment flow, follow this serverless CI/CD tutorial.

Managing feature releases in serverless applications

As you implement CI/CD pipelines for your production serverless applications, it is best practice to favor safe deployments over entire application upgrades. Unlike traditional software deployments, serverless applications are a combination of custom code in Lambda functions and AWS service configurations.

A feature release may consist of a version change in a Lambda function. It may have a different endpoint in API Gateway, or use a new resource such as a DynamoDB table. Access to the deployed feature may be controlled via user configuration and feature toggles, depending upon the application. AWS SAM has AWS CodeDeploy built-in, which allows you to configure canary deployments in the YAML configuration:

Resources:
 GetProducts:
   Type: AWS::Serverless::Function
   Properties:
     CodeUri: getProducts/
     Handler: app.handler
     Runtime: nodejs12.x

     AutoPublishAlias: live

     DeploymentPreference:
       Type: Canary10Percent10Minutes 
       Alarms:
         # A list of alarms that you want to monitor
         - !Ref AliasErrorMetricGreaterThanZeroAlarm
         - !Ref LatestVersionErrorMetricGreaterThanZeroAlarm
       Hooks:
         # Validation Lambda functions run before/after traffic shifting
         PreTraffic: !Ref PreTrafficLambdaFunction
         PostTraffic: !Ref PostTrafficLambdaFunction

CodeDeploy automatically creates aliases pointing to the old and new versions of a function. The canary deployment enables you to gradually shift traffic from the old alias to the new alias as you become confident that the new version is working as expected, or to roll back the update if needed. You can also set PreTraffic and PostTraffic hooks to invoke Lambda functions before and after traffic shifting.

Conclusion

As any software application grows in size, it’s important for development managers to organize code repositories and manage releases. There are established patterns in serverless to help manage larger applications. Generally, it’s best to avoid monolithic functions and mono-repos, and you should scope repositories to either the microservice or function level.

Well-designed serverless applications use custom code in Lambda functions to connect with managed services. It’s important to identify libraries and packages that can be replaced with services to minimize the deployment size and simplify the code base. This is especially true in applications that have been migrated from server-based environments.

Using AWS Organizations, you manage groups of accounts to enable your developers to have their own AWS accounts for development. This enables engineers to clone production assets and test against the AWS Cloud when writing and debugging code. You can use a CI/CD pipeline to push code through a beta environment to production, while safeguarding secrets using Secrets Manager. You can also use CodeDeploy to manage canary deployments easily.

To learn more about deploying Lambda functions with SAM and CodeDeploy, follow the steps in this tutorial.

New – Use AWS IAM Access Analyzer in AWS Organizations

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-use-aws-iam-access-analyzer-in-aws-organizations/

Last year at AWS re:Invent 2019, we released AWS Identity and Access Management (IAM) Access Analyzer, which helps you understand who can access resources by analyzing permissions granted using policies for Amazon Simple Storage Service (S3) buckets, IAM roles, AWS Key Management Service (KMS) keys, AWS Lambda functions, and Amazon Simple Queue Service (SQS) queues.

AWS IAM Access Analyzer uses automated reasoning, a form of mathematical logic and inference, to determine all possible access paths allowed by a resource policy. We call these analytical results provable security, a higher level of assurance for security in the cloud.

Today I am pleased to announce that you can create an analyzer in the AWS Organizations master account or a delegated member account with the entire organization as the zone of trust. For each analyzer, you can now set the zone of trust to be either a particular account or an entire organization, which defines the logical bounds the analyzer uses to generate findings. This helps you quickly identify when resources in your organization can be accessed from outside of your AWS Organization.

AWS IAM Access Analyzer for AWS Organizations – Getting started
You can enable IAM Access Analyzer for your organization with one click in the IAM console. Once enabled, IAM Access Analyzer analyzes policies and reports a list of findings for resources that grant public or cross-account access from outside your AWS organization, both in the IAM console and through APIs.

When you create an analyzer for your organization, it treats the organization as a zone of trust, meaning all accounts within the organization are trusted to have access to AWS resources. Access Analyzer then generates findings that identify access to your resources from outside of the organization.

For example, if you create an analyzer for your organization, it provides active findings for resources, such as S3 buckets in your organization, that are accessible publicly or from outside the organization.
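To give a sense of the output, a finding returned by the Access Analyzer APIs identifies the resource, the external principal, and the granted actions. A cross-account finding might look roughly like the following sketch; the identifiers are illustrative:

{
  "id": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
  "resource": "arn:aws:s3:::example-bucket",
  "resourceType": "AWS::S3::Bucket",
  "principal": { "AWS": "999988887777" },
  "action": ["s3:GetObject", "s3:ListBucket"],
  "isPublic": false,
  "status": "ACTIVE"
}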

When policies change, IAM Access Analyzer automatically triggers a new analysis and reports new findings based on the policy changes. You can also trigger a re-evaluation manually. You can download the details of findings into a report to support compliance audits.

Analyzers are specific to the region in which they are created. You need to create a unique analyzer for each region where you want to enable IAM Access Analyzer.

You can create multiple analyzers for your entire organization in your organization’s master account. Additionally, you can choose a member account in your organization as a delegated administrator for IAM Access Analyzer. When you choose a member account as the delegated administrator, the member account has permission to create analyzers within the organization. Individual accounts can also create analyzers to identify resources accessible from outside those accounts.

IAM Access Analyzer sends an event to Amazon EventBridge for each generated finding, for a change to the status of an existing finding, and when a finding is deleted, so you can monitor IAM Access Analyzer findings with EventBridge. All IAM Access Analyzer actions are logged by AWS CloudTrail, and findings can be sent to AWS Security Hub. Using the information collected by CloudTrail, you can determine the request that was made to Access Analyzer, the IP address from which the request was made, who made the request, when it was made, and additional details.

Now available!
This integration is available in all AWS Regions where IAM Access Analyzer is available. There is no extra cost for creating an analyzer with organization as the zone of trust. You can learn more through these talks of Dive Deep into IAM Access Analyzer and Automated Reasoning on AWS at AWS re:Invent 2019. Take a look at the feature page and the documentation to learn more.

Please send us feedback either in the AWS forum for IAM or through your usual AWS support contacts.

Channy;

The AWS Serverless Application Repository adds sharing for AWS Organizations

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/the-aws-serverless-application-repository-adds-sharing-for-aws-organizations/

The AWS Serverless Application Repository (SAR) enables builders to package serverless applications and reuse these within their own AWS accounts, or share them with a broader audience. Previously, SAR applications could only be shared with specific AWS account IDs or made publicly available to all users. For organizations with large numbers of AWS accounts, this meant managing a large list of IDs and tracking when accounts were added to or removed from the organizational group.

This new feature allows developers to share SAR applications with AWS Organizations without specifying a list of AWS account IDs. Additionally, if you add or remove accounts later from AWS Organizations, you do not need to manually maintain a list for sharing your SAR application. This new feature also brings more granular controls over the permissions granted, including the ability for master accounts to unshare applications.

This blog post explains these new features and how you can use them to share your new and existing SAR applications.

Sharing a new SAR application with AWS Organizations

First, find your AWS Organization ID by visiting the AWS Organizations console, and choose Settings from the menu bar. Copy your Organization ID from the Organization details card and save this for later. If you do not currently have an organization configured and want to use this feature, see this tutorial for instructions on how to set up an AWS Organization.

Organization ID

Go to the Serverless Application Repository, choose Publish Application and follow the process for publishing an application to your own account. After the application is published, you will see a new tab on the Application Details page called Sharing.

SAR sharing tab

From the Sharing tab, choose Create Statement in the Application policy statements card. To share the application with an entire organization:

  • Enter a Statement Id, which is a helpful reference for the policy statement.
  • Select “With an organization” from the list of sharing options.
  • Enter the Organization ID from earlier.
  • Check the option to “Enable all actions needed to deploy”.
  • Check the acknowledgment check box.
  • Choose Save.

Statement configuration

Now your application is published to all the account IDs within your AWS Organization. As you add policy statements to define how an application is shared, these appear as cards at the end of the Sharing tab.

Policy statements in the sharing tab

Existing shared SAR applications

If you previously created SAR applications shared with individual accounts, or shared these applications publicly, the policy statements have already been configured. In this case, the automatically generated policy statements reflect the existing scope of sharing, so there is no change in the level of visibility of the application.

For example, for a SAR application that was previously shared with two AWS account IDs, there is now a policy statement showing these two account principals and the permission to deploy the application. A Statement Id value is automatically generated as a random globally unique identifier.

GUID statement ID

To change these automatically generated policy statements, select the policy statement you want to change and choose Edit. The subsequent page allows you to modify the random Statement Id, configure the sharing options, and modify the deployment actions allowed.

Statement configuration

For a SAR application that was previously shared publicly, there is now a policy statement showing all accounts as an asterisk (*), with the permission to deploy. To disable public access, choose Edit in the Public Sharing card, and modify the settings on the panel.

Public sharing

More flexibility for defining permissions

This new feature allows you to define specific API actions allowed in each policy statement, and you can associate different permitted actions with different organizations. This allows you to more precisely control how applications are discovered and used within different accounts.

To learn more, read about the definition of these API actions in the SAR documentation.

Allowed actions

Any changes made to the resource policy sharing statements are effective immediately, and applications shared with AWS Organizations and other AWS accounts are only shared within the Region where the application was initially published. This new ability to share applications with organizations can be used in all Regions where the Serverless Application Repository is available.

In addition to creating and modifying resource policy statements within the AWS Management Console, builders can also choose to manage application sharing via the AWS SDK or AWS CLI.
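For example, the put-application-policy operation accepts a list of statements. The following is a sketch of a statement that shares an application with an organization; the statement ID and organization ID are placeholders, and the Deploy action expands to the set of operations needed to deploy an application:

{
  "Statements": [
    {
      "StatementId": "share-with-my-org",
      "Principals": ["*"],
      "PrincipalOrgIDs": ["o-abcd123456"],
      "Actions": ["Deploy"]
    }
  ]
}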

Unsharing an application from AWS Organizations

You can also unshare an application from AWS Organizations through the AWS Management Console by doing the following:

  1. From the Serverless Application Repository console, choose Available Applications in the left navigation pane.
  2. In the application’s tile, choose Unshare.
  3. In the unshare confirmation dialog, enter the Organization ID and application name, then choose Save.

To learn more, read about unsharing published applications.

Conclusion

This post shows how to enable sharing for SAR applications across AWS Organizations using application policy statements, and how to modify existing resource policies. Additionally, it covers how existing SAR applications that are already shared now expose a resource policy that reflects the previously selected sharing preferences, and how you can also modify this policy statement.

The new sharing interface in the SAR console continues to support the previous capabilities of sharing applications with other AWS accounts, and making applications public. This new feature makes it much easier for builders to share SAR applications across a large number of accounts within an organization, without needing to manually manage a list of account IDs, and it provides more granular control over the access permissions granted.

For more information about sharing applications using AWS Organization IDs, visit the SAR documentation.

New: Use AWS CloudFormation StackSets for Multiple Accounts in an AWS Organization

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-use-aws-cloudformation-stacksets-for-multiple-accounts-in-an-aws-organization/

Infrastructure-as-code is the process of managing and creating IT infrastructure through machine-readable text files, such as JSON or YAML definitions, or using familiar programming languages, such as Java, Python, or TypeScript. AWS customers typically use AWS CloudFormation or the AWS Cloud Development Kit to automate the creation and management of their cloud infrastructure.

CloudFormation StackSets allow you to roll out CloudFormation stacks over multiple AWS accounts and in multiple Regions with just a couple of clicks. When we launched StackSets, grouping accounts was primarily for billing purposes. Since the launch of AWS Organizations, you can centrally manage multiple AWS accounts across diverse business needs including billing, access control, compliance, security and resource sharing.

Use CloudFormation StackSets with Organizations
Today, we are simplifying the use of CloudFormation StackSets for customers managing multiple accounts with AWS Organizations.

You can now centrally orchestrate any AWS CloudFormation enabled service across multiple AWS accounts and regions. For example, you can deploy your centralized AWS Identity and Access Management (IAM) roles, and provision Amazon Elastic Compute Cloud (EC2) instances or AWS Lambda functions across AWS Regions and accounts in your organization. CloudFormation StackSets simplify the configuration of cross-account permissions and allow for the automatic creation and deletion of resources when accounts join or are removed from your Organization.

You can get started by enabling data sharing between CloudFormation and Organizations from the StackSets console. Once done, you will be able to use StackSets in the Organizations master account to deploy stacks to all accounts in your organization or in specific organizational units (OUs). A new service managed permission model is available with these StackSets. Choosing Service managed permissions allows StackSets to automatically configure the necessary IAM permissions required to deploy your stack to the accounts in your organization.

In addition to setting permissions, CloudFormation StackSets now offers the option to automatically create or remove your CloudFormation stacks when a new AWS account joins or leaves your Organization. You do not need to remember to manually connect to the new account to deploy your common infrastructure, or to delete infrastructure when an account is removed from your Organization. When an account leaves the organization, the stack is removed from the management of StackSets; however, you can choose to either delete or retain the resources managed by the stack.

Lastly, you choose whether to deploy a stack to your entire Organization or just to one or more Organization Units (OU). You also choose a couple of deployment options: how many accounts will be prepared in parallel, and how many failures you tolerate before stopping the entire deployment.
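As a sketch of the underlying API, creating a service-managed stack set with auto-deployment enabled takes parameters along these lines; the stack set name and template URL are placeholders:

{
  "StackSetName": "org-baseline-roles",
  "TemplateURL": "https://s3.amazonaws.com/example-bucket/baseline.yaml",
  "PermissionModel": "SERVICE_MANAGED",
  "AutoDeployment": {
    "Enabled": true,
    "RetainStacksOnAccountRemoval": false
  }
}

When you then create stack instances, you target OUs through the DeploymentTargets parameter and tune the rollout with OperationPreferences values such as MaxConcurrentCount and FailureToleranceCount.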

For a full description of how StackSets works, you can read the initial blog article from Jeff.

There is no extra cost for using AWS CloudFormation StackSets with AWS Organizations. The integration is available in all AWS Regions where StackSets is available.

— seb

New – Use Tag Policies to Manage Tags Across Multiple AWS Accounts

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-use-tag-policies-to-manage-tags-across-multiple-aws-accounts/

Shortly after we launched EC2, customers started asking for ways to identify, classify, or categorize their instances. We launched tagging for EC2 instances and other EC2 resources way back in 2010, and have added support for many other resource types over the years. We added the ability to tag instances and EBS volumes at creation time a few years ago, and also launched tagging APIs and a Tag Editor. Today, tags serve many important purposes. They can be used to identify resources for cost allocation, and to control access to AWS resources (either directly or via tags on IAM users & roles). In addition to these tools, we have also provided you with comprehensive recommendations on Tag Strategies, which can be used as the basis for the tagging standards that you set up for your own organization.

Beyond Good Intentions
All of these tools and recommendations create a strong foundation, and you might start to use tags with only the very best of intentions. However, as Jeff Bezos often reminds us, “Good intentions don’t work, but mechanisms do.” Standardizing on names, values, capitalization, and punctuation is a great idea, but challenging to put into practice. When tags are used to control access to resources or to divvy up bills, small errors can create big problems!

Today we are giving you a mechanism that will help you to implement a consistent, high-quality tagging discipline that spans multiple AWS accounts and Organizational Units (OUs) within an AWS Organization. You can now create Tag Policies and apply them to any desired AWS accounts or OUs within your Organization, or to the entire Organization. The policies at each level are aggregated into an effective policy for an account.

Each tag policy contains a set of tag rules. Each rule maps a tag key to the allowable values for the key. The tag policies are checked when you perform operations that affect the tags on an existing resource. After you set up your tag policies, you can easily discover tagged resources that do not conform.
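Concretely, a tag policy is a JSON document. The following is a minimal sketch for a hypothetical CostCenter key, with two allowed values and enforcement scoped to EC2 instances; the key name and values are assumptions for illustration:

{
  "tags": {
    "costcenter": {
      "tag_key": {
        "@@assign": "CostCenter"
      },
      "tag_value": {
        "@@assign": ["Engineering", "Marketing"]
      },
      "enforced_for": {
        "@@assign": ["ec2:instance"]
      }
    }
  }
}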

Creating Tag Policies
Tag Policies are easy to use. I start by logging in to the AWS account that represents my Organization, and ensure that it has Tag policies enabled in the Settings:

Then I click Policies and Tag policies to create a tag policy for my Organization:

I can see my existing policies, and I click Create policy to make another one:

I enter a name and a description for my policy:

Then I specify the tag key, indicate if capitalization must match, and optionally enter a set of allowable values:

I have three options at this point:

Create Policy – Create a policy that advises me (via a report) of any noncompliant resources in the Root, OUs, and accounts that I designate.

Add Tag Key – Add another tag key to the policy.

Prevent Noncompliant Operations – Strengthen the policy so that it enforces the proper use of the tag by blocking noncompliant operations. To do this, I must also choose the desired resource types:

Then I click Create policy, and I am ready to move ahead. I select my policy, and can then attach it to the Root, or to any desired OUs or Accounts:

Checking Policy Compliance
Once I have created and attached a policy, I can visit the Tag policies page in the Resource Groups console to check on compliance:

I can also download a compliance report (CSV format), or request that a fresh one be generated:

Things to Know
Here are a couple of things to keep in mind:

Policy Inheritance – The examples above used the built-in inheritance system: Organization to OUs to accounts. I can also fine-tune the inheritance model in order to add or remove acceptable tag values, or to limit the changes that child policies are allowed to make. To learn more, read How Policy Inheritance Works.

Tag Remediation – As an account administrator, I can use the Resource Groups console to view the effective tag policy (after inheritance) so that I can take action to fix any non-compliant tags.

Tags for New Resources – I can use Org-level Service Control Policies or IAM policies to prevent the creation of new resources that are not tagged in accord with my organization’s internal standards (see Require a Tag Upon Resource Creation for an example policy; a condensed sketch of that pattern appears after this list).

Access – This new feature is available from the AWS Management Console, AWS Command Line Interface (CLI), and through the AWS SDKs.
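The require-a-tag pattern mentioned above, condensed into an SCP sketch that assumes a required CostCenter key, denies ec2:RunInstances whenever the creation request omits the tag:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyRunInstancesWithoutCostCenterTag",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "Null": {
          "aws:RequestTag/CostCenter": "true"
        }
      }
    }
  ]
}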

Available Now
You can use Tag Policies today in all commercial AWS Regions, at no extra charge.

Jeff;

Use IAM to share your AWS resources with groups of AWS accounts in AWS Organizations

Post Syndicated from Michael Switzer original https://aws.amazon.com/blogs/security/iam-share-aws-resources-groups-aws-accounts-aws-organizations/

You can now reference Organizational Units (OUs), which are groups of AWS accounts in AWS Organizations, in AWS Identity and Access Management (IAM) policies, making it easier to define access for your IAM principals (users and roles) to the AWS resources in your organization. AWS Organizations lets you organize your accounts into OUs to align them with your business or security purposes. Now, you can use a new condition key, aws:PrincipalOrgPaths, in your policies to allow or deny access based on a principal’s membership in an OU. This makes it easier than ever to share resources between accounts you own in your AWS environments.

For example, you might have an Amazon S3 bucket you need to share with developers and applications from accounts that are members of a specific OU. To accomplish this, you can specify the aws:PrincipalOrgPaths condition and set the value to the organizational unit ID of the caller in the resource-based policy attached to the bucket. When a principal tries to access the bucket, AWS verifies that their account’s OU matches the one specified in the policy. With this condition, permissions automatically apply when you add accounts to the OU without any additional updates to the policy.

In this post, I introduce the new condition key, and show you how to use it in two examples. In the first example you will see how to use the aws:PrincipalOrgPaths condition key to grant multiple AWS accounts access to a resource, without needing to maintain a list of account IDs in your policy. In the second example, you will see how to add a guardrail to your administrative roles that prevents access to powerful actions unless coming from a protected OU in your organization.

AWS Organizations Concepts

Before I walk through the condition, let’s review some important concepts from AWS Organizations.

AWS Organizations allows you to group a set of AWS accounts into an organization that you can manage centrally. Once the accounts have joined the organization, you can group them into organizational units (OUs), allowing you to set policies that help you meet your security and compliance requirements. You can create multiple OUs within a single organization, and you can create OUs within other OUs to form hierarchical relationships between your accounts. When you create an organization, AWS Organizations creates your first account container automatically. It has a special name, called a root. All OUs you create exist inside the root.

Organizations, roots, and OUs use a different format for their identifiers. You can see the differences in the table below:

Resource             | ID Format                     | Example Value     | Globally Unique
Organization         | o-exampleorgid                | o-p8iu8lkook      | Yes
Root                 | r-examplerootid               | r-tkh7            | No
Organizational Unit  | ou-examplerootid-exampleouid  | ou-tkh7-pbevdy6h  | No

Organization IDs are globally unique, meaning no organizations share Organization IDs. OU and Root IDs are not globally unique. This means another customer’s organization OU may have the same ID as those from your organization. OU and Root IDs are unique within an organization. Therefore, you should always include the organization identifier when specifying an OU to make sure it is unique to your organization.

Control access to resources based on OU

You use condition keys in the condition element of an IAM policy. A condition is an optional IAM policy element you can use to specify circumstances under which the policy grants or denies permission. A condition includes a condition key, operator, and value for the condition.

Condition key          | Description                                             | Operator(s)          | Value(s)
aws:PrincipalOrgPaths  | The paths of the principal’s OU from AWS Organizations  | All string operators | Paths of AWS Organization IDs and organizational unit IDs

The aws:PrincipalOrgPaths condition key is a global condition, meaning you can use it in conjunction with any AWS action. When you use it in the condition element of your IAM policy, it validates the organization, root, and OUs of the principal performing the action on the resource. For example, let’s say a principal was a member of an OU with the id ou-abcd-zzyyxxww inside a root r-abcd in the organization o-1122334455. When the principal makes a request on the resource, its aws:PrincipalOrgPaths value is:

["o-1122334455/r-abcd/ou-abcd-zzyyxxww/"]

The path includes the organization ID to ensure global uniqueness. This ensures only principals from your organization can access your AWS resources. You can use any string operator, such as StringEquals, with the condition. You can also use the wildcard characters (* and ?) when providing a path.

aws:PrincipalOrgPaths is a multi-value condition key. Multi-value keys allow you to provide multiple values in a list format. Here’s a sample condition statement from a policy that uses the key to validate that a principal is from either ou-1 or ou-2:


"Condition":{
	"ForAnyValue:StringLike":{
		"aws:PrincipalOrgPaths":[
		  "o-1122334455/r-abcd/ou-1/",
		  "o-1122334455/r-abcd/ou-2/"
		]
	}
}

For all multi-value condition keys, you must provide the value as a JSON-formatted list as shown above, even if you’re only specifying one value. As shown in the example above, you also must use the ForAnyValue qualifier in your conditions to specify you’re checking membership of one OU path. For more information, see Creating a Condition That Tests Multiple Key Values in the IAM documentation.

In the next section, I’ll go over an example of how to use the new condition key to protect resources in your account from access outside of a given OU.

Example: Grant S3 bucket access to all principals in an OU in your organization

This example demonstrates how you can use the new condition key to share resources with groups of accounts. By placing the accounts into an OU and granting access based on membership, you can grant targeted access without having to list and maintain all the AWS account IDs in your permission policies.

Consider an example where I want to grant my Machine Learning team permissions to access an S3 bucket training-data that contains images that the team will use to train their machine learning models. I’ve set up my organization such that all AWS accounts owned by my Machine Learning team are part of a specific OU with the ID ou-machinelearn. For the purpose of this example, my organization ID is o-myorganization.

For this example, I want to allow users and applications from the Machine Learning OU or any OU beneath it to have permissions to read the training-data S3 bucket. Any other AWS accounts should not have the ability to view the resource.

To grant these permissions, I author an S3 bucket policy for my training-data resource as shown below.


{
	"Version":"2012-10-17",
	"Statement":{
		"Sid":"TrainingDataS3ReadOnly",
		"Effect":"Allow",
		"Principal": "*",
		"Action":"s3:GetObject",
		"Resource":"arn:aws:s3:::training-data/*",
		"Condition":{
			"ForAnyValue:StringLike":{
				"aws:PrincipalOrgPaths":["o-myorganization/*/ou-machinelearn/*"]
			}
		}
	}
}

In the policy above, I assert that principals trying to read the contents of the training-data bucket must be either a member of the OU that corresponds to the ou-machinelearn ID I provided (my Machine Learning OU Identifier), or a member of any OUs that are children of it. For the aws:PrincipalOrgPaths value, I used two asterisk (*) wildcards. I used the first asterisk (*) between my organization ID and my OU ID because OU IDs are unique within my organization. This means specifying the full path is not necessary to select the OU I need. The second asterisk (*), at the end of the path, is used to specify that I want to allow all child OUs to be included in my string comparison. If I didn’t want to include the child OUs, I could remove the wildcard character.

With this policy on the bucket, any principals in the Machine Learning OU may read objects inside the bucket if the user or role has the appropriate S3 permissions. Note that if this policy did not have the condition statement, it would be accessible by any AWS account. As a best practice, AWS recommends only granting access to the principals that need it. As for next steps, I could edit the Principal section of the policy to restrict access to specific principals in my Machine Learning accounts. For more information, see Specifying a Principal in a Policy in the S3 documentation.

Example: Restrict access to an IAM role to only accounts in an OU in my organization

The next example will show how to use aws:PrincipalOrgPaths to add another layer of security to your existing IAM role trust policies, ensuring only members of specific OUs may assume your roles.

For this example, say my company requires that only network security engineers can create or manage AWS Virtual Private Cloud (VPC) resources in my accounts. The network security team has a dedicated OU, ou-netsec, for their workloads. I have the same organization ID as the previous example, o-myorganization.

Each account in my organization has a dedicated IAM role, VPCManager, with the permissions needed to manage VPCs. I want to ensure that only my network security team, who use principals that are tagged as such, has access to the role. To do this, I edited the role trust policy for VPCManager, which defines who can access an IAM role. In this case, I added a condition to the policy to require that anyone assuming the role must come from an account in ou-netsec.

This is the trust policy I created for VPCManager:


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "123456789012",
                    "345678901234",
                    "567890123456"
                ]
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalTag/JobRole": "NetworkAdmin"
                },
                "ForAnyValue:StringLike": {
                    "aws:PrincipalOrgPaths": ["o-myorganization/*/ou-netsec/"]
                }
            }
        }
    ]
}

I started by adding the Effect, Principal, and Action elements to allow principals from three network security accounts to assume the role. To ensure they have the right job role, I added a condition requiring that the JobRole=NetworkAdmin tag be applied to principals before they can assume the role. Finally, as an added layer of security, I added a second condition requiring that anyone assuming the role comes from an account in the network security OU. This final condition also protects against mistakes in the account ID list: even if I accidentally listed an account that is not part of my organization, members of that account can't assume the role, because the account isn't part of ou-netsec.

Though only members of the network security team may assume the role, any principal with sufficient IAM permissions could still modify it. As a next step, I could apply a Service Control Policy (SCP) that protects the role from modification and prevents other roles in the account from modifying VPCs. For more information, see How to use service control policies to set permission guardrails in the AWS Security Blog.

Summary

AWS offers tools to control access for individual principals, accounts, OUs, or entire organizations—this helps you manage permissions at the appropriate scale for your business. You can now use the aws:PrincipalOrgPaths condition key to control access to your resources based on OUs configured in AWS Organizations. For more information about these global condition keys and policy examples, read the IAM documentation.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon Identity and Access Management forum.

Want more AWS Security news? Follow us on Twitter.


Michael Switzer

Mike Switzer is the product manager for the Identity and Access Management service at AWS. He enjoys working directly with customers to identify solutions to their challenges, and using data-driven decision making to drive his work. Outside of work, Mike is an avid cyclist and outdoorsperson. Mike holds a master’s degree in computational mathematics from the University of Washington.

Accept an ANDB Addendum for all accounts within your AWS Organization

Post Syndicated from Adam Star original https://aws.amazon.com/blogs/security/accept-an-andb-addendum-for-all-accounts-within-your-aws-organization/

For customers who use AWS to store or process personal information covered by the Australian Privacy Act 1988, I’m excited to announce that you can now accept a single AWS Australian Notifiable Data Breach Addendum (ANDB Addendum) for all accounts within your AWS Organization. Once accepted, all current and future accounts created or added to your AWS Organization will immediately be covered by the ANDB Addendum.

My team is focused on improving the customer experience by refining the tools used to perform compliance tasks. Previously, if you wanted to designate several AWS accounts as ANDB accounts, you had to sign in to each account individually to accept the ANDB Addendum. Now, an authorized master account user can accept the ANDB Addendum once to automatically designate all existing and future member accounts in the AWS Organization as ANDB accounts. This capability addresses a frequent customer request: to be able to quickly designate multiple ANDB accounts and confirm those accounts are covered under the terms of the ANDB Addendum.

If you have an ANDB Addendum in place already and want to leverage this new capability, a master account user can accept the new AWS Organizations ANDB Addendum in AWS Artifact today. To get started, your organization must use AWS Organizations to manage your accounts, and “all features” need to be enabled. Learn more about creating an organization.

Once you’re using AWS Organizations with all features enabled and you have the necessary user permissions, accepting the AWS Organizations ANDB Addendum takes about two minutes. We’ve created a video that shows you the process, step-by-step.

If your organization prefers to continue managing ANDB accounts individually, you can still do that. It takes less than two minutes to designate a single account as an ANDB account in AWS Artifact. You can watch our video to learn how.

As with all AWS Artifact features, there is no additional cost to use AWS Artifact to review, accept, and manage individual account ANDB Addendums or the new AWS Organizations ANDB Addendum. To learn more about AWS Artifact, please visit the AWS Artifact FAQ page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Adam Star

Adam joined Amazon in 2012 and is a Program Manager on the Security Obligations and Contracts team. He enjoys designing practical solutions to help customers meet a range of global compliance requirements including GDPR, HIPAA, and the European Banking Authority’s Guidelines on Outsourcing Arrangements. Adam lives in Seattle with his wife & daughter. Originally from New York, he’s constantly searching for “real” bagels & pizza. He’s an active member of the Washington State Bar Association and American Homebrewers Association, finding the latter much more successful when attempting to make friends in social situations.

AWS Security Profiles: Greg McConnel, Senior Manager, Security Specialists Team

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-greg-mcconnel-senior-manager-security-specialists-team/

In the weeks leading up to re:Invent 2019, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS nearly seven years. I was previously a Solutions Architect on the Security Specialists Team for the Americas, and I recently took on an interim management position for the team. Once we find a permanent team lead, I’ll become the team’s West Coast manager.

How do you explain your job?

The Security Specialists team has a customer-facing role where we get to talk to customers about security, identity, and compliance on AWS. We help customers adopt the AWS Cloud securely and compliantly. Some of what we do is one-to-one with customers, but we also engage in a lot of one-to-many activities. My team talks to customers about their security concerns, how best to secure their workloads on AWS, and how to use our services alongside partner solutions to improve their security posture. A big part of my role is building content and delivering it to customers. This includes things like white papers, blog posts, hands-on workshops, Well-Architected reviews, and presentations.

What are you currently working on that you’re most excited about?

In the Security Specialist team, the one-to-one engagements we have with customers are primarily short-term and centered around particular questions or topics. Although this can benefit customers a great deal, it’s a reactive model. In order to move to a more proactive stance, we plan to start working with individual customers on a longer-term basis to help them solve particular security challenges or opportunities. This will remove any barriers to working directly with a Security Specialist and allow us to dive deep into the security concerns of our customers.

What’s the most challenging part of your job?

Keeping up with the pace of innovation. Just about everyone who works with AWS experiences this, whether you're a customer or working in the AWS field. AWS is launching so many amazing things all the time that it can be challenging to keep up. The specialist SA role is attractive because the area we cover is somewhat limited. It's a carved-out space within the larger, continuously expanding body of AWS knowledge. So it's a little easier, and it allows me to dive deeper into particular security topics. But even within this carved-out area of security, it takes some dedication to stay current and informed for my customers. I find things like Jeff Barr's AWS News Blog and the AWS Security Blog really valuable. Also, I'm a big fan of hands-on learning, so just playing around with new services, especially by following along with examples in new blog posts, can be an interesting way to keep up.

What’s your favorite part of your job?

Creating content, particularly workshops. For me, hands-on learning is not only an effective way to learn, it’s simply more fun. The process of creating workshops with a team can be rewarding and educational. It’s a creative, interactive process. Getting to see others try out your workshop and provide feedback is so satisfying, especially when you can tell that people now understand new concepts because of the work you did. Watching other people deliver a workshop that you built is also rewarding.

In your opinion, what’s the biggest challenge facing cloud security right now?

AWS allows you to move very fast. One of the primary advantages of cloud computing is the speed and agility it provides. When you’re moving fast though, security can be overlooked a bit. It’s so easy to start building on AWS that customers might not invest the necessary cycles on security, or they might think that doing so will slow them down. The reality, though, is that AWS provides the tools to allow you to stay agile while maintaining—and in many cases improving—your security. Automation and continuous monitoring are the keys here. AWS makes it possible to automate many basic security tasks, like patching. With the right tooling, you also gain the visibility needed to accurately identify and monitor critical assets and data. We provide great solutions for logging and monitoring that are highly integrated with our other services. So it’s quite possible now to move fast and stay secure, and it’s important that you do both.

What security best practices do you recommend to customers?

There are certain services we recommend that customers look into enabling because they're an easy win when it comes to security. These aren't requirements, because there are many great partner solutions that customers can also use, but these are solutions that I'd encourage customers to at least consider:

  • Turn on Amazon GuardDuty, which is a threat detection service that continuously monitors for malicious and unauthorized activity. The benefits it provides versus the effort involved to enable it makes it a pretty easy choice. You literally click a button to turn on GuardDuty, and the service starts analyzing tens of billions of events across a number of AWS data sources.
  • Work with your account team to get a Well-Architected review of your key workloads, especially with a focus on the Security pillar. You can even run well-architected reviews yourself with the AWS Well-Architected Tool. A well-architected review can help you build secure, high-performing, resilient, and efficient infrastructure for your application.
  • Use IAM access advisor to help ensure that all the credentials in your AWS accounts are not overly broad by examining your service last accessed information.
  • Use AWS Organizations for centralized administration and governance of your accounts, and enable all features to take advantage of everything the service offers.
  • Practice the principle of least privilege.

What does cloud security mean to you, personally?

This is the first time in a while that I’ve worked for a company where I actually like the product. I’m constantly surprised by the new services and features we release and what our customers are building on AWS. My background is in security, especially identity, so that’s the area I am most passionate about within AWS. I want people to use AWS because they can build amazing things at a pace that was just not possible before the cloud—but I want people to do all of this securely. Being able to help customers use our services and do so securely keeps me motivated.

You have a passion for building great workshops. What are you and Jesse Fuchs doing to help make workshops at re:Invent better?

Jesse and I have been working on a central site for AWS security workshops. We post a curated list of workshops, and everything on the site has gone through an internal bar raiser review. This is a formal process for reviewing the workshops and other content to make sure they meet certain standards, that they’re up to date, and that they follow a certain format so customers will find them familiar no matter which workshop they go through.

We’re now implementing a version of this bar raiser review for every workshop at re:Invent this year. In the past, there were people who helped review workshops, but there was no formal, independent bar-raising process. Now, every re:Invent workshop will go through it. Eventually, we want to extend this review to other AWS events.

Over time, you’ll see a lot more workshops added to the site. All of these workshops can be delivered either by AWS to the customer or by customers to their own internal teams. It’s all open source, too. Finally, we’ll keep the workshops up to date as new features are added.

The workshop you and Jesse Fuchs will host at re:Invent promises that attendees will build an end-to-end functional app with a secure identity provider. How can you build such an app in such a relatively short time?

I know it sounds like a tall order, but the workshop is designed to be completed in the allotted time. And since the content will be up on GitHub, you can also work on it afterwards on your own. It’s a level 400 workshop, so we’re expecting people to come into the session with a certain level of familiarity with some of these services. The session is two and a half hours, and a big part of the workshop is the one-on-one interaction with the facilitators, who are experts in these fields. The session starts with an initial presentation and detailed instructions for doing the workshop; the majority of the time is then spent hands-on, with facilitators sitting down with attendees to address their questions as they go. So if attendees feel like this is a lot to take in, they should keep in mind that a lot of the learning actually comes out of that facilitation. I don’t think we stress enough the value that customers get from the facilitation that occurs in workshops and how much learning happens during that process.

What else should we know about your workshop?

One of the things I concentrate on is identity, and that’s not just Identity and Access Management. It’s identity across the board. If you saw Quint Van Deman’s session last year at re:Invent, he used an analogy about identity being a cake with three layers. There’s the platform layer on the bottom, which includes access to the console. There’s the infrastructure layer, which is identity for services like EC2 and RDS, for example. And then there’s the application identity layer on the top. That’s sometimes the layer that customers are most interested in because it’s related to the applications they’re building. Our workshop is primarily about that top layer. It’s one of the more difficult topics to cover, but again, an interesting one for customers.

What is your advice to first-time attendees at re:Invent?

Take advantage of your time at re:Invent and plan your week because there’s really no fluff. It’s not about marketing, it’s about learning. re:Invent is an educational event first and foremost. Also, take advantage of meeting people because it’s also a great networking event. Having conversations with people from companies that are doing similar things to what you’re doing on AWS can be very enlightening.

You like obscure restaurants and even more obscure movies. Can you share some favorites?

From a movie standpoint, I like thrillers and sci-fi, especially unusual sci-fi movies that are somewhat out of the mainstream. I do like blockbusters, but I definitely like to find great independent movies. I recently saw a movie called Coherence that was really interesting; it’s about what would happen if a group of friends having a dinner party could move between parallel or alternate universes. A few others I’d recommend to people who share my taste for the obscure, and that are maybe even a little thought-provoking, are Perfect Sense and The Quiet Earth.

From a restaurant standpoint, I am always looking for new Asian food restaurants, especially Vietnamese and Thai food. There are a lot of great ones hidden here in Los Angeles.
 
Want more AWS Security news? Follow us on Twitter.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Greg McConnel

In his nearly 7 years at AWS, Greg has been in a number of different roles, which gives him a unique viewpoint on this company. He started out as a TAM in Enterprise Support, moved to a Gaming Specialist SA role, and then found a fitting home in AWS Security as a Security Specialist SA focusing on identity. Greg will continue to contribute to the team in his new role as a manager. Greg currently lives in LA where he loves to kayak, go on road trips, and in general, enjoy the sunny climate.

New! Set permission guardrails confidently by using IAM access advisor to analyze service-last-accessed information for accounts in your AWS organization

Post Syndicated from Ujjwal Pugalia original https://aws.amazon.com/blogs/security/set-permission-guardrails-using-iam-access-advisor-analyze-service-last-accessed-information-aws-organization/

You can use AWS Organizations to centrally govern and manage multiple accounts as you scale your AWS workloads. With AWS Organizations, central security administrators can use service control policies (SCPs) to establish permission guardrails that all IAM users and roles in the organization’s accounts adhere to. When teams and projects are just getting started, administrators may allow access to a broader range of AWS services to inspire innovation and agility. However, as developers and applications settle into common access patterns, administrators need to set permission guardrails to remove permissions for services that have not or should not be accessed by their accounts. Whether you’re just getting started with SCPs or have existing SCPs, you can now use AWS Identity and Access Management (IAM) access advisor to help you restrict permissions confidently.

IAM access advisor uses data analysis to help you set permission guardrails confidently by providing you with service-last-accessed information for accounts in your organization. By analyzing last-accessed information, you can determine the services not used by IAM users and roles. You can implement permission guardrails using SCPs that restrict access to those services. For example, you can identify services not accessed in an organizational unit (OU) for the last 90 days, create an SCP that denies access to these services, and attach it to the OU to restrict access for all IAM users and roles across the accounts in the OU. You can view service-last-accessed information for your accounts, OUs, and your organization using the IAM console in the account you used to create your organization. You can access this information programmatically using IAM access advisor APIs with the AWS Command Line Interface (AWS CLI) or a programmatic client.
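
As an illustration of the programmatic path, retrieving a report is a two-step flow: GenerateOrganizationsAccessReport starts an asynchronous report job for an entity path, and GetOrganizationsAccessReport returns the results for that job ID. A trimmed sketch of a single service entry in the response is shown below; the values are placeholders and the field set is abbreviated:

{
  "ServiceName": "Amazon DynamoDB",
  "ServiceNamespace": "dynamodb",
  "Region": "us-east-1",
  "LastAuthenticatedTime": "2019-06-15T14:31:00Z",
  "TotalAuthenticatedEntities": 3
}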

In this post, I first review the service-last-accessed information provided by IAM access advisor using the IAM console. Next, I walk through an example to demonstrate how you can use this information to remove permissions for services not accessed by IAM users and roles within your production OU by creating an SCP.

Use IAM access advisor to view service-last-accessed information in the AWS Management Console

Access advisor provides an access report that displays a list of services and the last-accessed timestamps for when an IAM principal accessed each service. To view the access report in the console, sign in to the IAM console using the account you used to create your organization. Additionally, you need to enable SCPs on your organization root to view the access report. You can view the service-last-accessed information in two ways. First, you can use the Organization activity view to review the service-last-accessed information for an organizational entity such as an account or OU. Second, you can use the SCP view to review the service-last-accessed information for services allowed by existing SCPs attached to your organizational entities.

The Organization activity view lists your OUs and accounts. You can select an OU or account to view the services that the entity is allowed to access and the service-last-accessed information for those services. This tells you which services have not been accessed by that organizational entity. Using this information, you can remove permissions for these services by creating a new SCP and attaching it to the organizational entity, or by updating an existing SCP attached to the entity.

The SCP view lists all the SCPs in your organization. You can select an SCP to view the services it allows and the service-last-accessed information for those services. The service-last-accessed information is the last-accessed timestamp across all the organizational entities that the SCP is attached to. This tells you which services the SCP allows but that have not been accessed. Using this information, you can refine your existing permission guardrails to remove permissions for unused services from your existing SCPs.

Figure 1 shows an example of the access report for an OU. You can see the service-last-accessed information for all services that IAM users and roles can access in all the accounts in ProductionOU. You can see that services such as AWS Ground Station and Amazon GameLift have not been used in the last year. You can also see that Amazon DynamoDB was last accessed in account Application1 10 days ago.
 

Figure 1: An example access report for an OU

Now that I’ve described how to view service-last-accessed information, I will walk through an example.

Example: Restrict access to services not accessed in production by creating an SCP

For this example, assume ExampleCorp uses AWS Organizations to organize their development, test, and production environments into organizational units (OUs). Alice is a central security administrator responsible for managing the accounts in the production OU for ExampleCorp. She wants to ensure that her production OU, called ProductionOU, has permissions to only the services that are required to run existing workloads. Currently, Alice hasn't set any permission guardrails on her production OU. I will show you how you can help Alice review the service-last-accessed information for her production OU and confidently set a permission guardrail, using an SCP, to restrict access to services not accessed by ExampleCorp developers and applications in production.

Prerequisites

  1. Ensure that the SCP policy type is enabled for the organization. If you haven’t enabled SCPs, you can enable it for your organization root by following the steps mentioned in Enabling and Disabling a Policy Type on a Root.
  2. Ensure that your IAM roles or users have appropriate permissions to view the access report. You can grant these permissions by attaching the IAMAccessAdvisorReadOnly managed policy.

How to review service-last-accessed information for ProductionOU in the IAM console

In this section, you’ll review the service-last-accessed information using IAM access advisor to determine the services that have not been accessed across all the accounts in ProductionOU.

  1. Start by signing in to the IAM console in the account that you used to create the organization.
  2. In the left navigation pane, under the AWS Organizations section, select the Organization activity view.

    Note: Enabling the SCP policy type does not set any permission guardrails for your organization unless you start attaching SCPs to accounts and OUs in your organization.

  3. In the Organization activity view, select ProductionOU from the organization structure displayed on the console so you can review the service last accessed information across all accounts in that OU.
     
    Figure 2: Select ‘ProductionOU’ from the organizational structure

  4. Selecting ProductionOU opens the Details and activity tab, which displays the access report for this OU. In this example, I have no permission guardrail set on ProductionOU, so the default FullAWSAccess SCP is attached, allowing ProductionOU to have access to all services. The access report displays all AWS services along with their last-accessed timestamps across accounts in the OU.
     
    Figure 3: The service access report

  5. Review the access report for ProductionOU to determine services that have not been accessed across accounts in this OU. In this example, there are multiple accounts in ProductionOU. Based on the report, you can identify that services Ground Station and GameLift have not been used in 365 days. Using this information, you can confidently set a permission guardrail by creating and attaching a new SCP that removes permissions for these services from ProductionOU. You can use a different time period, such as 90 days or 6 months, to determine if a service is not accessed based on your preference.
     
    Figure 4: Amazon GameLift and AWS Ground Station are not accessed

Create and attach a new SCP to ProductionOU in the AWS Organizations console

In this section, you’ll use the access insights you gained from using IAM access advisor to create and attach a new SCP to ProductionOU that removes permissions to Ground Station and GameLift.

  1. In the AWS Organizations console, select the Policies tab, and then select Create policy.
  2. In the Create new policy window, give your policy a name and description that will help you quickly identify it. For this example, I use the following name and description.
    • Name: ProductionGuardrail
    • Description: Restricts permissions to services not accessed in ProductionOU.
  3. The policy editor provides you with an empty statement in the text editor to get started. Position your cursor inside the policy statement. The editor detects the content of the policy statement you selected, and allows you to add relevant Actions, Resources, and Conditions to it using the left panel.
     
    Figure 5: SCP editor tool

  4. Next, add the services you want to restrict. Using the left panel, select the Ground Station and GameLift services. Denying access to services using SCPs is a powerful action, so be sure the services are not in use before you do it. From the service-last-accessed information I reviewed in step 5 of the previous section, I know these services haven't been used for more than 365 days, so it is safe to remove access to them. In this example, I'm not adding any resource or condition to my policy statement.
     
    Figure 6: Add the services you want to restrict

  5. Next, use the Resource policy element, which allows you to provide specific resources. In this example, I select the resource type as All Resources.

    Figure 7: Select resource type as “All Resources”

  6. Select the Create Policy button to create your policy. You can see the new policy in the Policies tab.

    Figure 8: The new policy on the “Policies” tab

  7. Finally, attach the policy to ProductionOU where you want to apply the permission guardrail.
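
The editor generates the policy document as you make these selections. A sketch of what the finished ProductionGuardrail SCP might look like is shown below; the Sid and the wildcard action lists are illustrative, so confirm the statement the editor generates before you save:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ProductionGuardrail",
      "Effect": "Deny",
      "Action": [
        "gamelift:*",
        "groundstation:*"
      ],
      "Resource": "*"
    }
  ]
}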

Alice can now review the service-last-accessed information for the ProductionOU and set permission guardrails for her production accounts. This ensures that the permission guardrail Alice set for her production accounts provides permissions to only the services that are required to run existing workloads.

Summary

In this post, I reviewed how access advisor provides service-last-accessed information for AWS organizations. Then, I demonstrated how you can use the Organization activity view to review service-last-accessed information and set permission guardrails to restrict access only to the services that are required to run existing workloads. You can also retrieve service-last-accessed information programmatically. To learn more, visit the documentation for retrieving service last accessed information using APIs.

If you have comments about using IAM access advisor for your organization, submit them in the Comments section below. For questions related to reviewing the service last accessed information through the console or programmatically, start a thread on the IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Ujjwal Pugalia

Ujjwal is the product manager for the console sign-in and sign-up experience at AWS. He enjoys working in the customer-centric environment at Amazon because it aligns with his prior experience building an enterprise marketplace. Outside of work, Ujjwal enjoys watching crime dramas on Netflix. He holds an MBA from Carnegie Mellon University (CMU) in Pittsburgh.

How to use service control policies to set permission guardrails across accounts in your AWS Organization

Post Syndicated from Michael Switzer original https://aws.amazon.com/blogs/security/how-to-use-service-control-policies-to-set-permission-guardrails-across-accounts-in-your-aws-organization/

AWS Organizations provides central governance and management for multiple accounts. Central security administrators use service control policies (SCPs) with AWS Organizations to establish controls that all IAM principals (users and roles) adhere to. Now, you can use SCPs to set permission guardrails with the fine-grained control supported in the AWS Identity and Access Management (IAM) policy language. This makes it easier for you to fine-tune policies to meet the precise requirements of your organization’s governance rules.

Now, using SCPs, you can specify Conditions, Resources, and NotAction to deny access across accounts in your organization or organizational unit. For example, you can use SCPs to restrict access to specific AWS Regions, or prevent your IAM principals from deleting common resources, such as an IAM role used for your central administrators. You can also define exceptions to your governance controls, restricting service actions for all IAM entities (users, roles, and root) in the account except a specific administrator role.
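
To make the Region example concrete, here is a minimal sketch of such a guardrail; the approved Regions and the global services exempted through NotAction are assumptions you would adapt to your own organization:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideApprovedRegions",
      "Effect": "Deny",
      "NotAction": [
        "iam:*",
        "organizations:*",
        "sts:*",
        "support:*"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": [
            "us-east-1",
            "eu-west-1"
          ]
        }
      }
    }
  ]
}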

To implement permission guardrails using SCPs, you can use the new policy editor in the AWS Organizations console. This editor makes it easier to author SCPs by guiding you to add actions, resources, and conditions. In this post, I review SCPs, walk through the new capabilities, and show how to construct an example SCP you can use in your organization today.

Overview of Service Control Policy concepts

Before I walk through some examples, I’ll review a few features of SCPs and AWS Organizations.

SCPs offer central access controls for all IAM entities in your accounts. You can use them to enforce the permissions you want everyone in your business to follow. Using SCPs, you can give your developers more freedom to manage their own permissions because you know they can only operate within the boundaries you define.

You create and apply SCPs through AWS Organizations. When you create an organization, AWS Organizations automatically creates a root, which forms the parent container for all the accounts in your organization. Inside the root, you can group accounts in your organization into organizational units (OUs) to simplify management of these accounts. You can create multiple OUs within a single organization, and you can create OUs within other OUs to form a hierarchical structure. You can attach SCPs to the organization root, OUs, and individual accounts. SCPs attached to the root and OUs apply to all OUs and accounts inside of them.

SCPs use the AWS Identity and Access Management (IAM) policy language; however, they do not grant permissions. Instead, SCPs enable you to set permission guardrails by defining the maximum available permissions for IAM entities in an account. If an SCP denies an action for an account, none of the entities in the account can take that action, even if their IAM permissions allow them to do so. The guardrails set in SCPs apply to all IAM entities in the account, including all users, roles, and the account root user.
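
To make the guardrail behavior concrete, consider the one-statement sketch below. With this SCP attached to an account, no principal in that account can delete S3 buckets, not even a user with the AdministratorAccess managed policy, because the SCP caps what IAM permissions can grant:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyBucketDeletion",
      "Effect": "Deny",
      "Action": "s3:DeleteBucket",
      "Resource": "*"
    }
  ]
}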

Policy Elements Available in SCPs

The list below summarizes the IAM policy language elements available in SCPs, with the statement effects (Allow, Deny) each element supports shown in parentheses. You can read more about the different IAM policy elements in the IAM JSON Policy Reference.

  • Statement: Main element for a policy. Each policy can have multiple statements. (Allow, Deny)
  • Sid: (Optional) Friendly name for the statement. (Allow, Deny)
  • Effect: Defines whether an SCP statement allows or denies actions in an account. (Allow, Deny)
  • Action: Lists the AWS actions the SCP applies to. (Allow, Deny)
  • NotAction (new): (Optional) Lists the AWS actions exempt from the SCP. Used in place of the Action element. (Deny)
  • Resource (new): Lists the AWS resources the SCP applies to. (Deny)
  • Condition (new): (Optional) Specifies conditions for when the statement is in effect. (Deny)

Note: Some policy elements are only available in SCPs that deny actions.

You can use the new policy elements in new or existing SCPs in your organization. In the next section, I use the new elements to create an SCP using the AWS Organizations console.

Create an SCP in the AWS Organizations console

In this section, you’ll create an SCP that restricts IAM principals in accounts from making changes to a common administrative IAM role created in all accounts in your organization. Imagine your central security team uses these roles to audit and make changes to AWS settings. For the purposes of this example, you have a role in all your accounts named AdminRole that has the AdministratorAccess managed policy attached to it. Using an SCP, you can restrict all IAM entities in the account from modifying AdminRole or its associated permissions. This helps you ensure this role is always available to your central security team. Here are the steps to create and attach this SCP.

  1. Ensure you’ve enabled all features in AWS Organizations and SCPs through the AWS Organizations console.
  2. In the AWS Organizations console, select the Policies tab, and then select Create policy.

    Figure 1: Select “Create policy” on the “Policies” tab

  3. Give your policy a name and description that will help you quickly identify it. For this example, I use the following name and description.
    • Name: DenyChangesToAdminRole
    • Description: Prevents all IAM principals from making changes to AdminRole.

     

    Figure 2: Give the policy a name and description

  4. The policy editor provides you with an empty statement in the text editor to get started. Position your cursor inside the policy statement. The editor detects the content of the policy statement you selected, and allows you to add relevant Actions, Resources, and Conditions to it using the left panel.

    Figure 3: SCP editor tool

  5. Change the Statement ID to describe what the statement does. For this example, I reused the name of the policy, DenyChangesToAdminRole, because this policy has only one statement.

    Figure 4: Change the Statement ID

  6. Next, add the actions you want to restrict. Using the left panel, select the IAM service. You’ll see a list of actions. To learn about the details of each action, you can hover over the action with your mouse. For this example, we want to allow principals in the account to view the role, but restrict any actions that could modify or delete it. We use the new NotAction policy element to deny all actions except the view actions for the role. Select the following view actions from the list:
    • GetContextKeysForPrincipalPolicy
    • GetRole
    • GetRolePolicy
    • ListAttachedRolePolicies
    • ListInstanceProfilesForRole
    • ListRolePolicies
    • ListRoleTags
    • SimulatePrincipalPolicy
  7. Now position your cursor at the Action element and change it to NotAction. After you perform the steps above, your policy should look like the one below.

    Figure 5: An example policy

  8. Next, apply these controls to only the AdminRole role in your accounts. To do this, use the Resource policy element, which now allows you to provide specific resources.
      1. On the left, near the bottom, select the Add Resources button.
      2. In the prompt, select the IAM service from the dropdown menu.
      3. Select the role as the resource type, and then type “arn:aws:iam::*:role/AdminRole” in the resource ARN prompt.
      4. Select Add resource.

    Note: The AdminRole has a common name in all accounts, but the account IDs will be different for each individual role. To simplify the policy statement, use the * wildcard in place of the account ID to account for all roles with this name regardless of the account.

  9. Your policy should look like this:
    
    {    
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DenyChangesToAdminRole",
          "Effect": "Deny",
          "NotAction": [
            "iam:GetContextKeysForPrincipalPolicy",
            "iam:GetRole",
            "iam:GetRolePolicy",
            "iam:ListAttachedRolePolicies",
            "iam:ListInstanceProfilesForRole",
            "iam:ListRolePolicies",
            "iam:ListRoleTags",
            "iam:SimulatePrincipalPolicy"
          ],
          "Resource": [
            "arn:aws:iam::*:role/AdminRole"
          ]
        }
      ]
    }
    

  10. Select the Save changes button to create your policy. You can see the new policy in the Policies tab.

    Figure 6: The new policy on the “Policies” tab

  11. Finally, attach the policy to the AWS account where you want to apply the permissions.

When you attach the SCP, it prevents changes to the role’s configuration. The central security team that uses the role might want to make changes later on, so you may want to allow the role itself to modify the role’s configuration. I’ll demonstrate how to do this in the next section.

Grant an exception to your SCP for an administrator role

In the previous section, you created an SCP that prevents all principals from modifying or deleting the AdminRole IAM role. Administrators from your central security team may need to make changes to this role in your organization without lifting the protection of the SCP. In this next example, I build on the previous policy to show how to exclude the AdminRole itself from the SCP guardrail.

  1. In the AWS Organizations console, select the Policies tab, select the DenyChangesToAdminRole policy, and then select Policy editor.
  2. Select Add Condition. You’ll use the new Condition element of the policy, using the aws:PrincipalARN global condition key, to specify the role you want to exclude from the policy restrictions.
  3. The aws:PrincipalARN condition key returns the ARN of the principal making the request. You want to ignore the policy statement if the requesting principal is the AdminRole. Using the StringNotLike operator, assert that this SCP is in effect if the principal ARN is not the AdminRole. To do this, fill in the following values for your condition.
    1. Condition key: aws:PrincipalARN
    2. Qualifier: Default
    3. Operator: StringNotLike
    4. Value: arn:aws:iam::*:role/AdminRole
  4. Select Add condition. The following policy will appear in the edit window.
    
    {    
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DenyChangesToAdminRole",
          "Effect": "Deny",
          "NotAction": [
            "iam:GetContextKeysForPrincipalPolicy",
            "iam:GetRole",
            "iam:GetRolePolicy",
            "iam:ListAttachedRolePolicies",
            "iam:ListInstanceProfilesForRole",
            "iam:ListRolePolicies",
            "iam:ListRoleTags"
          ],
          "Resource": [
            "arn:aws:iam::*:role/AdminRole"
          ],
          "Condition": {
            "StringNotLike": {
              "aws:PrincipalARN":"arn:aws:iam::*:role/AdminRole"
            }
          }
        }
      ]
    }
    
    

  5. After you validate the policy, select Save. If you already attached the policy in your organization, the changes will immediately take effect.

Now, the SCP denies all principals in the account from updating or deleting the AdminRole, except the AdminRole itself.

Next steps

You can now use SCPs to restrict access to specific resources, or define conditions for when SCPs are in effect. You can use the new functionality in your existing SCPs today, or create new permission guardrails for your organization. I walked through one example in this blog post, and there are additional use cases for SCPs that you can explore in the documentation. Below are a few that we have heard from customers that you may want to look at.

  • Account may only operate in certain AWS regions (example)
  • Account may only deploy certain EC2 instance types (example)
  • Account requires MFA to be enabled before taking an action (example)

You can start applying SCPs using the AWS Organizations console, CLI, or API. See the Service Control Policies Documentation or the AWS Organizations Forums for more information about SCPs, how to use them in your organization, and additional examples.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Michael Switzer

Mike is the product manager for the Identity and Access Management service at AWS. He enjoys working directly with customers to identify solutions to their challenges, and using data-driven decision making to drive his work. Outside of work, Mike is an avid cyclist and outdoorsperson. He holds a master’s degree in computational mathematics from the University of Washington.

How to create and manage users within AWS Single Sign-On

Post Syndicated from Vijay Sharma original https://aws.amazon.com/blogs/security/how-to-create-and-manage-users-within-aws-sso/

AWS Single Sign-On (AWS SSO) is a cloud service that allows you to grant your users access to AWS resources, such as Amazon EC2 instances, across multiple AWS accounts. By default, AWS SSO now provides a directory that you can use to create users, organize them in groups, and set permissions across those groups. You can also grant the users that you create in AWS SSO permissions to applications such as Salesforce, Box, and Office 365. AWS SSO and its directory are available at no additional cost to you.

A directory is a key building block that allows you to manage the users to whom you want to grant access to AWS resources and applications. AWS Identity and Access Management (IAM) provides a way to create users that can be used to access AWS resources within one AWS account. However, many businesses prefer an approach that enables users to sign in once with a single credential and access multiple AWS accounts and applications. You can now create your users centrally in AWS SSO and manage user access to all your AWS accounts and applications. Your users sign in to a user portal with a single set of credentials configured in AWS SSO, allowing them to access all of their assigned accounts and applications in a single place.

Note: If you manage your users in a Microsoft Active Directory (Microsoft AD) directory, AWS SSO already provides you with an option to connect to a Microsoft AD directory. By connecting your Microsoft AD directory once with AWS SSO, you can assign permissions for AWS accounts and applications directly to your users by easily looking up users and groups from your Microsoft AD directory. Your users can then use their existing Microsoft AD credentials to sign into the AWS SSO user portal and access their assigned accounts and applications in a single place. Customers who manage their users in an existing Lightweight Directory Access Protocol (LDAP) directory or through a cloud identity provider such as Microsoft Azure AD can continue to use IAM federation to enable their users’ access to AWS resources.

How to create users and groups in AWS SSO

You can create users in AWS SSO by configuring their email address and name. When you create a user, AWS SSO sends an email to the user by default so that they can set their own password. Your user will use their email address and a password they configure in AWS SSO to sign into the user portal and access all of their assigned accounts and applications in a single place.

You can also add the users that you create in AWS SSO to groups you create in AWS SSO. In addition, you can create permission sets that define permitted actions on an AWS resource and assign them to your users and groups. For example, you can grant the DevOps group permissions to your production AWS accounts. When you add users to the DevOps group, they get access to your production AWS accounts automatically.

In this post, I will show you how to create users and groups in AWS SSO, how to create permission sets, how to assign your groups and users to permission sets and AWS accounts, and how your users can sign into the AWS SSO user portal to access AWS accounts. To learn more about how to grant users that you create in AWS SSO permissions to business applications such as Office 365 and Salesforce, see Manage SSO to Your Applications.

Walk-through prerequisites

For this walk-through, I assume the following:

  • You use AWS Organizations to manage your AWS accounts, with all features enabled.
  • You have enabled AWS SSO for your organization.

Overview

To illustrate how to add users in AWS SSO and how to grant permissions to multiple AWS accounts, imagine that you’re the IT manager for a company, Example.com, that wants to make it easy for its users to access resources in multiple AWS accounts. Example.com has five AWS accounts: a master account (called MasterAcct), two developer accounts (DevAccount1 and DevAccount2), and two production accounts (ProdAccount1 and ProdAccount2). Example.com uses AWS Organizations to manage these accounts and has already enabled AWS SSO.

Example.com has two developers, Martha and Richard, who need full access to Amazon EC2 and Amazon S3 in the developer accounts (DevAccount1 and DevAccount2) and read-only access to EC2 and S3 resources in the production accounts (ProdAccount1 and ProdAccount2).

The following diagram illustrates how you can grant Martha and Richard permissions to the developer and production accounts in four steps:

  1. Add users and groups in AWS SSO: Add users Martha and Richard in AWS SSO by configuring their names and email addresses. Add a group called Developers in AWS SSO and add Martha and Richard to the Developers group.
  2. Create permission sets: Create two permission sets. In the first permission set, include policies that give full access to Amazon EC2 and Amazon S3. In second permission set, include policies that give read-only access to Amazon EC2 and Amazon S3.
  3. Assign groups to accounts and permission sets: Assign the Developers group to your developer accounts and assign the permission set that gives full access to Amazon EC2 and Amazon S3. Assign the Developers group to your production accounts, too, and assign the permission set that gives read-only access to Amazon EC2 and Amazon S3. Martha and Richard now have full access to Amazon EC2 and Amazon S3 in the developer accounts and read-only access in the production accounts.
  4. Users sign into the User Portal to access accounts: Martha and Richard receive email from AWS to set their passwords with AWS SSO. Martha and Richard can now sign into the AWS SSO User Portal using their email addresses and the passwords they set with AWS SSO, allowing them to access their assigned AWS accounts.
Figure 1: Architecture diagram

Step 1: Add users and groups in AWS SSO

To add users in AWS SSO, navigate to the AWS SSO Console. Then, follow the steps below to add Martha as a user, to create a group called Developers, and to add Martha to the Developers group in AWS SSO.

  1. In the AWS SSO Dashboard, choose Manage your directory to navigate to the Directory tab.
    Figure 2: Navigating to the “Manage your directory” page

  2. By default, AWS SSO provides you a directory that you can use to manage users and groups in AWS SSO. To add a user in AWS SSO, choose Add user. If you previously connected a Microsoft AD directory with AWS SSO, you can switch to using the directory that AWS SSO now provides by default by following the steps in Change Directory.
    Figure 3: Adding new users to your directory

  3. On the Add User page, enter an email address, first name, and last name for the user, then create a display name. In this example, you’re adding “Martha Rivera” as a user. For the password, choose Send an email to the user with password instructions. This allows users to set their own passwords.

    Optionally, you can also set a mobile phone number and add additional user attributes.

    Figure 4: Adding user details

  4. Next, you’re ready to add the user to groups. First, you need to create a group. Later, in Step 3, you can grant your group permissions to an AWS account so that any users added to the group will inherit the group’s permissions automatically. In this example, you will create a group called Developers and add Martha to the group. To do so, from the Add user to groups page, choose Create group.
    Figure 5: Creating a new group

  5. In the Create group window, title your group by filling out the Group name field. For this example, enter Developers. Optionally, you can also enter a description of the group in the Description field. Choose Create to create the group.
    Figure 6: Adding a name and description to your new group

  6. On the Add users to group page, check the box next to the group you just created, and then choose Add user. Following this process will allow you to add Martha to the Developers group.
    Figure 7: Adding a user to your new group

You’ve successfully created the user Martha and added her to the Developers group. You can repeat sub-steps 2, 3, and 6 above to create more users and add them to the group. This is the process you should follow to create the user Richard and add him to the Developers group.

Next, you’ll grant the Developers group permissions to AWS resources within multiple AWS accounts. To follow along, you’ll first need to create permission sets.

Step 2: Create permission sets

To grant user permissions to AWS resources, you must create permission sets. A permission set is a collection of administrator-defined policies that AWS SSO uses to determine a user’s permissions for any given AWS account. Permission sets can contain either AWS managed policies or custom policies that are stored in AWS SSO. Policies contain statements that represent individual access controls (allow or deny) for various tasks. This determines what tasks users can or cannot perform within the AWS account. To learn more about permission sets, see Permission Sets.
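
As an illustration of the custom-policy option, a read-only permission set could store a document like the sketch below. The action lists are illustrative rather than a complete read-only policy, and the walkthrough that follows uses AWS managed policies instead:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EC2AndS3ReadOnlySketch",
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "*"
    }
  ]
}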

For this use case, you’ll create two permission sets: 1) EC2AndS3FullAccess, which has the AmazonEC2FullAccess and AmazonS3FullAccess managed policies attached, and 2) EC2AndS3ReadAccess, which has the AmazonEC2ReadOnlyAccess and AmazonS3ReadOnlyAccess managed policies attached. Later, in Step 3, you’ll assign groups to these permission sets and AWS accounts so that your users have access to these resources. To learn more about creating permission sets with different levels of access, see Create Permission Set.

Follow the steps below to create permission sets:

  1. Navigate to the AWS SSO Console and choose AWS accounts in the left-hand navigation menu.
  2. Switch to the Permission sets tab on the AWS Accounts page, and then choose Create permissions set.
    Figure 8: Creating a permission set

  3. On the Create new permissions set page, choose Create a custom permission set. To learn more about choosing between an existing job function policy and a custom permission set, see Create Permission Set.
    Figure 9: Customizing a permission set

  4. Enter EC2AndS3FullAccess in the Name field and choose Attach AWS managed policies. Then choose AmazonEC2FullAccess and AmazonS3FullAccess. Choose Create to create the permission set.
    Figure 10: Attaching AWS managed policies to your permission set

You’ve successfully created a permission set. You can use the steps above to create another permission set, called EC2AndS3ReadAccess, by attaching the AmazonEC2ReadOnlyAccess and AmazonS3ReadOnlyAccess managed policies. Now you’re ready to assign your groups to accounts and permission sets.

Step 3: Assign groups to accounts and permission sets

In this step, you’ll assign your Developers group full access to Amazon EC2 and Amazon S3 in the developer accounts and read-only access to these resources in the production accounts. To do so, you’ll assign the Developers group to the EC2AndS3FullAccess permission set and to the two developer accounts (DevAccount1 and DevAccount2). Similarly, you’ll assign the Developers group to the EC2AndS3ReadAccess permission set and to the production AWS accounts (ProdAccount1 and ProdAccount2).

Follow the steps below to assign the Developers group to the EC2AndS3FullAccess permission set and developer accounts (DevAccount1 and DevAccount2). To learn more about how to manage access to your AWS accounts, see Manage SSO to Your AWS Accounts.

  1. Navigate to the AWS SSO Console and choose AWS Accounts in the left-hand navigation menu.
  2. Switch to the AWS organization tab and choose the accounts to which you want to assign your group. For this example, select accounts DevAccount1 and DevAccount2 from the list of AWS accounts. Next, choose Assign users.
    Figure 11: Assigning users to your accounts

  3. On the Select users and groups page, type the name of the group you want to add into the search box and choose Search. For this example, you will be looking for the group called Developers. Check the box next to the correct group and choose Next: Permission Sets.
    Figure 12: Setting permissions for the “Developers” group

  4. On the Select permissions sets page, select the permission sets that you want to assign to your group. For this use case, you’ll select the EC2AndS3FullAccess permission set. Then choose Finish.
    Figure 13: Choosing permission sets

You’ve successfully granted users in the Developers group access to accounts DevAccount1 and DevAccount2, with full access to Amazon EC2 and Amazon S3.

You can follow the same steps above to grant users in the Developers group access to accounts ProdAccount1 and ProdAccount2 with the permissions in the EC2AndS3ReadAccess permission set. This will grant the users in the Developers group read-only access to Amazon EC2 and Amazon S3 in the production accounts.

Figure 14: Notification of successful account configuration

Step 4: Users sign into User Portal to access accounts

Your users can now sign into the AWS SSO User Portal to manage resources in their assigned AWS accounts. The user portal provides your users with single sign-on access to all their assigned accounts and business applications. From the user portal, your users can sign into multiple AWS accounts by choosing the AWS account icon in the portal and selecting the account that they want to access.

You can follow the steps below to see how Martha signs into the user portal to access her assigned AWS accounts.

  1. When you added Martha as a user in Step 1, you selected the option Send the user an email with password setup instructions. AWS SSO sent instructions to set a password to Martha at the email that you configured when creating the user. This is the email that Martha received:
    Figure 15: AWS SSO password configuration email

  2. To set her password, Martha will select Accept invitation in the email that she received from AWS SSO. Selecting Accept invitation will take Martha to a page where she can set her password. After Martha sets her password, she can navigate to the User Portal.
    Figure 16: User Portal sign-in

  3. In the User Portal, Martha can select the AWS Account icon to view all the AWS accounts to which she has permissions.
    Figure 17: View of AWS Account icon from User Portal

  4. Martha can now see the developer and production accounts that you granted her permissions to in the previous steps. For each account, she can also see the list of roles that she can assume within that account. For example, for DevAccount1 and DevAccount2, Martha can assume the EC2AndS3FullAccess role, which gives her full access to manage Amazon EC2 and Amazon S3. Similarly, for ProdAccount1 and ProdAccount2, she can assume the EC2AndS3ReadAccess role, which gives her read-only access to Amazon EC2 and Amazon S3. Martha can select an account and choose Management Console next to the role she wants to assume to sign into the AWS Management Console and manage AWS resources. To switch to a different account, she returns to the User Portal and selects another account. From the User Portal, Martha can also get temporary security credentials for short-term access to resources in an AWS account using the AWS Command Line Interface (CLI); a brief sketch of using those credentials from code follows these steps. To learn more, see How to Get Credentials of an IAM Role for Use with CLI Access to an AWS Account.
    Figure 18: Switching accounts from the User Portal

  5. Martha bookmarks the user portal URL in her browser so that she can quickly access the user portal the next time she wants to access AWS accounts.
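
To illustrate what that short-term access looks like from code: current versions of the AWS CLI and SDKs (support that arrived after this post was published) can source temporary credentials from an SSO-backed named profile in ~/.aws/config. The profile name below is hypothetical.

    import boto3

    # Minimal sketch: after running `aws sso login --profile dev-full`
    # (hypothetical profile name), boto3 automatically picks up the cached
    # temporary credentials for that profile.
    session = boto3.Session(profile_name="dev-full")
    ec2 = session.client("ec2", region_name="us-east-1")

    # Martha's role in DevAccount1 has full EC2 access, so this call succeeds.
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["State"]["Name"])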

Summary

AWS now provides you with a default directory that you can use to manage users and groups within AWS SSO and to grant users permissions to resources in multiple AWS accounts and business applications. In this blog post, I showed you how to manage users and groups within AWS SSO and grant them permissions to multiple AWS accounts. I also showed you how your users sign into the user portal to access their assigned AWS accounts.

If you have feedback or questions about AWS SSO, start a new thread on the AWS SSO forum or contact AWS Support. If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Vijay Sharma

Vijay is a Senior Product Manager with AWS Identity.

AWS Organizations now requires email address verification in order to invite accounts to an organization

Post Syndicated from Raymond Ma original https://aws.amazon.com/blogs/security/aws-organizations-now-requires-email-address-verification/

AWS Organizations, the service for centrally managing multiple AWS accounts, enables you to invite existing accounts to join your organization. To provide additional assurance about your organization’s identity to the AWS accounts that you invite, AWS Organizations is adding a new requirement: beginning on September 27, 2018, you’ll need to verify the email address associated with your organization’s master account before you can invite existing accounts to join your organization. To prepare, you can verify your master account’s email address starting today.

If you need to change your master account email address prior to verifying it, please see Managing an AWS Account.

How to verify your master account email address

To verify your master account email address, follow these steps:

  1. Navigate to the Organizations console and choose the Settings tab.
  2. From the Organization details section of the settings page, choose Send verification request.
    Figure 1: Sending the verification request

    You’ll receive a pop-up notification confirming that a verification email has been sent to the email address associated with your master account.

    Figure 2: Notification of sent email

  3. Log into your email account and look for an email from AWS with the subject line AWS Organizations email verification request.
  4. Open the email and choose Verify your email address.
    Figure 3: Email verification link

  5. If you don’t have an active AWS session, log back into your AWS account. You should see a notification that your email address has been successfully verified. You’ll now be able to invite accounts to join your organization (a scripted sketch of an invitation follows these steps).
    Figure 4: Notification of successful verification
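
Once the email address is verified, invitations themselves work as before, whether from the console, the AWS CLI, or the SDKs. Below is a minimal boto3 sketch of inviting an existing account; the account ID is a placeholder.

    import boto3

    organizations = boto3.client("organizations")

    # Invite an existing account (placeholder ID) to the organization.
    # This call must be made from the master account and, as of
    # September 27, 2018, succeeds only after the master account's
    # email address has been verified.
    response = organizations.invite_account_to_organization(
        Target={"Id": "111122223333", "Type": "ACCOUNT"},
        Notes="Please join our organization.",
    )

    # The invitation is a handshake; it stays OPEN until the invited
    # account accepts or declines it.
    print(response["Handshake"]["State"])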

Summary

To provide additional security assurance to AWS accounts that are invited to join an organization, AWS Organizations will require email verification for organization master accounts starting September 27, 2018. To verify your master account email address and invite AWS accounts to join your organization, sign in to the Organizations console and follow the steps I’ve outlined in this post.

If you have comments about this post, submit them in the Comments section below. If you have questions about or issues implementing this solution, start a new thread on the Organizations forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Raymond Ma

Raymond is a Senior Technical Product Manager on the AWS Identity team covering AWS Organizations and Accounts. Outside of work, he enjoys taking care of his dog, Merlin, and volunteering with King County Search and Rescue.