Getting started with AWS SSO delegated administration

Post Syndicated from Chris Mercer original https://aws.amazon.com/blogs/security/getting-started-with-aws-sso-delegated-administration/

Recently, AWS launched the ability to delegate administration of AWS Single Sign-On (AWS SSO) in your AWS Organizations organization to a member account (an account other than the management account). This post will show you a practical approach to using this new feature. For the documentation for this feature, see Delegated administration in the AWS Single Sign-On User Guide.

With AWS Organizations, your enterprise organization can manage your accounts more securely and at scale. One of the benefits of Organizations is that it integrates with many other AWS services, so you can centrally manage accounts and how the services in those accounts can be used.

AWS SSO is where you can create, or connect, your workforce identities in AWS just once, and then manage access centrally across your AWS organization. You can create user identities directly in AWS SSO, or you can bring them from your Microsoft Active Directory or a standards-based identity provider, such as Okta Universal Directory or Azure AD. With AWS SSO, you get a unified administration experience to define, customize, and assign fine-grained access.

By default, the management account in an AWS organization has the power and authority to manage member accounts in the organization. Because of these additional permissions, it is important to exercise least privilege and tightly control access to the management account. AWS recommends that enterprises create one or more accounts specifically designated for security of the organization, with proper controls and access management policies in place. AWS provides a method in which many services can be administered for the organization from a member account; this is usually referred to as a delegated administrator account. These accounts can reside in a security organizational unit (OU), where administrators can enforce organizational policies. Figure 1 is an example of a recommended set of OUs in Organizations.

Figure 1: Recommended AWS Organizations OUs

Many AWS services support this delegated administrator model, including Amazon GuardDuty, AWS Security Hub, and Amazon Macie. For an up-to-date complete list, see AWS services that you can use with AWS Organizations. AWS SSO is now the most recent addition to the list of services in which you can delegate administration of your users, groups, and permissions, including third-party applications, to a member account of your organization.

How to configure a delegated administrator account

In this scenario, your enterprise AnyCompany has an organization consisting of a management account, an account for managing security, and a few member accounts. You have enabled AWS SSO in the organization, and you want to enable the security team to manage permissions for accounts and roles in the organization. AnyCompany doesn’t want you to give the security team access to the management account, and also wants to make sure the security team can’t delete the AWS SSO configuration or manage access to that account, so you decide to delegate the administration of AWS SSO to the security account.

Note: There are a few things to consider when making this change, which you should review before you enable delegated administration. These items are covered in the console during the process, and are described in the section Considerations when delegating AWS SSO administration in this post.

To delegate AWS SSO administration to a security account

  1. In the AWS Organizations console, log in to the management account with a user or role that has permission to use organizations:RegisterDelegatedAdministrator, as well as AWS SSO management permissions. (A sample identity policy sketch follows this procedure.)
  2. In the AWS SSO console, navigate to the Region in which AWS SSO is enabled.
  3. Choose Settings on the left navigation pane, and then choose the Management tab on the right side.
  4. Under Delegated administrator, choose Register account, as shown in Figure 2.
    Figure 2: The Register account button in AWS SSO

  5. Consider the implications of designating a delegated administrator account (as described in the section Considerations when delegating AWS SSO administration). Select the account you want to be able to manage AWS SSO, and then choose Register account, as shown in Figure 3.
    Figure 3: Choosing a delegated administrator account in AWS SSO

You should see a success message indicating that the AWS SSO delegated administrator account is now set up.
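
For reference, the following is a minimal sketch of an identity policy that grants the Organizations permissions used in this procedure (and the deregistration procedure that follows). This is an assumption-based illustration, not an official policy from the AWS SSO documentation: the Sid is arbitrary, and you would attach it alongside the AWS SSO management permissions your administrators already have.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDelegatedAdminManagement",
      "Effect": "Allow",
      "Action": [
        "organizations:RegisterDelegatedAdministrator",
        "organizations:DeregisterDelegatedAdministrator"
      ],
      "Resource": "*"
    }
  ]
}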

To remove delegated AWS SSO administration from an account

  1. In the AWS Organizations console, log in to the management account with a user or role that has permission to use organizations:DeregisterDelegatedAdministrator.
  2. In the AWS SSO console, navigate to the Region in which AWS SSO is enabled.
  3. Choose Settings on the left navigation pane, and then choose the Management tab on the right side.
  4. Under Delegated administrator, select Deregister account, as shown in Figure 4.
    Figure 4: The Deregister account button in AWS SSO

  5. Consider the implications of removing a delegated administrator account (as described in the section Considerations when delegating AWS SSO administration), then enter the account name that is currently administering AWS SSO, and choose Deregister account, as shown in Figure 5.
    Figure 5: Considerations of deregistering a delegated administrator in AWS SSO

Considerations when delegating AWS SSO administration

There are a few considerations you should keep in mind when you delegate AWS SSO administration. The first consideration is that the delegated administrator account will not be able to perform the following actions:

  • Delete the AWS SSO configuration.
  • Delegate (to other accounts) administration of AWS SSO.
  • Manage user or group access to the management account.
  • Manage permission sets that are provisioned (have a user or group assigned) in the organization management account.

For examples of those last two actions, consider the following scenarios:

In the first scenario, you are managing AWS SSO from the delegated administrator account. You would like to give your colleague Saanvi access to all the accounts in the organization, including the management account. This action would not be allowed, since the delegated administrator account cannot manage access to the management account. You would need to log in to the management account (with a user or role that has proper permissions) to provision that access.

In a second scenario, you would like to change the permissions Paulo has in the management account by modifying the policy attached to a ManagementAccountAdmin permission set that Paulo currently has access to. You would also have to do this from inside the management account: because the permission set is provisioned to a user in the management account, the delegated administrator account does not have permission to modify it.

With those caveats in mind, users with proper access in the delegated administrator account will be able to control permissions and assignments for users and groups throughout the AWS organization. For more information about limiting that control, see Allow a user to administer AWS SSO for specific accounts in the AWS Single Sign-On User Guide.

Deregistering an AWS SSO delegated administrator account will not affect any permissions or assignments in AWS SSO, but it will remove the ability for users in the delegated account to manage AWS SSO from that account.

Additional considerations if you use Microsoft Active Directory

There are additional considerations to keep in mind if you use Microsoft Active Directory (AD) as an identity source, specifically whether you use AWS SSO configurable AD sync, and which AWS account the directory resides in. To use AWS SSO delegated administration when the identity source is set to Active Directory, AWS SSO configurable AD sync must be enabled for the directory. Your organization’s administrators must synchronize the Active Directory users and groups you want to grant access to into an AWS SSO identity store. With AWS SSO configurable AD sync, a feature that launched in April, Active Directory administrators can choose which users and groups get synced into AWS SSO, similar to how other external identity providers work today when using the System for Cross-domain Identity Management (SCIM) v2.0 protocol. This way, AWS SSO knows about users and groups even before they are granted access to specific accounts or roles, and AWS SSO administrators don’t have to manually search for them.

Another thing to consider when delegating AWS SSO administration with AD as an identity source is where your directory resides, that is, which AWS account owns the directory. If you decide to change the AWS SSO identity source from any other source to Active Directory, or from Active Directory to any other source, the directory must reside in (be owned by) the account that the change is being performed in. For example, if you are currently signed in to the management account, you can only change the identity source to or from directories that reside in (are owned by) the management account. For more information, see Manage your identity source in the AWS Single Sign-On User Guide.

Best practices for managing AWS SSO with delegated administration

AWS recommends the following best practices when using delegated administration for AWS SSO:

  • Maintain separate permission sets for use in the organization management account (versus the rest of the accounts). This way, permissions can be kept separate and managed from within the management account without causing confusion among the delegated administrators.
  • When granting access to the organization management account, grant the access to groups (and permission sets) specifically for access in that account. This helps enforce the principle of least privilege for this important account, and helps ensure that AWS SSO delegated administrators are able to manage the rest of the organization as efficiently as possible (by reducing the number of users, groups, and permission sets that are off limits to them).
  • If you plan on using one of the AWS Directory Services for Microsoft Active Directory (AWS Managed Microsoft AD or AD Connector) as your AWS SSO identity source, locate the directory and the AWS SSO delegated administrator account in the same AWS account.

Conclusion

In this post, you learned about a helpful new feature of AWS SSO, the ability to delegate administration of your users and permissions to a member account of your organization. AWS recommends as a best practice that the management account of an AWS organization be secured by a least privilege access model, in which as few people as possible have access to the account. You can enable delegated administration for supported AWS services, including AWS SSO, as a useful tool to help your organization minimize access to the management account by moving that control into an AWS account designated specifically for security or identity services. We encourage you to consider AWS SSO delegated administration for administering access in AWS. To learn more about the new feature, see Delegated administration in the AWS Single Sign-On User Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Chris Mercer

Chris is a security specialist solutions architect. He helps AWS customers implement sophisticated, scalable, and secure solutions to business challenges. He has experience in penetration testing, security architecture, and running military IT systems and networks. Chris holds a Master’s Degree in Cybersecurity, several AWS certifications, OSCP, and CISSP. Outside of AWS, he is a professor, student pilot, and Cub Scout leader.

Establishing a data perimeter on AWS

Post Syndicated from Ilya Epshteyn original https://aws.amazon.com/blogs/security/establishing-a-data-perimeter-on-aws/

For your sensitive data on AWS, you should implement security controls, including identity and access management, infrastructure security, and data protection. Amazon Web Services (AWS) recommends that you set up multiple accounts as your workloads grow to isolate applications and data that have specific security requirements. AWS tools can help you establish a data perimeter between your multiple accounts, while blocking unintended access from outside of your organization. Data perimeters on AWS span many different features and capabilities. Based on your security requirements, you should decide which capabilities are appropriate for your organization. In this first blog post on data perimeters, I discuss which AWS Identity and Access Management (IAM) features and capabilities you can use to establish a data perimeter on AWS. Subsequent posts will provide implementation guidance and IAM policy examples for establishing your identity, resource, and network data perimeters.

A data perimeter is a set of preventive guardrails that help ensure that only your trusted identities are accessing trusted resources from expected networks. These terms are defined as follows:

  • Trusted identities – Principals (IAM roles or users) within your AWS accounts, or AWS services that are acting on your behalf
  • Trusted resources – Resources that are owned by your AWS accounts, or by AWS services that are acting on your behalf
  • Expected networks – Your on-premises data centers and virtual private clouds (VPCs), or networks of AWS services that are acting on your behalf

Data perimeter guardrails

You typically implement data perimeter guardrails as coarse-grained controls that apply across a broad set of AWS accounts and resources. When you implement a data perimeter, consider the following six primary control objectives.

Data perimeter | Control objective
Identity | Only trusted identities can access my resources.
Identity | Only trusted identities are allowed from my network.
Resource | My identities can access only trusted resources.
Resource | Only trusted resources can be accessed from my network.
Network | My identities can access resources only from expected networks.
Network | My resources can only be accessed from expected networks.

Note that the controls in the preceding table are coarse in nature and are meant to serve as always-on boundaries. You can think of data perimeters as creating a firm boundary around your data to prevent unintended access patterns. Although data perimeters can prevent broad unintended access, you still need to make fine-grained access control decisions. Establishing a data perimeter does not diminish the need to continuously fine-tune permissions by using tools such as IAM Access Analyzer as part of your journey to least privilege.

To implement the preceding control objectives on AWS, use three primary capabilities: service control policies (SCPs), resource-based policies, and VPC endpoint policies.

Let’s expand the previous table to include the corresponding policies you would use to implement the controls for each of the control objectives.

Data perimeter | Control objective | Implemented by using
Identity | Only trusted identities can access my resources. | Resource-based policies
Identity | Only trusted identities are allowed from my network. | VPC endpoint policies
Resource | My identities can access only trusted resources. | SCPs
Resource | Only trusted resources can be accessed from my network. | VPC endpoint policies
Network | My identities can access resources only from expected networks. | SCPs
Network | My resources can only be accessed from expected networks. | Resource-based policies

As you can see in the preceding table, the correct policy for each control objective depends on which resource you are trying to secure. Resource-based policies, which are applied to resources such as Amazon S3 buckets, can be used to filter access based on the calling principal and the network from which they are making a call. VPC endpoint policies are used to inspect the principal that is making the API call and the resource they are trying to access. And SCPs are used to restrict your identities from accessing resources outside your control or from outside your network. Note that SCPs apply only to your principals within your AWS organization, whereas resource policies can be used to limit access to all principals.
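
To make the table concrete, the following is a minimal sketch of a VPC endpoint policy that expresses both the identity and resource objectives at once: only principals from your organization may use the endpoint, and only to reach resources your organization owns. It uses two condition keys introduced in the next section, and the organization ID o-xxxxxxxxxx is a placeholder; treat this as an illustration rather than a complete policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOrgIdentitiesToOrgResources",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalOrgID": "o-xxxxxxxxxx",
          "aws:ResourceOrgID": "o-xxxxxxxxxx"
        }
      }
    }
  ]
}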

The last components are the specific IAM controls or condition keys that enforce the control objective. For effective data perimeter controls, use the following primary IAM condition keys, including the new resource owner condition keys:

  • aws:PrincipalOrgID – Use this condition key to restrict access to trusted identities, your principals (roles or users) that belong to your organization. In the context of a data perimeter, you will use this condition key with your resource-based policies and VPC endpoint policies.
  • aws:ResourceOrgID – Use this condition key to restrict access to resources that belong to your AWS organization. To establish a data perimeter, you will use this condition key within SCPs and VPC endpoint policies.
  • aws:SourceIp, aws:SourceVpc, aws:SourceVpce – Use these condition keys to restrict access to expected network locations, such as your corporate network or your VPCs. In the context of a data perimeter, you will use these keys within identity and resource-based policies.

We can now complete the table that we’ve been developing throughout this post.

Data perimeter | Control objective | Implemented by using | Primary IAM capability
Identity | Only trusted identities can access my resources. | Resource-based policies | aws:PrincipalOrgID, aws:PrincipalIsAWSService
Identity | Only trusted identities are allowed from my network. | VPC endpoint policies | aws:PrincipalOrgID
Resource | My identities can access only trusted resources. | SCPs | aws:ResourceOrgID
Resource | Only trusted resources can be accessed from my network. | VPC endpoint policies | aws:ResourceOrgID
Network | My identities can access resources only from expected networks. | SCPs | aws:SourceIp, aws:SourceVpc, aws:SourceVpce, aws:ViaAWSService
Network | My resources can only be accessed from expected networks. | Resource-based policies | aws:SourceIp, aws:SourceVpc, aws:SourceVpce, aws:ViaAWSService, aws:PrincipalIsAWSService

For the identity data perimeter, the primary condition key is aws:PrincipalOrgID, which you can use in resource-based policies and VPC endpoint policies so that only your identities are allowed access. Use aws:PrincipalIsAWSService to allow AWS services to access your resources by using their own identities—for example, AWS CloudTrail can use this access to write data to your bucket.
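
To illustrate, here is a minimal sketch of an Amazon S3 bucket policy statement for the identity perimeter (the follow-up posts in this series provide the authoritative examples, so treat this as an assumption-based illustration). The bucket name and organization ID are placeholders. The statement denies the request unless the caller belongs to your organization or is an AWS service principal; both condition blocks must match for the deny to apply.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EnforceIdentityPerimeter",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET",
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:PrincipalOrgID": "o-xxxxxxxxxx"
        },
        "BoolIfExists": {
          "aws:PrincipalIsAWSService": "false"
        }
      }
    }
  ]
}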

For the resource data perimeter, the primary condition key is aws:ResourceOrgID, which you can use in an SCP policy or VPC endpoint policy to allow your identities and network to access only the resources that belong to your AWS organization.
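
A corresponding resource perimeter sketch, written as an SCP; scoping it to Amazon S3 actions is an assumption made for brevity, and real deployments often need exceptions for AWS service-owned resources that legitimately live outside your organization:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EnforceResourcePerimeter",
      "Effect": "Deny",
      "Action": "s3:*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:ResourceOrgID": "o-xxxxxxxxxx"
        }
      }
    }
  ]
}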

Last, for the network perimeter, use the aws:SourceIp, aws:SourceVpc, and aws:SourceVpce condition keys in SCPs and resource-based policies to make sure that your identities and resources are accessed only from your trusted network. Use the aws:PrincipalIsAWSService and aws:ViaAWSService condition keys to allow AWS services to access your resources from outside your network locations. For example, CloudTrail can use this access to write data to one of your S3 buckets, or Amazon Athena can query data in your S3 buckets. For more information about using these keys as part of your data perimeter strategy, see the blog post IAM makes it easier for you to manage permissions for AWS services accessing your resources.
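
And a minimal network perimeter sketch, again as an S3 bucket policy statement. The CIDR range, VPC ID, and bucket name are placeholders. The ...IfExists operators and the Boolean keys leave room for requests made by AWS services on your behalf, per the discussion above; the request is denied only when all four condition blocks match.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EnforceNetworkPerimeter",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET",
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
      ],
      "Condition": {
        "NotIpAddressIfExists": {
          "aws:SourceIp": "203.0.113.0/24"
        },
        "StringNotEqualsIfExists": {
          "aws:SourceVpc": "vpc-xxxxxxxx"
        },
        "BoolIfExists": {
          "aws:ViaAWSService": "false",
          "aws:PrincipalIsAWSService": "false"
        }
      }
    }
  ]
}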

Conclusion

In this blog post, you learned the foundational elements that are needed to implement an identity, resource, and network data perimeter on AWS, including the primary IAM capabilities that are used to implement each of the control objectives. Stay tuned to the follow-up posts in this series, which will provide prescriptive guidance on establishing your identity, resource, and network data perimeters.

Following are additional resources that will help you further explore the data perimeter topic, including a whitepaper and a hands-on workshop. We have also curated several blog posts related to the key IAM capabilities discussed in this post.

If you have any questions, comments, or concerns, contact AWS Support or start a new thread on the IAM forum. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Author

Ilya Epshteyn

Ilya is a Senior Manager of Identity Solutions in AWS Identity. He helps customers to innovate on AWS by building highly secure, available, and scalable architectures. He enjoys spending time outdoors and building Lego creations with his kids.

How to integrate AWS STS SourceIdentity with your identity provider

Post Syndicated from Keith Joelner original https://aws.amazon.com/blogs/security/how-to-integrate-aws-sts-sourceidentity-with-your-identity-provider/

You can use third-party identity providers (IdPs) such as Okta, Ping, or OneLogin to federate with AWS Identity and Access Management (IAM) using SAML 2.0, giving your workforce authorized access to the AWS Management Console or AWS Command Line Interface (CLI). When you federate to AWS, you assume a role through the AWS Security Token Service (AWS STS), which, through the AssumeRoleWithSAML API, returns a set of temporary security credentials that you then use to access AWS resources. The use of temporary credentials can make it challenging for administrators to trace which identity was responsible for the actions performed.

To address this, AWS STS lets you set a unique attribute called SourceIdentity, which makes it easy to see which identity is responsible for a given action.

This post will show you how to set up the AWS STS SourceIdentity attribute when using Okta, Ping, or OneLogin as your IdP. Your IdP administrator can configure a corporate directory attribute, such as an email address, to be passed as the SourceIdentity value within the SAML assertion. This value is stored as the SourceIdentity element in AWS CloudTrail, along with the activity performed by the assumed role. This post will also show you how to set up a sample policy for setting the SourceIdentity when switching roles. Finally, as an administrator reviewing CloudTrail activity, you can use the source identity information to determine who performed which actions. We will walk you through CloudTrail logs from two accounts to demonstrate the continuance of the source identity attribute, showing you how the SourceIdentity will appear in both accounts’ logs.

For more information about the SAML authentication flow in AWS services, see AWS Identity and Access Management Using SAML. For more information about using SourceIdentity, see How to relate IAM role activity to corporate identity.

Configure the SourceIdentity attribute with Okta integration

You will do this portion of the configuration within the Okta administrative console. This procedure assumes that you have a previously configured AWS and Okta integration. If not, you can configure your integration by following the instructions in the Okta AWS Multi-Account Configuration Guide. You will use the Okta to SAML integration and configure an optional attribute to map as the SourceIdentity.

To set up Okta with SourceIdentity

  1. Log in to the Okta admin console.
  2. Navigate to Applications–AWS.
  3. In the top navigation bar, select the Sign On tab, as shown in Figure 1.

    Figure 1 – Navigate to attributes in SAML settings on the Okta applications page

  4. Under Sign on methods, select SAML 2.0, and choose the arrow next to Attributes (Optional) to expand, as shown in Figure 2.

    Figure 2 – Add new attribute SourceIdentity and map it to Okta provided attribute of your choice

  5. Add the optional attribute definition for SourceIdentity using the following parameters:
    • For Name, enter:
      https://aws.amazon.com/SAML/Attributes/SourceIdentity
    • For Name format, choose URI Reference.
    • For Value, enter user.login.

    Note: The Name format options are the following:
    • Unspecified – can be any format defined by the Okta profile and must be interpreted by your application.
    • URI Reference – the name is provided as a Uniform Resource Identifier string.
    • Basic – a simple string; the default if no other format is specified.

The examples shown in Figure 1 and Figure 2 show how to map an email address to the SourceIdentity attribute by using an on-premises Active Directory sync. The SourceIdentity can be mapped to other attributes from your Active Directory.

Configure the SourceIdentity attribute with PingOne integration

You do this portion of the configuration in the Ping Identity administrative console. This procedure assumes that you have a previously configured AWS and Ping integration. If not, you can set up the PingFederate AWS Connector by following the Ping Identity instructions Configuring an SSO connection to Amazon Web Services.

You’re using the Ping to SAML integration and configuring an optional attribute to map as the source identity.

Configuring PingOne as an IdP involves setting up an identity repository (in this case, the PingOne Directory), creating a user group, and adding users to the individual groups.

To configure PingOne as an IdP for AWS

  1. Navigate to https://admin.pingone.com/ and log in using your administrator credentials.
  2. Choose the My Applications tab, as shown in Figure 3.

    Figure 3. PingOne My Applications tab

  3. On the Amazon Web Services line, choose the arrow on the right side to show application details, so you can edit them and add a new attribute for the source identity.
  4. Choose Continue to Next Step to open the Attribute Mapping section, as shown in Figure 4.

    Figure 4. Attribute mappings

  5. In the Attribute Mapping section line 1, for SAML_SUBJECT, choose Advanced.
  6. On the Advanced Attribute Options page, for Name ID Format to send to SP, select urn:oasis:names:tc:SAML:2.0:nameid-format:persistent. For IDP Attribute Name or Literal Value, select SAML_SUBJECT, as shown in Figure 5.

    Figure 5. Advanced Attribute Options for SAML_SUBJECT

  7. In the Attribute Mapping section line 2 as shown in Figure 4, for the application attribute https://aws.amazon.com/SAML/Attributes/Role, select Advanced.
  8. On the Advanced Attribute Options page, for Name Format, select urn:oasis:names:tc:SAML:2.0:attrname-format:uri, as shown in Figure 6.

    Figure 6. Advanced Attribute Options for https://aws.amazon.com/SAML/Attributes/Role

  9. In the Attribute Mapping section line 2 as shown in Figure 4, select As Literal.
  10. For IDP Attribute Name or Literal Value, format the role and provider ARNs (which are not yet created on the AWS side) in the following format. Be sure to replace the placeholders with your own values. Make a note of the role name and SAML provider name, as you will be using these exact names to create an IAM role and an IAM provider on the AWS side.

    arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME>,arn:aws:iam::<AWS_ACCOUNT_ID>:saml-provider/<SAML_PROVIDER_NAME>

  11. In the Attribute Mapping section line 3 as shown in Figure 4, for the application attribute https://aws.amazon.com/SAML/Attributes/RoleSessionName, enter Email (Work).
  12. In the Attribute Mapping section as shown in Figure 4, to create line 5, choose Add a new attribute in the lower left.
  13. In the newly added Attribute Mapping section line 5 as shown in Figure 4, add the SourceIdentity.
    • For Application Attribute, enter:
      https://aws.amazon.com/SAML/Attributes/SourceIdentity
    • For Identity Bridge Attribute or Literal Value, enter:
      SAML_SUBJECT
  14. Choose Continue to Next Step in the lower right.
  15. For Group Access, add your existing PingOne Directory Group to this application.
  16. Review your setup configuration, as shown in Figure 7, and choose Finish.

    Figure 7. Review mappings

Configure the SourceIdentity attribute with OneLogin integration

For the OneLogin SAML integration with AWS, you use the Amazon Web Services Multi Account application and configure an optional attribute to map as the SourceIdentity. You do this portion of the configuration in the OneLogin administrative console.

This procedure assumes that you already have a previously configured AWS and OneLogin integration. For information about how to configure the OneLogin application for AWS authentication and authorization, see the OneLogin KB article Configure SAML for Amazon Web Services (AWS) with Multiple Accounts and Roles.

After the OneLogin Multi Account application and AWS are correctly configured for SAML login, you can further customize the application to pass the SourceIdentity parameter upon login.

To change OneLogin configuration to add SourceIdentity attribute

  1. In the OneLogin administrative console, in the Amazon Web Services Multi Account application, on the app administration page, navigate to Parameters, as shown in Figure 8.

    Figure 8. OneLogin AWS Multi Account Application Configuration Parameters

  2. To add a parameter, choose the + (plus) icon to the right of Value.
  3. As shown in Figure 9, for Field Name enter https://aws.amazon.com/SAML/Attributes/SourceIdentity, select Include in SAML assertion, then choose Save.
    Figure 9. OneLogin AWS Multi Account Application add new field

  4. In the Edit Field page, select the default value you want to use for SourceIdentity. For the example in this blog post, for Value, select Email, then choose Save, as shown in Figure 10.
    Figure 10. OneLogin AWS Multi Account Application map new field to email

After you’ve completed this procedure, review the final mapping details, as shown in Figure 11, to confirm that you see the additional parameter that will be passed into AWS through the SAML assertion.

Figure 11. OneLogin AWS Multi Account Application final mapping details

Configuring AWS IAM role trust policy

Now that the IdP configuration is complete, you can enable your AWS accounts to use SourceIdentity by modifying the IAM role trust policy.

For the workforce identity or application to be able to define their source identity when they assume IAM roles, you must first grant them permission for the sts:SetSourceIdentity action, as illustrated in the sample policy document below. This will permit the workforce identity or application to set the SourceIdentity themselves without any need for manual intervention.

To modify an AWS IAM role trust policy

  1. Log in to the AWS Management Console for your account as a user with privileges to configure an IdP, typically an administrator.
  2. Navigate to the AWS IAM service.
  3. For trusted identity, choose SAML 2.0 federation.
  4. From the SAML Provider drop down menu, select the IAM provider you created previously.
  5. Modify the role trust policy and add the SetSourceIdentity action.

Sample policy document

This is a sample policy document attached to the role you assume when you log in to Account1 from the Okta dashboard. Edit your Account1/Role1 trust policy document and add sts:AssumeRoleWithSAML and sts:SetSourceIdentity to the Action section.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<AccountId>:saml-provider/<IdP>"
      },
      "Action": [
        "sts:AssumeRoleWithSAML",
        "sts:SetSourceIdentity"
      ],
      "Condition": {
        "StringEquals": {
          "SAML:aud": "https://signin.aws.amazon.com/saml"
        }
      }
    }
  ]
}

Notes: The sts:SetSourceIdentity action must be allowed in the trust policy in order for AssumeRoleWithSAML to work when the IdP is set up to pass SourceIdentity in the assertion. Future versions of the sign-in URL may contain a Region code; when this occurs, you will need to modify the URL appropriately.

Policy statement

The following are examples of how the line "Federated": "arn:aws:iam::<AccountId>:saml-provider/<IdP>" should look, based on the different IdPs specified in this post:

  • "Federated": "arn:aws:iam::12345678990:saml-provider/Okta"
  • "Federated": "arn:aws:iam::12345678990:saml-provider/PingOne"
  • "Federated": "arn:aws:iam::12345678990:saml-provider/OneLogin"

Modify Account2/Role2 policy statement

The following is a sample access control policy document in Account2 for Role2 that allows you to switch roles from Account1. Edit the policy and add sts:AssumeRole and sts:SetSourceIdentity to the Action section.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<AccountID>:root"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:SetSourceIdentity"
      ] 
    }
  ]
}
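
The console’s Switch Role feature carries the source identity forward automatically. If you assume Role2 programmatically instead, the following AWS CLI sketch shows the equivalent call; the role ARN, session name, and email address are placeholder values. Note that once a source identity is set for a session, it persists across chained role sessions and cannot be changed.

aws sts assume-role \
  --role-arn arn:aws:iam::444455556666:role/Role2 \
  --role-session-name example-session \
  --source-identity [email protected]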

Trace the SourceIdentity attribute in AWS CloudTrail

Use the following procedure for each IdP to illustrate passing a corporate directory attribute mapped as the SourceIdentity.

To trace the SourceIdentity attribute in AWS CloudTrail

  1. Use an IdP to log in to Account1 (111122223333) using a role named Role1.
  2. Create a new Amazon Simple Storage Service (Amazon S3) bucket in Account1.
  3. Validate that the CloudTrail log entries for Account1 contain the Active Directory mapped SourceIdentity.
  4. Use the Switch Role feature to switch to a second account, Account2 (444455556666), using a role named Role2.
  5. Create a new Amazon S3 bucket in Account2.

To summarize what you’ve done so far, you have:

  • Configured your corporate directory to pass a unique attribute to AWS as the source identity.
  • Configured a role that will persist the SourceIdentity attribute in AWS STS, which an employee will use to federate into your account.
  • Configured an Amazon S3 bucket that the user will access.

Now you’ll observe in CloudTrail the SourceIdentity attribute that will be associated with every IAM action.

To see the SourceIdentity attribute in CloudTrail

  1. From your preferred IdP dashboard, select the AWS tile to log in to the AWS console. The example in Figure 12 shows the Okta dashboard.
    Figure 12. Login to AWS from IdP dashboard

  2. Choose the AWS icon, which will take you to the AWS Management Console. Notice how the user has assumed the role you created earlier.
  3. To test the SourceIdentity action, you will create a new Amazon S3 bucket.

    Amazon S3 bucket names are globally unique, and the namespace is shared by all AWS accounts, so you will need to create a unique bucket name in your account. For this example, we used a bucket named DOC-EXAMPLE-BUCKET1 to validate CloudTrail log entries containing the SourceIdentity attribute.

  4. Log in to Account1 (111122223333) using a role named Role1.
  5. Next, create a new Amazon S3 bucket in Account1, and validate that the Account1 CloudTrail log entries contain the SourceIdentity attribute.
  6. Create an Amazon S3 bucket called DOC-EXAMPLE-BUCKET1, as shown in Figure 13.
    Figure 13. Create S3 bucket

  7. In the AWS Management Console, go to CloudTrail and check the log entry for the bucket creation event, as shown in Figure 14.
    Figure 14 – Bucket creation entry in CloudTrail

Sample CloudTrail entry showing SourceIdentity entry

The following example shows the new sourceIdentity entry added to the JSON message for the CreateBucket event above.

{"eventVersion":"1.08",
"userIdentity":{
    "type":"AssumedRole",
    "principalId":"AROA42BPHP3V5TTJH32PZ:sourceidentitytest",
    "arn":"arn:aws:sts::111122223333:assumed-role/idsol-org-admin/sourceidentitytest",
    "accountId":"111122223333",
    "accessKeyId":"ASIA42BPHP3V2QJBW7WJ",
    "sessionContext":{
        "sessionIssuer":{
            "type":"Role",
            "principalId":"AROA42BPHP3V5TTJH32PZ",
            "arn":"arn:aws:iam::111122223333:role/idsol-org-admin",
            "accountId":"111122223333","userName":"idsol-org-admin"
        },
        "webIdFederationData":{},
        "attributes":{
            "mfaAuthenticated":"false",
            "creationDate":"2021-05-05T16:29:19Z"
        },
        "sourceIdentity":"<[email protected]>"
    }
},
"eventTime":"2021-05-05T16:33:25Z",
"eventSource":"s3.amazonaws.com",
"eventName":"CreateBucket",
"awsRegion":"us-east-1",
"sourceIPAddress":"203.0.113.0"
  1. Use the Switch Role feature to switch to Account2 (444455556666), and assume the assumeRoleSourceIdentity role.
  2. Create a new Amazon S3 bucket in Account2 and validate that the Account2 CloudTrail log entries contain the SourceIdentity attribute, as shown in Figure 15.
    Figure 15 – Switch role to assumeRoleSourceIdentity

  3. Create a new Amazon S3 bucket in Account2 called DOC-EXAMPLE-BUCKET2, as shown in Figure 16.
    Figure 16 – Create DOC-EXAMPLE-BUCKET2 bucket while logged into account2 using assumeRoleSourceIdentity

  4. Check the CloudTrail logs for Account2 (444455556666) to see if the original SourceIdentity is logged, as shown in Figure 17.
    Figure 17 – CloudTrail log entry for the above action

CloudTrail entry showing original SourceIdentity after assuming a role

{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "AROAVC5CY2KJCIXJLPMQE:sourceidentitytest",
        "arn": "arn:aws:sts::444455556666:assumed-role/s3assumeRoleSourceIdentity/sourceidentitytest",
        "accountId": "444455556666",
        "accessKeyId": "ASIAVC5CY2KJIAO7CGA6",
        "sessionContext": {
            "sessionIssuer": {
                "type": "Role",
                "principalId": "AROAVC5CY2KJCIXJLPMQE",
                "arn": "arn:aws:iam::444455556666:role/s3assumeRoleSourceIdentity",
                "accountId": "444455556666",
                "userName": "s3assumeRoleSourceIdentity"
            },
            "webIdFederationData": {},
            "attributes": {
                "mfaAuthenticated": "false",
                "creationDate": "2021-05-05T16:47:41Z"
            },
            "sourceIdentity": "<[email protected]>"
        }
    },
    "eventTime": "2021-05-05T16:48:53Z",
    "eventSource": "s3.amazonaws.com",
    "eventName": "CreateBucket",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "203.0.113.0",

You logged in to Account1/Role1 and switched to Account2/Role2. All of the user activities performed in AWS using the assumed role were also logged with the original user's sourceIdentity attribute. This makes it simple to trace user activity in CloudTrail.

Conclusion

Now that you have configured your SourceIdentity, you have made it easier for the security team of your organization to use CloudTrail logs to investigate and identify the originating identity of a user. In this post, you learned how to configure the AWS STS SourceIdentity attribute for three different popular IdPs, as well as how to configure each IdP using SAML and their optional attributes. We also provided sample control policy documents outlining how to configure the SourceIdentity for each provider, and a sample policy for setting the SourceIdentity when switching roles. Lastly, the post walked through how the source identity shows up in CloudTrail logs, and provided logs from two accounts to demonstrate the continuance of the source identity attribute. You can now test this capability yourself in your own environment, validate activity in your CloudTrail logs, and determine which user performed a specific action while using the AssumeRole functionality.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Keith Joelner

Keith is a Solution Architect at Amazon Web Services working in the ISV segment. He is based in the San Francisco Bay area. Since joining AWS in 2019, he’s been supporting Snowflake and Okta. In his spare time, Keith likes woodworking and home improvement projects.

Nitin Kulkarni

Nitin is a Solutions Architect on the AWS Identity Solutions team. He helps customers build secure and scalable solutions on the AWS platform. He also enjoys hiking, baseball and linguistics.

Ramesh Kumar Venkatraman

Ramesh Kumar Venkatraman is a Solutions Architect at AWS who is passionate about containers and databases. He works with AWS customers to design, deploy and manage their AWS workloads and architectures. In his spare time, he loves to play with his two kids and follows cricket.

Eddie Esquivel

Eddie is a Sr. Solutions Architect in the ISV segment. He spent time at several startups focusing on Big Data and Kubernetes before joining AWS. Currently, he’s focused on management and governance and helping customers make best use of AWS technology. In his spare time he enjoys spending time outdoors with his wife and pet dog.

How to set up Amazon Cognito for federated authentication using Azure AD

Post Syndicated from Ratan Kumar original https://aws.amazon.com/blogs/security/how-to-set-up-amazon-cognito-for-federated-authentication-using-azure-ad/

In this blog post, I’ll walk you through the steps to integrate Azure AD as a federated identity provider in an Amazon Cognito user pool. A user pool is a user directory in Amazon Cognito that provides sign-up and sign-in options for your app users.

Identity management and authentication flows can be challenging when you need to support requirements such as OAuth, social authentication, and login through a Security Assertion Markup Language (SAML) 2.0 based identity provider (IdP). Amazon Cognito provides a managed, scalable user directory, user sign-up and sign-in, and federation through third-party identity providers. An added benefit for developers is that it provides a standardized set of tokens (identity, access, and refresh tokens). So, when you have to support authentication with multiple identity providers (for example, social authentication and SAML IdPs), you don’t have to write code to handle the different tokens issued by each provider; instead, you can work with the consistent set of tokens issued by the Amazon Cognito user pool.
 

Figure 1: High-level architecture for federated authentication in a web or mobile app

As shown in Figure 1, the high-level architecture of a serverless app with federated authentication typically involves the following steps:

  1. The user selects their preferred IdP to authenticate with.
  2. The user is redirected to the federated IdP for login. On successful authentication, the IdP posts back a SAML assertion or token containing the user’s identity details to the Amazon Cognito user pool.
  3. The Amazon Cognito user pool issues a set of tokens to the application.
  4. The application can use the tokens issued by the Amazon Cognito user pool for authorized access to APIs protected by Amazon API Gateway.

To learn more about the authentication flow with SAML federation, see the blog post Building ADFS Federation for your Web App using Amazon Cognito User Pools.

Step-by-step instructions for enabling Azure AD as federated identity provider in an Amazon Cognito user pool

This post will walk you through the following steps:

  1. Create an Amazon Cognito user pool
  2. Add Amazon Cognito as an enterprise application in Azure AD
  3. Add Azure AD as SAML identity provider (IDP) in Amazon Cognito
  4. Create an app client and use the newly created SAML IDP for Azure AD

Prerequisites

You’ll need administrative access to Azure AD, an AWS account, and the AWS Command Line Interface (AWS CLI) installed on your machine. Follow the instructions for installing, updating, and uninstalling the AWS CLI version 2, and then follow the instructions for configuring the AWS CLI. If you don’t want to install the AWS CLI, you can also run these commands from AWS CloudShell, which provides a browser-based shell to securely manage, explore, and interact with your AWS resources.

Step 1: Create an Amazon Cognito user pool

The procedures in this post use the AWS CLI, but you can also follow the instructions to use the AWS Management Console to create a new user pool.

To create a user pool in the AWS CLI

  1. Use the following command to create a user pool with default settings. Be sure to replace <yourUserPoolName> with the name you want to use for your user pool.
    aws cognito-idp create-user-pool \
    --pool-name <yourUserPoolName>
    

    You should see output containing a number of details about the newly created user pool.

  2. Copy the value of user pool ID, in this example, ap-southeast-2_xx0xXxXXX. You will need this value for the next steps.
    "UserPool": {
            "Id": "ap-southeast-2_xx0xXxXXX",
            "Name": "example-corp-prd-userpool"
           "Policies": { …
    

Add a domain name to user pool

One of the many useful features of Amazon Cognito is the hosted UI, which provides a configurable web interface for user sign-in. The hosted UI is accessible from a domain name that needs to be added to the user pool. There are two options for adding a domain name to a user pool: you can use either an Amazon Cognito domain or a domain name that you own. This solution uses an Amazon Cognito domain, which will look like the following:

https://<yourDomainPrefix>.auth.<aws-region>.amazoncognito.com

To add a domain name to user pool

  1. Use the following CLI command to add an Amazon Cognito domain to the user pool. Replace <yourDomainPrefix> with a unique domain name prefix (for example, example-corp-prd). Note that you cannot use the keywords aws, amazon, or cognito in the domain prefix.
    aws cognito-idp create-user-pool-domain \
    --domain <yourDomainPrefix> \
    --user-pool-id <yourUserPoolID>
    

Prepare information for Azure AD setup

Next, you prepare Identifier (Entity ID) and Reply URL, which are required to add Amazon Cognito as an enterprise application in Azure AD (done in Step 2 below). Azure AD expects these values in a very specific format. In a text editor, note down your values for Identifier (Entity ID) and Reply URL according to the following formats:

  • For Identifier (Entity ID) the format is:
    urn:amazon:cognito:sp:<yourUserPoolID>
    

    For example:

    urn:amazon:cognito:sp:ap-southeast-2_nYYYyyYyYy
    

  • For Reply URL the format is:
    https://<yourDomainPrefix>.auth.<aws-region>.amazoncognito.com/saml2/idpresponse
    

    For example:

    https://example-corp-prd.auth.ap-southeast-2.amazoncognito.com/saml2/idpresponse
    

    Note: The Reply URL is the endpoint where Azure AD will send SAML assertion to Amazon Cognito during the process of user authentication.

Update the placeholders above with your values (without < >), and then note the values of Identifier (Entity ID) and Reply URL in a text editor for future reference.

For more information, see Adding SAML Identity Providers to a User Pool in the Amazon Cognito Developer Guide.

Step 2: Add Amazon Cognito as an enterprise application in Azure AD

In this step, you add an Amazon Cognito user pool as an application in Azure AD, to establish a trust relationship between them.

To add new application in Azure AD

  1. Log in to the Azure Portal.
  2. In the Azure Services section, choose Azure Active Directory.
  3. In the left sidebar, choose Enterprise applications.
  4. Choose New application.
  5. On the Browse Azure AD Gallery page, choose Create your own application.
  6. Under What’s the name of your app?, enter a name for your application and select Integrate any other application you don’t find in the gallery (Non-gallery), as shown in Figure 2. Choose Create.
     
    Figure 2: Add an enterprise app in Azure AD

It will take a few seconds for the application to be created in Azure AD, and then you should be redirected to the Overview page for the newly added application.

Note: Occasionally, this step can result in a Not Found error, even though Azure AD has successfully created a new application. If that happens, in Azure AD navigate back to Enterprise applications and search for your application by name.

To set up Single Sign-on using SAML

  1. On the Getting started page, in the Set up single sign on tile, choose Get started, as shown in Figure 3.
     
    Figure 3: Application configuration page in Azure AD

  2. On the next screen, select SAML.
  3. In the middle pane under Set up Single Sign-On with SAML, in the Basic SAML Configuration section, choose the edit icon.
  4. In the right pane under Basic SAML Configuration, replace the default Identifier ID (Entity ID) with the Identifier (Entity ID) you copied previously. In the Reply URL (Assertion Consumer Service URL) field, enter the Reply URL you copied previously, as shown in Figure 4. Choose Save.
     
    Figure 4: Azure AD SAML-based Sign-on setup

  5. In the middle pane under Set up Single Sign-On with SAML, in the User Attributes & Claims section, choose Edit.
  6. Choose Add a group claim.
  7. On the User Attributes & Claims page, in the right pane under Group Claims, select Groups assigned to the application, leave Source attribute as Group ID, as shown in Figure 5. Choose Save.
     
    Figure 5: Option to select group claims to release to Amazon Cognito

    This adds the group claim so that Amazon Cognito can receive the group membership detail of the authenticated user as part of the SAML assertion.

  8. In a text editor, note down the Claim names under Additional claims, as shown in Figure 5. You’ll need these when creating attribute mapping in Amazon Cognito.
  9. Close the User Attributes & Claims screen by choosing the X in the top right corner. You’ll be redirected to the Set up Single Sign-on with SAML page.
  10. Scroll down to the SAML Signing Certificate section, and copy the App Federation Metadata Url by choosing the copy into clipboard icon (highlighted with red arrow in Figure 6). Keep this URL in a text editor, as you’ll need it in the next step.
     
    Figure 6: Copy SAML metadata URL from Azure AD

Step 3: Add Azure AD as SAML IDP in Amazon Cognito

Next, you add a custom attribute to the Amazon Cognito user pool to receive the group membership details from Azure AD, and then add Azure AD as an identity provider.

To add custom attribute to user pool and add Azure AD as an identity provider

  1. Use the following CLI command to add a custom attribute to the user pool. Replace <yourUserPoolID> and <customAttributeName> with your own values.
    aws cognito-idp add-custom-attributes \
    --user-pool-id <yourUserPoolID> \
    --custom-attributes Name=<customAttributeName>,AttributeDataType="String"
    

    If the command succeeds, you won’t see any output.

  2. Use the following CLI command to add Azure AD as an identity provider. Be sure to replace the following with your own values:
    • Replace <yourUserPoolID> with Amazon Cognito user pool ID copied previously.
    • Replace <IDProviderName> with a name for your identity provider (for example, Example-Corp-IDP).
    • Replace <MetadataURLCopiedFromAzureAD> with the Metadata URL copied from Azure AD.
    • Replace <customAttributeName> with custom attribute name created previously.
    aws cognito-idp create-identity-provider \
    --user-pool-id <yourUserPoolID> \
    --provider-name=<IDProviderName> \
    --provider-type SAML \
    --provider-details MetadataURL=<MetadataURLCopiedFromAzureAD> \
    --attribute-mapping email=http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress,<customAttributeName>=http://schemas.microsoft.com/ws/2008/06/identity/claims/groups
    

    Successful running of this command adds Azure AD as a SAML IDP to your Amazon Cognito user pool.

Step 4: Create an app client and use the newly created SAML IDP for Azure AD

Before you can use Amazon Cognito in your web application, you need to register your app with Amazon Cognito as an app client. An app client is an entity within an Amazon Cognito user pool that has permission to call unauthenticated API operations (operations that do not require an authenticated user), for example to register, sign in, and handle forgotten passwords.

To create an app client

  1. Use the following command to create an app client. Be sure to replace the following with your own values:
    • Replace <yourUserPoolID> with the Amazon Cognito user pool ID created previously.
    • Replace <yourAppClientName> with a name for your app client.
    • Replace <callbackURL> with the URL of your web application that will receive the authorization code. It must be an HTTPS endpoint, except for in a local development environment where you can use http://localhost:PORT_NUMBER.
    • Use the parameter --allowed-o-auth-flows to specify the OAuth flows that you want to enable. In this example, we use code for the authorization code grant.
    • Use the parameter --allowed-o-auth-scopes to specify which OAuth scopes (such as phone, email, openid) Amazon Cognito will include in the tokens. In this example, we use openid and email.
    • Replace <IDProviderName> with the same name you used for ID provider previously.
    aws cognito-idp create-user-pool-client \
    --user-pool-id <yourUserPoolID> \
    --client-name <yourAppClientName> \
    --no-generate-secret \
    --callback-urls <callbackURL> \
    --allowed-o-auth-flows code \
    --allowed-o-auth-scopes openid email \
    --supported-identity-providers <IDProviderName> \
    --allowed-o-auth-flows-user-pool-client
    

Successful running of this command will provide output in the following format. In a text editor, note down the ClientId for referencing in the web application. In the following example, the ClientId is 7xyxyxyxyxyxyxyxyxyxy.

{
    "UserPoolClient": {
        "UserPoolId": "ap-southeast-2_xYYYYYYY",
        "ClientName": "my-client-name",
        "ClientId": "7xyxyxyxyxyxyxyxyxyxy",
        "LastModifiedDate": "2021-05-04T17:33:32.936000+12:00",
        "CreationDate": "2021-05-04T17:33:32.936000+12:00",
        "RefreshTokenValidity": 30,
        "SupportedIdentityProviders": [
            "Azure-AD"
        ],
        "CallbackURLs": [
            "http://localhost:3030"
        ],
        "AllowedOAuthFlows": [
            "code"
        ],
        "AllowedOAuthScopes": [
            "openid", "email"
        ],
        "AllowedOAuthFlowsUserPoolClient": true
    }
}

Test the setup

Next, do a quick test to check if everything is configured properly.

  1. Open the Amazon Cognito console.
  2. Choose Manage User Pools, then choose the user pool you created in Step 1: Create an Amazon Cognito user pool.
  3. In the left sidebar, choose App client settings, then look for the app client you created in Step 4: Create an app client and use the newly created SAML IDP for Azure AD. Scroll to the Hosted UI section and choose Launch Hosted UI, as shown in Figure 7.
     
    Figure 7: App client settings showing link to access Hosted UI

  4. On the sign-in page as shown in Figure 8, you should see all the IdPs that you enabled on the app client. Choose the Azure-AD button, which redirects you to the sign-in page hosted on https://login.microsoftonline.com/.
     
    Figure 8: Amazon Cognito hosted UI

  5. Sign in using your corporate ID. If everything is working properly, you should be redirected back to the callback URL after successful authentication.
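
If you prefer to test outside the console, you can also construct the hosted UI sign-in URL yourself. The following sketch builds the URL against the user pool’s standard /oauth2/authorize endpoint; the domain value is an assumption, so replace it with the Amazon Cognito domain configured for your user pool.

// Sketch: build the hosted UI sign-in URL that sends users straight to Azure AD.
// The domain is a placeholder; use your user pool's configured domain prefix.
const domain = '<yourCognitoDomain>.auth.ap-southeast-2.amazoncognito.com';

const params = new URLSearchParams({
  client_id: '7xyxyxyxyxyxyxyxyxyxy',    // the ClientId noted earlier
  response_type: 'code',                 // authorization code grant
  scope: 'openid email',
  redirect_uri: 'http://localhost:3030', // must match a registered callback URL
  identity_provider: 'Azure-AD',         // bypasses the provider picker
});

console.log(`https://${domain}/oauth2/authorize?${params.toString()}`);

Opening the printed URL in a browser should take you directly to the Azure AD sign-in page, skipping the provider selection screen shown in Figure 8.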

(Optional) Add authentication to a single page application

One way to add secure authentication using Amazon Cognito into a single page application (SPA) is to use the Auth.federatedSignIn() method of the Auth class from AWS Amplify. AWS Amplify provides SDKs to integrate your web or mobile app with a growing list of AWS services, including integration with Amazon Cognito user pools. The federatedSignIn() method renders the hosted UI that gives users the option to sign in with the identity providers that you enabled on the app client (in Step 4), as shown in Figure 8. One advantage of the hosted UI is that you don’t have to write any code to render it. Additionally, it transparently implements the authorization code grant with PKCE and securely provides your client-side application with the tokens (ID, access, and refresh) that are required to access the backend APIs.
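
As a rough sketch, the sign-in call from an SPA can look like the following. This assumes an Amplify v5-style configuration with your own user pool, app client, and hosted UI domain (the placeholder values below are assumptions), and it reuses the Azure-AD provider name created earlier.

import { Amplify, Auth } from 'aws-amplify';

// Placeholder values; substitute your user pool, app client, and domain.
Amplify.configure({
  Auth: {
    region: 'ap-southeast-2',
    userPoolId: '<yourUserPoolID>',
    userPoolWebClientId: '<yourAppClientID>',
    oauth: {
      domain: '<yourCognitoDomain>.auth.ap-southeast-2.amazoncognito.com',
      scope: ['openid', 'email'],
      redirectSignIn: 'http://localhost:3030',
      redirectSignOut: 'http://localhost:3030',
      responseType: 'code', // authorization code grant with PKCE
    },
  },
});

// Renders the hosted UI; customProvider sends users directly to Azure AD.
Auth.federatedSignIn({ customProvider: 'Azure-AD' });

Calling Auth.federatedSignIn() with no arguments renders the full hosted UI instead, with a button for each IdP that you enabled on the app client.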

For a sample web application and instructions to connect it with Amazon Cognito authentication, see the aws-amplify-oidc-federation GitHub repository.

Conclusion

In this blog post, you learned how to integrate an Amazon Cognito user pool with Azure AD as an external SAML identity provider, to allow your users to use their corporate ID to sign in to web or mobile applications.

For more information about this solution, see our video Integrating Amazon Cognito with Azure Active Directory (from timestamp 25:26) on the official AWS Twitch channel. In the video, you’ll find an end-to-end demo of how to integrate Amazon Cognito with Azure AD, and then how to use AWS Amplify SDK to add authentication to a simple React app (using the example of a pet store). The video also includes how you can access group membership details from Azure AD for authorization and fine-grained access control.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ratan Kumar

Ratan is a solutions architect based out of Auckland, New Zealand. He works with large enterprise customers helping them design and build secure, cost-effective, and reliable internet scale applications using the AWS cloud. He is passionate about technology and likes sharing knowledge through blog posts and Twitch sessions.

Author

Vishwanatha Nayak

Vish is a solutions architect at AWS. He engages with customers to create innovative solutions that are secure, reliable, and cost optimised to address business problems and accelerate the adoption of AWS services. He has over 15 years of experience in various software development, consulting, and architecture roles.

Implement OAuth 2.0 device grant flow by using Amazon Cognito and AWS Lambda

Post Syndicated from Jeff Lombardo original https://aws.amazon.com/blogs/security/implement-oauth-2-0-device-grant-flow-by-using-amazon-cognito-and-aws-lambda/

In this blog post, you’ll learn how to implement the OAuth 2.0 device authorization grant flow for Amazon Cognito by using AWS Lambda and Amazon DynamoDB.

When you implement the OAuth 2.0 authorization framework (RFC 6749) for internet-connected devices that have limited input capabilities or lack a user-friendly browser (such as wearables, smart assistants, video-streaming devices, smart-home automation, and health or medical devices), you should consider using the OAuth 2.0 device authorization grant (RFC 8628). This authorization flow makes it possible for the device user to review the authorization request on a secondary device, such as a smartphone, that has more advanced input and browser capabilities. By using this flow, you can work around the limitations of the authorization code grant flow with Proof Key for Code Exchange (PKCE) that is defined by the OpenID Connect Core specification. This will help you to avoid scenarios such as:

  • Forcing end users to define a dedicated application password or use an on-screen keyboard with a remote control
  • Degrading the security posture of the end users by exposing their credentials to the client application or external observers

One common example of this type of scenario is a TV HDMI streaming device where, to be able to consume videos, the user must slowly select each letter of their user name and password with the remote control, which exposes these values to other people in the room during the operation.

Solution overview

The OAuth 2.0 device authorization grant (RFC 8628) is an IETF standard that enables Internet of Things (IoT) devices to initiate a unique transaction that authenticated end users can securely confirm through their native browsers. After the user authorizes the transaction, the solution will issue a delegated OAuth 2.0 access token that represents the end user to the requesting device through a back-channel call, as shown in Figure 1.
 

Figure 1: The device grant flow implemented in this solution

The workflow is as follows:

  1. An unauthenticated user requests service from the device.
  2. The device requests a pair of random codes (one for the device and one for the user) by authenticating with the client ID and client secret.
  3. The Lambda function creates an authorization request that stores the device code, user code, scope, and requestor’s client ID.
  4. The device provides the user code to the user.
  5. The user enters their user code on an authenticated web page to authorize the client application.
  6. The user is redirected to the Amazon Cognito user pool /authorize endpoint to request an authorization code.
  7. The user is returned to the Lambda function /callback endpoint with an authorization code.
  8. The Lambda function stores the authorization code in the authorization request.
  9. The device uses the device code to check the status of the authorization request regularly. After the authorization request is approved, the device uses the device code to retrieve a set of JSON web tokens from the Lambda function.
  10. In this case, the Lambda function impersonates the device to the Amazon Cognito user pool /token endpoint by using the authorization code that is stored in the authorization request, and returns the JSON web tokens to the device.

To achieve this flow, this blog post provides a solution that is composed of:

  • An AWS Lambda function with three additional endpoints:
    • The /token endpoint, which will handle client application requests such as generation of codes, the authorization request status check, and retrieval of the JSON web tokens.
    • The /device endpoint, which will handle user requests such as delivering the UI for approval or denial of the authorization request, or retrieving an authorization code.
    • The /callback endpoint, which will handle the reception of the authorization code associated with the user who is approving or denying the authorization request.
  • An Amazon Cognito user pool with the app clients that the solution uses.
  • Finally, an Amazon DynamoDB table to store the state of all the processed authorization requests.

Implement the solution

The implementation of this solution requires three steps:

  1. Define the public fully qualified domain name (FQDN) for the Application Load Balancer public endpoint and associate an X.509 certificate to the FQDN
  2. Deploy the provided AWS CloudFormation template
  3. Configure the DNS to point to the Application Load Balancer public endpoint for the public FQDN

Step 1: Choose a DNS name and create an SSL certificate

Your Lambda function endpoints must be publicly resolvable when they are exposed by the Application Load Balancer through an HTTPS/443 listener.

To configure the Application Load Balancer component

  1. Choose an FQDN in a DNS zone that you own.
  2. Associate an X.509 certificate and private key with the FQDN, either by requesting a public certificate in AWS Certificate Manager (ACM) or by importing a certificate that you own into ACM.
  3. After you have the certificate in ACM, navigate to the Certificates page in the ACM console.
  4. Choose the right arrow (►) icon next to your certificate to show the certificate details.
     
    Figure 2: Locating the certificate in ACM

  5. Copy the Amazon Resource Name (ARN) of the certificate and save it in a text file.
     
    Figure 3: Locating the certificate ARN in ACM

Step 2: Deploy the solution by using a CloudFormation template

To configure this solution, you’ll need to deploy the solution CloudFormation template.

Before you deploy the CloudFormation template, you can view it in its GitHub repository.

To deploy the CloudFormation template

  1. Choose the following Launch Stack button to launch a CloudFormation stack in your account.
    Select the Launch Stack button to launch the template

    Note: The stack will launch in the N. Virginia (us-east-1) Region. To deploy this solution into other AWS Regions, download the solution’s CloudFormation template, modify it, and deploy it to the selected Region.

  2. During the stack configuration, provide the following information:
    • A name for the stack.
    • The ARN of the certificate that you created or imported in AWS Certificate Manager.
    • A valid email address that you own. The initial password for the Amazon Cognito test user will be sent to this address.
    • The FQDN that you chose earlier, and that is associated to the certificate that you created or imported in AWS Certificate Manager.
    Figure 4: Configure the CloudFormation stack

  3. After the stack is configured, choose Next, and then choose Next again. On the Review page, select the check box that authorizes CloudFormation to create AWS Identity and Access Management (IAM) resources for the stack.
     
    Figure 5: Authorize CloudFormation to create IAM resources

  4. Choose Create stack to deploy the stack. The deployment will take several minutes. When the status says CREATE_COMPLETE, the deployment is complete.

Step 3: Finalize the configuration

After the stack is set up, you must finalize the configuration by creating a DNS CNAME entry in the DNS zone you own that points to the Application Load Balancer DNS name.

To create the DNS CNAME entry

  1. In the CloudFormation console, on the Stacks page, locate your stack and choose it.
     
    Figure 6: Locating the stack in CloudFormation

  2. Choose the Outputs tab.
  3. Copy the value for the key ALBCNAMEForDNSConfiguration.
     
    Figure 7: The ALB CNAME output in CloudFormation

  4. Configure a CNAME DNS entry into your DNS hosted zone based on this value. For more information on how to create a CNAME entry to the Application Load Balancer in a DNS zone, see Creating records by using the Amazon Route 53 console.
  5. Note the other values in the Output tab, which you will use in the next section of this post.

    • DeviceCognitoClientClientID: The app client ID, to be used by the simulated device to interact with the authorization server
    • DeviceCognitoClientClientSecret: The app client secret, to be used by the simulated device to interact with the authorization server
    • TestEndPointForDevice: The HTTPS endpoint that the simulated device will use to make its requests
    • TestEndPointForUser: The HTTPS endpoint that the user will use to make their requests
    • UserPassword: The password for the Amazon Cognito test user
    • UserUserName: The user name for the Amazon Cognito test user

Evaluate the solution

Now that you’ve deployed and configured the solution, you can initiate the OAuth 2.0 device code grant flow.

Until you implement your own device logic, you can perform all of the device calls by using the curl command-line tool, a Postman client, or any HTTP request library or SDK that is available in your client application’s coding language.

All of the following device HTTPS requests are made with the assumption that the device is a private OAuth 2.0 client. Therefore, an HTTP Authorization Basic header will be present and formed with a base64-encoded Client ID:Client Secret value.

You can retrieve the URI of the endpoints, the client ID, and the client secret from the CloudFormation Output table for the deployed stack, as described in the previous section.

Initialize the flow from the client application

The solution in this blog post lets you decide how the user will ask the device to start the authorization request and how the user will be presented with the user code and URI in order to verify the request. However, you can emulate the device behavior by generating the following HTTPS POST request to the Application Load Balancer–protected Lambda function /token endpoint with the appropriate HTTP Authorization header. The Authorization header is composed of:

  • The prefix Basic, describing the type of Authorization header
  • A space character as separator
  • The base64 encoding of the concatenation of:
    • The client ID
    • The colon character as a separator
    • The client secret
     POST /token?client_id=AIDACKCEVSQ6C2EXAMPLE HTTP/1.1
     User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
     Host: <FQDN of the ALB protected Lambda function>
     Accept: */*
     Accept-Encoding: gzip, deflate
     Connection: Keep-Alive
     Authorization: Basic QUlEQUNLQ0VWUwJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY VORy9iUHhSZmlDWUVYQU1QTEVLRVkg
    

The following JSON message will be returned to the client application.

HTTP/1.1 200 OK
Server: awselb/2.0
Date: Tue, 06 Apr 2021 19:57:31 GMT
Content-Type: application/json
Content-Length: 33
Connection: keep-alive
cache-control: no-store
{
    "device_code": "APKAEIBAERJR2EXAMPLE",
    "user_code": "ANPAJ2UCCR6DPCEXAMPLE",
    "verification_uri": "https://<FQDN of the ALB protected Lambda function>/device",
    "verification_uri_complete":"https://<FQDN of the ALB protected Lambda function>/device?code=ANPAJ2UCCR6DPCEXAMPLE&authorize=true",
    "interval": <Echo of POLLING_INTERVAL environment variable>,
    "expires_in": <Echo of CODE_EXPIRATION environment variable>
}
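
To emulate this initial device call in code rather than with curl or Postman, you can use a short Node.js sketch like the following. It assumes Node.js 18 or later (for the built-in fetch API) and an ES module context (for top-level await); the placeholder values come from your stack outputs.

// Placeholder values; use the CloudFormation stack outputs for your deployment.
const clientId = '<DeviceCognitoClientClientID>';
const clientSecret = '<DeviceCognitoClientClientSecret>';
const tokenEndpoint = 'https://<FQDN of the ALB protected Lambda function>/token';

// Authorization: Basic base64(clientId:clientSecret), as described above.
const authHeader =
  'Basic ' + Buffer.from(`${clientId}:${clientSecret}`).toString('base64');

const response = await fetch(`${tokenEndpoint}?client_id=${clientId}`, {
  method: 'POST',
  headers: { Authorization: authHeader },
});
const { device_code, user_code, verification_uri, interval } = await response.json();
console.log(`Ask the user to open ${verification_uri} and enter ${user_code}`);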

Check the status of the authorization request from the client application

You can emulate the process where the client app regularly checks for the authorization request status by using the following HTTPS POST request to the Application Load Balancer–protected Lambda function /token endpoint. The request should have the same HTTP Authorization header that was defined in the previous section.

POST /token?client_id=AIDACKCEVSQ6C2EXAMPLE&device_code=APKAEIBAERJR2EXAMPLE&grant_type=urn:ietf:params:oauth:grant-type:device_code HTTP/1.1
 User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
 Host: <FQDN of the ALB protected Lambda function>
 Accept: */*
 Accept-Encoding: gzip, deflate
 Connection: Keep-Alive
 Authorization: Basic QUlEQUNLQ0VWUwJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY VORy9iUHhSZmlDWUVYQU1QTEVLRVkg

Until the authorization request is approved, the client application will receive an error message that includes the reason for the error: authorization_pending if the request is not yet authorized, slow_down if the polling is too frequent, or expired if the maximum lifetime of the code has been reached. The following example shows the authorization_pending error message.

HTTP/1.1 400 Bad Request
Server: awselb/2.0
Date: Tue, 06 Apr 2021 20:57:31 GMT
Content-Type: application/json
Content-Length: 33
Connection: keep-alive
cache-control: no-store
{
"error":"authorization_pending"
}
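
A real device would wrap this check in a polling loop that respects the interval value and the error codes above. Here is one possible sketch, reusing the clientId, authHeader, and tokenEndpoint values from the previous snippet; the grant_type value is the one defined by RFC 8628.

const grantType = 'urn:ietf:params:oauth:grant-type:device_code';

async function pollForTokens(deviceCode, intervalSeconds) {
  let delayMs = intervalSeconds * 1000;
  for (;;) {
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    const response = await fetch(
      `${tokenEndpoint}?client_id=${clientId}` +
        `&device_code=${deviceCode}&grant_type=${encodeURIComponent(grantType)}`,
      { method: 'POST', headers: { Authorization: authHeader } }
    );
    const body = await response.json();
    if (response.ok) return body;                    // the JSON web token set
    if (body.error === 'slow_down') delayMs += 5000; // back off, keep polling
    else if (body.error !== 'authorization_pending') {
      throw new Error(body.error);                   // for example, expired
    }
  }
}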

Approve the authorization request with the user code

Next, you can approve the authorization request with the user code. To act as the user, you need to open a browser and navigate to the verification_uri that was provided by the client application.

If you don’t have a session with the Amazon Cognito user pool, you will be required to sign in.

Note: Remember that the initial password was sent to the email address you provided when you deployed the CloudFormation stack.

If you used the initial password, you’ll be asked to change it. Make sure to respect the password policy when you set a new password. After you’re authenticated, you’ll be presented with an authorization page, as shown in Figure 8.
 

Figure 8: The user UI for approving or denying the authorization request

Fill in the user code that was provided by the client application, as in the previous step, and then choose Authorize.

When the operation is successful, you’ll see a message similar to the one in Figure 9.
 

Figure 9: The “Success” message when the authorization request has been approved

Finalize the flow from the client app

After the request has been approved, you can emulate the final client app check for the authorization request status by using the following HTTPS POST request to the Application Load Balancer–protected Lambda function /token endpoint. The request should have the same HTTP Authorization header that was defined in the previous section.

POST /token?client_id=AIDACKCEVSQ6C2EXAMPLE&device_code=APKAEIBAERJR2EXAMPLE&grant_type=urn:ietf:params:oauth:grant-type:device_code HTTP/1.1
 User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
 Host: <FQDN of the ALB protected Lambda function>
 Accept: */*
 Accept-Encoding: gzip, deflate
 Connection: Keep-Alive
 Authorization: Basic QUlEQUNLQ0VWUwJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY VORy9iUHhSZmlDWUVYQU1QTEVLRVkg

The JSON web token set will then be returned to the client application, as follows.

HTTP/1.1 200 OK
Server: awselb/2.0
Date: Tue, 06 Apr 2021 21:41:50 GMT
Content-Type: application/json
Content-Length: 3501
Connection: keep-alive
cache-control: no-store
{
"access_token":"eyJrEXAMPLEHEADER2In0.eyJznvbEXAMPLEKEY6IjIcyJ9.eYEs-zaPdEXAMPLESIGCPltw",
"refresh_token":"eyJjdEXAMPLEHEADERifQ. AdBTvHIAPKAEIBAERJR2EXAMPLELq -co.pjEXAMPLESIGpw",
"expires_in":3600

The client application can now consume resources on behalf of the user by using the access token, and can refresh the access token autonomously by using the refresh token.
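
As an illustration, one way for a confidential client to refresh the access token is to call the user pool’s standard /oauth2/token endpoint with the refresh token. The sketch below assumes the device refreshes directly against the Amazon Cognito domain of the user pool (the domain is a placeholder) and reuses the authHeader and clientId values from the earlier snippets; adapt it to however your deployment expects refresh to be performed.

// Sketch: refresh the access token against the Cognito user pool token endpoint.
const refreshResponse = await fetch('https://<yourCognitoDomain>/oauth2/token', {
  method: 'POST',
  headers: {
    Authorization: authHeader, // confidential clients authenticate with Basic auth
    'Content-Type': 'application/x-www-form-urlencoded',
  },
  body: new URLSearchParams({
    grant_type: 'refresh_token',
    client_id: clientId,
    refresh_token: '<refresh_token from the previous response>',
  }),
});
const { access_token, expires_in } = await refreshResponse.json();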

Going further with this solution

This project is delivered with a default configuration that you can extend to support additional security capabilities or adapt to your end users’ context.

Extending security capabilities

Through this solution, you can:

  • Use an AWS KMS key to:
    • Encrypt the data in the database
    • Protect the configuration in the AWS Lambda function
  • Use AWS Secrets Manager to:
    • Securely store sensitive information, such as the Cognito application client’s credentials
    • Enforce rotation of the Cognito application client’s credentials
  • Add code to the AWS Lambda function to enforce data integrity on changes
  • Activate AWS WAF web ACLs to protect your endpoints against attacks

Customizing the end-user experience

The following list shows some of the variables that you can work with.

  • CODE_EXPIRATION: The lifetime, in seconds, of the generated codes. Default value: 1800.
  • DEVICE_CODE_FORMAT: The format for the device code. Default value: #aA.
  • DEVICE_CODE_LENGTH: The length of the device code. Default value: 64.
  • POLLING_INTERVAL: The minimum time, in seconds, between two polling events from the client application. Default value: 5.
  • USER_CODE_FORMAT: The format for the user code. Default value: #B.
  • USER_CODE_LENGTH: The length of the user code. Default value: 8.
  • RESULT_TOKEN_SET: What is returned in the token set to the client application, expressed as a string that includes only the values ID, ACCESS, and REFRESH, separated with a + symbol. Default value: ACCESS+REFRESH.

In the format strings, # represents numbers, a represents lowercase letters, b represents lowercase letters that aren’t vowels, A represents uppercase letters, B represents uppercase letters that aren’t vowels, and ! represents special characters.

To change the values of the Lambda function variables

  1. In the Lambda console, navigate to the Functions page.
  2. Select the DeviceGrant-token function.
     
    Figure 10: AWS Lambda console—Function selection

  3. Choose the Configuration tab.
     
    Figure 11: AWS Lambda function—Configuration tab

  4. Select the Environment variables tab, and then choose Edit to change the values for the variables.
     
    Figure 12: AWS Lambda Function—Environment variables tab

  5. Generate new codes as the device and see how the experience changes based on how you’ve set the environment variables.

Conclusion

Although your business and security requirements might be more complex than the example shown in this post, this solution gives you a good way to bootstrap your own implementation of the device grant flow (RFC 8628) by using Amazon Cognito, AWS Lambda, and Amazon DynamoDB.

Your end users can now benefit from the same level of security and the same experience as they have when they enroll their identity in their mobile applications, including the following features:

  • Credentials will be provided through a full-featured application on the user’s mobile device or their computer
  • Credentials will be checked against the source of authority only
  • The authentication experience will match the typical authentication process chosen by the end user
  • Upon consent by the end user, IoT devices will be provided with end-user delegated dynamic credentials that are bound to the exact scope of tasks for that device

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or reach out through the post’s GitHub repository.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Jeff Lombardo

Jeff is a solutions architect expert in IAM, Application Security, and Data Protection. Through 16 years as a security consultant for enterprises of all sizes and business verticals, he delivered innovative solutions with respect to standards and governance frameworks. Today at AWS, he helps organizations enforce best practices and defense in depth for secure cloud adoption.

Protect public clients for Amazon Cognito by using an Amazon CloudFront proxy

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/protect-public-clients-for-amazon-cognito-by-using-an-amazon-cloudfront-proxy/

In Amazon Cognito user pools, an app client is an entity that has permission to call unauthenticated API operations (that is, operations that don’t have an authenticated user), such as operations to sign up, sign in, and handle forgotten passwords. In this post, I show you a solution designed to protect these API operations from unwanted bots and distributed denial of service (DDoS) attacks.

To protect Amazon Cognito services and customers, Amazon Cognito applies request rate quotas on all API categories, and throttles rapid calls that exceed the assigned quota. For that reason, you must ensure your applications control who can call unauthenticated API operations and at what rate, so that user calls aren’t throttled because of unwanted or misconfigured clients that call these API operations at high rates.

App clients fall into one of two categories: public clients (used from web or mobile applications) and private or confidential clients (used from a secured backend). Public clients shouldn’t have secrets, because it isn’t possible to protect secrets in these types of clients. Confidential clients, on the other hand, use a secret to authorize calls to unauthenticated operations. In these clients, the secret can be protected in the backend.

The benefit of using a confidential app client with a secret in Amazon Cognito is that unauthenticated API operations will accept only the calls that include the secret hash for this client, and will drop calls with an invalid or missing secret. In this way, you control who calls these API operations. Public applications can use a confidential app client by implementing a lightweight proxy layer in front of the Amazon Cognito endpoint, and then using this proxy to add a secret hash in relevant requests before passing the requests to Amazon Cognito.

There are multiple options that you can use to implement this proxy. One option is to use Amazon CloudFront and Lambda@Edge to add the secret hash to the incoming requests. When you use a CloudFront proxy, you can also use AWS WAF, which gives you tools to detect and block unwanted clients. From Lambda@Edge, you can also integrate with other services (like Amazon Fraud Detector or third-party bot detection services) to help you detect possible fraudulent requests and block them. The CloudFront proxy, with the right set of security tools, helps protect your Amazon Cognito user pool from unwanted clients.

Solution overview

To implement this lightweight proxy pattern, you need to create an application client with a secret. Unauthenticated API calls to this client must include the secret hash, which is added to the request by the proxy layer. Client applications use an SDK like AWS Amplify, the Amazon Cognito Identity SDK, or a mobile SDK to communicate with Amazon Cognito. By default, the SDK sends requests to the Regional Amazon Cognito endpoint. Your application must override the default endpoint by manually adding an “Endpoint” property in the app configuration. See the Integrate the client application with the proxy section later in this post for more details.

Figure 1 shows how this works, step by step.
 

Figure 1: A proxy solution to the Amazon Cognito Regional endpoint

The workflow is as follows:

  1. You configure the client application (mobile or web client) to use a CloudFront endpoint as a proxy to an Amazon Cognito Regional endpoint. You also create an application client in Amazon Cognito with a secret. This means that any unauthenticated API call must have the secret hash.
  2. Clients that send unauthenticated API calls to the Amazon Cognito endpoint directly are blocked and dropped because of the missing secret.
  3. You use Lambda@Edge to add a secret hash to the relevant incoming requests before passing them on to the Amazon Cognito endpoint.
  4. From Lambda@Edge, you must have the app client secret to be able to calculate the secret hash and add it to the request. It’s recommended that you keep the secret in AWS Secrets Manager and cache it for the lifetime of the function.
  5. You use AWS WAF with CloudFront distribution to enforce rate limiting, allow and deny lists, and other rule groups according to your security requirements.

When to use this pattern

It’s a best practice to use this proxy pattern with clients that use SDKs to integrate with Amazon Cognito user pools. Examples include mobile applications that use the iOS or Android SDK, or web applications that use client-side libraries like Amplify or the Amazon Cognito Identity SDK to integrate with Amazon Cognito.

You don’t need to use a proxy pattern with server-side applications that use an AWS SDK to integrate with Amazon Cognito user pools from a protected backend, because server-side applications can natively use confidential clients and protect the secret in the backend.

You can’t use this solution with applications that use Hosted UI and OAuth 2.0 endpoints to integrate with Amazon Cognito user pools. This includes federation scenarios where users sign in with an external identity provider (IdP).

Implementation and deployment details

Before you deploy this solution, you need a user pool and an application client that has the client secret. When you have these in place, choose the following Launch Stack button to launch a CloudFormation stack in your account and deploy the proxy solution.

Select the Launch Stack button to launch the template

Note: The CloudFormation stack must be created in the us-east-1 AWS Region, but the user pool itself can exist in any supported Region.

The template takes the parameters shown in Figure 2 below.
 

Figure 2: CloudFormation stack creation with initial parameters

The parameters in Figure 2 include:

  • AdvancedSecurityEnabled is a flag that indicates whether advanced security is enabled in the user pool or not. This flag determines which version of the Lambda function is deployed. Notice that if you change this flag as part of a stack update, it overrides the function code, so if you have any manual changes, make sure to back up your changes.
  • AppClientSecret is the secret for your application client. This secret is stored in Secrets Manager and accessed from Lambda@Edge as needed.
  • LambdaS3BucketName is the bucket that hosts the Lambda code package. You don’t need to change this parameter unless you have a requirement to modify or extend the solution with your own Lambda function.
  • RateLimit is the maximum number of calls from a single IP address that are allowed within a 5-minute period. Values between 100 requests and 20 million requests are valid for RateLimit. Important: Provide a value suitable for your application and security requirements.

  • UserPoolId is the ID of your user pool. This value is used by Lambda@Edge when needed (for example, to call admin APIs, which require the user pool ID).
  • UserPoolRegion is the AWS Region where you created your user pool. This value is used to determine which Amazon Cognito Regional endpoint to proxy the calls to.

This template creates several resources in your AWS account, as follows:

  1. A CloudFront distribution that serves as a proxy to an Amazon Cognito Regional endpoint.
  2. An AWS WAF web access control list (ACL) with rules for the allow list, deny list, and rate limit.
  3. A Lambda function to be deployed at the edge and assigned to the origin request event.
  4. A secret in Secrets Manager, to hold the values of the application client secret and user pool ID.

After you create the stack, the CloudFront distribution domain name is available on the Outputs tab in the CloudFront console, as shown in Figure 3. This is the value that’s used as the Endpoint property in your client-side application. You can optionally add an alternative domain name to the CloudFront distribution if you prefer to use your own custom domain.
 

Figure 3: The output of the CloudFormation stack creation, displaying the CloudFront domain name

Use Lambda@Edge to add a secret hash to the request

As explained earlier, the purpose of having this proxy is to be able to inject the secret hash in unauthenticated API calls before passing them to the Amazon Cognito endpoint. This injection is achieved by a Lambda function that intercepts incoming requests at the edge (the CloudFront distribution) before passing them to the origin (the Amazon Cognito Regional endpoint).
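
For reference, the secret hash that Amazon Cognito expects is the Base64-encoded HMAC-SHA256 of the user name concatenated with the app client ID, keyed with the app client secret. A minimal sketch of the calculation that the edge function performs (the function name is illustrative, not the solution’s actual code) looks like this:

import { createHmac } from 'crypto';

// SECRET_HASH = Base64(HMAC-SHA256(clientSecret, username + clientId))
function computeSecretHash(username, clientId, clientSecret) {
  return createHmac('sha256', clientSecret)
    .update(username + clientId)
    .digest('base64');
}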

The Lambda function that is deployed to the edge has two versions. One is a simple pass-through proxy that only adds the secret hash, and this version is used if Amazon Cognito advanced security isn’t enabled. The other version is a proxy that uses the AdminInitiateAuth and AdminRespondToAuthChallenge API operations instead of unauthenticated API operations for the user authentication and challenge response. This allows the proxy layer to propagate the client IP address to the Amazon Cognito endpoint, which guides the adaptive authentication features of advanced security. The version that is deployed by the stack is determined by the AdvancedSecurityEnabled flag when you create or update the CloudFormation stack.

You can extend this solution by manually modifying the Lambda function with your own processing logic. For example, you can integrate with fraud detection or bot detection services to evaluate the request and decide to proceed or reject the call. Note that after making any change to the Lambda function code, you must deploy a new version to the edge location. To do that from the Lambda console, navigate to Actions, choose Deploy to Lambda@Edge, and then choose Use existing CloudFront trigger on this function.

Important: If you update the stack from CloudFormation and change the value of the AdvancedSecurityEnabled flag, the new value overrides the Lambda code with the default version for the choice. In that case, all manual changes are lost.

Allow or block requests

The template that is provided in this blog post creates a web ACL with three rules: AllowList, DenyList, and RateLimit. These rules are evaluated in order and determine which requests are allowed or blocked. The template also creates four IP sets, as shown in Figure 4, to hold the values of allowed or blocked IPs for both IPv4 and IPv6 address types.
 

Figure 4: The CloudFormation template creates IP sets in the AWS WAF console for allow and deny lists

If you want to always allow requests from certain clients, for example, trusted enterprise clients or server-side clients in cases where a large volume of requests is coming from the same IP address like a VPN gateway, add these IP addresses to the corresponding AllowList IP set. Similarly, if you want to always block traffic from certain IPs, add those IPs to the corresponding DenyList IP set.

Requests from sources that aren’t on the allow list or deny list are evaluated based on the volume of calls within 5 minutes, and sources that exceed the defined rate limit within 5 minutes are automatically blocked. If you want to change the defined rate limit, you can do so by updating the CloudFormation stack and providing a different value for the RateLimit parameter. Or you can modify this value directly in the AWS WAF console by editing the RateLimit rule.

Note: You can also use AWS Managed Rules for AWS WAF to add additional protection according to your security needs.

Integrate the client application with the proxy

You can integrate the client application with the proxy by changing the Endpoint in your client application to use the CloudFront distribution domain name. The domain name is located in the Outputs section of the CloudFormation stack.

You then need to edit your client-side code to forward calls to Amazon Cognito through the proxy endpoint. For example, if you’re using the Identity SDK, you should change this property as follows.

var poolData = {
  UserPoolId: '<USER-POOL-ID>',
  ClientId: '<APP-CLIENT-ID>',
  endpoint: 'https://<CF-DISTRIBUTION-DOMAIN>'
};

If you’re using AWS Amplify, you can change the endpoint in the aws-exports.js file by overriding the property aws_cognito_endpoint. Or, if you configure Amplify Auth in your code, you can provide the endpoint as follows.

Amplify.Auth.configure({
  userPoolId: '<USER-POOL-ID>',
  userPoolWebClientId: '<APP-CLIENT-ID>',
  endpoint: 'https://<CF-DISTRIBUTION-DOMAIN>'
});

If you have a mobile application that uses the Amplify mobile SDK, you can override the endpoint in your configuration as follows (don’t include the AppClientSecret parameter in your configuration). Note that the Endpoint value contains the domain name only, not the full URL. This feature is available in the latest releases of the iOS and Android SDKs.

"CognitoUserPool": {
  "Default": {
    "AppClientId": "<APP-CLIENT-ID>",
    "Endpoint": "<CF-DISTRIBUTION-DOMAIN>",
    "PoolId": "<USER-POOL-ID>",
    "Region": "<REGION>"
  }
}

Warning: The Amplify CLI overwrites customizations to the awsconfiguration.json and amplifyconfiguration.json files if you do an amplify push or amplify pull operation. You must manually re-apply the Endpoint customization and remove the AppClientSecret if you use the CLI to modify your cloud backend.

Solution limitations

This solution has these limitations:

  • If advanced security features are enabled for the user pool, Amazon Cognito calculates risk for user events. If you use this proxy pattern, the IP address that is propagated in user events is the proxy IP address, which causes risk calculation for SignUp, ForgotPassword, and ResendCode events to be inaccurate. On the other hand, Sign-In events still have the client IP address propagated correctly, and risk calculation and adaptive authentication for Sign-In events aren’t affected by the use of this proxy.
  • This solution is not applicable to Hosted UI, OAuth 2.0 endpoints, and federation flows.
  • Authenticated and admin API operations (which require developer credentials or an access token) aren’t covered in this solution. These API operations don’t require a secret hash, and they use other authentication mechanisms.
  • Using this proxy solution with mobile apps requires an update to the application. The update might take time to be available in the relevant app store, and you must depend on end users to update their app. Plan ahead of time to use the solution with mobile apps.

How to detect unusual behavior

In this section, I share the steps to detect, quickly analyze, and respond to unwanted clients. It’s a best practice to configure monitoring and alarms that help you detect unexpected spikes in activity. Additionally, I show you how to be ready to quickly identify clients that are calling your resources at a higher-than-usual rate.

Monitor utilization compared to quotas

Amazon Cognito integrates with Service Quotas, which monitor service utilization compared to quotas. These metrics help you detect unexpected spikes and be alerted if you’re approaching your quota for a certain API category. Approaching your quota indicates that there is a risk that calls from legitimate users will be throttled.

To view utilization versus quota metrics

  1. In the Service Quotas console, choose Service Quotas, choose AWS Services, and then choose Amazon Cognito User Pools.
  2. Under Service quotas, enter the search term rate of. This shows you the list of API categories and the assigned quotas for each category.
     
    Figure 5: The Service Quotas console showing Amazon Cognito API category rate quotas

  3. Choose any of the API categories to see utilization versus quota metrics.
     
    Figure 6: The Service Quotas console showing utilization vs quota metrics for Amazon Cognito UserCreation APIs

  4. You can also create alarms from this page to alert you if utilization is above a pre-defined threshold. You can create alarms starting at 50 percent utilization. It’s recommended that you create multiple alarms, for example at the 50 percent, 70 percent, and 90 percent thresholds, and configure CloudWatch alarms as appropriate.
     
    Figure 7: Creating an alarm for the utilization of the UserCreation API category

Analyze CloudTrail logs with Athena

If you detect an unexpected spike in traffic to a certain API category, the next step is to identify the sources of this spike. You can do that by using CloudTrail logs or, after you deploy and use this proxy solution, CloudFront logs as sources of information. You can then analyze these logs by using Amazon Athena queries.

The first step is to create Athena tables from CloudTrail and CloudFront logs. You can do that by following these steps for CloudTrail and similar steps for CloudFront. After you have these tables created, you can create a set of queries that help you identify unwanted clients. Here are a couple of examples:

  • Use the following query to identify clients with the highest call rate to the InitiateAuth API operation within the timeframe you noticed the spike (change the eventtime value to reflect the attack window).
    SELECT sourceipaddress, count(*) AS callcount
    FROM "default"."cloudtrail_logs"
    WHERE eventname = 'InitiateAuth'
    AND eventtime >= '2021-03-01T00:00:00Z' AND eventtime < '2021-03-31T00:00:00Z'
    GROUP BY sourceipaddress
    ORDER BY callcount DESC
    LIMIT 10
    

  • Use the following query to identify clients that come through CloudFront with the highest error rate.
    SELECT count(*) AS errorcount, request_ip
    FROM "default"."cloudfront_logs"
    WHERE status >= 500
    GROUP BY request_ip
    ORDER BY errorcount DESC
    LIMIT 10

After you identify sources that are calling your service with a higher-than-usual rate, you can block these clients by adding them to the DenyList IP set that was created in AWS WAF.
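
If you want to script that last step, the following sketch uses the AWS SDK for JavaScript v3 to append an address to the IPv4 deny list IP set. The IP set name and ID are placeholders; look them up in the AWS WAF console for the resources that the stack created. Note that CloudFront-scoped AWS WAF resources are managed in the us-east-1 Region.

import {
  WAFV2Client,
  GetIPSetCommand,
  UpdateIPSetCommand,
} from '@aws-sdk/client-wafv2';

// CloudFront-scoped AWS WAF resources live in us-east-1.
const waf = new WAFV2Client({ region: 'us-east-1' });

// Placeholder identifiers; use the deny list IP set that the stack created.
const ipSet = { Name: '<DenyListIPSetName>', Scope: 'CLOUDFRONT', Id: '<IPSetId>' };

async function blockAddress(cidr) {
  // UpdateIPSet replaces the whole address list, so read the current set first.
  const { IPSet, LockToken } = await waf.send(new GetIPSetCommand(ipSet));
  await waf.send(
    new UpdateIPSetCommand({
      ...ipSet,
      Addresses: [...IPSet.Addresses, cidr], // for example, '203.0.113.5/32'
      LockToken, // required; guards against concurrent modifications
    })
  );
}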

Analyze CloudTrail events with CloudWatch Logs Insights

It’s a best practice to configure your trail to send events to CloudWatch Logs. After you do this, you can interactively search and analyze your Amazon Cognito CloudTrail events with CloudWatch Logs Insights to identify errors, unusual activity, or unusual user behavior in your account.

Conclusion

In this post, I showed you how to implement a lightweight proxy to an Amazon Cognito endpoint, which can be used with an application client secret to control access to unauthenticated API operations. This approach, together with security tools such as AWS WAF, helps provide protection for these API operations from unwanted clients. I also showed you strategies to help detect an ongoing attack and quickly analyze, identify, and block unwanted clients.

For more strategies for DDoS mitigation, see the AWS Best Practices for DDoS Resiliency.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mahmoud Matouk

Mahmoud is a Senior Solutions Architect with the Amazon Cognito team. He helps AWS customers build secure and innovative solutions for various identity and access management scenarios.

How AWS SSO Active Directory sync enhances AWS application experiences

Post Syndicated from Sharanya Ramakrishnan original https://aws.amazon.com/blogs/security/how-aws-sso-active-directory-sync-enhances-aws-application-experiences/

Identity management is easiest when you can manage identities in a centralized location and use these identities across various accounts and applications. You also want to be able to use these identities for other purposes within applications, like searching through groups, finding members of a certain group, and sharing projects with other users or groups. For example, when you use AWS Systems Manager Change Manager, you might want to search for groups or distinguish a user from a list of users with the same name based on their email address. You expect that the user and group details you see are consistent with the details that appear in a different application.

AWS Single Sign-On (AWS SSO) streamlines identity management by enabling you to connect an identity source, such as the AWS SSO internal directory or an identity provider (IdP) from a range of partners, and to use that identity information for access and collaboration within applications. Now you can get the same benefits when you connect your Microsoft Active Directory (AD) as your AWS SSO identity source. With the release of AWS SSO AD sync, you’ll be able to access AD groups, along with AD users, from AWS SSO-integrated applications, and use these groups and users for collaborative experiences. AD sync automatically brings identity information from your Active Directory into AWS SSO and makes this information available to you within applications. It makes sure that the user and group details you access in Amazon Web Services (AWS) stay consistent with information in Active Directory through periodic synchronizations.

In this post, I’ll walk you through key use cases that highlight how applications use the user and group information that is synchronized from Active Directory and how the AD synchronization capability works to make this possible.

Access control

Your ability to manage who can access which parts of an application or who has the necessary permissions to drive certain tasks within an application relies on the application’s ability to retrieve user and group information. It’s also important that any access that you configure is updated dynamically when there are any changes at the source. For example, if you define approval access to a group in an application and a member leaves the group when they change roles within the company, their group-based access within the application should be revoked. With AD sync, AWS SSO-integrated applications can utilize user and group information that is periodically updated, and therefore stays current.

Suppose you’ve set up an approval template in Systems Manager Change Manager for patching instances and want to require that all members of the IT Security Operations team approve any change requests created with this template. AD sync enhances this process by giving you the option to define approvers at the AD group level. If you have an IT Security Operations group in Active Directory and the group has permissions set up to access AWS SSO, this group will be available to you in Change Manager to select as an approver in your template. If a member of the IT Security Operations group switches roles and leaves the team, AD sync helps to ensure that the member’s access to approve patching-related change requests is revoked, by dynamically updating the IT Security Operations group in Change Manager once the member is removed from the group in Active Directory.

It’s common for teams at companies to work on cross-functional initiatives that involve sharing projects, reports, or dashboards with members of different teams for their review and feedback, or for collaboration. In such cases, you want to be able to easily search for users and groups within the application and share out relevant artifacts. AD sync makes it possible to access users and groups within AWS SSO-integrated applications, and you can then use this information for searching and sharing.

For example, if you use an AWS SSO-integrated application like AWS IoT SiteWise to create and share dashboards for metrics reviews with leadership or to collaborate with other teams in your organization, you’ll now be able to see all users with access to AWS. AD sync makes it possible for AWS IoT SiteWise to access all users, rather than only the users who signed in to AWS at least once.

Administrative efficiency

If you’re a platform admin or cloud admin who manages access to AWS SSO in your company, assigning users and groups with access to AWS accounts and resources is a routine task that requires administrative effort. Because AD sync periodically syncs AD groups into AWS SSO, you only need to pre-define access to resources for an AD group once. After that point, any new member, such as a new employee, who is added to the AD group in Active Directory will gain access to resources tied to the AD group. The new employee will also be added to AWS SSO through AD sync, and their information will stay current through periodic syncs. Therefore, the administrative effort involved on your end for managing users is reduced.

Similarly, if an employee leaves the company, you will no longer have to worry about deleting their information in AWS, because AD sync automatically deletes user and group objects that you delete in Active Directory. This simplifies your user lifecycle management and reduces the manual effort involved in the process.

How Active Directory sync works in the background

This new AD sync feature is for customers who want to use their AD identities with AWS SSO, without setting up a separate IdP, such as Active Directory Federation Services (AD FS) or Azure AD. To use this capability, you must connect AWS SSO to your Active Directory by using AWS SSO with either AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) or AD Connector. Learn more about using AWS Managed Microsoft AD and AD Connector.

AD sync brings in user and group information from your Active Directory and stores it in the AWS SSO identity store. Once this information is synchronized, AWS SSO-integrated applications can use the user and group information to deliver collaborative experiences, such as sharing a dashboard with other users.

AD sync obtains a list of users and groups to be synchronized from Active Directory based on the assignments that you make to AWS accounts and applications. It then syncs those users and groups (including the group members) into the AWS identity store, keeping the information updated through periodic syncs, as shown in Figure 1.

Figure 1: Active Directory synchronization of users and groups

If a user has assignments based on attribute-based access-control (ABAC) and changes departments, attributes will automatically update at the next sync. If a user happens to sign in before the next sync, the attributes will be updated at sign-in to maintain consistency. The user will now see their assignments updated based on their new department.

AD sync also syncs in all members of a group, including sub-groups or nested groups. It flattens members of the nested groups; that is, it adds them to the parent group in the AWS SSO identity store. For example, if Group B is a member or nested group of Group A in Active Directory, then members of Group B are also synced into AWS SSO and added directly to Group A, as shown in Figure 2. So, only Group A can be used in AWS SSO accounts and applications.

Figure 2: Members of nested Group B flattened and added to parent Group A

If you delete a user or group in Active Directory, AD sync automatically deletes the user or group from the AWS SSO identity store. You won’t see the deleted identity appear in AWS SSO-integrated applications, either. However, if you only delete the assignments for a user or group, the user or group will remain in AWS SSO and won’t be automatically deleted.

Summary

In this blog post, I explained how user and group synchronization can help deliver better application experiences with less administrative effort. I also covered how the AWS SSO AD sync capability delivers this benefit for applications such as AWS Systems Manager and AWS IoT SiteWise. AD sync capability is available to you at no additional cost in all AWS Regions supported by AWS SSO. If you want to get started with AWS SSO or learn more about AD sync, see the AWS SSO User Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS SSO forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Sharanya Ramakrishnan

Sharanya is a Senior Technical Product Manager in the AWS Identity team. She enjoys solving customer problems through meaningful products, particularly in the dynamic security and identity space. Outside of work, Sharanya likes to travel and enjoys hiking and reading.

re:Invent – New security sessions launching soon

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/reinvent-new-security-sessions-launching-soon/

Where did the last month go? Were you able to catch all of the sessions in the Security, Identity, and Compliance track you hoped to see at AWS re:Invent? If you missed any, don’t worry—you can stream all the sessions released in 2020 via the AWS re:Invent website. Additionally, we’re starting 2021 with all new sessions that you can stream live January 12–15. Here are the new Security, Identity, and Compliance sessions—each session is offered at multiple times, so you can find the time that works best for your location and schedule.

Protecting sensitive data with Amazon Macie and Amazon GuardDuty – SEC210
Himanshu Verma, AWS Speaker

Tuesday, January 12 – 11:00 AM to 11:30 AM PST
Tuesday, January 12 – 7:00 PM to 7:30 PM PST
Wednesday, January 13 – 3:00 AM to 3:30 AM PST

As organizations manage growing volumes of data, identifying and protecting your sensitive data can become increasingly complex, expensive, and time-consuming. In this session, learn how Amazon Macie and Amazon GuardDuty together provide protection for your data stored in Amazon S3. Amazon Macie automates the discovery of sensitive data at scale and lowers the cost of protecting your data. Amazon GuardDuty continuously monitors and profiles S3 data access events and configurations to detect suspicious activities. Come learn about these security services and how to best use them for protecting data in your environment.

BBC: Driving security best practices in a decentralized organization – SEC211
Apurv Awasthi, AWS Speaker
Andrew Carlson, Sr. Software Engineer – BBC

Tuesday, January 12 – 1:15 PM to 1:45 PM PST
Tuesday, January 12 – 9:15 PM to 9:45 PM PST
Wednesday, January 13 – 5:15 AM to 5:45 AM PST

In this session, Andrew Carlson, engineer at BBC, talks about BBC’s journey while adopting AWS Secrets Manager for lifecycle management of its arbitrary credentials such as database passwords, API keys, and third-party keys. He provides insight on BBC’s secrets management best practices and how the company drives these at enterprise scale in a decentralized environment that has a highly visible scope of impact.

Get ahead of the curve with DDoS Response Team escalations – SEC321
Fola Bolodeoku, AWS Speaker

Tuesday, January 12 – 3:30 PM to 4:00 PM PST
Tuesday, January 12 – 11:30 PM to 12:00 AM PST
Wednesday, January 13 – 7:30 AM to 8:00 AM PST

This session identifies tools and tricks that you can use to prepare for application security escalations, with lessons learned provided by the AWS DDoS Response Team. You learn how AWS customers have used different AWS offerings to protect their applications, including network access control lists, security groups, and AWS WAF. You also learn how to avoid common misconfigurations and mishaps observed by the DDoS Response Team, and you discover simple yet effective actions that you can take to better protect your applications’ availability and security controls.

Network security for serverless workloads – SEC322
Alex Tomic, AWS Speaker

Thursday, January 14 – 1:30 PM to 2:00 PM PST
Thursday, January 14 – 9:30 PM to 10:00 PM PST
Friday, January 15 – 5:30 AM to 6:00 AM PST

Are you building a serverless application using services like Amazon API Gateway, AWS Lambda, Amazon DynamoDB, Amazon Aurora, and Amazon SQS? Would you like to apply enterprise network security to these AWS services? This session covers how network security concepts like encryption, firewalls, and traffic monitoring can be applied to a well-architected AWS serverless architecture.

Building your cloud incident response program – SEC323
Freddy Kasprzykowski, AWS Speaker

Wednesday, January 13 – 9:00 AM to 9:30 AM PST
Wednesday, January 13 – 5:00 PM to 5:30 PM PST
Thursday, January 14 – 1:00 AM to 1:30 AM PST

You’ve configured your detection services and now you’ve received your first alert. This session provides patterns that help you understand what capabilities you need to build and run an effective incident response program in the cloud. It includes a review of some logs to see what they tell you and a discussion of tools to analyze those logs. You learn how to make sure that your team has the right access, how automation can help, and which incident response frameworks can guide you.

Beyond authentication: Guide to secure Amazon Cognito applications – SEC324
Mahmoud Matouk, AWS Speaker

Wednesday, January 13 – 2:15 PM to 2:45 PM PST
Wednesday, January 13 – 10:15 PM to 10:45 PM PST
Thursday, January 14 – 6:15 AM to 6:45 AM PST

Amazon Cognito is a flexible user directory that can meet the needs of a number of customer identity management use cases. Web and mobile applications can integrate with Amazon Cognito in minutes to offer user authentication and get standard tokens to be used in token-based authorization scenarios. This session covers best practices that you can implement in your application to secure and protect tokens. You also learn about new Amazon Cognito features that give you more options to improve the security and availability of your application.

Event-driven data security using Amazon Macie – SEC325
Neha Joshi, AWS Speaker

Thursday, January 14 – 8:00 AM to 8:30 AM PST
Thursday, January 14 – 4:00 PM to 4:30 PM PST
Friday, January 15 – 12:00 AM to 12:30 AM PST

Amazon Macie sensitive data discovery jobs for Amazon S3 buckets help you discover sensitive data such as personally identifiable information (PII), financial information, account credentials, and workload-specific sensitive information. In this session, you learn about an automated approach to discover sensitive information whenever changes are made to the objects in your S3 buckets.

Instance containment techniques for effective incident response – SEC327
Jonathon Poling, AWS Speaker

Thursday, January 14 – 10:15 AM to 10:45 AM PST
Thursday, January 14 – 6:15 PM to 6:45 PM PST
Friday, January 15 – 2:15 AM to 2:45 AM PST

In this session, learn about several instance containment and isolation techniques, ranging from simple and effective to more complex and powerful, that leverage native AWS networking services and account configuration techniques. If an incident happens, you may have questions like “How do we isolate the system while preserving all the valuable artifacts?” and “What options do we even have?”. These are valid questions, but there are more important ones to discuss amidst a (possible) incident. Join this session to learn highly effective instance containment techniques in a crawl-walk-run approach that also facilitates preservation and collection of valuable artifacts and intelligence.

Trusted connections for government workloads – SEC402
Brad Dispensa, AWS Speaker

Wednesday, January 13 – 11:15 AM to 11:45 AM PST
Wednesday, January 13 – 7:15 PM to 7:45 PM PST
Thursday, January 14 – 3:15 AM to 3:45 AM PST

Cloud adoption across the public sector is making it easier to provide government workforces with seamless access to applications and data. With this move to the cloud, we also need updated security guidance to ensure public-sector data remain secure. For example, the TIC (Trusted Internet Connections) initiative has been a requirement for US federal agencies for some time. The recent TIC-3 moves from prescriptive guidance to an outcomes-based model. This session walks you through how to leverage AWS features to better protect public-sector data using TIC-3 and the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF). Also, learn how this might map into other geographies.

I look forward to seeing you in these sessions. Please see the re:Invent agenda for more details and to build your schedule.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Marta Taggart

Marta is a Seattle-native and Senior Program Manager in AWS Security, where she focuses on privacy, content development, and educational programs. Her interest in education stems from two years she spent in the education sector while serving in the Peace Corps in Romania. In her free time, she’s on a global hunt for the perfect cup of coffee.

re:Invent 2020 – Your guide to AWS Identity and Data Protection sessions

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/reinvent-2020-your-guide-to-aws-identity-and-data-protection-sessions/

AWS re:Invent will certainly be different in 2020! Instead of seeing you all in Las Vegas, this year re:Invent will be a free, three-week virtual conference. One thing that will remain the same is the variety of sessions, including many Security, Identity, and Compliance sessions. As we developed sessions, we looked to customers—asking where they would like to expand their knowledge. One way we did this was shared in a recent Security blog post, where we introduced a new customer polling feature that provides us with feedback directly from customers. The initial results of the poll showed that Identity and Access Management and Data Protection are top-ranking topics for customers. We wanted to highlight some of the re:Invent sessions for these two important topics so that you can start building your re:Invent schedule. Each session is offered at multiple times, so you can sign up for the time that works best for your location and schedule.

Managing your Identities and Access in AWS

AWS identity: Secure account and application access with AWS SSO
Ron Cully, Principal Product Manager, AWS

Dec 1, 2020 | 12:00 PM – 12:30 PM PST
Dec 1, 2020 | 8:00 PM – 8:30 PM PST
Dec 2, 2020 | 4:00 AM – 4:30 AM PST

AWS SSO provides an easy way to centrally manage access at scale across all your AWS Organizations accounts, using identities you create and manage in AWS SSO, Microsoft Active Directory, or external identity providers (such as Okta Universal Directory or Azure AD). This session explains how you can use AWS SSO to manage your AWS environment, and it covers key new features to help you secure and automate account access authorization.

Getting started with AWS identity services
Becky Weiss, Senior Principal Engineer, AWS

Dec 1, 2020 | 1:30 PM – 2:00 PM PST
Dec 1, 2020 | 9:30 PM – 10:00 PM PST
Dec 2, 2020 | 5:30 AM – 6:00 AM PST

The number, range, and breadth of AWS services are large, but the set of techniques that you need to secure them is not. Your journey as a builder in the cloud starts with this session, in which practical examples help you quickly get up to speed on the fundamentals of becoming authenticated and authorized in the cloud, as well as on securing your resources and data correctly.

AWS identity: Ten identity health checks to improve security in the cloud
Cassia Martin, Senior Security Solutions Architect, AWS

Dec 2, 2020 | 9:30 AM – 10:00 AM PST
Dec 2, 2020 | 5:30 PM – 6:00 PM PST
Dec 3, 2020 | 1:30 AM – 2:00 AM PST

Get practical advice and code to help you achieve the principle of least privilege in your existing AWS environment. From enabling logs to disabling root, the provided checklist helps you find and fix permissions issues in your resources, your accounts, and throughout your organization. With these ten health checks, you can improve your AWS identity and achieve better security every day.

AWS identity: Choosing the right mix of AWS IAM policies for scale
Josh Du Lac, Principal Security Solutions Architect, AWS

Dec 2, 2020 | 11:00 AM – 11:30 AM PST
Dec 2, 2020 | 7:00 PM – 7:30 PM PST
Dec 3, 2020 | 3:00 AM – 3:30 AM PST

This session provides both a strategic and tactical overview of various AWS Identity and Access Management (IAM) policies that provide a range of capabilities for the security of your AWS accounts. You probably already use a number of these policies today, but this session will dive into the tactical reasons for choosing one capability over another. This session zooms out to help you understand how to manage these IAM policies across a multi-account environment, covering their purpose, deployment, validation, limitations, monitoring, and more.

Zero Trust: An AWS perspective
Quint Van Deman, Principal WW Identity Specialist, AWS

Dec 2, 2020 | 12:30 PM – 1:00 PM PST
Dec 2, 2020 | 8:30 PM – 9:00 PM PST
Dec 3, 2020 | 4:30 AM – 5:00 AM PST

AWS customers have continuously asked, “What are the optimal patterns for ensuring the right levels of security and availability for my systems and data?” Increasingly, they are asking how patterns that fall under the banner of Zero Trust might apply to this question. In this session, you learn about the AWS guiding principles for Zero Trust and explore the larger subdomains that have emerged within this space. Then the session dives deep into how AWS has incorporated some of these concepts, and how AWS can help you on your own Zero Trust journey.

AWS identity: Next-generation permission management
Brigid Johnson, Senior Software Development Manager, AWS

Dec 3, 2020 | 11:00 AM – 11:30 AM PST
Dec 3, 2020 | 7:00 PM – 7:30 PM PST
Dec 4, 2020 | 3:00 AM – 3:30 AM PST

This session is for central security teams and developers who manage application permissions. This session reviews a permissions model that enables you to scale your permissions management with confidence. Learn how to set your organization up for access management success with permission guardrails. Then, learn about granting workforce permissions based on attributes, so they scale as your users and teams adjust. Finally, learn about the access analysis tools and how to use them to identify and reduce broad permissions and give users and systems access to only what they need.

How Goldman Sachs administers temporary elevated AWS access
Harsha Sharma, Solutions Architect, AWS
Chana Garbow Pardes, Associate, Goldman Sachs
Jewel Brown, Analyst, Goldman Sachs

Dec 16, 2020 | 2:00 PM – 2:30 PM PST
Dec 16, 2020 | 10:00 PM – 10:30 PM PST
Dec 17, 2020 | 6:00 AM – 6:30 AM PST

Goldman Sachs takes security and access to AWS accounts seriously. While empowering teams with the freedom to build applications autonomously is critical for scaling cloud usage across the firm, guardrails and controls need to be set in place to enable secure administrative access. In this session, learn how the company built its credential brokering workflow and administrator access for its users. Learn how, with its simple application that uses proprietary and AWS services, including Amazon DynamoDB, AWS Lambda, AWS CloudTrail, Amazon S3, and Amazon Athena, Goldman Sachs is able to control administrator credentials and monitor and report on actions taken for audits and compliance.

Data Protection

Do you need an AWS KMS custom key store?
Tracy Pierce, Senior Consultant, AWS

Dec 15, 2020 | 9:45 AM – 10:15 AM PST
Dec 15, 2020 | 5:45 PM – 6:15 PM PST
Dec 16, 2020 | 1:45 AM – 2:15 AM PST

AWS Key Management Service (AWS KMS) has integrated with AWS CloudHSM, giving you the option to create your own AWS KMS custom key store. In this session, you learn more about how a KMS custom key store is backed by an AWS CloudHSM cluster and how it enables you to generate, store, and use your KMS keys in the hardware security modules that you control. You also learn when and if you really need a custom key store. Join this session to learn why you might choose not to use a custom key store and instead use the AWS KMS default.

Using certificate-based authentication on containers & web servers on AWS
Josh Rosenthol, Senior Product Manager, AWS
Kevin Rioles, Manager, Infrastructure & Security, BlackSky

Dec 8, 2020 | 12:45 PM – 1:15 PM PST
Dec 8, 2020 | 8:45 PM – 9:15 PM PST
Dec 9, 2020 | 4:45 AM – 5:15 AM PST

In this session, BlackSky talks about its experience using AWS Certificate Manager (ACM) end-entity certificates for the processing and distribution of real-time satellite geospatial intelligence and monitoring. Learn how BlackSky uses certificate-based authentication on containers and web servers within its AWS environment to help make TLS ubiquitous in its deployments. The session details the implementation, architecture, and operations best practices that the company chose and how it was able to operate ACM at scale across multiple accounts and regions.

The busy manager’s guide to encryption
Spencer Janyk, Senior Product Manager, AWS

Dec 9, 2020 | 11:45 AM – 12:15 PM PST
Dec 9, 2020 | 7:45 PM – 8:15 PM PST
Dec 10, 2020 | 3:45 AM – 4:15 AM PST

In this session, explore the functionality of AWS cryptography services and learn when and where to deploy each of the following: AWS Key Management Service, AWS Encryption SDK, AWS Certificate Manager, AWS CloudHSM, and AWS Secrets Manager. You also learn about defense-in-depth strategies including asymmetric permissions models, client-side encryption, and permission segmentation by role.

Building post-quantum cryptography for the cloud
Alex Weibel, Senior Software Development Engineer, AWS

Dec 15, 2020 | 12:45 PM – 1:15 PM PST
Dec 15, 2020 | 8:45 PM – 9:15 PM PST
Dec 16, 2020 | 4:45 AM – 5:15 AM PST

This session introduces post-quantum cryptography and how you can use it today to secure TLS communication. Learn about recent updates on standards and existing deployments, including the AWS post-quantum TLS implementation (pq-s2n). A description of the hybrid key agreement method shows how you can combine a new post-quantum key encapsulation method with a classical key exchange to secure network traffic today.

Data protection at scale using Amazon Macie
Neel Sendas, Senior Technical Account Manager, AWS

Dec 17, 2020 | 7:15 AM – 7:45 AM PST
Dec 17, 2020 | 3:15 PM – 3:45 PM PST
Dec 17, 2020 | 11:15 PM – 11:45 PM PST

Data Loss Prevention (DLP) is a common topic among companies that work with sensitive data. If an organization can’t identify its sensitive data, it can’t protect it. Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. In this session, we will share details of the design and architecture you can use to deploy Macie at large scale.

While sessions are virtual this year, they will be offered at multiple times with live moderators and “Ask the Expert” sessions available to help answer any questions that you may have. We look forward to “seeing” you in these sessions. Please see the re:Invent agenda for more details and to build your schedule.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Marta Taggart

Marta is a Seattle-native and Senior Program Manager in AWS Security, where she focuses on privacy, content development, and educational programs. Her interest in education stems from two years she spent in the education sector while serving in the Peace Corps in Romania. In her free time, she’s on a global hunt for the perfect cup of coffee.

Author

Himanshu Verma

Himanshu is a Worldwide Specialist for AWS Security Services. In this role, he leads the go-to-market creation and execution for AWS Data Protection and Threat Detection & Monitoring services, field enablement, and strategic customer advisement. Prior to AWS, he held roles as Director of Product Management, engineering and development, working on various identity, information security and data protection technologies.

New – Attribute-Based Access Control with AWS Single Sign-On

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-attributes-based-access-control-with-aws-single-sign-on/

Starting today, you can pass user attributes in the AWS session when your workforce signs in to the cloud using AWS Single Sign-On. This gives you the centralized account access management of AWS Single Sign-On and attribute-based access control (ABAC), with the flexibility to use AWS SSO, Active Directory, or an external identity provider as your identity source. To learn more about the advantages of ABAC policies on AWS, you can read my previous blog post on the subject.

Overview
On one side, system administrators configure user attributes in the AWS Single Sign-On identity repository or the managed Active Directory. System administrators may also configure an external identity provider, such as Okta, OneLogin, or PingFederate, to pass existing user attributes in the AWS sessions when their workforce federates into AWS. These attributes are known as session tags in AWS. On the other side, cloud administrators create fine-grained permissions policies so that your workforce gets access only to cloud resources with matching resource tags.

Creating policies based on matching attributes instead of functional roles helps to reduce the number of distinct permissions and roles you must create and manage in your AWS environment. For example, when developers Bob from team red and Alice from team blue sign in to AWS and assume the same AWS Identity and Access Management (IAM) role, they get distinct permissions to project resources tagged for their team. The identity system sends the team name attribute in the AWS session when Bob and Alice sign in to AWS. The role’s permissions grant access to project resources with matching team name tags. Now, if Bob moves to team blue and system administrators update his team name in their identity provider directory, Bob automatically gets access to team blue’s project resources without requiring permissions updates in IAM.

How to Configure AWS SSO to Map User Attributes
Before configuring AWS SSO, there are two important points to highlight. First, ABAC will work with attributes from any identity source configured in AWS SSO: AWS SSO itself, a managed Active Directory, or an external identity provider. Second, there are two ways to pass attributes for access control to AWS SSO. Either you can pass attributes directly in the SAML assertion using the prefix https://aws.amazon.com/SAML/Attributes/AccessControl, or you can use attributes that are in the AWS SSO identity store. Those attributes are configured by your AWS SSO administrator for users created in AWS SSO, synchronized in from an Active Directory, or synchronized in from an external identity provider using automatic provisioning (SCIM).
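
For example, an identity provider passing a Department attribute directly in the SAML assertion might include an element like the following. This is a hedged sketch: the attribute value and the exact namespace prefixes depend on your identity provider.

<saml:Attribute Name="https://aws.amazon.com/SAML/Attributes/AccessControl:Department">
    <saml:AttributeValue>Engineering</saml:AttributeValue>
</saml:Attribute>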

For this demo, I choose to use an external identity provider and SCIM.

I can enable ABAC in AWS using AWS SSO with three steps:

Step 1: I configure my identity source with the associated user identities and attributes in the external identity provider. As of today, AWS SSO supports identity synchronization via SCIM with Azure AD, Okta, OneLogin, and PingFederate. Check this page to get an up-to-date list. The specifics depend on each identity provider.

Step 2: I configure the SCIM attributes I want to use for access control using the new Access Control Attributes global setting in the AWS SSO console or API. This screen allows me to select attributes for access control from the identity source I configured in step 1.

Attributes for Access Control

Step 3: I author ABAC rules through permission sets and resource-based policies using the attributes I configured in Step 2. More about this in a minute.

Now, when my workforce federates into an AWS account using SSO, they get access to their AWS resources based on matching attributes.

Attributes are passed as session tags, expressed as comma-separated key:value pairs (for example, Department:blue,CostCenter:1234). The total character length of all the attributes together must be less than or equal to 460 characters.

What Does a Policy Look Like?
I can now use user attributes in my permission sets by using the aws:PrincipalTag condition key when creating access control rules. For example, I can tag all the resources in my organization with their respective department name, and use a single permission set that grants developers access only to their department’s resources. Whenever developers federate into the AWS account, AWS SSO creates a department session tag with the value received from the identity provider, and the security policies allow them to access only the resources in their respective department. As the organization adds new developers and resources to departments, I only have to tag resources with the correct department name; developers can then manage resources aligned to their department without needing any permission updates.

An ABAC SSO permission set policy might look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [ "ec2:DescribeInstances"],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances","ec2:StopInstances"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/Department": "${aws:PrincipalTag/Department}"
                }
            }
        }
    ]
}

This policy allows anybody to call DescribeInstances, but only users whose aws:PrincipalTag/Department tag value matches the EC2 instance’s ec2:ResourceTag/Department tag value are authorized to stop or start instances.

I attach this policy to an AWS account’s permission set. On the left part of the AWS Single Sign-On console, I click AWS Accounts and select the Permission sets tab. Then I click Create permission set. On the next screen, I select Create a custom permission set.

Create a custom permission set

I enter a name and description, and I make sure Create a custom permissions policy is selected. Then I copy and paste the previous policy, which allows starting and stopping EC2 instances when the instance’s department name tag value is equal to the person’s department name tag value.

Create Custom Policy for Permission Set

On the next screen, I enter some tags, then I review my configuration before clicking Create. Et voila, I am ready to go.

If you have existing federation configured with AWS Security Token Service, remember that external identity providers consider AWS SSO as a new application configuration. This means that when you move from direct IAM federation to AWS SSO, you have to update your external identity provider configuration to connect with AWS SSO and to introduce attributes as session tags for this configuration.

Available Today
There is no additional charge to configure user attributes with AWS Single Sign-On. You can start using it today in all AWS Regions where AWS SSO is available.

— seb

Aligning IAM policies to user personas for AWS Security Hub

Post Syndicated from Vaibhawa Kumar original https://aws.amazon.com/blogs/security/aligning-iam-policies-to-user-personas-for-aws-security-hub/

AWS Security Hub provides you with a comprehensive view of your security posture across your accounts in Amazon Web Services (AWS) and gives you the ability to take action on your high-priority security alerts. There are several different user personas that use Security Hub, and they typically require different AWS Identity and Access Management (IAM) permissions. Those personas include: a security administrator; a security analyst or engineer on a central security team or in a Cloud Center of Excellence; and a Developer Operations (DevOps) engineer or application builder who is the primary owner of an AWS account. In this post, we show how to deploy sample IAM policies for these three personas.

The first persona, a security administrator or cloud system administrator (sysadmin), is responsible for setting up and configuring Security Hub, and they typically need access to all of the Security Hub APIs to do this work. As part of this work, the sysadmin enables Security Hub on various accounts and Regions, decides which standards and controls should be enabled on which accounts, enables product integrations, creates insights, sets up custom actions and automated remediations, and configures IAM policies for other users.

The second persona, a security analyst or engineer using Security Hub, is part of a central security team, often part of a Cloud Center of Excellence. We often see that cloud security efforts are centralized in Cloud Centers of Excellence within a company. These security analysts or engineers typically have access to the master account in Security Hub and can view and take action on findings from any of the connected member accounts. They typically are not configuring Security Hub, so they don’t need permissions to do so.

The third persona is a DevOps engineer or application builder. This user needs the ability to view findings and take action only on the findings associated with their account. For cloud workloads, security is often decentralized down to these users. Security Hub enables them to take more proactive responsibility for the security of their own account by directly viewing and taking action on findings in their account. They typically don’t need permissions to set up and configure Security Hub, because that is done by a central sysadmin.

Overview

The following reference architecture presents an overview of a Security Hub master-member account structure and three personas: a security administrator, a security analyst/engineer, and a DevOps engineer.
 

Figure 1: Reference architecture

In this blog post, we show how you can create and use the following AWS managed and customer managed IAM policies to support these three personas:

  • The sysadmin persona needs permissions to configure and manage Security Hub, account memberships, insights, and integrations; to create remediations and take actions; and to perform record and workflow updates. The AWS managed IAM policy called AWSSecurityHubFullAccess provides the permissions for this persona. An IAM user or role with these permissions can deploy and configure Security Hub in master and member accounts. They can also update findings. The sysadmin also requires permissions to configure AWS Config and Amazon CloudWatch event rules to set up automated responses and remediations.
  • The security analyst persona needs permissions to read, list, and describe findings, standards, controls, and products; to update findings; and to create and update insights for Security Hub resources in the master account. The AWS managed IAM policy called AWSSecurityHubReadOnlyAccess provides the permissions needed for read, list, and describe actions, and a customer managed policy will be attached to give permissions to create and update insights and to update findings.
  • The DevOps engineer persona needs the same permissions as the security analyst persona, but they will only have the ability to access their own Security Hub member AWS account(s) and won’t have access to the master account.

Depending on your specific use case, you might want to provide additional permissions to the security analyst and DevOps engineer personas. For example, you might want to also grant them permissions to create custom actions by using the UpdateActionTarget API. In that case, you should also ensure that they have appropriate permissions to create CloudWatch event rules. You can also restrict these personas to only be able to update certain fields in findings (for example, only update workflow status but not severity) by using IAM context keys.

Prerequisites

You must have already enabled Security Hub with one account as master and other associated accounts as members. You will use the following AWS services: AWS Identity and Access Management (IAM), AWS Security Hub, AWS Config, and Amazon CloudWatch Events.

Implementation

To create the required customer managed policies and associate them to users and roles, you will perform these tasks, described in more detail later in this section:

  1. Create a customer managed policy and associate it with the user and role for the security analyst persona in the Security Hub master account, along with the AWSSecurityHubReadOnlyAccess AWS managed policy.
  2. Create a customer managed policy and associate it with the user and role for the DevOps persona in the Security Hub member account, along with the AWSSecurityHubReadOnlyAccess AWS managed policy.
  3. Create a sysadmin user and role and associate it with the AWS managed policy for AWSSecurityHubFullAccess, along with AWS Config and CloudWatch event rule permissions in the Security Hub master account.

The following JSON policy documents define those two customer managed policies.

Security Hub – Security Analyst policy (master account customer managed policy):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SecurityAnalystMasterCMP",
            "Effect": "Allow",
            "Action": [
                "securityhub:UpdateInsight",
                "securityhub:CreateInsight",
                "securityhub:BatchUpdateFindings"
            ],
            "Resource": "*"
        }
    ]
}

Security Hub – DevOps Engineer policy (member account customer managed policy):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DevOpsMemberCMP",
            "Effect": "Allow",
            "Action": [
                "securityhub:UpdateInsight",
                "securityhub:CreateInsight",
                "securityhub:BatchUpdateFindings"
            ],
            "Resource": "*"
        }
    ]
}
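
If you prefer to script the setup instead of using the console steps that follow, a minimal sketch using the AWS SDK for JavaScript might look like the following. The policy name, user name, and file layout are illustrative assumptions, not part of the original walkthrough.

const AWS = require('aws-sdk');
const iam = new AWS.IAM();

// Create the customer managed policy from the JSON shown above, then attach
// it, together with the AWS managed read-only policy, to an IAM user.
async function setUpSecurityAnalyst(policyDocument) {
    const { Policy } = await iam.createPolicy({
        PolicyName: 'SecurityAnalystMasterCMP',   // illustrative name
        PolicyDocument: JSON.stringify(policyDocument)
    }).promise();

    await iam.attachUserPolicy({
        UserName: 'security-analyst',             // illustrative user
        PolicyArn: Policy.Arn
    }).promise();

    await iam.attachUserPolicy({
        UserName: 'security-analyst',
        PolicyArn: 'arn:aws:iam::aws:policy/AWSSecurityHubReadOnlyAccess'
    }).promise();
}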

Step 1: Create the user and role for the security analyst persona

First, create a customer managed policy and associate it with the user and role for the security analyst persona in the Security Hub master account, along with the AWSSecurityHubReadOnlyAccess AWS managed policy.

To create the IAM policy, user, and role (console method)

  1. Sign in to the AWS Management Console in the Security Hub master account and open the IAM console.
  2. In the IAM console navigation pane, choose Policies, and then choose Create policy.
  3. Choose the JSON tab.
  4. Copy the Security Hub – Security Analyst policy JSON shown earlier in this section, and paste it into the JSON text box. When you are finished, choose Review policy.
  5. On the Review policy page, enter a name and a description (optional) for the policy that you’re creating. Review the policy summary to see the permissions that are granted by your policy. Then choose Create policy to save your work.
  6. Follow the instructions in the AWS Identity and Access Management User Guide to create a user and role, and attach the policy you just created.
  7. In the IAM console navigation pane, choose the user or role (or both) that you just created, and on the Permissions tab, choose Add Permissions.
  8. Choose the permissions category Attach existing policies directly to filter and select the managed policy AWSSecurityHubReadOnlyAccess. Review the permissions and then choose Add permissions.
  9. Save all the changes, and try using the user and role after a few minutes.

Step 2: Create the user and role for the DevOps persona

Next, create a customer managed policy and associate it with the user and role for the DevOps persona in the Security Hub member account, along with the AWSSecurityHubReadOnlyAccess AWS managed policy.

To create the IAM policy, user, and role (console method)

  1. Sign in to the AWS Management Console in the Security Hub member account and open the IAM console.
  2. In the IAM console navigation pane, choose Policies, and then choose Create policy.
  3. Choose the JSON tab.
  4. Copy the Security Hub – DevOps Engineer policy JSON shown earlier in this section, and paste it into the JSON text box. When you are finished, choose Review policy.
  5. On the Review policy page, enter a name and a description (optional) for the policy that you are creating. Review the policy summary to see the permissions that are granted by your policy. Then choose Create policy to save your work.
  6. Follow the instructions in the AWS Identity and Access Management User Guide to create a user and role, and attach the policy you just created.
  7. In the IAM console navigation pane, choose the user or role (or both) that you just created, and on the Permissions tab, choose Add Permissions.
  8. Choose the permissions category Attach existing policies directly to filter and select the managed policy AWSSecurityHubReadOnlyAccess. Review the permissions and then choose Add permissions.
  9. Save all the changes, and try using the user and role after a few minutes.

Step 3: Create the user and role for the sysadmin persona

Next, create a user and role for the sysadmin persona and associate it with the AWS managed policies AWSSecurityHubFullAccess, CloudWatchEventsFullAccess, and full access to AWS Config in the Security Hub master account.

To create the IAM user and role (console method)

  1. Sign in to the AWS Management Console in the Security Hub master account and open the IAM console.
  2. Follow the instructions in the AWS Identity and Access Management User Guide to create a user and role.
  3. In the IAM console navigation pane, choose the user or role (or both) that you just created, and on the Permissions tab, choose Add Permissions.
  4. Choose the permissions category Attach existing policies directly to filter and select managed policies, and attach the managed policies AWSSecurityHubFullAccess and CloudWatchEventsFullAccess.
  5. Create another policy to grant full access to AWS Config as described in the AWS Config Developer Guide, and attach the policy to the user or role. Review the permissions, and choose Add permissions.
  6. Save all the changes, and try using the user and role after a few minutes.

Test the users and roles in the Security Hub master and member accounts

Finally, test the three users and roles that you created for the respective personas in the preceding steps.

To test the users and roles

  1. Security analyst user and role:
    1. Sign in to the AWS Management Console in the master account and open Security Hub.
    2. Make sure that Security Hub UI features (such as the Summary, Security Standard, Insights, Findings, and Integrations) are rendered so that the Security Analyst can view them.
    3. Navigate to a finding and change the workflow status as described in the topic Setting the workflow status for findings.
  2. DevOps engineer user and role:
    1. Sign in to the AWS Management Console in the member account and open Security Hub.
    2. Make sure that Security Hub UI features (such as the Summary, Security Standard, Insights, Findings, and Integrations) are rendered so that the DevOps Engineer can view them.
    3. Create a custom insight for the member account and other attribute groupings.
  3. Sysadmin user and role:
    1. Sign in to the AWS Management Console in the master or member account, and open Security Hub.
    2. Try admin-related operations, such as creating a custom action, inviting another account, and so on.

Adding policy conditions

You might want to further restrict the permissions for the security analyst and DevOps personas. IAM policies for Security Hub’s BatchUpdateFindings API enable you to specify conditions to prevent a user from making any update to a specific finding field. The following example disallows setting the Workflow Status field to Suppressed.

{
    "Sid": "CMPCondition",
    "Effect": "Deny",
    "Action": "securityhub:BatchUpdateFindings",
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "securityhub:ASFFSyntaxPath/Workflow.Status": "SUPPRESSED"
        }
    }
}

Summary

In this post, we showed you how to align AWS managed and customer managed IAM policies to different user personas, so that you can allow different users to access Security Hub with least privilege permissions. Security Hub enables both central security teams and individual DevOps engineers to understand and improve the security posture of the AWS accounts in their organization.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Security Hub forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Vaibhawa Kumar

Vaibhawa is a Cloud Infra Architect with AWS Professional Services. He helps customers on their cloud journey of critical workloads to the AWS cloud with infrastructure security and automations. In his free time, you can find him spending time with family, sports, and cooking.

Author

Sanjay Patel

Sanjay is a Senior Cloud Application Architect with AWS Professional Services. He has a diverse background in software design, enterprise architecture, and API integrations. He has helped AWS customers automate infrastructure security. He enjoys working with AWS customers to identify and implement the best fit solution.

Author

Ely Kahn

Ely is the Principal Product Manager for AWS Security Hub. Before his time at AWS, Ely was a co-founder for Sqrrl, a security analytics startup that AWS acquired and is now Amazon Detective. Earlier, Ely served in a variety of positions in the federal government, including Director of Cybersecurity at the National Security Council in the White House.

How to implement password-less authentication with Amazon Cognito and WebAuthn

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/how-to-implement-password-less-authentication-with-amazon-cognito-and-webauthn/

In this blog post, I show you how to offer a password-less authentication experience to your customers. To do this, you’ll allow physical security keys or platform authenticators (like fingerprint scanners) to be used as the authentication factor for your web or mobile applications that use Amazon Cognito user pools for authentication.

An Amazon Cognito user pool is a user directory that Amazon Web Services (AWS) customers use to manage their customer identities. Web Authentication (WebAuthn) is a W3C standard that lets users authenticate to web applications using public-key cryptography. Using public-key cryptography enables you to implement a stronger authentication mechanism that’s less dependent on passwords.

Mobile and web applications can use WebAuthn together with browser and device support for the Client-To-Authenticator-Protocol (CTAP) to implement Fast ID Online (FIDO) authentication. To learn more about the flow of WebAuthn and CTAP, visit the FIDO Alliance.

How it works

Amazon Cognito user pools allow you to build a custom authentication flow that uses AWS Lambda functions to authenticate users based on one or more challenge/response cycles. You can use this flow to implement password-less authentication that is based on custom challenges. To use this flow to implement FIDO authentication, you need to create credentials during the registration phase and reference these credentials in the user’s profile. You can then validate these credentials during the authentication phase in a custom challenge.

During registration, a new set of credentials bound to your application (the relying party) is created through a FIDO authenticator, for example, a platform authenticator with a biometric sensor or a roaming authenticator like a physical security key. The private key of this credential set remains on the authenticator; the public key, together with a credential identifier, is saved in a custom attribute that’s part of the user profile in Amazon Cognito.

During authentication, a Cognito custom authentication flow is used to implement authentication through a custom challenge. The application prompts the user to sign in using the authenticator that they used during registration. The response from the authenticator is then passed as a challenge response to Amazon Cognito and verified using the stored public key.

In this password-less flow, the private key never leaves the physical device, and the authenticator also validates that the relying party in the authentication request matches the relying party that was used to create the credentials. This combination provides a more secure authentication flow that uses stronger credentials, protects users from phishing, and provides a better user experience.

About the demo project

This blog post and the diagrams below explain a scenario that uses FIDO as the only authentication factor, to implement password-less authentication. To help with implementation details, I created a project to demonstrate WebAuthn integration with Amazon Cognito that provides sample code for three scenarios:

  • A scenario that uses FIDO as the only factor (password-less)
  • A scenario that uses FIDO as a second factor (with password)
  • A scenario that lets users sign-in with only a password

This project is only a demonstration and shouldn’t be used as-is in production environments. When using FIDO as an authentication factor, it is a best practice to allow users to register multiple authenticators, and you need to implement an account recovery workflow in case of a lost authenticator.

Implementing FIDO Authentication

Let’s take a deeper dive into the design and components involved in implementing this solution. To deploy this project in your development environment, follow the instructions in the WebAuthn integration with Amazon Cognito project.

Creating and configuring user pool

The first step is to create a Cognito user pool and triggers that orchestrate a custom authentication flow. You do that by deploying the CloudFormation stack that will create all resources as explained in the demo project.

A few implementation details to note about the user pool:

  • The template creates a user pool, app client, triggers, and Lambda functions to use for custom authentication.
  • The template creates a custom attribute called publicKeyCred. This is the custom attribute that holds a base64 encoded representation of the credential identifier and public key for the user’s authenticator.
  • The app client defines what authentication flows are allowed. You can limit allowed flows according to your use-case. To support FIDO authentication, you must allow CUSTOM_AUTH flow.
  • The app client has “write” permissions to the custom attribute publicKeyCred but not “read” permissions. This allows your application to write the attribute during registration or profile updates but excludes it from the user’s id_token. Since this attribute is considered back-end data that is only used during authentication, it doesn’t need to be part of the user profile in the id_token.

User registration flow

The registration flow needs to create credentials using the authenticator and store the public key in the user’s profile. Let’s take a closer look at the sequence of calls and the components involved in implementing this flow.
 

Figure 1: WebAuthn user registration process

  1. The user navigates to your application, www.example.com (the relying party), and creates an account. A request is sent to the relying party to build a credentials options object and send it back to the browser. (In the demo project, this starts in the createCredentials function in webauthn-client.js, which creates the credentials options object by making a call to createCredRequest in authn.js.)
  2. The browser uses built-in WebAuthn APIs to create the new credentials with an available authenticator, using the credentials options object that was created in the first step. This is done by making a call to the navigator.credentials.create API, which is available in browsers and platforms that support FIDO and WebAuthn (a sketch of this call appears after this list).
  3. The user experience in this step depends on the OS, browser, and the authenticator. For example, the browser could prompt the user to attach a security key or, on devices that support it, to use a biometric scanner.
     
    Figure 2: User registration and browser alert to use an authenticator in Firefox

  4. The user interacts with an authenticator (by touching a security key or scanning a finger on a Touch ID device), which generates new credentials bound to the relying party and returns a response object to the browser.
  5. The browser sends a credential response object to the relying party to parse and validate the response on the server side. The credential identifier and public key are extracted from the credential response. At this step, your application can also check additional authenticator data and use it to make authentication decisions. For example, your application can check whether the authenticator verified the user’s identity through a PIN or biometrics (the UV flag) or only verified user presence (the UP flag). In the demo project, this is still part of the createCredentials function; server-side parsing and validation are done in parseCredResponse, which is implemented in authn.js.
  6. To complete the user registration, the browser passes the profile attributes that were collected during registration to Amazon Cognito APIs as custom attributes. This step is performed in the signUp function in webauthn-client.js.

At the conclusion of this process, a new user is created in Amazon Cognito, and the custom attribute publicKeyCred is populated with a base64 encoded string that includes a credential identifier and the public key generated by the authenticator. This attribute is not considered secret or sensitive data; rather, it includes the public key that is used to verify the authenticator response during subsequent authentications.
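
To make step 2 concrete, a browser-side call to create the credentials might look like the following minimal sketch. The option values (relying party name, user details, and algorithm) are illustrative assumptions; in the demo project they come from the server-built credentials options object.

// Minimal sketch of creating WebAuthn credentials in the browser (step 2).
// All option values shown here are placeholders.
async function createCredentialsSketch(options) {
    const credential = await navigator.credentials.create({
        publicKey: {
            challenge: options.challenge,              // random bytes from the server
            rp: { name: 'example.com' },               // the relying party
            user: {
                id: options.userId,                    // Uint8Array identifying the user
                name: options.username,
                displayName: options.username
            },
            pubKeyCredParams: [{ type: 'public-key', alg: -7 }]  // ES256
        }
    });
    // The credential response is sent back to the relying party for
    // server-side parsing and validation (steps 5 and 6).
    return credential;
}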

User authentication flow

The following diagram describes the custom authentication flow to implement password-less authentication.
 

Figure 3: WebAuthn user authentication process

  1. The user provides their user name and selects the sign-in button; a script (running in the browser) starts the sign-in process using the Amazon Cognito InitiateAuth API, passing the user name and indicating that the authentication flow is CUSTOM_AUTH. In the demo project, this part is performed in the signIn function in webauthn-client.js.
  2. The Amazon Cognito service passes control to the Define Auth Challenge Lambda trigger. The trigger then determines that this is the first step in the authentication and returns CUSTOM_CHALLENGE as the next challenge to the user (a sketch of this trigger appears after this list).
  3. Control then moves to the Create Auth Challenge Lambda trigger to create the custom challenge. This trigger creates a random challenge (a 64-byte random string), extracts the credential identifier from the user profile (the value passed initially during the sign-up process), combines them, and returns them as a custom challenge to the client. This is performed in the CreateAuthChallenge Lambda function.
  4. The browser then prompts the user to activate an authenticator. At this stage, the authenticator verifies that credentials exist for the identifier and that the relying party matches the one that is bound to the credentials. This is implemented by making a call to the navigator.credentials.get API, which is available in browsers and devices that support FIDO2 and WebAuthn.
     
    Figure 4: Authentication and browser prompt to use a registered authenticator

  5. If credentials exist and the relying party is verified, the authenticator requests user attention or verification. Depending on the type of authenticator, user verification through biometrics or a PIN code is performed, and the credentials response is passed back to the browser.
     
    Figure 5: Authentication examples from different browsers and platforms

  6. The signIn function continues the sign-in process by calling the respondToAuthChallenge API and sending the credentials response to Amazon Cognito.
  7. Amazon Cognito sends the response to the Verify Auth Challenge Lambda trigger. This trigger extracts the public key from the user profile, parses and validates the credentials response, and, if the signature is valid, responds with success. This is performed in the VerifyAuthChallenge Lambda trigger.
  8. Lastly, Amazon Cognito sends control again to Define Auth Challenge to determine the next step. If the results from Verify Auth Challenge indicate a successful response, authentication succeeds and Amazon Cognito responds with ID, access, and refresh tokens.
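
For illustration, the Define Auth Challenge trigger described in steps 2 and 8 might look like the following minimal sketch for a flow with a single custom challenge; the demo project’s actual trigger may differ.

// Minimal sketch of a Define Auth Challenge Lambda trigger (steps 2 and 8).
exports.handler = async (event) => {
    const session = event.request.session;
    if (session.length === 0) {
        // First invocation: ask for the custom (WebAuthn) challenge.
        event.response.issueTokens = false;
        event.response.failAuthentication = false;
        event.response.challengeName = 'CUSTOM_CHALLENGE';
    } else if (session.length === 1 && session[0].challengeResult === true) {
        // Verify Auth Challenge reported success: issue tokens.
        event.response.issueTokens = true;
        event.response.failAuthentication = false;
    } else {
        // Anything else fails the authentication.
        event.response.issueTokens = false;
        event.response.failAuthentication = true;
    }
    return event;
};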

Conclusion

When building customer-facing applications, you as the application owner and developer need to balance security with usability. Reducing the risk of account take-over and phishing relies on using strong credentials, strong second factors, and minimizing the role of passwords. The flexibility of the Amazon Cognito custom authentication flow, integrated with WebAuthn, offers a technical path to make this possible, in addition to offering a better user experience to your customers.

Check out the WebAuthn with Amazon Cognito project for code samples and deployment steps. Deploy it in your development environment to see this integration in action, and go build an awesome password-less experience in your application.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mahmoud Matouk

Mahmoud is a Senior Solutions Architect with the Amazon Cognito team. He helps AWS customers build secure and innovative solutions for various identity and access management scenarios.

How to configure Duo multi-factor authentication with Amazon Cognito

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/how-to-configure-duo-multi-factor-authentication-with-amazon-cognito/

Adding multi-factor authentication (MFA) reduces the risk of user account take-over, phishing attacks, and password theft. Adding MFA while providing a frictionless sign-in experience requires you to offer a variety of MFA options that support a wide range of users and devices. Let’s see how you can achieve that with Amazon Cognito and Duo Multi-Factor Authentication (MFA).

Amazon Cognito user pools are user directories that are used by Amazon Web Services (AWS) customers to manage the identities of their customers and to add sign-in, sign-up and user management features to their customer-facing web and mobile applications. Duo Security is an APN Partner that provides unified access security and multi-factor authentication solutions.

In this blog post, I show you how to use Amazon Cognito custom authentication flow to integrate Duo Multi-Factor Authentication (MFA) into your sign-in flow and offer a wide range of MFA options to your customers. Some second factors available through Duo MFA are mobile phone SMS passcodes, approval of login via phone call, push-notification-based approval on smartphones, biometrics on devices that support it, and security keys that can be attached via USB.

How it works

Amazon Cognito user pools enable you to build a custom authentication flow that authenticates users based on one or more challenge/response cycles. You can use this flow to integrate Duo MFA into your authentication as a custom challenge.

Duo Web offers a software development kit to make it easier for you to integrate your web applications with Duo MFA. You need an account with Duo and an application to protect (which can be created from the Duo admin dashboard). When you create your application in the Duo admin dashboard, note the integration key (ikey), secret key (skey), and API hostname. These details, together with a random string (akey) that you generate, are the primary factors used to integrate your Amazon Cognito user pool with Duo MFA.

Note: ikey, skey, and akey are referred to as Duo keys.

Duo MFA will be integrated into the sign-in flow as a custom challenge. To do that, you need to generate a signed challenge request using Duo APIs and use it to load Duo MFA in an iframe and request the user’s second factor. When the challenge is answered by the user, a signed response is returned to your application and sent to Amazon Cognito for verification. If the response is valid then the MFA challenge is successful.

Let’s take a closer look at the sequence of calls and components involved in this flow.

Implementation details

In this section, I walk you through the end-to-end flow of integrating Duo MFA with Amazon Cognito using a custom authentication flow. To help you with this integration, I built a demo project that provides deployment steps and sample code to create a working demo in your environment.

Create and configure a user pool

The first step is to create the AWS resources needed for the demo. You can do that by deploying the AWS CloudFormation stack as described in the demo project.

A few implementation details to be aware of:

  • The template creates an Amazon Cognito user pool, application client, and AWS Lambda triggers that are used for the custom authentication.
  • The template also accepts ikey, skey, and akey as inputs. For security, the parameters are masked in the AWS CloudFormation console. These parameters are stored in a secret in AWS Secrets Manager with a resource policy that allows relevant Lambda functions read access to that secret.
  • Duo keys are loaded from Secrets Manager when the create auth challenge and verify auth challenge Lambda triggers initialize, and are used to create the signed request and verify the signed response.

Authentication flow

Figure 1: User authentication process for the custom authentication flow

The preceding sequence diagram (Figure 1) illustrates the sequence of calls to sign in a user, which are as follows:

  1. In your application, the user is presented with a sign-in UI that captures their user name and password and starts the sign-in flow. A script—running in the browser—starts the sign-in process using the Amazon Cognito authenticateUser API with CUSTOM_AUTH set as the authentication flow. This validates the user’s credentials using Secure Remote Password (SRP) protocol and moves on to the second challenge if the credentials are valid.

    Note: The authenticateUser API automatically starts the authentication process with SRP. The first challenge that’s sent to Amazon Cognito is SRP_A. This is followed by PASSWORD_VERIFIER to verify the user’s credentials.

  2. After the SRP challenge step, the define auth challenge Lambda trigger will return CUSTOM_CHALLENGE and this will move control to the create auth challenge trigger.
  3. The create auth challenge Lambda trigger creates a Duo signed request using the Duo keys plus the username and returns the signed request as a challenge to the client. Here is a sample code of what create auth challenge should look like:
    
    exports.handler = async (event) => {
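        // Note: this sample assumes that the Secrets Manager client (secretsManagerClient),
        // the secret name (secretName), the duo_web library, and the ikey/skey/akey
        // globals are declared and initialized outside the handler.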
    
        //load duo keys from secrets manager and store them in global variables
    
        if(ikey == null || skey == null || akey == null){ 
          const promise = new Promise(function(resolve, reject) {
              secretsManagerClient.getSecretValue({SecretId: secretName}, function(err, data) {
                    if (err) { reject(err); return; }
                    else {
                        if ('SecretString' in data) {
                            secret = JSON.parse(data.SecretString);
                            ikey = secret['duo-ikey'];
                            skey = secret['duo-skey'];
                            akey = secret['duo-akey'];
                        }
                    }
                    resolve();
                });
            })
            
            await promise; 
        }
    
        
        var username = event.userName;
        var sig_request = duo_web.sign_request(ikey, skey, akey, username);
        
        event.response.publicChallengeParameters = {
            sig_request: sig_request
        };
        
        return event;
    };
    

  4. The client initializes the Duo Web library with the signed request and displays Duo MFA in an iframe to request a second factor from the user. To initialize the Duo library, you need the api_hostname that is generated for your application in the Duo dashboard, the sign-request that was received as a challenge, and a callback function to invoke after the MFA step is completed by the user. This is done on the client side as follows:
          //render Duo MFA iframe
      $("#duo-mfa").html('<iframe id="duo_iframe" title="Two-Factor Authentication"></iframe>');
            
          Duo.init({
            'host': api_hostname,
            'sig_request': challengeParameters.sig_request,
            'submit_callback': mfa_callback
          });
    

  5. Through the Duo iframe, the user can set up their MFA preferences and respond to an MFA challenge. After successful MFA setup, a signed response from the Duo Web library will be returned to the client and passed to the callback function that was provided in Duo.init call.
     
    Figure 2: The first time a user signs in, Duo MFA displays a Start setup screen

  6. The client sends the Duo signed response to the Amazon Cognito service as a challenge response.
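    A minimal sketch of that callback, reusing the user object from the step 1 sketch and assuming Duo posts the signed response in the form's sig_response field, might be:
    <JavaScript>
          // Invoked by the Duo Web library after the user completes MFA
          function mfa_callback(duo_form) {
              const sig_response = duo_form.sig_response.value;
              user.sendCustomChallengeAnswer(sig_response, {
                  onSuccess: (session) => { /* authentication complete; tokens issued */ },
                  onFailure: (err) => console.error(err)
              });
          }
    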
  7. Amazon Cognito sends the response to the verify auth challenge Lambda trigger, which uses the Duo keys and username to verify the response.
    <JavaScript>
    const duo_web = require('duo_web');
    exports.handler = async (event) => {
    
        // Load the Duo keys from Secrets Manager into ikey, skey, and akey,
        // using the same logic as in the create auth challenge trigger
    
        const username = event.userName;
    
        // Verify the Duo signed response; verify_response returns the username on success
        const sig_response = event.request.challengeAnswer;
        const verificationResult = duo_web.verify_response(ikey, skey, akey, sig_response);
    
        event.response.answerCorrect = (verificationResult === username);
        return event;
    };
    
    

  8. Validation results and current state are passed once again to the define auth challenge Lambda trigger. If the user response is valid, then the Duo MFA challenge is successful. You can then decide to introduce additional challenges to the user or issue tokens and complete the authentication process.
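
The define auth challenge Lambda trigger that orchestrates this sequence isn't shown above; a minimal sketch, assuming the standard SRP-then-custom-challenge order described in these steps, could look like the following:

<JavaScript>

exports.handler = async (event) => {
    const session = event.request.session;
    if (session.length === 1 && session[0].challengeName === 'SRP_A') {
        // Start by verifying the user's password with SRP
        event.response.issueTokens = false;
        event.response.failAuthentication = false;
        event.response.challengeName = 'PASSWORD_VERIFIER';
    } else if (session.length === 2 &&
               session[1].challengeName === 'PASSWORD_VERIFIER' &&
               session[1].challengeResult === true) {
        // Credentials are valid; present the Duo MFA custom challenge
        event.response.issueTokens = false;
        event.response.failAuthentication = false;
        event.response.challengeName = 'CUSTOM_CHALLENGE';
    } else if (session.length === 3 &&
               session[2].challengeName === 'CUSTOM_CHALLENGE' &&
               session[2].challengeResult === true) {
        // Duo MFA succeeded; issue tokens and complete authentication
        event.response.issueTokens = true;
        event.response.failAuthentication = false;
    } else {
        // Anything else fails the sign-in attempt
        event.response.issueTokens = false;
        event.response.failAuthentication = true;
    }
    return event;
};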

Conclusion

As you build your mobile or web application, keep in mind that using multi-factor authentication is an effective and recommended approach to protect your customers from account takeover, phishing, and the risks of weak or compromised passwords. Making multi-factor authentication easy for your customers enables you to offer an authentication experience that protects their accounts but doesn't slow them down.

Visit the security pillar of the AWS Well-Architected Framework to learn more about AWS security best practices and recommendations.

In this blog post, I showed you how to integrate Duo MFA with an Amazon Cognito user pool. Visit the demo application and review the code samples in it to learn how to integrate this with your application.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Cognito forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mahmoud Matouk

Mahmoud is a Senior Solutions Architect with the Amazon Cognito team. He helps AWS customers build secure and innovative solutions for various identity and access management scenarios.

Cloudflare Access: now for SaaS apps, too

Post Syndicated from Sam Rhea original https://blog.cloudflare.com/cloudflare-access-for-saas/

We built Cloudflare Access™ as a tool to solve a problem we had inside of Cloudflare. We rely on a set of applications to manage and monitor our network. Some of these are popular products that we self-host, like the Atlassian suite, and others are tools we built ourselves. We deployed those applications on a private network. To reach them, you had to either connect through a secure WiFi network in a Cloudflare office, or use a VPN.

That VPN added friction to how we work. We had to dedicate part of Cloudflare’s onboarding just to teaching users how to connect. If someone received a PagerDuty alert, they had to rush to their laptop and sit and wait while the VPN connected. Team members struggled to work while mobile. New offices had to backhaul their traffic. In 2017 and early 2018, our IT team triaged hundreds of help desk tickets with titles like these:

[Screenshot: examples of VPN-related help desk ticket titles]

While our IT team wrestled with usability issues, our Security team decided that poking holes in our private network was too much of a risk to maintain. Once on the VPN, users almost always had too much access. We had limited visibility into what happened on the private network. We tried to segment the network, but that was error-prone.

Around that time, Google published its BeyondCorp paper that outlined a model of what has become known as Zero Trust Security. Instead of trusting any user on a private network, a Zero Trust perimeter evaluates every request and connection for user identity and other variables.

We decided to create our own implementation by building on top of Cloudflare. Despite BeyondCorp being a new concept, we had experience in this field. For nearly a decade, Cloudflare’s global network had been operating like a Zero Trust perimeter for applications on the Internet – we just didn’t call it that. For example, products like our WAF evaluated requests to public-facing applications. We could add identity as a new layer and use the same network to protect applications teams used internally.

We began moving our self-hosted applications to this new project. Users logged in with our SSO provider from any network or location, and the experience felt like any other SaaS app. Our Security team gained the control and visibility they needed, and our IT team became more productive. Specifically, our IT teams have seen an ~80% reduction in the time they spend servicing VPN-related tickets, which unlocked over $100K worth of help desk efficiency annually. Later in 2018, we launched this as a product that our customers could use as well.

By shifting security to Cloudflare’s network, we could also make the perimeter smarter. We could require that users login with a hard key, something that our identity provider couldn’t support. We could restrict connections to applications from specific countries. We added device posture integrations. Cloudflare Access became an aggregator of identity signals in this Zero Trust model.

As a result, our internal tools suddenly became more secure than the SaaS apps we used. We could only add rules to the applications we could place on Cloudflare’s reverse proxy. When users connected to popular SaaS tools, they did not pass through Cloudflare’s network. We lacked a consistent level of visibility and security across all of our applications. So did our customers.

Starting today, our team and yours can fix that. We’re excited to announce that you can now bring the Zero Trust security features of Cloudflare Access to your SaaS applications. You can protect any SaaS application that can integrate with a SAML identity provider with Cloudflare Access.

Even though that SaaS application is not deployed on Cloudflare, we can still add security rules to every login. You can begin using this feature today and, in the next couple of months, you’ll be able to ensure that all traffic to these SaaS applications connects through Cloudflare Gateway.

Standardizing and aggregating identity in Cloudflare’s network

Support for SaaS applications in Cloudflare Access starts with standardizing identity. Cloudflare Access aggregates different sources of identity: username, password, location, and device. Administrators build rules to determine what requirements a user must meet to reach an application. When users attempt to connect, Cloudflare enforces every rule in that checklist before the user ever reaches the app.

The primary rule in that checklist is user identity. Cloudflare Access is not an identity provider; instead, we source identity from SSO services like Okta, Ping Identity, OneLogin, or public apps like GitHub. When a user attempts to access a resource, we prompt them to log in with the configured provider. If successful, the provider shares the user's identity and other metadata with Cloudflare Access.

A username is just one part of a Zero Trust decision. We consider additional rules, like country restrictions or device posture via partners like Tanium or, soon, additional partners CrowdStrike and VMware Carbon Black. If the user meets all of those criteria, Cloudflare Access summarizes those variables into a standard proof of identity that our network trusts: a JSON Web Token (JWT).

A JWT is a secure, information-dense way to share information. Most importantly, JWTs follow a standard, so that different systems can trust one another. When users login to Cloudflare Access, we generate and sign a JWT that contains the decision and information about the user. We store that information in the user’s browser and treat that as proof of identity for the duration of their session.

Every JWT must consist of three Base64-URL strings: the header, the payload, and the signature.

  • The header defines the cryptographic algorithm used to sign the data in the JWT.
  • The payload consists of name-value pairs for at least one and typically multiple claims, encoded in JSON. For example, the payload can contain the identity of a user.
  • The signature allows the receiving party to confirm that the payload is authentic.

We store the identity data inside of the payload and include the following details:

  • User identity: typically the email address of the user retrieved from your identity provider.
  • Authentication domain: the domain that signs the token. For Access, we use “example.cloudflareaccess.com” where “example” is a subdomain you can configure.
  • amr: If available, the multifactor authentication method the login used, like a hard key or a TOTP code.
  • Country: The country where the user is connecting from.
  • Audience: The domain of the application you are attempting to reach.
  • Expiration: the time at which the token is no longer valid for use.
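
Decoded, such a payload might look like the following sketch (the claim names and values here are illustrative, not a literal Access token):

{
  "email": "user@example.com",
  "iss": "example.cloudflareaccess.com",
  "amr": ["hwk"],
  "country": "US",
  "aud": "app.example.com",
  "exp": 1600000000
}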

Some applications support JWTs natively for SSO. We can send the token to the application and the user can login. In other cases, we’ve released plugins for popular providers like Atlassian and Sentry. However, most applications lack JWT support and rely on a different standard: SAML.

Converting JWT to SAML with Cloudflare Workers

You can deploy Cloudflare’s reverse proxy to protect the applications you host, which puts Cloudflare Access in a position to add identity checks when those requests hit our edge. However, the SaaS applications you use are hosted and managed by the vendors themselves as part of the value they offer. In the same way that I cannot decide who can walk into the front door of the bakery downstairs, you can’t build rules about what requests should and shouldn’t be allowed.

When those applications support integration with your SSO provider, you do have control over the login flow. Many applications rely on a popular standard, SAML, to securely exchange identity data and user attributes between two systems. The SaaS application does not need to know the details of the identity provider’s rules.

Cloudflare Access uses that relationship to force SaaS logins through Cloudflare’s network. The application itself thinks of Cloudflare Access as the SAML identity provider. When users attempt to login, the application sends the user to login with Cloudflare Access.

That said, Cloudflare Access is not an identity provider – it’s an identity aggregator. When the user reaches Access, we will redirect them to the identity provider in the same way that we do today when users request a site that uses Cloudflare’s reverse proxy. By adding that hop through Access, though, we can layer the additional contextual rules and log the event.

We still generate a JWT for every login providing a standard proof of identity. Integrating with SaaS applications required us to convert that JWT into a SAML assertion that we can send to the SaaS application. Cloudflare Access runs in every one of Cloudflare’s data centers around the world to improve availability and avoid slowing down users. We did not want to lose those advantages for this flow. To solve that, we turned to Cloudflare Workers.

The core login flow of Cloudflare Access already runs on Cloudflare Workers. We built support for SaaS applications by using Workers to take the JWT and convert its content into SAML assertions that are sent to the SaaS application. The application thinks that Cloudflare Access is the identity provider, even though we’re just aggregating identity signals from your SSO provider and other sources into the JWT, and sending that summary to the app via SAML.
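
As a rough illustration only, and not Cloudflare's actual implementation, a Worker could read the token that Access forwards in the Cf-Access-Jwt-Assertion header, decode its payload, and template the claims into SAML attributes; signing and the full SAML Response envelope are omitted in this sketch:

addEventListener('fetch', (event) => {
  event.respondWith(handleLogin(event.request));
});

// Decode the payload segment of a JWT; signature verification is assumed
// to have happened earlier in the Access login flow
function decodeJwtPayload(jwt) {
  const payload = jwt.split('.')[1].replace(/-/g, '+').replace(/_/g, '/');
  return JSON.parse(atob(payload));
}

// Template JWT claims into SAML attribute elements
function toSamlAttributes(claims) {
  const attribute = (name, value) =>
    `<saml:Attribute Name="${name}">` +
    `<saml:AttributeValue>${value}</saml:AttributeValue>` +
    `</saml:Attribute>`;
  return attribute('email', claims.email) + attribute('country', claims.country);
}

async function handleLogin(request) {
  const jwt = request.headers.get('Cf-Access-Jwt-Assertion');
  const claims = decodeJwtPayload(jwt);
  // A production flow would wrap these attributes in a signed SAML Response
  // and POST it to the SaaS application's ACS URL
  return new Response(toSamlAttributes(claims), {
    headers: { 'content-type': 'application/xml' }
  });
}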

Integrate with Gateway for comprehensive logging (coming soon)

Cloudflare Gateway keeps your users and data safe from threats on the Internet by filtering Internet-bound connections that leave laptops and offices. Gateway gives administrators the ability to block, allow, or log every connection and request to SaaS applications.

However, users may connect from personal devices and home WiFi networks that bypass the Internet security filtering available on corporate networks. If users have their password and MFA token, they can skip those security controls entirely and sign in to SaaS applications from their own, unprotected devices at home.

To ensure traffic to your SaaS apps only connects over Gateway-protected devices, Cloudflare Access will add a new rule type that requires Gateway when users login to your SaaS applications. Once enabled, users will only be able to connect to your SaaS applications when they use Cloudflare Gateway. Gateway will log those connections and provide visibility into every action within SaaS apps and the Internet.

Every identity provider is now capable of SAML SSO

Identity providers come in two flavors and you probably use both every day. One type is purpose-built to be an identity provider, and the other accidentally became one. With this release, Cloudflare Access can convert either into a SAML-compliant SSO option.

Corporate identity providers, like Okta or Azure AD, manage your business identity. Your IT department creates and maintains the account. They can integrate it with SaaS Applications for SSO.

The second type of login option consists of SaaS providers that began as consumer applications and evolved into public identity providers. LinkedIn, GitHub, and Google required users to create accounts in their applications for networking, coding, or email.

Over the last decade, other applications began to trust those public identity provider logins. You could use your Google account to log into a news reader and your GitHub account to authenticate to DigitalOcean. Services like Google and Facebook became SSO options for everyone. However, most corporate applications only supported integration with a single SAML provider, and public identity providers did not offer SAML. To rely on SSO as a team, you still needed a corporate identity provider.

Cloudflare Access converts a user login from any identity provider into a JWT. With this release, we also generate a standard SAML assertion. Your team can now use the SAML SSO features of a corporate identity provider with public providers like LinkedIn or GitHub.

Multi-SSO meets SaaS applications

We describe Cloudflare Access as a Multi-SSO service because you can integrate multiple identity providers, and their SSO flows, into Cloudflare’s Zero Trust network. That same capability now extends to integrating multiple identity providers with a single SaaS application.

Most SaaS applications will only integrate with a single identity provider, limiting your team to a single option. We know that our customers work with partners, contractors, or acquisitions which can make it difficult to standardize around a single identity option for SaaS logins.

Cloudflare Access can connect to multiple identity providers simultaneously, including multiple instances of the same provider. When users are prompted to login, they can choose the option that their particular team uses.

We’ve taken that ability and extended it into the Access for SaaS feature. Access generates a consistent identity from any provider, which we can now extend for SSO purposes to a SaaS application. Even if the application only supports a single identity provider, you can still integrate Cloudflare Access and merge identities across multiple sources. Now, team members who use your Okta instance and contractors who use LinkedIn can both SSO into your Atlassian suite.

All of your apps in one place

Cloudflare Access released the Access App Launch as a single destination for all of your internal applications. Your team members visit a URL that is unique to your organization and the App Launch displays all of the applications they can reach. The feature requires no additional administrative configuration; Cloudflare Access reads the user’s JWT and returns only the applications they are allowed to reach.

That experience now extends to all applications in your organization. When you integrate SaaS applications with Cloudflare Access, your users will be able to discover them in the App Launch. Like the flow for internal applications, this requires no additional configuration.

How to get started

To get started, you’ll need a Cloudflare Access account and a SaaS application that supports SAML SSO. Navigate to the Cloudflare for Teams dashboard and choose the “SaaS” application option to start integrating your applications. Cloudflare Access will walk through the steps to configure the application to trust Cloudflare Access as the SSO option.

Do you have an application that needs additional configuration? Please let us know.

Protect SaaS applications with Cloudflare for Teams today

Cloudflare Access for SaaS is available to all Cloudflare for Teams customers, including organizations on the free plan. Sign up for a Cloudflare for Teams account and follow the steps in the documentation to get started.

We will begin expanding the Gateway beta program to integrate Gateway’s logging and web filtering with the Access for SaaS feature before the end of the year.

Releasing kubectl support in Access

Post Syndicated from Sam Rhea original https://blog.cloudflare.com/releasing-kubectl-support-in-access/

Starting today, you can use Cloudflare Access and Argo Tunnel to securely manage your Kubernetes cluster with the kubectl command-line tool.

We built this to address one of the edge cases that stopped all of Cloudflare, as well as some of our customers, from disabling the VPN. With this workflow, you can add SSO requirements and a zero-trust model to your Kubernetes management in under 30 minutes.

Once deployed, you can migrate to Cloudflare Access for controlling Kubernetes clusters without disrupting your current kubectl workflow, a lesson we learned the hard way from dogfooding here at Cloudflare.

What is kubectl?

A Kubernetes deployment consists of a cluster that contains nodes, which run the containers, as well as a control plane that can be used to manage those nodes. Central to that control plane is the Kubernetes API server, which interacts with components like the scheduler and manager.

kubectl is the Kubernetes command-line tool that developers can use to interact with that API server. Users run kubectl commands to perform actions like starting and stopping the nodes, or modifying other elements of the control plane.
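
For example, a user might run commands like these against the API server:

$ kubectl get nodes      # list the nodes in the cluster
$ kubectl cordon node-1  # mark a node as unschedulable before maintenance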

In most deployments, users connect to a VPN that allows them to run commands against that API server by addressing it over the same local network. In that architecture, user traffic to run these commands must be backhauled through a physical or virtual VPN appliance. More concerning, in most cases the user connecting to the API server will also be able to connect to other addresses and ports in the private network where the cluster runs.

How does Cloudflare Access apply?

Cloudflare Access can secure web applications as well as non-HTTP connections like SSH, RDP, and the commands sent over kubectl. Access deploys Cloudflare’s network in front of all of these resources. Every time a request is made to one of these destinations, Cloudflare’s network checks for identity like a bouncer in front of each door.

If the request lacks identity, we send the user to your team’s SSO provider, like Okta, AzureAD, and G Suite, where the user can login. Once they login, they are redirected to Cloudflare where we check their identity against a list of users who are allowed to connect. If the user is permitted, we let their request reach the destination.

In most cases, those granular checks on every request would slow down the experience. However, Cloudflare Access completes the entire check in just a few milliseconds. The authentication flow relies on Cloudflare’s serverless product, Workers, and runs in every one of our data centers in 200 cities around the world. With that distribution, we can improve performance for your applications while also authenticating every request.

How does it work with kubectl?

To replace your VPN with Cloudflare Access for kubectl, you need to complete two steps:

  • Connect your cluster to Cloudflare with Argo Tunnel
  • Connect from a client machine to that cluster with Argo Tunnel

Connecting the cluster to Cloudflare

On the cluster side, Cloudflare Argo Tunnel connects those resources to our network by creating a secure tunnel with the Cloudflare daemon, cloudflared. As an administrator, you can run cloudflared in any space that can connect to the Kubernetes API server over TCP.

Once installed, an administrator authenticates the instance of cloudflared by logging in to a browser with their Cloudflare account and choosing a hostname to use. Once selected, Cloudflare will issue a certificate to cloudflared that can be used to create a subdomain for the cluster.

Next, an administrator starts the tunnel. In the example below, the hostname value can be any subdomain of the hostname selected in Cloudflare; the url value should be the API server for the cluster.

cloudflared tunnel --hostname cluster.site.com --url tcp://kubernetes.docker.internal:6443 --socks5=true 

This should be run as a systemd process to ensure the tunnel reconnects if the resource restarts.
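
A minimal sketch of such a unit file, assuming cloudflared is installed at /usr/local/bin and has already been authenticated, might be:

[Unit]
Description=Argo Tunnel for the Kubernetes API server
After=network-online.target

[Service]
ExecStart=/usr/local/bin/cloudflared tunnel --hostname cluster.site.com --url tcp://kubernetes.docker.internal:6443 --socks5=true
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target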

Connecting as an end user

End users do not need an agent or client application to connect to web applications secured by Cloudflare Access. They can authenticate to on-premise applications through a browser, without a VPN, like they would for SaaS tools. When we apply that same security model to non-HTTP protocols, we need to establish that secure connection from the client with an alternative to the web browser.

Unlike our SSH flow, end users cannot modify kubeconfig to proxy requests through cloudflared. Pull requests have been submitted to add this functionality to kubeconfig, but in the meantime users can set an alias to serve a similar function.

First, users need to download the same cloudflared tool that administrators deploy on the cluster. Once downloaded, they will need to run a corresponding command to create a local SOCKS proxy. When the user runs the command, cloudflared will launch a browser window to prompt them to login with their SSO and check that they are allowed to reach this hostname.

$ cloudflared access tcp --hostname cluster.site.com --url 172.0.0.3:1234

The proxy allows your local kubectl tool to connect to cloudflared via a SOCKS5 proxy, which helps avoid issues with TLS handshakes to the cluster itself. In this model, TLS verification can still be exchanged with the Kubernetes API server without disabling or modifying that flow for end users.

Users can then create an alias to save time when connecting. The example below aliases all of the steps required to connect in a single command. This can be added to the user’s bash profile so that it persists between restarts.

$ alias kubeone="env HTTPS_PROXY=socks5://172.0.0.3:1234 kubectl"
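
With the alias in place, users run their usual commands through the Access-protected tunnel, for example:

$ kubeone get pods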

A (hard) lesson when dogfooding

When we build products at Cloudflare, we release them to our own organization first. The entire company becomes a feature’s first customer, and we ask them to submit feedback in a candid way.

Cloudflare Access began as a product we built to solve our own challenges with security and connectivity. The product impacts every user in our team, so as we’ve grown, we’ve been able to gather more expansive feedback and catch more edge cases.

The kubectl release was no different. At Cloudflare, we have a team that manages our own Kubernetes deployments and we went to them to discuss the prototype. However, they had more than just some casual feedback and notes for us.

They told us to stop.

We had started down an implementation path that was technically sound and solved the use case, but did so in a way that engineers who spend all day working with pods and containers would find to be a real irritant. The flow required a small change in presenting certificates, which did not feel cumbersome when we tested it, but we do not use it all day. That grain of sand would cause real blisters as a new requirement in the workflow.

With their input, we stopped the release, and changed that step significantly. We worked through ideas, iterated with them, and made sure the Kubernetes team at Cloudflare felt this was not just good enough, but better.

What’s next?

Support for kubectl is available in the latest release of the cloudflared tool. You can begin using it today, on any plan. More detailed instructions are available to get started.

If you try it out, please send us your feedback! We’re focused on improving the ease of use for this feature, and other non-HTTP workflows in Access, and need your input.

New to Cloudflare for Teams? You can use all of the Teams products for free through September. You can learn more about the program, and request a dedicated onboarding session, here.

Rely on employee attributes from your corporate directory to create fine-grained permissions in AWS

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/rely-employee-attributes-from-corporate-directory-create-fine-grained-permissions-aws/

In my earlier post Simplify granting access to your AWS resources by using tags on AWS IAM users and roles, I explained how to implement attribute-based access control (ABAC) in AWS to simplify permissions management at scale. In that scenario, I talked about relying on attributes on your IAM users and roles for access control in AWS. But more often, customers manage workforce user identities with an identity provider (IdP) and want to use identity attributes from their IdP for fine-grained permissions in AWS. In this post I introduce a new capability that enables you to do just that.

In AWS, you can configure your IdP to allow workforce users federated access to AWS resources using credentials from your corporate directory. Along with user credentials, your directory also stores user attributes such as cost center, department, and email address. Now you can configure your IdP to pass in user attributes as tags in federated AWS sessions. These are called session tags. You can then control access to AWS resources based on these session tags. Moreover, when user attributes change or new users are added to your directory, permissions automatically apply based on these attributes. For example, developers can federate into AWS using an IAM role, but can only access resources specific to their project. This is because you define permissions that require the project attribute from their IdP to match the project tag on AWS resources. Additionally, AWS logs these attributes in AWS CloudTrail, enabling security administrators to track the user identity for a given role session.

In this post, I introduce session tags and walk you through an example of how to use session tags for ABAC and tracking user activity.

What are session tags?

Session tags are attributes passed in the AWS session. You can use session tags for access control in IAM policies and for monitoring. These tags are not stored in AWS and are valid only for the duration of the session. You define session tags just like tags in AWS—consisting of a customer-defined key and an optional value.

How to pass session tags in the AWS session?

One of the most widely used mechanisms for requesting a session in AWS is by assuming an IAM role. For user identities stored in an external directory, you can configure your SAML IdP in IAM to allow your users federated access to AWS using IAM roles. To understand how to set up SAML federation using an IdP, read AWS Federated Authentication with Active Directory Federation Services (ADFS). If you're using IAM users, you can also request a session in AWS using the AssumeRole and GetFederationToken APIs, or using the AssumeRoleWithWebIdentity API for applications that require access to AWS resources.

For session tags, you can use all of the above-mentioned APIs to pass tags into your AWS session based on your use case. For details on how to use these APIs to pass session tags, please visit Tags in AWS Sessions.
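
For example, an identity with sts:AssumeRole and sts:TagSession permissions could pass session tags from the AWS CLI as follows (the role ARN and tag values are placeholders):

$ aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/MyProjectResources \
    --role-session-name bob-session \
    --tags Key=project,Value=Automation Key=jobfunction,Value=SystemsEngineer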

What permissions do I need to use session tags?

To perform any action in AWS, developers need permissions. For example, to assume a role, your developers need sts:AssumeRole permission. Similarly with session tags, we’re introducing a new action, sts:TagSession, that is required to pass session tags in the session. Additionally, you can require and control session tags using existing AWS conditions:

Action: sts:TagSession
Use case: Required to pass attributes as session tags when using the AssumeRole, AssumeRoleWithSAML, AssumeRoleWithWebIdentity, or GetFederationToken APIs.
Where to add: The role's trust policy, or the IAM user's permissions policy, depending on the API you are using to pass session tags.

Condition key: aws:RequestTag
Use case: Require specific tags in the session.
Actions that support the condition key: sts:TagSession

Condition key: aws:TagKeys
Use case: Control the tag keys that are allowed in the session.
Actions that support the condition key: sts:TagSession

Condition key: aws:PrincipalTag*
Use case: Use in IAM policies to compare session tags with tags on AWS resources.
Actions that support the condition key: All actions across all services (aws:PrincipalTag is an AWS global condition key).

Note: The reference above explains only the additional use cases that these keys now support. Support for existing use cases, such as IAM users and roles, remains unchanged. For details, please visit AWS Global Condition Keys.

Now, I’ll show you how to create fine-grained permissions based on user attributes from your directory and how permissions automatically apply based on attributes when employees switch projects within your organization.

Example: Grant employees access to their project resources in AWS based on their job function

Consider a scenario where your organization has deployed Amazon EC2 and Amazon RDS instances in your AWS account for your company's production web applications. Your systems engineers manage the EC2 instances and database engineers manage the RDS instances. They both access AWS by federating into your AWS account from a SAML IdP. Your organization's security policy requires employees to have access to manage only the resources related to their job function and the project they work on.

To meet these requirements, your cloud administrator, Michelle, implements attribute-based access control (ABAC) using the jobfunction and project attributes as session tags by following three steps:

  1. Michelle tags all existing EC2 and RDS instances with the corresponding project attribute.
  2. She creates a MyProjectResources IAM role and an IAM permission policy for this role such that employees can access resources with their jobfunction and project tags.
  3. She then configures your SAML IdP to pass the jobfunction and project attributes in the federated session when employees federate into AWS using the MyProjectResources role.

Let’s have a look at these steps in detail.

Step 1: Tag all the project resources

Michelle tags all the project resources with the appropriate project tag. This is important since she wants to create permission rules based on this tag to implement ABAC. To learn how to tag resources in EC2 and RDS, read tagging your Amazon EC2 resources and tagging Amazon RDS resources.
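
For example, she could apply the project tag from the AWS CLI like this (the instance ID and ARN are placeholders):

$ aws ec2 create-tags --resources i-1234567890abcdef0 \
    --tags Key=project,Value=Automation

$ aws rds add-tags-to-resource \
    --resource-name arn:aws:rds:us-east-1:999999999999:db:mydatabase \
    --tags Key=project,Value=Automation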

Step 2: Create an IAM role with permissions based on attributes

Next, Michelle creates an IAM role called MyProjectResources using the AWS Management Console or CLI. This is the role that your systems engineers and database engineers will assume when they federate into AWS to access and manage the EC2 and RDS instances respectively. To grant this role permissions, Michelle creates the following IAM policy and attaches it to the MyProjectResources role.

IAM Permissions Policy


{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Action": "rds:DescribeDBInstances",
			"Resource": "*"
		},
		{
			"Effect": "Allow",
			"Action": [
				"rds:RebootDBInstance",
				"rds:StartDBInstance",
				"rds:StopDBInstance"
			],
			"Resource": "*",
			"Condition": {
				"StringEquals": {
					"aws:PrincipalTag/jobfunction": "DatabaseEngineer",
					"rds:db-tag/project": "${aws:PrincipalTag/project}"
				}
			}
		},
		{
			"Effect": "Allow",
			"Action": "ec2:DescribeInstances",
			"Resource": "*"
		},
		{
			"Effect": "Allow",
			"Action": [
				"ec2:StartInstances",
				"ec2:StopInstances",
				"ec2:RebootInstances",
				"ec2:TerminateInstances"
			],
			"Resource": "*",
			"Condition": {
				"StringEquals": {
					"aws:PrincipalTag/jobfunction": "SystemsEngineer",
					"ec2:ResourceTag/project": "${aws:PrincipalTag/project}"
				}
			}
		}
	]
}

In the policy above, Michelle allows specific actions related to EC2 and RDS that the systems engineers and database engineers need to manage their project instances. In the condition element of the policy statements, Michelle adds a condition based on the jobfunction and project attributes to ensure engineers can access only the instances which belong to their jobfunction and have a matching project tag.

To ensure your systems engineers and database engineers can assume this role when they federate into AWS from your IdP, Michelle modifies the role’s trust policy to trust your SAML IdP as shown in the policy statement below. Since we also want to include session tags when engineers federate in, Michelle adds the new action sts:TagSession in the policy statement as shown below. She also adds a condition that requires the jobfunction and project attributes to be included as session tags when engineers assume this role.

Role Trust Policy


{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Principal": {
				"Federated": "arn:aws:iam::999999999999:saml-provider/ExampleCorpProvider"
			},
			"Action": [
				"sts:AssumeRoleWithSAML",
				"sts:TagSession"
			],
			"Condition": {
				"StringEquals": {
					"SAML:aud": "https://signin.aws.amazon.com/saml"
				},
				"StringLike": {
					"aws:RequestTag/project": "*",
					"aws:RequestTag/jobfunction": [
						"SystemsEngineer",
						"DatabaseEngineer"
					]
				}
			}
		}
	]
}

Step 3: Configuring your SAML IdP to pass the jobfunction and project attributes as session tags

Once Michelle creates the role and permissions policy in AWS, she configures her SAML IdP to include the jobfunction and project attributes as session tags in the SAML assertion when engineers federate into AWS using this role.

To pass attributes as session tags in the federated session, the SAML assertion must contain the attributes with the following prefix:

https://aws.amazon.com/SAML/Attributes/PrincipalTag

The example given below shows a part of the SAML assertion generated from my IdP with two attributes (project:Automation and jobfunction:SystemsEngineer) that we want to pass as session tags.


<Attribute Name="https://aws.amazon.com/SAML/Attributes/PrincipalTag:project">
	<AttributeValue>Automation</AttributeValue>
</Attribute>
<Attribute Name="https://aws.amazon.com/SAML/Attributes/PrincipalTag:jobfunction">
	<AttributeValue>SystemsEngineer</AttributeValue>
</Attribute>

Note: This sample only contains the new properties in the SAML assertion. There are additional required fields in the SAML assertion that must be present to successfully federate into AWS. To learn more about creating SAML assertions with session tags, visit configuring SAML assertions for the authentication response.

AWS identity partners such as Ping Identity, OneLogin, Auth0, ForgeRock, IBM, Okta, and RSA have validated the end-to-end experience for this new capability with their identity solutions, and we look forward to additional partners validating this capability. To learn more about how to use these identity providers for configuring session tags, please visit integrating third-party SAML solution providers with AWS. If you are using Active Directory Federation Services (ADFS) for SAML federation with AWS, then please visit Configuring ADFS to start using session tags for attribute-based access control.

Now, when your systems engineers and database engineers federate into AWS using the MyProjectResources role, they only get access to their project resources based on the project and jobfunction attributes passed in their federated session. Session tags enabled Michelle to define unique permissions based on user attributes without having to create and manage multiple roles and policies. This helps simplify permissions management in her company.

Permissions automatically apply when employees change projects

Consider the same example with a scenario where your systems engineer, Bob, switches from the automation project to the integration project. Due to this switch, Michelle sets Bob's project attribute in the IdP to integration. Now, the next time Bob federates into AWS, he automatically has access to resources in the integration project. Using session tags, permissions automatically apply when you update attributes or create new AWS resources with appropriate attributes, without requiring any permissions updates in AWS.

Track user identity using session tags

When developers federate into AWS with session tags, AWS CloudTrail logs these tags to make it easier for security administrators to track the user identity of the session. To view session tags in CloudTrail, your administrator Michelle looks for the AssumeRoleWithSAML event in the eventName filter of CloudTrail. In the example below, Michelle has configured the SAML IdP to pass three session tags: project, jobfunction, and userID. When developers federate into your account, Michelle views the AssumeRoleWithSAML event in CloudTrail to track the user identity of the session using the session tags project, jobfunction, and userID as shown below:
 

Figure 1: Search for the logged events

Note: You can use session tags in conjunction with the instructions to track account activity to its origin using AWS CloudTrail to trace the identity of the session.
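
In the CloudTrail event itself, the session tags appear under requestParameters. An excerpt might look like the following sketch (the values are illustrative):

"requestParameters": {
    "principalTags": {
        "project": "Automation",
        "jobfunction": "SystemsEngineer",
        "userID": "bob"
    }
}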

Summary

You can use session tags to rely on employee attributes from your corporate directory to create fine-grained permissions at scale in AWS and simplify your permissions management workflows. To learn more about session tags, please visit Tags in AWS Sessions.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon IAM forum.

Want more AWS Security news? Follow us on Twitter.

Sulay Shah

Sulay is a Senior Product Manager for Identity and Access Management service at AWS. He strongly believes in the customer first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from the North Carolina State University.

Use IAM to share your AWS resources with groups of AWS accounts in AWS Organizations

Post Syndicated from Michael Switzer original https://aws.amazon.com/blogs/security/iam-share-aws-resources-groups-aws-accounts-aws-organizations/

You can now reference Organizational Units (OUs), which are groups of AWS accounts in AWS Organizations, in AWS Identity and Access Management (IAM) policies, making it easier to define access for your IAM principals (users and roles) to the AWS resources in your organization. AWS Organizations lets you organize your accounts into OUs to align them with your business or security purposes. Now, you can use a new condition key, aws:PrincipalOrgPaths, in your policies to allow or deny access based on a principal’s membership in an OU. This makes it easier than ever to share resources between accounts you own in your AWS environments.

For example, you might have an Amazon S3 bucket you need to share with developers and applications from accounts that are members of a specific OU. To accomplish this, you can specify the aws:PrincipalOrgPaths condition and set the value to the organizational unit ID of the caller in the resource-based policy attached to the bucket. When a principal tries to access the bucket, AWS verifies that their account’s OU matches the one specified in the policy. With this condition, permissions automatically apply when you add accounts to the OU without any additional updates to the policy.

In this post, I introduce the new condition key, and show you how to use it in two examples. In the first example you will see how to use the aws:PrincipalOrgPaths condition key to grant multiple AWS accounts access to a resource, without needing to maintain a list of account IDs in your policy. In the second example, you will see how to add a guardrail to your administrative roles that prevents access to powerful actions unless coming from a protected OU in your organization.

AWS Organizations Concepts

Before I walk through the condition, let’s review some important concepts from AWS Organizations.

AWS Organizations allows you to group a set of AWS accounts into an organization that you can manage centrally. Once the accounts have joined the organization, you can group them into organizational units (OUs), allowing you to set policies that help you meet your security and compliance requirements. You can create multiple OUs within a single organization, and you can create OUs within other OUs to form hierarchical relationships between your accounts. When you create an organization, AWS Organizations creates your first account container automatically. It has a special name, called a root. All OUs you create exist inside the root.

Organizations, roots, and OUs use different formats for their identifiers. You can see the differences below:

  • Organization: ID format o-exampleorgid (example value: o-p8iu8lkook); globally unique.
  • Root: ID format r-examplerootid (example value: r-tkh7); not globally unique.
  • Organizational unit: ID format ou-examplerootid-exampleouid (example value: ou-tkh7-pbevdy6h); not globally unique.

Organization IDs are globally unique, meaning no organizations share Organization IDs. OU and Root IDs are not globally unique. This means another customer’s organization OU may have the same ID as those from your organization. OU and Root IDs are unique within an organization. Therefore, you should always include the organization identifier when specifying an OU to make sure it is unique to your organization.

Control access to resources based on OU

You use condition keys in the condition element of an IAM policy. A condition is an optional IAM policy element you can use to specify circumstances under which the policy grants or denies permission. A condition includes a condition key, operator, and value for the condition.

  • Condition key: aws:PrincipalOrgPaths
  • Description: The path to the principal's OU from AWS Organizations
  • Operators: All string operators
  • Values: Paths of AWS organization IDs and organizational unit IDs

The aws:PrincipalOrgPaths condition key is a global condition, meaning you can use it in conjunction with any AWS action. When you use it in the condition element of your IAM policy, it validates the organization, root, and OUs of the principal performing the action on the resource. For example, let’s say a principal was a member of an OU with the id ou-abcd-zzyyxxww inside a root r-abcd in the organization o-1122334455. When the principal makes a request on the resource, its aws:PrincipalOrgPaths value is:

["o-1122334455/r-abcd/ou-abcd-zzyyxxww/"]

The path includes the organization ID to ensure global uniqueness. This ensures only principals from your organization can access your AWS resources. You can use any string operator, such as StringEquals, with the condition. You can also use the wildcard characters (* and ?) when providing a path.

aws:PrincipalOrgPaths is a multi-value condition key. Multi-value keys allow you to provide multiple values in a list format. Here's a sample condition statement from a policy that uses the key to validate that a principal is from either ou-1 or ou-2:


"Condition":{
	"ForAnyValue:StringLike":{
		"aws:PrincipalOrgPaths":[
		  "o-1122334455/r-abcd/ou-1/",
		  "o-1122334455/r-abcd/ou-2/"
		]
	}
}

For all multi-value condition keys, you must provide the value as a JSON-formatted list as shown above, even if you’re only specifying one value. As shown in the example above, you also must use the ForAnyValue qualifier in your conditions to specify you’re checking membership of one OU path. For more information, see Creating a Condition That Tests Multiple Key Values in the IAM documentation.

In the next section, I’ll go over an example of how to use the new condition key to protect resources in your account from access outside of a given OU.

Example: Grant S3 bucket access to all principals in an OU in your organization

This example demonstrates how you can use the new condition key to share resources with groups of accounts. By placing the accounts into an OU and granting access based on membership, you can grant targeted access without having to list and maintain all the AWS account IDs in your permission policies.

Consider an example where I want to grant my Machine Learning team permissions to access an S3 bucket training-data that contains images that the team will use to train their machine learning models. I’ve set up my organization such that all AWS accounts owned by my Machine Learning team are part of a specific OU with the ID ou-machinelearn. For the purpose of this example, my organization ID is o-myorganization.

For this example, I want to allow users and applications from the Machine Learning OU or any OU beneath it to have permissions to read the training-data S3 bucket. Any other AWS accounts should not have the ability to view the resource.

To grant these permissions, I author an S3 bucket policy for my training-data resource as shown below.


{
	"Version":"2012-10-17",
	"Statement":{
		"Sid":"TrainingDataS3ReadOnly",
		"Effect":"Allow",
		"Principal": "*",
		"Action":"s3:GetObject",
		"Resource":"arn:aws:s3:::training-data/*",
		"Condition":{
			"ForAnyValue:StringLike":{
				"aws:PrincipalOrgPaths":["o-myorganization/*/ou-machinelearn/*"]
			}
		}
	}
}

In the policy above, I assert that principals trying to read the contents of the training-data bucket must be either a member of the OU that corresponds to the ou-machinelearn ID I provided (my Machine Learning OU Identifier), or a member of any OUs that are children of it. For the aws:PrincipalOrgPaths value, I used two asterisk (*) wildcards. I used the first asterisk (*) between my organization ID and my OU ID because OU IDs are unique within my organization. This means specifying the full path is not necessary to select the OU I need. The second asterisk (*), at the end of the path, is used to specify that I want to allow all child OUs to be included in my string comparison. If I didn’t want to include the child OUs, I could remove the wildcard character.

With this policy on the bucket, any principals in the Machine Learning OU may read objects inside the bucket if the user or role has the appropriate S3 permissions. Note that if this policy did not have the condition statement, it would be accessible by any AWS account. As a best practice, AWS recommends only granting access to the principals that need it. As for next steps, I could edit the Principal section of the policy to restrict access to specific principals in my Machine Learning accounts. For more information, see Specifying a Principal in a Policy in the S3 documentation.

Example: Restrict access to an IAM role to only accounts in an OU in my organization

The next example will show how to use aws:PrincipalOrgPaths to add another layer of security to your existing IAM role trust policies, ensuring only members of specific OUs may assume your roles.

For this example, say my company requires that only network security engineers can create or manage AWS Virtual Private Cloud (VPC) resources in my accounts. The network security team has a dedicated OU, ou-netsec, for their workloads. I have the same organization ID as the previous example, o-myorganization.

Each account in my organization has a dedicated IAM role, VPCManager, with the permissions needed to manage VPCs. I want to ensure that only my network security team, who use principals that are tagged as such, has access to the role. To do this, I edited the role trust policy for VPCManager, which defines who can access an IAM role. In this case, I added a condition to the policy to require that anyone assuming the role must come from an account in ou-netsec.

This is the trust policy I created for VPCManager:


{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Principal": {
				"AWS": [
					"123456789012",
					"345678901234",
					"567890123456"
				]
			},
			"Action": "sts:AssumeRole",
			"Condition": {
				"StringEquals": {
					"aws:PrincipalTag/JobRole": "NetworkAdmin"
				},
				"ForAnyValue:StringLike": {
					"aws:PrincipalOrgPaths": ["o-myorganization/*/ou-netsec/"]
				}
			}
		}
	]
}

I started by adding the Effect, Principal, and Action to allow principals from three network security accounts to assume the role. To ensure they have the right job role, I added a condition requiring that the JobRole=NetworkAdmin tag be applied to principals before they can assume the role. Finally, as an added layer of security, I added a second condition requiring that anyone assuming the role come from an account in the network security OU. This final step ensures that I specified the correct account IDs for my network security accounts: even if I accidentally provided an account that is not part of my organization, members of that account won't be able to assume the role because they aren't part of ou-netsec.

Though only members of the network security team may assume the role, it’s still possible for any principals with IAM permissions to modify it. As next steps, I could apply a Service Control Policy (SCP) that protects the role from modification and prevents other roles in the account from modifying VPCs. For more information, see How to use service control policies to set permission guardrails in the AWS Security Blog.

Summary

AWS offers tools to control access for individual principals, accounts, OUs, or entire organizations—this helps you manage permissions at the appropriate scale for your business. You can now use the aws:PrincipalOrgPaths condition key to control access to your resources based on OUs configured in AWS Organizations. For more information about these global condition keys and policy examples, read the IAM documentation.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon Identity and Access Management forum.

Want more AWS Security news? Follow us on Twitter.

Michael Switzer, Senior Product Manager AWS Identity

Michael Switzer

Mike Switzer is the product manager for the Identity and Access Management service at AWS. He enjoys working directly with customers to identify solutions to their challenges, and using data-driven decision making to drive his work. Outside of work, Mike is an avid cyclist and outdoorsperson. Mike holds a master’s degree in computational mathematics from the University of Washington.

Your AWS re:Invent 2019 guide to AWS Identity sessions, workshops, and chalk talks

Post Syndicated from Michael Chan original https://aws.amazon.com/blogs/security/aws-reinvent-2019-guide-to-aws-identity-sessions-workshops-chalk-talks/

AWS re:Invent 2019 is coming fast! You’ll soon need to prioritize your sessions. Here’s a list of AWS Identity sessions, workshops, and chalk talks at AWS re:Invent 2019. If you haven’t registered yet for re:Invent, here’s a template you can provide to your manager to help justify your trip.

AWS Identity Leadership Keynote

SEC207-L – Leadership session: AWS identity (Breakout session)
Digital identity is one of the fastest growing and fastest changing parts of the cloud. Zero-trust networks, GDPR concerns, and new IoT opportunities have been dominating cloud news coverage. In this session, learn about significant industry changes that will affect the way AWS approaches identity for both workforce and consumer customers. We announce new features, discuss our participation in open standards and industry groups, and explain how we’re making identity, access control, and resource management easier for you every day.

AWS Identity Management for your Workforce

FSI310 – The journey to least privilege: IAM for Financial Services (Chalk talk)
Enhancements to AWS Identity and Access Management and related services have made it safer and easier than ever to grant developers direct access to AWS. In this session, we share a new approach to automating identity and access management in AWS based on recent engagements with global Financial Services customers. Then, we dive deep to answer your questions about how CI/CD tools and techniques can be used to enforce separation of duties, curtail human review of policy code, and delegate access to IAM while reducing the risk of unintended privilege escalation.

MGT407-R – Automating security management processes with AWS IAM and AWS CloudFormation (Builders session)
Security is a critical element for highly regulated industries like healthcare. Infrastructure as code provides several options to automate security controls, whether it is implementing rules and guardrails or managing changes to policies in an automated yet auditable way. Learn how to implement a process to automate creation, permission changes, and exception management with AWS Service Catalog, AWS CloudFormation, and AWS IAM policies, fostering efficient collaborations between security stakeholders across teams. (Note that this session is repeated once more during the week and denoted with a suffix of “-R1.”)

WIN312-R – Active Directory on AWS to support Windows workloads (Breakout session)
Want to learn your options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it’s important to consider how to deploy Microsoft Active Directory to support group policy management, authentication, and authorization. In this session, we discuss options for deploying Microsoft Active Directory to AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory to Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover such topics as integrating your on-premises Microsoft Active Directory environment to the cloud and leveraging SaaS applications, such as Office 365, with AWS Single Sign-On. (Note that this session is repeated once more during the week and denoted with a suffix of “-R1.”)

WIN405-R – Active Directory design patterns on AWS (Builders session)
Want to learn about your options for running Microsoft Active Directory on AWS? When you move Microsoft workloads to AWS, it’s important to consider how to deploy Active Directory in support of name resolution, authentication, and authorization. In this session, we discuss options for deploying Microsoft Active Directory to AWS, including AWS Managed Microsoft Active Directory and deploying Active Directory to Windows on Amazon EC2. The discussion includes such topics as how to integrate your on-premises Active Directory environment to the cloud using Amazon Route 53 Resolver. (Note that this session is repeated once more during the week and denoted with a suffix of “-R1.”)

AWS Identity Management for your Customers

SEC219-R — Build the next great app with Amazon Cognito (Chalk talk)
Are you planning to build the next great app? Are you planning to include features like AI-driven responses, a friendly user experience, and a lightning fast response time? There’s just one thing in your way: Identity. Before your users can use your app, you first have to know who they are. In this talk, we walk through how Amazon Cognito can help you deliver a unified identity management and authentication experience and help you mediate access to AWS services. We then discuss Amazon Cognito features, best practices, architectures, and how you can use Amazon Cognito to build your app today. (Note that this session is repeated once more during the week and denoted with a suffix of “-R1.”)

SEC403-R — Serverless identity management, authentication, and authorization (Workshop)
In this workshop, you learn how to build a serverless microservices application demonstrating end-to-end authentication and authorization using Amazon Cognito, Amazon API Gateway, AWS Lambda, and all things AWS Identity and Access Management (IAM). You have the opportunity to build an end-to-end functional app with a secure identity provider showcasing user authentication patterns. To participate, you need a laptop, an active AWS Account, an AWS IAM administrator, and familiarity with core AWS services. (Note that this session is repeated once more during the week and denoted with a suffix of “-R1.”)

SEC409-R — Fine-grained access control for serverless apps (Builders session)
In this small-group, hands-on builders session, you take a guided tour of how to build enterprise-grade serverless web applications with fine-grained, directory-based access controls. We show how to take a regular Express.js app, move it to AWS Lambda, add authentication using Amazon Cognito with SAML federation, and implement fine-grained authorization based on an external identity provider’s group membership (e.g., LDAP/AD). Services used: Amazon Cognito, AWS Lambda, Amazon API Gateway, Amazon DynamoDB, AWS CDK, and AWS Amplify. Prerequisites: Proficiency in basic JavaScript/TypeScript. Basic experience with AWS is recommended but not mandatory. (Note that this session is repeated twice more during the week and denoted with a suffix of “-R1” and “-R2.”)

MOB304 – Implement auth and authorization flows in your iOS apps (Workshop)
Learn how to leverage social-provider identity federation (log in with Google, Amazon, Facebook, etc.) as well as easily set up custom authentication flows configured and deployed by the AWS Amplify CLI. You do this hands-on by building and deploying a modern iOS app using AWS Amplify and serverless services. This workshop is suitable for all, even if you’re not a cloud expert. Please bring your own Mac with Xcode already installed.

MOB315-R – Breaking down the OAuth flow (Chalk talk)
Are you lost when reading about OAuth implicit grants vs. code grants? Are you always struggling to understand the difference between Amazon Cognito user pools and Amazon Cognito federated identities? And how your corporate Active Directory fits into that picture? During this chalk talk, we demystify identity federation and whiteboard the main flows, allowing you to understand how to leverage these services to bring identity federation to your web or mobile applications. (Note that this session is repeated twice more during the week and denoted with a suffix of “-R1” and “-R2.”)

AWS Access Management

SEC209-R — Getting started with AWS identity (Breakout session)
The number and breadth of AWS services are large, but the set of techniques that you, as a builder in the cloud, will use to secure them is not. Your cloud journey starts with this breakout session, in which we get you up to speed quickly on the practical fundamentals to do identity and authorization right in AWS. (Note that this session is repeated once more during the week and denoted with a suffix of “-R1.”)

SEC217-R – Delegate permissions management using permissions boundaries (Builders session)
The new permissions boundaries feature in AWS IAM addresses how to delegate permissions management to many users. If you have developers who need to be able to create roles for Lambda functions or system administrators who need to be able to create AWS IAM roles and users, or if you find yourself in a similar scenario, permissions boundaries might be a solution for you. (Note that this session is repeated multiple times during the week and denoted with a suffix of “-R1,” “-R2,” and “-R3.”)
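
To make the idea concrete before the session, here is a minimal boto3 sketch of a permissions boundary in action. The policy contents, names, and the Lambda trust policy are illustrative assumptions, not material from the session.

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical boundary: whatever policies are attached later, roles
# created under this boundary can never exceed Lambda and CloudWatch
# Logs permissions.
boundary = iam.create_policy(
    PolicyName="LambdaDevBoundary",  # illustrative name
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["lambda:*", "logs:*"],
            "Resource": "*",
        }],
    }),
)

# A developer-created execution role; its effective permissions are
# the intersection of its attached policies and the boundary above.
iam.create_role(
    RoleName="my-function-role",  # illustrative name
    AssumeRolePolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }),
    PermissionsBoundary=boundary["Policy"]["Arn"],
)
```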

SEC326-R — AWS identity-dynamic permissions using employee attributes (Chalk talk)
To access AWS resources, you can configure your corporate directory as your identity provider (IdP) in AWS, letting your users federate into AWS for single sign-on access to AWS accounts using their corporate credentials. Along with employee credentials, your directory also stores employee attributes such as cost center, department, and email address. Now, you can rely on those employee attributes to create fine-grained permissions in AWS. Permissions can then be applied automatically based on attributes when employees change departments or new employees are added. (Note that this session is repeated once more during the week and denoted with a suffix of “-R1.”)
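
As a rough sketch of this pattern, the following hypothetical identity policy keys EC2 permissions off a department attribute passed from the directory as a principal tag; the tag key and actions are assumptions for illustration, not details taken from the talk.

```python
import json

# Illustrative attribute-based policy: a federated user may start or
# stop an EC2 instance only when the instance's "department" tag
# matches the "department" attribute carried on the principal.
abac_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances"],
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringEquals": {
                "aws:ResourceTag/department": "${aws:PrincipalTag/department}"
            }
        },
    }],
}
print(json.dumps(abac_policy, indent=2))
```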

SEC402-R — AWS identity: Permission boundaries & delegation (Workshop)
A permissions boundary is an AWS IAM feature that makes it easier to delegate permissions management to trusted employees. These employees can now configure IAM permissions to help scale permissions management and move workloads to AWS faster. For example, developers can create IAM roles for AWS Lambda functions and Amazon EC2 instances without exceeding certain permissions boundaries. In this workshop, using a sample application that we provide, practice delegating IAM permissions management so that developers can create roles without being able to either escalate their permissions or impact the resources of other teams. All attendees need a laptop and familiarity with core AWS services. (Note that this session is repeated once more during the week and denoted with a suffix of “-R1.”)
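
One way to picture the delegation this workshop practices is a policy that grants developers iam:CreateRole only on condition that the approved boundary is attached. A minimal sketch follows; the account ID, role path, and boundary ARN are placeholders.

```python
import json

# Illustrative delegation policy for developers: role creation is
# allowed only if the approved permissions boundary is attached,
# which prevents privilege escalation through self-granted roles.
delegation_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "CreateAppRolesOnlyWithBoundary",
        "Effect": "Allow",
        "Action": ["iam:CreateRole", "iam:AttachRolePolicy"],
        "Resource": "arn:aws:iam::111122223333:role/app-*",
        "Condition": {
            "StringEquals": {
                "iam:PermissionsBoundary":
                    "arn:aws:iam::111122223333:policy/LambdaDevBoundary"
            }
        },
    }],
}
print(json.dumps(delegation_policy, indent=2))
```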

SEC405-R — Access management in 4D (Breakout session)
In this session, we take “who can access what under which conditions” and deeply explore “under which conditions.” We demonstrate patterns that allow you to implement advanced access-management workflows such as two-person rule, just-in-time privilege elevation, real-time adaptive permissions, and more using advanced combinations of AWS identity services, a range of environmental and contextual information sources, and automated and human-based approval workflows. We keep things fun, engaging, and practical using a lively mix of demos and code that you can take home and implement in your own environment. (Note that this session is repeated once more during the week and denoted with a suffix of “-R1.”)

SEC409-R — Fine-grained access control for serverless apps (Builders session)
In this small-group, hands-on builders session, you take a guided tour of how to build enterprise-grade serverless web applications with fine-grained, directory-based access controls. We show how to take a regular Express.js app, move it to AWS Lambda, add authentication using Amazon Cognito with SAML federation, and implement fine-grained authorization based on an external identity provider’s group membership (e.g., LDAP/AD). Services used: Amazon Cognito, AWS Lambda, Amazon API Gateway, Amazon DynamoDB, AWS CDK, and AWS Amplify. Prerequisites: Proficiency in basic JavaScript/TypeScript. Basic experience with AWS is recommended but not mandatory. (Note that this session is repeated twice more during the week and denoted with a suffix of “-R1” and “-R2.”)

Governance of Multi-account Environments

SEC325-R — Architecting security & governance across your landing zone (Breakout session)
A key element of your AWS environment is having a framework to provide resource isolation, separation of duties, and clear billing separation (i.e., a landing zone). In this session, we discuss updates to multi-account strategy best practices for establishing your landing zone, new guidance for building organizational unit structures, and a historical context. We cover security patterns, such as identity federation, cross-account roles, consolidated logging, and account governance. We wrap up with considerations on using AWS Landing Zone, AWS Control Tower, or AWS Organizations. We encourage you to attend all the landing zone sessions. Search for “landing zone” in the session catalog. (Note that this session is repeated once more during the week and denoted with a suffix of “-R1.”)

SEC341-R — Set permission guardrails for multiple accounts in AWS Organizations (Chalk talk)
AWS Organizations provides central governance and management for multiple accounts. Central security administrators use service control policies (SCPs) with Organizations to establish controls that all AWS Identity and Access Management (IAM) principals (users and roles) adhere to. For example, you can use SCPs to restrict access to specific AWS Regions or prevent your IAM principals from deleting common resources, such as an IAM role used by your central administrators. You can also define exceptions to your governance controls, restricting service actions for all IAM entities (users, roles, and root) in the account except a specific administrator role. (Note that this session is repeated once more during the week and denoted with a suffix of “-R1.”)
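
To make the SCP idea concrete, here is a hypothetical guardrail in the shape the talk describes: deny actions outside approved Regions for everyone except a designated administrator role. The Region list, account, and role name are placeholders, and a real SCP would also exempt global services.

```python
import json

# Illustrative SCP: deny any action outside two approved Regions
# unless the caller is the central administrator role. Global
# services (e.g., IAM itself) would need additional exemptions in
# a real deployment.
region_guardrail = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideApprovedRegions",
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
            },
            "ArnNotLike": {
                "aws:PrincipalArn": "arn:aws:iam::*:role/CentralAdmin"
            },
        },
    }],
}
print(json.dumps(region_guardrail, indent=2))
```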

MGT302-R – Enable AWS adoption at scale with automation and governance (Breakout session)
Enterprises are taking advantage of AWS so they can move quickly while maintaining governance control over costs, security, and compliance. In this session, we discuss how AWS Control Tower, AWS Service Catalog, AWS Organizations, and AWS CloudFormation simplify compliance and make ongoing governance easier. You learn how to set up and govern your multi-account AWS environment or landing zone through automation, blueprints, and guardrails. Finally, you learn how to launch governed and secure resources on AWS through a DevOps CI/CD pipeline. (Note that this session is repeated once more during the week and denoted with a suffix of “-R1.”)

MGT307-R – Governance at scale: AWS Control Tower, AWS Organizations, and more (Chalk talk)
As you move to an organization-wide multi-account, multi-region strategy for your AWS environment, new questions emerge. How do I control budgets across many accounts, workloads, and users in a large organization? How do I automate account provisioning and maintain good security when hundreds of users and business units are requesting cloud resources? How can I ensure the organization is adhering to security and governance requirements? Bring all your questions about using AWS Landing Zones, AWS Control Tower, AWS Organizations, AWS Config, and more to build an AWS environment with governance control built in. (Note that this session is repeated multiple times during the week and denoted with a suffix of “-R1,” “-R2,” and “-R3.”)

Want more AWS Security news? Follow us on Twitter.

Michael Chan

Michael is a Developer Advocate for AWS Identity and Access Management. Prior to this, he was a Professional Services Consultant who assisted customers with their journey to AWS. He enjoys understanding customer problems and working backwards to provide practical solutions.

Learn about AWS Services & Solutions – September AWS Online Tech Talks

Post Syndicated from Jenny Hang original https://aws.amazon.com/blogs/aws/learn-about-aws-services-solutions-september-aws-online-tech-talks/

AWS Tech Talks

Join us this September to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:


Compute:

September 23, 2019 | 11:00 AM – 12:00 PM PT – Build Your Hybrid Cloud Architecture with AWS – Learn about the extensive range of services AWS offers to help you build a hybrid cloud architecture best suited for your use case.

September 26, 2019 | 1:00 PM – 2:00 PM PT – Self-Hosted WordPress: It’s Easier Than You Think – Learn how you can easily build a fault-tolerant WordPress site using Amazon Lightsail.

October 3, 2019 | 11:00 AM – 12:00 PM PT – Lower Costs by Right Sizing Your Instance with Amazon EC2 T3 General Purpose Burstable Instances – Get an overview of T3 instances, understand what workloads are ideal for them, and understand how the T3 credit system works so that you can lower your EC2 instance costs today.


Containers:

September 26, 2019 | 11:00 AM – 12:00 PM PT – Develop a Web App Using Amazon ECS and AWS Cloud Development Kit (CDK) – Learn how to build your first app using CDK and AWS container services.


Data Lakes & Analytics:

September 26, 2019 | 9:00 AM – 10:00 AM PT – Best Practices for Provisioning Amazon MSK Clusters and Using Popular Apache Kafka-Compatible Tooling – Learn best practices on running Apache Kafka production workloads at a lower cost on Amazon MSK.


Databases:

September 25, 2019 | 1:00 PM – 2:00 PM PT – What’s New in Amazon DocumentDB (with MongoDB compatibility) – Learn what’s new in Amazon DocumentDB, a fully managed MongoDB compatible database service designed from the ground up to be fast, scalable, and highly available.

October 3, 2019 | 9:00 AM – 10:00 AM PT – Best Practices for Enterprise-Class Security, High-Availability, and Scalability with Amazon ElastiCache – Learn about new enterprise-friendly Amazon ElastiCache enhancements like customer managed key and online scaling up or down to make your critical workloads more secure, scalable and available.


DevOps:

October 1, 2019 | 9:00 AM – 10:00 AM PT – CI/CD for Containers: A Way Forward for Your DevOps Pipeline – Learn how to build CI/CD pipelines using AWS services to get the most out of the agility afforded by containers.


Enterprise & Hybrid:

September 24, 2019 | 1:00 PM – 2:30 PM PT – Virtual Workshop: How to Monitor and Manage Your AWS Costs – Learn how to visualize and manage your AWS cost and usage in this virtual hands-on workshop.

October 2, 2019 | 1:00 PM – 2:00 PM PT – Accelerate Cloud Adoption and Reduce Operational Risk with AWS Managed Services – Learn how AMS accelerates your migration to AWS, reduces your operating costs, improves security and compliance, and enables you to focus on your differentiating business priorities.


IoT:

September 25, 2019 | 9:00 AM – 10:00 AM PT – Complex Monitoring for Industrial with AWS IoT Data Services – Learn how to solve your complex event monitoring challenges with AWS IoT Data Services.


Machine Learning:

September 23, 2019 | 9:00 AM – 10:00 AM PT – Training Machine Learning Models Faster – Learn how to train machine learning models quickly and with a single click using Amazon SageMaker.

September 30, 2019 | 11:00 AM – 12:00 PM PT – Using Containers for Deep Learning Workflows – Learn how containers can help address challenges in deploying deep learning environments.

October 3, 2019 | 1:00 PM – 2:30 PM PT – Virtual Workshop: Getting Hands-On with Machine Learning and Ready to Race in the AWS DeepRacer League – Join DeClercq Wentzel, Senior Product Manager for AWS DeepRacer, for a presentation on the basics of machine learning and how to build a reinforcement learning model that you can use to join the AWS DeepRacer League.


AWS Marketplace:

September 30, 2019 | 9:00 AM – 10:00 AM PT – Advancing Software Procurement in a Containerized World – Learn how to deploy applications faster with third-party container products.


Migration:

September 24, 2019 | 11:00 AM – 12:00 PM PT – Application Migrations Using AWS Server Migration Service (SMS) – Learn how to use AWS Server Migration Service (SMS) for automating application migration and scheduling continuous replication, from your on-premises data centers or Microsoft Azure to AWS.


Networking & Content Delivery:

September 25, 2019 | 11:00 AM – 12:00 PM PT – Building Highly Available and Performant Applications using AWS Global Accelerator – Learn how to build highly available and performant architectures for your applications with AWS Global Accelerator, now with source IP preservation.

September 30, 2019 | 1:00 PM – 2:00 PM PT – AWS Office Hours: Amazon CloudFront – Just getting started with Amazon CloudFront and Lambda@Edge? Get answers directly from our experts during AWS Office Hours.


Robotics:

October 1, 2019 | 11:00 AM – 12:00 PM PT – Robots and STEM: AWS RoboMaker and AWS Educate Unite! – Come join members of the AWS RoboMaker and AWS Educate teams as we provide an overview of our education initiatives and walk you through the newly launched RoboMaker Badge.


Security, Identity & Compliance:

October 1, 2019 | 1:00 PM – 2:00 PM PT – Deep Dive on Running Active Directory on AWS – Learn how to deploy Active Directory on AWS and start migrating your Windows workloads.


Serverless:

October 2, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive on Amazon EventBridge – Learn how to optimize event-driven applications, and use rules and policies to route, transform, and control access to these events that react to data from SaaS apps.


Storage:

September 24, 2019 | 9:00 AM – 10:00 AM PT – Optimize Your Amazon S3 Data Lake with S3 Storage Classes and Management Tools – Learn how to use the Amazon S3 Storage Classes and management tools to better manage your data lake at scale and to optimize storage costs and resources.

October 2, 2019 | 11:00 AM – 12:00 PM PT – The Great Migration to Cloud Storage: Choosing the Right Storage Solution for Your Workload – Learn more about AWS storage services and identify which service is the right fit for your business.


AWS re:Invent Security Recap: Launches, Enhancements, and Takeaways

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-reinvent-security-recap-launches-enhancements-and-takeaways/

For more from Steve, follow him on Twitter

Customers continue to tell me that our AWS re:Invent conference is a winner. It’s a place where they can learn, meet their peers, and rediscover the art of the possible. Of course, there is always an air of anticipation around what new AWS service releases will be announced. This time around, we went even bigger than ever before. There were over 50,000 people in attendance, spread across the Las Vegas strip, with over 2,000 breakout sessions and jam-packed hands-on learning opportunities, including multi-day hackathons, workshops, and bootcamps.

A big part of all this activity included sharing knowledge about the latest AWS Security, Identity and Compliance services and features, as well as announcing new technology that we’re excited to see adopted so quickly across so many use cases.

Here are the top Security, Identity and Compliance releases from re:Invent 2018:

Keynotes: All that’s new

New AWS offerings provide more prescriptive guidance

The AWS re:Invent keynotes from Andy Jassy, Werner Vogels, and Peter DeSantis, as well as my own leadership session, featured the following new releases and service enhancements. We continue to strive to make architecting easier for developers, as well as our partners and our customers, so they stay secure as they build and innovate in the cloud.

  • We launched several prescriptive security services to assist developers and customers in understanding and managing their security and compliance postures in real time. My favorite new service is AWS Security Hub, which helps you centrally manage your security and compliance controls. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie, as well as from AWS Partner solutions. Findings are visually summarized on integrated dashboards with actionable graphs and tables. You can also continuously monitor your environment using automated compliance checks based on the AWS best practices and industry standards your organization follows. You can enable Security Hub on a single account with one click in the AWS Security Hub console or with a single API call (see the first sketch after this list); once enabled, Security Hub begins aggregating and prioritizing findings.
  • Another prescriptive service we launched is called AWS Control Tower. One of the first things customers think about when moving to the cloud is how to set up a landing zone for their data. AWS Control Tower removes the guesswork, automating the set-up of an AWS landing zone that is secure, well-architected, and supports multiple accounts. AWS Control Tower does this by using a set of blueprints that embody AWS best practices. Guardrails, both mandatory and recommended, are available for high-level, rule-based governance, allowing you to have the right operational control over your accounts. An integrated dashboard enables you to keep a watchful eye over the accounts provisioned, the guardrails that are enabled, and your overall compliance status. Sign up for the Control Tower preview here.
  • The third prescriptive service, called AWS Lake Formation, will reduce your data lake build time from months to days. Prior to AWS Lake Formation, setting up a data lake involved numerous granular tasks. Creating a data lake with Lake Formation is as simple as defining where your data resides and what data access and security policies you want to apply. Lake Formation then collects and catalogs data from databases and object storage, moves the data into your new Amazon S3 data lake, cleans and classifies data using machine learning algorithms, and secures access to your sensitive data. Get started with a preview of AWS Lake Formation here.
  • Next up, AWS IoT Greengrass enables enhanced security through hardware root of trust: private key storage on hardware secure elements, including Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs). Storing your private key on a hardware secure element adds hardware-root-of-trust security to existing AWS IoT Greengrass security features, which include X.509 certificates for TLS mutual authentication and encryption of data both in transit and at rest. You can also use the hardware secure element to protect secrets that you deploy to your AWS IoT Greengrass device using AWS IoT Greengrass Secrets Manager. To try these security enhancements for yourself, check out https://aws.amazon.com/greengrass/.
  • You can now use the AWS Key Management Service (KMS) custom key store feature to gain more control over your KMS keys. Previously, KMS offered the ability to store keys in shared HSMs managed by KMS. However, we heard from customers that their needs were more nuanced. In particular, they needed to manage keys in single-tenant HSMs under their exclusive control. With KMS custom key store, you can configure your own CloudHSM cluster and authorize KMS to use it as a dedicated key store for your keys. Then, when you create keys in KMS, you can choose to generate the key material in your CloudHSM cluster. Get started with KMS custom key store by following the steps in this blog post; the second sketch after this list illustrates the flow.
  • We’re excited to announce the release of ATO on AWS to help customers and partners speed up the FedRAMP approval process (which has traditionally taken SaaS providers up to 2 years to complete). We’ve already had customers, such as Smartsheet, complete the process in less than 90 days with ATO on AWS. Customers will have access to training, tools, pre-built CloudFormation templates, control implementation details, and pre-built artifacts. Additionally, customers are able to access direct engagement and guidance from AWS compliance specialists and support from expert AWS consulting and technology partners who are a part of our Security Automation and Orchestration (SAO) initiative, including GitHub, Yubico, RedHat, Splunk, Allgress, Puppet, Trend Micro, Telos, CloudCheckr, Saint, Center for Internet Security (CIS), OKTA, Barracuda, Anitian, Kratos, and Coalfire. To get started with ATO on AWS, contact the AWS partner team at [email protected].
  • Finally, I announced our first conference dedicated to cloud security, identity and compliance: AWS re:Inforce. The inaugural AWS re:Inforce, a hands-on gathering of like-minded security professionals, will take place in Boston, MA on June 25th and 26th, 2019 at the Boston Convention and Exhibition Center. The cost for a full conference pass will be $1,099. I’m hoping to see you all there. Sign up here to be notified when registration opens.
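
As a companion to the Security Hub item above, here is a minimal boto3 sketch of the single-call enablement. It assumes credentials with Security Hub permissions in the target account and Region; everything else about it is illustrative.

```python
import boto3

securityhub = boto3.client("securityhub")

# Enable Security Hub for this account and Region (raises
# ResourceConflictException if it is already enabled).
securityhub.enable_security_hub()

# Pull a first page of aggregated findings once services such as
# GuardDuty, Inspector, or Macie start reporting.
for finding in securityhub.get_findings(MaxResults=10)["Findings"]:
    print(finding["Severity"]["Normalized"], finding["Title"])
```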
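And for the KMS custom key store item, here is a sketch of the flow under stated assumptions: an active CloudHSM cluster, its trust anchor certificate on disk, and the kmsuser password. All identifiers below are placeholders.

```python
import boto3

kms = boto3.client("kms")

# Register an existing CloudHSM cluster as a dedicated key store.
# Cluster ID, certificate path, and password are placeholders.
store = kms.create_custom_key_store(
    CustomKeyStoreName="example-hsm-key-store",
    CloudHsmClusterId="cluster-1234567890abcdef0",
    TrustAnchorCertificate=open("customerCA.crt").read(),
    KeyStorePassword="kmsuser-password",
)

# Connection is asynchronous; poll describe_custom_key_stores until
# ConnectionState is CONNECTED before creating keys.
kms.connect_custom_key_store(CustomKeyStoreId=store["CustomKeyStoreId"])

# Create a CMK whose key material is generated and stored in the
# CloudHSM cluster rather than in KMS-managed HSMs.
key = kms.create_key(
    Origin="AWS_CLOUDHSM",
    CustomKeyStoreId=store["CustomKeyStoreId"],
    Description="Example CMK backed by a customer-managed CloudHSM cluster",
)
print(key["KeyMetadata"]["Arn"])
```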

Key re:Invent Takeaways

AWS is here to help you build

  1. Customers want to innovate, and cloud needs to securely enable this. Companies need to be able to innovate to meet rapidly evolving consumer demands. This means they need cloud security capabilities they can rely on to meet their specific security requirements, while allowing them to continue to meet and exceed customer expectations. AWS Lake Formation, AWS Control Tower, and AWS Security Hub aggregate and automate otherwise manual processes involved with setting up a secure and compliant cloud environment, giving customers greater flexibility to innovate, create, and manage their businesses.
  2. Cloud Security is as much art as it is science. Getting to what you really need to know about your security posture can be a challenge. At AWS, we’ve found that the sweet spot lies in services and features that enable you to continuously gain greater depth of knowledge into your security posture, while automating mission critical tasks that relieve you from having to constantly monitor your infrastructure. This manifests itself in having an end-to-end automated remediation workflow. I spent some time covering this in my re:Invent session, and will continue to advocate using a combination of services, such as AWS Lambda, WAF, S3, AWS CloudTrail, and AWS Config to proactively identify, mitigate, and remediate threats that may arise as your infrastructure evolves.
  3. Remove human access to data. I’ve set a goal at AWS to reduce human access to data by 80%. While that number may sound lofty, it’s purposeful, because the only way to achieve this is through automation. There have been a number of security incidents in the news across industries, ranging from inappropriate access to personal information in healthcare, to credential stuffing in financial services. The way to protect against such incidents? Automate key security measures and minimize your attack surface by enabling access control and credential management with services like AWS IAM and AWS Secrets Manager (a short sketch follows this list). Additional gains can be found by leveraging threat intelligence through continuous monitoring of incidents via services such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie (intelligence from these services will now be available in AWS Security Hub).
  4. Get your leadership on board with your security plan. We offer 500+ security services and features; however, new services and technology can’t be wholly responsible for implementing reliable security measures. Security teams need to set expectations with leadership early, aligning on a number of critical protocols, including how to restrict and monitor human access to data, patching and log retention duration, credential lifespan, blast radius reduction, embedded encryption throughout AWS architecture, and canaries and invariants for security functionality. It’s also important to set security Key Performance Indicators (KPIs) to continuously track. At AWS, we monitor the number of AppSec reviews, how many security checks we can automate, third-party compliance audits, metrics on internal time spent, and conformity with Service Level Agreements (SLAs). While the needs of your business may vary, we find baseline KPIs to be consistent measures of security assurance that can be easily communicated to leadership.
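
To ground takeaway 3 in code, here is a minimal sketch of taking a credential out of human hands: the application reads a database password from AWS Secrets Manager at runtime, so no person ever sees or copies it. The secret name is a placeholder.

```python
import boto3

# Illustrative credential retrieval: the application fetches the
# database password at runtime, and rotation happens in Secrets
# Manager rather than by hand.
secrets = boto3.client("secretsmanager")
response = secrets.get_secret_value(SecretId="prod/app/db-password")  # placeholder name
db_password = response["SecretString"]
```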

Final Thoughts

Queen’s famous lyric, “I want it all, I want it all, and I want it now,” accurately captures the sentiment at re:Invent this year. Security will always be job zero for us, and we continue to iterate on behalf of customers so they can securely build, experiment and create … right now! AWS is trusted by many of the world’s most risk-sensitive organizations precisely because we have demonstrated this unwavering commitment to putting security above all. Still, I believe we are in the early days of innovation and adoption of the cloud, and I look forward to seeing both the gains and use cases that come out of our latest batch of tools and services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds five patents in the field of cloud security architecture. Follow Steve on Twitter