Tag Archives: Security, Identity & Compliance

2018 C5 attestation is now available

Post Syndicated from Gerald Boyne original https://aws.amazon.com/blogs/security/2018-c5-attestation-is-now-available/

AWS has completed its 2018 assessment against the Cloud Computing Compliance Controls Catalog (C5) information security and compliance program. Germany’s national cybersecurity authority—Bundesamt für Sicherheit in der Informationstechnik (BSI)—established C5 to define a reference standard for German cloud security requirements. With C5 (as well as with IT-Grundschutz), customers in Germany can use the work performed under this BSI compliance catalog to comply with stringent local requirements.

AWS has added the Irish region DUB and 29 services to this year’s scope:

  • AWS AppSync
  • AWS Batch
  • AWS Certificate Manager
  • AWS CodeBuild
  • AWS CodeCommit
  • AWS Config
  • AWS Firewall Manager
  • AWS IoT Device Management
  • AWS Managed Services
  • AWS OpsWorks
  • AWS Service Catalog
  • AWS Snowball
  • AWS Snowball Edge
  • AWS Snowmobile
  • AWS WAF
  • AWS X-Ray
  • Amazon Kinesis Video Streams
  • Amazon Athena
  • Amazon Cloud Directory
  • Amazon Inspector
  • Amazon MQ
  • Amazon Polly
  • Amazon QuickSight
  • Amazon Rekognition
  • Amazon SageMaker
  • Amazon Simple Email Service
  • Amazon SimpleDB
  • Amazon WorkDocs
  • Amazon WorkMail

AWS now has 71 services in scope for C5. In addition, AWS has included the C5 aspect of “Confidentiality” as advanced C5 testing, which further supports compliance with GDPR by testing the Technical and Organizational Measures (TOMs), and the C5 aspect of “Availability” as advanced C5 testing, which gives customers higher independent assurance of the availability of AWS services.

For more information, German readers can take a look at these resources:

The English version of the C5 report is available through AWS Artifact.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Guidelines for protecting your AWS account while using programmatic access

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/guidelines-for-protecting-your-aws-account-while-using-programmatic-access/

One of the most important things you can do as a customer to ensure the security of your resources is to maintain careful control over who has access to them. This is especially true if any of your AWS users have programmatic access. Programmatic access allows you to invoke actions on your AWS resources either through an application that you write or through a third-party tool. You use an access key ID and a secret access key to sign your requests for authorization to AWS. Programmatic access can be quite powerful, so implementing best practices to protect access key IDs and secret access keys is important in order to prevent accidental or malicious account activity. In this post, I’ll highlight some general guidelines to help you protect your account, as well as some of the options you have when you need to provide programmatic access to your AWS resources.

Protect your root account

Your AWS root account—the account that’s created when you initially sign up with AWS—has unrestricted access to all your AWS resources. There’s no way to limit permissions on a root account. For this reason, AWS recommends that you never generate access keys for your root account: root access keys would give whoever holds them the power to do things like close the entire account—an ability that they almost certainly don’t need. Instead, you should create individual AWS Identity and Access Management (IAM) users, then grant each user permissions based on the principle of least privilege: grant them only the permissions required to perform their assigned tasks. To more easily manage the permissions of multiple IAM users, assign users with the same permissions to an IAM group.
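As a rough sketch of that pattern (the group, policy, user, and bucket names below are hypothetical), the following Python (boto3) snippet creates a group with a narrowly scoped inline policy and adds a new IAM user to it:


import json
import boto3

iam = boto3.client("iam")

# Hypothetical group that may only read objects from one S3 bucket
iam.create_group(GroupName="report-readers")
iam.put_group_policy(
    GroupName="report-readers",
    PolicyName="read-reports-bucket",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::examplebucket",
                "arn:aws:s3:::examplebucket/*"
            ]
        }]
    }),
)

# The new IAM user inherits only the group's permissions
iam.create_user(UserName="jane.doe")
iam.add_user_to_group(GroupName="report-readers", UserName="jane.doe")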

Your root account should always be protected by Multi-Factor Authentication (MFA). This additional layer of security helps protect against unauthorized logins to your account by requiring two factors: something you know (a password) and something you have (for example, an MFA device). AWS supports virtual and hardware MFA devices, U2F security keys, and SMS text message-based MFA.

Decide how to grant access to your AWS account

To allow users access to the AWS Management Console and AWS Command Line Interface (AWS CLI), you have two options. The first one is to create identities and allow users to log in using a username and password managed by the IAM service. The second approach is to use federation to allow your users to use their existing corporate credentials to log into the AWS console and CLI.

Each approach has its use cases. Federation is generally better for enterprises that have an existing central directory or plan to need more than the current limit of 5,000 IAM users.

Note: Access to all AWS accounts is managed by AWS IAM. Regardless of the approach you choose, make sure to familiarize yourself with and follow IAM best practices.

Decide when to use access keys

Applications running outside of an AWS environment will need access keys for programmatic access to AWS resources. For example, monitoring tools running on-premises and third-party automation tools will need access keys.

However, if the resources that need programmatic access are running inside AWS, the best practice is to use IAM roles instead. An IAM role is a defined set of permissions—it’s not associated with a specific user or group. Instead, any trusted entity can assume the role to perform a specific business task.

By utilizing roles, you can grant a resource access without hardcoding an access key ID and secret access key into the configuration file. For example, you can grant an Amazon Elastic Compute Cloud (EC2) instance access to an Amazon Simple Storage Service (Amazon S3) bucket by attaching a role with a policy that defines this access to the EC2 instance. This approach improves your security, as IAM will dynamically manage the credentials for you with temporary credentials that are rotated automatically.
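As an illustration of why this helps (the bucket name and object key below are hypothetical), code running on an EC2 instance that has such a role attached can call Amazon S3 without any credentials appearing in the code or its configuration; the SDK resolves the role’s temporary credentials from the instance profile and refreshes them automatically:


import boto3

# No access key ID or secret access key anywhere in this code or its config:
# boto3 resolves credentials from the instance profile (instance metadata)
# because the EC2 instance was launched with the IAM role attached.
s3 = boto3.client("s3")

s3.put_object(
    Bucket="examplebucket",      # hypothetical bucket the role's policy allows
    Key="reports/daily.csv",
    Body=b"example,data\n",
)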

Grant least privileges to service accounts

If you decided to create service accounts (that is, accounts used for programmatic access by applications running outside of the AWS environment) and generate access keys for them, you should create a dedicated service account for each use case. This will allow you to restrict the associated policy to only the permissions needed for the particular use case, limiting the blast radius if the credentials are compromised. For example, if a monitoring tool and a release management tool both require access to your AWS environment, create two separate service accounts with two separate policies that define the minimum set of permissions for each tool.

In addition to this, it’s also a best practice to add conditions to the policy that further restrict access—such as restricting access to only the source IP address range of your clients.

Below is an example policy that represents least privilege. It grants the needed permission (s3:PutObject) on a specific resource (an S3 bucket named “examplebucket”) while adding a further condition (the client must come from the IP range 203.0.113.0/24).


{
    "Version": "2012-10-17",
    "Id": "S3PolicyRestrictPut",
    "Statement": [
        {
            "Sid": "IPAllow",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::examplebucket/*",
            "Condition": {
                "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}
            }
        }
    ]
}

Use temporary credentials from AWS STS

AWS Security Token Service (AWS STS) is a web service that enables you to request temporary credentials for use in your code, CLI, or third-party tools. It allows you to assume an IAM role with which you have a trusted relationship and then generate temporary, time-limited credentials based on the permissions associated with the role. These credentials can only be used during the validity period, which reduces your risk.

There are two ways to generate temporary credentials. You can generate them from the CLI, which is helpful when you need credentials for testing from your local machine or from an on-premises or third-party tool. You can also generate them from code using one of the AWS SDKs. This approach is helpful if you need credentials in your application, or if you have multiple user types that require different permission levels.

Create temporary credentials using the CLI

If you have access to the AWS CLI, you can use it to generate temporary credentials with limited permissions to use in your local testing or with third-party tools. To be able to use this approach, here’s what you need:

  • Access to the AWS CLI through your primary user account or through federation. To learn how to configure CLI access using your IAM credentials, follow this link. If you use federation, you still can use the CLI by following the instructions in this blog post.
  • An IAM role that represents the permissions needed for your test client. In the example below, I use “s3-read”. This role should have a policy attached that grants the least privileges needed for the use case.
  • A trusted relationship between the service role (“s3-read”) and your user account, to allow you to assume the service role and generate temporary credentials. Visit this link for the steps to create this trust relationship.

The example command below will generate a temporary access key ID and secret access key that are valid for 15 minutes, based on permissions associated with the role named “s3-read”. You can replace the values below with your own account number, service role, and duration, then use the secret access key and access key ID in your local clients.


aws sts assume-role --role-arn <arn:aws:iam::AWS-ACCOUNT-NUMBER:role/s3-read> --role-session-name <s3-access> --duration-seconds <900>

Here are my results from running the command:


{ "AssumedRoleUser": 
    { 
        "AssumedRoleId": "AROAIEGLQIIQUSJ2I5XRM:s3-access", 
        "Arn": "arn:aws:sts::AWS-ACCOUNT-NUMBER:assumed-role/s3-read/s3-access" 
    }, 
    "Credentials": { 
        "SecretAccessKey":"wZJph6PX3sn0ZU4g6yfXdkyXp5m+nwkEtdUHwC3w",  
        "SessionToken": "FQoGZXIvYXdzENr//////////<<REST-OF-TOKEN>>",
        "Expiration": "2018-11-02T16:46:23Z",
        "AccessKeyId": "ASIAXQZXUENECYQBAAQG" 
    } 
  }

Create temporary credentials from your code

If you have an application that already uses the AWS SDK, you can use AWS STS to generate temporary credentials right from the code instead of hard-coding credentials into your configurations. This approach is recommended if you have client-side code that requires credentials, or if you have multiple types of users (for example, admins, power-users, and regular users) since it allows you to avoid hardcoding multiple sets of credentials for each user type.
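For example, a minimal Python (boto3) sketch of the same assume-role flow shown in the CLI section (the account number and role name are placeholders) looks like this:


import boto3

sts = boto3.client("sts")

# Request short-lived credentials for the "s3-read" role (15 minutes)
response = sts.assume_role(
    RoleArn="arn:aws:iam::AWS-ACCOUNT-NUMBER:role/s3-read",
    RoleSessionName="s3-access",
    DurationSeconds=900,
)
creds = response["Credentials"]

# Use the temporary credentials for subsequent calls; they expire automatically
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3 = session.client("s3")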

For more information about using temporary credentials from the AWS SDK, visit this link.

Utilize Access Advisor

The IAM console provides information about when an AWS service was last accessed by different principals. This information is called service last accessed data.

Using this tool, you can view when an IAM user, group, role, or policy last attempted to access services to which they have permissions. Based on this information, you can decide if certain permissions need to be revoked or restricted further.

Make this tool part of your periodic security check. Use it to evaluate the permissions of all your IAM entities and to revoke unused permissions until they’re needed. You can also automate the process of periodic permissions evaluation using Access Advisor APIs. If you want to learn how, this blog post is a good starting point.
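As a starting point for that automation (the role ARN below is a placeholder), the following Python sketch requests service last accessed data for one IAM entity and prints the services it has permission to use but has never called:


import time
import boto3

iam = boto3.client("iam")

# Ask IAM to generate service last accessed data for an entity (user, group, role, or policy)
job = iam.generate_service_last_accessed_details(
    Arn="arn:aws:iam::AWS-ACCOUNT-NUMBER:role/s3-read"
)

# Poll until the report is ready
while True:
    details = iam.get_service_last_accessed_details(JobId=job["JobId"])
    if details["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(2)

# Services the entity can access but has never called are candidates for revocation
for service in details["ServicesLastAccessed"]:
    if "LastAuthenticated" not in service:
        print("Never used:", service["ServiceName"])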

Other tools for credentials management

While least privilege access and temporary credentials are important, it’s equally important that your users are managing their credentials properly—from rotation to storage. Below is a set of services and features that can help to securely store, retrieve, and rotate credentials.

AWS Systems Manager Parameter Store

AWS Systems Manager offers a capability called Parameter Store that provides secure, centralized storage for configuration parameters and secrets across your AWS account. You can store plain text or encrypted data like configuration parameters, credentials, and license keys. Once stored, you can configure granular access to specify who can obtain these parameters in your application, adding another layer of security to protect your data.

Parameter Store is a good choice for use cases in which you need hierarchical storage for configuration data management across your account. For example, you can store database access credentials (username and password) in Parameter Store, encrypt them with an encryption key managed by AWS Key Management Service, and grant EC2 instances running your application permissions to read and decrypt those credentials.
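A minimal sketch of that pattern from the application’s side (the parameter names here are hypothetical): code on the EC2 instance reads the parameters at startup and lets Parameter Store decrypt the SecureString value with the associated AWS KMS key:


import boto3

ssm = boto3.client("ssm")

# SecureString parameters are decrypted with the associated KMS key, provided
# the caller (here, the instance role) has ssm:GetParameter and kms:Decrypt
db_user = ssm.get_parameter(Name="/prod/db/username")["Parameter"]["Value"]
db_pass = ssm.get_parameter(
    Name="/prod/db/password",
    WithDecryption=True,
)["Parameter"]["Value"]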

For more information on using AWS Systems Manager Parameter Store, visit this link.

AWS Secrets Manager

AWS Secrets Manager is a service that allows you to centrally manage the lifecycle of secrets used in your organization, including rotation, audits, and access control. By enabling you to rotate secrets automatically, Secrets Manager can help you meet your security and compliance requirements. Secrets Manager also offers built-in integration for MySQL, PostgreSQL, and Amazon Aurora on Amazon RDS and can be extended to other services.
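For retrieving a secret at runtime, a minimal Python sketch (the secret name and its JSON shape are assumptions) might look like this:


import json
import boto3

secrets = boto3.client("secretsmanager")

# Fetch the current version of the secret; rotation changes the value,
# not the secret's name, so callers don't need to be updated
value = secrets.get_secret_value(SecretId="prod/app/db-credentials")
credentials = json.loads(value["SecretString"])

username = credentials["username"]
password = credentials["password"]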

For more information about using AWS Secrets Manager to store and retrieve secrets, visit this link.

Amazon Cognito

Amazon Cognito lets you add user registration, sign-in, and access management features to your web and mobile applications.

Cognito can be used as an Identity Provider (IdP), where it stores and maintains users and credentials securely for your applications, or it can be integrated with external identity providers through OpenID Connect and SAML, as well as with popular web identity providers like Amazon.com.

Using Amazon Cognito, you can generate temporary access credentials for your clients to access AWS services, eliminating the need to store long-term credentials in client applications.
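As a rough sketch of that credential exchange against an identity pool (the pool ID, region, provider name, and token below are placeholders), the flow from a client application looks roughly like this:


import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")

# Token returned by the identity provider (for example, a Cognito user pool) after sign-in
id_token = "<JWT issued to the user at sign-in>"
logins = {"cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE": id_token}

# Exchange the token for a Cognito identity ID, then for temporary AWS credentials
identity = cognito.get_id(
    IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555",
    Logins=logins,
)
creds = cognito.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins=logins,
)["Credentials"]
# creds contains AccessKeyId, SecretKey, SessionToken, and Expiration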

To learn more about using Amazon Cognito as an IdP, visit our developer guide to Amazon Cognito User Pools. If you’re interested in information about using Amazon Cognito with a third party IdP, review our guide to Amazon Cognito Identity Pools (Federated Identities).

AWS Trusted Advisor

AWS Trusted Advisor is a service that provides a real-time review of your AWS account and offers guidance on how to optimize your resources to reduce cost, increase performance, expand reliability, and improve security.

The “Security” section of AWS Trusted Advisor should be reviewed on a regular basis to evaluate the health of your AWS account. Currently, there are multiple security-specific checks that occur—from IAM access keys that haven’t been rotated to insecure security groups. Trusted Advisor is a tool to help you more easily perform a daily or weekly review of your AWS account.
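If you want to fold that review into a script, the AWS Support API exposes the Trusted Advisor checks (note: this API requires a Business or Enterprise support plan and is served from us-east-1). A sketch of listing the security checks and their current status:


import boto3

# The AWS Support API is only available in us-east-1 and requires
# a Business or Enterprise support plan
support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")["checks"]
for check in checks:
    if check["category"] != "security":
        continue
    result = support.describe_trusted_advisor_check_result(checkId=check["id"])["result"]
    print(check["name"], "->", result["status"])  # ok, warning, or error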

git-secrets

git-secrets, available from the AWS Labs GitHub account, helps you avoid committing passwords and other sensitive credentials to a git repository. It scans commits, commit messages, and --no-ff merges to prevent your users from inadvertently adding secrets to your repositories.

Conclusion

In this blog post, I’ve introduced some options to replace long-term credentials in your applications with temporary access credentials that can be generated using various tools and services on the AWS platform. Using temporary credentials can reduce the risk of falling victim to a compromised environment, further protecting your business.

I also discussed the concept of least privilege and provided some helpful services and procedures to maintain and audit the permissions given to various identities in your environment.

If you have questions or feedback about this blog post, submit comments in the Comments section below, or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mahmoud Matouk

Mahmoud is part of our worldwide public sector Solutions Architecture team, helping higher education customers build innovative, secure, and highly available solutions using various AWS services.

Author

Joe Chapman

Joe is a Solutions Architect with Amazon Web Services. He primarily serves AWS EdTech customers, providing architectural guidance and best practice recommendations for new and existing workloads. Outside of work, he enjoys spending time with his wife and dog, and finding new adventures while traveling the world.

AWS achieves HDS certification

Post Syndicated from Stephan Hadinger original https://aws.amazon.com/blogs/security/aws-achieves-hds-certification/

At AWS, the security, privacy, and protection of customer data always comes first, which is why I am pleased to share the news that AWS has achieved “Hébergeur de Données de Santé” (HDS) certification. With HDS certification, customers and partners who host French Personal Health Information (PHI) are now able to use AWS services to store and process personal health data. The HDS certificate for AWS can be found in AWS Artifact.

Introduced by the French governmental agency for health, “Agence Française de la Santé Numérique” (ASIP Santé), HDS certification aims to strengthen the security and protection of personal health data. Achieving this certification demonstrates that AWS provides a framework for technical and governance measures to secure and protect personal health data, governed by French law. The HDS certification validates that AWS ensures data confidentiality, integrity, and availability to its customers and partners. AWS worked with Bureau Veritas, an independent third-party auditor, to achieve the certification.

By adopting the AWS Cloud, hospitals, health insurance companies, researchers, and other organizations processing personal health data will be able to improve agility and collaboration, increase experimentation, and foster innovation in order to provide the best possible patient care. The HDS certification currently covers two AWS Regions in Europe (Ireland and Frankfurt), and this will be followed by the AWS Region in Paris, which is planned for the second quarter of 2019.

HDS certification adds to the list of internationally recognized certifications and attestations of compliance for AWS, which include ISO 27017 for cloud security, ISO 27018 for cloud privacy, SOC 1, SOC 2, SOC 3, and PCI DSS (Level 1). You can learn more about AWS HDS certification and other compliance certifications and accreditations here.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

How to enable secure access to Kibana using AWS Single Sign-On

Post Syndicated from Remek Hetman original https://aws.amazon.com/blogs/security/how-to-enable-secure-access-to-kibana-using-aws-single-sign-on/

Amazon Elasticsearch Service (Amazon ES) is a fully managed service to search, analyze, and visualize data in real time. The service offers integration with Kibana, an open-source data visualization and exploration tool that lets you perform log and time-series analytics and application monitoring.

Many enterprise customers who want to use these capabilities find it challenging to secure access to Kibana. Kibana users have direct access to data stored in Amazon ES—so it’s important that only authorized users have access to Kibana. Data stored in Amazon ES can also have different classifications. For example, you might have one domain that stores confidential data and another that stores public data. In this case, securing access requires you not only to prevent unauthorized users from accessing the data but also to grant different groups of users access to different data classifications.

In this post, I’ll show you how to secure access to Kibana through AWS Single Sign-On (AWS SSO) so that only users authenticated to Microsoft Active Directory can access and visualize data stored in Amazon ES. AWS SSO uses standard SAML-based identity federation, similar to Microsoft ADFS or PingFederate. AWS SSO integrates with AWS Managed Microsoft AD, or with Active Directory hosted on-premises or on an EC2 instance through AWS Active Directory Connector, which means that your employees can sign in to the AWS SSO user portal using their existing corporate Active Directory credentials. In addition, I’ll show you how to map users between an Amazon ES domain and a specific Active Directory security group so that you can limit who has access to a given Amazon ES domain.

Prerequisites and assumptions

You need the following for this walkthrough:

Solution overview

The architecture diagram below illustrates how the solution will authenticate users into Kibana:
 

Figure 1: Architectural diagram

  1. The user requests access to Kibana.
  2. Kibana sends an HTML form back to the browser with a SAML request for authentication from Cognito. The HTML form is automatically posted to Cognito, the user is prompted to select SSO, and the authentication request is passed to AWS SSO.
  3. AWS SSO sends a challenge to the browser for credentials.
  4. The user logs in to AWS SSO. AWS SSO authenticates the user against AWS Directory Service, which may in turn authenticate the user against an on-premises Active Directory.
  5. AWS SSO sends a SAML response to the browser.
  6. The browser posts the response to Cognito. Amazon Cognito validates the SAML response to verify that the user has been successfully authenticated and then passes the information back to Kibana.
  7. Access to Kibana and Elasticsearch is granted.

Deployment and configuration

In this section, I’ll show you how to deploy and configure the security aspects described in the solution overview.

Amazon Cognito authentication for Kibana

First, I’m going to highlight some initial configuration settings for Amazon Cognito and Amazon ES. I’ll show you how to create a Cognito user pool, a user pool domain, and an identity pool, and then how to configure Kibana authentication under Elasticsearch. For each of the commands, remember to replace the placeholders with your own values.

If you need more details on how to set up Amazon Cognito authentication for Kibana, please refer to the service documentation.

  1. Create an Amazon Cognito user pool with the following command:

    aws cognito-idp create-user-pool --pool-name <pool name, for example "Kibana">

    From the output, copy down the user pool id. You’ll need to provide it in a couple of places later in the process.

    
                    ...
                    "CreationDate": 1541690691.411,
                    "EstimatedNumberOfUsers": 0,
                    "Id": "us-east-1_0azgJMX31",
                    "LambdaConfig": {}
                }
            

  2. Create a user pool domain:

    aws cognito-idp create-user-pool-domain --domain <domain name> --user-pool-id <pool id created in step 1>

    The user pool domain name MUST be the same as your Amazon Elasticsearch domain name. If you receive an error that “domain already exists,” it means the name is already in use and you must choose a different name.

  3. Create your Amazon Cognito federated identities:

    aws cognito-identity create-identity-pool --identity-pool-name <identity pool name, e.g. Kibana> --allow-unauthenticated-identities

    To make this command work, you have to temporarily allow unauthenticated access by adding --allow-unauthenticated-identities. Unauthenticated access will be removed by Amazon Elasticsearch upon enabling Kibana authentication in the next step.

  4. Create an Amazon Elasticsearch domain. To do so, from the AWS Management Console, navigate to Amazon Elasticsearch and select Create a new domain.
    1. Make sure that the value you enter under “Elasticsearch domain name” matches the domain you created under the Cognito user pool.
    2. Under Kibana authentication, complete the form with the following values, as shown in the screenshot:
      • For Cognito User Pool, enter the name of the pool you created in step one.
      • For Cognito Identity Pool, enter the identity you created in step three.
         
        Figure 2: Enter the identity you created in step three

  5. Now you’re ready to assign IAM roles to your identity pool. Those roles will be saved with your identity pool, and whenever Cognito receives a request to authorize a user, it will automatically use these roles.
    1. From the AWS Management Console, go to Amazon Cognito and select Manage Identity Pools.
    2. Select the identity pool you created in step three.
    3. You should receive the following message: You have not specified roles for this identity pool. Click here to fix it. Follow the link.
       
      Figure 3: Follow the "Click here to fix it" link

      Figure 3: Follow the “Click here to fix it” link

    4. Under Edit identity pool, next to Unauthenticated role, select Create new role.
    5. Select Allow and save your changes.
    6. Next to Authenticated role, select Create new role.
    7. Select Allow and save your changes.
  6. Finally, modify the Amazon Elasticsearch access policy:
    1. From the AWS Management Console, go to AWS Identity and Access Management (IAM).
    2. Search for the authenticated role you created in step five and copy the role ARN.
    3. From the management console, go to Amazon Elasticsearch Service, and then select the domain you created in step four.
    4. Select Modify access policy and add the following policy (replace the ARN of the authenticated role and the domain ARN with your own values):
      
                      {
                          "Effect": "Allow",
                          "Principal": {
                              "AWS": "<ARN of Authenticated role>"
                          },
                          "Action": "es:ESHttp*",
                          "Resource": "<Domain ARN/*>"
                      }
                      

      Note: For more information about the Amazon Elasticsearch Service access policy visit: https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-ac.html

Configuring AWS Single Sign-On

In this section, I’ll show you how to configure AWS Single Sign-On. In this solution, AWS SSO is used not only to integrate with Microsoft AD but also as a SAML 2.0 identity federation provider. SAML 2.0 is an industry standard used for securely exchanging SAML assertions that pass information about a user between a SAML authority (in this case, Microsoft AD), and a SAML consumer (in this case, Amazon Cognito).

Add Active Directory

  1. From the AWS Management Console, go to AWS Single Sign-On.
  2. If this is the first time you’re configuring AWS SSO, you’ll be asked to enable AWS SSO. Follow the prompt to do so.
  3. From the AWS SSO Dashboard, select Manage your directory.
     
    Figure 4: Select Manage your directory

  4. Under Directory, select Change directory.
     
    Figure 5: Select "Change directory"

    Figure 5: Select “Change directory”

  5. On the next screen, select Microsoft AD Directory, select the directory you created under AWS Directory Service as a part of prerequisites, and then select Next: Review.
     
    Figure 6: Select "Microsoft AD Directory" and then select the directory you created as a part of the prerequisites

    Figure 6: Select “Microsoft AD Directory” and then select the directory you created as a part of the prerequisites

  6. On the Review page, confirm that you want to switch from an AWS SSO directory to an AWS Directory Service directory, and then select Finish.
    1. Once setup is complete, select Proceed to the directory.

Add application

  1. From AWS SSO Dashboard, select Applications and then Add a new application. Select Add a custom SAML 2.0 application.
     
    Figure 7: Select "Application" and then "Add a new application"

    Figure 7: Select “Application” and then “Add a new application”

  2. Enter a display name for your application (for example, “Kibana”) and scroll down to Application metadata. Select the link that reads If you don’t have a metadata file, you can manually type your metadata values.
  3. Enter the following values, being sure to replace the placeholders with your own values:
    1. Application ACS URL: https://<Elasticsearch domain name>.auth.<region>.amazoncognito.com/saml2/idpresponse
    2. Application SAML audience: urn:amazon:cognito:sp:<user pool id>
  4. Select Save changes.
     
    Figure 8: Select "Save changes"

    Figure 8: Select “Save changes”

Add attribute mappings

Switch to the Attribute mappings tab and next to Subject, enter ${user:name} and select unspecified under Format as shown in the following screenshot. Click Save Changes.
 

Figure 9: Enter "${user:name}" and select "Unspecified"

Figure 9: Enter “${user:name}” and select “Unspecified”

For more information about attribute mappings visit: https://docs.aws.amazon.com/singlesignon/latest/userguide/attributemappingsconcept.html

Grant access to Kibana

To manage who has access to Kibana, switch to the Assigned users tab and select Assign users. Add individual users or groups.

Download SAML metadata

Next, you’ll need to download the AWS SSO SAML metadata. The SAML metadata contains information such as the SSO entity ID, public certificate, attributes schema, and other information that’s necessary for Cognito to federate with a SAML identity provider. To download the metadata .xml file, switch to the Configuration tab and select Download metadata file.
 

Figure 10: Select "Download metadata file"

Figure 10: Select “Download metadata file”

Adding an Amazon Cognito identity provider

The last step is to add the identity provider to the user pool.

  1. From the AWS Management Console, go to Amazon Cognito.
    1. Select Manage User Pools, and then select the user pool you created in the previous section.
    2. From the left side menu, under Federation, select Identity providers, and then select SAML.
    3. Select Select file, and then select the AWS SSO metadata .xml file you downloaded in the previous step.

      Figure 11: Select “Select file” and then select the AWS SSO metadata .xml file you downloaded in the previous step

    4. Enter the provider name (for example, “AWS SSO”), and then select Create provider.
  2. From the left side menu, under App integration, select App client settings.
  3. Uncheck Cognito User Pool, check the provider you created in step one, and select Save Changes.
     
    Figure 12: Uncheck "Cognito User Pool"

    Figure 12: Uncheck “Cognito User Pool”

At this point, the configuration is finished. When you open the Kibana URL, you should be redirected to AWS SSO and asked to authenticate using your Active Directory credentials. Keep in mind that if the Amazon ES domain was created inside a VPC, it won’t be accessible from the internet, only from within the VPC.

Managing multiple Amazon ES domains

In scenarios where different users need access to different Amazon ES domains, the solution would be as follows for each Amazon ES domain:

  1. Create one Active Directory Security Group per Amazon ES domain
  2. Create an Amazon Cognito user pool for each domain
  3. Add new applications to AWS SSO and grant permission to corresponding security groups
  4. Assign users to the appropriate security group

Deleting domains that use Amazon Cognito Authentication for Kibana

To prevent domains that use Amazon Cognito authentication for Kibana from becoming stuck in a configuration state of “Processing,” it’s important that you delete Amazon ES domains before deleting their associated Amazon Cognito user pools and identity pools.

Conclusion

I’ve outlined an approach to securing access to Kibana by integrating Amazon Cognito with AWS SSO and AWS Directory Service. This allows you to narrow the scope of users who have access to each Amazon Elasticsearch domain by configuring separate applications in AWS SSO for each of the domains.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Remek Hetman

Remek is a Senior Cloud Infrastructure Architect with Amazon Web Services Professional Services. He works with AWS financial enterprise customers providing technical guidance and assistance for Infrastructure, Security, DevOps, and Big Data to help them make the best use of AWS services. Outside of work, he enjoys spending time actively, and pursuing his passion – astronomy.

How to eliminate EC2 keypairs from password retrieval of provisioned Windows instances using Secrets Manager and CloudFormation

Post Syndicated from Tracy Pierce original https://aws.amazon.com/blogs/security/how-to-eliminate-ec2-keypairs-password-retrieval-provisioned-windows-instances-secrets-manager-cloudformation/

In my previous post, I showed you how you can increase the durability of your applications and prepare for disaster recovery by using AWS Secrets Manager to replicate your secrets across AWS regions. This is just one of many security best practices you can implement in your AWS environment. Another would be removing the need to share the SSH Private Key to retrieve the password for your Amazon Elastic Compute Cloud (EC2) Windows instances. Currently, to retrieve the Administrator password for your EC2 Windows instance via the Console or CLI, you need the SSH Private Key to decode it. When you have multiple Administrators who require access, this can result in sharing the SSH Private Key, or even the decoded Administrator password.

To increase security of your environment and remove the requirement of SSH Private Key sharing, I’ll show how you can use AWS Secrets Manager to retrieve the Administrator password from EC2 Windows instances, eliminating the need to share the SSH Private Key. By removing the need to share the SSH Private Key, Administrators no longer need to spend time securing the key or putting mechanisms in place to prevent employees from sharing the key.

I will show you how to use AWS CloudFormation to quickly set up resources in AWS Secrets Manager and EC2. I’ll show you how to use Instance user data to set the local Administrator password, which will enable you to retrieve the password securely without using a shared SSH Private Key. User data is data passed to the instance and is used to perform common automated configuration tasks or run scripts. This could be credential information, shell scripts, or cloud-init directives. This also allows for easy scheduling of password rotations for the Administrator password.

Solution overview

The solution described in this post uses a combination of AWS CloudFormation, AWS Secrets Manager, and Amazon EC2. The AWS CloudFormation template creates a new secret in AWS Secrets Manager with a random value, and then provisions the Windows instance in EC2 using that secret value in the EC2 user data. The user data sets the secret as the Administrator password for RDP access to the instance. The permissions created by this process also allow the local Administrator password to be rotated, helping you follow security best practices.

Prerequisites

This process assumes you already have an IAM user or role set up in your AWS account that has permissions to create and modify resources in AWS Secrets Manager, AWS Identity and Access Management (IAM), CloudFormation, and EC2. These permissions are necessary to launch the CloudFormation stack from the template located here. You will also want to have a Security Group set up to permit RDP access to the Windows EC2 instance from allowed IP addresses. If you choose to launch from the CLI, make sure your IAM user or role credentials are configured on your CLI. Configuring the CLI with security credentials, a default output format, and a Region is what permits it to interact with AWS APIs.

The following diagram illustrates the process covered in this post.
 

Figure 1: Architectural diagram

Once you have your IAM user or role set up, launching the CloudFormation stack will create resources in the following order.

  1. Create a secret in AWS Secrets Manager that contains a random string value.
  2. Create an IAM role and instance profile for the Windows instance with permissions to access the secret.
  3. Create the instance, referencing the secret’s value in the user data, which will be used to set the Administrator password.

Deploy the solution

Now that you know the steps being performed, I’ll walk you through how to use both the AWS Management Console or the AWS CLI to complete this setup. I’ll go over the Console setup and then follow it with the CLI option.

Launch the template using the AWS Management Console

  1. Log in to the CloudFormation Console and select your Region. For my examples, I use the EU-WEST-1 Region, Ireland.
  2. Select Create stack and under Choose a template, select Upload a template to Amazon S3, and then, from your local machine, select the template you downloaded above.
  3. Next, select a unique Stack name, supply the AMI of the EC2 Windows image you want to use, and then select Next. Keep in mind these are unique per Region. For my Stack, I have chosen the name SecretsManager-Windows-Blog and the EU-WEST-1 Windows AMI ami-01776b82784323238.
     
    Figure 2: Select a unique "Stack name" and supply the AMI of the EC2 Windows image you want to use

    Figure 2: Select a unique “Stack name” and supply the AMI of the EC2 Windows image you want to use

  4. You now have the option to add some tags to your stack. I chose to tag it with the key/value pair Name/SecretsManager-Windows-Blog. On this page, you can also choose an IAM role already created for the CloudFormation Stack to run as, or leave it empty.
     
    Figure 3: Add tags to your stack

    Note: Should you choose not to select an IAM role, CloudFormation will require you to accept that it might create IAM resources. In this case, it will create an IAM role named in the following format: StackName-InstanceRole-RandomString, where StackName is the name you chose for the CloudFormation stack, InstanceRole is the IAM role selected or created to launch the EC2 Instance with (this IAM role is what gives the EC2 instance permission to access AWS APIs), and RandomString is a random alphanumeric string to make the IAM role name unique.

  5. On the Review page, verify your stack information is correct, and then hit Create. CloudFormation will launch your EC2 Windows instance, create your Secret in Secrets Manager, and use the Secret value to set your Administrator password.

Launch the template using the AWS CLI

Make sure to replace the values in <red, italic font> in the examples with values from your own account. You will need to download the template referenced above and upload it to your own S3 Bucket. The S3 URL of the template will be necessary for the following steps.

Run this command:


$ aws cloudformation create-stack --stack-name <SecretsManager-Windows-Blog> --template-url <S3_URL> --parameters ParameterKey=AMI,ParameterValue=<ami-01776b82784323238> --tags Key=Name,Value=<SecretsManager-Windows-Blog> --capabilities CAPABILITY_NAMED_IAM --region <eu-west-1>

If the command ran successfully, you’ll see output similar to this:


{
    "StackId": "arn:aws:cloudformation:<eu-west-1:111122223333>:stack/<SecretsManager-Windows-Blog>/<Example_Additional_ID_0123456789>"
}

Review of the resources your stack creates

Now that your Stack is beginning to create resources, I’ll go over each resource creation event in more detail. The first resource created is the secret stored inside AWS Secrets Manager. The secret is created with the name formatting LocalAdminPassword-RandomString, where RandomString is unique to the secret and the EC2 Windows instance. The key/value pairs of this secret are Username/Administrator and Password/RandomString, where RandomString is unique to the secret and the EC2 Windows instance.

Once the secret is created, the stack creates the IAM role and EC2 instance profile. These are required for the EC2 Windows instance to communicate with AWS Secrets Manager and retrieve the stored password. The trust policy of the role lists ec2.amazonaws.com as the principal entity, meaning the EC2 instance can assume this IAM role. The permissions policy is attached inline:

  • An inline policy noted in the template. This policy gives the necessary permissions to retrieve the password from the secret created in AWS Secrets Manager. The ARN of the secret created earlier by the CloudFormation template is used as the value for the inline policy’s Resource attribute, by way of a reference (Ref) in the CloudFormation template. This way, the instance can only access the value of its own specific secret.

The last bit for the stack to create is the actual EC2 Windows instance. In my examples, I chose to use the template in its original state. This launches a t2.large instance type. Should you want a different instance type, edit the portion of the template named “InstanceType”: “t2.large” to have the instance type you want to launch. The most important part of the template is the UserData section because this is what retrieves the secret value and sets it as the Administrator password on the instance. For reference, here’s the code:


"UserData": {
    "Fn::Base64": {
        "Fn::Join": [
            "\n",
            [
                "",
                "Import-Module AWSPowerShell",
                {
                    "Fn::Join": [
                        "",
                        [
                            "$password = ((Get-SECSecretValue -SecretId '",
                            {
                                "Ref": "LocalAdminPassword"
                            },
                            "').SecretString | ConvertFrom-Json).Password"
                        ]
                    ]
                },
                "net.exe user Administrator $password",
                ""
            ]
        ]
    }
}        

Once the instance has completed the launch process, your stack will move into the CREATE_COMPLETE status. You can check this in the Console by selecting the StackName and then selecting the Resources tab. I prefer to use the Resources tab as it shows the Physical ID of all resources created by the stack. Here’s an example:
 

Figure 4: Check the status on the "Resources" tab

Figure 4: Check the status on the “Resources” tab

To verify that resources are marked with the CREATE_COMPLETE status with the CLI, run this command (don’t forget to replace the <red> placeholders with your stack information):


$ aws cloudformation describe-stacks --stack-name <SecretsManager-Windows-Blog> --region <eu-west-1>

You’ll see “StackStatus”: “CREATE_COMPLETE”, and you’ll have an EC2 Windows instance launched, its password stored in AWS Secrets Manager, and an instance role giving the instance permissions to retrieve its password. You will no longer need to share the SSH Private Key, thus removing another potential security issue.

To verify the secret in the AWS Secrets Manager console is the same one used for your EC2 Windows instance, you can look at the name of the secret itself and the tags listed on the EC2 instance. For example, in the screenshots below, you can see that the secret is named LocalAdminPassword-RandomString. You can then match this to the tag value on your instance with the tag key LocalAdminSecretARN.
 

Figure 5: Verify the secret

 

Figure 6: Match it to the tag value on your instance with the tag key “LocalAdminSecretARN”

You’ve now launched your EC2 Windows instance, generated a random string password, and will no longer require the SSH Private Key to retrieve the Administrator password for login.

Summary

In this post, I showed you a method to set up a custom Administrator password on a Windows EC2 instance using Instance user data. This password is securely encrypted and stored in AWS Secrets Manager, which will also rotate the password for you. By using this method, you won’t have to share SSH Private Keys to retrieve the Administrator passwords.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Secrets Manager forum.

Want more AWS Security news? Follow us on Twitter.

Author

Tracy Pierce

Tracy Pierce is a Senior Cloud Support Engineer at AWS. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority and she shows this by improving processes, documentation, and building tutorials. She has her AS in Computer Security & Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls at request.

How to quickly find and update your access keys, password, and MFA setting using the AWS Management Console

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/how-to-find-update-access-keys-password-mfa-aws-management-console/

You can now more quickly view and update all your security credentials from one place using the “My Security Credentials” page in the AWS Management Console. When you grant your developers programmatic access or AWS Management Console access, they receive credentials, such as a password or access keys, to access AWS resources. For example, creating users in AWS Identity and Access Management (IAM) generates long-term credentials for your developers. Understanding how to use these credentials can be confusing, especially for people who are new to AWS; developers often end up reaching out to their administrators for guidance about using their credentials. Today, we’ve updated the My Security Credentials page to help developers discover, create, or modify security credentials for their IAM users on their own. This includes passwords to access the AWS console, access keys for programmatic AWS access, and multi-factor authentication (MFA) devices. By making it easier to discover and learn about AWS security credentials, developers can get started with AWS more quickly.

If you need to create IAM users, you can use the My Security Credentials page to manage long-term credentials. However, as a best practice, AWS recommends relying on temporary credentials using federation when accessing AWS accounts. Federation enables you to use your existing identity provider to access AWS. You can also use AWS Single Sign-On (SSO) to manage your identities and their access to multiple AWS accounts and business applications centrally. In this post, I review the IAM user experience in the AWS Management Console for retrieving and configuring security credentials.

Access your security credentials

When you interact with AWS, you need security credentials to verify who you are and whether you have permissions to access the resources that you’re requesting. For example, you need a user name and password to sign in to the AWS Management Console, and you need access keys to make programmatic calls to AWS API operations.

To access and manage your security credentials, sign into your AWS console as an IAM user, then navigate to your user name in the upper right section of the navigation bar. From the drop-down menu, select My Security Credentials, as shown in Figure 1.
 

Figure 1: How to find the “My Security Credentials” page

The My Security Credentials page includes all your security credentials. As an IAM user, you should navigate to this central location (Figure 2) to manage all your credentials.
 

Figure 2: The “My security credentials” page

Next, I’ll show you how IAM users can make changes to their AWS console access password, generate access keys, configure MFA devices, and set AWS CodeCommit credentials using the My Security Credentials page.

Change your password for AWS console access

To change your password, navigate to the My Security Credentials page and, under the Password for console access section, select Change password. In this section, you can also see how old your current password is. In the example in Figure 3, my password is 121 days old. This information can help you determine whether you need to change your password. Based on AWS best practices, I need to update mine.
 

Figure 3: Where to find your password’s age

To update your password, select the Change password button.

Based on the permissions assigned to your IAM user, you might not see the password requirements set by your admin. The image below shows the password requirements that my administrator has set for my AWS account. I can see the password requirements since my IAM user has access to view the password policy.
 

Figure 4: How to change your password

Once you select Change password and the password meets all the requirements, your IAM user’s password will update.

Generate access keys for programmatic access

An access key ID and secret access key are required to sign requests that you make using the AWS Command Line, the AWS SDKs, or direct API calls. If you have created an access key previously, you might have forgotten to save the secret key. In such cases, AWS recommends deleting the existing access key and creating a new one. You can create new access keys from the My Security Credentials page.
 

Figure 5: How to create a new access key

To create a new key, select the Create access key button. This generates a new secret access key. This is the only time you can view or download the secret access key. As a security best practice, AWS does not allow retrieval of a secret access key after its initial creation.

Next, select the Download .csv file button (shown in the image below) and save this file in a secure location only accessible to you.
 

Figure 6: Select the “Download .csv file” button

Note: If you already have the maximum of two access keys—active or inactive—you must delete one before creating a new key.

If you have a reason to believe someone has access to your access and secret keys, then you need to delete them immediately and create new ones. To delete your existing key, you can select Delete next to your access key ID, as shown below. You can learn more about the best practices by visiting best practices to manage access keys.
 

Figure 7: How to delete or suspend a key

The Delete access key dialog now shows you the last time your key was used. This information is critical to helping you understand if an existing system is using the access key, and if deleting the key will break something.
 

Figure 8: The “Delete access key” confirmation window

Assign MFA devices

As a best practice, AWS recommends enabling multi-factor authentication (MFA) on all IAM users. MFA adds an extra layer of security because it requires users to provide unique authentication from an AWS-supported MFA mechanism in addition to their sign-in credentials when they access AWS. Now, IAM users can assign or view their current MFA settings through the My Security Credentials page.
 

Figure 9: How to view MFA settings

To learn about MFA support in AWS and about configuring MFA devices for an IAM user, please visit Enabling MFA Devices.

Generate AWS CodeCommit credentials

The My Security Credentials page lets you configure Git credentials for AWS CodeCommit, a version control service for privately storing and managing assets such as documents and source code in the cloud. Additionally, to access CodeCommit repositories without installing the CLI, you can set up an SSH connection by uploading your SSH public key on the My Security Credentials page, as shown below. To learn more about AWS CodeCommit and the different configuration options, visit the AWS CodeCommit User Guide.
 

Figure 10: How to generate CodeCommit credentials

Summary

The My Security Credentials page for IAM users makes it easier to manage and configure security credentials to help developers get up and running in AWS more quickly. To learn more about the security credentials and best practices, read the Identity and Access Management documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum.

Want more AWS Security news? Follow us on Twitter.

The author

Sulay Shah

Sulay is the product manager for the Identity and Access Management service at AWS. He strongly believes in the customer-first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from North Carolina State University.

Updated whitepaper now available: Aligning to the NIST Cybersecurity Framework in the AWS Cloud

Post Syndicated from Min Hyun original https://aws.amazon.com/blogs/security/updated-whitepaper-now-available-aligning-to-the-nist-cybersecurity-framework-in-the-aws-cloud/

I’m proud to announce an updated resource that is designed to provide guidance to help your organization align to the National Institute of Standards and Technology (NIST) Cybersecurity Framework Version 1.1, which was released in 2018. The updated guide, NIST Cybersecurity Framework (CSF): Aligning to the NIST CSF in the AWS Cloud, is designed to help commercial and public sector entities of any size and in any part of the world align with the CSF by leveraging AWS services and resources.

In addition to mapping CSF updates to the latest AWS services and resources, we’ve also renewed our independent third-party assessor’s validation that the AWS services that have undergone FedRAMP Moderate and ISO 9001/27001/27017/27018 accreditations align with the CSF.

If you’re new to the NIST CSF, it’s a voluntary, risk-based, outcome-focused framework. It helps you establish a foundational set of security activities organized around five functions—Identify, Protect, Detect, Respond, Recover—to improve the security, risk management, and resilience of your organization. The CSF was originally intended for the critical infrastructure sector, but it has been endorsed by governments and industries worldwide as a recommended baseline for organizations of all types and sizes. Sectors as diverse as health care, financial services, and manufacturing are using the NIST CSF, and the list of early global adopters includes Japan, Israel, the UK, and Uruguay, among others.

In short, the NIST CSF is broadly applicable. In fact, in February 2018, the International Organization for Standardization released “ISO/IEC 27103:2018 — Information technology — Security techniques,” a standard that provides guidance for implementing a cybersecurity framework leveraging existing standards. ISO 27103 promotes the same concepts and best practices reflected in the NIST CSF; specifically, it encourages a framework focused on security outcomes organized around five functions (Identify, Protect, Detect, Respond, Recover) and foundational activities that map to existing standards, accreditations, and frameworks. Adopting a versatile framework like the NIST CSF can help your organization achieve security outcomes while benefiting from the efficiencies of reusing instead of redoing.

You can use our updated whitepaper and workbook to learn how AWS services and resources can help enable your organization’s alignment to the CSF. If you’d like support in how to implement the CSF in your organization using AWS services and resources, contact an AWS Solutions Architect.

Want more AWS Security news? Follow us on Twitter.

Author

Min Hyun

Min is the Global Lead for Growth Strategies at AWS. Her team’s mission is to set the industry bar in thought leadership for security and data privacy assurance in emerging technology, trends and strategy to advance customers’ journeys to AWS. View her other Security Blog publications here.

AWS awarded PROTECTED certification in Australia

Post Syndicated from Mathew Graham original https://aws.amazon.com/blogs/security/aws-awarded-protected-certification-in-australia/

The Australian Cyber Security Centre (ACSC) has awarded PROTECTED certification to AWS for 42 of our cloud services. This is the highest data security certification available in Australia for cloud service providers, and AWS offers the most PROTECTED services of any public cloud service provider. You will find AWS on the ACSC’s Certified Cloud Services List (CCSL) at PROTECTED for AWS services, including Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), AWS Lambda, AWS Key Management Service (AWS KMS), and Amazon GuardDuty.

We worked with the ACSC to develop a solution that meets Australian government security requirements while also offering a breadth of services so you can run highly sensitive workloads on AWS at scale. These certified AWS services are available within our existing AWS Asia-Pacific (Sydney) Region and cover service categories such as compute, storage, network, database, security, analytics, application integration, management and governance. Importantly, all certified services are available at current public prices, which ensures that you are able to use them without paying a premium for security.

Since March 2018, you’ve been able to assess and self-certify at PROTECTED under the Australian Digital Transformation Agency’s Secure Cloud Strategy, but our inclusion on the CCSL at PROTECTED removes this extra step. With our increased level of certification, you can build applications on AWS that meet the Australian government’s security requirements for highly sensitive workloads.

We have several additional resources to help you begin building at PROTECTED on AWS. The ACSC Consumer Guide and AWS IRAP PROTECTED Reference Architecture are available today on AWS Artifact to help you build applications on AWS. The IRAP Certification Report, ACSC Certification Report and ACSC Certification Letter, also on AWS Artifact, allow you to dive deep into our security approach.

If you have questions about our PROTECTED certification or would like to inquire about how to use AWS for your highly sensitive workloads, contact your account team.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mathew Graham

Mathew is the Head of Security Assurance for Australia and New Zealand at AWS. He is passionate about working with regulators to help our customers adopt the cloud. Outside of AWS, Mathew’s time is completely taken up by his new twin daughters. He holds a Master of Information Security from CSU.

Signing executables with Microsoft SignTool.exe using AWS CloudHSM-backed certificates

Post Syndicated from Patrick Palmer original https://aws.amazon.com/blogs/security/signing-executables-with-microsoft-signtool-exe-using-aws-cloudhsm-backed-certificates/

Code signing is the process of digitally signing executables and scripts to confirm the software author and to demonstrate that the code has not been altered or corrupted since it was signed. Packaged software uses branding and trusted sales outlets to assure users of its integrity, but these guarantees are not available when code is transmitted on the internet. Additionally, the internet itself cannot provide any guarantee about the identity of the software creator.

To solve this issue, many companies turn to Microsoft SignTool, a command-line tool that digitally signs files, verifies signatures in files, or time stamps files. The certificate allows end users to trust that software is signed by the author, so long as the private key that is used to sign is only available to that author. A common problem, however, is that the private key and the certificate used in the signing process are located on the same machine. If an attacker compromises the server and steals both the private key and certificate, they can sign malicious code while posing as the trusted author. To protect against this, some companies move their private keys to offline devices. But this means that the keys need to be brought online for each new signing request, or in batches, prolonging the amount of time it takes to sign. The offline devices also need to be stored and backed up in separate, physically secure locations to prevent tampering. A more efficient solution is to use AWS CloudHSM to provide secure storage and backup for these private keys. In this post, I’ll show you how.

Prerequisites and assumptions

This walkthrough assumes a working knowledge of Amazon EC2, AWS CloudHSM, and Windows Server administration, as well as the basics of certificates and public key infrastructure.

Before you follow this walkthrough, you should first complete the steps in the walkthrough Configure Windows Server as a Certificate Authority (CA) with AWS CloudHSM, and have an example unsigned Windows PowerShell script (.ps1 file). After you’ve completed the setup of your Windows Server CA, you’ll have all the major components ready to start signing your code: the AWS CloudHSM cluster in an Active state, Crypto Users (CU) created on your CloudHSM to manage keys, and the necessary client packages installed on the Windows instance within the same VPC as your AWS CloudHSM.

Important: You will incur charges for the services used in this example. You can find the cost of each service on the corresponding service pricing page. For more information, see AWS CloudHSM Pricing and Amazon EC2 Pricing.

Out of scope

The focus of this blog post is how to use AWS CloudHSM to store the keys that are used by certificates that will sign binaries used by Microsoft SignTool.exe. It is not intended to represent any best practices for implementing code signing or running a Certificate Authority. For more information, see the NIST Cybersecurity Whitepaper Security Considerations for Code Signing.

Architectural Overview

 

Figure 1: Architectural overview

This diagram shows a virtual private cloud (VPC) containing an Amazon EC2 instance running Windows Server 2012 R2 that resides on a public subnet. This instance will run both the CloudHSM client software and the Windows Server CA. The instance can be accessed via the Internet Gateway. It will also have security groups that enable RDP access for your IP. The private subnet hosts the Elastic Network Interface (ENI) for the CloudHSM cluster, which has a single HSM.

Step 1: Install SignTool.exe as part of the Microsoft Windows SDK

Download and install the Microsoft Windows Software Development Kit (SDK). Install the latest Windows SDK package that applies to your operating system; for example, for Microsoft Windows 2012 R2 or later versions, install the Microsoft Windows SDK 10.

SignTool.exe is part of the Windows SDK Signing Tools for Desktop Apps installation feature. You can omit the other features during installation if you don’t need them. The default installation location is:

C:\Program Files (x86)\Windows Kits\<SDK version>\bin\<version number>\<CPU architecture>\signtool.exe
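
If you're not sure which SDK versions are installed on the instance, a quick PowerShell search of the Windows Kits folder can locate every copy of SignTool.exe. This is just a convenience sketch that assumes the default installation path shown above:

    # Find every installed copy of signtool.exe under the default Windows Kits folder
    Get-ChildItem -Path "C:\Program Files (x86)\Windows Kits" -Filter signtool.exe -Recurse -ErrorAction SilentlyContinue |
        Select-Object -ExpandProperty FullName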

Step 2: Create a signing certificate using the KSP integration

Now that you’ve installed the software required to sign your files, you can start creating a key pair in AWS CloudHSM, along with the corresponding certificate. You can do this with the Certreq application that’s included with Windows Server. The end result from Certreq is a Certificate Signing Request (CSR) that you can submit to a CA. In this example, you’ll submit it to the Microsoft Windows CA you created in the prerequisite section. Certreq supports the Key Storage Provider (KSP) standard, which allows you to specify the name of the KSP created by Cavium specifically for AWS CloudHSM. This is included and installed as part of the AWS CloudHSM client installation.

  1. Create a file named request.inf that contains the lines below. Note that the Subject line may wrap onto the following line. It begins with Subject and ends with Washington and a closing quotation mark (“). Replace the Subject information with your own company information. See Microsoft’s Documentation for an explanation of the sections, keys, and values.
    
            [Version]
            Signature= "$Windows NT$"
            [NewRequest]
            Subject = "C=US,CN=www.example.com,O=Information Technology,OU=Certificate Management,L=Seattle,S=Washington"
            RequestType=PKCS10
            HashAlgorithm = SHA256
            KeyAlgorithm = RSA
            KeyLength = 2048
            ProviderName = Cavium Key Storage Provider
            KeyUsage = "CERT_DIGITAL_SIGNATURE_KEY_USAGE"
            MachineKeySet = True
            Exportable = False              
     

  2. Create a certificate request with certreq.exe:

    certreq.exe -new request.inf request.csr

    Certreq will return a message saying that the certificate request has been created. Internally, a new key pair has also been generated on your HSM instance, and that private key has been used on the HSM instance to generate the CSR. The following screenshot shows the output from certreq.exe, confirming that the CSR has been created.
     

    Figure 2: The output of certreq.exe

  3. Submit the CSR you just created to the Windows Server CA:
    1. Open the Certification Authority tool using the command certsrv.msc.
    2. After the Certification Authority tool opens, right-click the CA server name, choose All Tasks, then choose Submit a new request, as shown in the following screenshot.
       
      Figure 3: Select "Submit a new request"

    3. Submit the new request using the CSR you just created by navigating to its saved location and selecting Open.
  4. You can now issue the certificate from your CA. Navigate to the Pending Requests view, right-click on the certificate you just submitted, and under All Tasks, select Issue. This will move your certificate to Issued Certificates.
     
    Figure 4: Select "Issue" to move your certificate to "Issued Certificates"

  5. Now you can export the issued certificate from your CA to a file, so that the certificate can be imported into your local computer store for use by applications.
    1. Navigate to the Issued Certificates view, and right-click on your newly issued certificate.
    2. Select Open to view the certificate, then select the Details tab.
    3. Choose Copy to File to start the Certificate Export Wizard, and copy as a DER encoded binary X.509 file to a location you choose. In my example, I’ve saved mine in the same location as my other files on the Desktop with the file name “signedCertificate.cer.” You should store your certificates in a secure and redundant storage location.
       
      Figure 5: Store your certificates in a secure and redundant storage location

  6. After you've copied the certificate file to the instance where you'll sign your code, run the command certreq.exe -accept signedCertificate.cer, as shown in the following screenshot. This moves the certificate from the file into the Personal Certificate Store in Windows so that it can be used by applications. You can verify it exists by running certlm.msc and viewing the Personal Certificates (a quick verification sketch follows these steps).
     
    Figure 6: Run certlm.msc to view the Personal Certificates
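
Before moving on, you can optionally confirm that the Cavium KSP referenced in request.inf is registered on the instance and that the issued certificate landed in the machine store. The following commands, run from an elevated PowerShell session, are one way to check; the exact output will vary by environment:

    # List the installed cryptographic providers; "Cavium Key Storage Provider" should appear when the CloudHSM client is installed
    certutil -csplist

    # Confirm the issued certificate now exists in the local machine Personal store
    Get-ChildItem -Path Cert:\LocalMachine\My | Format-List Subject, Thumbprint, HasPrivateKey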

Step 3: Using the imported certificate with Microsoft SignTool.exe

You should have already installed SignTool.exe as part of the Windows SDK, and you should have an example .ps1 file that’s unsigned.

To use the certificate you created and stored with SignTool.exe, you’ll need the SHA-1 hash of the certificate. This is used as an input parameter and ensures that Windows doesn’t automatically use a certificate that isn’t backed by AWS CloudHSM. While you could use certutil.exe or the Certificate Manager’s graphical user interface to get the SHA-1 hash of the certificate, PowerShell provides a clean interface for obtaining this information.

  1. Open PowerShell as an administrator, then run the command Get-ChildItem -path cert:\LocalMachine\My, as shown in the following screenshot. This will display the thumbprints without spaces, unlike other available methods. Copy the thumbprint associated with your imported certificate.
  2.  

    Figure 7: Run the command Get-ChildItem -path cert:\LocalMachine\My

  3. Navigate to the directory within PowerShell that contains SignTool.exe. The default location is under C:\Program Files (x86)\Windows Kits\10\bin\10.0.17763.0\x64
  4. Finally, sign your binary with your newly generated certificate by using the following command:

    signtool.exe sign /v /fd sha256 /sha1 0BECF08706C86997B5ED5AD0BB896BD0271A26ED /sm /as C:\Users\Administrator\Desktop\exec.ps1
     

    Figure 8: Sign your binary with your newly generated certificate

  5. Optionally, to verify the signature on the file, you can use SignTool.exe with the verify option by using the following command:

    signtool.exe verify /v /pa C:\Users\Administrator\Desktop\exec.ps1
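
Putting these steps together, here's a minimal PowerShell sketch of the whole signing flow. It assumes the default SDK path shown earlier, a certificate whose subject contains www.example.com (the example subject from request.inf), and an unsigned script at C:\Users\Administrator\Desktop\exec.ps1; adjust these values for your environment:

    # Look up the CloudHSM-backed certificate by subject and capture its SHA-1 thumbprint
    $cert = Get-ChildItem -Path Cert:\LocalMachine\My |
        Where-Object { $_.Subject -like "*www.example.com*" } |
        Select-Object -First 1

    # Sign the script with SignTool, referencing the certificate by thumbprint in the machine store
    $signtool = "C:\Program Files (x86)\Windows Kits\10\bin\10.0.17763.0\x64\signtool.exe"
    & $signtool sign /v /fd sha256 /sha1 $cert.Thumbprint /sm /as "C:\Users\Administrator\Desktop\exec.ps1"

    # Verify the signature with SignTool, and cross-check with PowerShell's own cmdlet
    & $signtool verify /v /pa "C:\Users\Administrator\Desktop\exec.ps1"
    Get-AuthenticodeSignature -FilePath "C:\Users\Administrator\Desktop\exec.ps1"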

Conclusion

For code signing jobs where the integrity of your signature is important to your business, AWS CloudHSM supports the Microsoft CNG/KSP standard that enables you to store the private key of a digital signature certificate within an HSM. Since the private key no longer has to reside on the server, it’s no longer at risk if the server itself is compromised.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Patrick Palmer

Patrick is a Cloud Support Engineer at AWS. He has a passion for learning new technologies across the breadth of AWS services and using this experience to guide fellow engineers and customers. He leads a team of Security Engineers at AWS who continuously delight customers when they need it most. Outside of work, he spends the majority of his time with friends and playing video games.

Alerting, monitoring, and reporting for PCI-DSS awareness with Amazon Elasticsearch Service and AWS Lambda

Post Syndicated from Michael Coyne original https://aws.amazon.com/blogs/security/alerting-monitoring-and-reporting-for-pci-dss-awareness-with-amazon-elasticsearch-service-and-aws-lambda/

Logging account activity within your AWS infrastructure is paramount to your security posture and could even be required by compliance standards such as PCI-DSS (Payment Card Industry Data Security Standard). Organizations often analyze these logs to adapt to changes and respond quickly to security events. For example, if users are reporting that their resources are unable to communicate with the public internet, it would be beneficial to know if a network access control list had been changed just prior to the incident. Many of our customers ship AWS CloudTrail event logs to an Amazon Elasticsearch Service cluster for this type of analysis. However, security best practices and compliance standards could require additional considerations. Common concerns include how to analyze log data without the data leaving the security constraints of your private VPC.

In this post, I’ll show you not only how to store your logs, but how to put them to work to help you meet your compliance goals. This implementation deploys an Amazon Elasticsearch Service domain with Amazon Virtual Private Cloud (Amazon VPC) support by utilizing VPC endpoints. A VPC endpoint enables you to privately connect your VPC to Amazon Elasticsearch without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. An AWS Lambda function is used to ship AWS CloudTrail event logs to the Elasticsearch cluster. A separate AWS Lambda function performs scheduled queries on log sets to look for patterns of concern. Amazon Simple Notification Service (SNS) generates automated reports based on a sample set of PCI guidelines discussed further in this post and notifies stakeholders when specific events occur. Kibana serves as the command center, providing visualizations of CloudTrail events that need to be logged based on the provided sample set of PCI-DSS compliance guidelines. The automated report and dashboard that are constructed around the sample PCI-DSS guidelines assist in event awareness regarding your security posture and should not be viewed as a de facto means of achieving certification. This solution serves as an additional tool to provide visibility into the actions and events within your environment. Deployment is made simple with a provided AWS CloudFormation template.
 

Figure 1: Architectural diagram

The figure above depicts the architecture discussed in this post. An Elasticsearch cluster with VPC support is deployed within an AWS Region and Availability Zone. This creates a VPC endpoint in a private subnet within a VPC. Kibana is an Elasticsearch plugin that resides within the Elasticsearch cluster; it is accessed through an endpoint provided in the output section of the CloudFormation template. CloudTrail is enabled in the VPC and ships CloudTrail events to both an S3 bucket and a CloudWatch Log Group. The CloudWatch Log Group triggers a custom Lambda function that ships the CloudTrail event logs to the Elasticsearch domain through the VPC endpoint. An additional Lambda function performs a periodic set of Elasticsearch queries and produces a report that is sent to an SNS topic. A Windows-based EC2 instance is deployed in a public subnet so users have the ability to view and interact with a Kibana dashboard. Access to the EC2 instance can be restricted to an allowed CIDR range through a parameter set in the CloudFormation deployment. Access to the Elasticsearch cluster and Kibana is restricted to a security group that is created and associated with the EC2 instance and custom Lambda functions.

Sample PCI-DSS Guidelines

This solution provides a sample set of 10 PCI-DSS guidelines for events that need to be logged.

  • All Commands, API action taken by AWS root user
  • All failed logins at the AWS platform level
  • Action related to RDS (configuration changes)
  • Action related to enabling/disabling/changing of CloudTrail, CloudWatch logs
  • All access to S3 bucket that stores the AWS logs
  • Action related to VPCs (creation, deletion and changes)
  • Action related to changes to SGs/NACLs (creation, deletion and changes)
  • Action related to IAM users, roles, and groups (creation, deletion and changes)
  • Action related to route tables (creation, deletion and changes)
  • Action related to subnets (creation, deletion and changes)

Solution overview

In this walkthrough, you’ll create an Elasticsearch cluster within an Amazon VPC environment. You’ll ship AWS CloudTrail logs to both an Amazon S3 bucket (to maintain an immutable copy of the logs) and to a custom AWS Lambda function that will stream the logs to the Elasticsearch cluster. You’ll also create an additional Lambda function that will run once a day, build a report of the number of CloudTrail events that occurred based on the example set of 10 PCI-DSS guidelines, and then notify stakeholders via SNS.

To make it easier to get started, I’ve included an AWS CloudFormation template that will automatically deploy the solution. The CloudFormation template along with additional files can be downloaded from this link. You’ll need the following resources to set it up:

  • An S3 bucket to upload and store the sample AWS Lambda code and sample Kibana dashboards. This bucket name will be requested during the CloudFormation template deployment.
  • An Amazon Virtual Private Cloud (Amazon VPC).

If you’re unfamiliar with how CloudFormation templates work, you can find more info in the CloudFormation Getting Started guide.

AWS CloudFormation deployment

The following parameters are available in this template.

  • Elasticsearch Domain Name: The name of the Amazon Elasticsearch Service domain.
  • Elasticsearch Version (default: 6.2): The version of Elasticsearch to deploy.
  • Elasticsearch Instance Count (default: 3): The number of data nodes to deploy into the Elasticsearch cluster.
  • Elasticsearch Instance Class: The instance class to deploy for the Elasticsearch data nodes.
  • Elasticsearch Instance Volume Size (default: 10): The size of the volume for each Elasticsearch data node, in GB.
  • VPC to launch into: The VPC to launch the Amazon Elasticsearch Service cluster into.
  • Availability Zone to launch into: The Availability Zone to launch the Amazon Elasticsearch Service cluster into.
  • Private Subnet ID: The subnet to launch the Amazon Elasticsearch Service cluster into.
  • Elasticsearch Security Group: A new security group that is created and associated with the Amazon Elasticsearch Service cluster.
  • Security Group Description: A description for the security group created above.
  • Windows EC2 Instance Class (default: m5.large): The Windows instance used for interaction with Kibana.
  • EC2 Key Pair: The EC2 key pair to associate with the Windows EC2 instance.
  • Public Subnet: The public subnet to associate with the Windows EC2 instance for access.
  • Remote Access Allowed CIDR (default: 0.0.0.0/0): The CIDR range to allow remote access (port 3389) to the EC2 instance.
  • S3 Bucket Name—Lambda Functions: The S3 bucket that contains the custom AWS Lambda functions.
  • Private Subnet: The private subnet to associate with the AWS Lambda functions that are deployed within a VPC.
  • CloudWatch Log Group Name: Creates a CloudWatch Log Group for the AWS CloudTrail event logs.
  • S3 Bucket Name—CloudTrail logging: Creates a new Amazon S3 bucket for logging CloudTrail events. The name must be a globally unique value.
  • Date range to perform queries (default: now-1d): Examples: now-1d, now-7d, now-90d.
  • Lambda Subnet CIDR: The subnet CIDR in which to deploy the AWS Lambda Elasticsearch query function.
  • Availability Zone—Lambda: The Availability Zone to associate with the preceding AWS Lambda subnet.
  • Email Address (default: [email protected]): The email address for reporting to notify stakeholders via SNS. You must accept the subscription by selecting the link sent to this address before alerts will arrive.

It takes 30-45 minutes for this stack to be created. When it’s complete, the CloudFormation console will display the following resource values in the Outputs tab. These values can be referenced at any time and will be needed in the following sections.

  • oElasticsearchDomainEndpoint: Elasticsearch domain endpoint hostname
  • oKibanaEndpoint: Kibana endpoint hostname
  • oEC2Instance: Windows EC2 instance name used for Kibana access
  • oSNSSubscriber: SNS subscriber email address
  • oElasticsearchDomainArn: ARN of the Elasticsearch domain
  • oEC2InstancePublicIp: Public IP address of the Windows EC2 instance
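
If you'd rather capture these outputs programmatically than copy them from the console, a short AWS Tools for PowerShell sketch like the following can pull them for you. The stack name and region are placeholders; use the values from your own deployment:

    # Fetch the stack and print each output key/value pair
    $stack = Get-CFNStack -StackName "pci-dss-awareness" -Region "us-east-1"
    $stack.Outputs | ForEach-Object { "{0} = {1}" -f $_.OutputKey, $_.OutputValue }

    # Grab a single value, such as the Kibana endpoint, for use in later steps
    $kibana = ($stack.Outputs | Where-Object { $_.OutputKey -like "*Kibana*" }).OutputValue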

Managing and testing the solution

Now that you’ve set up the environment, it’s time to configure the Kibana dashboard.

Kibana configuration

From the AWS CloudFormation output, gather information related to the Windows-based EC2 instance. Once you have retrieved that information, move on to the next steps.

Initial configuration and index pattern

  1. Log into the Windows EC2 instance via Remote Desktop Protocol (RDP) from a resource that is within the allowed CIDR range for remote access to the instance.
  2. Open a browser window and navigate to the Kibana endpoint hostname URL from the output of the AWS CloudFormation stack. Access to the Elasticsearch cluster and Kibana is restricted to the security group that is associated with the EC2 instance and custom Lambda functions during deployment.
  3. In the Kibana dashboard, select Management from the left panel and choose the link for Index Patterns.
  4. Add one index pattern containing the following: cwl-*
     
    Figure 2: Define the index pattern

  5. Select Next Step.
  6. Select the Time Filter Field named @timestamp.
     
    Figure 3: Select "@timestamp"

  7. Select Create index pattern.

At this point we’ve launched our environment and have accessed the Kibana console. Within the Kibana console, we’ve configured the index pattern for the CloudWatch logs that will contain the CloudTrail events. Next, we’ll configure visualizations and a dashboard.

Importing sample PCI DSS queries and Kibana dashboard

  1. Copy export.json from the location where you extracted the downloaded zip file to the EC2 Kibana bastion host.
  2. Select Management on the left panel and choose the link for Saved Objects.
  3. Select Import in upper right corner and navigate to export.json.
  4. Select Yes, overwrite all saved objects, then select Index Pattern cwl-* and confirm all changes.
  5. Once the import completes, select PCI DSS Dashboard to see the sample dashboard and queries.

Note: You might encounter an error during the import that looks like this:
 

Figure 4: Error message

This simply means that your streamed logs don’t contain any login-type events for the time period since your deployment. To correct this, you can index a placeholder document that contains the missing field.

  1. From the left panel, select Dev Tools and copy the following JSON into the left panel of the console:
    
            POST /cwl-/default/
            {
                "userIdentity": {
                    "userName": "test"
                }
            }              
     

  2. Select the green Play triangle to execute the POST of a document with the missing field.
     
    Figure 5: Select the "Play" button

  3. Now reimport the dashboard using the steps in Importing Sample PCI DSS Queries and Kibana Dashboard. You should be able to complete the import with no errors.

At this point, you should have CloudTrail events that have been streamed to the Elasticsearch cluster, with a configured Kibana dashboard that looks similar to the following graphic:
 

Figure 6: A configured Kibana dashboard

Automated Reports

A custom AWS Lambda function was created during the deployment of the AWS CloudFormation stack. This function uses the sample PCI-DSS guidelines from the Kibana dashboard to build a daily report. The Lambda function is triggered every 24 hours and performs a series of time-based Elasticsearch queries of now-1d (the last 24 hours) on the sample guidelines. The results are compiled into a message that is forwarded to Amazon Simple Notification Service (SNS), which sends a report to stakeholders based on the email address you provided in the CloudFormation deployment.

The Lambda function will be named <CloudFormation Stack Name>-ES-Query-LambdaFunction. Environment variables on the function let you adjust settings such as the query time window, and you can extend the code with additional Elasticsearch queries. The sample report below allows you to monitor any events against the sample PCI-DSS guidelines. These reports can then be further analyzed in the Kibana dashboard.
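
The query logic itself lives in the Lambda function's code, but the general pattern is easy to sketch in PowerShell: run a time-bounded Elasticsearch query against the VPC endpoint for one of the sample guidelines, then publish a summary to the SNS topic. The endpoint, field names, event names, and topic ARN below are illustrative placeholders rather than values taken from the packaged solution, and the query must be run from a host inside the VPC (such as the Kibana EC2 instance):

    # Count CloudTrail events from the last day that altered logging (illustrative guideline query)
    $endpoint = "https://vpc-example-domain.us-east-1.es.amazonaws.com"   # placeholder VPC endpoint
    $query = @{
        query = @{
            bool = @{
                # Field and event names depend on your index mapping and the APIs you track
                must   = @(@{ terms = @{ "eventName.keyword" = @("StopLogging", "DeleteTrail", "DeleteLogGroup") } })
                filter = @(@{ range = @{ "@timestamp" = @{ gte = "now-1d" } } })
            }
        }
    } | ConvertTo-Json -Depth 10

    $result = Invoke-RestMethod -Method Post -Uri "$endpoint/cwl-*/_count" -Body $query -ContentType "application/json"

    # Publish a one-line summary to the SNS topic that emails stakeholders (topic ARN is a placeholder)
    Publish-SNSMessage -TopicArn "arn:aws:sns:us-east-1:111122223333:pci-dss-report" `
        -Subject "Logging Compliance Report" `
        -Message ("Alteration of log sources in the last day: {0}" -f $result.count)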


    Logging Compliance Report - Wednesday, 11. July 2018 01:06PM
    Violations for time period: 'now-1d'
    
    All Failed login attempts
    - No Alerts Found
    All Commands, API action taken by AWS root user
    - No Alerts Found
    Action related to RDS (configuration changes)
    - No Alerts Found
    Action related to enabling/disabling/changing of CloudTrail CloudWatch logs
    - 3 API calls indicating alteration of log sources detected
    All access to S3 bucket that stores the AWS logs
    - No Alerts Found
    Action related to VPCs (creation, deletion and changes)
    - No Alerts Found
    Action related to changes to SGs/NACLs (creation, deletion and changes)
    - No Alerts Found
    Action related to changes to IAM roles, users, and groups (creation, deletion and changes)
    - 2 API calls indicating creation, alteration or deletion of IAM roles, users, and groups
    Action related to changes to Route Tables (creation, deletion and changes)
    - No Alerts Found
    Action related to changes to Subnets (creation, deletion and changes)
    - No Alerts Found         

Summary

At this point, you have now created a private Elasticsearch cluster with Kibana dashboards that monitors AWS CloudTrail events on a sample set of PCI-DSS guidelines and uses Amazon SNS to send a daily report providing awareness into your environment—all isolated securely within a VPC. In addition to CloudTrail events streaming to the Elasticsearch cluster, events are also shipped to an Amazon S3 bucket to maintain an immutable source of your log files. The provided Lambda functions can be further modified to add additional or more complex search queries and to create more customized reports for your organization. With minimal effort, you could begin sending additional log data from your instances or containers to gain even more insight as to the security state of your environment. The more data you retain, the more visibility you have into your resources and the closer you are to achieving Compliance-on-Demand.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Michael Coyne

Michael is a consultant for AWS Professional Services. He enjoys the fast-paced environment of ever-changing technology and assisting customers in solving complex issues. Away from AWS, Michael can typically be found with a guitar and spending time with his wife and two young kiddos. He holds a BS in Computer Science from WGU.

How to automate SAML federation to multiple AWS accounts from Microsoft Azure Active Directory

Post Syndicated from Sepehr Samiei original https://aws.amazon.com/blogs/security/how-to-automate-saml-federation-to-multiple-aws-accounts-from-microsoft-azure-active-directory/

You can use federation to centrally manage access to multiple AWS accounts using credentials from your corporate directory. Federation is the practice of establishing trust between a system acting as an identity provider and other systems, often called service providers, that accept authentication tokens from that identity provider. Amazon Web Services (AWS) supports open federation standards, including Security Assertion Markup Language (SAML) 2.0, to make it easier for the systems and service providers to interact. Here, I’m going to explain how to automate federation between AWS Identity and Access Management (IAM) in multiple AWS accounts and Microsoft Azure Active Directory (Azure AD). I’ll be following the same general patterns that allow SAML federation to AWS from any other identity provider that supports SAML 2.0, but I’m also adding some automation that is specific to Azure AD. I’ll show you how to perform the initial configuration, and then how to automatically keep Azure AD in sync with your AWS IAM roles.

AWS supports any SAML 2.0-compliant identity provider. If you’re interested in configuring federated access using an identity provider other than Azure AD, see the AWS Identity and Access Management documentation on SAML 2.0-based federation.

In this post, I’m going to focus on the nuances of using Azure AD as a SAML identity provider for AWS. The approach covered here gives you a solution that makes this option easier and adheres to AWS best practices. The primary objectives of this step-by-step walkthrough, along with the accompanying packaged solution, are:

  • Support any number of AWS accounts and roles, making it easier to scale.
  • Keep configuration of both sides updated automatically.
  • Use AWS short-term credentials so you don’t have to store your credentials with your application. This enhances your security posture because these credentials are dynamically generated, securely delivered, naturally expire after their limited lifetime, and are automatically rotated for you.

Solution overview

I’ll discuss:

  • How to configure Microsoft Azure Active Directory and show the steps needed to prepare it for federation with AWS.
  • How to configure AWS IAM Identity Providers and Roles, and explain the steps you need to carry out in your AWS accounts.
  • How to automatically import your AWS configuration into the Azure AD SSO app for AWS.

The following diagram shows the high-level flow of SAML authentication and how your users will be federated into the AWS Management console:
 

Figure 1: SAML federation between Azure AD and AWS

Key to the interactions in the diagram

  1. User opens a browser and navigates to Azure AD MyApps access panel (myapps.microsoft.com).
  2. If the user isn’t authenticated, she’ll be redirected to the login endpoint for authentication.
  3. User enters her credentials and the login endpoint will verify them against Azure AD tenant.
  4. Upon successful login, user will be redirected back to the access panel.
  5. User will see the list of available applications, including the AWS Console app, and will select the AWS Console app icon.
  6. The access panel redirects the user to the federated application endpoint, passing the application ID of the AWS SSO app.
  7. The AWS SSO application queries Azure AD and generates a SAML assertion, including all the AWS IAM roles assigned to the user.
  8. SAML assertion is sent back to the user.
  9. User is redirected to AWS federation endpoint, presenting the SAML assertion. The AWS federation endpoint verifies the SAML assertion. The user will choose which of their authorized roles they currently want to operate in. Note: If there’s only one role included, the selection is automatic.
  10. The AWS federation endpoint invokes the AssumeRoleWithSAML API of AWS Security Token Service (STS) and exchanges the SAML token for temporary AWS IAM credentials (see the sketch after this list).
  11. Temporary IAM credentials are used to formulate a specific AWS Console URL that’s passed back to the client browser.
  12. User is redirected to AWS Management Console with permissions of the assumed role.
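
Step 10 is the same API you would call yourself if you wanted to exchange a captured SAML assertion for temporary credentials to use with the CLI or SDKs instead of the console. A minimal AWS Tools for PowerShell sketch looks like this; the provider and role ARNs are placeholders, and $samlAssertion is assumed to hold the base64-encoded assertion:

    # Exchange a SAML assertion for temporary credentials (AssumeRoleWithSAML)
    $creds = Use-STSRoleWithSAML `
        -PrincipalArn "arn:aws:iam::111122223333:saml-provider/AzureAD" `
        -RoleArn "arn:aws:iam::111122223333:role/Administrator" `
        -SAMLAssertion $samlAssertion

    # The returned keys are short-lived and expire automatically
    $creds.Credentials.AccessKeyId
    $creds.Credentials.Expiration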

Automated solution components and flow

At the core of this automated solution, there’s a Docker container that runs inside an AWS ECS Fargate task. The container includes a number of PowerShell scripts that iterate through your IAM Roles, find roles that are associated with the Identity Provider of Azure AD, and update the Azure AD SSO app manifest with the necessary values.

The Fargate task is invoked through an AWS Lambda function that’s scheduled through a CloudWatch Rule to run with the frequency you specify during setup.

All of these components require a number of parameters to run correctly, and you provide these parameters through the setup.ps1 script. The setup.ps1 script is run once and acquires all required parameters from you. It then stores these parameters with encryption inside the SSM Parameter Store. Azure credentials are stored in AWS Secrets Manager. This means you could even go a step further and use Secrets Manager lifecycle management capabilities to automatically rotate your Azure credentials. For encryption of Azure credentials, the template creates a new KMS key, exclusive to this application. If you prefer to use an existing key or a Customer Managed Key (CMK), you can modify the CloudFormation template, or simply pass your own key name to the setup.ps1 script.
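
If you later need to inspect what the setup script stored, a minimal AWS Tools for PowerShell sketch such as the following can read the values back. The parameter and secret names here are assumptions based on an appName of aws-iam-aad; check the Parameter Store and Secrets Manager consoles for the exact names used by your deployment:

    # Read an encrypted parameter written by setup.ps1 (names are prefixed with the appName/stack name)
    (Get-SSMParameter -Name "aws-iam-aad.azureADTenantName" -WithDecryption $true).Value

    # Retrieve the Azure AD credentials stored in AWS Secrets Manager
    (Get-SECSecretValue -SecretId "aws-iam-aad").SecretString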

The following diagram shows all components of the solution:
 

Figure 2: Solution architecture

  1. You’ll want any ongoing changes in AWS IAM roles to be replicated into Azure AD. Therefore, you need to have the update task run periodically. A CloudWatch Rule triggers an event and an AWS Lambda Function starts running as a result of this event.
  2. The Lambda Function runs an ECS Fargate Task.
  3. The ECS Task is associated with a Task Role with permission to fetch parameters from Systems Manager (SSM) Parameter Store and Secrets Manager. The task will request parameters from SSM PS, and SSM PS decrypts parameter values using the associated key in AWS Key Management Service (KMS). Azure credentials are securely stored in AWS Secrets Manager.
  4. Fargate Task queries AWS Organizations and gets a list of child accounts. It then constructs cross-account role ARNs. The ECS Task then assumes those cross-account roles and iterates through all IAM roles in each account to find those associated with your IdP for Azure AD (a simplified sketch of this loop appears after this list).
  5. The ECS Task connects to the Azure AD SSO application and retrieves the existing manifest. Notice that, although you manually retrieved the manifest file during setup, it still needs to be fetched again every time to make sure it’s the latest version. The one you manually downloaded is used to retrieve parameters needed for setup, such as the application identifier or entity ID.
  6. ECS Task stores the existing manifest as a backup in a highly-durable S3 bucket. In case anything goes wrong, the last working state of the application manifest is always available in the S3 bucket. These files are stored with the exact time of their retrieval as their file name. You can find the correct version based on the point in time it was retrieved.
  7. The ECS Task generates a new manifest based on your AWS account/roles as inspected in the preceding steps. It uses the Azure AD credentials retrieved from AWS Secrets Manager and uses them to update the Azure AD SSO app with the new manifest. It also creates any required Azure AD Groups according to the specified custom naming convention. This makes it easier for the Azure AD administrator to map Azure AD users to AWS roles and entitle them to assume those roles.
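
Step 4 is the part most worth understanding if you plan to customize the container. It enumerates the accounts in your organization, assumes the read-only cross-account role in each one, and collects the IAM roles that trust your Azure AD identity provider. A simplified PowerShell sketch of that loop follows; the cross-account role name and the IdP name AzureAD are assumptions, so substitute the names used in your own templates:

    # Enumerate all accounts in the organization (run with credentials from the master account)
    $accounts = Get-ORGAccountList

    foreach ($account in $accounts) {
        # Assume the read-only cross-account role created by cross-account-roles-cfn.json (role name is a placeholder)
        $assumed = Use-STSRole -RoleArn "arn:aws:iam::$($account.Id):role/AzureADSyncReadOnlyRole" `
                               -RoleSessionName "azure-ad-sync"

        # List IAM roles in the child account and keep those whose trust policy references the Azure AD SAML provider
        Get-IAMRoleList -Credential $assumed.Credentials |
            Where-Object {
                # The trust policy document is returned URL-encoded, so decode it before matching
                [System.Uri]::UnescapeDataString($_.AssumeRolePolicyDocument) -match "saml-provider/AzureAD"
            } |
            ForEach-Object { "{0} {1}" -f $account.Id, $_.RoleName }
    }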

Prerequisites

To start, download a copy of the sample code package.

You must have AWS Organizations enabled on all of your accounts to take advantage of this solution’s automation. Using AWS Organizations, you can configure one of your accounts as the root account, and all other accounts will join your organization as child accounts. The root account will be trusted by all child accounts, so you can manage your child account resources from your root account. This trust is enabled using a role in each of your child accounts. AWS Organizations creates a default role with full permissions on child accounts that are directly created using AWS Organizations. Best practice is to delete this default role and create one with privileges restricted to your requirements. A sample role, named AWSCloudFormationStackSetExecutionRole, is included in the cross-account-roles-cfn.json file of my code package. You should modify this template based on your requirements.

Setup steps

In following sections, I’ll show the steps to setup federation and deploy the automation package. First, I’ll show the steps to prepare Azure Active Directory for federation. After that, you’ll see how you can configure all of your AWS accounts from a central place, regardless of the number of your accounts. The last step is to deploy the automation package in your master AWS account to automatically handle ongoing changes as you go.

Step 1: Configure Microsoft Azure Active Directory

You need to create two resources on your Azure AD tenant: a User and an Enterprise Application.

First thing you need for accessing Azure AD is an Azure AD user. In following the principle of least privilege, you want a user that can only manipulate the SSO application. Azure AD users with the directory role of User will only have access to resources they “own.” Therefore, you can create a new user specifically for this purpose and assign it as the owner of your SSO app. This user will be used by the automation to access Azure AD and update the SSO app.

Here’s how you can create a user with the directory role of User (default):

  1. Open Azure Portal.
  2. Open Azure Active Directory.
  3. In the left pane, select Users.
  4. In the Manage pane, select All users.
  5. Select New user.
  6. Enter values for the Name and User name fields.
  7. Select the Show Password box and note the auto-generated password for this user. You will need it when you change the password.
  8. Select Create.
  9. Open a browser window and go to https://login.microsoftonline.com.
  10. Log in with the new user. You’ll be prompted to change your password. Note the new password so you don’t forget it.

Next, create an Enterprise Application from the Azure AD application gallery:

  1. Open Azure Portal.
  2. Open Azure Active Directory.
  3. In the Manage pane, select Enterprise applications.
  4. Select New application.
  5. In the gallery text box, type AWS.
  6. You’ll see an option with the name Amazon Web Services (AWS). Select that application. Make sure you don’t choose the other option with the name “AWS Console.” That option uses an alternate integration method that isn’t relevant to this post.
  7.  

    Figure 3: Select "Amazon Web Services (AWS)"

  8. Select Add. You can change the name to any name you would prefer.
  9. Open the application using this path: Azure Portal > Azure Active Directory > Enterprise Applications > All Applications > your application name (for example, “Amazon Web Services (AWS)”).
  10. From left pane, select Single Sign-on, and then set Single Sign-on mode to SAML-based Sign-on.
  11. The first instance of the app is pre-integrated with Azure AD and requires no mandatory URL settings. However, if you previously created a similar application, you’ll see this:
  12.  

    Figure 4: Azure AD Application Identifier

  13. If you see the red “Required” value in the Identifier field, select the Edit button and enter a value for it. This can be any value you prefer (the default is https://signin.aws.amazon.com/saml), but it has to be unique within your Azure AD tenant. If you don’t see the Identifier field, it means it’s already prepopulated and you can proceed with the default value. However, if for any reason you prefer to have a custom Identifier value, you can select the Show advanced URL settings checkbox and enter the preferred value.
  14. In the User Attributes section, select the Edit button.
  15. You need to tell Azure AD what SAML attributes and values are expected and accepted on the AWS side. AWS requires two mandatory attributes in any incoming SAML assertion. The Role attribute defines which roles the federated user is allowed to assume. The RoleSessionName attribute defines the specific, traceable attribute for the user that will appear in AWS CloudTrail logs. Role and RoleSessionName are mandatory attributes. You can also use the optional attribute of SessionDuration to specify how long each session will be valid until the user is requested to get a new token. Add the following attributes to the User Attributes & Claims section in the Azure AD SSO application. You can also remove existing default attributes, if you want, because they’ll be ignored by AWS:

    • Name (case-sensitive): RoleSessionName
      Value: user.userprincipalname (this shows the logged-in user ID in the AWS portal; if you want the user name instead, replace it with user.displayName)
      Namespace (case-sensitive): https://aws.amazon.com/SAML/Attributes
      Required or optional? Required
    • Name (case-sensitive): Role
      Value: user.assignedroles
      Namespace (case-sensitive): https://aws.amazon.com/SAML/Attributes
      Required or optional? Required
    • Name (case-sensitive): SessionDuration
      Value: an integer between 900 seconds (15 minutes) and 43200 seconds (12 hours)
      Namespace (case-sensitive): https://aws.amazon.com/SAML/Attributes
      Required or optional? Optional

    Note: I assume that you use users that are directly created within your Azure AD tenant. If you’re using an external user such as a Hotmail, Live, or Gmail account for proof-of-concept purposes, RoleSessionName should be set to user.mail instead.

  16. As a good practice, when it approaches its expiration date, you can rotate your SAML certificate. For this purpose, Azure AD allows you to create additional certificates, but only one certificate can be active at a time. In the SAML Signing Certificate section, make sure the status of this certificate is Active, and then select Federation Metadata XML to download the XML document.
  17. Download the Metadata XML file and save it in the setup directory of the package you downloaded in the beginning of this walkthrough. Make sure you save it with file extension of .xml.
  18. Open Azure Portal > Azure Active Directory > App Registrations > your application name (for example, “Amazon Web Services (AWS)”). If you don’t see your application in the list on the App Registrations page, select All apps from the drop-down list on top of that page and search for it.
  19. Select Manifest. All Azure AD applications are described as a JavaScript Object Notification (JSON) document called manifest. For AWS, this manifest defines all AWS to Azure AD role mappings. Later, we’ll be using automation to generate updates to this file.
     
    Figure 5: Azure AD Application Manifest

  20. Select Download to download the app manifest JSON file. Save it in the setup directory of the package you downloaded in the beginning of this walkthrough. Make sure you save it with file extension of .json.
  21. Now, back on your registered app, select Settings.
  22. In the Settings pane, select Owners.
     
    Figure 6: Application Owner

  23. Select Add owner and add the user you created previously as owner of this application. Adding the Azure AD user as owner enables the user to manipulate this object. Since this application is the only Azure AD resource owned by our user, it means we’re enforcing the principle of least privilege on Azure AD side.

At this point, we’re done with the initial configuration of Azure AD. All remaining steps will be performed in your AWS accounts.

Step 2: Configure AWS IAM Identity Providers and Roles

In the previous section, I showed how to configure the Azure AD side represented in the Solution architecture in Figure 1. This section explains the AWS side.

As seen in Figure 1, enabling SAML federation in any AWS account requires two types of AWS IAM resources:

You’ll have to create these two resources in all of your AWS accounts participating in SAML federation. There are various options for doing this. You can:

  • Manually create IAM IdP and Roles using AWS Management Console. For one or two accounts, this might be the easiest way. But as the number of your AWS accounts and roles increase, this method becomes more difficult.
  • Use AWS CLI or AWS Tools for PowerShell. You can use these tools to write automation scripts and simplify both creation and maintenance of your roles.
  • Use AWS CloudFormation. CloudFormation templates enable structured definition of all resources and minimize the effort required to create and maintain them.

Here, I’m going to use CloudFormation and show how it can help you create up to thousands of roles in your organization, if you need that many.

Managing multiple AWS accounts from a root account

AWS CloudFormation simplifies provisioning and management on AWS. You can create templates for the service or application architectures you want and have AWS CloudFormation use those templates for quick and reliable provisioning of the services or applications (called "stacks"). You can also easily update or replicate the stacks as needed. Each stack is deployed in a single AWS account and a specific AWS Region. For example, you can write a template that defines your organization roles in AWS IAM and deploy it in your first AWS account and US East (N. Virginia) region.

But if you have hundreds of accounts, it wouldn’t be easy, and if you have time or budget constraints, sometimes not even possible to manually deploy your template in all accounts. Ideally, you’d want to manage all your accounts from a central place. AWS Organizations is the service that gives you this capability.

In my GitHub package there is a CloudFormation template named cross-account-roles-cfn.json. It’s located under the cfn directory. This template includes two cross-account roles. The first one is a role for cross-account access with the minimum required privileges for this solution that trusts your AWS Organizations master account. This role is used to deploy AWS IAM Identity Provider (IdP) for Azure AD and all SAML federation roles, trusting that IdP within all of your AWS child accounts. The second one is used by the automation to inspect your AWS accounts (through describe calls) and keep the Azure AD SSO application updated. I’ve created two roles to ensure that each component executes with the least privilege required. To recap, you’ll have two cross account roles for two different purposes:

  1. A role with full IAM access and Lambda execution permissions. This one is used for creation and maintenance of SAML IdP and associated IAM roles in all accounts.
  2. A role with IAM read-only access. This one is used by the update task to read and detect any changes in your federation IAM roles so it can update Azure AD SSO app with those changes.

You can deploy CloudFormation templates in your child accounts using CloudFormation StackSets. Log in to your root account, go to the CloudFormation console, and select StackSets.

Select Template is ready, select Upload a template file, and then select the cross-account-roles-cfn.json template to deploy it in all of your accounts. AWS IAM is a global service, so it makes no difference which region you choose for this template. You can select any region, such as us-east-1.
 

Figure 7: Upload template to StackSets console

This template includes a parameter prompting you to enter your root account number. For instructions on finding your AWS account number, see the AWS documentation.

If you create your child accounts through AWS Organizations, you’ll be able to directly deploy StackSets in those child accounts. But if you add existing accounts to your organization, you have to first manually deploy cross-account-roles-cfn.json in your existing accounts. This template includes the IAM role and policies needed to enable your root account to execute StackSets on it.

Configure the SAML Identity Provider and Roles

A sample template to create your organization roles as SAML federation IAM roles is included in the saml-roles.json file in the same cfn directory. This template includes the SAML IdP and three sample roles trusting that IdP. The IdP is implemented as an AWS Lambda-backed CloudFormation custom resource. Included roles are samples using AWS IAM Job Functions for Administrator, Observer, and DBA. Modify this template by adding or removing roles as needed in your organization.

If you need different roles in some of your accounts, you’ll have to create separate copies of this template and modify them accordingly. From the CloudFormation StackSets console, you can choose the accounts to which your template should be deployed.

The last modification to make is on the IdentityProvider custom resource. It includes a <Metadata> property. Its value is defined as <MetadataDocument>. You’d have to replace the value with the content of the SAML certificate metadata XML document that you previously saved in the setup directory (see the Configure Microsoft Azure Active Directory section above). You’ll need to escape all of the quotation marks (“) in the XML string with a backslash (\). If you don’t want to do this manually, you can copy the saml-roles.json template file in the setup directory and as you follow the remainder of instructions in this post, my setup script will do that for you.
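
If you want to sanity-check what the template creates, or stand up a single IdP and role by hand in a test account, a minimal AWS Tools for PowerShell sketch under the same assumptions might look like the following. The metadata file path, provider name, role name, and attached managed policy are examples only; the packaged solution does this work through CloudFormation instead:

    # Create the SAML identity provider from the Azure AD federation metadata document (path is an example)
    $metadata = Get-Content -Raw -Path ".\setup\federationmetadata.xml"
    $idpArn = New-IAMSAMLProvider -Name "AzureAD" -SAMLMetadataDocument $metadata

    # Build a trust policy that lets federated users from that IdP assume the role via SAML
    $trustPolicy = @{
        Version   = "2012-10-17"
        Statement = @(@{
            Effect    = "Allow"
            Principal = @{ Federated = $idpArn }
            Action    = "sts:AssumeRoleWithSAML"
            Condition = @{ StringEquals = @{ "SAML:aud" = "https://signin.aws.amazon.com/saml" } }
        })
    } | ConvertTo-Json -Depth 10

    # Create an example Observer role and attach a read-only managed policy to it
    New-IAMRole -RoleName "Observer" -AssumeRolePolicyDocument $trustPolicy
    Register-IAMRolePolicy -RoleName "Observer" -PolicyArn "arn:aws:iam::aws:policy/ReadOnlyAccess"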

Step 3: Updating Azure AD from the root AWS account

The third and last template in the cfn directory is setup-env-cfn-template.json. You have to deploy this template only in your root account. This template creates all the components in your root account, as shown in Figure 8. These are resources needed to run the update task and keep Azure AD SSO App updated with your IAM roles. In addition, it also creates a temporary EC2 instance for initial configuration of that update task. The update task uses AWS Fargate, a serverless service that allows you to run Docker containers in AWS. You have to deploy the setup-env-cfn-template.json template in a region where Fargate is available. Check the AWS Region Table to make sure Fargate is available in your target region. Follow these steps to deploy the stack:

  1. Log in to your root account and open the CloudFormation console page.
  2. Select Create Stack, upload the setup-env-cfn-template.json file, and then select Next.
  3. Enter a stack name, such as aws-iam-aad. The stack name must be all lowercase letters. The template uses the stack name to create an S3 bucket, and because S3 does not allow capital letters, if you choose a stack name containing capital letters, the stack creation will fail. The stack name is also used as the appName parameter in all scripts, and all Parameter Store parameter names are prefixed with it.
  4. Enter and select values for the following parameters:
    1. azureADTenantName: You can get the Azure Active Directory Tenant Name from Azure Portal. Go to the Azure Active Directory Overview page and the tenant name should appear at the top of the page. During setup, this is used as the value for the parameter.
    2. ExecFrequency is the time period for the update task to run. For example, if you enter 30, every 30 minutes Azure AD will be updated with any changes in IAM roles of your AWS accounts.
    3. KeyName is a key pair that is used for login and accessing the EC2 instance. You’ll need to have a key pair created before deploying this template. To create a key pair, follow these instructions: Amazon EC2 Key Pairs. Also, for more convenience, if you’re using macOS or Linux, you can copy your private key into the setup directory. Don’t forget to run chmod 600 <key name> to change the permissions on the key.
    4. NamingConvention is used to map AWS IAM roles to Azure AD roles. The default naming convention is: “AWS {0} – {1}”. The value of {0} is your account number. The value of {1} is the name of your IAM Role.
    5. SSHLocation is used in a Security Group that restricts access to the setup EC2 instance. You only need this instance for initial setup; therefore, the best practice and most secure option is to change this value to your specific IP address. In any case, make sure you only allow access to your internal IP address range.
    6. Subnet is the VPC subnet in which you want the update task to run. This subnet must have egress (outgoing) internet connectivity. The update task needs this to reach Azure AD Graph API endpoints.
       
      Figure 8: Enter parameters for automation stack

Once you deploy this template in CloudFormation and the associated stack is successfully created, you can get the IP address of the setup EC2 instance from the Output tab in CloudFormation. Now, follow the steps below to complete the setup.

Note: At this point, in addition to all the files already included in the original package, you have two additional, modified files in the setup directory:

  • The SAML Certificate XML file from Azure AD
  • The App Manifest JSON file from Azure AD

Make sure you have the following information handy, because it's required in some of the steps: the files you saved in the setup directory and the credentials of the Azure AD user you created earlier.

Now, follow these steps to complete the setup:

  1. If you’re using Mac, Linux, or UNIX, run the initiate_setup.sh script in the setup directory and, when prompted, provide the IP address from the previous procedure. It will copy all the required files to the target setup EC2 instance and automatically take you to the setup.ps1 script. Now, skip to step 3 below.
  2. If you’re using Windows on your local computer, use your favorite tool (such as WinSCP) to copy both setup and docker directories from your local computer to the /home/ec2-user/scripts directory on the target EC2 instance.
  3. Once copied, use your favorite SSH tool to log in to the target setup EC2 instance. For example, you can use PuTTY for this purpose. As soon as you log in, Setup.ps1 will automatically run for you.
  4. Setup.ps1 is interactive. It will prompt for the path to the three files you saved in the setup directory, and also for your Azure AD user credentials. Use the credentials of the user you created in step 1 of the Configure Microsoft Azure Active Directory section. The script will perform following tasks:
    1. Store Azure AD credentials securely in AWS Secrets Manager. The script also extracts necessary values out of the three input files and stores them as additional parameters in AWS Systems Manager (SSM) Parameter Store.

      Important: The credentials of your Azure user will be stored in AWS Secrets Manager. You must make sure that access to Secrets Manager is restricted to users who are also authorized to retrieve these credentials.

    2. Create a Docker image and push it into an AWS Elastic Container Registry (ECR) repository that’s created as part of the CloudFormation template.
    3. The script checks if saml-roles.json is available in the setup directory. If it’s available, the script will replace the value of the Metadata property in the IdP custom resource with the content of the SAML metadata XML file. It also generates a text file containing a comma-separated list of all your child accounts, extracting the account numbers from cross-account-roles-cfn.json. Both of these are copied to the S3 bucket that is created as part of the template. You can use these at any time to deploy, maintain, and manage your SAML roles in child accounts using CloudFormation StackSets.
    4. If saml-roles.json is available, the script will prompt whether you want it to deploy your roles on your behalf. If you select yes (“y“), it will immediately deploy the template in all child accounts. You can also select no (“n“), if you prefer to do this at another time, or if you need different templates and roles in some accounts.
  5. Once the script executes and successfully completes, you should terminate the setup EC2 instance.

You’ve now completed setting up federation on both sides. All AWS IAM roles that trust an IdP with the SAML certificate of your Azure AD (the Metadata XML file) will now automatically be replicated into your Azure AD tenant. This will take place with the frequency you have defined. Therefore, if you have set the ExecFrequency parameter to “30“, after 30 minutes you’ll see the roles replicated in Azure AD.

But to enable your users to use this federation, you have to entitle them to assume roles, which is what I’ll cover in the next section.

Entitling Azure AD users to assume AWS Roles

  1. Open Azure Portal > Azure Active Directory > Enterprise applications > All applications > (your application name) > Users and groups.
  2. Select Add user.
  3. In the Users and groups pane, select one of your Azure AD users (or groups), and then select Select.
  4. Select the Select role pane and, on the right hand side, you should now see your AWS roles listed.

You can add and map Azure AD users or groups to AWS IAM roles this way. By adding users to these groups, you’re giving them access to those roles in AWS through pre-established trust. In the case of groups, any Azure AD user inside that group will have SSO access to the AWS Console and will be permitted to assume the AWS roles/accounts associated with their Azure AD group. Azure AD users who are authenticated against login.microsoftonline.com can go to their Access Panel (myapps.microsoft.com) and select the AWS app icon.

Application maintenance

Most of the time, you will not need to do anything else because the Fargate task will execute on each interval and keep the Azure AD manifest aligned with your AWS accounts and roles. However, there are two situations that might require you to take action:

  • If you rotate your Azure AD SAML certificate
  • If you rotate the Azure user credentials used for synchronization

You can use AWS Secrets Manager lifecycle management capabilities to automate the process for the second case. Otherwise, in the event of either of these two situations, you can modify the corresponding values using the AWS Systems Manager Parameter Store and Secrets Manager consoles. Open the Parameter Store console and find the parameters whose names are prefixed with your setup-env-cfn-template.json stack name (you entered this name when you were creating the stack). If you rotate your Azure AD SAML certificate, you should also update all of your IdP resources in AWS accounts to use the new certificate. Here again, StackSets can do the heavy lifting for you. Use the same saml-roles.json template to update all of your Stack Instances through CloudFormation. You’ll have to replace the Metadata property value with the content of the new certificate, and replace quotation mark characters (“) with escaped quotes (\”).
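
If you'd rather script those updates than use the consoles, here's a rough boto3 sketch. The parameter name, secret name, and secret format below are placeholders; look up the real names, which are prefixed with your stack name, in the Parameter Store and Secrets Manager consoles.

    import json
    import boto3

    ssm = boto3.client("ssm")
    secrets = boto3.client("secretsmanager")

    # Case 1: the Azure AD SAML certificate was rotated. Overwrite the stored
    # parameter with the value taken from the new metadata file.
    with open("new-saml-metadata.xml") as f:
        ssm.put_parameter(
            Name="aws-iam-aad-SAMLMetadata",  # placeholder; use your stack-prefixed parameter name
            Value=f.read(),
            Type="String",
            Overwrite=True,
        )

    # Case 2: the Azure AD user credentials used for synchronization were rotated.
    secrets.put_secret_value(
        SecretId="aws-iam-aad-AzureADCredentials",  # placeholder; use the secret created by setup.ps1
        SecretString=json.dumps({"username": "svc-sync@example.onmicrosoft.com",  # example values only
                                 "password": "NEW-PASSWORD"}),
    )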

Summary

I’ve demonstrated how to set up and configure SAML federation and SSO from Azure Active Directory to the AWS Console, following these principles and requirements:

  • Using security best practices to keep both sides of federation (AWS and Azure) secure
  • Saving time and effort by automating the manual effort needed to synchronize two sides of federation
  • Keeping operation cost to a minimum through a serverless solution

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread in the forums.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Sepehr Samiei

Sepehr is currently a Senior Microsoft Tech Specialized Solutions Architect at AWS. He started his professional career as a .Net developer, which continued for more than 10 years. Early on, he quickly became a fan of cloud computing and he loves to help customers utilise the power of Microsoft tech on AWS. His wife and daughter are the most precious parts of his life.

AWS Security Profiles: Akihiro Umegai, Japan Lead, Office of the CISO

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-akihiro-umegai-japan-lead-office-of-the-ciso/

In the weeks leading up to the Solution Days event in Tokyo, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been with AWS, and what is your role?

I’ve been at AWS for six and a half years. I’m the Japanese representative for the Office of the CISO, a team that’s led by Mark Ryland. We play a supporting role for Steve Schmidt, the Chief Information Security Officer for all of AWS. I help with external-facing functions as well as handling some internal cloud security mitigation tasks.

What are some differences between your role as Japanese representative for the CISO versus what your US counterparts do?

When US companies want to do business in Japan, they face a language and cultural barrier. Perhaps as many as 95% of Japanese companies don’t fully utilize English, which makes it hard to effectively communicate with most US companies. Japan also has traditional business systems and cultural ways of doing things that can seem very unique to outsiders. We put a lot of emphasis on communication and trust-building. Those might be the two most unique elements of doing business in Japan.

There are challenges, but the market size is potentially huge. I tell people that I function as the shock absorber, or stabilizer, between Japan and US headquarters. I interpret the directions of the AWS US team, adapt them to the Japanese market, and then communicate with our Japanese customers.

What’s the most challenging part of your job?

At a certain level, the Japanese market tends to follow the trends of the US market. For example, in Japan, there is a similar need to have Chief Information Security Officers: people who make decisions about comprehensive security issues on behalf of their companies. However, the concept of a CISO is just beginning to take hold, and CISOs might not be fully considered a primary part of corporate C-level functions. I feel that we need to support our customers’ security leaders in order to help them solidify their security posture.

Historically, Japanese companies have also outsourced many of their IT functions, with Japanese local system integrators supporting these processes. Our customers often need to work with their partners to make decisions, including decisions about security operations and even some compliance matters. It’s critical to involve these local partners, who are very familiar with Japanese customs and business. When I create a security and compliance reference document for any guidelines in the Japanese market, I always form a partner group with three to six partners who know the specific domains in their particular market. Our combined effort allows us to produce practical, customer-centric solutions. These types of partnerships also help us get remarkable attention from the Japanese market: “Big, local, and traditional Japanese system integrators are working with the US cloud vendor AWS!” That process of developing great relationships with partners is the tough part of my job. I might spend 30 – 40 percent of my time in direct customer communication, and 60 – 70 percent of my time communicating with partners.

What are some of the broad differences between global and Japanese markets with respect to the cloud?

As one example, in the financial arena, Japanese regulators are very serious and tough. The main regulator is the Financial Service Agency — the FSA — which controls the issuance of bank licenses. It’s hard to get those licenses in the Japanese market. In contrast, bank regulators in the EU have already issued a license for “challenger banks” that primarily utilize cloud environments for their systems. The total cost of establishing this type of “cloud-based” bank is significantly lower than establishing a traditional on-premise, mainframe-based banking system. It’s a remarkable use case for Japanese regulators and customers. Such new, cloud-based systems usually employ “ready-made” banking system middleware, which is already configured to serve banks’ main functions — customers can purchase the middleware, put it on AWS, and then start a bank within a short period of time. The US-based bank Capital One is another interesting use case: they represent an “all-in” approach of moving all their workloads from an on-premise environment to AWS. You can read the case study here.

However, I do not mean Japanese regulators or banks are behind. They are very rigorous about following rules, and they are very diligent about keeping the trust of their customers. They’re handling the adoption of new technology with care and precision, and they’re interested in listening. In fact, Japanese regulators and related entities, like the FSA, the BOJ (Bank of Japan), and the FISC (Center for Financial Industry Information Systems) are keen to learn good practice from global case studies and new technology use cases in order to enhance Japanese financial business. I’m always looking for interesting, attractive use cases from outside to openly share with them.

What’s the most common misperception you encounter about cloud security and compliance?

Some customers assume that they still need to perform physical data center audits, even if there is no clear objective to visiting the data center. When customers ask me about physical data center audits, I always encourage them to leverage our third-party audit reports (like our SOC-2 reports) and refer to our digital tour of an AWS data center to get a sense of how AWS operates. However, I think risk residing in physical data centers is just one part of the entire process of risk control, and other, more important controls must be emphasized. For example: How will you detect and catch unauthorized access? How will you process detailed logs from various sources? Can you automate security operation by utilizing new, cloud-based security functions to reduce human-based operational risk? Part of the challenge is that AWS needs to translate more of our audit reports (something that is partially my duty). It’s difficult for Japanese customers to interpret a SOC-2 report in English when even native English speakers might have difficulty with the extremely detailed security language. Better translations would directly help us do a better job of explaining these concepts to our customers.

Another common misunderstanding stems from how to perform system audits in a cloud environment. Most existing audits are sample-based. You can’t read through every log or piece of evidence like a book. For example, auditors might check page 10, skip to page 30, finish sampling, and end the audit. The “not-yet-checked” portions of the accumulated logs could carry residual risk, and those portions are simply skipped. But in a cloud system, we can gather every detailed log. Most AWS service functions produce lots of detail, and we can process the logs either with Amazon GuardDuty, or machine learning, or third-party log consolidation tools. Ultimately, we’re able to perform a much more detailed, accurate audit — and many customers miss this fact. There’s a gap between what the “compliance audit guy” knows and what the cloud security engineer knows. Compliance professionals don’t always have a deep understanding of technology. They don’t always know how to gather logs from cloud-based systems. But security engineers do. We need to connect these people and go through how to perform audits in cloud systems in a more effective, holistic way to truly secure systems and reduce risk.

What’s your favorite part of your job?

Disruption via new concepts and offerings. Let me give you an example: The Center for Financial Industry Information Systems (FISC) develops and publishes general system security guidelines for financial institutions in Japan. Five years ago, when I called on regulators or regulated customers, they’d all ask me, “What is cloud? How is it different from on-premise? The regulations don’t have any mention of cloud, so we aren’t sure how it could be utilized under the current guidelines.” Customers didn’t know what sort of reaction they’d get from regulators if they moved to the cloud. But over the past five or six years, I’ve shared our security practices and knowledge step-by-step — what’s going on in the US and global markets, which big customers have started using AWS for regulated workloads, and so on. And regulated customers have come to understand that the cloud is a more efficient way to achieve their security compliance. I share market information and techniques that regulated customers can use to think about cloud security controls. Regulators and regulated customers have started to slowly change their perception and become more accepting of the AWS concept of security and compliance. For the last two years, I’ve actually been an official member of some of the expert councils at FISC. Helping to change peoples’ perceptions like that is exciting!

What do you hope your audience will gain from attending AWS Solution Days?

Our goal is to share how AWS can help customers get one step ahead in the fields of cloud security and compliance. We recently announced two major security services, AWS Security Hub and AWS Control Tower, and I hope Japanese customers will see how they can use these services to continue improving their security posture. In addition, the Japanese government has recently become interested in changing their policy for employing the cloud for government systems. They have a lot of interest in how cloud is used in the US. One of the goals of AWS Solution Days is to share what’s going on in US government systems. It should be equally interesting to people in the commercial sector, in terms of learning how high-security systems are being achieved in AWS environments.

What should first-time travelers to Tokyo do or experience?

When you visit Japan, I would suggest trying new foods, especially if you like sushi. The sushi bars have fish that is fresh and super high quality, so order something new off the menu, even if you think you might not like it. It could disrupt your perception of seafood! You might leave with a new appreciation!

Also, American people are sometimes surprised to find that Japan is a very westernized country, although it still retains its own very unique culture. You’ll see McDonalds, Starbucks, lots of US-based companies. There’s lots of American culture, in fact, but it’s all modified by Japanese language and culture, resulting in a new, interesting experience.

You are a DJ: What’s a recently released album that you’d recommend?

I really like club music, like techno and American heavy metal. In addition to being a DJ, I sometimes play the electric guitar. So I would instead recommend one old song which changed my life and turned me from a heavy metal guitar kid into a synthesizer geek. The band is Orbital, and on their 1992 album “Diversions,” there is a song called “Impact USA” that’s my all-time favorite. It’s a beautiful track — it has beautiful melodies and an atmosphere that I think make it universally appealing, even if you don’t typically like techno.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Akihiro Umegai

Akihiro joined AWS in 2012 and currently serves as the Japan lead for the Office of the CISO – AWS Security. In this role, he engages with CISOs, CIOs, and government regulators to address their security and regulatory compliance requirements. He’s also a committee member and contributor for Japan’s Center for Financial Industry Information Systems (FISC), where he provides input on the security controls.

Add a layer of security for AWS SSO user portal sign-in with context-aware email-based verification

Post Syndicated from Ujjwal Pugalia original https://aws.amazon.com/blogs/security/add-a-layer-of-security-for-aws-sso-user-portal-sign-in-with-context-aware-email-based-verification/

If you’re an IT administrator of a growing workforce, your users will require access to a growing number of business applications and AWS accounts. You can use AWS Single Sign-On (AWS SSO) to create and manage users centrally and grant access to AWS accounts and business applications, such as Salesforce, Box, and Slack. When you use AWS SSO, your users sign in to a central portal to access all of their AWS accounts and applications. Today, we launched email-based verification that provides an additional layer of security for users signing in to the AWS SSO user portal. AWS SSO supports a one-time passcode (OTP) sent to users’ email that they then use as a verification code during sign-in. When enabled, AWS SSO prompts users for their user name and password and then to enter a verification code that was sent to their email address. They need all three pieces of information to be able to sign in to the AWS SSO user portal.

You can enable email-based verification in context-aware or always-on mode. We recommend you enable email-based verification in context-aware mode for users created using the default AWS SSO directory. In this mode, users sign in easily with their username and password for most sign-ins, but must provide additional verification when their sign-in context changes, such as when signing in from a new device or an unknown location. Alternatively, if your company requires users to complete verification for every sign-in, you can use always-on mode.

In this post, I demonstrate how to enable verification in context-aware mode for users in your SSO directory using the AWS SSO console. I then demonstrate how to sign into the AWS SSO user portal using email-based verification.

Enable email-based verification in context-aware mode for users in your SSO directory

Before you enable email-based verification, you must ensure that all your users can access their email to retrieve their verification code. If your users require the AWS SSO user portal to access their email, do not enable email-based verification. For example, if you use AWS SSO to access Office 365, then your users may not be able to access their AWS SSO user portal when you enable email-based verification.

Follow these steps to enable email-based verification for users in your SSO directory:

  1. Sign in to the AWS SSO console. In the left navigation pane, select Settings, and then select Configure under the Two-step verification settings.
  2. Select Context-aware under Verification mode, and Email-based verification under Verification method, and then select Save changes.
     
    Figure 1: Select the verification mode and the verification method

  3. Before you choose to confirm the changes in the Enable email-based verification window, make sure that all your users can access their email to retrieve the verification code required to sign in to the AWS SSO user portal without signing in using AWS SSO. To confirm your choice, type CONFIRM (case-sensitive) in the text-entry field, and then select Confirm.
     
    Figure 2: The “Enable email-based verification” window

You’ll see that you successfully enabled email-based verification in context-aware mode for all users in your AWS SSO directory.
 

Figure 3: Verification of the settings

Next, I demonstrate how your users sign in to the AWS SSO user portal with email-based verification in addition to their username and password.

Sign-in to the AWS SSO user portal with email-based verification

With email-based verification enabled in context-aware mode, users use the verification code sent to their email when there is a change in their sign-in context. Here’s how that works:

  1. Navigate to your AWS SSO user portal.
  2. Enter your email address and password, and then select Sign in.
     
    Figure 4: The “Single Sign-On” window

  3. If AWS detects a change in your sign-in context, you’ll receive an email with a 6-digit verification code that you will enter in the next step.
     
    Figure 5: Example verification email

  4. Enter the code in the Verification code box, and then select Sign in. If you haven’t received your verification code, select Resend email with a code to receive a new code, and be sure to check your spam folder. You can select This is a trusted device to mark your device as trusted so you don’t need to enter a verification code unless your sign-in context changes again, such as signing in from a new browser or an unknown location.
     
    Figure 6: Enter the verification code

The user can now access AWS accounts and business applications that the administrator has configured for them.

Summary

In this post, I shared the benefits of using email-based verification in context-aware mode. I demonstrated how you can enable email-based verification for your users through the SSO console. I also showed you how to sign into the AWS SSO user portal with email-based verification. You can also enable email-based verification for SSO users from your connected AD directory by following the process outlined above.

If you have comments, please submit them in the Comments section below. If you have issues enabling email-based verification for your users, start a thread on the AWS SSO forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Ujjwal Pugalia

Ujjwal is the product manager for the console sign-in and sign-up experience at AWS. He enjoys working in the customer-centric environment at Amazon because it aligns with his prior experience building an enterprise marketplace. Outside of work, Ujjwal enjoys watching crime dramas on Netflix. He holds an MBA from Carnegie Mellon University (CMU) in Pittsburgh.

AWS Security profiles: Michael South, Principal Business Development Manager for Security Acceleration

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-michael-south-principal-business-development-manager-for-security-acceleration/

In the weeks leading up to the Solution Days event in Tokyo, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS since August 2017. I’m part of a team called SeCBAT — the Security and Compliance Business Acceleration Team. I lead customer-focused executive security and compliance efforts for the public sector in the Americas, from Canada down to Chile, and spanning federal government, defense, state and local government, education, and non-profit verticals. The team was established in 2017 to address a need we saw in supporting our customers. While we have fantastic solution architects who connect easily with our customers’ architects and engineers, we didn’t have a readily available team in our World Wide Public Sector organization to engage customers interested in security at the executive level. When we worked with people like CISOs — Chief Information Security Officers — there was a communication gap. CISOs have a broader scope than engineers, and are oftentimes not as technically deep. Technology is only one piece of the puzzle that they’re trying to solve. Other challenging pieces include policy, strategy, culture shift, staffing and training, and the politics of their entire organization. SeCBAT is comprised of prior government CISOs (or similar roles), allowing us to establish trust quickly. We’ve been in their shoes, so we understand the scope of their concerns, we can walk them through how they can meet their security and compliance objectives in AWS, and we can help remove barriers to cloud adoption for the overall customer.

These customer engagements are one of my primary functions. The team also spends a lot of time on strategic communications: presenting at conferences and tradeshows, writing whitepapers and blogs, and generally providing thought leadership for cloud security. Lastly, we work closely with Amazon Public Policy as subject matter experts to assist in reviewing and commenting on draft legislation and government policies, and in meetings with legislators, regulators, and policy-makers to educate them on how security in the cloud works so they can make informed decisions.

What’s the most challenging part of your job?

Customers who are new to the cloud often grapple with feelings of fear and uncertainty (just like I did). For me, figuring out how to address that feeling is a challenge that varies from person to person. It isn’t necessarily based on facts or data — it’s a general human reaction to something new. “The cloud” is very mysterious to people who are just coming into it, and oftentimes their sources of information are inaccurate or sensationalized news articles, combined with a general overuse of the word “cloud” in marketing materials from traditional vendors who are trying to cash in on this industry shift. Once you learn what the cloud really is and how it works, what’s the same and what’s different than what you’re used to on-prem, you can figure out how to manage it, secure it, and incorporate it into your overall strategy. But trying to get past that initial fear of the unknown is challenging. Part of what I do is educate people and then challenge some of the assumptions they might have made prior to our meeting. I want people to be able to look at the data so that they can make an informed decision and not lose an opportunity over a baseless emotion. If they choose not to go to the cloud, then that is absolutely fine, but at least that decision is made on facts and what’s best for the organization.

What’s the most common misperception you encounter about cloud security and compliance?

Visibility. There’s a big misperception that customers will lose visibility into their data and their systems in the cloud, and this becomes a root cause of many other misconceptions. It’s usually the very first point that I focus on in my briefs and discussions. I walk customers through my cloud journey, including my background in traditional security in an on-prem environment. As the Deputy CISO for the city of Washington, DC, I was initially very nervous about transitioning to the cloud, but I tasked my team and myself to dive deep and learn. It didn’t take long for us to determine that not only could we be just as secure and compliant in the cloud as on-prem, but that we could achieve a greater level of security and compliance through resiliency, continuous monitoring, and automated security operations. During our research, we also had to deal with a few on-prem issues, and that’s when it dawned on me that the cloud gave me something that I’d been lacking for my entire IT career — essentially 100% visibility! It didn’t matter if a server was on or off, what network segment it was on, whether the endpoint agent was installed or reporting up, or any other state — I had absolute visibility into every asset we had in the cloud. From here, we could secure and automate with much greater confidence, which resulted in fewer “fires” to put out. Security ended up being a driving force behind the city’s cloud adoption strategy. The security and governance journey can take a while at first, but these factors will enable everyone else to move fast, safely. The very first step is understanding the visibility that the cloud allows.

You’ll be giving a keynote at AWS Solution Days, in Tokyo. Is this the first time you’ve been to Japan?

No, my family and I were very fortunate to have lived in Yokosuka, Japan for a few years. I served in the U.S. Navy for 25 years prior to joining AWS, where I enjoyed two tours in Japan. The first was as the Seventh Fleet Information Assurance Manager, the lead for cybersecurity for all U.S. Naval forces in Asia. The second was as the Navy Chief Information Officer (CIO) for all U.S. Naval forces in Japan. Those experiences were some of the best of my career and family life. We would move back to Japan in a heartbeat!

The keynote is called “U.S. government and U.S. defense-related security.” What implications do U.S. government and defense policies have for AWS customers in Japan?

The U.S. and Japan are very strong political and military allies. Their governments and militaries share common interests and defense strategies, and collaborate on a myriad of socio-economic topics. This all requires the sharing of sensitive information, which is where having a common lexicon, standards, and processes for security benefit both parties. I plan to discuss the U.S. environment and highlight things that are working well in the U.S. that Japan might want to consider adopting, plus some things that might not be a good fit—coupled with recommendations on what might be better opportunities. I also plan to demonstrate that AWS is able to meet the high standards of the U.S. government and military with very strict, regulated security. I hope that this will give Japanese customers confidence in our ability to meet the similarly rigorous requirements they might have.

In your experience, how does the cloud security landscape differ between US and Japanese markets?

From my understanding, the Japanese government is in the very early stages of cloud adoption. Many ministries are assessing how they might use the cloud and secure their sensitive data in it. In addition to speaking at the summit, one of my reasons for visiting Japan is to meet with Japanese government customers to learn about their efforts. They’re very much interested in what the U.S. government is doing with AWS. They would like to leverage lessons learned, technical successes, and processes that are working well, in addition to learning about things that they might want to do differently. It’s a great opportunity to showcase all the work we’re doing with the U.S. government that could also benefit the Japanese government.

Five years from now, what changes do you think we’ll see across the security and compliance landscape?

My hope is that we’ll see a better, more holistic method of implementing governance with security engineering and security operations. Right now, globally across the cybersecurity landscape, there are silos: development security, governance, compliance, risk management, engineering, security operations, etc. They should be more mutually supportive and interconnected, and as you implement a plan in one area, it should go into effect seamlessly across the other areas.

Similarly, my hope is that five years from now we’ll start seeing a merge between the technologies and people and processes. Right now, the cybersecurity industry seems to try to tackle every problem with a technological solution. But technology is really the easiest part of every problem. The people and the processes are much more difficult. I think we need to devote a lot more time toward developing a holistic view of cybersecurity based on business risk and objectives.

Why should emerging markets move to the cloud now? Why not wait another five years in the hope that the technology will mature?

I’d like to challenge the assumption that the cloud is not mature. At least with AWS and our near competitors, I’d say the cloud is very mature and provides a level of sophistication that is very difficult and costly to replicate on-prem. If the concern is about technical maturity, you’re already late.

In addition, the waiting approach poses two problems: First, if you’re not engaged now in learning how the cloud works, you’ll just be further behind the curve in five years. Second, I see (and believe I’ll continue to see) that the vast majority of new technologies, services, and concepts are being born in the cloud. Everything is hyper-converging on the cloud as the foundational platform for all other emerging technologies. If you want to be successful with the next big idea in five years, it’s better to get into the cloud now and become an expert at what it can do—so that you’re ready for that next big idea. Because in some way, shape, or form, it’s going to be in or enabled by the cloud.

What are your favorite things to do when you’re visiting Japan?

The history and tradition of Kyoto makes it my favorite city in Japan. But since we’ll be in Tokyo, there are a few things there that I’d recommend. First, the 100-Yen sushi-go-rounds. To Americans, I’d explain it as paying one US dollar for a small plate (2 pieces of nigiri or 4 roll slices) of fantastic sushi. You can eat thirty plates for thirty bucks! Places in Tokyo to visit are Harajuku for people-watching, with all the costumes and fashion, Shibuya for shopping, and of course Tokyo Tower. I also recommend Ueno Park, somewhat close to where our event will be held, which has a pond and zoo.

Japan is one of the safest and politest countries I’ve been to — and I’ve visited about 40 at this point. The people I’ve met there have all been extraordinarily nice and are what really makes Japan so special. I’d highly recommend visiting.

What’s your favorite thing to do in your hometown?

I’m originally from Denver, Colorado. If you’re in Denver, you’ve got to go up to the mountains. If you’re there in the summer, you can hike, camp, go white-water rafting, or horseback riding. If you’re there in the winter, you can go skiing or snowboarding, or just sit by the fire with a hot toddy. It really doesn’t matter. Just go up to the mountains and enjoy the beautiful scenery and wildlife.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.

Michael South

Michael joined AWS in 2017 as the Americas Regional Leader for public sector security and compliance business development. He supports customers who want to achieve business objectives and improve their security and compliance in the cloud. His customers span across the public sector, including: federal governments, militaries, state/provincial governments, academic institutions, and non-profits from North to South America. Prior to AWS, Michael was the Deputy Chief Information Security Officer for the city of Washington, DC and the U.S. Navy’s Chief Information Officer for Japan.

New AWS services launch with HIPAA, PCI, ISO, and SOC – a company first

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/new-aws-services-launch-with-hipaa-pci-iso-and-soc/

Our security culture is one of the things that sets AWS apart. Security is job zero — it is the foundation for all AWS employees and impacts the work we do every day, across the company. And that’s reflected in our services, which undergo exacting internal and external security reviews before being released. From there, we have historically waited for customer demand to begin the complex process of third-party assessment and validating services under specific compliance programs. However, we’ve heard you tell us you want every generally available (GA) service in scope to keep up with the pace of your innovation and at the same time, meet rigorous compliance and regulatory requirements.

I wanted to share how we’re meeting this challenge with a more proactive approach to service certification by certifying services at launch. For the first time, we’ve launched new GA services with PCI DSS, ISO 9001/27001/27017/27018, SOC 2, and HIPAA eligibility. That means customers who rely on or require these compliance programs can select from 10 brand new services right away, without having to wait for one or more trailing audit cycles.

Verifying the security and compliance of the following new services is as simple as going to the console and using AWS Artifact to download the audit reports.

  • Amazon DocumentDB (with MongoDB compatibility) [HIPAA, PCI, ISO, SOC 2]
  • Amazon FSx [HIPAA, PCI, ISO]
  • Amazon Route 53 Resolver [ISO]
  • AWS Amplify [HIPAA, ISO]
  • AWS DataSync [HIPAA, PCI, ISO]
  • AWS Elemental MediaConnect [HIPAA, PCI, ISO]
  • AWS Global Accelerator [PCI, ISO]
  • AWS License Manager [ISO]
  • AWS RoboMaker [HIPAA, PCI, ISO]
  • AWS Transfer for SFTP [HIPAA, PCI, ISO]

This proactive compliance approach means we move upstream in the product development process. Over the last several months, we’ve made significant process improvements to deliver additional services with compliance certifications and HIPAA eligibility. Our security, compliance, and service teams have partnered in new ways to implement controls and audit earlier in a service’s development phase to demonstrate operating effectiveness. We also integrated auditing mechanisms into multiple stages of the launch process, enabling our security and compliance teams, as well as auditors, to assess controls throughout a service’s preview period. Additionally, we increased our audit frequency to meet services’ GA deadlines.

The work reflects a meaningful shift in our business. We’re excited to get these services into your hands sooner and wanted to report our overall progress. We also ask for your continued feedback since it drives our decisions and prioritization. Because going forward, we’ll continue to iterate and innovate until all of our services are certified at launch.

How to use AWS WAF to filter incoming traffic from embargoed countries

Post Syndicated from Rajat Ravinder Varuni original https://aws.amazon.com/blogs/security/how-to-use-aws-waf-to-filter-incoming-traffic-from-embargoed-countries/

AWS WAF provides inline inspection of inbound traffic at the application layer to detect and filter against critical web application security flaws from common web exploits that could affect application availability, compromise security, or consume excessive resources. The inbound traffic is inspected against web access control list (web ACL) rules that you can create manually or programmatically—either through AWS WAF Security Automations or through the AWS Marketplace. AWS WAF functions like a typical web application firewall, but with the added reliability and scalability that comes with being an AWS-managed service. It can detect and filter malicious web requests and scale to handle bursts in traffic.

We have customers in public sector and financial services who use AWS WAF to block requests from certain geographical locations, like embargoed countries, by applying geographic match conditions. By using AWS WAF, our customers can create a customized list to easily manage an automated solution for geographic blocking.

In order to reduce the operational burden of maintaining an up-to-date list of rules for geographical location blocking, this blog post provides you with an automated solution that applies geography-based IP (GeoIP) restrictions based on a descriptive JSON file that lists all the locations that you want to block. When you update this file, the automation applies all rules to the specified AWS WAF web ACL. For countries not listed on the geographic match condition (or if you just need to block a subset of IPs from a country), the JSON file also has a section where you can list IP ranges that should be blocked.
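
To make the enforcement mechanics concrete, here's a minimal boto3 sketch of how such restrictions can be applied with the classic AWS WAF API, which is broadly what the solution's parser function automates for you. The geo match set and IP set IDs are placeholders, the country code and CIDR range are examples only, and you'd use the waf-regional client instead if your web ACL is associated with an ALB.

    import boto3

    waf = boto3.client("waf")  # use boto3.client("waf-regional") for ALB-associated web ACLs

    GEO_MATCH_SET_ID = "replace-with-your-geo-match-set-id"  # placeholder
    IP_SET_ID = "replace-with-your-ip-set-id"                # placeholder

    # Block an entire country listed in the embargoed countries JSON file.
    waf.update_geo_match_set(
        GeoMatchSetId=GEO_MATCH_SET_ID,
        ChangeToken=waf.get_change_token()["ChangeToken"],
        Updates=[{
            "Action": "INSERT",
            "GeoMatchConstraint": {"Type": "Country", "Value": "CU"},  # ISO country code, example only
        }],
    )

    # Block an individual IP range listed in the file's IP section.
    waf.update_ip_set(
        IPSetId=IP_SET_ID,
        ChangeToken=waf.get_change_token()["ChangeToken"],
        Updates=[{
            "Action": "INSERT",
            "IPSetDescriptor": {"Type": "IPV4", "Value": "203.0.113.0/24"},  # example range
        }],
    )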

If you deploy our solution with the default parameters, it builds the following environment:
 

Figure 1: Solution diagram

As the diagram shows, the solution uses these resources:

  • AWS WAF, which functions like a typical web application firewall, but with the added reliability and scalability that comes with being an AWS-managed service.
  • Two AWS Lambda functions — a Custom Resource function and an Embargoed Countries Parser.
    1. The Custom Resource function helps provision the solution when the AWS WAF conditions, rules, and web ACL are created and configured. It’s also triggered when you upload an initial version of the embargoed countries JSON file to your Amazon Simple Storage Service (Amazon S3) bucket.
    2. The Embargoed Countries Parser function is triggered whenever a new JSON file is uploaded to the S3 bucket. When an upload occurs, the function parses the new file and enforces AWS WAF rules that reflect what the file describes.
  • An Amazon Simple Storage Service (Amazon S3) bucket, where you’ll save the embargoed countries JSON file.
  • An AWS Identity and Access Management (IAM) role that gives the Lambda function access to the following resources:
    1. AWS WAF, to list, create, obtain, and update geographic IP restrictions, conditions, and web ACLs.
    2. Amazon CloudWatch logs, to monitor, store, and access log files generated by AWS Lambda.
    3. Amazon S3, to upload and read the embargoed countries JSON file.

The image below shows a reference architecture where malicious traffic is blocked by AWS WAF rules.
 

Figure 2: AWS WAF integration with Amazon CloudFront / ALB

As a starting point for this walk-through, we created a list of embargoed countries based on information published by the Office of Foreign Assets Control (OFAC) of the US Department of the Treasury.
OFAC sanctions and restrictions vary in scope, and OFAC does not maintain one single list of embargoed countries. OFAC also imposes additional restrictions on doing business with certain individuals and entities that are not covered by the embargoed country sanctions list. For the most up-to-date information about embargoed countries and other OFAC sanctions programs, see the US Department of the Treasury’s Resource Center.

IMPORTANT NOTES:

You’re responsible for updating your list of embargoed countries, based on geographic IP restrictions that you establish and keep up-to-date. Later in the post, we’ll show you how to update and edit your list, but we want to emphasize that ensuring your embargo list is current and comprehensive for your business and compliance needs is a critical part of your responsibility as a customer.

Further, the accuracy of the IP Address to country lookup database used by WAF varies by region. Based on recent tests, our overall accuracy for the IP address to country mapping is 99.8%. We recommend that you work with regulatory compliance experts to decide whether your solution meets your compliance needs.

Deploying the CloudFormation stack

To get started, first make sure you have at least one resource that’s associated with your web ACL. This can be either a CloudFront distribution or an Application Load Balancer (ALB). Then, select the Launch Stack button below to launch an AWS CloudFormation stack in your account. It will take approximately 5 minutes for the CloudFormation stack to complete:
 
Select this image to open a link that starts building the CloudFormation stack

The code for this solution is available on GitHub.

Note: The template will launch in the US East (N. Virginia) Region. To launch the solution in a different AWS Region, use the region selector in the console navigation bar.

  1. On the Select Template page, select Next.
  2. On the Specify Details page, give your solution stack a name.
  3. Under Parameters, review the default parameters for the template and modify the values, if you’d like.

    The following screenshot illustrates these parameters.
     

    Figure 3: Review and modify parameters

    • EndpointType (required; default: CloudFront): Choose whether the endpoint that needs to be protected by AWS WAF is associated with CloudFront or ALB.
    • WebAclId: Insert the web ACL ID, or leave it empty to create a new one.
    • RuleAction (allowed values: BLOCK, COUNT; default: BLOCK): Select the action that AWS WAF takes when a web request comes from an embargoed country.
    • RulePriorityIp (default: 100): Specifies the order in which the embargoed IPs rule will be evaluated in a web ACL.
    • RulePriorityGeo (default: 101): Specifies the order in which the embargoed country rule will be evaluated in a web ACL.

     

  4. Select Next.
  5. On the Options page, you can specify tags (key-value pairs) for the resources in your stack, if you’d like. Then select Next.
  6. On the Review page, review and confirm the settings. Be sure to select the box acknowledging that the template will create AWS Identity and Access Management (IAM) resources with custom names.
  7. To deploy the stack, select Create. In approximately two minutes, the stack creation should be complete. You can verify this on the Events tab by finding your stack ID and looking for the CREATE_COMPLETE status:

    Upon the completion of the CloudFormation stack, you should see CREATE_COMPLETE as the Status. It should look like this:
     

    Figure 4: Look for “CREATE_COMPLETE” as the “Status”

  8. Return to the AWS Management Console, where you’ll see that an additional rule has been added, as shown in the following diagram:
     
    Figure 5: An additional rule has been added

  9. Choose your newly created rule, then go to the Rules details page. You should now see the JSON file that contains our initial list of embargoed countries to filter traffic from. This is a starting point list: it’s your responsibility as a customer to update the embargoed countries list going forward. To update the list of countries, you can edit the JSON file located in the Amazon S3 bucket using the steps in the next section of this post.
     

    Note: Check to make sure that the web ACL is associated with the endpoint you need to protect, or you run the risk of leaving the endpoint unprotected against inbound traffic from the geographic regions you want to block. More information about how to associate an endpoint with a WAF web ACL can be found here.

Updating the list of embargoed countries

  1. To find the S3 bucket, go to the Resources tab of the completed CloudFormation stack.
  2. Select the Physical ID to see an Amazon S3 bucket with an S3 object called embargoed-countries.json. You’ll be directed to the Amazon S3 bucket.
     
    Figure 6: The “embargoed-countries.json” file

  3. Download the embargoed-countries.json file, edit it, and upload the edited file to the same location. Wait for a couple of minutes for the changes to propagate to AWS WAF.
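
If you'd rather script that round trip than use the console, a short sketch like the following works; the bucket name is a placeholder, so copy the real one from the stack's Resources tab.

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "your-embargoed-countries-bucket"  # placeholder; use the bucket created by the stack
    KEY = "embargoed-countries.json"

    # Download the current list, edit the local copy, then upload it back.
    s3.download_file(BUCKET, KEY, "embargoed-countries.json")
    # ... edit embargoed-countries.json locally ...
    s3.upload_file("embargoed-countries.json", BUCKET, KEY)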

Conclusion

You now have access to a simple solution to block inbound traffic from specific geographic regions. With this solution, you can use AWS WAF to help protect applications served by CloudFront or ALB from unwanted or unauthorized traffic.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Rajat Ravinder Varuni

As a security architect with Amazon Web Services, Rajat provides subject matter expertise in the architecture and deployment of solutions that reduce the likelihood of data leakage, web application and denial-of-service attacks, as well as in the design of data encryption methodologies to secure mission-critical data. Find his other contributions to the AWS Security Blog here.

Heitor Vital

Heitor Vital is a Solutions Builder at Amazon Web Services. His team outlines AWS best practices and provides prescriptive architectural guidance, as well as automated solutions that you can deploy in your AWS account in minutes. He contributes to projects such as AWS WAF Security Automation, Data Lake Solution, and Serverless Bot Framework.

New whitepaper: Achieving Operational Resilience in the Financial Sector and Beyond

Post Syndicated from Rahul Prabhakar original https://aws.amazon.com/blogs/security/new-whitepaper-achieving-operational-resilience-in-the-financial-sector-and-beyond/

AWS has released a new whitepaper, Amazon Web Services’ Approach to Operational Resilience in the Financial Sector and Beyond, in which we discuss how AWS and customers build for resiliency on the AWS cloud. We’re constantly amazed at the applications our customers build using AWS services — including what our financial services customers have built, from credit risk simulations to mobile banking applications. Depending on their internal and regulatory requirements, financial services companies may need to meet specific resiliency objectives and withstand low-probability events that could otherwise disrupt their businesses. We know that financial regulators are also interested in understanding how the AWS cloud allows customers to meet those objectives. This new whitepaper addresses these topics.

The paper walks through the AWS global infrastructure and how we build to withstand failures. Reflecting how AWS and customers share responsibility for resilience, the paper also outlines how a financial institution could build a mission-critical application across AWS Regions in a way that improves its resiliency compared to a traditional, on-premises environment.

Security and resiliency remain our highest priority. We encourage you to check out the paper and provide feedback. We’d love to hear from you, so don’t hesitate to get in touch with us by reaching out to your account executive or contacting AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Learn about AWS Services & Solutions – January AWS Online Tech Talks

Post Syndicated from Robin Park original https://aws.amazon.com/blogs/aws/learn-about-aws-services-solutions-january-aws-online-tech-talks/

AWS Tech Talks

Happy New Year! Join us this January to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

Containers

January 22, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive Into AWS Cloud Map: Service Discovery for All Your Cloud Resources – Learn how to increase your application availability with AWS Cloud Map, a new service that lets you discover all your cloud resources.

Data Lakes & Analytics

January 22, 2019 | 1:00 PM – 2:00 PM PT – Increase Your Data Engineering Productivity Using Amazon EMR Notebooks – Learn how to develop analytics and data processing applications faster with Amazon EMR Notebooks.

Enterprise & Hybrid

January 29, 2019 | 1:00 PM – 2:00 PM PT – Build Better Workloads with the AWS Well-Architected Framework and Tool – Learn how to apply architectural best practices to guide your cloud migration.

IoT

January 29, 2019 | 9:00 AM – 10:00 AM PT – How To Visually Develop IoT Applications with AWS IoT Things Graph – See how easy it is to build IoT applications by visually connecting devices & web services.

Mobile

January 21, 2019 | 11:00 AM – 12:00 PM PT – Build Secure, Offline, and Real Time Enabled Mobile Apps Using AWS AppSync and AWS Amplify – Learn how to easily build secure, cloud-connected data-driven mobile apps using AWS Amplify, GraphQL, and mobile-optimized AWS services.

Networking

January 30, 2019 | 9:00 AM – 10:00 AM PT – Improve Your Application’s Availability and Performance with AWS Global Accelerator – Learn how to accelerate your global latency-sensitive applications by routing traffic across AWS Regions.

Robotics

January 29, 2019 | 11:00 AM – 12:00 PM PT – Using AWS RoboMaker Simulation for Real World Applications – Learn how AWS RoboMaker simulation works and how you can get started with your own projects.

Security, Identity & Compliance

January 23, 2019 | 1:00 PM – 2:00 PM PT – Customer Showcase: How Dow Jones Uses AWS to Create a Secure Perimeter Around Its Web Properties – Learn tips and tricks from a real-life example on how to be in control of your cloud security and automate it on AWS.

January 30, 2019 | 11:00 AM – 12:00 PM PT – Introducing AWS Key Management Service Custom Key Store – Learn how you can generate, store, and use your KMS keys in hardware security modules (HSMs) that you control.

Serverless

January 31, 2019 | 9:00 AM – 10:00 AM PT – Nested Applications: Accelerate Serverless Development Using AWS SAM and the AWS Serverless Application Repository – Learn how to compose nested applications using the AWS Serverless Application Model (SAM), SAM CLI, and the AWS Serverless Application Repository.

January 31, 2019 | 11:00 AM – 12:00 PM PT – Deep Dive Into Lambda Layers and the Lambda Runtime API – Learn how to use Lambda Layers to enable re-use and sharing of code, and how you can build and test Layers locally using the AWS Serverless Application Model (SAM).

Storage

January 28, 2019 | 11:00 AM – 12:00 PM PT – The Amazon S3 Storage Classes – Learn about the Amazon S3 Storage Classes and how to use them to optimize your storage resources.

January 30, 2019 | 1:00 PM – 2:00 PM PT – Deep Dive on Amazon FSx for Windows File Server: Running Windows on AWS – Learn how to deploy Amazon FSx for Windows File Server in some of the most common use cases.

How to centralize and automate IAM policy creation in sandbox, development, and test environments

Post Syndicated from Mahmoud ElZayet original https://aws.amazon.com/blogs/security/how-to-centralize-and-automate-iam-policy-creation-in-sandbox-development-and-test-environments/

To keep pace with AWS innovation, many customers allow their application teams to experiment with AWS services in sandbox environments as they move toward production-ready architecture. These teams need timely access to various sets of AWS services and resources, which means they also need a mechanism to help ensure least privilege is granted. In other words, your application team generally shouldn’t have access to administrative resources, such as an AWS Lambda function that takes periodic Amazon Elastic Block Store snapshot backups, or an Amazon CloudWatch Events rule that sends events to a centralized information security account managed by your security team.

In this blog post, I’ll show you how to create a centralized and automated workflow that creates and validates AWS Identity and Access Management (IAM) policies for application teams working in various sandbox, development, and test environments. Your security developers can customize this workflow according to the specific requirements of your security team. They can create logic to limit the allowed permission sets based on account type or owning team. I’ll use AWS CodePipeline to create and manage a workflow containing various stages and spanning multiple AWS accounts that I’ll describe in more detail in the next section.

Solution overview

I’ll start with this scenario: Alice is an administrator for an AWS sandbox account used by her organization’s data scientists to try out AWS analytics services such as Amazon Athena and Amazon EMR. The data scientists assess the suitability of these services for their production use cases by running sample analytics jobs on portions of real data sets after any sensitive information has been taken out. The data sets are stored in an existing Amazon Simple Storage Service (Amazon S3) bucket. For every new project, Alice authors a new IAM policy that allows the project team to access their requested Amazon S3 bucket and create their analytics clusters. However, Alice must follow a company guideline that sandbox accounts can only launch specific Amazon Elastic Compute Cloud (Amazon EC2) instance types. She must also restrict access to all administrative AWS Lambda functions and CloudWatch Events rules that the security team use to monitor sandbox account compliance. Below is the solution that meets these requirements and makes it easier for Alice and other administrators to perform their tasks.
 

Figure 1: Solution architecture


  1. Alice uses the IAM visual editor to author a template that gives the data science team access to launch and manage EMR clusters that analyze S3-based data sets. She then uploads the IAM JSON policy document to an existing S3 bucket, encrypting it with an AWS Key Management Service (AWS KMS) key. The key and the S3 bucket were created earlier by the security team as part of account baselining, which I’ll detail later in this post.
  2. AWS CodePipeline automatically fetches the IAM JSON policy document and invokes a sequence of validation checks that use a single and central Lambda function hosted in an AWS account managed by the security team.
  3. If the IAM JSON policy adheres to all account and general security requirements coded by the security developers, the central Lambda function automatically creates the policy in Alice’s account and the pipeline succeeds. The central validation Lambda function also attaches a set of predefined explicit denies to the IAM policy to ensure that it limits undesired user capabilities in the sandbox account (one example of such a deny is sketched after this list). If the IAM JSON policy fails the checks, the pipeline fails and provides Alice with the specific reason for non-compliance. Alice must then modify the policy and resubmit. When the policy has been successfully created, Alice attaches it to the right IAM user, group, or role.
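
As an illustration of the predefined explicit denies mentioned in step 3, the statement below is a minimal sketch of a deny that enforces the company guideline from the scenario by blocking EC2 launches outside an approved instance family. The t2.* value and the Sid are assumptions used for illustration; your security team defines the real list:

    {
        "Sid": "DenyNonApprovedInstanceTypes",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringNotLike": {
                "ec2:InstanceType": "t2.*"
            }
        }
    }

Because the deny is scoped to the instance resource with a condition on ec2:InstanceType, it blocks RunInstances requests for any other instance type while leaving the rest of the submitted policy untouched.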

Solution deployment

This solution includes the following three steps:

  1. Deploy the solution prerequisites.
  2. Set up the policy validation Lambda function in the central information security account.
  3. Test the sandbox account pipeline.

Prerequisites

As this solution manages permissions granted to AWS services or IAM entities, I highly recommend that you try the solution first in an isolated test environment to make sure it meets all your security requirements.

  1. You’ll need administrator access in two AWS accounts to set up the solution. The deployment of this solution is typically done by one of your organization’s administrators while setting up new AWS accounts. These are the two account types you’ll need access to:
     

    • A sandbox account. This lets application teams experiment with various AWS architectures. This could be a development or test account, as mentioned earlier.
    • A central information security account. Typically, this is owned by an information security team who monitors and enforces security compliance within a multi-account structure.


Important: Because the Lambda function that you’ll create in the information security account has highly privileged permissions, it’s important to strictly follow best practices for securing the account. You need to limit account access to security team members. Sandbox account administrators should also not give this central Lambda function any IAM permissions in their sandbox account beyond IAM policy creation.

  2. Because you’ll use the AWS Management Console for both AWS accounts, I strongly recommend that you have roles in both AWS accounts and use the console’s Switch Role feature. You can attach an alias to each account and give each a different color code so that you always know which one you’re logged into.
  3. Make sure to use the same AWS region for all the resources that you create for this solution.

Step 1: Deploy the solution prerequisites

Before building the pipeline across the two AWS accounts, you must first configure the required resources in both accounts, such as IAM roles and encryption keys. This configuration is typically done according to your security team’s guidelines when your organization first sets up the sandbox, development, or test environment.

Important

  • In addition to the initial setup you’ll create in this section, your security team must explicitly deny sandbox, development, or test account administrators from attaching IAM policies that do not meet the allowed security policies for that account type, such as the AdministratorAccess IAM policy. Moreover, your security team must ensure that any current or future users, groups, or roles in the account have no permissions to directly create or update IAM policies through actions such as CreatePolicy, CreatePolicyVersion, PutRolePolicy, PutUserPolicy, PutGroupPolicy, or UpdateAssumeRolePolicy (a sketch of such a deny follows this list). You want to ensure that IAM policies can only be created through the automation pipeline, which I’ll show you how to build shortly.
  • Because the solution I’ll be describing focuses on the creation of least privilege permissions, it’s highly advisable that your security team combines the solution with IAM permission boundaries to make sure that any permissions defined in this solution are scoped by a set of pre-defined permissions for every type of account in the organization. For example, your account administrators might only be allowed to create IAM users or roles with a pre-defined set of permission boundaries that limit the permissions attached to those principals. For more information about permission boundaries, please refer to this AWS Security blog post.
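
As an illustration of how direct policy changes could be blocked, a deny statement along the following lines, delivered through a mechanism such as a service control policy or an IAM permissions boundary and scoped so that it does not affect the pipeline’s own central-account-role, would prevent principals in the account from manipulating policies themselves. Treat it as a minimal sketch rather than a complete control:

    {
        "Sid": "DenyDirectIamPolicyChanges",
        "Effect": "Deny",
        "Action": [
            "iam:CreatePolicy",
            "iam:CreatePolicyVersion",
            "iam:PutRolePolicy",
            "iam:PutUserPolicy",
            "iam:PutGroupPolicy",
            "iam:UpdateAssumeRolePolicy"
        ],
        "Resource": "*"
    }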

Create the sandbox account prerequisites

Follow the steps below to deploy an AWS CloudFormation template that will create the following resources in the sandbox account:

  • An S3 bucket where your sandbox administrators will upload IAM policies
  • An IAM role that your automated pipeline will use to access the S3 bucket that stores the IAM policies
  • An AWS KMS key that you will use to encrypt the IAM policies in your S3 bucket
  1. While logged in to your sandbox account in your default browser, select this link to launch an AWS stack with the sandbox environment prerequisites. You’ll be redirected to the CloudFormation console with the template URL already populated.
     

    Figure 2: CloudFormation console with prepopulated URL

  2. Select Next and, optionally, provide a name for your stack. A suggested stack name, Sandbox-Prerequisites, should already be populated.
  3. The template defines an input parameter called CentralAccount that you can populate with the AWS account ID of your security account. For more information on how to find the account ID of your security account, check here.
  4. Select Next, and then select Next again.
  5. To have the stack create the IAM roles that your pipeline will use, select the check box that says I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create Stack.
  6. Select the Stack info tab and refresh periodically while watching the Stack Status field value. After your stack reaches the state CREATE_COMPLETE, navigate to the CloudFormation Outputs tab and copy the following output values to the text editor of your choice. You’ll use these values in subsequent CloudFormation stacks.
     
    Figure 3: CloudFormation Outputs tab


Create the information security account prerequisites

Follow the steps below to deploy a CloudFormation template that will create the following resources in your information security account:

  • An IAM role used by your automated pipeline to invoke your central Lambda function and to provide access to the sandbox account KMS key
  • An IAM role used by the central Lambda function to assume a role in the sandbox account and manage IAM policies
  1. While logged in to your security account in your default browser, select this link to launch an AWS stack with the security environment prerequisites. You’ll be redirected to the CloudFormation console with the template URL already populated.
  2. Select Next and, optionally, provide a name for your stack. A suggested stack name, Sandbox-Prerequisites, should already be populated.
  3. Populate the following input parameter fields:
    • SandboxAccount: The AWS account ID for the sandbox account.
    • ArtifactBucket: The bucket name that you noted in your text editor from the previous stack run in the sandbox account.
    • CMKARN: The Amazon Resource Name (ARN) of the KMS key that you noted in your text editor from the previous stack run in the sandbox account.
    • PolicyCheckerFunctionName: The name of the Lambda function to be created later. The default value is PolicyChecker.
  4. Select Next, and then select Next again.
  5. To have the stack create the IAM roles used by your pipeline, select the box that reads I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create Stack.
  6. Wait until your stack reaches the CREATE_COMPLETE state.

Create the sandbox account pipeline

Now, switch back to your sandbox account and deploy the CloudFormation template that will create the following resources in the sandbox account:

  • An AWS CodePipeline automation pipeline that fetches the IAM policy document from S3 and sends it to the security account for centralized validation. If valid, a Lambda function in the information security account will also create the IAM policy in the sandbox account.
  • An S3 bucket policy to allow your central Lambda function to fetch the IAM policy JSON document from your bucket
  • An IAM role that will be assumed by the Lambda function in the central information security account and used to create IAM policies in the sandbox account. Sandbox account administrators can then attach those IAM policies to the required entities, such as an IAM user or role.
  1. While logged in to your sandbox account in your default browser, select this link to launch an AWS stack with the sandbox account pipeline. You’ll be redirected to the CloudFormation console with the template URL already populated.
  2. Select Next and, optionally, provide a name for your stack. A suggested stack name, Sandbox-Pipeline, should already be populated.
  3. Populate the following input parameter fields:
    • CentralAccount: The AWS account ID of the information security account, without hyphens.
    • ArtifactBucket: The same bucket name that you noted in your text editor earlier and used in the previous stack in the information security account.
    • CMKARN: The ARN of the KMS key that you noted in your text editor earlier and used in the previous stack in the information security account.
    • PolicyCheckerFunctionName: Again, the name of the Lambda function to be created later. It must be the same value you provided to the information security account template.
  4. Select Next, and then select Next again.
  5. To have the stack create the required IAM roles, select the box that reads I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then select Create Stack.
  6. Wait until your stack reaches the CREATE_COMPLETE state.

Step 2: Set up the policy validation Lambda function in the central information security account

In the central information security account, create the Lambda function that will validate the IAM policies created in the sandbox environment.

  1. In the AWS Lambda console, select Create Function and then select Author from scratch. Provide values for the following fields:
    • Name. This must be the same function name defined as input parameter PolicyCheckerFunctionName to CloudFormation in step 1, when you set up the information security account prerequisites. If you did not change the default value in step 1, the default is still PolicyChecker.
    • Runtime. Python 2.7.
    • Role. To set the role, select Choose an existing role, and then select the role named policy-checker-lambda-role. This is the role you created in step 1, when you set up the information security account prerequisites.

    Choose Create Function, scroll down to Function Code, and then paste the following code into the editor (replacing the existing code):

    
    #  Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    #  Licensed under the Apache License, Version 2.0 (the "License"). You may not
    #  use this file except in compliance with
    #  the License. A copy of the License is located at
    #      http://aws.amazon.com/apache2.0/
    #  or in the "license" file accompanying this file. This file is distributed
    #  on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
    #  either express or implied. See the License for the
    #  specific language governing permissions and
    #  limitations under the License.
    from __future__ import print_function
    import json
    import boto3
    import zipfile
    import tempfile
    import os
    
    print('Loading function')
    PERMISSIVE_ERROR_MSG = """Policy creation request rejected: * permissions not
                             allowed in both actions and resources"""
    GENERAL_ERROR_MSG = """An error has occurred while validating policy.
                            Please contact admin"""
    
    
    def get_template(event, s3, artifact, file_in_zip):
        # Download the CodePipeline input artifact from S3 and return the named
        # file from inside the zipped artifact
        bucket = event['CodePipeline.job']['data']['inputArtifacts'][0]['location']['s3Location']['bucketName']
        key = event['CodePipeline.job']['data']['inputArtifacts'][0]['location']['s3Location']['objectKey']
    
        with tempfile.NamedTemporaryFile() as tmp_file:
            s3.download_file(bucket, key, tmp_file.name)
            with zipfile.ZipFile(tmp_file.name, 'r') as archive:
                return archive.read(file_in_zip)
    
    
    def get_sts_session(event, account, rolename):
        sts = boto3.client("sts")
        RoleArn = str("arn:aws:iam::" + account + ":role/" + rolename)
        response = sts.assume_role(
            RoleArn=RoleArn,
            RoleSessionName='SecurityManageAccountPermissions',
            DurationSeconds=900)
        sts_session = boto3.Session(
            aws_access_key_id=response['Credentials']['AccessKeyId'],
            aws_secret_access_key=response['Credentials']['SecretAccessKey'],
            aws_session_token=response['Credentials']['SessionToken'],
            region_name=os.environ['AWS_REGION'],
            botocore_session=None,
            profile_name=None)
        return (sts_session)
    
    
    def ManagePolicy(event, context):
        # Set boto session to get pipeline artifact from sandbox/dev/test account
        artifact_session = boto3.Session(
            aws_access_key_id=event['CodePipeline.job']['data']
                                   ['artifactCredentials']['accessKeyId'],
            aws_secret_access_key=event['CodePipeline.job']['data']
                                       ['artifactCredentials']['secretAccessKey'],
            aws_session_token=event['CodePipeline.job']['data']
                                   ['artifactCredentials']['sessionToken'],
            region_name=os.environ['AWS_REGION'],
            botocore_session=None,
            profile_name=None)
        # Fetch pipeline artifact from S3
        s3 = artifact_session.client('s3')
        permission_doc = get_template(event, s3, '', 'policy.json')
        metadata_doc = json.loads(get_template(event, s3, '', 'metadata.json'))
        permission_doc_json = json.loads(permission_doc)
        # Assume the central account role in sandbox/dev/test account
        global STS_SESSION
        STS_SESSION = ''  
        STS_SESSION = get_sts_session(
            event, event['CodePipeline.job']['accountId'], 'central-account-role')
        iam = STS_SESSION.client('iam')
        codepipeline = STS_SESSION.client('codepipeline')
        policy_arn = 'arn:aws:iam::' + event['CodePipeline.job']['accountId'] + ':policy/' + metadata_doc['PolicyName']
    
        try:
            # 1.Sample code - Validate policy sent from sandbox/dev/test account:
            # look for * actions and * resources
            for statement in permission_doc_json['Statement']:
                if statement['Action'] == '*' and statement['Resource'] == '*':
                    return codepipeline.put_job_failure_result(
                                        jobId=event['CodePipeline.job']['id'],
                                        failureDetails={
                                            'type': 'JobFailed',
                                            'message': PERMISSIVE_ERROR_MSG})
            # 2.Sample code - Attach any required denies from central
            # pre-defined policy
            iam_local = boto3.client('iam')
            account_id = context.invoked_function_arn.split(":")[4]
            local_policy_arn = 'arn:aws:iam::' + account_id + ':policy/central-deny-policy-sandbox'
            policy_response = iam_local.get_policy(PolicyArn=local_policy_arn)
            policy_version_id = policy_response['Policy']['DefaultVersionId']
            policy_version_doc = iam_local.get_policy_version(
                PolicyArn=local_policy_arn,
                VersionId=policy_version_id)
            for statement in policy_version_doc['PolicyVersion']['Document']['Statement']:
                permission_doc_json['Statement'].append(
                   statement
                )
            # 3. If validated successfully, create policy in
            # sandbox/dev/test account
            iam.create_policy(
                PolicyName=metadata_doc['PolicyName'],
                PolicyDocument=json.dumps(permission_doc_json),
                Description=metadata_doc['PolicyDescription'])
    
            # successful creation, put result back to
            # sandbox/dev/test account pipeline
            codepipeline.put_job_success_result(
                jobId=event['CodePipeline.job']['id'])
        except Exception as e:
            print('Error: ' + str(e))
            codepipeline.put_job_failure_result(
                jobId=event['CodePipeline.job']['id'],
                failureDetails={'type': 'JobFailed', 'message': GENERAL_ERROR_MSG})
    
    def lambda_handler(event, context):
        print(event)
        ManagePolicy(event, context)
    

    This sample code shows how the Lambda function checks the IAM JSON policy submitted by Alice for policies that are too permissive because they allow all actions on all account resources. The sample code also appends the statements of a centrally managed policy (central-deny-policy-sandbox) to the submitted policy; in this example, that policy contains an explicit IAM Deny that prevents the launch of Amazon EC2 instances that are not part of the T2 instance family, ensuring that only T2 instances can be launched. Your security developers should author code similar to this sample code in order to meet the security policies of every account type and control the IAM policies created in various sandbox, development, and test environments (a sketch of one additional check appears after this list).

  2. Before saving your new Lambda function code, scroll further down to the Basic Settings section and increase the function timeout to 10 seconds.
  3. Select Save.
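
The validation in the sample function only rejects a statement that allows all actions on all resources. As one example of how your security developers might extend it, the following is a minimal sketch of an additional helper, not part of the original code, that flags any Allow statement granting all actions or all IAM actions; you would call it on permission_doc_json inside the try block and call put_job_failure_result with an appropriate message if it returns True:

    def allows_iam_wildcard(policy_doc):
        # Return True if any Allow statement grants '*' or 'iam:*' actions
        for statement in policy_doc.get('Statement', []):
            if statement.get('Effect') != 'Allow':
                continue
            actions = statement.get('Action', [])
            if not isinstance(actions, list):
                actions = [actions]
            for action in actions:
                if action.lower() in ('*', 'iam:*'):
                    return True
        return False

Each additional check should follow the same pattern as the sample: on failure, report the job as failed with a message that tells the administrator exactly what to fix.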

Step 3: Test the sandbox account pipeline

Now it’s time to deploy the solution in your sandbox account.

  1. Create the following files and compress them into an archive with the name policy.zip (this is the name your pipeline expects). If you’d rather script the packaging and upload, see the sketch that follows this list.
    • metadata.json: This file contains metadata like the name and description of the IAM policy to be created.
      
      {
          "PolicyDescription": "ec2 start permission policy",
          "PolicyName": "Ec2RunTeamA"
      }
                      

    • policy.json: This file contains the JSON body of the IAM policy to be created.
      
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "EC2Run",
                  "Effect": "Allow",
                  "Action": "ec2:RunInstances",
                  "Resource": "*"
              }
          ]
      }
                      

  2. To upload your policy.zip file to the bucket you created earlier, go to the Amazon S3 console in the sandbox account and, in the search box at the top of the page, search for the bucket you noted in your text editor earlier as ArtifactBucket.
  3. When you locate your bucket, select the bucket name, and then select Upload. The upload dialog will appear.
  4. Select Add Files and navigate to the folder with the policy.zip file. Select the file, select Open, select Next, and then select Next again.
     
    Figure 4: S3 upload dialog


  5. Select the AWS KMS master-key radio button, and then select the KMS key that has the alias codepipeline-policy-crossaccounts.
     
    Figure 5: Selecting the KMS key


  6. Select Next, and then select Upload.
  7. Go to the AWS CodePipeline console, select your sandbox pipeline, and wait for the pipeline to start running. It can take up to a minute for it to start.
     
    Figure 6: AWS CodePipeline console


  8. Wait for your pipeline to complete. There should be no validation errors for the IAM policy you just uploaded and your IAM policy should be successfully created. To view the newly created IAM policy, open the AWS IAM console.
  9. Select Policies on the left and search for the policy with the name defined in the metadata.json file.
     
    Figure 7: Viewing your new policy


  10. Select the policy name. Note the IAM deny that was automatically added to your defined policy.
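
If you’d rather script the packaging and upload from steps 1 through 6 instead of using the console, the following is a minimal sketch that assumes boto3 is configured with credentials for the sandbox account; the bucket name and KMS key ARN are placeholders that you replace with the ArtifactBucket and CMKARN values you copied from the CloudFormation outputs:

    import zipfile
    import boto3

    ARTIFACT_BUCKET = 'your-artifact-bucket-name'                   # placeholder
    KMS_KEY_ARN = 'arn:aws:kms:us-east-1:111122223333:key/example'  # placeholder

    # Package the policy and its metadata into the archive name the pipeline expects
    with zipfile.ZipFile('policy.zip', 'w') as archive:
        archive.write('metadata.json')
        archive.write('policy.json')

    # Upload the archive, encrypting it with the cross-account KMS key
    s3 = boto3.client('s3')
    with open('policy.zip', 'rb') as f:
        s3.put_object(
            Bucket=ARTIFACT_BUCKET,
            Key='policy.zip',
            Body=f,
            ServerSideEncryption='aws:kms',
            SSEKMSKeyId=KMS_KEY_ARN)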

If you’d like to test the pipeline further, you can modify the policy to permit all actions on all resources. When policy.zip is uploaded again, the pipeline should return the following error:


Policy creation request rejected: * permissions not allowed in both actions and resources
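
For example, replacing the contents of policy.json with the following minimal sketch, which allows every action on every resource, triggers that rejection:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "TooPermissive",
                "Effect": "Allow",
                "Action": "*",
                "Resource": "*"
            }
        ]
    }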

If you encounter any errors as you modify your Lambda function code, you can always go back to the Lambda function logs in the central information security account. For more information on how to access Lambda function logs, please refer to the documentation.
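
If you want to pull those logs programmatically rather than browse them in the CloudWatch console, here’s a minimal sketch using boto3 in the information security account; the log group name assumes you kept the default function name PolicyChecker:

    import boto3

    logs = boto3.client('logs')
    # Lambda writes its output to a log group named after the function
    response = logs.filter_log_events(
        logGroupName='/aws/lambda/PolicyChecker',
        limit=50)
    for log_event in response['events']:
        print(log_event['message'])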

The same logic used here can be extended to other sandbox, development, or test environments. However, the existing roles in the central information security account will need to be updated to trust, and to have access to the resources in, each newly added sandbox, development, or test account.
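
For example, the permissions policy attached to the central Lambda function’s execution role would need its sts:AssumeRole statement extended to cover the central-account-role in each newly added account; here’s a minimal sketch with placeholder account IDs:

    {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": [
            "arn:aws:iam::111111111111:role/central-account-role",
            "arn:aws:iam::222222222222:role/central-account-role"
        ]
    }

Depending on how your roles are set up, similar updates may also apply to the permissions that grant access to the new account’s artifact bucket and KMS key.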

Summary

In this blog post, I showed you how to centralize the validation and creation of IAM policies across various AWS accounts. This allows your security developers to codify your security best practices, permitting automatic creation and validation of IAM policies across your various sandbox, development, and test accounts. Account administrators can then attach those validated IAM policies to the required IAM users, groups, or roles. This process strikes a balance between agility and control: it empowers your account administrators to create compliant, least-privilege IAM policies, while allowing your application teams to keep experimenting and innovating quickly. If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Mahmoud ElZayet

Mahmoud is a Global Accounts Solutions Architect at AWS. He works with large enterprise customers providing guidance and technical assistance for building cloud solutions. Mahmoud is passionate about DevOps and Cloud Compliance topics. Outside of work, he enjoys exploring new places with his wife and two kids.

Top 11 posts in 2018

Post Syndicated from Tom Olsen original https://aws.amazon.com/blogs/security/top-11-posts-in-2018/

We covered a lot of ground in 2018: from GDPR to re:Inforce and numerous feature announcements, Amazon GuardDuty deep-dives to SOC reports, automated reasoning explanations, and a series of interviews with AWS thought leaders.

We’ve got big plans for 2019, but there’s room for more: please let us know what you want to read about in the Comments section below.

The top 11 posts from 2018 based on page views

  1. Setting the Record Straight on Bloomberg BusinessWeek’s Erroneous Article
  2. All AWS Services GDPR ready
  3. Use YubiKey security key to sign into AWS Management Console with YubiKey for multi-factor authentication
  4. AWS Federated Authentication with Active Directory Federation Services (AD FS)
  5. AWS GDPR Data Processing Addendum – Now Part of Service Terms
  6. Announcing the First AWS Security Conference: AWS re:Inforce 2019
  7. Easier way to control access to AWS regions using IAM policies
  8. How to Use Bucket Policies and Apply Defense-in-Depth to Help Secure Your Amazon S3 Data
  9. Preparing for AWS Certificate Manager (ACM) Support of Certificate Transparency
  10. How to Create an AWS IAM Policy to Grant AWS Lambda Access to an Amazon DynamoDB Table
  11. How to retrieve short-term credentials for CLI use with AWS Single Sign-on

If you’re new to AWS and are just discovering the Security Blog, we’ve also compiled a list of older posts that customers continue to find useful.

The top 10 posts of all time based on page views

  1. Where’s My Secret Access Key?
  2. Writing IAM Policies: How to Grant Access to an Amazon S3 Bucket
  3. How to Restrict Amazon S3 Bucket Access to a Specific IAM Role
  4. Securely Connect to Linux Instances Running in a Private Amazon VPC
  5. Setting the Record Straight on Bloomberg BusinessWeek’s Erroneous Article
  6. Writing IAM Policies: Grant Access to User-Specific Folders in an Amazon S3 Bucket
  7. How to Connect Your On-Premises Active Directory to AWS Using AD Connector
  8. A New and Standardized Way to Manage Credentials in the AWS SDKs
  9. IAM Policies and Bucket Policies and ACLs! Oh, My! (Controlling Access to S3 Resources)
  10. How to Control Access to Your Amazon Elasticsearch Service Domain

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.