Tag Archives: Security, Identity & Compliance

How to create and manage users within AWS Single Sign-On

Post Syndicated from Vijay Sharma original https://aws.amazon.com/blogs/security/how-to-create-and-manage-users-within-aws-sso/

AWS Single Sign-On (AWS SSO) is a cloud service that allows you to grant your users access to AWS resources, such as Amazon EC2 instances, across multiple AWS accounts. By default, AWS SSO now provides a directory that you can use to create users, organize them in groups, and set permissions across those groups. You can also grant the users that you create in AWS SSO permissions to applications such as Salesforce, Box, and Office 365. AWS SSO and its directory are available at no additional cost to you.

A directory is a key building block that allows you to manage the users to whom you want to grant access to AWS resources and applications. AWS Identity and Access Management (IAM) provides a way to create users that can be used to access AWS resources within one AWS account. However, many businesses prefer an approach that enables users to sign in once with a single credential and access multiple AWS accounts and applications. You can now create your users centrally in AWS SSO and manage user access to all your AWS accounts and applications. Your users sign in to a user portal with a single set of credentials configured in AWS SSO, allowing them to access all of their assigned accounts and applications in a single place.

Note: If you manage your users in a Microsoft Active Directory (Microsoft AD) directory, AWS SSO already provides you with an option to connect to a Microsoft AD directory. By connecting your Microsoft AD directory once with AWS SSO, you can assign permissions for AWS accounts and applications directly to your users by easily looking up users and groups from your Microsoft AD directory. Your users can then use their existing Microsoft AD credentials to sign into the AWS SSO user portal and access their assigned accounts and applications in a single place. Customers who manage their users in an existing Lightweight Directory Access Protocol (LDAP) directory or through a cloud identity provider such as Microsoft Azure AD can continue to use IAM federation to enable their users’ access to AWS resources.

How to create users and groups in AWS SSO

You can create users in AWS SSO by configuring their email address and name. When you create a user, AWS SSO sends an email to the user by default so that they can set their own password. Your user will use their email address and a password they configure in AWS SSO to sign into the user portal and access all of their assigned accounts and applications in a single place.

You can also add the users that you create in AWS SSO to groups you create in AWS SSO. In addition, you can create permission sets that define permitted actions on an AWS resource, and assign them to your users and groups. For example, you can grant the DevOps group permissions to your production AWS accounts. When you add users to the DevOps group, they get access to your production AWS accounts automatically.

In this post, I will show you how to create users and groups in AWS SSO, how to create permission sets, how to assign your groups and users to permission sets and AWS accounts, and how your users can sign into the AWS SSO user portal to access AWS accounts. To learn more about how to grant users that you create in AWS SSO permissions to business applications such as Office 365 and Salesforce, see Manage SSO to Your Applications.

Walk-through prerequisites

For this walk-through, I assume that you use AWS Organizations to manage multiple AWS accounts and that you have already enabled AWS SSO in your master account.

Overview

To illustrate how to add users in AWS SSO and how to grant permissions to multiple AWS accounts, imagine that you’re the IT manager for a company, Example.com, that wants to make it easy for its users to access resources in multiple AWS accounts. Example.com has five AWS accounts: a master account (called MasterAcct), two developer accounts (DevAccount1 and DevAccount2), and two production accounts (ProdAccount1 and ProdAccount2). Example.com uses AWS Organizations to manage these accounts and has already enabled AWS SSO.

Example.com has two developers, Martha and Richard, who need full access to Amazon EC2 and Amazon S3 in the developer accounts (DevAccount1 and DevAccount2) and read-only access to EC2 and S3 resources in the production accounts (ProdAccount1 and ProdAccount2).

The following diagram illustrates how you can grant Martha and Richard permissions to the developer and production accounts in four steps:

  1. Add users and groups in AWS SSO: Add users Martha and Richard in AWS SSO by configuring their names and email addresses. Add a group called Developers in AWS SSO and add Martha and Richard to the Developers group.
  2. Create permission sets: Create two permission sets. In the first permission set, include policies that give full access to Amazon EC2 and Amazon S3. In the second permission set, include policies that give read-only access to Amazon EC2 and Amazon S3.
  3. Assign groups to accounts and permission sets: Assign the Developers group to your developer accounts and assign the permission set that gives full access to Amazon EC2 and Amazon S3. Assign the Developers group to your production accounts, too, and assign the permission set that gives read-only access to Amazon EC2 and Amazon S3. Martha and Richard now have full access to Amazon EC2 and Amazon S3 in the developer accounts and read-only access in the production accounts.
  4. Users sign into the User Portal to access accounts: Martha and Richard receive email from AWS to set their passwords with AWS SSO. Martha and Richard can now sign into the AWS SSO User Portal using their email addresses and the passwords they set with AWS SSO, allowing them to access their assigned AWS accounts.
Figure 1: Architecture diagram

Step 1: Add users and groups in AWS SSO

To add users in AWS SSO, navigate to the AWS SSO Console. Then, follow the steps below to add Martha as a user, to create a group called Developers, and to add Martha to the Developers group in AWS SSO.

  1. In the AWS SSO Dashboard, choose Manage your directory to navigate to the Directory tab.
    Figure 2: Navigating to the “Manage your directory” page

  2. By default, AWS SSO provides you a directory that you can use to manage users and groups in AWS SSO. To add a user in AWS SSO, choose Add user. If you previously connected a Microsoft AD directory with AWS SSO, you can switch to using the directory that AWS SSO now provides by default by following the steps in Change Directory.
    Figure 3: Adding new users to your directory

  3. On the Add User page, enter an email address, first name, and last name for the user, then create a display name. In this example, you’re adding “Martha Rivera” as a user. For the password, choose Send an email to the user with password instructions. This allows users to set their own passwords.

    Optionally, you can also set a mobile phone number and add additional user attributes.

    Figure 4: Adding user details

  4. Next, you’re ready to add the user to groups. First, you need to create a group. Later, in Step 3, you can grant your group permissions to an AWS account so that any users added to the group will inherit the group’s permissions automatically. In this example, you will create a group called Developers and add Martha to the group. To do so, from the Add user to groups page, choose Create group.
    Figure 5: Creating a new group

  5. In the Create group window, title your group by filling out the Group name field. For this example, enter Developers. Optionally, you can also enter a description of the group in the Description field. Choose Create to create the group.
    Figure 6: Adding a name and description to your new group

  6. On the Add users to group page, check the box next to the group you just created, and then choose Add user. Following this process will allow you to add Martha to the Developers group.
    Figure 7: Adding a user to your new group

You’ve successfully created the user Martha and added her to the Developers group. You can repeat sub-steps 2, 3, and 6 above to create more users and add them to the group. This is the process you should follow to create the user Richard and add him to the Developers group.
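
If you’d rather script these steps than use the console, note that AWS SSO (since renamed AWS IAM Identity Center) later added identity store APIs that can create the same users and groups. The following is a minimal sketch only, assuming your AWS SDK for Python (boto3) includes the sso-admin and identitystore clients and that you have permission to call them; the user name, email address, and description are illustrative placeholders.

import boto3

sso_admin = boto3.client("sso-admin")
identity_store = boto3.client("identitystore")

# Look up the AWS SSO instance and its identity store ID
instance = sso_admin.list_instances()["Instances"][0]
identity_store_id = instance["IdentityStoreId"]

# Create the user Martha (email address is a placeholder)
user_id = identity_store.create_user(
    IdentityStoreId=identity_store_id,
    UserName="martha",
    DisplayName="Martha Rivera",
    Name={"GivenName": "Martha", "FamilyName": "Rivera"},
    Emails=[{"Value": "martha@example.com", "Primary": True}],
)["UserId"]

# Create the Developers group and add Martha to it
group_id = identity_store.create_group(
    IdentityStoreId=identity_store_id,
    DisplayName="Developers",
    Description="Example.com developers",
)["GroupId"]

identity_store.create_group_membership(
    IdentityStoreId=identity_store_id,
    GroupId=group_id,
    MemberId={"UserId": user_id},
)

The same calls can be repeated for Richard, or wrapped in a loop for a longer list of users.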

Next, you’ll grant the Developers group permissions to AWS resources within multiple AWS accounts. To follow along, you’ll first need to create permission sets.

Step 2: Create permission sets

To grant user permissions to AWS resources, you must create permission sets. A permission set is a collection of administrator-defined policies that AWS SSO uses to determine a user’s permissions for any given AWS account. Permission sets can contain either AWS managed policies or custom policies that are stored in AWS SSO. Policies contain statements that represent individual access controls (allow or deny) for various tasks. This determines what tasks users can or cannot perform within the AWS account. To learn more about permission sets, see Permission Sets.

For this use case, you’ll create two permission sets: 1) EC2AndS3FullAccess, which has the AmazonEC2FullAccess and AmazonS3FullAccess managed policies attached and 2) EC2AndS3ReadAccess, which has the AmazonEC2ReadOnlyAccess and AmazonS3ReadOnlyAccess managed policies attached. Later, in Step 3, you can assign groups to these permission sets and AWS accounts, so that your users have access to these resources. To learn more about creating permission sets with different levels of access, see Create Permission Set.

Follow the steps below to create permission sets:

  1. Navigate to the AWS SSO Console and choose AWS accounts in the left-hand navigation menu.
  2. Switch to the Permission sets tab on the AWS Accounts page, and then choose Create permission set.
    Figure 8: Creating a permission set

  3. On the Create new permission set page, choose Create a custom permission set. To learn more about choosing between an existing job function policy and a custom permission set, see Create Permission Set.
    Figure 9: Customizing a permission set

  4. Enter EC2AndS3FullAccess in the Name field and choose Attach AWS managed policies. Then choose AmazonEC2FullAccess and AmazonS3FullAccess. Choose Create to create the permission set.
    Figure 10: Attaching AWS managed policies to your permission set

You’ve successfully created a permission set. You can use the steps above to create another permission set, called EC2AndS3ReadAccess, by attaching the AmazonEC2ReadOnlyAccess and AmazonS3ReadOnlyAccess managed policies. Now you’re ready to assign your groups to accounts and permission sets.
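
As with Step 1, this step can also be scripted. The sketch below is a hedged example using the later sso-admin APIs, assuming a single AWS SSO instance in the account and an SDK version that includes the sso-admin boto3 client; repeat it with the read-only managed policies to create EC2AndS3ReadAccess.

import boto3

sso_admin = boto3.client("sso-admin")
instance_arn = sso_admin.list_instances()["Instances"][0]["InstanceArn"]

# Create the EC2AndS3FullAccess permission set
permission_set_arn = sso_admin.create_permission_set(
    InstanceArn=instance_arn,
    Name="EC2AndS3FullAccess",
    Description="Full access to Amazon EC2 and Amazon S3",
)["PermissionSet"]["PermissionSetArn"]

# Attach the two AWS managed policies
for policy_arn in (
    "arn:aws:iam::aws:policy/AmazonEC2FullAccess",
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
):
    sso_admin.attach_managed_policy_to_permission_set(
        InstanceArn=instance_arn,
        PermissionSetArn=permission_set_arn,
        ManagedPolicyArn=policy_arn,
    )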

Step 3: Assign groups to accounts and permission sets

In this step, you’ll assign your Developers group full access to Amazon EC2 and Amazon S3 in the developer accounts and read-only access to these resources in the production accounts. To do so, you’ll assign the Developers group to the EC2AndS3FullAccess permission set and to the two developer accounts (DevAccount1 and DevAccount2). Similarly, you’ll assign the Developers group to the EC2AndS3ReadAccess permission set and to the production AWS accounts (ProdAccount1 and ProdAccount2).

Follow the steps below to assign the Developers group to the EC2AndS3FullAccess permission set and developer accounts (DevAccount1 and DevAccount2). To learn more about how to manage access to your AWS accounts, see Manage SSO to Your AWS Accounts.

  1. Navigate to the AWS SSO Console and choose AWS Accounts in the left-hand navigation menu.
  2. Switch to the AWS organization tab and choose the accounts to which you want to assign your group. For this example, select accounts DevAccount1 and DevAccount2 from the list of AWS accounts. Next, choose Assign users.
    Figure 11: Assigning users to your accounts

  3. On the Select users and groups page, type the name of the group you want to add into the search box and choose Search. For this example, you will be looking for the group called Developers. Check the box next to the correct group and choose Next: Permission Sets.
    Figure 12: Setting permissions for the “Developers” group

  4. On the Select permission sets page, select the permission sets that you want to assign to your group. For this use case, you’ll select the EC2AndS3FullAccess permission set. Then choose Finish.
    Figure 13: Choosing permission sets

You’ve successfully granted users in the Developers group access to accounts DevAccount1 and DevAccount2, with full access to Amazon EC2 and Amazon S3.

You can follow the same steps above to grant users in the Developers group access to accounts ProdAccount1 and ProdAccount2 with the permissions in the EC2AndS3ReadAccess permission set. This will grant the users in the Developers group read-only access to Amazon EC2 and Amazon S3 in the production accounts.

Figure 14: Notification of successful account configuration
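
For completeness, here is a hedged sketch of the same assignments made through the later sso-admin APIs; the account IDs, group ID, and permission set ARNs are placeholders that you would replace with values from your own environment (for example, the outputs of the earlier sketches).

import boto3

sso_admin = boto3.client("sso-admin")
instance_arn = sso_admin.list_instances()["Instances"][0]["InstanceArn"]

# Placeholders: use the IDs and ARNs returned by the earlier sketches
developers_group_id = "REPLACE_WITH_GROUP_ID"
full_access_ps_arn = "REPLACE_WITH_EC2AndS3FullAccess_ARN"
read_access_ps_arn = "REPLACE_WITH_EC2AndS3ReadAccess_ARN"
dev_accounts = ["111111111111", "222222222222"]    # DevAccount1, DevAccount2
prod_accounts = ["333333333333", "444444444444"]   # ProdAccount1, ProdAccount2

def assign(account_id, permission_set_arn):
    """Assign the Developers group to one account with one permission set."""
    sso_admin.create_account_assignment(
        InstanceArn=instance_arn,
        TargetId=account_id,
        TargetType="AWS_ACCOUNT",
        PermissionSetArn=permission_set_arn,
        PrincipalType="GROUP",
        PrincipalId=developers_group_id,
    )

for account_id in dev_accounts:
    assign(account_id, full_access_ps_arn)    # full EC2/S3 access in dev
for account_id in prod_accounts:
    assign(account_id, read_access_ps_arn)    # read-only EC2/S3 in prod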

Step 4: Users sign into User Portal to access accounts

Your users can now sign into the AWS SSO User Portal to manage resources in their assigned AWS accounts. The user portal provides your users with single sign-on access to all their assigned accounts and business applications. From the user portal, your users can sign into multiple AWS accounts by choosing the AWS account icon in the portal and selecting the account that they want to access.

You can follow the steps below to see how Martha signs into the user portal to access her assigned AWS accounts.

  1. When you added Martha as a user in Step 1, you selected the option Send an email to the user with password instructions. AWS SSO sent Martha an email, at the address you configured when creating the user, with instructions for setting her password. This is the email that Martha received:
    Figure 15: AWS SSO password configuration email

  2. To set her password, Martha will select Accept invitation in the email that she received from AWS SSO. Selecting Accept invitation will take Martha to a page where she can set her password. After Martha sets her password, she can navigate to the User Portal.
    Figure 16: User Portal sign-in

  3. In the User Portal, Martha can select the AWS Account icon to view all the AWS accounts to which she has permissions.
    Figure 17: View of AWS Account icon from User Portal

  4. Martha can now see the developer and production accounts that you granted her permissions to in previous steps. For each account, she can also see the list of roles that she can assume within the account. For example, for DevAccount1 and DevAccount2, Martha can assume the EC2AndS3FullAccess role that gives her full access to manage Amazon EC2 and Amazon S3. Similarly, for ProdAccount1 and ProdAccount2, Martha can assume the EC2AndS3ReadAccess role that gives her read-only access to Amazon EC2 and Amazon S3. Martha can select accounts and choose Management Console next to the role she wants to assume, letting her sign into the AWS Management Console to manage AWS resources. To switch to a different account, Martha can navigate to the User Portal and select a different account. From the User Portal, Martha can also get temporary security credentials for short-term access to resources in an AWS account using AWS Command Line Interface (CLI). To learn more, see How to Get Credentials of an IAM Role for Use with CLI Access to an AWS Account.
    Figure 18: Switching accounts from the User Portal

  5. Martha bookmarks the user portal URL in her browser so that she can quickly access the user portal the next time she wants to access AWS accounts.

Summary

By default, AWS SSO now provides you with a directory that you can use to manage users and groups within AWS SSO and to grant user permissions to resources in multiple AWS accounts and business applications. In this blog post, I showed you how to manage users and groups within AWS SSO and grant them permissions to multiple AWS accounts. I also showed how your users sign into the user portal to access their assigned AWS accounts.

If you have feedback or questions about AWS SSO, start a new thread on the AWS SSO forum or contact AWS Support. If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Vijay Sharma

Vijay is a Senior Product Manager with AWS Identity.

How AWS SideTrail verifies key AWS cryptography code

Post Syndicated from Daniel Schwartz-Narbonne original https://aws.amazon.com/blogs/security/how-aws-sidetrail-verifies-key-aws-cryptography-code/

We know you want to spend your time learning valuable new skills, building innovative software, and scaling up applications — not worrying about managing infrastructure. That’s why we’re always looking for ways to help you automate the management of AWS services, particularly when it comes to cloud security. With that in mind, we recently developed SideTrail, an open-source program analysis tool that helps AWS developers verify key security properties of cryptographic implementations. In other words, the tool gives you assurances that secret information is kept secret. For example, developers at AWS have used this tool to verify that an AWS open source library, s2n, is protected against cryptographic side-channels. Let’s dive a little deeper into what this tool is and how it works.

SideTrail’s secret sauce is its automated reasoning capability, a mathematical proof-based technology that verifies the correctness of critical code and helps detect errors that could lead to data leaks. Automated reasoning mathematically checks every possible input and reports any input that could cause an error. Full details about SideTrail can be found in a scientific publication from VSTTE, a prominent software conference.

Automated reasoning is also useful because it can help prevent timing side-channels, through which data may be inadvertently revealed. Automated reasoning is what enables SideTrail to detect these issues: if a code change introduces a timing side-channel, the mathematical proof will fail, and you’ll be notified about the flaw before the code is released.

Timing side-channels can allow attackers to learn sensitive information, such as encryption keys, by observing the timing of messages on a network. For example, let’s say you have some code that checks a password as follows:


/* The time until the first "bad password" reply depends on where the
   first mismatching byte occurs. */
for (i = 0; i < length; ++i) {
	if (password[i] != input[i]) {
		send("bad password");
	}
}

The amount of time until the reply message is received depends on which byte in the password is incorrect. As a result, an attacker can start guessing what the first character of the password is:

  • a*******
  • b*******
  • c*******

When the time to receive the error message changes, the first letter in the password is revealed. Repeating this process for each of the remaining characters turns a complex, exponential guessing challenge into a comparatively simple, linear one. SideTrail identifies these timing gaps, alerting developers to potential data leaks that could impact customers.
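
To make the exponential-versus-linear point concrete, here is a small, self-contained Python simulation of the attack (purely illustrative and unrelated to SideTrail itself); the password and the timing model are hypothetical stand-ins for what an attacker would measure over the network:

import string

SECRET = "s3cret"  # hypothetical password known only to the "server"

def reply_time(guess):
    """Toy timing model: the first "bad password" reply arrives after one
    time unit per leading byte that matches, mirroring the loop above."""
    t = 0
    for s, g in zip(SECRET, guess):
        if s != g:
            break
        t += 1
    return t

# Linear-time attack: extend the guess one character at a time, keeping the
# character that produces the longest observed delay.
recovered = ""
for _ in SECRET:
    recovered += max(string.printable,
                     key=lambda c: reply_time(recovered + c + "*" * 8))

print(recovered)  # prints "s3cret"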

Here’s how to get started with SideTrail:

  1. Download SideTrail from GitHub.
  2. Follow the instructions provided with the download to set up your environment.
  3. Annotate your code with a threat model. The threat model is used to determine:
    • Which values should be considered secret. For example, a cryptographic key would be considered a secret, whereas the IP address of a public server wouldn’t.
    • The minimum timing difference an attacker can observe.
  4. Compile your code using SideTrail and analyze the report (for full guidance on how to analyze your report, read the VSTTE paper).

How SideTrail proves code correctness

The prior example had a data leak detected by SideTrail. However, SideTrail can do more than just detect leaks caused by timing side-channels: it can prove that correct code won’t result in such leaks. One technique that developers can use to prevent data loss is to ensure that all paths in a program take the same amount of time to run no matter what a secret value may be. SideTrail helps to prove that such code is correct. Consider the following code that implements this technique:


int example(int pub, int secret) {
	if (secret > pub) {
		pub -= secret;
	} else {
		pub += secret;
	}
	return pub;
}

Following the steps outlined in the prior section, a developer creates a threat model for this code by determining which values are public and which are secret. After annotating their code accordingly, they then compile it using SideTrail, which verifies that all paths in the code take the same amount of time. SideTrail verifies the code by leveraging the industry-standard Clang compiler to compile the code to LLVM bitcode. Working at the LLVM bitcode level allows SideTrail to work on the same optimized representation the compiler does. This is important because compiler optimizations can potentially affect the timing behavior of code, and hence the effectiveness of countermeasures used in that code. To maximize the effectiveness of SideTrail, developers can compile code for SideTrail using the same compiler configuration that’s used for released code.

The result of this process is a modified program that tracks the time required to execute each instruction. Here is how the result looks, including the *cost annotations that SideTrail adds to indicate the runtime of each operation. For more detail on the LLVM annotations, please read the VSTTE paper referenced earlier in this post.


int example(int pub, int secret, int *cost) {
	*cost += GT_OP_COST;
	if (secret > pub) {
		*cost += SUBTRACTION_OP_COST;
		pub -= secret;
	} else {
		*cost += ADDITION_OP_COST;
		pub += secret;
	}
	return pub;
}

If you’re wondering how this works in practice, you can inspect the SideTrail model by examining the annotated LLVM bitcode. Guidance on inspecting the SideTrail model can be found in the VSTTE paper.

To check whether the above code has a timing side-channel, SideTrail makes two copies of the code, symbolically executes both copies, and then checks whether the runtime differs. SideTrail does this by generating a wrapper and then using an automated reasoning engine to mathematically verify the correctness of this test for all possible inputs. If the automated reasoning engine does this successfully, you can have confidence that the code does not contain a timing side-channel.
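
As a rough illustration of that two-copy check (a toy sketch in Python with the z3-solver package, not SideTrail’s actual toolchain), the snippet below models the balanced example above with made-up per-instruction costs, runs two symbolic copies that share the public input but have independent secrets, and asks the solver whether their runtimes can ever differ:

from z3 import Int, If, Solver, sat

pub = Int("pub")
secret_a, secret_b = Int("secret_a"), Int("secret_b")

# Assumed per-instruction timing model (illustrative constants only)
GT_OP_COST, SUBTRACTION_OP_COST, ADDITION_OP_COST = 1, 1, 1

def cost(secret):
    """Cost of one run of example(): the comparison plus whichever branch executes."""
    return GT_OP_COST + If(secret > pub, SUBTRACTION_OP_COST, ADDITION_OP_COST)

solver = Solver()
# Timing side-channel question: can two runs with the same public input but
# different secrets take different amounts of time?
solver.add(cost(secret_a) != cost(secret_b))

if solver.check() == sat:
    print("possible timing leak:", solver.model())
else:
    print("proved: runtime is independent of the secret")  # this branch is taken here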

Conclusion

By using SideTrail, developers will be able to prove that there are no timing side channels present in their code, providing increased assurance against data loss. For example, Amazon s2n uses code-balancing countermeasures to mitigate this kind of attack. Through the use of SideTrail, we’ve proven the correctness of these countermeasures by running regression tests on every check-in to s2n, allowing AWS developers to build on top of this library with even more confidence, while delivering product and technology updates to customers with the highest security standards available.

More information about SideTrail and our other research can be found on our Provable Security hub. SideTrail is also available for public use as an open-source project.

If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Daniel Schwartz-Narbonne

Daniel is a Software Development Engineer in the AWS Automated Reasoning Group. Prior to joining Amazon, he earned a PhD at Princeton, where he developed a software framework to debug parallel programs. Then, at New York University, he designed a tool that automatically isolates and explains the cause of crashes in C programs. When he’s not working, you might find Daniel in the kitchen making dinner for his family, in a tent camping, or volunteering as an EMT with a local ambulance squad.

Podcast: AI tech named automated reasoning provides next-gen cloud security

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/podcast-automated-reasoning-aws-next-gen-security-ai/

AWS just released a new podcast on how next-generation security technology, backed by automated reasoning, provides you with higher levels of assurance for key components of your AWS architecture. Byron Cook, Director of the AWS Automated Reasoning Group, discusses how automated reasoning is embedded within AWS services and code and the tools customers can take advantage of today to achieve provable security.

Here’s a direct link to listen to it.

As the AWS cloud continues to grow, offering more services and features for you to architect your environment, AWS is working to ensure that the security of the cloud meets the pace of growth and your needs. To address the evolving threat landscape, AWS has made it easier to operate workloads securely in the cloud with a host of services and features that strengthen your security posture. Using automated reasoning, a branch of artificial intelligence, AWS is implementing next-generation security technology to help secure its platform. Automated reasoning at AWS helps customers verify and ensure continuous security of key components in the cloud, providing provable security—the highest assurance of cloud security.

Automated reasoning is powered by mathematical logic and consists of designing and implementing mechanized mathematical proofs that key components in the cloud are operating in alignment with customers’ intended security measures. With automated reasoning, AWS enables customers to detect entire classes of misconfigurations that could potentially expose vulnerable data. This relieves the customers’ burden of having to manually verify increasingly granular configurations for a complex organization, providing new levels of assurance that security verification scales with enterprise growth.

We hope you enjoy the podcast! If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Author

Supriya Anand

Supriya is a Content Strategist at AWS working with the Automated Reasoning Group.

How to rotate a WordPress MySQL database secret using AWS Secrets Manager in Amazon EKS

Post Syndicated from Paavan Mistry original https://aws.amazon.com/blogs/security/how-to-rotate-a-wordpress-mysql-database-secret-using-aws-secrets-manager-in-amazon-eks/

AWS Secrets Manager recently announced a feature update that lets you automatically rotate credentials for all Amazon RDS database types. In this post, I show you how to rotate database secrets for a non-RDS database using AWS Secrets Manager. I use a containerized WordPress application with a MySQL database to demonstrate the secret rotation.

Enabling regular rotation of database secrets helps secure application databases, protects customer data, and helps meet compliance requirements. You’ll use Amazon Elastic Container Service for Kubernetes (Amazon EKS) to help deploy, manage, and scale the WordPress application, and Secrets Manager to perform secret rotation on a containerized MySQL database.

Prerequisites

You’ll need an Amazon EKS or a Kubernetes cluster running and accessible for this post. Getting Started with Amazon EKS provides instructions on setting up the cluster and node environment. I recommend that you have a basic understanding of Kubernetes concepts, but Kubernetes reference document links are provided throughout this walk-through. For this post, I use the placeholder EKSClusterName to denote the existing EKS cluster. Remember to replace this with the name of your EKS cluster.

You’ll also need AWS Command Line Interface (AWS CLI) installed and configured on your machine. For this blog, I assume that the default AWS CLI region is set to Oregon (us-west-2) and that you have access to the AWS services described in this post. If you use other regions, you should check the availability of AWS services in those regions.

Architecture overview

 

Figure 1: Architecture and data flow diagram within Amazon EKS nodes

The architecture diagram shows the overall deployment architecture with two data flows, a user data flow and a Secrets Manager data flow within the EKS three-node cluster VPC (virtual private cloud).

To access the WordPress site, user data flows through an internet gateway and an external load balancer as part of the WordPress frontend Kubernetes deployment. The AWS Secrets Manager data flow uses the recently announced Secrets Manager VPC endpoint and an AWS Lambda function within the VPC that rotates the MySQL database secret through an internal load balancer. The MySQL database and the internal load balancer are provisioned as part of the WordPress MySQL Kubernetes deployment. The internal load balancer exposes the database service only within the VPC so that Secrets Manager can perform the rotation there. This removes the need to expose the database to the internet for secret rotation, which would not be a good security practice.

Solution overview

The blog post consists of the following steps:

  1. Host WordPress and MySQL services on Amazon EKS.
  2. Store the database secret in AWS Secrets Manager.
  3. Set up the rotation of the database secret using Secrets Manager VPC Endpoint.
  4. Rotate the database secret and update the WordPress frontend deployment.

Note: This post helps you implement MySQL secret rotation for non-RDS databases. It should be used as a reference guide on database secret rotation. For guidance on using WordPress on AWS, please refer to the WordPress: Best Practices on AWS whitepaper.

Step 1: Host WordPress and MySQL services on Amazon EKS

To deploy WordPress on an EKS cluster, you’ll use three YAML templates.

  1. First, create a StorageClass in Amazon EKS that uses Amazon Elastic Block Store (Amazon EBS) for persistent volumes, using the command below if the volumes don’t exist. Then, create a Kubernetes namespace. The Kubernetes namespace creates a separate environment within your Kubernetes cluster to create objects specific to this walk-through. This helps with object management and logical separation.
    
    kubectl create -f https://raw.githubusercontent.com/paavan98pm/eks-secret-rotation/master/templates/gp2-storage-class.yaml
    
    kubectl create namespace wp
    

  2. Next, use Kubernetes secrets to store your MySQL database password with an Amazon EKS cluster. The password is generated by the AWS Secrets Manager get-random-password API using the command below. This allows a random password to be created and stored in the Kubernetes secret object through the Secrets Manager API, without the need to create one manually. The get-random-password API allows various password length and type restrictions to be enforced based on your organizational security policy.
    
    kubectl create secret generic mysql-pass --from-literal=password=$(aws secretsmanager get-random-password --password-length 20 --no-include-space | jq -r .RandomPassword) --namespace=wp
    

You’ll use the Kubernetes secret mysql-pass to create your MySQL and WordPress deployments using the YAML manifests in the next step.

Deploying MySQL and WordPress in Amazon EKS

Now, you’ll run the MySQL and WordPress templates provided by the Kubernetes community to deploy your backend MySQL services, deployments, and persistent volume claims.

  1. To deploy an internal load balancer that AWS Secrets Manager will use to perform its secret rotation, you’ll use the Kubernetes service annotation service.beta.kubernetes.io/aws-load-balancer-internal within the MySQL service YAML. For WordPress deployment, you’ll add three replicas for availability across the three availability zones.
    
    kubectl create -f https://raw.githubusercontent.com/paavan98pm/eks-secret-rotation/master/templates/mysql-deployment.yaml --namespace=wp
    
    kubectl create -f https://raw.githubusercontent.com/paavan98pm/eks-secret-rotation/master/templates/wordpress-deployment.yaml --namespace=wp
    

    Run the commands above with the Kubernetes YAML manifests to create MySQL and WordPress services, deployments, and persistent volume claims. As shown in the architecture diagram, these services also provision an external and an internal load balancer. The external load balancer is accessible over the internet for your WordPress frontend, while you’ll use the internal load balancer to rotate database secrets. It takes a few minutes for the load balancers to be provisioned.

  2. Once the load balancers are provisioned, you should be able to access your WordPress site in your browser by running the following command. The browser will open to show the WordPress setup prompt in Figure 2.
    
    open http://$(kubectl get svc -l app=wordpress --namespace=wp -o=jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')
    

     

    Figure 2: WordPress setup page for the Amazon EKS deployment

  3. From here, follow the WordPress instructions to complete installation and set up credentials to access your WordPress administration page. This completes the hosting and setup of your WordPress blog on Amazon EKS.

Step 2: Store the database secret in AWS Secrets Manager

Next, you’re going to store your database secret in AWS Secrets Manager. AWS Secrets Manager is a fully managed AWS service that enables you to rotate, manage, and retrieve secrets such as database credentials and application programming interface keys (API keys) throughout their lifecycle. Follow the steps below to use Secrets Manager to store the MySQL database secret and the rotation configuration details.
 

Figure 3: Store a new secret in AWS Secrets Manager

From the Secrets Manager console, select Store a new secret to open the page shown in Figure 3 above. Follow the instructions below to store the MySQL database secret parameters.

  1. Select Credentials for other database and provide the username and password you created in the previous step. The username value will be root, and you should paste the password value from the output of the command below. This command copies the MySQL password stored in the Kubernetes secret object, allowing you to store it in Secrets Manager.

    Password:

    
    kubectl get secret mysql-pass -o=jsonpath='{.data.password}' --namespace=wp | base64 --decode | pbcopy
    

  2. Secrets Manager encrypts secrets by default using the service-specific encryption key for your AWS account. Since you’re storing the secret for MySQL database, choose the MySQL tile and provide the server details. For Server address, you can paste the output of the command below. The database name and port values are mysql and 3306, respectively.

    Server address:

    
    kubectl get svc -l app=wordpress-mysql --namespace=wp -o=jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}' | pbcopy
    

  3. Select Next and give the secret a name and a description that enables you to reference and manage the secret easily. I use the placeholder yourSecretName to reference your secret throughout the rest of my post; be sure to swap this placeholder with your own secret name.
  4. On the next screen, accept the default setting: Disable automatic rotation.
  5. Finally, select Store to store your MySQL secret in AWS Secrets Manager.
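
If you prefer to script this step, the same secret can be stored with a single Secrets Manager API call. This is a sketch under the assumptions of this post: the secret name matches the yourSecretName placeholder, the host is the internal load balancer hostname you copied above, and the JSON keys follow the engine/host/username/password/dbname/port structure that the MySQL single-user rotation function expects.

import json
import boto3

secretsmanager = boto3.client("secretsmanager")

secretsmanager.create_secret(
    Name="<yourSecretName>",
    Description="WordPress MySQL credentials for the Amazon EKS deployment",
    SecretString=json.dumps({
        "engine": "mysql",
        "host": "<internal-load-balancer-hostname>",   # from the kubectl get svc command above
        "username": "root",
        "password": "<password-from-the-mysql-pass-Kubernetes-secret>",
        "dbname": "mysql",
        "port": "3306",
    }),
)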

Step 3: Set up MySQL secret rotation using the Secrets Manager VPC endpoint

Next, you’ll create a Lambda function within the Amazon EKS node cluster VPC to rotate this secret. Within the node cluster VPC, you’re also going to configure the recently announced AWS Secrets Manager support for Amazon Virtual Private Cloud (VPC) endpoints, powered by AWS PrivateLink.

Note: If you have a database hosted on Amazon RDS, Secrets Manager natively supports rotating secrets. However, if you’re using a container-based MySQL database, you must create and configure the Lambda rotation function for AWS Secrets Manager, and then provide the Amazon Resource Name (ARN) of the completed function to the secret.

Creating a Lambda rotation function

The following commands apply a WordPress MySQL rotation template to a new Lambda function. CloudFormation uses an AWS Serverless Application Repository template to automate most of the steps for you. The values you need to replace for your environment are shown in angle brackets, for example <EKSMySQLRotationFunction>.

  1. The command below will return a ChangeSetId. It uses the Secrets Manager MySQL single user template from the AWS Serverless Application Repository and creates a CloudFormation Change Set by providing the Secrets Manager endpoint and Lambda function name as parameters. You need to change the Secrets Manager region within the endpoint parameter and provide a name for the Lambda function that you’re creating.
    
    aws serverlessrepo create-cloud-formation-change-set --application-id arn:aws:serverlessrepo:us-east-1:297356227824:applications/SecretsManagerRDSMySQLRotationSingleUser --stack-name MyLambdaCreationStack --parameter-overrides '[{"Name":"endpoint","Value":"https://secretsmanager.<REGION>.amazonaws.com"},{"Name":"functionName","Value":"<EKSMySQLRotationFunction>"}]' | jq -r '.ChangeSetId'
    

  2. The next command runs the change set that you just created to create a Lambda function. The change-set-name parameter comes from the ChangeSetId output of the previous command. Copy the ChangeSetId value into the instruction below.
    
    aws cloudformation execute-change-set --change-set-name <ChangeSetId>
    

  3. The following command grants Secrets Manager permission to call the Lambda function on your behalf. You should provide the Lambda function name that you set earlier.
    
    aws lambda add-permission --function-name <EKSMySQLRotationFunction> --principal secretsmanager.amazonaws.com --action lambda:InvokeFunction --statement-id SecretsManagerAccess
    

Configuring the Lambda rotation function to use Secrets Manager VPC Endpoint

Instead of connecting your VPC to the internet, you can connect directly to Secrets Manager through a private endpoint that you configure within your VPC. When you use a VPC service endpoint, communication between your VPC and Secrets Manager occurs entirely within the AWS network and requires no public internet access.

  1. To enable the AWS Secrets Manager VPC endpoint, first store the node instance security group in a NODE_INSTANCE_SG environment variable. Using the VPC ID from the EKS cluster, the command below filters the security groups attached to the nodes and stores the result in NODE_INSTANCE_SG, which you’ll use as an argument in the next step to create the Secrets Manager VPC endpoint.
    
    export NODE_INSTANCE_SG=$(aws ec2 describe-security-groups --filters Name=vpc-id,Values=$(aws eks describe-cluster --name <EKSClusterName> | jq -r '.cluster.resourcesVpcConfig.vpcId') Name=tag:aws:cloudformation:logical-id,Values=NodeSecurityGroup | jq -r '.SecurityGroups[].GroupId')
    

  2. Next, create a Secrets Manager VPC endpoint attached to the EKS node cluster subnets and the related security group. This command retrieves the VPC ID and Subnet IDs from the EKS cluster to create a Secrets Manager VPC endpoint.
    
    aws ec2 create-vpc-endpoint --vpc-id $(aws eks describe-cluster --name <EKSClusterName> | jq -r '.cluster.resourcesVpcConfig.vpcId') --vpc-endpoint-type Interface --service-name com.amazonaws.<region>.secretsmanager --subnet-ids $(aws eks describe-cluster --name <EKSClusterName> | jq -r '.cluster.resourcesVpcConfig.subnetIds | join(" ")') --security-group-id $NODE_INSTANCE_SG --private-dns-enabled
    

  3. Next, update the Lambda function to attach it to the EKS node cluster subnets and the related security group. This command updates the Lambda function’s configuration by retrieving its ARN and EKS cluster Subnet IDs and Security Group ID.
    
    aws lambda update-function-configuration --function-name $(aws lambda list-functions | jq -r '.Functions[] | select(.FunctionName == "<EKSMySQLRotationFunction>") | .FunctionArn') --vpc-config SubnetIds=$(aws eks describe-cluster --name <EKSClusterName> | jq -r '.cluster.resourcesVpcConfig.subnetIds | join(",")'),SecurityGroupIds=$NODE_INSTANCE_SG
    

Step 4: Rotate the database secret and update your WordPress frontend deployment

Now that you’ve configured the Lambda rotation function, use the rotate-secret command to schedule a rotation of this secret. The command below uses the previously stored secret name with the Lambda function ARN to rotate your secret automatically every 30 days. You can adjust the rotation frequency value, if you want. The minimum rotation frequency is 1 day.


aws secretsmanager rotate-secret --secret-id <yourSecretName> --rotation-lambda-arn $(aws lambda list-functions | jq -r '.Functions[] | select(.FunctionName == "<EKSMySQLRotationFunction>") | .FunctionArn') --rotation-rules AutomaticallyAfterDays=30

The figure below shows the Secrets Manager data flow for secret rotation using the VPC endpoint.
 

Figure 4: MySQL database secret rotation steps

Based on the frequency of the rotation, Secrets Manager will follow the steps below to perform automatic rotations:

  1. Secrets Manager will invoke <EKSMySQLRotationFunction> within the EKS node VPC using the Secrets Manager VPC endpoint.
  2. Using the credentials provided for MySQL database, <EKSMySQLRotationFunction> will rotate the database secret and update the secret value in Secrets Manager.

Once the Lambda function has executed, you can test the updated database secret value that’s stored in AWS Secrets Manager by using the command below.


aws secretsmanager get-secret-value --secret-id <yourSecretName>

Next, update the Kubernetes secret (mysql-pass) with the rotated secret stored in AWS Secrets Manager, and then update the WordPress frontend deployment with the rotated MySQL password using the commands below. You should automate these steps with the Kubernetes and AWS APIs so that they run whenever a secret rotation completes.

The command below retrieves the rotated secret from Secrets Manager using the GetSecretValue API call and passes the updated value to the Kubernetes secret. The next command patches the WordPress deployment for the frontend WordPress containers to use the updated Kubernetes secret. It also adds a rotation date annotation for the deployment.


kubectl create secret generic mysql-pass --namespace=wp --from-literal=password=$(aws secretsmanager get-secret-value --secret-id <yourSecretName> | jq --raw-output '.SecretString' | jq -r .password) -o yaml --dry-run | kubectl replace -f -

kubectl patch deploy wordpress -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"rotation_date\":\"`date +'%s'`\"}}}}}" --namespace=wp

You should now be able to access the updated WordPress site with database secret rotation enabled.


open http://$(kubectl get svc -l app=wordpress --namespace=wp -o=jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')

Clean-up

To delete the environment used in this walk-through in EKS, the secret in Secrets Manager, and the accompanying Lambda function, run the following commands. You only need to follow these steps if you intend to remove the secret, Lambda function, VPC endpoint, and EKS objects created in this walk-through.


kubectl delete namespace wp

aws secretsmanager delete-secret --secret-id <yourSecretName> --force-delete-without-recovery

aws lambda delete-function --function-name <EKSMySQLRotationFunction>

aws ec2 delete-vpc-endpoints --vpc-endpoint-ids $(aws ec2 describe-vpc-endpoints | jq -r '.VpcEndpoints[] | select(.ServiceName == "com.amazonaws.<REGION>.secretsmanager") | .VpcEndpointId')

Pricing

Next, I review the pricing and estimated cost of this example. AWS Secrets Manager offers a 30-day trial period that starts when you store your first secret. Storage of each secret costs $0.40 per secret per month. For secrets that are stored for less than a month, the price is prorated based on the number of hours. There is an additional cost of $0.05 per 10,000 API calls. You can learn more by visiting the AWS Secrets Manager pricing service details page.

You pay $0.20 per hour for each Amazon EKS cluster that you create. You can use a single Amazon EKS cluster to run multiple applications by taking advantage of Kubernetes namespaces and IAM security policies. You pay for AWS resources (for example, EC2 instances or EBS volumes) that you create to run your Kubernetes worker nodes. You only pay for what you use, as you use it; there are no minimum fees and no upfront commitments. See detailed pricing information on the Amazon EC2 pricing page.

Assuming you allocate two hours to follow the blog instructions, the cost will be approximately $1.

  1. Amazon EKS – 2 hours x ($0.20 per EKS cluster + 3 nodes x $0.096 m5.large instance)
  2. AWS Secrets Manager – 2 hours x ($0.40 per secret per month / 30 days / 24 hours + $0.05 per 10,000 API calls)
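
As a quick sanity check, you can put those numbers together (rough figures only; on-demand prices vary by region and change over time):

hours = 2
eks_cost = hours * (0.20 + 3 * 0.096)          # cluster charge + three m5.large nodes
secret_cost = hours * (0.40 / 30 / 24)         # one secret, prorated by the hour
api_cost = 0.05                                # well under 10,000 API calls, so at most $0.05
print(round(eks_cost + secret_cost + api_cost, 2))   # ~1.03, i.e. roughly $1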

Summary

In this post, I showed you how to rotate WordPress database credentials in Amazon EKS using AWS Secrets Manager.

For more details on secrets management within Amazon EKS, check out the GitHub workshop for Kubernetes, particularly the section on ConfigMaps and Secrets, to understand the different secret management alternatives available. To get started using Kubernetes on AWS, open the Amazon EKS console. To learn more, read the EKS documentation. To get started managing secrets, open the Secrets Manager console. To learn more, read the Secrets Manager documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the EKS forum or Secrets Manager forum.

Want more AWS Security news? Follow us on Twitter.

Author

Paavan Mistry

Paavan is a Security Specialist Solutions Architect at AWS where he enjoys solving customers’ cloud security, risk, and compliance challenges. Outside of work, he enjoys reading about leadership, politics, law, and human rights.

Setting the Record Straight on Bloomberg BusinessWeek’s Erroneous Article

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/setting-the-record-straight-on-bloomberg-businessweeks-erroneous-article/

Today, Bloomberg BusinessWeek published a story claiming that AWS was aware of modified hardware or malicious chips in SuperMicro motherboards in Elemental Media’s hardware at the time Amazon acquired Elemental in 2015, and that Amazon was aware of modified hardware or chips in AWS’s China Region.

As we shared with Bloomberg BusinessWeek multiple times over the last couple months, this is untrue. At no time, past or present, have we ever found any issues relating to modified hardware or malicious chips in SuperMicro motherboards in any Elemental or Amazon systems. Nor have we engaged in an investigation with the government.

There are so many inaccuracies in this article as it relates to Amazon that they’re hard to count. We will name only a few of them here. First, when Amazon was considering acquiring Elemental, we did a lot of due diligence with our own security team, and also commissioned a single external security company to do a security assessment for us. That report did not identify any issues with modified chips or hardware. As is typical with most of these audits, it offered some recommended areas to remediate, and we fixed all critical issues before the acquisition closed. This was the sole external security report commissioned. Bloomberg has admittedly never seen our commissioned security report nor any other (and refused to share any details of any purported other report with us).

The article also claims that after learning of hardware modifications and malicious chips in Elemental servers, we conducted a network-wide audit of SuperMicro motherboards and discovered the malicious chips in a Beijing data center. This claim is similarly untrue. The first and most obvious reason is that we never found modified hardware or malicious chips in Elemental servers. Aside from that, we never found modified hardware or malicious chips in servers in any of our data centers. And, this notion that we sold off the hardware and datacenter in China to our partner Sinnet because we wanted to rid ourselves of SuperMicro servers is absurd. Sinnet had been running these data centers since we launched in China, they owned these data centers from the start, and the hardware we “sold” to them was a transfer-of-assets agreement mandated by new China regulations for non-Chinese cloud providers to continue to operate in China.

Amazon employs stringent security standards across our supply chain – investigating all hardware and software prior to going into production and performing regular security audits internally and with our supply chain partners. We further strengthen our security posture by implementing our own hardware designs for critical components such as processors, servers, storage systems, and networking equipment.

Security will always be our top priority. AWS is trusted by many of the world’s most risk-sensitive organizations precisely because we have demonstrated this unwavering commitment to putting their security above all else. We are constantly vigilant about potential threats to our customers, and we take swift and decisive action to address them whenever they are identified.

– Steve Schmidt, Chief Information Security Officer

Three key trends in financial services cloud compliance

Post Syndicated from Igor Kleyman original https://aws.amazon.com/blogs/security/three-key-trends-in-financial-services-cloud-compliance/

As financial institutions increasingly move their technology infrastructure to the cloud, financial regulators are tailoring their oversight to the unique features of a cloud environment. Regulators have followed a variety of approaches, sometimes issuing new rules and guidance tailored to the cloud. Other times, they have updated existing guidelines for managing technology providers to be more applicable for emerging technologies. In each case, however, policymakers’ heightened focus on cybersecurity and privacy has led to increased scrutiny on how financial institutions manage security and compliance.

Because we strive to ensure you can use AWS to meet the highest security standards, we also closely monitor regulatory developments and look for trends to help you stay ahead of the curve. Here are three common themes we’ve seen emerge in the regulatory landscape:

Data security and data management

Regulators expect financial institutions to implement controls and safety measures to protect the security and confidentiality of data stored in the cloud. AWS services are content agnostic—we treat all customer data and associated assets as highly confidential. We have implemented sophisticated technical and physical measures against unauthorized access. Encryption is an important step to help protect sensitive information. You can use AWS Key Management Service (KMS), which is integrated into many services, to encrypt data. KMS also makes it easy to create and control your encryption keys.
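
For example, here is a minimal boto3 sketch of creating a customer managed key and using it to encrypt and decrypt a small record; the key description and plaintext are placeholders, and for larger payloads you would typically use KMS data keys or a service’s built-in KMS integration instead.

import boto3

kms = boto3.client("kms")

# Create a customer managed key (you control its policy, rotation, and deletion)
key_id = kms.create_key(Description="Example key for sensitive records")["KeyMetadata"]["KeyId"]

# Encrypt a small piece of sensitive data (the KMS Encrypt API is limited to 4 KB)
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"account-number-1234")["CiphertextBlob"]

# Decrypt it later; KMS resolves the key from the ciphertext metadata
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]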

Cybersecurity

Financial regulators expect financial institutions to maintain a strong cybersecurity posture. In the cloud, security is a shared responsibility between the cloud provider and the customer: AWS manages security of the cloud, and customers are responsible for managing security in the cloud. To manage security of the cloud, AWS has developed and implemented a security control environment designed to protect the confidentiality, integrity, and availability of your systems and content. AWS infrastructure complies with global and regional regulatory requirements and best practices. You can help ensure security in the cloud by leveraging AWS services. Some new services strive to automate security. Amazon Inspector performs automated security assessments to scan cloud environments for vulnerabilities or deviations from best practices. AWS is also on the cutting edge of using automated reasoning to ensure established security protocols are in place. You can leverage automated proofs with a tool called Zelkova, which is integrated within certain AWS services. Zelkova helps you obtain higher levels of security assurance about your most sensitive systems and operations. Financial institutions can also perform vulnerability scans and penetration testing on their AWS environments—another recurring expectation of financial regulators.

Risk management

Regulators expect financial institutions to have robust risk management processes when using the cloud. Continuous monitoring is key to ensuring that you are managing the risk of your cloud environment, and AWS offers financial institutions a number of tools for governance and traceability. You can have complete visibility of your AWS resources by using services such as AWS CloudTrail, Amazon CloudWatch, and AWS Config to monitor, analyze, and audit events that occur in your cloud environment. You can also use AWS CloudTrail to log and retain account activity related to actions across your AWS infrastructure.
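
As one concrete illustration, a trail can be created and queried with a few boto3 calls. This is a hedged sketch only: the trail and bucket names are placeholders, and the S3 bucket must already exist with a CloudTrail bucket policy attached.

import boto3

cloudtrail = boto3.client("cloudtrail")

# Create a trail that delivers account activity logs to an existing S3 bucket
cloudtrail.create_trail(
    Name="compliance-trail",                 # placeholder trail name
    S3BucketName="example-cloudtrail-logs",  # placeholder bucket with a CloudTrail bucket policy
    IsMultiRegionTrail=True,
)

# Turn on log delivery for the trail
cloudtrail.start_logging(Name="compliance-trail")

# Later, audit recent console sign-in events
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    MaxResults=10,
)["Events"]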

We understand how important security and compliance are for financial institutions, and we strive to ensure that you can use AWS to meet the highest regulatory standards. We have also created a number of resources to help you make sense of the changing regulatory landscape around the world.

You can go to our security and compliance resources page for additional information. Have more questions? Reach out to your Account Manager or request to be contacted.

Want more AWS Security news? Follow us on Twitter.

Daniel Schwartz-Narbonne shares how automated reasoning is helping achieve the provable security of AWS boot code

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/automated-reasoning-provable-security-of-boot-code/

I recently sat down with Daniel Schwartz-Narbonne, a software development engineer in the Automated Reasoning Group (ARG) at AWS, to learn more about the groundbreaking work his team is doing in cloud security. The team uses automated reasoning, a technology based on mathematical logic, to prove that key components of the cloud are operating as intended. ARG recently hit a milestone by leveraging this technology to prove the memory safety of boot code components. Boot code is the foundation of the cloud. Proving the memory safety of boot code is akin to verifying that the foundation of your house is secure—it lets you build upon it without worry. Daniel shared details with the AWS Security Blog team about the project’s accomplishments and how it will help solve cloud security challenges.

Daniel Schwartz-Narbonne discusses how automated reasoning, a branch of AI tech, can help prove the security of boot code

Tell me about yourself: what made you decide to become a software engineer with the Automated Reasoning Group?

I wanted to become an engineer because I like building things and understanding how they work. I get satisfaction out of producing something tangible. I went into cloud security because I believe it’s a major area of opportunity in the computing industry right now. As the cloud continues to scale in response to customer demand, we’re going to need more and more automation around security to meet this demand.

I was offered the opportunity to work with ARG after I finished up my post-doc at NYU. Byron Cook, the director of ARG, was starting up a team with the mission of using formal reasoning methods to solve real-world problems in the cloud. Joining ARG was an opportunity for me to help pioneer the use of automated reasoning for cloud security.

How would you describe automated reasoning?

Automated reasoning uses mathematical analysis to understand what’s happening in a complex computer system. The technique takes a system and a question you might have about the system—like “is the system memory safe?”—and reformulates the question as a set of mathematical properties. Then it uses automated reasoning tools called “constraint solvers” to analyze these properties and provide an answer. We’re using this technology to provide higher levels of cloud security assurance for AWS customers via features that protect key components of the cloud, including IAM permissions, networking controls, verification for security protocols and source code of foundational software packages in use at AWS. Links to this work can be found at the bottom of this post.

What is the Boot Code Verification Project?

The Boot Code Verification Project is one of several ARG projects that apply automated reasoning techniques to the foundational elements of cloud security. In this case, we’re looking at boot code. Boot code is the first code that starts when you turn on a computer. It’s the foundation for all computer code, which makes its security critical. This is joint work with my ARG colleagues Michael Tautschnig and Mark Tuttle and with infrastructure engineers.

Why is boot code so difficult to secure?

Ensuring boot code security by using traditional techniques, such as penetration testing and unit testing, is hard. You can only achieve visibility into code execution via debug ports, which means you have almost no ability to single-step the boot code for debugging. You often can’t instrument the boot code, either, because this can break the build process: the increased size of the instrumented code may be larger than the size of the ROM targeted by the build process. Extracting the data collected by instrumentation is also difficult because the boot code has no access to a file system to record the data, and memory available for storing the data may be limited.

Our aim is to gain increased confidence in the correctness of the boot code by using automated reasoning, instead. Applying automated reasoning to boot code has challenges, however. A big one is that boot code directly interfaces with hardware. Hardware can, for example, modify the value of memory locations through the use of memory-mapped input/output (IO). We developed techniques for modeling the effect that hardware can have on executing boot code. One technique we successfully tried is using model checking to symbolically represent all the effects that hardware could have on the memory state of the boot code. This required close collaboration with our product teams to understand AWS data center hardware and then design and validate a model based on these specifications. To ensure future code revisions maintain the properties we have validated, our analysis is embedded into the continuous integration flow. In such a workflow, each change by the developers triggers automated verification.

We published the full technical details, including the process by which we were able to prove the memory safety of boot code, in Model Checking Boot Code from AWS Data Centers, a peer-reviewed scientific publication at the Computer-Aided Verification Conference, the leading academic conference on automated reasoning.

You mention model checking. Can you explain what that is?

A software model checker is a tool that examines every path through a computer program from every possible input. There are different kinds of model checkers, but our model checker is based on a constraint solver (called a SAT solver, or a Satisfiability solver) that can test whether a given set of constraints is satisfiable. To understand how it works, first remember that each line of a computer program describes a particular change in the state of the computer (for example, turning on the device). Our model checker describes each change as an equation that shows how the computer’s state has changed. If you describe each line of code in a program this way, the result is a set of equations that describes a set of constraints upon all the ways that the program can change the state of the computer. We hand these constraints and a question (“Is there a bug?”) to a constraint solver, which then determines if the computer can ever reach a state in which the question (“Is there a bug?”) is true.
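
To make the idea concrete, here's a toy sketch of my own (not ARG's actual tooling) that uses the Python bindings of the Z3 constraint solver to pose the question "is there a bug?" about a single line of code that indexes a 64-byte buffer:

# Toy illustration of constraint-based checking (assumes: pip install z3-solver).
# We model one line of code, idx = input_byte * 2, and ask whether any input
# can drive idx outside a 64-byte buffer.
from z3 import BitVec, ZeroExt, Solver, UGE, sat

BUF_SIZE = 64
input_byte = BitVec("input_byte", 8)      # an arbitrary, attacker-controlled input byte
idx = ZeroExt(24, input_byte) * 2         # the program's state change, written as an equation

solver = Solver()
solver.add(UGE(idx, BUF_SIZE))            # the question: can idx ever be out of bounds?

if solver.check() == sat:
    print("Bug reachable, for example with input_byte =", solver.model()[input_byte])
else:
    print("Proved: no input can push idx outside the buffer")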

What is memory safety? Why is it so crucial to prove the memory safety of boot code?

A proof of memory safety gives you assurance that certain security issues cannot arise. Memory safety states that every operation in a program can only write to the variables it has access to, within the bounds of those variables. A classic example is a buffer that stores data coming in from a message on the network. If the message is larger than the buffer in which it’s stored, then you’ll overwrite the buffer, as well as whatever comes after the buffer. If trusted data stored after the buffer is overwritten, then the value of this trusted data is under the control of the adversary inducing the buffer overflow—and your system’s security is now at risk.

Boot code is written in C, a language that does not have the dynamic run-time support for memory safety found in other programming languages. The Boot Code Verification Project uses automated reasoning technology to prove memory safety of the boot code for every possible input.

What has the Boot Code Verification Project accomplished?

We’ve achieved two major accomplishments. The first is the concrete proof we’ve delivered. We have demonstrated that for every boot configuration, device configuration, possible boot source, and second stage binary, AWS boot code is memory safe.

The second accomplishment is more forward-looking. We haven’t just validated a piece of code—we’ve validated a methodology for testing security critical C code at AWS. As we describe in our paper, completing this proof required us to make significant advances in program analysis tooling, ranging from the way we handle memory-mapped IO, to a more efficient symbolic implementation of memcpy, to new tooling that can analyze the unusual linker configurations used in boot code. We made the tooling easier to use, with AWS Batch scripts that allow automatic proof re-runs, and HTML-based reports that make it easy to dive in and understand code. We expect to build on these improvements as we continue to apply automated reasoning to the AWS cloud.

Is your work open source?

We use the model checker CBMC (C Bounded Model Checker), which is available on GitHub under the open source Berkeley Software Distribution license. AWS is committed to the open source community, and we have a number of other projects that you can also find on GitHub.

Overall, how does the Boot Code Verification Project benefit customers?

Customers ask how AWS secures their data. This project is a part of the answer, providing assurance about how AWS protects the low-level code that runs in AWS data centers on customers' behalf. Given that all systems and processes run on top of this code, customers need to know measures are in place to keep it continuously safe.

We also believe that technology powered by automated reasoning has wider applicability. Our team has created tools like Zelkova, which we’ve embedded in a variety of AWS services to help customers validate their security-critical code. Because the Boot Code Verification Project is based on an existing open source project, wider applications of our methodology have also been documented in a variety of scientific publications that you can find on the AWS Provable Security page under “Insight papers.” We encourage customers to check out our resources and comment below!

Want more AWS Security news? Follow us on Twitter.

Author

Supriya Anand

Supriya is a Content Strategist at AWS working with the Automated Reasoning Group.

How to clone an AWS CloudHSM cluster across regions

Post Syndicated from Tracy Pierce original https://aws.amazon.com/blogs/security/how-to-clone-an-aws-cloudhsm-cluster-across-regions/

You can use AWS CloudHSM to generate, store, import, export, and manage your cryptographic keys. It also lets you compute message digests and hash-based message authentication codes (HMACs), cryptographically sign data, and verify signatures. To help ensure data redundancy and simplify disaster recovery, you’ll typically clone your AWS CloudHSM cluster into a different AWS region. Cloning allows you to synchronize keys, including non-exportable keys, across regions. Non-exportable keys are keys that can never leave the CloudHSM device in plaintext; they reside on the CloudHSM device and are encrypted for security purposes.

You clone a cluster to another region in a two-step process. First, you copy a backup to the destination region. Second, you create a new cluster from this backup. In this post, I’ll show you how to set up one cluster in region 1, and how to use the new CopyBackupToRegion feature to clone the cluster and hardware security modules (HSMs) to a virtual private cloud (VPC) in region 2.
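
If you prefer an SDK to the CLI, the same two steps look roughly like the following boto3 sketch; the region names, IDs, and the subnet are placeholders, and you should treat this as an outline rather than a drop-in script:

# Hedged boto3 sketch of the two-step clone: copy a backup into region 2, then create
# a cluster from the copied backup. Region names, IDs, and the subnet are placeholders.
import boto3

source = boto3.client("cloudhsmv2", region_name="us-east-1")
destination = boto3.client("cloudhsmv2", region_name="us-west-2")

# Step 1: copy an existing backup from region 1 into region 2.
copy = source.copy_backup_to_region(
    DestinationRegion="us-west-2",
    BackupId="backup-4kuraxsqetz",
)
print("Copy started:", copy["DestinationBackup"])

# Step 2: once the copied backup shows as READY in region 2, create the cloned cluster
# from it. The backup ID below stands in for the new ID listed in the destination region.
cluster = destination.create_cluster(
    HsmType="hsm1.medium",
    SubnetIds=["subnet-0123456789abcdef0"],
    SourceBackupId="backup-gdnekhcxf4n",
)
print("New cluster:", cluster["Cluster"]["ClusterId"])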

Note: This post doesn’t include instructions on how to set up a cross-region VPC to synchronize HSMs across the two cloned clusters. If you need to do that, read this article.

Solution overview

To complete this solution, you can use either the AWS Command Line Interface (AWS CLI) or the AWS CloudHSM API. For this post, I’ll use the AWS CLI to copy the cluster backup from region 1 to region 2, and then I’ll launch a new cluster from that copied backup.

The following diagram illustrates the process covered in the post.
 

Figure 1: Architecture diagram

Here’s how the process works:

  1. AWS CloudHSM creates a backup of the cluster and stores it in an S3 bucket owned by AWS CloudHSM.
  2. You run the CLI/API command to copy the backup to another AWS region.
  3. When the backup is completed, you use that backup to then create a cluster and HSMs.
Note: Backups can’t be copied into or out of AWS GovCloud (US) because it’s a restricted region.

As with all cluster backups, when you copy the backup to a new AWS region, it’s stored in an Amazon S3 bucket owned by an AWS CloudHSM account. AWS CloudHSM manages the security and storage of cluster backups for you. This means the backup in both regions will also have the durability of Amazon S3, which is 99.999999999%. The backup in region 2 will also be encrypted and secured in the same way as your backup in region 1. You can read more about the encryption process of your AWS CloudHSM backups here.

Any HSMs created in this cloned cluster will have the same users and keys as the original cluster at the time the backup was taken. From this point on, you must manually keep the cloned clusters in sync. Specifically:

  • If you create users after creating your new cluster from the backup, you must create them on both clusters manually.
  • If you change the password for a user in one cluster, you must change the password on the cloned clusters to match.
  • If you create more keys in one cluster, you must sync them to at least one HSM in the cloned cluster. Note that after you sync a key from cluster 1 to cluster 2, CloudHSM’s automated cluster synchronization takes care of syncing that key across the HSMs within the second cluster.

Prerequisites

Some items that will need to be in place for this to work are:

Important note: Syncing keys across clusters in more than one region will only work if all clusters are created from the same backup. This is because synchronization requires the same secret key, called a masking key, to be present on the source and destination HSM. The masking key is specific to each cluster. It can’t be exported, and can’t be used for any purpose other than synchronizing keys across HSMs in a cluster.

Step 1: Create your first cluster in region 1

Follow the links in each sub-step below to the documentation page for more information and setup requirements:

  1. Create the cluster. To do this, you will run the command below via CLI. You will want to replace the placeholder <SUBNET ID 1> with one of your private subnets.
    $ aws cloudhsmv2 create-cluster --hsm-type hsm1.medium --subnet-ids <SUBNET ID 1>
  2. Launch your Amazon Elastic Compute Cloud (Amazon EC2) client (in the public subnet). You can follow the steps here to launch an EC2 Instance.
  3. Create the first HSM (in the private subnet). To do this, you will run the command below via CLI. You will want to replace the placeholder <CLUSTER ID> with the ID given from the ‘Create the cluster’ command above. You’ll replace <AVAILABILITY ZONE> with the AZ matching your private subnet. For example, us-east-1a.
    $ aws cloudhsmv2 create-hsm --cluster-id <CLUSTER ID> --availability-zone <AVAILABILITY ZONE>
  4. Initialize the cluster. Initializing your cluster requires creating a self-signed certificate and using that to sign the cluster’s Certificate Signing Request (CSR). You can view an example here of how to create and use a self-signed certificate. Once you have your certificate, you will run the command below to initialize the cluster with it. You will want to replace the placeholder <CLUSTER ID> with your cluster id from step 1.
    $ aws cloudhsmv2 initialize-cluster --cluster-id <CLUSTER ID> --signed-cert file://<CLUSTER ID>_CustomerHsmCertificate.crt --trust-anchor file://customerCA.crt

    Note: Don’t forget to place a copy of the certificate used to sign your cluster’s CSR into the /opt/cloudhsm/etc directory to ensure a continued secure connection.

  5. Install the cloudhsm-client software. Once the Amazon EC2 client is launched, you’ll need to download and install the cloudhsm-client software. You can do this by running the command below from the CLI:
    wget https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/EL6/cloudhsm-client-latest.el6.x86_64.rpm

    Once downloaded, you’ll install by running this command:
    sudo yum install -y ./cloudhsm-client-latest.el6.x86_64.rpm

  6. The last step in initializing the cluster requires you to configure the cloudhsm-client to point to the ENI IP of your first HSM. You do this on your EC2 client by running this command:
    $ sudo /opt/cloudhsm/bin/configure -a <IP ADDRESS>
    Replace the <IP ADDRESS> placeholder with your HSM’s ENI IP. The cloudhsm-client comes pre-installed with a Python script called “configure” located in the /opt/cloudhsm/bin/ directory. This will update your /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg and /opt/cloudhsm/etc/cloudhsm_client.cfg files with your HSM’s IP address. This ensures your client can connect to your cluster.
  7. Activate the cluster. To activate, you must launch the cloudhsm-client by running this command, which logs you into the cluster:

    $ /opt/cloudhsm/bin/cloudhsm_mgmt_util /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg

    Then, you need to enable the secure communication by running this command:

    aws-cloudhsm>enable_e2e

    If you’ve placed the certificate in the correct directory, you should see a response like this on the command line:

    E2E enabled on server 0(server1)

    If you run the command listUsers you’ll see a PRECO user:

    
    aws-cloudhsm>listUsers
    Users on server 0(server1):
    Number of users found:2
    	User ID		User Type	User Name
    	     1		PRECO		admin
    	     2		AU		app_user
    
    

    Change the password for this user to complete the activation process. You do this by first logging in using the command below:

    aws-cloudhsm>loginHSM PRECO admin password

    Once logged in, change the password using this command:

    
    aws-cloudhsm>changePswd PRECO admin <NEW PASSWORD>
    ***************************CAUTION******************************
    This is a CRITICAL operation, should be done on all nodes in the
    cluster. Cav server does NOT synchronize these changes with the 
    nodes on which this operation is not executed or failed, please
    ensure this operation is executed on all nodes in the cluster.
    ****************************************************************
    
    Do you want to continue(y/n)?Y
    Changing password for admin(PRECO) on 1 nodes
    

    Once completed, log out using the command logout, then log back in with the new password, using the command loginHSM PRECO admin <NEW PASSWORD>.

    Doing this allows you to create the first crypto user (CU). You create the user by running the command:
    aws-cloudhsm>createUser <USERTYPE (ex: CO, CU)> <USERNAME> <PASSWORD>
    Replace the placeholder values in this command. The <USERTYPE> can be a CO (crypto officer) or a CU (crypto user). You can find more information about user types here. You’ll replace the placeholders <USERNAME> and <PASSWORD> with a real user name and password combination. Crypto users are permitted to create and share keys on the CloudHSM.

    Run the command quit to exit this tool.

Step 2: Trigger a backup of your cluster

To trigger a backup that will be copied to region 2 to create your new cluster, add an HSM to your cluster in region 1. You can do this via the console or CLI. The backup that is created will contain all users (COs, CUs, and appliance users), all key material on the HSMs, and the configurations and policies associated with them. The user portion is extremely important because keys can only be synced across clusters to the same user. Make a note of the backup ID because you will need it later. You can find this by logging into the AWS console and navigating to the CloudHSM console, then selecting Backups. There will be a list of backup IDs, cluster IDs, and creation times. Make sure to select the backup ID specifically created for the cross-region copy.
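
If you'd rather script this step, the following boto3 sketch shows one way to add the HSM and then list the cluster's backups so you can note the newest backup ID; the cluster ID and Availability Zone are placeholders:

# Hedged sketch: add an HSM (which triggers a new cluster backup), then list backups
# for the cluster so you can note the most recent backup ID. IDs below are placeholders.
import boto3

cloudhsm = boto3.client("cloudhsmv2", region_name="us-east-1")

cloudhsm.create_hsm(
    ClusterId="cluster-kzlczlspnho",
    AvailabilityZone="us-east-1a",
)

backups = cloudhsm.describe_backups(
    Filters={"clusterIds": ["cluster-kzlczlspnho"]},
)

# Sort newest first and print IDs, states, and creation times.
for backup in sorted(backups["Backups"], key=lambda b: b["CreateTimestamp"], reverse=True):
    print(backup["BackupId"], backup["BackupState"], backup["CreateTimestamp"])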

Step 3: Create a key on your cluster in Region 1

There are many ways to create a key. I’m using key_mgmt_util because it’s an easy and straightforward method using CLI commands instead of SDK libraries. Start by connecting to the EC2 client instance that you launched above and ensuring the cloudhsm-client is running. If you aren’t sure, run this command:

$ sudo start cloudhsm-client

Now, launch the key_mgmt_util by running this command:

$ /opt/cloudhsm/bin/key_mgmt_util

When you see the prompt, log in as a CU to create your key, replacing <USERNAME> and <PASSWORD> with an actual CU user’s username and password:

Command: loginHSM -u CU -s <USERNAME> -p <PASSWORD>

To create the key for this example, we’re going to use the key_mgmt_util to generate a symmetric key. Note the -nex parameter is what makes this key non-exportable. An example command is below:

Command: genSymKey -t 31 -s 32 -l aes256 -nex

In the above command:

  1. genSymKey creates the Symmetric key
  2. -t chooses the key type, which in this case is AES
  3. -s states the key size, which in this case is 32 bytes
  4. -l creates a label to easily recognize the key by
  5. -nex makes the key non-exportable

The HSM will return a key handle. This is used as an identifier to reference the key in future commands. Make a note of the key handle because you will need it later. Here’s an example of the full output in which you can see the key handle provided is 37:


Command:genSymKey -t 31 -s 32 -l aes256 -nex
	Cfm3GenerateSymmetricKey returned: 0x00 : HSM Return: SUCCESS
	Symmetric Key Created.   Key Handle: 37
	Cluster Error Status
	Node id 0 and err state 0x00000000 : HSM Return: SUCCESS

Step 4: Copy your backup from region 1 to region 2 and create a cluster from the backup

To copy your backup from region 1 to region 2, from your EC2 client you’ll need to run the command that appears after these important notes:

  • Make sure the proper permissions are applied for the IAM role or user configured for the CLI. You’ll want to be a CloudHSM administrator for these actions. The instructions here show you how to create an admin user for this process, and here is an example of the permissions policy:
    
    {
       "Version":"2018-06-12",
       "Statement":{
          "Effect":"Allow",
          "Action":[
             "cloudhsm:*",
             "ec2:CreateNetworkInterface",
             "ec2:DescribeNetworkInterfaces",
             "ec2:DescribeNetworkInterfaceAttribute",
             "ec2:DetachNetworkInterface",
             "ec2:DeleteNetworkInterface",
             "ec2:CreateSecurityGroup",
             "ec2:AuthorizeSecurityGroupIngress",
             "ec2:AuthorizeSecurityGroupEgress",
             "ec2:RevokeSecurityGroupEgress",
             "ec2:DescribeSecurityGroups",
             "ec2:DeleteSecurityGroup",
             "ec2:CreateTags",
             "ec2:DescribeVpcs",
             "ec2:DescribeSubnets",
             "iam:CreateServiceLinkedRole"
          ],
          "Resource":"*"
       }
    }
    
    

  • To copy the backup over, you need to know the destination region, the source cluster ID, and/or the source backup ID. You can find the source cluster ID and/or the source backup ID in the CloudHSM console.
  • If you use only the cluster ID, the most recent backup of the associated cluster will be chosen for copy. If you specify the backup ID, that associated backup will be copied. If you don’t know these IDs, run the describe-clusters or describe-backups commands.

Here’s the example command:


$ aws cloudhsmv2 copy-backup-to-region --destination-region <DESTINATION REGION> --cluster-id <CLUSTER ID> --backup-id <BACKUP ID>
{
    "DestinationBackup": {
        "BackupId": "backup-gdnekhcxf4n",
        "CreateTimestamp": 1531742400,
        "BackupState": "CREATE_IN_PROGRESS",
        "SourceCluster": "cluster-kzlczlspnho",
        "SourceBackup": "backup-4kuraxsqetz",
        "SourceRegion": "us-east-1"
    }
}

Once the backup has been copied to region 2, you’ll see a new backup ID in your console. This is what you’ll use to create your new cluster. You can follow the steps here to create your new cluster from this backup. This cluster will launch already initialized for you, but it will still need HSMs added into it to complete the activation process. Make sure you copy over the cluster certificate from the original cluster to the new region. You can do this by opening two terminal sessions, one for each HSM. Open the certificate file on the HSM in cluster 1 and copy it. On the HSM in cluster 2, create a file and paste the certificate over. You can use any text editor you like to do this. This certificate is required to establish the encrypted connection between your client and HSM instances.

You should also make sure you’ve added the cloned cluster’s Security Group to your EC2 client instance to allow connectivity. You do this by selecting the Security Group for your EC2 client in the EC2 console, and selecting Add rules. You’ll add a rule allowing traffic, with the source being the Security Group ID of your cluster.

Finally, take note of the ENI IP for the HSM because you’ll need it later. You can find this in your CloudHSM Console by clicking on the cluster for more information.
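
You can also look up the ENI IP programmatically. Here's a minimal boto3 sketch; the region and cluster ID are placeholders:

# Hedged sketch: print the ENI IP of each HSM in the cloned cluster.
# The region and cluster ID are placeholder assumptions.
import boto3

cloudhsm = boto3.client("cloudhsmv2", region_name="us-west-2")

clusters = cloudhsm.describe_clusters(
    Filters={"clusterIds": ["cluster-abcdefghijk"]},
)

for cluster in clusters["Clusters"]:
    for hsm in cluster["Hsms"]:
        print(hsm["HsmId"], hsm.get("EniIp"), hsm["State"])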

Step 5: Create a new configuration file with one ENI IP from both clusters

To sync a key from a cluster in region 1 to a cluster in region 2, you must create a configuration file that contains at least one ENI IP of an HSM in both clusters. This is required to allow the cloudhsm-client to communicate with both clusters at the same time. This is where the masking key we mentioned earlier comes into play as the syncKey command uses that to copy keys between clusters. This is why the cluster in region 2 must be created from a backup of the cluster in region 1. For the new configuration file, I’m going to copy over the original file /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg to a new file. Name this SyncClusters.cfg. You’re going to edit this new configuration file to have the ENI IP of the HSM in the cluster of region 1 and the ENI IP of the HSM in the cluster of region 2. It should look something like this:


{
    "scard": {
        "certificate": "cert-sc",
        "enable": "no",
        "pkey": "pkey-sc",
        "port": 2225
    },
    "servers": [
        {
            "CAfile": "",
            "CApath": "/opt/cloudhsm/etc/certs",
            "certificate": "/opt/cloudhsm/etc/client.crt",
            "e2e_encryption": {
                "enable": "yes",
                "owner_cert_path": "/opt/cloudhsm/etc/customerCA.crt"
            },
            "enable": "yes",
            "hostname": "",
            "name": "",
            "pkey": "/opt/cloudhsm/etc/client.key",
            "port": 2225,
            "server_ssl": "yes",
            "ssl_ciphers": ""
        },
        {
            "CAfile": "",
            "CApath": "/opt/cloudhsm/etc/certs",
            "certificate": "/opt/cloudhsm/etc/client.crt",
            "e2e_encryption": {
                "enable": "yes",
                "owner_cert_path": "/opt/cloudhsm/etc/customerCA.crt"
            },
            "enable": "yes",
            "hostname": "",
            "name": "",
            "pkey": "/opt/cloudhsm/etc/client.key",
            "port": 2225,
            "server_ssl": "yes",
            "ssl_ciphers": ""
        }
    ]
}

To verify connectivity to both clusters, start the cloudhsm-client using the modified configuration file. The command will look similar to this:

$ /opt/cloudhsm/bin/cloudhsm_mgmt_util /opt/cloudhsm/etc/SyncClusters.cfg

After connection, you should see something similar to this, with one IP from cluster 1 and one IP from cluster 2:


Connecting to the server(s), it may take time
depending on the server(s) load, please wait...
Connecting to server '<CLUSTER-1-IP>': hostname '<CLUSTER-1-IP>', port 2225...
Connected to server '<CLUSTER-1-IP>': hostname '<CLUSTER-1-IP>', port 2225.
Connecting to server '<CLUSTER-2-IP>': hostname '<CLUSTER-2-IP>', port 2225...
Connected to server '<CLUSTER-2-IP>': hostname '<CLUSTER-2-IP>', port 2225.

If you run the command info server from the prompt, you’ll see a list of servers your client is connected to. Make note of these because they’ll be important when syncing your keys. Typically, you’ll see server 0 as your first HSM in cluster 1 and server 1 as your first HSM in cluster 2.

Step 6: Sync your key from the cluster in region 1 to the cluster in region 2

You’re ready to sync your keys. Make sure you’ve logged in as the Crypto Officer (CO) user. Only the CO user can perform management functions on the cluster (for example, syncing keys).

Note: These steps are all performed at the server prompt, not the aws-cloudhsm prompt.

First, run the command listUsers to get the user ID of the user that created the keys. Here’s an example:


server0>listUsers
Users on server 0(<CLUSTER-1-IP>):
Number of users found:3

User Id    User Type	User Name     MofnPubKey     LoginFailureCnt	 2FA
1		CO   	admin         NO               0	 	 NO
2		AU   	app_user      NO               0	 	 NO
3		CU   	<USERNAME>    NO               0	 	 NO

Make note of the user ID because you’ll need it later; in this case, it’s 3. Now, you need to see the key handles that you want to sync. You either noted this from earlier, or you can find this by running the findAllKeys command with the parameter for user 3. Here’s an example:


server0>findAllKeys 3 0
Keys on server 0(<CLUSTER-1-IP>):
Number of keys found 1
number of keys matched from start index 0::1
37
findAllKeys success

In this case, the key handle I want to sync is 37. When running the command syncKey, you’ll input the key handle and the server you want to sync it to (the destination server). Here’s an example:

server0>syncKey 37 1

In this example, 37 is the key handle, and 1 is the destination HSM. You’ll run the exit command to back out to the cluster prompt, and from here you can run findAllKeys again, which should show the same key handle on both clusters.


aws-cloudhsm>findAllKeys 3 0
Keys on server 0(<CLUSTER-1-IP>):
Number of keys found 1
number of keys matched from start index 0::1
37
findAllKeys success on server 0(<CLUSTER-1-IP>)
Keys on server 1(<CLUSTER-2-IP>):
Number of keys found 1
number of keys matched from start index 0::1
37
findAllKeys success on server 1(<CLUSTER-2-IP>)

Repeat this process with all keys you want to sync between clusters.

Summary

I walked you through how to create a cluster, trigger a backup, copy that backup to a new region, launch a new cluster from that backup, and then sync keys across clusters. This will help reduce disaster recovery time, while helping to ensure that your keys are secure in multiple regions should a failure occur.

Remember to always manually update users across clusters after the initial backup copy and cluster creation because these updates aren’t automatic. You must also run the syncKey command on any keys you create after the copy.

You’re now set up for fault tolerance in your AWS CloudHSM environment.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS CloudHSM forum.

Want more AWS Security news? Follow us on Twitter.

Author

Tracy Pierce

Tracy Pierce is a Senior Cloud Support Engineer at AWS. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority and she shows this by improving processes, documentation, and building tutorials. She has her AS in Computer Security & Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls at request.

Visualizing Amazon GuardDuty findings

Post Syndicated from Mike Fortuna original https://aws.amazon.com/blogs/security/visualizing-amazon-guardduty-findings/

Amazon GuardDuty is a managed threat detection service that continuously monitors for malicious or unauthorized behavior to help protect your AWS accounts and workloads. Enable GuardDuty and it begins monitoring for:

  • Anomalous API activity
  • Potentially unauthorized deployments and compromised instances
  • Reconnaissance by attackers.

GuardDuty analyzes and processes VPC flow log, AWS CloudTrail event log, and DNS log data sources. You don’t need to manually manage these data sources because the data is automatically leveraged and analyzed when you activate GuardDuty. For example, GuardDuty consumes VPC Flow Log events directly from the VPC Flow Logs feature through an independent and duplicative stream of flow logs. As a result, you don’t incur any operational burden on existing workloads.

GuardDuty helps find potential threats in your AWS environment by producing security findings that you can view in the GuardDuty console or consume through Amazon CloudWatch Events, which is a service that makes alerts actionable and easier to integrate into existing event management and workflow systems. One common question we hear from customers is “how do I visualize these findings to generate meaningful insights?” In this post, we’re going to show you how to create a dashboard that includes visualizations like this:
 

Figure 1: Example visualization

You’ll learn how you can use AWS services to create a pipeline for your GuardDuty findings so you can log and visualize them. The services include:

Architecture

The architectural diagram below illustrates the pipeline we’ll create.
 

Figure 2: Architectural diagram

We’ll walk through the data flow to explain the architecture and highlight the additional customizations available to you.

  1. Amazon GuardDuty is enabled in an account and begins monitoring CloudTrail logs, VPC flow logs, and DNS query logs. If a threat is detected, GuardDuty forwards a finding to CloudWatch Events. For a newly generated finding, GuardDuty sends a notification based on its CloudWatch event within 5 minutes of the finding. CloudWatch Events allows you to send upstream notifications to various services filtered on your configured event patterns. We’ll configure an event pattern that only forwards events coming from the GuardDuty service.
  2. We define two targets in our CloudWatch Event Rule. The first target is a Kinesis Firehose stream for delivery into an Elasticsearch domain and an S3 bucket. The second target is an SNS Topic for Email/SMS notification of findings. We’ll send all findings to our targets; however, you can filter and format the findings you send by using a Lambda function (or by event pattern matching with a CloudWatch Event Rule). For example, you could send only high-severity alarms (that is, findings with detail.severity > 7).
  3. The Firehose stream delivers findings to Amazon Elasticsearch, which provides visualization and analysis for our event findings. The stream also delivers findings to an S3 bucket. The S3 bucket is used for long term archiving. This data can augment your data lake and you can use services such as Amazon Athena to perform advanced analytics.
  4. We’ll search, explore, and visualize the GuardDuty findings using Kibana and the Elasticsearch query Domain Specific Language (DSL) to gain valuable insights. Amazon Elasticsearch has a built-in Kibana plugin to visualize the data and perform operational analyses.
  5. To provide a simplified and secure authentication method, we use Amazon Cognito User Pools for user authentication to Kibana. This approach provides improved security compared with traditional IP whitelists or proxy infrastructure.
  6. Our second CloudWatch Event target is SNS, which has subscribed email endpoint(s) that allow your operations teams to receive email (or SMS messages) when a new GuardDuty Event is received.

If you would like to centralize your findings from multiple regions into a single S3 bucket, you can adapt this pipeline. You would deploy the frontend of the pipeline by configuring Kinesis Firehose in the remote regions to point to the S3 bucket in the centralized region. You can leverage prefixes in the Kinesis Firehose configuration to identify the source region. For example, you would configure a prefix of us-west-1 for events originating from the us-west-1 region. Analytic queries from tools such as Athena can then selectively target the desired region.
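
The CloudFormation template in the next section creates the event rule and targets for you, but as a point of reference, a hedged boto3 sketch of the GuardDuty event pattern and an SNS target (described in steps 1, 2, and 6 of the data flow above) might look like this; the rule name and topic ARN are placeholders:

# Hedged sketch: a CloudWatch Events rule that matches only GuardDuty findings
# and forwards them to an SNS topic. The rule name and topic ARN are placeholders;
# the CloudFormation template in this post creates the real resources for you.
import json

import boto3

events = boto3.client("events", region_name="us-east-1")

events.put_rule(
    Name="guardduty-findings-to-sns",
    EventPattern=json.dumps(
        {
            "source": ["aws.guardduty"],
            "detail-type": ["GuardDuty Finding"],
        }
    ),
    State="ENABLED",
)

# Note: the SNS topic's access policy must allow CloudWatch Events to publish to it.
events.put_targets(
    Rule="guardduty-findings-to-sns",
    Targets=[
        {
            "Id": "sns-notification",
            "Arn": "arn:aws:sns:us-east-1:111122223333:guardduty-findings",
        }
    ],
)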

Deployment Steps

This CloudFormation template will install the pipeline and components required for GuardDuty visualization:
 
Select button to launch stack

When you start the stack creation process you will be prompted for the following information:

  • Stack name — This is the name of the stack you will create
  • EmailAddress — This email address is used to create a username in Cognito and a subscriber to the SNS topic.
  • ESDomainName — This will be the name given to the Elasticsearch Domain.
  • IndexName — This will be the Index created by Firehose to load data into Elasticsearch.

 

Figure 3: The "Create stack" interface

Once the infrastructure is installed, you’ll follow two main steps, each of which is described in detail later:

  1. Add Cognito authentication to Kibana, which is hosted on the Elasticsearch domain. At the time of writing, this can’t be done natively in CloudFormation. We’ll also confirm the SNS subscription so we can start to receive GuardDuty Findings via email.
  2. Configure Kibana with the index, the appropriate scripted fields, and the dashboard to provide the visualizations. We’ll also enable GuardDuty to start monitoring your account and send sample findings to test the pipeline.

Step 1: Enable Cognito authentication in Kibana

To enable user authentication to your dashboards hosted in Kibana, you need to enable the integration from the Elasticsearch domain that was created within the CloudFormation template.

  1. Open the AWS Console and select the Cognito service. Select Manage User Pools to access the User Pool that was created in CloudFormation. Select the user pool beginning with the name VisualizeGuardDutyUserPool and, under the App Integration menu item, select Domain name.
     
    Figure 4: The "Domain name" interface

  2. You need to create a unique domain prefix to allow Kibana to authenticate using Cognito. Enter a unique domain prefix (it can only contain lowercase letters, numbers, and hyphens). After entering the prefix, select the Check Availability button to ensure it’s available in the region. If it’s available, select Save Changes button.
  3. From the AWS Console, select the CloudFormation service.
  4. Select the template you created for your pipeline, select the Outputs tab, and then, under Value, copy the value of ESCognitoRole. You’ll use this role when you enable Cognito authentication of Elasticsearch.
     
    Figure 5: The "Outputs" tab and the "ESCognitoRole" key

  5. Next, browse to the Elasticsearch Service, select the domain you created from the CloudFormation template, and select the Configure cluster button:
     
    Figure 6: The "Configure cluster" button

  6. Under the Kibana authentication section, select the Enable Amazon Cognito for authentication checkbox. You’ll be presented with several fields you need to configure, including: Cognito User Pool (the name of the user pool should start with VisualizeGuardDutyUserPool), Cognito Identity Pool (the name of the identity pool should start with VisualizeGuardDutyIDPool), and IAM Role Name (this was copied in step 4 earlier). A Cognito User Pool is a user directory in Amazon Cognito; we use this to create a user account that provides authentication to Kibana. Amazon Cognito Identity Pools (federated identities) enable you to create unique identities for your users and federate them with identity providers; in our case, the Cognito Identity Pool is used to provide federated access to Kibana. After you provide values for these fields, select the Submit button. (A scripted alternative to these console steps appears after this list.)
     
    Figure 7: The "Kibana authentication" interface

  7. The cluster reconfiguration will take several minutes to complete processing. When you see Domain status as Active, you can proceed.
  8. Finally, confirm the subscription email you received from SNS. Look for an email from AWS Notifications <no-reply@sns.amazonaws.com>, open the message, and select Confirm subscription to allow SNS to send you email when the SNS Topic receives a notification for new GuardDuty findings.
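
If you prefer to script the configuration above instead of using the console, a boto3 sketch along these lines sets the Cognito domain prefix and enables Cognito authentication for Kibana on the Elasticsearch domain; every ID, name, and ARN shown is a placeholder that you should replace with the values from your CloudFormation stack outputs:

# Hedged sketch of the same configuration via boto3: set the Cognito domain prefix,
# then enable Cognito authentication for Kibana on the Elasticsearch domain.
# All IDs, names, and the role ARN below are placeholder assumptions.
import boto3

cognito = boto3.client("cognito-idp", region_name="us-east-1")
es = boto3.client("es", region_name="us-east-1")

cognito.create_user_pool_domain(
    Domain="my-unique-guardduty-prefix",          # must be unique within the region
    UserPoolId="us-east-1_EXAMPLE",
)

es.update_elasticsearch_domain_config(
    DomainName="guardduty-findings",
    CognitoOptions={
        "Enabled": True,
        "UserPoolId": "us-east-1_EXAMPLE",
        "IdentityPoolId": "us-east-1:11111111-2222-3333-4444-555555555555",
        "RoleArn": "arn:aws:iam::111122223333:role/ESCognitoRole",
    },
)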

Step 2: Set up the Kibana dashboard and enable GuardDuty

Now, you can set up the Kibana dashboard with custom visualizations.

  1. Open the CloudFormation service page and select the stack you created earlier.
  2. Under the Outputs section, copy the Kibana URL.
     
    Figure 8: Copy the Kibana URL

  3. Paste the Kibana URL in a new browser window.
  4. Check your email client. You should have an email containing the temporary password from Cognito. Copy the temporary password and use it to log in to Cognito. If you haven’t received the email, check your email junk folder. You can also create additional users in the Cognito User Pool that was created from the CloudFormation stack to give additional users access to Kibana.
     
    Figure 9: Example email with temporary password

  5. At the login prompt, enter the email address and password for the Cognito user the CloudFormation template created. A prompt to change your password will appear. Change your password to proceed. The Cognito User Pool requires: upper case letters, lower case letters, special characters, and numbers with a minimum length of 8 characters.
  6. It’s time to add mapping information to your index to instruct Kibana that some of the fields are delivered as geopoints. This allows these fields to be properly visualized with a Coordinate Map. Select Dev Tools in the menu on the left side:
  7.  

    Figure 10: Select "Dev tools"

  8. Paste the following API call in the text box to provide the appropriate mappings for the networkConnectionAction and portProbeAction geolocation fields. This calls the Elasticsearch API and updates the geolocation mapping for the above fields:
    
    PUT _template/gdt
    {
      "template": "gdt*",
      "settings": {},
      "mappings": {
        "_default_": {
          "properties": {
            "detail.service.action.portProbeAction.portProbeDetails.remoteIpDetails.geoLocation": {
              "type": "geo_point"
            },
            "detail.service.action.networkConnectionAction.remoteIpDetails.geoLocation": {
              "type": "geo_point"
            }        
          }
        }
      }
    }
    

  9. After you paste the API call, be sure to remove any whitespace after the ending brace, and then select the green arrow to execute it. You should receive a message that the call was successful.
     
    Figure 11: Paste the API call

  10. Next, enable GuardDuty and send sample findings so you can create the Kibana Dashboard with data present. Find the GuardDuty service in the AWS Console and select the Get started button.
  11. From the Welcome to GuardDuty page, select the Enable GuardDuty button.
  12. Next, send some sample events. From the GuardDuty service, select the Settings menu on the left-hand menu, and then select Generate sample findings as shown here:
     
    Figure 12: The "Generate sample findings" button

  13. Optionally, if you want to test with real GuardDuty findings, you can leverage the Amazon GuardDuty Tester. This AWS CloudFormation template creates an isolated environment with a bastion host, a tester EC2 instance, and two target EC2 instances to simulate five types of common attacks that GuardDuty is built to detect and notify you with generated findings. Once deployed, you would use the tester EC2 instance to execute a shell script to generate GuardDuty findings. Additional detail about this option can be found in the GuardDuty documentation.
  14. On the Kibana landing page, in the menu on the left side, create the Index by selecting Management.
  15. On the Management page, select Index Patterns.
  16. On the Create index pattern page, under Index patterns, enter gdt-* (if you used a different IndexName in the CloudFormation template, use that here), and then select Next Step.

    Note: It takes several minutes for the GuardDuty findings to generate a CloudWatch Event, work through the pipeline, and create the index in Elasticsearch. If the index doesn’t appear initially, please wait a few minutes and try again.

     

    Figure 13: The "Create index pattern" page

  17. Under Time Filter field name, select time from the drop-down list, and then select Create index pattern.
     
    Figure 14: The "Time Filter field name" list

Create scripted fields

With the Index defined, we will now create two scripted fields that your dashboard visualizations will use.

Define the severity level

  1. Select the Index you just created, and then select scripted fields.
     
    Figure 15: The "scripted fields" tab

  2. Select Add Scripted Field, and enter the following information:
    • Name — sevLevel
    • Language — painless
    • Type — String
    • Format (Default: String) — -default-
    • Popularity — (leave at default of 0)
    • Script — copy and paste this script into the text-entry field:
      
      if (doc['detail.severity'].value < 3.9) {
          return "Low";
      }
      else if (doc['detail.severity'].value < 6.9) {
          return "Medium";
      }
      return "High";
      

  3. After entering the information, select Create Field.

The sevLevel field provides a value-to-level mapping as defined by GuardDuty Severity Levels. This allows you to visualize the severity levels in a more user-friendly format (High, Medium, and Low) instead of a cryptic numerical value. To generate sevLevel, we used Kibana painless scripting, which allows custom field creation.

Define the attack type

  1. Now create a second scripted field for typeCategory. The typeCategory field extracts the finding attack type. Enter the following information:
    • Name — typeCategory
    • Language — painless
    • Type — String
    • Format (Default: String) — -default-
    • Popularity — (leave at default of 0)
    • Script — Copy and paste this script into the text-entry field:
      
      def path = doc['detail.type.keyword'].value;
      if (path != null) {
          int firstColon = path.indexOf(":");
          if (firstColon > 0) {
          return path.substring(0,firstColon);
          }
      }
      return "";
      

  2. After entering the information, select Create Field.

The typeCategory field is used to define the broad category “attack type.” The source field (detail.type.keyword) provides a lot of detailed information (for example: Recon:EC2/PortProbeUnprotectedPort), but we want to visualize the category of “attack type” in the high-level dashboard (that is, only Recon). We can still visualize on a more granular level, if necessary.

Create the Kibana Dashboard

  1. Create the Kibana dashboard by importing a JSON file containing its definition. To do this, download the Kibana dashboard and visualizations definition JSON file from here.
  2. Select Management in the menu on the left, and then select Saved Objects. On the right, select Import.
  3. Select the JSON file you downloaded and select Open. This imports the GuardDuty dashboard and visualizations. Select Yes, overwrite all objects.
  4. In the Index Pattern Conflicts section, under New index pattern, select gdt-*, and then select Confirm all changes.

Dashboard in action

  1. Select Dashboard in the menu on the left.
  2. Select the Guard Duty Summary link.

Your GuardDuty Dashboard will look like this:
 

Figure 16: The GuardDuty dashboard with callouts

The dashboard provides the following visualizations:

  1. This filter allows you to filter sample findings from real findings. If you generate sample findings from the GuardDuty AWS console, this filter allows you to remove the sample findings from the dashboard.
  2. The GuardDuty — Affected Instances chart shows which EC2 instances have associated findings. This visualization allows you to filter specific instances from display by selecting them in the graphic.
  3. The Guard Duty — Threat Type chart allows you to filter on the general attack type (inner circle) as well as the specific attack type (outer circle).
  4. The Guard Duty — Events Per Day graph allows you to visualize and filter on a specific time or date to show findings for that specific time, as well as search for temporal patterns in findings.
  5. GuardDuty — Top10 Findings provides a list of the top 10 findings by count.
  6. GuardDuty — Total Events provides the total number of events based on the criteria chosen. This value will change based on the filters defined.
  7. The GuardDuty — Heatmap — Port Probe Source Countries visualizes the countries where port probes are issued from. This is a Coordinate Map visualization that allows you to see the source and volume of the port probes targeting your instances.
  8. The GuardDuty — Network Connection Source Countries visualizes where brute force attacks are coming from. This is a Region Map visualization that allows you to highlight the country the brute force attacks are sourced from.
  9. GuardDuty — Severity Levels is a pie chart that shows findings by severity levels (High, Med, Low), and you can filter by a specific level (that is, only show high-severity findings). This visualization uses the scripted field we created earlier for simplified visualization.
  10. The All-GuardDuty table includes the raw findings for all events. This provides complete raw event detail and the ability to filter at very granular levels.

In a previous blog, we saw how you can create a Kibana dashboard to visualize your network security posture by visualizing your VPC flow logs. This GuardDuty dashboard augments that dashboard. You can use a single Elasticsearch cluster to host both of these dashboards, in addition to other data sources you want to analyze and report on.

Conclusion

We’ve outlined an approach to rapidly build a pipeline to help you archive, analyze, and visualize your GuardDuty findings for rapid insight and actionable intelligence. You can extend this solution in a number of ways, including:

  • Modifying the alert email sent with a structured message (instead of raw JSON)
  • Adding additional visualizations, such as heatmaps or time series charts
  • Extending the solution across AWS accounts or regions.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon GuardDuty forum.

Want more AWS Security news? Follow us on Twitter.

Michael Fortuna

Michael is a Solutions Architect in AWS supporting enterprise customers and their journey to the cloud. Prior to his work on AWS and cloud technologies, Michael’s areas of focus included software-defined networking, security, collaboration, and virtualization technologies. He’s very excited to work as an SA because it allows him to dive deep on technology while helping customers.

Ravi Sakaria

Ravi is a Senior Solutions Architect at AWS based in New York. He works with enterprise customers as they transform their business and journey to the cloud. He enjoys the culture of innovation at Amazon because it’s similar to his prior experiences building startup companies. Outside of work, Ravi enjoys spending time with his family, cooking, and watching the New Jersey Devils.

AWS completes TISAX high assessment

Post Syndicated from Gerald Boyne original https://aws.amazon.com/blogs/security/aws-completes-tisax-high-assessment/

We have completed the European automotive industry’s TISAX high assessment for 43 services. To successfully complete the TISAX high assessment, EY Germany conducted an independent audit, and attested that our information management system meets industry-set standards. This provides automotive industry organizations the assurance needed to build secure applications and services on AWS.

TISAX was established by the German Association of the Automotive Industry (VDA) and is governed by the European Network Exchange (ENX), which is an association of 15 companies within the European automotive industry.

The following AWS services were assessed at the TISAX high level in our Dublin and Frankfurt Regions:

  • Amazon API Gateway
  • Amazon CloudFront
  • Amazon CloudWatch Logs
  • Amazon Cognito
  • Amazon Connect
  • Amazon DynamoDB
  • Amazon ElastiCache
  • Amazon Elastic Block Store (EBS)
  • Amazon Elastic Container Registry (ECR)
  • Amazon Elastic Container Service (ECS)
  • Amazon Elastic Compute Cloud (EC2)
  • Amazon Elastic File System (EFS)
  • Amazon Elastic Load Balancing
  • Amazon Elastic MapReduce (EMR)
  • Amazon Glacier
  • Amazon Kinesis Data Streams
  • Amazon Redshift
  • Amazon Relational Database Service (RDS)
  • Amazon Route 53
  • Amazon S3 Transfer Acceleration
  • Amazon Simple Notification Service (SNS)
  • Amazon Simple Queue Service (SQS)
  • Amazon Simple Storage Service (S3)
  • Amazon Simple Workflow Service (SWF)
  • Amazon Virtual Private Cloud (VPC)
  • Amazon WorkSpaces
  • AWS CloudFormation
  • AWS CloudHSM
  • AWS CloudTrail
  • AWS Database Migration Service (DMS)
  • AWS Direct Connect
  • AWS Directory Service for Microsoft Active Directory
  • AWS Elastic Beanstalk
  • AWS Identity and Access Management (IAM)
  • AWS IoT Core
  • AWS Key Management Service (KMS)
  • AWS Lambda
  • AWS Lambda@Edge
  • AWS Shield
  • AWS Step Functions
  • AWS Storage Gateway
  • AWS Systems Manager
  • VM Import/Export

AWS achieves FedRAMP JAB High and Moderate Provisional Authorization across 14 Services in the AWS US East/West and GovCloud Regions

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-jab-high-moderate-provisional-authorization/

Since I launched our FedRAMP program way back in 2013, it has always excited me to talk about how we’re continually expanding the scope of our compliance programs because that means you’re able to use more of our services for sensitive and regulated workloads. Up to this point, we’ve had 22 services in our US East/West Regions under FedRAMP Moderate and 21 services in our GovCloud Region under FedRAMP High.

Today, I’m happy to tell you about the latest expansion of our FedRAMP program, which makes for a 64% overall increase in FedRAMP-covered services. We’ve achieved JAB authorizations for an additional 14 FedRAMP Moderate services in our US East/West Regions, and three of those services also received FedRAMP High in our GovCloud Region. Check out the services below. All the services are available in the US East/West Regions, and the services with asterisks are also available in GovCloud.

  • Amazon API Gateway
  • Amazon Cloud Directory
  • Amazon Cognito
  • Amazon ElastiCache*
  • Amazon Inspector
  • Amazon Macie
  • Amazon QuickSight
  • Amazon Route 53
  • AWS WAF
  • AWS Config
  • AWS Database Migration Service*
  • AWS Lambda
  • AWS Shield Advanced
  • AWS Snowball/Snowball Edge*

You can now see our updated list of authorizations on the FedRAMP Marketplace. We also list all of our services in scope by compliance program on our site. As always, our FedRAMP assessment was completed with a third-party assessment partner to ensure an independent validation of our technical, management, and operational security controls against the FedRAMP baselines.

Our customer obsession starts with you. It’s been a personal goal of mine, and a point of direct feedback from you, to accelerate the pace at which we’re onboarding services into all of our compliance programs, not just FedRAMP. So, we’ll continue to work with you and with regulatory and compliance bodies around the world to ensure that we’re raising the bar on your security and compliance needs and continually earning the trust you place in us.

To learn about what other public sector customers are doing on AWS, see our Government, Education, and Nonprofits Case Studies and Customer Success Stories. And certainly, stay tuned for more exciting future FedRAMP updates.

Want more AWS Security news? Follow us on Twitter.

How to use AWS Secrets Manager to rotate credentials for all Amazon RDS database types, including Oracle

Post Syndicated from Apurv Awasthi original https://aws.amazon.com/blogs/security/how-to-use-aws-secrets-manager-rotate-credentials-amazon-rds-database-types-oracle/

You can now use AWS Secrets Manager to rotate credentials for Oracle, Microsoft SQL Server, or MariaDB databases hosted on Amazon Relational Database Service (Amazon RDS) automatically. Previously, I showed how to rotate credentials for a MySQL database hosted on Amazon RDS automatically with AWS Secrets Manager. With today’s launch, you can use Secrets Manager to automatically rotate credentials for all types of databases hosted on Amazon RDS.

In this post, I review the key features of Secrets Manager. You’ll then learn:

  1. How to store the database credential for the superuser of an Oracle database hosted on Amazon RDS
  2. How to store the Oracle database credential used by an application
  3. How to configure Secrets Manager to rotate both Oracle credentials automatically on a schedule that you define

Key features of Secrets Manager

AWS Secrets Manager makes it easier to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. The key features of this service include the ability to:

  1. Secure and manage secrets centrally. You can store, view, and manage all your secrets centrally. By default, Secrets Manager encrypts these secrets with encryption keys that you own and control. You can use fine-grained IAM policies or resource-based policies to control access to your secrets. You can also tag secrets to help you discover, organize, and control access to secrets used throughout your organization.
  2. Rotate secrets safely. You can configure Secrets Manager to rotate secrets automatically without disrupting your applications. Secrets Manager offers built-in integrations for rotating credentials for all Amazon RDS databases (MySQL, PostgreSQL, Oracle, Microsoft SQL Server, MariaDB, and Amazon Aurora). You can also extend Secrets Manager to meet your custom rotation requirements by creating an AWS Lambda function to rotate other types of secrets.
  3. Transmit securely. Secrets are transmitted securely over Transport Layer Security (TLS) protocol 1.2. You can also use Secrets Manager with Amazon Virtual Private Cloud (Amazon VPC) endpoints powered by AWS Privatelink to keep this communication within the AWS network and help meet your compliance and regulatory requirements to limit public internet connectivity.
  4. Pay as you go. Pay for the secrets you store in Secrets Manager and for the use of these secrets; there are no long-term contracts, licensing fees, or infrastructure and personnel costs. For example, a typical production-scale web application will generate an estimated monthly bill of $6. If you follow along the instructions in this blog post, your estimated monthly bill for Secrets Manager will be $1. Note: you may incur additional charges for using Amazon RDS and Amazon Lambda, if you’ve already consumed the free tier for these services.

Now that you’re familiar with Secrets Manager features, I’ll show you how to store and automatically rotate credentials for an Oracle database hosted on Amazon RDS. I divided these instructions into three phases:

  1. Phase 1: Store and configure rotation for the superuser credential
  2. Phase 2: Store and configure rotation for the application credential
  3. Phase 3: Retrieve the credential from Secrets Manager programmatically

Prerequisites

To follow along, your AWS Identity and Access Management (IAM) principal (user or role) requires the SecretsManagerReadWrite AWS managed policy to store the secrets. Your principal also requires the IAMFullAccess AWS managed policy to create and configure permissions for the IAM role used by Lambda for executing rotations. You can use IAM permissions boundaries to grant an employee the ability to configure rotation without also granting them full administrative access to your account.
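
If you’d rather set up these permissions from a script than from the console, here’s a minimal sketch in Python with boto3 that attaches the two managed policies mentioned above to an IAM role. The role name is a placeholder; attach the policies to whichever principal you plan to use for this walkthrough.

import boto3

iam = boto3.client("iam")

# Placeholder name for illustration; use the IAM principal you plan to
# follow along with (for an IAM user, call attach_user_policy instead).
ROLE_NAME = "SecretsManagerWalkthroughRole"

for policy_arn in (
    "arn:aws:iam::aws:policy/SecretsManagerReadWrite",
    "arn:aws:iam::aws:policy/IAMFullAccess",
):
    iam.attach_role_policy(RoleName=ROLE_NAME, PolicyArn=policy_arn)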

Phase 1: Store and configure rotation for the superuser credential

From the Secrets Manager console, on the right side, select Store a new secret.

Since I’m storing credentials for a database hosted on Amazon RDS, I select Credentials for RDS database. Next, I input the user name and password for the superuser. I start by securing the superuser because it’s the most powerful database credential and has full access to the database.
 

Figure 1: For “Select secret type,” choose “Credentials for RDS database”

For this example, I choose to use the default encryption settings. Secrets Manager will encrypt this secret using the Secrets Manager DefaultEncryptionKey in this account. Alternatively, I can choose to encrypt using a customer master key (CMK) that I have stored in AWS Key Management Service (AWS KMS). To learn more, read the Using Your AWS KMS CMK documentation.
 

Figure 2: Choose either DefaultEncryptionKey or use a CMK

Next, I view the list of Amazon RDS instances in my account and select the database this credential accesses. For this example, I select the DB instance oracle-rds-database from the list, and then I select Next.

I then specify values for Secret name and Description. For this example, I use Database/Development/Oracle-Superuser as the name and enter a description of this secret, and then select Next.
 

Figure 3: Provide values for “Secret name” and “Description”

Since this database is not yet being used, I choose to enable rotation. To do so, I select Enable automatic rotation, and then set the rotation interval to 60 days. Remember, if this database credential is currently being used, first update the application (see phase 3) to use Secrets Manager APIs to retrieve secrets before enabling rotation.
 

Figure 4: Select “Enable automatic rotation”

Next, Secrets Manager requires permissions to rotate this secret on my behalf. Because I’m storing the credentials for the superuser, Secrets Manager can use this credential itself to perform rotations. Therefore, on the same screen, I select Use this secret, and then select Next.

Finally, I review the information on the next screen. Everything looks correct, so I select Store. I have now successfully stored a secret in Secrets Manager.

Note: Secrets Manager will now create a Lambda function in the same VPC as my Oracle database and trigger this function periodically to change the password for the superuser. I can view the name of the Lambda function on the Rotation configuration section of the Secret Details page.

The banner on the next screen confirms that I’ve successfully configured rotation and the first rotation is in progress, which enables me to verify that rotation is functioning as expected. Secrets Manager will rotate this credential automatically every 60 days.
 

Figure 5: The confirmation notification
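
Everything in this phase can also be done programmatically. The sketch below, written in Python with boto3, is a rough, hedged equivalent of the console flow rather than the exact steps the console performs. The database endpoint, password, and rotation Lambda ARN are placeholders; when you enable rotation in the console, Secrets Manager creates the rotation Lambda function for you.

import json
import boto3

secrets = boto3.client("secretsmanager")

# Placeholder values for illustration only.
secret_value = {
    "engine": "oracle",
    "host": "oracle-rds-database.example.us-east-1.rds.amazonaws.com",
    "username": "superuser",
    "password": "REPLACE_ME",
    "dbname": "ORCL",
    "port": 1521,
}

# Store the superuser credential.
response = secrets.create_secret(
    Name="Database/Development/Oracle-Superuser",
    Description="Superuser credential for the development Oracle database",
    SecretString=json.dumps(secret_value),
)

# Enable rotation every 60 days. The rotation Lambda ARN is a placeholder;
# the console creates this function for you when you enable rotation there.
secrets.rotate_secret(
    SecretId=response["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:SecretsManagerOracleRotation",
    RotationRules={"AutomaticallyAfterDays": 60},
)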

Phase 2: Store and configure rotation for the application credential

The superuser is a powerful credential that should be used only for administrative tasks. To enable your applications to access a database, create a unique database credential per application and grant these credentials limited permissions. You can use these database credentials to read or write to database tables required by the application. As a security best practice, deny the ability to perform management actions, such as creating new credentials.

In this phase, I will store the credential that my application will use to connect to the Oracle database. To get started, from the Secrets Manager console, on the right side, select Store a new secret.

Next, I select Credentials for RDS database, and input the user name and password for the application credential.

I continue to use the default encryption key. I select the DB instance oracle-rds-database, and then select Next.

I specify values for Secret Name and Description. For this example, I use Database/Development/Oracle-Application-User as the name and enter a description of this secret, and then select Next.

I now configure rotation. Once again, since my application is not using this database credential yet, I’ll configure rotation as part of storing this secret. I select Enable automatic rotation, and set the rotation interval to 60 days.

Next, Secrets Manager requires permissions to rotate this secret on behalf of my application. Earlier in the post, I mentioned that application credentials have limited permissions and are unable to change their own password. Therefore, I will use the superuser credential, Database/Development/Oracle-Superuser, that I stored in Phase 1 to rotate the application credential. With this configuration, Secrets Manager creates a clone application user.
 

Figure 6: Select the superuser credential

Note: Creating a clone application user is the preferred mechanism of rotation because the old version of the secret continues to operate and handle service requests while the new version is prepared and tested. There’s no application downtime while changing between versions.

I review the information on the next screen. Everything looks correct, so I select Store. I have now successfully stored the application credential in Secrets Manager.

As mentioned in Phase 1, AWS Secrets Manager creates a Lambda function in the same VPC as the database and then triggers this function periodically to rotate the secret. Since I chose to use the existing superuser secret to rotate the application secret, I will grant the rotation Lambda function permissions to retrieve the superuser secret. To grant this permission, I first select the role link in the confirmation banner.
 

Figure 7: Select the “role” link that’s in the confirmation notification

Next, in the Permissions tab, I select SecretsManagerRDSMySQLRotationMultiUserRolePolicy0. Then I select Edit policy.
 

Figure 8: Edit the policy on the “Permissions” tab

In this step, I update the policy (see below) and select Review policy. When following along, remember to replace the placeholder ARN-OF-SUPERUSER-SECRET with the ARN of the secret you stored in Phase 1.


{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DetachNetworkInterface"
      ],
      "Resource": "*"
    },
    {
      "Sid": "GrantPermissionToUse",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue"
      ],
      "Resource": "ARN-OF-SUPERUSER-SECRET"
    }
  ]
}

Here’s what it will look like:
 

Figure 9: Edit the policy

Next, I select Save changes. I have now completed all the steps required to configure rotation for the application credential, Database/Development/Oracle-Application-User.

Phase 3: Retrieve the credential from Secrets Manager programmatically

Now that I have stored the secret in Secrets Manager, I add code to my application to retrieve the database credential from Secrets Manager instead of hard-coding it. The code needs to set up a Secrets Manager client and then retrieve and decrypt the secret Database/Development/Oracle-Application-User, as shown in the sketch below.
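
Here’s a minimal, hedged sketch of what that retrieval code might look like in Python with boto3. The region name is an assumption, and the JSON keys reflect the standard format Secrets Manager uses for RDS credentials.

import json
import boto3

# Set up the Secrets Manager client. The region is an assumption; use the
# region where you stored the secret.
secrets = boto3.client("secretsmanager", region_name="us-east-1")

# Retrieve and decrypt the secret. Secrets Manager decrypts the protected
# secret text using the associated encryption key before returning it.
response = secrets.get_secret_value(
    SecretId="Database/Development/Oracle-Application-User"
)

credential = json.loads(response["SecretString"])

# The application can now build its database connection from the retrieved
# values instead of hard-coded credentials.
db_user = credential["username"]
db_password = credential["password"]
db_host = credential["host"]
db_port = credential["port"]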

Remember, applications require permissions to retrieve the secret, Database/Development/Oracle-Application-User, from Secrets Manager. My application runs on Amazon EC2 and uses an IAM role to obtain access to AWS services. I attach the following policy to my IAM role. This policy uses the GetSecretValue action to grant my application permissions to read the secret from Secrets Manager. This policy also uses the resource element to limit my application to read only the Database/Development/Oracle-Application-User secret from Secrets Manager. You can refer to the Secrets Manager Documentation to understand the minimum IAM permissions required to retrieve a secret.


{
  "Version": "2012-10-17",
  "Statement": {
    "Sid": "RetrieveDbCredentialFromSecretsManager",
    "Effect": "Allow",
    "Action": "secretsmanager:GetSecretValue",
    "Resource": "arn:aws:secretsmanager:<AWS-REGION>:<ACCOUNT-NUMBER>:secret:Database/Development/Oracle-Application-User"
  }
}

In the above policy, remember to replace the placeholder <AWS-REGION> with the AWS region that you’re using and the placeholder <ACCOUNT-NUMBER> with the number of your AWS account.

Summary

I explained the key benefits of Secrets Manager as they relate to Amazon RDS and showed you how to help meet your compliance requirements by configuring Secrets Manager to rotate database credentials automatically on your behalf. Secrets Manager helps you protect access to your applications, services, and IT resources without the upfront investment and ongoing maintenance costs of operating your own secrets management infrastructure. To get started, visit the Secrets Manager console. To learn more, visit the Secrets Manager documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Secrets Manager forum.

Want more AWS Security news? Follow us on Twitter.

Apurv Awasthi

Apurv is the product manager for credentials management services at AWS, including AWS Secrets Manager and IAM Roles. He enjoys the “Day 1” culture at Amazon because it aligns with his experience building startups in the sports and recruiting industries. Outside of work, Apurv enjoys hiking. He holds an MBA from UCLA and an MS in computer science from University of Kentucky.

New guide helps financial services customers in Brazil navigate cloud requirements

Post Syndicated from Leandro Bennaton original https://aws.amazon.com/blogs/security/new-guide-helps-financial-services-customers-in-brazil-navigate-cloud-requirements/

We have a new resource to help our financial services customers in Brazil navigate regulatory requirements for using the cloud. The AWS User Guide to Financial Services Regulations in Brazil is a deep dive into the Brazilian National Monetary Council’s Resolution No. 4,658. The cybersecurity cloud resolution is the first of its kind by regulators in Brazil. The guide details how our services may be able to assist you in achieving these security expectations.

The resolution covers topics such as implementing a cybersecurity policy, incident response, entering into agreements with cloud service providers, subcontracting, business continuity, and notification requirements. Our guide addresses each of these issues and provides specific guidance on how you can use AWS to satisfy requirements.

The AWS User Guide to Financial Services Regulations in Brazil is part of a series of publications that seek to facilitate customer compliance. It’s available in English and Portuguese. We’ll continue to monitor the regulatory environment in Brazil and around the world and to publish additional resources.

If you have any questions, please contact your account executive.

How to automate the import of third-party threat intelligence feeds into Amazon GuardDuty

Post Syndicated from Rajat Ravinder Varuni original https://aws.amazon.com/blogs/security/how-to-automate-import-third-party-threat-intelligence-feeds-into-amazon-guardduty/

Amazon GuardDuty is an AWS threat detection service that helps protect your AWS accounts and workloads by continuously monitoring them for malicious and unauthorized behavior. You can enable Amazon GuardDuty through the AWS Management Console with one click. It analyzes billions of events across your AWS accounts and uses machine learning to detect anomalies in account and workload activity. Then it references integrated threat intelligence feeds to identify suspected attackers. Within an AWS region, GuardDuty processes data from AWS CloudTrail Logs, Amazon Virtual Private Cloud (VPC) Flow Logs, and Domain Name System (DNS) Logs. All log data is encrypted in transit. GuardDuty extracts various fields from the logs for profiling and anomaly detection and then discards the logs. GuardDuty’s threat intelligence findings are based on ingested threat feeds from AWS threat intelligence and from third-party vendors CrowdStrike and Proofpoint.

However, beyond these built-in threat feeds, you have two ways to customize your protection. Customization is useful if you need to enforce industry-specific threat feeds, such as those for the financial services or healthcare space. The first customization option is to provide your own list of whitelisted IPs. The second is to generate findings based on third-party threat intelligence feeds that you own or have the rights to share and upload into GuardDuty. However, keeping the third-party threat list ingested into GuardDuty up to date requires many manual steps. You would need to:

  • Authorize administrator access
  • Download the list from a third-party provider
  • Upload the generated file to the service
  • Replace outdated threat feeds

In the following blog post, we’ll show you how to automate these steps when using a third-party feed. We’ll leverage FireEye iSIGHT Threat Intelligence as an example of how to upload a feed you have licensed to GuardDuty, but this solution can also work with other threat intelligence feeds. If you deploy this solution with the default parameters, it builds the following environment:
 

Figure 1: Diagram of the solution environment

The following resources are used in this solution:

  • An Amazon CloudWatch Event that periodically invokes an AWS Lambda function. By default, CloudWatch will invoke the function every six days, but you can alter this if you’d like.
  • An AWS Systems Manager Parameter Store that securely stores the public and private keys that you provide. These keys are required to download the threat feeds.
  • An AWS Lambda function that consists of a script that programmatically imports a licensed FireEye iSIGHT Threat Intelligence feed into Amazon GuardDuty.
  • An AWS Identity and Access Management (IAM) role that gives the Lambda function access to the following:
    1. GuardDuty, to list, create, obtain, and update threat lists.
    2. CloudWatch Logs, to monitor, store, and access log files generated by AWS Lambda.
    3. Amazon S3, to upload threat lists to Amazon S3 and ingest them into GuardDuty.
  • An Amazon Simple Storage Service (S3) bucket to store your threat lists. After the solution is deployed, the bucket is retained unless you delete it manually.
  • Amazon GuardDuty, which needs to be enabled in the same AWS region in which you want to deploy the solution.

    Note: It’s a security best practice to enable GuardDuty in all regions.

Deploy the solution

Once you’ve taken care of the prerequisites, follow these steps:

  1. Select the Launch Stack button to launch a CloudFormation stack in your account. It takes approximately 5 minutes for the CloudFormation stack to complete:
     
  2. Note: If you’ve invited other accounts to enable GuardDuty and become associated with your AWS account (such that you can view and manage their GuardDuty findings on their behalf), please run this solution from the master account. Find more information on managing master and member GuardDuty accounts here. Executing this solution from the master account ensures that GuardDuty reports findings from all the member accounts as well, using the imported threat list.

    The template will launch in the US East (N. Virginia) Region. To launch the solution in a different AWS Region, use the region selector in the console navigation bar. This is because GuardDuty is a region-specific service.

    The code is available on GitHub.

  3. On the Select Template page, select Next.
  4. On the Specify Details page, give your solution stack a name.
  5. Under Parameters, review the default parameters for the template and modify the values, if you’d like.

    Parameter Value Description
    Public Key <Requires input> FireEye iSIGHT Threat Intelligence public key.
    Private Key <Requires input> FireEye iSIGHT Threat Intelligence private key
    Days Requested 7 The maximum age (in days) of the threats you want to collect. (min 1 – max 30)
    Frequency 6 The number of days between executions – when the solution downloads a new threat feed (min 1 – max 29)
  6. Select Next.
  7. On the Options page, you can specify tags (key-value pairs) for the resources in your stack, if you’d like, and then select Next.
  8. On the Review page, review and confirm the settings. Be sure to select the box acknowledging that the template will create AWS Identity and Access Management (IAM) resources with custom names.
  9. To deploy the stack, select Create.

After approximately 5 minutes, the stack creation should be complete. You can verify this on the Events tab:
 

Figure 2: Check the status of the stack creation on the “Events” tab

The Lambda function that updates your GuardDuty threat lists is invoked right after you provision the solution. It’s also set to run periodically to keep your environment updated. However, in scenarios that require faster updates to your threat intelligence lists, such as the discovery of a new zero-day vulnerability, you can manually run the Lambda function to avoid waiting until the scheduled update event. To manually run the Lambda function, follow the steps described here to create and ingest the newly downloaded threat feeds into Amazon GuardDuty.
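
To make the automation more concrete, here’s a hedged sketch of the GuardDuty API calls an import script of this kind might make once the feed file has been uploaded to Amazon S3. This is not the solution’s actual Lambda code; the bucket name, object key, and threat list name are placeholders, and the sketch assumes GuardDuty is already enabled in the region.

import boto3

guardduty = boto3.client("guardduty")

# Assumptions for illustration: GuardDuty is enabled and the threat list
# has already been uploaded to this S3 location by the solution.
BUCKET = "my-threat-feed-bucket"
KEY = "fireeye/threat-list.txt"
LOCATION = f"https://s3.amazonaws.com/{BUCKET}/{KEY}"

detector_id = guardduty.list_detectors()["DetectorIds"][0]
existing = guardduty.list_threat_intel_sets(DetectorId=detector_id)["ThreatIntelSetIds"]

if existing:
    # Point the existing threat list at the freshly downloaded feed.
    guardduty.update_threat_intel_set(
        DetectorId=detector_id,
        ThreatIntelSetId=existing[0],
        Location=LOCATION,
        Activate=True,
    )
else:
    # First run: register the feed as a new threat list.
    guardduty.create_threat_intel_set(
        DetectorId=detector_id,
        Name="ThirdPartyThreatFeed",
        Format="TXT",
        Location=LOCATION,
        Activate=True,
    )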

Summary

We’ve described how to deploy an automated solution that downloads the latest threat intelligence feeds you have licensed from a third-party provider such as FireEye. This solution provides a large amount of individual threat intelligence data for GuardDuty to process and report findings on. Furthermore, as newer threat feeds are published by FireEye (or the threat intelligence feed provider of your choice), they will be automatically ingested into GuardDuty.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the Amazon GuardDuty forum.

Want more AWS Security news? Follow us on Twitter.

How to use Amazon GuardDuty and AWS Web Application Firewall to automatically block suspicious hosts

Post Syndicated from Cameron Worrell original https://aws.amazon.com/blogs/security/how-to-use-amazon-guardduty-and-aws-web-application-firewall-to-automatically-block-suspicious-hosts/

When you’re implementing security measures across your AWS resources, you should use a holistic approach that incorporates controls across multiple areas. In the Cloud Adoption Framework (CAF) Security perspective whitepaper, we define these controls across four categories.

  • Directive controls. Establish the governance, risk, and compliance models the environment will operate within.
  • Preventive controls. Protect your workloads and mitigate threats and vulnerabilities.
  • Detective controls. Provide full visibility and transparency over the operation of your deployments in AWS.
  • Responsive controls. Drive remediation of potential deviations from your security baselines.

The use of security automation is also a key principle outlined in the whitepaper. It helps reduce operational overhead and create repeatable, predictable approaches to monitoring and responding to events. You can take advantage of AWS services to build powerful solutions for the automated detection and remediation of threats against your AWS environments. For example, you can configure Amazon CloudWatch Events to invoke a Lambda action in response to suspicious or unexpected behavior in your AWS environment detected by Amazon GuardDuty. You can configure automated flows that use both detective and responsive controls and might also feed into preventative controls to help mitigate the threat in the future. Depending on the type of source event, you can automatically invoke specific actions, such as modifying access controls, terminating instances, or revoking credentials.

In this blog post, we’ll show you how to use Amazon GuardDuty to automatically update the AWS Web Application Firewall Web Access Control Lists (WebACLs) and VPC Network Access Control Lists (NACLs) in response to GuardDuty findings. After GuardDuty detects a suspicious activity, the solution updates these resources to block communication from the suspicious host while you perform additional investigation and remediation. Once communication has been blocked, further occurrences of a finding are reduced, allowing security and operations teams to focus more on higher priority tasks.

Amazon GuardDuty is a continuous security monitoring and threat detection service that incorporates threat intelligence, anomaly detection, and machine learning to help protect your AWS resources, including your AWS accounts. Amazon CloudWatch Events delivers a near-real-time stream of system events that describe changes in AWS resources. Amazon GuardDuty sends notifications based on Amazon CloudWatch Events when any change in the findings takes place. In the context of GuardDuty, such changes include newly generated findings and all subsequent occurrences of these existing findings. Using rules that you can quickly set up, you can match CloudWatch events and route them to one or more target actions. This solution routes matched events to AWS Lambda, which then performs updates to AWS Web Application Firewall (WAF) and VPC NACLs. AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. It supports both managed rules and a powerful rule language for custom rules. A Network Access Control List (NACL) is an optional layer of security for your Amazon Virtual Private Cloud (VPC) that acts as a firewall for controlling traffic in and out of one or more subnets.

Solution overview

The solution assumes that Amazon GuardDuty is enabled in your AWS account. If it isn’t enabled, you can find more info about the free trial and pricing here, and you can follow the steps in the GuardDuty documentation to set up the service and start monitoring your account.

Figure 1 shows how the CloudFormation template creates the sample solution:

Figure 1: How the CloudFormation template works

Here’s how the solution works, as shown in the diagram:

  1. A GuardDuty finding is raised with suspected malicious activity.
  2. A CloudWatch Event is configured to filter for GuardDuty Finding type.
  3. A Lambda function is invoked by the CloudWatch Event and parses the GuardDuty finding.
  4. State data for blocked hosts is stored in an Amazon DynamoDB table. The Lambda function checks the state table for an existing host entry.
  5. The Lambda function creates a Rule inside AWS WAF and in a VPC NACL.
  6. A notification email is sent via Amazon Simple Notification Service (SNS).

A second Lambda function runs on a 5-minute recurring schedule and removes entries that are past the configurable retention period from the WAF IPSets (the lists that contain the blacklisted IPs or CIDRs), the VPC NACLs, and the DynamoDB table.
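
To illustrate step 5 of the flow above, here’s a hedged sketch, in Python with boto3, of how a host reported in a finding might be blocked in a WAF IPSet and a VPC NACL. This is not the solution’s actual Lambda code; the IP address, IPSet ID, Network ACL ID, and rule number are placeholders.

import boto3

HOST_IP = "198.51.100.23"          # Suspicious host from the GuardDuty finding
IP_SET_ID = "example-ipset-id"     # Placeholder WAF IPSet ID created by the stack
NACL_ID = "acl-0123456789abcdef0"  # Placeholder Network ACL ID for the subnet
RULE_NUMBER = 71                   # The solution uses rule numbers 71-80

# Block the host in the global (CloudFront) WAF IPSet.
waf = boto3.client("waf")
token = waf.get_change_token()["ChangeToken"]
waf.update_ip_set(
    IPSetId=IP_SET_ID,
    ChangeToken=token,
    Updates=[{
        "Action": "INSERT",
        "IPSetDescriptor": {"Type": "IPV4", "Value": f"{HOST_IP}/32"},
    }],
)

# Add a DENY entry to the VPC Network ACL ahead of the ALLOW rules.
ec2 = boto3.client("ec2")
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=RULE_NUMBER,
    Protocol="-1",            # All protocols
    RuleAction="deny",
    Egress=False,             # Ingress rule
    CidrBlock=f"{HOST_IP}/32",
)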

GuardDuty findings referenced in this solution

This solution’s CloudWatch Event Rule pattern is configured to match the following GuardDuty Finding types:

  1. UnauthorizedAccess:EC2/SSHBruteForce
    This finding informs you that an EC2 instance in your AWS environment was involved in a brute force attack aimed at obtaining passwords to SSH services on Linux-based systems.
  2. UnauthorizedAccess:EC2/RDPBruteForce
    This finding informs you that an EC2 instance in your AWS environment was involved in a brute force attack aimed at obtaining passwords to RDP services on Windows-based systems.
  3. Recon:EC2/PortProbeUnprotectedPort
    This finding informs you that a port on an EC2 instance in your AWS environment isn’t blocked by a security group, access control list (ACL), or an on-host firewall (for example, Linux IPChains), and known scanners on the internet are actively probing it.
  4. Trojan:EC2/BlackholeTraffic
    This finding informs you that an EC2 instance in your AWS environment might be compromised because it’s trying to communicate with an IP address of a black hole. Black holes refer to places in the network where incoming or outgoing traffic is silently discarded without informing the source that the data didn’t reach its intended recipient.
  5. Backdoor:EC2/XORDDOS
    This finding informs you that an EC2 instance in your AWS environment is attempting to communicate with an IP address that’s associated with XOR DDoS malware. XOR DDoS is Trojan malware that hijacks Linux systems.
  6. UnauthorizedAccess:EC2/TorIPCaller
    This finding informs you that an EC2 instance in your AWS environment is receiving inbound connections from a Tor exit node. Tor is software for enabling anonymous communication. It encrypts and randomly bounces communications through relays between a series of network nodes.
  7. Trojan:EC2/DropPoint
    This finding informs you that an EC2 instance in your AWS environment is trying to communicate with an IP address of a remote host that’s known to hold credentials and other stolen data captured by malware.

When one of these GuardDuty finding types is matched by the CloudWatch Event Rule, an entry is created in the target ACLs to deny the suspicious host, and then a notification is sent to an email address by this solution’s Lambda. Blocking traffic from the suspicious host helps to mitigate the threat while you perform additional investigation and remediation. For more information, see Remediating a Compromised EC2 Instance.
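
For reference, here’s a hedged sketch of how such a CloudWatch Events rule could be created with Python and boto3. The CloudFormation template in this solution sets the rule up for you, so this is purely illustrative; the rule name and Lambda function ARN are placeholders.

import json
import boto3

events = boto3.client("events")

FINDING_TYPES = [
    "UnauthorizedAccess:EC2/SSHBruteForce",
    "UnauthorizedAccess:EC2/RDPBruteForce",
    "Recon:EC2/PortProbeUnprotectedPort",
    "Trojan:EC2/BlackholeTraffic",
    "Backdoor:EC2/XORDDOS",
    "UnauthorizedAccess:EC2/TorIPCaller",
    "Trojan:EC2/DropPoint",
]

# Match only the GuardDuty finding types listed above.
event_pattern = {
    "source": ["aws.guardduty"],
    "detail": {"type": FINDING_TYPES},
}

events.put_rule(
    Name="GuardDutyToACLRule",
    EventPattern=json.dumps(event_pattern),
    State="ENABLED",
)

# Route matched findings to the remediation Lambda (placeholder ARN).
events.put_targets(
    Rule="GuardDutyToACLRule",
    Targets=[{
        "Id": "GuardDutyToACLLambda",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:GuardDutytoACL",
    }],
)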

Solution deployment

This sample solution includes 6 main steps:

  1. Deploy the CloudFormation template.
  2. Create and run a Lambda GuardDuty finding test event.
  3. Confirm the entry in the VPC Network ACL.
  4. Confirm the entry in the AWS WAF IPSets.
  5. Confirm the SNS notification subscription.
  6. Apply the WAF Web ACLs to resources.

Step 1: Deploy the CloudFormation template

For this next step, make sure you deploy the template within the AWS account and region where you want to monitor GuardDuty findings.

  1. Select this link to launch a CloudFormation stack in your account.

    Note: The stack will launch in the N. Virginia (us-east-1) region. It takes approximately 15 minutes for the CloudFormation stack to complete. To deploy this solution into other AWS regions, first upload the solution’s Lambda deployment packages (zip files with code) to an S3 bucket in the selected region. Once you have uploaded the zip files in the target region, update the CloudFormation ArtifactsBucket and ArtifactsPrefix parameters referenced in step 3 below.

  2. In the CloudFormation console, select the Select Template form, and then select Next.
  3. On the Specify Details page, provide the following input parameters. You can modify the default values to customize the solution for your environment.

    Input parameter Input parameter description
    AdminEmail Email address to receive notifications. Must be a valid email address.
    Retention How long to retain IP addresses in the blacklist (in minutes). Default is 12 hours.
    CloudFrontIPSetId ID for existing WAF IPSet on CloudFront. Enter the ID here if there’s an existing WAF IPSet on CloudFront you want to use. Leave set to the default value of False if you want to create a new WebACL and IPSet.
    ALBIPSetId ID for existing WAF IPSet on ALB. Enter if there is an existing WAF IPSet on ALB. Leave set to False for creation of new WebACL and IPSet.
    ArtifactsBucket S3 bucket with artifact files (Lambda functions, templates, html files, etc.). Leave set to the default value for deployment into N. Virginia region.
    ArtifactsPrefix Path in the S3 bucket containing artifact files. Leave set to the default value for deployment into N. Virginia region.

    Note: AWS WAF is not currently available in all regions. For more information about where it’s available, refer to this page.

    Figure 2 shows an example of values entered on this screen:

    Figure 2: CloudFormation parameters on the “Specify Details” page

  4. Enter values for all of the input parameters, and then select Next.
  5. On the Options page, accept the defaults, and then select Next.
  6. On the Review page, confirm the details, and then select Create.
  7. While the stack is being created, check the email inbox for the value you gave for the AdminEmail address parameter. Look for an email message with the subject “AWS Notification – Subscription Confirmation”. Select the link to confirm the subscription to the SNS topic. You should see a message similar to this:

    Figure 3: Subscription confirmation

Once the Status field for the CloudFormation stack changes to CREATE_COMPLETE, the solution is implemented and is ready for testing.

Figure 4: The “Status” displays “CREATE_COMPLETE”

Step 2: Create and run a Lambda GuardDuty finding test event

Once the CloudFormation stack has completed deployment, you can test the functionality using a Lambda test event.

  1. In the console, select Services > VPC > Subnets and locate a subnet suitable for testing the solution. On the Summary tab, copy the Subnet ID to the clipboard or to a text editor.

    Figure 5: The “Subnet ID” value on the “Summary” tab

  2. In the console, select Services > CloudFormation > GuardDutytoACL stack. In the stack Outputs tab, look for the GuardDutytoACLLambda entry, similar to Figure 6 below:

    Figure 6: The “GuardDutytoACLLambda” entry on the “Outputs” tab

  3. Select the link and you’ll be redirected to the Lambda console, with the Lambda function already open, similar to Figure 7:

    Figure 7: The Lambda function open in the Lambda console

  4. In the top right, select the Select a test event… drop-down list, and then select Configure test events.

    Figure 8: Select “Configure test events” from the drop-down list

  5. To facilitate testing, a test event file has been provided. On the Configure test event page, provide a name for Event name, and then paste the provided test event JSON in the body of the event.
  6. Update the value of subnetId key (line 34) to the value of your Subnet ID from step 2.1, and then select Create.

    Figure 9: Update the value of the “subnetId” key

  7. Select Test to invoke the Lambda with the test event. You should see a message “Execution result: succeeded” similar to below:

    Figure 10: The “Test” button and the “succeeded” message

Step 3: Confirm the entry in the VPC Network ACL (NACL)

In this step, you’ll confirm the DENY entry was created in the NACL. This solution is configured to create up to 10 entries in an ACL ranging between rule numbers 71 and 80. Since NACL rules are processed in order, it’s important that the DENY rule is placed before the ALLOW rule.

  1. In the console, select Services > VPC > Subnets and locate the subnet you provided for the test event.
  2. Select the Network ACL tab and confirm the new entry generated from the test event.
    Figure 11: Check the entry from the test event on the “Network” tab

    Note that VPC NACL entries are created in the rule number range between 71 and 80. Older entries are aged out to create a “sliding window” of blocked hosts.

Step 4: Confirm the entry in the AWS WAF IPSets

In this step, you’ll verify that the entry was added to the CloudFront WAF IPSet and to the ALB WAF IPSet.

  1. In the console, select Services > WAF & Shield, and then select IP addresses.
  2. For Filter, select Global (CloudFront), and then select the IPSet named GD2ACL CloudFront IPSet for Blacklisted IP addresses.

    Figure 12: Filter the list and then select “GD2ACL CloudFront IPSet for Blacklisted IP addresses”

  3. Confirm the IP address that was added to the list in the IPSet:

    Figure 13: Confirm the IP address was added

  4. In the console, select Services > WAF & Shield, and then select IP addresses.
  5. For Filter, select US East (N. Virginia)–or another region in which you deployed this solution–and then select the IPSet named GD2ACL ALB IPSet for blacklisted IP addresses.
  6. Confirm the IP address added to the ALB IPSet:

    Figure 14: Make sure the IP address was added

There might be specific host addresses that you want to prevent from being added to the blacklist. You can do this within GuardDuty by using a trusted IP list. Trusted IP lists consist of IP addresses that you have whitelisted for secure communication with your AWS infrastructure and applications. GuardDuty doesn’t generate findings for IP addresses on trusted IP lists. For additional information, see Working with Trusted IP Lists and Threat Lists.
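
If you want to manage a trusted IP list programmatically, here’s a hedged sketch using Python and boto3. The S3 location and list name are placeholders, and the sketch assumes the list file has already been uploaded and that GuardDuty is enabled in the region.

import boto3

guardduty = boto3.client("guardduty")

detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Register a trusted IP list; GuardDuty doesn't generate findings for
# addresses on this list. The S3 location and name are placeholders.
guardduty.create_ip_set(
    DetectorId=detector_id,
    Name="TrustedInfrastructureHosts",
    Format="TXT",
    Location="https://s3.amazonaws.com/my-trusted-lists/trusted-ips.txt",
    Activate=True,
)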

Step 5: Confirm the SNS notification subscription

In this step, you’ll view the SNS notification that was sent to the email address you set up.

    1. Review the email inbox for the value you provided for the AdminEmail parameter and look for a message with the subject line “AWS GD2ACL Alert.” The contents of the message from SNS should be similar to this:

      Figure 15: SNS message example

Step 6: Apply the WAF Web ACLs to resources

The final task is to associate the Web ACL with the CloudFront Distributions and Application Load Balancers that you want to automatically update with this solution. To learn how to do this, see Associating or Disassociating a Web ACL with a CloudFront Distribution or an Application Load Balancer.

You can also use AWS Firewall Manager to associate the Web ACLs. AWS Firewall Manager simplifies your AWS WAF administration and maintenance tasks across multiple accounts and resources. With Firewall Manager, you set up your firewall rules just once. The service automatically applies your rules across your accounts and resources, even as you add new resources.

Summary

You’ve learned how to use Amazon GuardDuty to automatically update AWS Web Application Firewall (AWS WAF) and VPC Network Access Control Lists (ACLs) in response to GuardDuty findings. With just a few steps, you can use this sample solution to help mitigate threats by blocking communication with suspicious hosts. You can explore additional solutions that are possible using GuardDuty finding types and CloudWatch Events target actions. This solution’s code is available on GitHub. Feel free to play around with the code to add more GuardDuty findings to this solution and also to build bigger and better solutions!

If you have comments about this blog post, submit them in the Comments section below. If you have questions about using this solution, start a thread in the GuardDuty, WAF, or CloudWatch forums, or contact AWS Support.

Using AWS CloudHSM-backed certificates with Microsoft Internet Information Server

Post Syndicated from Jeff Levine original https://aws.amazon.com/blogs/security/using-aws-cloudhsm-backed-certificates-with-microsoft-internet-information-server/

SSL/TLS certificates are used to create encrypted sessions to endpoints such as web servers. If you want to get an SSL certificate, you usually start by creating a private key and a corresponding certificate signing request (CSR). You then send the CSR to a certificate authority (CA) and receive a certificate. When a user seeks to securely browse your website, the web server presents the certificate to the user’s browser. The user’s browser verifies that the certificate is associated with a trusted root certificate, and then uses the certificate to encrypt the session data. The web server then uses the private key to decrypt the user’s data. The security of this protocol relies on the private key remaining private.

A common problem with web servers, however, is that the private key and the certificate reside on the web server itself. If an attacker compromises the server and steals both the private key and certificate, the attacker could potentially use them to intercept the session or create an imposter server. If your business requires the use of hardware security modules for key storage, you can use AWS CloudHSM to generate and hold the private keys for web server certificates rather than placing them on the web server directly and reduce your exposure. On Windows, AWS CloudHSM integrates with Microsoft Internet Information Server (IIS) through the Key Storage Provider extensions to the Microsoft Cryptography Next Generation API (KSP/CNG). In this blog post, I will show you how to leverage CloudHSM KSP/CNG capabilities to secure the private key within the hardware security module (HSM), if you are using Microsoft Internet Information Server (IIS) to host your website.

Prerequisites and assumptions

You need the following for this walkthrough.

  • An AWS account and permissions to provision Amazon Virtual Private Cloud (Amazon VPC), Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Route 53, and Amazon CloudHSM. I recommend using an AWS user account ID that has full administrative privileges.
  • A region that supports CloudHSM. I’m using the eu-west-1 (Ireland) region.
  • A domain to use for certificate generation. I’m using the domain example.com.
  • A trusted public certificate authority (CA).
  • A working knowledge of the Amazon VPC, Amazon EC2, and Amazon CloudHSM services.
  • Familiarity with the administration of Windows Server 2012 R2

Important: You will incur charges for the services used in this example. You can find the cost of each service on that service’s pricing page. You can also use the Simple Monthly Calculator to estimate the cost.

Architectural overview

I’ll go over a few diagrams before I start building. I’ll start with a diagram of the infrastructure.

Figure 1: CloudHSM infrastructure

This diagram shows a VPC in the eu-west-1 (Ireland) region. An Amazon EC2 instance running Windows Server 2012 R2 resides on a public subnet. This instance will run both the CloudHSM client software and IIS to host a website. The instance has an Elastic IP and can be accessed via the Internet Gateway. I’ll also have security groups that enable RDP, HTTP, and HTTPS access. The private subnet hosts the Elastic Network Interface (ENI) for the CloudHSM cluster, which has a single HSM.

And here’s the certification hierarchy:

Figure 2: Certification hierarchy

The Amazon EC2 instance will run Windows Server 2012 R2 (WS2012R2) and IIS. I’ll use the KSP/CNG capabilities of CloudHSM to request a CSR from the WS2012R2 instance. The KSP/CNG integration will reach out to CloudHSM, which will, in turn, create a private/public key pair. The private key will remain within CloudHSM while the public key will be used to create the CSR. I’ll then submit the CSR to a CA. The CA will sign the CSR with the CA’s private key and return a certificate. I’ll then accept the certificate with certreq, which makes it available to IIS, and secure the default IIS website with it. Finally, I’ll browse to the website and show how it uses the CNG/KSP integration.

Now that you’ve seen what I’m building, we can get started!

Build the CloudHSM infrastructure

You’ll need to set up a CloudHSM environment. This environment will consist of a VPC, a CloudHSM cluster with a single HSM, and a Windows 2012 R2 EC2 instance that will run both the CloudHSM client and IIS. For each of the steps below, I have provided a link to the relevant documentation, as well as any additional instructions to build out the infrastructure to match my sample environment.

  1. Select a region that offers AWS CloudHSM. I’m using the eu-west-1 (Ireland) region.
  2. Create a Virtual Private Cloud (VPC). Use the following values:
    VPC IPv4 CIDR block: 10.0.0.0/16
    VPC Name: cloudhsm-vpc
    Public subnet IPv4 CIDR block: 10.0.0.0/24
    Public subnet Availability Zone: eu-west-1b
    Public subnet name: cloudhsm-public
  3. Create a private subnet. Use the following values:
    IPv4 CIDR block: 10.0.1.0/24
    Availability Zone: eu-west-1b
    Name: cloudhsm-private
  4. Create a cluster. Use the following values:
    HSM Availability Zone: eu-west-1b
    Subnet: Use the private subnet you just created in eu-west-1b.
    Number of HSMs: 1
  5. Launch an Amazon EC2 client instance. Use the following values:
    AMI: Microsoft Windows Server 2012 R2 Base
    Instance type: t2.medium
    VPC: cloudhsm-vpc
    Subnet: cloudhsm-public
    Auto-assign Public IP: Enable
    Name tag: cloudhsm
    Security group: The CloudHSM cluster security group

    Once the instance is running, create an additional administrative Windows user because the built-in Administrator account has restrictions.

  6. Create an additional security group to allow you to connect to the instance and attach the security group to the instance in addition to the security group of the CloudHSM cluster.
  7. Allocate and associate an Elastic IP address to the instance so that the instance’s IP address persists if you need to stop and start the instance.
  8. Create a DNS “A record” to map the host name to the Elastic IP. In this example, I’m mapping www.example.com to the Elastic IP that I allocated.
  9. Create an HSM. After the HSM is created, note the HSM Id in the form of hsm- followed by an alphanumeric string; for example, hsm-1234abcd.
  10. Initialize the cluster.
  11. Install and configure the AWS CloudHSM client (Windows).
    You might need to use a command window with administrative privileges to do this.
  12. Activate the cluster.
  13. Create a crypto user. You will need to specify the ID and the password of the crypto user in the environment variables that you will create in the next section.

Define environment variables

We now need to define some Windows system environment variables (not user variables) used by the CNG/KSP integration. You can modify the environment variables by selecting System from the Control Panel, selecting Advanced System Settings, and selecting Environment Variables.

  1. Go into the Windows Control Panel and enter the following definitions:
    liquidsecurity_daemon_id 1
    liquidsecurity_daemon_socket_type TCPSOCKET
    liquidsecurity_daemon_tcp_port 1111
    n3fips_username USER
    n3fips_password USER:PASSWORD
    n3fips_partition hsm-ABCDEFGHIJK
    • Substitute the user ID and password that you specified in step 13 for the USER and PASSWORD values.
    • Substitute the HSM ID value you got from step 9 for hsm-ABCDEFGHIJK.

    Your Windows Control Panel configuration should look like the image below, not including the above substitutions.

    Figure 3: Environment variable settings

  2. Reboot the EC2 instance to ensure that the environment variables are in place for all processes.

Launch the CloudHSM client

You must start the CloudHSM client in order to enable the CNG/KSP integration.

  1. Open a command prompt window on the Windows instance.
  2. Enter the following commands:

    cd \ProgramData\Amazon\CloudHSM\data
    "\Program Files\Amazon\CloudHSM\cloudhsm_client.exe" cloudhsm_client.cfg

    The cloudhsm_client.exe command will continue to run. Important note: Do not close the window! The output at the bottom of the window looks similar to the following:

    Figure 4: CloudHSM client output

Launch key_mgmt_util

We are now going to launch the key_mgmt_util client and confirm that no keys are present in the CloudHSM cluster.

  1. Open a new command window. Do not use the window that you opened in the previous section.
  2. Enter the following commands.

    cd "\Program Files\Amazon\CloudHSM"
    key_mgmt_util.exe
    loginHSM -u CU -s USER -p PASSWORD

    (replace USER and PASSWORD with the crypto user and password that you created above)

    findKey

You should see a message saying that there are no keys present, as shown in the figure below.

Figure 5: findKey output showing no keys present

Important note: Do not exit from key_mgmt_util! You’ll return to this window later.

Install IIS

Here’s how:

  1. Follow the steps of this tutorial to install IIS. When installing IIS, make the following changes:
    • Use the Windows Server 2012 instructions as a guideline.
    • Do not install MySQL and PHP because you won’t need these services for this walkthrough.
  2. From your workstation’s web browser, browse the Elastic IP or the DNS “A Record” of the web server using the HTTP protocol. You should see the default Microsoft IIS page as shown below. If you don’t see the page, check the security groups for the server and verify that port 80 (HTTP) and port 443 (HTTPS) are open.
     
    Figure 6: http://www.example.com

  3. If you browse to the same IP using the HTTPS protocol, you should not get a response because you haven’t configured HTTPS on the EC2 instance.

Provision a server certificate using the CNG/KSP integration

I’m now going to show how to provision a server certificate. I will do this with the certreq program that’s included with Windows Server 2012 R2. Certreq generates a CSR that I’ll submit to a CA. Certreq supports the KSP/CNG standard and can place certificates into the Windows certificate store.

  1. Create a file named request.inf that contains the lines below. Note that the Subject line may wrap onto the following line. It begins with Subject and ends with Washington and a closing quotation mark (“).

    
    [Version]
    Signature= "$Windows NT$"
    [NewRequest]
    Subject = "C=US,CN=www.example.com,O=Information Technology,OU=Certificate Management,L=Seattle,S=Washington"
    HashAlgorithm = SHA256
    KeyAlgorithm = RSA
    KeyLength = 2048
    ProviderName = "Cavium Key Storage Provider"
    KeyUsage = 0xf0
    MachineKeySet = True
    SuppressDefaults = True
    SMIME = false
    RequestType = PKCS10
    [EnhancedKeyUsageExtension]
    OID=1.3.6.1.5.5.7.3.1
    [Extensions]
    2.5.29.17 = "{text}"
    _continue_ = "dns=www.example.com&"
    _continue_ = "dns=example.com&"
    

    Note the presence of the extensions section that allows you to specify SANs (Subject Alternative Names). SANs are now required by some browsers. Remember to substitute your own domain for example.com in the above example.

  2. Create a certificate request with certreq.exe.
    certreq.exe -new request.inf request.csr.pem

    Certreq returns a message saying that the certificate request has been created.

  3. Figure 7: Output from certreq.exe showing that the CSR has been created

  4. In the same command window where you ran key_mgmt_util, run findKey again to see if there are any new keys. Use the findKey command by itself to display all keys. This shows that two keys have been added with handles 7 and 8. Use the -c 2 parameter to display only public keys (handle 262214) and -c 3 to display private keys (handle 262154). The two keys are part of the public/private key pair that were just created with certreq.
     
    Figure 8: findKey output showing two new keys

  5. Submit the CSR you just created to the CA using the procedures provided by the CA. The CA will return a certificate that is named request.crt.pem (or similar).

    If the certificate authority doesn’t accept the CSR provided by certreq.exe, try the following:

    1. If the certificate authority asks you to provide the domains for the certificate, make sure you enter all of the SANs included in the request.inf file.
    2. Check the first and last lines of the CSR and see if the word “NEW” appears. If so, remove the word in both lines.
  6. Use the certreq command to accept the new certificate and insert it into the certificate store where IIS can access it.

    certreq.exe -accept request.crt.pem

Configure and test the secure website

  1. Start the IIS Manager. You can do this through the Server Manager or by running the following command:

    Inetmgr

  2. Under Connections, expand the local server, and then expand the Sites folder below. Select the site named Default Web Site.
     
    Figure 9: IIS Manager settings

  3. On the right, under Actions, select Edit Site-> Bindings, and then add a binding with the following fields making sure to substitute your domain name in place of example.com:

    Type: https
    Host name: www.example.com
    SSL certificate: www.example.com

  4. Select OK to save the new binding, and then select Close.
     
    Figure 10: Adding site bindings

  5. Select Manage Website -> Restart. This will restart the website and ensure the new binding is active.
  6. Browse to https://www.example.com, remembering to substitute your own domain. You’ll see the default website with a secure connection as shown below. If you get a warning message from your workstation’s browser, make sure you added the root-level certificate to the trusted certificate store. (Note: This should not be an issue if you’re using a commercial CA supported by major web browsers)
     
    Figure 11: https://www.example.com

  7. Examine the certificate information being reported to the browser. For my example, I’m using the Chrome browser so I’ll select the padlock icon, then Certificates, then Detail.
     
    Figure 12: Server certificate details

    Figure 12 shows that the Subject field contains the same information that I provided before. You can check out the other fields for information about the Subject Alternative Names and other fields and verify that the correct certificate is being used.
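
If you’d like to check the served certificate from a script instead of a browser, here’s a hedged sketch that uses only the Python standard library. Substitute your own domain for www.example.com; the sketch assumes the certificate chains to a CA that your workstation already trusts.

import socket
import ssl

HOST = "www.example.com"  # Substitute your own domain
PORT = 443

# Open a TLS connection and read the certificate the server presents.
context = ssl.create_default_context()
with socket.create_connection((HOST, PORT)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

# Print the Subject and the Subject Alternative Names for comparison with
# the values supplied in request.inf.
print("Subject:", dict(item[0] for item in cert["subject"]))
print("SANs:   ", [name for kind, name in cert.get("subjectAltName", []) if kind == "DNS"])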

Conclusion

For projects that have special requirements regarding the storage of keys, AWS CloudHSM supports the Microsoft KSP/CNG standard that enables you to store the private key of an SSL/TLS certificate within an HSM. Since the private key no longer has to reside on the server, it’s no longer at risk if the server itself were to be compromised.

If you have feedback about this blog post, submit comments in the “Comments” section below. If you have questions about this blog post, start a new thread on the Amazon CloudHSM forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Amazon ElastiCache for Redis now PCI DSS compliant, allowing you to process sensitive payment card data in-memory for faster performance

Post Syndicated from Manan Goel original https://aws.amazon.com/blogs/security/amazon-elasticache-redis-now-pci-dss-compliant-payment-card-data-in-memory/

Amazon ElastiCache for Redis has achieved the Payment Card Industry Data Security Standard (PCI DSS). This means that you can now use ElastiCache for Redis for low-latency and high-throughput in-memory processing of sensitive payment card data, such as Customer Cardholder Data (CHD). ElastiCache for Redis is a Redis-compatible, fully-managed, in-memory data store and caching service in the cloud. It delivers sub-millisecond response times with millions of requests per second.

To create a PCI-compliant ElastiCache for Redis cluster, you must use the latest Redis engine version 4.0.10 or higher and current generation node types. The service offers various data security controls to store, process, and transmit sensitive financial data. These controls include in-transit encryption (TLS), at-rest encryption, and Redis AUTH. There’s no additional charge for PCI DSS compliant ElastiCache for Redis.
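
As an illustration of those controls from a client’s point of view, here’s a hedged sketch that connects to a TLS-enabled, AUTH-protected cluster using the redis-py library. The endpoint and AUTH token are placeholders.

import redis

# Placeholder endpoint and AUTH token for illustration.
client = redis.Redis(
    host="my-redis-cluster.abc123.ng.0001.use1.cache.amazonaws.com",
    port=6379,
    password="my-redis-auth-token",  # Redis AUTH token configured on the cluster
    ssl=True,                        # In-transit encryption (TLS)
)

# Simple round-trip connectivity check with a short expiry.
client.set("healthcheck", "ok", ex=30)
print(client.get("healthcheck"))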

In addition to PCI, ElastiCache for Redis is a HIPAA eligible service. If you want to use your existing Redis clusters that process healthcare information to also process financial information while meeting PCI requirements, you must upgrade your Redis clusters from 3.2.6 to 4.0.10. For more details, see Upgrading Engine Versions and ElastiCache for Redis Compliance.

Meeting these high bars for security and compliance means ElastiCache for Redis can be used for secure database and application caching, session management, queues, chat/messaging, and streaming analytics in industries as diverse as financial services, gaming, retail, e-commerce, and healthcare. For example, you can use ElastiCache for Redis to build an internet-scale, ride-hailing application and add digital wallets that store customer payment card numbers, thus enabling people to perform financial transactions securely and at industry standards.

To get started, see ElastiCache for Redis Compliance Documentation.

Want more AWS Security news? Follow us on Twitter.