Tag Archives: AWS CloudHSM

How to deploy CloudHSM to securely share your keys with your SaaS provider

Post Syndicated from Vinod Madabushi original https://aws.amazon.com/blogs/security/how-to-deploy-cloudhsm-securely-share-keys-with-saas-provider/

If your organization is using software as a service (SaaS), your data is likely stored and protected by the SaaS provider. However, depending on the type of data that your organization stores and the compliance requirements that it must meet, you might need more control over how the encryption keys are stored, protected, and used. In this post, I’ll show you two options for deploying and managing your own CloudHSM cluster to secure your keys, while still allowing trusted third-party SaaS providers to securely access your HSM cluster in order to perform cryptographic operations. You can also use this architecture when you want to share your keys with another business unit or with an application that’s running in a separate AWS account.

AWS CloudHSM is one of several cryptography services provided by AWS to help you secure your data and keys in the AWS cloud. AWS CloudHSM provides single-tenant HSMs based on third-party FIPS 140-2 Level 3 validated hardware, under your control, in your Amazon Virtual Private Cloud (Amazon VPC). You can generate and use keys on your HSM using CloudHSM command line tools or standards-compliant C, Java, and OpenSSL SDKs.

A related, more widely used service is AWS Key Management Service (KMS). KMS is generally easier to use, cheaper to operate, and is natively integrated with most AWS services. However, there are some use cases for which you may choose to rely on CloudHSM to meet your security and compliance requirements.

Solution Overview

There are two ways you can set up your VPC and CloudHSM clusters to allow trusted third-party SaaS providers to use the HSM cluster for cryptographic operations. The first option is to use VPC peering to allow traffic to flow between the SaaS provider's HSM client VPC and your CloudHSM VPC, and to use a custom application to perform cryptographic operations on the HSM.

The second option is to use KMS to manage the keys, specifying a custom key store to generate and store the keys. AWS KMS supports custom key stores backed by AWS CloudHSM clusters. When you create an AWS KMS customer master key (CMK) in a custom key store, AWS KMS generates and stores non-extractable key material for the CMK in an AWS CloudHSM cluster that you own and manage.

Decision Criteria: VPC Peering vs Custom Key Store

The right solution for you will depend on factors like your VPC configuration, security requirements, network setup, and the type of cryptographic operations you need. The following table provides a high-level summary of how these two options compare. Later in this post, I’ll go over both options in detail and explain the design considerations you need to be aware of before deploying the solution in your environment.

Technical Considerations                                                                            | VPC Peering | Custom Key Store
----------------------------------------------------------------------------------------------------|-------------|-----------------
Are you able to peer or connect your HSM VPC with your SaaS provider?                               | ✔           |
Is your SaaS provider sensitive to costs from KMS usage in their AWS account?                       | ✔           |
Do you need CloudHSM-specific cryptographic tasks like signing, HMAC, or random number generation?  | ✔           |
Does your SaaS provider need to encrypt your data directly with the Master Key?                     | ✔           |
Does your application rely on a PKCS#11-compliant or JCE-compliant SDK?                             | ✔           |
Does your SaaS provider need to use the keys in AWS services?                                       |             | ✔
Do you need to log all key usage activities when SaaS providers use your HSM keys?                  |             | ✔

Option 1: VPC Peering

 

Figure 1: Architecture diagram showing VPC peering between the SaaS provider's HSM client VPC and the customer's HSM VPC

Figure 1 shows how you can deploy a CloudHSM cluster in a dedicated HSM VPC and peer this HSM VPC with your service provider’s VPC to allow them to access the HSM cluster through the client/application. I recommend that you deploy the CloudHSM cluster in a separate HSM VPC to limit the scope of resources running in that VPC. Since VPC peering is not transitive, service providers will not have access to any resources in your application VPCs or any other VPCs that are peered with the HSM VPC.

It's possible to use the HSM cluster for other purposes and applications, but you should be aware of the potential drawbacks before you do. This approach could make it harder for you to find non-overlapping CIDR ranges for use with your SaaS provider. It would also mean that your SaaS provider could accidentally overwrite HSM account credentials or lock out your HSMs, causing an availability issue for your other applications. For these reasons, I recommend that you dedicate a CloudHSM cluster for use with your SaaS providers and use small VPC and subnet sizes, like /27, so that you're not wasting IP space and it's easier to find non-overlapping IP addresses with your SaaS provider.

If you’re using VPC peering, your HSM VPC CIDR cannot overlap with your SaaS provider’s VPC. Deploying the HSM cluster in a separate VPC gives you flexibility in selecting a suitable CIDR range that is non-overlapping with the service provider since you don’t have to worry about your other applications. Also, since you’re only hosting the HSM Cluster in this VPC, you can choose a CIDR range that is relatively small.
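If you want to script this setup, a minimal sketch with the AWS CLI follows. The VPC, peering connection, and route table IDs, the account ID, and the CIDR ranges are placeholders; your values will differ.

# Create a small, dedicated HSM VPC (a /27 keeps the address space easy to de-conflict)
$ aws ec2 create-vpc --cidr-block 10.0.0.0/27

# The SaaS provider requests peering from their HSM client VPC to your HSM VPC
$ aws ec2 create-vpc-peering-connection --vpc-id <vpc-saasprovider> --peer-vpc-id <vpc-hsm> --peer-owner-id <111122223333>

# You accept the request, then each side adds a route to the other VPC's CIDR
$ aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id <pcx-1234567890abcdef0>
$ aws ec2 create-route --route-table-id <rtb-hsmvpc> --destination-cidr-block <172.31.0.0/27> --vpc-peering-connection-id <pcx-1234567890abcdef0>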

Design considerations

Here are additional considerations to think about when deploying this solution in your environment:

  • VPC peering allows resources in either VPC to communicate with each other as long as security groups, NACLs, and routing allow it. To improve security, place only resources that are meant to be shared in the VPC, and secure communication at the port/protocol level by using security groups.
  • If you decide to revoke the SaaS provider’s access to your CloudHSM, you have two choices:
    • At the network layer, you can remove connectivity by deleting the VPC peering connection or by modifying the CloudHSM security groups to disallow the SaaS provider's CIDR ranges (see the example commands after this list).
    • Alternately, you can log in to the CloudHSM as Crypto Officer (CO) and change the password or delete the Crypto user that the SaaS provider is using.
  • If you're deploying CloudHSM across multiple accounts or VPCs within your organization, you can also use AWS Transit Gateway to connect the CloudHSM VPC to your application VPCs. Transit Gateway is ideal when you have multiple application VPCs that need CloudHSM access, as it scales easily and you don't have to worry about the VPC peering limits or the number of peering connections to manage.
  • If you’re the SaaS provider, and you have multiple clients who might be interested in this solution, you must make sure that one customer IP space doesn’t overlap with yours. You must also make sure that each customer’s HSM VPC doesn’t overlap with any of the others. One solution is to dedicate one VPC per customer, to keep the client/application dedicated to that customer, and to peer this VPC with your application VPC. This reduces the overlapping CIDR dependency among all your customers.
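For example, here's a sketch of the two revocation paths at the network layer, assuming placeholder IDs and that your cluster security group admits the provider's CIDR on the CloudHSM client ports (TCP 2223-2225):

# Remove connectivity entirely by deleting the peering connection
$ aws ec2 delete-vpc-peering-connection --vpc-peering-connection-id <pcx-1234567890abcdef0>

# Or revoke only the SaaS provider's CIDR from the cluster security group
$ aws ec2 revoke-security-group-ingress --group-id <sg-hsmcluster> --protocol tcp --port 2223-2225 --cidr <198.51.100.0/24>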

Option 2: Custom Key Store

As the AWS KMS documentation explains, KMS supports custom key stores backed by AWS CloudHSM clusters. When you create an AWS KMS customer master key (CMK) in a custom key store, AWS KMS generates and stores non-extractable key material for the CMK in an AWS CloudHSM cluster that you own and manage. When you use a CMK in a custom key store, the cryptographic operations are performed in the HSMs in the cluster. This feature combines the convenience and widespread integration of AWS KMS with the added control of an AWS CloudHSM cluster in your AWS account. This option allows you to keep your master key in the CloudHSM cluster but allows your SaaS provider to use your master key securely by using KMS.

Each custom key store is associated with an AWS CloudHSM cluster in your AWS account. When you connect the custom key store to its cluster, AWS KMS creates the network infrastructure to support the connection. Then it logs in to the AWS CloudHSM cluster using the credentials of a dedicated crypto user in the cluster. All of this is set up automatically, with no need to peer VPCs or connect to your SaaS provider's VPC.

You create and manage your custom key stores in AWS KMS, and you create and manage your HSM clusters in AWS CloudHSM. When you create CMKs in an AWS KMS custom key store, you view and manage the CMKs in AWS KMS. But you can also view and manage their key material in AWS CloudHSM, just as you would do for other keys in the cluster.
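Setting up a custom key store is documented in detail in the AWS KMS documentation; as a rough sketch, the AWS CLI flow looks like the following. The key store name, cluster ID, key store ID, password, and certificate path are all placeholders.

# Create a custom key store backed by your CloudHSM cluster
$ aws kms create-custom-key-store --custom-key-store-name <ExampleKeyStore> --cloud-hsm-cluster-id <cluster-1a23b4cdefg> --key-store-password <kmsuserPassword> --trust-anchor-certificate file://<customerCA.crt>

# Connect the key store to its cluster, then create a CMK whose key material lives in CloudHSM
$ aws kms connect-custom-key-store --custom-key-store-id <cks-1234567890abcdef0>
$ aws kms create-key --custom-key-store-id <cks-1234567890abcdef0> --origin AWS_CLOUDHSM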

The following diagram shows how some keys can be located in a CloudHSM cluster but be visible through AWS KMS. These are the keys that AWS KMS can use for crypto operations performed through KMS.
 

Figure 2: High level overview of KMS custom key store

While this option eliminates many of the networking components you need to set up for Option 1, it does limit the type of cryptographic operations that your SaaS provider can perform. Since the SaaS provider doesn’t have direct access to CloudHSM, the crypto operations are limited to the encrypt and decrypt operations supported by KMS, and your SaaS provider must use KMS APIs for all of their operations. This is easy if they’re using AWS services which use KMS already, but if they’re performing operations within their application before storing the data in AWS storage services, this approach could be challenging, because KMS doesn’t support all the same types of cryptographic operations that CloudHSM supports.

Figure 3 illustrates the various components that make up a custom key store and shows how a CloudHSM cluster can connect to KMS to create a customer controlled key store.
 

Figure 3: A cluster of two CloudHSM instances is connected to KMS to create a customer controlled key store

Design Considerations

  • Note that when using a custom key store, you're creating a kmsuser CU account in your AWS CloudHSM cluster and providing the kmsuser account credentials to AWS KMS.
  • This option requires your service provider to be able to use KMS as the key management option within their application. Because your SaaS provider cannot communicate directly with the CloudHSM cluster, they must instead use KMS APIs to encrypt the data. If your SaaS provider is encrypting within their application without using KMS, this option may not work for you.
  • When deploying a custom key store, you must not only control access to the CloudHSM cluster, you must also control access to AWS KMS.
  • Because the custom key store and KMS are located in your account, you must give the SaaS provider permission to use certain KMS keys. You can do this by enabling cross-account access; a sketch of such a policy statement follows this list. For more information, please refer to the blog post "Share custom encryption keys more securely between accounts by using AWS Key Management Service."
  • I recommend dedicating an AWS account to the CloudHSM cluster and custom key store, as this simplifies setup. For more information, please refer to Controlling Access to Your Custom Key Store.
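As an illustration only (not the complete key policy), a cross-account statement granting a SaaS provider account use of a CMK might look something like the sketch below. The account ID is a placeholder, and the provider must still delegate matching IAM permissions to its own principals.

{
    "Sid": "AllowSaaSProviderUseOfCMK",
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::<111122223333>:root" },
    "Action": [ "kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey" ],
    "Resource": "*"
}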

Network architecture that is not supported by CloudHSM

Figure 4: Diagram showing the network anti-pattern for deploying CloudHSM

Figure 4 shows various networking technologies, like AWS PrivateLink, Network Address Translation (NAT), and AWS Load Balancers, that cannot be used with CloudHSM when placed between the CloudHSM cluster and the client/application. All of these methods mask the real IPs of the HSM cluster nodes from the client, which breaks the communication between the CloudHSM client and the HSMs.

When the CloudHSM client successfully connects to the HSM cluster, it downloads a list of HSM IP addresses, which is then stored and used for subsequent connections. When one of the HSM nodes is unavailable, the client/application automatically tries the IP addresses of the other HSM nodes it knows about. When HSMs are added to or removed from the cluster, the client is automatically reconfigured. Because the client relies on a current list of IP addresses to transparently handle high availability and failover within the cluster, masking the real IP addresses of the HSM nodes breaks the communication between the cluster and the client.

You can read more about how the CloudHSM client works in the AWS CloudHSM User Guide.

Summary

In this blog post, I’ve shown you two options for deploying CloudHSM to store your key material while allowing your SaaS provider to access and use those keys on your behalf. This allows you to remain in control of your encryption keys and use a SaaS solution without compromising security.

It’s important to understand the security requirements, network setup, and type of cryptographic operation for each approach, and to choose the option that aligns the best with your goals. As a best practice, it’s also important to understand how to secure your CloudHSM and KMS deployment and to use necessary role-based access control with minimum privilege. Read more about AWS KMS Best Practices and CloudHSM Best Practices.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Key Management Service discussion forum.

Want more AWS Security news? Follow us on Twitter.

Vinod Madabushi

Vinod is an Enterprise Solutions Architect with AWS. He works with customers on building highly available, scalable, and secure applications on AWS Cloud. He’s passionate about solving technology challenges and helping customers with their cloud journey.

How to migrate a digital signing workload to AWS CloudHSM

Post Syndicated from Tracy Pierce original https://aws.amazon.com/blogs/security/how-to-migrate-a-digital-signing-workload-to-aws-cloudhsm/

Is your on-premises Hardware Security Module (HSM) at end-of-life? Does continued maintenance of your on-premises hardware take a lot of time and cost a lot of money? Do you want or need all of your workloads to be performed on AWS? By migrating these workloads to AWS CloudHSM, you receive automated backups, low-cost HSMs, managed maintenance, automatic recovery in the event of a hardware failure, integrated fault tolerance, and high availability. One such workload you might consider migrating is secret key material used for digital signing operations.

Enterprise certificate authority (CA) or public key infrastructure (PKI) applications use the private portion of an asymmetric key pair generated and stored in a hardware security module (HSM) to perform signing operations. Examples of such operations include the creation of digital certificates for web-servers or IoT devices, file signatures, or when negotiating a TLS session. Migrating this type of workload to AWS may save you time and money. If your HSM is at end of life and you need an alternative, you can migrate the digital signing workload to AWS CloudHSM in just a few steps.

This post will focus on a workload that allows you to create and use a digital certificate to digitally sign an arbitrary file. I’ll show you how to create a new asymmetric key pair and generate the corresponding certificate signing request (CSR) on AWS CloudHSM. This CSR, once signed by the appropriate issuing CA, allows your new key pair and the associated certificate to be trusted in the same way as the key pairs in your original HSM. You could then move traffic related to signing operations or issuing certificates to your AWS CloudHSM cluster.

Background

Before I walk you through the steps of migrating a certificate signing workload into CloudHSM, I’ll provide a little background information so you’ll know how CloudHSM, PKI, and CAs work together. Every certificate is associated with a key pair made up of a private (secret) key and a public key. The private key associated with a certificate needs to be kept confidential, so it typically resides on a hardware security module (HSM). The public portion of the key pair is not confidential, is included in the certificate, and can be shared with anyone who wants to verify a digital signature made with the corresponding private key. In a PKI, a CA is the trusted entity that issues digital certificates on behalf of end-entities. At the top of the trust hierarchy is a root CA, which is implicitly trusted when it is established because it acts as the root of trust for intermediate CAs and end-entity certificates that may be issued underneath it. Intermediate CAs are trusted because their certificates are signed by the root CA. Intermediate CAs in turn sign end-entity certificates, which are used to authenticate identities of various actors across the data transfer process. A common use case for end-entity certificates is for web servers so that connecting clients can verify the server’s identity. Generally, end-entity certificates are valid for 1-3 years, intermediate CA certificates are valid for 5-10 years, and root CAs are valid for 30 years or more.
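To make the chain of trust concrete, here's a hypothetical verification with OpenSSL; the file names are placeholders for a root CA certificate, an intermediate CA certificate, and an end-entity certificate. This is purely an illustration and not part of the migration procedure.

# Verify that the end-entity certificate chains up to the trusted root
$ openssl verify -CAfile <root.pem> -untrusted <intermediate.pem> <server.pem>
server.pem: OK

# Inspect the validity period of any certificate in the chain
$ openssl x509 -in <server.pem> -noout -dates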

Beyond solving for the non-repudiation of objects signed by end-entity certificates to ensure the owner of the private key performed the signing operation, there is still the problem of trusting that the owner of the private key is the identity they claim to be. When evaluating trust in this way, there are generally two options: relying on public CAs or private CAs. Public CAs widely distribute the public keys of their root certificates into popular client trust stores (for example, browsers and operating systems). This allows users to verify that the identity of the end-entity has been attested to by a publicly trusted CA. This helps when the signer and the verifier of the digital asset don't know each other and haven't shared cryptographic material with each other in advance to perform future validations. Private CAs are those for which there are no widely distributed copies of their associated public keys. The verifier has to retrieve the public key from the private CA and has to explicitly trust the certificate without any third-party attestation of the signer's identity. This is appropriate when signers and verifiers are in the same company or know each other. Examples of when to use a private CA are securing virtual private networks, data or file replication between internal servers, remote backups, file-sharing, email, or other personal accounts.

Regardless of the certificate trust model you need, AWS CloudHSM can be used to create the initial key pair and CSR for both public and private CA requests. Note that AWS offers some alternatives for certificate management that may simplify your workloads without having to use AWS CloudHSM directly. AWS Certificate Manager (ACM) automatically creates key pairs and issues public or private certificates to identify resources within your organization. For use cases that need capabilities not yet supported by ACM, or in unusual situations in which a single-tenant HSM under your control is required for compliance reasons, you can use AWS CloudHSM directly for key generation and signing operations.

Organizations currently using an on-premises HSM for the creation of asymmetric keys used in digital certificates often use a vendor-proprietary mechanism to replicate key material across multiple HSMs for resiliency. However, this method prevents the key material from ever being transferred to an HSM offered by a different vendor. Consider it "vendor lock-in" by design. So, the private keys corresponding to the certificates you use for signing and authentication are locked inside that HSM. But if they are locked inside, how do you move to AWS CloudHSM? The answer is that you don't have to rely on these inaccessible keys: you can create a new key pair and use it within AWS CloudHSM to begin issuing end-entity certificates.

Solution overview

I will go over creating a new private key in AWS CloudHSM using the Windows client and using Microsoft certreq to generate a corresponding CSR. You provide this CSR to your private or public CA to receive a signed certificate in return. This certificate and its public key then needs to be propagated to wherever your signatures are verified. At the end of this post, I will show you how to verify your digital signatures using Microsoft SignTool. SignTool is provided by Microsoft to allow Windows users to digitally sign files, verify file signatures, and file timestamps.
 

Figure 1: Procedural diagram

As shown in the diagram above, the steps followed in this post are:

  1. Create a new RSA private key using KSP/CNG through the AWS CloudHSM Windows client.
  2. Using Microsoft certreq, create your CSR.
  3. Provide the CSR to your CA for signing.
  4. Use Microsoft SignTool to sign files in your environment.

Note: You may have to register this new certificate with any partners that do not automatically verify the entire certificate chain. These could be third-party applications, vendors, or outside entities that use your certificates to determine trust.

Prerequisites

In this walkthrough, I assume that you already have an AWS CloudHSM cluster set up and initialized with at least one HSM device, and an Amazon Elastic Compute Cloud (EC2) Windows-based instance with the AWS CloudHSM client, PowerShell, and Windows SDK with Microsoft SignTool installed. You must have a crypto user (CU) on the HSM to perform the steps in this post.

Deploying the solution

Step 1: Create a new private key using KSP/CNG using the AWS CloudHSM Windows client

On your Windows server where the AWS CloudHSM Windows client is installed, use a text editor to create a certificate request file named IISCertRequest.inf. For the purpose of this post, I have filled out an example file below.


[Version]
Signature = "$Windows NT$"
[NewRequest]
Subject = "CN=example.com,C=US,ST=Washington,L=Seattle,O=ExampleOrg,OU=WebServer"
HashAlgorithm = SHA256
KeyAlgorithm = RSA
KeyLength = 2048
ProviderName = "Cavium Key Storage Provider"
KeyUsage = "CERT_DIGITAL_SIGNATURE_KEY_USAGE"
MachineKeySet = True    

Step 2: Using Microsoft certreq, create your CSR

On the same server, open PowerShell and, at the PowerShell prompt, create a CSR from the IISCertRequest.inf file by using the Windows certreq command. Here's an example of the command. Remember to replace the placeholder file names in angle brackets with your own.


PS C:\>certreq -new <IISCertRequest.inf> <IISCertRequest.csr>
	SDK Version: 2.03
CertReq: Request Created

If successful, you'll see the "Request Created" message above, as well as the new file <IISCertRequest.csr> on your server. You'll provide this CSR to your chosen public CA for certificate issuance; this step is completed manually via your public CA's suggested certificate request method.

Step 3: Provide the CSR to your CA for signing

The CA that signed your existing end-entity certificates, whose keys were generated by your original HSM, is the one you should use to sign the new certificates with keys generated by AWS CloudHSM, as well. There are many CAs to choose from, such as DigiCert, Trustwave, GoDaddy, and so on. You will want to follow their steps for submitting your CSR to receive your certificate in return.
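Before you submit the CSR, you can optionally sanity-check its contents. If you have OpenSSL available (my suggestion, not part of the certreq workflow), the command below prints the subject and key parameters and verifies the CSR's self-signature:

$ openssl req -in <IISCertRequest.csr> -noout -text -verify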

Step 4: Use Microsoft SignTool to sign files in your environment

When you receive your signed certificate back from your chosen CA, save a copy locally on your Windows server. Then, move the certificate file to the Personal Certificate Store in Windows so it can be used by other applications, such as Microsoft SignTool. Here's an example of the command. Be sure to replace the placeholder value with your actual certificate name.
PS C:\>certreq -accept <signedCertificate.cer>

Now, the certificate is ready for use, and I’ll show you how to use it to sign a file. First, you have to get the thumbprint of your certificate. To do this, open PowerShell as an Administrator (right-click the app and choose Run as Administrator). Type this command:
PS C:\>Get-ChildItem -path cert:\LocalMachine\My

If successful, you should see an output similar to this. Copy the thumbprint that is returned. You’ll need it when you perform the actual signing operation on a file.


Thumbprint                                      Subject
----------                                      -------
49DF7HDJT84723FDKCURLSXYRF9830568CXHSUB2        CN=WINDOWS-CA
VJFU57E6DI9DKMCHAKLDFJA8E73739Q04730QU7A        CN=www.example.com, OU=Certif….

To open the SignTool application, navigate to the app’s directory within PowerShell. By default, this is typically:
C:\Program Files (x86)\Windows Kits\<SDK Version>\bin\<version number>\<CPU architecture>

For example, if you had downloaded the Microsoft Windows SDK 10 version, the application would be stored in:

C:\Program Files (x86)\Windows Kits\10\bin\10.0.17763.0\x64

When you've located the directory, sign your file by running the command below. Remember to replace the placeholder values in angle brackets with your own. The test.exe file in this example can be any valid executable file in your directory.
PS C:\>.\signtool.exe sign /v /fd sha256 /sha1 <thumbprint> /sm /as C:\Users\Administrator\Desktop\<test.exe>

You should see a message like this:


Done Adding Additional Store
Successfully signed C:\User\Administrator\Desktop\<test.exe>

Number of files successfully Signed: 1
Number of warnings: 0
Number of errors: 0

One last optional step is to verify the signature on the file using the command below. Again, replace the placeholder values with your own.
PS C:\>.\signtool.exe verify /v /pa C:\Users\Administrator\Desktop\<test.exe>

You've now successfully migrated your file signing workload to AWS CloudHSM. If your signing certificate was not issued by a publicly trusted CA but instead by a private CA, make sure to deploy a copy of the root CA certificate and any intermediate certificates from the private CA on any systems where you want to verify the integrity of your signed file.

Conclusion

In this post, I walked you through creating a new RSA asymmetric key pair and using it to create a CSR. After supplying the CSR to your chosen CA and receiving a signed certificate in return, I showed you how to use Microsoft SignTool with AWS CloudHSM to sign files in your environment. You can now use AWS CloudHSM to sign code, documents, or other certificates just as you did with your original HSMs.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS CloudHSM forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tracy Pierce

Tracy Pierce is a Senior Consultant, Security Specialty, for Remote Consulting Services. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority and she shows this by improving processes, documentation, and building tutorials. She has her AS in Computer Security & Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls at request.

How to BYOK (bring your own key) to AWS KMS for less than $15.00 a year using AWS CloudHSM

Post Syndicated from Tracy Pierce original https://aws.amazon.com/blogs/security/how-to-byok-bring-your-own-key-to-aws-kms-for-less-than-15-00-a-year-using-aws-cloudhsm/

Note: BYOK is helpful for certain use cases, but I recommend that you familiarize yourself with KMS best practices before you adopt this approach. You can review best practices in the AWS Key Management Services Best Practices (.pdf) whitepaper.

May 14, 2019: We’ve updated a sentence to clarify that this solution does not include instructions for creating an AWS CloudHSM cluster. For information about how to do this, please refer to our documentation.


Back in 2016, AWS Key Management Service (AWS KMS) announced the ability to bring your own keys (BYOK) for use with KMS-integrated AWS services and custom applications. This feature allows you more control over the creation, lifecycle, and durability of your keys. You decide the hardware or software used to generate the customer-managed customer master keys (CMKs), you determine when or if the keys will expire, and you get to upload your keys only when you need them and delete them when you’re done.

The documentation that walks you through how to import a key into AWS KMS provides an example that uses OpenSSL to create your master key. Some customers, however, don’t want to use OpenSSL. While it’s a valid method of creating the key material associated with a KMS CMK, security best practice is to perform key creation on a hardware security module (HSM) or a hardened key management system.

However, using an on-premises HSM to create and back up your imported keys can become expensive. You have to plan for factors like the cost of the device itself, its storage in a datacenter, electricity, maintenance of the device, and network costs, all of which can add up. An on-premises HSM device could run upwards of $10K annually even if used sparingly, in addition to the cost of purchasing the device in the first place. Even if you’re only using the HSM for key creation and backup and don’t need it on an ongoing basis, you might still need to keep it running to avoid complex re-initialization processes. This is where AWS CloudHSM comes in. CloudHSM offers HSMs that are under your control, in your virtual private cloud (VPC). You can spin up an HSM device, create your key material, export it, import it into AWS KMS for use, and then terminate the HSM (since CloudHSM saves your HSM state using secure backups). Because you’re only billed for the time your HSM instance is active, you can perform all of these steps for less than $15.00 a year!

Solution pricing

In this post, I'll show you how to use an AWS CloudHSM cluster (a group of HSMs that CloudHSM keeps synchronized with one another) with one HSM to create your key material. Only one HSM is needed, as you'll use this HSM just for key generation and storage. Ongoing crypto operations that use the key will all be performed by KMS. AWS CloudHSM comes with an hourly fee that changes based on your region of choice; be sure to check the pricing page for updates prior to use. AWS KMS has a $1.00/month charge for all customer-managed CMKs, including those that you import yourself. In the following chart, I've calculated annual costs for some regions. These assume that you want to rotate your imported key annually and that you perform this operation once per year by running a single CloudHSM for one hour. I've included 12 monthly installments of $1.00/mo for your CMK. (For example, in US-EAST-1: one HSM hour at $1.60 plus 12 months at $1.00/mo comes to $13.60 for the year.) Depending on the speed of your application, additional costs may apply for KMS usage, which can be found on the AWS KMS pricing page.

Region         | CloudHSM pricing | KMS pricing | Annual total cost
---------------|------------------|-------------|------------------
US-EAST-1      | $1.60/hr         | $1/mo       | $13.60/yr
US-WEST-2      | $1.45/hr         | $1/mo       | $13.45/yr
EU-WEST-1      | $1.47/hr         | $1/mo       | $13.47/yr
US-GOV-EAST-1  | $2.08/hr         | $1/mo       | $14.08/yr
AP-SOUTHEAST-1 | $1.86/hr         | $1/mo       | $13.86/yr
EU-CENTRAL-1   | $1.92/hr         | $1/mo       | $13.92/yr
EU-NORTH-1     | $1.40/hr         | $1/mo       | $13.40/yr
AP-SOUTHEAST-2 | $1.99/hr         | $1/mo       | $13.99/yr
AP-NORTHEAST-1 | $1.81/hr         | $1/mo       | $13.81/yr

Solution overview

I’ll walk you through the process of creating a CMK in AWS KMS with no key material associated and then downloading the public key and import token that you’ll need in order to import the key material later on. I’ll also show you how to create, securely wrap, and export your symmetric key from AWS CloudHSM. Finally, I’ll show you how to import your key material into AWS KMS and then terminate your HSM to save on cost. The following diagram illustrates the steps covered in this post.
 

Figure 1: The process of creating a CMK in AWS KMS

  1. Create a customer master key (CMK) in AWS KMS that has no key material associated.
  2. Download the import wrapping key and import token from KMS.
  3. Import the wrapping key provided by KMS into the HSM.
  4. Create a 256-bit symmetric key on AWS CloudHSM.
  5. Use the imported wrapping key to wrap the symmetric key.
  6. Import the symmetric key into AWS KMS using the import token from step 2.
  7. Terminate your HSM, which triggers a backup. Delete or leave your cluster, depending on your needs.

Prerequisites

In this walkthrough, I assume that you already have an AWS CloudHSM cluster set up and initialized with at least one HSM device, and an Amazon Elastic Compute Cloud (EC2) instance running Amazon Linux OS with the AWS CloudHSM client installed. You must have a crypto user (CU) on the HSM to perform the key creation and export functions. You’ll also need an IAM user or role with permissions to both AWS KMS and AWS CloudHSM, and with credentials configured in your AWS Command Line Interface (AWS CLI). Make sure to store your CU information and IAM credentials (if a user is created for this activity) in a safe location. You can use AWS Secrets Manager for secure storage.
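For example, one way to park the CU credentials in Secrets Manager with the AWS CLI; the secret name and values here are placeholders:

$ aws secretsmanager create-secret --name <cloudhsm-byok-cu> --secret-string '{"username":"<ExampleUser>","password":"<Password123>"}'

# Retrieve the credentials later, when you need to log in to the HSM
$ aws secretsmanager get-secret-value --secret-id <cloudhsm-byok-cu> --query SecretString --output text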

Deploying the solution

For my demonstration, I’ll be using the AWS CLI. However, if you prefer working with a graphical user interface, step #1 and the important portion of step #3 can be done in the AWS KMS Console. All other steps are via AWS CLI only.

Step 1: Create the CMK with no key material associated

Begin by creating a customer master key (CMK) in AWS KMS that has no key material associated. The CLI command to create the CMK is as follows:

$ aws kms create-key --origin EXTERNAL --region us-east-1

If successful, you’ll see an output on the CLI similar to below. The KeyState will be PendingImport and the Origin will be EXTERNAL.


{
    "KeyMetadata": {
        "AWSAccountId": "111122223333",
        "KeyId": "1234abcd-12ab-34cd-56ef-1234567890ab",
        "Arn": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
        "CreationDate": 1551119868.343,
        "Enabled": false,
        "Description": "",
        "KeyUsage": "ENCRYPT_DECRYPT",
        "KeyState": "PendingImport",
        "Origin": "EXTERNAL",
        "KeyManager": "CUSTOMER"
    }
}

Step 2: Download the public key and import token

With the CMK ID created, you now need to download the import wrapping key and the import token. Wrapping is a method of encrypting the key so that it doesn’t pass in plaintext over the network. You need both the wrapping key and the import token in order to import a key into AWS KMS. You’ll use the public key to encrypt your key material, which ensures that your key is not exposed as it’s imported. When AWS KMS receives the encrypted key material, it will use the corresponding private component of the import wrapping key to decrypt it. The public and private key pair used for this action will always be a 2048-bit RSA key that is unique to each import operation. The import token contains metadata to ensure the key material is imported to the correct CMK ID. Both the import token and import wrapping key are on a time limit, so if you don’t import your key material within 24 hours of downloading them, the import wrapping key and import token will expire and you’ll need to download a new set from KMS.

  1. (OPTIONAL) In my example command, I’ll use the wrapping algorithm RSAES_OAEP_SHA_256 to encrypt and securely import my key into AWS KMS. If you’d like to choose a different algorithm, you may use any of the following: RSAES_OAEP_SHA_256, RSAES_OAEP_SHA_1, or RSAES_PKCS1_V1_5.
  2. In the command below, replace the example key ID (shown in angle brackets) with the key ID you received in step 1. If you'd like, you can also modify the wrapping algorithm. Then run the command.

    $ aws kms get-parameters-for-import --key-id <1234abcd-12ab-34cd-56ef-1234567890ab> --wrapping-algorithm <RSAES_OAEP_SHA_256> --wrapping-key-spec RSA_2048 --region us-east-1

    If the command is successful, you’ll see an output similar to what’s below. Your output will contain the import token and import wrapping key. Copy this output to a safe location, as you’ll need it in the next step. Keep in mind, these items will expire after 24 hours.

    
        {
            "KeyId": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab ",
            "ImportToken": "AQECAHjS7Vj0eN0qmSLqs8TfNtCGdw7OfGBfw+S8plTtCvzJBwAABrMwggavBgkqhkiG9w0BBwagggagMIIGnAIBADCCBpUGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMpK6U0uSCNsavw0OJAgEQgIIGZqZCSrJCf68dvMeP7Ah+AfPEgtKielFq9zG4aXAIm6Fx+AkH47Q8EDZgf16L16S6+BKc1xUOJZOd79g5UdETnWF6WPwAw4woDAqq1mYKSRtZpzkz9daiXsOXkqTlka5owac/r2iA2giqBFuIXfr2RPDB6lReyrhAQBYTwefTHnVqosKVGsv7/xxmuorn2iBwMfyqoj2gmykmoxrmWmCdK+jRdWKBHtriwplBTEHpUTdHmQnQzGVThfr611XFCQDZ+uk/wpVFY4jZ/Z/…"
            "PublicKey": "MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4KCiW36zofngmyVTOpMuFfHAXLWTfl2BAvaeq5puewoMt1zPwBP23qKdG10L7ChTVt4H9PCs1vzAWm2jGeWk5fO…",
            "ParametersValidTo": 1551207484.208
        }
        

  3. Because the output of the command is base64 encoded, you must base64 decode both components into binary data before use. To do this, use your favorite text editor to save the output into two separate files. Name one ImportToken.b64; it will hold the "ImportToken" value from above. Name the other PublicKey.b64; it will hold the "PublicKey" value from above. You will find example commands to save both below.
    
        echo 'AQECAHjS7Vj0eN0qmSLqs8TfNtCGdw7OfGBfw+S8plTtCvzJBwAABrMwggavBgkqhkiG9w0BBwagggagMIIGnAIBADCCBpUGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMpK6U0uSCNsavw0OJAgEQgIIGZqZCSrJCf68dvMeP7Ah+AfPEgtKielFq9zG4aXAIm6Fx+AkH47Q8EDZgf16L16S6+BKc1xUOJZOd79g5UdETnWF6WPwAw4woDAqq1mYKSRtZpzkz9daiXsOXkqTlka5owac/r2iA2giqBFuIXfr2RPDB6lReyrhAQBYTwefTHnVqosKVGsv7/xxmuorn2iBwMfyqoj2gmykmoxrmWmCdK+jRdWKBHtriwplBTEHpUTdHmQnQzGVThfr611XFCQDZ+uk/wpVFY4jZ/Z' > ImportToken.b64

        echo 'MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4KCiW36zofngmyVTOpMuFfHAXLWTfl2BAvaeq5puewoMt1zPwBP23qKdG10L7ChTVt4H9PCs1vzAWm2jGeWk5fO' > PublicKey.b64
        

  4. On both saved files, you must then run the following commands, which will base64 decode them and save them in their binary form:
    
        $ openssl enc -d -base64 -A -in PublicKey.b64 -out PublicKey.bin
    
        $ openssl enc -d -base64 -A -in ImportToken.b64 -out ImportToken.bin
        

Step 3: Import the import wrapping key provided by AWS KMS into your HSM

Now that you’ve created the CMK ID and prepared for the key material import, you’re going to move to AWS CloudHSM to create and export your symmetric encryption key. You’re going to import the import wrapping key provided by AWS KMS into your HSM. Before completing this step, be sure you’ve set up the prerequisites I listed at the start of this post.

  1. Log into your EC2 instance with the AWS CloudHSM client package installed, and launch the Key Management Utility. You will launch the utility using the command below:

    $ /opt/cloudhsm/bin/key_mgmt_util

  2. After launching the key_mgmt_util, you'll need to log in as the CU user. Do so with the command below, being sure to replace <ExampleUser> and <Password123> with your own CU user name and password.

    Command: loginHSM -u CU -p <Password123> -s <ExampleUser>

    You should see an output similar to the example below, letting you know that you’ve logged in correctly:

        
        Cfm3LoginHSM returned: 0x00 : HSM Return: SUCCESS
        Cluster Error Status
        Node id 1 and err state 0x00000000 : HSM Return: SUCCESS
        

    Next, you’ll import the import wrapping key provided by KMS into the HSM, so that you can use it to wrap the symmetric key you’re going to create.

  3. Open the file PublicKey.b64 in your favorite text editor and add the line -----BEGIN PUBLIC KEY----- at the very top of the file and the line -----END PUBLIC KEY----- at the very end. (An example of how this should look is below.) Save this new file as PublicKey.pem.
       
        -----BEGIN PUBLIC KEY-----
        MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0UnLXHbP/jC/QykQQUroaB0xQYYVaZge//NrmIdYtYvaw32fcEUYjTHovWkYkmifUFVYkNaWKNRGuICo1oAY5cgowgxYcsBXZ4Pk2h3v43tIsxG63ZDLKDFju/dtGaa8+CwR8mR…
        -----END PUBLIC KEY-----
        

  4. Use the importPubKey command from the key_mgmt_util to import the key file. An example of the command is below. For the -l flag (label), I’ve named my example <wrapping-key> so that I’ll know what it’s for. You can name yours whatever you choose. Also, remember to replace <PublicKey.pem> with the actual filename of your public key.

    Command: importPubKey -l <wrapping-key> -f <PublicKey.pem>

    If successful, your output should look similar to the following example. Make note of the key handle, as this is the identifying ID for your wrapping key:

          
        Cfm3CreatePublicKey returned: 0x00 : HSM Return: SUCCESS
    
        Public Key Handle: 30 
        
        Cluster Error Status
        Node id 2 and err state 0x00000000 : HSM Return: SUCCESS
        

Step 4: Create a symmetric key on AWS CloudHSM

Next, you’ll create a symmetric key that you want to export from CloudHSM, so that you can import it into AWS KMS.

The first parameter you'll use for this action is -t with the value 31. This is the key-type flag, and 31 equals an AES key. KMS only accepts the AES key type. The second is -s with a value of 32. This is the key-size flag, and 32 equals a 32-byte key (256 bits). The last flag is the -l flag with the value BYOK-KMS. This flag is simply a label to name your key, so you may alter this value however you wish.

Here’s what my example looks like:

Command: genSymKey -t 31 -s 32 -l BYOK-KMS

If successful, your output should look similar to what’s below. Make note of the key handle, as this will be the identifying ID of the key you wish to import into AWS KMS.

  
Cfm3LoginHSM returned: 0x00 : HSM Return: SUCCESS

Symmetric Key Created. Key Handle: 20

Cluster Error Status
Node id 1 and err state 0x00000000 : HSM Return: SUCCESS

Step 5: Use the imported import wrapping key to wrap the symmetric key

Now that you've imported the import wrapping key into your HSM, I'll show you how to use its key handle with the key_mgmt_util wrapKey command to wrap your symmetric key for export out of the HSM.

An example of the command is below. Here’s how to customize the parameters:

  • -k refers to the key handle of your symmetric key. Replace the variable that follows -k with your own key handle ID from step 4.
  • -w refers to the key handle of your wrapping key. Replace the variable with your own wrapping key handle from step 3.
  • -out refers to the file name of your exported key. Replace the variable with a file name of your choice.
  • -m refers to the wrapping-mechanism. In my example, I’ve used RSA-OAEP, which has a value of 8.
  • -t refers to the hash-type. In my example, I’ve used SHA256, which has a value of 3.
  • -noheader refers to the -noheader flag. This omits the header that specifies CloudHSM-specific key attributes.

Command: wrapKey -k <20> -w <30> -out <KMS-BYOK-March2019.bin> -m 8 -t 3 -noheader

If successful, you should see an output similar to what’s below:

 
Cfm2WrapKey5 returned: 0x00 : HSM Return: SUCCESS

	Key Wrapped.

	Wrapped Key written to file "KMS-BYOK-March2019.bin" length 256

	Cfm2WrapKey returned: 0x00 : HSM Return: SUCCESS

With that, you’ve completed the AWS CloudHSM steps. You now have a symmetric key, securely wrapped for transport into AWS KMS. You can log out of the Key Management Utility by entering the word exit.

Step 6: Import the wrapped symmetric key into AWS KMS using the key import function

You’re ready to import the wrapped symmetric key into AWS KMS using the key import function. Make the following updates to the sample command that I provide below:

  • Replace the key-id value and file names with your own.
  • Leave the expiration model parameter blank, or choose from
    KEY_MATERIAL_EXPIRES or KEY_MATERIAL_DOES_NOT_EXPIRE. If you leave the parameter blank, it will default to KEY_MATERIAL_EXPIRES. This expiration model requires you to set the --valid-to parameter, which can be any time of your choosing so long as it follows the format in the example. If you choose KEY_MATERIAL_DOES_NOT_EXPIRE, you may leave the --valid-to option out of the command. To enforce an expiration and rotation of your key material, best practice is to use the KEY_MATERIAL_EXPIRES option and a date of 1-2 years.

Here’s my sample command:

$ aws kms import-key-material --key-id <1234abcd-12ab-34cd-56ef-1234567890ab> --encrypted-key-material fileb://<KMS-BYOK-March2019.bin> --import-token fileb://<ImportToken.bin> --expiration-model KEY_MATERIAL_EXPIRES --valid-to 2020-03-01T12:00:00-08:00 --region us-east-1

You can test that the import was successful by using the key ID to encrypt a small (under 4KB) file. An example command is below:

$ aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext file://test.txt --region us-east-1

A successful call will output something similar to below:

 
{
    "KeyId": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab ", 
    "CiphertextBlob": "AQICAHi4Pj/Jhk6aHMZNegOpYFnnuiEKAtjRWwD33TdDnKNY5gHbUVdrwptz…"
}
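To verify the full round trip, a sketch like the one below saves the ciphertext as binary and decrypts it; the output file name is a placeholder, and the --query/base64 handling assumes a Linux shell:

$ aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext file://test.txt --query CiphertextBlob --output text --region us-east-1 | base64 --decode > <test.txt.enc>

$ aws kms decrypt --ciphertext-blob fileb://<test.txt.enc> --query Plaintext --output text --region us-east-1 | base64 --decode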

Step 7: Terminate your HSM (which triggers a backup)

The last step in the process is to terminate your HSM, which triggers a backup. Since you've imported the key material into AWS KMS, you no longer need to run the HSM. Terminating the HSM automatically triggers a backup, which takes an exact copy of the key material and the users on your HSM and stores it, encrypted, in an Amazon Simple Storage Service (Amazon S3) bucket. If you need to re-import your key into KMS, or if your company's security policy requires an annual rotation of your KMS master key(s), you can spin up an HSM from your CloudHSM backup and be back to where you started. You can re-import your existing key material or create new key material for your new CMK; either way, you'll need to request a fresh import wrapping key and import token from KMS.

Deleting an HSM can be done with the command below. Replace <cluster-example> with your actual cluster ID and <hsm-example> with your HSM ID:

$ aws cloudhsmv2 delete-hsm --cluster-id <cluster-example> --hsm-id <hsm-example> --region us-east-1
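When the time comes to restore, a sketch of the recovery path with placeholder IDs looks like this:

# Find the most recent backup of your cluster
$ aws cloudhsmv2 describe-backups --region us-east-1

# Create a new cluster from that backup, then add an HSM to it
$ aws cloudhsmv2 create-cluster --hsm-type hsm1.medium --subnet-ids <subnet-example> --source-backup-id <backup-example> --region us-east-1
$ aws cloudhsmv2 create-hsm --cluster-id <cluster-example> --availability-zone <us-east-1a> --region us-east-1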

Following these steps, you will have successfully defined a KMS CMK, created your own key material on a CloudHSM device, and imported it into AWS KMS for use. Depending on the region, all of this can be done for less than $15.00 a year.

Summary

In this post, I walked you through creating a CMK ID without encryption key material associated, creating and then exporting a symmetric key from AWS CloudHSM, and then importing it into AWS KMS for use with KMS-integrated AWS services. This process allows you to maintain ownership and management of your master keys, create them on FIPS 140-2 Level 3 validated hardware, and maintain a copy for disaster recovery scenarios. This saves not only the time and personnel required to maintain your own HSMs on-premises, but also the cost of hardware, electricity, and housing of the device.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS CloudHSM or AWS KMS forums.

Want more AWS Security news? Follow us on Twitter.

Author

Tracy Pierce

Tracy Pierce is a Senior Cloud Support Engineer at AWS. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority and she shows this by improving processes, documentation, and building tutorials. She has her AS in Computer Security & Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls at request.

How to migrate your EC2 Oracle Transparent Data Encryption (TDE) database encryption wallet to CloudHSM

Post Syndicated from Tracy Pierce original https://aws.amazon.com/blogs/security/how-to-migrate-your-ec2-oracle-transparent-data-encryption-tde-database-encryption-wallet-to-cloudhsm/

In this post, I’ll show you how to migrate an encryption wallet for an Oracle database installed on Amazon EC2 from using an outside HSM to using AWS CloudHSM. Transparent Data Encryption (TDE) for Oracle is a common use case for Hardware Security Module (HSM) devices like AWS CloudHSM. Oracle TDE uses what is called “envelope encryption.” Envelope encryption is when the encryption key used to encrypt the tables of your database is in turn encrypted by a master key that resides either in a software keystore or on a hardware keystore, like an HSM. This master key is non-exportable by design to protect the confidentiality and integrity of your database encryption. This gives you a more granular encryption scheme on your data.

An encryption wallet is an encrypted container used to store the TDE master key for your database. The encryption wallet needs to be opened manually after a database startup and prior to the TDE encrypted data being accessed, so the master key is available for data decryption. The process I talk about in this post can be used with any non-AWS hardware or software encryption wallet, or a hardware encryption wallet that utilizes AWS CloudHSM Classic. For my examples in this tutorial, I will be migrating from a CloudHSM Classic to a CloudHSM cluster. It is worth noting that Gemalto has announced the end-of-life for Luna 5 HSMs, which our CloudHSM Classic fleet uses.

Note: You cannot migrate from an Oracle instance in Amazon Relational Database Service (Amazon RDS) to AWS CloudHSM. You must install the Oracle database on an Amazon EC2 instance. Amazon RDS is not currently integrated with AWS CloudHSM.

When you move from one type of encryption wallet to another, new TDE master keys are created inside the new wallet. To ensure that you have access to backups that rely on your old HSM, consider leaving the old HSM running for your normal recovery window period. The steps I discuss will perform the decryption of your TDE keys and then re-encrypt them with the new TDE master key for you.

Once you’ve migrated your Oracle databases to use AWS CloudHSM as your encryption wallet, it’s also a good idea to set up cross-region replication for disaster recovery efforts. With copies of your database and encryption wallet in another region, you can be back in production quickly should a disaster occur. I’ll show you how to take advantage of this by setting up cross-region snapshots of your Oracle database Amazon Elastic Block Store (EBS) volumes and copying backups of your CloudHSM cluster between regions.

Solution overview

For this solution, you will modify the Oracle database’s encryption wallet to use AWS CloudHSM. This is completed in three steps, which will be detailed below. First, you will switch from the current encryption wallet, which is your original HSM device, to a software wallet. This is done by reverse migrating to a local wallet. Second, you’ll replace the PKCS#11 provider of your original HSM with the CloudHSM PKCS#11 software library. Third, you’ll switch the encryption wallet for your database to your CloudHSM cluster. Once this process is complete, your database will automatically re-encrypt all data keys using the new master key.

To complete the disaster recovery (DR) preparation portion of this post, you will perform two more steps. These consist of copying over snapshots of your EC2 volumes and your CloudHSM cluster backups to your DR region. The following diagram illustrates the steps covered in this post.
 

Figure 1: Steps to migrate your EC2 Oracle TDE database encryption wallet to CloudHSM

  1. Switch the current encryption wallet for the Oracle database TDE from your original HSM to a software wallet via a reverse migration process.
  2. Replace the PKCS#11 provider of your original HSM with the AWS CloudHSM PKCS#11 software library.
  3. Switch your encryption wallet to point to your AWS CloudHSM cluster.
  4. (OPTIONAL) Set up cross-region copies of the EC2 instance housing your Oracle database
  5. (OPTIONAL) Set up a cross-region copy of your recent CloudHSM cluster backup

Prerequisites

This process assumes you have the below items already set up or configured:

  • An Oracle database installed on an Amazon EC2 instance, with TDE enabled and its encryption wallet currently on your original HSM
  • An active AWS CloudHSM cluster, initialized with at least one HSM and a crypto user (CU) account
  • The AWS CloudHSM client installed on the EC2 instance that houses the database

Deploy the solution

Now that you have the basic steps, I’ll go into more detail on each of them. I’ll show you the steps to migrate your encryption wallet to a software wallet using a reverse migration command.

Step 1: Switching the current encryption wallet for the Oracle database TDE from your original HSM to a software wallet via a reverse migration process.

To begin, you must configure the sqlnet.ora file for the reverse migration. In Oracle databases, the sqlnet.ora file is a plain-text configuration file containing parameters, such as encryption, connection routing, and naming, that determine how the Oracle server and clients use network database access capabilities. You will want to create a backup so you can roll back in the event of any errors. You can make a copy with the below command. Make sure to replace </path/to/> with the actual path to your sqlnet.ora file location. The standard location for this file is "$ORACLE_HOME/network/admin", but check your setup to ensure this is correct.

cp </path/to/>sqlnet.ora </path/to/>sqlnet.ora.backup

The software wallet must be created before you edit this file, and it should preferably be empty. Then, using your favorite text editor, open the sqlnet.ora file and set the below configuration. If an entry already exists, replace it with the below text.


ENCRYPTION_WALLET_LOCATION=
  (SOURCE=(METHOD=FILE)(METHOD_DATA=
    (DIRECTORY=<path_to_keystore>)))

Make sure to replace <path_to_keystore> with the directory location of your destination wallet. The destination wallet is the path you choose for the local software wallet. Note that in Oracle, the words "keystore" and "wallet" are used interchangeably, and I treat them the same in this post. Next, you'll configure the wallet for the reverse migration. For this, you will use the ADMINISTER KEY MANAGEMENT statement with the SET ENCRYPTION KEY and REVERSE MIGRATE clauses, as shown in the example below.

By using the REVERSE MIGRATE USING clause in your statement, you ensure the existing TDE table keys and tablespace encryption keys are decrypted by the hardware wallet TDE master key and then re-encrypted with the software wallet TDE master key. You will need to log into the database instance as a user that has been granted the ADMINISTER KEY MANAGEMENT or SYSKM privileges to run this statement. An example of the login is below. Make sure to replace the <sec_admin> and <password> with your administrator user name and password for the database.


sqlplus c##<sec_admin> syskm
Enter password: <password> 
Connected.

Once you're connected, you'll run the SQL statement below. Make sure to replace <password> with your own existing wallet password and <username:password> with your own existing wallet user ID and password. You'll run this statement with the WITH BACKUP parameter, as it's always ideal to take a backup in case something goes wrong.

ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY IDENTIFIED BY <password> REVERSE MIGRATE USING "<username:password>" WITH BACKUP;

If successful, you will see the message "keystore altered." When complete, you do not need to restart your database or manually re-open the local wallet; the migration process loads it into memory for you.

With the migration complete, you’ll now move onto the next step of replacing the PKCS#11 provider of your original HSM with the CloudHSM PKCS#11 software library. This library is a PKCS#11 standard implementation that communicates with the HSMs in your cluster and is compliant with PKCS#11 version 2.40.

Step 2: Replacing the PKCS#11 provider of your original HSM with the AWS CloudHSM PKCS#11 software library.

You’ll begin by installing the software library with the below two commands.

wget https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/EL6/cloudhsm-client-pkcs11-latest.el6.x86_64.rpm

sudo yum install -y ./cloudhsm-client-pkcs11-latest.el6.x86_64.rpm

When installation completes, you will find the CloudHSM PKCS#11 software library files in /opt/cloudhsm/lib, the default directory for AWS CloudHSM software library installs. To improve the processing speed and throughput of the HSMs, I suggest installing a Redis cache as well. This cache stores key handles and attributes locally, so you can access them without making a call to the HSMs. As this step is optional and not required for this post, I will leave the link for installation instructions here. With the software library installed, ensure the CloudHSM client is running. You can check this with the command below.

sudo start cloudhsm-client
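
Optionally, you can confirm that the library file landed in the default install location before proceeding. The path below matches the copy command used later in this post.

ls -l /opt/cloudhsm/lib/libcloudhsm_pkcs11_standard.so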

Step 3: Switching your encryption wallet to point to your AWS CloudHSM cluster.

Once you've verified the client is running, you're going to create another backup of the sqlnet.ora file. It's always a good idea to take backups before making any changes. The command will be similar to the one below; replace </path/to/> with the actual path to your sqlnet.ora file.

cp </path/to/>sqlnet.ora </path/to/>sqlnet.ora.backup2

With this done, again open the sqlnet.ora file with your favorite text editor. You are going to edit the ENCRYPTION_WALLET_LOCATION entry to resemble the text below.


ENCRYPTION_WALLET_LOCATION=
  (SOURCE=(METHOD=HSM))

Save the file and exit. You will need to create the directory where your Oracle database will expect to find the library file for the AWS CloudHSM PKCS#11 software library. You do this with the command below.

sudo mkdir -p /opt/oracle/extapi/64/hsm

With the directory created, you next copy the CloudHSM PKCS#11 software library from the original installation directory to this one. It is important that this new directory contains only that one library file. If any files exist in the directory that are not directly related to the way you installed the CloudHSM PKCS#11 software library, remove them. The command to copy is below.

sudo cp /opt/cloudhsm/lib/libcloudhsm_pkcs11_standard.so /opt/oracle/extapi/64/hsm

Now, modify the ownership of the directory and everything inside. The Oracle user must have access to these library files to run correctly. The command to do this is below.

sudo chown -R oracle:dba /opt/oracle

With that done, you can start your Oracle database. This completes the migration of your encryption wallet and TDE keys from your original encryption wallet to a local wallet, and finally to CloudHSM as the new encryption wallet. Should you wish to create new TDE master encryption keys on CloudHSM, you can follow the steps here to do so.
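
As a rough sketch (check the Oracle documentation for the exact syntax for your database version), creating a new master key once the HSM wallet is active uses the same ADMINISTER KEY MANAGEMENT statement, authenticated with your CU credentials:

ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "<username:password>";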

These steps are optional, but they're helpful if you must restore your database to production quickly. For customers that leverage DR environments, we have two great blog posts, here and here, that walk you through each step of the cross-region replication process. The first uses a combination of AWS Step Functions and Amazon CloudWatch Events to copy your EBS snapshots to your DR region, and the second showcases how to copy your CloudHSM cluster backups to your DR region.

Summary

In this post, I walked you through how to migrate your Oracle TDE database encryption wallet to point to CloudHSM for secure storage of your TDE keys. I showed you how to properly install the CloudHSM PKCS#11 software library and place it in the directory where Oracle can find and use it. You can use this process to migrate most on-premises encryption wallets to AWS CloudHSM to help secure your TDE keys and meet compliance requirements.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS CloudHSM forum.

Want more AWS Security news? Follow us on Twitter.

Author

Tracy Pierce

Tracy Pierce is a Senior Cloud Support Engineer at AWS. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority and she shows this by improving processes, documentation, and building tutorials. She has her AS in Computer Security & Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls at request.

How to run AWS CloudHSM workloads on Docker containers

Post Syndicated from Mohamed AboElKheir original https://aws.amazon.com/blogs/security/how-to-run-aws-cloudhsm-workloads-on-docker-containers/

AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to generate and use your own encryption keys on the AWS Cloud. With CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. Your HSMs are part of a CloudHSM cluster. CloudHSM automatically manages synchronization, high availability, and failover within a cluster.

CloudHSM is part of the AWS Cryptography suite of services, which also includes AWS Key Management Service (KMS) and AWS Certificate Manager Private Certificate Authority (ACM PCA). KMS and ACM PCA are fully managed services that are easy to use and integrate. You’ll generally use AWS CloudHSM only if your workload needs a single-tenant HSM under your own control, or if you need cryptographic algorithms that aren’t available in the fully-managed alternatives.

CloudHSM offers several options for you to connect your application to your HSMs, including PKCS#11, Java Cryptography Extensions (JCE), or Microsoft CryptoNG (CNG). Regardless of which library you choose, you’ll use the CloudHSM client to connect to all HSMs in your cluster. The CloudHSM client runs as a daemon, locally on the same Amazon Elastic Compute Cloud (EC2) instance or server as your applications.

The deployment process is straightforward if you’re running your application directly on your compute resource. However, if you want to deploy applications using the HSMs in containers, you’ll need to make some adjustments to the installation and execution of your application and the CloudHSM components it depends on. Docker containers don’t typically include access to an init process like systemd or upstart. This means that you can’t start the CloudHSM client service from within the container using the general instructions provided by CloudHSM. You also can’t run the CloudHSM client service remotely and connect to it from the containers, as the client daemon listens to your application using a local Unix Domain Socket. You cannot connect to this socket remotely from outside the EC2 instance network namespace.

This blog post discusses the workaround that you’ll need in order to configure your container and start the client daemon so that you can utilize CloudHSM-based applications with containers. Specifically, in this post, I’ll show you how to run the CloudHSM client daemon from within a Docker container without needing to start the service. This enables you to use Docker to develop, deploy and run applications using the CloudHSM software libraries, and it also gives you the ability to manage and orchestrate workloads using tools and services like Amazon Elastic Container Service (Amazon ECS), Kubernetes, Amazon Elastic Container Service for Kubernetes (Amazon EKS), and Jenkins.

Solution overview

My solution shows you how to create a proof-of-concept sample Docker container that is configured to run the CloudHSM client daemon. When the daemon is up and running, it runs the AESGCMEncryptDecryptRunner Java class, available on the AWS CloudHSM Java JCE samples repo. This class uses CloudHSM to generate an AES key, then it uses the key to encrypt and decrypt randomly generated data.

Note: In my example, you must manually enter the crypto user (CU) credentials as environment variables when running the container. For any production workload, you’ll need to carefully consider how to provide, secure, and automate the handling and distribution of your HSM credentials. You should work with your security or compliance officer to ensure that you’re using an appropriate method of securing HSM login credentials for your application and security needs.

Figure 1: Architectural diagram

Prerequisites

To implement my solution, I recommend that you have basic knowledge of the following:

  • CloudHSM
  • Docker
  • Java

Here’s what you’ll need to follow along with my example:

  1. An active CloudHSM cluster with at least one active HSM. You can follow the Getting Started Guide to create and initialize a CloudHSM cluster. (Note that for any production cluster, you should have at least two active HSMs spread across Availability Zones.)
  2. An Amazon Linux 2 EC2 instance in the same Amazon Virtual Private Cloud in which you created your CloudHSM cluster. The EC2 instance must have the CloudHSM cluster security group attached—this security group is automatically created during the cluster initialization and is used to control access to the HSMs. You can learn about attaching security groups to allow EC2 instances to connect to your HSMs in our online documentation.
  3. A CloudHSM crypto user (CU) account created on your HSM. You can create a CU by following these user guide steps; a brief sketch also follows this list.
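
If you still need to create the CU, here's a minimal sketch using cloudhsm_mgmt_util from an instance that can reach your cluster. It assumes the cluster is already activated and that admin is your crypto officer (CO) account; the user name and password are placeholders.

/opt/cloudhsm/bin/cloudhsm_mgmt_util /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg
aws-cloudhsm>enable_e2e
aws-cloudhsm>loginHSM CO admin <PASSWORD>
aws-cloudhsm>createUser CU <USERNAME> <PASSWORD>
aws-cloudhsm>quit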

Solution details

  1. On your Amazon Linux EC2 instance, install Docker:
    
            # sudo yum -y install docker
            

  2. Start the docker service:
    
            # sudo service docker start
            

  3. Create a new directory and step into it. In my example, I use a directory named “cloudhsm_container.” You’ll use the new directory to configure the Docker image.
    
            # mkdir cloudhsm_container
            # cd cloudhsm_container           
            

  4. Copy the CloudHSM cluster’s CA certificate (customerCA.crt) to the directory you just created. You can find the CA certificate on any working CloudHSM client instance under the path /opt/cloudhsm/etc/customerCA.crt. This certificate is created during initialization of the CloudHSM Cluster and is needed to connect to the CloudHSM cluster.
  5. In your new directory, create a new file with the name run_sample.sh that includes the contents below. The script starts the CloudHSM client daemon, waits until the daemon process is running and ready, and then runs the Java class that is used to generate an AES key to encrypt and decrypt your data.
    
            #! /bin/bash
    
            # start cloudhsm client
            echo -n "* Starting CloudHSM client ... "
            /opt/cloudhsm/bin/cloudhsm_client /opt/cloudhsm/etc/cloudhsm_client.cfg &> /tmp/cloudhsm_client_start.log &
            
            # wait for startup
            while true
            do
                if grep 'libevmulti_init: Ready !' /tmp/cloudhsm_client_start.log &> /dev/null
                then
                    echo "[OK]"
                    break
                fi
                sleep 0.5
            done
            echo -e "\n* CloudHSM client started successfully ... \n"
            
            # start application
            echo -e "\n* Running application ... \n"
            
            java -ea -Djava.library.path=/opt/cloudhsm/lib/ -jar target/assembly/aesgcm-runner.jar --method environment
            
            echo -e "\n* Application completed successfully ... \n"                      
            

  6. In the new directory, create another new file and name it Dockerfile (with no extension). This file will specify that the Docker image is built with the following components:
    • The AWS CloudHSM client package.
    • The AWS CloudHSM Java JCE package.
    • OpenJDK 1.8. This is needed to compile and run the Java classes and JAR files.
    • Maven, a build automation tool that is needed to assist with building the Java classes and JAR files.
    • The AWS CloudHSM Java JCE samples that will be downloaded and built.
  7. Cut and paste the contents below into Dockerfile.

    Note: Make sure to replace the HSM_IP line with the IP of an HSM in your CloudHSM cluster. You can get your HSM IPs from the CloudHSM console, or by running the describe-clusters AWS CLI command (see the sketch after the Dockerfile below).

    
            # Use the amazon linux image
            FROM amazonlinux:2
            
            # Install CloudHSM client
            RUN yum install -y https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/EL7/cloudhsm-client-latest.el7.x86_64.rpm
            
            # Install CloudHSM Java library
            RUN yum install -y https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/EL7/cloudhsm-client-jce-latest.el7.x86_64.rpm
            
            # Install Java, Maven, wget, unzip and ncurses-compat-libs
            RUN yum install -y java maven wget unzip ncurses-compat-libs
            
            # Create a work dir
            WORKDIR /app
            
            # Download sample code
            RUN wget https://github.com/aws-samples/aws-cloudhsm-jce-examples/archive/master.zip
            
            # unzip sample code
            RUN unzip master.zip
            
            # Change to the create directory
            WORKDIR aws-cloudhsm-jce-examples-master
            
            # Build JAR files
            RUN mvn validate && mvn clean package
            
            # Set HSM IP as an environmental variable
            ENV HSM_IP <insert the IP address of an active CloudHSM instance here>
            
            # Configure the cloudhsm-client
            COPY customerCA.crt /opt/cloudhsm/etc/
            RUN /opt/cloudhsm/bin/configure -a $HSM_IP
            
            # Copy the run_sample.sh script
            COPY run_sample.sh .
            
            # Run the script
            CMD ["bash","run_sample.sh"]                        
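
    If you don't have an HSM IP handy, one way to list the ENI IPs of the HSMs in your clusters from the AWS CLI is the describe-clusters query below; this assumes your CLI credentials are allowed to call that API.

            # aws cloudhsmv2 describe-clusters --query 'Clusters[*].Hsms[*].EniIp' --output text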
            

  8. Now you're ready to build the Docker image. Use the following command, which names the image jce_sample_client and builds it from the Dockerfile you created in step 6.
    
            # sudo docker build -t jce_sample_client .
            

  9. To run a Docker container from the Docker image you just created, use the following command. Make sure to replace the user and password with your actual CU username and password. (If you need help setting up your CU credentials, see prerequisite 3. For more information on how to provide CU credentials to the AWS CloudHSM Java JCE Library, refer to the steps in the CloudHSM user guide.)
    
            # sudo docker run --env HSM_PARTITION=PARTITION_1 \
            --env HSM_USER=<user> \
            --env HSM_PASSWORD=<password> \
            jce_sample_client
            

    If successful, the output should look like this:

    
            * Starting cloudhsm-client ... [OK]
            
            * cloudhsm-client started successfully ...
            
            * Running application ...
            
            ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors 
            to the console.
            70132FAC146BFA41697E164500000000
            Successful decryption
                SDK Version: 2.03
            
            * Application completed successfully ...          
            

Conclusion

My solution provides an example of how to run CloudHSM workloads on Docker containers. You can use it as a reference to implement your cryptographic application in a way that benefits from the high availability and load balancing built in to AWS CloudHSM without compromising on the flexibility that Docker provides for developing, deploying, and running applications. If you have comments about this post, submit them in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mohamed AboElKheir

Mohamed AboElKheir joined AWS in September 2017 as a Security CSE (Cloud Support Engineer) based in Cape Town. He is a subject matter expert for CloudHSM and is always enthusiastic about assisting CloudHSM customers with advanced issues and use cases. Mohamed is passionate about InfoSec, specifically cryptography, penetration testing (he’s OSCP certified), application security, and cloud security (he’s AWS Security Specialty certified).

AWS re:Invent Security Recap: Launches, Enhancements, and Takeaways

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-reinvent-security-recap-launches-enhancements-and-takeaways/

For more from Steve, follow him on Twitter

Customers continue to tell me that our AWS re:Invent conference is a winner. It's a place where they can learn, meet their peers, and rediscover the art of the possible. Of course, there is always an air of anticipation around what new AWS service releases will be announced. This time around, we went even bigger than we ever have before. There were over 50,000 people in attendance, spread across the Las Vegas strip, with over 2,000 breakout sessions and jam-packed hands-on learning opportunities, including multi-day hackathons, workshops, and bootcamps.

A big part of all this activity included sharing knowledge about the latest AWS Security, Identity and Compliance services and features, as well as announcing new technology that we're excited to see adopted so quickly across so many use cases.

Here are the top Security, Identity and Compliance releases from re:Invent 2018:

Keynotes: All that’s new

New AWS offerings provide more prescriptive guidance

The AWS re:Invent keynotes from Andy Jassy, Werner Vogels, and Peter DeSantis, as well as my own leadership session, featured the following new releases and service enhancements. We continue to strive to make architecting easier for developers, as well as our partners and our customers, so they stay secure as they build and innovate in the cloud.

  • We launched several prescriptive security services to assist developers and customers in understanding and managing their security and compliance postures in real time. My favorite new service is AWS Security Hub, which helps you centrally manage your security and compliance controls. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie, as well as from AWS Partner solutions. Findings are visually summarized on integrated dashboards with actionable graphs and tables. You can also continuously monitor your environment using automated compliance checks based on the AWS best practices and industry standards your organization follows. You can get started with AWS Security Hub with just a few clicks in the Management Console; once enabled, Security Hub will begin aggregating and prioritizing findings. You can enable Security Hub on a single account with one click in the AWS Security Hub console or with a single API call.
  • Another prescriptive service we launched is called AWS Control Tower. One of the first things customers think about when moving to the cloud is how to set up a landing zone for their data. AWS Control Tower removes the guesswork, automating the set-up of an AWS landing zone that is secure, well-architected and supports multiple accounts. AWS Control Tower does this by using a set of blueprints that embody AWS best practices. Guardrails, both mandatory and recommended, are available for high-level, rule-based governance, allowing you to have the right operational control over your accounts. An integrated dashboard enables you to keep a watchful eye over the accounts provisioned, the guardrails that are enabled, and your overall compliance status. Sign up for the Control Tower preview, here.
  • The third prescriptive service, called AWS Lake Formation, will reduce your data lake build time from months to days. Prior to AWS Lake Formation, setting up a data lake involved numerous granular tasks. Creating a data lake with Lake Formation is as simple as defining where your data resides and what data access and security policies you want to apply. Lake Formation then collects and catalogs data from databases and object storage, moves the data into your new Amazon S3 data lake, cleans and classifies data using machine learning algorithms, and secures access to your sensitive data. Get started with a preview of AWS Lake Formation, here.
  • Next up, IoT Greengrass enables enhanced security through hardware root of trusted private key storage on hardware secure elements including Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs). Storing your private key on a hardware secure element adds hardware root of trust level-security to existing AWS IoT Greengrass security features that include X.509 certificates for TLS mutual authentication and encryption of data both in transit and at rest. You can also use the hardware secure element to protect secrets that you deploy to your AWS IoT Greengrass device using AWS IoT Greengrass Secrets Manager. To try these security enhancements for yourself, check out https://aws.amazon.com/greengrass/.
  • You can now use the AWS Key Management Service (KMS) custom key store feature to gain more control over your KMS keys. Previously, KMS offered the ability to store keys in shared HSMs managed by KMS. However, we heard from customers that their needs were more nuanced. In particular, they needed to manage keys in single-tenant HSMs under their exclusive control. With KMS custom key store, you can configure your own CloudHSM cluster and authorize KMS to use it as a dedicated key store for your keys. Then, when you create keys in KMS, you can choose to generate the key material in your CloudHSM cluster. Get started with KMS custom key store by following the steps in this blog post.
  • We’re excited to announce the release of ATO on AWS to help customers and partners speed up the FedRAMP approval process (which has traditionally taken SaaS providers up to 2 years to complete). We’ve already had customers, such as Smartsheet, complete the process in less than 90 days with ATO on AWS. Customers will have access to training, tools, pre-built CloudFormation templates, control implementation details, and pre-built artifacts. Additionally, customers are able to access direct engagement and guidance from AWS compliance specialists and support from expert AWS consulting and technology partners who are a part of our Security Automation and Orchestration (SAO) initiative, including GitHub, Yubico, RedHat, Splunk, Allgress, Puppet, Trend Micro, Telos, CloudCheckr, Saint, Center for Internet Security (CIS), OKTA, Barracuda, Anitian, Kratos, and Coalfire. To get started with ATO on AWS, contact the AWS partner team at [email protected].
  • Finally, I announced our first conference dedicated to cloud security, identity and compliance: AWS re:Inforce. The inaugural AWS re:Inforce, a hands-on gathering of like-minded security professionals, will take place in Boston, MA on June 25th and 26th, 2019 at the Boston Convention and Exhibition Center. The cost for a full conference pass will be $1,099. I’m hoping to see you all there. Sign up here to be notified of when registration opens.

Key re:Invent Takeaways

AWS is here to help you build

  1. Customers want to innovate, and cloud needs to securely enable this. Companies need to be able to innovate to meet rapidly evolving consumer demands. This means they need cloud security capabilities they can rely on to meet their specific security requirements, while allowing them to continue to meet and exceed customer expectations. AWS Lake Formation, AWS Control Tower, and AWS Security Hub aggregate and automate otherwise manual processes involved with setting up a secure and compliant cloud environment, giving customers greater flexibility to innovate, create, and manage their businesses.
  2. Cloud Security is as much art as it is science. Getting to what you really need to know about your security posture can be a challenge. At AWS, we’ve found that the sweet spot lies in services and features that enable you to continuously gain greater depth of knowledge into your security posture, while automating mission critical tasks that relieve you from having to constantly monitor your infrastructure. This manifests itself in having an end-to-end automated remediation workflow. I spent some time covering this in my re:Invent session, and will continue to advocate using a combination of services, such as AWS Lambda, WAF, S3, AWS CloudTrail, and AWS Config to proactively identify, mitigate, and remediate threats that may arise as your infrastructure evolves.
  3. Remove human access to data. I’ve set a goal at AWS to reduce human access to data by 80%. While that number may sound lofty, it’s purposeful, because the only way to achieve this is through automation. There have been a number of security incidents in the news across industries, ranging from inappropriate access to personal information in healthcare, to credential stuffing in financial services. The way to protect against such incidents? Automate key security measures and minimize your attack surface by enabling access control and credential management with services like AWS IAM and AWS Secrets Manager. Additional gains can be found by leveraging threat intelligence through continuous monitoring of incidents via services such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie (intelligence from these services will now be available in AWS Security Hub).
  4. Get your leadership on board with your security plan. We offer 500+ security services and features; however, new services and technology can’t be wholly responsible for implementing reliable security measures. Security teams need to set expectations with leadership early, aligning on a number of critical protocols, including how to restrict and monitor human access to data, patching and log retention duration, credential lifespan, blast radius reduction, embedded encryption throughout AWS architecture, and canaries and invariants for security functionality. It’s also important to set security Key Performance Indicators (KPIs) to continuously track. At AWS, we monitor the number of AppSec reviews, how many security checks we can automate, third-party compliance audits, metrics on internal time spent, and conformity with Service Level Agreements (SLAs). While the needs of your business may vary, we find baseline KPIs to be consistent measures of security assurance that can be easily communicated to leadership.

Final Thoughts

Queen’s famous lyric, “I want it all, I want it all, and I want it now,” accurately captures the sentiment at re:Invent this year. Security will always be job zero for us, and we continue to iterate on behalf of customers so they can securely build, experiment and create … right now! AWS is trusted by many of the world’s most risk-sensitive organizations precisely because we have demonstrated this unwavering commitment to putting security above all. Still, I believe we are in the early days of innovation and adoption of the cloud, and I look forward to seeing both the gains and use cases that come out of our latest batch of tools and services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds five patents in the field of cloud security architecture. Follow Steve on Twitter

Are KMS custom key stores right for you?

Post Syndicated from Richard Moulds original https://aws.amazon.com/blogs/security/are-kms-custom-key-stores-right-for-you/

You can use the AWS Key Management Service (KMS) custom key store feature to gain more control over your KMS keys. The KMS custom key store integrates KMS with AWS CloudHSM to help satisfy compliance obligations that would otherwise require the use of on-premises hardware security modules (HSMs) while providing the AWS service integrations of KMS. However, the additional control comes with increased cost and potential impact on performance and availability. This post will help you decide if this feature is the best approach for you.

KMS is a fully managed service that generates encryption keys and helps you manage their use across more than 45 AWS services. It also supports the AWS Encryption SDK and other client-side encryption tools, and you can integrate it into your own applications. KMS is designed to meet the requirements of the vast majority of AWS customers. However, there are situations where customers need to manage their keys in single-tenant HSMs that they exclusively control. Previously, KMS did not meet these requirements since it offered only the ability to store keys in shared HSMs that are managed by KMS.

AWS CloudHSM is a service that’s primarily intended to support customer-managed applications that are specifically designed to use HSMs. It provides direct control over HSM resources, but the service isn’t, by itself, widely integrated with other AWS managed services. Before custom key store, this meant that if you required direct control of your HSMs but still wanted to use and store regulated data in AWS managed services, you had to choose between changing those requirements, not using a given AWS service, or building your own solution. KMS custom key store gives you another option.

How does a custom key store work?

With custom key store, you can configure your own CloudHSM cluster and authorize KMS to use it as a dedicated key store for your keys rather than the default KMS key store. Then, when you create keys in KMS, you can choose to generate the key material in your CloudHSM cluster. Your KMS customer master keys (CMKs) never leave the CloudHSM instances, and all KMS operations that use those keys are only performed in your HSMs. In all other respects, the master keys stored in your custom key store are used in a way that is consistent with other KMS CMKs.

This diagram illustrates the primary components of the service and shows how a cluster of two CloudHSM instances is connected to KMS to create a customer controlled key store.

Figure 1: A cluster of two CloudHSM instances is connected to the KMS front-end hosts to create a customer controlled key store

Because you control your CloudHSM cluster, you can take direct action to manage certain aspects of the lifecycle of your keys, independently of KMS. Specifically, you can verify that KMS correctly created keys in your HSMs and you can delete key material and restore keys from backup at any time. You can also choose to connect and disconnect the CloudHSM cluster from KMS, effectively isolating your keys from KMS. However, with more control comes more responsibility. It’s important that you understand the availability and durability impact of using this feature, and I discuss the issues in the next section.

Decision criteria

KMS customers who plan to use a custom key store tell us they expect to use the feature selectively, deciding on a key-by-key basis where to store them. To help you decide if and how you might use the new feature, here are some important issues to consider.

Here are some reasons you might want to store a key in a custom key store:

  • You have keys that are required to be protected in a single-tenant HSM or in an HSM over which you have direct control.
  • You have keys that are explicitly required to be stored in an HSM validated at FIPS 140-2 level 3 overall (the HSMs used in the default KMS key store are validated to level 2 overall, with level 3 in several categories, including physical security).
  • You have keys that are required to be auditable independently of KMS.

And here are some considerations that might influence your decision to use a custom key store:

  • Cost — Each custom key store requires that your CloudHSM cluster contains at least two HSMs. CloudHSM charges vary by region, but you should expect costs of at least $1,000 per month, per HSM, if each device is permanently provisioned. This cost occurs regardless of whether you make any requests of the KMS API directly or indirectly through an AWS service.
  • Performance — The number of HSMs determines the rate at which keys can be used. It’s important that you understand the intended usage patterns for your keys and ensure that you have provisioned your HSM resources appropriately.
  • Availability — The number of HSMs and the use of Availability Zones (AZs) impacts the availability of your cluster and, therefore, your keys. You must understand and assess the risk that a configuration error results in a custom key store being disconnected, or in key material being deleted and becoming unrecoverable.
  • Operations — By using the custom key store feature, you will perform certain tasks that are normally handled by KMS. You will need to set up HSM clusters, configure HSM users, and potentially restore HSMs from backup. These are security-sensitive tasks for which you should have the appropriate resources and organizational controls in place to perform.

Getting Started

Here’s a basic rundown of the steps that you’ll take to create your first key in a custom key store within a given region.

  1. Create your CloudHSM cluster, initialize it, and add HSMs to the cluster. If you already have a CloudHSM cluster, you can use it as a custom key store in addition to your existing applications.
  2. Create a CloudHSM user so that KMS can access your cluster to create and use keys.
  3. Create a custom key store entry in KMS, give it a name, define which CloudHSM cluster you want it to use, and give KMS the credentials to access your cluster.
  4. Instruct KMS to make a connection to your cluster and log in.
  5. Create a CMK in KMS in the usual way except now select CloudHSM as the source of your key material. You’ll define administrators, users, and policies for the key as you would for any other CMK.
  6. Use the key via the existing KMS APIs, AWS CLI, or the AWS Encryption SDK. Requests to use the key don't need to be aware of whether the key is stored in a custom key store or the default KMS key store. A brief CLI sketch of steps 3 through 6 follows this list.
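
As a rough CLI sketch of steps 3 through 6 (the key store name is illustrative; the IDs, password, and file name are placeholders you'd replace with your own values):

$ aws kms create-custom-key-store --custom-key-store-name ExampleKeyStore --cloud-hsm-cluster-id <CLUSTER ID> --trust-anchor-certificate file://customerCA.crt --key-store-password <kmsuser PASSWORD>

$ aws kms connect-custom-key-store --custom-key-store-id <KEY STORE ID>

$ aws kms create-key --origin AWS_CLOUDHSM --custom-key-store-id <KEY STORE ID>

The password here belongs to the kmsuser crypto user that KMS uses to access your cluster, created in step 2.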

Summary

Some customers need specific controls in place before they can use KMS to manage encryption keys in AWS. The new KMS custom key store feature is intended to satisfy that requirement. You can now apply the controls provided by CloudHSM to keys managed in KMS, without changing access control policies or service integration.

However, by using the new feature, you take responsibility for certain operational aspects that would otherwise be handled by KMS. It’s important that you have the appropriate controls in place and understand the performance and availability requirements of each key that you create in a custom key store.

If you’ve been prevented from migrating sensitive data to AWS because of specific key management requirements that are currently not met by KMS, consider using the new KMS custom key store feature.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Key Management Service discussion forum.

Want more AWS Security news? Follow us on Twitter.

Author

Richard Moulds

Richard is a Principal Product Manager at AWS. He’s a member of the KMS team and is primarily focused on helping to define the product roadmap and satisfy customer requirements. Richard has more than 15 years experience in helping customers build encryption and key management systems to protect their data. His attraction to cryptography stems from the challenge of taking such a complex subject and translating it into simple solutions that customers should be able to take for granted, on a grand scale. When he’s not thinking ahead he’s focused on the past, restoring classic cars, the more rust the better.

How to clone an AWS CloudHSM cluster across regions

Post Syndicated from Tracy Pierce original https://aws.amazon.com/blogs/security/how-to-clone-an-aws-cloudhsm-cluster-across-regions/

You can use AWS CloudHSM to generate, store, import, export, and manage your cryptographic keys. It also supports hash functions for computing message digests and hash-based message authentication codes (HMACs), and it can cryptographically sign data and verify signatures. To help ensure redundancy of data and simplify the disaster recovery process, you'll typically clone your AWS CloudHSM cluster into a different AWS region. This allows you to synchronize keys, including non-exportable keys, across regions. Non-exportable keys are keys that can never leave the CloudHSM device in plaintext. They reside on the CloudHSM device and are encrypted for security purposes.

You clone a cluster to another region in a two-step process. First, you copy a backup to the destination region. Second, you create a new cluster from this backup. In this post, I’ll show you how to set up one cluster in region 1, and how to use the new CopyBackupToRegion feature to clone the cluster and hardware security modules (HSMs) to a virtual private cloud (VPC) in region 2.

Note: This post doesn’t include instructions on how to set up a cross-region VPC to synchronize HSMs across the two cloned clusters. If you need to do that, read this article.

Solution overview

To complete this solution, you can use either the AWS Command Line Interface (AWS CLI) or the AWS CloudHSM API. For this post, I'll use the AWS CLI to copy the cluster backup from region 1 to region 2, and then I'll launch a new cluster from that copied backup.

The following diagram illustrates the process covered in the post.

Figure 1: Architecture diagram

Here’s how the process works:

  1. AWS CloudHSM creates a backup of the cluster and stores it in an S3 bucket owned by AWS CloudHSM.
  2. You run the CLI/API command to copy the backup to another AWS region.
  3. When the backup is completed, you use that backup to then create a cluster and HSMs.
Note: Backups can't be copied into or out of AWS GovCloud (US) because it's a restricted region.

As with all cluster backups, when you copy the backup to a new AWS region, it’s stored in an Amazon S3 bucket owned by an AWS CloudHSM account. AWS CloudHSM manages the security and storage of cluster backups for you. This means the backup in both regions will also have the durability of Amazon S3, which is 99.999999999%. The backup in region 2 will also be encrypted and secured in the same way as your backup in region 1. You can read more about the encryption process of your AWS CloudHSM backups here.

Any HSMs created in this cloned cluster will have the same users and keys as the original cluster at the time the backup was taken. From this point on, you must manually keep the cloned clusters in sync. Specifically:

  • If you create users after creating your new cluster from the backup, you must create them on both clusters manually.
  • If you change the password for a user in one cluster, you must change the password on the cloned clusters to match.
  • If you create more keys in one cluster, you must sync them to at least one HSM in the cloned cluster. Note that after you sync the key from cluster 1 to cluster 2, the CloudHSM automated cluster synchronization will take care of syncing the keys within the 2nd cluster.

Prerequisites

Some items that will need to be in place for this to work are:

Important note: Syncing keys across clusters in more than one region will only work if all clusters are created from the same backup. This is because synchronization requires the same secret key, called a masking key, to be present on the source and destination HSM. The masking key is specific to each cluster. It can’t be exported, and can’t be used for any purpose other than synchronizing keys across HSMs in a cluster.

Step 1: Create your first cluster in region 1

Follow the links in each sub-step below to the documentation page for more information and setup requirements:

  1. Create the cluster. To do this, you will run the command below via CLI. You will want to replace the placeholder <SUBNET ID 1> with one of your private subnets.
    $ aws cloudhsmv2 create-cluster --hsm-type hsm1.medium --subnet-ids <SUBNET ID 1>
  2. Launch your Amazon Elastic Compute Cloud (Amazon EC2) client (in the public subnet). You can follow the steps here to launch an EC2 Instance.
  3. Create the first HSM (in the private subnet). To do this, you will run the command below via CLI. You will want to replace the placeholder <CLUSTER ID> with the ID given from the ‘Create the cluster’ command above. You’ll replace <AVAILABILITY ZONE> with the AZ matching your private subnet. For example, us-east-1a.
    $ aws cloudhsmv2 create-hsm --cluster-id <CLUSTER ID> --availability-zone <AVAILABILITY ZONE>
  4. Initialize the cluster. Initializing your cluster requires creating a self-signed certificate and using that to sign the cluster’s Certificate Signing Request (CSR). You can view an example here of how to create and use a self-signed certificate. Once you have your certificate, you will run the command below to initialize the cluster with it. You will want to replace the placeholder <CLUSTER ID> with your cluster id from step 1.
    $ aws cloudhsmv2 initialize-cluster --cluster-id <CLUSTER ID> --signed-cert file://<CLUSTER ID>_CustomerHsmCertificate.crt --trust-anchor file://customerCA.crt

    Note: Don’t forget to place a copy of the certificate used to sign your cluster’s CSR into the /opt/cloudhsm/etc directory to ensure a continued secure connection.

  5. Install the cloudhsm-client software. Once the Amazon EC2 client is launched, you’ll need to download and install the cloudhsm-client software. You can do this by running the command below from the CLI:
    wget https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/EL6/cloudhsm-client-latest.el6.x86_64.rpm

    Once downloaded, you’ll install by running this command:
    sudo yum install -y ./cloudhsm-client-latest.el6.x86_64.rpm

  6. The last step in initializing the cluster requires you to configure the cloudhsm-client to point to the ENI IP of your first HSM. You do this on your EC2 client by running this command:
    $ sudo /opt/cloudhsm/bin/configure -a <IP ADDRESS>
    Replace the <IP ADDRESS> placeholder with your HSM’s ENI IP. The cloudhsm-client comes pre-installed with a Python script called “configure” located in the /opt/cloudhsm/bin/ directory. This will update your /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg and
    /opt/cloudhsm/etc/cloudhsm_client.cfg files with your HSM’s IP address. This ensures your client can connect to your cluster.
  7. Activate the cluster. To activate, you must launch the cloudhsm-client by running this command, which logs you into the cluster:

    $ /opt/cloudhsm/bin/cloudhsm_mgmt_util /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg

    Then, you need to enable the secure communication by running this command:

    aws-cloudhsm>enable_e2e

    If you’ve placed the certificate in the correct directory, you should see a response like this on the command line:

    E2E enabled on server 0(server1)

    If you run the command listUsers you’ll see a PRECO user:

    
    aws-cloudhsm>listUsers
    Users on server 0(server1):
    Number of users found:2
    	User ID		User Type	User Name
    	     1		PRECO		admin
    	     2		AU		app_user
    
    

    Change the password for this user to complete the activation process. You do this by first logging in using the command below:

    aws-cloudhsm>loginHSM PRECO admin password

    Once logged in, change the password using this command:

    
    aws-cloudhsm>changePswd PRECO admin <NEW PASSWORD>
    ***************************CAUTION******************************
    This is a CRITICAL operation, should be done on all nodes in the
    cluster. Cav server does NOT synchronize these changes with the 
    nodes on which this operation is not executed or failed, please
    ensure this operation is executed on all nodes in the cluster.
    ****************************************************************
    
    Do you want to continue(y/n)?Y
    Changing password for admin(PRECO) on 1 nodes
    

    Once completed, log out using the command logout, then log back in with the new password, using the command loginHSM PRECO admin <NEW PASSWORD>.

    Doing this allows you to create the first crypto user (CU). You create the user by running the command:
    aws-cloudhsm>createUser <USERTYPE (ex: CO, CU)> <USERNAME> <PASSWORD>
    Replace the placeholder values in this command. The <USERTYPE> can be a CO (crypto officer) or a CU (crypto user); you can find more information about user types here. Replace the placeholders <USERNAME> <PASSWORD> with a real user name and password combination. Crypto users are permitted to create and share keys on the CloudHSM.

    Run the command quit to exit this tool.

Step 2: Trigger a backup of your cluster

To trigger a backup that will be copied to region 2 to create your new cluster, add an HSM to your cluster in region 1. You can do this via the console or CLI. The backup that is created will contain all users (COs, CUs, and appliance users), all key material on the HSMs, and the configurations and policies associated with them. The user portion is extremely important because keys can only be synced across clusters to the same user. Make a note of the backup ID because you will need it later. You can find this by logging into the AWS console and navigating to the CloudHSM console, then selecting Backups. There will be a list of backup IDs, cluster IDs, and creation times. Make sure to select the backup ID specifically created for the cross-region copy.
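
As a sketch, triggering the backup from the CLI reuses the create-hsm command from step 1, and describe-backups then lists the resulting backup ID; replace the placeholders with your own values.

$ aws cloudhsmv2 create-hsm --cluster-id <CLUSTER ID> --availability-zone <AVAILABILITY ZONE>

$ aws cloudhsmv2 describe-backups --filters clusterIds=<CLUSTER ID>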

Step 3: Create a key on your cluster in Region 1

There are many ways to create a key. I’m using key_mgmt_util because it’s an easy and straightforward method using CLI commands instead of SDK libraries. Start by connecting to the EC2 client instance that you launched above and ensuring the cloudhsm-client is running. If you aren’t sure, run this command:

$ sudo start cloudhsm-client

Now, launch the key_mgmt_util by running this command:

$ /opt/cloudhsm/bin/key_mgmt_util

When you see the prompt, log in as a CU to create your key, replacing <USERNAME> and <PASSWORD> with an actual CU user’s username and password:

Command: loginHSM -u CU -s <USERNAME> -p <PASSWORD>

To create the key for this example, we’re going to use the key_mgmt_util to generate a symmetric key. Note the -nex parameter is what makes this key non-exportable. An example command is below:

Command: genSymKey -t 31 -s 32 -l aes256 -nex

In the above command:

  1. genSymKey creates the Symmetric key
  2. -t chooses the key type, which in this case is AES
  3. -s states the key size, which in this case is 32 bytes
  4. -l creates a label to easily recognize the key by
  5. -nex makes the key non-exportable

The HSM will return a key handle. This is used as an identifier to reference the key in future commands. Make a note of the key handle because you will need it later. Here’s an example of the full output in which you can see the key handle provided is 37:


Command:genSymKey -t 31 -s 32 -l aes256 -nex
	Cfm3GenerateSymmetricKey returned: 0x00 : HSM Return: SUCCESS
	Symmetric Key Created.   Key Handle: 37
	Cluster Error Status
	Node id 0 and err state 0x00000000 : HSM Return: SUCCESS

Step 4: Copy your backup from region 1 to region 2 and create a cluster from the backup

To copy your backup from region 1 to region 2, from your EC2 client you’ll need to run the command that appears after these important notes:

  • Make sure the proper permissions are applied for the IAM role or user configured for the CLI. You’ll want to be a CloudHSM administrator for these actions. The instructions here show you how to create an admin user for this process, and here is an example of the permissions policy:
    
    {
       "Version":"2018-06-12",
       "Statement":{
          "Effect":"Allow",
          "Action":[
             "cloudhsm:*",
             "ec2:CreateNetworkInterface",
             "ec2:DescribeNetworkInterfaces",
             "ec2:DescribeNetworkInterfaceAttribute",
             "ec2:DetachNetworkInterface",
             "ec2:DeleteNetworkInterface",
             "ec2:CreateSecurityGroup",
             "ec2:AuthorizeSecurityGroupIngress",
             "ec2:AuthorizeSecurityGroupEgress",
             "ec2:RevokeSecurityGroupEgress",
             "ec2:DescribeSecurityGroups",
             "ec2:DeleteSecurityGroup",
             "ec2:CreateTags",
             "ec2:DescribeVpcs",
             "ec2:DescribeSubnets",
             "iam:CreateServiceLinkedRole"
          ],
          "Resource":"*"
       }
    }
    
    

  • To copy the backup over, you need to know the destination region, the source cluster ID, and/or the source backup ID. You can find the source cluster ID and/or the source backup ID in the CloudHSM console.
  • If you use only the cluster ID, the most recent backup of the associated cluster will be chosen for copy. If you specify the backup ID, that associated backup will be copied. If you don’t know these IDs, run the describe-clusters or describe-backups commands.

Here’s the example command:


$ aws cloudhsmv2 copy-backup-to-region --destination-region <DESTINATION REGION> --cluster-id <CLUSTER ID> --backup-id <BACKUP ID>
{
    "DestinationBackup": {
        "BackupId": "backup-gdnekhcxf4n",
        "CreateTimestamp": 1531742400,
        "BackupState": "CREATE_IN_PROGRESS",
        "SourceCluster": "cluster-kzlczlspnho",
        "SourceBackup": "backup-4kuraxsqetz",
        "SourceRegion": "us-east-1"
    }
}

Once the backup has been copied to region 2, you’ll see a new backup ID in your console. This is what you’ll use to create your new cluster. You can follow the steps here to create your new cluster from this backup. This cluster will launch already initialized for you, but it will still need HSMs added into it to complete the activation process. Make sure you copy over the cluster certificate from the original cluster to the new region. You can do this by opening two terminal sessions, one for each HSM. Open the certificate file on the HSM in cluster 1 and copy it. On the HSM in cluster 2, create a file and paste the certificate over. You can use any text editor you like to do this. This certificate is required to establish the encrypted connection between your client and HSM instances.
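
One simple way to copy the certificate, assuming you have shell access to the client instances in both regions, is to print it on the region 1 client and then paste it into the same path on the region 2 client with your preferred editor:

$ cat /opt/cloudhsm/etc/customerCA.crt

$ sudo vi /opt/cloudhsm/etc/customerCA.crt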

You should also make sure you’ve added the cloned cluster’s Security Group to your EC2 client instance to allow connectivity. You do this by selecting the Security Group for your EC2 client in the EC2 console, and selecting Add rules. You’ll add a rule allowing traffic, with the source being the Security Group ID of your cluster.
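
If you prefer the CLI, a sketch of the equivalent rule is below; CloudHSM client traffic uses TCP ports 2223-2225, and both security group IDs are placeholders.

$ aws ec2 authorize-security-group-ingress --group-id <EC2 CLIENT SG ID> --protocol tcp --port 2223-2225 --source-group <CLUSTER SG ID>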

Finally, take note of the ENI IP for the HSM because you’ll need it later. You can find this in your CloudHSM Console by clicking on the cluster for more information.

Step 5: Create a new configuration file with one ENI IP from both clusters

To sync a key from a cluster in region 1 to a cluster in region 2, you must create a configuration file that contains at least one ENI IP of an HSM in both clusters. This is required to allow the cloudhsm-client to communicate with both clusters at the same time. This is where the masking key we mentioned earlier comes into play as the syncKey command uses that to copy keys between clusters. This is why the cluster in region 2 must be created from a backup of the cluster in region 1. For the new configuration file, I’m going to copy over the original file /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg to a new file. Name this SyncClusters.cfg. You’re going to edit this new configuration file to have the ENI IP of the HSM in the cluster of region 1 and the ENI IP of the HSM in the cluster of region 2. It should look something like this:


{
    "scard": {
        "certificate": "cert-sc",
        "enable": "no",
        "pkey": "pkey-sc",
        "port": 2225
    },
    "servers": [
        {
            "CAfile": "",
            "CApath": "/opt/cloudhsm/etc/certs",
            "certificate": "/opt/cloudhsm/etc/client.crt",
            "e2e_encryption": {
                "enable": "yes",
                "owner_cert_path": "/opt/cloudhsm/etc/customerCA.crt"
            },
            "enable": "yes",
            "hostname": "",
            "name": "",
            "pkey": "/opt/cloudhsm/etc/client.key",
            "port": 2225,
            "server_ssl": "yes",
            "ssl_ciphers": ""
        },
        {
            "CAfile": "",
            "CApath": "/opt/cloudhsm/etc/certs",
            "certificate": "/opt/cloudhsm/etc/client.crt",
            "e2e_encryption": {
                "enable": "yes",
                "owner_cert_path": "/opt/cloudhsm/etc/customerCA.crt"
            },
            "enable": "yes",
            "hostname": "",
            "name": "",
            "pkey": "/opt/cloudhsm/etc/client.key",
            "port": 2225,
            "server_ssl": "yes",
            "ssl_ciphers": ""
        }
    ]
}

To verify connectivity to both clusters, start the cloudhsm-client using the modified configuration file. The command will look similar to this:

$ /opt/cloudhsm/bin/cloudhsm_mgmt_util /opt/cloudhsm/etc/SyncClusters.cfg

After connection, you should see something similar to this, with one IP from cluster 1 and one IP from cluster 2:


Connecting to the server(s), it may take time
depending on the server(s) load, please wait...
Connecting to server '<CLUSTER-1-IP>': hostname '<CLUSTER-1-IP>', port 2225...
Connected to server '<CLUSTER-1-IP>': hostname '<CLUSTER-1-IP>', port 2225.
Connecting to server '<CLUSTER-2-IP>': hostname '<CLUSTER-2-IP>', port 2225...
Connected to server '<CLUSTER-2-IP>': hostname '<CLUSTER-2-IP>', port 2225.

If you run the command info server from the prompt, you’ll see a list of servers your client is connected to. Make note of these because they’ll be important when syncing your keys. Typically, you’ll see server 0 as your first HSM in cluster 1 and server 1 as your first HSM in cluster 2.

Step 6: Sync your key from the cluster in region 1 to the cluster in region 2

You’re ready to sync your keys. Make sure you’ve logged in as the Crypto Officer (CO) user. Only the CO user can perform management functions on the cluster (for example, syncing keys).

Note: These steps are all performed at the server prompt, not the aws-cloudhsm prompt.
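
For example, assuming admin is your CO account, a sketch of logging in and switching to a server context looks like this:

aws-cloudhsm>loginHSM CO admin <PASSWORD>

aws-cloudhsm>server 0

server0>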

First, run the command listUsers to get the user IDs of the user that created the keys. Here’s an example:


server0>listUsers
Users on server 0(<CLUSTER-1-IP>):
Number of users found:3

User Id    User Type	User Name     MofnPubKey     LoginFailureCnt	 2FA
1		CO   	admin         NO               0	 	 NO
2		AU   	app_user      NO               0	 	 NO
3		CU   	<USERNAME>    NO               0	 	 NO

Make note of the user ID because you’ll need it later; in this case, it’s 3. Now, you need to see the key handles that you want to sync. You either noted this from earlier, or you can find this by running the findAllKeys command with the parameter for user 3. Here’s an example:


server0>findAllKeys 3 0
Keys on server 0(<CLUSTER-1-IP>):
Number of keys found 1
number of keys matched from start index 0::1
37
findAllKeys success

In this case, the key handle I want to sync is 37. When running the command syncKey, you’ll input the key handle and the server you want to sync it to (the destination server). Here’s an example:

server0>syncKey 37 1

In this example, 37 is the key handle, and 1 is the destination HSM. You’ll run the exit command to back out to the cluster prompt, and from here you can run findAllKeys again, which should show the same key handle on both clusters.


aws-cloudhsm>findAllKeys 3 0
Keys on server 0(<CLUSTER-1-IP>):
Number of keys found 1
number of keys matched from start index 0::1
37
findAllKeys success on server 0(<CLUSTER-1-IP>)
Keys on server 1(<CLUSTER-2-IP>):
Number of keys found 1
number of keys matched from start index 0::1
37
findAllKeys success on server 1(<CLUSTER-2-IP>)

Repeat this process with all keys you want to sync between clusters.

Summary

I walked you through how to create a cluster, trigger a backup, copy that backup to a new region, launch a new cluster from that backup, and then sync keys across clusters. This will help reduce disaster recovery time, while helping to ensure that your keys are secure in multiple regions should a failure occur.

Remember to always manually update users across clusters after the initial backup copy and cluster creation, because these updates aren't automatic. You must also run the syncKey command on any keys created afterward.

You’re now set up for fault tolerance in your AWS CloudHSM environment.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS CloudHSM forum.

Want more AWS Security news? Follow us on Twitter.

Author

Tracy Pierce

Tracy Pierce is a Senior Cloud Support Engineer at AWS. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority and she shows this by improving processes, documentation, and building tutorials. She has her AS in Computer Security & Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls at request.

Using AWS CloudHSM-backed certificates with Microsoft Internet Information Server

Post Syndicated from Jeff Levine original https://aws.amazon.com/blogs/security/using-aws-cloudhsm-backed-certificates-with-microsoft-internet-information-server/

SSL/TLS certificates are used to create encrypted sessions to endpoints such as web servers. If you want to get an SSL certificate, you usually start by creating a private key and a corresponding certificate signing request (CSR). You then send the CSR to a certificate authority (CA) and receive a certificate. When a user seeks to securely browse your website, the web server presents the certificate to the user's browser. The user's browser verifies that the certificate chains to a trusted root certificate, and then uses the public key in the certificate to negotiate encryption of the session data; the web server completes that negotiation with the corresponding private key. The security of this protocol relies on the private key remaining private.

A common problem with web servers, however, is that the private key and the certificate reside on the web server itself. If an attacker compromises the server and steals both the private key and certificate, the attacker could potentially use them to intercept sessions or create an imposter server. If your business requires the use of hardware security modules for key storage, you can use AWS CloudHSM to generate and hold the private keys for web server certificates rather than placing them on the web server directly, reducing your exposure. On Windows, AWS CloudHSM integrates with Microsoft Internet Information Server (IIS) through the Key Storage Provider extensions to the Microsoft Cryptography Next Generation API (KSP/CNG). In this blog post, I will show you how to use the CloudHSM KSP/CNG capabilities to keep the private key of an IIS-hosted website inside the hardware security module (HSM).

Prerequisites and assumptions

You need the following for this walkthrough.

  • An AWS account and permissions to provision Amazon Virtual Private Cloud (Amazon VPC), Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Route 53, and Amazon CloudHSM. I recommend using an AWS user account ID that has full administrative privileges.
  • A region that supports CloudHSM. I’m using the eu-west-1 (Ireland) region.
  • A domain to use for certificate generation. I’m using the domain example.com.
  • A trusted public certificate authority (CA).
  • A working knowledge of the Amazon VPC, Amazon EC2, and Amazon CloudHSM services.
  • Familiarity with the administration of Windows Server 2012 R2.

Important: You will incur charges for the services used in this example. You can find the cost of each service on that service’s pricing page. You can also use the Simple Monthly Calculator to estimate the cost.

Architectural overview

I’ll go over a few diagrams before I start building. I’ll start with a diagram of the infrastructure.

Figure 1: CloudHSM infrastructure

This diagram shows a VPC in the eu-west-1 (Ireland) region. An Amazon EC2 instance running Windows Server 2012 R2 resides on a public subnet. This instance will run both the CloudHSM client software and IIS to host a website. The instance has an Elastic IP and can be accessed via the Internet Gateway. I'll also have security groups that enable RDP, HTTP, and HTTPS access. The private subnet hosts the Elastic Network Interface (ENI) for the CloudHSM cluster, which has a single HSM.

And here’s the certification hierarchy:

Figure 2: Certification hierarchy

The Amazon EC2 instance will run Windows Server 2012 R2 (WS2012R2) and IIS. I'll use the KSP/CNG capabilities of CloudHSM to generate a CSR on the WS2012R2 instance. The KSP/CNG integration will reach out to CloudHSM, which will, in turn, create a public/private key pair. The private key will remain within CloudHSM, while the public key will be used to create the CSR. I'll then submit the CSR to a CA. The CA will sign the CSR with the CA's private key and return a certificate. I'll then accept the certificate, which will make it available to IIS. I'll then secure the default IIS website with the certificate. Finally, I'll browse to the website and show how it uses the CNG/KSP integration.

Now that you’ve seen what I’m building, we can get started!

Build the CloudHSM infrastructure

You’ll need to set up a CloudHSM environment. This environment will consist of a VPC, a CloudHSM cluster with a single HSM, and a Windows 2012 R2 EC2 instance that will run both the CloudHSM client and IIS. For each of the steps below, I have provided a link to the relevant documentation, as well as any additional instructions to build out the infrastructure to match my sample environment.

  1. Select a region that offers AWS CloudHSM. I’m using the eu-west-1 (Ireland) region.
  2. Create a Virtual Private Cloud (VPC). Use the following values:
    VPC IPv4 CIDR block: 10.0.0.0/16
    VPC Name: cloudhsm-vpc
    Public subnet IPv4 CIDR block: 10.0.0.0/24
    Public subnet Availability Zone: eu-west-1b
    Public subnet name: cloudhsm-public
  3. Create a private subnet. Use the following values:
    IPv4 CIDR block: 10.0.1.0/24
    Availability Zone: eu-west-1b
    Name: cloudhsm-private
  4. Create a cluster. Use the following values:
    HSM Availability Zone: eu-west-1b
    Subnet: Use the private subnet you just created in eu-west-1b.
    Number of HSMs: 1
  5. Launch an Amazon EC2 client instance. Use the following values:
    AMI: Microsoft Windows Server 2012 R2 Base
    Instance type: t2.medium
    VPC: cloudhsm-vpc
    Subnet: cloudhsm-public
    Auto-assign Public IP: Enable
    Name tag: cloudhsm
    Security group: The CloudHSM cluster security group

    Once the instance is running, create an additional administrative Windows user because the built-in Administrator account has restrictions.

  6. Create an additional security group to allow you to connect to the instance and attach the security group to the instance in addition to the security group of the CloudHSM cluster.
  7. Allocate and associate an Elastic IP address to the instance so that the instance’s IP address persists if you need to stop and start the instance.
  8. Create a DNS “A record” to map the host name to the Elastic IP. In this example, I’m mapping www.example.com to the Elastic IP that I allocated.
  9. Create an HSM. After the HSM is created, note the HSM ID, which is in the form of hsm- followed by an alphanumeric string; for example, hsm-1234abcd.
  10. Initialize the cluster.
  11. Install and configure the AWS CloudHSM client (Windows).
    You might need to use a command window with administrative privileges to do this.
  12. Activate the cluster.
  13. Create a crypto user. You will need to specify the ID and the password of the crypto user in the environment variables that you will create in the next section.

Define environment variables

We now need to define some Windows system environment variables (not user variables) used by the CNG/KSP integration. You can modify the environment variables by selecting System from the Control Panel, selecting Advanced System Settings, and selecting Environment Variables.

  1. Go into the Windows Control Panel and enter the following definitions:
    liquidsecurity_daemon_id = 1
    liquidsecurity_daemon_socket_type = TCPSOCKET
    liquidsecurity_daemon_tcp_port = 1111
    n3fips_username = USER
    n3fips_password = USER:PASSWORD
    n3fips_partition = hsm-ABCDEFGHIJK
    • Substitute the ID and password of the crypto user that you created in step 13 for the USER and PASSWORD values.
    • Substitute the HSM ID value you noted in step 9 for hsm-ABCDEFGHIJK.

    Your Windows Control Panel configuration should look like the image below, with your own values substituted. (A command-line alternative appears after this list.)

    Figure 3: Environment variable settings

  2. Reboot the EC2 instance to ensure that the environment variables are in place for all processes.
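
As an alternative to clicking through the Control Panel, you can set the same machine-level variables from an administrative command prompt. This is a sketch with placeholder values; the variable names are the ones listed above, but substitute your own crypto user, password, and HSM ID:

rem Hedged sketch: setx /m writes system (machine-level) environment variables,
rem which is what the CNG/KSP integration reads. Placeholder values follow.
setx /m liquidsecurity_daemon_id 1
setx /m liquidsecurity_daemon_socket_type TCPSOCKET
setx /m liquidsecurity_daemon_tcp_port 1111
setx /m n3fips_username cu_user
setx /m n3fips_password cu_user:cu_password
setx /m n3fips_partition hsm-1234abcd

A reboot is still the simplest way to make sure every process picks up the new values.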

Launch the CloudHSM client

You must start the CloudHSM client in order to enable the CNG/KSP integration.

  1. Open a command prompt window on the Windows instance.
  2. Enter the following commands:

    cd \ProgramData\Amazon\CloudHSM\data
    "\Program Files\Amazon\CloudHSM\cloudhsm_client.exe" cloudhsm_client.cfg

    The cloudhsm_client.exe command will continue to run. Important note: Do not close the window! The output at the bottom of the window looks similar to the following:

    Figure 4: CloudHSM client output

Launch key_mgmt_util

We are now going to launch the key_mgmt_util client and confirm that no keys are present in the CloudHSM cluster.

  1. Open a new command window. Do not use the window that you opened in the previous section.
  2. Enter the following commands.

    cd "\Program Files\Amazon\CloudHSM"
    key_mgmt_util.exe
    loginHSM -u CU -s USER -p PASSWORD

    (replace USER and PASSWORD with the crypto user and password that you created above)

    findKey

You should see a message saying that there are no keys present, as shown in the figure below.

Figure 5: findKey output showing no keys present

Important note: Do not exit from key_mgmt_util! You’ll return to this window later.

Install IIS

Here’s how:

  1. Follow the steps of this tutorial to install IIS. When installing IIS, make the following changes:
    • Use the Windows Server 2012 instructions as a guideline.
    • Do not install MySQL and PHP because you won’t need these services for this walkthrough.
  2. From your workstation's web browser, browse to the Elastic IP or the DNS "A record" of the web server using the HTTP protocol. You should see the default Microsoft IIS page as shown below. If you don't see the page, check the security groups for the server and verify that port 80 (HTTP) and port 443 (HTTPS) are open.
     
    Figure 6: http://www.example.com

  3. If you browse to the same IP using the HTTPS protocol, you should not get a response because you haven’t configured HTTPS on the EC2 instance.

Provision a server certificate using the CNG/KSP integration

I’m now going to show how to provision a server certificate. I will do this with the certreq program that’s included with Windows Server 2012 R2. Certreq generates a CSR that I’ll submit to a CA. Certreq supports the KSP/CNG standard and can place certificates into the Windows certificate store.

  1. Create a file named request.inf that contains the lines below. Note that the Subject line may wrap onto the following line. It begins with Subject and ends with Washington and a closing quotation mark (“).

    
    [Version]
    Signature = "$Windows NT$"
    [NewRequest]
    Subject = "C=US,CN=www.example.com,O=Information Technology,OU=Certificate Management,L=Seattle,S=Washington"
    HashAlgorithm = SHA256
    KeyAlgorithm = RSA
    KeyLength = 2048
    ProviderName = "Cavium Key Storage Provider"
    KeyUsage = 0xf0
    MachineKeySet = True
    SuppressDefaults = True
    SMIME = false
    RequestType = PKCS10
    [EnhancedKeyUsageExtension]
    OID=1.3.6.1.5.5.7.3.1
    [Extensions]
    2.5.29.17 = "{text}"
    _continue_ = "dns=www.example.com&"
    _continue_ = "dns=example.com&"
    

    Note the presence of the extensions section that allows you to specify SANs (Subject Alternative Names). SANs are now required by some browsers. Remember to substitute your own domain for example.com in the above example.

  2. Create a certificate request with certreq.exe.
    certreq.exe -new request.inf request.csr.pem

    Certreq returns a message saying that the certificate request has been created.

  3. Figure 7: Output from certreq.exe showing that the CSR has been created

  4. In the same command window where you ran key_mgmt_util, run findKey again to see the new keys. Running findKey by itself displays all keys; it shows that two keys have been added: a public key (handle 262214 in this example) and a private key (handle 262154). Use the -c 2 parameter to display only public keys and -c 3 to display only private keys. These two keys are the public/private key pair that certreq just created.
     
    Figure 8: findKey output showing two new keys

  5. Submit the CSR you just created to the CA using the procedures provided by the CA. The CA will return a certificate that is named request.crt.pem (or similar).

    If the certificate authority doesn’t accept the CSR provided by certreq.exe, try the following:

    1. If the certificate authority asks you to provide the domains for the certificate, make sure you enter all of the SANs included in the request.inf file.
    2. Check the first and last lines of the CSR and see if the word “NEW” appears. If so, remove the word in both lines.
  6. Use the certreq command to accept the new certificate and insert it into the certificate store where IIS can access it.

    certreq.exe -accept request.crt.pem
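
To confirm that the key pair lives in the HSM-backed provider rather than in a software provider, you can list the keys that the Cavium KSP knows about with the standard certutil tool. This is a hedged check; double-check the provider name string against your client installation:

certutil -csp "Cavium Key Storage Provider" -key

If the integration is working, the output should include the key container that certreq created for www.example.com.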

Configure and test the secure website

  1. Start the IIS Manager. You can do this through the Server Manager or by running the following command:

    Inetmgr

  2. Under Connections, expand the local server, and then expand the Sites folder below. Select the site named Default Web Site.
     
    Figure 9: IIS Manager settings

  3. On the right, under Actions, select Edit Site -> Bindings, and then add a binding with the following fields, making sure to substitute your domain name in place of example.com:

    Type: https
    Host name: www.example.com
    SSL certificate: www.example.com

  4. Select OK to save the new binding, and then select Close.
     
    Figure 10: Adding site bindings

  5. Select Manage Website -> Restart. This will restart the website and ensure the new binding is active.
  6. Browse to https://www.example.com, remembering to substitute your own domain. You'll see the default website with a secure connection, as shown below. If you get a warning message from your workstation's browser, make sure you added the root-level certificate to the trusted certificate store. (This should not be an issue if you're using a commercial CA that major web browsers already trust.)
     
    Figure 11: https://www.example.com

  7. Examine the certificate information being reported to the browser. For my example, I'm using the Chrome browser, so I'll select the padlock icon, then Certificate, then Details.
     
    Figure 12: Server certificate details

    Figure 12 shows that the Subject field contains the same information that I provided before. You can check out the other fields for information about the Subject Alternative Names and other fields and verify that the correct certificate is being used.
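
If you prefer the command line, you can also inspect the served certificate from any machine that has OpenSSL installed. Here's a quick sketch (the -ext flag requires OpenSSL 1.1.1 or later; substitute your own domain):

$ openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject -ext subjectAltName

The subject and subject alternative names in the output should match the values you placed in request.inf.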

Conclusion

For projects that have special requirements regarding the storage of keys, AWS CloudHSM supports the Microsoft KSP/CNG standard, which enables you to store the private key of an SSL/TLS certificate within an HSM. Because the private key no longer has to reside on the server, it's no longer at risk if the server itself is compromised.

If you have feedback about this blog post, submit comments in the “Comments” section below. If you have questions about this blog post, start a new thread on the Amazon CloudHSM forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Security of CloudHSM Backups

Post Syndicated from Balaji Iyer original https://aws.amazon.com/blogs/architecture/security-of-cloud-hsmbackups/

Today, our customers use AWS CloudHSM to meet corporate, contractual and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) instances within the AWS cloud. CloudHSM delivers all the benefits of traditional HSMs including secure generation, storage, and management of cryptographic keys used for data encryption that are controlled and accessible only by you.

As a managed service, it automates time-consuming administrative tasks such as hardware provisioning, software patching, high availability, backups and scaling for your sensitive and regulated workloads in a cost-effective manner. Backup and restore functionality is the core building block enabling scalability, reliability and high availability in CloudHSM.

You should consider using AWS CloudHSM if you require:

  • Keys stored in dedicated, third-party validated hardware security modules under your exclusive control
  • FIPS 140-2 compliance
  • Integration with applications using PKCS#11, Java JCE, or Microsoft CNG interfaces
  • High-performance in-VPC cryptographic acceleration (bulk crypto)
  • Financial applications subject to PCI regulations
  • Healthcare applications subject to HIPAA regulations
  • Streaming video solutions subject to contractual DRM requirements

We recently released a whitepaper, “Security of CloudHSM Backups” that provides in-depth information on how backups are protected in all three phases of the CloudHSM backup lifecycle process: Creation, Archive, and Restore.

About the Author

Balaji Iyer is a senior consultant in the Professional Services team at Amazon Web Services. In this role, he has helped several customers successfully navigate their journey to AWS. His specialties include architecting and implementing highly-scalable distributed systems, operational security, large scale migrations, and leading strategic AWS initiatives.

How to Enhance the Security of Sensitive Customer Data by Using Amazon CloudFront Field-Level Encryption

Post Syndicated from Alex Tomic original https://aws.amazon.com/blogs/security/how-to-enhance-the-security-of-sensitive-customer-data-by-using-amazon-cloudfront-field-level-encryption/

Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content to end users through a worldwide network of edge locations. CloudFront provides a number of benefits and capabilities that can help you secure your applications and content while meeting compliance requirements. For example, you can configure CloudFront to help enforce secure, end-to-end connections using HTTPS SSL/TLS encryption. You also can take advantage of CloudFront integration with AWS Shield for DDoS protection and with AWS WAF (a web application firewall) for protection against application-layer attacks, such as SQL injection and cross-site scripting.

Now, CloudFront field-level encryption helps secure sensitive data such as customer phone numbers by adding another security layer to CloudFront HTTPS. Using this functionality, you can help ensure that sensitive information in a POST request is encrypted at CloudFront edge locations. This information remains encrypted as it flows to and beyond your origin servers that terminate HTTPS connections with CloudFront and throughout the application environment. In this blog post, we demonstrate how you can enhance the security of sensitive data by using CloudFront field-level encryption.

Note: This post assumes that you understand concepts and services such as content delivery networks, HTTP forms, public-key cryptography, CloudFront, AWS Lambda, and the AWS CLI. If necessary, you should familiarize yourself with these concepts and review the solution overview in the next section before proceeding with the deployment of this post's solution.

How field-level encryption works

Many web applications collect and store data from users as those users interact with the applications. For example, a travel-booking website may ask for your passport number and less sensitive data such as your food preferences. This data is transmitted to web servers and also might travel among a number of services to perform tasks. However, this also means that your sensitive information may need to be accessed by only a small subset of these services (most other services do not need to access your data).

User data is often stored in a database for retrieval at a later time. One approach to protecting stored sensitive data is to configure and code each service to protect that sensitive data. For example, you can develop safeguards in logging functionality to ensure sensitive data is masked or removed. However, this can add complexity to your code base and limit performance.

Field-level encryption addresses this problem by ensuring sensitive data is encrypted at CloudFront edge locations. Sensitive data fields in HTTPS form POSTs are automatically encrypted with a user-provided public RSA key. After the data is encrypted, other systems in your architecture see only ciphertext. If this ciphertext unintentionally becomes externally available, the data is cryptographically protected and only designated systems with access to the private RSA key can decrypt the sensitive data.

It is critical to secure private RSA key material to prevent unauthorized access to the protected data. Management of cryptographic key material is a larger topic that is out of scope for this blog post, but should be carefully considered when implementing encryption in your applications. For example, in this blog post we store private key material as a secure string in the Amazon EC2 Systems Manager Parameter Store. The Parameter Store provides a centralized location for managing your configuration data such as plaintext data (such as database strings) or secrets (such as passwords) that are encrypted using AWS Key Management Service (AWS KMS). You may have an existing key management system in place that you can use, or you can use AWS CloudHSM. CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys in the AWS Cloud.

To illustrate field-level encryption, let’s look at a simple form submission where Name and Phone values are sent to a web server using an HTTP POST. A typical form POST would contain data such as the following.

POST / HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded
Content-Length:60

Name=Jane+Doe&Phone=404-555-0150

Instead of taking this typical approach, field-level encryption converts this data to something similar to the following.

POST / HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 1713

Name=Jane+Doe&Phone=AYABeHxZ0ZqWyysqxrB5pEBSYw4AAA...

To further demonstrate field-level encryption in action, this blog post includes a sample serverless application that you can deploy by using a CloudFormation template, which creates an application environment using CloudFront, Amazon API Gateway, and Lambda. The sample application is only intended to demonstrate field-level encryption functionality and is not intended for production use. The following diagram depicts the architecture and data flow of this sample application.

Sample application architecture and data flow

Diagram of the solution's architecture and data flow

Here is how the sample solution works:

  1. An application user submits an HTML form page with sensitive data, generating an HTTPS POST to CloudFront.
  2. Field-level encryption intercepts the form POST and encrypts sensitive data with the public RSA key and replaces fields in the form post with encrypted ciphertext. The form POST ciphertext is then sent to origin servers.
  3. The serverless application accepts the form post data containing ciphertext where sensitive data would normally be. If a malicious user were able to compromise your application and gain access to your data, such as the contents of a form, that user would see encrypted data.
  4. Lambda stores data in a DynamoDB table, leaving sensitive data to remain safely encrypted at rest.
  5. An administrator uses the AWS Management Console and a Lambda function to view the sensitive data.
  6. During the session, the administrator retrieves ciphertext from the DynamoDB table.
  7. The administrator decrypts sensitive data by using private key material stored in the EC2 Systems Manager Parameter Store.
  8. Decrypted sensitive data is transmitted over SSL/TLS via the AWS Management Console to the administrator for review.

Deployment walkthrough

The high-level steps to deploy this solution are as follows:

  1. Stage the required artifacts
    When deployment packages are used with Lambda, the zipped artifacts have to be placed in an S3 bucket in the target AWS Region for deployment. This step is not required if you are deploying in the US East (N. Virginia) Region because the package has already been staged there.
  2. Generate an RSA key pair
    Create a public/private key pair that will be used to perform the encrypt/decrypt functionality.
  3. Upload the public key to CloudFront and associate it with the field-level encryption configuration
    After you create the key pair, the public key is uploaded to CloudFront so that it can be used by field-level encryption.
  4. Launch the CloudFormation stack
    Deploy the sample application for demonstrating field-level encryption by using AWS CloudFormation.
  5. Add the field-level encryption configuration to the CloudFront distribution
    After you have provisioned the application, this step associates the field-level encryption configuration with the CloudFront distribution.
  6. Store the RSA private key in the Parameter Store
    Store the private key in the Parameter Store as a SecureString data type, which uses AWS KMS to encrypt the parameter value.

Deploy the solution

1. Stage the required artifacts

(If you are deploying in the US East [N. Virginia] Region, skip to Step 2, “Generate an RSA key pair.”)

Stage the Lambda function deployment package in an Amazon S3 bucket located in the AWS Region you are using for this solution. To do this, download the zipped deployment package and upload it to your in-region bucket. For additional information about uploading objects to S3, see Uploading Object into Amazon S3.

2. Generate an RSA key pair

In this section, you will generate an RSA key pair by using OpenSSL:

  1. Confirm access to OpenSSL.
    $ openssl version

    You should see version information similar to the following.

    OpenSSL <version> <date>

  2. Create a private key using the following command.
    $ openssl genrsa -out private_key.pem 2048

    The command results should look similar to the following.

    Generating RSA private key, 2048 bit long modulus
    ................................................................................+++
    ..........................+++
    e is 65537 (0x10001)
  3. Extract the public key from the private key by running the following command.
    $ openssl rsa -pubout -in private_key.pem -out public_key.pem

    You should see output similar to the following.

    writing RSA key
  4. Restrict access to the private key.

    $ chmod 600 private_key.pem

    Note: You will use the public and private key material in Steps 3 and 6 to configure the sample application.
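
Before moving on, you can sanity-check the key pair with a round-trip encryption test. This sketch uses openssl rsautl purely as a local verification that the public and private keys match; it is not how CloudFront itself encrypts fields:

$ echo "404-555-0150" | openssl rsautl -encrypt -pubin -inkey public_key.pem -out phone.enc
$ openssl rsautl -decrypt -inkey private_key.pem -in phone.enc
404-555-0150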

3. Upload the public key to CloudFront and associate it with the field-level encryption configuration

Now that you have created the RSA key pair, you will use the AWS Management Console to upload the public key to CloudFront for use by field-level encryption. Complete the following steps to upload and configure the public key.

Note: Do not include spaces or special characters when providing the configuration values in this section.

  1. From the AWS Management Console, choose Services > CloudFront.
  2. In the navigation pane, choose Public Key and choose Add Public Key.
    Screenshot of adding a public key

  3. Complete the Add Public Key configuration boxes:

  • Key Name: Type a name such as DemoPublicKey.
  • Encoded Key: Paste the contents of the public_key.pem file you created in Step 2, including the -----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY----- lines.
  • Comment: Optionally add a comment.
  4. Choose Create.
  5. After adding at least one public key to CloudFront, the next step is to create a profile that tells CloudFront which fields of input you want to be encrypted. While still on the CloudFront console, choose Field-level encryption in the navigation pane.
  6. Under Profiles, choose Create profile.
    Screenshot of creating a profile

  7. Complete the Create profile configuration boxes:

  • Name: Type a name such as FLEDemo.
  • Comment: Optionally add a comment.
  • Public key: Select the public key you uploaded in step 3.
  • Provider name: Type a provider name such as FLEDemo.
    This information will be used when the form data is encrypted, and must be provided to applications that need to decrypt the data, along with the appropriate private key.
  • Pattern to match: Type phone. This configures field-level encryption to encrypt any form field whose name matches phone.
  8. Choose Save profile.
  9. Configurations include options for whether to block or forward a query to your origin in scenarios where CloudFront can't encrypt the data. Under Encryption Configurations, choose Create configuration.
    Screenshot of creating a configuration

  10. Complete the Create configuration boxes:

  • Comment: Optionally add a comment.
  • Content type: Enter application/x-www-form-urlencoded. This is a common media type for encoding form data.
  • Default profile ID: Select the profile you created in step 7.
  11. Choose Save configuration.

4. Launch the CloudFormation stack

Launch the sample application by using a CloudFormation template that automates the provisioning process.

The template requires the following input parameters:

  • ProviderID: Enter the Provider name you assigned when creating the profile in Step 3. The ProviderID is used in the field-level encryption configuration in CloudFront (letters and numbers only, no special characters).
  • PublicKeyName: Enter the Key Name you assigned when uploading the public key in Step 3. This name is assigned to the public key in the field-level encryption configuration in CloudFront (letters and numbers only, no special characters).
  • PrivateKeySSMPath: Leave as the default: /cloudfront/field-encryption-sample/private-key
  • ArtifactsBucket: The S3 bucket with artifact files (staged zip file with app code). Leave as the default if deploying in us-east-1.
  • ArtifactsPrefix: The path in the S3 bucket containing artifact files. Leave as the default if deploying in us-east-1.
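
If you prefer the command line to the console steps that follow, a hedged sketch of launching the stack with the AWS CLI looks like this; the stack name, template URL, and parameter values are placeholders to replace with your own:

$ aws cloudformation create-stack \
    --stack-name FLE-Sample-App \
    --template-url https://s3.amazonaws.com/<your-artifacts-bucket>/<template-file> \
    --capabilities CAPABILITY_IAM \
    --parameters ParameterKey=ProviderID,ParameterValue=FLEDemo \
                 ParameterKey=PublicKeyName,ParameterValue=DemoPublicKey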

To finish creating the CloudFormation stack:

  1. Choose Next on the Select Template page, enter the input parameters and choose Next.
    Note: The Artifacts configuration needs to be updated only if you are deploying outside of us-east-1 (US East [N. Virginia]). See Step 1 for artifact staging instructions.
  2. On the Options page, accept the defaults and choose Next.
  3. On the Review page, confirm the details, choose the I acknowledge that AWS CloudFormation might create IAM resources check box, and then choose Create. (The stack will be created in approximately 15 minutes.)

5. Add the field-level encryption configuration to the CloudFront distribution

To associate the field-level encryption configuration with the CloudFront distribution:

    1. In the Outputs section of the FLE-Sample-App stack, look for CloudFrontDistribution and click the URL to open the CloudFront console.
    2. Choose Behaviors, choose the Default (*) behavior, and then choose Edit.
    3. For Field-level Encryption Config, choose the configuration you created in Step 3g.
      Screenshot of editing the default cache behavior
    4. Choose Yes, Edit.
    5. While still in the CloudFront distribution configuration, choose the General tab, choose Edit, scroll down to Distribution State, and change it to Enabled.
    6. Choose Yes, Edit.

6. Store the RSA private key in the Parameter Store

In this step, you store the private key in the EC2 Systems Manager Parameter Store as a SecureString data type, which uses AWS KMS to encrypt the parameter value. For more information about AWS KMS, see the AWS Key Management Service Developer Guide. You will need a working installation of the AWS CLI to complete this step.

  1. Store the private key in the Parameter Store with the AWS CLI by running the following command. You will find the <KMSKeyID> value in the KMSKeyID entry of the CloudFormation stack Outputs. Substitute it for the placeholder in the following command.
    $ aws ssm put-parameter --type "SecureString" --name /cloudfront/field-encryption-sample/private-key --value file://private_key.pem --key-id "<KMSKeyID>"
    
    ------------------
    |  PutParameter  |
    +----------+-----+
    |  Version |  1  |
    +----------+-----+

  2. Verify the parameter. Your private key material should be returned in the Value field when you run the following ssm get-parameter command. (The key material has been truncated in the output shown.)
    $ aws ssm get-parameter --name /cloudfront/field-encryption-sample/private-key --with-decryption
    
    -----…
    
    |  Value  |  -----BEGIN RSA PRIVATE KEY-----
    MIIEowIBAAKCAQEAwGRBGuhacmw+C73kM6Z…….

    Notice that we use the --with-decryption argument in this command, which returns the private key as cleartext.

    This completes the sample application deployment. Next, we show you how to see field-level encryption in action.

  3. Delete the private key from local storage. On Linux, for example, you can use the shred command to securely delete the private key material from your workstation, as shown below. You may also wish to store the private key material in AWS CloudHSM or another protected location suitable for your security requirements. For production implementations, you should also implement key rotation policies.
    $ shred -zvu -n 100 private*.pem
    
    shred: private_encrypted_key.pem: pass 1/101 (random)...
    shred: private_encrypted_key.pem: pass 2/101 (dddddd)...
    shred: private_encrypted_key.pem: pass 3/101 (555555)...
    ….

Test the sample application

Use the following steps to test the sample application with field-level encryption:

  1. Open the sample application in your web browser by clicking the ApplicationURL link in the CloudFormation stack Outputs (for example, https://d199xe5izz82ea.cloudfront.net/prod/). Note that it may take several minutes for the CloudFront distribution to reach Deployed status after the previous step, during which time you may not be able to access the sample application.
  2. Fill out and submit the HTML form on the page:
    1. Complete the three form fields: Full Name, Email Address, and Phone Number.
    2. Choose Submit.
      Screenshot of completing the sample application form
      Notice that the application response includes the form values. The phone number is returned as ciphertext, encrypted with your public key. This ciphertext has been stored in DynamoDB.
      Screenshot of the phone number as ciphertext
  3. Execute the Lambda decryption function to download ciphertext from DynamoDB and decrypt the phone number using the private key:
    1. In the CloudFormation stack Outputs, locate DecryptFunction and click the URL to open the Lambda console.
    2. Configure a test event using the “Hello World” template.
    3. Choose the Test button.
  4. View the encrypted and decrypted phone number data.
    Screenshot of the encrypted and decrypted phone number data

Summary

In this blog post, we showed you how to use CloudFront field-level encryption to encrypt sensitive data at edge locations and help prevent access from unauthorized systems. The source code for this solution is available on GitHub. For additional information about field-level encryption, see the documentation.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, please start a new thread on the CloudFront forum.

– Alex and Cameron

AWS HIPAA Eligibility Update (October 2017) – Sixteen Additional Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-hipaa-eligibility-post-update-october-2017-sixteen-additional-services/

Our Health Customer Stories page lists just a few of the many customers that are building and running healthcare and life sciences applications that run on AWS. Customers like Verge Health, Care Cloud, and Orion Health trust AWS with Protected Health Information (PHI) and Personally Identifying Information (PII) as part of their efforts to comply with HIPAA and HITECH.

Sixteen More Services
In my last HIPAA Eligibility Update I shared the news that we added eight additional services to our list of HIPAA eligible services. Today I am happy to let you know that we have added another sixteen services to the list, bringing the total up to 46. Here are the newest additions, along with some short descriptions and links to some of my blog posts to jog your memory:

Amazon Aurora with PostgreSQL Compatibility – This brand-new addition to Amazon Aurora allows you to encrypt your relational databases using keys that you create and manage through AWS Key Management Service (KMS). When you enable encryption for an Amazon Aurora database, the underlying storage is encrypted, as are automated backups, read replicas, and snapshots. Read New – Encryption at Rest for Amazon Aurora to learn more.

Amazon CloudWatch Logs – You can use the logs to monitor and troubleshoot your systems and applications. You can monitor your existing system, application, and custom log files in near real-time, watching for specific phrases, values, or patterns. Log data can be stored durably and at low cost, for as long as needed. To learn more, read Store and Monitor OS & Application Log Files with Amazon CloudWatch and Improvements to CloudWatch Logs and Dashboards.

Amazon Connect – This self-service, cloud-based contact center makes it easy for you to deliver better customer service at a lower cost. You can use the visual designer to set up your contact flows, manage agents, and track performance, all without specialized skills. Read Amazon Connect – Customer Contact Center in the Cloud and New – Amazon Connect and Amazon Lex Integration to learn more.

Amazon ElastiCache for Redis – This service lets you deploy, operate, and scale an in-memory data store or cache that you can use to improve the performance of your applications. Each ElastiCache for Redis cluster publishes key performance metrics to Amazon CloudWatch. To learn more, read Caching in the Cloud with Amazon ElastiCache and Amazon ElastiCache – Now With a Dash of Redis.

Amazon Kinesis Streams – This service allows you to build applications that process or analyze streaming data such as website clickstreams, financial transactions, social media feeds, and location-tracking events. To learn more, read Amazon Kinesis – Real-Time Processing of Streaming Big Data and New: Server-Side Encryption for Amazon Kinesis Streams.

Amazon RDS for MariaDB – This service lets you set up scalable, managed MariaDB instances in minutes, and offers high performance, high availability, and a simplified security model that makes it easy for you to encrypt data at rest and in transit. Read Amazon RDS Update – MariaDB is Now Available to learn more.

Amazon RDS SQL Server – This service lets you set up scalable, managed Microsoft SQL Server instances in minutes, and also offers high performance, high availability, and a simplified security model. Read Amazon RDS for SQL Server and .NET support for AWS Elastic Beanstalk and Amazon RDS for Microsoft SQL Server – Transparent Data Encryption (TDE) to learn more.

Amazon Route 53 – This is a highly available Domain Name Server. It translates names like www.example.com into IP addresses. To learn more, read Moving Ahead with Amazon Route 53.

AWS Batch – This service lets you run large-scale batch computing jobs on AWS. You don’t need to install or maintain specialized batch software or build your own server clusters. Read AWS Batch – Run Batch Computing Jobs on AWS to learn more.

AWS CloudHSM – A cloud-based Hardware Security Module (HSM) for key storage and management at cloud scale. Designed for sensitive workloads, CloudHSM lets you manage your own keys using FIPS 140-2 Level 3 validated HSMs. To learn more, read AWS CloudHSM – Secure Key Storage and Cryptographic Operations and AWS CloudHSM Update – Cost Effective Hardware Key Management at Cloud Scale for Sensitive & Regulated Workloads.

AWS Key Management Service – This service makes it easy for you to create and control the encryption keys used to encrypt your data. It uses HSMs to protect your keys, and is integrated with AWS CloudTrail in order to provide you with a log of all key usage. Read New AWS Key Management Service (KMS) to learn more.

AWS Lambda – This service lets you run event-driven application or backend code without thinking about or managing servers. To learn more, read AWS Lambda – Run Code in the Cloud, AWS Lambda – A Look Back at 2016, and AWS Lambda – In Full Production with New Features for Mobile Devs.

Lambda@Edge – You can use this new feature of AWS Lambda to run Node.js functions across the global network of AWS locations without having to provision or manage servers, in order to deliver rich, personalized content to your users with low latency. Read Lambda@Edge – Intelligent Processing of HTTP Requests at the Edge to learn more.

AWS Snowball Edge – This is a data transfer device with 100 terabytes of on-board storage as well as compute capabilities. You can use it to move large amounts of data into or out of AWS, as a temporary storage tier, or to support workloads in remote or offline locations. To learn more, read AWS Snowball Edge – More Storage, Local Endpoints, Lambda Functions.

AWS Snowmobile – This is an exabyte-scale data transfer service. Pulled by a semi-trailer truck, each Snowmobile packs 100 petabytes of storage into a ruggedized 45-foot long shipping container. Read AWS Snowmobile – Move Exabytes of Data to the Cloud in Weeks to learn more (and to see some of my finest LEGO work).

AWS Storage Gateway – This hybrid storage service lets your on-premises applications use AWS cloud storage (Amazon Simple Storage Service (S3), Amazon Glacier, and Amazon Elastic File System) in a simple and seamless way, with storage for volumes, files, and virtual tapes. To learn more, read The AWS Storage Gateway – Integrate Your Existing On-Premises Applications with AWS Cloud Storage and File Interface to AWS Storage Gateway.

And there you go! Check out my earlier post for a list of resources that will help you to build applications that comply with HIPAA and HITECH.

Jeff;

 

Want to Learn More About AWS CloudHSM and Hardware Key Management? Register for and Attend this October 25 Tech Talk: “CloudHSM – Secure, Scalable Key Storage in AWS”

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/want-to-learn-more-about-aws-cloudhsm-and-hardware-key-management-register-for-and-attend-this-october-25-tech-talk-cloudhsm-secure-scalable-key-storage-in-aws/

AWS Online Tech Talks banner

As part of the AWS Online Tech Talks series, AWS will present CloudHSM – Secure, Scalable Key Storage in AWS on Wednesday, October 25. This tech talk will start at 9:00 A.M. Pacific Time and end at 9:40 A.M. Pacific Time.

Applications handling confidential or sensitive data are subject to corporate or regulatory requirements and therefore need validated control of encryption keys and cryptographic operations. AWS CloudHSM brings to your AWS resources the security and control of traditional HSMs. This Tech Talk will show how you can leverage CloudHSM to build scalable, reliable applications without sacrificing either security or performance. Attend this Tech Talk to learn how you can use CloudHSM to quickly and easily build secure, compliant, fast, and flexible applications.

You also will:

  • Learn about the challenges CloudHSM can help you address.
  • Understand how CloudHSM can secure your workloads and data.
  • Learn how to transfer and modernize workloads.

This tech talk is free. Register today.

– Craig

AWS Summit New York – Summary of Announcements

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-summit-new-york-summary-of-announcements/

Whew – what a week! Tara, Randall, Ana, and I have been working around the clock to create blog posts for the announcements that we made at the AWS Summit in New York. Here’s a summary to help you to get started:

Amazon Macie – This new service helps you to discover, classify, and secure content at scale. Powered by machine learning and making use of Natural Language Processing (NLP), Macie looks for patterns and alerts you to suspicious behavior, and can help you with governance, compliance, and auditing. You can read Tara’s post to see how to put Macie to work; you select the buckets of interest, customize the classification settings, and review the results in the Macie Dashboard.

AWS Glue – Randall's post (with deluxe animated GIFs) introduces you to this new extract, transform, and load (ETL) service. Glue is serverless and fully managed. As you can see from the post, Glue crawls your data, infers schemas, and generates ETL scripts in Python. You define jobs that move data from place to place, with a wide selection of transforms, each expressed as code and stored in human-readable form. Glue uses Development Endpoints and notebooks to provide you with a testing environment for the scripts you build. We also announced that Amazon Athena now integrates with AWS Glue, as do Apache Spark and Hive on Amazon EMR.

AWS Migration Hub – This new service will help you to migrate your application portfolio to AWS. My post outlines the major steps and shows you how the Migration Hub accelerates, tracks, and simplifies your migration effort. You can begin with a discovery step, or you can jump right in and migrate directly. Migration Hub integrates with tools from our migration partners and builds upon the Server Migration Service and the Database Migration Service.

CloudHSM Update – We made a major upgrade to AWS CloudHSM, making the benefits of hardware-based key management available to a wider audience. The service is offered on a pay-as-you-go basis, and is fully managed. It is open and standards compliant, with support for multiple APIs, programming languages, and cryptography extensions. CloudHSM is an integral part of AWS and can be accessed from the AWS Management Console, AWS Command Line Interface (CLI), and through API calls. Read my post to learn more and to see how to set up a CloudHSM cluster.

Managed Rules to Secure S3 Buckets – We added two new rules to AWS Config that will help you to secure your S3 buckets. The s3-bucket-public-write-prohibited rule identifies buckets that have public write access and the s3-bucket-public-read-prohibited rule identifies buckets that have global read access. As I noted in my post, you can run these rules in response to configuration changes or on a schedule. The rules make use of some leading-edge constraint solving techniques, as part of a larger effort to use automated formal reasoning about AWS.

CloudTrail for All Customers – Tara's post revealed that AWS CloudTrail is now available and enabled by default for all AWS customers. As a bonus, Tara reviewed the principal benefits of CloudTrail and showed you how to review your event history and to deep-dive on a single event. She also showed you how to create a second trail for use with CloudWatch Events.

Encryption of Data at Rest for EFS – When you create a new file system, you now have the option to select a key that will be used to encrypt the contents of the files on the file system. The encryption is done using an industry-standard AES-256 algorithm. My post shows you how to select a key and to verify that it is being used.

Watch the Keynote
My colleagues Adrian Cockcroft and Matt Wood talked about these services and others on the stage, and also invited some AWS customers to share their stories. Here’s the video:

Jeff;

 

AWS CloudHSM Update – Cost Effective Hardware Key Management at Cloud Scale for Sensitive & Regulated Workloads

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-cloudhsm-update-cost-effective-hardware-key-management/

Our customers run an incredible variety of mission-critical workloads on AWS, many of which process and store sensitive data. As detailed in our Overview of Security Processes document, AWS customers have access to an ever-growing set of options for encrypting and protecting this data. For example, Amazon Relational Database Service (RDS) supports encryption of data at rest and in transit, with options tailored for each supported database engine (MySQL, SQL Server, Oracle, MariaDB, PostgreSQL, and Aurora).

Many customers use AWS Key Management Service (KMS) to centralize their key management, with others taking advantage of the hardware-based key management, encryption, and decryption provided by AWS CloudHSM to meet stringent security and compliance requirements for their most sensitive data and regulated workloads (you can read my post, AWS CloudHSM – Secure Key Storage and Cryptographic Operations, to learn more about Hardware Security Modules, also known as HSMs).

Major CloudHSM Update
Today, building on what we have learned from our first-generation product, we are making a major update to CloudHSM, with a set of improvements designed to make the benefits of hardware-based key management available to a much wider audience while reducing the need for specialized operating expertise. Here’s a summary of the improvements:

Pay As You Go – CloudHSM is now offered under a pay-as-you-go model that is simpler and more cost-effective, with no up-front fees.

Fully Managed – CloudHSM is now a scalable managed service; provisioning, patching, high availability, and backups are all built-in and taken care of for you. Scheduled backups extract an encrypted image of your HSM from the hardware (using keys that only the HSM hardware itself knows) that can be restored only to identical HSM hardware owned by AWS. For durability, those backups are stored in Amazon Simple Storage Service (S3), and for an additional layer of security, encrypted again with server-side S3 encryption using an AWS KMS master key.

Open & Compatible – CloudHSM is open and standards-compliant, with support for multiple APIs, programming languages, and cryptography extensions such as PKCS #11, Java Cryptography Extension (JCE), and Microsoft CryptoNG (CNG). The open nature of CloudHSM gives you more control and simplifies the process of moving keys (in encrypted form) from one CloudHSM to another, and also allows migration to and from other commercially available HSMs.

More Secure – CloudHSM Classic (the original model) supports the generation and use of keys that comply with FIPS 140-2 Level 2. We’re stepping that up a notch today with support for FIPS 140-2 Level 3, with security mechanisms that are designed to detect and respond to physical attempts to access or modify the HSM. Your keys are protected with exclusive, single-tenant access to tamper-resistant HSMs that appear within your Virtual Private Clouds (VPCs). CloudHSM supports quorum authentication for critical administrative and key management functions. This feature allows you to define a list of N possible identities that can access the functions, and then require at least M of them to authorize the action. It also supports multi-factor authentication using tokens that you provide.

AWS-Native – The updated CloudHSM is an integral part of AWS and plays well with other tools and services. You can create and manage a cluster of HSMs using the AWS Management Console, AWS Command Line Interface (CLI), or API calls.
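
For example, here is a minimal sketch of creating a cluster and its first HSM from the CLI; the subnet IDs, cluster ID, and Availability Zone are placeholders:

$ aws cloudhsmv2 create-cluster \
    --hsm-type hsm1.medium \
    --subnet-ids subnet-0123abcd subnet-4567efgh

$ aws cloudhsmv2 create-hsm \
    --cluster-id cluster-abcdefg1234 \
    --availability-zone us-east-1a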

Diving In
You can create CloudHSM clusters that contain 1 to 32 HSMs, each in a separate Availability Zone in a particular AWS Region. Spreading HSMs across AZs gives you high availability (including built-in load balancing); adding more HSMs gives you additional throughput. The HSMs within a cluster are kept in sync: performing a task or operation on one HSM in a cluster automatically updates the others. Each HSM in a cluster has its own Elastic Network Interface (ENI).

All interaction with an HSM takes place via the AWS CloudHSM client. It runs on an EC2 instance and uses certificate-based mutual authentication to create secure (TLS) connections to the HSMs.

At the hardware level, each HSM includes hardware-enforced isolation of crypto operations and key storage. Each customer HSM runs on dedicated processor cores.

Setting Up a Cluster
Let’s set up a cluster using the CloudHSM Console:

I click on Create cluster to get started, select my desired VPC and the subnets within it (I can also create a new VPC and/or subnets if needed):

Then I review my settings and click on Create:

After a few minutes, my cluster exists, but is uninitialized:

Initialization simply means retrieving a certificate signing request (the Cluster CSR):

And then creating a private key and using it to sign the request (these commands were copied from the Initialize Cluster docs and I have omitted the output. Note that ID identifies the cluster):

$ openssl genrsa -out CustomerRoot.key 2048
$ openssl req -new -x509 -days 365 -key CustomerRoot.key -out CustomerRoot.crt
$ openssl x509 -req -days 365 -in ID_ClusterCsr.csr   \
                              -CA CustomerRoot.crt    \
                              -CAkey CustomerRoot.key \
                              -CAcreateserial         \
                              -out ID_CustomerHsmCertificate.crt

The next step is to apply the signed certificate to the cluster using the console or the CLI. After this has been done, the cluster can be activated by changing the password for the HSM’s administrative user, otherwise known as the Crypto Officer (CO).

Once the cluster has been created, initialized and activated, it can be used to protect data. Applications can use the APIs in AWS CloudHSM SDKs to manage keys, encrypt & decrypt objects, and more. The SDKs provide access to the CloudHSM client (running on the same instance as the application). The client, in turn, connects to the cluster across an encrypted connection.
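
As a small illustration, once the client is running, a crypto user (CU) can generate a 256-bit AES key from the command line with key_mgmt_util. This is a sketch with placeholder credentials; check the parameter values against the CloudHSM user guide:

$ /opt/cloudhsm/bin/key_mgmt_util
Command: loginHSM -u CU -s cu_user -p cu_password
Command: genSymKey -t 31 -s 32 -l my-aes-key

Here -t 31 selects the AES key type, -s 32 sets the key size in bytes (256 bits), and -l assigns a label.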

Available Today
The new HSM is available today in the US East (Northern Virginia), US West (Oregon), US East (Ohio), and EU (Ireland) Regions, with more in the works. Pricing starts at $1.45 per HSM per hour.

Jeff;