Tag Archives: AWS CloudHSM

AWS re:Invent Security Recap: Launches, Enhancements, and Takeaways

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-reinvent-security-recap-launches-enhancements-and-takeaways/

For more from Steve, follow him on Twitter

Customers continue to tell me that our AWS re:Invent conference is a winner. It’s a place where they can learn, meet their peers, and rediscover the art of the possible. Of course, there is always an air of anticipation around what new AWS service releases will be announced. This time around, we went bigger than ever before. There were over 50,000 people in attendance, spread across the Las Vegas strip, with over 2,000 breakout sessions and jam-packed hands-on learning opportunities, including multi-day hackathons, workshops, and bootcamps.

A big part of all this activity included sharing knowledge about the latest AWS Security, Identity and Compliance services and features, as well as announcing new technology that we’re excited to see adopted so quickly across so many use cases.

Here are the top Security, Identity and Compliance releases from re:Invent 2018:

Keynotes: All that’s new

New AWS offerings provide more prescriptive guidance

The AWS re:Invent keynotes from Andy Jassy, Werner Vogels, and Peter DeSantis, as well as my own leadership session, featured the following new releases and service enhancements. We continue to strive to make architecting easier for developers, as well as our partners and our customers, so they stay secure as they build and innovate in the cloud.

  • We launched several prescriptive security services to assist developers and customers in understanding and managing their security and compliance postures in real time. My favorite new service is AWS Security Hub, which helps you centrally manage your security and compliance controls. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie, as well as from AWS Partner solutions. Findings are visually summarized on integrated dashboards with actionable graphs and tables. You can also continuously monitor your environment using automated compliance checks based on the AWS best practices and industry standards your organization follows. You can enable Security Hub on a single account with one click in the AWS Security Hub console or with a single API call (see the brief CLI sketch after this list); once enabled, Security Hub begins aggregating and prioritizing findings.
  • Another prescriptive service we launched is called AWS Control Tower. One of the first things customers think about when moving to the cloud is how to set up a landing zone for their data. AWS Control Tower removes the guesswork, automating the set-up of an AWS landing zone that is secure, well-architected, and supports multiple accounts. AWS Control Tower does this by using a set of blueprints that embody AWS best practices. Guardrails, both mandatory and recommended, are available for high-level, rule-based governance, allowing you to have the right operational control over your accounts. An integrated dashboard enables you to keep a watchful eye over the accounts provisioned, the guardrails that are enabled, and your overall compliance status. Sign up for the Control Tower preview here.
  • The third prescriptive service, called AWS Lake Formation, will reduce your data lake build time from months to days. Prior to AWS Lake Formation, setting up a data lake involved numerous granular tasks. Creating a data lake with Lake Formation is as simple as defining where your data resides and what data access and security policies you want to apply. Lake Formation then collects and catalogs data from databases and object storage, moves the data into your new Amazon S3 data lake, cleans and classifies data using machine learning algorithms, and secures access to your sensitive data. Get started with a preview of AWS Lake Formation here.
  • Next up, AWS IoT Greengrass enables enhanced security through hardware-root-of-trust private key storage on hardware secure elements, including Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs). Storing your private key on a hardware secure element adds hardware-root-of-trust-level security to existing AWS IoT Greengrass security features that include X.509 certificates for TLS mutual authentication and encryption of data both in transit and at rest. You can also use the hardware secure element to protect secrets that you deploy to your AWS IoT Greengrass device using AWS IoT Greengrass Secrets Manager. To try these security enhancements for yourself, check out https://aws.amazon.com/greengrass/.
  • You can now use the AWS Key Management Service (KMS) custom key store feature to gain more control over your KMS keys. Previously, KMS offered the ability to store keys in shared HSMs managed by KMS. However, we heard from customers that their needs were more nuanced. In particular, they needed to manage keys in single-tenant HSMs under their exclusive control. With KMS custom key store, you can configure your own CloudHSM cluster and authorize KMS to use it as a dedicated key store for your keys. Then, when you create keys in KMS, you can choose to generate the key material in your CloudHSM cluster. Get started with KMS custom key store by following the steps in this blog post.
  • We’re excited to announce the release of ATO on AWS to help customers and partners speed up the FedRAMP approval process (which has traditionally taken SaaS providers up to two years to complete). We’ve already had customers, such as Smartsheet, complete the process in less than 90 days with ATO on AWS. Customers will have access to training, tools, pre-built CloudFormation templates, control implementation details, and pre-built artifacts. Additionally, customers are able to access direct engagement and guidance from AWS compliance specialists and support from expert AWS consulting and technology partners who are a part of our Security Automation and Orchestration (SAO) initiative, including GitHub, Yubico, Red Hat, Splunk, Allgress, Puppet, Trend Micro, Telos, CloudCheckr, Saint, Center for Internet Security (CIS), Okta, Barracuda, Anitian, Kratos, and Coalfire. To get started with ATO on AWS, contact the AWS partner team at [email protected].
  • Finally, I announced our first conference dedicated to cloud security, identity and compliance: AWS re:Inforce. The inaugural AWS re:Inforce, a hands-on gathering of like-minded security professionals, will take place in Boston, MA on June 25th and 26th, 2019 at the Boston Convention and Exhibition Center. The cost for a full conference pass will be $1,099. I’m hoping to see you all there. Sign up here to be notified when registration opens.
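
If you’d like to try Security Hub from the command line rather than the console, here’s a minimal sketch using the AWS CLI. This is an illustration of the one-call enablement mentioned above, not part of the original announcement; it assumes your CLI credentials and default region are already configured.

# Enable Security Hub for the current account (the single API call noted above)
aws securityhub enable-security-hub

# Pull a few aggregated findings to confirm data is flowing
aws securityhub get-findings --max-results 5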

Key re:Invent Takeaways

AWS is here to help you build

  1. Customers want to innovate, and cloud needs to securely enable this. Companies need to be able to innovate to meet rapidly evolving consumer demands. This means they need cloud security capabilities they can rely on to meet their specific security requirements, while allowing them to continue to meet and exceed customer expectations. AWS Lake Formation, AWS Control Tower, and AWS Security Hub aggregate and automate otherwise manual processes involved with setting up a secure and compliant cloud environment, giving customers greater flexibility to innovate, create, and manage their businesses.
  2. Cloud security is as much art as it is science. Getting to what you really need to know about your security posture can be a challenge. At AWS, we’ve found that the sweet spot lies in services and features that enable you to continuously gain greater depth of knowledge into your security posture, while automating mission-critical tasks that relieve you from having to constantly monitor your infrastructure. This manifests itself in having an end-to-end automated remediation workflow. I spent some time covering this in my re:Invent session, and will continue to advocate using a combination of services, such as AWS Lambda, AWS WAF, Amazon S3, AWS CloudTrail, and AWS Config, to proactively identify, mitigate, and remediate threats that may arise as your infrastructure evolves.
  3. Remove human access to data. I’ve set a goal at AWS to reduce human access to data by 80%. While that number may sound lofty, it’s purposeful, because the only way to achieve this is through automation. There have been a number of security incidents in the news across industries, ranging from inappropriate access to personal information in healthcare, to credential stuffing in financial services. The way to protect against such incidents? Automate key security measures and minimize your attack surface by enabling access control and credential management with services like AWS IAM and AWS Secrets Manager. Additional gains can be found by leveraging threat intelligence through continuous monitoring of incidents via services such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie (intelligence from these services will now be available in AWS Security Hub).
  4. Get your leadership on board with your security plan. We offer 500+ security services and features; however, new services and technology can’t be wholly responsible for implementing reliable security measures. Security teams need to set expectations with leadership early, aligning on a number of critical protocols, including how to restrict and monitor human access to data, patching and log retention duration, credential lifespan, blast radius reduction, embedded encryption throughout AWS architecture, and canaries and invariants for security functionality. It’s also important to set security Key Performance Indicators (KPIs) to continuously track. At AWS, we monitor the number of AppSec reviews, how many security checks we can automate, third-party compliance audits, metrics on internal time spent, and conformity with Service Level Agreements (SLAs). While the needs of your business may vary, we find baseline KPIs to be consistent measures of security assurance that can be easily communicated to leadership.

Final Thoughts

Queen’s famous lyric, “I want it all, I want it all, and I want it now,” accurately captures the sentiment at re:Invent this year. Security will always be job zero for us, and we continue to iterate on behalf of customers so they can securely build, experiment and create … right now! AWS is trusted by many of the world’s most risk-sensitive organizations precisely because we have demonstrated this unwavering commitment to putting security above all. Still, I believe we are in the early days of innovation and adoption of the cloud, and I look forward to seeing both the gains and use cases that come out of our latest batch of tools and services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds five patents in the field of cloud security architecture. Follow Steve on Twitter

Are KMS custom key stores right for you?

Post Syndicated from Richard Moulds original https://aws.amazon.com/blogs/security/are-kms-custom-key-stores-right-for-you/

You can use the AWS Key Management Service (KMS) custom key store feature to gain more control over your KMS keys. The KMS custom key store integrates KMS with AWS CloudHSM to help satisfy compliance obligations that would otherwise require the use of on-premises hardware security modules (HSMs) while providing the AWS service integrations of KMS. However, the additional control comes with increased cost and potential impact on performance and availability. This post will help you decide if this feature is the best approach for you.

KMS is a fully managed service that generates encryption keys and helps you manage their use across more than 45 AWS services. It also supports the AWS Encryption SDK and other client-side encryption tools, and you can integrate it into your own applications. KMS is designed to meet the requirements of the vast majority of AWS customers. However, there are situations where customers need to manage their keys in single-tenant HSMs that they exclusively control. Previously, KMS did not meet these requirements since it offered only the ability to store keys in shared HSMs that are managed by KMS.

AWS CloudHSM is a service that’s primarily intended to support customer-managed applications that are specifically designed to use HSMs. It provides direct control over HSM resources, but the service isn’t, by itself, widely integrated with other AWS managed services. Before custom key store, this meant that if you required direct control of your HSMs but still wanted to use and store regulated data in AWS managed services, you had to choose between changing those requirements, not using a given AWS service, or building your own solution. KMS custom key store gives you another option.

How does a custom key store work?

With custom key store, you can configure your own CloudHSM cluster and authorize KMS to use it as a dedicated key store for your keys rather than the default KMS key store. Then, when you create keys in KMS, you can choose to generate the key material in your CloudHSM cluster. Your KMS customer master keys (CMKs) never leave the CloudHSM instances, and all KMS operations that use those keys are only performed in your HSMs. In all other respects, the master keys stored in your custom key store are used in a way that is consistent with other KMS CMKs.

This diagram illustrates the primary components of the service and shows how a cluster of two CloudHSM instances is connected to KMS to create a customer controlled key store.

Figure 1: A cluster of two CloudHSM instances is connected to the KMS front-end hosts to create a customer controlled key store

Because you control your CloudHSM cluster, you can take direct action to manage certain aspects of the lifecycle of your keys, independently of KMS. Specifically, you can verify that KMS correctly created keys in your HSMs, and you can delete key material and restore keys from backup at any time. You can also choose to connect and disconnect the CloudHSM cluster from KMS, effectively isolating your keys from KMS (as sketched below). However, with more control comes more responsibility. It’s important that you understand the availability and durability impact of using this feature; I discuss these issues in the next section.
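
As a rough illustration of that isolation control, the AWS CLI sketch below disconnects a custom key store from KMS and then checks its connection state. The key store ID is a placeholder, not a value from this post.

# Disconnect the custom key store; KMS can no longer use the keys in your cluster
aws kms disconnect-custom-key-store --custom-key-store-id <CUSTOM KEY STORE ID>

# Check the state; expect DISCONNECTED once the operation completes
aws kms describe-custom-key-stores --custom-key-store-id <CUSTOM KEY STORE ID> --query 'CustomKeyStores[0].ConnectionState'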

Decision criteria

KMS customers who plan to use a custom key store tell us they expect to use the feature selectively, deciding on a key-by-key basis where to store each one. To help you decide if and how you might use the new feature, here are some important issues to consider.

Here are some reasons you might want to store a key in a custom key store:

  • You have keys that are required to be protected in a single-tenant HSM or in an HSM over which you have direct control.
  • You have keys that are explicitly required to be stored in an HSM validated at FIPS 140-2 level 3 overall (the HSMs used in the default KMS key store are validated to level 2 overall, with level 3 in several categories, including physical security).
  • You have keys that are required to be auditable independently of KMS.

And here are some considerations that might influence your decision to use a custom key store:

  • Cost — Each custom key store requires that your CloudHSM cluster contains at least two HSMs. CloudHSM charges vary by region, but you should expect costs of at least $1,000 per month, per HSM, if each device is permanently provisioned. This cost occurs regardless of whether you make any requests of the KMS API directly or indirectly through an AWS service.
  • Performance — The number of HSMs determines the rate at which keys can be used. It’s important that you understand the intended usage patterns for your keys and ensure that you have provisioned your HSM resources appropriately.
  • Availability — The number of HSMs and the use of availability zones (AZs) impacts the availability of your cluster and, therefore, your keys. The risk of your configuration errors that result in a custom key store being disconnected, or key material being deleted and unrecoverable, must be understood and assessed.
  • Operations — By using the custom key store feature, you will perform certain tasks that are normally handled by KMS. You will need to set up HSM clusters, configure HSM users, and potentially restore HSMs from backup. These are security-sensitive tasks for which you should have the appropriate resources and organizational controls in place to perform.

Getting Started

Here’s a basic rundown of the steps you’ll take to create your first key in a custom key store within a given region; a hedged CLI sketch follows the list.

  1. Create your CloudHSM cluster, initialize it, and add HSMs to the cluster. If you already have a CloudHSM cluster, you can use it as a custom key store in addition to your existing applications.
  2. Create a CloudHSM user so that KMS can access your cluster to create and use keys.
  3. Create a custom key store entry in KMS, give it a name, define which CloudHSM cluster you want it to use, and give KMS the credentials to access your cluster.
  4. Instruct KMS to make a connection to your cluster and log in.
  5. Create a CMK in KMS in the usual way except now select CloudHSM as the source of your key material. You’ll define administrators, users, and policies for the key as you would for any other CMK.
  6. Use the key via the existing KMS APIs, AWS CLI, or the AWS Encryption SDK. Requests to use the key don’t need to be context-aware of whether the key is stored in a custom key store or the default KMS key store.
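
For readers who prefer the CLI to the console, here’s a hedged sketch of steps 3 through 6. The names, IDs, and files are placeholders, not values from this post; check the KMS documentation for exact requirements before running anything like this.

# Step 3: register your CloudHSM cluster with KMS as a custom key store
aws kms create-custom-key-store --custom-key-store-name <KEY STORE NAME> --cloud-hsm-cluster-id <CLUSTER ID> --trust-anchor-certificate file://customerCA.crt --key-store-password <KMSUSER PASSWORD>

# Step 4: instruct KMS to connect to the cluster and log in
aws kms connect-custom-key-store --custom-key-store-id <CUSTOM KEY STORE ID>

# Step 5: create a CMK whose key material is generated in your cluster
aws kms create-key --origin AWS_CLOUDHSM --custom-key-store-id <CUSTOM KEY STORE ID> --description "CMK in my custom key store"

# Step 6: use the key as you would any other CMK
aws kms encrypt --key-id <KEY ID> --plaintext fileb://secret.txt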

Summary

Some customers need specific controls in place before they can use KMS to manage encryption keys in AWS. The new KMS custom key store feature is intended to satisfy that requirement. You can now apply the controls provided by CloudHSM to keys managed in KMS, without changing access control policies or service integration.

However, by using the new feature, you take responsibility for certain operational aspects that would otherwise be handled by KMS. It’s important that you have the appropriate controls in place and understand the performance and availability requirements of each key that you create in a custom key store.

If you’ve been prevented from migrating sensitive data to AWS because of specific key management requirements that are currently not met by KMS, consider using the new KMS custom key store feature.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Key Management Service discussion forum.

Want more AWS Security news? Follow us on Twitter.

Author

Richard Moulds

Richard is a Principal Product Manager at AWS. He’s a member of the KMS team and is primarily focused on helping to define the product roadmap and satisfy customer requirements. Richard has more than 15 years’ experience helping customers build encryption and key management systems to protect their data. His attraction to cryptography stems from the challenge of taking such a complex subject and translating it into simple solutions that customers can take for granted, on a grand scale. When he’s not thinking ahead, he’s focused on the past, restoring classic cars; the more rust, the better.

How to clone an AWS CloudHSM cluster across regions

Post Syndicated from Tracy Pierce original https://aws.amazon.com/blogs/security/how-to-clone-an-aws-cloudhsm-cluster-across-regions/

You can use AWS CloudHSM to generate, store, import, export, and manage your cryptographic keys. You can also use it to compute message digests and hash-based message authentication codes (HMACs), and to cryptographically sign data and verify signatures. To help ensure redundancy of data and to simplify disaster recovery, you’ll typically clone your AWS CloudHSM cluster into a different AWS region. Cloning lets you synchronize keys, including non-exportable keys, across regions. Non-exportable keys are keys that can never leave the CloudHSM device in plaintext; they reside on the CloudHSM device and are stored encrypted.

You clone a cluster to another region in a two-step process. First, you copy a backup to the destination region. Second, you create a new cluster from this backup. In this post, I’ll show you how to set up one cluster in region 1, and how to use the new CopyBackupToRegion feature to clone the cluster and hardware security modules (HSMs) to a virtual private cloud (VPC) in region 2.

Note: This post doesn’t include instructions on how to set up a cross-region VPC to synchronize HSMs across the two cloned clusters. If you need to do that, read this article.

Solution overview

To complete this solution, you can use either the AWS Command Line Interface (AWS CLI) or the AWS CloudHSM API. For this post, I’ll use the AWS CLI to copy the cluster backup from region 1 to region 2, and then I’ll launch a new cluster from that copied backup.

The following diagram illustrates the process covered in the post.

Figure 1: Architecture diagram

Here’s how the process works:

  1. AWS CloudHSM creates a backup of the cluster and stores it in an S3 bucket owned by AWS CloudHSM.
  2. You run the CLI/API command to copy the backup to another AWS region.
  3. When the copy is complete, you use that backup to create a new cluster and HSMs.

Note: Backups can’t be copied into or out of AWS GovCloud (US) because it’s a restricted region.

As with all cluster backups, when you copy the backup to a new AWS region, it’s stored in an Amazon S3 bucket owned by an AWS CloudHSM account. AWS CloudHSM manages the security and storage of cluster backups for you. This means the backups in both regions have the durability of Amazon S3, which is designed for 99.999999999%. The backup in region 2 is also encrypted and secured in the same way as your backup in region 1. You can read more about the encryption process of your AWS CloudHSM backups here.

Any HSMs created in this cloned cluster will have the same users and keys as the original cluster at the time the backup was taken. From this point on, you must manually keep the cloned clusters in sync. Specifically:

  • If you create users after creating your new cluster from the backup, you must create them on both clusters manually.
  • If you change the password for a user in one cluster, you must change the password on the cloned clusters to match.
  • If you create more keys in one cluster, you must sync them to at least one HSM in the cloned cluster. Note that after you sync the key from cluster 1 to cluster 2, the CloudHSM automated cluster synchronization will take care of syncing the keys within the 2nd cluster.

Prerequisites

Before you begin, a few items need to be in place, most importantly the following:

Important note: Syncing keys across clusters in more than one region will only work if all clusters are created from the same backup. This is because synchronization requires the same secret key, called a masking key, to be present on the source and destination HSM. The masking key is specific to each cluster. It can’t be exported, and can’t be used for any purpose other than synchronizing keys across HSMs in a cluster.

Step 1: Create your first cluster in region 1

Follow the links in each sub-step below to the documentation page for more information and setup requirements:

  1. Create the cluster. To do this, you will run the command below via CLI. You will want to replace the placeholder <SUBNET ID 1> with one of your private subnets.
    $ aws cloudhsmv2 create-cluster --hsm-type hsm1.medium --subnet-ids <SUBNET ID 1>
  2. Launch your Amazon Elastic Compute Cloud (Amazon EC2) client (in the public subnet). You can follow the steps here to launch an EC2 Instance.
  3. Create the first HSM (in the private subnet). To do this, you will run the command below via CLI. You will want to replace the placeholder <CLUSTER ID> with the ID given from the ‘Create the cluster’ command above. You’ll replace <AVAILABILITY ZONE> with the AZ matching your private subnet. For example, us-east-1a.
    $ aws cloudhsmv2 create-hsm --cluster-id <CLUSTER ID> --availability-zone <AVAILABILITY ZONE>
  4. Initialize the cluster. Initializing your cluster requires creating a self-signed certificate and using that to sign the cluster’s Certificate Signing Request (CSR). You can view an example here of how to create and use a self-signed certificate. Once you have your certificate, you will run the command below to initialize the cluster with it. You will want to replace the placeholder <CLUSTER ID> with your cluster id from step 1.
    $ aws cloudhsmv2 initialize-cluster --cluster-id <CLUSTER ID> --signed-cert file://<CLUSTER ID>_CustomerHsmCertificate.crt --trust-anchor file://customerCA.crt

    Note: Don’t forget to place a copy of the certificate used to sign your cluster’s CSR into the /opt/cloudhsm/etc directory to ensure a continued secure connection.

  5. Install the cloudhsm-client software. Once the Amazon EC2 client is launched, you’ll need to download and install the cloudhsm-client software. You can do this by running the command below from the CLI:
    wget https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/EL6/cloudhsm-client-latest.el6.x86_64.rpm

    Once downloaded, you’ll install by running this command:
    sudo yum install -y ./cloudhsm-client-latest.el6.x86_64.rpm

  6. The last step in initializing the cluster requires you to configure the cloudhsm-client to point to the ENI IP of your first HSM. You do this on your EC2 client by running this command:
    $ sudo /opt/cloudhsm/bin/configure -a <IP ADDRESS>
    Replace the <IP ADDRESS> placeholder with your HSM’s ENI IP. The cloudhsm-client comes pre-installed with a Python script called “configure” located in the /opt/cloudhsm/bin/ directory. This script updates your /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg and /opt/cloudhsm/etc/cloudhsm_client.cfg files with your HSM’s IP address, which ensures your client can connect to your cluster.
  7. Activate the cluster. To activate, launch cloudhsm_mgmt_util by running this command, which connects you to the cluster:

    $ /opt/cloudhsm/bin/cloudhsm_mgmt_util /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg

    Then, you need to enable the secure communication by running this command:

    aws-cloudhsm>enable_e2e

    If you’ve placed the certificate in the correct directory, you should see a response like this on the command line:

    E2E enabled on server 0(server1)

    If you run the command listUsers you’ll see a PRECO user:

    
    aws-cloudhsm>listUsers
    Users on server 0(server1):
    Number of users found:2
    	User ID		User Type	User Name
    	     1		PRECO		admin
    	     2		AU		app_user
    
    

    Change the password for this user to complete the activation process. You do this by first logging in using the command below:

    aws-cloudhsm>loginHSM PRECO admin password

    Once logged in, change the password using this command:

    
    aws-cloudhsm>changePswd PRECO admin <NEW PASSWORD>
    ***************************CAUTION******************************
    This is a CRITICAL operation, should be done on all nodes in the
    cluster. Cav server does NOT synchronize these changes with the 
    nodes on which this operation is not executed or failed, please
    ensure this operation is executed on all nodes in the cluster.
    ****************************************************************
    
    Do you want to continue(y/n)?Y
    Changing password for admin(PRECO) on 1 nodes
    

    Once completed, log out using the command logout, then log back in with the new password, using the command loginHSM PRECO admin <NEW PASSWORD>.

    Doing this allows you to create the first crypto user (CU). You create the user by running the command:
    aws-cloudhsm>createUser <USERTYPE (ex: CO, CU)> <USERNAME> <PASSWORD>
    Replace the placeholder values in this command. The <USERTYPE> can be a CO (crypto officer) or a CU (crypto user). You can find more information about user types here. Replace the placeholders <USERNAME> <PASSWORD> with a real username and password combination. Crypto users are permitted to create and share keys on the CloudHSM.

    Run the command quit to exit this tool.

Step 2: Trigger a backup of your cluster

To trigger a backup that will be copied to region 2 to create your new cluster, add an HSM to your cluster in region 1. You can do this via the console or the CLI (a hedged sketch follows this paragraph). The backup that is created will contain all users (COs, CUs, and appliance users), all key material on the HSMs, and the configurations and policies associated with them. The user portion is extremely important because keys can only be synced across clusters to the same user. Make a note of the backup ID because you will need it later. You can find it by logging into the AWS Management Console, navigating to the CloudHSM console, and selecting Backups. There will be a list of backup IDs, cluster IDs, and creation times. Make sure to select the backup ID specifically created for the cross-region copy.
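
If you’d rather script this step than click through the console, here’s a hedged CLI sketch; the cluster ID and Availability Zone are placeholders.

# Adding an HSM to the cluster triggers a new backup
aws cloudhsmv2 create-hsm --cluster-id <CLUSTER ID> --availability-zone <AVAILABILITY ZONE>

# List backups for the cluster and note the newest backup ID
aws cloudhsmv2 describe-backups --filters clusterIds=<CLUSTER ID>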

Step 3: Create a key on your cluster in Region 1

There are many ways to create a key. I’m using key_mgmt_util because it’s an easy and straightforward method using CLI commands instead of SDK libraries. Start by connecting to the EC2 client instance that you launched above and ensuring the cloudhsm-client is running. If you aren’t sure, run this command:

$ sudo start cloudhsm-client

Now, launch the key_mgmt_util by running this command:

$ /opt/cloudhsm/bin/key_mgmt_util

When you see the prompt, log in as a CU to create your key, replacing <USERNAME> and <PASSWORD> with an actual CU user’s username and password:

Command: loginHSM -u CU -s <USERNAME> -p <PASSWORD>

To create the key for this example, we’re going to use the key_mgmt_util to generate a symmetric key. Note the -nex parameter is what makes this key non-exportable. An example command is below:

Command: genSymKey -t 31 -s 32 -l aes256 -nex

In the above command:

  1. genSymKey creates the symmetric key
  2. -t chooses the key type, which in this case is 31 (AES)
  3. -s sets the key size, which in this case is 32 bytes
  4. -l sets a label so you can easily recognize the key
  5. -nex makes the key non-exportable

The HSM will return a key handle. This is used as an identifier to reference the key in future commands. Make a note of the key handle because you will need it later. Here’s an example of the full output in which you can see the key handle provided is 37:


Command:genSymKey -t 31 -s 32 -l aes256 -nex
	Cfm3GenerateSymmetricKey returned: 0x00 : HSM Return: SUCCESS
	Symmetric Key Created.   Key Handle: 37
	Cluster Error Status
	Node id 0 and err state 0x00000000 : HSM Return: SUCCESS

Step 4: Copy your backup from region 1 to region 2 and create a cluster from the backup

To copy your backup from region 1 to region 2, from your EC2 client you’ll need to run the command that appears after these important notes:

  • Make sure the proper permissions are applied for the IAM role or user configured for the CLI. You’ll want to be a CloudHSM administrator for these actions. The instructions here show you how to create an admin user for this process, and here is an example of the permissions policy:
    
    {
       "Version":"2018-06-12",
       "Statement":{
          "Effect":"Allow",
          "Action":[
             "cloudhsm:*",
             "ec2:CreateNetworkInterface",
             "ec2:DescribeNetworkInterfaces",
             "ec2:DescribeNetworkInterfaceAttribute",
             "ec2:DetachNetworkInterface",
             "ec2:DeleteNetworkInterface",
             "ec2:CreateSecurityGroup",
             "ec2:AuthorizeSecurityGroupIngress",
             "ec2:AuthorizeSecurityGroupEgress",
             "ec2:RevokeSecurityGroupEgress",
             "ec2:DescribeSecurityGroups",
             "ec2:DeleteSecurityGroup",
             "ec2:CreateTags",
             "ec2:DescribeVpcs",
             "ec2:DescribeSubnets",
             "iam:CreateServiceLinkedRole"
          ],
          "Resource":"*"
       }
    }
    
    

  • To copy the backup over, you need to know the destination region and the source cluster ID and/or the source backup ID, both of which you can find in the CloudHSM console.
  • If you use only the cluster ID, the most recent backup of the associated cluster will be chosen for copy. If you specify the backup ID, that associated backup will be copied. If you don’t know these IDs, run the describe-clusters or describe-backups commands.

Here’s the example command:


$ aws cloudhsmv2 copy-backup-to-region --destination-region <DESTINATION REGION> --cluster-id <CLUSTER ID> --backup-id <BACKUP ID>
{
    "DestinationBackup": {
        "BackupId": "backup-gdnekhcxf4n",
        "CreateTimestamp": 1531742400,
        "BackupState": "CREATE_IN_PROGRESS",
        "SourceCluster": "cluster-kzlczlspnho",
        "SourceBackup": "backup-4kuraxsqetz",
        "SourceRegion": "us-east-1"
    }
}

Once the backup has been copied to region 2, you’ll see a new backup ID in your console. This is what you’ll use to create your new cluster. You can follow the steps here to create your new cluster from this backup. This cluster will launch already initialized for you, but it will still need HSMs added into it to complete the activation process. Make sure you copy over the cluster certificate from the original cluster to the new region. You can do this by opening two terminal sessions, one for each HSM. Open the certificate file on the HSM in cluster 1 and copy it. On the HSM in cluster 2, create a file and paste the certificate over. You can use any text editor you like to do this. This certificate is required to establish the encrypted connection between your client and HSM instances.

You should also make sure you’ve added the cloned cluster’s Security Group to your EC2 client instance to allow connectivity. You do this by selecting the Security Group for your EC2 client in the EC2 console, and selecting Add rules. You’ll add a rule allowing traffic, with the source being the Security Group ID of your cluster.

Finally, take note of the ENI IP for the HSM because you’ll need it later. You can find this in your CloudHSM console by clicking on the cluster for more information, or from the CLI as sketched below.
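
If you prefer the CLI for this lookup, here’s a minimal sketch; the cluster ID is a placeholder.

# Print the ENI IP of each HSM in the cloned cluster
aws cloudhsmv2 describe-clusters --filters clusterIds=<CLUSTER ID> --query 'Clusters[0].Hsms[*].EniIp'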

Step 5: Create a new configuration file with one ENI IP from both clusters

To sync a key from a cluster in region 1 to a cluster in region 2, you must create a configuration file that contains at least one ENI IP of an HSM in both clusters. This is required to allow the cloudhsm-client to communicate with both clusters at the same time. This is where the masking key mentioned earlier comes into play: the syncKey command uses it to copy keys between clusters, which is why the cluster in region 2 must be created from a backup of the cluster in region 1. For the new configuration file, I’m going to copy the original file /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg to a new file named SyncClusters.cfg. You’ll edit this new configuration file to have the ENI IP of the HSM in the cluster of region 1 and the ENI IP of the HSM in the cluster of region 2. It should look something like this:


{
    "scard": {
        "certificate": "cert-sc",
        "enable": "no",
        "pkey": "pkey-sc",
        "port": 2225
    },
    "servers": [
        {
            "CAfile": "",
            "CApath": "/opt/cloudhsm/etc/certs",
            "certificate": "/opt/cloudhsm/etc/client.crt",
            "e2e_encryption": {
                "enable": "yes",
                "owner_cert_path": "/opt/cloudhsm/etc/customerCA.crt"
            },
            "enable": "yes",
            "hostname": "",
            "name": "",
            "pkey": "/opt/cloudhsm/etc/client.key",
            "port": 2225,
            "server_ssl": "yes",
            "ssl_ciphers": ""
        },
        {
            "CAfile": "",
            "CApath": "/opt/cloudhsm/etc/certs",
            "certificate": "/opt/cloudhsm/etc/client.crt",
            "e2e_encryption": {
                "enable": "yes",
                "owner_cert_path": "/opt/cloudhsm/etc/customerCA.crt"
            },
            "enable": "yes",
            "hostname": "",
            "name": "",
            "pkey": "/opt/cloudhsm/etc/client.key",
            "port": 2225,
            "server_ssl": "yes",
            "ssl_ciphers": ""
        }
    ]
}

To verify connectivity to both clusters, start the cloudhsm-client using the modified configuration file. The command will look similar to this:

$ /opt/cloudhsm/bin/cloudhsm_mgmt_util /opt/cloudhsm/etc/SyncClusters.cfg

After connection, you should see something similar to this, with one IP from cluster 1 and one IP from cluster 2:


Connecting to the server(s), it may take time
depending on the server(s) load, please wait...
Connecting to server '<CLUSTER-1-IP>': hostname '<CLUSTER-1-IP>', port 2225...
Connected to server '<CLUSTER-1-IP>': hostname '<CLUSTER-1-IP>', port 2225.
Connecting to server '<CLUSTER-2-IP>': hostname '<CLUSTER-2-IP>', port 2225...
Connected to server '<CLUSTER-2-IP>': hostname '<CLUSTER-2-IP>', port 2225.

If you run the command info server from the prompt, you’ll see a list of servers your client is connected to. Make note of these because they’ll be important when syncing your keys. Typically, you’ll see server 0 as your first HSM in cluster 1 and server 1 as your first HSM in cluster 2.

Step 6: Sync your key from the cluster in region 1 to the cluster in region 2

You’re ready to sync your keys. Make sure you’ve logged in as the Crypto Officer (CO) user. Only the CO user can perform management functions on the cluster (for example, syncing keys).

Note: These steps are all performed at the server prompt, not the aws-cloudhsm prompt.

First, run the command listUsers to get the user ID of the user that created the keys. Here’s an example:


server0>listUsers
Users on server 0(<CLUSTER-1-IP>):
Number of users found:3

User Id    User Type	User Name     MofnPubKey     LoginFailureCnt	 2FA
1		CO   	admin         NO               0	 	 NO
2		AU   	app_user      NO               0	 	 NO
3		CU   	<USERNAME>    NO               0	 	 NO

Make note of the user ID because you’ll need it later; in this case, it’s 3. Now, you need to see the key handles that you want to sync. You either noted this from earlier, or you can find this by running the findAllKeys command with the parameter for user 3. Here’s an example:


server0>findAllKeys 3 0
Keys on server 0(<CLUSTER-1-IP>):
Number of keys found 1
number of keys matched from start index 0::1
37
findAllKeys success

In this case, the key handle I want to sync is 37. When running the command syncKey, you’ll input the key handle and the server you want to sync it to (the destination server). Here’s an example:

server0>syncKey 37 1

In this example, 37 is the key handle, and 1 is the destination HSM. You’ll run the exit command to back out to the cluster prompt, and from here you can run findAllKeys again, which should show the same key handle on both clusters.


aws-cloudhsm>findAllKeys 3 0
Keys on server 0(<CLUSTER-1-IP>):
Number of keys found 1
number of keys matched from start index 0::1
37
findAllKeys success on server 0(<CLUSTER-1-IP>)
Keys on server 0(<CLUSTER-2-IP>):
Number of keys found 1
number of keys matched from start index 0::1
37
findAllKeys success on server 0(<CLUSTER-2-IP>)

Repeat this process with all keys you want to sync between clusters.

Summary

I walked you through how to create a cluster, trigger a backup, copy that backup to a new region, launch a new cluster from that backup, and then sync keys across clusters. This will help reduce disaster recovery time, while helping to ensure that your keys are secure in multiple regions should a failure occur.

Remember to always manually update users across clusters after the initial backup copy and cluster creation, because these updates aren’t automatic. You must also run the syncKey command on any keys created afterward.

You’re now set up for fault tolerance in your AWS CloudHSM environment.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS CloudHSM forum.

Want more AWS Security news? Follow us on Twitter.

Author

Tracy Pierce

Tracy Pierce is a Senior Cloud Support Engineer at AWS. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority and she shows this by improving processes, documentation, and building tutorials. She has her AS in Computer Security & Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls at request.

Using AWS CloudHSM-backed certificates with Microsoft Internet Information Server

Post Syndicated from Jeff Levine original https://aws.amazon.com/blogs/security/using-aws-cloudhsm-backed-certificates-with-microsoft-internet-information-server/

SSL/TLS certificates are used to create encrypted sessions to endpoints such as web servers. If you want to get an SSL certificate, you usually start by creating a private key and a corresponding certificate signing request (CSR). You then send the CSR to a certificate authority (CA) and receive a certificate. When a user seeks to securely browse your website, the web server presents the certificate to the user’s browser. The user’s browser verifies that the certificate is associated with a trusted root certificate, and then uses the certificate to encrypt the session data. The web server then uses the private key to decrypt the user’s data. The security of this protocol relies on the private key remaining private.

A common problem with web servers, however, is that the private key and the certificate reside on the web server itself. If an attacker compromises the server and steals both the private key and the certificate, the attacker could potentially use them to intercept sessions or impersonate the server. If your business requires hardware security modules for key storage, you can use AWS CloudHSM to generate and hold the private keys for web server certificates rather than placing them on the web server directly, reducing your exposure. On Windows, AWS CloudHSM integrates with Microsoft Internet Information Server (IIS) through the Key Storage Provider extensions to the Microsoft Cryptography Next Generation API (KSP/CNG). In this blog post, I will show you how to use the CloudHSM KSP/CNG capabilities to keep the private key within the hardware security module (HSM) when you use IIS to host your website.

Prerequisites and assumptions

You need the following for this walkthrough.

  • An AWS account and permissions to provision Amazon Virtual Private Cloud (Amazon VPC), Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Route 53, and Amazon CloudHSM. I recommend using an AWS user account ID that has full administrative privileges.
  • A region that supports CloudHSM. I’m using the eu-west-1 (Ireland) region.
  • A domain to use for certificate generation. I’m using the domain example.com.
  • A trusted public certificate authority (CA).
  • A working knowledge of the Amazon VPC, Amazon EC2, and Amazon CloudHSM services.
  • Familiarity with the administration of Windows Server 2012 R2.

Important: You will incur charges for the services used in this example. You can find the cost of each service on that service’s pricing page. You can also use the Simple Monthly Calculator to estimate the cost.

Architectural overview

I’ll go over a few diagrams before I start building. I’ll start with a diagram of the infrastructure.

Figure 1: CloudHSM infrastructure

This diagram shows a VPC in the eu-west-1 (Ireland) region. An Amazon EC2 instance running Windows Server 2012 R2 resides on a public subnet. This instance will run both the CloudHSM client software and IIS to host a website. The instance has an Elastic IP and can be accessed via the Internet Gateway. I’ll also have security groups that enable RDP, HTTP, and HTTPS access. The private subnet hosts the Elastic Network Interface (ENI) for the CloudHSM cluster, which has a single HSM.

And here’s the certification hierarchy:

Figure 2: Certification hierarchy

The Amazon EC2 instance will run Windows Server 2012 R2 (WS2012R2) and IIS. I’ll use the KSP/CNG capabilities of CloudHSM to request a CSR from the WS2012R2 instance. The KSP/CNG integration will reach out to CloudHSM, which will, in turn, create a private/public key pair. The private key will remain within CloudHSM while the public key will be used to create the CSR. I’ll then submit the CSR to a CA. The CA will sign the CSR with the CA‘s private key and return a certificate. I’ll then use certreq to accept the certificate, which makes it available to IIS. I’ll then secure the default IIS website with the certificate. Finally, I’ll browse to the website and show how it uses the CNG/KSP integration.

Now that you’ve seen what I’m building, we can get started!

Build the CloudHSM infrastructure

You’ll need to set up a CloudHSM environment. This environment will consist of a VPC, a CloudHSM cluster with a single HSM, and a Windows 2012 R2 EC2 instance that will run both the CloudHSM client and IIS. For each of the steps below, I have provided a link to the relevant documentation, as well as any additional instructions to build out the infrastructure to match my sample environment.

  1. Select a region that offers AWS CloudHSM. I’m using the eu-west-1 (Ireland) region.
  2. Create a Virtual Private Cloud (VPC). Use the following values:
    VPC IPv4 CIDR block: 10.0.0.0/16
    VPC Name: cloudhsm-vpc
    Public subnet IPv4 CIDR block: 10.0.0.0/24
    Public subnet Availability Zone: eu-west-1b
    Public subnet name: cloudhsm-public
  3. Create a private subnet. Use the following values:
    IPv4 CIDR block: 10.0.1.0/24
    Availability Zone: eu-west-1b
    Name: cloudhsm-private
  4. Create a cluster. Use the following values:
    HSM Availability Zone: eu-west-1b
    Subnet: Use the private subnet you just created in eu-west-1b.
    Number of HSMs: 1
  5. Launch an Amazon EC2 client instance. Use the following values:
    AMI: Microsoft Windows Server 2012 R2 Base
    Instance type: t2.medium
    VPC: cloudhsm-vpc
    Subnet: cloudhsm-public
    Auto-assign Public IP: Enable
    Name tag: cloudhsm
    Security group: The CloudHSM cluster security group

    Once the instance is running, create an additional administrative Windows user because the built-in Administrator account has restrictions.

  6. Create an additional security group to allow you to connect to the instance and attach the security group to the instance in addition to the security group of the CloudHSM cluster.
  7. Allocate and associate an Elastic IP address to the instance so that the instance’s IP address persists if you need to stop and start the instance.
  8. Create a DNS “A record” to map the host name to the Elastic IP. In this example, I’m mapping www.example.com to the Elastic IP that I allocated (a hedged CLI sketch appears after this list).
  9. Create an HSM. After the HSM is created, note the HSM Id in the form of hsm- followed by an alphanumeric string; for example, hsm-1234abcd.
  10. Initialize the cluster.
  11. Install and configure the AWS CloudHSM client (Windows).
    You might need to use a command window with administrative privileges to do this.
  12. Activate the cluster.
  13. Create a crypto user. You will need to specify the ID and the password of the crypto user in the environment variables that you will create in the next section.
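
For step 8, one way to create the A record from the CLI is sketched below. The hosted zone ID and Elastic IP are placeholders, and the record details assume you’re mapping www.example.com as described above.

# Map www.example.com to the Elastic IP with an UPSERT
aws route53 change-resource-record-sets --hosted-zone-id <HOSTED ZONE ID> --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"www.example.com","Type":"A","TTL":300,"ResourceRecords":[{"Value":"<ELASTIC IP>"}]}}]}'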

Define environment variables

We now need to define some Windows system environment variables (not user variables) used by the CNG/KSP integration. You can modify the environment variables by selecting System from the Control Panel, selecting Advanced System Settings, and selecting Environment Variables; you can also set them from an elevated command prompt, as sketched after the list below.

  1. Go into the Windows Control Panel and enter the following definitions:
    liquidsecurity_daemon_id 1
    liquidsecurity_daemon_socket_type TCPSOCKET
    liquidsecurity_daemon_tcp_port 1111
    n3fips_username USER
    n3fips_password USER:PASSWORD
    n3fips_partition hsm-ABCDEFGHIJK
    • Substitute the user ID and password that you specified in step 13 for the USER and PASSWORD values.
    • Substitute the HSM ID value you got from step 9 for hsm-ABCDEFGHIJK.

    Your Windows Control Panel configuration should look like the image below, not including the above substitutions.

    Figure 3: Environment variable settings

  2. Reboot the EC2 instance to ensure that the environment variables are in place for all processes.
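
If you’d rather set these variables from an elevated command prompt than through the Control Panel, here’s one way to do it with setx; the values are placeholders, and the /M switch (which requires administrative privileges) makes them system-wide rather than per-user.

REM Set the CloudHSM CNG/KSP system environment variables machine-wide
setx /M liquidsecurity_daemon_id 1
setx /M liquidsecurity_daemon_socket_type TCPSOCKET
setx /M liquidsecurity_daemon_tcp_port 1111
setx /M n3fips_username <USER>
setx /M n3fips_password <USER>:<PASSWORD>
setx /M n3fips_partition <HSM ID>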

Launch the CloudHSM client

You must start the CloudHSM client in order to enable the CNG/KSP integration.

  1. Open a command prompt window on the Windows instance.
  2. Enter the following commands:

    cd \ProgramData\Amazon\CloudHSM\data
    "\Program Files\Amazon\CloudHSM\cloudhsm_client.exe" cloudhsm_client.cfg

    The cloudhsm_client.exe command will continue to run. Important note: Do not close the window! The output at the bottom of the window looks similar to the following:

    Figure 4: CloudHSM client output

Launch key_mgmt_util

We are now going to launch the key_mgmt_util client and confirm that no keys are present in the CloudHSM cluster.

  1. Open a new command window. Do not use the window that you opened in the previous section.
  2. Enter the following commands.

    cd "\Program Files\Amazon\CloudHSM"
    key_mgmt_util.exe
    loginHSM -u CU -s USER -p PASSWORD

    (replace USER and PASSWORD with the crypto user and password that you created above)

    findKey

You should see a message saying that there are no keys present, as shown in the figure below.

Figure 5: findKey output showing no keys present

Important note: Do not exit from key_mgmt_util! You’ll return to this window later.

Install IIS

Here’s how:

  1. Follow the steps of this tutorial to install IIS. When installing IIS, make the following changes:
    • Use the Windows Server 2012 instructions as a guideline.
    • Do not install MySQL and PHP because you won’t need these services for this walkthrough.
  2. From your workstation’s web browser, browse to the Elastic IP or the DNS “A record” of the web server using the HTTP protocol. You should see the default Microsoft IIS page as shown below. If you don’t see the page, check the server’s security groups and verify that port 80 (HTTP) and port 443 (HTTPS) are open (a hedged CLI sketch follows this list).

    Figure 6: http://www.example.com

  3. If you browse to the same IP using the HTTPS protocol, you should not get a response because you haven’t configured HTTPS on the EC2 instance.
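
If HTTP or HTTPS turn out to be blocked, here’s a hedged sketch of opening them with the AWS CLI; the security group ID is a placeholder, and you can add the same rules in the EC2 console instead.

# Allow inbound HTTP and HTTPS to the web server's security group
aws ec2 authorize-security-group-ingress --group-id <SECURITY GROUP ID> --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <SECURITY GROUP ID> --protocol tcp --port 443 --cidr 0.0.0.0/0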

Provision a server certificate using the CNG/KSP integration

I’m now going to show how to provision a server certificate. I will do this with the certreq program that’s included with Windows Server 2012 R2. Certreq generates a CSR that I’ll submit to a CA. Certreq supports the KSP/CNG standard and can place certificates into the Windows certificate store.

  1. Create a file named request.inf that contains the lines below. Note that the Subject line may wrap onto the following line. It begins with Subject and ends with Washington and a closing quotation mark (“).

    
    [Version]
    Signature = "$Windows NT$"
    [NewRequest]
    Subject = "C=US,CN=www.example.com,O=Information Technology,OU=Certificate Management,L=Seattle,S=Washington"
    HashAlgorithm = SHA256
    KeyAlgorithm = RSA
    KeyLength = 2048
    ProviderName = "Cavium Key Storage Provider"
    KeyUsage = 0xf0
    MachineKeySet = True
    SuppressDefaults = True
    SMIME = false
    RequestType = PKCS10
    [EnhancedKeyUsageExtension]
    OID=1.3.6.1.5.5.7.3.1
    [Extensions]
    2.5.29.17 = "{text}"
    _continue_ = "dns=www.example.com&"
    _continue_ = "dns=example.com&"
    

    Note the presence of the [Extensions] section, which allows you to specify SANs (Subject Alternative Names). SANs are now required by some browsers. Remember to substitute your own domain for example.com in the above example.

  2. Create a certificate request with certreq.exe.
    certreq.exe -new request.inf request.csr.pem

    Certreq returns a message saying that the certificate request has been created.

    Figure 7: Output from certreq.exe showing that the CSR has been created

  3. In the same command window where you ran key_mgmt_util, run findKey again to see the new keys. Use the findKey command by itself to display all keys; this shows that two new keys have been added. Use the -c 2 parameter to display only public keys (here, handle 262214) and -c 3 to display only private keys (here, handle 262154). The two keys are the public/private key pair that certreq just created.

    Figure 8: findKey output showing two new keys

  4. Submit the CSR you just created to the CA using the procedures provided by the CA. The CA will return a certificate that is named request.crt.pem (or similar).

    If the certificate authority doesn’t accept the CSR provided by certreq.exe, try the following:

    1. If the certificate authority asks you to provide the domains for the certificate, make sure you enter all of the SANs included in the request.inf file.
    2. Check the first and last lines of the CSR and see if the word “NEW” appears. If so, remove the word in both lines.
  5. Use the certreq command to accept the new certificate and insert it into the certificate store where IIS can access it.

    certreq.exe -accept request.crt.pem

Configure and test the secure website

  1. Start the IIS Manager. You can do this through the Server Manager or by running the following command:

    Inetmgr

  2. Under Connections, expand the local server, and then expand the Sites folder below. Select the site named Default Web Site.

    Figure 9: IIS Manager settings

  3. On the right, under Actions, select Edit Site -> Bindings, and then add a binding with the following fields, making sure to substitute your domain name in place of example.com:

    Type: https
    Host name: www.example.com
    SSL certificate: www.example.com

  4. Select OK to save the new binding, and then select Close.

    Figure 10: Adding site bindings

  5. Select Manage Website -> Restart. This will restart the website and ensure the new binding is active.
  6. Browse to https://www.example.com, remembering to substitute your own domain. You’ll see the default website with a secure connection, as shown below. If you get a warning message from your workstation’s browser, make sure you added the root-level certificate to the trusted certificate store. (Note: This should not be an issue if you’re using a commercial CA supported by major web browsers.)

    Figure 11: https://www.example.com

  7. Examine the certificate information being reported to the browser. For my example, I’m using the Chrome browser so I’ll select the padlock icon, then Certificates, then Detail.

    Figure 12: Server certificate details

    Figure 12 shows that the Subject field contains the same information that I provided earlier. You can check out the other fields, such as the Subject Alternative Names, and verify that the correct certificate is being used.

Conclusion

For projects that have special requirements regarding the storage of keys, AWS CloudHSM supports the Microsoft KSP/CNG standard, which enables you to store the private key of an SSL/TLS certificate within an HSM. Because the private key no longer has to reside on the server, it is no longer at risk if the server itself is compromised.

If you have feedback about this blog post, submit comments in the “Comments” section below. If you have questions about this blog post, start a new thread on the Amazon CloudHSM forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Security of CloudHSM Backups

Post Syndicated from Balaji Iyer original https://aws.amazon.com/blogs/architecture/security-of-cloud-hsmbackups/

Today, our customers use AWS CloudHSM to meet corporate, contractual and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) instances within the AWS cloud. CloudHSM delivers all the benefits of traditional HSMs including secure generation, storage, and management of cryptographic keys used for data encryption that are controlled and accessible only by you.

As a managed service, it automates time-consuming administrative tasks such as hardware provisioning, software patching, high availability, backups and scaling for your sensitive and regulated workloads in a cost-effective manner. Backup and restore functionality is the core building block enabling scalability, reliability and high availability in CloudHSM.

You should consider using AWS CloudHSM if you require:

  • Keys stored in dedicated, third-party validated hardware security modules under your exclusive control
  • FIPS 140-2 compliance
  • Integration with applications using PKCS#11, Java JCE, or Microsoft CNG interfaces
  • High-performance in-VPC cryptographic acceleration (bulk crypto)
  • Financial applications subject to PCI regulations
  • Healthcare applications subject to HIPAA regulations
  • Streaming video solutions subject to contractual DRM requirements

We recently released a whitepaper, “Security of CloudHSM Backups” that provides in-depth information on how backups are protected in all three phases of the CloudHSM backup lifecycle process: Creation, Archive, and Restore.
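
If you want to confirm what backups exist for a given cluster, the current CloudHSM API exposes them directly. Here's a minimal AWS CLI sketch; the cluster ID is a placeholder, and you should double-check the filter key and output field names against your CLI version's cloudhsmv2 reference:

$ aws cloudhsmv2 describe-backups \
      --filters clusterIds=<cluster_id> \
      --query 'Backups[].{Id:BackupId,State:BackupState,Created:CreateTimestamp}'

Each backup is encrypted by the HSM hardware before it ever leaves the device, which is the property the whitepaper walks through in detail.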

About the Author

Balaji Iyer is a senior consultant in the Professional Services team at Amazon Web Services. In this role, he has helped several customers successfully navigate their journey to AWS. His specialties include architecting and implementing highly-scalable distributed systems, operational security, large scale migrations, and leading strategic AWS initiatives.

How to Enhance the Security of Sensitive Customer Data by Using Amazon CloudFront Field-Level Encryption

Post Syndicated from Alex Tomic original https://aws.amazon.com/blogs/security/how-to-enhance-the-security-of-sensitive-customer-data-by-using-amazon-cloudfront-field-level-encryption/

Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content to end users through a worldwide network of edge locations. CloudFront provides a number of benefits and capabilities that can help you secure your applications and content while meeting compliance requirements. For example, you can configure CloudFront to help enforce secure, end-to-end connections using HTTPS SSL/TLS encryption. You also can take advantage of CloudFront integration with AWS Shield for DDoS protection and with AWS WAF (a web application firewall) for protection against application-layer attacks, such as SQL injection and cross-site scripting.

Now, CloudFront field-level encryption helps secure sensitive data such as customer phone numbers by adding another security layer to CloudFront HTTPS. Using this functionality, you can help ensure that sensitive information in a POST request is encrypted at CloudFront edge locations. This information remains encrypted as it flows to and beyond your origin servers that terminate HTTPS connections with CloudFront, and throughout the application environment. In this blog post, we demonstrate how you can enhance the security of sensitive data by using CloudFront field-level encryption.

Note: This post assumes that you understand concepts and services such as content delivery networks, HTTP forms, public-key cryptography, CloudFront, AWS Lambda, and the AWS CLI. If necessary, you should familiarize yourself with these concepts and review the solution overview in the next section before proceeding with the deployment of this post's solution.

How field-level encryption works

Many web applications collect and store data from users as those users interact with the applications. For example, a travel-booking website may ask for your passport number as well as less sensitive data such as your food preferences. This data is transmitted to web servers and might also travel among a number of services to perform tasks. However, only a small subset of these services typically needs to access your sensitive information; most of the others have no need to see it.

User data is often stored in a database for retrieval at a later time. One approach to protecting stored sensitive data is to configure and code each service to protect that sensitive data. For example, you can develop safeguards in logging functionality to ensure sensitive data is masked or removed. However, this can add complexity to your code base and limit performance.

Field-level encryption addresses this problem by ensuring sensitive data is encrypted at CloudFront edge locations. Sensitive data fields in HTTPS form POSTs are automatically encrypted with a user-provided public RSA key. After the data is encrypted, other systems in your architecture see only ciphertext. If this ciphertext unintentionally becomes externally available, the data is cryptographically protected and only designated systems with access to the private RSA key can decrypt the sensitive data.
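
Conceptually, each sensitive field is sealed with the public key so that only the holder of the private key can recover it. As a rough, stand-alone illustration only (CloudFront's actual ciphertext format is an envelope that also involves a symmetric data key, so this is not the wire format), you could RSA-OAEP-encrypt a single field value with OpenSSL, using the key pair generated later in this walkthrough:

$ echo -n "404-555-0150" > phone.txt
$ openssl pkeyutl -encrypt -pubin -inkey public_key.pem \
      -pkeyopt rsa_padding_mode:oaep -in phone.txt | base64

Only someone holding private_key.pem can reverse this operation, which is the property field-level encryption relies on.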

It is critical to secure private RSA key material to prevent unauthorized access to the protected data. Management of cryptographic key material is a larger topic that is out of scope for this blog post, but should be carefully considered when implementing encryption in your applications. For example, in this blog post we store private key material as a secure string in the Amazon EC2 Systems Manager Parameter Store. The Parameter Store provides a centralized location for managing your configuration data such as plaintext data (such as database strings) or secrets (such as passwords) that are encrypted using AWS Key Management Service (AWS KMS). You may have an existing key management system in place that you can use, or you can use AWS CloudHSM. CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys in the AWS Cloud.

To illustrate field-level encryption, let’s look at a simple form submission where Name and Phone values are sent to a web server using an HTTP POST. A typical form POST would contain data such as the following.

POST / HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded
Content-Length:60

Name=Jane+Doe&Phone=404-555-0150

Instead of taking this typical approach, field-level encryption converts this data similar to the following.

POST / HTTP/1.1
Host: example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 1713

Name=Jane+Doe&Phone=AYABeHxZ0ZqWyysqxrB5pEBSYw4AAA...

To further demonstrate field-level encryption in action, this blog post includes a sample serverless application that you can deploy by using a CloudFormation template, which creates an application environment using CloudFront, Amazon API Gateway, and Lambda. The sample application is only intended to demonstrate field-level encryption functionality and is not intended for production use. The following diagram depicts the architecture and data flow of this sample application.

Sample application architecture and data flow

Diagram of the solution's architecture and data flow

Here is how the sample solution works:

  1. An application user submits an HTML form page with sensitive data, generating an HTTPS POST to CloudFront.
  2. Field-level encryption intercepts the form POST and encrypts sensitive data with the public RSA key and replaces fields in the form post with encrypted ciphertext. The form POST ciphertext is then sent to origin servers.
  3. The serverless application accepts the form post data containing ciphertext where sensitive data would normally be. If a malicious user were able to compromise your application and gain access to your data, such as the contents of a form, that user would see encrypted data.
  4. Lambda stores the data in a DynamoDB table, leaving the sensitive data safely encrypted at rest.
  5. An administrator uses the AWS Management Console and a Lambda function to view the sensitive data.
  6. During the session, the administrator retrieves ciphertext from the DynamoDB table.
  7. The administrator decrypts sensitive data by using private key material stored in the EC2 Systems Manager Parameter Store.
  8. Decrypted sensitive data is transmitted over SSL/TLS via the AWS Management Console to the administrator for review.

Deployment walkthrough

The high-level steps to deploy this solution are as follows:

  1. Stage the required artifacts
    When deployment packages are used with Lambda, the zipped artifacts have to be placed in an S3 bucket in the target AWS Region for deployment. This step is not required if you are deploying in the US East (N. Virginia) Region because the package has already been staged there.
  2. Generate an RSA key pair
    Create a public/private key pair that will be used to perform the encrypt/decrypt functionality.
  3. Upload the public key to CloudFront and associate it with the field-level encryption configuration
    After you create the key pair, the public key is uploaded to CloudFront so that it can be used by field-level encryption.
  4. Launch the CloudFormation stack
    Deploy the sample application for demonstrating field-level encryption by using AWS CloudFormation.
  5. Add the field-level encryption configuration to the CloudFront distribution
    After you have provisioned the application, this step associates the field-level encryption configuration with the CloudFront distribution.
  6. Store the RSA private key in the Parameter Store
    Store the private key in the Parameter Store as a SecureString data type, which uses AWS KMS to encrypt the parameter value.

Deploy the solution

1. Stage the required artifacts

(If you are deploying in the US East [N. Virginia] Region, skip to Step 2, “Generate an RSA key pair.”)

Stage the Lambda function deployment package in an Amazon S3 bucket located in the AWS Region you are using for this solution. To do this, download the zipped deployment package and upload it to your in-region bucket. For additional information about uploading objects to S3, see Uploading Objects into Amazon S3.

2. Generate an RSA key pair

In this section, you will generate an RSA key pair by using OpenSSL:

  a. Confirm access to OpenSSL.
    $ openssl version

    You should see version information similar to the following.

    OpenSSL <version> <date>

  b. Create a private key using the following command.
    $ openssl genrsa -out private_key.pem 2048

    The command results should look similar to the following.

    Generating RSA private key, 2048 bit long modulus
    ................................................................................+++
    ..........................+++
    e is 65537 (0x10001)
  c. Extract the public key from the private key by running the following command.
    $ openssl rsa -pubout -in private_key.pem -out public_key.pem

    You should see output similar to the following.

    writing RSA key
  d. Restrict access to the private key.
    $ chmod 600 private_key.pem

    Note: You will use the public and private key material in Steps 3 and 6 to configure the sample application.
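
Before moving on, you can confirm that the two files really are a matching pair by comparing their RSA moduli; the two digests should be identical. This is just a sanity check, not part of the required setup:

$ openssl rsa -in private_key.pem -noout -modulus | openssl md5
$ openssl rsa -pubin -in public_key.pem -noout -modulus | openssl md5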

3. Upload the public key to CloudFront and associate it with the field-level encryption configuration

Now that you have created the RSA key pair, you will use the AWS Management Console to upload the public key to CloudFront for use by field-level encryption. Complete the following steps to upload and configure the public key.

Note: Do not include spaces or special characters when providing the configuration values in this section.

  a. From the AWS Management Console, choose Services > CloudFront.
  b. In the navigation pane, choose Public Key and choose Add Public Key.
    Screenshot of adding a public key

Complete the Add Public Key configuration boxes:

  • Key Name: Type a name such as DemoPublicKey.
  • Encoded Key: Paste the contents of the public_key.pem file you created in Step 2c, including the -----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY----- lines.
  • Comment: Optionally add a comment.
  c. Choose Create.
  d. After adding at least one public key to CloudFront, the next step is to create a profile that tells CloudFront which input fields you want to be encrypted. While still on the CloudFront console, choose Field-level encryption in the navigation pane.
  e. Under Profiles, choose Create profile.
    Screenshot of creating a profile

Complete the Create profile configuration boxes:

  • Name: Type a name such as FLEDemo.
  • Comment: Optionally add a comment.
  • Public key: Select the public key you configured in Step 3b.
  • Provider name: Type a provider name such as FLEDemo.
    This information will be used when the form data is encrypted, and must be provided to applications that need to decrypt the data, along with the appropriate private key.
  • Pattern to match: Type phone. This configures field-level encryption to match the form field named phone.
  f. Choose Save profile.
  g. Configurations include options for whether to block or forward a query to your origin in scenarios where CloudFront can't encrypt the data. Under Encryption Configurations, choose Create configuration.
    Screenshot of creating a configuration

Complete the Create configuration boxes:

  • Comment: Optionally add a comment.
  • Content type: Enter application/x-www-form-urlencoded. This is a common media type for encoding form data.
  • Default profile ID: Select the profile you added in Step 3e.
  h. Choose Save configuration.
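
If you prefer to script these console steps, CloudFront exposes the same operations in the AWS CLI (create-public-key, create-field-level-encryption-profile, and create-field-level-encryption-config). As a minimal sketch for the public-key upload, assuming jq 1.6+ is available to JSON-escape the PEM file and using an arbitrary caller reference string for idempotency:

$ jq -n --rawfile key public_key.pem \
      '{PublicKeyConfig: {CallerReference: "fle-demo-1", Name: "DemoPublicKey", EncodedKey: $key}}' \
      > public-key.json
$ aws cloudfront create-public-key --cli-input-json file://public-key.json

The profile and configuration commands take analogous JSON inputs that mirror the console fields shown above.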

4. Launch the CloudFormation stack

Launch the sample application by using a CloudFormation template that automates the provisioning process.

The template takes the following input parameters:

  • ProviderID – Enter the Provider name you assigned in Step 3e. The ProviderID is used in the field-level encryption configuration in CloudFront (letters and numbers only, no special characters).
  • PublicKeyName – Enter the Key Name you assigned in Step 3b. This name is assigned to the public key in the field-level encryption configuration in CloudFront (letters and numbers only, no special characters).
  • PrivateKeySSMPath – Leave as the default: /cloudfront/field-encryption-sample/private-key
  • ArtifactsBucket – The S3 bucket with artifact files (staged zip file with app code). Leave as the default if deploying in us-east-1.
  • ArtifactsPrefix – The path in the S3 bucket containing artifact files. Leave as the default if deploying in us-east-1.

To finish creating the CloudFormation stack:

  1. Choose Next on the Select Template page, enter the input parameters and choose Next.
    Note: The Artifacts configuration needs to be updated only if you are deploying outside of us-east-1 (US East [N. Virginia]). See Step 1 for artifact staging instructions.
  2. On the Options page, accept the defaults and choose Next.
  3. On the Review page, confirm the details, choose the I acknowledge that AWS CloudFormation might create IAM resources check box, and then choose Create. (The stack will be created in approximately 15 minutes.)
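
Equivalently, you can launch the stack from the AWS CLI. This is a sketch; the template URL below is a placeholder for wherever you staged the template, and the parameter values should match what you configured earlier:

$ aws cloudformation create-stack \
      --stack-name FLE-Sample-App \
      --template-url https://<artifacts_bucket>.s3.amazonaws.com/<artifacts_prefix>/template.yaml \
      --parameters ParameterKey=ProviderID,ParameterValue=FLEDemo \
                   ParameterKey=PublicKeyName,ParameterValue=DemoPublicKey \
      --capabilities CAPABILITY_IAM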

5. Add the field-level encryption configuration to the CloudFront distribution

To add the field-level encryption configuration to the CloudFront distribution that the stack created:

    1. In the Outputs section of the FLE-Sample-App stack, look for CloudFrontDistribution and click the URL to open the CloudFront console.
    2. Choose Behaviors, choose the Default (*) behavior, and then choose Edit.
    3. For Field-level Encryption Config, choose the configuration you created in Step 3g.
      Screenshot of editing the default cache behavior
    4. Choose Yes, Edit.
    5. While still in the CloudFront distribution configuration, choose the General tab, choose Edit, scroll down to Distribution State, and change it to Enabled.
    6. Choose Yes, Edit.
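
Distribution changes take several minutes to propagate. One way to watch for the distribution to reach the Deployed status from the CLI (the distribution ID appears in the stack outputs):

$ aws cloudfront get-distribution --id <distribution_id> \
      --query 'Distribution.Status' --output text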

6. Store the RSA private key in the Parameter Store

In this step, you store the private key in the EC2 Systems Manager Parameter Store as a SecureString data type, which uses AWS KMS to encrypt the parameter value. For more information about AWS KMS, see the AWS Key Management Service Developer Guide. You will need a working installation of the AWS CLI to complete this step.

  1. Store the private key in the Parameter Store with the AWS CLI by running the following command. You will find the <KMSKeyID> value in the KMSKeyID entry of the CloudFormation stack Outputs. Substitute it for the placeholder in the following command.
    $ aws ssm put-parameter --type "SecureString" --name /cloudfront/field-encryption-sample/private-key --value file://private_key.pem --key-id "<KMSKeyID>"
    
    ------------------
    |  PutParameter  |
    +----------+-----+
    |  Version |  1  |
    +----------+-----+

  2. Verify the parameter. Your private key material should be returned in the Value field when you run the following ssm get-parameter command. (The key material has been truncated in the following output.)
    $ aws ssm get-parameter --name /cloudfront/field-encryption-sample/private-key --with-decryption
    
    …
    |  Value  |  -----BEGIN RSA PRIVATE KEY-----
                 MIIEowIBAAKCAQEAwGRBGuhacmw+C73kM6Z…….

    Notice that we use the --with-decryption argument in this command. This returns the private key as cleartext.

    This completes the sample application deployment. Next, we show you how to see field-level encryption in action.

  3. Delete the private key from local storage. On Linux, for example, use the shred command to securely delete the private key material from your workstation, as shown below. You may also wish to store the private key material in AWS CloudHSM or another protected location suitable for your security requirements. For production implementations, you should also implement key rotation policies.
    $ shred -zvu -n 100 private*.pem
    
    shred: private_encrypted_key.pem: pass 1/101 (random)...
    shred: private_encrypted_key.pem: pass 2/101 (dddddd)...
    shred: private_encrypted_key.pem: pass 3/101 (555555)...
    ….

Test the sample application

Use the following steps to test the sample application with field-level encryption:

  1. Open the sample application in your web browser by clicking the ApplicationURL link in the CloudFormation stack Outputs (for example, https://d199xe5izz82ea.cloudfront.net/prod/). Note that it may take several minutes for the CloudFront distribution to reach Deployed status after the previous step, during which time you may not be able to access the sample application.
  2. Fill out and submit the HTML form on the page:
    1. Complete the three form fields: Full Name, Email Address, and Phone Number.
    2. Choose Submit.
      Screenshot of completing the sample application form
      Notice that the application response includes the form values. The phone number is returned as ciphertext, encrypted with your public key. This ciphertext has been stored in DynamoDB.
      Screenshot of the phone number as ciphertext
  3. Execute the Lambda decryption function to download ciphertext from DynamoDB and decrypt the phone number using the private key:
    1. In the CloudFormation stack Outputs, locate DecryptFunction and click the URL to open the Lambda console.
    2. Configure a test event using the “Hello World” template.
    3. Choose the Test button.
  4. View the encrypted and decrypted phone number data.
    Screenshot of the encrypted and decrypted phone number data
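
If you'd rather exercise the decryption function from the command line instead of the Lambda console, something like the following works with AWS CLI version 1 syntax (the function name is a placeholder; use the DecryptFunction value from the stack outputs):

$ aws lambda invoke --function-name <DecryptFunction> \
      --payload '{}' decrypt-output.json
$ cat decrypt-output.json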

Summary

In this blog post, we showed you how to use CloudFront field-level encryption to encrypt sensitive data at edge locations and help prevent access from unauthorized systems. The source code for this solution is available on GitHub. For additional information about field-level encryption, see the documentation.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, please start a new thread on the CloudFront forum.

– Alex and Cameron

AWS HIPAA Eligibility Update (October 2017) – Sixteen Additional Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-hipaa-eligibility-post-update-october-2017-sixteen-additional-services/

Our Health Customer Stories page lists just a few of the many customers that are building and running healthcare and life sciences applications that run on AWS. Customers like Verge Health, Care Cloud, and Orion Health trust AWS with Protected Health Information (PHI) and Personally Identifying Information (PII) as part of their efforts to comply with HIPAA and HITECH.

Sixteen More Services
In my last HIPAA Eligibility Update I shared the news that we added eight additional services to our list of HIPAA eligible services. Today I am happy to let you know that we have added another sixteen services to the list, bringing the total up to 46. Here are the newest additions, along with some short descriptions and links to some of my blog posts to jog your memory:

Amazon Aurora with PostgreSQL Compatibility – This brand-new addition to Amazon Aurora allows you to encrypt your relational databases using keys that you create and manage through AWS Key Management Service (KMS). When you enable encryption for an Amazon Aurora database, the underlying storage is encrypted, as are automated backups, read replicas, and snapshots. Read New – Encryption at Rest for Amazon Aurora to learn more.
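
As a hedged illustration of what enabling encryption at rest looks like from the CLI (the identifiers, credentials, and key alias below are placeholders), creating an encrypted Aurora PostgreSQL cluster is a single call:

$ aws rds create-db-cluster \
      --db-cluster-identifier my-encrypted-cluster \
      --engine aurora-postgresql \
      --master-username dbadmin --master-user-password <password> \
      --storage-encrypted \
      --kms-key-id alias/aws/rds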

Amazon CloudWatch Logs – You can use the logs to monitor and troubleshoot your systems and applications. You can monitor your existing system, application, and custom log files in near real-time, watching for specific phrases, values, or patterns. Log data can be stored durably and at low cost, for as long as needed. To learn more, read Store and Monitor OS & Application Log Files with Amazon CloudWatch and Improvements to CloudWatch Logs and Dashboards.

Amazon Connect – This self-service, cloud-based contact center makes it easy for you to deliver better customer service at a lower cost. You can use the visual designer to set up your contact flows, manage agents, and track performance, all without specialized skills. Read Amazon Connect – Customer Contact Center in the Cloud and New – Amazon Connect and Amazon Lex Integration to learn more.

Amazon ElastiCache for Redis – This service lets you deploy, operate, and scale an in-memory data store or cache that you can use to improve the performance of your applications. Each ElastiCache for Redis cluster publishes key performance metrics to Amazon CloudWatch. To learn more, read Caching in the Cloud with Amazon ElastiCache and Amazon ElastiCache – Now With a Dash of Redis.

Amazon Kinesis Streams – This service allows you to build applications that process or analyze streaming data such as website clickstreams, financial transactions, social media feeds, and location-tracking events. To learn more, read Amazon Kinesis – Real-Time Processing of Streaming Big Data and New: Server-Side Encryption for Amazon Kinesis Streams.
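
Server-side encryption can be turned on for an existing stream with a single call; this sketch uses the AWS-managed Kinesis key, though you can substitute a customer-managed KMS key:

$ aws kinesis start-stream-encryption \
      --stream-name my-stream \
      --encryption-type KMS \
      --key-id alias/aws/kinesis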

Amazon RDS for MariaDB – This service lets you set up scalable, managed MariaDB instances in minutes, and offers high performance, high availability, and a simplified security model that makes it easy for you to encrypt data at rest and in transit. Read Amazon RDS Update – MariaDB is Now Available to learn more.

Amazon RDS SQL Server – This service lets you set up scalable, managed Microsoft SQL Server instances in minutes, and also offers high performance, high availability, and a simplified security model. To learn more, read Amazon RDS for SQL Server and .NET support for AWS Elastic Beanstalk and Amazon RDS for Microsoft SQL Server – Transparent Data Encryption (TDE) to learn more.

Amazon Route 53 – This is a highly available Domain Name Server. It translates names like www.example.com into IP addresses. To learn more, read Moving Ahead with Amazon Route 53.

AWS Batch – This service lets you run large-scale batch computing jobs on AWS. You don’t need to install or maintain specialized batch software or build your own server clusters. Read AWS Batch – Run Batch Computing Jobs on AWS to learn more.

AWS CloudHSM – A cloud-based Hardware Security Module (HSM) for key storage and management at cloud scale. Designed for sensitive workloads, CloudHSM lets you manage your own keys using FIPS 140-2 Level 3 validated HSMs. To learn more, read AWS CloudHSM – Secure Key Storage and Cryptographic Operations and AWS CloudHSM Update – Cost Effective Hardware Key Management at Cloud Scale for Sensitive & Regulated Workloads.

AWS Key Management Service – This service makes it easy for you to create and control the encryption keys used to encrypt your data. It uses HSMs to protect your keys, and is integrated with AWS CloudTrail in order to provide you with a log of all key usage. Read New AWS Key Management Service (KMS) to learn more.

AWS Lambda – This service lets you run event-driven application or backend code without thinking about or managing servers. To learn more, read AWS Lambda – Run Code in the Cloud, AWS Lambda – A Look Back at 2016, and AWS Lambda – In Full Production with New Features for Mobile Devs.

Lambda@Edge – You can use this new feature of AWS Lambda to run Node.js functions across the global network of AWS locations without having to provision or manage servers, in order to deliver rich, personalized content to your users with low latency. Read Lambda@Edge – Intelligent Processing of HTTP Requests at the Edge to learn more.

AWS Snowball Edge – This is a data transfer device with 100 terabytes of on-board storage as well as compute capabilities. You can use it to move large amounts of data into or out of AWS, as a temporary storage tier, or to support workloads in remote or offline locations. To learn more, read AWS Snowball Edge – More Storage, Local Endpoints, Lambda Functions.

AWS Snowmobile – This is an exabyte-scale data transfer service. Pulled by a semi-trailer truck, each Snowmobile packs 100 petabytes of storage into a ruggedized 45-foot long shipping container. Read AWS Snowmobile – Move Exabytes of Data to the Cloud in Weeks to learn more (and to see some of my finest LEGO work).

AWS Storage Gateway – This hybrid storage service lets your on-premises applications use AWS cloud storage (Amazon Simple Storage Service (S3), Amazon Glacier, and Amazon Elastic File System) in a simple and seamless way, with storage for volumes, files, and virtual tapes. To learn more, read The AWS Storage Gateway – Integrate Your Existing On-Premises Applications with AWS Cloud Storage and File Interface to AWS Storage Gateway.

And there you go! Check out my earlier post for a list of resources that will help you to build applications that comply with HIPAA and HITECH.

Jeff;

 

Want to Learn More About AWS CloudHSM and Hardware Key Management? Register for and Attend this October 25 Tech Talk: “CloudHSM – Secure, Scalable Key Storage in AWS”

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/want-to-learn-more-about-aws-cloudhsm-and-hardware-key-management-register-for-and-attend-this-october-25-tech-talk-cloudhsm-secure-scalable-key-storage-in-aws/

AWS Online Tech Talks banner

As part of the AWS Online Tech Talks series, AWS will present CloudHSM – Secure, Scalable Key Storage in AWS on Wednesday, October 25. This tech talk will start at 9:00 A.M. Pacific Time and end at 9:40 A.M. Pacific Time.

Applications handling confidential or sensitive data are subject to corporate or regulatory requirements and therefore need validated control of encryption keys and cryptographic operations. AWS CloudHSM brings to your AWS resources the security and control of traditional HSMs. This Tech Talk will show how you can leverage CloudHSM to build scalable, reliable applications without sacrificing either security or performance. Attend this Tech Talk to learn how you can use CloudHSM to quickly and easily build secure, compliant, fast, and flexible applications.

You also will:

  • Learn about the challenges CloudHSM can help you address.
  • Understand how CloudHSM can secure your workloads and data.
  • Learn how to transfer and modernize workloads.

This tech talk is free. Register today.

– Craig

AWS Summit New York – Summary of Announcements

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-summit-new-york-summary-of-announcements/

Whew – what a week! Tara, Randall, Ana, and I have been working around the clock to create blog posts for the announcements that we made at the AWS Summit in New York. Here’s a summary to help you to get started:

Amazon Macie – This new service helps you to discover, classify, and secure content at scale. Powered by machine learning and making use of Natural Language Processing (NLP), Macie looks for patterns and alerts you to suspicious behavior, and can help you with governance, compliance, and auditing. You can read Tara’s post to see how to put Macie to work; you select the buckets of interest, customize the classification settings, and review the results in the Macie Dashboard.

AWS Glue – Randall’s post (with deluxe animated GIFs) introduces you to this new extract, transform, and load (ETL) service. Glue is serverless and fully managed. As you can see from the post, Glue crawls your data, infers schemas, and generates ETL scripts in Python. You define jobs that move data from place to place, with a wide selection of transforms, each expressed as code and stored in human-readable form. Glue uses development endpoints and notebooks to provide you with a testing environment for the scripts you build. We also announced that Amazon Athena now integrates with AWS Glue, as do Apache Spark and Hive on Amazon EMR.

AWS Migration Hub – This new service will help you to migrate your application portfolio to AWS. My post outlines the major steps and shows you how the Migration Hub accelerates, tracks, and simplifies your migration effort. You can begin with a discovery step, or you can jump right in and migrate directly. Migration Hub integrates with tools from our migration partners and builds upon the Server Migration Service and the Database Migration Service.

CloudHSM Update – We made a major upgrade to AWS CloudHSM, making the benefits of hardware-based key management available to a wider audience. The service is offered on a pay-as-you-go basis, and is fully managed. It is open and standards compliant, with support for multiple APIs, programming languages, and cryptography extensions. CloudHSM is an integral part of AWS and can be accessed from the AWS Management Console, AWS Command Line Interface (CLI), and through API calls. Read my post to learn more and to see how to set up a CloudHSM cluster.

Managed Rules to Secure S3 Buckets – We added two new rules to AWS Config that will help you to secure your S3 buckets. The s3-bucket-public-write-prohibited rule identifies buckets that have public write access and the s3-bucket-public-read-prohibited rule identifies buckets that have global read access. As I noted in my post, you can run these rules in response to configuration changes or on a schedule. The rules make use of some leading-edge constraint solving techniques, as part of a larger effort to use automated formal reasoning about AWS.

CloudTrail for All Customers – Tara’s post revealed that AWS CloudTrail is now available and enabled by default for all AWS customers. As a bonus, Tara reviewed the principal benefits of CloudTrail and showed you how to review your event history and to deep-dive on a single event. She also showed you how to create a second trail for use with Amazon CloudWatch Events.

Encryption of Data at Rest for EFS – When you create a new file system, you now have the option to select a key that will be used to encrypt the contents of the files on the file system. The encryption is done using an industry-standard AES-256 algorithm. My post shows you how to select a key and to verify that it is being used.

Watch the Keynote
My colleagues Adrian Cockcroft and Matt Wood talked about these services and others on the stage, and also invited some AWS customers to share their stories. Here’s the video:

Jeff;

 

AWS CloudHSM Update – Cost Effective Hardware Key Management at Cloud Scale for Sensitive & Regulated Workloads

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-cloudhsm-update-cost-effective-hardware-key-management/

Our customers run an incredible variety of mission-critical workloads on AWS, many of which process and store sensitive data. As detailed in our Overview of Security Processes document, AWS customers have access to an ever-growing set of options for encrypting and protecting this data. For example, Amazon Relational Database Service (RDS) supports encryption of data at rest and in transit, with options tailored for each supported database engine (MySQL, SQL Server, Oracle, MariaDB, PostgreSQL, and Aurora).

Many customers use AWS Key Management Service (KMS) to centralize their key management, with others taking advantage of the hardware-based key management, encryption, and decryption provided by AWS CloudHSM to meet stringent security and compliance requirements for their most sensitive data and regulated workloads (you can read my post, AWS CloudHSM – Secure Key Storage and Cryptographic Operations, to learn more about Hardware Security Modules, also known as HSMs).

Major CloudHSM Update
Today, building on what we have learned from our first-generation product, we are making a major update to CloudHSM, with a set of improvements designed to make the benefits of hardware-based key management available to a much wider audience while reducing the need for specialized operating expertise. Here’s a summary of the improvements:

Pay As You Go – CloudHSM is now offered under a pay-as-you-go model that is simpler and more cost-effective, with no up-front fees.

Fully Managed – CloudHSM is now a scalable managed service; provisioning, patching, high availability, and backups are all built-in and taken care of for you. Scheduled backups extract an encrypted image of your HSM from the hardware (using keys that only the HSM hardware itself knows) that can be restored only to identical HSM hardware owned by AWS. For durability, those backups are stored in Amazon Simple Storage Service (S3), and for an additional layer of security, encrypted again with server-side S3 encryption using an AWS KMS master key.

Open & Compatible – CloudHSM is open and standards-compliant, with support for multiple APIs, programming languages, and cryptography extensions such as PKCS #11, Java Cryptography Extension (JCE), and Microsoft CryptoNG (CNG). The open nature of CloudHSM gives you more control and simplifies the process of moving keys (in encrypted form) from one CloudHSM to another, and also allows migration to and from other commercially available HSMs.

More Secure – CloudHSM Classic (the original model) supports the generation and use of keys that comply with FIPS 140-2 Level 2. We’re stepping that up a notch today with support for FIPS 140-2 Level 3, with security mechanisms that are designed to detect and respond to physical attempts to access or modify the HSM. Your keys are protected with exclusive, single-tenant access to tamper-resistant HSMs that appear within your Virtual Private Clouds (VPCs). CloudHSM supports quorum authentication for critical administrative and key management functions. This feature allows you to define a list of N possible identities that can access the functions, and then require at least M of them to authorize the action. It also supports multi-factor authentication using tokens that you provide.

AWS-Native – The updated CloudHSM is an integral part of AWS and plays well with other tools and services. You can create and manage a cluster of HSMs using the AWS Management Console, AWS Command Line Interface (CLI), or API calls.

Diving In
You can create CloudHSM clusters that contain 1 to 32 HSMs, each in a separate Availability Zone in a particular AWS Region. Spreading HSMs across AZs gives you high availability (including built-in load balancing); adding more HSMs gives you additional throughput. The HSMs within a cluster are kept in sync: performing a task or operation on one HSM in a cluster automatically updates the others. Each HSM in a cluster has its own Elastic Network Interface (ENI).

All interaction with an HSM takes place via the AWS CloudHSM client. It runs on an EC2 instance and uses certificate-based mutual authentication to create secure (TLS) connections to the HSMs.

At the hardware level, each HSM includes hardware-enforced isolation of crypto operations and key storage. Each customer HSM runs on dedicated processor cores.

Setting Up a Cluster
Let’s set up a cluster using the CloudHSM Console:

I click on Create cluster to get started, select my desired VPC and the subnets within it (I can also create a new VPC and/or subnets if needed):

Then I review my settings and click on Create:

After a few minutes, my cluster exists, but is uninitialized:

Initialization simply means retrieving a certificate signing request (the Cluster CSR):
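
You can view the CSR in the console, or pull it down with the CLI. A sketch, substituting your own cluster ID (the query path follows the cloudhsmv2 describe-clusters output shape, so verify it against your CLI version):

$ aws cloudhsmv2 describe-clusters --filters clusterIds=<cluster_id> \
      --output text --query 'Clusters[].Certificates.ClusterCsr' > ID_ClusterCsr.csr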

And then creating a private key and using it to sign the request (these commands were copied from the Initialize Cluster docs and I have omitted the output. Note that ID identifies the cluster):

$ openssl genrsa -out CustomerRoot.key 2048
$ openssl req -new -x509 -days 365 -key CustomerRoot.key -out CustomerRoot.crt
$ openssl x509 -req -days 365 -in ID_ClusterCsr.csr   \
                              -CA CustomerRoot.crt    \
                              -CAkey CustomerRoot.key \
                              -CAcreateserial         \
                              -out ID_CustomerHsmCertificate.crt

The next step is to apply the signed certificate to the cluster using the console or the CLI. After this has been done, the cluster can be activated by changing the password for the HSM’s administrative user, otherwise known as the Crypto Officer (CO).
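
From the CLI, applying the signed certificate looks like the following sketch, reusing the files created above:

$ aws cloudhsmv2 initialize-cluster --cluster-id <cluster_id> \
      --signed-cert file://ID_CustomerHsmCertificate.crt \
      --trust-anchor file://CustomerRoot.crt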

Once the cluster has been created, initialized and activated, it can be used to protect data. Applications can use the APIs in AWS CloudHSM SDKs to manage keys, encrypt & decrypt objects, and more. The SDKs provide access to the CloudHSM client (running on the same instance as the application). The client, in turn, connects to the cluster across an encrypted connection.

Available Today
The new HSM is available today in the US East (Northern Virginia), US West (Oregon), US East (Ohio), and EU (Ireland) Regions, with more in the works. Pricing starts at $1.45 per HSM per hour.

Jeff;

How to Update AWS CloudHSM Devices and Client Instances to the Software and Firmware Versions Supported by AWS

Post Syndicated from Tracy Pierce original https://aws.amazon.com/blogs/security/how-to-update-aws-cloudhsm-devices-and-client-instances-to-the-software-and-firmware-versions-supported-by-aws/

As I explained in my previous Security Blog post, a hardware security module (HSM) is a hardware device designed with the security of your data and cryptographic key material in mind. It is tamper-resistant hardware that prevents unauthorized users from attempting to pry open the device, plug in any extra devices to access data or keys such as subtokens, or damage the outside housing. The HSM device AWS CloudHSM offers is the Luna SA 7000 (also called Safenet Network HSM 7000), which is created by Gemalto. Depending on the firmware version you install, many of the security properties of these HSMs will have been validated under Federal Information Processing Standard (FIPS) 140-2, a standard issued by the National Institute of Standards and Technology (NIST) for cryptography modules. These standards are in place to protect the integrity and confidentiality of the data stored on cryptographic modules.

To help ensure its continued use, functionality, and support from AWS, we suggest that you update your AWS CloudHSM device software and firmware as well as the client instance software to current versions offered by AWS. As of the publication of this blog post, the current non-FIPS-validated versions are 5.4.9/client, 5.3.13/software, and 6.20.2/firmware, and the current FIPS-validated versions are 5.4.9/client, 5.3.13/software, and 6.10.9/firmware. (The firmware version determines FIPS validation.) It is important to know your current versions before updating so that you can follow the correct update path.

In this post, I demonstrate how to update your current CloudHSM devices and client instances so that you are using the most current versions of software and firmware. If you contact AWS Support for CloudHSM hardware and application issues, you will be required to update to these supported versions before proceeding. Also, any newly provisioned CloudHSM devices will use these supported software and firmware versions only, and AWS does not offer “downgrade options.”

Note: Before you perform any updates, check with your local CloudHSM administrator and application developer to verify that these updates will not conflict with your current applications or architecture.

Overview of the update process

To update your client and CloudHSM devices, you must use both update paths offered by AWS. The first path involves updating the software on your client instance, also known as a control instance. Following the second path updates the software first and then the firmware on your CloudHSM devices. The CloudHSM software must be updated before the firmware because of the firmware’s dependencies on the software in order to work appropriately.

As I demonstrate in this post, the correct update order is:

  1. Updating your client instance
  2. Updating your CloudHSM software
  3. Updating your CloudHSM firmware

To update your client instance, you must have the private SSH key you created when you first set up your client environment. This key is used to connect via SSH protocol on port 22 of your client instance. If you have more than one client instance, you must repeat this connection and update process on each of them. The following diagram shows the flow of an SSH connection from your local network to your client instances in the AWS Cloud.

Diagram that shows the flow of an SSH connection from your local network to your client instances in the AWS Cloud

After you update your client instance to the most recent software (5.3.13), you then must update the CloudHSM device software and firmware. First, you must initiate an SSH connection from any one client instance to each CloudHSM device, as illustrated in the following diagram. A successful SSH connection will have you land at the Luna shell, denoted by lunash:>. Second, you must be able to initiate a Secure Copy (SCP) of files to each device from the client instance. Because the software and firmware updates require an elevated level of privilege, you must have the Security Officer (SO) password that you created when you initialized your CloudHSM devices.

Diagram illustrating the initiation of an SSH connection from any one client instance to each CloudHSM device

After you have completed all updates, you can receive enhanced troubleshooting assistance from AWS, if you need it. When new versions of software and firmware are released, AWS performs extensive testing to ensure your smooth transition from version to version.

Detailed guidance for updating your client instance, CloudHSM software, and CloudHSM firmware

1.  Updating your client instance

Let’s start by updating your client instances. My client instance and CloudHSM devices are in the eu-west-1 region, but these steps work the same in any AWS region. Because Gemalto offers client instances in both Linux and Windows, I will cover steps to update both. I will start with Linux. Please note that all commands should be run as the “root” user.

Updating the Linux client

  1. SSH from your local network into the client instance. You can do this from Linux or Windows. Typically, you would do this from the directory where you have stored your private SSH key, by using a command like the following in a terminal or PuTTY window. This initiates the SSH connection by pointing to the path of your SSH key and denoting the user name and IP address of your client instance.
    ssh -i /Users/Bob/Keys/CloudHSM_SSH_Key.pem <user_name>@<client_instance_ip>

  2. After the SSH connection is established, you must stop all applications and services on the instance that are using the CloudHSM device. This is required because you are removing old software and installing new software in its place. After you have stopped all applications and services, remove the existing version of the client software.
    /usr/safenet/lunaclient/bin/uninstall.sh

This command will remove the old client software, but will not remove your configuration file or certificates. These are saved in the Chrystoki.conf file of your /etc directory and in your /usr/safenet/lunaclient/cert directory. Do not delete these files, because you will lose the configuration of your CloudHSM devices and client connections.

  3. Download the new client software package: cloudhsm-safenet-client. Extract the archive, and then run the install script.
    SafeNet-Luna-client-5-4-9/linux/64/install.sh

    Make sure you choose the Luna SA option when it is presented to you. Because the directory where your certificates are installed is the same, you do not need to copy these certificates to another directory. You do, however, need to ensure that the Chrystoki.conf file, located at /etc/Chrystoki.conf, has the same path and name for the certificates as when you created them. (The path or names should not have changed, but you should still verify they are the same as before the update.)

  4. Check to ensure that the PATH environment variable points to the directory /usr/safenet/lunaclient/bin, to help ensure no issues when you restart applications and services. The update process for your Linux client instance is now complete.
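
A quick way to perform that PATH check, and to append the directory if it is missing, is the following bash sketch (adjust the profile file to whatever your shell actually reads):

$ echo "$PATH" | grep -q /usr/safenet/lunaclient/bin \
      || echo 'export PATH=$PATH:/usr/safenet/lunaclient/bin' >> ~/.bash_profile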

Updating the Windows client

Use the following steps to update your Windows client instances:

  1. Use Remote Desktop Protocol (RDP) from your local network into the client instance. You can accomplish this with the RDP application of your choice.
  2. After you establish the RDP connection, stop all applications and services on the instance that are using the CloudHSM device. This is required because you will remove old software and install new software in its place, or overwrite it. If your client software version is older than 5.4.1, you need to completely remove it and all patches by using Programs and Features in the Windows Control Panel. If your client software version is 5.4.1 or newer, proceed without removing the software. Your configuration file will remain intact in the crystoki.ini file of your C:\Program Files\SafeNet\Lunaclient\ directory. All certificates are preserved in the C:\Program Files\SafeNet\Lunaclient\cert\ directory. Again, do not delete these files, or you will lose all configuration and client connection data.
  3. After you have completed these steps, download the new client software: cloudhsm-safenet-client. Extract the archive from the downloaded file, and launch the SafeNet-Luna-client-5-4-9\win\64\Lunaclient.msi installer. Choose the Luna SA option when it is presented to you. Because the directory where your certificates are installed is the same, you do not need to copy these certificates to another directory. You do, however, need to ensure that the crystoki.ini file, which is located at C:\Program Files\SafeNet\Lunaclient\crystoki.ini, has the same path and name for the certificates as when you created them. (The path and names should not have changed, but you should still verify they are the same as before the update.)
  4. Make one last check to ensure the PATH environment variable points to the directory C:\Program Files\SafeNet\Lunaclient\ to help ensure no issues when you restart applications and services. The update process for your Windows client instance is now complete.

2.  Updating your CloudHSM software

Now that your clients are up to date with the most current software version, it’s time to move on to your CloudHSM devices. A few important notes:

  • Back up your data to a Luna SA Backup device. AWS does not sell or support the Luna SA Backup devices, but you can purchase them from Gemalto. We do, however, offer the steps to back up your data to a Luna SA Backup device. Do not update your CloudHSM devices without backing up your data first.
  • If the names of your clients used for Network Trust Link Service (NTLS) connections have a capital “T” as the eighth character, the clients will not work after this update because of a Gemalto naming convention. Before upgrading, ensure that you modify your client names accordingly; a quick check is sketched after this list. The NTLS connection uses two-way digital certificate authentication and SSL data encryption to protect sensitive data transmitted between your CloudHSM device and the client instances.
  • The syslog configuration for the CloudHSM devices will be lost. After the update is complete, notify AWS Support and we will update the configuration for you.
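
Here is a trivial bash sketch of the eighth-character check described above; the client names are made-up examples:

$ for name in WebClient01 PartnerTeam01; do [ "${name:7:1}" = "T" ] && echo "$name: rename this client before upgrading"; done
PartnerTeam01: rename this client before upgrading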

Now on to updating the software versions. There are actually three different update paths to follow, and I will cover each. Depending on the current software versions on your CloudHSM devices, you might need to follow all three or just one.

Updating the software from version 5.1.x to 5.1.5

If you are running any version of the software older than 5.1.5, you must first update to version 5.1.5 before proceeding. To update to version 5.1.5:

  1. Stop all applications and services that access the CloudHSM device.
  2. Download the Luna SA software update package.
  3. Extract all files from the archive.
  4. Run the following command from your client instance to copy the lunasa_update-5.1.5-2.spkg file to the CloudHSM device.
    $ scp -i <private_key_file> lunasa_update-5.1.5-2.spkg manager@<hsm_ip_address>:

    <private_key_file> is the private portion of your SSH key pair and <hsm_ip_address> is the IP address of your CloudHSM elastic network interface (ENI). The ENI is the network endpoint that permits access to your CloudHSM device. The IP address was supplied to you when the CloudHSM device was provisioned.

  5. Use the following command to connect to your CloudHSM device and log in with your Security Officer (SO) password.
    $ ssh -i <private_key_file> manager@<hsm_ip_address>
    
    lunash:> hsm login

  6. Run the following commands to verify and then install the updated Luna SA software package.
    lunash:> package verify lunasa_update-5.1.5-2.spkg -authcode <auth_code>
    
    lunash:> package update lunasa_update-5.1.5-2.spkg -authcode <auth_code>

    The value you will use for <auth_code> is contained in the lunasa_update-5.1.5-2.auth file found in the 630-010165-018_reva.tar archive you downloaded in Step 2.

  7. Reboot the CloudHSM device by running the following command.
    lunash:> sysconf appliance reboot

When all the steps in this section are completed, you will have updated your CloudHSM software to version 5.1.5. You can now move on to update to version 5.3.10.

Updating the software to version 5.3.10

You can update to version 5.3.10 only if you are currently running version 5.1.5. To update to version 5.3.10:

  1. Stop all applications and services that access the CloudHSM device.
  2. Download the v 5.3.10 Luna SA software update package.
  3. Extract all files from the archive.
  4. Run the following command to copy the lunasa_update-5.3.10-7.spkg file to the CloudHSM device.
    $ scp -i <private_key_file> lunasa_update-5.3.10-7.spkg manager@<hsm_ip_address>:

    <private_key_file> is the private portion of your SSH key pair and <hsm_ip_address> is the IP address of your CloudHSM ENI.

  5. Run the following command to connect to your CloudHSM device and log in with your SO password.
    $ ssh -i <private_key_file> manager@<hsm_ip_address>
    
    lunash:> hsm login

  6. Run the following commands to verify and then install the updated Luna SA software package.
    lunash:> package verify lunasa_update-5.3.10-7.spkg -authcode <auth_code>
    
    lunash:> package update lunasa_update-5.3.10-7.spkg -authcode <auth_code>

The value you will use for <auth_code> is contained in the lunasa_update-5.3.10-7.auth file found in the SafeNet-Luna-SA-5-3-10.zip archive you downloaded in Step 2.

  7. Reboot the CloudHSM device by running the following command.
    lunash:> sysconf appliance reboot

When all the steps in this section are completed, you will have updated your CloudHSM software to version 5.3.10. You can now move on to update to version 5.3.13.

Note: Do not configure your applog settings at this point; you must first update the software to version 5.3.13 in the following step.

Updating the software to version 5.3.13

You can update to version 5.3.13 only if you are currently running version 5.3.10. If you are not already running version 5.3.10, follow the two update paths mentioned previously in this section.

To update to version 5.3.13:

  1. Stop all applications and services that access the CloudHSM device.
  2. Download the Luna SA software update package.
  3. Extract all files from the archive.
  4. Run the following command to copy the lunasa_update-5.3.13-1.spkg file to the CloudHSM device.
    $ scp -i <private_key_file> lunasa_update-5.3.13-1.spkg manager@<hsm_ip_address>:

<private_key_file> is the private portion of your SSH key pair and <hsm_ip_address> is the IP address of your CloudHSM ENI.

  5. Run the following command to connect to your CloudHSM device and log in with your SO password.
    $ ssh -i <private_key_file> manager@<hsm_ip_address>
    
    lunash:> hsm login

  6. Run the following commands to verify and then install the updated Luna SA software package.
    lunash:> package verify lunasa_update-5.3.13-1.spkg -authcode <auth_code>
    
    lunash:> package update lunasa_update-5.3.13-1.spkg -authcode <auth_code>

The value you will use for <auth_code> is contained in the lunasa_update-5.3.13-1.auth file found in the SafeNet-Luna-SA-5-3-13.zip archive that you downloaded in Step 2.

  7. When you update to this software version, you are also offered the option to update the firmware. If you do not require a version of the firmware validated under FIPS 140-2, accept the firmware update to version 6.20.2. If you do require a version of the firmware validated under FIPS 140-2, do not accept the firmware update and instead update by using the steps in the next section, “Updating your CloudHSM FIPS 140-2 validated firmware.”
  8. After updating the CloudHSM device, reboot it by running the following command.
    lunash:> sysconf appliance reboot

  9. Disable NTLS IP checking on the CloudHSM device so that it can operate within its VPC. To do this, run the following command.
    lunash:> ntls ipcheck disable

When all the steps in this section are completed, you will have updated your CloudHSM software to version 5.3.13. If you don’t need the FIPS 140-2 validated firmware, you will have also updated the firmware to version 6.20.2. If you do need the FIPS 140-2 validated firmware, proceed to the next section.

3.  Updating your CloudHSM FIPS 140-2 validated firmware

To update the FIPS 140-2 validated version of the firmware to 6.10.9, use the following steps:

  1. Download version 6.10.9 of the firmware package.
  2. Extract all files from the archive.
  3. Run the following command to copy the 630-010430-010_SPKG_LunaFW_6.10.9.spkg file to the CloudHSM device.
    $ scp -i <private_key_file> 630-010430-010_SPKG_LunaFW_6.10.9.spkg manager@<hsm_ip_address>:

<private_key_file> is the private portion of your SSH key pair, and <hsm_ip_address> is the IP address of your CloudHSM ENI.

  4. Run the following command to connect to your CloudHSM device and log in with your SO password.
    $ ssh -i <private_key_file> manager@<hsm_ip_address>
    
    lunash:> hsm login

  5. Run the following commands to verify and then install the updated Luna SA firmware package.
    lunash:> package verify 630-010430-010_SPKG_LunaFW_6.10.9.spkg -authcode <auth_code>
    
    lunash:> package update 630-010430-010_SPKG_LunaFW_6.10.9.spkg -authcode <auth_code>

The value you will use for <auth_code> is contained in the 630-010430-010_SPKG_LunaFW_6.10.9.auth file found in the 630-010430-010_SPKG_LunaFW_6.10.9.zip archive that you downloaded in Step 1.

  6. Run the following command to update the firmware of the CloudHSM devices.
    lunash:> hsm update firmware

  7. After you have updated the firmware, reboot the CloudHSM devices to complete the installation.
    lunash:> sysconf appliance reboot

Summary

In this blog post, I walked you through how to update your existing CloudHSM devices and clients so that they are using supported client, software, and firmware versions. Per AWS Support and CloudHSM Terms and Conditions, your devices and clients must use the most current supported software and firmware for continued troubleshooting assistance. Software and firmware versions regularly change based on customer use cases and requirements. Because AWS tests and validates all updates from Gemalto, you must install all updates for firmware and software by using our package links described in this post and elsewhere in our documentation.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions about implementing this solution, please start a new thread on the CloudHSM forum.

– Tracy