Tag Archives: AWS Key Management Service

Architecting a Data Lake for Higher Education Student Analytics

Post Syndicated from Craig Jordan original https://aws.amazon.com/blogs/architecture/architecting-data-lake-for-higher-education-student-analytics/

One of the keys to identifying timely and impactful actions is having enough raw material to work with. However, this up-to-date information typically lives in the databases that sit behind several different applications. One of the first steps to finding data-driven insights is gathering that information into a single store that an analyst can use without interfering with those applications.

For years, reporting environments have relied on a data warehouse stored in a single, separate relational database management system (RDBMS). But now, due to the growing use of Software as a service (SaaS) applications and NoSQL database options, data may be stored outside the data center and in formats other than tables of rows and columns. It’s increasingly difficult to access the data these applications maintain, and a data warehouse may not be flexible enough to house the gathered information.

For these reasons, reporting teams are building data lakes, and those responsible for using data analytics at universities and colleges are no different. However, it can be challenging to know exactly how to start building this expanded data repository so it can be ready to use quickly and still expandable as future requirements are uncovered. Helping higher education institutions address these challenges is the topic of this post.

About Maryville University

Maryville University is a nationally recognized private institution located in St. Louis, Missouri, and was recently named the second fastest growing private university by The Chronicle of Higher Education. Even with its enrollment growth, the university is committed to a highly personalized education for each student, which requires reliable data that is readily available to multiple departments. University leaders want to offer the right help at the right time to students who may be having difficulty completing the first semester of their course of study. To get started, the data experts in the Office of Strategic Information and members of the IT Department needed to create a data environment to identify students needing assistance.

Critical data sources

Like most universities, Maryville’s student-related data centers around two significant sources: the student information system (SIS), which houses student profiles, course completion, and financial aid information; and the learning management system (LMS) in which students review course materials, complete assignments, and engage in online discussions with faculty and fellow students.

The first of these, the SIS, stores its data in an on-premises relational database, and for several years, a significant subset of its contents had been incorporated into the university’s data warehouse. The LMS, however, contains data that the team had not tried to bring into their data warehouse. Moreover, that data is managed by a SaaS application from Instructure, called “Canvas,” and is not directly accessible for traditional extract, transform, and load (ETL) processing. The team recognized they needed a new approach and began down the path of creating a data lake in AWS to support their analysis goals.

Getting started on the data lake

The first step the team took in building their data lake made use of an open source solution that Harvard’s IT department developed. The solution, composed of AWS Lambda functions and Amazon Simple Storage Service (Amazon S3) buckets, is deployed using AWS CloudFormation. It enables any university that uses Canvas for their LMS to implement a solution that moves LMS data into an S3 data lake on a daily basis. The following diagram illustrates this portion of Maryville’s data lake architecture:

Diagram 1: The data lake for the Learning Management System data

The AWS Lambda functions invoke the LMS REST API on a daily schedule, securely storing Maryville’s data, which Canvas has previously unloaded and compressed, as S3 objects. AWS Glue tables are defined to provide access to these S3 objects. Amazon Simple Notification Service (SNS) informs stakeholders of the status of the data loads.
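
The notification step can be as simple as a publish call from the Lambda functions. The following is a minimal, hypothetical sketch in Python (boto3); the topic ARN, function name, and message fields are placeholders rather than part of Maryville’s actual implementation.

import boto3

sns = boto3.client("sns")

def notify_load_status(table_name, row_count, succeeded):
    """Publish a data-load status message to stakeholders through Amazon SNS."""
    status = "SUCCEEDED" if succeeded else "FAILED"
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:111122223333:lms-load-status",  # placeholder topic
        Subject=f"LMS data load {status}: {table_name}",
        Message=f"Table {table_name}: {row_count} rows loaded, status {status}.",
    )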

Expanding the data lake

The next step was deciding how to copy the SIS data into S3. The team decided to use the AWS Database Migration Service (DMS) to create daily snapshots of more than 2,500 tables from this database. DMS uses a source endpoint for secure access to the on-premises database instance over VPN. A target endpoint determines the specific S3 bucket into which the data should be written. A migration task defines which tables to copy from the source database along with other migration options. Finally, a replication instance, a fully managed virtual machine, runs the migration task to copy the data. With this configuration in place, the data lake architecture for SIS data looks like this:

Diagram 2: Migrating data from the Student Information System
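
The DMS components described above can also be provisioned programmatically. The following boto3 sketch is illustrative only; it assumes hypothetical names, ARNs, placeholder credentials, and an Oracle source, and is not Maryville’s actual configuration.

import json
import boto3

dms = boto3.client("dms")

# Source endpoint: the on-premises SIS database, reached over VPN.
source = dms.create_endpoint(
    EndpointIdentifier="sis-source",                # placeholder name
    EndpointType="source",
    EngineName="oracle",                            # assumed engine
    ServerName="sis.example.edu",
    Port=1521,
    Username="dms_user",
    Password="example-password",
    DatabaseName="SIS",
)

# Target endpoint: the S3 bucket that backs the data lake.
target = dms.create_endpoint(
    EndpointIdentifier="sis-target-s3",
    EndpointType="target",
    EngineName="s3",
    S3Settings={
        "BucketName": "example-sis-data-lake",      # placeholder bucket
        "ServiceAccessRoleArn": "arn:aws:iam::111122223333:role/dms-s3-access",
    },
)

# Replication instance: the managed VM that runs the migration task.
instance = dms.create_replication_instance(
    ReplicationInstanceIdentifier="sis-replication",
    ReplicationInstanceClass="dms.t3.medium",
)

# Migration task: which tables to copy and how (a full-load snapshot here).
# In practice, wait for the replication instance to become "available" first.
dms.create_replication_task(
    ReplicationTaskIdentifier="sis-daily-snapshot",
    SourceEndpointArn=source["Endpoint"]["EndpointArn"],
    TargetEndpointArn=target["Endpoint"]["EndpointArn"],
    ReplicationInstanceArn=instance["ReplicationInstance"]["ReplicationInstanceArn"],
    MigrationType="full-load",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "all-sis-tables",
            "object-locator": {"schema-name": "SIS", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)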

Handling sensitive data

In building a data lake, you have several options for handling sensitive data, including:

  • Leaving it behind in the source system and avoiding copying it through the data replication process
  • Copying it into the data lake, but taking precautions to ensure that access to it is limited to authorized staff
  • Copying it into the data lake, but applying processes to eliminate, mask, or otherwise obfuscate the data before it is made accessible to analysts and data scientists

The Maryville team decided to take the first of these approaches. Building the data lake gave them a natural opportunity to assess where this data was stored in the source system and then make changes to the source database itself to limit the number of highly sensitive data fields.

Validating the data lake

With these steps completed, the team turned to the final task, which was to validate the data lake. For this process they chose to make use of Amazon Athena, AWS Glue, and Amazon Redshift. AWS Glue provided multiple capabilities including metadata extraction, ETL, and data orchestration. Metadata extraction, completed by Glue crawlers, quickly converted the information that DMS wrote to S3 into metadata defined in the Glue data catalog. This enabled the data in S3 to be accessed using standard SQL statements interactively in Athena. Without the added cost and complexity of a database, Maryville’s data analyst was able to confirm that the data loads were completing successfully. He was also able to resolve specific issues encountered on particular tables. The SQL queries, written in Athena, could later be converted to ETL jobs in AWS Glue, where they could be triggered on a schedule to create additional data in S3. Athena and Glue enabled the ETL that was needed to transform the raw data delivered to S3 into prepared datasets necessary for existing dashboards.
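
As a simple illustration of that kind of validation, a row-count query can be run against a Glue catalog table through Athena. This is a generic boto3 sketch with placeholder database, table, and output-location names; it is not Maryville’s validation code.

import time
import boto3

athena = boto3.client("athena")

def count_rows(database, table, output_location):
    """Run a COUNT(*) query in Athena and return the result as an int."""
    query = athena.start_query_execution(
        QueryString=f'SELECT COUNT(*) FROM "{table}"',
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_location},
    )
    query_id = query["QueryExecutionId"]

    # Poll until the query finishes.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)
    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query {query_id} ended in state {state}")

    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    return int(rows[1]["Data"][0]["VarCharValue"])  # row 0 is the header row

# Example: confirm that a DMS-loaded table landed with the expected row count.
print(count_rows("sis_raw", "students", "s3://example-athena-results/"))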

Once curated datasets were created and stored in S3, the data was loaded into an Amazon Redshift data warehouse, which supported direct access by tools outside of AWS using ODBC/JDBC drivers. This capability enabled Maryville’s team to further validate the data by attaching the data in Redshift to existing dashboards that were running in Maryville’s own data center. Redshift’s stored procedure language allowed the team to port some key ETL logic so that the engineering of these datasets could follow a process similar to the approaches used in Maryville’s on-premises data warehouse environment.
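
Loading curated S3 datasets into Amazon Redshift is typically done with a COPY statement. One way to issue that statement from Python is the Amazon Redshift Data API; the sketch below uses placeholder cluster, schema, bucket, and role names and only illustrates the pattern.

import boto3

redshift_data = boto3.client("redshift-data")

# Issue a COPY command that loads a curated Parquet dataset from S3 into Redshift.
response = redshift_data.execute_statement(
    ClusterIdentifier="example-cluster",            # placeholder cluster
    Database="analytics",
    DbUser="etl_user",
    Sql=(
        "COPY curated.course_activity "
        "FROM 's3://example-curated-datasets/course_activity/' "
        "IAM_ROLE 'arn:aws:iam::111122223333:role/redshift-s3-read' "
        "FORMAT AS PARQUET;"
    ),
)

# The Data API is asynchronous; check completion with describe_statement.
status = redshift_data.describe_statement(Id=response["Id"])["Status"]
print(status)  # e.g. SUBMITTED, STARTED, FINISHED, or FAILED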

Conclusion

The overall data lake/data warehouse architecture that the Maryville team constructed currently looks like this:

Diagram 3: The complete architecture

Through this approach, Maryville’s two-person team has moved key data into position for use in a variety of workloads. The data in S3 is now readily accessible for ad hoc interactive SQL workloads in Athena, ETL jobs in Glue, and ultimately for machine learning workloads running in Amazon EC2, Lambda, or Amazon SageMaker. In addition, the S3 storage layer is easy to expand without interrupting prior workloads. At the time of this writing, the Maryville team is beginning to use this environment for the machine learning models described earlier, as well as adding other data sources into the S3 layer.

Acknowledgements

The solution described in this post resulted from the collaborative effort of Christine McQuie, Data Engineer, and Josh Tepen, Cloud Engineer, at Maryville University, with guidance from Travis Berkley and Craig Jordan, AWS Solutions Architects.

Architecting for database encryption on AWS

Post Syndicated from Jonathan Jenkyn original https://aws.amazon.com/blogs/security/architecting-for-database-encryption-on-aws/

In this post, I review the options you have to protect your customer data when migrating or building new databases in Amazon Web Services (AWS). I focus on how you can support sensitive workloads in ways that help you maintain compliance and regulatory obligations, and meet security objectives.

Understanding transparent data encryption

I commonly see enterprise customers migrating existing databases straight from on-premises to AWS without reviewing their design. This might seem simpler and faster, but they miss the opportunity to review the scalability, cost savings, and feature capability of native cloud services. A straight lift-and-shift migration can also create unnecessary operational overhead, carry over unneeded complexity, and result in more time spent troubleshooting and responding to events.

One example is when enterprise customers who are using Transparent Data Encryption (TDE) or Extensible Key Management (EKM) technologies want to reuse the same technologies in their migration to AWS. TDE and EKM are database technologies that encrypt and decrypt database records as the records are written to and read from the underlying storage medium. Customers use TDE features in Microsoft SQL Server, Oracle 10g and 11g, and Oracle Enterprise Edition to meet requirements for data-at-rest encryption. But this doesn’t mean that TDE itself is the requirement. It’s infrequent that an organizational policy or compliance framework specifies a technology such as TDE in the actual requirement. For example, the Payment Card Industry Data Security Standard (PCI-DSS) requires that sensitive data must be protected using “Strong cryptography with associated key-management processes and procedures.” Nowhere does PCI-DSS endorse or require the use of a specific technology.

Understanding risks

It’s important that you understand the risks that encryption-at-rest mitigates before selecting a technology to use. Encryption-at-rest, in the context of databases, generally manages the risk that one of the disks used to store database data is physically stolen and thus compromised. In on-premises scenarios, TDE is an effective technology used to manage this risk. All data from the database—up to and including the disk—is encrypted. The database manages all key management and cryptographic operations. You can also use TDE with a hardware security module (HSM) so that the keys and cryptography for the database are managed outside of the database itself. In TDE implementations, the HSM is used only to manage the key encryption keys (KEK), and not the data encryption keys (DEK) themselves. The DEKs are in volatile memory in the database at runtime, and so the cryptographic operations occur on the database itself.

You can also use native operating system encryption technologies such as dm-crypt or LUKS (Linux Unified Key Setup). Dm-crypt is a full disk encryption (FDE) subsystem in Linux kernel version 2.6 and beyond. Dm-crypt can be used on its own or with LUKS as an extension to add more features. When using dm-crypt, the operating system kernel is responsible for encrypting and decrypting data as it’s written and read from the attached volumes. This would achieve the same outcome as TDE—data written and read to the disk volume is encrypted, and the risk related to physical disk compromise is managed. DEKs are in runtime memory of the machine running the database.

With some TDE implementations, you can encrypt tables, rows, columns, and cells with different DEKs to achieve granular separation of duties between operators. Customers can then configure TDE to authorize access to each DEK based on database login credentials and job function, helping to manage risks associated with unauthorized access. However, the most common configuration I’ve seen is to rely on whole database encryption when using TDE. This configuration gives similar protection against the identified risks as dm-crypt with LUKS used without an HSM, since the DEKs and KEKs are stored within the instance in both cases and the result is that the database data on disk is encrypted.

Using encryption to manage data at rest risks in AWS

When you move to AWS, you gain additional security capabilities that can simplify your security implementations. Since the announcement of AWS Key Management Service (AWS KMS) in 2014, it has been tightly integrated with Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage Service (Amazon S3), and dozens of other services on AWS. This means that data can be encrypted on disk by selecting a single check box. Furthermore, you get the benefits of AWS KMS for key management and cryptographic operations, while the encryption remains transparent to the Amazon Elastic Compute Cloud (Amazon EC2) instance where the data is encrypted and decrypted. For simplicity, the authorization for access to the data is managed entirely by AWS Identity and Access Management (IAM) and AWS KMS key resource policies.
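
For example, turning on that encryption when you create resources comes down to a single parameter. The following boto3 sketch, with a placeholder KMS key ARN and bucket name, shows the pattern for Amazon EBS and Amazon S3.

import boto3

KMS_KEY_ARN = "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder

# Create an EBS volume whose data is encrypted at rest under the specified KMS key.
ec2 = boto3.client("ec2")
ec2.create_volume(
    AvailabilityZone="us-west-2a",
    Size=100,              # GiB
    VolumeType="gp2",
    Encrypted=True,
    KmsKeyId=KMS_KEY_ARN,
)

# Apply default encryption with the same KMS key to an S3 bucket.
s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="example-sensitive-data-bucket",         # placeholder bucket
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": KMS_KEY_ARN,
            }
        }]
    },
)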

If you need more granular access control to the data, you can use the AWS Encryption SDK to encrypt data at the application layer. This provides an effect similar to TDE cell-level protection, backed by a FIPS140-2 Level 2 validated HSM, as might be required by a recognized standard.
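
As a rough sketch of that application-layer approach, here is what encrypting a single sensitive field with the AWS Encryption SDK for Python (version 2.x assumed) might look like, using a placeholder CMK ARN and example values.

import aws_encryption_sdk

CMK_ARN = "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder

client = aws_encryption_sdk.EncryptionSDKClient()
key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(key_ids=[CMK_ARN])

# Encrypt a single sensitive field before the application writes it to the database.
ciphertext, _header = client.encrypt(
    source=b"123-45-6789",
    key_provider=key_provider,
    encryption_context={"table": "customers", "column": "tax_id"},
)

# Decrypt it in the application when an authorized caller needs the value.
plaintext, _header = client.decrypt(source=ciphertext, key_provider=key_provider)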

If you must use a FIPS140-2 Level 3 validated HSM to meet more stringent compliance standards or regulations, then you can use the Custom Key Store capability of AWS KMS to achieve that—again in a transparent way. This option has a trade-off, as there is additional operational overhead in terms of managing an AWS CloudHSM cluster.

Many customers choose to migrate their database into the managed Amazon Relational Database Service (Amazon RDS), rather than managing the database instance themselves. Like the Amazon EC2 service, RDS uses Amazon EBS volumes for its data storage, and so can seamlessly use AWS KMS for encryption at rest functionality. When you do so, your management overhead for the protection of data-at-rest reduces to almost zero. This lets you focus on business value while AWS is responsible for the management of your database and the protection of the underlying data. The next section reviews this option and others in more detail.

You can review the available Amazon RDS database engines and versions via the Amazon RDS User Guide documentation, or by running the following AWS Command Line Interface (AWS CLI) command:

aws rds describe-db-engine-versions --query "DBEngineVersions[].DBEngineVersionDescription" --region <regionIdentifier>

Recommended Solutions

If you’re moving an existing database to AWS, you have the following solutions for data at rest encryption. I go into more detail for each option below.

Table 1 – Encryption options

Option | Database management | Host       | Encryption | Key management
1      | Amazon managed      | Amazon RDS | Amazon EBS | AWS KMS
2      | Amazon managed      | Amazon RDS | Amazon EBS | AWS KMS Custom Key Store
3      | Customer managed    | Amazon EC2 | Amazon EBS | AWS KMS
4      | Customer managed    | Amazon EC2 | Amazon EBS | AWS KMS Custom Key Store
5      | Customer managed    | Amazon EC2 | Amazon EBS | LUKS
6      | Customer managed    | Amazon EC2 | Database   | Database TDE
7      | Customer managed    | Amazon EC2 | Database   | CloudHSM

Option 1 – Using Amazon RDS with Amazon EBS encryption and key management provided by AWS KMS

This approach uses the Amazon RDS service where AWS manages the operating system and database engine. You can configure this service to be a highly scalable resource spanning multiple Availability Zones within an AWS Region to provide resiliency. AWS KMS manages the keys that are used to encrypt the attached Amazon EBS volumes at rest.

Note: This configuration is recommended as your default database encryption approach.
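
A minimal boto3 sketch of this option, with placeholder identifiers, credentials, and a placeholder KMS key ARN, might look like the following.

import boto3

rds = boto3.client("rds")

# Create a Multi-AZ RDS instance whose storage is encrypted with a KMS key.
rds.create_db_instance(
    DBInstanceIdentifier="example-encrypted-db",    # placeholder
    DBInstanceClass="db.m5.large",
    Engine="postgres",                              # assumed engine
    MasterUsername="admin_user",
    MasterUserPassword="example-password",
    AllocatedStorage=100,                           # GiB
    MultiAZ=True,
    StorageEncrypted=True,
    KmsKeyId="arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)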

Benefits

  • No key management requirement on host; key management is automated and performed by AWS KMS
  • Meets FIPS140-2 Level 2 validation requirements
  • Simple vertical and horizontal scalability
  • Snapshots for recovery are encrypted automatically
  • AWS manages the patching, maintenance, and configuration of the operating system and database engine
  • Well-recognized configuration, with support offered through AWS Support
  • AWS KMS costs are comparatively low

Challenges

  • Dependent on Amazon RDS supported engines and versions
  • Might require additional controls to manage unauthorized access at table, row, column, or cell level

Option 2 – Using Amazon RDS with Amazon EBS encryption and key management provided by AWS KMS custom key store

This approach uses the Amazon RDS service where AWS manages the operating system and database engine. You can configure this service to be a highly scalable resource spanning multiple Availability Zones within a Region to provide resiliency. CloudHSM keys are used via AWS KMS service integration to encrypt the Amazon EBS volumes at rest.

Note: This configuration is recommended where FIPS140-2 Level 3 validation is a specified compliance requirement.
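
The difference from Option 1 is only where the key material lives. A hedged boto3 sketch follows, with placeholder CloudHSM cluster, certificate, and password values; the resulting key is then used like any other KMS key (for example, as the KmsKeyId in the Option 1 sketch above).

import boto3

kms = boto3.client("kms")

# Register an existing CloudHSM cluster with AWS KMS as a custom key store.
store = kms.create_custom_key_store(
    CustomKeyStoreName="example-cloudhsm-store",            # placeholder
    CloudHsmClusterId="cluster-1a23b4cdefg",                 # placeholder cluster ID
    TrustAnchorCertificate=open("customerCA.crt").read(),    # cluster trust anchor certificate
    KeyStorePassword="kmsuser-password",                     # password of the kmsuser CU
)

# The key store must be connected before keys can be created in it; the
# connection is asynchronous, so wait for the CONNECTED state in practice.
kms.connect_custom_key_store(CustomKeyStoreId=store["CustomKeyStoreId"])

# Create a KMS key whose key material is generated and stored in CloudHSM.
key = kms.create_key(
    Description="EBS encryption key backed by CloudHSM",
    Origin="AWS_CLOUDHSM",
    CustomKeyStoreId=store["CustomKeyStoreId"],
)
print(key["KeyMetadata"]["Arn"])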

Benefits

  • No key management requirement on host; key management is performed by AWS KMS
  • Meets FIPS140-2 Level 3 validation requirements
  • Simple vertical and horizontal scalability
  • Snapshots for recovery are encrypted automatically
  • AWS manages the patching, maintenance, and configuration of the database engine
  • Well-recognized configuration with support offered through AWS Support

Challenges

  • Dependent on Amazon RDS supported engines and versions
  • You are responsible for provisioning, configuration, scaling, maintenance, and costs of running CloudHSM cluster
  • Might require additional controls to manage unauthorized access at table, row, column, or cell level

Option 3 – Customer-managed database platform hosted on Amazon EC2 with Amazon EBS encryption and key management provided by KMS

In this approach, the key difference is that you’re responsible for managing the EC2 instances, operating systems, and database engines. You can still configure your databases to be highly scalable resources spanning multiple Availability Zones within a Region to provide resiliency, but it takes more effort. AWS KMS manages the keys that are used to encrypt the attached Amazon EBS volumes at rest.

Note: This configuration is recommended when Amazon RDS doesn’t support the desired database engine type or version.

Benefits

  • A 1:1 relationship for migration of database engine configuration
  • Key rotation and management is handled transparently by AWS
  • Data encryption keys are managed by the hypervisor, not by your EC2 instance
  • AWS KMS costs are comparatively low

Challenges

  • You’re responsible for patching and updates of the database engine and OS
  • Might require additional controls to manage unauthorized access at table, row, column, or cell level

Option 4 – Customer-managed database platform hosted on Amazon EC2 with Amazon EBS encryption and key management provided by KMS custom key store

In this approach, you are again responsible for managing the EC2 instances, operating systems, and database engines. You can still configure your databases to be highly scalable resources spanning multiple Availability Zones within a Region to provide resiliency, but it takes more effort. And similar to Option 2, CloudHSM keys are used via AWS KMS service integration to encrypt the Amazon EBS volumes at rest.

Note: This configuration is recommended when Amazon RDS doesn’t support the desired database engine type or version and when FIPS140-2 Level 3 compliance is required.

Benefits

  • A 1:1 relationship for migration of database engine configuration
  • Data encryption keys managed by the hypervisor, not by your EC2 instance
  • Keys managed by FIPS140-2 Level 3 validated HSM

Challenges

  • You’re responsible for provisioning, configuration, scaling, maintenance, and costs of running CloudHSM cluster
  • You’re responsible for patching and updates of the database engine and OS
  • Might require additional controls to manage unauthorized access at table, row, column, or cell level

Option 5 – Customer-managed database platform hosted on Amazon EC2 with Amazon EBS encryption and key management provided by LUKS

In this approach, you’re still responsible for managing the EC2 instances, operating systems, and database engines. You also need to install LUKS onto the Linux instance to manage the encryption of data on Amazon EBS.

Benefits

  • A 1:1 relationship for migration of database engine configuration
  • Transparent encryption is managed by OS with LUKS

Challenges

  • You’re responsible for patching and updates of the database engine and OS
  • Data encryption keys are managed directly on the EC2 instance, not by a dedicated key management system
  • Scaling must be vertical, which is slow and costly
  • LUKS is supported through open-source licensing
  • Support for backup and recovery is LUKS specific, and requires additional consideration
  • Might require additional controls to manage unauthorized access at table, row, column, or cell level

Note: This approach limits you to only Linux instances and requires the most technical knowledge and effort on your part. Options, such as BitLocker and SQL Server Always Encrypted, exist for Windows hosts, and the complexity and challenges are similar to those of LUKS.

Option 6 – Customer-managed database platform hosted on Amazon EC2 with database encryption and key management provided by TDE

In this approach, you’re still responsible for managing the EC2 instances, operating systems, and database engines. However, instead of encrypting the Amazon EBS volume where the database is stored, you use TDE wallet keys managed by the database engine to encrypt and decrypt records as they are stored and retrieved.

Benefits

  • A 1:1 relationship for migration of database engine configuration
  • Table, row, column, and cell level encryption are managed by TDE, reducing end point risks relating to unauthorized access

Challenges

  • You’re responsible for patching and updates of the database engine and OS
  • Costly license for TDE feature
  • Data encryption keys are managed directly on the EC2 instance
  • Scaling is dependent on TDE functionality and Amazon EC2 scaling
  • Support is split between AWS and a third-party database vendor
  • Cannot share snapshots

Note: This approach is not available with Amazon RDS.

Option 7 – Customer-managed database platform hosted on Amazon EC2 with database encryption performed by TDE and key management provided by CloudHSM

In this approach, you’re still responsible for managing the EC2 instances, operating systems, and database engines. However, instead of encrypting the Amazon EBS volume where the database is stored, you use TDE wallet keys managed by a CloudHSM cluster to encrypt and decrypt records as they are stored and retrieved.

Benefits

  • A 1:1 relationship for migration of database engine configuration
  • Wallet keys (KEK) are managed by a FIPS140-2 Level 3 validated HSM
  • Table, row, column, and cell level encryption are managed by TDE, reducing end point risks relating to unauthorized access

Challenges

  • You’re responsible for patching and updates of the database engine and OS
  • Costly license for TDE feature
  • You are responsible for provisioning, configuration, scaling, maintenance, and costs of running CloudHSM cluster
  • Integration and support of CloudHSM with TDE might vary
  • Scaling is dependent on TDE functionality, Amazon EC2 scaling, and CloudHSM cluster.
  • Data encryption keys are managed on EC2 instance
  • Support is split between AWS and a third-party database vendor
  • Cannot share snapshots

Note: This approach is not available with Amazon RDS.

Summary

While you can operate in AWS similar to how you operate in your on-premises environment, the preceding configurations and recommendations show how you can significantly reduce your challenges and increase your benefits by using cloud-native security services like AWS KMS, Amazon RDS, and CloudHSM. Specifically, using Amazon RDS with Amazon EBS volumes encrypted by AWS KMS provides a highly scalable, resilient, and secure way to manage your keys in AWS.

While there might be some architectural redesign and configuration work needed to move an on-premises database into Amazon RDS, you can leverage AWS services to help you meet your compliance requirements with less effort. By offloading the OS and database maintenance responsibility to AWS, you simultaneously reduce operational friction and increase security. By migrating this way, you can benefit from the scalability and resilience of the AWS global infrastructure and expertise. Lastly, to get started with migrating your database to AWS, I encourage you to use the AWS Database Migration Service.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Jonathan Jenkyn

Jonathan is a Senior Security Growth Strategies Consultant with AWS Professional Services. He’s an active member of the People with Disabilities affinity group, and has built several Amazon initiatives supporting charities and social responsibility causes. Since 1998, he has been involved in IT Security at many levels, from implementation of cryptographic primitives to managing enterprise security governance. Outside of work, he enjoys running, cycling, fund-raising for the BHF and Ipswich Hospital Charity, and spending time with his wife and 5 children.

Author

Scott Conklin

Scott is a Senior Security Consultant with AWS Professional Services (Global Specialty Practice). Based out of Chicago with 4 years tenure, he is an avid distance runner, crypto nerd, lover of unicorns, and enjoys camping, nature, playing Minecraft with his 3 kids, and binge watching Amazon Prime with his wife.

Improved client-side encryption: Explicit KeyIds and key commitment

Post Syndicated from Alex Tribble original https://aws.amazon.com/blogs/security/improved-client-side-encryption-explicit-keyids-and-key-commitment/

I’m excited to announce the launch of two new features in the AWS Encryption SDK (ESDK): local KeyId filtering and key commitment. These features each enhance security for our customers, acting as additional layers of protection for your most critical data. In this post I’ll tell you how they work. Let’s dig in.

The ESDK is a client-side encryption library designed to make it easy for you to implement client-side encryption in your application using industry standards and best practices. Since the security of your encryption is only as strong as the security of your key management, the ESDK integrates with the AWS Key Management Service (AWS KMS), though the ESDK doesn’t require you to use any particular source of keys. When using AWS KMS, the ESDK wraps data keys to one or more customer master keys (CMKs) stored in AWS KMS on encrypt, and calls AWS KMS again on decrypt to unwrap the keys.

It’s important to use only CMKs you trust. If you encrypt to an untrusted CMK, someone with access to the message and that CMK could decrypt your message. It’s equally important to only use trusted CMKs on decrypt! Decrypting with an untrusted CMK could expose you to ciphertext substitution, where you could decrypt a message that was valid, but written by an untrusted actor. There are several controls you can use to prevent this. I recommend a belt-and-suspenders approach. (Technically, this post’s approach is more like a belt, suspenders, and an extra pair of pants.)

The first two controls aren’t new, but they’re important to consider. First, you should configure your application with an AWS Identity and Access Management (IAM) policy that only allows it to use specific CMKs. An IAM policy allowing Decrypt on "Resource":"*" might be appropriate for a development or testing account, but production accounts should list out CMKs explicitly. Take a look at our best practices for IAM policies for use with AWS KMS for more detailed guidance. Using IAM policy to control access to specific CMKs is a powerful control, because you can programmatically audit that the policy is being used across all of your accounts. To help with this, AWS Config has added new rules and AWS Security Hub has added new controls to detect existing IAM policies that might allow broader use of CMKs than you intended. We recommend that you enable Security Hub’s Foundational Security Best Practices standard in all of your accounts and Regions. This standard includes a set of vetted automated security checks that can help you assess your security posture across your AWS environment. To help you when writing new policies, the IAM policy visual editor in the AWS Management Console warns you if you are about to create a new policy that includes "Resource":"*".

The second control to consider is to make sure you’re passing the KeyId parameter to AWS KMS on Decrypt and ReEncrypt requests. KeyId is optional for symmetric CMKs on these requests, since the ciphertext blob that the Encrypt request returns includes the KeyId as metadata embedded in the blob. That’s quite useful—it’s easier to use, and means you can’t (permanently) lose track of the KeyId without also losing the ciphertext. That’s an important concern for data that you need to access over long periods of time. Data stores that would otherwise include the ciphertext and KeyId as separate objects get re-architected over time and the mapping between the two objects might be lost. If you explicitly pass the KeyId in a decrypt operation, AWS KMS will only use that KeyId to decrypt, and you won’t be surprised by using an untrusted CMK. As a best practice, pass KeyId whenever you know it. ESDK messages always include the KeyId; as part of this release, the ESDK will now always pass KeyId when making AWS KMS Decrypt requests.
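
For callers using AWS KMS directly (outside the ESDK), passing KeyId explicitly looks like the following boto3 sketch, which uses a placeholder key ARN.

import boto3

kms = boto3.client("kms")

TRUSTED_KEY_ARN = "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder

# Encrypt some data under the trusted key...
ciphertext_blob = kms.encrypt(KeyId=TRUSTED_KEY_ARN, Plaintext=b"example")["CiphertextBlob"]

# ...and pass KeyId explicitly on decrypt. Even though KMS can read the key ID
# out of the ciphertext blob for symmetric CMKs, naming the key means the call
# fails if the blob was actually encrypted under some other, untrusted key.
plaintext = kms.decrypt(CiphertextBlob=ciphertext_blob, KeyId=TRUSTED_KEY_ARN)["Plaintext"]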

A third control to protect you from using an unexpected CMK is called local KeyId filtering. If you explicitly pass the KeyId of an untrusted CMK, you would still be open to ciphertext substitution—so you need to be sure you’re only passing KeyIds that you trust. The ESDK will now filter KeyIds locally by using a list of trusted CMKs or AWS account IDs you configure. This enforcement happens client-side, before calling AWS KMS. Let’s walk through a code sample. I’ll use Java here, but this feature is available in all of the supported languages of the ESDK.

Let’s say your app is decrypting ESDK messages read out of an Amazon Simple Queue Service (Amazon SQS) queue. Somewhere you’ll likely have a function like this:

public byte[] decryptMessage(final byte[] messageBytes,
                             final Map<String, String> encryptionContext) {
    // The Amazon Resource Name (ARN) of your CMK.
    final String keyArn = "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab";

    // 1. Instantiate the SDK
    AwsCrypto crypto = AwsCrypto.builder().build();

Now, when you create a KmsMasterKeyProvider, you’ll configure it with one or more KeyIds you expect to use. I’m passing a single element here for simplicity.

	// 2. Instantiate a KMS master key provider in Strict Mode using buildStrict()
    final KmsMasterKeyProvider keyProvider = KmsMasterKeyProvider.builder().buildStrict(keyArn); 

Decrypt the message as normal. The ESDK will check each encrypted data key against the list of KeyIds configured at creation: in the preceding example, the single CMK in keyArn. The ESDK will only call AWS KMS for matching encrypted data keys; if none match, it will throw a CannotUnwrapDataKeyException.

	// 3. Decrypt the message.
    final CryptoResult<byte[], KmsMasterKey> decryptResult = crypto.decryptData(keyProvider, messageBytes);

    // 4. Validate the encryption context.
    //

(See our documentation for more information on how encryption context provides additional authentication features!)

	checkEncryptionContext(decryptResult, encryptionContext);

    // 5. Return the decrypted bytes.
    return decryptResult.getResult();
}

We recommend that everyone using the ESDK with AWS KMS adopt local KeyId filtering. How you do this varies by language—the ESDK Developer Guide provides detailed instructions and example code.

I’m especially excited to announce the second new feature of the ESDK, key commitment, which addresses a non-obvious property of modern symmetric ciphers used in the industry (including the Advanced Encryption Standard (AES)). These ciphers have the property that decrypting a single ciphertext with two different keys could give different plaintexts! Picking a pair of keys that decrypt to two specific messages involves trying random keys until you get the message you want, making it too expensive for most messages. However, if you’re encrypting messages of a few bytes, it might be feasible. Most authenticated encryption schemes, such as AES-GCM, don’t solve for this issue. Instead, they prevent someone who doesn’t control the keys from tampering with the ciphertext. But someone who controls both keys can craft a ciphertext that will properly authenticate under each key by using AES-GCM.

All of this means that if a sender can get two parties to use different keys, those two parties could decrypt the exact same ciphertext and get different results. That could be problematic if the message reads, for example, as “sell 1000 shares” to one party, and “buy 1000 shares” to another.

The ESDK solves this problem for you with key commitment. Key commitment means that only a single data key can decrypt a given message, and that trying to use any other data key will result in a failed authentication check and a failure to decrypt. This property allows for senders and recipients of encrypted messages to know that everyone will see the same plaintext message after decryption.

Key commitment is on by default in version 2.0 of the ESDK. This is a breaking change from earlier versions. Existing customers should follow the ESDK migration guide for their language to upgrade from 1.x versions of the ESDK currently in their environment. I recommend a thoughtful and careful migration.
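
In the Python flavor of the ESDK, for example, version 2.0 exposes the commitment policy directly on the client; the other supported languages expose a similar setting. The sketch below uses a placeholder key ARN and is only illustrative; during a staged migration from 1.x, the migration guide describes starting from a more permissive policy before requiring commitment everywhere.

import aws_encryption_sdk
from aws_encryption_sdk.identifiers import CommitmentPolicy

# REQUIRE_ENCRYPT_REQUIRE_DECRYPT is the 2.0 default: messages are written with
# key commitment, and messages without it fail to decrypt.
client = aws_encryption_sdk.EncryptionSDKClient(
    commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
)

key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(
    key_ids=["arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"]  # placeholder
)

ciphertext, _ = client.encrypt(source=b"sell 1000 shares", key_provider=key_provider)
plaintext, _ = client.decrypt(source=ciphertext, key_provider=key_provider)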

AWS is always looking for feedback on ways to improve our services and tools. Security-related concerns can be reported to AWS Security at [email protected]. We’re deeply grateful for security research, and we’d like to thank Thai Duong from Google’s security team for reaching out to us. I’d also like to thank my colleagues on the AWS Crypto Tools team for their collaboration, dedication, and commitment (pun intended) to continuously improving our libraries.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Crypto Tools forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Alex Tribble

Alex is a Principal Software Development Engineer in AWS Crypto Tools. She joined Amazon in 2008 and has spent her time building security platforms, protecting availability, and generally making things faster and cheaper. Outside of work, she, her wife, and children love to pack as much stuff into as few bikes as possible.

How to use AWS Config to determine compliance of AWS KMS key policies to your specifications

Post Syndicated from Tracy Pierce original https://aws.amazon.com/blogs/security/how-to-use-aws-config-to-determine-compliance-of-aws-kms-key-policies-to-your-specifications/

One of the top security methodologies is the principle of least privilege, which is the practice of limiting user, application, and service permissions to only those necessary to perform a function or task. In this post, I will describe how you can use AWS Config to create compliance rules that will scan AWS Key Management Service (AWS KMS) key policies to determine whether they follow your company’s guidelines for least privilege. You can use the AWS Config Rules Development Kit (RDK) on GitHub (aws-config-rdk) to quickly create and save custom AWS Config rule sets as AWS Lambda functions in your account(s), and test the rules against sample configuration items.

In this solution, you use a Gherkin syntax file, which is a good method for determining the requirements of your rule, as well as how it should react to input. A Gherkin syntax file gives you the ability to create a parameter file and a feature file. The parameter file lists information like RuleName, SourceRuntime, CodeKey, InputParameters, Trigger, SourcePeriodic, and so on. The feature file includes a description of the rule, detailed information about the parameters, what the feature is meant to accomplish, and the scenarios you want to evaluate your rule against.

In this post, I will show you how to take a parameter file and feature file, and use the requirements in them to create your AWS Config rule and testing scenarios using the AWS Config RDK. I include an example key policy file for you to use for your test scenarios. You can download all the code snippets used in this post from the aws-config-aws-kms-policy-rule GitHub repository. I will explain how to use the code examples to create the AWS Config rule as a Lambda function, and how to use the AWS Config RDK to test locally and deploy the rule to your account(s).

Overview

The solution described in this post uses custom AWS Config rules to scan AWS KMS key policies every 24 hours. It checks these key policies against a set of parameters that you determine beforehand to meet your organization’s security standards. Based on the rule checks, AWS Config marks each key policy as either compliant or noncompliant when compared against your company’s specifications. Noncompliant resources are noted as such, so that an administrator can review and determine whether the policy should be modified, or whether it will be allowed as an exception. The following figure shows an overview of the process.

Figure 1: Overview

The way the process works is as follows:

  1. The AWS Config rule is triggered by a configuration change and scans your key policy.
  2. The key policy is evaluated against your custom AWS Config rule scenarios.
  3. The compliance results of the key policies are presented in the AWS Config console.

The files in the GitHub repository you will use for this post are:

  • AWSConfigRuleKMS.feature – This is the Gherkin file used to determine the scenarios you will test against.
  • AWSConfigRuleKMSPolicy.py – This file is used to create the policy formatting that the AWSConfigRuleKMS.py script uses to parse AWS KMS key policies. Because the AWS KMS key policies are JSON objects, you have to parse them before inputting them as data so you can retrieve comparison results.
  • AWSConfigRuleKMS.py – This is the actual script that does the comparisons of the policies retrieved and the scenarios laid out in the Gherkin file.
  • AWSConfigRuleKMS_test.py – This is the testing file that takes an example policy, runs it against the multiple test scenarios, and outputs the test results. This lets you know if your script is running as expected and producing the proper outcomes.
  • parameters.json – This is the parameters file that tells AWS Config which parameters to check for in the rules.

Prerequisites

This solution has the following prerequisites:

In addition, this solution uses the following services:

Deploying the solution

From the Gherkin file in my repo, you can see you will be testing for eight different scenarios regarding least privilege access to your AWS KMS customer master keys (CMKs). I decided to whitelist any CMKs that have an alias beginning with the word “Otter*”, and any UserID that begins with “AROAOTTER*”. The parameters are set in the parameters.json file. This is how AWS Config knows what to check for in the rules. You will use these same parameters in the AWSConfigRuleKMS_test.py file when performing your tests. Be sure to modify these parameters when creating your own rules.

Scenario 1: Checks to determine if a CMK is marked DISABLED.

Scenario 2: Checks to determine if a CMK is marked ENABLED and is in the list of whitelisted CMKs.

Scenario 3: Checks to determine if a CMK is marked ENABLED, is not in the list of whitelisted CMKs, and has an action equal to kms:*.

Scenario 4: Checks to determine if a CMK is marked ENABLED, is not in the list of whitelisted CMKs, includes a condition for users, does not have kms:*, and has a policy allowing the following actions: kms:Encrypt, kms:Decrypt, kms:Create*, kms:Delete*, and kms:Put* together.

Scenario 5: Checks to determine if a CMK is marked ENABLED, is not in the list of whitelisted CMKs, includes a condition for users, does not have kms:*, and does not have a policy allowing the following actions: kms:Encrypt, kms:Decrypt, kms:Create*, kms:Delete*, and kms:Put* together.

Scenario 6: Checks to determine if a CMK is marked ENABLED, is not in the list of whitelisted CMKs, includes a condition for users, user is listed in the whitelisted users, does not have kms:*, and has a policy allowing only the following actions: kms:Create*, kms:Delete*, and kms:Put*.

Scenario 7: Checks to determine if a CMK is marked ENABLED, is not in the list of whitelisted CMKs, includes a condition for users, user is not listed in the whitelisted users, does not have kms:*, and has a policy allowing only the following actions: kms:Create*, kms:Delete*, and kms:Put*.

Scenario 8: Checks to determine if a CMK is marked ENABLED, is not in the list of whitelisted CMKs, includes a condition for users, user is listed in the whitelisted users, does not have kms:*, and has a policy allowing only the following actions: kms:Encrypt, kms:Decrypt, kms:Create*, kms:Delete*, and kms:Put* together.
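
To make the scenario logic concrete, each check boils down to reading a CMK’s key policy and inspecting its statements. The following standalone boto3 sketch is not the repository’s AWSConfigRuleKMS.py; it is only an illustration of the shape of the scenario 3 check, reusing the same whitelist parameters.

import json
import fnmatch
import boto3

kms = boto3.client("kms")

CMK_WHITELIST = "Otter*"        # same value as the CMK_Whitelist parameter
ADMIN_USER_ID = "AROAOTTER*"    # Admin_User_Id, used by the user-id scenarios; shown only for completeness

def has_open_kms_permissions(key_id):
    """Scenario 3 shape: ENABLED, not whitelisted, and a statement allows kms:*."""
    metadata = kms.describe_key(KeyId=key_id)["KeyMetadata"]
    if not metadata["Enabled"]:
        return False  # Scenario 1 handles disabled CMKs

    aliases = [a["AliasName"].replace("alias/", "")
               for a in kms.list_aliases(KeyId=key_id)["Aliases"]]
    if any(fnmatch.fnmatch(alias, CMK_WHITELIST) for alias in aliases):
        return False  # Scenario 2: whitelisted CMKs are compliant

    policy = json.loads(kms.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"])
    for statement in policy["Statement"]:
        actions = statement.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        if statement.get("Effect") == "Allow" and "kms:*" in actions:
            return True  # noncompliant: the policy grants every KMS action
    return False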

You will be using the AWS Config RDK to create and test your rules, and also to deploy them to your account.

To create your AWS Config rule (RDK CLI)

  1. Open a terminal window.
  2. Use the cd command to move to the directory in which you want to create your rule.
  3. Use the create command to create your rule, using the following example. Replace all variables in italics with your inputs.
    
    $ rdk create AWSConfigRuleKMS --runtime python3.6 --resource-types AWS::KMS::Key --input-parameters '{"CMK_Whitelist":"Otter*","Admin_User_Id":"AROAOTTER*"}'
    

  4. You should see output similar to the following:
    
    Running create!
    Local Rule files created.
    

In your directory, you will now see the following files.

  • AWSConfigRuleKMS.py: This is a skeleton file for you to create your Lambda function. It has some base code and is commented to assist you with building your functions.
  • AWSConfigRuleKMS_test.py: This is a skeleton file for you to create your testing scenario script. It has some base code and helpers in place to make this easier for you.
  • parameters.json: This is a parameter file, based on the inputs from the create command.

For this solution, I have already created the code snippets you will need. Download the files from my GitHub repository. Make sure to include the AWSConfigRuleKMSPolicy.py from my repository in the directory as well.

To download the code snippets from the GitHub repository (CLI)

  1. Open a terminal.
  2. Use the cd command to change to the directory where you created your AWS Config rule.
  3. Run the following command:
    
    git clone https://github.com/aws-samples/aws-config-aws-kms-policy-rule
    

Now that you have the code snippets, you need to place them into the skeleton files to complete the rule.

To complete the Python scripts

  1. In a text editor, open your local copies of both AWSConfigRuleKMS.py and AWSConfigRuleKMS_test.py.
  2. Copy the contents of the AWSConfigRuleKMS.py from my GitHub repository.
  3. In the AWSConfigRuleKMS.py skeleton file, delete the code from the line import json down to and including the line above the section starting with # Helper Functions #. Paste the copied code in its place and save the file.
  4. Copy the contents of the AWSConfigRuleKMS_test.py from my GitHub repository.
  5. In the AWSConfigRuleKMS_test.py skeleton file, delete the code from the line import sys down to and including the line above the section starting with # Helper Functions #. Paste the copied code in its place and save the file.

With all the files updated, you will now use the AWS Config RDK to test the scenarios. For testing, you use the AWSConfigRuleKMS_test.py file, which is the file housing the test scenarios to ensure that your rules work as expected.

To test your AWS Config rule (RDK CLI)

  1. Open a terminal window.
  2. Use the cd command to change to the directory one level above where you created your AWS Config rule. For example, if your AWS Config rule is in C://User/Documents/Config/AWSConfigRuleKMS, then change to the C://User/Documents/Config directory.
  3. Use the test-local command to test your rule, using the following example:
    $ rdk test-local AWSConfigRuleKMS_test
  4. You should see output similar to the following:
    
    Running local test!
    Testing AWSConfigRuleKMS
    Looking for tests in /User/Documents/Config/AWSConfigRuleKMS
    AWSConfigRuleKMS_test.py
    Debug!
    <unittest.suite.TestSuite tests=[<unittest.suite.TestSuite tests=[<AWSConfigRuleKMS_test.TestKMSKeyPolicy testMethod=test__scenario_7_admin_role_not_in_whitelist_sep_of_duty>, <AWSConfigRuleKMS_test.TestKMSKeyPolicy testMethod=test_is_not_cmk>, <AWSConfigRuleKMS_test.TestKMSKeyPolicy testMethod=test_scenario_1_disabled_status>, <AWSConfigRuleKMS_test.TestKMSKeyPolicy testMethod=test_scenario_2_cmk_in_whitelist>, <AWSConfigRuleKMS_test.TestKMSKeyPolicy testMethod=test_scenario_3_kms_star_in_policy>, <AWSConfigRuleKMS_test.TestKMSKeyPolicy testMethod=test_scenario_4_no_sep_of_duty>, <AWSConfigRuleKMS_test.TestKMSKeyPolicy testMethod=test_scenario_5_sep_of_duty_actions>, <AWSConfigRuleKMS_test.TestKMSKeyPolicy testMethod=test_scenario_6_admin_role_in_whitelist_sep_of_duty>, <AWSConfigRuleKMS_test.TestKMSKeyPolicy testMethod=test_scenario_8_admin_role_in_whitelist_no_sep_of_duty>, <AWSConfigRuleKMS_test.TestKMSKeyPolicy testMethod=test_scenario_no_conditions>]>]>
    test__scenario_7_admin_role_not_in_whitelist_sep_of_duty (AWSConfigRuleKMS_test.TestKMSKeyPolicy) ... in Key Policy for alias/testkey, statement does have separation of duties, CMK is not whitelisted, and user id is not whitelisted
    ok
    test_is_not_cmk (AWSConfigRuleKMS_test.TestKMSKeyPolicy) ... ok
    test_scenario_1_disabled_status (AWSConfigRuleKMS_test.TestKMSKeyPolicy) ... CMK alias/testkey is disabled
    ok
    test_scenario_2_cmk_in_whitelist (AWSConfigRuleKMS_test.TestKMSKeyPolicy) ... CMK alias/Otter* is in whitelist for CMK Key Policy check
    ok
    test_scenario_3_kms_star_in_policy (AWSConfigRuleKMS_test.TestKMSKeyPolicy) ... in Key Policy for alias/testkey, statement does have open KMS permissions and CMK is not whitelisted
    ok
    test_scenario_4_no_sep_of_duty (AWSConfigRuleKMS_test.TestKMSKeyPolicy) ... in Key Policy for alias/testkey, statement does not have separation of duties and CMK is not whitelisted
    ok
    test_scenario_5_sep_of_duty_actions (AWSConfigRuleKMS_test.TestKMSKeyPolicy) ... in Key Policy for alias/testkey, statement does have separation of duties and CMK is not whitelisted
    ok
    test_scenario_6_admin_role_in_whitelist_sep_of_duty (AWSConfigRuleKMS_test.TestKMSKeyPolicy) ... in Key Policy for alias/testkey, statement does have separation of duties, CMK is not whitelisted, and user id is whitelisted
    ok
    test_scenario_8_admin_role_in_whitelist_no_sep_of_duty (AWSConfigRuleKMS_test.TestKMSKeyPolicy) ... In Key Policy for alias/testkey, statement does not have separation of duties, CMK is not whitelisted, and user id is whitelisted
    ok
    test_scenario_no_conditions (AWSConfigRuleKMS_test.TestKMSKeyPolicy) ... ok
    
    ----------------------------------------------------------------------
    Ran 10 tests in 0.013s
    
    OK
    <unittest.runner.TextTestResult run=10 errors=0 failures=0>
    

If you encounter errors during testing, they could be caused by:

  • Incorrect code in the AWSConfigRuleKMS.py file
  • Incorrect code in the AWSConfigRuleKMS_test.py file
  • Invalid formatting in the parameters.json file
  • Invalid names on the files – they must match the exact formatting I have.

You should go back through your code to make sure that you used proper syntax, and that the test cases match your requirements. If you run into syntax issues, you can find the full list of AWS supported SDKs on the Tools to Build on AWS page. You can match your code formatting against the code I supplied in my GitHub repository to ensure proper spacing/tabs.

When testing is complete, deploy your AWS Config rule into your account(s). The file to deploy the rule itself is the AWSConfigRuleKMS.py file.

To deploy your AWS Config rule (RDK CLI)

  1. Open a terminal window.
  2. Use the cd command to change to the directory one level above where you created your AWS Config rule. For example, if your rule is in C://User/Documents/Config/AWSConfigRuleKMS, then change to the C://User/Documents/Config directory.
  3. Use the deploy command to deploy your rule, using the following example:
    $ rdk deploy AWSConfigRuleKMS
  4. You should see output similar to the following:
    
    Running deploy!
    Zipping AWSConfigRuleKMS
    Uploading AWSConfigRuleKMS
    Creating CloudFormation Stack for AWSConfigRuleKMS
    Waiting for CloudFormation stack operation to complete...
    ...
    Waiting for CloudFormation stack operation to complete...
    AWS Config deploy complete.
    

There are two ways you can verify that your deployment was successful. You can use either the AWS CloudFormation console, or the AWS Config console. Let’s look at it in the AWS Config console.

To verify deployment in the AWS Config console

  1. Open the AWS Config console.
  2. In the navigation pane, choose Rules.
  3. Choose the AWSConfigRuleKMS rule (or what you named it).
  4. In the Rule details section, you will see the name, trigger type, resource type, rule ARN, parameters, and status, as shown in the following screenshot.
     
    Figure 2: AWSConfigRuleKMS in the AWS Config console

After the deployment is complete, you must modify the AWS Config role that the AWS Config recorder assumes. Although typically this role would be AWSServiceRoleForConfig, in this solution you need to have your own AWS Config role with the Managed Policy arn:aws:iam::aws:policy/service-role/AWSConfigRole attached. The reason for this is that you need to modify the Trust Policy of the AWS Config role to trust the newly created AWS Lambda role. The AWS Lambda role ARN should look similar to the following:


arn:aws:iam::111122223333:role/rdk/AWSConfigRuleKMS-rdkLambdaRole-RANDOMCHARACTERS

To create your AWS Config recorder role in the IAM console

  1. Open the AWS IAM console.
  2. In the navigation pane, select Roles.
  3. At the top, choose Create Role.
  4. Select Config from the list of services.
  5. For Select your use case, select Config – Customizable.
  6. Choose Next: Permissions.
  7. Choose Next: Tags.
  8. Choose Next: Review.
  9. Enter a descriptive name for the role. For my example, I used CustomConfigRole.
  10. Choose Create role.

To modify the AWS Config recorder role’s trust policy

  1. Open the AWS IAM console.
  2. In the navigation pane, select Roles.
  3. Choose the role created by the RDK for Lambda, which is named AWSConfigRuleKMS-rdkLambdaRole-RANDOMCHARACTERS, or whatever you named it.
  4. Next to the Role ARN, choose the copy icon.
  5. Go back to the Roles screen.
  6. Choose the role you just created.
  7. Choose the Trust relationships tab.
  8. Choose Edit trust relationship.
  9. Copy the following trust policy, and paste it over the existing trust policy. (You need to change the AWS Account ARN in italics to your own AWS Account number and role name):
    
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::111122223333:role/rdk/AWSConfigRuleKMS-rdkLambdaRole-132PD765KU42Z",
            "Service": "config.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
    

  10. Choose Update Trust Policy.

To modify the AWS Config recorder role in the AWS Config console

  1. Open the AWS Config console.
  2. In the navigation pane, select Settings.
  3. Scroll down to AWS Config role*.
  4. Select the radio button next to Choose a role from your account.
  5. Choose the role you created for AWS Config.
  6. Choose Save.

After you have done this, you can use the AWS Config console to force an evaluation of your resources.

To force an AWS Config rule evaluation in the AWS Config console

  1. Open the AWS Config console.
  2. In the navigation pane, select Rules.
  3. Choose the AWSConfigRuleKMS rule (or what you named it).
  4. At the top right-hand side of the console, choose Re-evaluate. This can take a few minutes for results to populate.

To have the key policy correctly evaluated, you need to add permissions to the key policy for the root ARN or the AWS Lambda role ARN to perform kms:GetKeyPolicy.

To update key policies to include the root or AWS Lambda role ARN

  1. Open the AWS KMS console.
  2. In the navigation pane, select Customer managed keys.
  3. Choose the Alias or Key ID you want to modify.
  4. Under Key policy, choose Edit.
  5. Copy the following policy snippet and paste it at the bottom of your current policy. (Replace the values in italics with your actual values).
    
    {
        "Sid": "Enable IAM User Permissions",
        "Effect": "Allow",
        "Principal": {
            "AWS": [
                "arn:aws:iam::111122223333:root",
                "arn:aws:iam::111122223333:role/rdk/AWSConfigRuleKMS-rdkLambdaRole-132PD765KU42Z"
            ]
        },
        "Action": "kms:GetKeyPolicy",
        "Resource": "*"
    }
    

  6. Choose Save changes.

In the following example screenshot of my AWS Config console, you can see that I have quite a few CMKs that don’t meet my compliance policies. The key policy either does not have the required permissions defined in my scenarios, or does not have the permissions for the root ARN or my Lambda role to perform kms:GetKeyPolicy. Both can result in a noncompliant status.
 

Figure 3: Viewing noncompliant CMKs in the AWS Config console

For example, the key policy for alias/SecurityOtterCMK that follows is missing the condition for aws:userid and mixes permission sets, so this CMK is marked as Noncompliant.


{
    "Version": "2012-10-17",
    "Id": "key-consolepolicy-3",
    "Statement": [
        {
            "Sid": "Allow access for Key Administrators",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:root",
                    "arn:aws:iam::444455556666:root"
                ]
            },
            "Action": [
                "kms:Create*",
                "kms:Describe*",
                "kms:Enable*",
                "kms:List*",
                "kms:Put*",
                "kms:Update*",
                "kms:Revoke*",
                "kms:Disable*",
                "kms:Get*",
                "kms:Delete*",
                "kms:TagResource",
                "kms:UntagResource",
                "kms:ScheduleKeyDeletion",
                "kms:CancelKeyDeletion"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow use of the key",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/Admin"
            },
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow attachment of persistent resources",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/Admin"
            },
            "Action": [
                "kms:CreateGrant",
                "kms:ListGrants",
                "kms:RevokeGrant"
            ],
            "Resource": "*",
            "Condition": {
                "Bool": {
                    "kms:GrantIsForAWSResource": "true"
                }
            }
        }
    ]
}

On the other hand, the CMK with the alias alias/Otter-EBS is included in the whitelist defined in my Gherkin scenarios, so it shows as Compliant.
 

Figure 4: The CMK with a status of Compliant

You can now monitor your KMS keys for least privilege based on your required parameters.

Conclusion

In this post, I showed you how to use the AWS Config RDK to create, test, and deploy a custom AWS Config rule that scans your KMS key policies for statements that implement a least privilege model. I supplied all of the scripts necessary to create the rule and test it locally. With this AWS Config rule, you can use the AWS Config console to see whether your AWS KMS key policies are compliant with your company standards, and react accordingly.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS KMS forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tracy Pierce

Tracy is a Senior Consultant, Security Specialty, for Remote Consulting Services. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority and she shows this by improving processes, documentation, and building tutorials. She has her AS in Computer Security & Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls at request.

Building a Self-Service, Secure, & Continually Compliant Environment on AWS

Post Syndicated from Japjot Walia original https://aws.amazon.com/blogs/architecture/building-a-self-service-secure-continually-compliant-environment-on-aws/

Introduction

If you're part of an enterprise organization, especially in a highly regulated sector, you understand the struggle to innovate and drive change while maintaining your security and compliance posture. In particular, your banking customers' expectations and needs are changing, and there is a broad move away from traditional branch and ATM-based services towards digital engagement.

With this shift, customers now expect personalized product offerings and services tailored to their needs. To achieve this, a broad spectrum of analytics and machine learning (ML) capabilities is required. With security and compliance at the top of financial service customers' agendas, being able to rapidly innovate and stay secure is essential. To achieve exactly that, AWS Professional Services engaged with a major global systemically important bank (G-SIB) customer to help develop ML capabilities and implement a Defense in Depth (DiD) security strategy. This blog post provides an overview of this solution.

The machine learning solution

The following architecture diagram shows the ML solution we developed for a customer. This architecture is designed to achieve innovation, operational performance, and security performance in line with customer-defined control objectives, as well as meet the regulatory and compliance requirements of supervisory authorities.

Machine learning solution developed for customer

This solution is built and automated using AWS CloudFormation templates with pre-configured security guardrails, and is abstracted through AWS Service Catalog. AWS Service Catalog lets your users quickly deploy approved IT services while ensuring that governance, compliance, and security best practices are enforced during the provisioning of resources.
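
As a rough illustration, an end user (or a pipeline) can provision one of these approved products with a single AWS Service Catalog call. The product and provisioning artifact IDs below are placeholders, not values from the customer engagement.

aws servicecatalog provision-product \
    --product-id prod-examplep1234 \
    --provisioning-artifact-id pa-exampleart1234 \
    --provisioned-product-name secure-ml-workspace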

Further, it leverages Amazon SageMaker, Amazon Simple Storage Service (S3), and Amazon Relational Database Service (RDS) to facilitate the development of advanced ML models. Because security is paramount for this workload, data in S3 is encrypted using client-side encryption, and sensitive columns in RDS are protected with column-level encryption. Our customer also codified their security controls via AWS Config rules to achieve continual compliance.

Compute and network isolation

To enable our customer to rapidly explore new ML models while achieving the highest standards of security, separate VPCs were used to isolate infrastructure, with access controlled by security groups. Core to this solution is Amazon SageMaker, a fully managed service that provides the ability to rapidly build, train, and deploy ML models. Amazon SageMaker notebooks are managed Jupyter notebooks that you can use to:

  1. Prepare and process data
  2. Write code to train models
  3. Deploy models to SageMaker hosting
  4. Test or validate models

In our solution, notebooks run in an isolated VPC with no egress connectivity other than VPC endpoints, which enable private communication with AWS services. When combined with VPC endpoint policies, the endpoints also control which resources the notebooks can reach. In our solution, this is used to allow the SageMaker notebook to communicate only with resources owned by accounts in the customer's AWS organization, through the use of the aws:PrincipalOrgID condition key. AWS Organizations helps provide governance to meet strict compliance regulation, and you can use the aws:PrincipalOrgID condition key in your resource-based policies to easily restrict access to AWS Identity and Access Management (IAM) principals from accounts in your organization.
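
The following is a minimal sketch of what such an endpoint policy could look like, applied with the AWS CLI. The endpoint ID and organization ID are placeholders; a production policy would scope actions and resources more tightly.

cat > endpoint-policy.json <<'EOF'
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:PrincipalOrgID": "o-a1b2c3d4e5" }
      }
    }
  ]
}
EOF

aws ec2 modify-vpc-endpoint \
    --vpc-endpoint-id vpce-0123456789abcdef0 \
    --policy-document file://endpoint-policy.json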

Data protection

Amazon S3 is used to store training data, model artifacts, and other data sets. Our solution uses server-side encryption with customer master keys (CMKs) stored in AWS Key Management Service (SSE-KMS) to protect data at rest. SSE-KMS leverages KMS and uses an envelope encryption strategy with CMKs. Envelope encryption is the practice of encrypting data with a data key and then encrypting that data key using another key, the CMK. CMKs are created in KMS and never leave KMS unencrypted. This approach allows fine-grained control over access to the CMK and logs all access, and attempted access, to the key in AWS CloudTrail. In our solution, the age of the CMK is tracked by AWS Config and the key is rotated regularly. AWS Config enables you to assess, audit, and evaluate the configurations of deployed AWS resources by continuously monitoring and recording AWS resource configurations. This allows you to automate the evaluation of recorded configurations against desired configurations.
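
As a quick illustration of SSE-KMS in practice, the following AWS CLI command uploads an object and asks S3 to encrypt it server-side under a specific CMK. The bucket name and key alias are placeholders.

aws s3 cp model-artifacts/model.tar.gz \
    s3://example-ml-artifacts-bucket/models/model.tar.gz \
    --sse aws:kms \
    --sse-kms-key-id alias/example-ml-data-key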

Amazon S3 Block Public Access is also enabled at the account level to ensure that bucket policies and access control lists (ACLs) on existing and newly created buckets don't allow public access. Service control policies (SCPs) are used to prevent users from modifying this setting. AWS Config continually monitors S3 and remediates any attempt to make a bucket public.
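
Applying the account-wide setting is a single call; the account ID below is a placeholder.

aws s3control put-public-access-block \
    --account-id 111122223333 \
    --public-access-block-configuration \
        BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true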

Data in the solution is classified according to its sensitivity, which corresponds to the customer's data classification hierarchy. Classification in the solution is achieved through resource tagging, and tags are used in conjunction with AWS Config to ensure adherence to encryption, data retention, and archival requirements.
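
For example, tagging a bucket with its classification level lets an AWS Config rule check that the bucket's encryption and lifecycle settings match that level. The tag key and value here are illustrative, not the customer's actual taxonomy.

aws s3api put-bucket-tagging \
    --bucket example-ml-training-data \
    --tagging 'TagSet=[{Key=DataClassification,Value=Confidential}]'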

Continuous compliance

Our solution adopts a continuous compliance approach, whereby the compliance status of the architecture is continuously evaluated and auto-remediated if a configuration change attempts to violate the compliance posture. To achieve this, AWS Config and AWS Config rules are used to confirm that resources are configured in compliance with defined policies. AWS Lambda is used to implement a custom rule set that extends the rules included in AWS Config.
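
Registering one of those Lambda-backed custom rules can be done with a call like the following sketch. The rule name, Lambda ARN, and scope are placeholders; the Lambda function must also grant config.amazonaws.com permission to invoke it.

aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "custom-kms-least-privilege",
  "Scope": { "ComplianceResourceTypes": ["AWS::KMS::Key"] },
  "Source": {
    "Owner": "CUSTOM_LAMBDA",
    "SourceIdentifier": "arn:aws:lambda:us-east-1:111122223333:function:custom-kms-rule",
    "SourceDetails": [
      { "EventSource": "aws.config", "MessageType": "ConfigurationItemChangeNotification" }
    ]
  }
}'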

Data exfiltration prevention

In our solution, VPC Flow Logs are enabled on all accounts to record information about the IP traffic going to and from network interfaces in each VPC. This allows us to watch for abnormal and unexpected outbound connection requests, which could be an indication of attempts to exfiltrate data. Amazon GuardDuty analyzes VPC Flow Logs, AWS CloudTrail event logs, and DNS logs to identify unexpected and potentially malicious activity within the AWS environment. For example, GuardDuty can detect compromised Amazon Elastic Compute Cloud (EC2) instances communicating with known command-and-control servers.
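
Both capabilities can be switched on per account with a couple of calls; the VPC ID and bucket ARN below are placeholders.

# Publish flow logs for a VPC to an S3 bucket
aws ec2 create-flow-logs \
    --resource-type VPC \
    --resource-ids vpc-0123456789abcdef0 \
    --traffic-type ALL \
    --log-destination-type s3 \
    --log-destination arn:aws:s3:::example-flow-log-bucket

# Enable GuardDuty in the account
aws guardduty create-detector --enable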

Conclusion

Financial services customers are using AWS to develop machine learning and analytics solutions that solve key business challenges while meeting security and compliance needs. This post outlined how Amazon SageMaker, along with multiple security services (AWS Config, GuardDuty, KMS), enables building a self-service, secure, and continually compliant data science environment on AWS for a financial services use case.

 

Code signing using AWS Certificate Manager Private CA and AWS Key Management Service asymmetric keys

Post Syndicated from Ram Ramani original https://aws.amazon.com/blogs/security/code-signing-aws-certificate-manager-private-ca-aws-key-management-service-asymmetric-keys/

In this post, we show you how to combine the asymmetric signing feature of the AWS Key Management Service (AWS KMS) and code-signing certificates from the AWS Certificate Manager (ACM) Private Certificate Authority (PCA) service to digitally sign any binary data blob and then verify its identity and integrity. AWS KMS makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and with your applications running on AWS. ACM PCA provides you a highly available private certificate authority (CA) service without the upfront investment and ongoing maintenance costs of operating your own private CA. CA administrators can use ACM PCA to create a complete CA hierarchy, including online root and subordinate CAs, with no need for external CAs. Using ACM PCA, you can provision, rotate, and revoke certificates that are trusted within your organization.

Traditionally, a person’s signature helps to validate that the person signed an agreement and agreed to the terms. Signatures are a big part of our lives, from our driver’s licenses to our home mortgage documents. When a signature is requested, the person or entity requesting the signature needs to verify the validity of the signature and the integrity of the message being signed.

As the internet and cryptography research evolved, technologists found ways to carry the usefulness of signatures from the analog world to the digital world. In the digital world, public and private key cryptography and X.509 certificates can help with digital signing, verifying message integrity, and verifying signature authenticity. In simple terms, an entity—which could be a person, an organization, a device, or a server—can digitally sign a piece of data, and another entity can validate the authenticity of the signature and validate the integrity of the signed data. The data that’s being signed could be a document, a software package, or any other binary data blob.

To learn more about AWS KMS asymmetric keys and ACM PCA, see Digital signing with the new asymmetric keys feature of AWS KMS and How to host and manage an entire private certificate infrastructure in AWS.

We provide Java code snippets for each part of the process in the following steps. In addition, the complete Java code, along with the Maven build configuration file pom.xml, is available for download from this GitHub project. The steps below illustrate the different processes that are involved and the associated Java code snippets; however, you need to use the GitHub project to build and run the Java code successfully.

Let’s take a look at the steps.

1. Create an asymmetric key pair

For digital signing, you need a code-signing certificate and an asymmetric key pair. In this step, you create an asymmetric key pair using AWS KMS. The following code snippet in the main method within the file Runner.java creates the asymmetric key pair within KMS in your AWS account. An asymmetric KMS key with the alias CodeSigningCMK is created.


AsymmetricCMK codeSigningCMK = AsymmetricCMK.builder()
                .withAlias(CMK_ALIAS)
                .getOrCreate();

2. Create a code-signing certificate

To create a code-signing certificate, you need a private CA hierarchy, which you create within the ACM PCA service. This example uses a simple CA hierarchy of one root CA and one subordinate CA under the root, because the recommendation is not to use the root CA directly to issue code-signing certificates. These certificate authorities are needed to create the code-signing certificate. The common name for the root CA certificate is root CA, and the common name for the subordinate CA certificate is subordinate CA. The following code snippet in the main method within the file Runner.java creates the private CA hierarchy; an AWS CLI sketch of the root CA creation follows the snippet.


PrivateCA rootPrivateCA = PrivateCA.builder()
                .withCommonName(ROOT_COMMON_NAME)
                .withType(CertificateAuthorityType.ROOT)
                .getOrCreate();

PrivateCA subordinatePrivateCA = PrivateCA.builder()
        .withIssuer(rootPrivateCA)
        .withCommonName(SUBORDINATE_COMMON_NAME)
        .withType(CertificateAuthorityType.SUBORDINATE)
        .getOrCreate();
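
For reference, creating the root CA outside of the sample's Java helpers could look like the following CLI sketch (the key and signing algorithms are assumptions chosen for illustration). Note that after creation you still need to retrieve the CA's CSR, issue the root certificate, and import it before the CA can sign anything.

aws acm-pca create-certificate-authority \
    --certificate-authority-type ROOT \
    --certificate-authority-configuration '{
        "KeyAlgorithm": "RSA_2048",
        "SigningAlgorithm": "SHA256WITHRSA",
        "Subject": { "CommonName": "root CA" }
    }'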

3. Create a certificate signing request

In this step, you create a certificate signing request (CSR) for the code-signing certificate. The following code snippet in the main method within the file Runner.java is used to create the CSR. The END_ENTITY_COMMON_NAME refers to the common name parameter of the code signing certificate.


String codeSigningCSR = codeSigningCMK.generateCSR(END_ENTITY_COMMON_NAME);

4. Sign the CSR

In this step, the code-signing CSR is signed by the subordinate CA that was generated in step 2 to create the code-signing certificate.


GetCertificateResult codeSigningCertificate = subordinatePrivateCA.issueCodeSigningCertificate(codeSigningCSR);

Note: The code-signing certificate that’s generated contains the public key of the asymmetric key pair generated in step 1.

5. Create the custom signed object

The data to be signed is a simple string: “the data I want signed”. Its binary representation is hashed and digitally signed by the asymmetric KMS private key created in step 1, and a custom signed object that contains the signature and the code-signing certificate is created.

The following code snippet in the main method within the file Runner.java is used to create the custom signed object.


CustomCodeSigningObject customCodeSigningObject = CustomCodeSigningObject.builder()
                .withAsymmetricCMK(codeSigningCMK)
                .withDataBlob(TBS_DATA.getBytes(StandardCharsets.UTF_8))
                .withCertificate(codeSigningCertificate.getCertificate())
                .build();

6. Verify the signature

The custom signed object is verified for integrity, and the root CA certificate is used to verify the chain of trust to confirm non-repudiation of the identity that produced the digital signature.

The following code snippet in the main method within the file Runner.java is used for signature verification:


String rootCACertificate = rootPrivateCA.getCertificate();
String customCodeSigningObjectCertificateChain = codeSigningCertificate.getCertificate() + "\n" + codeSigningCertificate.getCertificateChain();

CustomCodeSigningObject.getInstance(customCodeSigningObject.toString())
        .validate(rootCACertificate, customCodeSigningObjectCertificateChain);

During this signature validation process, the validation method shown in the code above retrieves the public key portion of the AWS KMS asymmetric key pair generated in step 1 from the code-signing certificate. This process has the advantage that credentials to access AWS KMS aren’t needed during signature validation. Any entity that has the root CA certificate loaded in its trust store can verify the signature without needing access to the AWS KMS verify API.

Note: The implementation outlined in this post is an example. It doesn’t use a certificate trust store that’s either part of a browser or part of a file system within the resident operating system of a device or a server. The trust store is placed in an instance of a Java class object for the purpose of this post. If you are planning to use this code-signing example in a production system, you must change the implementation to use a trust store on the host. To do so, you can build and distribute a secure trust store that includes the root CA certificate.

Conclusion

In this post, we showed you how a binary data blob can be digitally signed using ACM PCA and AWS KMS and how the signature can be verified using only the root CA certificate. No secret information or credentials are required to verify the signature. You can use this method to build a custom code-signing solution to address your particular use cases. The GitHub repository provides the Java code and the maven pom.xml that you can use to build and try it yourself. The README.md file in the GitHub repository shows the instructions to execute the code.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Certificate Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ram Ramani

Ram is a Security Solutions Architect at AWS focusing on data protection. Ram works with customers across different industry verticals to provide them with solutions that help with protecting data at rest and in transit. In prior roles, Ram built ML algorithms for video quality optimization and worked on identity and access management solutions for financial services organizations.

Author

Kyle Schultheiss

Kyle is a Senior Software Engineer on the AWS Cryptography team. He has been working on the ACM Private Certificate Authority service since its inception in 2018. In prior roles, he contributed to other AWS services such as Amazon Virtual Private Cloud, Amazon EC2, and Amazon Route 53.

The importance of encryption and how AWS can help

Post Syndicated from Ken Beer original https://aws.amazon.com/blogs/security/importance-of-encryption-and-how-aws-can-help/

Encryption is a critical component of a defense-in-depth strategy, a security approach that layers a series of defensive mechanisms so that if one mechanism fails, there's at least one more still operating. As more organizations look to operate faster and at scale, they need ways to meet critical compliance requirements and improve data security. Encryption, when used correctly, can provide an additional layer of protection above basic access control.

How and why does encryption work?

Encryption works by using an algorithm with a key to convert data into unreadable data (ciphertext) that can only become readable again with the right key. For example, a simple phrase like “Hello World!” may look like “1c28df2b595b4e30b7b07500963dc7c” when encrypted. There are several different types of encryption algorithms, all using different types of keys. A strong encryption algorithm relies on mathematical properties to produce ciphertext that can’t be decrypted using any practically available amount of computing power without also having the necessary key. Therefore, protecting and managing the keys becomes a critical part of any encryption solution.

Encryption as part of your security strategy

An effective security strategy begins with stringent access control and continuous work to define the least privilege necessary for persons or systems accessing data. AWS requires that you manage your own access control policies, and also supports defense in depth to achieve the best possible data protection.

Encryption is a critical component of a defense-in-depth strategy because it can mitigate weaknesses in your primary access control mechanism. What if an access control mechanism fails and allows access to the raw data on disk or traveling along a network link? If the data is encrypted using a strong key, as long as the decryption key is not on the same system as your data, it is computationally infeasible for an attacker to decrypt your data. To show how infeasible it is, let’s consider the Advanced Encryption Standard (AES) with 256-bit keys (AES-256). It’s the strongest industry-adopted and government-approved algorithm for encrypting data. AES-256 is the technology we use to encrypt data in AWS, including Amazon Simple Storage Service (S3) server-side encryption. It would take at least a trillion years to break using current computing technology. Current research suggests that even the future availability of quantum-based computing won’t sufficiently reduce the time it would take to break AES encryption.

But what if you mistakenly create overly permissive access policies on your data? A well-designed encryption and key management system can also prevent this from becoming an issue, because it separates access to the decryption key from access to your data.

Requirements for an encryption solution

To get the most from an encryption solution, you need to think about two things:

  1. Protecting keys at rest: Are the systems using encryption keys secured so the keys can never be used outside the system? In addition, do these systems implement encryption algorithms correctly to produce strong ciphertexts that cannot be decrypted without access to the right keys?
  2. Independent key management: Is the authorization to use encryption independent from how access to the underlying data is controlled?

There are third-party solutions that you can bring to AWS to meet these requirements. However, these systems can be difficult and expensive to operate at scale. AWS offers a range of options to simplify encryption and key management.

Protecting keys at rest

When you use third-party key management solutions, it can be difficult to gauge the risk of your plaintext keys leaking and being used outside the solution. The keys have to be stored somewhere, and you can’t always know or audit all the ways those storage systems are secured from unauthorized access. The combination of technical complexity and the necessity of making the encryption usable without degrading performance or availability means that choosing and operating a key management solution can present difficult tradeoffs. The best practice to maximize key security is using a hardware security module (HSM). This is a specialized computing device that has several security controls built into it to prevent encryption keys from leaving the device in a way that could allow an adversary to access and use those keys.

One such control in modern HSMs is tamper response, in which the device detects physical or logical attempts to access plaintext keys without authorization, and destroys the keys before the attack succeeds. Because you can’t install and operate your own hardware in AWS datacenters, AWS offers two services using HSMs with tamper response to protect customers’ keys: AWS Key Management Service (KMS), which manages a fleet of HSMs on the customer’s behalf, and AWS CloudHSM, which gives customers the ability to manage their own HSMs. Each service can create keys on your behalf, or you can import keys from your on-premises systems to be used by each service.

The keys in AWS KMS or AWS CloudHSM can be used to encrypt data directly, or to protect other keys that are distributed to applications that directly encrypt data. The technique of encrypting encryption keys is called envelope encryption, and it enables encryption and decryption to happen on the computer where the plaintext customer data exists, rather than sending the data to the HSM each time. For very large data sets (e.g., a database), it's not practical to move gigabytes of data between the data set and the HSM for every read/write operation. Instead, envelope encryption allows a data encryption key to be distributed to the application when it's needed. The "master" keys in the HSM are used to encrypt a copy of the data key so the application can store the encrypted key alongside the data encrypted under that key. Once the application encrypts the data, the plaintext copy of the data key can be deleted from its memory. The only way for the data to be decrypted is if the encrypted data key, which is only a few hundred bytes in size, is sent back to the HSM and decrypted.
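
A minimal sketch of that flow with AWS KMS, assuming a CMK with the alias shown already exists: GenerateDataKey returns the data key in both plaintext and encrypted form, and Decrypt later recovers the plaintext key from the stored ciphertext. For brevity, this sketch captures only the encrypted copy; a real application would also read the Plaintext field from the same response and keep it only in memory.

# Request a 256-bit data key under the CMK and store the encrypted copy alongside your data
aws kms generate-data-key \
    --key-id alias/example-envelope-cmk \
    --key-spec AES_256 \
    --output text \
    --query CiphertextBlob | base64 --decode > encrypted-data-key.bin

# Later, send the small encrypted data key back to AWS KMS to recover the plaintext key
aws kms decrypt \
    --ciphertext-blob fileb://encrypted-data-key.bin \
    --output text \
    --query Plaintext | base64 --decode > plaintext-data-key.bin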

The process of envelope encryption is used in all AWS services in which data is encrypted on a customer's behalf (which is known as server-side encryption) to minimize performance degradation. If you want to encrypt data in your own applications (client-side encryption), you're encouraged to use envelope encryption with AWS KMS or AWS CloudHSM. Both services offer client libraries and SDKs that let you add encryption functionality to your application code and use the cryptographic functionality of each service. The AWS Encryption SDK is an example of a tool that can be used anywhere, not just in applications running in AWS.

Because implementing encryption algorithms and HSMs is critical to get right, all vendors of HSMs should have their products validated by a trusted third party. HSMs in both AWS KMS and AWS CloudHSM are validated under the National Institute of Standards and Technology’s FIPS 140-2 program, the standard for evaluating cryptographic modules. This validates the secure design and implementation of cryptographic modules, including functions related to ports and interfaces, authentication mechanisms, physical security and tamper response, operational environments, cryptographic key management, and electromagnetic interference/electromagnetic compatibility (EMI/EMC). Encryption using a FIPS 140-2 validated cryptographic module is often a requirement for other security-related compliance schemes like FedRamp and HIPAA-HITECH in the U.S., or the international payment card industry standard (PCI-DSS).

Independent key management

While AWS KMS and AWS CloudHSM can protect plaintext master keys on your behalf, you are still responsible for managing access controls to determine who can cause which encryption keys to be used under which conditions. One advantage of using AWS KMS is that the policy language you use to define access controls on keys is the same one you use to define access to all other AWS resources. Note that the language is the same, not the actual authorization controls. You need a mechanism for managing access to keys that is different from the one you use for managing access to your data. AWS KMS provides that mechanism by allowing you to assign one set of administrators who can only manage keys and a different set of administrators who can only manage access to the underlying encrypted data. Configuring your key management process in this way helps provide separation of duties you need to avoid accidentally escalating privilege to decrypt data to unauthorized users. For even further separation of control, AWS CloudHSM offers an independent policy mechanism to define access to keys.

Even with the ability to separate key management from data management, you can still verify that you have configured access to encryption keys correctly. AWS KMS is integrated with AWS CloudTrail so you can audit who used which keys, for which resources, and when. This provides granular vision into your encryption management processes, which is typically much more in-depth than on-premises audit mechanisms. Audit events from AWS CloudHSM can be sent to Amazon CloudWatch, the AWS service for monitoring and alarming third-party solutions you operate in AWS.

Encrypting data at rest and in motion

All AWS services that handle customer data encrypt data in motion and provide options to encrypt data at rest. All AWS services that offer encryption at rest using AWS KMS or AWS CloudHSM use AES-256. None of these services store plaintext encryption keys at rest — that’s a function that only AWS KMS and AWS CloudHSM may perform using their FIPS 140-2 validated HSMs. This architecture helps minimize the unauthorized use of keys.

When encrypting data in motion, AWS services use the Transport Layer Security (TLS) protocol to provide encryption between your application and the AWS service. Most commercial solutions use an open source project called OpenSSL for their TLS needs. OpenSSL has roughly 500,000 lines of code with at least 70,000 of those implementing TLS. The code base is large, complex, and difficult to audit. Moreover, when OpenSSL has bugs, the global developer community is challenged to not only fix and test the changes, but also to ensure that the resulting fixes themselves do not introduce new flaws.

AWS’s response to challenges with the TLS implementation in OpenSSL was to develop our own implementation of TLS, known as s2n, or signal to noise. We released s2n in June 2015, which we designed to be small and fast. The goal of s2n is to provide you with network encryption that is easier to understand and that is fully auditable. We released and licensed it under the Apache 2.0 license and hosted it on GitHub.

We also designed s2n to be analyzed using automated reasoning to test for safety and correctness using mathematical logic. Through this process, known as formal methods, we verify the correctness of the s2n code base every time we change the code. We also automated these mathematical proofs, which we regularly re-run to ensure the desired security properties are unchanged with new releases of the code. Automated mathematical proofs of correctness are an emerging trend in the security industry, and AWS uses this approach for a wide variety of our mission-critical software.

Implementing TLS requires using encryption keys and digital certificates that assert the ownership of those keys. AWS Certificate Manager and AWS Private Certificate Authority are two services that can simplify the issuance and rotation of digital certificates across the parts of your infrastructure that need to offer TLS endpoints. Both services use a combination of AWS KMS and AWS CloudHSM to generate and/or protect the keys used in the digital certificates they issue.

Summary

At AWS, security is our top priority and we aim to make it as easy as possible for you to use encryption to protect your data above and beyond basic access control. By building and supporting encryption tools that work both on and off the cloud, we help you secure your data and ensure compliance across your entire environment. We put security at the center of everything we do to make sure that you can protect your data using best-of-breed security technology in a cost-effective way.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS KMS forum or the AWS CloudHSM forum, or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ken Beer

Ken is the General Manager of the AWS Key Management Service. Ken has worked in identity and access management, encryption, and key management for over 7 years at AWS. Before joining AWS, Ken was in charge of the network security business at Trend Micro. Before Trend Micro, he was at Tumbleweed Communications. Ken has spoken on a variety of security topics at events such as the RSA Conference, the DoD PKI User’s Forum, and AWS re:Invent.

How to verify AWS KMS asymmetric key signatures locally with OpenSSL

Post Syndicated from J.D. Bean original https://aws.amazon.com/blogs/security/how-to-verify-aws-kms-asymmetric-key-signatures-locally-with-openssl/

In this post, I demonstrate a sample workflow for generating a digital signature within AWS Key Management Service (KMS) and then verifying that signature on a client machine using OpenSSL.

The support for asymmetric keys in AWS KMS has exciting use cases. The ability to create, manage, and use public and private key pairs with KMS enables you to perform digital signing operations using RSA and Elliptic Curve (ECC) keys. You can also perform public key encryption or decryption operations using RSA keys.

For example, you can use ECC or RSA private keys to generate digital signatures, and third parties can perform verification outside of AWS KMS using the corresponding public keys. Similarly, AWS customers and third parties can perform unauthenticated encryption outside of AWS KMS using an RSA public key, while authenticated decryption is enforced within AWS KMS using the corresponding private key.

The commands found in this tutorial were tested using Amazon Linux 2. Other Linux, macOS, or Unix operating systems are likely to work with minimal modification but have not been tested.

Creating an asymmetric signing key pair

To start, create an asymmetric customer master key (CMK) using the AWS Command Line Interface (AWS CLI) example command below. This generates an RSA 4096 key for signature creation and verification using AWS KMS.


aws kms create-key --customer-master-key-spec RSA_4096 \
 --key-usage SIGN_VERIFY \
 --description "Sample Digital Signature Key Pair"

If successful, this command returns a KeyMetadata object. Take note of the KeyID value. As a best practice, I recommend assigning an alias for your key. The command below assigns an alias of sample-sign-verify-key to your newly created CMK (replace the target-key-id value of <1234abcd-12ab-34cd-56ef-1234567890ab> with your KeyID).


aws kms create-alias \
    --alias-name alias/sample-sign-verify-key \
    --target-key-id <1234abcd-12ab-34cd-56ef-1234567890ab> 

Creating signer and verifier roles

For the next phase of this tutorial, you must create two AWS principals: a signer role and a verifier role. First, navigate to the AWS Identity and Access Management (IAM) Create role page in the console and choose the option that allows entities in a specified AWS account to assume the role. Enter your Account ID and select Next: Permissions, as shown in Figure 1 below.

Figure 1: Enter your Account ID to begin creating a role in AWS IAM

Select Next through the next two screens. On the fourth and final screen, enter a Role name of SignerRole and Role description, as shown in Figure 2 below.

Figure 2: Enter a role name and description to finish creating the role

Select Create role to finish creating the signer role. To create the verifier role, you must perform this same process one more time. On the final screen, provide the name OfflineVerifierRole for the role instead.
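
If you'd rather script the role creation, a minimal sketch using the AWS CLI follows. The trust policy mirrors the console option chosen above; replace the account ID placeholder with your own.

cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<111122223333>:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role --role-name SignerRole \
    --assume-role-policy-document file://trust-policy.json
aws iam create-role --role-name OfflineVerifierRole \
    --assume-role-policy-document file://trust-policy.json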

Configuring key policy permissions

A best practice is to adhere to the principle of least privilege and provide each AWS principal with the minimal permissions necessary to perform its tasks. The signer and verifier roles that you created currently have no permissions in your account. The signer principal must have permission to be able to create digital signatures in KMS for files using the public portion of your CMK. The verifier principal must have permission to download the plaintext public key portion of your CMK.

To provide access control permissions for KMS actions to your AWS principals, attach a key policy to the CMK. The IAM role for the signer principal (SignerRole) is given kms:Sign permission in the CMK key policy. The IAM role for the verifier principal (OfflineVerifierRole) is given kms:GetPublicKey permission in the CMK key policy.

Navigate to the KMS page in the AWS Console and select customer-managed keys. Next, select your CMK, scroll down to the key policy section, and select edit.

To allow your signer principal to use the CMK for digital signing, append the following stanza to the key policy (replace the account ID value of <111122223333> with your own):


{
    "Sid": "Allow use of the key pair for digital signing",
    "Effect": "Allow",
    "Principal": {"AWS":"arn:aws:iam::<111122223333>:role/SignerRole"},
    "Action": "kms:Sign",
    "Resource": "*"
}

To allow your verifier principal to download the CMK public key, append the following stanza to the key policy (replace the account ID value of <111122223333> with your own):


{
    "Sid": "Allow plaintext download of the public portion of the key pair",
    "Effect": "Allow",
    "Principal": {"AWS":"arn:aws:iam::<111122223333>:role/OfflineVerifierRole"},
    "Action": "kms:GetPublicKey",
    "Resource": "*"
}

You can permit the verifier to perform digital signature verification using KMS by granting the kms:Verify action. However, the kms:GetPublicKey action enables the verifier principal to download the CMK public key in plaintext to verify the signature in a local environment without access to AWS KMS.

Although you configured the policy for a verifier principal within your own account, you can also configure the policy for a verifier principal in a separate account to validate signatures generated by your CMK.

Because AWS KMS enables the verifier principal to download the CMK public key in plaintext, you can also use the verifier principal you configured to download the public key and distribute it to third parties, for example via an Amazon S3 presigned URL, whether or not those third parties have AWS security credentials.

Signing a message

To demonstrate signature verification, you need KMS to sign a file with your CMK using the KMS Sign API. KMS signatures can be generated directly for messages of up to 4096 bytes. To sign a larger message, you can generate a hash digest of the message, and then provide the hash digest to KMS for signing.
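
For a file larger than 4096 bytes, a sketch of the digest-first flow looks like the following; it assumes the same key alias and signing algorithm used later in this walkthrough, and the local digest algorithm (SHA-512 here) must match the hash specified by the signing algorithm.

# Hash the large file locally
openssl dgst -sha512 -binary -out LargeFile.digest LargeFile.bin

# Ask AWS KMS to sign the digest rather than the raw message
aws kms sign \
    --key-id alias/sample-sign-verify-key \
    --message-type DIGEST \
    --signing-algorithm RSASSA_PKCS1_V1_5_SHA_512 \
    --message fileb://LargeFile.digest \
    --output text \
    --query Signature | base64 --decode > LargeFile.sig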

For messages up to 4096 bytes, you first create a text file containing a short message of your choosing, which we refer to as SampleText.txt. To sign the file, you must assume your signer role. To do so, execute the following command, but replace the account ID value of <111122223333> with your own:


aws sts assume-role \
--role-arn arn:aws:iam::<111122223333>:role/SignerRole \
--role-session-name AWSCLI-Session

The return values provide an access key ID, secret key, and session token. Substitute these values into their respective fields in the following command and execute it:


export AWS_ACCESS_KEY_ID=<ExampleAccessKeyID1>
export AWS_SECRET_ACCESS_KEY=<ExampleSecretKey1>
export AWS_SESSION_TOKEN=<ExampleSessionToken1>

Then confirm that you have successfully assumed the signer role by issuing:


aws sts get-caller-identity

If the output of this command contains the text assumed-role/SignerRole then you have successfully assumed the signer role and you may sign your message file with:


aws kms sign \
    --key-id alias/sample-sign-verify-key \
    --message-type RAW \
    --signing-algorithm RSASSA_PKCS1_V1_5_SHA_512 \
    --message fileb://SampleText.txt \
    --output text \
    --query Signature | base64 --decode > SampleText.sig

To indicate that the file is a message and not a message digest, the command passes a MessageType parameter of RAW. The command then decodes the signature and writes it to a local disk as SampleText.sig. This file is important later when you want to verify the signature entirely client-side without calling AWS KMS.

Finally, to drop your assumed role, you may issue:


unset \
	AWS_ACCESS_KEY_ID \
	AWS_SECRET_ACCESS_KEY \
	AWS_SESSION_TOKEN

followed by:


aws sts get-caller-identity

Verifying a signature client-side

Assume your verifier role using the same process as before and issue the following command to fetch a copy of the public portion of your CMK from AWS KMS:


aws kms get-public-key \
 --key-id alias/sample-sign-verify-key \
 --output text \
 --query PublicKey | base64 --decode > SamplePublicKey.der

This command writes the DER-encoded X.509 public key to disk as SamplePublicKey.der. You can convert this DER-encoded key to a PEM-encoded key by running the following command:


openssl rsa -pubin -inform DER \
    -outform PEM -in SamplePublicKey.der \
    -pubout -out SamplePublicKey.pem

You now have the following three files:

  1. A PEM file, SamplePublicKey.pem containing the CMK public key
  2. The original SampleText.txt file
  3. The SampleText.sig file that you generated in KMS using the CMK private key

With these three inputs, you can now verify the signature entirely client-side without calling AWS KMS. To verify the signature, run the following command:


openssl dgst -sha512 \
    -verify SamplePublicKey.pem \
    -signature SampleText.sig \
    SampleText.txt

If you performed all of the steps correctly, you see the following message on your console:


 Verified OK

This successful verification provides a high degree of confidence that the message was endorsed by a principal with permission to sign using your KMS CMK (authentication), in this case your signer role principal. It also verifies that the message has not been modified in transit (integrity).

To demonstrate this, update your SampleText.txt file by adding new characters to the file. If you rerun the command, you see the following message:


Verification Failure

Summary

In this tutorial, you verified the authenticity of a digital signature generated by a KMS asymmetric key pair on your local machine. You did this by using OpenSSL and a plaintext public key exported from KMS.

You created an asymmetric CMK in KMS and configured key policy permissions for your signer and verifier principals. You then digitally signed a message in KMS using the private portion of your asymmetric CMK. Then, you exported a copy of the public portion of your asymmetric key pair from KMS in plaintext. With this copy of your public key, you were able to perform signature verification using OpenSSL entirely in your local environment. A similar pattern can be used with an asymmetric CMK configured with a KeyUsage value of ENCRYPT_DECRYPT. This pattern can be used to perform encryption operations in a local environment and decryption operations in AWS KMS.

To learn more about the asymmetric keys feature of KMS, please read the KMS Developer Guide. If you’re considering implementing an architecture involving downloading public keys, be sure to refer to the KMS Developer Guide for Special Considerations for Downloading Public Keys. If you have questions about the asymmetric keys feature, please start a new thread on the AWS KMS Discussion Forum.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

J.D. Bean

J.D. Bean is a Solutions Architect at Amazon Web Services on the World Wide Public Sector Federal Financials team based out of New York City. He is passionate about his work enabling AWS customers’ successful cloud journeys, and his interests include security, privacy, and compliance. J.D. holds a Bachelor of Arts from The George Washington University and a Juris Doctor from New York University School of Law. In his spare time J.D. enjoys spending time with his family, yoga, and experimenting in his kitchen.

Round 2 Hybrid Post-Quantum TLS Benchmarks

Post Syndicated from Alex Weibel original https://aws.amazon.com/blogs/security/round-2-hybrid-post-quantum-tls-benchmarks/

AWS Cryptography has completed benchmarks of Round 2 Versions of the Bit Flipping Key Encapsulation (BIKE) and Supersingular Isogeny Key Encapsulation (SIKE) hybrid post-quantum Transport Layer Security (TLS) Algorithms. Both of these algorithms have been submitted to the National Institute of Standards and Technology (NIST) as part of NIST’s Post-Quantum Cryptography standardization process.

In the first hybrid post-quantum TLS blog, we announced that AWS Key Management Service (KMS) had launched support for hybrid post-quantum TLS 1.2 using Round 1 versions of BIKE and SIKE. In this blog, we are announcing AWS Cryptography’s benchmark results of using Round 2 versions of BIKE and SIKE with hybrid post-quantum TLS 1.2 against an HTTP webservice. Round 2 versions of BIKE and SIKE include performance improvements, parameter tuning, and algorithm updates in response to NIST’s comments on Round 1 versions. I’ll give a refresher on hybrid post-quantum TLS 1.2, go over our Round 2 hybrid post-quantum TLS 1.2 benchmark results, and then describe our benchmarking methodology.

This blog post is intended to inform software developers, AWS customers, and cryptographic researchers about the potential upcoming performance differences between classical and hybrid post-quantum TLS.

Refresher on Hybrid Post-Quantum TLS 1.2

Some of this section is repeated from the previous hybrid post-quantum TLS 1.2 launch announcement for KMS. If you are already familiar with hybrid post-quantum TLS, feel free to skip to the Benchmark Results section.

What is Hybrid Post-Quantum TLS 1.2?

Hybrid post-quantum TLS 1.2 is a proposed extension to the TLS 1.2 Protocol implemented by Amazon’s open source TLS library s2n that provides the security protections of both the classical and post-quantum schemes. It does this by performing two independent key exchanges (one classical and one post-quantum), and then cryptographically combining both keys into a single TLS master secret.

Why is Post-Quantum TLS Important?

Hybrid post-quantum TLS allows connections to remain secure even if one of the key exchanges (either classical or post-quantum) performed during the TLS Handshake is compromised in the future. For example, if a sufficiently large-scale quantum computer were to be built, it could break the current classical public-key cryptography that is used for key exchange in every TLS connection today. Encrypted TLS traffic recorded today could be decrypted in the future with a large-scale quantum computer if post-quantum TLS is not used to protect it.

Round 2 Hybrid Post-Quantum TLS Benchmark Results

Figure 2: Latency in relation to HTTP request count for four key exchange algorithms

Key Exchange Algorithm  | Server PQ Implementation | TLS Handshake + 1 HTTP Request | TLS Handshake + 2 HTTP Requests | TLS Handshake + 10 HTTP Requests | TLS Handshake + 25 HTTP Requests
ECDHE Only              | N/A                      | 10.8 ms                        | 15.1 ms                         | 52.6 ms                          | 124.2 ms
ECDHE + BIKE1-CCA-L1-R2 | C                        | 19.9 ms                        | 24.4 ms                         | 61.4 ms                          | 133.2 ms
ECDHE + SIKE-P434-R2    | C                        | 169.6 ms                       | 180.3 ms                        | 219.1 ms                         | 288.1 ms
ECDHE + SIKE-P434-R2    | x86-64 Assembly          | 20.1 ms                        | 24.5 ms                         | 62.0 ms                          | 133.3 ms

Table 1 shows the time (in milliseconds) that a client and server in the same region take to complete a TCP Handshake, a TLS Handshake, and complete varying numbers of HTTP Requests sent to an HTTP web service running on an i3en.12xlarge host.

Key Exchange Algorithm  | Client Hello | Server Key Exchange | Client Key Exchange | Other | TLS Handshake Total
ECDHE Only              | 218          | 338                 | 75                  | 2430  | 3061
ECDHE + BIKE1-CCA-L1-R2 | 220          | 3288                | 3023                | 2430  | 8961
ECDHE + SIKE-P434-R2    | 214          | 672                 | 423                 | 2430  | 3739

Table 2 shows the amount of data (in bytes) used by different messages in the TLS Handshake for each Key Exchange algorithm.

                    | 1 HTTP Request | 2 HTTP Requests | 10 HTTP Requests | 25 HTTP Requests
HTTP Request Bytes  | 878            | 1,761           | 8,825            | 22,070
HTTP Response Bytes | 698            | 1,377           | 6,809            | 16,994
Total HTTP Bytes    | 1,576          | 3,138           | 15,634           | 39,064

Table 3 shows the amount of data (in bytes) sent and received through each TLS connection for varying numbers of HTTP requests.

Benchmark Results Analysis

In general, we find that the major trade off between BIKE and SIKE is data usage versus processing time, with BIKE needing to send more bytes but requiring less time processing them, and SIKE making the opposite trade off of needing to send fewer bytes but requiring more time processing them. At the time of integration for our benchmarks, an x86-64 assembly optimized implementation of BIKE1-CCA-L1-R2 was not available in s2n.

Our results show that when only a single HTTP request is sent, completing a BIKE1-CCA-L1-R2 hybrid TLS 1.2 handshake takes approximately 84% more time compared to a non-hybrid TLS connection, and completing an x86-64 assembly optimized SIKE-P434-R2 hybrid TLS 1.2 handshake takes approximately 86% more time than non-hybrid. However, at 25 HTTP Requests per TLS connection, when using the fastest available implementation for both BIKE and SIKE, the increased TLS Handshake latency is amortized, and only 7% more total time is needed for both BIKE and SIKE compared to a classical TLS connection.

Our results also show that BIKE1-CCA-L1-R2 hybrid TLS Handshakes used 5900 more bytes than a classical TLS Handshake, while SIKE-P434-R2 hybrid TLS Handshakes used 678 more bytes than classical TLS.

In the AWS EC2 network, using modern x86-64 CPUs with the fastest available algorithm implementations, we found that BIKE and SIKE performed similarly, with a maximum latency difference of only 0.6 milliseconds, and BIKE was the faster of the two in every benchmark. However, when compared to SIKE's C implementation, which would be used on hosts without the ADX and MULX x86-64 instructions used by SIKE's assembly implementation, BIKE performed significantly better, seeing a maximum improvement of 157 milliseconds over SIKE.

Hybrid Post-Quantum TLS Benchmark Details and Methodology

Hybrid Post-Quantum TLS Client

Figure 3: Architecture diagram of the AWS SDK Java Client using Java Native Interface (JNI) to communicate with the native AWS Common Runtime (CRT)

Our post-quantum TLS Client is using the aws-crt-dev-preview branch of the AWS SDK Java v2 Client, that has Java Native Interface Bindings to the AWS Common Runtime (AWS CRT) written in C. The AWS Common Runtime uses s2n for TLS negotiation on Linux platforms.

Our client was a single EC2 i3en.6xlarge host, using v0.5.1 of the AWS Common Runtime (AWS CRT) Java Bindings, with commit f3abfaba of s2n and used the x86-64 Assembly implementation for all SIKE-P434-R2 benchmarks.

Hybrid Post-Quantum TLS Server

Our server was a single EC2 i3en.12xlarge host running a RESTful HTTP web service that used s2n to terminate TLS connections. In order to measure the latency of the SIKE-P434-R2 C implementation on these hosts, we used an s2n compile-time flag to build a second version of s2n with SIKE's x86-64 assembly optimization disabled, and reran our benchmarks with that version.

We chose i3en.12xlarge as our host type because it is optimized for high IO usage, provides high levels of network bandwidth, has a high number of vCPUs (typical for many web service endpoints), and has a modern x86-64 CPU with the ADX and MULX instructions necessary to use the high-performance Round 2 SIKE x86-64 assembly implementation. Additional TLS Handshake benchmarks performed on other modern types of EC2 hosts, such as the C5 and M5 families of EC2 instances, showed similar latency results to those generated on the i3en family of EC2 instances.

Post-Quantum Algorithm Implementation Details

The implementations of the post-quantum algorithms used in these benchmarks can be found in the pq-crypto directory of the s2n GitHub Repository. Our Round 2 BIKE implementation uses portable optimized C code, and our Round 2 SIKE implementation uses an optimized implementation in x86-64 assembly when available, and falls back to a portable optimized C implementation otherwise.

Key Exchange Algorithm  | s2n Client Cipher Preference      | s2n Server Cipher Preference | Negotiated Cipher
ECDHE Only              | ELBSecurityPolicy-TLS-1-1-2017-01 | KMS-PQ-TLS-1-0-2020-02       | ECDHE-RSA-AES256-GCM-SHA384
ECDHE + BIKE1-CCA-L1-R2 | KMS-PQ-TLS-1-0-2020-02            | KMS-PQ-TLS-1-0-2020-02       | ECDHE-BIKE-RSA-AES256-GCM-SHA384
ECDHE + SIKE-P434-R2    | PQ-SIKE-TEST-TLS-1-0-2020-02      | KMS-PQ-TLS-1-0-2020-02       | ECDHE-SIKE-RSA-AES256-GCM-SHA384

Table 4 shows the Clients and Servers TLS Cipher Config name used in order to negotiate each Key Exchange Algorithm.

Hybrid Post-Quantum TLS Benchmark Methodology

Figure 4: Benchmarking Methodology Client/Server Architecture Diagram

Our benchmarks were run with a single client host connecting to a single host running an HTTP web service in a different Availability Zone within the same AWS Region (us-east-1), through a TCP load balancer.

We chose to include varying numbers of HTTP requests in our latency benchmarks, rather than TLS Handshakes alone, because customers are unlikely to establish a secure TLS connection and let the connection sit idle performing no work. Customers use TLS connections in order to send and receive data securely, and HTTP web services are one of the most common types of data being secured by TLS. We also chose to place our EC2 server behind a TCP Load Balancer to more closely approximate how an HTTP web service would be deployed in a typical setup.

Latency was measured at the client in Java, starting from before a TCP connection was established until after the final HTTP response was received, and includes all network transfer time. All connections used RSA certificate authentication with a 2048-bit key, and ECDHE key exchange used the secp256r1 curve. All latency values listed in Table 1 above were calculated from the median value (50th percentile) of 60 minutes of continuous single-threaded measurements between the EC2 client and server.

More Info

If you're interested in learning more about post-quantum cryptography, check out the following links:

Conclusion

In this blog post, I gave a refresher on hybrid post-quantum TLS, went over our hybrid post-quantum TLS 1.2 benchmark results, and described our benchmarking methodology. Our benchmark results found that BIKE and SIKE performed similarly when using s2n's fastest available implementations on modern CPUs, but that BIKE performed better than SIKE when both were using their generic C implementations.

If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Alex Weibel

Alex is a Senior Software Development Engineer on the AWS Crypto Algorithms team. He’s one of the maintainers for Amazon’s TLS Library s2n. Previously, Alex worked on TLS termination and request proxying for S3 and the Elastic Load Balancing Service developing new features for customers. Alex holds a Bachelor of Science degree in Computer Science from the University of Texas at Austin.

Digital signing with the new asymmetric keys feature of AWS KMS

Post Syndicated from Raj Copparapu original https://aws.amazon.com/blogs/security/digital-signing-asymmetric-keys-aws-kms/

AWS Key Management Service (AWS KMS) now supports asymmetric keys. You can create, manage, and use public/private key pairs to protect your application data using the new APIs via the AWS SDK. Similar to the symmetric key features we've been offering, asymmetric keys can be generated as customer master keys (CMKs), where the private portion never leaves the service, or as data keys, where the private portion is returned to your calling application encrypted under a CMK. The private portions of asymmetric CMKs are used in AWS KMS hardware security modules (HSMs) designed so that no one, including AWS employees, can access the plaintext key material. AWS KMS supports the following asymmetric key types: RSA 2048, RSA 3072, RSA 4096, ECC NIST P-256, ECC NIST P-384, ECC NIST P-521, and ECC SECG P-256k1.

We’ve talked with customers and know that one popular use case for asymmetric keys is digital signing. In this post, I will walk you through an example of signing and verifying files using some of the new APIs in AWS KMS.

Background

A common way to ensure the integrity of a digital message as it passes between systems is to use a digital signature. A sender uses a secret along with cryptographic algorithms to create a data structure that is appended to the original message. A recipient with access to that secret can cryptographically verify that the message hasn’t been modified since the sender signed it. In cases where the recipient doesn’t have access to the same secret used by the sender for verification, a digital signing scheme that uses asymmetric keys is useful. The sender can make the public portion of the key available to any recipient to verify the signature, but the sender retains control over creating signatures using the private portion of the key. Asymmetric keys are used for digital signature applications such as trusted source code, authentication/authorization tokens, document e-signing, e-commerce transactions, and secure messaging. AWS KMS supports what are known as raw digital signatures, where there is no identity information about the signer embedded in the signature object. A common way to attach identity information to a digital signature is to use digital certificates. If your application relies on digital certificates for signing and signature verification, we recommend you look at AWS Certificate Manager and Private Certificate Authority. These services allow you to programmatically create and deploy certificates with keys to your applications for digital signing operations. A common application of digital certificates is TLS termination on a web server to secure data in transit.

Signing and verifying files with AWS KMS

Assume that you have an application A that sends a file to application B in your AWS account. You want the file to be digitally signed so that the receiving application B can verify it hasn’t been tampered with in transit. You also want to make sure only application A can digitally sign files using the key because you don’t want application B to receive a file thinking it’s from application A when it was really from a different sender that had access to the signing key. Because AWS KMS is designed so that the private portion of the asymmetric key pair used for signing cannot be used outside the service or by unauthenticated users, you’re able to define and enforce policies so that only application A can sign with the key.

To start, application A will submit either the file itself or a digest of the file to the AWS KMS Sign API under an asymmetric CMK. If the file is less than 4KB, AWS KMS will compute a digest for you as a part of the signing operation. If the file is larger than 4KB, you must send only the digest you created locally and you must tell AWS KMS that you’re passing a digest in the MessageType parameter of the request. You can use any of several hashing functions in your local environment to create a digest of the file, but be aware that the receiving application B will need to compute the digest using the same hash function in order to verify the integrity of the file. In my example, I’m using SHA-256 as the hash function. Once the digest is created, AWS KMS uses the private portion of the asymmetric CMK and the signing algorithm specified in the API request to generate a signature over the digest. The result is a binary data object, which we’ll refer to as “the signature” throughout this post.
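
If you prefer to call the Sign API from code rather than the CLI, here’s a minimal sketch of this flow in Java (AWS SDK for Java v2), assuming application A runs with the SignRole credentials. The file name, output path, and key ID are placeholders, and error handling is omitted for brevity.

import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.MessageType;
import software.amazon.awssdk.services.kms.model.SignRequest;
import software.amazon.awssdk.services.kms.model.SignResponse;
import software.amazon.awssdk.services.kms.model.SigningAlgorithmSpec;

public class SignExample {
    public static void main(String[] args) throws Exception {
        // Placeholder file name; replace with the file application A wants to sign.
        byte[] fileBytes = Files.readAllBytes(Paths.get("example-file.bin"));

        // Compute the SHA-256 digest locally; application B must use the same hash function.
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(fileBytes);

        try (KmsClient kms = KmsClient.create()) {
            SignRequest request = SignRequest.builder()
                    .keyId("1234abcd-12ab-34cd-56ef-1234567890ab")   // placeholder keyId
                    .message(SdkBytes.fromByteArray(digest))
                    .messageType(MessageType.DIGEST)                 // tell KMS we're sending a digest, not the raw file
                    .signingAlgorithm(SigningAlgorithmSpec.ECDSA_SHA_256)
                    .build();

            SignResponse response = kms.sign(request);

            // Persist the signature alongside the file so application B can verify it later.
            Files.write(Paths.get("example-file.sig"), response.signature().asByteArray());
        }
    }
}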

Once application B receives the file with the signature, it must create a digest of the file. It then passes this newly generated digest, the signature object, the signing algorithm used, and the CMK keyId to the Verify API. AWS KMS uses the corresponding public key of the CMK with the signing algorithm specified in the request to verify the signature. Instead of submitting the signature to the Verify API, application B could verify the signature locally by acquiring the public key. This might be an attractive option if application B didn’t have a way to acquire valid AWS credentials to make a request of AWS KMS. However, this method requires application B to have access to the necessary cryptographic algorithms and to have previously received the public portion of the asymmetric CMK. In my example, application B is running in the same account as application A, so it can acquire AWS credentials to make the Verify API request. I’ll describe how to verify signatures using both methods in a bit more detail later in the post.

Creating signing keys and setting up key policy permissions

To start, you need to create an asymmetric CMK. When calling the CreateKey API, you’ll pass one of the asymmetric values for the CustomerMasterKeySpec parameter. In my example, I’m choosing a key spec of ECC_NIST_P256 because keys used with elliptic curve algorithms tend to be more efficient than those used with RSA-based algorithms. Note that AWS KMS requires the signing algorithm to match the key spec, so an ECC_NIST_P256 key is used with the ECDSA_SHA_256 signing algorithm shown throughout this example.
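
For reference, here’s a hedged sketch of the same CreateKey call in Java (AWS SDK for Java v2); the description is a placeholder, and I’m using the CustomerMasterKeySpec parameter name that this feature launched with.

import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.CreateKeyRequest;
import software.amazon.awssdk.services.kms.model.CreateKeyResponse;
import software.amazon.awssdk.services.kms.model.CustomerMasterKeySpec;
import software.amazon.awssdk.services.kms.model.KeyUsageType;

public class CreateSigningKeyExample {
    public static void main(String[] args) {
        try (KmsClient kms = KmsClient.create()) {
            CreateKeyResponse response = kms.createKey(CreateKeyRequest.builder()
                    .customerMasterKeySpec(CustomerMasterKeySpec.ECC_NIST_P256)
                    .keyUsage(KeyUsageType.SIGN_VERIFY)   // asymmetric CMKs are created for either SIGN_VERIFY or ENCRYPT_DECRYPT
                    .description("Asymmetric CMK for signing files from application A")  // placeholder description
                    .build());

            // The returned key metadata includes the keyId and ARN you'll use in the Sign and Verify calls.
            System.out.println("Created CMK: " + response.keyMetadata().arn());
        }
    }
}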

As a part of creating your asymmetric CMK, you need to attach a resource policy to the key to control which cryptographic operations the AWS principals representing applications A and B can use. A best practice is to use a different IAM principal for each application in order to scope down permissions. In this case, you want application A to only be able to sign files, and application B to only be able to verify them. I will assume each of these applications is running in Amazon EC2, and so I’ll create a couple of IAM roles.

  • The IAM role for application A (SignRole) will be given kms:Sign permission in the CMK key policy
  • The IAM role for application B (VerifyRole) will be given kms:Verify permission in the CMK key policy

The stanza in the CMK key policy document to allow signing should look like this (replace the account ID value of <111122223333> with your own):


{
	"Sid": "Allow use of the key for digital signing",
	"Effect": "Allow",
	"Principal": {"AWS":"arn:aws:iam::<111122223333>:role/SignRole"},
	"Action": "kms:Sign",
	"Resource": "*"
}

The stanza in the CMK key policy document to allow verification should look like this (replace the account ID value of <111122223333> with your own):


{
	"Sid": "Allow use of the key for verification",
	"Effect": "Allow",
	"Principal": {"AWS":"arn:aws:iam::<111122223333>:role/VerifyRole"},
	"Action": "kms:Verify",
	"Resource": "*"
}

Signing Workflow

Once you have created the asymmetric CMK and IAM roles, you’re ready to sign your file. Application A will create a message digest of the file and make a sign request to AWS KMS with the asymmetric CMK keyId and the signing algorithm. The CLI command to do this is shown below. Replace the key-id parameter with your CMK’s specific keyId.


aws kms sign \
	--key-id <1234abcd-12ab-34cd-56ef-1234567890ab> \
	--message-type DIGEST \
	--signing-algorithm ECDSA_SHA_256 \
	--message fileb://ExampleDigest

I chose the ECDSA_SHA_256 signing algorithm for this example. See the Sign API specification for a complete list of supported signing algorithms.

After validating that the API call is authorized by the credentials available to SignRole, AWS KMS generates a signature over the digest and returns the CMK keyId, the signature, and the signing algorithm.

Verify Workflow 1 — Calling the verify API

Once application B receives the file and the signature, it computes the SHA-256 digest over the copy of the file it received. It then makes a verify request to AWS KMS, passing this new digest, the signature it received from application A, the signing algorithm, and the CMK keyId. The CLI command to do this is shown below. Replace the key-id parameter with your CMK’s specific keyId.


aws kms verify \
	--key-id <1234abcd-12ab-34cd-56ef-1234567890ab> \
	--message-type DIGEST \
	--signing-algorithm ECDSA_SHA_256 \
	--message fileb://ExampleDigest \
	--signature fileb://Signature

After validating that the verify request is authorized, AWS KMS checks the signature against the digest received in the verify request, using the public portion of the CMK and the signing algorithm specified in the request. If the signature is valid, it returns a SignatureValid boolean of True, indicating that the original digest created by the sender matches the digest created by the recipient. Because the original digest is unique to the original file, the recipient can know that the file was not tampered with during transit.

One advantage of using the AWS KMS Verify API is that the caller doesn’t have to keep track of the specific public key matching the private key used to create the signature; the caller only has to know the CMK keyId and signing algorithm used. Also, because all requests to AWS KMS are logged to AWS CloudTrail, you can audit that the signature and verification operations were both executed as expected. See the Verify API specification for more detail on available parameters.
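
The equivalent Verify call in Java (AWS SDK for Java v2) looks roughly like the following sketch, assuming application B runs with the VerifyRole credentials; the file and signature paths and the key ID are placeholders. Note that if the signature doesn’t match, the Verify call fails with an exception rather than returning False.

import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.MessageType;
import software.amazon.awssdk.services.kms.model.SigningAlgorithmSpec;
import software.amazon.awssdk.services.kms.model.VerifyRequest;
import software.amazon.awssdk.services.kms.model.VerifyResponse;

public class VerifyExample {
    public static void main(String[] args) throws Exception {
        // Application B recomputes the digest over the copy of the file it received.
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(Files.readAllBytes(Paths.get("example-file.bin")));
        byte[] signature = Files.readAllBytes(Paths.get("example-file.sig"));

        try (KmsClient kms = KmsClient.create()) {
            VerifyResponse response = kms.verify(VerifyRequest.builder()
                    .keyId("1234abcd-12ab-34cd-56ef-1234567890ab")   // placeholder keyId
                    .message(SdkBytes.fromByteArray(digest))
                    .messageType(MessageType.DIGEST)
                    .signature(SdkBytes.fromByteArray(signature))
                    .signingAlgorithm(SigningAlgorithmSpec.ECDSA_SHA_256)
                    .build());

            // If the signature is invalid, kms.verify fails with an exception instead of returning false.
            System.out.println("SignatureValid: " + response.signatureValid());
        }
    }
}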

Verify Workflow 2 — Verifying locally using the public key

Apart from using the Verify API directly, you can choose to retrieve the public key in the CMK using the AWS KMS GetPublicKey API and verify the signature locally. You might want to do this if application B needs to verify multiple signatures at a high rate and you don’t want to make a network call to the Verify API each time. In this method, application B makes a GetPublicKey request to AWS KMS to retrieve the public key. The CLI command to do this is below. Replace the key-id parameter with your CMK’s specific keyId.

aws kms get-public-key \
	--key-id <1234abcd-12ab-34cd-56ef-1234567890ab>

Note that application B will need permission to make a GetPublicKey request to AWS KMS. The stanza in the CMK key policy document to allow the VerifyRole identity to download the public key should look like this (replace the account ID value of <111122223333> with your own):


{
	"Sid": "Allow retrieval of the public key for verification",
	"Effect": "Allow",
	"Principal": {"AWS":"arn:aws:iam::<111122223333>:role/VerifyRole"},
	"Action": "kms:GetPublicKey ",
	"Resource": "*"
}

Once application B has the public key, it can use your preferred cryptographic provider to perform the signature verification locally. Application B needs to keep track of the public key and signing algorithm used for each signature object it will verify locally. Using the wrong public key or the wrong signing algorithm will cause the signature verification to fail.
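
As an illustration, here’s a hedged Java sketch of local verification, assuming the signature was created with ECDSA_SHA_256 over a SHA-256 digest of the file, as in the earlier examples. GetPublicKey returns the public key in DER-encoded SubjectPublicKeyInfo format, which the standard Java security APIs can load directly; the file names and key ID are placeholders.

import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.X509EncodedKeySpec;

import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.GetPublicKeyRequest;
import software.amazon.awssdk.services.kms.model.GetPublicKeyResponse;

public class LocalVerifyExample {
    public static void main(String[] args) throws Exception {
        // Fetch the DER-encoded public key once; in practice you'd cache it locally.
        GetPublicKeyResponse resp;
        try (KmsClient kms = KmsClient.create()) {
            resp = kms.getPublicKey(GetPublicKeyRequest.builder()
                    .keyId("1234abcd-12ab-34cd-56ef-1234567890ab")   // placeholder keyId
                    .build());
        }
        PublicKey publicKey = KeyFactory.getInstance("EC")
                .generatePublic(new X509EncodedKeySpec(resp.publicKey().asByteArray()));

        // SHA256withECDSA hashes the input itself, so pass the file contents rather than a
        // precomputed digest; this matches a signature created over the file's SHA-256 digest.
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(publicKey);
        verifier.update(Files.readAllBytes(Paths.get("example-file.bin")));
        boolean valid = verifier.verify(Files.readAllBytes(Paths.get("example-file.sig")));
        System.out.println("Signature valid: " + valid);
    }
}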

Availability and pricing

Asymmetric keys and operations in AWS KMS are available now in the Northern Virginia, Oregon, Sydney, Ireland, and Tokyo AWS Regions with support for other regions planned. Pricing information for the new feature can be found at the AWS KMS pricing page.

Summary

I showed you a simple example of how you can use the new AWS KMS APIs to digitally sign and verify an arbitrary file. By having AWS KMS generate and store the private portion of the asymmetric key, you can limit use of the key for signing only to IAM principals you define. OIDC ID tokens, OAuth 2.0 access tokens, documents, configuration files, system update messages, and audit logs are but a few of the types of objects you might want to sign and verify using this feature.

You can also perform encrypt and decrypt operations under asymmetric CMKs in AWS KMS as an alternative to using the symmetric CMKs available since the service launched. Similar to how you can ask AWS KMS to generate symmetric keys for local use in your application, you can ask AWS KMS to generate and return asymmetric key pairs for local use to encrypt and decrypt data. Look for a future AWS Security Blog post describing these use cases. For more information about asymmetric key support, see the AWS KMS documentation page.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about the asymmetric key feature, please start a new thread on the AWS KMS Discussion Forum.

Want more AWS Security news? Follow us on Twitter.

Raj Copparapu

Raj Copparapu is a Senior Product Manager Technical. He’s a member of the AWS KMS team and focuses on defining the product roadmap to satisfy customer requirements. He has spent over 5 years innovating on behalf of customers to deliver products that help them secure their data in the cloud. Raj received his MBA from Duke University’s Fuqua School of Business and spent his early career working as an engineer and a business intelligence consultant. In his spare time, Raj enjoys yoga and spending time with his kids.

Post-quantum TLS now supported in AWS KMS

Post Syndicated from Andrew Hopkins original https://aws.amazon.com/blogs/security/post-quantum-tls-now-supported-in-aws-kms/

AWS Key Management Service (AWS KMS) now supports post-quantum hybrid key exchange for the Transport Layer Security (TLS) network encryption protocol that is used when connecting to KMS API endpoints. In this post, I’ll tell you what post-quantum TLS is, what hybrid key exchange is, why it’s important, how to take advantage of this new feature, and how to give us feedback.

What is post-quantum TLS?

Post-quantum TLS is a feature that adds new, post-quantum cipher suites to the protocol. AWS implements TLS using s2n, a streamlined open source implementation of TLS. In June, 2019, AWS introduced post-quantum s2n, which implements two proposed post-quantum hybrid cipher suites specified in this IETF draft. The cipher suites specify a key exchange that provides the security protections of both the classical and post-quantum schemes.

Why is this important?

A large-scale quantum computer would break the current public key cryptography that is used for key exchange in every TLS connection. While a large-scale quantum computer is not available today, it’s still important to think about and plan for your long-term security needs. TLS traffic recorded today could be decrypted by a large-scale quantum computer in the future. If you’re developing applications that rely on the long-term confidentiality of data passed over a TLS connection, you should consider a plan to migrate to post-quantum cryptography before a large-scale quantum computer is available for use by potential adversaries. AWS is working to prepare for this future, and we want you to be well-prepared, too.

We’re offering this feature now instead of waiting so you’ll have a way to measure the potential performance impact to your applications, and you’ll have the additional benefit of the protection afforded by the proposed post-quantum schemes today. While we believe the use of this feature raises the already high security bar for connecting to KMS endpoints, these new cipher suites will have an impact on bandwidth utilization, latency, and could also create issues for intermediate systems that proxy TLS connections. We’d like to get feedback from you on the effectiveness of our implementation so we can improve it over time.

Some background on post-quantum TLS

Today, all requests to AWS KMS use TLS with one of two key exchange schemes:

  • Finite Field Diffie-Hellman Ephemeral (FFDHE)
  • Elliptic Curve Diffie-Hellman Ephemeral (ECDHE)

FFDHE and ECDHE are industry standards for secure key exchange. KMS uses only ephemeral keys for TLS key negotiation; this ensures every connection uses a unique key and the compromise of one connection does not affect the security of another connection. They are secure today against known cryptanalysis techniques that use classical computers; however, they’re not secure against known attacks that use a large-scale quantum computer. In the future, a sufficiently capable large-scale quantum computer could run Shor’s Algorithm to recover the TLS session key of a recorded session, and therefore gain access to the data inside. Protecting against a large-scale quantum computer requires using a post-quantum key exchange algorithm during the TLS handshake.

The possibility of large-scale quantum computing has spurred the development of new quantum-resistant cryptographic algorithms. The National Institute of Standards and Technology (NIST) has started the process of standardizing post-quantum cryptographic algorithms. AWS contributed to two NIST submissions:

  • Bit Flipping Key Encapsulation (BIKE)
  • Supersingular Isogeny Key Encapsulation (SIKE)

BIKE and SIKE are Key Encapsulation Mechanisms (KEMs); a KEM is a type of key exchange used to establish a shared symmetric key. Post-quantum s2n only uses ephemeral BIKE and SIKE keys.

The NIST standardization process isn’t expected to complete until 2024. Until then, there is a risk that the exclusive use of proposed algorithms like BIKE and SIKE could expose data in TLS connections to security vulnerabilities not yet discovered. To mitigate this risk and use these new post-quantum schemes safely today, we need a way to combine classical algorithms with the expected post-quantum security of the new algorithms submitted to NIST. The Hybrid Post-Quantum Key Encapsulation Methods for Transport Layer Security 1.2 IETF draft describes how to combine BIKE and SIKE with ECDHE to create two new cipher suites for TLS.

These two cipher suites use a hybrid key exchange that performs two independent key exchanges during the TLS handshake and then cryptographically combines the keys into a single TLS session key. This strategy combines the high assurance of a classical key exchange with the security of the proposed post-quantum key exchanges.
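
To make the idea concrete, here’s a deliberately simplified Java sketch of the combining step. It is not the construction that s2n or the IETF draft uses; the draft specifies exactly how the two shared secrets are combined and fed into the TLS PRF, while this sketch just uses SHA-256 as a stand-in KDF to show why the result stays protected as long as either input remains secret.

import java.security.MessageDigest;
import java.util.Arrays;

public class HybridCombineSketch {
    // Illustrative only: combine a classical (ECDHE) shared secret with a post-quantum (KEM)
    // shared secret so the derived session secret is safe if EITHER input stays secret.
    static byte[] combine(byte[] ecdheSharedSecret, byte[] kemSharedSecret) throws Exception {
        byte[] concatenated = new byte[ecdheSharedSecret.length + kemSharedSecret.length];
        System.arraycopy(ecdheSharedSecret, 0, concatenated, 0, ecdheSharedSecret.length);
        System.arraycopy(kemSharedSecret, 0, concatenated, ecdheSharedSecret.length, kemSharedSecret.length);
        // Stand-in KDF for illustration; the real cipher suites use the TLS PRF as defined in the draft.
        return MessageDigest.getInstance("SHA-256").digest(concatenated);
    }

    public static void main(String[] args) throws Exception {
        // Dummy inputs just to show the shape of the operation.
        byte[] ecdhe = new byte[32];
        byte[] kem = new byte[32];
        Arrays.fill(ecdhe, (byte) 1);
        Arrays.fill(kem, (byte) 2);
        System.out.println("Combined secret length: " + combine(ecdhe, kem).length);
    }
}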

The effect of hybrid post-quantum TLS on performance

Post-quantum cipher suites have a different performance profile and different bandwidth requirements than traditional cipher suites. We measured the latency and bandwidth for a single handshake on an Amazon EC2 c5.2xlarge instance. This provides a baseline for what to expect when you connect to KMS with the SDK. Your exact results will depend on your hardware (CPU speed and number of cores), existing workloads (how often you call KMS and what other work your application performs), and your network (location and capacity).

BIKE and SIKE have different performance tradeoffs: BIKE has faster computations and large keys, and SIKE has slower computations and smaller keys. The tables below show the results of the AWS measurements. ECDHE, a classic cryptographic key exchange algorithm, is included by itself for comparison.

Table 1
TLS Message          ECDHE (bytes)   ECDHE w/ BIKE (bytes)   ECDHE w/ SIKE (bytes)
ClientHello          139             147                     147
ServerKeyExchange    329             2,875                   711
ClientKeyExchange    66              2,610                   470

Table 1 shows the amount of data (in bytes) sent in each TLS message. The ClientHello message is larger for post-quantum cipher suites because they include a new ClientHello extension. The key exchange messages are larger because they include BIKE or SIKE messages.

Table 2
Item                      ECDHE (ms)   ECDHE w/ BIKE (ms)   ECDHE w/ SIKE (ms)
Server processing time    0.112        0.26                 95.53
Client processing time    0.10         0.39                 57.05
Total handshake time      1.19         25.58                155.08

Table 2 shows the time (in milliseconds) a client and server in the same region take to complete a handshake. Server processing time includes: key generation, signing the server key exchange message, and processing the client key exchange message. The client processing time includes: verifying the server’s certificate, processing the server key exchange message, and generating the client key exchange message. The total time was measured on the client from the start of the handshake to the end and includes network transfer time. All connections used RSA authentication with a 2048-bit key, and ECDHE used the secp256r1 curve. The BIKE test used the BIKE-1 Level 1 parameter and the SIKE test used the SIKEp503 parameter.

A TLS handshake is performed only once to set up a new connection, and the SDK will reuse connections for multiple KMS requests when possible. When you benchmark, make sure each measurement starts with a fresh connection; including requests made over an existing TLS session will skew your performance data.
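
For example, here’s a rough Java sketch of one way to measure the per-connection cost. It assumes the same preview classes and imports used in the setup steps later in this post, and each timing includes one ListKeys round trip in addition to the handshake.

// Rough benchmarking sketch (exception handling omitted): create a fresh client for every
// measurement so each request pays the full TLS handshake cost instead of reusing a connection.
for (int i = 0; i < 10; i++) {
    SdkAsyncHttpClient httpClient = AwsCrtAsyncHttpClient.builder()
            .tlsCipherPreference(TLS_CIPHER_KMS_PQ_TLSv1_0_2019_06)
            .build();
    KmsAsyncClient kms = KmsAsyncClient.builder().httpClient(httpClient).build();

    long start = System.nanoTime();
    kms.listKeys().join();   // first request on a new client includes the handshake
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    System.out.println("Request " + i + " on a new connection took " + elapsedMs + " ms");

    kms.close();
    httpClient.close();
}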

How to use hybrid post-quantum cipher suites

Note: The “AWS CRT HTTP Client” in the aws-crt-dev-preview branch of the aws-sdk-java-v2 repository is a beta release. This beta release and your use are subject to Section 1.10 (“Beta Service Participation”) of the AWS Service Terms.

To use the post-quantum cipher suites with AWS KMS, you’ll need the Developer Previews of the Java SDK 2.0 and the AWS Common Runtime. You’ll need to configure the AWS Common Runtime HTTP client to use s2n’s post-quantum hybrid cipher suites, and configure the AWS Java SDK 2.0 to use that HTTP client. This client can then be used when connecting to KMS endpoints that do not use FIPS 140-2 validated crypto for TLS termination. For example, kms.<region>.amazonaws.com supports the use of post-quantum cipher suites, while kms-fips.<region>.amazonaws.com does not.

To see a complete example of the full setup, check out the example application here.
 

Figure 1: GitHub and package layout

Figure 1 shows the GitHub and package layout. The steps below will walk you through building and configuring the SDK.

  1. Download the Java SDK v2 Common Runtime Developer Preview:
    
    $ git clone git@github.com:aws/aws-sdk-java-v2.git --branch aws-crt-dev-preview
    $ cd aws-sdk-java-v2
    

  2. Build the aws-crt-client JAR:
    
    $ mvn install -Pquick
    

  3. In your project add the AWS Common Runtime client to your Maven Dependencies:
    
    <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>aws-crt-client</artifactId>
        <version>2.10.7-SNAPSHOT</version>
    </dependency>
    

  4. Configure the new SDK and cipher suite in your application’s existing initialization code:
    
    if(!TLS_CIPHER_KMS_PQ_TLSv1_0_2019_06.isSupported()){
        throw new RuntimeException("Post Quantum Ciphers not supported on this Platform");
    }
    SdkAsyncHttpClient awsCrtHttpClient = AwsCrtAsyncHttpClient.builder()
              .tlsCipherPreference(TLS_CIPHER_KMS_PQ_TLSv1_0_2019_06)
              .build();
    KmsAsyncClient kms = KmsAsyncClient.builder()
             .httpClient(awsCrtHttpClient)
             .build();
    ListKeysResponse response = kms.listKeys().get();
    

Now, all connections made to AWS KMS in supported regions will use the new hybrid post-quantum cipher suites.

Things to try

Here are some ideas about how to use this post-quantum-enabled client:

  • Run load tests and benchmarks. These new cipher suites perform differently than traditional key exchange algorithms. You might need to adjust your connection timeouts to allow for the longer handshake times or, if you’re running inside an AWS Lambda function, extend the execution timeout setting.
  • Try connecting from different locations. Depending on the network path your request takes, you might discover that intermediate hosts, proxies, or firewalls with deep packet inspection (DPI) block the request. This could be due to the new cipher suites in the ClientHello or the larger key exchange messages. If this is the case, you might need to work with your Security team or IT administrators to update the relevant configuration to unblock the new TLS cipher suites. We’d like to hear from you about how your infrastructure interacts with this new variant of TLS traffic.

More info

If you’re interested in learning more about post-quantum cryptography, check out:

Conclusion

In this blog post, I introduced you to the topic of post-quantum security and covered what AWS and NIST are doing to address the issue. I also showed you how to begin experimenting with hybrid post-quantum key exchange algorithms for TLS when connecting to KMS endpoints.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about how to configure the HTTP client or its interaction with KMS endpoints, please start a new thread on the AWS KMS discussion forum.

How to deploy CloudHSM to securely share your keys with your SaaS provider

Post Syndicated from Vinod Madabushi original https://aws.amazon.com/blogs/security/how-to-deploy-cloudhsm-securely-share-keys-with-saas-provider/

If your organization is using software as a service (SaaS), your data is likely stored and protected by the SaaS provider. However, depending on the type of data that your organization stores and the compliance requirements that it must meet, you might need more control over how the encryption keys are stored, protected, and used. In this post, I’ll show you two options for deploying and managing your own CloudHSM cluster to secure your keys, while still allowing trusted third-party SaaS providers to securely access your HSM cluster in order to perform cryptographic operations. You can also use this architecture when you want to share your keys with another business unit or with an application that’s running in a separate AWS account.

AWS CloudHSM is one of several cryptography services provided by AWS to help you secure your data and keys in the AWS cloud. AWS CloudHSM provides single-tenant HSMs based on third-party FIPS 140-2 Level 3 validated hardware, under your control, in your Amazon Virtual Private Cloud (Amazon VPC). You can generate and use keys on your HSM using CloudHSM command line tools or standards-compliant C, Java, and OpenSSL SDKs.

A related, more widely used service is AWS Key Management Service (KMS). KMS is generally easier to use, cheaper to operate, and is natively integrated with most AWS services. However, there are some use cases for which you may choose to rely on CloudHSM to meet your security and compliance requirements.

Solution Overview

There are two ways you can set up your VPC and CloudHSM clusters to allow trusted third-party SaaS providers to use the HSM cluster for cryptographic operations. The first option is to use VPC peering to allow traffic to flow between the SaaS provider’s HSM client VPC and your CloudHSM VPC, and to use a custom application to access the HSM cluster.

The second option is to use KMS to manage the keys, specifying a custom key store to generate and store the keys. AWS KMS supports custom key stores backed by AWS CloudHSM clusters. When you create an AWS KMS customer master key (CMK) in a custom key store, AWS KMS generates and stores non-extractable key material for the CMK in an AWS CloudHSM cluster that you own and manage.

Decision Criteria: VPC Peering vs Custom Key Store

The right solution for you will depend on factors like your VPC configuration, security requirements, network setup, and the type of cryptographic operations you need. The following table provides a high-level summary of how these two options compare. Later in this post, I’ll go over both options in detail and explain the design considerations you need to be aware of before deploying the solution in your environment.

Technical Consideration                                                               Solution
Are you able to peer or connect your HSM VPC with your SaaS provider?                 VPC Peering
Is your SaaS provider sensitive to costs from KMS usage in their AWS account?         VPC Peering
Do you need CloudHSM-specific cryptographic tasks like signing, HMAC, or random number generation?   VPC Peering
Does your SaaS provider need to encrypt your data directly with the Master Key?       Custom Keystore
Does your application rely on a PKCS#11-compliant or JCE-compliant SDK?               VPC Peering
Does your SaaS provider need to use the keys in AWS services?                         Custom Keystore
Do you need to log all key usage activities when SaaS providers use your HSM keys?    Custom Keystore

Option 1: VPC Peering

 

Figure 1: Architecture diagram showing VPC peering between the SaaS provider’s HSM client VPC and the customer’s HSM VPC

Figure 1 shows how you can deploy a CloudHSM cluster in a dedicated HSM VPC and peer this HSM VPC with your service provider’s VPC to allow them to access the HSM cluster through the client/application. I recommend that you deploy the CloudHSM cluster in a separate HSM VPC to limit the scope of resources running in that VPC. Since VPC peering is not transitive, service providers will not have access to any resources in your application VPCs or any other VPCs that are peered with the HSM VPC.

It’s possible to leverage the HSM cluster for other purposes and applications, but you should be aware of the potential drawbacks before you do. This approach could make it harder for you to find non-overlapping CIDR ranges for use with your SaaS provider. It would also mean that your SaaS provider could accidentally overwrite HSM account credentials or lock out your HSMs, causing an availability issue for your other applications. For these reasons, I recommend that you dedicate a CloudHSM cluster for use with your SaaS providers and use small VPC and subnet sizes, like /27, so that you’re not wasting IP space and it’s easier to find non-overlapping IP addresses with your SaaS provider.

If you’re using VPC peering, your HSM VPC CIDR cannot overlap with your SaaS provider’s VPC. Deploying the HSM cluster in a separate VPC gives you flexibility in selecting a suitable CIDR range that is non-overlapping with the service provider since you don’t have to worry about your other applications. Also, since you’re only hosting the HSM Cluster in this VPC, you can choose a CIDR range that is relatively small.

Design considerations

Here are additional considerations to think about when deploying this solution in your environment:

  • VPC peering allows resources in either VPC to communicate with each other as long as security groups, NACLs, and routing allow it. To improve security, place only resources that are meant to be shared in the VPC, and secure communication at the port/protocol level by using security groups.
  • If you decide to revoke the SaaS provider’s access to your CloudHSM, you have two choices:
    • At the network layer, you can remove connectivity by deleting the VPC peering or by modifying the CloudHSM security groups to disallow the SaaS provider’s CIDR ranges.
    • Alternately, you can log in to the CloudHSM as Crypto Officer (CO) and change the password or delete the Crypto user that the SaaS provider is using.
  • If you’re deploying CloudHSM across multiple accounts or VPCs within your organization, you can also use AWS Transit Gateway to connect the CloudHSM VPC to your application VPCs. Transit Gateway is ideal when you have multiple application VPCs that need CloudHSM access, as it easily scales and you don’t have to worry about the VPC peering limits or the number of peering connections to manage.
  • If you’re the SaaS provider, and you have multiple clients who might be interested in this solution, you must make sure that one customer IP space doesn’t overlap with yours. You must also make sure that each customer’s HSM VPC doesn’t overlap with any of the others. One solution is to dedicate one VPC per customer, to keep the client/application dedicated to that customer, and to peer this VPC with your application VPC. This reduces the overlapping CIDR dependency among all your customers.

Option 2: Custom Key Store

As the AWS KMS documentation explains, KMS supports custom key stores backed by AWS CloudHSM clusters. When you create an AWS KMS customer master key (CMK) in a custom key store, AWS KMS generates and stores non-extractable key material for the CMK in an AWS CloudHSM cluster that you own and manage. When you use a CMK in a custom key store, the cryptographic operations are performed in the HSMs in the cluster. This feature combines the convenience and widespread integration of AWS KMS with the added control of an AWS CloudHSM cluster in your AWS account. This option allows you to keep your master key in the CloudHSM cluster but allows your SaaS provider to use your master key securely by using KMS.

Each custom key store is associated with an AWS CloudHSM cluster in your AWS account. When you connect the custom key store to its cluster, AWS KMS creates the network infrastructure to support the connection. Then it logs in to the cluster through the AWS CloudHSM client, using the credentials of a dedicated crypto user in the cluster. All of this is automatically set up, with no need to peer VPCs or connect to your SaaS provider’s VPC.

You create and manage your custom key stores in AWS KMS, and you create and manage your HSM clusters in AWS CloudHSM. When you create CMKs in an AWS KMS custom key store, you view and manage the CMKs in AWS KMS. But you can also view and manage their key material in AWS CloudHSM, just as you would do for other keys in the cluster.
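
As a rough sketch (not a production setup), the following Java (AWS SDK for Java v2) snippet shows the API sequence: register the CloudHSM cluster as a custom key store, connect it, and create a CMK whose key material is generated in the cluster. The cluster ID, trust anchor certificate file, and kmsuser password are placeholders for values from your own cluster, and in practice you must wait for the key store to reach the CONNECTED state before creating keys.

import java.nio.file.Files;
import java.nio.file.Paths;

import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.ConnectCustomKeyStoreRequest;
import software.amazon.awssdk.services.kms.model.CreateCustomKeyStoreRequest;
import software.amazon.awssdk.services.kms.model.CreateKeyRequest;
import software.amazon.awssdk.services.kms.model.OriginType;

public class CustomKeyStoreExample {
    public static void main(String[] args) throws Exception {
        try (KmsClient kms = KmsClient.create()) {
            // Register the CloudHSM cluster as a custom key store (placeholder values).
            String keyStoreId = kms.createCustomKeyStore(CreateCustomKeyStoreRequest.builder()
                    .customKeyStoreName("ExampleSaaSKeyStore")
                    .cloudHsmClusterId("cluster-1a23b4cdefg")
                    .trustAnchorCertificate(new String(Files.readAllBytes(Paths.get("customerCA.crt"))))
                    .keyStorePassword("kmsuser-password")
                    .build()).customKeyStoreId();

            // Connecting is asynchronous and can take a while; poll DescribeCustomKeyStores
            // and wait for a CONNECTED state before running the CreateKey call below.
            kms.connectCustomKeyStore(ConnectCustomKeyStoreRequest.builder()
                    .customKeyStoreId(keyStoreId)
                    .build());

            // Create a CMK whose non-extractable key material lives in your CloudHSM cluster.
            String keyArn = kms.createKey(CreateKeyRequest.builder()
                    .origin(OriginType.AWS_CLOUDHSM)
                    .customKeyStoreId(keyStoreId)
                    .description("CMK backed by my CloudHSM cluster")
                    .build()).keyMetadata().arn();

            System.out.println("Created CMK in custom key store: " + keyArn);
        }
    }
}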

The following diagram shows how some keys can be located in a CloudHSM cluster but be visible through AWS KMS. These are the keys that AWS KMS can use for crypto operations performed through KMS.
 

Figure 2: High level overview of KMS custom key store

While this option eliminates many of the networking components you need to set up for Option 1, it does limit the type of cryptographic operations that your SaaS provider can perform. Since the SaaS provider doesn’t have direct access to CloudHSM, the crypto operations are limited to the encrypt and decrypt operations supported by KMS, and your SaaS provider must use KMS APIs for all of their operations. This is easy if they’re using AWS services which use KMS already, but if they’re performing operations within their application before storing the data in AWS storage services, this approach could be challenging, because KMS doesn’t support all the same types of cryptographic operations that CloudHSM supports.

Figure 3 illustrates the various components that make up a custom key store and shows how a CloudHSM cluster can connect to KMS to create a customer controlled key store.
 

Figure 3: A cluster of two CloudHSM instances is connected to KMS to create a customer controlled key store

Design Considerations

  • Note that when using custom key store, you’re creating a kmsuser CU account in your AWS CloudHSM cluster and providing the kmsuser account credentials to AWS KMS.
  • This option requires your service provider to be able to use KMS as the key management option within their application. Because your SaaS provider cannot communicate directly with the CloudHSM cluster, they must instead use KMS APIs to encrypt the data; a short sketch of this call pattern follows this list. If your SaaS provider is encrypting within their application without using KMS, this option may not work for you.
  • When deploying a custom key store, you must not only control access to the CloudHSM cluster, you must also control access to AWS KMS.
  • Because the custom key store and KMS are located in your account, you must give permission to the SaaS provider to use certain KMS keys. You can do this by enabling cross account access. For more information, please refer to the blog post “Share custom encryption keys more securely between accounts by using AWS Key Management Service.”
  • I recommend dedicating an AWS account to the CloudHSM cluster and custom key store, as this simplifies setup. For more information, please refer to Controlling Access to Your Custom Key Store.
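
To make the cross-account pattern concrete, here’s a hedged Java (AWS SDK for Java v2) sketch of what the SaaS provider’s call might look like once the key policy and their IAM policy grant them kms:Encrypt on your CMK. The key ARN is a placeholder; a cross-account caller must reference the CMK by its full ARN (or an alias ARN), not a bare key ID.

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.EncryptRequest;
import software.amazon.awssdk.services.kms.model.EncryptResponse;

public class CrossAccountEncryptExample {
    public static void main(String[] args) {
        // Placeholder ARN for the customer's CMK that is backed by the custom key store.
        String customerCmkArn =
                "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab";

        try (KmsClient kms = KmsClient.create()) {
            // The SaaS provider calls KMS; the cryptographic operation runs on the HSMs backing the key store.
            EncryptResponse response = kms.encrypt(EncryptRequest.builder()
                    .keyId(customerCmkArn)
                    .plaintext(SdkBytes.fromUtf8String("example data to protect"))
                    .build());

            byte[] ciphertext = response.ciphertextBlob().asByteArray();
            System.out.println("Ciphertext length: " + ciphertext.length);
        }
    }
}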

Network architecture that is not supported by CloudHSM

Figure 4: Diagram showing the network anti-pattern for deploying CloudHSM

Figure 4 shows various networking technologies, like AWS PrivateLink, Network Address Translation (NAT), and AWS Load Balancers, that cannot be used with CloudHSM when placed between the CloudHSM cluster and the client/application. All of these methods mask the real IPs of the HSM cluster nodes from the client, which breaks the communication between the CloudHSM client and the HSMs.

When the CloudHSM client successfully connects to the HSM cluster, it downloads a list of HSM IP addresses which is then stored and used for subsequent connections. When one of the HSM nodes is unavailable, the client/application will automatically try the IP address of the HSM nodes it knows about. When HSMs are added or removed from the cluster, the client is automatically reconfigured. Since the client relies on a current list of IP addresses to transparently handle high availability and failover within the cluster, masking the real IP address of the HSM node thus breaks the communication between the cluster and the client.

You can read more about how the CloudHSM client works in the AWS CloudHSM User Guide.

Summary

In this blog post, I’ve shown you two options for deploying CloudHSM to store your key material while allowing your SaaS provider to access and use those keys on your behalf. This allows you to remain in control of your encryption keys and use a SaaS solution without compromising security.

It’s important to understand the security requirements, network setup, and type of cryptographic operation for each approach, and to choose the option that aligns the best with your goals. As a best practice, it’s also important to understand how to secure your CloudHSM and KMS deployment and to use necessary role-based access control with minimum privilege. Read more about AWS KMS Best Practices and CloudHSM Best Practices.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Key Management Service discussion forum.

Want more AWS Security news? Follow us on Twitter.

Vinod Madabushi

Vinod is an Enterprise Solutions Architect with AWS. He works with customers on building highly available, scalable, and secure applications on AWS Cloud. He’s passionate about solving technology challenges and helping customers with their cloud journey.

How to decrypt ciphertexts in multiple regions with the AWS Encryption SDK in C

Post Syndicated from Liz Roth original https://aws.amazon.com/blogs/security/how-to-decrypt-ciphertexts-multiple-regions-aws-encryption-sdk-in-c/

You’ve told us that you want to encrypt data once with AWS Key Management Service (AWS KMS) and decrypt that data with customer master keys (CMKs) that you specify, often with CMKs in different AWS Regions. Doing this saves you compute resources and helps you to enable secure and efficient high-availability schemes.

The AWS Crypto Tools team has introduced the AWS Encryption SDK for C so you can achieve these goals. The new tool also adds more options for language and platform support and is fully interoperable with the implementations in Java and Python.

The AWS Encryption SDK is a client-side encryption library that helps make it easier for you to implement encryption best practices in your applications. You can use it with master keys from multiple sources, including AWS KMS CMKs. The AWS Encryption SDK doesn’t require AWS KMS or any other AWS service.

You can use AWS KMS APIs directly to encrypt data keys using multiple CMKs, but the AWS Encryption SDK provides tools to make working with multiple CMKs even easier, with everything you need stored in the Encryption SDK’s portable encrypted message format. The AWS Encryption SDK for C uses the concept of keyrings, which makes it easy to work with ciphertexts encrypted using multiple CMKs.

In this post, I will walk you through an example using the new AWS Encryption SDK for C. I’ll focus on some highlights from example code in the context of what an example application deployment might look like. You can find the complete example code in this GitHub repository. As always, we welcome your comments and your contributions.

Example scenario

To add some context around the example code, assume that you have a data processing application deployed both in US West (Oregon) us-west-2 and EU Central (Frankfurt) eu-central-1. For added durability, this example application creates and encrypts data in us-west-2 before it’s copied to the eu-central-1 Region. You have assurance that you could decrypt that data in us-west-2 if needed, but you want to mitigate the case where the decryption service in us-west-2 is unavailable. So how do you ensure you can decrypt your data in the eu-central-1 region when you need to?

In this example, your data processing application uses the AWS Encryption SDK and AWS KMS to generate a 256-bit data key to encrypt content locally in us-west-2. The AWS Encryption SDK for C deletes the plaintext data key after use, but an encrypted copy of that data key is included in the encrypted message that the AWS Encryption SDK returns. This prevents you from losing the encrypted copy of the data key, which would make your encrypted content unrecoverable. The data key is encrypted under the AWS KMS CMKs in each of the two regions in which you might want to decrypt the data in the future.

A best practice is to plan to decrypt data using in-region data keys and CMKs. This reduces latency and simplifies the permissions and auditing properties of the decryption operation. The latency impact from the cross-region API calls occurs only during the encryption operation.

In this scenario, the AWS KMS CMK key policy permissions look like this:

  • To encrypt data, the AWS identity used by the data processing application in us-west-2 needs kms:GenerateDataKey permission on the us-west-2 CMK and kms:Encrypt permission on the eu-central-1 CMK. You can specify these permissions in a key policy or IAM policy. This will let the application create a data key in us-west-2 and encrypt the data key under CMKs in both AWS Regions.
  • To decrypt data, the AWS identity used by the data processing application in us-west-2 needs kms:Decrypt permissions on the CMK in us-west-2 or the CMK in eu-central-1.

Encryption path

First, define variables for the Amazon Resource Names (ARNs) of your CMKs in us-west-2 and eu-central-1. In the Encryption SDK for C, to encrypt, you can identify a CMK by its CMK ARN or the Alias ARN that is mapped to the CMK ARN.


const char *KEY_ARN_US_WEST_2 = "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab";

const char *KEY_ARN_EU_CENTRAL_1 = "arn:aws:kms:eu-central-1:111122223333:key/0987dcba-09fe-87dc-65ba-ab0987654321";      
     

Now, use the CMK ARNs to create a keyring. In the Encryption SDK, a keyring is used to generate, encrypt, and decrypt data keys under multiple master keys. You’ll create a KMS keyring configured to use multiple CMKs.


struct aws_cryptosdk_keyring *kms_keyring=Aws::Cryptosdk::KmsKeyring::Builder().Build(KEY_ARN_US_WEST_2, { KEY_ARN_EU_CENTRAL_1 });

When the AWS Encryption SDK uses this keyring to encrypt data, it calls GenerateDataKey on the first CMK that you specify, and Encrypt on each of the remaining CMKs that you specify. The result is a plaintext data key generated in us-west-2, an encryption of the data key using the CMK in us-west-2, and an encryption of the data key using the CMK in eu-central-1.

The plaintext data key that AWS KMS generated in us-west-2 is protected under a TLS session using only cipher suites that support forward-secrecy. The process of sending that same plaintext data key to the AWS KMS endpoint in eu-central-1 for encryption is also protected under a similar TLS session.

The Encryption SDK uses the data key to encrypt your data, and it stores the encrypted data keys with your encrypted content. The result is an encrypted message that can be decrypted using the CMK in us-west-2 or the CMK in eu-central-1.

Now that you understand what’s going to happen after you create the keyring, I’ll return to the code sample. Next, you need to create an encrypt-mode session using your keyring. In the AWS Encryption SDK for C, you use a session to encrypt a single plaintext message or decrypt a single ciphertext message, regardless of its size. The session maintains the state of the message throughout its processing.


struct aws_cryptosdk_session *session = aws_cryptosdk_session_new_from_keyring(alloc, AWS_CRYPTOSDK_ENCRYPT, kms_keyring);

With the keyring and encrypt-mode session, the data processing application can ask the Encryption SDK to encrypt the data under the CMKs that you specified in two different AWS regions:


aws_cryptosdk_session_process(
    session,
    out_ciphertext,
    out_ciphertext_buf_sz,
    out_ciphertext_len,
    in_plaintext,
    in_plaintext_len,
    &in_plaintext_consumed))

The result is an encrypted message that contains the ciphertext and two encrypted copies of the same data key. One encrypted data key was encrypted by your CMK in us-west-2 and the other encrypted data key was encrypted by your CMK in eu-central-1.

Decryption path

In the AWS Encryption SDK for C, you use keyrings for both encrypting and decrypting. You can use the same keyring for both, or you can use different keyrings for each operation.

Why would you want to use a different keyring for decryption? At a high level, encrypt keyrings specify all CMKs that can decrypt the ciphertext. Decrypt keyrings constrain the CMKs the application is permitted to use.

Reusing a keyring for both encrypt and decrypt mode can simplify your AWS Encryption SDK client configuration, but splitting the keyring and using different AWS KMS clients provides more flexibility to meet your security and architecture goals. The option you choose depends in part on the constraints you want to place on the CMKs your application uses.

The Decrypt API in the AWS KMS service doesn’t permit you to specify a CMK as a request parameter. But the AWS Encryption SDK lets you specify one or many CMKs in a decryption keyring, or even discover which CMKs to try automatically. I’ll discuss each option in the next section.

Decryption path 1: Use a specific CMK

This keyring option configures the AWS Encryption SDK to use only a specified CMK in the specified AWS Region. This implies that your data processing application will need kms:Decrypt permissions on that specific CMK and your application will always call the same AWS KMS endpoints in the specified AWS Region. CloudTrail events from the Decrypt API will also only appear in the specified AWS Region.

You might use a specific CMK when the user or application that is decrypting the data has kms:Decrypt permission on only one of the CMKs that encrypted the data keys.

The CMK that you specify to decrypt the data must be one of the CMKs that was used to encrypt the data. Make sure that at least one of the CMKs from your encrypt keyring is included in the decrypt keyring and that the caller has kms:Decrypt permission to use it.

In my example, I encrypted the data keys using CMKs in us-west-2 and eu-central-1, so I’ll start decrypting in eu-central-1 because I want to have a specific decrypt instantiation of the data processing application dedicated to eu-central-1. Assume the eu-central-1 data processing application has configured AWS IAM credentials for a principal with permission to call the Decrypt operation on the eu-central-1 CMK.

Configure a keyring that asks the AWS Encryption SDK to use the CMK in eu-central-1 to decrypt:

Aws::Cryptosdk::KmsKeyring::Builder().Build(KEY_ARN_EU_CENTRAL_1)

The Encryption SDK reads the encrypted message, finds the encrypted data key that was encrypted using the CMK in eu-central-1, and uses this keyring to decrypt.

Decryption path 2: Use any of several CMKs

This keyring option configures the AWS Encryption SDK to try several specific CMKs during its decryption attempts, stopping as soon as it succeeds. You should configure the AWS IAM credentials used by your data processing application to have kms:Decrypt permissions on each of the specified regional CMKs.

Your application could end up calling multiple regional AWS KMS endpoints. CloudTrail events from the Decrypt API will appear in the AWS Region in which the decrypt operation succeeds, and in any of the other AWS Regions that the keyring attempts to use. The CMK that you specify to decrypt the data must be one of the CMKs that was used to encrypt the data. Make sure that at least one of the CMKs from your encrypt keyring is included in the decrypt keyring and that the application has kms:Decrypt permission to use it.

You might define an encryption keyring that includes multiple CMKs so that users with different permissions can decrypt the same message. For example, you might include in your encryption keyring keys in multiple AWS regions.

Here’s an example keyring constructed with multiple CMKs:

Aws::Cryptosdk::KmsKeyring::Builder().Build(KEY_ARN_EU_CENTRAL_1, { KEY_ARN_US_WEST_2 })

The AWS Encryption SDK reads each of the encrypted data keys stored in the encrypted message in the order that they appear. For each data key, the Encryption SDK searches the keyring for the matching CMK that encrypted it. If it finds that CMK, the AWS Encryption SDK calls AWS KMS in the AWS Region where the CMK exists to decrypt that data key, then uses that decrypted key to decrypt the message. If the decryption operation fails for any reason, the AWS Encryption SDK moves on to the next encrypted data key in the message and tries again.

The AWS Encryption SDK will try to decrypt the encrypted message in this way until either decryption succeeds, or the AWS Encryption SDK has attempted and failed to decrypt any of the encrypted data keys using the CMKs specified in the keyring.

If this keyring configuration looks familiar, it’s because it’s similar to the configuration you used on the encrypt path when you encrypted under multiple CMKs. The difference is this:

  • Encryption: The AWS Encryption SDK uses every CMK in the keyring to encrypt the data key, and adds all of the encrypted data keys to the encrypted message.
  • Decryption: The AWS Encryption SDK attempts to decrypt the encrypted data keys using only the CMKs in the keyring, and it stops as soon as it succeeds.

Decryption path 3: Use a KMS Discovery keyring

The previous decryption paths required you to keep track of the exact CMKs used during the encryption operation, which may suit your needs for security and event logging. But what if you want more flexibility? What if you want to change the CMKs that you use in encryption operations without updating the data processing application that decrypts your data? You can configure a keyring that doesn’t specify CMKs to use for decryption, but instead tries each CMK that encrypted a data key until decryption succeeds or all referenced CMKs fail. We call this configuration a KMS Discovery keyring.

A Discovery keyring is equivalent to a keyring that includes all of the same CMKs that were used to encrypt the data, but it’s simpler and less error-prone. You might use a KMS Discovery keyring if you have no preference among the CMKs that encrypted a data key, and don’t mind the latency tradeoffs of trying CMKs in remote AWS Regions, or trying CMKs that will fail a permissions check while searching for one that succeeds. You can think of the KMS Discovery keyring as a universal keyring that you can use and reuse in your applications in many AWS Regions.

When you use a KMS Discovery keyring, the AWS Encryption SDK reads each encrypted data key and discovers the ARN of the CMK used to encrypt it. The AWS Encryption SDK then uses the configured IAM credentials to call AWS KMS in that CMK’s AWS Region to decrypt the data key. The AWS Encryption SDK repeats that process until it has decrypted the data key or runs out of encrypted data keys to try.


Aws::Cryptosdk::KmsKeyring::Builder().BuildDiscovery();

While KMS Discovery keyrings are simpler, you run the risk of having your data processing application make a cross-region call to an AWS KMS endpoint that adds unwanted latency. In my example, you might not want the decrypting application running in us-west-2 to wait for the AWS Encryption SDK to call AWS KMS in eu-central-1. To use only the CMKs in a particular AWS Region to decrypt the data keys, create a KMS Regional Discovery keyring that specifies the AWS Region, but not the CMK ARNs. In my example, the following keyring allows the AWS Encryption SDK to use only CMKs in us-west-2.


Aws::Cryptosdk::KmsKeyring::Builder()
        .WithKmsClient(create_kms_client(Aws::Region::US_WEST_2)).BuildDiscovery();

Because this example KMS Regional Discovery keyring specifies a client for the us-west-2 AWS Region, not a CMK ARN, the AWS Encryption SDK will only try to decrypt any encrypted data key it finds that was encrypted using any CMK in us-west-2. If, for some reason, none of the encrypted data keys was encrypted using a CMK in us-west-2, or the application decrypting the data doesn’t have permission to use CMKs in us-west-2, the AWS Encryption SDK call to decrypt the message with this keyring fails and fails fast. This may provide you with more options for deterministic error handling.

Keep in mind that the KMS Regional Discovery keyring allows the AWS Encryption SDK to try the CMK for each encrypted data key in the specified AWS Region. However, AWS KMS never uses a CMK until it verifies that the caller has permission to perform the requested operation. If the application doesn’t have kms:Decrypt permission for any of the CMKs that were used to encrypt the data keys, decryption fails.

Summary

Encrypting KMS data keys using multiple CMKs provides a variety of options to decrypt ciphertexts to meet your security, auditing, and latency requirements. My examples show how encrypted messages can be decrypted by using AWS KMS CMKs in multiple AWS Regions. You can also use the Encryption SDK with master keys supplied by a custom key management infrastructure independent of AWS.

The AWS Encryption SDK’s portable and interoperable encrypted message format makes it easier to combine multiple encrypted data keys with your encrypted data to support the decryption access scheme you want. The AWS Encryption SDK for C brings these utilities to a new, broader set of platform and application environments to complement the existing Java and Python versions.

You can find the AWS Encryption SDK for C on GitHub.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Crypto Tools forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Liz Roth

Liz is a Senior Software Development Engineer at Amazon Web Services. She has been at Amazon for more than 8 years and has more than 10 years of industry experience across a variety of areas, including security, networks, and operations.

AWS re:Invent Security Recap: Launches, Enhancements, and Takeaways

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-reinvent-security-recap-launches-enhancements-and-takeaways/

For more from Steve, follow him on Twitter

Customers continue to tell me that our AWS re:Invent conference is a winner. It’s a place where they can learn, meet their peers, and rediscover the art of the possible. Of course, there is always an air of anticipation around what new AWS service releases will be announced. This time around, we went even bigger than we ever have before. There were over 50,000 people in attendance, spread across the Las Vegas strip, with over 2,000 breakout sessions and jam-packed hands-on learning opportunities, including multi-day hackathons, workshops, and bootcamps.

A big part of all this activity included sharing knowledge about the latest AWS Security, Identity and Compliance services and features, as well as announcing new technology that we’re excited to see adopted so quickly across so many use cases.

Here are the top Security, Identity and Compliance releases from re:Invent 2018:

Keynotes: All that’s new

New AWS offerings provide more prescriptive guidance

The AWS re:Invent keynotes from Andy Jassy, Werner Vogels, and Peter DeSantis, as well as my own leadership session, featured the following new releases and service enhancements. We continue to strive to make architecting easier for developers, as well as our partners and our customers, so they stay secure as they build and innovate in the cloud.

  • We launched several prescriptive security services to assist developers and customers in understanding and managing their security and compliance postures in real time. My favorite new service is AWS Security Hub, which helps you centrally manage your security and compliance controls. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie, as well as from AWS Partner solutions. Findings are visually summarized on integrated dashboards with actionable graphs and tables. You can also continuously monitor your environment using automated compliance checks based on the AWS best practices and industry standards your organization follows. Get started with AWS Security Hub with just a few clicks in the Management Console; once enabled, Security Hub will begin aggregating and prioritizing findings. You can enable Security Hub on a single account with one click in the AWS Security Hub console or with a single API call.
  • Another prescriptive service we launched is called AWS Control Tower. One of the first things customers think about when moving to the cloud is how to set up a landing zone for their data. AWS Control Tower removes the guesswork, automating the set-up of an AWS landing zone that is secure, well-architected and supports multiple accounts. AWS Control Tower does this by using a set of blueprints that embody AWS best practices. Guardrails, both mandatory and recommended, are available for high-level, rule-based governance, allowing you to have the right operational control over your accounts. An integrated dashboard enables you to keep a watchful eye over the accounts provisioned, the guardrails that are enabled, and your overall compliance status. Sign up for the Control Tower preview, here.
  • The third prescriptive service, called AWS Lake Formation, will reduce your data lake build time from months to days. Prior to AWS Lake Formation, setting up a data lake involved numerous granular tasks. Creating a data lake with Lake Formation is as simple as defining where your data resides and what data access and security policies you want to apply. Lake Formation then collects and catalogs data from databases and object storage, moves the data into your new Amazon S3 data lake, cleans and classifies data using machine learning algorithms, and secures access to your sensitive data. Get started with a preview of AWS Lake Formation, here.
  • Next up, AWS IoT Greengrass enables enhanced security through hardware root of trust: private key storage on hardware secure elements, including Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs). Storing your private key on a hardware secure element adds hardware-root-of-trust security to existing AWS IoT Greengrass security features that include X.509 certificates for TLS mutual authentication and encryption of data both in transit and at rest. You can also use the hardware secure element to protect secrets that you deploy to your AWS IoT Greengrass device using AWS IoT Greengrass Secrets Manager. To try these security enhancements for yourself, check out https://aws.amazon.com/greengrass/.
  • You can now use the AWS Key Management Service (KMS) custom key store feature to gain more control over your KMS keys. Previously, KMS offered the ability to store keys in shared HSMs managed by KMS. However, we heard from customers that their needs were more nuanced. In particular, they needed to manage keys in single-tenant HSMs under their exclusive control. With KMS custom key store, you can configure your own CloudHSM cluster and authorize KMS to use it as a dedicated key store for your keys. Then, when you create keys in KMS, you can choose to generate the key material in your CloudHSM cluster. Get started with KMS custom key store by following the steps in this blog post.
  • We’re excited to announce the release of ATO on AWS to help customers and partners speed up the FedRAMP approval process (which has traditionally taken SaaS providers up to 2 years to complete). We’ve already had customers, such as Smartsheet, complete the process in less than 90 days with ATO on AWS. Customers will have access to training, tools, pre-built CloudFormation templates, control implementation details, and pre-built artifacts. Additionally, customers are able to access direct engagement and guidance from AWS compliance specialists and support from expert AWS consulting and technology partners who are a part of our Security Automation and Orchestration (SAO) initiative, including GitHub, Yubico, RedHat, Splunk, Allgress, Puppet, Trend Micro, Telos, CloudCheckr, Saint, Center for Internet Security (CIS), OKTA, Barracuda, Anitian, Kratos, and Coalfire. To get started with ATO on AWS, contact the AWS partner team at [email protected].
  • Finally, I announced our first conference dedicated to cloud security, identity, and compliance: AWS re:Inforce. The inaugural AWS re:Inforce, a hands-on gathering of like-minded security professionals, will take place in Boston, MA on June 25th and 26th, 2019 at the Boston Convention and Exhibition Center. The cost for a full conference pass will be $1,099. I'm hoping to see you all there. Sign up here to be notified when registration opens.
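
As an illustration of the "single API call" mentioned in the Security Hub item above, here is a minimal sketch using the AWS SDK for Python (boto3). The region is an assumption, and the findings returned depend on which integrations you have enabled.

import boto3

# Assumes credentials with Security Hub permissions are configured in the environment.
securityhub = boto3.client("securityhub", region_name="us-east-1")

# Enable Security Hub for this account with a single API call.
securityhub.enable_security_hub()

# Once integrated services start sending findings, retrieve a page of them.
findings = securityhub.get_findings(MaxResults=10)
for finding in findings["Findings"]:
    print(finding["Title"])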

Key re:Invent Takeaways

AWS is here to help you build

  1. Customers want to innovate, and the cloud needs to securely enable this. Companies need to be able to innovate to meet rapidly evolving consumer demands. This means they need cloud security capabilities they can rely on to meet their specific security requirements, while allowing them to continue to meet and exceed customer expectations. AWS Lake Formation, AWS Control Tower, and AWS Security Hub aggregate and automate otherwise manual processes involved with setting up a secure and compliant cloud environment, giving customers greater flexibility to innovate, create, and manage their businesses.
  2. Cloud security is as much art as it is science. Getting to what you really need to know about your security posture can be a challenge. At AWS, we've found that the sweet spot lies in services and features that enable you to continuously gain greater depth of knowledge into your security posture, while automating mission-critical tasks that relieve you from having to constantly monitor your infrastructure. This manifests itself in having an end-to-end automated remediation workflow. I spent some time covering this in my re:Invent session, and will continue to advocate using a combination of services, such as AWS Lambda, AWS WAF, Amazon S3, AWS CloudTrail, and AWS Config, to proactively identify, mitigate, and remediate threats that may arise as your infrastructure evolves.
  3. Remove human access to data. I’ve set a goal at AWS to reduce human access to data by 80%. While that number may sound lofty, it’s purposeful, because the only way to achieve this is through automation. There have been a number of security incidents in the news across industries, ranging from inappropriate access to personal information in healthcare, to credential stuffing in financial services. The way to protect against such incidents? Automate key security measures and minimize your attack surface by enabling access control and credential management with services like AWS IAM and AWS Secrets Manager. Additional gains can be found by leveraging threat intelligence through continuous monitoring of incidents via services such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie (intelligence from these services will now be available in AWS Security Hub).
  4. Get your leadership on board with your security plan. We offer 500+ security services and features; however, new services and technology can’t be wholly responsible for implementing reliable security measures. Security teams need to set expectations with leadership early, aligning on a number of critical protocols, including how to restrict and monitor human access to data, patching and log retention duration, credential lifespan, blast radius reduction, embedded encryption throughout AWS architecture, and canaries and invariants for security functionality. It’s also important to set security Key Performance Indicators (KPIs) to continuously track. At AWS, we monitor the number of AppSec reviews, how many security checks we can automate, third-party compliance audits, metrics on internal time spent, and conformity with Service Level Agreements (SLAs). While the needs of your business may vary, we find baseline KPIs to be consistent measures of security assurance that can be easily communicated to leadership.

Final Thoughts

Queen’s famous lyric, “I want it all, I want it all, and I want it now,” accurately captures the sentiment at re:Invent this year. Security will always be job zero for us, and we continue to iterate on behalf of customers so they can securely build, experiment and create … right now! AWS is trusted by many of the world’s most risk-sensitive organizations precisely because we have demonstrated this unwavering commitment to putting security above all. Still, I believe we are in the early days of innovation and adoption of the cloud, and I look forward to seeing both the gains and use cases that come out of our latest batch of tools and services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds five patents in the field of cloud security architecture. Follow Steve on Twitter

Are KMS custom key stores right for you?

Post Syndicated from Richard Moulds original https://aws.amazon.com/blogs/security/are-kms-custom-key-stores-right-for-you/

You can use the AWS Key Management Service (KMS) custom key store feature to gain more control over your KMS keys. The KMS custom key store integrates KMS with AWS CloudHSM to help satisfy compliance obligations that would otherwise require the use of on-premises hardware security modules (HSMs) while providing the AWS service integrations of KMS. However, the additional control comes with increased cost and potential impact on performance and availability. This post will help you decide if this feature is the best approach for you.

KMS is a fully managed service that generates encryption keys and helps you manage their use across more than 45 AWS services. It also supports the AWS Encryption SDK and other client-side encryption tools, and you can integrate it into your own applications. KMS is designed to meet the requirements of the vast majority of AWS customers. However, there are situations where customers need to manage their keys in single-tenant HSMs that they exclusively control. Previously, KMS did not meet these requirements since it offered only the ability to store keys in shared HSMs that are managed by KMS.

AWS CloudHSM is a service that’s primarily intended to support customer-managed applications that are specifically designed to use HSMs. It provides direct control over HSM resources, but the service isn’t, by itself, widely integrated with other AWS managed services. Before custom key store, this meant that if you required direct control of your HSMs but still wanted to use and store regulated data in AWS managed services, you had to choose between changing those requirements, not using a given AWS service, or building your own solution. KMS custom key store gives you another option.

How does a custom key store work?

With custom key store, you can configure your own CloudHSM cluster and authorize KMS to use it as a dedicated key store for your keys rather than the default KMS key store. Then, when you create keys in KMS, you can choose to generate the key material in your CloudHSM cluster. Your KMS customer master keys (CMKs) never leave the CloudHSM instances, and all KMS operations that use those keys are only performed in your HSMs. In all other respects, the master keys stored in your custom key store are used in a way that is consistent with other KMS CMKs.

This diagram illustrates the primary components of the service and shows how a cluster of two CloudHSM instances is connected to KMS to create a customer-controlled key store.

Figure 1: A cluster of two CloudHSM instances is connected to the KMS front-end hosts to create a customer-controlled key store

Because you control your CloudHSM cluster, you can take direct action to manage certain aspects of the lifecycle of your keys, independently of KMS. Specifically, you can verify that KMS correctly created keys in your HSMs and you can delete key material and restore keys from backup at any time. You can also choose to connect and disconnect the CloudHSM cluster from KMS, effectively isolating your keys from KMS. However, with more control comes more responsibility. It’s important that you understand the availability and durability impact of using this feature, and I discuss the issues in the next section.

Decision criteria

KMS customers who plan to use a custom key store tell us they expect to use the feature selectively, deciding on a key-by-key basis where to store each key. To help you decide if and how you might use the new feature, here are some important issues to consider.

Here are some reasons you might want to store a key in a custom key store:

  • You have keys that are required to be protected in a single-tenant HSM or in an HSM over which you have direct control.
  • You have keys that are explicitly required to be stored in an HSM validated at FIPS 140-2 level 3 overall (the HSMs used in the default KMS key store are validated to level 2 overall, with level 3 in several categories, including physical security).
  • You have keys that are required to be auditable independently of KMS.

And here are some considerations that might influence your decision to use a custom key store:

  • Cost — Each custom key store requires that your CloudHSM cluster contains at least two HSMs. CloudHSM charges vary by region, but you should expect costs of at least $1,000 per month, per HSM, if each device is permanently provisioned. This cost occurs regardless of whether you make any requests of the KMS API directly or indirectly through an AWS service.
  • Performance — The number of HSMs determines the rate at which keys can be used. It’s important that you understand the intended usage patterns for your keys and ensure that you have provisioned your HSM resources appropriately.
  • Availability — The number of HSMs and the use of Availability Zones (AZs) affects the availability of your cluster and, therefore, your keys. You must understand and assess the risk that a configuration error could leave a custom key store disconnected, or key material deleted and unrecoverable.
  • Operations — By using the custom key store feature, you will perform certain tasks that are normally handled by KMS. You will need to set up HSM clusters, configure HSM users, and potentially restore HSMs from backup. These are security-sensitive tasks for which you should have the appropriate resources and organizational controls in place to perform.

Getting Started

Here's a basic rundown of the steps that you'll take to create your first key in a custom key store within a given region; a code sketch of steps 3 through 6 follows the list.

  1. Create your CloudHSM cluster, initialize it, and add HSMs to the cluster. If you already have a CloudHSM cluster, you can use it as a custom key store in addition to your existing applications.
  2. Create a CloudHSM user so that KMS can access your cluster to create and use keys.
  3. Create a custom key store entry in KMS, give it a name, define which CloudHSM cluster you want it to use, and give KMS the credentials to access your cluster.
  4. Instruct KMS to make a connection to your cluster and log in.
  5. Create a CMK in KMS in the usual way except now select CloudHSM as the source of your key material. You’ll define administrators, users, and policies for the key as you would for any other CMK.
  6. Use the key via the existing KMS APIs, AWS CLI, or the AWS Encryption SDK. Requests that use the key don't need to be aware of whether the key is stored in a custom key store or the default KMS key store.
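
The following is a minimal sketch of steps 3 through 6 using the AWS SDK for Python (boto3). It assumes the CloudHSM cluster from steps 1 and 2 already exists and that the kmsuser credentials have been created; the cluster ID, file name, and password shown here are placeholders.

import time
import boto3

kms = boto3.client("kms")

# Step 3: register the CloudHSM cluster as a custom key store.
response = kms.create_custom_key_store(
    CustomKeyStoreName="ExampleKeyStore",
    CloudHsmClusterId="cluster-1a23b4cdefg",
    TrustAnchorCertificate=open("customerCA.crt").read(),
    KeyStorePassword="kmsuser-password",
)
key_store_id = response["CustomKeyStoreId"]

# Step 4: instruct KMS to connect to the cluster and log in.
# The connection is asynchronous, so poll until the key store reports CONNECTED.
kms.connect_custom_key_store(CustomKeyStoreId=key_store_id)
while True:
    state = kms.describe_custom_key_stores(
        CustomKeyStoreId=key_store_id
    )["CustomKeyStores"][0]["ConnectionState"]
    if state == "CONNECTED":
        break
    time.sleep(30)

# Step 5: create a CMK whose key material is generated in the CloudHSM cluster.
key = kms.create_key(
    Origin="AWS_CLOUDHSM",
    CustomKeyStoreId=key_store_id,
    Description="CMK backed by my CloudHSM cluster",
)
key_id = key["KeyMetadata"]["KeyId"]

# Step 6: use the key through the normal KMS APIs.
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"example secret")["CiphertextBlob"]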

Summary

Some customers need specific controls in place before they can use KMS to manage encryption keys in AWS. The new KMS custom key store feature is intended to satisfy that requirement. You can now apply the controls provided by CloudHSM to keys managed in KMS, without changing access control policies or service integration.

However, by using the new feature, you take responsibility for certain operational aspects that would otherwise be handled by KMS. It’s important that you have the appropriate controls in place and understand the performance and availability requirements of each key that you create in a custom key store.

If you’ve been prevented from migrating sensitive data to AWS because of specific key management requirements that are currently not met by KMS, consider using the new KMS custom key store feature.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Key Management Service discussion forum.

Want more AWS Security news? Follow us on Twitter.

Author

Richard Moulds

Richard is a Principal Product Manager at AWS. He's a member of the KMS team and is primarily focused on helping to define the product roadmap and satisfy customer requirements. Richard has more than 15 years of experience helping customers build encryption and key management systems to protect their data. His attraction to cryptography stems from the challenge of taking such a complex subject and translating it into simple solutions that customers should be able to take for granted, on a grand scale. When he's not thinking ahead he's focused on the past, restoring classic cars, the more rust the better.

Encrypting messages published to Amazon SNS with AWS KMS

Post Syndicated from Michelle Mercier original https://aws.amazon.com/blogs/compute/encrypting-messages-published-to-amazon-sns-with-aws-kms/

Amazon Simple Notification Service (Amazon SNS) is a fully managed pub/sub messaging service for decoupling event-driven microservices, distributed systems, and serverless applications. Enterprises around the world use Amazon SNS to support applications that handle private and sensitive data. Many of these enterprises operate in regulated markets, and their systems are subject to stringent security and compliance standards, such as HIPAA for healthcare, PCI DSS for finance, and FedRAMP for government. To address the requirements of highly critical workloads, Amazon SNS provides message encryption in transit, based on Amazon Trust Services (ATS) certificates, as well as message encryption at rest, using AWS Key Management Service (AWS KMS) keys. 

Message encryption in transit

The Amazon SNS API is served through Secure HTTP (HTTPS) and encrypts all messages in transit with Transport Layer Security (TLS) certificates issued by ATS. These certificates verify the identity of the Amazon SNS API server whenever an encrypted connection is established. A certificate authority (CA) issues the certificate to a specific domain. Thus, when a domain presents a certificate issued by a trusted CA, the API client can determine that it’s safe to establish a connection.

Message encryption at rest

Amazon SNS supports encrypted topics. When you publish messages to encrypted topics, Amazon SNS uses customer master keys (CMK), powered by AWS KMS, to encrypt your messages. Amazon SNS supports customer-managed as well as AWS-managed CMKs. As soon as Amazon SNS receives your messages, the encryption takes place on the server, using a 256-bit AES-GCM algorithm. The messages are stored in encrypted form across multiple Availability Zones (AZs) for durability and are decrypted just before being delivered to subscribed endpoints, such as Amazon Simple Queue Service (Amazon SQS) queues, AWS Lambda functions, and HTTP and HTTPS webhooks.

 

Notes:

  • Amazon SNS encrypts only the body of the messages that you publish. It doesn’t encrypt message metadata (identifier, subject, timestamp, and attributes), topic metadata (name and attributes), or topic metrics. Thus, encryption doesn’t affect the operations of Amazon SNS, such as message fanout and message filtering.
  • Amazon SNS doesn't retroactively encrypt messages that were published before server-side encryption (SSE) was enabled for the topic. In addition, any encrypted message retains its encryption even if you disable the encryption of its topic. It can take up to a minute for encryption to take effect after it's enabled.
  • Amazon SNS uses envelope encryption internally. It uses your configured CMK to generate a short-lived data encryption key (DEK) and then reuses this DEK to encrypt your published messages for 5 minutes. When the DEK expires, Amazon SNS automatically rotates to generate a new DEK from AWS KMS.

Applying encrypted topics in a use case

You can use encrypted topics for a variety of scenarios, especially for processing sensitive data, such as personally identifiable information (PII) and protected health information (PHI).

The following example illustrates an electronic medical record (EMR) system deployed in several clinics and hospitals. The system manages patients’ medical histories, diagnoses, medications, immunizations, visits, lab results, and doctors’ notes.

The clinic’s EMR system is integrated with three auxiliary systems. Each system, hosted on an Amazon EC2 instance, polls incoming patient records from an Amazon SQS queue and takes action:

  • The Billing system manages the clinic’s revenue cycles and processes accounts receivable and reimbursements.
  • The Scheduling system keeps patients informed of their upcoming clinic and lab appointments and reminds them to take their medications.
  • The Prescription system transmits electronic prescriptions to authorized pharmacies and tracks medication-filling history.

When a physician inputs a new record into a patient’s file, the EMR system publishes a message to an Amazon SNS topic. The topic in turn fans out a copy of the message to each one of the three subscribing Amazon SQS queues for parallel processing. When the message is retrieved from the queue, the Billing system invoices the patient, the Scheduling system books the patient’s next clinic or lab appointment, and the Prescription system orders the required medication from an authorized pharmacy.

The Amazon SNS topic and all Amazon SQS queues described in this use case are encrypted using AWS KMS keys that the clinic creates. The communication among services is based on HTTPS API calls. This end-to-end encryption protects patients’ medical records by making their content unavailable to unauthorized or anonymous users while messages are either in transit or at rest.

Creating, subscribing, and publishing to encrypted topics

You can create an Amazon SNS encrypted topic or an Amazon SQS encrypted queue by setting its attribute KmsMasterKeyId, which expects an AWS KMS key identifier. The key identifier can be a key ID, key ARN, or key alias. You can use the identifier of either a customer-managed CMK, such as alias/MyKey, or the AWS-managed CMK in your account, whose alias is alias/aws/sns.

The following code snippets work with the AWS SDK for Java. You can use these code samples for the healthcare system scenario in the previous section.

First, the principal publishing messages to the Amazon SNS encrypted topic must have access permission to execute the AWS KMS operations GenerateDataKey and Decrypt, in addition to the Amazon SNS operation Publish. The principal can be either an IAM user or an IAM role. The following IAM policy grants the required access permission to the principal.

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": [
      "kms:GenerateDataKey",
      "kms:Decrypt",
      "sns:Publish"
    ],
    "Resource": "*"
  }
}

If you want to use a customer-managed CMK, you must create the CMK and secure it by granting the publisher access to the same AWS KMS operations, GenerateDataKey and Decrypt. You grant this access permission using KMS key policies. The following JSON document shows an example policy statement for the customer-managed CMK used by the healthcare system. For more information on creating and securing keys, see Creating Keys and Using Key Policies in the AWS Key Management Service Developer Guide.

{
  "Version": "2012-10-17",
  "Id": "EMR-System-KeyPolicy",
  "Statement": [
    {
      "Sid": "Allow access for Root User",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Sid": "Allow access for Key Administrator",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:user/EMRAdmin"},
      "Action": [
        "kms:Create*",
        "kms:Describe*",
        "kms:Enable*",
        "kms:List*",
        "kms:Put*",
        "kms:Update*",
        "kms:Revoke*",
        "kms:Disable*",
        "kms:Get*",
        "kms:Delete*",
        "kms:TagResource",
        "kms:UntagResource",
        "kms:ScheduleKeyDeletion",
        "kms:CancelKeyDeletion"
      ],
      "Resource": "*"
    },
    {
      "Sid": "Allow access for Key User (SNS Publisher)",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:user/EMRUser"},
      "Action": [
        "kms:GenerateDataKey*",
        "kms:Decrypt"
      ],
      "Resource": "*"
    }
  ]
}

The following snippet uses the CMK to create an encrypted topic, three encrypted queues, and their corresponding subscriptions.

// Create API clients

String userArn = "arn:aws:iam::123456789012:user/EMRUser";

AWSCredentialsProvider credentials = getCredentials(userArn);

AmazonSNS sns = new AmazonSNSClient(credentials);
AmazonSQS sqs = new AmazonSQSClient(credentials);

// Create an attributes collection for the topic and queues

String keyId = "arn:aws:kms:us-east-1:123456789012:alias/EMRKey"; 

Map<String, String> attributes = new HashMap<>();
attributes.put("KmsMasterKeyId", keyId);

// Create an encrypted topic

String topicArn = sns.createTopic(
    new CreateTopicRequest("Patient-Records")
        .withAttributes(attributes)).getTopicArn();

// Create encrypted queues

String billingQueueUrl = sqs.createQueue(
    new CreateQueueRequest("Billing-Integration")
        .withAttributes(attributes)).getQueueUrl();

String schedulingQueueUrl = sqs.createQueue(
    new CreateQueueRequest("Scheduling-Integration")
        .withAttributes(attributes)).getQueueUrl();

String prescriptionQueueUrl = sqs.createQueue(
    new CreateQueueRequest("Prescription-Integration")
        .withAttributes(attributes)).getQueueUrl();

// Create subscriptions

Topics.subscribeQueue(sns, sqs, topicArn, billingQueueUrl);
Topics.subscribeQueue(sns, sqs, topicArn, schedulingQueueUrl);
Topics.subscribeQueue(sns, sqs, topicArn, prescriptionQueueUrl);

Next, the following code composes a JSON message and publishes it to the encrypted topic.

// Publish message to encrypted topic

String messageBody = "{\"patient\": 2911, \"medication\": 151}";
String messageSubject = "Electronic Medical Record - 3472";

sns.publish(
    new PublishRequest()
        .withSubject(messageSubject)
        .withMessage(messageBody)
        .withTopicArn(topicArn));

Note:

Publishing messages to encrypted topics isn’t different from publishing messages to standard, unencrypted topics. Your publisher needs access to perform AWS KMS operations GenerateDataKey and Decrypt on the configured CMK. All of the encryption logic is offloaded to Amazon SNS, and the message is delivered to all subscribed endpoints.

A copy of the message is now available in each subscribing queue. The final code snippet retrieves the messages from the encrypted queues.

// Retrieve messages from encrypted queues

List<Message> messagesForBilling = sqs.receiveMessage(
    new ReceiveMessageRequest(billingQueueUrl)).getMessages();

List<Message> messagesForScheduling = sqs.receiveMessage(
    new ReceiveMessageRequest(schedulingQueueUrl)).getMessages();

List<Message> messagesForPrescription = sqs.receiveMessage(
    new ReceiveMessageRequest(prescriptionQueueUrl)).getMessages();

Note:

Retrieving messages from encrypted queues isn’t different from retrieving messages from standard, unencrypted queues. All of the decryption logic is offloaded to Amazon SQS.

Enabling compatibility between encrypted topics and event sources

Several AWS services publish events to Amazon SNS topics. To allow these event sources to work with encrypted topics, you must first create a customer-managed CMK and then add the following statement to the policy of the CMK.

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "service.amazonaws.com"},
        "Action": ["kms:GenerateDataKey*", "kms:Decrypt"],
        "Resource": "*"
    }]
}

You can use the following example service principals in the statement:

Other Amazon SNS event sources require you to provide an IAM role, as opposed to their service principal, in the KMS key policy. This set of event sources includes the following:

Once the CMK key policy has been configured, you can enable encryption on the topic using the CMK, and then provide the encrypted topic’s ARN to the event source.

Note:

As of November 2018, Amazon CloudWatch alarms don’t yet work with Amazon SNS encrypted topics. For information on publishing alarms to standard, unencrypted topics, see Using Amazon CloudWatch Alarms in the Amazon CloudWatch User Guide.

Publishing messages privately to encrypted topics through VPC endpoints

In addition to encrypting messages in transit and at rest, you can further harden the security and privacy of your applications by publishing messages to encrypted topics privately, without traversing the public Internet. Amazon SNS supports VPC endpoints via AWS PrivateLink. You can use VPC endpoints to privately publish messages to both standard and encrypted topics from a virtual private cloud (VPC) subnet. When you use AWS PrivateLink, you don’t have to set up an internet gateway, network address translation (NAT) device, or virtual private network (VPN) connection. For more information, see Publishing to Amazon SNS topics from Amazon Virtual Private Cloud in the Amazon Simple Notification Service Developer Guide.
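
As a hedged illustration, the following sketch uses the AWS SDK for Python (boto3) to create an interface VPC endpoint for Amazon SNS; the VPC, subnet, and security group IDs are placeholders, and the service name varies by region.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an interface endpoint so publishers inside the VPC can reach the
# Amazon SNS API without traversing the public internet.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.sns",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
print(response["VpcEndpoint"]["VpcEndpointId"])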

Auditing the usage of encrypted topics

You can use AWS CloudTrail to audit the usage of the AWS KMS keys applied to your Amazon SNS topics. AWS CloudTrail creates log files that contain a history of AWS API calls and related events for your account. These log files include all AWS KMS API requests made with the AWS Management Console, SDKs, and Command Line Tools, as well as those made through integrated AWS services. You can use these log files to get information about when your CMK was used, the operation that was requested, the identity of the requester, and the IP address that the request came from. For more information, see Logging AWS KMS API calls with AWS CloudTrail in the AWS Key Management Service Developer Guide.

Summary

Amazon SNS provides a full set of security features to protect your data from unauthorized and anonymous access, including message encryption in transit with Amazon ATS certificates, message encryption at rest with AWS KMS keys, message privacy with AWS PrivateLink, and auditing with AWS CloudTrail. Moreover, you can subscribe Amazon SQS encrypted queues to Amazon SNS encrypted topics to establish end-to-end encryption in your messaging use cases.

Amazon SNS encrypted topics are available in all AWS Regions where AWS KMS is available. For pricing details, see AWS KMS pricing and Amazon SNS pricing. There is no increase in Amazon SNS charges for using encrypted topics, beyond the AWS KMS request charges incurred. For more information, see Protecting Amazon SNS Data Using Server-Side Encryption (SSE) in the Amazon Simple Notification Service Developer Guide.

Get started today by creating your Amazon SNS encrypted topics via the AWS Management Console and AWS SDKs.
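
If you already have a standard topic, one way to enable server-side encryption on it is to set the KmsMasterKeyId attribute; the following sketch uses the AWS SDK for Python (boto3), with a placeholder topic ARN and the AWS-managed CMK alias for Amazon SNS.

import boto3

sns = boto3.client("sns")

# Enable SSE on an existing topic by pointing it at a KMS key.
sns.set_topic_attributes(
    TopicArn="arn:aws:sns:us-east-1:123456789012:Patient-Records",
    AttributeName="KmsMasterKeyId",
    AttributeValue="alias/aws/sns",
)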

Maintaining Transport Layer Security all the way to your container part 2: Using AWS Certificate Manager Private Certificate Authority

Post Syndicated from Nathan Taber original https://aws.amazon.com/blogs/compute/maintaining-transport-layer-security-all-the-way-to-your-container-part-2-using-aws-certificate-manager-private-certificate-authority/

This post contributed by AWS Senior Cloud Infrastructure Architect Anabell St Vincent and AWS Solutions Architect Alex Kimber.

The previous post, Maintaining Transport Layer Security All the Way to Your Container, covered how the layer 4 Network Load Balancer can be used to maintain Transport Layer Security (TLS) all the way from the client to running containers.

In this post, we discuss the various options available for ensuring that certificates can be securely and reliably made available to containers. By simplifying the process of distributing or generating certificates and other secrets, it’s easier for you to build inherently secure architectures without compromising scalability.

There are several ways to achieve this:

1. Storing the certificate and private key in the Docker image

Certificates and keys can be included in the Docker image and made available to the container at runtime. This approach makes the deployment of containers with certificates and keys simple and easy.

However, there are some drawbacks. First, the certificates and keys need to be created, stored securely, and then included in the Docker image. There are some manual or additional automation steps required to securely create, retrieve, and include them for every new revision of the Docker image.

The following example Docker file creates an NGINX container that has the certificate and the key included:

FROM nginx:alpine

# Copy in secret materials
RUN mkdir -p /root/certs/nginxdemotls.com
COPY nginxdemotls.com.key /root/certs/nginxdemotls.com/nginxdemotls.com.key
COPY nginxdemotls.com.crt /root/certs/nginxdemotls.com/nginxdemotls.com.crt
RUN chmod 400 /root/certs/nginxdemotls.com/nginxdemotls.com.key

# Copy in nginx configuration files
COPY nginx.conf /etc/nginx/nginx.conf
COPY nginxdemo.conf /etc/nginx/conf.d
COPY nginxdemotls.conf /etc/nginx/conf.d

# Create folders to hold web content and copy in HTML files.
RUN mkdir -p /var/www/nginxdemo.com
RUN mkdir -p /var/www/nginxdemotls.com

COPY index.html /var/www/nginxdemo.com/index.html
COPY indextls.html /var/www/nginxdemotls.com/index.html

From a security perspective, this approach has additional drawbacks. Because certificates and private keys are bundled with the Docker images, anyone with access to a Docker image can also retrieve the certificate and private key.
The other drawback is that updated certificates are not picked up automatically: the Docker image must be re-created to include any updated certificates, and running containers must either be restarted with the new image or have their certificates updated in place.

2. Storing the certificates in AWS Systems Manager Parameter Store and Amazon S3

The post Managing Secrets for Amazon ECS Applications Using Parameter Store and IAM Roles for Tasks explains how you can use Systems Manager Parameter Store to store secrets. Some customers use Parameter Store to keep their secrets for simpler retrieval, as well as fine-grained access control. Parameter Store allows for securing data using AWS Key Management Service (AWS KMS) for the encryption. Each encryption key created in KMS can be accessed and controlled using AWS Identity and Access Management (IAM) roles in addition to key policy functionality within KMS. This approach allows for resource-level permissions to each item that is stored in Parameter Store, based on the KMS key used for the encryption.

Certificates can be stored in Parameter Store using the Secure String type, with KMS providing the encryption. With this approach, you can make an API call to retrieve the certificate when the container is deployed. As mentioned earlier, the access to the certificate can be based on the role used to retrieve the certificate. The advantage of this approach is that the certificate can be replaced. The next time the container is deployed, it picks up the new certificate and there is no need to update the Docker image.

Currently, Parameter Store has a limit of 4,096 characters per parameter. This may not be sufficient for some types of certificates; for example, some x509 certificates include the chain and so exceed the 4,096-character limit. To avoid this size limitation, you can use Amazon S3 together with Parameter Store: store the certificate in Amazon S3, encrypted with KMS, and keep the private key or password in Parameter Store.

With this approach, there is no limitation on certificate length and the private key remains secured with KMS. However, it does involve some additional complexity in setting up the process of creating the certificates, storing them in S3, and then storing the password or private keys in Parameter Store. That is in addition to securing, trusting, and auditing the system handling the private keys and certificates.
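
As a minimal sketch of the retrieval step at container start-up, the following code uses the AWS SDK for Python (boto3); the bucket, object key, and parameter names are placeholders, and the container's task role is assumed to have the necessary S3, Systems Manager, and KMS permissions.

import boto3

s3 = boto3.client("s3")
ssm = boto3.client("ssm")

# Download the certificate, which is stored in S3 and encrypted with a KMS key.
s3.download_file("my-certificates-bucket",
                 "nginxdemotls.com/nginxdemotls.com.crt",
                 "/root/certs/nginxdemotls.com.crt")

# Retrieve the private key from Parameter Store as a SecureString.
# WithDecryption asks Systems Manager to decrypt it with the associated KMS key.
private_key = ssm.get_parameter(
    Name="/tls/nginxdemotls.com/private-key",
    WithDecryption=True,
)["Parameter"]["Value"]

with open("/root/certs/nginxdemotls.com.key", "w") as key_file:
    key_file.write(private_key)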

3. Storing the certificates in AWS Secrets Manager

AWS Secrets Manager offers a number of features to allow you to store and manage credentials, keys, and other secret materials. This eliminates the need to store these materials with the application code and instead allows them to be referenced on demand. By centralizing the management of secret materials, this single service can manage fine-grained access control through granular IAM policies as well as the revocation and rotation, all through API calls.

All materials stored in the AWS Secrets Manager are encrypted with the customer’s choice of KMS key. The post AWS Secrets Manager: Store, Distribute, and Rotate Credentials Securely shows how AWS Secrets Manager can be used to store RDS database credentials. However, the same process can apply to TLS certificates and keys.

Secrets currently have a limit of 4,096 characters. This approach may be unsuitable for some x509 certificates that include the chain and can exceed this limit. This limit applies to the sum of all key-value pairs within a single secret, so certificates and keys may need to be stored in separate secrets.

After the secure material is in place, it can be retrieved by the container instance at runtime via the AWS Command Line Interface (AWS CLI) or directly from within the application code. All that’s required is for the container task role to have the requisite permissions in IAM to read the secrets.
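
The following is a minimal sketch of that retrieval using the AWS SDK for Python (boto3); the secret names are placeholders, and the certificate and key are assumed to be stored as two separate secrets to stay within the per-secret size limit.

import boto3

secretsmanager = boto3.client("secretsmanager")

# Retrieve the TLS certificate and private key from two separate secrets.
certificate = secretsmanager.get_secret_value(
    SecretId="nginxdemotls.com/certificate")["SecretString"]
private_key = secretsmanager.get_secret_value(
    SecretId="nginxdemotls.com/private-key")["SecretString"]

with open("/root/certs/nginxdemotls.com.crt", "w") as cert_file:
    cert_file.write(certificate)
with open("/root/certs/nginxdemotls.com.key", "w") as key_file:
    key_file.write(private_key)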

With the exception of rotating RDS credentials, AWS Secrets Manager requires the user to provide Lambda function code, which is called on a configurable schedule to manage the rotation. This rotation would need to consider the generation of new keys and certificates and redeploying the containers.

4. Using self-signed certificates, generated as the Docker container is created

The advantage of this approach is that it allows the use of TLS communications without any of the complexity of distributing certificates or private keys. However, this approach does require implicit trust of the server. Some applications may generate warnings that there is no acceptable root of trust.

5. Building and managing a private certificate authority

A private certificate authority (CA) can offer greater security and flexibility than the solutions outlined earlier. Typically, a private CA solution would manage the following for each ‘Common name’:

  • A private key
  • A certificate, created with the private key
  • Lists of certificates issued and those that have been revoked
  • Policies for managing certificates, for example which services have the right to make a request for a new certificate
  • Audit logs to track the lifecycle of certificates, in particular to ensure timely renewal where necessary

It is possible for an organization to build and maintain their own certificate issuing platform. This approach requires the implementation of a platform that is highly available and secure. These types of systems add to the overall overhead of maintaining infrastructures from a security, availability, scalability, and maintenance perspective. Some customers have also implemented Lambda functions to achieve the same functionality when it comes to issuing private certificates.

While it’s possible to build a private CA for internal services, there are some challenges to be aware of. Any solution should provide a number of features that are key to ensuring appropriate management of the certificates throughout their lifecycle.

For instance, the solution must support the creation, tracking, distribution, renewal, and revocation of certificates. All of these operations must be provided with the requisite security and authentication controls to ensure that certificates are distributed appropriately.

Scalability is another consideration. As applications become increasingly stateless and elastic, it’s conceivable that certificates may be required for every new container instance or wildcard certificates created to support an environment. Whatever CA solution is implemented must be ready to accommodate such a load while also providing high availability.

These types of approaches have drawbacks from various perspectives:

  • Custom code can be hard to maintain
  • Additional security measures must be implemented
  • Certificate renewal and revocation mechanisms also must be implemented
  • The platform must be maintained and kept up-to-date from a patching perspective while maintaining high availability

6. Using the new ACM Private CA to issue private certificates

ACM Private CA offers a secure, managed infrastructure to support the issuance and revocation of private digital certificates. It supports RSA and ECDSA key types for CA keys used for the creation of new certificates, as well as certificate revocation lists (CRLs) to inform clients when a certificate should no longer be trusted. Currently, ACM Private CA does not offer root CA support.

The following screenshot shows a subordinate certificate that is available for use:

The private key for any private CA that you create with ACM Private CA is created and stored in a FIPS 140-2 Level 3 Hardware Security Module (HSM) managed by AWS. The ACM Private CA is also integrated with AWS CloudTrail, which allows you to record the audit trail of API calls made using the AWS Management Console, AWS CLI, and AWS SDKs.

Setting up ACM Private CA requires a root CA. This can be used to sign a certificate signing request (CSR) for the new subordinate CA, which is then imported into ACM Private CA. After this is complete, it's possible for containers within your platform to generate their own key pairs at runtime using OpenSSL. They can then use the key pairs to make their own CSR and ultimately receive their own certificate.

More specifically, the container would complete the following steps at runtime:

  1. Add OpenSSL to the Docker image (if it is not already included).
  2. Generate a key pair (a cryptographically related private and public key).
  3. Use that private key to make a CSR.
  4. Call the ACM Private CA API or CLI issue-certificate operation, which issues a certificate based on the CSR.
  5. Call the ACM Private CA API or CLI get-certificate operation, which returns an issued certificate.

The following diagram shows these steps:
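
As a hedged sketch of steps 4 and 5 above, the following code uses the AWS SDK for Python (boto3); the CA ARN reuses the placeholder from the sample IAM policy below, the CSR is assumed to have been generated with OpenSSL in steps 2 and 3, and the validity period is an arbitrary example.

import boto3

acm_pca = boto3.client("acm-pca")

ca_arn = ("arn:aws:acm-pca:us-east-1:1234567890:"
          "certificate-authority/2c4ccba1-215e-418a-a654-aaaaaaaa")
csr = open("/tmp/server.csr", "rb").read()

# Step 4: ask the private CA to issue a certificate for the CSR.
issued = acm_pca.issue_certificate(
    CertificateAuthorityArn=ca_arn,
    Csr=csr,
    SigningAlgorithm="SHA256WITHRSA",
    Validity={"Value": 30, "Type": "DAYS"},
)
certificate_arn = issued["CertificateArn"]

# Step 5: retrieve the issued certificate once issuance completes.
waiter = acm_pca.get_waiter("certificate_issued")
waiter.wait(CertificateAuthorityArn=ca_arn, CertificateArn=certificate_arn)

certificate = acm_pca.get_certificate(
    CertificateAuthorityArn=ca_arn,
    CertificateArn=certificate_arn,
)["Certificate"]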

The authorization to successfully request a certificate is controlled via IAM policies, which can be attached via a role to the Amazon ECS task. Containers require the 'Allow' effect for at least the acm-pca:GetCertificate and acm-pca:IssueCertificate actions. The following is a sample IAM policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": "acm-pca:*",
            "Resource": "arn:aws:acm-pca:us-east-1:1234567890:certificate-authority/2c4ccba1-215e-418a-a654-aaaaaaaa"
        }
    ]
}

For additional security, it is possible to store the certificate and keys in a temporary volume mounted in memory through the ‘tmpfs’ parameter. With this option enabled, the secure material is never written to the filesystem of the host machine.

Note: This feature is not currently available for containers run on AWS Fargate.
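
To illustrate the 'tmpfs' option, here is a hedged boto3 sketch of registering an ECS task definition with an in-memory mount; the family name, image URI, and size are placeholders, and the task role that grants the ACM Private CA permissions is omitted for brevity.

import boto3

ecs = boto3.client("ecs")

# Mount an in-memory tmpfs volume at /root/certs so keys and certificates
# are never written to the host machine's filesystem.
ecs.register_task_definition(
    family="nginxdemotls",
    networkMode="awsvpc",
    containerDefinitions=[
        {
            "name": "nginxdemotls",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/nginxdemotls:latest",
            "memory": 512,
            "linuxParameters": {
                "tmpfs": [
                    {"containerPath": "/root/certs", "size": 64}
                ]
            },
        }
    ],
)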

The task now has the necessary materials and starts up. Clients should be able to establish the trust hierarchy from the server, through ACM Private CA to the root or intermediate CA.

One consideration to be aware of is that ACM Private CA currently has a limit of 50,000 certificates for each CA in each Region. If the requirement is for each short-lived container instance to have a separate certificate, then this limit could be reached.

Summary

The approaches outlined in this post describe the available options for ensuring that generation, storage, or distribution of sensitive material is done efficiently and securely. It should also be done in a way that supports the ephemeral, automatic scaling capabilities of container-based architectures. ACM Private CA provides a single interface to manage public and, now, private certificates, and it integrates seamlessly with AWS services.

If you have questions or suggestions, please comment below.

Podcast: How AWS KMS could help customers meet encryption and deletion requirements, including GDPR

Post Syndicated from Katie Doptis original https://aws.amazon.com/blogs/security/podcast-how-aws-kms-could-help-customers-meet-encryption-and-deletion-requirements-including-gdpr/

Encryption is a powerful tool to protect your data but it can be difficult to get right because it demands understanding how encryption keys are created, distributed, used, and managed. To make encryption easier to use, we created AWS Key Management Service (KMS) to let you scale your use of the cloud without struggling to ensure encryption is used consistently across workloads.

Because AWS KMS makes it easy for you to create and control the encryption keys used to encrypt your data, the service can be used to meet both encryption and deletion requirements in a data lifecycle management policy. Cryptographic deletion is the idea that you can delete a relatively small number of keys to make a large amount of encrypted data irretrievable. This concept is being widely discussed as an option for organizations facing data deletion requirements, such as those in the EU's General Data Protection Regulation (GDPR).

Listen to the podcast and hear from Ken Beer, general manager of AWS KMS, about best practices related to encryption, key management, and cryptographic deletion. He also covers the advantages of KMS over on-premises systems and how the service has been designed so that even AWS operators can’t access customer keys.

How to retain system tables’ data spanning multiple Amazon Redshift clusters and run cross-cluster diagnostic queries

Post Syndicated from Karthik Sonti original https://aws.amazon.com/blogs/big-data/how-to-retain-system-tables-data-spanning-multiple-amazon-redshift-clusters-and-run-cross-cluster-diagnostic-queries/

Amazon Redshift is a data warehouse service that logs the history of the system in STL log tables. The STL log tables manage disk space by retaining only two to five days of log history, depending on log usage and available disk space.

To retain STL tables' data for an extended period, you usually have to create a replica table for every system table. Then, for each table, you load the data from the system table into its replica at regular intervals. By maintaining replica tables for STL tables, you can run diagnostic queries on historical data from the STL tables. You then can derive insights from query execution times, query plans, and disk-spill patterns, and make better cluster-sizing decisions. However, refreshing replica tables with live data from STL tables at regular intervals requires schedulers such as Cron or AWS Data Pipeline. Also, these tables are specific to one cluster and they are not accessible after the cluster is terminated. This is especially true for transient Amazon Redshift clusters that last for only a finite period of ad hoc query execution.

In this blog post, I present a solution that exports system tables from multiple Amazon Redshift clusters into an Amazon S3 bucket. This solution is serverless, and you can schedule it as frequently as every five minutes. The AWS CloudFormation deployment template that I provide automates the solution setup in your environment. The system tables’ data in the Amazon S3 bucket is partitioned by cluster name and query execution date to enable efficient joins in cross-cluster diagnostic queries.

I also provide another CloudFormation template later in this post. This second template helps to automate the creation of tables in the AWS Glue Data Catalog for the system tables’ data stored in Amazon S3. After the system tables are exported to Amazon S3, you can run cross-cluster diagnostic queries on the system tables’ data and derive insights about query executions in each Amazon Redshift cluster. You can do this using Amazon QuickSight, Amazon Athena, Amazon EMR, or Amazon Redshift Spectrum.

You can find all the code examples in this post, including the CloudFormation templates, AWS Glue extract, transform, and load (ETL) scripts, and the resolution steps for common errors you might encounter in this GitHub repository.

Solution overview

The solution in this post uses AWS Glue to export system tables’ log data from Amazon Redshift clusters into Amazon S3. The AWS Glue ETL jobs are invoked at a scheduled interval by AWS Lambda. AWS Systems Manager, which provides secure, hierarchical storage for configuration data management and secrets management, maintains the details of Amazon Redshift clusters for which the solution is enabled. The last-fetched time stamp values for the respective cluster-table combination are maintained in an Amazon DynamoDB table.

The following diagram covers the key steps involved in this solution.

The solution as illustrated in the preceding diagram flows like this (a sketch of steps 1 and 2 follows the list):

  1. The Lambda function, invoke_rs_stl_export_etl, is triggered at regular intervals, as controlled by Amazon CloudWatch. It’s triggered to look up the AWS Systems Manager parameter store to get the details of the Amazon Redshift clusters for which the system table export is enabled.
  2. The same Lambda function, based on the Amazon Redshift cluster details obtained in step 1, invokes the AWS Glue ETL job designated for the Amazon Redshift cluster. If an ETL job for the cluster is not found, the Lambda function creates one.
  3. The ETL job invoked for the Amazon Redshift cluster gets the cluster credentials from the parameter store. It gets from the DynamoDB table the last exported time stamp of when each of the system tables was exported from the respective Amazon Redshift cluster.
  4. The ETL job unloads the system tables’ data from the Amazon Redshift cluster into an Amazon S3 bucket.
  5. The ETL job updates the DynamoDB table with the last exported time stamp value for each system table exported from the Amazon Redshift cluster.
  6. The Amazon Redshift cluster system tables’ data is available in Amazon S3 and is partitioned by cluster name and date for running cross-cluster diagnostic queries.
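
As a simplified sketch of steps 1 and 2, the following Lambda handler (AWS SDK for Python, boto3) reads the enabled-cluster list from the parameter store and starts the Glue ETL job for each cluster; error handling and on-the-fly job creation are omitted.

import boto3

ssm = boto3.client("ssm")
glue = boto3.client("glue")

def lambda_handler(event, context):
    # Step 1: look up the clusters for which system table export is enabled.
    enabled_clusters = ssm.get_parameter(
        Name="redshift_query_logs.global.enabled_cluster_list"
    )["Parameter"]["Value"].split(",")

    # Step 2: invoke the AWS Glue ETL job designated for each cluster.
    for cluster in enabled_clusters:
        glue.start_job_run(JobName="{}_extract_rs_query_logs".format(cluster))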

Understanding the configuration data

This solution uses AWS Systems Manager parameter store to store the Amazon Redshift cluster credentials securely. The parameter store also securely stores other configuration information that the AWS Glue ETL job needs for extracting and storing system tables’ data in Amazon S3. Systems Manager comes with a default AWS Key Management Service (AWS KMS) key that it uses to encrypt the password component of the Amazon Redshift cluster credentials.

The following list explains the global parameters and cluster-specific parameters required in this solution. The global parameters are defined once and are applicable at the overall solution level. The cluster-specific parameters are specific to an Amazon Redshift cluster and repeat for each cluster for which you enable this post's solution. The CloudFormation template explained later in this post creates these parameters as part of the deployment process.

Global parameters (defined once and applied to all jobs):

  • redshift_query_logs.global.s3_prefix (String) – The Amazon S3 path where the query logs are exported. Under this path, each exported table is partitioned by cluster name and date.
  • redshift_query_logs.global.tempdir (String) – The Amazon S3 path that AWS Glue ETL jobs use for temporarily staging the data.
  • redshift_query_logs.global.role (String) – The name of the role that the AWS Glue ETL jobs assume. Just the role name is sufficient; the complete Amazon Resource Name (ARN) is not required.
  • redshift_query_logs.global.enabled_cluster_list (StringList) – A comma-separated list of cluster names for which system tables' data export is enabled. This gives you the flexibility to exclude certain clusters.

Cluster-specific parameters (one set for each cluster specified in the enabled_cluster_list parameter):

  • redshift_query_logs.<<cluster_name>>.connection (String) – The name of the AWS Glue Data Catalog connection to the Amazon Redshift cluster. For example, if the cluster name is product_warehouse, the entry is redshift_query_logs.product_warehouse.connection.
  • redshift_query_logs.<<cluster_name>>.user (String) – The user name that AWS Glue uses to connect to the Amazon Redshift cluster.
  • redshift_query_logs.<<cluster_name>>.password (SecureString) – The password that AWS Glue uses to connect to the Amazon Redshift cluster, encrypted with a key managed in AWS KMS.

For example, suppose that you have two Amazon Redshift clusters, product-warehouse and category-management, for which the solution described in this post is enabled. In this case, the parameters shown in the following screenshot are created by the solution deployment CloudFormation template in the AWS Systems Manager parameter store.

Solution deployment

To make it easier for you to get started, I created a CloudFormation template that automatically configures and deploys the solution—only one step is required after deployment.

Prerequisites

To deploy the solution, you must have one or more Amazon Redshift clusters in a private subnet. This subnet must have a network address translation (NAT) gateway or a NAT instance configured, and also a security group with a self-referencing inbound rule for all TCP ports. For more information about why AWS Glue ETL needs the configuration it does, described previously, see Connecting to a JDBC Data Store in a VPC in the AWS Glue documentation.

To start the deployment, launch the CloudFormation template:

CloudFormation stack parameters

The following list describes the parameters for deploying the solution to export query logs from multiple Amazon Redshift clusters.

  • S3Bucket (default: mybucket) – The bucket this solution uses to store the exported query logs, stage code artifacts, and perform unloads from Amazon Redshift. For example, the mybucket/extract_rs_logs/data prefix is used for storing all the exported query logs for each system table, partitioned by cluster. The mybucket/extract_rs_logs/temp/ prefix is used for temporarily staging the unloaded data from Amazon Redshift. The mybucket/extract_rs_logs/code prefix is used for storing all the code artifacts required for Lambda and the AWS Glue ETL jobs.
  • ExportEnabledRedshiftClusters (requires input) – A comma-separated list of cluster names from which the system table logs need to be exported.
  • DataStoreSecurityGroups (requires input) – A list of security groups with an inbound rule to the Amazon Redshift clusters provided in the ExportEnabledRedshiftClusters parameter. These security groups should also have a self-referencing inbound rule on all TCP ports, as explained in Connecting to a JDBC Data Store in a VPC.

After you launch the template and create the stack, you see that the following resources have been created:

  1. AWS Glue connections for each Amazon Redshift cluster you provided in the CloudFormation stack parameter, ExportEnabledRedshiftClusters.
  2. All parameters required for this solution created in the parameter store.
  3. The Lambda function that invokes the AWS Glue ETL jobs for each configured Amazon Redshift cluster at a regular interval of five minutes.
  4. The DynamoDB table that captures the last exported time stamps for each exported cluster-table combination.
  5. The AWS Glue ETL jobs to export query logs from each Amazon Redshift cluster provided in the CloudFormation stack parameter, ExportEnabledRedshiftClusters.
  6. The IAM roles and policies required for the Lambda function and AWS Glue ETL jobs.

After the deployment

For each Amazon Redshift cluster for which you enabled the solution through the CloudFormation stack parameter, ExportEnabledRedshiftClusters, the automated deployment includes placeholder credentials that you must update after the deployment:

  1. Go to the parameter store.
  2. Note the parameters redshift_query_logs.<<cluster_name>>.user and redshift_query_logs.<<cluster_name>>.password that correspond to each Amazon Redshift cluster for which you enabled this solution. Edit these parameters to replace the placeholder values with the right credentials.

For example, if product-warehouse is one of the clusters for which you enabled system table export, you edit these two parameters with the right user name and password and choose Save parameter.

Querying the exported system tables

Within a few minutes after the solution deployment, you should see Amazon Redshift query logs being exported to the Amazon S3 location, <<S3Bucket_you_provided>>/extract_redshift_query_logs/data/. In that bucket, you should see the eight system tables partitioned by cluster name and date: stl_alert_event_log, stl_dlltext, stl_explain, stl_query, stl_querytext, stl_scan, stl_utilitytext, and stl_wlm_query.

To run cross-cluster diagnostic queries on the exported system tables, create external tables in the AWS Glue Data Catalog. To make it easier for you to get started, I provide a CloudFormation template that creates an AWS Glue crawler, which crawls the exported system tables stored in Amazon S3 and builds the external tables in the AWS Glue Data Catalog.

Launch this CloudFormation template to create external tables that correspond to the Amazon Redshift system tables. S3Bucket is the only input parameter required for this stack deployment. Provide the same Amazon S3 bucket name where the system tables’ data is being exported. After you successfully create the stack, you can see the eight tables in the database, redshift_query_logs_db, as shown in the following screenshot.

Now, navigate to the Athena console to run cross-cluster diagnostic queries. The following screenshot shows a diagnostic query executed in Athena that retrieves query alerts logged across multiple Amazon Redshift clusters.
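
You can also drive the same kind of diagnostic query from code through the Athena API. The sketch below is a plausible cross-cluster alert query, not the exact query in the screenshot: event and solution are columns of stl_alert_event_log, while the cluster column is assumed to be the partition column the crawler creates, and the results bucket is a placeholder.

import boto3

athena = boto3.client("athena")

# Hypothetical cross-cluster alert summary; verify the partition column names the crawler produced.
query = """
SELECT cluster, event, solution, COUNT(*) AS occurrences
FROM stl_alert_event_log
GROUP BY cluster, event, solution
ORDER BY occurrences DESC
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "redshift_query_logs_db"},
    ResultConfiguration={"OutputLocation": "s3://mybucket/athena-results/"},  # placeholder
)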

You can build the following example Amazon QuickSight dashboard by running cross-cluster diagnostic queries on Athena to identify the hourly query count and the key query alert events across multiple Amazon Redshift clusters.

How to extend the solution

You can extend this post’s solution in two ways:

  • Add any new Amazon Redshift clusters that you spin up after you deploy the solution.
  • Add other system tables or custom query results to the list of exports from an Amazon Redshift cluster.

Extend the solution to other Amazon Redshift clusters

To extend the solution to more Amazon Redshift clusters, add the three cluster-specific parameters in the AWS Systems Manager parameter store following the guidelines earlier in this post. Modify the redshift_query_logs.global.enabled_cluster_list parameter to append the new cluster to the comma-separated string.
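
Scripted, those two steps might look like the following boto3 sketch. It assumes a hypothetical new cluster named new-warehouse and the parameter naming convention used in this post; create any other cluster-specific parameters the earlier configuration section calls for in the same way, and match the parameter types your original deployment used.

import boto3

ssm = boto3.client("ssm")
new_cluster = "new-warehouse"  # hypothetical cluster name

# Cluster-specific credential parameters (types here are assumptions; match your deployment).
ssm.put_parameter(Name="redshift_query_logs.{}.user".format(new_cluster),
                  Value="rs_log_reader", Type="String", Overwrite=True)
ssm.put_parameter(Name="redshift_query_logs.{}.password".format(new_cluster),
                  Value="REPLACE_WITH_PASSWORD", Type="SecureString", Overwrite=True)

# Append the new cluster to the global comma-separated cluster list.
current = ssm.get_parameter(Name="redshift_query_logs.global.enabled_cluster_list")["Parameter"]["Value"]
ssm.put_parameter(Name="redshift_query_logs.global.enabled_cluster_list",
                  Value=",".join([current, new_cluster]), Overwrite=True)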

Extend the solution to add other tables or custom queries to an Amazon Redshift cluster

The current solution ships with the export functionality for the following Amazon Redshift system tables:

  • stl_alert_event_log
  • stl_ddltext
  • stl_explain
  • stl_query
  • stl_querytext
  • stl_scan
  • stl_utilitytext
  • stl_wlm_query

You can easily add another system table or custom query by adding a few lines of code to the AWS Glue ETL job, <<cluster-name>>_extract_rs_query_logs. For example, suppose that from the product-warehouse Amazon Redshift cluster you want to export orders greater than $2,000. To do so, add the following five lines of code to the AWS Glue ETL job product-warehouse_extract_rs_query_logs, where product-warehouse is your cluster name (the five calls are also shown together in a consolidated sketch after these steps):

  1. Get the last-processed time-stamp value. The function creates a value if it doesn't already exist.

salesLastProcessTSValue = functions.getLastProcessedTSValue(trackingEntry="mydb.sales_2000", job_configs=job_configs)

  2. Run the custom query with the time stamp.

returnDF = functions.runQuery(query="select * from sales s join order o where o.order_amnt > 2000 and sale_timestamp > '{}'".format(salesLastProcessTSValue), tableName="mydb.sales_2000", job_configs=job_configs)

  3. Save the results to Amazon S3.

functions.saveToS3(dataframe=returnDF, s3Prefix=s3Prefix, tableName="mydb.sales_2000", partitionColumns=["sale_date"], job_configs=job_configs)

  4. Get the latest time-stamp value from the returned data frame in Step 2.

latestTimestampVal = functions.getMaxValue(returnDF, "sale_timestamp", job_configs)

  5. Update the last-processed time-stamp value in the DynamoDB table.

functions.updateLastProcessedTSValue("mydb.sales_2000", latestTimestampVal[0], job_configs)
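
For context, here is how those five calls fit together inside the job. This is only a consolidation of the steps above, assuming the functions helper module, job_configs, and s3Prefix that the generated Glue job already defines; it is not a drop-in replacement for the job's code.

# Sketch only: `functions`, `job_configs`, and `s3Prefix` come from the generated Glue job.
TRACKING_ENTRY = "mydb.sales_2000"  # hypothetical tracking name from the example above

# 1. Read (or initialize) the last-processed time stamp for this export.
last_ts = functions.getLastProcessedTSValue(trackingEntry=TRACKING_ENTRY, job_configs=job_configs)

# 2. Pull only the rows newer than that time stamp.
df = functions.runQuery(
    query="select * from sales s join order o where o.order_amnt > 2000 and sale_timestamp > '{}'".format(last_ts),
    tableName=TRACKING_ENTRY,
    job_configs=job_configs,
)

# 3. Write the incremental results to Amazon S3, partitioned by sale date.
functions.saveToS3(dataframe=df, s3Prefix=s3Prefix, tableName=TRACKING_ENTRY,
                   partitionColumns=["sale_date"], job_configs=job_configs)

# 4 and 5. Record the newest time stamp seen so the next run resumes from it.
latest_ts = functions.getMaxValue(df, "sale_timestamp", job_configs)
functions.updateLastProcessedTSValue(TRACKING_ENTRY, latest_ts[0], job_configs)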

Conclusion

In this post, I demonstrate a serverless solution to retain the system tables’ log data across multiple Amazon Redshift clusters. By using this solution, you can incrementally export the data from system tables into Amazon S3. By performing this export, you can build cross-cluster diagnostic queries, build audit dashboards, and derive insights into capacity planning by using services such as Athena. I also demonstrate how you can extend this solution to other ad hoc query use cases or tables other than system tables by adding a few lines of code.


Additional Reading

If you found this post useful, be sure to check out Using Amazon Redshift Spectrum, Amazon Athena, and AWS Glue with Node.js in Production and Amazon Redshift – 2017 Recap.


About the Author

Karthik Sonti is a senior big data architect at Amazon Web Services. He helps AWS customers build big data and analytical solutions and provides guidance on architecture and best practices.

Now Available: Encryption at Rest for Amazon DynamoDB

Post Syndicated from Nitin Sagar original https://aws.amazon.com/blogs/security/now-available-encryption-at-rest-for-amazon-dynamodb/

Today, AWS announced Amazon DynamoDB encryption at rest, a new DynamoDB feature that gives you enhanced security for your data at rest by encrypting it with AWS Key Management Service (AWS KMS) encryption keys associated with your account. Encryption at rest can help you meet your security requirements for regulatory compliance.

You now can create an encrypted DynamoDB table anytime with a single click in the AWS Management Console or a single API call. Encrypting DynamoDB data has no impact on table performance. DynamoDB encryption at rest is available starting today in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions for no additional fees.
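
As an illustration of the "single API call" path, the following boto3 sketch creates a table with encryption at rest enabled; the table name and key schema are placeholders, and SSESpecification is what turns the feature on.

import boto3

dynamodb = boto3.client("dynamodb")

# Placeholder table definition; SSESpecification={"Enabled": True} enables encryption at rest.
dynamodb.create_table(
    TableName="Orders",
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    SSESpecification={"Enabled": True},
)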

For more information, see the full AWS Blog post.

– Nitin