Tag Archives: AWS Encryption SDK

How to decrypt ciphertexts in multiple regions with the AWS Encryption SDK in C

Post Syndicated from Liz Roth original https://aws.amazon.com/blogs/security/how-to-decrypt-ciphertexts-multiple-regions-aws-encryption-sdk-in-c/

You’ve told us that you want to encrypt data once with AWS Key Management Service (AWS KMS) and decrypt that data with customer master keys (CMKs) that you specify, often with CMKs in different AWS Regions. Doing this saves you compute resources and helps you to enable secure and efficient high-availability schemes.

The AWS Crypto Tools team has introduced the AWS Encryption SDK for C so you can achieve these goals. The new tool also adds more options for language and platform support and is fully interoperable with the implementations in Java and Python.

The AWS Encryption SDK is a client-side encryption library that helps make it easier for you to implement encryption best practices in your applications. You can use it with master keys from multiple sources, including AWS KMS CMKs. The AWS Encryption SDK doesn’t require AWS KMS or any other AWS service.

You can use AWS KMS APIs directly to encrypt data keys using multiple CMKs, but the AWS Encryption SDK provides tools to make working with multiple CMKs even easier, with everything you need stored in the Encryption SDK’s portable encrypted message format. The AWS Encryption SDK for C uses the concept of keyrings, which makes it easy to work with ciphertexts encrypted using multiple CMKs.

In this post, I will walk you through an example using the new AWS Encryption SDK for C. I’ll focus on some highlights from example code in the context of what an example application deployment might look like. You can find the complete example code in this GitHub repository. As always, we welcome your comments and your contributions.

Example scenario

To add some context around the example code, assume that you have a data processing application deployed both in US West (Oregon) us-west-2 and EU Central (Frankfurt) eu-central-1. For added durability, this example application creates and encrypts data in us-west-2 before it’s copied to the eu-central-1 Region. You have assurance that you could decrypt that data in us-west-2 if needed, but you want to mitigate the case where the decryption service in us-west-2 is unavailable. So how do you ensure that you can decrypt your data in the eu-central-1 Region when you need to?

In this example, your data processing application uses the AWS Encryption SDK and AWS KMS to generate a 256-bit data key to encrypt content locally in us-west-2. The AWS Encryption SDK for C deletes the plaintext data key after use, but an encrypted copy of that data key is included in the encrypted message that the AWS Encryption SDK returns. This prevents you from losing the encrypted copy of the data key, which would make your encrypted content unrecoverable. The data key is encrypted under the AWS KMS CMKs in each of the two regions in which you might want to decrypt the data in the future.

A best practice is to plan to decrypt data using in-region data keys and CMKs. This reduces latency and simplifies the permissions and auditing properties of the decryption operation. The latency impact of the cross-region API calls occurs only during the encryption operation.

In this scenario, the AWS KMS CMK key policy permissions look like this:

  • To encrypt data, the AWS identity used by the data processing application in us-west-2 needs kms:GenerateDataKey permission on the us-west-2 CMK and kms:Encrypt permission on the eu-central-1 CMK. You can specify these permissions in a key policy or IAM policy. This lets the application create a data key in us-west-2 and encrypt that data key under CMKs in both AWS Regions. (A sketch of such a policy follows this list.)
  • To decrypt data, the AWS identity used by the data processing application needs kms:Decrypt permission on the CMK in us-west-2 or the CMK in eu-central-1.
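
For example, the encrypt-side permissions might look like the following IAM policy statement. This is only a sketch that reuses the placeholder account ID and key ARNs from this post; your key policies or IAM policies will name your own CMKs:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDataKeyGenerationInUsWest2",
      "Effect": "Allow",
      "Action": "kms:GenerateDataKey",
      "Resource": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
    },
    {
      "Sid": "AllowDataKeyEncryptionInEuCentral1",
      "Effect": "Allow",
      "Action": "kms:Encrypt",
      "Resource": "arn:aws:kms:eu-central-1:111122223333:key/0987dcba-09fe-87dc-65ba-ab0987654321"
    }
  ]
}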

Encryption path

First, define variables for the Amazon Resource Names (ARNs) of your CMKs in us-west-2 and eu-central-1. To encrypt with the AWS Encryption SDK for C, you can identify a CMK by its CMK ARN or by the alias ARN that is mapped to it.


const char *KEY_ARN_US_WEST_2 = "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab";

const char *KEY_ARN_EU_CENTRAL_1 = "arn:aws:kms:eu-central-1:111122223333:key/0987dcba-09fe-87dc-65ba-ab0987654321";

Now, use the CMK ARNs to create a keyring. In the Encryption SDK, a keyring is used to generate, encrypt, and decrypt data keys under multiple master keys. You’ll create a KMS keyring configured to use multiple CMKs.


struct aws_cryptosdk_keyring *kms_keyring =
    Aws::Cryptosdk::KmsKeyring::Builder().Build(KEY_ARN_US_WEST_2, { KEY_ARN_EU_CENTRAL_1 });

When the AWS Encryption SDK uses this keyring to encrypt data, it calls GenerateDataKey on the first CMK that you specify, and Encrypt on each of the remaining CMKs that you specify. The result is a plaintext data key generated in us-west-2, an encryption of the data key using the CMK in us-west-2, and an encryption of the data key using the CMK in eu-central-1.

The plaintext data key that AWS KMS generated in us-west-2 is protected under a TLS session that uses only cipher suites supporting forward secrecy. The process of sending that same plaintext data key to the AWS KMS endpoint in eu-central-1 for encryption is also protected under a similar TLS session.

The Encryption SDK uses the data key to encrypt your data, and it stores the encrypted data keys with your encrypted content. The result is an encrypted message that can be decrypted using the CMK in us-west-2 or the CMK in eu-central-1.

Now that you understand what’s going to happen after you create the keyring, I’ll return to the code sample. Next, you need to create an encrypt-mode session with your keyring. (When you create a session directly from a keyring, the SDK sets up a default cryptographic materials manager, or CMM, for you, so you don’t have to pass one in.) In the AWS Encryption SDK for C, you use a session to encrypt a single plaintext message or decrypt a single ciphertext message, regardless of its size. The session maintains the state of the message throughout its processing.


struct aws_cryptosdk_session *session = aws_cryptosdk_session_new_from_keyring(alloc, AWS_CRYPTOSDK_ENCRYPT, kms_keyring);
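
The alloc argument is an allocator from the aws-c-common library on which the SDK is built. As a minimal sketch (the variable name alloc is just the one this example uses), the default allocator is the usual choice:

/* aws_default_allocator() is declared in the aws-c-common headers. */
struct aws_allocator *alloc = aws_default_allocator();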

With the keyring and encrypt-mode session, the data processing application can ask the Encryption SDK to encrypt the data under the CMKs that you specified in two different AWS Regions:


/* Returns AWS_OP_SUCCESS when processing succeeds. */
aws_cryptosdk_session_process(
    session,
    out_ciphertext,
    out_ciphertext_buf_sz,
    out_ciphertext_len,
    in_plaintext,
    in_plaintext_len,
    &in_plaintext_consumed);
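
For a one-shot call like the one above, you can also confirm that the session has finished processing the whole message before you use the output, then clean up. This is a minimal sketch under the same assumptions as the snippet above, not part of the published example:

if (aws_cryptosdk_session_is_done(session)) {
    /* out_ciphertext now holds a complete encrypted message. */
}
aws_cryptosdk_session_destroy(session);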

The result is an encrypted message that contains the ciphertext and two encrypted copies of the same data key. One encrypted data key was encrypted by your CMK in us-west-2, and the other was encrypted by your CMK in eu-central-1.

Decryption path

In the AWS Encryption SDK for C, you use keyrings for both encrypting and decrypting. You can use the same keyring for both, or you can use different keyrings for each operation.

Why would you want to use a different keyring for decryption? At a high level, encrypt keyrings specify all CMKs that can decrypt the ciphertext. Decrypt keyrings constrain the CMKs the application is permitted to use.

Reusing a keyring for both encrypt and decrypt mode can simplify your AWS Encryption SDK client configuration, but splitting the keyring and using different AWS KMS clients provides more flexibility to meet your security and architecture goals. The option you choose depends in part on the constraints you want to place on the CMKs your application uses.

The Decrypt API in the AWS KMS service doesn’t permit you to specify a CMK as a request parameter. But the AWS Encryption SDK lets you specify one or many CMKs in a decryption keyring, or even discover which CMKs to try automatically. I’ll discuss each option in the sections that follow.

Decryption path 1: Use a specific CMK

This keyring option configures the AWS Encryption SDK to use only a specified CMK in the specified AWS Region. This implies that your data processing application will need kms:Decrypt permissions on that specific CMK and your application will always call the same AWS KMS endpoints in the specified AWS Region. CloudTrail events from the Decrypt API will also only appear in the specified AWS Region.

You might use a specific CMK when the user or application that is decrypting the data has kms:Decrypt permission on only one of the CMKs that encrypted the data keys.

The CMK that you specify to decrypt the data must be one of the CMKs that was used to encrypt the data. Make sure that at least one of the CMKs from your encrypt keyring is included in the decrypt keyring and that the caller has kms:Decrypt permission to use it.

In my example, I encrypted the data keys using CMKs in us-west-2 and eu-central-1, so I’ll start decrypting in eu-central-1 because I want a dedicated decrypt instance of the data processing application in eu-central-1. Assume the eu-central-1 data processing application has configured AWS IAM credentials for a principal with permission to call the Decrypt operation on the eu-central-1 CMK.

Configure a keyring that asks the AWS Encryption SDK to use the CMK in eu-central-1 to decrypt:

Aws::Cryptosdk::KmsKeyring::Builder().Build(KEY_ARN_EU_CENTRAL_1)

The Encryption SDK reads the encrypted message, finds the encrypted data key that was encrypted using the CMK in eu-central-1, and uses this keyring to decrypt.
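
For completeness, here’s a hedged sketch of what the full decrypt call in eu-central-1 might look like. The buffer variables and the error handling are illustrative placeholders, not part of the published example code:

struct aws_cryptosdk_keyring *decrypt_keyring =
    Aws::Cryptosdk::KmsKeyring::Builder().Build(KEY_ARN_EU_CENTRAL_1);

struct aws_cryptosdk_session *decrypt_session =
    aws_cryptosdk_session_new_from_keyring(alloc, AWS_CRYPTOSDK_DECRYPT, decrypt_keyring);

/* The session holds its own reference to the keyring. */
aws_cryptosdk_keyring_release(decrypt_keyring);

if (aws_cryptosdk_session_process(
        decrypt_session,
        out_plaintext,
        out_plaintext_buf_sz,
        &out_plaintext_len,
        in_ciphertext,
        in_ciphertext_len,
        &in_ciphertext_consumed) != AWS_OP_SUCCESS) {
    /* Handle the error, for example by inspecting aws_last_error(). */
}

aws_cryptosdk_session_destroy(decrypt_session);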

Decryption path 2: Use any of several CMKs

This keyring option configures the AWS Encryption SDK to try several specific CMKs during its decryption attempts, stopping as soon as it succeeds. You should configure the AWS IAM credentials used by your data processing application to have kms:Decrypt permissions on each of the specified regional CMKs.

Your application could end up calling multiple regional AWS KMS endpoints. CloudTrail events from the Decrypt API will appear in the AWS Region in which the decrypt operation succeeds, and in any of the other AWS Regions that the keyring attempts to use. The CMK that you specify to decrypt the data must be one of the CMKs that was used to encrypt the data. Make sure that at least one of the CMKs from your encrypt keyring is included in the decrypt keyring and that the application has kms:Decrypt permission to use it.

You might define an encryption keyring that includes multiple CMKs so that users with different permissions can decrypt the same message. For example, you might include CMKs from multiple AWS Regions in your encryption keyring.

Here’s an example keyring constructed with multiple CMKs:

Aws::Cryptosdk::KmsKeyring::Builder().Build(KEY_ARN_EU_CENTRAL_1, { KEY_ARN_US_WEST_2 })

The AWS Encryption SDK reads each of the encrypted data keys stored in the encrypted message in the order that they appear. For each data key, the Encryption SDK searches the keyring for the matching CMK that encrypted it. If it finds that CMK, the AWS Encryption SDK calls AWS KMS in the AWS Region where the CMK exists to decrypt that data key, then uses that decrypted key to decrypt the message. If the decryption operation fails for any reason, the AWS Encryption SDK moves on to the next encrypted data key in the message and tries again.

The AWS Encryption SDK will try to decrypt the encrypted message in this way until either decryption succeeds, or the AWS Encryption SDK has attempted and failed to decrypt any of the encrypted data keys using the CMKs specified in the keyring.

If this keyring configuration looks familiar, it’s because it’s similar to the configuration you used on the encrypt path when you encrypted under multiple CMKs. The difference is this:

  • Encryption: The AWS Encryption SDK uses every CMK in the keyring to encrypt the data key, and adds all of the encrypted data keys to the encrypted message.
  • Decryption: The AWS Encryption SDK attempts to decrypt the encrypted data keys using only the CMKs in the keyring. It stops as soon as it succeeds.

Decryption path 3: Use a KMS Discovery keyring

The previous decryption paths required you to keep track of the exact CMKs used during the encryption operation, which may suit your needs for security and event logging. But what if you want more flexibility? What if you want to change the CMKs that you use in encryption operations without updating the data processing application that decrypts your data? You can configure a keyring that doesn’t specify CMKs to use for decryption, but instead tries each CMK that encrypted a data key until decryption succeeds or all referenced CMKs fail. We call this configuration a KMS Discovery keyring.

A Discovery keyring is equivalent to a keyring that includes all of the same CMKs that were used to encrypt the data, but it’s simpler and less error-prone. You might use a KMS Discovery keyring if you have no preference among the CMKs that encrypted a data key, and don’t mind the latency tradeoffs of trying CMKs in remote AWS Regions, or trying CMKs that will fail a permissions check while searching for one that succeeds. You can think of the KMS Discovery keyring as a universal keyring that you can use and reuse in your applications in many AWS Regions.

When you use a KMS Discovery keyring, the AWS Encryption SDK reads each encrypted data key and discovers the ARN of the CMK used to encrypt it. The AWS Encryption SDK then uses the configured IAM credentials to call AWS KMS in that CMK’s AWS Region to decrypt the data key. The AWS Encryption SDK repeats that process until it has decrypted the data key or runs out of encrypted data keys to try.


Aws::Cryptosdk::KmsKeyring::Builder().BuildDiscovery();

While KMS Discovery keyrings are simpler, you run the risk of having your data processing application make a cross-region call to an AWS KMS endpoint that adds unwanted latency. In my example, you might not want the decrypting application running in us-west-2 to wait for the AWS Encryption SDK to call AWS KMS in eu-central-1. To use only the CMKs in a particular AWS Region to decrypt the data keys, create a KMS Regional Discovery keyring that specifies the AWS Region, but not the CMK ARNs. In my example, the following keyring allows the AWS Encryption SDK to use only CMKs in us-west-2.


Aws::Cryptosdk::KmsKeyring::Builder()
        .WithKmsClient(create_kms_client(Aws::Region::US_WEST_2)).BuildDiscovery();

Because this example KMS Regional Discovery keyring specifies a client for the us-west-2 AWS Region, not a CMK ARN, the AWS Encryption SDK will try to decrypt only those encrypted data keys that were encrypted under a CMK in us-west-2. If, for some reason, none of the encrypted data keys was encrypted using a CMK in us-west-2, or the application decrypting the data doesn’t have permission to use CMKs in us-west-2, the AWS Encryption SDK call to decrypt the message with this keyring fails, and fails fast. This may provide you with more options for deterministic error handling.
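
The create_kms_client helper in the snippet above comes from the example code for this post. A minimal version, assuming the AWS SDK for C++ KMS client, might look like the following sketch:

#include <aws/core/client/ClientConfiguration.h>
#include <aws/kms/KMSClient.h>

std::shared_ptr<Aws::KMS::KMSClient> create_kms_client(const Aws::String &region) {
    Aws::Client::ClientConfiguration client_config;
    client_config.region = region;  // for example, Aws::Region::US_WEST_2
    return Aws::MakeShared<Aws::KMS::KMSClient>("kms-regional-discovery", client_config);
}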

Keep in mind that the KMS Regional Discovery keyring allows the AWS Encryption SDK to try the CMK for each encrypted data key in the specified AWS Region. However, AWS KMS never uses a CMK until it verifies that the caller has permission to perform the requested operation. If the application doesn’t have kms:Decrypt permission for any of the CMKs that were used to encrypt the data keys, decryption fails.

Summary

Encrypting KMS data keys using multiple CMKs provides a variety of options to decrypt ciphertexts to meet your security, auditing, and latency requirements. My examples show how encrypted messages can be decrypted by using AWS KMS CMKs in multiple AWS Regions. You can also use the Encryption SDK with master keys supplied by a custom key management infrastructure independent of AWS.

The AWS Encryption SDK’s portable and interoperable encrypted message format makes it easier to combine multiple encrypted data keys with your encrypted data to support the decryption access scheme you want. The AWS Encryption SDK for C brings these utilities to a new, broader set of platform and application environments to complement the existing Java and Python versions.

You can find the AWS Encryption SDK for C on GitHub.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Crypto Tools forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Liz Roth

Liz is a Senior Software Development Engineer at Amazon Web Services. She has been at Amazon for more than 8 years and has more than 10 years of industry experience across a variety of areas, including security, networks, and operations.

How to Encrypt Amazon S3 Objects with the AWS SDK for Ruby

Post Syndicated from Doug Schwartz original https://aws.amazon.com/blogs/security/how-to-encrypt-amazon-s3-objects-with-the-aws-sdk-for-ruby/


Recently, Amazon announced some new Amazon S3 encryption and security features. The AWS Blog post showed how to use the Amazon S3 console to take advantage of these new features. However, if you have a large number of Amazon S3 buckets, using the console to implement these features could take hours, if not days. As an alternative, I created documentation topics in the AWS SDK for Ruby Developer Guide that include code examples showing you how to use the new Amazon S3 encryption features using the AWS SDK for Ruby.

What are my encryption options?

You can encrypt Amazon S3 bucket objects on a server or on a client:

  • When you encrypt objects on a server, you request that Amazon S3 encrypt the objects before saving them to disk in data centers and decrypt the objects when you download them. The main advantage of this approach is that Amazon S3 manages the entire encryption process.
  • When you encrypt objects on a client, you encrypt the objects before you upload them to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools. Use this option when:
    • Company policy and standards require it.
    • You already have a development process in place that meets your needs.

    Encrypting on the client has always been available, but you should know the following points:

    • You must be diligent about protecting your encryption keys, which is analogous to having a burglar-proof lock on your front door. If you leave a key under the mat, your security is compromised.
    • If you lose your encryption keys, you won’t be able to decrypt your data.

    If you encrypt objects on the client, we strongly recommend that you use an AWS Key Management Service (AWS KMS) managed customer master key (CMK).

How to use encryption on a server

You can specify that Amazon S3 automatically encrypts objects as you upload them to a bucket, or you can require that objects uploaded to a bucket include server-side encryption before the uploads can succeed.

The advantage of these settings is that they ensure objects uploaded to Amazon S3 are encrypted. Alternatively, you can have Amazon S3 encrypt individual objects on the server as you upload them to a bucket, or encrypt them on the server with your own key as you upload them.

The AWS SDK for Ruby Developer Guide now contains topics that explain your encryption options on a server.

How to use encryption on a client

You can encrypt objects on a client before you upload them to a bucket and decrypt them after you download them from a bucket by using the Amazon S3 encryption client.

The AWS SDK for Ruby Developer Guide now contains topics that explain your encryption options on the client.

Note: The Amazon S3 encryption client in the AWS SDK for Ruby is compatible with other Amazon S3 encryption clients, but it is not compatible with other AWS client-side encryption libraries, including the AWS Encryption SDK and the Amazon DynamoDB encryption client for Java. Each library returns a different ciphertext (“encrypted message”) format, so you can’t use one library to encrypt objects and a different library to decrypt them. For more information, see Protecting Data Using Client-Side Encryption.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions about encrypting objects on servers and clients, start a new thread on the Amazon S3 forum or contact AWS Support.

– Doug

Introducing the New GDPR Center and “Navigating GDPR Compliance on AWS” Whitepaper

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/introducing-the-new-gdpr-center-and-navigating-gdpr-compliance-on-aws-whitepaper/


At AWS re:Invent 2017, the AWS Compliance team participated in excellent engagements with AWS customers about the General Data Protection Regulation (GDPR), including discussions that generated helpful input. Today, I am announcing resulting enhancements to our recently launched GDPR Center and the release of a new whitepaper, Navigating GDPR Compliance on AWS. The resources available on the GDPR Center are designed to give you GDPR basics, and provide some ideas as you work out the details of the regulation and find a path to compliance.

In this post, I focus on two GDPR requirements, Article 25 and Article 32, and explain some of the AWS services and other resources that can help you meet them.

Background about the GDPR

The GDPR is a European privacy law that will become enforceable on May 25, 2018, and is intended to harmonize data protection laws throughout the European Union (EU) by applying a single data protection law that is binding throughout each EU member state. The GDPR not only applies to organizations located within the EU, but also to organizations located outside the EU if they offer goods or services to, or monitor the behavior of, EU data subjects. All AWS services will comply with the GDPR in advance of the May 25, 2018, enforcement date.

We are already seeing customers move personal data to AWS to help solve challenges in complying with the EU’s GDPR because of AWS’s advanced toolset for identifying, securing, and managing all types of data, including personal data. Steve Schmidt, the AWS CISO, has already written about the internal and external work we have been undertaking to help you use AWS services to meet your own GDPR compliance goals.

Article 25 – Data Protection by Design and by Default (Privacy by Design)

Privacy by Design is the integration of data privacy and compliance into the systems development process, enabling applications, systems, and accounts, among other things, to be secure by default. To secure your AWS account, we offer a script to evaluate your AWS account against the full Center for Internet Security (CIS) Amazon Web Services Foundations Benchmark 1.1. You can access this public benchmark on GitHub. Additionally, AWS Trusted Advisor is an online resource to help you improve security by optimizing your AWS environment. Among other things, Trusted Advisor lists a number of security-related controls you should be monitoring. AWS also offers AWS CloudTrail, a logging tool to track usage and API activity. Another example of tooling that enables data protection is Amazon Inspector, which includes a knowledge base of hundreds of rules (regularly updated by AWS security researchers) mapped to common security best practices and vulnerability definitions. Examples of built-in rules include checking for remote root login being enabled or vulnerable software versions installed. These and other tools enable you to design an environment that protects customer data by design.

An accurate inventory of all GDPR-impacting data is important but sometimes difficult to achieve. AWS has advanced tooling, such as Amazon Macie, to help you determine where customer data is present in your AWS resources. Macie uses advanced machine learning to automatically discover and classify data so that you can protect it, per Article 25.

Article 32 – Security of Processing

You can use many AWS services and features to secure the processing of data regulated by the GDPR. Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch resources in a virtual network that you define. You have complete control over your virtual networking environment, including the selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. With Amazon VPC, you can make the Amazon Cloud a seamless extension of your existing on-premises resources.

AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data, and uses hardware security modules (HSMs) to help protect your keys. Managing keys with AWS KMS allows you to choose to encrypt data either on the server side or the client side. AWS KMS is integrated with several other AWS services to help you protect the data you store with these services. AWS KMS is also integrated with CloudTrail to provide you with logs of all key usage to help meet your regulatory and compliance needs. You can also use the AWS Encryption SDK to correctly generate and use encryption keys, as well as protect keys after they have been used.

We also recently announced new encryption and security features for Amazon S3, including default encryption and a detailed inventory report. Services of this type as well as additional GDPR enablers will be published regularly on our GDPR Center.

Other resources

As you prepare for GDPR, you may want to visit our AWS Customer Compliance Center or Tools for Amazon Web Services to learn about options for building anything from small scripts that delete data to a full orchestration framework that uses AWS Code services.

-Chad

AWS Encryption SDK: How to Decide if Data Key Caching Is Right for Your Application

Post Syndicated from June Blender original https://aws.amazon.com/blogs/security/aws-encryption-sdk-how-to-decide-if-data-key-caching-is-right-for-your-application/


Today, the AWS Crypto Tools team introduced a new feature in the AWS Encryption SDK: data key caching. Data key caching lets you reuse the data keys that protect your data, instead of generating a new data key for each encryption operation.

Data key caching can reduce latency, improve throughput, reduce cost, and help you stay within service limits as your application scales. In particular, caching might help if your application is hitting the AWS Key Management Service (KMS) requests-per-second limit and raising the limit does not solve the problem.

However, these benefits come with some security tradeoffs. Encryption best practices generally discourage extensive reuse of data keys.

In this blog post, I explore those tradeoffs and provide information that can help you decide whether data key caching is a good strategy for your application. I also explain how data key caching is implemented in the AWS Encryption SDK and describe the security thresholds that you can set to limit the reuse of data keys. Finally, I provide some practical examples of using the security thresholds to meet cost, performance, and security goals.

Introducing data key caching

The AWS Encryption SDK is a client-side encryption library that makes it easier for you to implement cryptography best practices in your application. It includes secure default behavior for developers who are not encryption experts, while being flexible enough to work for the most experienced users.

In the AWS Encryption SDK, by default, you generate a new data key for each encryption operation. This is the most secure practice. However, in some applications, the overhead of generating a new data key for each operation is not acceptable.

Data key caching saves the plaintext and ciphertext of the data keys you use in a configurable cache. When you need a key to encrypt or decrypt data, you can reuse a data key from the cache instead of creating a new data key. You can create multiple data key caches and configure each one independently. Most importantly, the AWS Encryption SDK provides security thresholds that you can set to determine how much data key reuse you will allow.

To make data key caching easier to implement, the AWS Encryption SDK provides LocalCryptoMaterialsCache, an in-memory, least-recently-used cache with a configurable size. The SDK manages the cache for you, including adding store, search, and match logic to all encryption and decryption operations.

We recommend that you use LocalCryptoMaterialsCache as it is, but you can customize it, or substitute a compatible cache. However, you should never store plaintext data keys on disk.

The AWS Encryption SDK documentation includes sample code in Java and Python for an application that uses data key caching to encrypt data sent to and from Amazon Kinesis Streams.

Balance cost and security

Your decision to use data key caching should balance cost—in time, money, and resources—against security. In every consideration, though, the balance should favor your security requirements. As a rule, use the minimal caching required to achieve your cost and performance goals.

Before implementing data key caching, consider the details of your applications, your security requirements, and the cost and frequency of your encryption operations. In general, your application can benefit from data key caching if each operation is slow or expensive, or if you encrypt and decrypt data frequently. If the cost and speed of your encryption operations are already acceptable or can be improved by other means, do not use a data key cache.

Data key caching can be the right choice for your application if you have high encryption and decryption traffic. For example, if you are hitting your KMS requests-per-second limit, caching can help because you get some of your data keys from the cache instead of calling KMS for every request.

However, you can also create a case in the AWS Support Center to raise the KMS limit for your account. If raising the limit solves the problem, you do not need data key caching.

Configure caching thresholds for cost and security

In the AWS Encryption SDK, you can configure data key caching to allow just enough data key reuse to meet your cost and performance targets while conforming to the security requirements of your application. The SDK enforces the thresholds so that you can use them with any compatible cache.

The data key caching security thresholds apply to each cache entry. The AWS Encryption SDK will not use the data key from a cache entry that exceeds any of the thresholds that you set.

  • Maximum age (required): Set the lifetime of each cached key to be long enough to get cache hits, but short enough to limit exposure of a plaintext data key in memory to a specific time period.

You can use the maximum age threshold like a key rotation policy. Use it to limit the reuse of data keys and minimize exposure of cryptographic materials. You can also use it to evict data keys when the type or source of data that your application is processing changes.

  • Maximum messages encrypted (optional; default is 2^32 messages): Set the number of messages protected by each cached data key to be large enough to get value from reuse, but small enough to limit the number of messages that might potentially be exposed.

The AWS Encryption SDK only caches data keys that use an algorithm suite with a key derivation function. This technique avoids the cryptographic limits on the number of bytes encrypted with a single key. However, the more data that a key encrypts, the more data that is exposed if the data key is compromised.

Limiting the number of messages, rather than the number of bytes, is particularly useful if your application encrypts many messages of a similar size or when potential exposure must be limited to very few messages. This threshold is also useful when you want to reuse a data key for a particular type of message and know in advance how many messages of that type you have. You can also use an encryption context to select particular cached data keys for your encryption requests.

  • Maximum bytes encrypted (optional; default is 2^63 – 1): Set the bytes protected by each cached data key to be large enough to allow the reuse you need, but small enough to limit the amount of data encrypted under the same key.

Limiting the number of bytes, rather than the number of messages, is preferable when your application encrypts messages of widely varying size or when possibly exposing large amounts of data is much more of a concern than exposing smaller amounts of data.

In addition to these security thresholds, the LocalCryptoMaterialsCache in the AWS Encryption SDK lets you set its capacity, which is the maximum number of entries the cache can hold.

Use the capacity value to tune the performance of your LocalCryptoMaterialsCache. In general, use the smallest value that will achieve the performance improvements that your application requires. You might want to test with a very small cache of 5–10 entries and expand if necessary. You will need a slightly larger cache if you are using the cache for both encryption and decryption requests, or if you are using encryption contexts to select particular cache entries.

Consider these cache configuration examples

After you determine the security and performance requirements of your application, consider the cache security thresholds carefully and adjust them to meet your needs. There are no magic numbers for these thresholds: the ideal settings are specific to each application, its security and performance requirements, and budget. Use the minimal amount of caching necessary to get acceptable performance and cost.

The following examples show ways you can use the LocalCryptoMaterialsCache capacity setting and the security thresholds to help meet your security requirements:

  • Slow master key operations: If your master key processes only 100 transactions per second (TPS) but your application needs to process 1,000 TPS, you can meet your application requirements by allowing a maximum of 10 messages to be protected under each data key.
  • High frequency and volume: If your master key costs $0.01 per operation and you need to process a consistent 1,000 TPS while staying within a budget of $100,000 per month, allow a maximum of 275 messages for each cache entry. (A consistent 1,000 TPS is roughly 2.6 billion messages per month; at 275 messages per cached data key, that works out to about 9.4 million master key operations, or about $94,000 per month at $0.01 each.)
  • Burst traffic: If your application’s processing bursts to 100 TPS for five seconds in each minute but is otherwise zero, and your master key costs $0.01 per operation, setting maximum messages to 3 can achieve significant savings. To prevent data keys from being reused across bursts (55 seconds), set the maximum age of each cached data key to 20 seconds.
  • Expensive master key operations: If your application uses a low-throughput encryption service that costs as much as $1.00 per operation, you might want to minimize the number of operations. To do so, create a cache that is large enough to contain the data keys you need. Then, set the byte and message limits high enough to allow reuse while conforming to your security requirements. For example, if your security requirements do not permit a data key to encrypt more than 10 GB of data, setting bytes processed to 10 GB still significantly minimizes operations and conforms to your security requirements.

Learn more about data key caching

To learn more about data key caching, including how to implement it, how to set the security thresholds, and details about the caching components, see Data Key Caching in the AWS Encryption SDK. Also, see the AWS Encryption SDKs for Java and Python as well as the Javadoc and Python documentation.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions, file an issue in the GitHub repos for the Encryption SDK in Java or Python, or start a new thread on the KMS forum.

– June

In Case You Missed These: AWS Security Blog Posts from January, February, and March

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/in-case-you-missed-these-aws-security-blog-posts-from-january-february-and-march/


In case you missed any AWS Security Blog posts published so far in 2017, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from protecting dynamic web applications against DDoS attacks to monitoring AWS account configuration changes and API calls to Amazon EC2 security groups.

March

March 22: How to Help Protect Dynamic Web Applications Against DDoS Attacks by Using Amazon CloudFront and Amazon Route 53
Using a content delivery network (CDN) such as Amazon CloudFront to cache and serve static text and images or downloadable objects such as media files and documents is a common strategy to improve webpage load times, reduce network bandwidth costs, lessen the load on web servers, and mitigate distributed denial of service (DDoS) attacks. AWS WAF is a web application firewall that can be deployed on CloudFront to help protect your application against DDoS attacks by giving you control over which traffic to allow or block by defining security rules. When users access your application, the Domain Name System (DNS) translates human-readable domain names (for example, www.example.com) to machine-readable IP addresses (for example, 192.0.2.44). A DNS service, such as Amazon Route 53, can effectively connect users’ requests to a CloudFront distribution that proxies requests for dynamic content to the infrastructure hosting your application’s endpoints. In this blog post, I show you how to deploy CloudFront with AWS WAF and Route 53 to help protect dynamic web applications (with dynamic content such as a response to user input) against DDoS attacks. The steps shown in this post are key to implementing the overall approach described in AWS Best Practices for DDoS Resiliency and enable the built-in, managed DDoS protection service, AWS Shield.

March 21: New AWS Encryption SDK for Python Simplifies Multiple Master Key Encryption
The AWS Cryptography team is happy to announce a Python implementation of the AWS Encryption SDK. This new SDK helps manage data keys for you, and it simplifies the process of encrypting data under multiple master keys. As a result, this new SDK allows you to focus on the code that drives your business forward. It also provides a framework you can easily extend to ensure that you have a cryptographic library that is configured to match and enforce your standards. The SDK also includes ready-to-use examples. If you are a Java developer, you can refer to this blog post to see specific Java examples for the SDK. In this blog post, I show you how you can use the AWS Encryption SDK to simplify the process of encrypting data and how to protect your encryption keys in ways that help improve application availability by not tying you to a single region or key management solution.

March 21: Updated CJIS Workbook Now Available by Request
The need for guidance when implementing Criminal Justice Information Services (CJIS)–compliant solutions has become of paramount importance as more law enforcement customers and technology partners move to store and process criminal justice data in the cloud. AWS services allow these customers to easily and securely architect a CJIS-compliant solution when handling criminal justice data, creating a durable, cost-effective, and secure IT infrastructure that better supports local, state, and federal law enforcement in carrying out their public safety missions. AWS has created several documents (collectively referred to as the CJIS Workbook) to assist you in aligning with the FBI’s CJIS Security Policy. You can use the workbook as a framework for developing CJIS-compliant architecture in the AWS Cloud. The workbook helps you define and test the controls you operate, and document the dependence on the controls that AWS operates (compute, storage, database, networking, regions, Availability Zones, and edge locations).

March 9: New Cloud Directory API Makes It Easier to Query Data Along Multiple Dimensions
Today, we made available a new Cloud Directory API, ListObjectParentPaths, that enables you to retrieve all available parent paths for any directory object across multiple hierarchies. Use this API when you want to fetch all parent objects for a specific child object. The order of the paths and objects returned is consistent across iterative calls to the API, unless objects are moved or deleted. In case an object has multiple parents, the API allows you to control the number of paths returned by using a paginated call pattern. In this blog post, I use an example directory to demonstrate how this new API enables you to retrieve data across multiple dimensions to implement powerful applications quickly.

March 8: How to Access the AWS Management Console Using AWS Microsoft AD and Your On-Premises Credentials
AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD, is a managed Microsoft Active Directory (AD) hosted in the AWS Cloud. Now, AWS Microsoft AD makes it easy for you to give your users permission to manage AWS resources by using on-premises AD administrative tools. With AWS Microsoft AD, you can grant your on-premises users permissions to resources such as the AWS Management Console instead of adding AWS Identity and Access Management (IAM) user accounts or configuring AD Federation Services (AD FS) with Security Assertion Markup Language (SAML). In this blog post, I show how to use AWS Microsoft AD to enable your on-premises AD users to sign in to the AWS Management Console with their on-premises AD user credentials to access and manage AWS resources through IAM roles.

March 7: How to Protect Your Web Application Against DDoS Attacks by Using Amazon Route 53 and an External Content Delivery Network
Distributed Denial of Service (DDoS) attacks are attempts by a malicious actor to flood a network, system, or application with more traffic, connections, or requests than it is able to handle. To protect your web application against DDoS attacks, you can use AWS Shield, a DDoS protection service that AWS provides automatically to all AWS customers at no additional charge. You can use AWS Shield in conjunction with DDoS-resilient web services such as Amazon CloudFront and Amazon Route 53 to improve your ability to defend against DDoS attacks. Learn more about architecting for DDoS resiliency by reading the AWS Best Practices for DDoS Resiliency whitepaper. You also have the option of using Route 53 with an externally hosted content delivery network (CDN). In this blog post, I show how you can help protect the zone apex (also known as the root domain) of your web application by using Route 53 to perform a secure redirect to prevent discovery of your application origin.


February

February 27: Now Generally Available – AWS Organizations: Policy-Based Management for Multiple AWS Accounts
Today, AWS Organizations moves from Preview to General Availability. You can use Organizations to centrally manage multiple AWS accounts, with the ability to create a hierarchy of organizational units (OUs). You can assign each account to an OU, define policies, and then apply those policies to an entire hierarchy, specific OUs, or specific accounts. You can invite existing AWS accounts to join your organization, and you can also create new accounts. All of these functions are available from the AWS Management Console, the AWS Command Line Interface (CLI), and through the AWS Organizations API. To read the full AWS Blog post about today’s launch, see AWS Organizations – Policy-Based Management for Multiple AWS Accounts.

February 23: s2n Is Now Handling 100 Percent of SSL Traffic for Amazon S3
Today, we’ve achieved another important milestone for securing customer data: we have replaced OpenSSL with s2n for all internal and external SSL traffic in Amazon Simple Storage Service (Amazon S3) commercial regions. This was implemented with minimal impact to customers, and multiple means of error checking were used to ensure a smooth transition, including client integration tests, catching potential interoperability conflicts, and identifying memory leaks through fuzz testing.

February 22: Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console
AWS Identity and Access Management (IAM) roles enable your applications running on Amazon EC2 to use temporary security credentials. IAM roles for EC2 make it easier for your applications to make API requests securely from an instance because they do not require you to manage AWS security credentials that the applications use. Recently, we enabled you to use temporary security credentials for your applications by attaching an IAM role to an existing EC2 instance by using the AWS CLI and SDK. To learn more, see New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI. Starting today, you can attach an IAM role to an existing EC2 instance from the EC2 console. You can also use the EC2 console to replace an IAM role attached to an existing instance. In this blog post, I will show how to attach an IAM role to an existing EC2 instance from the EC2 console.

February 22: How to Audit Your AWS Resources for Security Compliance by Using Custom AWS Config Rules
AWS Config Rules enables you to implement security policies as code for your organization and evaluate configuration changes to AWS resources against these policies. You can use Config rules to audit your use of AWS resources for compliance with external compliance frameworks such as CIS AWS Foundations Benchmark and with your internal security policies related to the US Health Insurance Portability and Accountability Act (HIPAA), the Federal Risk and Authorization Management Program (FedRAMP), and other regimes. AWS provides some predefined, managed Config rules. You also can create custom Config rules based on criteria you define within an AWS Lambda function. In this post, I show how to create a custom rule that audits AWS resources for security compliance by enabling VPC Flow Logs for an Amazon Virtual Private Cloud (VPC). The custom rule meets requirement 4.3 of the CIS AWS Foundations Benchmark: “Ensure VPC flow logging is enabled in all VPCs.”

February 13: AWS Announces CISPE Membership and Compliance with First-Ever Code of Conduct for Data Protection in the Cloud
I have two exciting announcements today, both showing AWS’s continued commitment to ensuring that customers can comply with EU Data Protection requirements when using our services.

February 13: How to Enable Multi-Factor Authentication for AWS Services by Using AWS Microsoft AD and On-Premises Credentials
You can now enable multi-factor authentication (MFA) for users of AWS services such as Amazon WorkSpaces and Amazon QuickSight and their on-premises credentials by using your AWS Directory Service for Microsoft Active Directory (Enterprise Edition) directory, also known as AWS Microsoft AD. MFA adds an extra layer of protection to a user name and password (the first “factor”) by requiring users to enter an authentication code (the second factor), which has been provided by your virtual or hardware MFA solution. These factors together provide additional security by preventing access to AWS services, unless users supply a valid MFA code.

February 13: How to Create an Organizational Chart with Separate Hierarchies by Using Amazon Cloud Directory
Amazon Cloud Directory enables you to create directories for a variety of use cases, such as organizational charts, course catalogs, and device registries. Cloud Directory offers you the flexibility to create directories with hierarchies that span multiple dimensions. For example, you can create an organizational chart that you can navigate through separate hierarchies for reporting structure, location, and cost center. In this blog post, I show how to use Cloud Directory APIs to create an organizational chart with two separate hierarchies in a single directory. I also show how to navigate the hierarchies and retrieve data. I use the Java SDK for all the sample code in this post, but you can use other language SDKs or the AWS CLI.

February 10: How to Easily Log On to AWS Services by Using Your On-Premises Active Directory
AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as Microsoft AD, now enables your users to log on with just their on-premises Active Directory (AD) user name—no domain name is required. This new domainless logon feature makes it easier to set up connections to your on-premises AD for use with applications such as Amazon WorkSpaces and Amazon QuickSight, and it keeps the user logon experience free from network naming. This new interforest trusts capability is now available when using Microsoft AD with Amazon WorkSpaces and Amazon QuickSight Enterprise Edition. In this blog post, I explain how Microsoft AD domainless logon works with AD interforest trusts, and I show an example of setting up Amazon WorkSpaces to use this capability.

February 9: New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI
AWS Identity and Access Management (IAM) roles enable your applications running on Amazon EC2 to use temporary security credentials that AWS creates, distributes, and rotates automatically. Using temporary credentials is an IAM best practice because you do not need to maintain long-term keys on your instance. Using IAM roles for EC2 also eliminates the need to use long-term AWS access keys that you have to manage manually or programmatically. Starting today, you can enable your applications to use temporary security credentials provided by AWS by attaching an IAM role to an existing EC2 instance. You can also replace the IAM role attached to an existing EC2 instance. In this blog post, I show how you can attach an IAM role to an existing EC2 instance by using the AWS CLI.

February 8: How to Remediate Amazon Inspector Security Findings Automatically
The Amazon Inspector security assessment service can evaluate the operating environments and applications you have deployed on AWS for common and emerging security vulnerabilities automatically. As an AWS-built service, Amazon Inspector is designed to exchange data and interact with other core AWS services not only to identify potential security findings but also to automate addressing those findings. Previous related blog posts showed how you can deliver Amazon Inspector security findings automatically to third-party ticketing systems and automate the installation of the Amazon Inspector agent on new Amazon EC2 instances. In this post, I show how you can automatically remediate findings generated by Amazon Inspector. To get started, you must first run an assessment and publish any security findings to an Amazon Simple Notification Service (SNS) topic. Then, you create an AWS Lambda function that is triggered by those notifications. Finally, the Lambda function examines the findings and then implements the appropriate remediation based on the type of issue.

February 6: How to Simplify Security Assessment Setup Using Amazon EC2 Systems Manager and Amazon Inspector
In a July 2016 AWS Blog post, I discussed how to integrate Amazon Inspector with third-party ticketing systems by using Amazon Simple Notification Service (SNS) and AWS Lambda. This AWS Security Blog post continues in the same vein, describing how to use Amazon Inspector to automate various aspects of security management. In this post, I show you how to install the Amazon Inspector agent automatically through the Amazon EC2 Systems Manager when a new Amazon EC2 instance is launched. In a subsequent post, I will show you how to update EC2 instances automatically that run Linux when Amazon Inspector discovers a missing security patch.


January

January 30: How to Protect Data at Rest with Amazon EC2 Instance Store Encryption
Encrypting data at rest is vital for regulatory compliance to ensure that sensitive data saved on disks is not readable by any user or application without a valid key. Some compliance regulations such as PCI DSS and HIPAA require that data at rest be encrypted throughout the data lifecycle. To this end, AWS provides data-at-rest options and key management to support the encryption process. For example, you can encrypt Amazon EBS volumes and configure Amazon S3 buckets for server-side encryption (SSE) using AES-256 encryption. Additionally, Amazon RDS supports Transparent Data Encryption (TDE). Instance storage provides temporary block-level storage for Amazon EC2 instances. This storage is located on disks attached physically to a host computer. Instance storage is ideal for temporary storage of information that frequently changes, such as buffers, caches, and scratch data. By default, files stored on these disks are not encrypted. In this blog post, I show a method for encrypting data on Linux EC2 instance stores by using Linux built-in libraries. This method encrypts files transparently, which protects confidential data. As a result, applications that process the data are unaware of the disk-level encryption.

January 27: How to Detect and Automatically Remediate Unintended Permissions in Amazon S3 Object ACLs with CloudWatch Events
Amazon S3 Access Control Lists (ACLs) enable you to specify permissions that grant access to S3 buckets and objects. When S3 receives a request for an object, it verifies whether the requester has the necessary access permissions in the associated ACL. For example, you could set up an ACL for an object so that only the users in your account can access it, or you could make an object public so that it can be accessed by anyone. If the number of objects and users in your AWS account is large, ensuring that you have attached correctly configured ACLs to your objects can be a challenge. For example, what if a user were to call the PutObjectAcl API call on an object that is supposed to be private and make it public? Or, what if a user were to call the PutObject with the optional Acl parameter set to public-read, therefore uploading a confidential file as publicly readable? In this blog post, I show a solution that uses Amazon CloudWatch Events to detect PutObject and PutObjectAcl API calls in near-real time and helps ensure that the objects remain private by making automatic PutObjectAcl calls, when necessary.

January 26: Now Available: Amazon Cloud Directory—A Cloud-Native Directory for Hierarchical Data
Today we are launching Amazon Cloud Directory. This service is purpose-built for storing large amounts of strongly typed hierarchical data. With the ability to scale to hundreds of millions of objects while remaining cost-effective, Cloud Directory is a great fit for all sorts of cloud and mobile applications.

January 24: New SOC 2 Report Available: Confidentiality
As with everything at Amazon, the success of our security and compliance program is primarily measured by one thing: our customers’ success. Our customers drive our portfolio of compliance reports, attestations, and certifications that support their efforts in running a secure and compliant cloud environment. As a result of our engagement with key customers across the globe, we are happy to announce the publication of our new SOC 2 Confidentiality report. This report is available now through AWS Artifact in the AWS Management Console.

January 18: Compliance in the Cloud for New Financial Services Cybersecurity Regulations
Financial regulatory agencies are focused more than ever on ensuring responsible innovation. Consequently, if you want to achieve compliance with financial services regulations, you must be increasingly agile and employ dynamic security capabilities. AWS enables you to achieve this by providing you with the tools you need to scale your security and compliance capabilities on AWS. The following breakdown of the most recent cybersecurity regulations, NY DFS Rule 23 NYCRR 500, demonstrates how AWS continues to focus on your regulatory needs in the financial services sector.

January 9: New Amazon GameDev Blog Post: Protect Multiplayer Game Servers from DDoS Attacks by Using Amazon GameLift
In online gaming, distributed denial of service (DDoS) attacks target a game’s network layer, flooding servers with requests until performance degrades considerably. These attacks can limit a game’s availability to players and limit the player experience for those who can connect. Today’s new Amazon GameDev Blog post uses a typical game server architecture to highlight DDoS attack vulnerabilities and discusses how to stay protected by using built-in AWS Cloud security, AWS security best practices, and the security features of Amazon GameLift. Read the post to learn more.

January 6: The Top 10 Most Downloaded AWS Security and Compliance Documents in 2016
The following list includes the 10 most downloaded AWS security and compliance documents in 2016. Using this list, you can learn about what other people found most interesting about security and compliance last year.

January 6: FedRAMP Compliance Update: AWS GovCloud (US) Region Receives a JAB-Issued FedRAMP High Baseline P-ATO for Three New Services
Three new services in the AWS GovCloud (US) region have received a Provisional Authority to Operate (P-ATO) from the Joint Authorization Board (JAB) under the Federal Risk and Authorization Management Program (FedRAMP). JAB issued the authorization at the High baseline, which gives US government agencies and their service providers the ability to use these services to process the government’s most sensitive unclassified data, including Personally Identifiable Information (PII), Protected Health Information (PHI), Controlled Unclassified Information (CUI), criminal justice information (CJI), and financial data.

January 4: The Top 20 Most Viewed AWS IAM Documentation Pages in 2016
The following 20 pages were the most viewed AWS Identity and Access Management (IAM) documentation pages in 2016. I have included a brief description with each link to give you a clearer idea of what each page covers. Use this list to see what other people have been viewing and perhaps to pique your own interest about a topic you’ve been meaning to research.

January 3: The Most Viewed AWS Security Blog Posts in 2016
The following 10 posts were the most viewed AWS Security Blog posts that we published during 2016. You can use this list as a guide to catch up on your blog reading or even read a post again that you found particularly useful.

January 3: How to Monitor AWS Account Configuration Changes and API Calls to Amazon EC2 Security Groups
You can use AWS security controls to detect and mitigate risks to your AWS resources. The purpose of each security control is defined by its control objective. For example, the control objective of an Amazon VPC security group is to permit only designated traffic to enter or leave a network interface. Let’s say you have an Internet-facing e-commerce website, and your security administrator has determined that only HTTP (TCP port 80) and HTTPS (TCP port 443) traffic should be allowed access to the public subnet. As a result, your administrator configures a security group to meet this control objective. What if, though, someone were to inadvertently change this security group’s rules and enable FTP or other protocols to access the public subnet from any location on the Internet? That expanded access could weaken the security posture of your assets. Consequently, your administrator might need to monitor the integrity of your company’s security controls so that the controls maintain their desired effectiveness. In this blog post, I explore two methods for detecting unintended changes to VPC security groups. The two methods address not only control objectives but also control failures.

If you have questions about or issues with implementing the solutions in any of these posts, please start a new thread on the forum identified near the end of each post.

– Craig

New AWS Encryption SDK for Python Simplifies Multiple Master Key Encryption

Post Syndicated from Matt Bullock original https://aws.amazon.com/blogs/security/new-aws-encryption-sdk-for-python-simplifies-multiple-master-key-encryption/

The AWS Cryptography team is happy to announce a Python implementation of the AWS Encryption SDK. This new SDK helps manage data keys for you, and it simplifies the process of encrypting data under multiple master keys. As a result, this new SDK allows you to focus on the code that drives your business forward. It also provides a framework you can easily extend to ensure that you have a cryptographic library that is configured to match and enforce your standards. The SDK also includes ready-to-use examples. If you are a Java developer, you can refer to this blog post to see specific Java examples for the SDK.

In this blog post, I show you how you can use the AWS Encryption SDK to simplify the process of encrypting data and how to protect your encryption keys in ways that help improve application availability by not tying you to a single region or key management solution.

How does the AWS Encryption SDK help me?

Developers using encryption often face three problems:

  1. How do I correctly generate and use a data key to encrypt data?
  2. How do I protect the data key after it has been used?
  3. How do I store the data key and ciphertext in a portable manner?

The library provided in the AWS Encryption SDK addresses the first problem by implementing the low-level envelope encryption details transparently using the cryptographic provider available in your development environment. The library helps address the second problem by providing intuitive interfaces to let you choose how you want to generate data keys and the master keys or key-encrypting keys that will protect data keys. Developers can then focus on the core of the application they are building instead of on the complexities of encryption. The ciphertext addresses the third problem, as described later in this post.

The AWS Encryption SDK defines a carefully designed and reviewed ciphertext data format that supports multiple secure algorithm combinations (with room for future expansion) and has no limits on the types or algorithms of the master keys. The ciphertext output of clients (created with the SDK) is a single binary blob that contains your encrypted message and one or more copies of the data key, as encrypted by each master key referenced in the encryption request. This single ciphertext data format for envelope-encrypted data makes it easier to ensure the data key has the same durability and availability properties as the encrypted message itself.

The AWS Encryption SDK provides production-ready reference implementations in Java and Python with direct support for key providers such as AWS Key Management Service (KMS). The Java implementation also supports the Java Cryptography Architecture (JCA/JCE) natively, which includes support for AWS CloudHSM and other PKCS #11 devices. The standard ciphertext data format the AWS Encryption SDK defines means that you can use combinations of the Java and Python clients for encryption and decryption as long as they each have access to the key provider that manages the correct master key used to encrypt the data key.

Let’s look at two examples of how you can put this into practice.

Example 1: Encrypting application secrets under multiple regional KMS master keys for high availability

Many customers want to build systems that not only span multiple Availability Zones, but also multiple regions. You cannot share KMS customer master keys (CMKs) across regions. However, with envelope encryption, you can encrypt the data key with multiple KMS CMKs in different regions. Applications running in each region can use the local KMS endpoint to decrypt the ciphertext for faster and more reliable access.

For the examples in this post, I will assume that I am running on Amazon EC2 instances configured with IAM roles for EC2. This enables me to avoid credential management and take advantage of built-in logic that routes requests to the nearest endpoints. These examples also assume that the latest version of the AWS SDK for Python (Boto3), which is distinct from the AWS Encryption SDK, is available.

The encryption logic has a simple high-level design. Using provided parameters, I get the master keys and use them to encrypt some provided data, as shown in the following code example. I will define how to construct the multi-region KMS key provider next.

import aws_encryption_sdk


def encrypt_data(plaintext):
    # Get all the master keys needed
    key_provider = build_multiregion_kms_master_key_provider()

    # Encrypt the provided data
    ciphertext, header = aws_encryption_sdk.encrypt(
        source=plaintext,
        key_provider=key_provider
    )
    return ciphertext
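
Before moving on, note that the header returned alongside the ciphertext lets you confirm that the message really does carry one encrypted copy of the data key per master key. The following is a minimal sketch that inspects that header; it assumes the returned MessageHeader object exposes an encrypted_data_keys collection, as it does in current releases of the AWS Encryption SDK for Python.

import aws_encryption_sdk


def inspect_encrypted_data_keys(plaintext, key_provider):
    # Encrypt and keep the message header that the SDK returns
    ciphertext, header = aws_encryption_sdk.encrypt(
        source=plaintext,
        key_provider=key_provider
    )
    # Expect one entry per master key; for KMS master keys,
    # key_info identifies the CMK that encrypted this copy of the data key
    for encrypted_data_key in header.encrypted_data_keys:
        print(encrypted_data_key.key_provider.provider_id,
              encrypted_data_key.key_provider.key_info)
    return ciphertext

Run against the three-region provider built in the next section, this prints three entries, one per CMK.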

Create a master key provider containing multiple master keys

The following code example shows how you can encrypt data under CMKs in three US regions: us-east-1, us-west-1, and us-west-2. The example assumes that you have already set up the CMKs and created an alias named alias/exampleKey in each region for each CMK. For more information about creating CMKs and aliases, see Creating Keys in the AWS KMS documentation.

This example creates a single KMSMasterKeyProvider to which all CMKs are added. The KMSMasterKeyProvider handles interacting with CMKs in multiple regions. Note that the first master key added to the KMSMasterKeyProvider is the one used to generate the new data key, and the other master keys are used to encrypt the new data key.

import aws_encryption_sdk
import boto3


def build_multiregion_kms_master_key_provider():
    regions = ('us-east-1', 'us-west-1', 'us-west-2')
    alias = 'alias/exampleKey'
    arn_template = 'arn:aws:kms:{region}:{account_id}:{alias}'

    # Create AWS KMS master key provider
    kms_master_key_provider = aws_encryption_sdk.KMSMasterKeyProvider()

    # Find your AWS account ID
    account_id = boto3.client('sts').get_caller_identity()['Account']

    # Add the KMS alias in each region to the master key provider
    for region in regions:
        kms_master_key_provider.add_master_key(arn_template.format(
            region=region,
            account_id=account_id,
            alias=alias
        ))
    return kms_master_key_provider

The logic to construct a master key provider could be built once by your central security team and then reused across your company to both simplify development and ensure that all encrypted data meets corporate standards.
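
For example, such a shared helper might read the approved regions and key alias from a small policy file instead of hard-coding them. The following sketch assumes a hypothetical JSON policy format; the file layout and field names are illustrative only.

import json

import aws_encryption_sdk
import boto3


def build_key_provider_from_policy(policy_filename):
    # Hypothetical policy file, maintained by your security team:
    # {"alias": "alias/exampleKey", "regions": ["us-east-1", "us-west-1"]}
    with open(policy_filename) as policy_file:
        policy = json.load(policy_file)

    # Find your AWS account ID
    account_id = boto3.client('sts').get_caller_identity()['Account']

    # Add the approved CMK alias in each approved region
    key_provider = aws_encryption_sdk.KMSMasterKeyProvider()
    for region in policy['regions']:
        key_provider.add_master_key('arn:aws:kms:{region}:{account_id}:{alias}'.format(
            region=region,
            account_id=account_id,
            alias=policy['alias']
        ))
    return key_provider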

Encrypt the data

The data you encrypt can come from anywhere and you can distribute it however you like. In the following code example, I read a file from disk and write out an encrypted copy. The AWS Encryption SDK provides a stream interface that behaves as a standard Python stream context manager to make this easy.

import aws_encryption_sdk
import boto3


def encrypt_file(input_filename, output_filename):
    # Get all the master keys needed
    key_provider = build_multiregion_kms_master_key_provider()

    # Open the files for reading and writing
    with open(input_filename, 'rb') as infile,\
            open(output_filename, 'wb') as outfile:
        # Encrypt the file
        with aws_encryption_sdk.stream(
            mode='e',
            source=infile,
            key_provider=key_provider
        ) as encryptor:
            for chunk in encryptor:
                outfile.write(chunk)

This file could contain, for example, secret application configuration data (such as passwords, certificates, and the like) that is then sent to EC2 instances as EC2 user data upon launch.
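
To illustrate that flow, the following sketch launches an EC2 instance with the encrypted file as its user data. The AMI ID is a placeholder you would replace with your own, and the sketch relies on the AWS SDK for Python base64-encoding the UserData parameter on your behalf, so the instance metadata service later serves the raw ciphertext bytes back to the instance.

import boto3


def launch_instance_with_encrypted_user_data(ciphertext_filename):
    # Read the ciphertext produced by encrypt_file
    with open(ciphertext_filename, 'rb') as ciphertext_file:
        ciphertext = ciphertext_file.read()

    # 'ami-EXAMPLE' is a placeholder AMI ID, not a real image
    ec2 = boto3.client('ec2', region_name='us-west-1')
    response = ec2.run_instances(
        ImageId='ami-EXAMPLE',
        InstanceType='t2.micro',
        MinCount=1,
        MaxCount=1,
        UserData=ciphertext
    )
    return response['Instances'][0]['InstanceId']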

Decrypt the data

The following code example decrypts the contents of the EC2 user data and writes it to the specified file. The KMSMasterKeyProvider defaults to using KMS in the local region, so decryption proceeds quickly without cross-region calls.

import aws_encryption_sdk
from botocore.vendored import requests


def decrypt_user_data(output_filename):
    # Create a master key provider that points to the local KMS stack
    kms_key_provider = aws_encryption_sdk.KMSMasterKeyProvider()

    # Read the user data
    user_data = requests.get('http://169.254.169.254/latest/user-data/').content
    # Open a stream to write out the decrypted file
    # Decrypt the userdata and write the plaintext into the file
    with open(output_filename, 'wb') as outfile,\
            aws_encryption_sdk.stream(
                mode='d',
                source=user_data,
                key_provider=kms_key_provider
            ) as decryptor:
        for chunk in decryptor:
            outfile.write(chunk)

Congratulations! You have just encrypted data under master keys in multiple regions and have code that will always decrypt the data by using the local KMS stack. This gives you higher availability and lower latency for decryption, while still only needing to manage a single ciphertext.

Example 2: Encrypting application secrets under master keys from different providers for escrow and portability

Another reason why you might want to encrypt data under multiple master keys is to avoid relying on a single provider for your keys. By not tying yourself to a single key management solution, you help improve your applications’ availability. This approach also might help if you have compliance, data loss prevention, or disaster recovery requirements that require multiple providers.

You can use the same technique demonstrated previously in this post to encrypt your data to an escrow or additional decryption master key that is independent of your primary provider. This example demonstrates how to use an additional master key backed by an RSA key pair that is randomly generated upon request. (Storing and managing the RSA key pair are out of scope for this blog post.)

Encrypt the data with a public master key

Just like the previous code example that created a number of KMS master keys to encrypt data, the following code example creates one more master key, this one backed by the randomly generated RSA key pair.

import os

import aws_encryption_sdk
from aws_encryption_sdk.internal.crypto import WrappingKey
from aws_encryption_sdk.key_providers.raw import RawMasterKeyProvider
from aws_encryption_sdk.identifiers import WrappingAlgorithm, EncryptionKeyType
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa


class StaticRandomMasterKeyProvider(RawMasterKeyProvider):
    """Randomly generates and provides 4096-bit RSA keys consistently per unique key id."""
    provider_id = 'static-random'

    def __init__(self, **kwargs):
        self._static_keys = {}

    def _get_raw_key(self, key_id):
        """Retrieves a static, randomly generated RSA key for the specified key id.

        :param str key_id: Key ID
        :returns: Wrapping key which contains the specified static key
        :rtype: :class:`aws_encryption_sdk.internal.crypto.WrappingKey`
        """
        try:
            static_key = self._static_keys[key_id]
        except KeyError:
            private_key = rsa.generate_private_key(
                public_exponent=65537,
                key_size=4096,
                backend=default_backend()
            )
            static_key = private_key.private_bytes(
                encoding=serialization.Encoding.PEM,
                format=serialization.PrivateFormat.PKCS8,
                encryption_algorithm=serialization.NoEncryption()
            )
            self._static_keys[key_id] = static_key
        return WrappingKey(
            wrapping_algorithm=WrappingAlgorithm.RSA_OAEP_SHA1_MGF1,
            wrapping_key=static_key,
            wrapping_key_type=EncryptionKeyType.PRIVATE
        )


def get_multi_master_key_provider():
    # Create multiregion KMS master key provider
    multi_master_key_provider = build_multiregion_kms_master_key_provider()

    # Create static master key provider and add a key
    static_key_id = os.urandom(8)
    static_master_key_provider = StaticRandomMasterKeyProvider()
    static_master_key_provider.add_master_key(static_key_id)

    # Add static master key provider to KMS master key provider
    multi_master_key_provider.add_master_key_provider(static_master_key_provider)

    return multi_master_key_provider, static_master_key_provider

Decrypt the data with the private key

The following decryption code example uses the static RSA master key provider generated previously to demonstrate decryption with a non-AWS master key.

def cycle_data(input_data):
    # Create multi-source master key provider
    multi_master_key_provider, static_master_key_provider = get_multi_master_key_provider()

    # Encrypt data with multi-source master key provider
    ciphertext, header = aws_encryption_sdk.encrypt(
        source=input_data,
        key_provider=multi_master_key_provider
    )

    # Decrypt data using only static master key provider
    plaintext, header = aws_encryption_sdk.decrypt(
        source=ciphertext,
        key_provider=static_master_key_provider
    )

    # Return the round-tripped plaintext so callers can verify it
    return plaintext
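
Because cycle_data returns the decrypted plaintext, a quick sanity check confirms that the data survives the round trip intact:

secret = b'example application secret'
assert cycle_data(secret) == secret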

Conclusion

Envelope encryption is powerful, but traditionally, it has been challenging to implement. The new AWS Encryption SDK helps manage data keys for you, and it simplifies the process of encrypting data under multiple master keys. As a result, this new SDK allows you to focus on the code that drives your business forward. It also provides a framework you can easily extend to ensure that you have a cryptographic library that is configured to match and enforce your standards.

We are excited about releasing the AWS Encryption SDK and cannot wait to hear what you do with it. If you have comments about the new SDK or anything in this blog post, submit a comment in the “Comments” section below. If you have implementation or usage questions, start a new thread on the KMS forum.

– Matt