Tag Archives: AWS Encryption SDK

Improved client-side encryption: Explicit KeyIds and key commitment

Post Syndicated from Alex Tribble original https://aws.amazon.com/blogs/security/improved-client-side-encryption-explicit-keyids-and-key-commitment/

I’m excited to announce the launch of two new features in the AWS Encryption SDK (ESDK): local KeyId filtering and key commitment. These features each enhance security for our customers, acting as additional layers of protection for your most critical data. In this post I’ll tell you how they work. Let’s dig in.

The ESDK is a client-side encryption library designed to make it easy for you to implement client-side encryption in your application using industry standards and best practices. Since the security of your encryption is only as strong as the security of your key management, the ESDK integrates with AWS Key Management Service (AWS KMS), though the ESDK doesn't require you to use any particular source of keys. When you use AWS KMS, the ESDK wraps data keys under one or more customer master keys (CMKs) stored in AWS KMS on encrypt, and calls AWS KMS again on decrypt to unwrap the keys.

It’s important to use only CMKs you trust. If you encrypt to an untrusted CMK, someone with access to the message and that CMK could decrypt your message. It’s equally important to only use trusted CMKs on decrypt! Decrypting with an untrusted CMK could expose you to ciphertext substitution, where you could decrypt a message that was valid, but written by an untrusted actor. There are several controls you can use to prevent this. I recommend a belt-and-suspenders approach. (Technically, this post’s approach is more like a belt, suspenders, and an extra pair of pants.)

The first two controls aren't new, but they're important to consider. First, you should configure your application with an AWS Identity and Access Management (IAM) policy that only allows it to use specific CMKs. An IAM policy allowing Decrypt on "Resource": "*" might be appropriate for a development or testing account, but production accounts should list out CMKs explicitly. Take a look at our best practices for IAM policies for use with AWS KMS for more detailed guidance. Using IAM policy to control access to specific CMKs is a powerful control, because you can programmatically audit that the policy is being used across all of your accounts. To help with this, AWS Config has added new rules and AWS Security Hub has added new controls to detect existing IAM policies that might allow broader use of CMKs than you intended. We recommend that you enable Security Hub's Foundational Security Best Practices standard in all of your accounts and Regions. This standard includes a set of vetted automated security checks that can help you assess your security posture across your AWS environment. To help you when writing new policies, the IAM policy visual editor in the AWS Management Console warns you if you are about to create a policy with a "Resource": "*" element.
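For illustration, a minimal identity-based policy that scopes Decrypt to a single CMK might look like the following sketch. The account ID and key ID are the same example values used in the code sample later in this post; substitute your own ARNs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
    }
  ]
}
```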

The second control to consider is to make sure you’re passing the KeyId parameter to AWS KMS on Decrypt and ReEncrypt requests. KeyId is optional for symmetric CMKs on these requests, since the ciphertext blob that the Encrypt request returns includes the KeyId as metadata embedded in the blob. That’s quite useful—it’s easier to use, and means you can’t (permanently) lose track of the KeyId without also losing the ciphertext. That’s an important concern for data that you need to access over long periods of time. Data stores that would otherwise include the ciphertext and KeyId as separate objects get re-architected over time and the mapping between the two objects might be lost. If you explicitly pass the KeyId in a decrypt operation, AWS KMS will only use that KeyId to decrypt, and you won’t be surprised by using an untrusted CMK. As a best practice, pass KeyId whenever you know it. ESDK messages always include the KeyId; as part of this release, the ESDK will now always pass KeyId when making AWS KMS Decrypt requests.

A third control to protect you from using an unexpected CMK is called local KeyId filtering. If you explicitly pass the KeyId of an untrusted CMK, you would still be open to ciphertext substitution—so you need to be sure you’re only passing KeyIds that you trust. The ESDK will now filter KeyIds locally by using a list of trusted CMKs or AWS account IDs you configure. This enforcement happens client-side, before calling AWS KMS. Let’s walk through a code sample. I’ll use Java here, but this feature is available in all of the supported languages of the ESDK.

Let’s say your app is decrypting ESDK messages read out of an Amazon Simple Queue Service (Amazon SQS) queue. Somewhere you’ll likely have a function like this:

public byte[] decryptMessage(final byte[] messageBytes,
                             final Map<String, String> encryptionContext) {
    // The Amazon Resource Name (ARN) of your CMK.
    final String keyArn = "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab";

    // 1. Instantiate the SDK
    AwsCrypto crypto = AwsCrypto.builder().build();

Now, when you create a KmsMasterKeyProvider, you’ll configure it with one or more KeyIds you expect to use. I’m passing a single element here for simplicity.

    // 2. Instantiate a KMS master key provider in Strict Mode using buildStrict()
    final KmsMasterKeyProvider keyProvider = KmsMasterKeyProvider.builder().buildStrict(keyArn);

Decrypt the message as normal. The ESDK will check each encrypted data key against the list of KeyIds configured at creation: in the preceding example, the single CMK in keyArn. The ESDK will only call AWS KMS for matching encrypted data keys; if none match, it will throw a CannotUnwrapDataKeyException.

    // 3. Decrypt the message.
    final CryptoResult<byte[], KmsMasterKey> decryptResult = crypto.decryptData(keyProvider, messageBytes);

    // 4. Validate the encryption context.

(See our documentation for more information on how encryption context provides additional authentication features!)

    checkEncryptionContext(decryptResult, encryptionContext);

    // 5. Return the decrypted bytes.
    return decryptResult.getResult();
}

We recommend that everyone using the ESDK with AWS KMS adopt local KeyId filtering. How you do this varies by language—the ESDK Developer Guide provides detailed instructions and example code.
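Conceptually, the client-side check the ESDK performs looks like the following Python sketch. The names here are hypothetical and this is not the ESDK's implementation; the real filtering lives inside the ESDK's key providers and runs before any AWS KMS call is made:

```python
# Illustrative sketch of local KeyId filtering (hypothetical names; not the ESDK API).
TRUSTED_KEY_ARNS = {
    "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
}

def filter_trusted_keys(message_key_arns, trusted=TRUSTED_KEY_ARNS):
    """Return only the encrypted-data-key ARNs we are willing to send to AWS KMS."""
    candidates = [arn for arn in message_key_arns if arn in trusted]
    if not candidates:
        # Mirrors the ESDK's CannotUnwrapDataKeyException: nothing trustworthy to try.
        raise ValueError("no encrypted data key matches a trusted CMK")
    return candidates
```

Because the check happens entirely client-side, an attacker-supplied message naming an untrusted CMK is rejected without ever reaching AWS KMS.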

I'm especially excited to announce the second new feature of the ESDK, key commitment, which addresses a non-obvious property of modern symmetric ciphers used in the industry (including the Advanced Encryption Standard (AES)). These ciphers have the property that decrypting a single ciphertext with two different keys can give two different plaintexts! Picking a pair of keys that decrypt to two specific messages involves trying random keys until you get the messages you want, making it too expensive for most messages. However, if you're encrypting messages of a few bytes, it might be feasible. Most authenticated encryption schemes, such as AES-GCM, don't address this issue; they prevent someone who doesn't control the keys from tampering with the ciphertext, but someone who controls both keys can craft an AES-GCM ciphertext that authenticates properly under each key.

All of this means that if a sender can get two parties to use different keys, those two parties could decrypt the exact same ciphertext and get different results. That could be problematic if the message reads, for example, as “sell 1000 shares” to one party, and “buy 1000 shares” to another.

The ESDK solves this problem for you with key commitment. Key commitment means that only a single data key can decrypt a given message, and that trying to use any other data key will result in a failed authentication check and a failure to decrypt. This property allows for senders and recipients of encrypted messages to know that everyone will see the same plaintext message after decryption.
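Here's a toy Python sketch of the idea. The hash-based commitment and XOR stream cipher are stand-ins for the ESDK's real message format and key-derivation scheme, and every name is hypothetical; the point is only that a stored commitment binds a message to exactly one key:

```python
# Toy illustration of key commitment (not the ESDK's real format or algorithms).
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Hash-counter keystream, used here as a stand-in for a real cipher.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> dict:
    return {
        # The commitment binds this message to this key and to no other.
        "commitment": hashlib.sha256(b"commit" + key).digest(),
        "ciphertext": bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext)))),
    }

def decrypt(key: bytes, message: dict) -> bytes:
    # With the commitment check, any key other than the encryption key is rejected
    # instead of silently producing a different plaintext.
    if hashlib.sha256(b"commit" + key).digest() != message["commitment"]:
        raise ValueError("key commitment check failed")
    ct = message["ciphertext"]
    return bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))
```

Without the commitment check, `decrypt` with a second key would return different bytes without any error, which is exactly the ambiguity key commitment eliminates.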

Key commitment is on by default in version 2.0 of the ESDK. This is a breaking change from earlier versions. Existing customers should follow the ESDK migration guide for their language to upgrade from 1.x versions of the ESDK currently in their environment. I recommend a thoughtful and careful migration.

AWS is always looking for feedback on ways to improve our services and tools. Security-related concerns can be reported to AWS Security at [email protected]. We’re deeply grateful for security research, and we’d like to thank Thai Duong from Google’s security team for reaching out to us. I’d also like to thank my colleagues on the AWS Crypto Tools team for their collaboration, dedication, and commitment (pun intended) to continuously improving our libraries.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Crypto Tools forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Alex Tribble

Alex is a Principal Software Development Engineer in AWS Crypto Tools. She joined Amazon in 2008 and has spent her time building security platforms, protecting availability, and generally making things faster and cheaper. Outside of work, she, her wife, and children love to pack as much stuff into as few bikes as possible.

How to enable encryption in a browser with the AWS Encryption SDK for JavaScript and Node.js

Post Syndicated from Spencer Janyk original https://aws.amazon.com/blogs/security/how-to-enable-encryption-browser-aws-encryption-sdk-javascript-node-js/

In this post, we’ll show you how to use the AWS Encryption SDK (“ESDK”) for JavaScript to handle an in-browser encryption workload for a hypothetical application. First, we’ll review some of the security and privacy properties of encryption, including the names AWS uses for the different components of a typical application. Then, we’ll discuss some of the reasons you might want to encrypt each of those components, with a focus on in-browser encryption, and we’ll describe how to perform that encryption using the ESDK. Lastly, we’ll talk about some of the security properties to be mindful of when designing an application, and where to find additional resources.

An overview of the security and privacy properties of encryption

Encryption is a technique that can restrict access to sensitive data by making it unreadable without a key. An encryption process takes data that is plainly readable or processable ("plaintext") and uses principles of mathematics to obscure the contents so that it can't be read without the use of a secret key. To preserve user privacy and prevent unauthorized disclosure of sensitive business data, developers need ways to protect sensitive data during the entire data lifecycle. Data needs to be protected from risks associated with unintentional disclosure as data flows between collection, storage, processing, and sharing components of an application. In this context, encryption is typically divided into two separate techniques: encryption at rest for storing data, and encryption in transit for moving data between entities or systems.

Many applications use encryption in transit to secure connections between their users and the services they provide, and then encrypt the data before it’s stored. However, as applications become more complex and data must be moved between more nodes and stored in more diverse places, there are more opportunities for data to be accidentally leaked or unintentionally disclosed. When a user enters their data in a browser, Transport Layer Security (TLS) can protect that data in transit between the user’s browser and a service endpoint. But in a distributed system, intermediary services between that endpoint and the service that processes that sensitive data might log or cache the data before transporting it. Encrypting sensitive data at the point of collection in the browser is a form of encryption at rest that minimizes the risk of unauthorized access and protects the data if it’s lost, stolen, or accidentally exposed. Encrypting data in the browser means that even if it’s completely exposed elsewhere, it’s unreadable and worthless to anyone without access to the key.

A typical web application

A typical web application will accept some data as input, process it, and then store it. When the user needs to access stored data, the data often follows the same path used when it was input. In our example there are four primary components to the path:

Figure 1: A hypothetical web application where the application is composed of an end-user interacting with a browser front-end, a third party which processes data received from the browser, processing is performed in Amazon EC2, and storage happens in Amazon S3


  1. An end-user interacts with the application using an interface in the browser.
  2. As data is sent to Amazon EC2, it passes through the infrastructure of a third party which could be an Internet Service Provider, an appliance in the user’s environment, or an application running in the cloud.
  3. The application on Amazon EC2 processes the data once it has been received.
  4. Once the application is done processing data, it is stored in Amazon S3 until it is needed again.

As data moves between components, TLS is used to prevent inadvertent disclosure. But what if one or more of these components is a third-party service that doesn’t need access to sensitive data? That’s where encryption at rest comes in.

Encryption at rest can be applied server-side, client-side, or client-side in the browser. Server-side encryption (SSE) is the most commonly used form of encryption among AWS customers, and for good reason: it's easy to use because it's natively supported by many services, such as Amazon S3. When SSE is used, the service that's storing data will encrypt each piece of data with a key (a "data key") when it's received, and then decrypt it transparently when it's requested by an authorized user. This is seamless for application developers, because they only need to check a box in Amazon S3 to enable encryption, and it adds a level of access control by requiring separate permissions to download an object and to perform a decryption operation. However, there is a security/convenience tradeoff to consider, because the service will allow any role with the appropriate permissions to perform a decryption. For additional control, many AWS services, including S3, support the use of customer-managed AWS Key Management Service (AWS KMS) customer master keys (CMKs) that allow you to specify key policies, or use grants or AWS Identity and Access Management (IAM) policies, to control which roles or users have access to decryption, and when. Configuring permission to decrypt using customer-managed CMKs is often sufficient to satisfy compliance regimes that require "application-level encryption."

Some threat models or compliance regimes may require client-side encryption (CSE), which can add a powerful additional level of access control at the expense of additional complexity. As noted above, services perform server-side encryption on data after it has left the boundary of your application. TLS is used to secure the data in transit to the service, but some customers might want to only manage encrypt/decrypt operations within their application on EC2 or in the browser. Applications can use the AWS Encryption SDK to encrypt data within the application trust boundary before it’s sent to a storage service.

But what about a use case where customers don't even want plaintext data to leave the browser? Or what if end-users input data that is passed through or logged by intermediate systems that belong to a third party? It's possible to create a separate application that only manages encryption to ensure that your environment is segregated, but using the AWS Encryption SDK for JavaScript allows you to encrypt data in an end-user's browser before it's ever sent to your application, so only your end-user will be able to view their plaintext data. As you can see in Figure 2 below, in-browser encryption can allow data to be safely handled by untrusted intermediate systems while ensuring its confidentiality and integrity.

Figure 2: A hypothetical web application with encryption where the application is composed of an end-user interacting with a browser front-end, a third party which processes data received from the browser, processing is performed in Amazon EC2, and storage happens in Amazon S3


  1. The application in the browser requests a data key to encrypt sensitive data entered by the user before it is passed to a third party.
  2. Because the sensitive data has been encrypted, the third party cannot read it. The third party may be an Internet Service Provider, an appliance in the user’s environment, an application running in the cloud, or a variety of other actors.
  3. The application on Amazon EC2 can make a request to KMS to decrypt the data key so the data can be decrypted, processed, and re-encrypted.
  4. The encrypted object is stored in S3 where a second encryption request is made so the object can be encrypted when it is stored server side.

How to encrypt in the browser

The first step of in-browser encryption is to include a copy of the AWS Encryption SDK for JavaScript with the scripts you're already sending to the user when they access your application. Once it's present in the end-user environment, it's available for your application to call. To perform the encryption, the ESDK requests from the cryptographic materials provider a data key that is used to encrypt, along with an encrypted copy of the data key that is stored with the object being encrypted. After a piece of data is encrypted within the browser, the ciphertext can be uploaded to your application backend for processing or storage. When a user needs to retrieve the plaintext, the ESDK reads the metadata attached to the ciphertext to determine the appropriate method to decrypt the data key and, if the user has access to the CMK, decrypts the data key and then uses it to decrypt the data.
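The envelope-encryption flow just described can be sketched as a toy in Python. Every name is hypothetical, the XOR-based wrapping merely stands in for real AWS KMS calls, and none of this is the ESDK for JavaScript API; it only shows the shape of the data that travels between browser and backend:

```python
# Toy envelope-encryption sketch (illustration only; not the ESDK for JavaScript).
import hashlib
import os

def _xor(data: bytes, key: bytes) -> bytes:
    # Repeating hash-derived pad; a stand-in for a real cipher in this toy.
    pad = (hashlib.sha256(key).digest() * (len(data) // 32 + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, pad))

def browser_encrypt(master_key: bytes, plaintext: bytes) -> dict:
    data_key = os.urandom(32)
    return {
        "wrapped_key": _xor(data_key, master_key),  # stands in for a KMS Encrypt call
        "ciphertext": _xor(plaintext, data_key),    # only ciphertext leaves the browser
    }

def backend_decrypt(master_key: bytes, message: dict) -> bytes:
    data_key = _xor(message["wrapped_key"], master_key)  # stands in for KMS Decrypt
    return _xor(message["ciphertext"], data_key)
```

The key point is that the wrapped data key travels with the ciphertext, so any intermediate system sees only opaque bytes, and only a party that can unwrap the data key can recover the plaintext.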

Important considerations

One common issue with browser-based applications is inconsistent feature support across different browser vendors and versions. For example, how will the application respond to browsers that lack native support for the strongest recommended cryptographic algorithm suites? Or, will there be a message or alternative mode if a user accesses the application using a browser that has JavaScript disabled? The ESDK for JavaScript natively supports a fallback mode, but it may not be appropriate for all use cases. Be sure to understand what kind of browser environments you will need to support to determine whether in-browser encryption is appropriate, and include support for graceful degradation if you expect limited browser support. Developers should also consider the ways that unauthorized users might monitor user actions via a browser extension, make unauthorized browser requests without user knowledge, or request a “downgraded” (less mathematically intensive) cryptographic operation.

It's always a good idea to have your application designs reviewed by security professionals. If you have an AWS Account Manager or Technical Account Manager, you can ask them to connect you with a Solutions Architect to review your design. If you're an AWS customer but don't have an account manager, consider visiting an AWS Loft to participate in an "Ask an Expert" session.

Where to learn more

If you have questions about this post, let us know in the Comments section below, or consult the AWS Encryption SDK Developer Forum. Because the Encryption SDK is open source, you can always contribute, open an issue, or ask questions on GitHub.

The AWS Encryption SDK for JavaScript is available at: https://github.com/awslabs/aws-encryption-sdk-javascript
Documentation is available at: https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/javascript.html



Spencer Janyk

Spencer is a Senior Product Manager at Amazon Web Services working on data encryption and privacy. He has previously worked on vulnerability management and monitoring for enterprises and applying machine learning to challenges in ad tech, social media, diversity in recruiting, and talent management. Spencer holds a Master of Arts in Performance Studies from New York University and a Bachelor of Arts in Gender Studies from Whitman College.


Amanda Gray

Amanda is a Senior Security Engineer at Amazon Web Services on the Crypto Tools team. Previously, Amanda worked on application security and privacy by design, and she continues to promote these goals every day. Amanda holds Bachelors’ degrees in Physics and Computer Science from the University of Washington and Smith College respectively, and a Master’s degree in Physical Oceanography from the University of Washington.

How to decrypt ciphertexts in multiple regions with the AWS Encryption SDK in C

Post Syndicated from Liz Roth original https://aws.amazon.com/blogs/security/how-to-decrypt-ciphertexts-multiple-regions-aws-encryption-sdk-in-c/

You’ve told us that you want to encrypt data once with AWS Key Management Service (AWS KMS) and decrypt that data with customer master keys (CMKs) that you specify, often with CMKs in different AWS Regions. Doing this saves you compute resources and helps you to enable secure and efficient high-availability schemes.

The AWS Crypto Tools team has introduced the AWS Encryption SDK for C so you can achieve these goals. The new tool also adds more options for language and platform support and is fully interoperable with the implementations in Java and Python.

The AWS Encryption SDK is a client-side encryption library that helps make it easier for you to implement encryption best practices in your applications. You can use it with master keys from multiple sources, including AWS KMS CMKs. The AWS Encryption SDK doesn’t require AWS KMS or any other AWS service.

You can use AWS KMS APIs directly to encrypt data keys using multiple CMKs, but the AWS Encryption SDK provides tools to make working with multiple CMKs even easier, with everything you need stored in the Encryption SDK’s portable encrypted message format. The AWS Encryption SDK for C uses the concept of keyrings, which makes it easy to work with ciphertexts encrypted using multiple CMKs.

In this post, I will walk you through an example using the new AWS Encryption SDK for C. I’ll focus on some highlights from example code in the context of what an example application deployment might look like. You can find the complete example code in this GitHub repository. As always, we welcome your comments and your contributions.

Example scenario

To add some context around the example code, assume that you have a data processing application deployed both in US West (Oregon) us-west-2 and EU Central (Frankfurt) eu-central-1. For added durability, this example application creates and encrypts data in us-west-2 before it’s copied to the eu-central-1 Region. You have assurance that you could decrypt that data in us-west-2 if needed, but you want to mitigate the case where the decryption service in us-west-2 is unavailable. So how do you ensure you can decrypt your data in the eu-central-1 region when you need to?

In this example, your data processing application uses the AWS Encryption SDK and AWS KMS to generate a 256-bit data key to encrypt content locally in us-west-2. The AWS Encryption SDK for C deletes the plaintext data key after use, but an encrypted copy of that data key is included in the encrypted message that the AWS Encryption SDK returns. This prevents you from losing the encrypted copy of the data key, which would make your encrypted content unrecoverable. The data key is encrypted under the AWS KMS CMKs in each of the two regions in which you might want to decrypt the data in the future.

A best practice is to plan to decrypt data using in-region data keys and CMKs. This reduces latency and simplifies the permissions and auditing properties of the decryption operation. The latency impact from the cross-region API calls occurs only during the encryption operation.

In this scenario, the AWS KMS CMK key policy permissions look like this:

  • To encrypt data, the AWS identity used by the data processing application in us-west-2 needs kms:GenerateDataKey permission on the us-west-2 CMK and kms:Encrypt permission on the eu-central-1 CMK. You can specify these permissions in a key policy or IAM policy. This will let the application create a data key in us-west-2 and encrypt the data key under CMKs in both AWS Regions.
  • To decrypt data, the AWS identity used by the data processing application in us-west-2 needs kms:Decrypt permissions on the CMK in us-west-2 or the CMK in eu-central-1.
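A sketch of what the encrypt-side identity policy might look like, using the example CMK ARNs defined in the next section (this is an illustrative fragment, not a complete or authoritative policy):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kms:GenerateDataKey",
      "Resource": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
    },
    {
      "Effect": "Allow",
      "Action": "kms:Encrypt",
      "Resource": "arn:aws:kms:eu-central-1:111122223333:key/0987dcba-09fe-87dc-65ba-ab0987654321"
    }
  ]
}
```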

Encryption path

First, define variables for the Amazon Resource Names (ARNs) of your CMKs in us-west-2 and eu-central-1. In the Encryption SDK for C, to encrypt, you can identify a CMK by its CMK ARN or the Alias ARN that is mapped to the CMK ARN.

const char *KEY_ARN_US_WEST_2 = "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab";

const char *KEY_ARN_EU_CENTRAL_1 = "arn:aws:kms:eu-central-1:111122223333:key/0987dcba-09fe-87dc-65ba-ab0987654321";      

Now, use the CMK ARNs to create a keyring. In the Encryption SDK, a keyring is used to generate, encrypt, and decrypt data keys under multiple master keys. You’ll create a KMS keyring configured to use multiple CMKs.

struct aws_cryptosdk_keyring *kms_keyring = Aws::Cryptosdk::KmsKeyring::Builder().Build(KEY_ARN_US_WEST_2, { KEY_ARN_EU_CENTRAL_1 });

When the AWS Encryption SDK uses this keyring to encrypt data, it calls GenerateDataKey on the first CMK that you specify, and Encrypt on each of the remaining CMKs that you specify. The result is a plaintext data key generated in us-west-2, an encryption of the data key using the CMK in us-west-2, and an encryption of the data key using the CMK in eu-central-1.
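The generate-then-encrypt behavior just described can be sketched as a toy in Python. The names are hypothetical and the XOR "wrap" merely stands in for AWS KMS calls; it is not the ESDK's implementation:

```python
# Illustrative sketch of the keyring's encrypt path (not the real ESDK code).
import hashlib
import os

def toy_wrap(cmk_arn: str, data_key: bytes) -> bytes:
    # Stand-in for wrapping a data key under a CMK. XOR is involutive, so the
    # same function also unwraps.
    pad = hashlib.sha256(cmk_arn.encode()).digest()
    return bytes(a ^ b for a, b in zip(data_key, pad))

def keyring_on_encrypt(generator_arn: str, additional_arns: list):
    # GenerateDataKey on the first CMK yields the plaintext data key plus a
    # wrapped copy; Encrypt on each additional CMK wraps the same key again.
    data_key = os.urandom(32)
    edks = {arn: toy_wrap(arn, data_key) for arn in [generator_arn, *additional_arns]}
    return data_key, edks
```

The encrypted message then carries every entry in `edks` alongside the ciphertext, which is what makes decryption possible in either Region.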

The plaintext data key that AWS KMS generated in us-west-2 is protected in transit by a TLS session that uses only cipher suites supporting forward secrecy. Sending that same plaintext data key to the AWS KMS endpoint in eu-central-1 for encryption is protected by a similar TLS session.

The Encryption SDK uses the data key to encrypt your data, and it stores the encrypted data keys with your encrypted content. The result is an encrypted message that can be decrypted using the CMK in us-west-2 or the CMK in eu-central-1.

Now that you understand what's going to happen after you create the keyring, I'll return to the code sample. Next, you need to create an encrypt-mode session configured with your keyring. In the AWS Encryption SDK for C, you use a session to encrypt a single plaintext message or decrypt a single ciphertext message, regardless of its size. The session maintains the state of the message throughout its processing.

struct aws_cryptosdk_session *session = aws_cryptosdk_session_new_from_keyring(alloc, AWS_CRYPTOSDK_ENCRYPT, kms_keyring);

With the keyring and the encrypt-mode session, the data processing application can ask the Encryption SDK to process the plaintext through the session, encrypting it under the CMKs that you specified in two different AWS Regions.

The result is an encrypted message that contains the ciphertext and two encrypted copies of the same data key. One encrypted data key was encrypted by your CMK in us-west-2, and the other encrypted data key was encrypted by your CMK in eu-central-1.

Decryption path

In the AWS Encryption SDK for C, you use keyrings for both encrypting and decrypting. You can use the same keyring for both, or you can use different keyrings for each operation.

Why would you want to use a different keyring for decryption? At a high level, encrypt keyrings specify all CMKs that can decrypt the ciphertext. Decrypt keyrings constrain the CMKs the application is permitted to use.

Reusing a keyring for both encrypt and decrypt mode can simplify your AWS Encryption SDK client configuration, but splitting the keyring and using different AWS KMS clients provides more flexibility to meet your security and architecture goals. The option you choose depends in part on the constraints you want to place on the CMKs your application uses.

The Decrypt API in the AWS KMS service doesn’t permit you to specify a CMK as a request parameter. But the AWS Encryption SDK lets you specify one or many CMKs in a decryption keyring, or even discover which CMKs to try automatically. I’ll discuss each option in the next section.

Decryption path 1: Use a specific CMK

This keyring option configures the AWS Encryption SDK to use only a specified CMK in the specified AWS Region. This implies that your data processing application will need kms:Decrypt permissions on that specific CMK and your application will always call the same AWS KMS endpoints in the specified AWS Region. CloudTrail events from the Decrypt API will also only appear in the specified AWS Region.

You might use a specific CMK when the user or application that is decrypting the data has kms:Decrypt permission on only one of the CMKs that encrypted the data keys.

The CMK that you specify to decrypt the data must be one of the CMKs that was used to encrypt the data. Make sure that at least one of the CMKs from your encrypt keyring is included in the decrypt keyring and that the caller has kms:Decrypt permission to use it.

In my example, I encrypted the data keys using CMKs in us-west-2 and eu-central-1, so I'll start decrypting in eu-central-1 because I want to have a specific decrypt instantiation of the data processing application dedicated to eu-central-1. Assume the eu-central-1 data processing application has configured AWS IAM credentials for a principal with permission to call the Decrypt operation on the eu-central-1 CMK.

Configure a keyring that asks the AWS Encryption SDK to use only the CMK in eu-central-1 to decrypt:

Aws::Cryptosdk::KmsKeyring::Builder().Build(KEY_ARN_EU_CENTRAL_1)

The Encryption SDK reads the encrypted message, finds the encrypted data key that was encrypted using the CMK in eu-central-1, and uses this keyring to decrypt.

Decryption path 2: Use any of several CMKs

This keyring option configures the AWS Encryption SDK to try several specific CMKs during its decryption attempts, stopping as soon as it succeeds. You should configure the AWS IAM credentials used by your data processing application to have kms:Decrypt permissions on each of the specified regional CMKs.

Your application could end up calling multiple regional AWS KMS endpoints. CloudTrail events from the Decrypt API will appear in the AWS Region in which the decrypt operation succeeds, and in any of the other AWS Regions that the keyring attempts to use. The CMK that you specify to decrypt the data must be one of the CMKs that was used to encrypt the data. Make sure that at least one of the CMKs from your encrypt keyring is included in the decrypt keyring and that the application has kms:Decrypt permission to use it.

You might define an encryption keyring that includes multiple CMKs so that users with different permissions can decrypt the same message. For example, you might include CMKs from multiple AWS Regions in your encryption keyring.

Here’s an example keyring constructed with multiple CMKs:

Aws::Cryptosdk::KmsKeyring::Builder().Build(KEY_ARN_EU_CENTRAL_1, { KEY_ARN_US_WEST_2 })

The AWS Encryption SDK reads each of the encrypted data keys stored in the encrypted message in the order that they appear. For each data key, the Encryption SDK searches the keyring for the matching CMK that encrypted it. If it finds that CMK, the AWS Encryption SDK calls AWS KMS in the AWS Region where the CMK exists to decrypt that data key, then uses that decrypted key to decrypt the message. If the decryption operation fails for any reason, the AWS Encryption SDK moves on to the next encrypted data key in the message and tries again.

The AWS Encryption SDK will try to decrypt the encrypted message in this way until either decryption succeeds, or the AWS Encryption SDK has attempted and failed to decrypt any of the encrypted data keys using the CMKs specified in the keyring.
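The try-in-order behavior described above can be sketched as follows. This is an illustrative Python sketch, not the Encryption SDK's real API: the function names, the `edk` dictionary shape, and the injected `kms_decrypt` callable are all invented for this example.

```python
# Illustrative sketch of multi-CMK decryption order (not the real ESDK API).
# The SDK walks the encrypted data keys in message order and, for each one,
# only calls AWS KMS if the wrapping CMK is present in the keyring.

def decrypt_data_key(encrypted_data_keys, keyring_cmk_arns, kms_decrypt):
    """Return the first data key that a keyring CMK successfully unwraps."""
    for edk in encrypted_data_keys:
        if edk["cmk_arn"] not in keyring_cmk_arns:
            continue  # this CMK is not in the keyring; skip without calling KMS
        try:
            return kms_decrypt(edk)  # a regional AWS KMS call would happen here
        except Exception:
            continue  # e.g., no kms:Decrypt permission; try the next data key
    raise RuntimeError("no encrypted data key could be decrypted with this keyring")
```

Note how a permissions failure on one data key does not abort the whole operation; the loop simply moves on to the next encrypted data key, which is why it is enough for the caller to have kms:Decrypt on just one of the CMKs in the keyring.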

If this keyring configuration looks familiar, it’s because it’s similar to the configuration you used on the encrypt path when you encrypted under multiple CMKs. The difference is this:

  • Encryption: The AWS Encryption SDK uses every CMK in the keyring to encrypt the data key, and adds all of the encrypted data keys to the encrypted message.
  • Decryption: The AWS Encryption SDK attempts to decrypt each encrypted data key using only the CMKs in the keyring, and stops as soon as one succeeds.

Decryption path 3: Use the Discovery keyring

The previous decryption paths required you to keep track of the exact CMKs used during the encryption operation, which may suit your needs for security and event logging. But what if you want more flexibility? What if you want to change the CMKs that you use in encryption operations without updating the data processing application that decrypts your data? You can configure a keyring that doesn’t specify CMKs to use for decryption, but instead tries each CMK that encrypted a data key until decryption succeeds or all referenced CMKs fail. We call this configuration a KMS Discovery keyring.

A Discovery keyring is equivalent to a keyring that includes all of the same CMKs that were used to encrypt the data, but it’s simpler and less error-prone. You might use a KMS Discovery keyring if you have no preference among the CMKs that encrypted a data key, and don’t mind the latency tradeoffs of trying CMKs in remote AWS Regions, or trying CMKs that will fail a permissions check while searching for one that succeeds. You can think of the KMS Discovery keyring as a universal keyring that you can use and reuse in your applications in many AWS Regions.

When you use a KMS Discovery keyring, the AWS Encryption SDK reads each encrypted data key and discovers the ARN of the CMK used to encrypt it. The AWS Encryption SDK then uses the configured IAM credentials to call AWS KMS in that CMK’s AWS Region to decrypt the data key. The AWS Encryption SDK repeats that process until it has decrypted the data key or runs out of encrypted data keys to try.


While KMS Discovery keyrings are simpler, you run the risk of having your data processing application make a cross-region call to an AWS KMS endpoint that adds unwanted latency. In my example, you might not want the decrypting application running in us-west-2 to wait for the AWS Encryption SDK to call AWS KMS in eu-central-1. To use only the CMKs in a particular AWS Region to decrypt the data keys, create a KMS Regional Discovery keyring that specifies the AWS Region, but not the CMK ARNs. In my example, the following keyring allows the AWS Encryption SDK to use only CMKs in us-west-2.


Because this example KMS Regional Discovery keyring specifies a client for the us-west-2 AWS Region, not a CMK ARN, the AWS Encryption SDK will only try to decrypt any encrypted data key it finds that was encrypted using any CMK in us-west-2. If, for some reason, none of the encrypted data keys was encrypted using a CMK in us-west-2, or the application decrypting the data doesn’t have permission to use CMKs in us-west-2, the AWS Encryption SDK call to decrypt the message with this keyring fails and fails fast. This may provide you with more options for deterministic error handling.

Keep in mind that the KMS Regional Discovery keyring allows the AWS Encryption SDK to try the CMK for each encrypted data key in the specified AWS Region. However, AWS KMS never uses a CMK until it verifies that the caller has permission to perform the requested operation. If the application doesn’t have kms:Decrypt permission for any of the CMKs that were used to encrypt the data keys, decryption fails.
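The regional narrowing can be sketched like this. This is a hedged Python illustration: the real Regional Discovery keyring restricts calls through the regional KMS client you configure, rather than by string-parsing ARNs as this sketch does.

```python
# Sketch of how a regional Discovery keyring narrows the candidate data keys
# (illustrative only; not the Encryption SDK's implementation).

def region_of(cmk_arn):
    """Extract the region field from a KMS CMK ARN: arn:aws:kms:REGION:acct:key/id."""
    return cmk_arn.split(":")[3]

def candidate_data_keys(encrypted_data_keys, allowed_region):
    """Keep only the encrypted data keys wrapped by a CMK in the allowed region."""
    return [edk for edk in encrypted_data_keys
            if region_of(edk["cmk_arn"]) == allowed_region]
```

If the filtered list comes back empty, there is nothing for the keyring to try, which models the fail-fast behavior described above.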


Encrypting KMS data keys using multiple CMKs provides a variety of options to decrypt ciphertexts to meet your security, auditing, and latency requirements. My examples show how encrypted messages can be decrypted by using AWS KMS CMKs in multiple AWS Regions. You can also use the Encryption SDK with master keys supplied by a custom key management infrastructure independent of AWS.

The AWS Encryption SDK’s portable and interoperable encrypted message format makes it easier to combine multiple encrypted data keys with your encrypted data to support the decryption access scheme you want. The AWS Encryption SDK for C brings these utilities to a new, broader set of platform and application environments to complement the existing Java and Python versions.

You can find the AWS Encryption SDK for C on GitHub.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Crypto Tools forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Liz Roth

Liz is a Senior Software Development Engineer at Amazon Web Services. She has been at Amazon for more than 8 years and has more than 10 years of industry experience across a variety of areas, including security, networks, and operations.

How to Encrypt Amazon S3 Objects with the AWS SDK for Ruby

Post Syndicated from Doug Schwartz original https://aws.amazon.com/blogs/security/how-to-encrypt-amazon-s3-objects-with-the-aws-sdk-for-ruby/

AWS KMS image

Recently, Amazon announced some new Amazon S3 encryption and security features. The AWS Blog post showed how to use the Amazon S3 console to take advantage of these new features. However, if you have a large number of Amazon S3 buckets, using the console to implement these features could take hours, if not days. As an alternative, I created documentation topics in the AWS SDK for Ruby Developer Guide that include code examples showing you how to use the new Amazon S3 encryption features using the AWS SDK for Ruby.

What are my encryption options?

You can encrypt Amazon S3 bucket objects on a server or on a client:

  • When you encrypt objects on a server, you request that Amazon S3 encrypt the objects before saving them to disk in data centers and decrypt the objects when you download them. The main advantage of this approach is that Amazon S3 manages the entire encryption process.
  • When you encrypt objects on a client, you encrypt the objects before you upload them to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools. Use this option when:
    • Company policy and standards require it.
    • You already have a development process in place that meets your needs.

    Encrypting on the client has always been available, but you should know the following points:

    • You must be diligent about protecting your encryption keys, which is analogous to having a burglar-proof lock on your front door. If you leave a key under the mat, your security is compromised.
    • If you lose your encryption keys, you won’t be able to decrypt your data.

    If you encrypt objects on the client, we strongly recommend that you use an AWS Key Management Service (AWS KMS) managed customer master key (CMK).

How to use encryption on a server

You can configure Amazon S3 to automatically encrypt objects on the server as you upload them to a bucket, or require that requests to upload objects to a bucket specify server-side encryption.

The advantage of these settings is that they ensure objects uploaded to Amazon S3 are encrypted. Alternatively, you can have Amazon S3 encrypt individual objects on the server as you upload them to a bucket, or encrypt them on the server with your own key as you upload them to a bucket.
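One common way to require server-side encryption on upload is a bucket policy that denies PutObject requests that don't include the server-side encryption header. The following is a hedged sketch of that pattern; the bucket name is a placeholder, and you may want to additionally restrict the header's value (for example, to aws:kms).

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-example-bucket/*",
      "Condition": {
        "Null": { "s3:x-amz-server-side-encryption": "true" }
      }
    }
  ]
}
```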

The AWS SDK for Ruby Developer Guide now contains the following topics that explain your encryption options on a server:

How to use encryption on a client

You can encrypt objects on a client before you upload them to a bucket and decrypt them after you download them from a bucket by using the Amazon S3 encryption client.

The AWS SDK for Ruby Developer Guide now contains the following topics that explain your encryption options on the client:

Note: The Amazon S3 encryption client in the AWS SDK for Ruby is compatible with other Amazon S3 encryption clients, but it is not compatible with other AWS client-side encryption libraries, including the AWS Encryption SDK and the Amazon DynamoDB encryption client for Java. Each library returns a different ciphertext (“encrypted message”) format, so you can’t use one library to encrypt objects and a different library to decrypt them. For more information, see Protecting Data Using Client-Side Encryption.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions about encrypting objects on servers and clients, start a new thread on the Amazon S3 forum or contact AWS Support.

– Doug

Introducing the New GDPR Center and “Navigating GDPR Compliance on AWS” Whitepaper

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/introducing-the-new-gdpr-center-and-navigating-gdpr-compliance-on-aws-whitepaper/

European Union flag

At AWS re:Invent 2017, the AWS Compliance team participated in excellent engagements with AWS customers about the General Data Protection Regulation (GDPR), including discussions that generated helpful input. Today, I am announcing resulting enhancements to our recently launched GDPR Center and the release of a new whitepaper, Navigating GDPR Compliance on AWS. The resources available on the GDPR Center are designed to give you GDPR basics, and provide some ideas as you work out the details of the regulation and find a path to compliance.

In this post, I focus on two GDPR requirements, identified by their articles in the regulation, and explain some of the AWS services and other resources that can help you meet them.

Background about the GDPR

The GDPR is a European privacy law that will become enforceable on May 25, 2018, and is intended to harmonize data protection laws throughout the European Union (EU) by applying a single data protection law that is binding throughout each EU member state. The GDPR not only applies to organizations located within the EU, but also to organizations located outside the EU if they offer goods or services to, or monitor the behavior of, EU data subjects. All AWS services will comply with the GDPR in advance of the May 25, 2018, enforcement date.

We are already seeing customers move personal data to AWS to help solve challenges in complying with the EU’s GDPR because of AWS’s advanced toolset for identifying, securing, and managing all types of data, including personal data. Steve Schmidt, the AWS CISO, has already written about the internal and external work we have been undertaking to help you use AWS services to meet your own GDPR compliance goals.

Article 25 – Data Protection by Design and by Default (Privacy by Design)

Privacy by Design is the integration of data privacy and compliance into the systems development process, enabling applications, systems, and accounts, among other things, to be secure by default. To secure your AWS account, we offer a script to evaluate your AWS account against the full Center for Internet Security (CIS) Amazon Web Services Foundations Benchmark 1.1. You can access this public benchmark on GitHub. Additionally, AWS Trusted Advisor is an online resource to help you improve security by optimizing your AWS environment. Among other things, Trusted Advisor lists a number of security-related controls you should be monitoring. AWS also offers AWS CloudTrail, a logging tool to track usage and API activity. Another example of tooling that enables data protection is Amazon Inspector, which includes a knowledge base of hundreds of rules (regularly updated by AWS security researchers) mapped to common security best practices and vulnerability definitions. Examples of built-in rules include checking for remote root login being enabled or vulnerable software versions installed. These and other tools enable you to design an environment that protects customer data by design.

An accurate inventory of all the GDPR-impacting data is important but sometimes difficult to assess. AWS has some advanced tooling, such as Amazon Macie, to help you determine where customer data is present in your AWS resources. Macie uses advanced machine learning to automatically discover and classify data so that you can protect data, per Article 25.

Article 32 – Security of Processing

You can use many AWS services and features to secure the processing of data regulated by the GDPR. Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch resources in a virtual network that you define. You have complete control over your virtual networking environment, including the selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. With Amazon VPC, you can make the Amazon Cloud a seamless extension of your existing on-premises resources.

AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data, and uses hardware security modules (HSMs) to help protect your keys. Managing keys with AWS KMS allows you to choose to encrypt data either on the server side or the client side. AWS KMS is integrated with several other AWS services to help you protect the data you store with these services. AWS KMS is also integrated with CloudTrail to provide you with logs of all key usage to help meet your regulatory and compliance needs. You can also use the AWS Encryption SDK to correctly generate and use encryption keys, as well as protect keys after they have been used.

We also recently announced new encryption and security features for Amazon S3, including default encryption and a detailed inventory report. Services of this type as well as additional GDPR enablers will be published regularly on our GDPR Center.

Other resources

As you prepare for GDPR, you may want to visit our AWS Customer Compliance Center or Tools for Amazon Web Services to learn about options for building anything from small scripts that delete data to a full orchestration framework that uses AWS Code services.


AWS Encryption SDK: How to Decide if Data Key Caching Is Right for Your Application

Post Syndicated from June Blender original https://aws.amazon.com/blogs/security/aws-encryption-sdk-how-to-decide-if-data-key-caching-is-right-for-your-application/

AWS KMS image

Today, the AWS Crypto Tools team introduced a new feature in the AWS Encryption SDK: data key caching. Data key caching lets you reuse the data keys that protect your data, instead of generating a new data key for each encryption operation.

Data key caching can reduce latency, improve throughput, reduce cost, and help you stay within service limits as your application scales. In particular, caching might help if your application is hitting the AWS Key Management Service (KMS) requests-per-second limit and raising the limit does not solve the problem.

However, these benefits come with some security tradeoffs. Encryption best practices generally discourage extensive reuse of data keys.

In this blog post, I explore those tradeoffs and provide information that can help you decide whether data key caching is a good strategy for your application. I also explain how data key caching is implemented in the AWS Encryption SDK and describe the security thresholds that you can set to limit the reuse of data keys. Finally, I provide some practical examples of using the security thresholds to meet cost, performance, and security goals.

Introducing data key caching

The AWS Encryption SDK is a client-side encryption library that makes it easier for you to implement cryptography best practices in your application. It includes secure default behavior for developers who are not encryption experts, while being flexible enough to work for the most experienced users.

In the AWS Encryption SDK, by default, you generate a new data key for each encryption operation. This is the most secure practice. However, in some applications, the overhead of generating a new data key for each operation is not acceptable.

Data key caching saves the plaintext and ciphertext of the data keys you use in a configurable cache. When you need a key to encrypt or decrypt data, you can reuse a data key from the cache instead of creating a new data key. You can create multiple data key caches and configure each one independently. Most importantly, the AWS Encryption SDK provides security thresholds that you can set to determine how much data key reuse you will allow.

To make data key caching easier to implement, the AWS Encryption SDK provides LocalCryptoMaterialsCache, an in-memory, least-recently-used cache with a configurable size. The SDK manages the cache for you, including adding store, search, and match logic to all encryption and decryption operations.

We recommend that you use LocalCryptoMaterialsCache as it is, but you can customize it, or substitute a compatible cache. However, you should never store plaintext data keys on disk.
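The idea behind LocalCryptoMaterialsCache can be sketched with a plain least-recently-used cache. This is a minimal illustration, not the SDK's implementation; the class and field names are invented for this sketch.

```python
# Minimal in-memory LRU cache in the spirit of LocalCryptoMaterialsCache
# (illustrative only; the SDK's real cache differs).
from collections import OrderedDict

class LocalMaterialsCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._entries = OrderedDict()  # cache key -> cryptographic materials

    def put(self, cache_key, materials):
        self._entries[cache_key] = materials
        self._entries.move_to_end(cache_key)   # newest = most recently used
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)  # evict the least recently used entry

    def get(self, cache_key):
        if cache_key not in self._entries:
            return None                        # cache miss: generate a new data key
        self._entries.move_to_end(cache_key)   # a hit refreshes recency
        return self._entries[cache_key]
```

Because the cache holds plaintext data keys in memory, its capacity and the security thresholds described below bound how long and how widely each key is exposed.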

The AWS Encryption SDK documentation includes sample code in Java and Python for an application that uses data key caching to encrypt data sent to and from Amazon Kinesis Streams.

Balance cost and security

Your decision to use data key caching should balance cost—in time, money, and resources—against security. In every consideration, though, the balance should favor your security requirements. As a rule, use the minimal caching required to achieve your cost and performance goals.

Before implementing data key caching, consider the details of your applications, your security requirements, and the cost and frequency of your encryption operations. In general, your application can benefit from data key caching if each operation is slow or expensive, or if you encrypt and decrypt data frequently. If the cost and speed of your encryption operations are already acceptable or can be improved by other means, do not use a data key cache.

Data key caching can be the right choice for your application if you have high encryption and decryption traffic. For example, if you are hitting your KMS requests-per-second limit, caching can help because you get some of your data keys from the cache instead of calling KMS for every request.

However, you can also create a case in the AWS Support Center to raise the KMS limit for your account. If raising the limit solves the problem, you do not need data key caching.

Configure caching thresholds for cost and security

In the AWS Encryption SDK, you can configure data key caching to allow just enough data key reuse to meet your cost and performance targets while conforming to the security requirements of your application. The SDK enforces the thresholds so that you can use them with any compatible cache.

The data key caching security thresholds apply to each cache entry. The AWS Encryption SDK will not use the data key from a cache entry that exceeds any of the thresholds that you set.

  • Maximum age (required): Set the lifetime of each cached key to be long enough to get cache hits, but short enough to limit exposure of a plaintext data key in memory to a specific time period.

You can use the maximum age threshold like a key rotation policy. Use it to limit the reuse of data keys and minimize exposure of cryptographic materials. You can also use it to evict data keys when the type or source of data that your application is processing changes.

  • Maximum messages encrypted (optional; default is 2³² messages): Set the number of messages protected by each cached data key to be large enough to get value from reuse, but small enough to limit the number of messages that might potentially be exposed.

The AWS Encryption SDK only caches data keys that use an algorithm suite with a key derivation function. This technique avoids the cryptographic limits on the number of bytes encrypted with a single key. However, the more data that a key encrypts, the more data that is exposed if the data key is compromised.

Limiting the number of messages, rather than the number of bytes, is particularly useful if your application encrypts many messages of a similar size or when potential exposure must be limited to very few messages. This threshold is also useful when you want to reuse a data key for a particular type of message and know in advance how many messages of that type you have. You can also use an encryption context to select particular cached data keys for your encryption requests.

  • Maximum bytes encrypted (optional; default is 2⁶³ – 1): Set the bytes protected by each cached data key to be large enough to allow the reuse you need, but small enough to limit the amount of data encrypted under the same key.

Limiting the number of bytes, rather than the number of messages, is preferable when your application encrypts messages of widely varying size or when possibly exposing large amounts of data is much more of a concern than exposing smaller amounts of data.

In addition to these security thresholds, the LocalCryptoMaterialsCache in the AWS Encryption SDK lets you set its capacity, which is the maximum number of entries the cache can hold.

Use the capacity value to tune the performance of your LocalCryptoMaterialsCache. In general, use the smallest value that will achieve the performance improvements that your application requires. You might want to test with a very small cache of 5–10 entries and expand if necessary. You will need a slightly larger cache if you are using the cache for both encryption and decryption requests, or if you are using encryption contexts to select particular cache entries.
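Enforcement of the three per-entry thresholds can be sketched as a single predicate. This is a hedged illustration: the field names and function are invented for this sketch, though the default limits (2³² messages, 2⁶³ – 1 bytes) come from the thresholds described above.

```python
import time

# Sketch of per-entry security threshold checks (illustrative; the Encryption
# SDK enforces equivalents internally and will not use an entry that fails any).

DEFAULT_MAX_MESSAGES = 2**32
DEFAULT_MAX_BYTES = 2**63 - 1

def entry_is_usable(entry, max_age_seconds,
                    max_messages=DEFAULT_MAX_MESSAGES,
                    max_bytes=DEFAULT_MAX_BYTES,
                    now=None):
    """Return True only if the cache entry is within every configured threshold."""
    now = time.time() if now is None else now
    return (now - entry["created_at"] <= max_age_seconds   # maximum age (required)
            and entry["messages_encrypted"] < max_messages # maximum messages
            and entry["bytes_encrypted"] < max_bytes)      # maximum bytes
```

An entry that exceeds any one threshold is treated as a miss, so the caller falls back to generating a fresh data key, exactly as if the entry had never been cached.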

Consider these cache configuration examples

After you determine the security and performance requirements of your application, consider the cache security thresholds carefully and adjust them to meet your needs. There are no magic numbers for these thresholds: the ideal settings are specific to each application, its security and performance requirements, and budget. Use the minimal amount of caching necessary to get acceptable performance and cost.

The following examples show ways you can use the LocalCryptoMaterialsCache capacity setting and the security thresholds to help meet your security requirements:

  • Slow master key operations: If your master key processes only 100 transactions per second (TPS) but your application needs to process 1,000 TPS, you can meet your application requirements by allowing a maximum of 10 messages to be protected under each data key.
  • High frequency and volume: If your master key costs $0.01 per operation and you need to process a consistent 1,000 TPS while staying within a budget of $100,000 per month, allow a maximum of 275 messages for each cache entry.
  • Burst traffic: If your application’s processing bursts to 100 TPS for five seconds in each minute but is otherwise zero, and your master key costs $0.01 per operation, setting maximum messages to 3 can achieve significant savings. To prevent data keys from being reused across bursts (55 seconds), set the maximum age of each cached data key to 20 seconds.
  • Expensive master key operations: If your application uses a low-throughput encryption service that costs as much as $1.00 per operation, you might want to minimize the number of operations. To do so, create a cache that is large enough to contain the data keys you need. Then, set the byte and message limits high enough to allow reuse while conforming to your security requirements. For example, if your security requirements do not permit a data key to encrypt more than 10 GB of data, setting bytes processed to 10 GB still significantly minimizes operations and conforms to your security requirements.
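The burst-traffic example above can be checked with back-of-the-envelope arithmetic. The traffic shape and the $0.01 per-operation cost come from the example itself; everything else here is simple derivation.

```python
import math

# Burst traffic: 100 TPS for 5 seconds in each minute, $0.01 per master key operation.
burst_tps, burst_seconds, cost_per_op = 100, 5, 0.01
messages_per_burst = burst_tps * burst_seconds       # 500 messages per burst

# Without caching: one master key operation per message.
cost_uncached = messages_per_burst * cost_per_op     # $5.00 per burst

# With a maximum of 3 messages per cached data key: one master key
# operation covers up to 3 messages.
ops_cached = math.ceil(messages_per_burst / 3)       # 167 operations per burst
cost_cached = ops_cached * cost_per_op               # about $1.67 per burst
```

That is roughly a two-thirds cost reduction per burst, and the 20-second maximum age guarantees no cached key survives the 55-second gap between bursts.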

Learn more about data key caching

To learn more about data key caching, including how to implement it, how to set the security thresholds, and details about the caching components, see Data Key Caching in the AWS Encryption SDK. Also, see the AWS Encryption SDKs for Java and Python as well as the Javadoc and Python documentation.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions, file an issue in the GitHub repos for the Encryption SDK in Java or Python, or start a new thread on the KMS forum.

– June

In Case You Missed These: AWS Security Blog Posts from January, February, and March

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/in-case-you-missed-these-aws-security-blog-posts-from-january-february-and-march/

Image of lock and key

In case you missed any AWS Security Blog posts published so far in 2017, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from protecting dynamic web applications against DDoS attacks to monitoring AWS account configuration changes and API calls to Amazon EC2 security groups.


March 22: How to Help Protect Dynamic Web Applications Against DDoS Attacks by Using Amazon CloudFront and Amazon Route 53
Using a content delivery network (CDN) such as Amazon CloudFront to cache and serve static text and images or downloadable objects such as media files and documents is a common strategy to improve webpage load times, reduce network bandwidth costs, lessen the load on web servers, and mitigate distributed denial of service (DDoS) attacks. AWS WAF is a web application firewall that can be deployed on CloudFront to help protect your application against DDoS attacks by giving you control over which traffic to allow or block by defining security rules. When users access your application, the Domain Name System (DNS) translates human-readable domain names (for example, www.example.com) to machine-readable IP addresses (for example, A DNS service, such as Amazon Route 53, can effectively connect users’ requests to a CloudFront distribution that proxies requests for dynamic content to the infrastructure hosting your application’s endpoints. In this blog post, I show you how to deploy CloudFront with AWS WAF and Route 53 to help protect dynamic web applications (with dynamic content such as a response to user input) against DDoS attacks. The steps shown in this post are key to implementing the overall approach described in AWS Best Practices for DDoS Resiliency and enable the built-in, managed DDoS protection service, AWS Shield.

March 21: New AWS Encryption SDK for Python Simplifies Multiple Master Key Encryption
The AWS Cryptography team is happy to announce a Python implementation of the AWS Encryption SDK. This new SDK helps manage data keys for you, and it simplifies the process of encrypting data under multiple master keys. As a result, this new SDK allows you to focus on the code that drives your business forward. It also provides a framework you can easily extend to ensure that you have a cryptographic library that is configured to match and enforce your standards. The SDK also includes ready-to-use examples. If you are a Java developer, you can refer to this blog post to see specific Java examples for the SDK. In this blog post, I show you how you can use the AWS Encryption SDK to simplify the process of encrypting data and how to protect your encryption keys in ways that help improve application availability by not tying you to a single region or key management solution.

March 21: Updated CJIS Workbook Now Available by Request
The need for guidance when implementing Criminal Justice Information Services (CJIS)–compliant solutions has become of paramount importance as more law enforcement customers and technology partners move to store and process criminal justice data in the cloud. AWS services allow these customers to easily and securely architect a CJIS-compliant solution when handling criminal justice data, creating a durable, cost-effective, and secure IT infrastructure that better supports local, state, and federal law enforcement in carrying out their public safety missions. AWS has created several documents (collectively referred to as the CJIS Workbook) to assist you in aligning with the FBI’s CJIS Security Policy. You can use the workbook as a framework for developing CJIS-compliant architecture in the AWS Cloud. The workbook helps you define and test the controls you operate, and document the dependence on the controls that AWS operates (compute, storage, database, networking, regions, Availability Zones, and edge locations).

March 9: New Cloud Directory API Makes It Easier to Query Data Along Multiple Dimensions
Today, we made available a new Cloud Directory API, ListObjectParentPaths, that enables you to retrieve all available parent paths for any directory object across multiple hierarchies. Use this API when you want to fetch all parent objects for a specific child object. The order of the paths and objects returned is consistent across iterative calls to the API, unless objects are moved or deleted. In case an object has multiple parents, the API allows you to control the number of paths returned by using a paginated call pattern. In this blog post, I use an example directory to demonstrate how this new API enables you to retrieve data across multiple dimensions to implement powerful applications quickly.

March 8: How to Access the AWS Management Console Using AWS Microsoft AD and Your On-Premises Credentials
AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD, is a managed Microsoft Active Directory (AD) hosted in the AWS Cloud. Now, AWS Microsoft AD makes it easy for you to give your users permission to manage AWS resources by using on-premises AD administrative tools. With AWS Microsoft AD, you can grant your on-premises users permissions to resources such as the AWS Management Console instead of adding AWS Identity and Access Management (IAM) user accounts or configuring AD Federation Services (AD FS) with Security Assertion Markup Language (SAML). In this blog post, I show how to use AWS Microsoft AD to enable your on-premises AD users to sign in to the AWS Management Console with their on-premises AD user credentials to access and manage AWS resources through IAM roles.

March 7: How to Protect Your Web Application Against DDoS Attacks by Using Amazon Route 53 and an External Content Delivery Network
Distributed Denial of Service (DDoS) attacks are attempts by a malicious actor to flood a network, system, or application with more traffic, connections, or requests than it is able to handle. To protect your web application against DDoS attacks, you can use AWS Shield, a DDoS protection service that AWS provides automatically to all AWS customers at no additional charge. You can use AWS Shield in conjunction with DDoS-resilient web services such as Amazon CloudFront and Amazon Route 53 to improve your ability to defend against DDoS attacks. Learn more about architecting for DDoS resiliency by reading the AWS Best Practices for DDoS Resiliency whitepaper. You also have the option of using Route 53 with an externally hosted content delivery network (CDN). In this blog post, I show how you can help protect the zone apex (also known as the root domain) of your web application by using Route 53 to perform a secure redirect to prevent discovery of your application origin.

February 27: Now Generally Available – AWS Organizations: Policy-Based Management for Multiple AWS Accounts
Today, AWS Organizations moves from Preview to General Availability. You can use Organizations to centrally manage multiple AWS accounts, with the ability to create a hierarchy of organizational units (OUs). You can assign each account to an OU, define policies, and then apply those policies to an entire hierarchy, specific OUs, or specific accounts. You can invite existing AWS accounts to join your organization, and you can also create new accounts. All of these functions are available from the AWS Management Console, the AWS Command Line Interface (CLI), and through the AWS Organizations API. To read the full AWS Blog post about today’s launch, see AWS Organizations – Policy-Based Management for Multiple AWS Accounts.

February 23: s2n Is Now Handling 100 Percent of SSL Traffic for Amazon S3
Today, we’ve achieved another important milestone for securing customer data: we have replaced OpenSSL with s2n for all internal and external SSL traffic in Amazon Simple Storage Service (Amazon S3) commercial regions. This was implemented with minimal impact to customers, and multiple means of error checking were used to ensure a smooth transition, including client integration tests, catching potential interoperability conflicts, and identifying memory leaks through fuzz testing.

February 22: Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console
AWS Identity and Access Management (IAM) roles enable your applications running on Amazon EC2 to use temporary security credentials. IAM roles for EC2 make it easier for your applications to make API requests securely from an instance because they do not require you to manage AWS security credentials that the applications use. Recently, we enabled you to use temporary security credentials for your applications by attaching an IAM role to an existing EC2 instance by using the AWS CLI and SDK. To learn more, see New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI. Starting today, you can attach an IAM role to an existing EC2 instance from the EC2 console. You can also use the EC2 console to replace an IAM role attached to an existing instance. In this blog post, I will show how to attach an IAM role to an existing EC2 instance from the EC2 console.

February 22: How to Audit Your AWS Resources for Security Compliance by Using Custom AWS Config Rules
AWS Config Rules enables you to implement security policies as code for your organization and evaluate configuration changes to AWS resources against these policies. You can use Config rules to audit your use of AWS resources for compliance with external compliance frameworks such as CIS AWS Foundations Benchmark and with your internal security policies related to the US Health Insurance Portability and Accountability Act (HIPAA), the Federal Risk and Authorization Management Program (FedRAMP), and other regimes. AWS provides some predefined, managed Config rules. You also can create custom Config rules based on criteria you define within an AWS Lambda function. In this post, I show how to create a custom rule that audits AWS resources for security compliance by enabling VPC Flow Logs for an Amazon Virtual Private Cloud (VPC). The custom rule meets requirement 4.3 of the CIS AWS Foundations Benchmark: “Ensure VPC flow logging is enabled in all VPCs.”
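To make the shape of such a rule concrete, here is a minimal, hypothetical sketch of the evaluation logic a custom Config rule's Lambda function might apply for the flow-logs requirement; the function name and inputs are illustrative, not part of any AWS API.

```python
# Hypothetical sketch: decide compliance for one VPC configuration item,
# given the set of VPC IDs that already have flow logs enabled. The dict it
# returns mirrors the evaluation shape that AWS Config expects back.
def evaluate_vpc_flow_logs(configuration_item, vpcs_with_flow_logs):
    vpc_id = configuration_item['resourceId']
    compliant = vpc_id in vpcs_with_flow_logs
    return {
        'ComplianceResourceType': configuration_item['resourceType'],
        'ComplianceResourceId': vpc_id,
        'ComplianceType': 'COMPLIANT' if compliant else 'NON_COMPLIANT',
    }
```

A real rule would build `vpcs_with_flow_logs` from the EC2 DescribeFlowLogs API and report each result back through Config's PutEvaluations API.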

February 13: AWS Announces CISPE Membership and Compliance with First-Ever Code of Conduct for Data Protection in the Cloud
I have two exciting announcements today, both showing AWS’s continued commitment to ensuring that customers can comply with EU Data Protection requirements when using our services.

February 13: How to Enable Multi-Factor Authentication for AWS Services by Using AWS Microsoft AD and On-Premises Credentials
You can now enable multi-factor authentication (MFA) for users of AWS services such as Amazon WorkSpaces and Amazon QuickSight who sign in with their on-premises credentials by using your AWS Directory Service for Microsoft Active Directory (Enterprise Edition) directory, also known as AWS Microsoft AD. MFA adds an extra layer of protection to a user name and password (the first “factor”) by requiring users to enter an authentication code (the second factor) provided by your virtual or hardware MFA solution. Together, these factors provide additional security by preventing access to AWS services unless users supply a valid MFA code.

February 13: How to Create an Organizational Chart with Separate Hierarchies by Using Amazon Cloud Directory
Amazon Cloud Directory enables you to create directories for a variety of use cases, such as organizational charts, course catalogs, and device registries. Cloud Directory offers you the flexibility to create directories with hierarchies that span multiple dimensions. For example, you can create an organizational chart that you can navigate through separate hierarchies for reporting structure, location, and cost center. In this blog post, I show how to use Cloud Directory APIs to create an organizational chart with two separate hierarchies in a single directory. I also show how to navigate the hierarchies and retrieve data. I use the Java SDK for all the sample code in this post, but you can use other language SDKs or the AWS CLI.

February 10: How to Easily Log On to AWS Services by Using Your On-Premises Active Directory
AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as Microsoft AD, now enables your users to log on with just their on-premises Active Directory (AD) user name—no domain name is required. This new domainless logon feature makes it easier to set up connections to your on-premises AD for use with applications such as Amazon WorkSpaces and Amazon QuickSight, and it keeps the user logon experience free from network naming. This new interforest trusts capability is now available when using Microsoft AD with Amazon WorkSpaces and Amazon QuickSight Enterprise Edition. In this blog post, I explain how Microsoft AD domainless logon works with AD interforest trusts, and I show an example of setting up Amazon WorkSpaces to use this capability.

February 9: New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI
AWS Identity and Access Management (IAM) roles enable your applications running on Amazon EC2 to use temporary security credentials that AWS creates, distributes, and rotates automatically. Using temporary credentials is an IAM best practice because you do not need to maintain long-term keys on your instance. Using IAM roles for EC2 also eliminates the need to use long-term AWS access keys that you have to manage manually or programmatically. Starting today, you can enable your applications to use temporary security credentials provided by AWS by attaching an IAM role to an existing EC2 instance. You can also replace the IAM role attached to an existing EC2 instance. In this blog post, I show how you can attach an IAM role to an existing EC2 instance by using the AWS CLI.

February 8: How to Remediate Amazon Inspector Security Findings Automatically
The Amazon Inspector security assessment service can evaluate the operating environments and applications you have deployed on AWS for common and emerging security vulnerabilities automatically. As an AWS-built service, Amazon Inspector is designed to exchange data and interact with other core AWS services not only to identify potential security findings but also to automate addressing those findings. Previous related blog posts showed how you can deliver Amazon Inspector security findings automatically to third-party ticketing systems and automate the installation of the Amazon Inspector agent on new Amazon EC2 instances. In this post, I show how you can automatically remediate findings generated by Amazon Inspector. To get started, you must first run an assessment and publish any security findings to an Amazon Simple Notification Service (SNS) topic. Then, you create an AWS Lambda function that is triggered by those notifications. Finally, the Lambda function examines the findings and then implements the appropriate remediation based on the type of issue.

February 6: How to Simplify Security Assessment Setup Using Amazon EC2 Systems Manager and Amazon Inspector
In a July 2016 AWS Blog post, I discussed how to integrate Amazon Inspector with third-party ticketing systems by using Amazon Simple Notification Service (SNS) and AWS Lambda. This AWS Security Blog post continues in the same vein, describing how to use Amazon Inspector to automate various aspects of security management. In this post, I show you how to install the Amazon Inspector agent automatically through the Amazon EC2 Systems Manager when a new Amazon EC2 instance is launched. In a subsequent post, I will show you how to automatically update EC2 instances that run Linux when Amazon Inspector discovers a missing security patch.

January 30: How to Protect Data at Rest with Amazon EC2 Instance Store Encryption
Encrypting data at rest is vital for regulatory compliance to ensure that sensitive data saved on disks is not readable by any user or application without a valid key. Some compliance regulations such as PCI DSS and HIPAA require that data at rest be encrypted throughout the data lifecycle. To this end, AWS provides data-at-rest options and key management to support the encryption process. For example, you can encrypt Amazon EBS volumes and configure Amazon S3 buckets for server-side encryption (SSE) using AES-256 encryption. Additionally, Amazon RDS supports Transparent Data Encryption (TDE). Instance storage provides temporary block-level storage for Amazon EC2 instances. This storage is located on disks attached physically to a host computer. Instance storage is ideal for temporary storage of information that frequently changes, such as buffers, caches, and scratch data. By default, files stored on these disks are not encrypted. In this blog post, I show a method for encrypting data on Linux EC2 instance stores by using Linux built-in libraries. This method encrypts files transparently, which protects confidential data. As a result, applications that process the data are unaware of the disk-level encryption.

January 27: How to Detect and Automatically Remediate Unintended Permissions in Amazon S3 Object ACLs with CloudWatch Events
Amazon S3 Access Control Lists (ACLs) enable you to specify permissions that grant access to S3 buckets and objects. When S3 receives a request for an object, it verifies whether the requester has the necessary access permissions in the associated ACL. For example, you could set up an ACL for an object so that only the users in your account can access it, or you could make an object public so that it can be accessed by anyone. If the number of objects and users in your AWS account is large, ensuring that you have attached correctly configured ACLs to your objects can be a challenge. For example, what if a user were to call the PutObjectAcl API call on an object that is supposed to be private and make it public? Or, what if a user were to call PutObject with the optional Acl parameter set to public-read, thereby uploading a confidential file as publicly readable? In this blog post, I show a solution that uses Amazon CloudWatch Events to detect PutObject and PutObjectAcl API calls in near-real time and helps ensure that the objects remain private by making automatic PutObjectAcl calls, when necessary.

January 26: Now Available: Amazon Cloud Directory—A Cloud-Native Directory for Hierarchical Data
Today we are launching Amazon Cloud Directory. This service is purpose-built for storing large amounts of strongly typed hierarchical data. With the ability to scale to hundreds of millions of objects while remaining cost-effective, Cloud Directory is a great fit for all sorts of cloud and mobile applications.

January 24: New SOC 2 Report Available: Confidentiality
As with everything at Amazon, the success of our security and compliance program is primarily measured by one thing: our customers’ success. Our customers drive our portfolio of compliance reports, attestations, and certifications that support their efforts in running a secure and compliant cloud environment. As a result of our engagement with key customers across the globe, we are happy to announce the publication of our new SOC 2 Confidentiality report. This report is available now through AWS Artifact in the AWS Management Console.

January 18: Compliance in the Cloud for New Financial Services Cybersecurity Regulations
Financial regulatory agencies are focused more than ever on ensuring responsible innovation. Consequently, if you want to achieve compliance with financial services regulations, you must be increasingly agile and employ dynamic security capabilities. AWS enables you to achieve this by providing you with the tools you need to scale your security and compliance capabilities on AWS. The following breakdown of the most recent cybersecurity regulations, NY DFS Rule 23 NYCRR 500, demonstrates how AWS continues to focus on your regulatory needs in the financial services sector.

January 9: New Amazon GameDev Blog Post: Protect Multiplayer Game Servers from DDoS Attacks by Using Amazon GameLift
In online gaming, distributed denial of service (DDoS) attacks target a game’s network layer, flooding servers with requests until performance degrades considerably. These attacks can limit a game’s availability to players and limit the player experience for those who can connect. Today’s new Amazon GameDev Blog post uses a typical game server architecture to highlight DDoS attack vulnerabilities and discusses how to stay protected by using built-in AWS Cloud security, AWS security best practices, and the security features of Amazon GameLift. Read the post to learn more.

January 6: The Top 10 Most Downloaded AWS Security and Compliance Documents in 2016
The following list includes the 10 most downloaded AWS security and compliance documents in 2016. Using this list, you can learn about what other people found most interesting about security and compliance last year.

January 6: FedRAMP Compliance Update: AWS GovCloud (US) Region Receives a JAB-Issued FedRAMP High Baseline P-ATO for Three New Services
Three new services in the AWS GovCloud (US) region have received a Provisional Authority to Operate (P-ATO) from the Joint Authorization Board (JAB) under the Federal Risk and Authorization Management Program (FedRAMP). JAB issued the authorization at the High baseline, which gives US government agencies and their service providers the ability to use these services to process the government’s most sensitive unclassified data, including Personally Identifiable Information (PII), Protected Health Information (PHI), Controlled Unclassified Information (CUI), criminal justice information (CJI), and financial data.

January 4: The Top 20 Most Viewed AWS IAM Documentation Pages in 2016
The following 20 pages were the most viewed AWS Identity and Access Management (IAM) documentation pages in 2016. I have included a brief description with each link to give you a clearer idea of what each page covers. Use this list to see what other people have been viewing and perhaps to pique your own interest about a topic you’ve been meaning to research.

January 3: The Most Viewed AWS Security Blog Posts in 2016
The following 10 posts were the most viewed AWS Security Blog posts that we published during 2016. You can use this list as a guide to catch up on your blog reading or even read a post again that you found particularly useful.

January 3: How to Monitor AWS Account Configuration Changes and API Calls to Amazon EC2 Security Groups
You can use AWS security controls to detect and mitigate risks to your AWS resources. The purpose of each security control is defined by its control objective. For example, the control objective of an Amazon VPC security group is to permit only designated traffic to enter or leave a network interface. Let’s say you have an Internet-facing e-commerce website, and your security administrator has determined that only HTTP (TCP port 80) and HTTPS (TCP 443) traffic should be allowed access to the public subnet. As a result, your administrator configures a security group to meet this control objective. What if, though, someone were to inadvertently change this security group’s rules and enable FTP or other protocols to access the public subnet from any location on the Internet? That expanded access could weaken the security posture of your assets. Consequently, your administrator might need to monitor the integrity of your company’s security controls so that the controls maintain their desired effectiveness. In this blog post, I explore two methods for detecting unintended changes to VPC security groups. The two methods address not only control objectives but also control failures.
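As a sketch of that control objective, the following hypothetical helper flags any rule in a security group (in the `IpPermissions` shape returned by EC2's DescribeSecurityGroups API) that allows anything other than single-port TCP 80 or TCP 443; the helper and its allowed-rule set are illustrative only.

```python
# Hypothetical check: only single-port TCP 80 and TCP 443 rules are compliant.
ALLOWED_RULES = {('tcp', 80), ('tcp', 443)}

def non_compliant_rules(ip_permissions):
    """Return every permission entry that violates the control objective."""
    violations = []
    for perm in ip_permissions:
        proto = perm.get('IpProtocol')
        from_port, to_port = perm.get('FromPort'), perm.get('ToPort')
        # '-1' means "all protocols" in the EC2 API; port ranges are also rejected
        if proto == '-1' or from_port != to_port or (proto, from_port) not in ALLOWED_RULES:
            violations.append(perm)
    return violations
```

A monitoring function triggered by CloudWatch Events could run a check like this on each security-group change and alert (or revert the rule) when the returned list is non-empty.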

If you have questions about or issues with implementing the solutions in any of these posts, please start a new thread on the forum identified near the end of each post.

– Craig

New AWS Encryption SDK for Python Simplifies Multiple Master Key Encryption

Post Syndicated from Matt Bullock original https://aws.amazon.com/blogs/security/new-aws-encryption-sdk-for-python-simplifies-multiple-master-key-encryption/

The AWS Cryptography team is happy to announce a Python implementation of the AWS Encryption SDK. This new SDK helps manage data keys for you, and it simplifies the process of encrypting data under multiple master keys. As a result, this new SDK allows you to focus on the code that drives your business forward. It also provides a framework you can easily extend to ensure that you have a cryptographic library that is configured to match and enforce your standards. The SDK also includes ready-to-use examples. If you are a Java developer, you can refer to this blog post to see specific Java examples for the SDK.

In this blog post, I show you how you can use the AWS Encryption SDK to simplify the process of encrypting data and how to protect your encryption keys in ways that help improve application availability by not tying you to a single region or key management solution.

How does the AWS Encryption SDK help me?

Developers using encryption often face three problems:

  1. How do I correctly generate and use a data key to encrypt data?
  2. How do I protect the data key after it has been used?
  3. How do I store the data key and ciphertext in a portable manner?

The library provided in the AWS Encryption SDK addresses the first problem by implementing the low-level envelope encryption details transparently using the cryptographic provider available in your development environment. The library helps address the second problem by providing intuitive interfaces to let you choose how you want to generate data keys and the master keys or key-encrypting keys that will protect data keys. Developers can then focus on the core of the application they are building instead of on the complexities of encryption. The ciphertext addresses the third problem, as described later in this post.

The AWS Encryption SDK defines a carefully designed and reviewed ciphertext data format that supports multiple secure algorithm combinations (with room for future expansion) and has no limits on the types or algorithms of the master keys. The ciphertext output of clients (created with the SDK) is a single binary blob that contains your encrypted message and one or more copies of the data key, as encrypted by each master key referenced in the encryption request. This single ciphertext data format for envelope-encrypted data makes it easier to ensure the data key has the same durability and availability properties as the encrypted message itself.
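To make the envelope pattern behind this format concrete, here is a toy sketch (not the ESDK's actual wire format, and using an insecure XOR stand-in for a real cipher) of a message body carried alongside one wrapped copy of the data key per master key:

```python
import os

def toy_xor(key: bytes, data: bytes) -> bytes:
    # Stand-in "cipher" for illustration only; XOR is its own inverse and is NOT secure.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def envelope_encrypt(master_keys: dict, plaintext: bytes) -> dict:
    data_key = os.urandom(32)                       # 1. fresh data key per message
    body = toy_xor(data_key, plaintext)             # 2. encrypt the message with it
    wrapped = {key_id: toy_xor(mk, data_key)        # 3. wrap the data key under every master key
               for key_id, mk in master_keys.items()}
    return {'wrapped_keys': wrapped, 'body': body}  # one portable blob

def envelope_decrypt(key_id: str, master_key: bytes, message: dict) -> bytes:
    data_key = toy_xor(master_key, message['wrapped_keys'][key_id])
    return toy_xor(data_key, message['body'])
```

Any single master key is enough to recover the data key, which is what lets a decryptor in any region (or an escrow holder) read the same ciphertext.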

The AWS Encryption SDK provides production-ready reference implementations in Java and Python with direct support for key providers such as AWS Key Management Service (KMS). The Java implementation also supports the Java Cryptography Architecture (JCA/JCE) natively, which includes support for AWS CloudHSM and other PKCS #11 devices. The standard ciphertext data format the AWS Encryption SDK defines means that you can use combinations of the Java and Python clients for encryption and decryption as long as they each have access to the key provider that manages the correct master key used to encrypt the data key.

Now let’s walk through two examples of using the AWS Encryption SDK to encrypt data under multiple master keys.

Example 1: Encrypting application secrets under multiple regional KMS master keys for high availability

Many customers want to build systems that not only span multiple Availability Zones, but also multiple regions. You cannot share KMS customer master keys (CMKs) across regions. However, with envelope encryption, you can encrypt the data key with multiple KMS CMKs in different regions. Applications running in each region can use the local KMS endpoint to decrypt the ciphertext for faster and more reliable access.

For the examples in this post, I will assume that I am running on Amazon EC2 instances configured with IAM roles for EC2. This enables me to avoid credential management and take advantage of built-in logic that routes requests to the nearest endpoints. These examples also assume that the latest version of the AWS SDK for Python (different from the AWS Encryption SDK) is available.

The encryption logic has a simple high-level design. Using provided parameters, I get the master keys and use them to encrypt some provided data, as shown in the following code example. I will define how to construct the multi-region KMS key provider next.

import aws_encryption_sdk

def encrypt_data(plaintext):
    # Get all the master keys needed
    key_provider = build_multiregion_kms_master_key_provider()

    # Encrypt the provided data under all of those master keys
    ciphertext, header = aws_encryption_sdk.encrypt(
        source=plaintext,
        key_provider=key_provider
    )
    return ciphertext

Create a master key provider containing multiple master keys

The following code example shows how you can encrypt data under CMKs in three US regions: us-east-1, us-west-1, and us-west-2. The example assumes that you have already set up the CMKs and created an alias named alias/exampleKey in each region for each CMK. For more information about creating CMKs and aliases, see Creating Keys in the AWS KMS documentation.

This example creates a single KMSMasterKeyProvider to which all CMKs are added. The KMSMasterKeyProvider handles interacting with CMKs in multiple regions. Note that the first master key added to the KMSMasterKeyProvider is the one used to generate the new data key, and the other master keys are used to encrypt the new data key.

import aws_encryption_sdk
import boto3

def build_multiregion_kms_master_key_provider():
    regions = ('us-east-1', 'us-west-1', 'us-west-2')
    alias = 'alias/exampleKey'
    arn_template = 'arn:aws:kms:{region}:{account_id}:{alias}'

    # Create AWS KMS master key provider
    kms_master_key_provider = aws_encryption_sdk.KMSMasterKeyProvider()

    # Find your AWS account ID
    account_id = boto3.client('sts').get_caller_identity()['Account']

    # Add the KMS alias in each region to the master key provider
    for region in regions:
        kms_master_key_provider.add_master_key(arn_template.format(
            region=region,
            account_id=account_id,
            alias=alias
        ))
    return kms_master_key_provider

The logic to construct a master key provider could be built once by your central security team and then reused across your company to both simplify development and ensure that all encrypted data meets corporate standards.

Encrypt the data

The data you encrypt can come from anywhere and you can distribute it however you like. In the following code example, I read a file from disk and write out an encrypted copy. The AWS Encryption SDK provides a stream interface that behaves as a standard Python stream context manager to make this easy.

import aws_encryption_sdk
import boto3

def encrypt_file(input_filename, output_filename):
    # Get all the master keys needed
    key_provider = build_multiregion_kms_master_key_provider()

    # Open the files for reading and writing
    with open(input_filename, 'rb') as infile,\
            open(output_filename, 'wb') as outfile:
        # Encrypt the file contents in streaming chunks
        with aws_encryption_sdk.stream(
            mode='e',
            source=infile,
            key_provider=key_provider
        ) as encryptor:
            for chunk in encryptor:
                outfile.write(chunk)
This file could contain, for example, secret application configuration data (such as passwords, certificates, and the like) that is then sent to EC2 instances as EC2 user data upon launch.

Decrypt the data

The following code example decrypts the contents of the EC2 user data and writes it to the specified file. The KMSMasterKeyProvider defaults to using KMS in the local region, so decryption proceeds quickly without cross-region calls.

import aws_encryption_sdk
from botocore.vendored import requests

def decrypt_user_data(output_filename):
    # Create a master key provider that points to the local KMS stack
    kms_key_provider = aws_encryption_sdk.KMSMasterKeyProvider()

    # Read the user data from the EC2 instance metadata service
    user_data = requests.get(
        'http://169.254.169.254/latest/user-data'
    ).content

    # Open a stream to write out the decrypted file,
    # decrypt the user data, and write the plaintext into the file
    with open(output_filename, 'wb') as outfile,\
            aws_encryption_sdk.stream(
                mode='d',
                source=user_data,
                key_provider=kms_key_provider
            ) as decryptor:
        for chunk in decryptor:
            outfile.write(chunk)

Congratulations! You have just encrypted data under master keys in multiple regions and have code that will always decrypt the data by using the local KMS stack. This gives you higher availability and lower latency for decryption, while still only needing to manage a single ciphertext.

Example 2: Encrypting application secrets under master keys from different providers for escrow and portability

Another reason why you might want to encrypt data under multiple master keys is to avoid relying on a single provider for your keys. By not tying yourself to a single key management solution, you help improve your applications’ availability. This approach also might help if you have compliance, data loss prevention, or disaster recovery requirements that require multiple providers.

You can use the same technique demonstrated previously in this post to encrypt your data to an escrow or additional decryption master key that is independent of your primary provider. This example demonstrates how to use an additional master key, which is an RSA public key randomly generated upon request. (Storing and managing the RSA key pair are out of scope for this blog.)

Encrypt the data with a public master key

Just like the previous code example that created a number of KMS master keys to encrypt data, the following code example creates one more master key for use with the RSA public key.

import os

import aws_encryption_sdk
from aws_encryption_sdk.internal.crypto import WrappingKey
from aws_encryption_sdk.key_providers.raw import RawMasterKeyProvider
from aws_encryption_sdk.identifiers import WrappingAlgorithm, EncryptionKeyType
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

class StaticRandomMasterKeyProvider(RawMasterKeyProvider):
    """Randomly generates and provides 4096-bit RSA keys consistently per unique key id."""
    provider_id = 'static-random'

    def __init__(self, **kwargs):
        self._static_keys = {}

    def _get_raw_key(self, key_id):
        """Retrieves a static, randomly generated RSA key for the specified key id.

        :param str key_id: Key ID
        :returns: Wrapping key which contains the specified static key
        :rtype: :class:`aws_encryption_sdk.internal.crypto.WrappingKey`
        """
        try:
            static_key = self._static_keys[key_id]
        except KeyError:
            # Generate a new 4096-bit RSA key the first time this key id is seen
            private_key = rsa.generate_private_key(
                public_exponent=65537,
                key_size=4096,
                backend=default_backend()
            )
            static_key = private_key.private_bytes(
                encoding=serialization.Encoding.PEM,
                format=serialization.PrivateFormat.PKCS8,
                encryption_algorithm=serialization.NoEncryption()
            )
            self._static_keys[key_id] = static_key
        return WrappingKey(
            wrapping_algorithm=WrappingAlgorithm.RSA_OAEP_SHA1_MGF1,
            wrapping_key=static_key,
            wrapping_key_type=EncryptionKeyType.PRIVATE
        )

def get_multi_master_key_provider():
    # Create multiregion KMS master key provider
    multi_master_key_provider = build_multiregion_kms_master_key_provider()

    # Create static master key provider and add a key
    static_key_id = os.urandom(8)
    static_master_key_provider = StaticRandomMasterKeyProvider()
    static_master_key_provider.add_master_key(static_key_id)

    # Add static master key provider to the multiregion KMS master key provider
    multi_master_key_provider.add_master_key_provider(static_master_key_provider)

    return multi_master_key_provider, static_master_key_provider

Decrypt the data with the private key

The following decryption code example uses the static RSA master key provider generated previously to demonstrate decryption with a non-AWS master key.

def cycle_data(input_data):
    # Create multi-source master key provider
    multi_master_key_provider, static_master_key_provider = get_multi_master_key_provider()

    # Encrypt data with the multi-source master key provider
    ciphertext, header = aws_encryption_sdk.encrypt(
        source=input_data,
        key_provider=multi_master_key_provider
    )

    # Decrypt data using only the static master key provider
    plaintext, header = aws_encryption_sdk.decrypt(
        source=ciphertext,
        key_provider=static_master_key_provider
    )
    return plaintext


Envelope encryption is powerful, but traditionally, it has been challenging to implement. The new AWS Encryption SDK helps manage data keys for you, and it simplifies the process of encrypting data under multiple master keys. As a result, this new SDK allows you to focus on the code that drives your business forward. It also provides a framework you can easily extend to ensure that you have a cryptographic library that is configured to match and enforce your standards.

We are excited about releasing the AWS Encryption SDK and cannot wait to hear what you do with it. If you have comments about the new SDK or anything in this blog post, submit a comment in the “Comments” section below. If you have implementation or usage questions, start a new thread on the KMS forum.

– Matt