Tag Archives: AWS Certificate Manager

Monitoring AWS Certificate Manager Private CA with AWS Security Hub

Post Syndicated from Anthony Pasquariello original https://aws.amazon.com/blogs/security/monitoring-aws-certificate-manager-private-ca-with-aws-security-hub/

Certificates are a vital part of any security infrastructure because they allow a company’s internal or external facing products, like websites and devices, to be trusted. To deploy certificates successfully and at scale, you need to set up a certificate authority hierarchy that provisions and issues certificates. You also need to monitor this hierarchy closely, looking for any activity that occurs within your infrastructure, such as creating or deleting a root certificate authority (CA). You can achieve this using AWS Certificate Manager (ACM) Private Certificate Authority (CA) with AWS Security Hub.

AWS Certificate Manager (ACM) Private CA is a managed private certificate authority service that extends ACM certificates to private certificates. With private certificates, you can authenticate resources inside an organization. Private certificates allow entities like users, web servers, VPN users, internal API endpoints, and IoT devices to prove their identity and establish encrypted communications channels. With ACM Private CA, you can create complete CA hierarchies, including root and subordinate CAs, without the investment and maintenance costs of operating your own certificate authority.

AWS Security Hub provides a comprehensive view of your security state within AWS and your compliance with security industry standards and best practices. Security Hub centralizes and prioritizes security and compliance findings from across AWS accounts, services, and supported third-party partners to help you analyze your security trends and identify the highest priority security issues.

In this example, we show how to monitor your root CA and generate a security finding in Security Hub if your root is used to issue a certificate. Following best practices, the root CA should be used rarely and only to issue certificates under controlled circumstances, such as during a ceremony to create a subordinate CA. Issuing a certificate from the root at any other time is a red flag that should be investigated by your security team. Such issuance will show up in Security Hub as a finding indicated by ‘ACM Private CA Certificate Issuance.’

Example scenario

For highly privileged actions within an IT infrastructure, it’s crucial that you use the principle of least privilege when allowing employee access. To ensure least privilege is followed, you should track highly sensitive actions using monitoring and alerting solutions. Highly sensitive actions should only be performed by authorized personnel. In this post, you’ll learn how to monitor activity that occurs within ACM Private CA, such as creating or deleting a root CA, using AWS Security Hub. In this example scenario, we cover a highly sensitive action within an organization building a private certificate authority hierarchy using ACM Private CA:

Creation of a subordinate CA that is signed by the root CA:

Creating a CA certificate is a privileged action. Only authorized personnel within the CA Hierarchy Management team should create CA certificates. Certificate authorities can sign private certificates that allow entities to prove their identity and establish encrypted communications channels.

Architecture overview

This solution requires some background information about the example scenario. In the example, the organization has the following CA hierarchy: root CA → subordinate CA → end entity certificates. To learn how to build your own private certificate infrastructure, see this post.

Figure 1: An example of a certificate authority hierarchy

There is one root CA and one subordinate CA. The subordinate CA issues end entity certificates (private certificates) to internal applications.

To use the test solution, you will first deploy a CloudFormation template that sets up an Amazon CloudWatch Events rule and a Lambda function. Then, you will assume the persona of a security or certificate administrator within the example organization who has the ability to create certificate authorities within ACM Private CA.

Figure 2: Architecture diagram of the solution

The architecture diagram in Figure 2 outlines the entire example solution. At a high level, this architecture enables customers to monitor activity within ACM Private CA in Security Hub. The components are explained as follows:

  1. Administrators within your organization have the ability to create certificate authorities and provision private certificates.
  2. Amazon CloudWatch Events tracks API calls using ACM Private CA as a source.
  3. Each CloudWatch event triggers a corresponding AWS Lambda function that is also deployed by the CloudFormation template. The Lambda function reads the event details and formats them into the AWS Security Finding Format (ASFF).
  4. The Lambda function generates findings in AWS Security Hub for your security team to monitor and act on, as sketched below.
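
To make steps 3 and 4 concrete, here is a minimal sketch, in Python with boto3, of what such a Lambda handler might look like. The event field positions and the handler shape are assumptions for illustration; the actual function deployed by the CloudFormation template may differ.

import datetime
import boto3

securityhub = boto3.client("securityhub")

def lambda_handler(event, context):
    # Pull identifying details from the CloudWatch event (field positions assumed).
    region = event["region"]
    account = event["account"]
    cert_arn = event["resources"][0]
    now = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")

    # Build a finding in AWS Security Finding Format (ASFF).
    finding = {
        "SchemaVersion": "2018-10-08",
        "Id": f"{region}/{account}/{cert_arn}",
        "ProductArn": f"arn:aws:securityhub:{region}:{account}:product/{account}/default",
        "GeneratorId": cert_arn,
        "AwsAccountId": account,
        "Types": ["Unusual Behaviors"],
        "CreatedAt": now,
        "UpdatedAt": now,
        "Severity": {"Normalized": 60},
        "Title": "Private CA Certificate Creation",
        "Description": "A Private CA certificate was issued in AWS Certificate Manager Private CA",
        "Resources": [
            {"Type": "Other", "Id": cert_arn, "Region": region, "Partition": "aws"}
        ],
        "RecordState": "ACTIVE",
    }

    # Deliver the finding to Security Hub.
    securityhub.batch_import_findings(Findings=[finding])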

This post assumes you have administrative access to the resources used, such as ACM Private CA, Security Hub, CloudFormation, and Amazon Simple Storage Service (Amazon S3). We also cover how to remediate through practicing the principle of least privilege, and what that looks like within the example scenario.

Deploy the example solution

First, make sure that AWS Security Hub is turned on, as it isn’t on by default. If you haven’t used the service yet, go to the Security Hub landing page within the AWS Management Console, select Go to Security Hub, and then select Enable Security Hub. See documentation for more ways to enable Security Hub.
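
If you prefer to enable Security Hub programmatically, a minimal sketch with boto3 (assuming credentials with the necessary permissions) looks like this:

import boto3

# Enables Security Hub in the account and Region of the current credentials.
securityhub = boto3.client("securityhub")
securityhub.enable_security_hub()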

Next, launch the CloudFormation template. Here’s how:

  1. Log in to the AWS Management Console and select AWS Region us-east-1 (N. Virginia) for this example deployment.
  2. Make sure you have the necessary privileges to create resources, as described in the “Architecture overview” section.
  3. Set up the sample deployment by selecting Launch Stack below.

The example solution must be launched in an AWS Region where ACM Private CA and Security Hub are enabled. The Launch Stack button will default to us-east-1. If you want to launch in another region, download the CloudFormation template from the GitHub repository found at the end of the blog.

Now that you’ve deployed the CloudFormation stack, we’ll help you understand how we’ve utilized AWS Security Finding Format (ASFF) in the Lambda functions.

How to create findings using AWS Security Finding Format (ASFF)

Security Hub consumes, aggregates, organizes, and prioritizes findings from AWS security services and from third-party product integrations. Security Hub receives these findings using a standard findings format called the AWS Security Finding Format (ASFF), thus eliminating the need for time-consuming data conversion efforts. Then it correlates ingested findings across products to prioritize the most important ones.

Below you can find an example input that shows how to use ASFF to populate findings in AWS Security Hub for the creation of a CA certificate. We placed this information in the Lambda function Certificate Authority Creation that was deployed in the CloudFormation stack.


{
 "SchemaVersion": "2018-10-08",
 "Id": region + "/" + accountNum + "/" + caCertARN,
 "ProductArn": "arn:aws:securityhub:" + region + ":" + accountNum + ":product/" + accountNum + "/default",
 "GeneratorId": caCertARN,
 "AwsAccountId": accountNum,
 "Types": [
     "Unusual Behaviors"
 ],
 "CreatedAt": date,
 "UpdatedAt": date,
 "Severity": {
     "Normalized": 60
 },
 "Title": "Private CA Certificate Creation",
 "Description": "A Private CA certificate was issued in AWS Certificate Manager Private CA",
 "Remediation": {
     "Recommendation": {
         "Text": "Verify this CA certificate creation was taken by a privileged user",
         "Url": "https://docs.aws.amazon.com/acm-pca/latest/userguide/ca-best-practices.html#minimize-root-use"
     }
 },
 "ProductFields": {
     "ProductName": "ACM PCA"
 },
 "Resources": [
     {
         "Details": {
             "Other": {
                "CAArn": CaArn,
                "CertARN": caCertARN
             }
         },
         "Type": "Other",
         "Id": caCertARN,
         "Region": region,
         "Partition": "aws"
    }
 ],
 "RecordState": "ACTIVE"
}

Below, we summarize some important fields within the finding generated by ASFF. We set these fields within the Lambda function in the CloudFormation template you deployed for the example scenario, and so you don’t have to do this yourself.

ProductARN

AWS services that are not yet integrated with Security Hub are treated similarly to third-party findings. Therefore, the company-id must be the account ID. The product-id must be the reserved word “default”, as shown below.


"ProductArn": "arn:aws:securityhub:" + region + ":" + accountNum + ":product/" + accountNum + "/default",

Severity

Assigning the correct severity is important to ensure useful findings. This example scenario sets the severity within the generated ASFF, as shown above. For future findings, you can determine the appropriate severity by comparing the score to the labels listed in Table 1; a simple score-to-label mapping is sketched after the list below.

Table 1: Severity labels in AWS Security Finding Format

Severity Label    Severity Score Range
Informational     0
Low               1–39
Medium            40–69
High              70–89
Critical          90–100
  • Informational: No issue was found.
  • Low: Findings with issues that could result in future compromises, such as vulnerabilities, configuration weaknesses, or exposed passwords.
  • Medium: Findings with issues that indicate an active compromise, but no indication that an adversary has completed their objectives. Examples include malware activity, hacking activity, or unusual behavior detection.
  • High or Critical: Findings associated with an adversary completing their objectives. Examples include data loss or compromise, or a denial of service.
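
To illustrate how the score ranges map to labels, here is a small hypothetical Python helper; it is not part of the deployed solution.

def severity_label(normalized_score: int) -> str:
    """Map a normalized ASFF severity score (0-100) to its severity label."""
    if normalized_score == 0:
        return "INFORMATIONAL"
    if normalized_score <= 39:
        return "LOW"
    if normalized_score <= 69:
        return "MEDIUM"
    if normalized_score <= 89:
        return "HIGH"
    return "CRITICAL"

# The example finding uses a normalized score of 60:
print(severity_label(60))  # MEDIUM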

Remediation

This provides the remediation options for a finding. In our example, we link to the ACM Private CA best practices documentation on minimizing use of the root CA.

Types

These indicate one or more finding types, in the format namespace/category/classifier, that classify a finding. Finding types should match the Types Taxonomy for ASFF.

To learn more about ASFF parameters, see ASFF syntax documentation.

Trigger a Security Hub finding

Figure 1 above shows the CA hierarchy you are building in this post. The CloudFormation template you deployed created the root CA. The following steps walk you through installing the root CA certificate and then issuing a CA certificate for the subordinate CA. The architecture we deployed notifies the security team of these actions via Security Hub.

First, we will activate the root CA and install the root CA certificate. This step signs the root CA Certificate Signing Request (CSR) with the root CA’s private key.

  1. Navigate to the ACM Private CA service. Under the Private certificate authority section, select Private CAs.
  2. Under Actions, select Install CA certificate.
  3. Set the validity period and signature algorithm for the root CA certificate. In this case, leave the default values for both fields as shown in Figure 3, and then select Next.

    Figure 3: Specify the root CA certificate parameters

  4. Under Review, generate, and install root CA certificate, select Confirm and install. This creates the root CA certificate.
  5. You should now see the root CA within the console with a status of Active, as shown in Figure 4 below.

    Figure 4: The root CA is now active

Now we will create a subordinate CA and install a CA certificate onto it.

  1. Select the Create CA button.
  2. Under Select the certificate authority (CA) type, select Subordinate CA, and then select Next.
  3. Configure the subordinate CA parameters by entering the following values (or any values that make sense for the CA hierarchy you’re trying to build) in the fields shown in Figure 5, and then select Next.

    Figure 5: Configure the certificate authority

  4. Under Configure the certificate authority (CA) key algorithm, select RSA 2048, and then select Next.
  5. Check Enable CRL distribution, and then, under Create a new S3 bucket, select No. Under S3 bucket name, enter acm-private-ca-crl-bucket-<account-number>, and then select Next.
  6. Under Configure CA permissions, select Authorize ACM to use this CA for renewals, and then select Next.
  7. To create the subordinate CA, review and accept the conditions described at the bottom of the page, select the check box if you agree to the conditions, and then select Confirm and create.

Now, you need to activate the subordinate CA and install the subordinate certificate authority certificate. This step allows you to sign the subordinate CA Certificate Signing Request (CSR) with the root CA’s private key.

  1. Select Get started to begin the process, as shown in Figure 6.

    Figure 6: Begin installing the subordinate certificate authority certificate

  2. Under Install subordinate CA certificate, select ACM Private CA, and then select Next. This starts the process of signing the subordinate CA cert with the root CA that was created earlier.
  3. Set the parent private CA with the root CA that was created, the validity period, the signature algorithm, and the path length for the subordinate CA certificate. In this case, leave the default values for validity period, signature algorithm, and path length fields as shown in Figure 7, and then select Next.

    Figure 7: Specify the subordinate CA certificate parameters

  4. Under Review, select Generate. This creates the subordinate CA certificate.
  5. You should now see the subordinate CA within the console with a status of Active.

How to view Security Hub findings

Now that you have created a root CA and a subordinate CA under the root, you can review findings from the perspective of your security team who is notified of the findings within Security Hub. In the example scenario, creating the CA certificates triggers a CloudWatch Events rule generated from the CloudFormation template.

This event rule uses the native ACM Private CA CloudWatch Events integration and matches ACM Private CA Certificate Issuance events for the root CA’s ARN. The event pattern is shown below.


{
  "detail-type": [
    "ACM Private CA Certificate Issuance"
  ],
  "resources": [
    "arn:aws:acm-pca:us-east-2:xxxxxxxxxxx:certificate-authority/xxxxxx-xxxx-xxx-xxxx-xxxxxxxx"
  ],
  "source": [
    "aws.acm-pca"
  ]
}

When the event of creating a CA certificate from the root CA occurs, it triggers the Lambda function with the finding in ASFF to generate that finding in Security Hub.
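
The CloudFormation template creates this wiring for you. As an illustrative sketch only, with placeholder names and ARNs, the equivalent boto3 calls might look like this:

import json
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

RULE_NAME = "ACMPrivateCACertificateIssuance"  # placeholder
FUNCTION_ARN = "arn:aws:lambda:us-east-1:111122223333:function:GenerateFinding"  # placeholder
ROOT_CA_ARN = "arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/example"  # placeholder

# Create a rule that matches certificate issuance events from the root CA.
rule_arn = events.put_rule(
    Name=RULE_NAME,
    EventPattern=json.dumps({
        "source": ["aws.acm-pca"],
        "detail-type": ["ACM Private CA Certificate Issuance"],
        "resources": [ROOT_CA_ARN],
    }),
)["RuleArn"]

# Point the rule at the Lambda function...
events.put_targets(Rule=RULE_NAME, Targets=[{"Id": "1", "Arn": FUNCTION_ARN}])

# ...and allow CloudWatch Events to invoke it.
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="AllowEventsInvoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)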

To assess a finding in Security Hub

  1. Navigate to Security Hub. On the left side of the Security Hub page, select Findings to view the finding generated from the Lambda function. Filter by Title EQUALS Private CA Certificate Creation, as shown in Figure 8.

    Figure 8: Filter the findings in Security Hub

  2. Select the finding’s title (Private CA Certificate Creation) to open it. You will see the details generated from this finding:

    Severity: Medium
    Company: Personal
    Title: Private CA Certificate Creation
    Remediation: Verify this CA certificate creation was taken by a privileged user

    The finding has a Medium severity level because we set the normalized severity score to 60. This severity indicates a possible active compromise, but no indication that an adversary has completed their objectives. In the hypothetical example covered earlier, a user has provisioned a CA certificate from the root CA, which should only be provisioned under controlled circumstances, such as during a ceremony to create a subordinate CA. Issuing a certificate from the root at any other time is a red flag that should be investigated by the security team. The remediation attribute in the finding shown here links to security best practices for ACM Private CA.

    Figure 9: Remediation tab in Security Hub Finding

  3. To see more details about the finding, in the upper right corner of the console, under Private CA Certificate Creation, select the Finding ID link, as shown in Figure 10.

    Figure 10: Select the Finding ID link to learn more about the finding

  4. The Finding JSON box will appear. Scroll down to Resources > Details > Other, as shown in Figure 11. The CAArn is the ARN of the root certificate authority that provisioned the certificate. The CertARN is the ARN of the certificate that it provisioned.

    Figure 11: Details about the finding

Cleanup

To avoid costs associated with the test CA hierarchy created and other test resources generated from the CloudFormation template, ensure that you clean up your test environment. Take the following steps:

  1. Disable and delete the CA hierarchy you created, including the root CA and any subordinate CAs.
  2. Delete the CloudFormation stack.

New ACM Private CA accounts can try the service for 30 days with no charge for operation of the first private CA created in the account. You pay for the certificates you issue during the trial period.

Next steps

In this post, you learned how to create a pipeline from an ACM Private CA action to a Security Hub finding. You can monitor many other ACM Private CA API calls in Security Hub using the same pattern.

To generate an ASFF object for one of these API calls, follow the steps from the ASFF section above. For more details, see the documentation. For the latest updates and changes to the CloudFormation template and resources within this post, please check the GitHub repository.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Certificate Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Anthony Pasquariello

Anthony is an Enterprise Solutions Architect based in New York City. He provides technical consultation to customers during their cloud journey, especially around security best practices. He has an MS and BS in electrical & computer engineering from Boston University. In his free time, he enjoys ramen, writing non-fiction, and philosophy.

Author

Christine Samson

Christine is an AWS Solutions Architect based in New York City. She provides customers with technical guidance for emerging technologies within the cloud, such as IoT, Serverless, and Security. She has a BS in Computer Science with a certificate in Engineering Leadership from the University of Colorado Boulder. She enjoys exploring new places to eat, playing the piano, and playing sports such as basketball and volleyball.

Author

Ram Ramani

Ram is a Security Specialist Solutions Architect at AWS focusing on data protection. Ram works with customers across all verticals to help with security controls and best practices on how customers can best protect their data that they store on AWS. In his free time, Ram likes playing table tennis and teaching coding skills to his kids.

Code signing using AWS Certificate Manager Private CA and AWS Key Management Service asymmetric keys

Post Syndicated from Ram Ramani original https://aws.amazon.com/blogs/security/code-signing-aws-certificate-manager-private-ca-aws-key-management-service-asymmetric-keys/

In this post, we show you how to combine the asymmetric signing feature of the AWS Key Management Service (AWS KMS) and code-signing certificates from the AWS Certificate Manager (ACM) Private Certificate Authority (PCA) service to digitally sign any binary data blob and then verify its identity and integrity. AWS KMS makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and with your applications running on AWS. ACM PCA provides you a highly available private certificate authority (CA) service without the upfront investment and ongoing maintenance costs of operating your own private CA. CA administrators can use ACM PCA to create a complete CA hierarchy, including online root and subordinate CAs, with no need for external CAs. Using ACM PCA, you can provision, rotate, and revoke certificates that are trusted within your organization.

Traditionally, a person’s signature helps to validate that the person signed an agreement and agreed to the terms. Signatures are a big part of our lives, from our driver’s licenses to our home mortgage documents. When a signature is requested, the person or entity requesting the signature needs to verify the validity of the signature and the integrity of the message being signed.

As the internet and cryptography research evolved, technologists found ways to carry the usefulness of signatures from the analog world to the digital world. In the digital world, public and private key cryptography and X.509 certificates can help with digital signing, verifying message integrity, and verifying signature authenticity. In simple terms, an entity—which could be a person, an organization, a device, or a server—can digitally sign a piece of data, and another entity can validate the authenticity of the signature and validate the integrity of the signed data. The data that’s being signed could be a document, a software package, or any other binary data blob.

To learn more about AWS KMS asymmetric keys and ACM PCA, see Digital signing with the new asymmetric keys feature of AWS KMS and How to host and manage an entire private certificate infrastructure in AWS.

We provide Java code snippets for each part of the process in the following steps. In addition, the complete Java code, along with the Maven build configuration file pom.xml, is available for download from this GitHub project. The steps below illustrate the different processes that are involved and the associated Java code snippets. However, you need to use the GitHub project to build and run the Java code successfully.

Let’s take a look at the steps.

1. Create an asymmetric key pair

For digital signing, you need a code-signing certificate and an asymmetric key pair. In this step, you create an asymmetric key pair using AWS KMS. The following code snippet from the main method within the file Runner.java creates the asymmetric key pair within KMS in your AWS account. An asymmetric KMS key with the alias CodeSigningCMK is created.


AsymmetricCMK codeSigningCMK = AsymmetricCMK.builder()
                .withAlias(CMK_ALIAS)
                .getOrCreate();

2. Create a code-signing certificate

To create a code-signing certificate, you need a private CA hierarchy, which you create within the ACM PCA service. This example uses a simple hierarchy of one root CA and one subordinate CA under the root, because the recommendation is that you should not issue code-signing certificates directly from the root CA. The certificate authorities are needed to create the code-signing certificate. The common name for the root CA certificate is root CA, and the common name for the subordinate CA certificate is subordinate CA. The following code snippet in the main method within the file Runner.java is used to create the private CA hierarchy.


PrivateCA rootPrivateCA = PrivateCA.builder()
                .withCommonName(ROOT_COMMON_NAME)
                .withType(CertificateAuthorityType.ROOT)
                .getOrCreate();

PrivateCA subordinatePrivateCA = PrivateCA.builder()
        .withIssuer(rootPrivateCA)
        .withCommonName(SUBORDINATE_COMMON_NAME)
        .withType(CertificateAuthorityType.SUBORDINATE)
        .getOrCreate();

3. Create a certificate signing request

In this step, you create a certificate signing request (CSR) for the code-signing certificate. The following code snippet in the main method within the file Runner.java is used to create the CSR. The END_ENTITY_COMMON_NAME refers to the common name parameter of the code-signing certificate.


String codeSigningCSR = codeSigningCMK.generateCSR(END_ENTITY_COMMON_NAME);

4. Sign the CSR

In this step, the code-signing CSR is signed by the subordinate CA that was generated in step 2 to create the code-signing certificate.


GetCertificateResult codeSigningCertificate = subordinatePrivateCA.issueCodeSigningCertificate(codeSigningCSR);

Note: The code-signing certificate that’s generated contains the public key of the asymmetric key pair generated in step 1.

5. Create the custom signed object

The data to be signed is a simple string: “the data I want signed”. Its binary representation is hashed and digitally signed by the asymmetric KMS private key created in step 1, and a custom signed object that contains the signature and the code-signing certificate is created.

The following code snippet in the main method within the file Runner.java is used to create the custom signed object.


CustomCodeSigningObject customCodeSigningObject = CustomCodeSigningObject.builder()
                .withAsymmetricCMK(codeSigningCMK)
                .withDataBlob(TBS_DATA.getBytes(StandardCharsets.UTF_8))
                .withCertificate(codeSigningCertificate.getCertificate())
                .build();

6. Verify the signature

The custom signed object is verified for integrity, and the root CA certificate is used to verify the chain of trust to confirm non-repudiation of the identity that produced the digital signature.

The following code snippet in the main method within the file Runner.java is used for signature verification:


String rootCACertificate = rootPrivateCA.getCertificate();
String customCodeSigningObjectCertificateChain = codeSigningCertificate.getCertificate() + "\n" + codeSigningCertificate.getCertificateChain();

CustomCodeSigningObject.getInstance(customCodeSigningObject.toString())
        .validate(rootCACertificate, customCodeSigningObjectCertificateChain);

During this signature validation process, the validation method shown in the code above retrieves the public key portion of the AWS KMS asymmetric key pair generated in step 1 from the code-signing certificate. This process has the advantage that credentials to access AWS KMS aren’t needed during signature validation. Any entity that has the root CA certificate loaded in its trust store can verify the signature without needing access to the AWS KMS verify API.

Note: The implementation outlined in this post is an example. It doesn’t use a certificate trust store that’s either part of a browser or part of a file system within the resident operating system of a device or a server. The trust store is placed in an instance of a Java class object for the purpose of this post. If you are planning to use this code-signing example in a production system, you must change the implementation to use a trust store on the host. To do so, you can build and distribute a secure trust store that includes the root CA certificate.

Conclusion

In this post, we showed you how a binary data blob can be digitally signed using ACM PCA and AWS KMS and how the signature can be verified using only the root CA certificate. No secret information or credentials are required to verify the signature. You can use this method to build a custom code-signing solution to address your particular use cases. The GitHub repository provides the Java code and the maven pom.xml that you can use to build and try it yourself. The README.md file in the GitHub repository shows the instructions to execute the code.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Certificate Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ram Ramani

Ram is a Security Solutions Architect at AWS focusing on data protection. Ram works with customers across different industry verticals to provide them with solutions that help with protecting data at rest and in transit. In prior roles, Ram built ML algorithms for video quality optimization and worked on identity and access management solutions for financial services organizations.

Author

Kyle Schultheiss

Kyle is a Senior Software Engineer on the AWS Cryptography team. He has been working on the ACM Private Certificate Authority service since its inception in 2018. In prior roles, he contributed to other AWS services such as Amazon Virtual Private Cloud, Amazon EC2, and Amazon Route 53.

Digital signing with the new asymmetric keys feature of AWS KMS

Post Syndicated from Raj Copparapu original https://aws.amazon.com/blogs/security/digital-signing-asymmetric-keys-aws-kms/

AWS Key Management Service (AWS KMS) now supports asymmetric keys. You can create, manage, and use public/private key pairs to protect your application data using the new APIs via the AWS SDK. Similar to the symmetric key features we’ve been offering, asymmetric keys can be generated as customer master keys (CMKs) where the private portion never leaves the service, or as a data key where the private portion is returned to your calling application encrypted under a CMK. The private portions of asymmetric CMKs are used in AWS KMS hardware security modules (HSMs) designed so that no one, including AWS employees, can access the plaintext key material. AWS KMS supports the following asymmetric key types: RSA 2048, RSA 3072, RSA 4096, ECC NIST P-256, ECC NIST P-384, ECC NIST P-521, and ECC SECG P-256k1.

We’ve talked with customers and know that one popular use case for asymmetric keys is digital signing. In this post, I will walk you through an example of signing and verifying files using some of the new APIs in AWS KMS.

Background

A common way to ensure the integrity of a digital message as it passes between systems is to use a digital signature. A sender uses a secret along with cryptographic algorithms to create a data structure that is appended to the original message. A recipient with access to that secret can cryptographically verify that the message hasn’t been modified since the sender signed it. In cases where the recipient doesn’t have access to the same secret used by the sender for verification, a digital signing scheme that uses asymmetric keys is useful. The sender can make the public portion of the key available to any recipient to verify the signature, but the sender retains control over creating signatures using the private portion of the key. Asymmetric keys are used for digital signature applications such as trusted source code, authentication/authorization tokens, document e-signing, e-commerce transactions, and secure messaging.

AWS KMS supports what are known as raw digital signatures, where there is no identity information about the signer embedded in the signature object. A common way to attach identity information to a digital signature is to use digital certificates. If your application relies on digital certificates for signing and signature verification, we recommend you look at AWS Certificate Manager and Private Certificate Authority. These services allow you to programmatically create and deploy certificates with keys to your applications for digital signing operations. A common application of digital certificates is TLS termination on a web server to secure data in transit.

Signing and verifying files with AWS KMS

Assume that you have an application A that sends a file to application B in your AWS account. You want the file to be digitally signed so that the receiving application B can verify it hasn’t been tampered with in transit. You also want to make sure only application A can digitally sign files using the key because you don’t want application B to receive a file thinking it’s from application A when it was really from a different sender that had access to the signing key. Because AWS KMS is designed so that the private portion of the asymmetric key pair used for signing cannot be used outside the service or by unauthenticated users, you’re able to define and enforce policies so that only application A can sign with the key.

To start, application A will submit either the file itself or a digest of the file to the AWS KMS Sign API under an asymmetric CMK. If the file is less than 4KB, AWS KMS will compute a digest for you as a part of the signing operation. If the file is greater than 4KB, you must send only the digest you created locally, and you must tell AWS KMS that you’re passing a digest in the MessageType parameter of the request. You can use any of several hashing functions in your local environment to create a digest of the file, but be aware that the receiving application in account B will need to be able to compute the digest using the same hash function in order to verify the integrity of the file. In my example, I’m using SHA256 as the hash function. Once the digest is created, AWS KMS uses the private portion of the asymmetric CMK to sign the digest with the signing algorithm specified in the API request. The result is a binary data object, which we’ll refer to as “the signature” throughout this post.
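
For illustration, here is a minimal Python sketch of creating such a digest locally. ExampleFile is a placeholder name; ExampleDigest matches the file referenced in the CLI examples that follow.

import hashlib

# Hash the file locally; for files larger than 4 KB you pass this digest
# to the Sign API with MessageType=DIGEST instead of the file itself.
with open("ExampleFile", "rb") as f:
    digest = hashlib.sha256(f.read()).digest()

with open("ExampleDigest", "wb") as f:
    f.write(digest)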

Once application B receives the file with the signature, it must create a digest of the file. It then passes this newly generated digest, the signature object, the signing algorithm used, and the CMK keyId to the Verify API. AWS KMS uses the corresponding public key of the CMK with the signing algorithm specified in the request to verify the signature. Instead of submitting the signature to the Verify API, application B could verify the signature locally by acquiring the public key. This might be an attractive option if application B didn’t have a way to acquire valid AWS credentials to make a request of AWS KMS. However, this method requires application B to have access to the necessary cryptographic algorithms and to have previously received the public portion of the asymmetric CMK. In my example, application B is running in the same account as application A, so it can acquire AWS credentials to make the Verify API request. I’ll describe how to verify signatures using both methods in a bit more detail later in the post.

Creating signing keys and setting up key policy permissions

To start, you need to create an asymmetric CMK. When calling the CreateKey API, you’ll pass one of the asymmetric values for the CustomerMasterKeySpec parameter. In my example, I’m choosing a key spec of ECC_NIST_P256 because keys used with elliptic curve algorithms tend to be more efficient than those used with RSA-based algorithms. Note that the signing algorithm must match the key spec: the ECDSA_SHA_256 algorithm used later in this example pairs with P-256 keys.
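
As a sketch, the CreateKey call with boto3 might look like the following; the description string is a placeholder.

import boto3

kms = boto3.client("kms")

# Create an asymmetric CMK for signing; the private portion never leaves AWS KMS.
response = kms.create_key(
    CustomerMasterKeySpec="ECC_NIST_P256",
    KeyUsage="SIGN_VERIFY",
    Description="Asymmetric CMK for digital signing",  # placeholder
)
print(response["KeyMetadata"]["KeyId"])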

As a part of creating your asymmetric CMK, you need to attach a resource policy to the key to control which cryptographic operations the AWS principals representing applications A and B can use. A best practice is to use a different IAM principal for each application in order to scope down permissions. In this case, you want application A to only be able to sign files, and application B to only be able to verify them. I will assume each of these applications are running in Amazon EC2, and so I’ll create a couple of IAM roles.

  • The IAM role for application A (SignRole) will be given kms:Sign permission in the CMK key policy
  • The IAM role for application B (VerifyRole) will be given kms:Verify permission in the CMK key policy

The stanza in the CMK key policy document to allow signing should look like this (replace the account ID value of <111122223333> with your own):


{
	"Sid": "Allow use of the key for digital signing",
	"Effect": "Allow",
	"Principal": {"AWS":"arn:aws:iam::<111122223333>:role/SignRole"},
	"Action": "kms:Sign",
	"Resource": "*"
}

The stanza in the CMK key policy document to allow verification should look like this (replace the account ID value of <111122223333> with your own):


{
	"Sid": "Allow use of the key for verification",
	"Effect": "Allow",
	"Principal": {"AWS":"arn:aws:iam::<111122223333>:role/VerifyRole"},
	"Action": "kms:Verify",
	"Resource": "*"
}

Signing Workflow

Once you have created the asymmetric CMK and IAM roles, you’re ready to sign your file. Application A will create a message digest of the file and make a sign request to AWS KMS with the asymmetric CMK keyId and signing algorithm. The CLI command to do this is shown below. Replace the key-id parameter with your CMK’s specific keyId.


aws kms sign \
	--key-id <1234abcd-12ab-34cd-56ef-1234567890ab> \
	--message-type DIGEST \
	--signing-algorithm ECDSA_SHA_256 \
	--message fileb://ExampleDigest

I chose the ECDSA_SHA_256 signing algorithm for this example. See the Sign API specification for a complete list of supported signing algorithms.

After validating that the API call is authorized by the credentials available to SignRole, AWS KMS generates a signature over the digest and returns the CMK keyId, the signature, and the signing algorithm.

Verify Workflow 1 — Calling the verify API

Once application B receives the file and the signature, it computes the SHA-256 digest over the copy of the file it received. It then makes a verify request to AWS KMS, passing this new digest, the signature it received from application A, the signing algorithm, and the CMK keyId. The CLI command to do this is shown below. Replace the key-id parameter with your CMK’s specific keyId.


aws kms verify \
	--key-id <1234abcd-12ab-34cd-56ef-1234567890ab> \
	--message-type DIGEST \
	--signing-algorithm ECDSA_SHA_256 \
	--message fileb://ExampleDigest \
	--signature fileb://Signature

After validating that the verify request is authorized, AWS KMS verifies the signature using the public portion of the CMK and the signing algorithm specified in the request, checking the signature against the digest received in the verify request. If the signature is valid, it returns a SignatureValid boolean of True, indicating that the original digest created by the sender matches the digest created by the recipient. Because the original digest is unique to the original file, the recipient can know that the file was not tampered with during transit.

One advantage of using the AWS KMS verify API is that the caller doesn’t have to keep track of the specific public key matching the private key used to create the signature; the caller only has to know the CMK keyId and signing algorithm used. Also, because all requests to AWS KMS are logged to AWS CloudTrail, you can audit that the signature and verification operations were both executed as expected. See the Verify API specification for more detail on available parameters.

Verify Workflow 2 — Verifying locally using the public key

Apart from using the Verify API directly, you can choose to retrieve the public key in the CMK using the AWS KMS GetPublicKey API and verify the signature locally. You might want to do this if application B needs to verify multiple signatures at a high rate and you don’t want to make a network call to the Verify API each time. In this method, application B makes a GetPublicKey request to AWS KMS to retrieve the public key. The CLI command to do this is below. Replace the key-id parameter with your CMK’s specific keyId.

aws kms get-public-key \
	--key-id <1234abcd-12ab-34cd-56ef-1234567890ab>

Note that application B will need permission to make a GetPublicKey request to AWS KMS. The stanza in the CMK key policy document to allow the VerifyRole identity to download the public key should look like this (replace the account ID value of <111122223333> with your own):


{
	"Sid": "Allow retrieval of the public key for verification",
	"Effect": "Allow",
	"Principal": {"AWS":"arn:aws:iam::<111122223333>:role/VerifyRole"},
	"Action": "kms:GetPublicKey ",
	"Resource": "*"
}

Once application B has the public key, it can use your preferred cryptographic provider to perform the signature verification locally. Application B needs to keep track of the public key and signing algorithm used for each signature object it will verify locally. Using the wrong public key or signing algorithm will cause the signature verification to fail.
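
As an illustration, the following is a sketch of local verification in Python using the cryptography package, assuming the ECDSA P-256 key and SHA-256 digest from this example; the key ID and file names are placeholders.

import boto3
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.serialization import load_der_public_key

# Fetch the public portion of the CMK once and reuse it for repeated verifications.
kms = boto3.client("kms")
der = kms.get_public_key(KeyId="1234abcd-12ab-34cd-56ef-1234567890ab")["PublicKey"]
public_key = load_der_public_key(der)

with open("ExampleFile", "rb") as f:   # placeholder file names
    message = f.read()
with open("Signature", "rb") as f:
    signature = f.read()

try:
    # verify() hashes the message with SHA-256 and checks the ECDSA signature.
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("Signature is valid")
except InvalidSignature:
    print("Signature verification failed")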

Availability and pricing

Asymmetric keys and operations in AWS KMS are available now in the Northern Virginia, Oregon, Sydney, Ireland, and Tokyo AWS Regions with support for other regions planned. Pricing information for the new feature can be found at the AWS KMS pricing page.

Summary

I showed you a simple example of how you can use the new AWS KMS APIs to digitally sign and verify an arbitrary file. By having AWS KMS generate and store the private portion of the asymmetric key, you can limit use of the key for signing only to IAM principals you define. OIDC ID tokens, OAuth 2.0 access tokens, documents, configuration files, system update messages, and audit logs are but a few of the types of objects you might want to sign and verify using this feature.

You can also perform encrypt and decrypt operations under asymmetric CMKs in AWS KMS as an alternative to using the symmetric CMKs available since the service launched. Similar to how you can ask AWS KMS to generate symmetric keys for local use in your application, you can ask AWS KMS to generate and return asymmetric key pairs for local use to encrypt and decrypt data. Look for a future AWS Security Blog post describing these use cases. For more information about asymmetric key support, see the AWS KMS documentation page.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about the asymmetric key feature, please start a new thread on the AWS KMS Discussion Forum.

Want more AWS Security news? Follow us on Twitter.

Author

Raj Copparapu

Raj Copparapu is a Senior Product Manager Technical. He’s a member of the AWS KMS team and focuses on defining the product roadmap to satisfy customer requirements. He spent over 5 years innovating on behalf of customers to deliver products that help customers secure their data in the cloud. Raj received his MBA from Duke’s Fuqua School of Business and spent his early career working as an engineer and a business intelligence consultant. In his spare time, Raj enjoys yoga and spending time with his kids.

How to host and manage an entire private certificate infrastructure in AWS

Post Syndicated from Josh Rosenthol original https://aws.amazon.com/blogs/security/how-to-host-and-manage-an-entire-private-certificate-infrastructure-in-aws/

AWS Certificate Manager (ACM) Private Certificate Authority (CA) now offers the option for managing online root CAs and a full online PKI hierarchy. You can now host and manage your organization’s entire private certificate infrastructure in AWS. Supporting a full hierarchy expands AWS Certificate Manager (ACM) Private Certificate Authority capabilities.

CA administrators can use ACM Private CA to create a complete CA hierarchy, including root and subordinate CAs, with no need for external CAs. Customers can create secure and highly available CAs in any one of the AWS Regions in which ACM Private CA is available, without building and maintaining their own on-premises CA infrastructure. ACM Private CA provides essential security for operating a CA in accordance with your internal compliance rules and security best practices. ACM Private CA is secured with AWS-managed hardware security modules (HSMs), removing the operational and cost burden from customers.

An overview of CA hierarchy

Certificates are used to establish identity and secure connections. A resource presents a certificate to a server to establish its identity. If the certificate is valid, and a chain can be constructed from the certificate to a trusted root CA, the server can positively identify and trust the resource.

A CA hierarchy provides strong security and restrictive access controls for the most-trusted root CA at the top of the trust chain, while allowing more permissive access and bulk certificate issuance for subordinate CAs lower in the chain.

The root CA is a cryptographic building block (root of trust) upon which certificates can be issued. It’s composed of a private key for signing (issuing) certificates and a root certificate that identifies the root CA and binds the private key to the name of the CA. The root certificate is distributed to the trust stores of each entity in an environment. When resources attempt to connect with one another, they check the certificates that each entity presents. If the certificates are valid and a chain can be constructed from the certificate to a root certificate installed in the trust store, a “handshake” occurs between resources that cryptographically proves the identity of each entity to the other. This creates an encrypted communication channel (TLS/SSL) between them.

How to configure a CA hierarchy with ACM Private CA

You can use root CAs to create a CA hierarchy without the need for an external root CA, and start issuing certificates to identify resources within your organizations. You can create root and subordinate CAs in nearly any configuration you want, including defining a CA structure to fit your needs or replicating an existing CA structure.

To get started, you can use the ACM Private CA console, APIs, or CLI to create a root and subordinate CA and issue certificates from the subordinate CA.
 

Figure 1: Creating a root CA
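
As a sketch, the equivalent API call with boto3 might look like the following; the subject name is a placeholder, and revocation configuration is omitted for brevity.

import boto3

pca = boto3.client("acm-pca")

# Create a root CA; a CA certificate still has to be issued and
# installed before the CA becomes active.
response = pca.create_certificate_authority(
    CertificateAuthorityType="ROOT",
    CertificateAuthorityConfiguration={
        "KeyAlgorithm": "RSA_2048",
        "SigningAlgorithm": "SHA256WITHRSA",
        "Subject": {"CommonName": "example.corp root CA"},  # placeholder
    },
)
print(response["CertificateAuthorityArn"])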

You can create a two-level CA hierarchy in less than 10 minutes using the ACM Private CA console wizard, which walks you through each step of creating a root or subordinate CA. When you create a subordinate CA, the wizard prompts you to chain the subordinate to a parent CA.
 

Figure 2: The “Install subordinate CA certificate” page

After creating a new root CA, you need to distribute the new root to the trust stores in your servers’ operating systems and browsers. If you want a simple, one-level CA hierarchy for development and testing, you can create a root certificate authority and start issuing private certificates directly from the root CA.

Note: The trade-off of this approach is that you can’t revoke the root CA certificate because the root CA certificate is installed in your trust stores. To effectively “untrust” the root CA in this scenario, you would need to replace the root CA in your trust stores with a new root CA.

Offline versus online root CAs

Some organizations, and all public CAs, keep their root CAs offline (that is, disconnected from the network) in a physical vault. In contrast, most organizations have root CAs that are connected to the network only when they’re used to sign the certificates of CAs lower in the chain. For example, customers might create a root CA with a 20-year lifetime, and disable it under normal circumstances to prevent it from being used except when enabled by a privileged administrator when it’s necessary to sign CA certificates for a child CA. Because using the root CA to issue a certificate is a rare and carefully controlled operation, customers monitor logs, audit reports, and generate alarms notifying them when their root CA is used to issue a certificate. Subordinate issuing CAs are the lowest in the hierarchy. They are typically used for bulk certificate issuance that identify devices and resources. Subordinate issuing CAs typically have shorter lifetimes (1-2 years), and fewer policy controls and monitors.

With ACM Private CA, you can create a trusted root CA with a lifetime of 10 or more years. All CA private keys are protected by FIPS 140-2 level 3 HSMs. You can verify the CA is used only for authorized purposes by reviewing AWS CloudTrail logs and audit reports. You can further protect against mis-issuance by configuring AWS Identity and Access Management (IAM) permissions that limit access to your CA. With an ACM Private CA, you can revoke certificates issued from your CA and use the certificate revocation list (CRL) generated by ACM Private CA to provide revocation information to clients. This simplifies configuration and deployment.

Customer use cases for root CA hierarchy

There are three common use cases for root CA hierarchy.

The most common use case is customers who are advanced PKI users and already have an offline root CA protected by an HSM. However, when it comes to development and network staging, they don’t want to use the same root CA and certificate. The new root CA hierarchy feature allows them to easily stand up a PKI for their test environment that mimics production, but uses a separate root of trust.

The second use case is customers who are interested in using a private CA but don’t have strong knowledge of PKI, nor have they invested in HSMs. These customers have gotten by generating a root CA using OpenSSL. With the offering of root CA hierarchy, they’re now able to stand up a root CA within ACM Private CA that is protected by an HSM and restricted by IAM access policy. This increases the security of their hierarchy and simplifies their deployment.

The third use case is customers who are evaluating an internal PKI and also looking at managing an offline HSM. These customers recognize the significant process, management, cost, and training investments to stand up the full infrastructure required. Customers can remove these costs by managing their organization’s entire private certificate infrastructure in AWS.

How to get started

With ACM Private CA root CA hierarchy feature, you can create a PKI hierarchy and begin issuing private certificates for identity and securing TLS communication. To get started, open the ACM Private CA console. To learn more, read getting started with AWS Certificate Manager and getting started in the ACM Private CA user guide.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Josh Rosenthol

Josh is a Product Manager who helps solve customer problems with public and private certificate and CAs from AWS. He enjoys listening to customers describe their use cases and translate them into improvements to AWS Certificate Manager and ACM Private CA.

Author

Todd Cignetti

Todd Cignetti is a Principal Product Manager at Amazon Web Services. He is responsible for AWS Certificate Manager (ACM) and ACM Private CA. He focuses on helping AWS customers identify and secure their resources and endpoints with public and private certificates.

Maintaining Transport Layer Security all the way to your container part 2: Using AWS Certificate Manager Private Certificate Authority

Post Syndicated from Nathan Taber original https://aws.amazon.com/blogs/compute/maintaining-transport-layer-security-all-the-way-to-your-container-part-2-using-aws-certificate-manager-private-certificate-authority/

This post contributed by AWS Senior Cloud Infrastructure Architect Anabell St Vincent and AWS Solutions Architect Alex Kimber.

The previous post, Maintaining Transport Layer Security All the Way to Your Container, covered how the layer 4 Network Load Balancer can be used to maintain Transport Layer Security (TLS) all the way from the client to running containers.

In this post, we discuss the various options available for ensuring that certificates can be securely and reliably made available to containers. By simplifying the process of distributing or generating certificates and other secrets, it’s easier for you to build inherently secure architectures without compromising scalability.

There are several ways to achieve this:

1. Storing the certificate and private key in the Docker image

Certificates and keys can be included in the Docker image and made available to the container at runtime. This approach makes the deployment of containers with certificates and keys simple and easy.

However, there are some drawbacks. First, the certificates and keys need to be created, stored securely, and then included in the Docker image. There are some manual or additional automation steps required to securely create, retrieve, and include them for every new revision of the Docker image.

The following example Docker file creates an NGINX container that has the certificate and the key included:

FROM nginx:alpine

# Copy in secret materials
RUN mkdir -p /root/certs/nginxdemotls.com
COPY nginxdemotls.com.key /root/certs/nginxdemotls.com/nginxdemotls.com.key
COPY nginxdemotls.com.crt /root/certs/nginxdemotls.com/nginxdemotls.com.crt
RUN chmod 400 /root/certs/nginxdemotls.com/nginxdemotls.com.key

# Copy in nginx configuration files
COPY nginx.conf /etc/nginx/nginx.conf
COPY nginxdemo.conf /etc/nginx/conf.d
COPY nginxdemotls.conf /etc/nginx/conf.d

# Create folders to hold web content and copy in HTML files.
RUN mkdir -p /var/www/nginxdemo.com
RUN mkdir -p /var/www/nginxdemotls.com

COPY index.html /var/www/nginxdemo.com/index.html
COPY indextls.html /var/www/nginxdemotls.com/index.html

From a security perspective, this approach has additional drawbacks. Because certificates and private keys are bundled with the Docker images, anyone with access to a Docker image can also retrieve the certificate and private key. Another drawback is that updated certificates are not picked up automatically: the Docker image must be re-created to include any updated certificates, and running containers must either be restarted with the new image or have their certificates updated in place.

2. Storing the certificates in AWS Systems Manager Parameter Store and Amazon S3

The post Managing Secrets for Amazon ECS Applications Using Parameter Store and IAM Roles for Tasks explains how you can use Systems Manager Parameter Store to store secrets. Some customers use Parameter Store to keep their secrets for simpler retrieval, as well as fine-grained access control. Parameter Store allows for securing data using AWS Key Management Service (AWS KMS) for the encryption. Each encryption key created in KMS can be accessed and controlled using AWS Identity and Access Management (IAM) roles in addition to key policy functionality within KMS. This approach allows for resource-level permissions to each item that is stored in Parameter Store, based on the KMS key used for the encryption.

Some certificates can be stored in Parameter Store using the ‘Secure String’ type and using KMS for encryption. With this approach, you can make an API call to retrieve the certificate when the container is deployed. As mentioned earlier, the access to the certificate can be based on the role used to retrieve the certificate. The advantage of this approach is that the certificate can be replaced. The next time the container is deployed, it picks up the new certificate and there is no need to update the Docker image.
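
For illustration, a container entrypoint might retrieve the materials at startup with boto3, as in this sketch; the parameter names and file paths are placeholders.

import boto3

ssm = boto3.client("ssm")

def get_secure_parameter(name: str) -> str:
    # WithDecryption asks Parameter Store to decrypt the SecureString
    # with its associated KMS key before returning the value.
    return ssm.get_parameter(Name=name, WithDecryption=True)["Parameter"]["Value"]

cert = get_secure_parameter("/tls/nginxdemotls.com/cert")
key = get_secure_parameter("/tls/nginxdemotls.com/key")

with open("/root/certs/nginxdemotls.com/nginxdemotls.com.crt", "w") as f:
    f.write(cert)
with open("/root/certs/nginxdemotls.com/nginxdemotls.com.key", "w") as f:
    f.write(key)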

Currently, there is a limitation of 4,096 characters that can be stored in Parameter Store. This may not be sufficient for some types of certificates. For example, some x509 certificates include the chain and so can exceed the 4,096-character limit. To avoid any character size limitation, Amazon S3 can be used together with Parameter Store: the certificate is stored in Amazon S3, encrypted with KMS, while the private key or password is stored in Parameter Store.

With this approach, there is no limitation on certificate length and the private key remains secured with KMS. However, it does involve some additional complexity in setting up the process of creating the certificates, storing them in S3, and then storing the password or private keys in Parameter Store. That is in addition to securing, trusting, and auditing the system handling the private keys and certificates.

3. Storing the certificates in AWS Secrets Manager

AWS Secrets Manager offers a number of features to allow you to store and manage credentials, keys, and other secret materials. This eliminates the need to store these materials with the application code and instead allows them to be referenced on demand. By centralizing the management of secret materials, this single service can manage fine-grained access control through granular IAM policies as well as the revocation and rotation, all through API calls.

All materials stored in AWS Secrets Manager are encrypted with the customer’s choice of KMS key. The post AWS Secrets Manager: Store, Distribute, and Rotate Credentials Securely shows how AWS Secrets Manager can be used to store RDS database credentials, but the same process applies to TLS certificates and keys.

Secrets currently have a limit of 4,096 characters. This approach may be unsuitable for some x509 certificates that include the chain and can exceed this limit. This limit applies to the sum of all key-value pairs within a single secret, so certificates and keys may need to be stored in separate secrets.

After the secure material is in place, it can be retrieved by the container instance at runtime via the AWS Command Line Interface (AWS CLI) or directly from within the application code. All that’s required is for the container task role to have the requisite permissions in IAM to read the secrets.
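For example, a minimal boto3 sketch might look like the following; the secret names are hypothetical, and the certificate and key are split across two secrets to stay under the size limit mentioned above:

import boto3

secrets = boto3.client("secretsmanager")

def fetch_secret(secret_id):
    """Return the plaintext value of a secret."""
    return secrets.get_secret_value(SecretId=secret_id)["SecretString"]

cert_pem = fetch_secret("demo/tls/certificate")
key_pem = fetch_secret("demo/tls/private-key")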

With the exception of rotating RDS credentials, AWS Secrets Manager requires the user to provide Lambda function code, which is called on a configurable schedule to manage the rotation. For TLS material, this rotation would need to handle generating new keys and certificates and redeploying the containers.

4. Using self-signed certificates, generated as the Docker container is created

The advantage of this approach is that it allows the use of TLS communications without any of the complexity of distributing certificates or private keys. However, it requires clients to implicitly trust the server, and some applications may generate warnings that there is no acceptable root of trust.
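As a sketch of what generation at startup could look like, the following uses the Python ‘cryptography’ package (an assumption; calling the openssl CLI from an entrypoint script is equally common). The subject name, validity period, and output paths are placeholders:

import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "nginxdemotls.com")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: issuer and subject are the same
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=30))
    .sign(key, hashes.SHA256())
)

with open("/tmp/server.crt", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
with open("/tmp/server.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))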

5. Building and managing a private certificate authority

A private certificate authority (CA) can offer greater security and flexibility than the solutions outlined earlier. Typically, a private CA solution would manage the following for each ‘Common name’:

  • A private key
  • A certificate, created with the private key
  • Lists of certificates issued and those that have been revoked
  • Policies for managing certificates, for example which services have the right to make a request for a new certificate
  • Audit logs to track the lifecycle of certificates, in particular to ensure timely renewal where necessary

It is possible for an organization to build and maintain its own certificate-issuing platform. This approach requires implementing a platform that is highly available and secure, which adds to the overall overhead of maintaining infrastructure from a security, availability, scalability, and maintenance perspective. Some customers have also implemented Lambda functions to achieve the same functionality when it comes to issuing private certificates.

While it’s possible to build a private CA for internal services, there are some challenges to be aware of. Any solution should provide a number of features that are key to ensuring appropriate management of the certificates throughout their lifecycle.

For instance, the solution must support the creation, tracking, distribution, renewal, and revocation of certificates. All of these operations must be provided with the requisite security and authentication controls to ensure that certificates are distributed appropriately.

Scalability is another consideration. As applications become increasingly stateless and elastic, it’s conceivable that certificates may be required for every new container instance or wildcard certificates created to support an environment. Whatever CA solution is implemented must be ready to accommodate such a load while also providing high availability.

These types of approaches have drawbacks from various perspectives:

  • Custom code can be hard to maintain
  • Additional security measures must be implemented
  • Certificate renewal and revocation mechanisms also must be implemented
  • The platform must be maintained and kept up-to-date from a patching perspective while maintaining high availability

6. Using the new ACM Private CA to issue private certificates

ACM Private CA offers a secure, managed infrastructure to support the issuance and revocation of private digital certificates. It supports RSA and ECDSA key types for CA keys used for the creation of new certificates, as well as certificate revocation lists (CRLs) to inform clients when a certificate should no longer be trusted. Currently, ACM Private CA does not offer root CA support.

The following screenshot shows a subordinate certificate that is available for use:

The private key for any private CA that you create with ACM Private CA is created and stored in a FIPS 140-2 Level 3 Hardware Security Module (HSM) managed by AWS. The ACM Private CA is also integrated with AWS CloudTrail, which allows you to record the audit trail of API calls made using the AWS Management Console, AWS CLI, and AWS SDKs.

Setting up ACM Private CA requires a root CA, which is used to sign a certificate signing request (CSR) for the new subordinate CA; the signed certificate is then imported into ACM Private CA. After this is complete, containers within your platform can generate their own key pairs at runtime using OpenSSL, use them to make their own CSRs, and ultimately receive their own certificates.

More specifically, the container would complete the following steps at runtime (a code sketch follows the diagram):

  1. Add OpenSSL to the Docker image (if it is not already included).
  2. Generate a key pair (a cryptographically related private and public key).
  3. Use that private key to make a CSR.
  4. Call the ACM Private CA API or CLI issue-certificate operation, which issues a certificate based on the CSR.
  5. Call the ACM Private CA API or CLI get-certificate operation, which returns an issued certificate.

The following diagram shows these steps:

The authorization to successfully request a certificate is controlled via IAM policies, which can be attached via a role to the Amazon ECS task. Containers require the ‘Allow’ effect for at least the acm-pca:IssueCertificate and acm-pca:GetCertificate actions. The following sample IAM policy is broader, allowing all ACM Private CA actions against a specific CA:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": "acm-pca:*",
            "Resource": "arn:aws:acm-pca:us-east-1:1234567890:certificate-authority/2c4ccba1-215e-418a-a654-aaaaaaaa"
        }
    ]
}

For additional security, it is possible to store the certificate and keys in a temporary volume mounted in memory through the ‘tmpfs’ parameter, as sketched below. With this option enabled, the secure material is never written to the filesystem of the host machine.

Note: This feature is not currently available for containers run on AWS Fargate.
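For EC2-backed tasks, here is a hedged sketch of how the mount might be declared when registering the task definition with boto3; the family name, image, and sizes are illustrative:

import boto3

ecs = boto3.client("ecs")
ecs.register_task_definition(
    family="nginx-tls-demo",
    containerDefinitions=[{
        "name": "nginx",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/nginx-tls:latest",
        "memory": 256,
        "linuxParameters": {
            # An in-memory filesystem: key material never touches the host disk.
            "tmpfs": [{"containerPath": "/etc/nginx/ssl", "size": 16}],
        },
    }],
)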

The task now has the necessary materials and starts up. Clients should be able to establish the trust hierarchy from the server, through ACM Private CA to the root or intermediate CA.

One consideration to be aware of is that ACM Private CA currently has a limit of 50,000 certificates for each CA in each Region. If the requirement is for each short-lived container instance to have a separate certificate, this limit could be reached.

Summary

The approaches outlined in this post describe the available options for ensuring that the generation, storage, and distribution of sensitive material is done efficiently and securely, in a way that supports the ephemeral, automatic scaling capabilities of container-based architectures. ACM Private CA provides a single interface to manage public and now private certificates, as well as seamless integration with other AWS services.

If you have questions or suggestions, please comment below.

AWS Online Tech Talks – May and Early June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-may-and-early-june-2018/

AWS Online Tech Talks – May and Early June 2018  

Join us this month to learn about some of the exciting new services and solution best practices at AWS. We also have our first re:Invent 2018 webinar series, “How to re:Invent”. Sign up now to learn more; we look forward to seeing you.

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

Analytics & Big Data

May 21, 2018 | 11:00 AM – 11:45 AM PT – Integrating Amazon Elasticsearch with your DevOps Tooling – Learn how you can easily integrate Amazon Elasticsearch Service into your DevOps tooling and gain valuable insight from your log data.

May 23, 2018 | 11:00 AM – 11:45 AM PT – Data Warehousing and Data Lake Analytics, Together – Learn how to query data across your data warehouse and data lake without moving data.

May 24, 2018 | 11:00 AM – 11:45 AM PT – Data Transformation Patterns in AWS – Discover how to perform common data transformations on the AWS Data Lake.

Compute

May 29, 2018 | 01:00 PM – 01:45 PM PT – Creating and Managing a WordPress Website with Amazon Lightsail – Learn about Amazon Lightsail and how you can create, run and manage your WordPress websites with Amazon’s simple compute platform.

May 30, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Life Sciences with HPC on AWS – Learn how you can accelerate your Life Sciences research workloads by harnessing the power of high performance computing on AWS.

Containers

May 24, 2018 | 01:00 PM – 01:45 PM PT – Building Microservices with the 12 Factor App Pattern on AWS – Learn best practices for building containerized microservices on AWS, and how traditional software design patterns evolve in the context of containers.

Databases

May 21, 2018 | 01:00 PM – 01:45 PM PT – How to Migrate from Cassandra to Amazon DynamoDB – Get the benefits, best practices and guides on how to migrate your Cassandra databases to Amazon DynamoDB.

May 23, 2018 | 01:00 PM – 01:45 PM PT – 5 Hacks for Optimizing MySQL in the Cloud – Learn how to optimize your MySQL databases for high availability, performance, and disaster resilience using RDS.

DevOps

May 23, 2018 | 09:00 AM – 09:45 AM PT – .NET Serverless Development on AWS – Learn how to build a modern serverless application in .NET Core 2.0.

Enterprise & Hybrid

May 22, 2018 | 11:00 AM – 11:45 AM PT – Hybrid Cloud Customer Use Cases on AWS – Learn how customers are leveraging AWS hybrid cloud capabilities to easily extend their datacenter capacity, deliver new services and applications, and ensure business continuity and disaster recovery.

IoT

May 31, 2018 | 11:00 AM – 11:45 AM PT – Using AWS IoT for Industrial Applications – Discover how you can quickly onboard your fleet of connected devices, keep them secure, and build predictive analytics with AWS IoT.

Machine Learning

May 22, 2018 | 09:00 AM – 09:45 AM PT – Using Apache Spark with Amazon SageMaker – Discover how to use Apache Spark with Amazon SageMaker for training jobs and application integration.

May 24, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS DeepLens – Learn how AWS DeepLens provides a new way for developers to learn machine learning by pairing the physical device with a broad set of tutorials, examples, source code, and integration with familiar AWS services.

Management Tools

May 21, 2018 | 09:00 AM – 09:45 AM PT – Gaining Better Observability of Your VMs with Amazon CloudWatch – Learn how CloudWatch Agent makes it easy for customers like Rackspace to monitor their VMs.

Mobile

May 29, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive on Amazon Pinpoint Segmentation and Endpoint Management – See how segmentation and endpoint management with Amazon Pinpoint can help you target the right audience.

Networking

May 31, 2018 | 09:00 AM – 09:45 AM PT – Making Private Connectivity the New Norm via AWS PrivateLink – See how PrivateLink enables service owners to offer private endpoints to customers outside their company.

Security, Identity, & Compliance

May 30, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Certificate Manager Private Certificate Authority (CA) – Learn how AWS Certificate Manager (ACM) Private Certificate Authority (CA), a managed private CA service, helps you easily and securely manage the lifecycle of your private certificates.

June 1, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Firewall Manager – Centrally configure and manage AWS WAF rules across your accounts and applications.

Serverless

May 22, 2018 | 01:00 PM – 01:45 PM PT – Building API-Driven Microservices with Amazon API Gateway – Learn how to build a secure, scalable API for your application in our tech talk about API-driven microservices.

Storage

May 30, 2018 | 11:00 AM – 11:45 AM PT – Accelerate Productivity by Computing at the Edge – Learn how AWS Snowball Edge support for compute instances helps accelerate data transfers, execute custom applications, and reduce overall storage costs.

June 1, 2018 | 11:00 AM – 11:45 AM PT – Learn to Build a Cloud-Scale Website Powered by Amazon EFS – Technical deep dive where you’ll learn tips and tricks for integrating WordPress, Drupal and Magento with Amazon EFS.

New .BOT gTLD from Amazon

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-bot-gtld-from-amazon/

Today, I’m excited to announce the launch of .BOT, a new generic top-level domain (gTLD) from Amazon. Customers can use .BOT domains to provide an identity and portal for their bots. Fitness bots, Slack bots, e-commerce bots, and more can all benefit from an easy-to-access .BOT domain. The term “bot” was the 4th most registered domain keyword within the .COM TLD in 2016, with more than 6,000 domains registered per month. A .BOT domain allows customers to provide a definitive internet identity for their bots as well as enhancing SEO performance.

At the time of this writing .BOT domains start at $75 each and must be verified and published with a supported tool like: Amazon Lex, Botkit Studio, Dialogflow, Gupshup, Microsoft Bot Framework, or Pandorabots. You can expect support for more tools over time and if your favorite bot framework isn’t supported feel free to contact us here: [email protected].

Below, I’ll walk through the experience of registering and provisioning a domain for my bot, whereml.bot. Then we’ll look at setting up the domain as a hosted zone in Amazon Route 53. Let’s get started.

Registering a .BOT domain

First, I’ll head over to https://amazonregistry.com/bot, type in a new domain, and click the magnifying glass to make sure my domain is available and get taken to the registration wizard.

Next, I have the opportunity to choose how I want to verify my bot. I build all of my bots with Amazon Lex, so I’ll select that in the drop-down and get prompted for instructions specific to AWS. If I had my bot hosted somewhere else, I would need to follow the unique verification instructions for that particular framework.

To verify my Lex bot I need to give the Amazon Registry permissions to invoke the bot and verify its existence. I’ll do this by creating an AWS Identity and Access Management (IAM) cross-account role and providing the AmazonLexReadOnly permissions to that role. This is easily accomplished in the AWS Console. Be sure to provide the account number and external ID shown on the registration page.

Now I’ll add read only permissions to our Amazon Lex bots.

I’ll give my role a fancy name like DotBotCrossAccountVerifyRole and a description so it’s easy to remember why I made it, then I’ll click Create to create the role and be transported to the role summary page.

Finally, I’ll copy the ARN from the created role and save it for my next step.

Here I’ll add all the details of my Amazon Lex bot. If you haven’t made a bot yet, you can follow the tutorial to build a basic bot. I can refer to any alias I’ve deployed, but if I just want to grab the latest published bot I can pass in $LATEST as the alias. Finally, I’ll click Validate and proceed to registering my domain.

Amazon Registry works with a partner, EnCirca, to register our domains, so we’ll select them and optionally grab Site Builder. I know how to sling some HTML and JavaScript together, so I’ll pass on the Site Builder side of things.

After I click Continue, we’re taken to EnCirca’s website to finalize the registration. With any luck, within a few minutes of purchasing and completing the registration, we should receive an email with some good news:

Alright, now that we have a domain name let’s find out how to host things on it.

Using Amazon Route53 with a .BOT domain

Amazon Route 53 is a highly available and scalable DNS service with robust APIs, health checks, service discovery, and many other features. I definitely want to use this to host my new domain. The first thing I’ll do is navigate to the Route53 console and create a hosted zone with the same name as my domain.
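If you prefer to script this step, a minimal boto3 sketch might look like the following; the caller reference just needs to be unique per request:

import time
import boto3

route53 = boto3.client("route53")
zone = route53.create_hosted_zone(
    Name="whereml.bot",
    CallerReference=str(time.time()),  # any unique string works
)

# These name servers must be set as authoritative for the domain at the registrar.
print(zone["DelegationSet"]["NameServers"])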


Great! Now, I need to take the Name Server (NS) records that Route53 created for me and use EnCirca’s portal to add these as the authoritative nameservers on the domain.

Now I just add my records to my hosted zone and I should be able to serve traffic! Way cool, I’ve got my very own .bot domain for @WhereML.

Next Steps

  • I could and should improve the security of my site by creating TLS certificates for clients that access my domain over TLS. Luckily, with AWS Certificate Manager (ACM) this is extremely straightforward, and I can get my subdomains and root domain verified in just a few clicks.
  • I could create an Amazon CloudFront distribution to front an S3 static single-page application that hosts my entire chatbot and invokes Amazon Lex with an Amazon Cognito identity right from the browser.

Randall

AWS Certificate Manager Launches Private Certificate Authority

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/aws-certificate-manager-launches-private-certificate-authority/

Today we’re launching a new feature for AWS Certificate Manager (ACM), Private Certificate Authority (CA). This new service allows ACM to act as a private subordinate CA. Previously, if a customer wanted to use private certificates, they needed specialized infrastructure and security expertise that could be expensive to maintain and operate. ACM Private CA builds on ACM’s existing certificate capabilities to help you easily and securely manage the lifecycle of your private certificates with pay-as-you-go pricing. This enables developers to provision certificates in just a few simple API calls, while administrators have a central CA management console and fine-grained access control through granular IAM policies. ACM Private CA keys are stored securely in AWS managed hardware security modules (HSMs) that adhere to FIPS 140-2 Level 3 security standards. ACM Private CA automatically maintains certificate revocation lists (CRLs) in Amazon Simple Storage Service (S3) and lets administrators generate audit reports of certificate creation with the API or console. This service is packed full of features, so let’s jump in and provision a CA.

Provisioning a Private Certificate Authority (CA)

First, I’ll navigate to the ACM console in my region and select the new Private CAs section in the sidebar. From there I’ll click Get Started to start the CA wizard. For now, I only have the option to provision a subordinate CA, so I’ll select that, use my super secure desktop as the root CA, and click Next. This isn’t what I would do in a production setting, but it will work for testing out our private CA.

Now, I’ll configure the CA with some common details. The most important thing here is the Common Name which I’ll set as secure.internal to represent my internal domain.

Now I need to choose my key algorithm. You should choose the best algorithm for your needs, but know that ACM has a limitation today: it can only manage certificates that chain up to RSA CAs. For now, I’ll go with RSA 2048 bit and click Next.

In this next screen, I’m able to configure my certificate revocation list (CRL). CRLs are essential for notifying clients in the case that a certificate has been compromised before certificate expiration. ACM will maintain the revocation list for me, and I have the option of routing my S3 bucket to a custom domain. In this case, I’ll create a new S3 bucket to store my CRL in and click Next.

Finally, I’ll review all the details to make sure I didn’t make any typos and click Confirm and create.

A few seconds later, I’m greeted with a fancy screen saying I successfully provisioned a certificate authority. Hooray! I’m not done yet though. I still need to activate my CA by creating a certificate signing request (CSR) and signing that with my root CA. I’ll click Get started to begin that process.

Now I’ll copy the CSR or download it to a server or desktop that has access to my root CA (or potentially another subordinate – so long as it chains to a trusted root for my clients).

Now I can use a tool like openssl to sign my cert and generate the certificate chain.


$ openssl ca -config openssl_root.cnf -extensions v3_intermediate_ca -days 3650 -notext -md sha256 -in csr/CSR.pem -out certs/subordinate_cert.pem
Using configuration from openssl_root.cnf
Enter pass phrase for /Users/randhunt/dev/amzn/ca/private/root_private_key.pem:
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
stateOrProvinceName   :ASN.1 12:'Washington'
localityName          :ASN.1 12:'Seattle'
organizationName      :ASN.1 12:'Amazon'
organizationalUnitName:ASN.1 12:'Engineering'
commonName            :ASN.1 12:'secure.internal'
Certificate is to be certified until Mar 31 06:05:30 2028 GMT (3650 days)
Sign the certificate? [y/n]:y


1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated

After that, I’ll copy my subordinate_cert.pem and certificate chain back into the console and click Next.
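If you’d rather script this exchange than use the console, a hedged sketch with boto3 might look like the following; the CA ARN and file paths are placeholders:

import boto3

CA_ARN = "arn:aws:acm-pca:us-east-1:123456789012:certificate-authority/example"
pca = boto3.client("acm-pca")

# Download the CSR that the subordinate CA generated.
csr = pca.get_certificate_authority_csr(CertificateAuthorityArn=CA_ARN)["Csr"]
with open("csr/CSR.pem", "w") as f:
    f.write(csr)

# ...sign the CSR with the root CA (the openssl step above), then import
# the signed certificate and chain to activate the CA.
with open("certs/subordinate_cert.pem", "rb") as cert, \
     open("certs/ca_chain.pem", "rb") as chain:
    pca.import_certificate_authority_certificate(
        CertificateAuthorityArn=CA_ARN,
        Certificate=cert.read(),
        CertificateChain=chain.read(),
    )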

Finally, I’ll review all the information and click Confirm and import. I should see a screen like the one below that shows my CA has been activated successfully.

Now that I have a private CA, I can provision private certificates by hopping back to the ACM console and creating a new certificate. After choosing to create a new certificate, I’ll select the Request a private certificate radio button, and then click Request a certificate.

From there, the process is similar to provisioning a normal certificate in ACM.

Now I have a private certificate that I can bind to my ELBs, CloudFront Distributions, API Gateways, and more. I can also export the certificate for use on embedded devices or outside of ACM managed environments.

Available Now
ACM Private CA is a service in and of itself, and it is packed full of features that won’t fit into a blog post. I strongly encourage interested readers to go through the developer guide and familiarize themselves with certificate-based security. ACM Private CA is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt) and EU (Ireland). Each private CA costs $400 per month (prorated). You are not charged for certificates created and maintained in ACM, but you are charged for certificates where you have access to the private key (exported or created outside of ACM). The pricing per certificate is tiered, starting at $0.75 per certificate for the first 1,000 certificates and going down to $0.001 per certificate after 10,000 certificates.

I’m excited to see administrators and developers take advantage of this new service. As always please let us know what you think of this service on Twitter or in the comments below.

Randall

Preparing for AWS Certificate Manager (ACM) Support of Certificate Transparency

Post Syndicated from Jonathan Kozolchyk original https://aws.amazon.com/blogs/security/how-to-get-ready-for-certificate-transparency/

Update from March 27, 2018: On March 27, 2018, we updated ACM APIs so that you can disable Certificate Transparency logging on a per-certificate basis.


Starting April 30, 2018, Google Chrome will require all publicly trusted certificates issued after this date to be logged in at least two Certificate Transparency logs. This means that any certificate that is issued but not logged will result in an error message in Google Chrome. Beginning April 24, 2018, Amazon will log all new and renewed certificates in at least two public logs unless you disable Certificate Transparency logging.

Without Certificate Transparency, it can be difficult for a domain owner to know if an unexpected certificate was issued for their domain. Under the current system, no record is kept of certificates being issued, and domain owners do not have a reliable way to identify rogue certificates.

To address this situation, Certificate Transparency creates a cryptographically secure log of each certificate issued. Domain owners can search the log to identify unexpected certificates, whether issued by mistake or malice. Domain owners can also identify Certificate Authorities (CAs) that are improperly issuing certificates. In this blog post, I explain more about Certificate Transparency and tell you how to prepare for it.

How does Certificate Transparency work?

When a CA issues a publicly trusted certificate, the CA must submit the certificate to one or more Certificate Transparency log servers. The Certificate Transparency log server responds with a signed certificate timestamp (SCT) that confirms the log server will add the certificate to the list of known certificates. The SCT is then embedded in the certificate and delivered automatically to a browser. The SCT is like a receipt that proves the certificate was published into the Certificate Transparency log. Starting April 30, Google Chrome will require an SCT as proof that the certificate was published to a Certificate Transparency log in order to trust the certificate without displaying an error message.

What is Amazon doing to support Certificate Transparency?

Certificate Transparency is a good practice. It enables AWS customers to be more confident that an unauthorized certificate hasn’t been issued by a CA. Beginning on April 24, 2018, Amazon will log all new and renewed certificates in at least two Certificate Transparency logs unless you disable Certificate Transparency logging.

We recognize that there can be times when our customers do not want to log certificates. For example, if you are building a website for an unreleased product and have registered the subdomain, newproduct.example.com, requesting a logged certificate for your domain will make it publicly known that the new product is coming. Certificate Transparency logging also can expose server hostnames that you want to keep private. Hostnames such as payments.example.com can reveal the purpose of a server and provide attackers with information about your private network. These logs do not contain the private key for your certificate. For these reasons, on March 27, 2018 we updated ACM APIs so that you can disable Certificate Transparency logging on a per-certificate basis using the ACM APIs or with the AWS CLI. Doing so will lead to errors in Google Chrome, which may be preferable to exposing the information.

Please refer to ACM documentation for specifics on how to opt out of Certificate Transparency logging.
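As a hedged sketch, disabling logging at request time with boto3 might look like this; the domain name is a placeholder:

import boto3

acm = boto3.client("acm")
acm.request_certificate(
    DomainName="newproduct.example.com",
    ValidationMethod="DNS",
    # Opt this certificate out of Certificate Transparency logging.
    Options={"CertificateTransparencyLoggingPreference": "DISABLED"},
)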

Conclusion

Beginning April 24, 2018, ACM will begin logging all new and renewed certificates by default. If you don’t want a certificate to be logged, you’ll be able to opt out using the AWS API or CLI. However, for Google Chrome to trust the certificate, all issued or imported certificates must have the SCT information embedded in them by April 30, 2018.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions, start a new thread in the ACM forum.

– Jonathan

Interested in AWS Security news? Follow the AWS Security Blog on Twitter.

Now Open AWS EU (Paris) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-eu-paris-region/

Today we are launching our 18th AWS Region, our fourth in Europe. Located in the Paris area, AWS customers can use this Region to better serve customers in and around France.

The Details
The new EU (Paris) Region provides a broad suite of AWS services including Amazon API Gateway, Amazon Aurora, Amazon CloudFront, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, Amazon DynamoDB, Amazon Elastic Compute Cloud (EC2), EC2 Container Registry, Amazon ECS, Amazon Elastic Block Store (EBS), Amazon EMR, Amazon ElastiCache, Amazon Elasticsearch Service, Amazon Glacier, Amazon Kinesis Streams, Polly, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Route 53, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), Amazon Virtual Private Cloud, Auto Scaling, AWS Certificate Manager (ACM), AWS CloudFormation, AWS CloudTrail, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Direct Connect, AWS Elastic Beanstalk, AWS Identity and Access Management (IAM), AWS Key Management Service (KMS), AWS Lambda, AWS Marketplace, AWS OpsWorks Stacks, AWS Personal Health Dashboard, AWS Server Migration Service, AWS Service Catalog, AWS Shield Standard, AWS Snowball, AWS Snowball Edge, AWS Snowmobile, AWS Storage Gateway, AWS Support (including AWS Trusted Advisor), Elastic Load Balancing, and VM Import.

The Paris Region supports all sizes of C5, M5, R4, T2, D2, I3, and X1 instances.

There are also four edge locations for Amazon Route 53 and Amazon CloudFront: three in Paris and one in Marseille, all with AWS WAF and AWS Shield. Check out the AWS Global Infrastructure page to learn more about current and future AWS Regions.

The Paris Region will benefit from three AWS Direct Connect locations. Telehouse Voltaire is available today. AWS Direct Connect will also become available at Equinix Paris in early 2018, followed by Interxion Paris.

All AWS infrastructure regions around the world are designed, built, and regularly audited to meet the most rigorous compliance standards and to provide high levels of security for all AWS customers. These include ISO 27001, ISO 27017, ISO 27018, SOC 1 (Formerly SAS 70), SOC 2 and SOC 3 Security & Availability, PCI DSS Level 1, and many more. This means customers benefit from all the best practices of AWS policies, architecture, and operational processes built to satisfy the needs of even the most security sensitive customers.

AWS is certified under the EU-US Privacy Shield, and the AWS Data Processing Addendum (DPA) is GDPR-ready and available now to all AWS customers to help them prepare for May 25, 2018 when the GDPR becomes enforceable. The current AWS DPA, as well as the AWS GDPR DPA, allows customers to transfer personal data to countries outside the European Economic Area (EEA) in compliance with European Union (EU) data protection laws. AWS also adheres to the Cloud Infrastructure Service Providers in Europe (CISPE) Code of Conduct. The CISPE Code of Conduct helps customers ensure that AWS is using appropriate data protection standards to protect their data, consistent with the GDPR. In addition, AWS offers a wide range of services and features to help customers meet the requirements of the GDPR, including services for access controls, monitoring, logging, and encryption.

From Our Customers
Many AWS customers are preparing to use this new Region. Here’s a small sample:

Societe Generale, one of the largest banks in France and the world, has accelerated their digital transformation while working with AWS. They developed SG Research, an application that makes reports from Societe Generale’s analysts available to corporate customers in order to improve the decision-making process for investments. The new AWS Region will reduce latency between applications running in the cloud and in their French data centers.

SNCF is the national railway company of France. Their mobile app, powered by AWS, delivers real-time traffic information to 14 million riders. Extreme weather, traffic events, holidays, and engineering works can cause usage to peak at hundreds of thousands of users per second. They are planning to use machine learning and big data to add predictive features to the app.

Radio France, the French public radio broadcaster, offers seven national networks, and uses AWS to accelerate its innovation and stay competitive.

Les Restos du Coeur is a French charity that provides assistance to the needy, delivering food packages and helping with their social and economic integration back into French society. Les Restos du Coeur is using AWS for its CRM system to track the assistance given to each of their beneficiaries and the impact this is having on their lives.

AlloResto by JustEat (a leader in the French FoodTech industry) is using AWS to scale during traffic peaks and to accelerate their innovation process.

AWS Consulting and Technology Partners
We are already working with a wide variety of consulting, technology, managed service, and Direct Connect partners in France. Here’s a partial list:

AWS Premier Consulting Partners – Accenture, Capgemini, Claranet, CloudReach, DXC, and Edifixio.

AWS Consulting Partners – ABC Systemes, Atos International SAS, CoreExpert, Cycloid, Devoteam, LINKBYNET, Oxalide, Ozones, Scaleo Information Systems, and Sopra Steria.

AWS Technology Partners – Axway, Commerce Guys, MicroStrategy, Sage, Software AG, Splunk, Tibco, and Zerolight.

AWS in France
We have been investing in Europe, with a focus on France, for the last 11 years. We have also been developing documentation and training programs to help our customers to improve their skills and to accelerate their journey to the AWS Cloud.

As part of our commitment to AWS customers in France, we plan to train more than 25,000 people in the coming years, helping them develop highly sought after cloud skills. They will have access to AWS training resources in France via AWS Academy, AWSome days, AWS Educate, and webinars, all delivered in French by AWS Technical Trainers and AWS Certified Trainers.

Use it Today
The EU (Paris) Region is open for business now and you can start using it today!

Jeff;

Easier Certificate Validation Using DNS with AWS Certificate Manager

Post Syndicated from Todd Cignetti original https://aws.amazon.com/blogs/security/easier-certificate-validation-using-dns-with-aws-certificate-manager/

Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates are used to secure network communications and establish the identity of websites over the internet. Before issuing a certificate for your website, Amazon must validate that you control the domain name for your site. You can now use AWS Certificate Manager (ACM) Domain Name System (DNS) validation to establish that you control a domain name when requesting SSL/TLS certificates with ACM. Previously ACM supported only email validation, which required the domain owner to receive an email for each certificate request and validate the information in the request before approving it.

With DNS validation, you write a CNAME record to your DNS configuration to establish control of your domain name. After you have configured the CNAME record, ACM can automatically renew DNS-validated certificates before they expire, as long as the DNS record has not changed. To make it even easier to validate your domain, ACM can update your DNS configuration for you if you manage your DNS records with Amazon Route 53. In this blog post, I demonstrate how to request a certificate for a website by using DNS validation. To perform the equivalent steps using the AWS CLI or AWS APIs and SDKs, see AWS Certificate Manager in the AWS CLI Reference and the ACM API Reference.
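As a hedged sketch, the same flow with boto3 might look like the following: request a DNS-validated certificate, then read back the CNAME record to add to your DNS configuration. The domain name is a placeholder, and the validation record can take a few moments to appear after the request:

import boto3

acm = boto3.client("acm")

arn = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
)["CertificateArn"]

# The CNAME record to write to your DNS configuration.
detail = acm.describe_certificate(CertificateArn=arn)["Certificate"]
record = detail["DomainValidationOptions"][0]["ResourceRecord"]
print(record["Name"], record["Type"], record["Value"])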

Requesting an SSL/TLS certificate by using DNS validation

In this section, I walk you through the four steps required to obtain an SSL/TLS certificate through ACM to identify your site over the internet. SSL/TLS provides encryption for sensitive data in transit and authentication by using certificates to establish the identity of your site and secure connections between browsers and applications and your site. DNS validation and SSL/TLS certificates provisioned through ACM are free.

Step 1: Request a certificate

To get started, sign in to the AWS Management Console and navigate to the ACM console. Choose Get started to request a certificate.

Screenshot of getting started in the ACM console

If you previously managed certificates in ACM, you will instead see a table with your certificates and a button to request a new certificate. Choose Request a certificate to request a new certificate.

Screenshot of choosing "Request a certificate"

Type the name of your domain in the Domain name box and choose Next. In this example, I type www.example.com. You must use a domain name that you control. Requesting certificates for domains that you don’t control violates the AWS Service Terms.

Screenshot of entering a domain name

Step 2: Select a validation method

With DNS validation, you write a CNAME record to your DNS configuration to establish control of your domain name. Choose DNS validation, and then choose Review.

Screenshot of selecting validation method

Step 3: Review your request

Review your request and choose Confirm and request to request the certificate.

Screenshot of reviewing request and confirming it

Step 4: Submit your request

After a brief delay while ACM populates your domain validation information, choose the down arrow (highlighted in the following screenshot) to display all the validation information for your domain.

Screenshot of validation information

ACM displays the CNAME record you must add to your DNS configuration to validate that you control the domain name in your certificate request. If you use a DNS provider other than Route 53 or if you use a different AWS account to manage DNS records in Route 53, copy the DNS CNAME information from the validation information, or export it to a file (choose Export DNS configuration to a file) and write it to your DNS configuration. For information about how to add or modify DNS records, check with your DNS provider. For more information about using DNS with Route 53 DNS, see the Route 53 documentation.

If you manage DNS records for your domain with Route 53 in the same AWS account, choose Create record in Route 53 to have ACM update your DNS configuration for you.

After updating your DNS configuration, choose Continue to return to the ACM table view.

ACM then displays a table that includes all your certificates. The certificate you requested is displayed so that you can see the status of your request. After you write the DNS record or have ACM write the record for you, it typically takes DNS 30 minutes to propagate the record, and it might take several hours for Amazon to validate it and issue the certificate. During this time, ACM shows the Validation status as Pending validation. After ACM validates the domain name, ACM updates the Validation status to Success. After the certificate is issued, the certificate status is updated to Issued. If ACM cannot validate your DNS record and issue the certificate after 72 hours, the request times out, and ACM displays a Timed out validation status. To recover, you must make a new request. Refer to the Troubleshooting Section of the ACM User Guide for instructions about troubleshooting validation or issuance failures.

Screenshot of a certificate issued and validation successful

You now have an ACM certificate that you can use to secure your application or website. For information about how to deploy certificates with other AWS services, see the documentation for Amazon CloudFront, Amazon API Gateway, Application Load Balancers, and Classic Load Balancers. Note that your certificate must be in the US East (N. Virginia) Region to use the certificate with CloudFront.

ACM automatically renews certificates that are deployed and in use with other AWS services as long as the CNAME record remains in your DNS configuration. To learn more about ACM DNS validation, see the ACM FAQs and the ACM documentation.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about this blog post, start a new thread on the ACM forum or contact AWS Support.

– Todd

The 10 Most Viewed Security-Related AWS Knowledge Center Articles and Videos for November 2017

Post Syndicated from Maggie Burke original https://aws.amazon.com/blogs/security/the-10-most-viewed-security-related-aws-knowledge-center-articles-and-videos-for-november-2017/

AWS Knowledge Center image

The AWS Knowledge Center helps answer the questions most frequently asked by AWS Support customers. The following 10 Knowledge Center security articles and videos have been the most viewed this month. It’s likely you’ve wondered about a few of these topics yourself, so here’s a chance to learn the answers!

  1. How do I create an AWS Identity and Access Management (IAM) policy to restrict access for an IAM user, group, or role to a particular Amazon Virtual Private Cloud (VPC)?
    Learn how to apply a custom IAM policy to restrict IAM user, group, or role permissions for creating and managing Amazon EC2 instances in a specified VPC.
  2. How do I use an MFA token to authenticate access to my AWS resources through the AWS CLI?
    One IAM best practice is to protect your account and its resources by using a multi-factor authentication (MFA) device. If you plan to use the AWS Command Line Interface (CLI) while using an MFA device, you must create a temporary session token.
  3. Can I restrict an IAM user’s EC2 access to specific resources?
    This article demonstrates how to link multiple AWS accounts through AWS Organizations and isolate IAM user groups in their own accounts.
  4. I didn’t receive a validation email for the SSL certificate I requested through AWS Certificate Manager (ACM)—where is it?
    Can’t find your ACM validation emails? Be sure to check the email address to which you requested that ACM send validation emails.
  5. How do I create an IAM policy that has a source IP restriction but still allows users to switch roles in the AWS Management Console?
    Learn how to write an IAM policy that not only includes a source IP restriction but also lets your users switch roles in the console.
  6. How do I allow users from another account to access resources in my account through IAM?
    If you have the 12-digit account number and permissions to create and edit IAM roles and users for both accounts, you can permit specific IAM users to access resources in your account.
  7. What are the differences between a service control policy (SCP) and an IAM policy?
    Learn how to distinguish an SCP from an IAM policy.
  8. How do I share my customer master keys (CMKs) across multiple AWS accounts?
    To grant another account access to your CMKs, create an IAM policy on the secondary account that grants access to use your CMKs.
  9. How do I set up AWS Trusted Advisor notifications?
    Learn how to receive free weekly email notifications from Trusted Advisor.
  10. How do I use AWS Key Management Service (AWS KMS) encryption context to protect the integrity of encrypted data?
    Encryption context name-value pairs used with AWS KMS encryption and decryption operations provide a method for checking ciphertext authenticity. Learn how to use encryption context to help protect your encrypted data.

The AWS Security Blog will publish an updated version of this list regularly going forward. You also can subscribe to the AWS Knowledge Center Videos playlist on YouTube.

– Maggie

Building a Multi-region Serverless Application with Amazon API Gateway and AWS Lambda

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/building-a-multi-region-serverless-application-with-amazon-api-gateway-and-aws-lambda/

This post was written by Magnus Bjorkman, Solutions Architect.

Many customers are looking to run their services at global scale, deploying their backend to multiple regions. In this post, we describe how to deploy a Serverless API into multiple regions and how to leverage Amazon Route 53 to route the traffic between regions. We use latency-based routing and health checks to achieve an active-active setup that can fail over between regions in case of an issue. We leverage the new regional API endpoint feature in Amazon API Gateway to make this a seamless process for the API client making the requests. This post does not cover the replication of your data, which is another aspect to consider when deploying applications across regions.

Solution overview

Currently, the default API endpoint type in API Gateway is the edge-optimized API endpoint, which enables clients to access an API through an Amazon CloudFront distribution. This typically improves connection time for geographically diverse clients. By default, a custom domain name is globally unique, and with a Lambda integration the edge-optimized API endpoint invokes a Lambda function in a single region. You can’t use this type of endpoint with a Route 53 active-active setup and fail-over.

The new regional API endpoint in API Gateway moves the API endpoint into the region, and the custom domain name is unique per region. This makes it possible to run a full copy of an API in each region and then use Route 53 to provide an active-active setup with failover. The following diagram shows how you do this:

Active/active multi region architecture

  • Deploy your Rest API stack, consisting of API Gateway and Lambda, in two regions, such as us-east-1 and us-west-2.
  • Choose the regional API endpoint type for your API.
  • Create a custom domain name and choose the regional API endpoint type for that one as well. In both regions, you are configuring the custom domain name to be the same, for example, helloworldapi.replacewithyourcompanyname.com
  • Use the host name of the custom domain names from each region, for example, xxxxxx.execute-api.us-east-1.amazonaws.com and xxxxxx.execute-api.us-west-2.amazonaws.com, to configure record sets in Route 53 for your client-facing domain name, for example, helloworldapi.replacewithyourcompanyname.com

The above solution provides an active-active setup for your API across the two regions, but you are not doing failover yet. For that to work, set up a health check in Route 53:

Route 53 Health Check

A Route 53 health check must have an endpoint to call to check the health of a service. You could do a simple ping of your actual Rest API methods, but it’s better to provide a specific method on your Rest API that does a deep ping: a Lambda function that checks the status of all the dependencies.

In the case of the Hello World API, you don’t have any other dependencies. In a real-world scenario, you could check on dependencies such as databases, other APIs, and external services, as the sketch after this paragraph illustrates. Route 53 health checks cannot use your custom domain name endpoint’s DNS address, so you are going to call the API endpoints directly via their region-unique DNS addresses.
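As an illustration of a deep ping with a real dependency, the following sketch checks a hypothetical DynamoDB table before reporting ok; any cheap, read-only call against each dependency would serve the same purpose:

import boto3

dynamodb = boto3.client("dynamodb")

def lambda_handler(event, context):
    """Deep ping: verify downstream dependencies before reporting healthy."""
    try:
        dynamodb.describe_table(TableName="demo-orders")
        return {"status": "ok"}
    except Exception:
        return {"status": "fail"}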

Walkthrough

The following sections describe how to set up this solution. You can find the complete solution at the blog-multi-region-serverless-service GitHub repo. Clone or download the repository locally to be able to do the setup as described.

Prerequisites

You need the following resources to set up the solution described in this post:

  • AWS CLI
  • An S3 bucket in each region in which to deploy the solution, which can be used by the AWS Serverless Application Model (SAM). You can use the following CloudFormation templates to create buckets in us-east-1 and us-west-2:
    • us-east-1:
    • us-west-2:
  • A hosted zone registered in Amazon Route 53. This is used for defining the domain name of your API endpoint, for example, helloworldapi.replacewithyourcompanyname.com. You can use a third-party domain name registrar and then configure the DNS in Amazon Route 53, or you can purchase a domain directly from Amazon Route 53.

Deploy API with health checks in two regions

Start by creating a small “Hello World” Lambda function that sends back a message in the region in which it has been deployed.


"""Return message."""
import logging

logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    """Lambda handler for getting the hello world message."""

    region = context.invoked_function_arn.split(':')[3]

    logger.info("message: " + "Hello from " + region)
    
    return {
		"message": "Hello from " + region
    }

Also create a Lambda function for doing a health check that returns a value based on an environment variable (either “ok” or “fail”) to allow for ease of testing:


"""Return health."""
import logging
import os

logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    """Lambda handler for getting the health."""

    logger.info("status: " + os.environ['STATUS'])
    
    return {
		"status": os.environ['STATUS']
    }

Deploy both of these using an AWS Serverless Application Model (SAM) template. SAM is a CloudFormation extension that is optimized for serverless, and provides a standard way to create a complete serverless application. You can find the full helloworld-sam.yaml template in the blog-multi-region-serverless-service GitHub repo.

A few things to highlight:

  • You are using inline Swagger to define your API so you can substitute the current region in the x-amazon-apigateway-integration section.
  • Most of the Swagger template covers CORS to allow you to test this from a browser.
  • You are also using substitution to populate the environment variable used by the “Hello World” method with the region into which it is being deployed.

The Swagger allows you to use the same SAM template in both regions.

You can only use SAM from the AWS CLI, so do the following from the command prompt. First, deploy the SAM template in us-east-1 with the following commands, replacing “<your bucket in us-east-1>” with a bucket in your account:


> cd helloworld-api
> aws cloudformation package --template-file helloworld-sam.yaml --output-template-file /tmp/cf-helloworld-sam.yaml --s3-bucket <your bucket in us-east-1> --region us-east-1
> aws cloudformation deploy --template-file /tmp/cf-helloworld-sam.yaml --stack-name multiregionhelloworld --capabilities CAPABILITY_IAM --region us-east-1

Second, do the same in us-west-2:


> aws cloudformation package --template-file helloworld-sam.yaml --output-template-file /tmp/cf-helloworld-sam.yaml --s3-bucket <your bucket in us-west-2> --region us-west-2
> aws cloudformation deploy --template-file /tmp/cf-helloworld-sam.yaml --stack-name multiregionhelloworld --capabilities CAPABILITY_IAM --region us-west-2

The API was created with the default endpoint type of Edge Optimized. Switch it to Regional. In the Amazon API Gateway console, select the API that you just created and choose the wheel-icon to edit it.

API Gateway edit API settings

In the edit screen, select the Regional endpoint type and save the API. Do the same in both regions.

Grab the URL for the API in the console by navigating to the method in the prod stage.

API Gateway endpoint link

You can now test this with curl:


> curl https://2wkt1cxxxx.execute-api.us-west-2.amazonaws.com/prod/helloworld
{"message": "Hello from us-west-2"}

Write down the domain name for the URL in each region (for example, 2wkt1cxxxx.execute-api.us-west-2.amazonaws.com), as you need that later when you deploy the Route 53 setup.

Create the custom domain name

Next, create an Amazon API Gateway custom domain name endpoint. As part of using this feature, you must have a hosted zone and domain available to use in Route 53 as well as an SSL certificate that you use with your specific domain name.

You can create the SSL certificate by using AWS Certificate Manager. In the ACM console, choose Get started (if you have no existing certificates) or Request a certificate. Fill out the form with the domain name to use for the custom domain name endpoint, which is the same across the two regions:

Amazon Certificate Manager request new certificate

Go through the remaining steps and validate the certificate for each region before moving on.

You are now ready to create the endpoints. In the Amazon API Gateway console, choose Custom Domain Names, Create Custom Domain Name.

API Gateway create custom domain name

A few things to highlight:

  • The domain name is the same as what you requested earlier through ACM.
  • The endpoint configuration should be regional.
  • Select the ACM Certificate that you created earlier.
  • You need to create a base path mapping that connects back to your earlier API Gateway endpoint. Set the base path to v1 so you can version your API, and then select the API and the prod stage.

Choose Save. You should see your newly created custom domain name:

API Gateway custom domain setup

Note the value for Target Domain Name as you need that for the next step. Do this for both regions.

Deploy Route 53 setup

Use the global Route 53 service to provide DNS lookup for the Rest API, distributing the traffic in an active-active setup based on latency. You can find the full CloudFormation template in the blog-multi-region-serverless-service GitHub repo.

The template sets up health checks, for example, for us-east-1:


HealthcheckRegion1:
  Type: "AWS::Route53::HealthCheck"
  Properties:
    HealthCheckConfig:
      Port: "443"
      Type: "HTTPS_STR_MATCH"
      SearchString: "ok"
      ResourcePath: "/prod/healthcheck"
      FullyQualifiedDomainName: !Ref Region1HealthEndpoint
      RequestInterval: "30"
      FailureThreshold: "2"

Use the health check when you set up the record set and the latency routing, for example, for us-east-1:


Region1EndpointRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    Region: us-east-1
    HealthCheckId: !Ref HealthcheckRegion1
    SetIdentifier: "endpoint-region1"
    HostedZoneId: !Ref HostedZoneId
    Name: !Ref MultiregionEndpoint
    Type: CNAME
    TTL: 60
    ResourceRecords:
      - !Ref Region1Endpoint

You can create the stack by using the following link, copying in the domain names from the previous section, your existing hosted zone name, and the main domain name that is created (for example, helloworldapi.replacewithyourcompanyname.com):

The following screenshot shows what the parameters might look like:
Serverless multi region Route 53 health check

Specifically, the domain names that you collected earlier would map according to following:

  • The domain names from the API Gateway “prod”-stage go into Region1HealthEndpoint and Region2HealthEndpoint.
  • The domain names from the custom domain name’s target domain name goes into Region1Endpoint and Region2Endpoint.

Using the Rest API from server-side applications

You are now ready to use your setup. First, demonstrate the use of the API from server-side clients by using curl from the command line:


> curl https://helloworldapi.replacewithyourcompanyname.com/v1/helloworld/
{"message": "Hello from us-east-1"}

Testing failover of Rest API in browser

Here’s how you can use this from the browser and test the failover. Find all of the files for this test in the browser-client folder of the blog-multi-region-serverless-service GitHub repo.

Use this HTML file:


<!DOCTYPE HTML>
<html>
<head>
    <meta charset="utf-8"/>
    <meta http-equiv="X-UA-Compatible" content="IE=edge"/>
    <meta name="viewport" content="width=device-width, initial-scale=1"/>
    <title>Multi-Region Client</title>
</head>
<body>
<div>
   <h1>Test Client</h1>

    <p id="client_result">

    </p>

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
    <script src="settings.js"></script>
    <script src="client.js"></script>
</body>
</html>

The HTML file uses this JavaScript file to repeatedly call the API and print the history of messages:


var messageHistory = "";

(function call_service() {

   $.ajax({
      url: helloworldMultiregionendpoint+'v1/helloworld/',
      dataType: "json",
      cache: false,
      success: function(data) {
         messageHistory+="<p>"+data['message']+"</p>";
         $('#client_result').html(messageHistory);
      },
      complete: function() {
         // Schedule the next request when the current one's complete
         setTimeout(call_service, 10000);
      },
      error: function(xhr, status, error) {
         $('#client_result').html('ERROR: '+status);
      }
   });

})();

Also, make sure to update the settings in settings.js to match the API Gateway endpoints for the DNS-proxy and the multi-regional endpoint for the Hello World API: var helloworldMultiregionendpoint = "https://helloworldapi.replacewithyourcompanyname.com/";

You can now open the HTML file in the browser (you can do this directly from the file system) and you should see something like the following screenshot:

Serverless multi region browser test

You can test failover by changing the environment variable in your health check Lambda function. In the Lambda console, select your health check function and scroll down to the Environment variables section. For the STATUS key, modify the value to fail.

Lambda update environment variable
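
You can make the same change from the AWS CLI (the function name here is a placeholder; note that update-function-configuration replaces the function's entire environment, so include any other variables your function needs):

aws lambda update-function-configuration \
    --function-name <your-healthcheck-function> \
    --environment "Variables={STATUS=fail}"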

You should see the region switch in the test client:

Serverless multi region broker test switchover

During an emulated failure like this, the browser might take some additional time to switch over due to connection keep-alive functionality. If you are using a browser like Chrome, you can kill all the connections to see a more immediate failover: chrome://net-internals/#sockets

Summary

You have implemented a simple way to run multi-regional serverless applications that fail over seamlessly between regions, whether accessed from the browser or from other applications and services. You achieved this by using the capabilities of Amazon Route 53 to do latency-based routing and health checks for failover. You unlocked the use of these features in a serverless application by leveraging the new regional endpoint feature of Amazon API Gateway.

The setup was fully scripted using CloudFormation, the AWS Serverless Application Model (SAM), and the AWS CLI, and it can be integrated into deployment tools to push the code across the regions to make sure it is available in all the needed regions. For more information about cross-region deployments, see Building a Cross-Region/Cross-Account Code Deployment Solution on AWS on the AWS DevOps blog.

How to Prepare for AWS’s Move to Its Own Certificate Authority

Post Syndicated from Jonathan Kozolchyk original https://aws.amazon.com/blogs/security/how-to-prepare-for-aws-move-to-its-own-certificate-authority/

AWS Certificate Manager image


Update from March 28, 2018: We updated the Amazon Trust Services table by replacing an out-of-date value with a new value.


Transport Layer Security (TLS, formerly called Secure Sockets Layer [SSL]) is essential for encrypting information that is exchanged on the internet. For example, Amazon.com uses TLS for all traffic on its website, and AWS uses it to secure calls to AWS services.

An electronic document called a certificate verifies the identity of the server when creating such an encrypted connection. The certificate helps establish proof that your web browser is communicating securely with the website that you typed in your browser’s address field. Certificate Authorities, also known as CAs, issue certificates to specific domains. When a domain presents a certificate that is issued by a trusted CA, your browser or application knows it’s safe to make the connection.

In January 2016, AWS launched AWS Certificate Manager (ACM), a service that lets you easily provision, manage, and deploy SSL/TLS certificates for use with AWS services. These certificates are available for no additional charge through Amazon’s own CA: Amazon Trust Services. For browsers and other applications to trust a certificate, the certificate’s issuer must be included in the browser’s trust store, which is a list of trusted CAs. If the issuing CA is not in the trust store, the browser will display an error message (see an example) and applications will show an application-specific error. To ensure the ubiquity of the Amazon Trust Services CA, AWS purchased the Starfield Services CA, a root CA that is found in most browsers and has been valid since 2005. This means you shouldn’t have to take any action to use the certificates issued by Amazon Trust Services.

AWS has been offering free certificates to AWS customers from the Amazon Trust Services CA. Now, AWS is in the process of moving certificates for services such as Amazon EC2 and Amazon DynamoDB to use certificates from Amazon Trust Services as well. Most software doesn’t need to be changed to handle this transition, but there are exceptions. In this blog post, I show you how to verify that you are prepared to use the Amazon Trust Services CA.

How to tell if the Amazon Trust Services CAs are in your trust store

The following table lists the Amazon Trust Services certificates. To verify that these certificates are in your browser’s trust store, click each Test URL in the following table to verify that it works for you. When a Test URL does not work, it displays an error similar to this example.

Distinguished name | SHA-256 hash of subject public key information | Test URL
CN=Amazon Root CA 1,O=Amazon,C=US | fbe3018031f9586bcbf41727e417b7d1c45c2f47f93be372a17b96b50757d5a2 | Test URL
CN=Amazon Root CA 2,O=Amazon,C=US | 7f4296fc5b6a4e3b35d3c369623e364ab1af381d8fa7121533c9d6c633ea2461 | Test URL
CN=Amazon Root CA 3,O=Amazon,C=US | 36abc32656acfc645c61b71613c4bf21c787f5cabbee48348d58597803d7abc9 | Test URL
CN=Amazon Root CA 4,O=Amazon,C=US | f7ecded5c66047d28ed6466b543c40e0743abe81d109254dcf845d4c2c7853c5 | Test URL
CN=Starfield Services Root Certificate Authority – G2,O=Starfield Technologies\, Inc.,L=Scottsdale,ST=Arizona,C=US | 2b071c59a0a0ae76b0eadb2bad23bad4580b69c3601b630c2eaf0613afa83f92 | Test URL
Starfield Class 2 Certification Authority | 15f14ac45c9c7da233d3479164e8137fe35ee0f38ae858183f08410ea82ac4b4 | Not available*

* Note: Amazon doesn’t own this root and doesn’t have a test URL for it. The certificate can be downloaded from here.

You can calculate the SHA-256 hash of Subject Public Key Information as follows. With the PEM-encoded certificate stored in certificate.pem, run the following openssl commands:

openssl x509 -in certificate.pem -noout -pubkey | openssl asn1parse -noout -inform pem -out certificate.key
openssl dgst -sha256 certificate.key

As an example, with the Starfield Class 2 Certification Authority self-signed cert in a PEM encoded file sf-class2-root.crt, you can use the following openssl commands:

openssl x509 -in sf-class2-root.crt -noout -pubkey | openssl asn1parse -noout -inform pem -out sf-class2-root.key
openssl dgst -sha256 sf-class2-root.key
SHA256(sf-class2-root.key)= 15f14ac45c9c7da233d3479164e8137fe35ee0f38ae858183f08410ea82ac4b4

What to do if the Amazon Trust Services CAs are not in your trust store

If your tests of any of the Test URLs failed, you must update your trust store. The easiest way to update your trust store is to upgrade the operating system or browser that you are using.

You will find the Amazon Trust Services CAs in the following operating systems (release dates are in parentheses):

  • Microsoft Windows versions that have January 2005 or later updates installed, Windows Vista, Windows 7, Windows Server 2008, and newer versions
  • Mac OS X 10.4 with Java for Mac OS X 10.4 Release 5, Mac OS X 10.5 and newer versions
  • Red Hat Enterprise Linux 5 (March 2007), 6, and 7, and CentOS 5, 6, and 7
  • Ubuntu 8.10
  • Debian 5.0
  • Amazon Linux (all versions)
  • Java 1.4.2_12, Java 5 update 2, and all newer versions, including Java 6, Java 7, and Java 8

All modern browsers trust Amazon’s CAs. You can update the certificate bundle in your browser simply by updating your browser; you can find update instructions on each browser vendor’s website.

If your application is using a custom trust store, you must add the Amazon root CAs to your application’s trust store. The instructions for doing this vary based on the application or platform. Please refer to the documentation for the application or platform you are using.

AWS SDKs and CLIs

Most AWS SDKs and CLIs are not impacted by the transition to the Amazon Trust Services CA. If you are using a version of the Python AWS SDK or CLI released before October 29, 2013, you must upgrade. The .NET, Java, PHP, Go, JavaScript, and C++ SDKs and CLIs do not bundle any certificates, so their certificates come from the underlying operating system. The Ruby SDK has included at least one of the required CAs since June 10, 2015. Before that date, the Ruby V2 SDK did not bundle certificates.

Certificate pinning

If you are using a technique called certificate pinning to lock down the CAs you trust on a domain-by-domain basis, you must adjust your pinning to include the Amazon Trust Services CAs. Certificate pinning helps defend you from an attacker using misissued certificates to fool an application into creating a connection to a spoofed host (an illegitimate host masquerading as a legitimate host). The restriction to a specific, pinned certificate is made by checking that the certificate issued is the expected certificate. This is done by checking that the hash of the certificate public key received from the server matches the expected hash stored in the application. If the hashes do not match, the code stops the connection.

AWS recommends against using certificate pinning because it introduces a potential availability risk. If the certificate to which you pin is replaced, your application will fail to connect. If your use case requires pinning, we recommend that you pin to a CA rather than to an individual certificate. If you are pinning to an Amazon Trust Services CA, you should pin to all CAs shown in the table earlier in this post.
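
If you do pin, the check itself is straightforward. The following is a minimal, illustrative Node.js sketch (not AWS guidance beyond the recommendation above; it requires Node.js 15.6 or later for crypto.X509Certificate, and the file path is a placeholder). It hashes a certificate's Subject Public Key Information and compares the result against a pinned set, mirroring the openssl commands shown earlier:


'use strict';

const { X509Certificate, createHash } = require('crypto');
const fs = require('fs');

// SHA-256 SPKI hashes from the table earlier in this post
// (Amazon Root CA 1 shown; pin all of the listed CAs in practice).
const PINNED_SPKI_SHA256 = new Set([
    'fbe3018031f9586bcbf41727e417b7d1c45c2f47f93be372a17b96b50757d5a2'
]);

function spkiSha256(pem) {
    const cert = new X509Certificate(pem);
    // Export the Subject Public Key Information in DER form and hash it
    const spki = cert.publicKey.export({ type: 'spki', format: 'der' });
    return createHash('sha256').update(spki).digest('hex');
}

// 'AmazonRootCA1.pem' is a placeholder path to a PEM-encoded CA certificate
const pem = fs.readFileSync('AmazonRootCA1.pem');
console.log(PINNED_SPKI_SHA256.has(spkiSha256(pem))); // true if the CA is pinned

In a real application, you would run this check against the CA certificates in the chain presented by the server and pin the hashes of all the CAs shown in the table earlier in this post.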

If you have comments about this post, submit them in the “Comments” section below. If you have questions about this post, start a new thread on the ACM forum.

– Jonathan

Implementing Default Directory Indexes in Amazon S3-backed Amazon CloudFront Origins Using Lambda@Edge

Post Syndicated from Ronnie Eichler original https://aws.amazon.com/blogs/compute/implementing-default-directory-indexes-in-amazon-s3-backed-amazon-cloudfront-origins-using-lambdaedge/

With the recent launch of Lambda@Edge, it’s now possible for you to provide even more robust functionality to your static websites. Amazon CloudFront is a content distribution network service. In this post, I show how you can use Lambda@Edge along with the CloudFront origin access identity (OAI) for Amazon S3 and still provide simple URLs (such as www.example.com/about/ instead of www.example.com/about/index.html).

Background

Amazon S3 is a great platform for hosting a static website. You don’t need to worry about managing servers or underlying infrastructure—you just publish your static content to an S3 bucket. S3 provides a DNS name such as <bucket-name>.s3-website-<AWS-region>.amazonaws.com. Use this name for your website by creating a CNAME record in your domain’s DNS environment (or Amazon Route 53) as follows:

www.example.com -> <bucket-name>.s3-website-<AWS-region>.amazonaws.com

You can also put CloudFront in front of S3 to further scale the performance of your site and cache the content closer to your users. CloudFront can serve your site over HTTPS by using either a custom Secure Sockets Layer (SSL) certificate or a managed certificate from AWS Certificate Manager. In addition, CloudFront offers integration with AWS WAF, a web application firewall. As you can see, it’s possible to achieve some robust functionality by using S3, CloudFront, and other managed services and not have to worry about maintaining underlying infrastructure.

One of the key concerns that you might have when implementing any type of WAF or CDN is that you want to force your users to go through the CDN. If you implement CloudFront in front of S3, you can achieve this by using an OAI. However, in order to do this, you cannot use the HTTP endpoint that is exposed by S3’s static website hosting feature. Instead, CloudFront must use the S3 REST endpoint to fetch content from your origin so that the request can be authenticated using the OAI. This presents some challenges in that the REST endpoint does not support redirection to a default index page.

CloudFront does allow you to specify a default root object (index.html), but it only works on the root of the website (such as http://www.example.com > http://www.example.com/index.html). It does not work on any subdirectory (such as http://www.example.com/about/). If you were to attempt to request this URL through CloudFront, CloudFront would do an S3 GetObject API call against a key that does not exist.

Of course, it is a bad user experience to expect users to always type index.html at the end of every URL (or even know that it should be there). Until now, there has not been an easy way to provide these simpler URLs (equivalent to the DirectoryIndex directive in an Apache web server configuration) to users through CloudFront, at least not if you still want to restrict access to the S3 origin using an OAI. However, with the release of Lambda@Edge, you can use a JavaScript function running on the CloudFront edge nodes to look for these patterns and request the appropriate object key from the S3 origin.

Solution

In this example, you use the compute power at the CloudFront edge to inspect the request as it comes in from the client and then rewrite it so that CloudFront requests a default index object (index.html in this case) for any request URI that ends in ‘/’.

When a request is made against a web server, the client specifies the object to obtain in the request. You can use this URI and apply a regular expression to it so that these URIs get resolved to a default index object before CloudFront requests the object from the origin. Use the following code:

'use strict';
exports.handler = (event, context, callback) => {
    
    // Extract the request from the CloudFront event that is sent to Lambda@Edge
    var request = event.Records[0].cf.request;

    // Extract the URI from the request
    var olduri = request.uri;

    // Match any '/' that occurs at the end of a URI. Replace it with a default index
    var newuri = olduri.replace(/\/$/, '\/index.html');
    
    // Log the URI as received by CloudFront and the new URI to be used to fetch from origin
    console.log("Old URI: " + olduri);
    console.log("New URI: " + newuri);
    
    // Replace the received URI with the URI that includes the index page
    request.uri = newuri;
    
    // Return to CloudFront
    return callback(null, request);

};

To get started, create an S3 bucket to be the origin for CloudFront:

Create bucket

On the other screens, you can just accept the defaults for the purposes of this walkthrough. If this were a production implementation, I would recommend enabling bucket logging and specifying an existing S3 bucket as the destination for access logs. These logs can be useful if you need to troubleshoot issues with your S3 access.

Now, put some content into your S3 bucket. For this walkthrough, create two simple webpages to demonstrate the functionality: a page that resides at the website root, and another that is in a subdirectory.

<s3bucketname>/index.html

<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Root home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the root directory.</p>
    </body>
</html>

<s3bucketname>/subdirectory/index.html

<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Subdirectory home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the /subdirectory/ directory.</p>
    </body>
</html>

When uploading the files into S3, you can accept the defaults. You add a bucket policy as part of the CloudFront distribution creation that allows CloudFront to access the S3 origin. You should now have an S3 bucket that looks like the following:

Root of bucket

Subdirectory in bucket

Next, create a CloudFront distribution that your users will use to access the content. Open the CloudFront console, and choose Create Distribution. For Select a delivery method for your content, under Web, choose Get Started.

On the next screen, you set up the distribution. Below are the options to configure:

  • Origin Domain Name:  Select the S3 bucket that you created earlier.
  • Restrict Bucket Access: Choose Yes.
  • Origin Access Identity: Create a new identity.
  • Grant Read Permissions on Bucket: Choose Yes, Update Bucket Policy.
  • Object Caching: Choose Customize (I am changing the behavior to avoid having CloudFront cache objects, as this could affect your ability to troubleshoot while implementing the Lambda code).
    • Minimum TTL: 0
    • Maximum TTL: 0
    • Default TTL: 0

You can accept all of the other defaults. Again, this is a proof-of-concept exercise. After you are comfortable that the CloudFront distribution is working properly with the origin and Lambda code, you can re-visit the preceding values and make changes before implementing it in production.

CloudFront distributions can take several minutes to deploy (because the changes have to propagate out to all of the edge locations). After that’s done, test the functionality of the S3-backed static website. Looking at the distribution, you can see that CloudFront assigns a domain name:

CloudFront Distribution Settings

Try to access the website using a combination of various URLs:

http://<domainname>/:  Works

› curl -v http://d3gt20ea1hllb.cloudfront.net/
*   Trying 54.192.192.214...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.214) port 80 (#0)
> GET / HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< ETag: "cb7e2634fe66c1fd395cf868087dd3b9"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: -D2FSRwzfcwyKZKFZr6DqYFkIf4t7HdGw2MkUF5sE6YFDxRJgi0R1g==
< Content-Length: 209
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:16 GMT
< Via: 1.1 6419ba8f3bd94b651d416054d9416f1e.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Root home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the root directory.</p>
    </body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

This request works because CloudFront is configured to request a default root object (index.html) from the origin.

http://<domainname>/subdirectory/:  Doesn’t work

› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/
*   Trying 54.192.192.214...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.214) port 80 (#0)
> GET /subdirectory/ HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< ETag: "d41d8cd98f00b204e9800998ecf8427e"
< x-amz-server-side-encryption: AES256
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: Iqf0Gy8hJLiW-9tOAdSFPkL7vCWBrgm3-1ly5tBeY_izU82ftipodA==
< Content-Length: 0
< Content-Type: application/x-directory
< Last-Modified: Wed, 19 Jul 2017 19:21:24 GMT
< Via: 1.1 6419ba8f3bd94b651d416054d9416f1e.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

If you use a tool such as cURL to test this, you notice that CloudFront and S3 are returning a blank response. The reason for this is that the subdirectory does exist, but it does not resolve to an S3 object. Keep in mind that S3 is an object store, so there are no real directories. User interfaces such as the S3 console present a hierarchical view of a bucket with folders based on the presence of forward slashes, but behind the scenes the bucket is just a collection of keys that represent stored objects.
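
You can see this flat key structure for yourself by listing the bucket’s keys with the AWS CLI (the bucket name is a placeholder); a key like subdirectory/index.html is simply a key that contains slashes, not a file inside a directory:

aws s3api list-objects-v2 --bucket <s3bucketname> --query "Contents[].Key"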

http://<domainname>/subdirectory/index.html:  Works

› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/index.html
*   Trying 54.192.192.130...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.130) port 80 (#0)
> GET /subdirectory/index.html HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 20 Jul 2017 20:35:15 GMT
< ETag: "ddf87c487acf7cef9d50418f0f8f8dae"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: RefreshHit from cloudfront
< X-Amz-Cf-Id: bkh6opXdpw8pUomqG3Qr3UcjnZL8axxOH82Lh0OOcx48uJKc_Dc3Cg==
< Content-Length: 227
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:45 GMT
< Via: 1.1 3f2788d309d30f41de96da6f931d4ede.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Subdirectory home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the /subdirectory/ directory.</p>
    </body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

This request works as expected because you are referencing the object directly. Now, you implement the Lambda@Edge function to return the default index.html page for any subdirectory. Looking at the example JavaScript code, here’s where the magic happens:

var newuri = olduri.replace(/\/$/, '\/index.html');

You are going to use a JavaScript regular expression to match any ‘/’ that occurs at the end of the URI and replace it with ‘/index.html’. This is the equivalent of what S3 does on its own with static website hosting. However, as I mentioned earlier, you can’t rely on this if you want to use a policy on the bucket to restrict it so that users must access the bucket through CloudFront. That way, all requests to the S3 bucket must be authenticated using the S3 REST API. Because of this, you implement a Lambda@Edge function that takes any client request ending in ‘/’ and appends a default ‘index.html’ to the request before requesting the object from the origin.
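
To make the rewrite behavior concrete, here is a small illustrative check you can run with plain Node.js (separate from the deployed function):


var rewrite = function(uri) {
    return uri.replace(/\/$/, '/index.html');
};

console.log(rewrite('/'));                        // "/index.html"
console.log(rewrite('/subdirectory/'));           // "/subdirectory/index.html"
console.log(rewrite('/subdirectory/index.html')); // unchanged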

In the Lambda console, choose Create function. On the next screen, skip the blueprint selection and choose Author from scratch, as you’ll use the sample code provided.

Next, configure the trigger. Choosing the empty box shows a list of available triggers. Choose CloudFront and select your CloudFront distribution ID (created earlier). For this example, leave Cache Behavior as * and CloudFront Event as Origin Request. Select the Enable trigger and replicate box and choose Next.

Lambda Trigger

Next, give the function a name and a description. Then, copy and paste the following code:

'use strict';
exports.handler = (event, context, callback) => {
    
    // Extract the request from the CloudFront event that is sent to Lambda@Edge
    var request = event.Records[0].cf.request;

    // Extract the URI from the request
    var olduri = request.uri;

    // Match any '/' that occurs at the end of a URI. Replace it with a default index
    var newuri = olduri.replace(/\/$/, '\/index.html');
    
    // Log the URI as received by CloudFront and the new URI to be used to fetch from origin
    console.log("Old URI: " + olduri);
    console.log("New URI: " + newuri);
    
    // Replace the received URI with the URI that includes the index page
    request.uri = newuri;
    
    // Return to CloudFront
    return callback(null, request);

};

Next, define a role that grants permissions to the Lambda function. For this example, choose Create new role from template, Basic Edge Lambda permissions. This creates a new IAM role for the Lambda function and grants the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:*:*"
            ]
        }
    ]
}

In a nutshell, these are the permissions that the function needs to create the necessary CloudWatch log group and log stream, and to put the log events so that the function is able to write logs when it executes.

After the function has been created, you can go back to the browser (or cURL) and re-run the test for the subdirectory request that failed previously:

› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/
*   Trying 54.192.192.202...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.202) port 80 (#0)
> GET /subdirectory/ HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 20 Jul 2017 21:18:44 GMT
< ETag: "ddf87c487acf7cef9d50418f0f8f8dae"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: rwFN7yHE70bT9xckBpceTsAPcmaadqWB9omPBv2P6WkIfQqdjTk_4w==
< Content-Length: 227
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:45 GMT
< Via: 1.1 3572de112011f1b625bb77410b0c5cca.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Subdirectory home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the /subdirectory/ directory.</p>
    </body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

You have now configured a way for CloudFront to return a default index page for subdirectories in S3!

Summary

In this post, you used Lambda@Edge to use CloudFront with an S3 origin access identity and still serve a default root object on subdirectory URLs. To find out more about this use case, see Lambda@Edge integration with CloudFront in our documentation.

If you have questions or suggestions, feel free to comment below. For troubleshooting or implementation help, check out the Lambda forum.

Application Load Balancers Now Support Multiple TLS Certificates With Smart Selection Using SNI

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/

Today we’re launching support for multiple TLS/SSL certificates on Application Load Balancers (ALB) using Server Name Indication (SNI). You can now host multiple TLS secured applications, each with its own TLS certificate, behind a single load balancer. In order to use SNI, all you need to do is bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client. These new features are provided at no additional charge.

If you’re looking for a TL;DR on how to use this new feature just click here. If you’re like me and you’re a little rusty on the specifics of Transport Layer Security (TLS) then keep reading.

TLS? SSL? SNI?

People tend to use the terms SSL and TLS interchangeably even though the two are technically different. SSL actually refers to a predecessor of the TLS protocol. To keep things simple, I’ll be using the term TLS for the rest of this post.

TLS is a protocol for securely transmitting data like passwords, cookies, and credit card numbers. It enables privacy, authentication, and integrity of the data being transmitted. TLS uses certificate based authentication where certificates are like ID cards for your websites. You trust the person that signed and issued the certificate, the certificate authority (CA), so you trust that the data in the certificate is correct. When a browser connects to your TLS-enabled ALB, ALB presents a certificate that contains your site’s public key, which has been cryptographically signed by a CA. This way the client can be sure it’s getting the ‘real you’ and that it’s safe to use your site’s public key to establish a secure connection.

With SNI support we’re making it easy to use more than one certificate with the same ALB. The most common reason you might want to use multiple certificates is to handle different domains with the same load balancer. It’s always been possible to use wildcard and subject alternative name (SAN) certificates with ALB, but these come with limitations. Wildcard certificates only work for related subdomains that match a simple pattern, and while SAN certificates can support many different domains, the same certificate authority has to authenticate each one. That means you have to reauthenticate and reprovision your certificate every time you add a new domain.

One of our most frequent requests on forums, reddit, and in my e-mail inbox has been to use the Server Name Indication (SNI) extension of TLS to choose a certificate for a client. Since TLS operates at the transport layer, below HTTP, it doesn’t see the hostname requested by a client. SNI works by having the client tell the server “This is the domain I expect to get a certificate for” when it first connects. The server can then choose the correct certificate to respond to the client. All modern web browsers and a large majority of other clients support SNI. In fact, today we see SNI supported by over 99.5% of clients connecting to CloudFront.
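
You can watch SNI in action with openssl’s s_client (a standard OpenSSL command, not specific to ALB; the host name is a placeholder). The -servername flag is what populates the SNI extension in the ClientHello; run the command with and without it to see which certificate the server presents:

openssl s_client -connect www.example.com:443 -servername www.example.com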

Smart Certificate Selection on ALB

ALB’s smart certificate selection goes beyond SNI. In addition to containing a list of valid domain names, certificates also describe the type of key exchange and cryptography that the server supports, as well as the signature algorithm (SHA2, SHA1, MD5) used to sign the certificate. To establish a TLS connection, a client starts a TLS handshake by sending a “ClientHello” message that outlines the capabilities of the client: the protocol versions, extensions, cipher suites, and compression methods. Based on what an individual client supports, ALB’s smart selection algorithm chooses a certificate for the connection and sends it to the client. ALB supports both the classic RSA algorithm and the newer, hipper, and faster Elliptic-curve based ECDSA algorithm. ECDSA support among clients isn’t as prevalent as SNI, but it is supported by all modern web browsers. Since it’s faster and requires less CPU, it can be particularly useful for ultra-low latency applications and for conserving the amount of battery used by mobile applications. Since ALB can see what each client supports from the TLS handshake, you can upload both RSA and ECDSA certificates for the same domains and ALB will automatically choose the best one for each client.

Using SNI with ALB

I’ll use a few example websites like VimIsBetterThanEmacs.com and VimIsTheBest.com. I’ve purchased and hosted these domains on Amazon Route 53, and provisioned two separate certificates for them in AWS Certificate Manager (ACM). If I want to securely serve both of these sites through a single ALB, I can quickly add both certificates in the console.

First, I’ll select my load balancer in the console, go to the listeners tab, and select “view/edit certificates”.

Next, I’ll use the “+” button in the top left corner to select some certificates then I’ll click the “Add” button.

There are no more steps. If you’re not really a GUI kind of person you’ll be pleased to know that it’s also simple to add new certificates via the AWS Command Line Interface (CLI) (or SDKs).

aws elbv2 add-listener-certificates --listener-arn <listener-arn> --certificates CertificateArn=<cert-arn>

Things to know

  • ALB Access Logs now include the client’s requested hostname and the certificate ARN used. If the “hostname” field is empty (represented by a “-“) the client did not use the SNI extension in their request.
  • You can use any of your certificates in ACM or IAM.
  • You can bind multiple certificates for the same domain(s) to a secure listener. Your ALB will choose the optimal certificate based on multiple factors including the capabilities of the client.
  • If the client does not support SNI your ALB will use the default certificate (the one you specified when you created the listener).
  • There are three new ELB API calls: AddListenerCertificates, RemoveListenerCertificates, and DescribeListenerCertificates (see the example after this list).
  • You can bind up to 25 certificates per load balancer (not counting the default certificate).
  • These new features are supported by AWS CloudFormation at launch.
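
For example, you can list the certificates bound to a listener from the CLI (the listener ARN is a placeholder):

aws elbv2 describe-listener-certificates --listener-arn <listener-arn>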

You can see an example of these new features in action with a set of websites created by my colleague Jon Zobrist: https://www.exampleloadbalancer.com/.

Overall, I will personally use this feature and I’m sure a ton of AWS users will benefit from it as well. I want to thank the Elastic Load Balancing team for all their hard work in getting this into the hands of our users.

Randall