Amazon Web Services (AWS) announces that it has successfully renewed the Portuguese GNS (Gabinete Nacional de Segurança, National Security Cabinet) certification in the AWS Regions and edge locations in the European Union. This accreditation confirms that AWS cloud infrastructure, security controls, and operational processes adhere to the stringent requirements set forth by the Portuguese government for handling classified information at the National Reservado level (equivalent to the NATO Restricted level).
The GNS certification is based on the NIST SP 800-53 Rev. 5 and CSA CCM v4 frameworks. It demonstrates the AWS commitment to providing the most secure cloud services to public-sector customers, particularly those with the most demanding security and compliance needs. By achieving this certification, AWS has demonstrated its ability to safeguard classified data up to the Reservado (Restricted) level, in accordance with the Portuguese government’s rigorous security standards.
AWS was evaluated by an authorized and independent third-party auditor, Adyta Lda, and by the Portuguese GNS itself. With the GNS certification, AWS customers in Portugal, including public sector organizations and defense contractors, can now use the full extent of AWS cloud services to handle national restricted information. This enables these customers to take advantage of AWS scalability, reliability, and cost-effectiveness, while safeguarding data in alignment with GNS standards.
Amazon Web Services (AWS) has recently renewed the Esquema Nacional de Seguridad (ENS) High certification, upgrading to the latest version regulated under Royal Decree 311/2022. The ENS establishes security standards that apply to government agencies and public organizations in Spain and service providers on which Spanish public services depend.
This security framework has gone through significant updates, from the original Royal Decree 3/2010 to the latest Royal Decree 311/2022, to adapt to evolving cybersecurity threats and technologies. The current scheme defines basic requirements and lists additional security reinforcements to meet the bar of the different security levels (Low, Medium, High).
Achieving the ENS High certification for the 311/2022 version underscores the AWS commitment to maintaining robust cybersecurity controls and highlights our proactive approach to cybersecurity.
We are happy to announce the addition of 14 services to the scope of our ENS certification, for a new total of 172 services in scope. The certification now covers 31 Regions. Some of the additional services in scope for ENS High include the following:
Amazon Bedrock – This fully managed service offers a choice of high-performing foundation models (FMs) from leading artificial intelligence (AI) companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI.
Amazon EventBridge – Use this service to easily build loosely coupled, event-driven architectures. It creates point-to-point integrations between event producers and consumers without needing to write custom code or manage and provision servers.
AWS HealthOmics – This service helps healthcare and life science organizations and their software partners store, query, and analyze genomic, transcriptomic, and other omics data and then uses that data to generate insights to improve health.
AWS Signer – This is a fully managed code-signing service to ensure the trust and integrity of your code. AWS Signer manages the code-signing certificate’s public and private keys and enables central management of the code-signing lifecycle.
AWS Wickr – This service encrypts messages, calls, and files with a 256-bit end-to-end encryption protocol. Only the intended recipients and the customer organization can decrypt these communications, reducing the risk of adversary-in-the-middle attacks.
The AWS achievement of the ENS High certification was verified by BDO Auditores S.L.P., which conducted an independent audit and confirmed that AWS continues to adhere to the confidentiality, integrity, and availability standards at their highest level, as described in Royal Decree 311/2022.
As always, we are committed to bringing new services into the scope of our ENS High program based on your architectural and regulatory needs. If you have questions about the ENS program, reach out to your AWS account team or contact AWS Compliance.
Amazon Web Services (AWS) is proud to announce the successful completion of its first Standar Nasional Indonesia (SNI) certification for the AWS Asia Pacific (Jakarta) Region in Indonesia. SNI is the Indonesian National Standard, and it comprises a set of standards that are nationally applicable in Indonesia. AWS is now certified according to the SNI 27001 requirements. An independent third-party auditor that is accredited by the Komite Akreditasi Nasional (KAN/National Accreditation Committee) assessed AWS, per regulations in Indonesia.
SNI 27001 is based on the ISO/IEC 27001 standard, which provides a framework for the development and implementation of an effective information security management system (ISMS). An ISMS that is implemented according to this standard is a tool for risk management, cyber-resilience, and operational excellence.
AWS achieved the certification for compliance with SNI 27001 on October 28, 2023. The SNI 27001 certification covers the Asia Pacific (Jakarta) Region in Indonesia. For a full list of AWS services that are certified under the SNI 27001, see the SNI 27001 compliance page. Customers can also download the latest SNI 27001 certificate on AWS Artifact, a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.
AWS is committed to bringing new services into the scope of its compliance programs to help you meet your architectural, business, and regulatory needs. If you have questions about the SNI 27001 certification, contact your AWS account team.
Amazon Web Services (AWS) is proud to announce the successful completion of the ISO/IEC 20000-1:2018 certification for the AWS Asia Pacific (Mumbai) and (Hyderabad) Regions in India.
The scope of the ISO/IEC 20000-1:2018 certification is limited to the IT Service Management System (ITSMS) of AWS India Data Center (DC) Operations that supports the delivery of Security Operations Center (SOC) and Network Operation Center (NOC) managed services.
ISO/IEC 20000-1 is a service management system (SMS) standard that specifies requirements for establishing, implementing, maintaining, and continually improving an SMS. An SMS supports the management of the service lifecycle, including the planning, design, transition, delivery, and improvement of services, which meet agreed upon requirements and deliver value for customers, users, and the organization that delivers the services.
The ISO/IEC 20000-1 certification provides assurance that AWS data center operations in India support the delivery of SOC and NOC managed services in accordance with the ISO/IEC 20000-1 guidance, and in line with the requirements of the Ministry of Electronics and Information Technology (MeitY), Government of India.
AWS is committed to bringing new services into the scope of its compliance programs to help you meet your architectural, business, and regulatory needs. If you have questions about the ISO/IEC 20000-1:2018 certification, contact your AWS account team.
ENS is Spain’s National Security Framework. The ENS certification is regulated under the Spanish Royal Decree 3/2010 and is a compulsory requirement for central government customers in Spain. ENS establishes security standards that apply to government agencies and public organizations in Spain, and service providers on which Spanish public services depend. Updating and achieving this certification every year demonstrates our ongoing commitment to meeting the heightened expectations for cloud service providers set forth by the Spanish government.
We are happy to announce the addition of 17 services to the scope of our ENS High certification, for a new total of 166 services in scope. The certification now covers 25 Regions. Some of the additional security services in scope for ENS High include the following:
AWS CloudShell – a browser-based shell that makes it simpler to securely manage, explore, and interact with your AWS resources. With CloudShell, you can quickly run scripts with the AWS Command Line Interface (AWS CLI), experiment with AWS service APIs by using the AWS SDKs, or use a range of other tools for productivity.
AWS Cloud9 – a cloud-based integrated development environment (IDE) that you can use to write, run, and debug your code with just a browser. It includes a code editor, debugger, and terminal.
Amazon DevOps Guru – a service that uses machine learning to detect abnormal operating patterns so that you can identify operational issues before they impact your customers.
Amazon HealthLake – a HIPAA-eligible service that offers healthcare and life sciences companies a complete view of individual or patient population health data for query and analytics at scale.
AWS IoT SiteWise – a managed service that simplifies collecting, organizing, and analyzing industrial equipment data.
The AWS achievement of the ENS High certification was verified by BDO Auditores S.L.P., which conducted an independent audit and confirmed that AWS continues to adhere to the confidentiality, integrity, and availability standards at their highest level.
As always, we are committed to bringing new services into the scope of our ENS High program based on your architectural and regulatory needs. If you have questions about the ENS program, reach out to your AWS account team or contact AWS Compliance.
Last year the AddTrust root certificate expired and lots of clients had a bad time. Some Roku devices weren’t working right, Heroku had problems, and some folks couldn’t even curl. In the aftermath Ryan Sleevi wrote a really great blog post not just about the issue of this one certificate’s expiry, but the problem that so many TLS implementations have in general with certificate path building. If you haven’t read that blog post, you should. This post is probably going to make a lot more sense if you’ve read that one first, so go ahead and read it now.
To recap that previous AddTrust root certificate expiry, there was a certificate graph that looked like this:
This is a real example, and you can see the five certificates in the above graph here:
The important thing to understand about a certificate graph is that the boxes represent entities (meaning an X.500 Distinguished Name and public key). Entities are things you trust (or don’t, as the case may be). The arrows between entities represent certificates: a way to extend trust from one entity to another. This means that if you trust either the “USERTrust RSA Certification Authority” entity or the “AddTrust External CA Root” entity, you should be able to discover a chain of trust from that trusted entity (the “trust anchor”) down to “www.agwa.name”, the “end-entity”.
(Note that the self-signed certificates (4 and 5) are often useful for defining trusted entities, but aren’t going to be important in the context of path building.)
The problem that occurred last summer started because certificate 3 expired. The “USERTrust RSA Certification Authority” entity was relatively new and not directly trusted by many clients, and so most servers would send certificates 1, 2, and 3 to clients. If a client only trusted “AddTrust External CA Root” then this would be fine; that client can build a certificate chain 1 ← 2 ← 3 and see that it should trust www.agwa.name’s public key. On the other hand, if the client trusts “USERTrust RSA Certification Authority” then that’s also fine; it only needs to build a chain 1 ← 2.
The problem that arose was that some clients weren’t good at certificate path building (even though this is a fairly simple case of path building compared to the next example below). Those clients didn’t realize that they could stop building a chain at 2 if they trusted “USERTrust RSA Certification Authority”. So when certificate 3 expired on May 30, 2020, these clients treated the entire collection of certificates sent by the server as invalid and would no longer establish trust in the end-entity.
Even though that story is a year old and was well covered then, I’m retelling it here because a couple of weeks ago something kind of similar happened: a certificate for the Let’s Encrypt R3 CA expired (certificate 2 below) on September 30, 2021. This should have been fine; the Let’s Encrypt R3 entity also has a certificate signed by the ISRG Root X1 CA (3) which nowadays is trusted by most clients.
But predictably, even though it’s been a year since Ryan’s post, lots of services and clients had issues. You should read Scott Helme’s full post-mortem on the event to understand some of the contributing factors, but one big problem is that most TLS implementations still aren’t very good at path building. As a result, servers generally can’t send a complete collection of certificates down to clients (containing different possible paths to different trust anchors), which makes it hard to host a service that both old and new devices can talk to.
Maybe it’s because I saw history repeating or maybe it’s because I had just listened to Ryan Sleevi talk about the history of web PKI, but the whole episode really made me want to get back to something I had been wanting to do for a while. So over the last couple of weeks I set some time aside, started reading some RFCs, had to get more coffee, finished reading some RFCs, and finally started making certificates. The end result is the first major update to BetterTLS since its first release: a new suite of tests to exercise TLS implementations’ certificate path building. As a bonus, it also checks whether TLS implementations apply certain validity checks. Some of the checks are part of RFCs, like Basic Constraints, while others are not fully standardized, like distrusting deprecated signature algorithms and enforcing EKU constraints on CAs.
I found the results of applying these tests to various TLS implementations pretty interesting, but before I get into those results let me give you a few more details about why TLS implementations should be doing good path building and why we care about it.
What is Certificate Path Building?
If you want the really detailed answer to “What is Certificate Path Building” you can take a look at RFC 4158. But in short, certificate path building is the process of building a chain of trust from some end entity (like the blue boxes in the examples above) back to a trust anchor (like the ISRG Root X1 CA) given a collection of certificates. In the context of TLS, that collection of certificates is sent from the server to the client as part of the TLS handshake. A lot of the time, that collection of certificates is actually already an ordered sequence of certificates from end-entity to trust anchor, such as in the first example where servers would send certificates 1, 2, 3. This happens to already be a chain from “www.agwa.name” to “AddTrust External CA Root”.
But what happens if we can’t be sure what trust anchor the client has, such as the second example above where the server doesn’t know if the client will trust DST Root CA X3 or ISRG Root X1? In this case the server could send all the certificates (1, 2, and 3) and let the client figure out which path makes sense (either 1 ← 2, or 1 ← 3). But if the client expects the server’s certificates to simply be a chain already, the sequence 1 ← 2 ← 3 is going to fail to validate.
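To make this concrete, here’s a minimal sketch of branching path building in Python. The Cert structure and names are illustrative stand-ins rather than real X.509 parsing: the pool of server-supplied certificates becomes a graph, and a depth-first search follows every candidate issuer until it reaches a trusted entity. A real path builder must also discard invalid certificates (expired, constrained, and so on) edge by edge as it searches:

from dataclasses import dataclass

@dataclass(frozen=True)
class Cert:
    subject: str  # the entity this certificate names
    issuer: str   # the entity that signed it

def build_path(end_entity, pool, trust_anchors):
    """Return one subject-name path from the end-entity to a trust anchor, or None."""
    def dfs(cert, path):
        if cert.subject in path:              # avoid cycles (see RFC 4158 figure 7)
            return None
        path = path + [cert.subject]
        if cert.issuer in trust_anchors:      # stop as soon as a trusted entity is reached
            return path + [cert.issuer]
        for candidate in pool:                # branch: try every certificate for this issuer
            if candidate.subject == cert.issuer:
                found = dfs(candidate, path)
                if found:
                    return found
        return None
    return dfs(end_entity, [])

# The Let's Encrypt example: the R3 entity has two issuing certificates.
pool = [
    Cert("Let's Encrypt R3", "ISRG Root X1"),
    Cert("Let's Encrypt R3", "DST Root CA X3"),  # the path old clients relied on
]
leaf = Cert("example.com", "Let's Encrypt R3")
print(build_path(leaf, pool, {"ISRG Root X1"}))
# -> ['example.com', "Let's Encrypt R3", 'ISRG Root X1']

A client trusting only DST Root CA X3 finds the other path from the same pool, which is exactly the flexibility a non-branching “chain” validator gives up.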
Why Does This Matter?
The most important reason for clients to support robust path building is that it allows for agility in the web PKI ecosystem. For example, we can add additional certificates that conform to new requirements such as SHA-1 deprecation, validity length restrictions, or trust anchor removal, all while leaving existing certificates in place to preserve legacy client functionality. This allows static, infrequently updated, or intentionally end-of-lifed clients to continue working while browsers (which frequently enforce new constraints like the ones mentioned above) can take advantage of the additional certificates in the handshake that conform to the new requirements.
In particular, Netflix runs on a lot of devices. Millions of them. The reality, though, is that the above description applies to many of them. Some devices only run older versions of Android or iOS. Some are embedded devices that no longer receive updates. Regardless of the specifics, the update cadence (if one exists) for those devices is outside of our control. But ideally we’d love it if every device that used to work just kept working. To that end, it’s helpful to know what trade-offs we can make in terms of agility versus retaining support for every device. Are those devices stuck using certain signature algorithms or cipher suites? Will those devices accept a certificate set that includes extra certificates with other alternate signature algorithms?
As service owners, having test suites that can answer these questions can guide decision making. On the other hand, TLS library implementers using these test suites can ensure that applications built with their libraries operate reliably throughout churn in the web PKI ecosystem.
An Aside About Agility
More than 4 years passed between publication of the first draft of the TLS 1.3 specification and the final version. An impressive amount of consideration went into the design of all of the versions of the TLS and SSL protocols and it speaks to the designers’ foresight and diligence that a single server can support clients speaking SSL 3.0 (final specification released 1996) all the way up to TLS 1.3 (final specification released 2018).
(Although I should say that in practice, supporting such a broad set of protocol versions on a single server is probably not a good idea.)
The reason that TLS protocol can support this is because agility has been designed into the system. The client advertises the TLS versions, cipher suites, and extensions it supports and the server can make decisions about the best supported version of those options and negotiate the details in its response. Over time this has allowed the ecosystem to evolve gracefully, supporting new cryptographic primitives (like elliptic curve cryptography) and deprecating old ones (like the MD5 hash algorithm).
Unfortunately, the TLS specification has not enabled the same agility with the certificates that it relies on in practice. While there are great specifications like RFC 4158 for how to think about certificate path building, TLS specifications up to 1.2 only allowed for the server to present “the chain”:
This is a sequence (chain) of certificates. The sender’s certificate MUST come first in the list. Each following certificate MUST directly certify the one preceding it.
Only in TLS 1.3 did the specification allow for greater flexibility:
The sender’s certificate MUST come in the first CertificateEntry in the list. Each following certificate SHOULD directly certify the one immediately preceding it. … Note: Prior to TLS 1.3, “certificate_list” ordering required each certificate to certify the one immediately preceding it; however, some implementations allowed some flexibility. Servers sometimes send both a current and deprecated intermediate for transitional purposes, and others are simply configured incorrectly, but these cases can nonetheless be validated properly. For maximum compatibility, all implementations SHOULD be prepared to handle potentially extraneous certificates and arbitrary orderings from any TLS version, with the exception of the end-entity certificate which MUST be first.
This change to the specification is hugely significant because it’s the first formalization that TLS implementations should be doing robust path building. Implementations which conform to this are far more likely to continue operating in a PKI ecosystem undergoing frequent changes. If more TLS implementations can tolerate changes, then the web PKI ecosystem will be in a place where it is able to undergo those changes. And ultimately this means we will be able to update best practices and retain trust agility as time goes on, making the web a more secure place.
It’s hard to imagine a world where SSL and TLS were so inflexible that we wouldn’t have been able to gracefully transition off of MD5 or transition to PFS cipher suites. I’m hopeful that this update to the TLS specification will help bring the same agility that has existed in the TLS protocol itself to the web PKI ecosystem.
Test Results
So what does the new test suite in BetterTLS tell us about the state of certificate path building in TLS implementations? The good news is that there has been some improvement in the state of the world since Ryan’s roundup last year. The bad news is that that improvement isn’t everywhere.
The test suite both identifies what relevant features a TLS implementation supports (like default distrust of deprecated signing algorithms) and evaluates correctness. Here’s a quick enumeration of what features this test suite looks for:
Branching Path Building: Implementations that support “branching” path building can handle cases like the Let’s Encrypt R3 example above where an entity has multiple issuing certificates and the client needs to check multiple possible paths to find a route to a trust anchor. Importantly, as invalid certificates are found during path building (for all of the reasons listed below) the implementation should be able to pick an alternate issuer certificate to continue building a path. This is the primary feature of interest in this test suite.
Certificate expiration: Implementations should treat expired certificates as invalid during path building. This is a pretty straightforward expectation and fortunately all the tested implementations were properly verifying this.
Name constraints: Implementations should treat certificates with a name constraint extension in conflict with the end entity’s identity as invalid. Check out BetterTLS’s name constraints test suite for a more thorough evaluation of this behavior. All of the implementations tested below correctly evaluated the simple name constraints check in this test suite.
Bad Extended Key Usage (EKU) on CAs: This check tests whether an implementation rejects CA certificates with an Extended Key Usage extension that is incompatible with the end-entity’s use of the certificate for TLS server authentication. The Mozilla Certificate Policy FAQ states:
Inclusion of EKU in CA certificates is generally allowed. NSS and CryptoAPI both treat the EKU extension in intermediate certificates as a constraint on the permitted EKU OIDs in end-entity certificates. Browsers and certificate client software have been using EKU in intermediate certificates, and it has been common for enterprise subordinate CAs in Windows environments to use EKU in their intermediate certificates to constrain certificate issuance.
While many implementations support the semantics of an incompatible EKU in CAs as a reason to treat a certificate as invalid, RFCs do not require this behavior so we do see several implementations below not applying this verification.
Missing Basic Constraints Extension: This check tests whether the implementation rejects paths where a CA is missing the Basic Constraints extension. RFC 5280 requires that all CAs have this extension, so all implementations should support this.
Not a CA: This check tests whether the implementation rejects paths where a CA has a Basic Constraints extension, but that extension does not identify the certificate as being a CA. Similarly to the above, all implementations should support this and fortunately all of the implementations tested applied this check correctly.
Deprecated Signing Algorithm: This check tests whether the implementation rejects certificates that have been signed with an algorithm that is considered deprecated (in particular, with an algorithm using SHA-1). Enforcement of SHA-1 deprecation is not universally present in all TLS implementations at this point, so we see a mix of implementations below that do and do not apply it.
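To make these checks concrete, here’s a hedged sketch of how several of them (expiration, Basic Constraints, EKU on CAs, and SHA-1 signatures) might be applied to a single CA certificate, using the Python pyca/cryptography package (the *_utc validity accessors assume version 42 or later). It’s illustrative rather than the BetterTLS harness itself, and it omits name constraints for brevity. As the Sleevi quote below emphasizes, a real verifier should run checks like these on each candidate certificate while building paths, not on a completed chain:

import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.x509.oid import ExtendedKeyUsageOID

def ca_certificate_problems(cert: x509.Certificate) -> list[str]:
    """Return the reasons this CA certificate should be rejected during path building."""
    problems = []
    now = datetime.datetime.now(datetime.timezone.utc)
    # Certificate expiration.
    if not (cert.not_valid_before_utc <= now <= cert.not_valid_after_utc):
        problems.append("expired or not yet valid")
    # Missing Basic Constraints / not a CA: RFC 5280 requires the extension
    # on CAs, and the cA boolean must be asserted.
    try:
        bc = cert.extensions.get_extension_for_class(x509.BasicConstraints)
        if not bc.value.ca:
            problems.append("Basic Constraints present but cA is not asserted")
    except x509.ExtensionNotFound:
        problems.append("missing the Basic Constraints extension")
    # Bad EKU on a CA: if an EKU extension is present, treat it as a
    # constraint that must permit TLS server authentication.
    try:
        eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage)
        if ExtendedKeyUsageOID.SERVER_AUTH not in eku.value:
            problems.append("EKU does not permit serverAuth")
    except x509.ExtensionNotFound:
        pass  # no EKU constraint on this CA
    # Deprecated signing algorithm (SHA-1).
    if isinstance(cert.signature_hash_algorithm, hashes.SHA1):
        problems.append("signed with SHA-1")
    return problems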
For more information about these checks, check out the repository’s README. Now on to the results!
webpki
webpki is a Rust library for validating web PKI certificates. It’s the underlying validation mechanism for the rustls library, which is what I actually tested. webpki shows up as the hero of the non-browser TLS implementations, supporting all of the features and having a 100% test pass rate. webpki is primarily maintained by Brian Smith, who also worked on the mozilla::pkix codebase that’s used by Firefox.
Go
Go didn’t distrust deprecated signature algorithms by default (although looking at the issue tracker, an update was merged to change this long before I started working on this test suite; it should land in Go 1.18), but otherwise supported all the features in the test suite. However, while it supported EKU constraints on CAs, the test suite discovered a bug that causes path building to fail under certain conditions when only a subset of paths have an EKU constraint.
Upon inspection, the Go x509 library validates most certificate constraints (like expiration and name constraints) as it builds paths, but EKU constraints are only applied after candidate paths are found. This looks to be a violation of Sleevi’s Rule, which probably explains why the EKU corner case causes Go to have a bad time:
Even if a library supports path building, doing some form of depth-first search in the PKI graph, the next most common mistake is still treating path building and path verification as separable, independent steps. That is, the path builder finds “a chain” that is rooted in a trusted CA, and then completes. The completed chain is then handed to a path verifier, which asks “Does this chain meet all the caller’s/application’s requirements”, and returns a “Yes/No” answer. If the answer is “No”, you want the path builder to consider those other paths in the graph, to see if there are any “Yes” paths. Yet if the path building and verification steps are different, you’re bound to have a bad time.
Java
I didn’t evaluate JDKs other than OpenJDK, but the latest version of OpenJDK 11 actually performed quite well. This JDK didn’t enforce EKU constraints on CAs or distrust certificates signed with SHA-1 algorithms. Otherwise, the JDK did a good job of building certificate paths.
PKI.js
The PKI.js library is a JavaScript library that can perform a number of PKI-related operations, including certificate verification. It’s unclear whether the “certificate chain validator” is meant to support complex certificate sets or only pre-validated paths, but the implementation fared poorly against the test suite. It didn’t support EKU constraints or distrust deprecated signature algorithms, didn’t perform any branching path building, and failed to validate even a simple “chain” when a parent certificate had expired but the intermediate was already trusted (the same issue OpenSSL ran into with the expired AddTrust certificate last year).
Even worse, when the certificate pool had a cycle (like in RFC 4158 figure 7), the validator got stuck in an infinite loop.
OpenSSL
In short, OpenSSL doesn’t appear to have changed significantly since Ryan’s roundup last year. OpenSSL does support the less ubiquitous validation checks (such as EKU constraints on CAs and distrusting deprecated signing algorithms), but it still doesn’t support branching path building (only non-branching chains).
LibreSSL
LibreSSL showed significant improvement over last year’s evaluation, which appears to be largely attributable to Bob Beck’s work on a new x509 verifier in LibreSSL 3.2.2 based on Go’s verifier. It supported path building and passed all of the non-skipped tests. As with other implementations it didn’t distrust deprecated algorithms by default. The one big surprise though is that it also didn’t distrust certificates missing the Basic Constraints extension, which as we described above is strictly required by the RFC 5280 spec:
If the basic constraints extension is not present in a version 3 certificate, or the extension is present but the cA boolean is not asserted, then the certified public key MUST NOT be used to verify certificate signatures.
BoringSSL
BoringSSL performed similarly to OpenSSL. Not only did it not support any path building (only non-branching chains), but it also didn’t distrust deprecated signature algorithms.
GnuTLS
GnuTLS looked just like OpenSSL in its results. It also supported all the validation checks in the test suite (EKU constraints, deprecated signature algorithms) but didn’t support branching path building.
Browsers
By and large, browsers (or the operating system libraries they utilize) do a good job of path building.
Firefox (all platforms)
Firefox didn’t distrust deprecated signature algorithms, but otherwise supported path building and passed all tests.
Chrome (all platforms)
Chrome supported all validation cases and passed all tests.
Microsoft Edge (Windows)
Edge supported all validation cases and passed all tests.
Safari (macOS)
Safari didn’t support EKU constraints on CAs but did pass simple branching path building test cases. However, it failed most of the more complicated path building test cases (such as cases with cycles).
For most of the history of TLS, implementations have been pretty poor at certificate path building (if they supported it at all). In fairness, until recently the TLS specifications asserted that servers MUST behave in such a way that didn’t require clients to implement certificate path building.
However the evolution of the web PKI ecosystem has necessitated flexibility and this has been more directly codified in the TLS 1.3 specification. If you work on a TLS implementation, you really really ought to take heed of these new expectations in the TLS 1.3 specification. We’re going to have a lot more stability on the web if implementations can do good path building.
To be clear, it doesn’t need to be every implementation’s goal to pass every test in this suite. I’ll be the first to admit that the test suite contains some more pathological test cases than you’re likely to see in web PKI in practice. But at a minimum you should look at the changes that have occurred in the web PKI ecosystem in the past decade and be confident that your library supports enough path building to easily handle transitions (such as servers sending multiple possible paths to a trust anchor). And passing all of the tests in the BetterTLS test suite is a useful metric for establishing that confidence.
It’s important to make sure clients are forward-compatible with changes to the web PKI, because it’s not a matter of “if” but “when.” In Scott’s own words:
One thing that’s certain is that this event is coming again. Over the next few years we’re going to see a wide selection of Root Certificates expiring for all of the major CAs and we’re likely to keep experiencing the exact same issues unless something changes in the wider ecosystem.
If you are in a position to choose between different client-side TLS libraries, you can use these test results as a point of consideration for which libraries are most likely to weather those changes.
And if you are a service owner, it is important to know your customers. Will they be able to handle a transition from RSA to ECDSA? Will they be able to handle a transition from ECDSA to a post-quantum signature algorithm? Will they be able to handle having multiple certificates in a handshake when an old trust is expiring or no longer trusted by new clients? Knowing your clients can help you be resilient and set up appropriate configurations and endpoints to support them.
Standards, security base lines, and best practices in web PKI have been rapidly changing over the last few years and are only going to keep changing. Whether you implement TLS or just consume it, whether it’s a distrusted CA, a broken signature algorithm, or just the expiry of a certificate in good standing, it’s important to make sure that your application will be able to handle the next big change. We hope that BetterTLS can play a part in making that easier!
In this post, I take you through the steps to deploy a public AWS Certificate Manager (ACM) certificate across multiple accounts and AWS Regions by using the functionality of AWS CloudFormation StackSets and AWS Lambda. ACM is a service offered by Amazon Web Services (AWS) that you can use to obtain X.509 v3 SSL/TLS certificates. New certificates can be either requested or—if you’ve already obtained the certificate from a third-party certificate provider—imported into AWS. These certificates can then be used with AWS services to ensure that your content is delivered over HTTPS.
ACM is a regional service. The certificates issued by ACM can be used only with AWS resources in the same Region as your ACM service. Additionally, ACM public certificates cannot be exported for use with external resources, since the private keys aren’t made available to users and are managed solely by AWS. Hence, when your architecture becomes large and complex, involving multiple accounts and resources distributed across various Regions, you must manually request and deploy individual certificates in each Region and account to use the functionalities of ACM. So, the question arises as to how you can simplify the task of obtaining and deploying ACM certificates across multiple accounts.
The proposed solution (illustrated in Figure 1) deploys AWS CloudFormation stack sets to create the necessary resources, such as AWS Identity and Access Management (IAM) roles and Lambda functions, in your AWS accounts. The IAM roles provide the Lambda functions with the permissions they need. The function code is hosted as a deployment package in an Amazon Simple Storage Service (Amazon S3) bucket of your choice; the function then requests ACM certificates on your behalf and ensures that they are validated.
Figure 1: Architecture diagram
Before I describe the implementation, let’s review the important aspects of an ACM certificate from the time it’s requested to the time it’s available for use.
Important aspects of an ACM certificate
When requesting a new certificate, ACM prompts you to provide one or more domains for the certificate. Before the certificate is issued, ACM must validate your ownership of the domains that the certificate is being requested for. ACM lets you choose either of two options to validate the domain:
DNS validation – ACM provides a CNAME record that you add to the DNS configuration of the domain to prove ownership.
Email validation – ACM sends an email to the registered owner of the domain, who must approve the certificate request.
You can choose only one option for validating the domain—this cannot be changed for the entirety of the life of the certificate. ACM uses the same validation option to validate the domain when renewing the certificate.
In this post, I discuss validation through DNS. Validating through DNS can be automated, which helps in achieving the end goal of having public AWS certificates in multiple AWS accounts and Regions. Let’s get started.
Validate DNS by using Lambda
During DNS validation, ACM generates a CNAME record for each domain that the certificate is requested for. You must add these records to the DNS configuration of the domain; ACM then checks that the records are in place before issuing the certificate.
The Lambda function, which the CloudFormation stack starts, populates the CNAME records from certificates requested in multiple accounts and Regions into a single Route 53 hosted zone. The Lambda execution role in each account assumes the IAM role in the parent account to make changes to the hosted zone and add the required records (a condensed sketch of these calls follows the list below).
Here are a few things that you need to keep in mind with respect to the Lambda function:
All the certificates are issued for all of the domains. There’s no option to deploy the certificates for different domains in different accounts.
Route 53 is a global service. Every ACM certificate for a given domain in an account has the same CNAME record name and value regardless of the Region the certificate is requested from. This means that you need to populate the CNAME record for an account only once, irrespective of the number of Regions for which you request certificates.
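With those caveats in mind, the following is a condensed, hypothetical sketch of the core boto3 calls such a function might make. It is not the actual deployment package (which must also handle CloudFormation custom resource signaling, retries, and deletes), and the function and variable names are illustrative:

import time
import boto3

def request_and_validate(domain, sans, regions, parent_role_arn, hosted_zone_id):
    for region in regions:
        acm = boto3.client("acm", region_name=region)
        cert_arn = acm.request_certificate(
            DomainName=domain,
            ValidationMethod="DNS",
            SubjectAlternativeNames=sans,
        )["CertificateArn"]

        # The validation record can take a few seconds to appear.
        while True:
            options = acm.describe_certificate(CertificateArn=cert_arn)[
                "Certificate"]["DomainValidationOptions"]
            record = options[0].get("ResourceRecord")
            if record:
                break
            time.sleep(5)

        # Assume the parent-account role that is allowed to edit the hosted zone.
        creds = boto3.client("sts").assume_role(
            RoleArn=parent_role_arn, RoleSessionName="acm-dns-validation"
        )["Credentials"]
        route53 = boto3.client(
            "route53",
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )
        # The CNAME is identical across Regions, so UPSERT keeps this idempotent.
        route53.change_resource_record_sets(
            HostedZoneId=hosted_zone_id,
            ChangeBatch={"Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record["Name"],
                    "Type": record["Type"],
                    "TTL": 300,
                    "ResourceRecords": [{"Value": record["Value"]}],
                },
            }]},
        )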
However, you don’t use the Lambda function directly; instead, you use automation through AWS CloudFormation. With AWS CloudFormation, you write templates in JSON or YAML that describe AWS resources to deploy in a specific order; a deployed template is called a stack. AWS CloudFormation offers another feature known as StackSets. Stacks can only be used within the account and Region they’re launched in; stack sets give you the ability to deploy the same stack in different accounts, and in different Regions within those accounts, automatically. Let’s look at how AWS CloudFormation fits in with everything that I’ve discussed so far.
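As an aside, everything the console does in the steps that follow can also be driven programmatically. The following is a hedged boto3 sketch of creating a stack set from the Cross-account template shown later in this post and deploying instances of it to child accounts under self-managed permissions; the stack set name, file name, account IDs, and parameter values are all placeholders:

import boto3

cfn = boto3.client("cloudformation")  # run in the parent (administrator) account

with open("cross-account.yaml") as f:  # the Cross-account template shown later
    template_body = f.read()

cfn.create_stack_set(
    StackSetName="acm-cross-account-certs",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_IAM"],  # the template creates IAM roles
    PermissionModel="SELF_MANAGED",
    Parameters=[
        {"ParameterKey": "HostedZone", "ParameterValue": "Z0000000EXAMPLE"},
        {"ParameterKey": "DomainNameParameter", "ParameterValue": "example.com"},
        {"ParameterKey": "RoleArn",
         "ParameterValue": "arn:aws:iam::111111111111:role/ParentRole"},
        {"ParameterKey": "S3BucketNameParameter",
         "ParameterValue": "my-lambda-package-bucket"},
        {"ParameterKey": "SubjectAlternativeName", "ParameterValue": "www.example.com"},
        {"ParameterKey": "Regions", "ParameterValue": "us-east-1,eu-west-1"},
    ],
)

# Deploy one stack instance per child account. The stack instances live in the
# bucket's Region, while the Lambda requests certificates in each Region listed
# in the Regions parameter.
cfn.create_stack_instances(
    StackSetName="acm-cross-account-certs",
    Accounts=["222222222222", "333333333333"],
    Regions=["us-east-1"],  # must match the S3 bucket's Region (see the steps below)
)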
Deploy resources in multiple accounts and Regions
Let’s look at how AWS CloudFormation can help you extend this solution across multiple accounts and Regions. Using two CloudFormation stacks, you can deploy the following AWS resources:
An IAM role in the parent account (created by the Global-resources stack) that trusts the child accounts and permits changes to the records in your Route 53 hosted zone.
A Lambda function, its execution role, and a custom resource that invokes the function (created by the Cross-account stack set) in each child account and target Region.
Note: From this point, I discuss only the prerequisites and steps needed to deploy the solution. You can follow the included hyperlinks to learn more about the services and concepts discussed.
Route 53 and IAM are global services, so you don’t need to create these resources in every Region. The implementation is therefore broken into two CloudFormation stacks: one that deploys the global resources, and a second that is deployed as a stack set to create the cross-account and cross-Region resources.
Prerequisites before deploying the stacks
It’s important to understand the parent-child relationship between the accounts that are used in the following workflow. The parent account is where the stacks are deployed. The stack set deploys individual stacks in each of the child accounts where the certificate resources are needed. Here are the prerequisites that you must set up before deploying the stack:
The DNS of your domain should be set up in a Route 53 hosted zone in the parent account.
You must have an Amazon S3 bucket to store the Lambda deployment package. The AWS CloudFormation stack set fetches the deployment package from the bucket, which is added as a parameter when launching the stack set.
Since the bucket is in the parent account, you must modify the bucket policy to allow this cross-account access: add the ARNs of the AWS CloudFormation stack set execution IAM roles from the child accounts so that the stacks can fetch the Lambda deployment package (see the example policy after this list).
For stack sets to run, there are a few prerequisites related to cross-account IAM permissions that you must fulfill. Refer to Prerequisites for stack set operations.
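As an example of the cross-account bucket access mentioned above, the following sketch applies a bucket policy that grants read access to the default self-managed stack set execution role in each child account. The bucket name and account IDs are placeholders, and your execution role name may differ if you customized it:

import json
import boto3

bucket = "my-lambda-package-bucket"                 # placeholder bucket name
child_accounts = ["222222222222", "333333333333"]   # placeholder child account IDs

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowStackSetExecutionRoles",
        "Effect": "Allow",
        # AWSCloudFormationStackSetExecutionRole is the default execution role
        # name for self-managed stack set permissions.
        "Principal": {"AWS": [
            f"arn:aws:iam::{acct}:role/AWSCloudFormationStackSetExecutionRole"
            for acct in child_accounts
        ]},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))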
Once the prerequisites are met, you can deploy the two CloudFormation stacks: the Global-resources stack and the Cross-account stack.
Deploy the global resources stack
Let me show you how to deploy the global resources stack. The Global-resources stack creates an IAM role in the parent account and attaches the necessary permissions to it. Sign in to the AWS Management Console and navigate to the AWS CloudFormation console to get started. You can use the Global-resources template, provided inline below, directly during setup.
To deploy the global resources stack
Deploy the stack named Global-resources (the stack can be deployed in any AWS Region). You must deploy this stack in the parent account. This stack creates a parent-account IAM role, which the Lambda execution roles in the child accounts assume in order to populate the CNAME records of ACM certificates in the hosted zone of the parent account.
Note: Make sure that the AWS CloudFormation role has enough permissions to perform these actions.
While deploying the stack, you’ll be prompted to supply values for two parameters:
TrustedAccounts – The child accounts, which are populated in the trust policy of the role.
HostedZoneId – This hosted zone ID is used to create the IAM policy for the parent account role.
When the stack finishes running, go to the Outputs tab, and take note of the RoleARN, which you need for the second part of this implementation.
The following is the Global-resources CloudFormation template:
AWSTemplateFormatVersion: 2010-09-09
Parameters:
  TrustedAccounts:
    Type: List<Number>
    Description: >-
      List of AWS accounts in which the template will be deployed. These
      accounts will form a trust policy of the role that will be used to edit
      the records in the hosted zone.
  HostedZoneId:
    Type: String
    Description: Hosted zone ID for the domain
Resources:
  IamRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Sid: ''
            Effect: Allow
            Principal:
              AWS: !Ref TrustedAccounts
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/AmazonRoute53AutoNamingFullAccess'
      Path: /
      Policies:
        - PolicyName: lambda-policy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - 'route53:ListResourceRecordSets'
                Resource: !Join
                  - ''
                  - - 'arn:aws:route53:::hostedzone/'
                    - !Ref HostedZoneId
Outputs:
  RoleARN:
    Description: The role arn to be passed on to the next template
    Value: !GetAtt
      - IamRole
      - Arn
Deploy the cross-account stack
When the Global-resources stack is in the CREATE_COMPLETE state, you can deploy the second stack. The Cross-account stack deploys the rest of the resources that need to be created in all the Regions and AWS accounts where you want to deploy the certificates.
To deploy the cross-account stack
Before deploying the stack set, download this deployment package and upload it to an Amazon S3 bucket. Don’t create a new folder (object key prefix) in the bucket to store the package; upload it directly under the root prefix. Make a note of the Region the bucket belongs to.
Navigate to the AWS CloudFormation console to deploy the cross-account stack. You deploy the cross-account stack as a stack set, which can be deployed in any Region. To deploy the stack set, you must provide the following parameters:
HostedZone – The hosted zone ID where your domain is hosted.
DomainNameParameter – The domain name for which the certificate will be issued.
S3BucketNameParameter – The name of the bucket that hosts the deployment package.
SubjectAlternativeName – The additional domain names that you want the certificates to cover. Add only subdomains of your hosted zone; Route 53 doesn’t allow creation of CNAME records that aren’t applicable to the domain.
Regions – The AWS Regions these certificates are deployed in (the certificates are deployed in the same Regions in the other accounts as well). You can enter multiple Regions as comma-separated Region codes.
RoleArn – The ARN of the IAM role created by the Global-resources stack (the RoleARN output of the previous stack).
Deploy the stack set either in individual accounts (self-service permissions) or in accounts under AWS Organizations (service-managed permissions). You can learn more about the required permissions from Prerequisites for stack set operations.
If you choose self-service permissions, be sure to choose the parent account role under the IAM admin role ARN – optional section and the execution role under the IAM execution role name section before moving to the next step.
If you choose service-managed permissions, be sure to enable trusted access for AWS CloudFormation stack sets from the AWS Organizations console.
Choose the Region you want to deploy this stack in. This must be the Region in which the Amazon S3 bucket was created; if you deploy in any other Region, the stack will fail.
Note: This might not be the same as the Region the certificate is in.
Select Submit to deploy the stack set.
The following is the Cross-account CloudFormation template:
AWSTemplateFormatVersion: 2010-09-09
Parameters:
  DomainNameParameter:
    Type: String
    Description: The domain name for which the certificate will be issued.
  Regions:
    # CommaDelimitedList here; List<CommaDelimitedList> is not a valid
    # CloudFormation parameter type.
    Type: CommaDelimitedList
    Description: >-
      The Regions in which this certificate will be deployed (same across
      multiple accounts).
  HostedZone:
    Type: String
    Description: The hosted zone ID of your domain in Route53.
  RoleArn:
    Type: String
    Description: >-
      The Arn of the role that the lambda's execution role will assume to
      populate the CNAME records.
  S3BucketNameParameter:
    Type: String
    Description: >-
      The S3 bucket name that has the lambda deployment package (should not be
      within an object)
  SubjectAlternativeName:
    Type: CommaDelimitedList  # see the note on Regions above
    Description: Alternative sub-domain names that will be covered in your certificate.
Resources:
  CustomResource:
    Type: 'Custom::CustomResource'
    Properties:
      ServiceToken: !GetAtt
        - LambdaFunction
        - Arn
      HostedZone: !Ref HostedZone
      DomainName: !Ref DomainNameParameter
      SAN: !Ref SubjectAlternativeName
      RoleARN: !Ref RoleArn
      Regions: !Ref Regions
  LambdaFunction:
    Type: 'AWS::Lambda::Function'
    Properties:
      Code:
        S3Bucket: !Ref S3BucketNameParameter
        S3Key: Lambda_Custom_Resource-c57297c6-ee20-401d-852f-1a71e1facbbe.zip
      Handler: index.lambda_handler
      # python3.6 was current when this post was written; use a supported
      # runtime when deploying today.
      Runtime: python3.6
      Timeout: 900
      Role: !GetAtt
        - LambdaExecutionRole
        - Arn
  LambdaExecutionRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      Path: /
      Policies:
        - PolicyName: lambda-policy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Action:
                  - 'acm:DescribeCertificate'
                  - 'acm:DeleteCertificate'
                  - 'acm:GetCertificate'
                  - 'logs:PutLogEvents'
                  - 'logs:CreateLogGroup'
                  - 'logs:CreateLogStream'
                Resource:
                  - !Sub 'arn:aws:acm:*:${AWS::AccountId}:certificate/*'
                  - !Sub 'arn:aws:logs:*:${AWS::AccountId}:log-group:/aws/lambda/*'
                  - !Sub 'arn:aws:logs:*:${AWS::AccountId}:log-group:/aws/lambda/*:log-stream:*'
                Effect: Allow
              - Action:
                  - 'acm:ListCertificates'
                  - 'acm:RequestCertificate'
                Resource: '*'
                Effect: Allow
              - Effect: Allow
                Action: 'sts:AssumeRole'
                Resource: !Ref RoleArn
This completes the implementation of your cross-account setup. All the CNAMEs of cross-account certificates are now populated in the hosted zone of the parent account, and the certificates are validated after the CNAME records have propagated, which typically takes only a few minutes. When setup is complete, you can delete the CloudFormation stacks.
Note: When you delete the CloudFormation stacks, the ACM certificates and the corresponding Route 53 record sets remain. This is to prevent inconsistency. Other resources such as the Lambda functions and IAM roles are deleted.
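If you scripted the deployment with boto3 as sketched earlier, a matching cleanup might look like the following; the stack set name and account IDs are the same placeholders as before. Deleting stack instances is an asynchronous stack set operation, so the sketch polls for completion before deleting the stack set itself:

import time
import boto3

cfn = boto3.client("cloudformation")

# Remove the per-account stack instances first.
op = cfn.delete_stack_instances(
    StackSetName="acm-cross-account-certs",
    Accounts=["222222222222", "333333333333"],
    Regions=["us-east-1"],
    RetainStacks=False,  # delete the underlying stacks rather than detaching them
)["OperationId"]

# Wait for the asynchronous operation to finish.
while cfn.describe_stack_set_operation(
    StackSetName="acm-cross-account-certs", OperationId=op
)["StackSetOperation"]["Status"] == "RUNNING":
    time.sleep(15)

cfn.delete_stack_set(StackSetName="acm-cross-account-certs")

# The Global-resources stack in the parent account is a plain stack.
cfn.delete_stack(StackName="Global-resources")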
Summary
In this post, I’ve shown you how to use Lambda and AWS CloudFormation to automate ACM certificate creation across your AWS environment. The automation simplifies the certificate creation by completing tasks that are normally done manually. The certificates can now be used with other AWS resources to support your use cases. You can learn more about how you can use ACM certificates with integrated services like AWS load balancers and using alternate domain names with Amazon CloudFront distributions.