Tag Archives: data localization

Disaster recovery compliance in the cloud, part 2: A structured approach

Post Syndicated from Dan MacKay original https://aws.amazon.com/blogs/security/disaster-recovery-compliance-in-the-cloud-part-2-a-structured-approach/

Compliance in the cloud is fraught with myths and misconceptions. This is particularly true when it comes to something as broad as disaster recovery (DR) compliance, where the requirements are rarely prescriptive and often based on legacy risk-mitigation techniques that don’t account for the exceptional resilience of modern cloud-based architectures. For regulated entities subject to principles-based supervision such as many financial institutions (FIs), the responsibility lies with the FI to determine what’s necessary to adequately recover from a disaster event. Without clear instructions, FIs are susceptible to making incorrect assumptions regarding their compliance requirements for DR.

In Part 1 of this two-part series, I provided some examples of common misconceptions FIs have about compliance requirements for disaster recovery in the cloud. In Part 2, I outline five steps you can take to avoid these misconceptions when architecting DR-compliant workloads for deployment on Amazon Web Services (AWS).

1. Identify workloads planned for deployment

It’s common for FIs to have a portfolio of workloads they’re considering deploying to the cloud, and they often want to know that they can be compliant across the board. But compliance isn’t a one-size-fits-all domain—it’s based on the characteristics of each workload. For example, does the workload contain personally identifiable information (PII)? Will it be used to store, process, or transmit credit card information? Compliance is dependent on the answers to questions such as these and must be assessed on a case-by-case basis. Therefore, the first step in architecting for compliance is to identify the specific workloads you plan to deploy to the cloud. This way, you can assess the requirements of these specific workloads and not be distracted by aspects of compliance that might not be relevant.

2. Define the workload’s resiliency requirements

Resiliency is the ability of a workload to recover from infrastructure or service disruptions. DR is an important part of your resiliency strategy and concerns how your workload responds to a disaster event. DR strategies on AWS range from simple, low-cost options such as backup and restore to more complex options such as multi-site active-active, as shown in Figure 1.
[Figure 1: DR strategies on AWS, ranging from backup and restore to multi-site active-active]

For more information, I encourage you to read Seth Eliot’s blog series on DR Architecture on AWS as well as the AWS whitepaper Disaster Recovery of Workloads on AWS: Recovery in the Cloud.

The DR strategy you choose for a particular workload is dependent on your organization’s requirements for avoiding loss of data—known as the recovery point objective (RPO)—and reducing downtime where the workload isn’t available—known as the recovery time objective (RTO). RPO and RTO are key factors for determining the minimum architectural specifications necessary to meet the workload’s resiliency requirements. For example, can the workload’s RPO and RTO be achieved using a multi-AZ architecture in a single AWS Region, or do the resiliency requirements necessitate deploying the workload across multiple AWS Regions? Even if your workload is not subject to explicit compliance requirements for resiliency, understanding these requirements is necessary for assessing other aspects of DR compliance, including data residency and geodiversity.
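
To make this decision concrete, here’s a minimal Python sketch that maps a workload’s RPO and RTO targets to the candidate DR strategies shown in Figure 1. The threshold values are illustrative assumptions only, not AWS guidance; your own targets, and the recovery times you actually measure in testing, should drive the real decision.

```python
from dataclasses import dataclass

@dataclass
class ResiliencyTargets:
    rpo_minutes: float  # maximum tolerable data loss
    rto_minutes: float  # maximum tolerable downtime

def candidate_dr_strategies(targets: ResiliencyTargets) -> list[str]:
    """Return DR strategies worth evaluating, ordered from simplest to most complex.

    The cut-off values below are illustrative assumptions; validate any strategy
    against measured recovery times for your own workload.
    """
    strategies = []
    if targets.rpo_minutes >= 60 and targets.rto_minutes >= 24 * 60:
        strategies.append("backup and restore")
    if targets.rpo_minutes >= 15 and targets.rto_minutes >= 60:
        strategies.append("pilot light")
    if targets.rpo_minutes >= 5 and targets.rto_minutes >= 15:
        strategies.append("warm standby")
    # Near-zero RPO/RTO generally points at multi-site active-active, or at a
    # multi-AZ architecture within a single Region if that meets the targets.
    strategies.append("multi-site active-active")
    return strategies

print(candidate_dr_strategies(ResiliencyTargets(rpo_minutes=10, rto_minutes=30)))
# ['warm standby', 'multi-site active-active']
```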

3. Confirm the workload’s data residency requirements

As I mentioned in Part 1, data residency requirements might restrict which AWS Region or Regions you can deploy your workload to. Therefore, you need to confirm whether the workload is subject to any data residency requirements within applicable laws and regulations, corporate policies, or contractual obligations.

To properly assess these requirements, review their explicit language to understand the specific constraints they impose. You should also consult legal, privacy, and compliance subject-matter specialists to help you interpret these requirements based on the characteristics of the workload. For example, do the requirements specifically state that the data cannot leave the country, or can the requirement be met so long as the data can be accessed from that country? Does the requirement restrict you from storing a copy of the data in another country—for example, for backup and recovery purposes? What if the data is encrypted and can only be read using decryption keys kept within the home country? Consulting subject-matter specialists to help interpret these requirements can help you avoid making overly restrictive assumptions and imposing unnecessary constraints on the workload’s architecture.
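
One way to keep such an interpretation actionable is to express it as a simple check against the Regions a workload actually uses, including any backup copies. The sketch below is a hypothetical illustration: the allow-list and Region codes stand in for whatever your legal review concludes, and the point is only that the constraint should come from the interpreted requirement rather than from an assumption.

```python
# Hypothetical example: the allowed Regions come from your legal and compliance
# interpretation of the residency requirement, not from an assumption.
ALLOWED_REGIONS = {"ca-central-1"}  # e.g., "customer data must stay in Canada"

workload = {
    "primary": "ca-central-1",
    "backup_copies": ["ca-central-1", "us-east-1"],  # cross-Region backup copy
}

def residency_violations(workload: dict, allowed: set[str]) -> list[str]:
    """Return every Region in use that falls outside the allow-list."""
    in_use = {workload["primary"], *workload["backup_copies"]}
    return sorted(in_use - allowed)

violations = residency_violations(workload, ALLOWED_REGIONS)
if violations:
    print(f"Review needed: data stored outside allowed Regions: {violations}")
    # e.g., confirm whether an encrypted backup copy abroad is permissible
```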

4. Confirm the workload’s geodiversity requirements

A single-Region, multiple-AZ architecture is often sufficient to meet a workload’s resiliency requirements. However, if the workload is subject to geodiversity requirements, the distance between the AZs in an AWS Region might not conform to the minimum distance between individual data centers specified by the requirements. Therefore, it’s critical to confirm whether any geodiversity requirements apply to the workload.

As with data residency, it’s important to assess the explicit language of geodiversity requirements. Are they written down in a regulation or corporate policy, or are they just a recommended practice? Can the requirements be met if the workload is deployed across three or more AZs even if the minimum distance between those AZs is less than the specified minimum distance between the primary and backup data centers? If it’s a corporate policy, does it allow for exceptions if an alternative method provides equal or greater resiliency than asynchronous replication between two geographically distant data centers? Or perhaps the corporate policy is outdated and should be revised to reflect modern risk mitigation techniques. Understanding these parameters can help you avoid unnecessary constraints as you assess architectural options for your workloads.
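
If a written geodiversity requirement does specify a minimum distance, you can check candidate sites against it directly. The sketch below uses the haversine great-circle formula; the coordinates and the 100 km threshold are made-up illustrative values (AWS doesn’t publish the locations of individual data centers), so treat this purely as a template for evaluating whatever distances your own policy names.

```python
import math
from itertools import combinations

def haversine_km(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

# Hypothetical site coordinates and policy threshold, for illustration only.
sites = {"site-a": (45.50, -73.57), "site-b": (43.65, -79.38), "site-c": (45.42, -75.70)}
MIN_DISTANCE_KM = 100.0

for (name1, loc1), (name2, loc2) in combinations(sites.items(), 2):
    d = haversine_km(loc1, loc2)
    status = "meets" if d >= MIN_DISTANCE_KM else "violates"
    print(f"{name1} <-> {name2}: {d:.0f} km ({status} the {MIN_DISTANCE_KM:.0f} km policy)")
```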

5. Assess architectural options to meet the workload’s requirements

Now that you understand the workload’s requirements for resiliency, data residency, and geodiversity, you can assess the architectural options that meet these requirements in the cloud.

As per AWS Well-Architected best practices, you should strive for the simplest architecture necessary to meet your requirements. This includes assessing whether the workload can be accommodated within a single AWS Region. If the workload is constrained by explicit geographic diversity requirements or has resiliency requirements that cannot be accommodated by a single AWS Region, then you might need to architect the workload for deployment across multiple AWS Regions. If the workload is also constrained by explicit data residency requirements, then it might not be possible to deploy to multiple AWS Regions. In cases such as these, you can work with AWS Solutions Architects to assess hybrid options that might meet your compliance requirements, such as using AWS Outposts, Amazon Elastic Container Service (Amazon ECS) Anywhere, or Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere. Another option may be to consider a DR solution in which your on-premises infrastructure is used as a backup for a workload running on AWS. In some cases, this might be a long-term solution. In others, it might be an interim solution until certain constraints can be removed—for example, a change to corporate policy or the introduction of additional AWS Regions in a particular country.

Conclusion

Let’s recap by summarizing some guiding principles for architecting compliant DR workloads as outlined in this two-part series:

  • Avoid assumptions; confirm the facts. If it’s not written down, it’s unlikely to be considered a mandatory compliance requirement.
  • Consult the experts: legal, privacy, and compliance specialists, as well as AWS Solutions Architects, AWS security and compliance specialists, and other subject-matter specialists.
  • Avoid generalities; focus on the specifics. There is no one-size-fits-all approach.
  • Strive for simplicity, not zero risk. Don’t use multiple AWS Regions when one will suffice.
  • Don’t get distracted by exceptions. Focus on your current requirements, not workloads you’re not yet prepared to deploy to the cloud.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Dan MacKay

Dan is the Financial Services Compliance Specialist for AWS Canada. As a member of the Worldwide Financial Services Security & Compliance team, Dan advises financial services customers on best practices and practical solutions for cloud-related governance, risk, and compliance. He specializes in helping AWS customers navigate financial services and privacy regulations applicable to the use of cloud technology in Canada.

Disaster recovery compliance in the cloud, part 1: Common misconceptions

Post Syndicated from Dan MacKay original https://aws.amazon.com/blogs/security/disaster-recovery-compliance-in-the-cloud-part-1-common-misconceptions/

Compliance in the cloud can seem challenging, especially for organizations in heavily regulated sectors such as financial services. Regulated financial institutions (FIs) must comply with laws and regulations (often in multiple jurisdictions), global security standards, their own corporate policies, and even contractual obligations with their customers and counterparties. These various compliance requirements may impose constraints on how their workloads can be architected for the cloud, and may require interpretation of what FIs must do in order to be compliant. It’s common for FIs to make assumptions regarding their compliance requirements, which can result in unnecessary costs and increased complexity that might not align with their strategic objectives. A modern, rationalized approach to compliance can help FIs avoid imposing unnecessary constraints while meeting their mandatory requirements.

In my role as an Amazon Web Services (AWS) Compliance Specialist, I work with our financial services customers to identify, assess, and determine solutions to address their compliance requirements as they move to the cloud. One of the most common challenges customers ask me about is how to comply with disaster recovery (DR) requirements for workloads they plan to run in the cloud. In this blog post, I share some of the typical misconceptions FIs have about DR compliance in the cloud. In Part 2, I outline a structured approach to designing compliant architectures for your DR workloads. As my primary market is Canada, the examples in this blog post largely pertain to FIs operating in Canada, but the principles and best practices are relevant to regulated organizations in any country.

“Why isn’t there a checklist for compliance in the cloud?”

Compliance requirements are sometimes prescriptive: “if X, then you must do Y.” When requirements are prescriptive, it’s usually clear what you must do in order to be compliant. For example, the Payment Card Industry Data Security Standard (PCI DSS) requirement 8.2.4 obliges companies that process, store, or transmit credit card information to “change user passwords/passphrases at least once every 90 days.” But in the financial services sector, compliance requirements for managing operational risks can be subjective. When regulators take what is known as a principles-based approach to setting regulatory expectations, each FI is required to assess their specific risks and determine the mitigating controls necessary to conform with the organization’s tolerance for operational risk. Because the rules aren’t prescriptive, there is no “checklist for achieving compliance.” Instead, principles-based requirements are guidelines that FIs are expected to consider as they design and implement technology solutions. They are, by definition, subject to interpretation and can be prone to myths and misconceptions among FIs and their service providers. To illustrate this, let’s look at two aspects of DR that are frequently misunderstood within the Canadian financial services industry: data residency and geodiversity.

“My data has to stay in country X”

Data residency, or data localization, is a requirement that specific datasets processed and stored in an IT system remain within a specific jurisdiction (for example, a country). As discussed in our Policy Perspectives whitepaper, contrary to historical perspectives, data residency doesn’t provide better security. Most cyberattacks are perpetrated remotely, and attackers aren’t deterred by the physical location of their victims. In fact, data residency can run counter to an organization’s objectives for security and resilience. For example, data residency requirements can limit the options our customers have when choosing the AWS Region or Regions in which to run their production workloads. This is especially challenging for customers who want to use multiple Regions for backup and recovery purposes.

It’s common for FIs operating in Canada to assume that they’re required to keep their data—particularly customer data—in Canada. In reality, there’s very little from a statutory perspective that imposes such a constraint. None of the private-sector privacy laws include data residency requirements, nor do any of the financial services regulatory guidelines. There are some place-of-records requirements in Canadian federal financial services legislation, such as the Bank Act and the Insurance Companies Act, but these are relatively narrow in scope and apply primarily to corporate records. For most Canadian FIs, these requirements are more often a result of their own corporate policies or contractual obligations than of externally imposed public policies or regulations.

“My data centers have to be X kilometers apart”

Geodiversity—short for geographic diversity—is the concept of maintaining a minimum distance between primary and backup data processing sites. Geodiversity is based on the principle that requiring a certain distance between data centers mitigates the risk of location-based disruptions such as natural disasters. The principle is still relevant in a cloud computing context, but is not the only consideration when it comes to planning for DR. The cloud allows FIs to define operational resilience requirements instead of limiting themselves to antiquated business continuity planning and DR concepts like physical data center implementation requirements. Legacy disaster recovery solutions and architectures, and lifting and shifting such DR strategies into the cloud, can diminish the potential benefits of using the cloud to improve operational resilience. Modernizing your information technology also means modernizing your organization’s approach to DR.

In the cloud, vast physical distance separation is an anti-pattern—it’s an arbitrary metric that does little to help organizations achieve availability and recovery objectives. At AWS, we design our global infrastructure so that the Availability Zones (AZs) within an AWS Region are separated by a meaningful distance to support high availability, yet remain close enough to facilitate synchronous replication across those AZs (an AZ being a cluster of data centers). Figure 1 shows the relationship between Regions, AZs, and data centers.
[Figure 1: The relationship between AWS Regions, Availability Zones, and data centers]

Synchronous replication across multiple AZs enables you to minimize data loss (defined as the recovery point objective or RPO) and reduce the amount of time that workloads are unavailable (defined as the recovery time objective or RTO). However, the low latency required for synchronous replication becomes less achievable as the distance between data centers increases. Therefore, a geodiversity requirement that mandates a minimum distance between data centers that’s too far for synchronous replication might prohibit you from taking advantage of AWS’s multiple-AZ architecture. A multiple-AZ architecture can achieve RTOs and RPOs that aren’t possible with a simple geodiversity mitigation strategy. For more information, refer to the AWS whitepaper Disaster Recovery of Workloads on AWS: Recovery in the Cloud.
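
A back-of-the-envelope calculation shows why distance and synchronous replication pull in opposite directions. Light in optical fiber propagates at roughly two-thirds of its speed in a vacuum, so the minimum round-trip time grows with distance before any routing or processing overhead is added. The figures produced by the sketch below are rough physical lower bounds, not AWS latency numbers.

```python
SPEED_OF_LIGHT_KM_S = 299_792  # km/s in a vacuum
FIBER_FACTOR = 2 / 3           # approximate propagation speed in optical fiber

def min_round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip time over fiber; real-world latency is higher."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

for km in (50, 400, 4000):
    print(f"{km:>5} km apart -> at least {min_round_trip_ms(km):5.2f} ms per synchronous write")
# Every committed write in a synchronous replication scheme pays at least this
# round trip, which is why widely separated sites push you toward asynchronous DR.
```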

Again, it’s a common perception among Canadian FIs that the disaster recovery architecture for their production workloads must comply with specific geodiversity requirements. However, there are no statutory requirements applicable to FIs operating in Canada that mandate a minimum distance between data centers. Some FIs might have corporate policies or contractual obligations that impose geodiversity requirements, but for most FIs I’ve worked with, geodiversity is usually a recommended practice rather than a formal policy. Informal corporate guidelines can have some value, but they aren’t absolute rules and shouldn’t be treated the same as mandatory compliance requirements. Otherwise, you might be unintentionally restricting yourself from taking advantage of more effective risk management techniques.

“But if it is a compliance requirement, doesn’t that mean I have no choice?”

Both of the previous examples illustrate the importance of not only confirming your compliance requirements, but also recognizing the source of those requirements. It might be infeasible to obtain an exception to an externally imposed obligation such as a regulatory requirement, but exceptions or even revisions to corporate policies aren’t out of the question if you can demonstrate that modern approaches provide equal or greater protection against a particular risk—for example, the high availability and rapid recoverability supported by a multiple-AZ architecture. Consider whether your compliance requirements provide for some level of flexibility in their application.

Also, because many of these requirements are principles-based, they might be subject to interpretation. You have to consider the specific language of the requirement in the context of the workload. For example, a data residency requirement might not explicitly prohibit you from storing a copy of the content in another country for backup and recovery purposes. For this reason, I recommend that you consult applicable specialists from your legal, privacy, and compliance teams to aid in the interpretation of compliance requirements. Once you understand the legal boundaries of your compliance requirements, AWS Solutions Architects and other financial services industry specialists such as myself can help you assess viable options to meet your needs.

Conclusion

In this first part of a two-part series, I provided some examples of common misconceptions FIs have about compliance requirements for disaster recovery in the cloud. The key is to avoid making assumptions that might impose greater constraints on your architecture than are necessary. In Part 2, I show you a structured approach for architecting compliant DR workloads that can help you to avoid these preventable missteps.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Dan MacKay

Dan is the Financial Services Compliance Specialist for AWS Canada. As a member of the Worldwide Financial Services Security & Compliance team, Dan advises financial services customers on best practices and practical solutions for cloud-related governance, risk, and compliance. He specializes in helping AWS customers navigate financial services and privacy regulations applicable to the use of cloud technology in Canada.

Data Privacy Day 2021 – Looking ahead at the always on, always secure, always private Internet

Post Syndicated from Emily Hancock original https://blog.cloudflare.com/data-privacy-day-2021-looking-ahead-at-the-always-on-always-secure-always-private-internet/

Welcome to Data Privacy Day 2021! Last year at this time, I was writing about how Cloudflare builds privacy into everything we do, with little idea about how dramatically the world was going to change. The tragedy of the COVID-19 pandemic has reshaped the way we go about our daily lives. Our dependence on the Internet grew exponentially in 2020 as we started working from home, attending school from home, and participating in online weddings, concerts, parties, and more. So as we begin this new year, it’s impossible to think about data privacy in 2021 without thinking about how an always-on, always secure, always private Internet is more important than ever.

The pandemic wasn’t the only thing to dramatically shape data privacy conversations last year. We saw a flurry of new activity on data protection legislation around the globe, and a trend toward data localization in a variety of jurisdictions.

I don’t think I’m taking any risks when I say that 2021 looks to be another busy year in the world of privacy and data protection. Let me tell you a bit about what that looks like for us at Cloudflare. We’ll be spending a lot of time in 2021 helping our customers find the solutions they need to meet data protection obligations; enhancing our technical, organizational, and contractual measures to protect the privacy of personal data no matter where in the world it is processed; and continuing to develop privacy-enhancing technologies that can help everyone on the Internet.

Focus on International Data Transfers

One of the biggest stories in data protection in 2020 was the Court of Justice of the European Union’s decision in the “Schrems II” case (Case C-311/18, Data Protection Commissioner v Facebook Ireland and Maximillian Schrems) that invalidated the EU-U.S. Privacy Shield. The court’s interpretation of U.S. surveillance laws meant that data controllers transferring EU personal data to U.S. data processors now have an obligation to make sure additional safeguards are in place to provide the same level of data protection as the General Data Protection Regulation (“GDPR”).

The court decision was followed by draft guidance from the European Data Protection Board (EDPB) that created new expectations and challenges for transfers of EU personal data to processors outside the EU pursuant to the GDPR. In addition, the EU Commission issued new draft standard contractual clauses that further emphasized the need for data transfer impact assessments and due diligence to be completed prior to transferring EU personal data to processors outside the EU. Meanwhile, even before the EDPB and EU Commission weighed in, France’s data protection authority, the CNIL, challenged the use of a U.S. cloud service provider for the processing of certain health data.

This year, the EDPB is poised to issue its final guidance on international data transfers, the EU Commission is set to release a final version of new standard contractual clauses, and the new Biden administration in the United States has already appointed a deputy assistant secretary for services at the U.S. Department of Commerce who will focus on negotiations around a new EU-U.S. Privacy Shield or another data transfer mechanism.

However, the trend to regulate international data transfers isn’t confined to Europe. India’s Personal Data Protection Bill, likely to become law in 2021, would bar certain types of personal data from leaving India. And Brazil’s Lei Geral de Proteção de Dados (“LGPD”), which went into effect in 2020, contains requirements for contractual guarantees that need to be in place for personal data to be processed outside Brazil.

Meanwhile, we’re seeing more data protection regulation across the globe: The California Consumer Privacy Act (“CCPA”) was amended by a new ballot initiative last year. Countries like Japan, China, Singapore, Canada, and New Zealand, which already had data protection legislation in some form, proposed or enacted amendments to strengthen those protections. And even the United States is considering comprehensive federal data privacy regulation.

In light of last year’s developments and those we expect to see in 2021, Cloudflare is thinking a lot about what it means to process personal data outside its home jurisdiction. One of the key messages to come out of Europe in the second half of 2020 was the idea that to be able to transfer EU personal data to the United States, data processors would have to provide additional safeguards to ensure GDPR-level protection for personal data, even in light of the application of U.S. surveillance laws. While we are eagerly awaiting the EDPB’s final guidance on the subject, we aren’t waiting to ensure that we have in place the necessary additional safeguards.

In fact, Cloudflare has long maintained policies to address concerns about access to personal data. We’ve done so because we believe it’s the right thing to do, and because the conflicts of law we are seeing today seemed inevitable. We feel so strongly about our ability to provide that level of protection for data processed in the U.S. that today we are publishing a paper, “Cloudflare’s Policies around Data Privacy and Law Enforcement Requests,” to describe how we address government and other legal requests for data.

Our paper describes our policies around data privacy and data requests, such as providing notice to our customers of any legal process requesting their data, and the measures we take to push back on any legal process requesting data where we believe that legal process creates a conflict of law. The paper also describes our public commitments about how we approach requests for data and public statements about things we have never done and, in CEO Matthew Prince’s words, that we “will fight like hell to never do”:

  • Cloudflare has never turned over our encryption or authentication keys or our customers’ encryption or authentication keys to anyone.
  • Cloudflare has never installed any law enforcement software or equipment anywhere on our network.
  • Cloudflare has never provided any law enforcement organization a feed of our customers’ content transiting our network.
  • Cloudflare has never modified customer content at the request of law enforcement or another third party.

In 2021, the Cloudflare team will continue to focus on these safeguards to protect all our customers’ personal data.

Addressing Data Localization Challenges

We also recognize that attention to international data transfers isn’t just a jurisdictional issue. Even if jurisdictions don’t require data localization by law, highly regulated industries like banking and healthcare may adopt best practice guidance asserting more requirements for data if it is to be processed outside a data subject’s home country.

With so much activity around data localization trends and international data transfers, companies will continue to struggle to understand regulatory requirements, as well as update products and business processes to meet those requirements and trends. So while we believe that Cloudflare can provide adequate protections for this data regardless of whether it is processed inside or outside its jurisdiction of origin, we also recognize that our customers are dealing with unique compliance challenges that we can help them face.

That means that this year we’ll also continue the work we started with our Cloudflare Data Localization Suite, which we announced during our Privacy & Compliance Week in December 2020. The Data Localization Suite is designed to help customers build local requirements into their global online operations. We help our customers ensure that their data stays as private as they want it to, and only goes where they want it to go in the following ways:

  1. DDoS attacks are detected and mitigated at the data center closest to the end user.
  2. Data centers inside the preferred region decrypt TLS and apply services like WAF, CDN, and Cloudflare Workers.
  3. Keyless SSL and Geo Key Manager store private SSL keys in a user-specified region.
  4. Edge Log Delivery securely transmits logs from the inspection point to the log storage location of your choice.

Doubling Down on Privacy-Enhancing Technologies

Cloudflare’s mission is to “Help Build a Better Internet,” and we’ve said repeatedly that a privacy-respecting Internet is a better Internet. We believe in empowering individuals and entities of all sizes with technological tools to reduce the amount of personal data that gets funnelled into the data ocean — regardless of whether someone lives in a country with laws protecting the privacy of their personal data. If we can build tools to help individuals share less personal data online, then that’s a win for privacy no matter what their country of residence.

For example, when Cloudflare launched the 1.1.1.1 public DNS resolver — the Internet’s fastest, privacy-first public DNS resolver — we committed to our public resolver users that we would not retain any personal data about requests made using our 1.1.1.1 resolver. And because we baked anonymization best practices into the 1.1.1.1 resolver when we built it, we were able to demonstrate that we didn’t have any personal data to sell when we asked independent accountants to conduct a privacy examination of the 1.1.1.1 resolver.

2021 will also see a continuation of a number of initiatives that we announced during Privacy and Compliance Week that are aimed at improving Internet protocols related to user privacy:

  1. Fixing one of the last information leaks in HTTPS through Encrypted Client Hello (ECH), the evolution of Encrypted SNI.
  2. Developing a superior protocol for password authentication, OPAQUE, that makes password breaches less likely to occur.
  3. Making DNS even more private by supporting Oblivious DNS-over-HTTPS (ODoH).

Encrypted Client Hello (ECH)

Under the old TLS handshake, privacy-sensitive parameters were negotiated completely in the clear and available to network observers. One example is the Server Name Indication (SNI), used by the client to indicate to the server the website it wants to reach — this is not information that should be exposed to eavesdroppers. Previously, this problem was mitigated through the Encrypted SNI (ESNI) extension. While ESNI took a significant step forward, it is an incomplete solution; a major shortcoming is that it protects only SNI. The Encrypted Client Hello (ECH) extension aims to close this gap by enabling encryption of the entire ClientHello, thereby protecting all privacy-sensitive handshake parameters. These changes represent a significant upgrade to TLS, one that will help preserve end-user privacy as the protocol continues to evolve. As this work continues, Cloudflare is committed to doing its part, along with close collaborators in the standards process, to ensure this important upgrade for TLS reaches Internet-scale deployment.
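
The essence of ECH is that the real, sensitive ClientHello (including the SNI) travels as an encrypted payload inside an “outer” ClientHello that exposes only a client-facing public name. The sketch below illustrates that inner/outer structure only; it uses a shared symmetric key for brevity, whereas real ECH derives the encryption key with HPKE from an ECH configuration the server publishes. The field names and hostnames are simplified placeholders.

```python
import json
from cryptography.fernet import Fernet  # symmetric stand-in for ECH's HPKE encryption

# In real ECH the client derives a key via HPKE from an ECH configuration the
# server publishes (for example, in DNS); a pre-shared Fernet key stands in here.
ech_key = Fernet.generate_key()
hpke_stand_in = Fernet(ech_key)

# Inner ClientHello: carries the privacy-sensitive handshake parameters.
inner_hello = {"sni": "private.example.com", "alpn": ["h2"], "other_extensions": "..."}

# Outer ClientHello: the only part a network observer sees in the clear.
outer_hello = {
    "sni": "public-name.example.net",  # the client-facing server's public name
    "encrypted_client_hello": hpke_stand_in.encrypt(json.dumps(inner_hello).encode()),
}

# An eavesdropper learns only the public name and an opaque blob:
print(outer_hello["sni"])
# The client-facing server, which holds the key, recovers the real destination:
recovered = json.loads(hpke_stand_in.decrypt(outer_hello["encrypted_client_hello"]))
print(recovered["sni"])  # private.example.com
```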

OPAQUE

Research has repeatedly shown that passwords are hard for users to manage — and they are also a challenge for servers: passwords are difficult to store securely, and they’re frequently leaked and subsequently brute-forced. As long as people still use passwords, we’d like to make the process as secure as possible. Current methods rely on the risky practice of handling plaintext passwords on the server side while checking their correctness. One potential alternative is to use OPAQUE, an asymmetric Password-Authenticated Key Exchange (aPAKE) protocol that allows secure password login without ever letting the server see the passwords.

With OPAQUE, instead of storing a traditional salted password hash, the server stores a secret envelope associated with the user that is “locked” by two pieces of information: the user’s password (known only by the user), and a random secret key (known only by the server). To log in, the client initiates a cryptographic exchange that reveals the envelope key only to the client (but not to the server). The server then sends this envelope to the user, who can now retrieve the encrypted keys. Once those keys are unlocked, they will serve as parameters for an Authenticated Key Exchange (AKE) protocol, which establishes a secret key for encrypting future communications.
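
To see why the server never learns the password, it helps to look at OPAQUE’s core building block, an oblivious PRF: the client blinds a value derived from its password, the server exponentiates the blinded value with a per-user secret key, and the client unblinds the result to derive the envelope key. The toy sketch below shows only that data flow, over a deliberately tiny group (p = 23); it is not a secure or complete OPAQUE implementation, which requires a proper prime-order group, hash-to-group, constant-time arithmetic, and the full authenticated key exchange.

```python
import hashlib
import secrets

# Toy group parameters for illustration only: p is a safe prime (p = 2q + 1) and
# g = 2 generates the subgroup of prime order q = 11. Real OPAQUE uses a
# standardized prime-order group with proper hash-to-group and constant-time code.
p, q, g = 23, 11, 2

def hash_to_group(password: bytes) -> int:
    """Toy hash-to-group: map the password to a non-identity element of the order-q subgroup."""
    e = int.from_bytes(hashlib.sha256(password).digest(), "big") % (q - 1) + 1
    return pow(g, e, p)

def oprf_blind(password: bytes) -> tuple[int, int]:
    """Client: blind the hashed password so the server can't learn it."""
    r = secrets.randbelow(q - 1) + 1
    return r, pow(hash_to_group(password), r, p)

def oprf_evaluate(blinded: int, server_key: int) -> int:
    """Server: raise the blinded element to its per-user secret key."""
    return pow(blinded, server_key, p)

def oprf_finalize(password: bytes, r: int, evaluated: int) -> bytes:
    """Client: unblind to obtain H1(pwd)^k, then derive the envelope key from it."""
    y = pow(evaluated, pow(r, -1, q), p)
    return hashlib.sha256(password + y.to_bytes(1, "big")).digest()

password = b"correct horse battery staple"
server_key = secrets.randbelow(q - 1) + 1  # per-user OPRF key, known only to the server

# Registration and login both run the same exchange; the server only ever sees
# blinded group elements, never the password or the derived envelope key.
r1, blinded1 = oprf_blind(password)
key_at_registration = oprf_finalize(password, r1, oprf_evaluate(blinded1, server_key))

r2, blinded2 = oprf_blind(password)
key_at_login = oprf_finalize(password, r2, oprf_evaluate(blinded2, server_key))

assert key_at_registration == key_at_login  # same envelope key, derived obliviously
print("identical envelope keys derived without the server ever seeing the password")
```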

Cloudflare has been pushing the development of OPAQUE forward, and has released a reference core OPAQUE implementation in Go and a demo TLS integration (with a running version you can try out). A TypeScript client implementation of OPAQUE is coming soon.

Oblivious DNS-over-HTTPS (ODoH)

Encryption is a powerful tool that protects the privacy of personal data. This is why Cloudflare has doubled down on its implementation of DNS over HTTPS (DoH). In the snail mail world, courts have long recognized a distinction between the level of privacy afforded to the contents of a letter vs. the addressing information on an envelope. But we’re not living in an age where the only things someone can tell from the outside of the envelope are the “to” and “from” addresses and place of postage. The “digital envelopes” of DNS requests can contain much more information about a person than one might expect. Not only is there information about the sender and recipient addresses, but there is specific timestamp information about when requests were submitted, the domains and subdomains visited, and even how long someone stayed on a certain site. Encrypting those requests ensures that only the user and the resolver get that information, and that no one involved in the transit in between sees it. Given that our digital envelopes tell a much more robust story than the envelope in your physical mailbox, we think encrypting these envelopes is just as important as encrypting the messages they carry.

However, there are more ways in which DNS privacy can be enhanced, and Cloudflare took another incremental step in December 2020 by announcing support for Oblivious DoH (ODoH). ODoH is a proposed DNS standard — co-authored by engineers from Cloudflare, Apple, and Fastly — that separates IP addresses from queries, so that no single entity can see both at the same time. ODoH requires a proxy as a key part of the communication path between client and resolver, with encryption ensuring that the proxy does not know the contents of the DNS query (only where to send it), and the resolver knowing what the query is but not who originally requested it (only the proxy’s IP address). Barring collusion between the proxy and the resolver, the identity of the requester and the content of the request are unlinkable.
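
Conceptually, the client encrypts its DNS question so that only the target resolver can read it, hands the ciphertext to a proxy that forwards it without being able to decrypt it, and the target answers the query without ever learning the client’s IP address. The sketch below illustrates only that separation of knowledge, again using a symmetric stand-in for the HPKE encryption the real protocol uses; the hostnames and addresses are illustrative placeholders.

```python
from cryptography.fernet import Fernet  # symmetric stand-in for ODoH's HPKE encryption

# The target resolver publishes key material; in real ODoH the client fetches an
# HPKE public key from the target's ODoH configuration. A Fernet key stands in here.
target_key = Fernet.generate_key()

def client_builds_query(name: str) -> bytes:
    """Client: encrypt the DNS question so only the target resolver can read it."""
    return Fernet(target_key).encrypt(name.encode())

def proxy_forwards(encrypted_query: bytes, client_ip: str) -> bytes:
    """Proxy: knows the client's IP but sees only an opaque blob, and forwards it."""
    # The proxy hides the client's identity; the target sees only the proxy's address.
    return target_resolves(encrypted_query, source_ip="proxy.example.net")

def target_resolves(encrypted_query: bytes, source_ip: str) -> bytes:
    """Target: sees the question but not who originally asked it."""
    question = Fernet(target_key).decrypt(encrypted_query).decode()
    answer = f"{question} -> 203.0.113.7"  # placeholder resolution
    return Fernet(target_key).encrypt(answer.encode())

encrypted_answer = proxy_forwards(client_builds_query("example.com"), client_ip="198.51.100.9")
print(Fernet(target_key).decrypt(encrypted_answer).decode())  # only the client reads the answer
```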

As with DoH, successful deployment requires partners. A key component of ODoH is a proxy that is disjoint from the target resolver. Cloudflare is working with several leading proxy partners — currently PCCW, SURF, and Equinix — who are equally committed to privacy, and hopes to see this list grow.

Post-Quantum Cryptography

Even with all of these encryption measures, we also know that everything encrypted with today’s public key cryptography can likely be decrypted with tomorrow’s quantum computers. This makes deploying post-quantum cryptography a pressing privacy concern. We’re likely 10 to 15 years away from that development, but as our Head of Research Nick Sullivan described in his blog post in December, we’re not waiting for that future. We’ve been paying close attention to the National Institute of Standards and Technology (NIST)’s initiative to define post-quantum cryptography algorithms to replace RSA and ECC. Last year, Cloudflare and Google performed the TLS Post-Quantum Experiment, which involved implementing and supporting new key exchange mechanisms based on post-quantum cryptography for all Cloudflare customers for a period of a few months.

In addition, Cloudflare’s Research Team has been working with researchers from the University of Waterloo and Radboud University on a new protocol called KEMTLS. KEMTLS is designed to be fully post-quantum and relies only on public-key encryption. On the implementation side, Cloudflare has developed high-speed assembly versions of several of the NIST finalists (Kyber, Dilithium), as well as other relevant post-quantum algorithms (CSIDH, SIDH) in our CIRCL cryptography library written in Go. Cloudflare is endeavoring to use post-quantum cryptography for most internal services by the end of 2021, and plans to be among the first services to offer post-quantum cipher suites to customers as standards emerge.

Looking forward to 2021

If there’s anything 2020 taught us, it’s that our world can change almost overnight. One thing that doesn’t change, though, is that people will always want privacy for their personal data, and regulators will continue to define rules and requirements for what data protection should look like. And as these rules and requirements evolve, Cloudflare will be there every step of the way, developing innovative product and security solutions to protect data, and building privacy into everything we do.

Cloudflare is also celebrating Data Privacy Day on Cloudflare TV. Tune in for a full day of special programming.

Additional on-premises option for data localization with AWS

Post Syndicated from Min Hyun original https://aws.amazon.com/blogs/security/additional-on-premises-option-for-data-localization-with-aws/

Today, AWS released an updated resource — AWS Policy Perspectives: Data Residency — to provide an additional option for you if you need to store and process your data on premises. This whitepaper update discusses AWS Outposts, which offers a hybrid solution for customers that might find that certain workloads are better suited for on-premises management — whether for lower latency or other local processing needs.

Before this capability was available, you would have selected the nearest AWS Region to keep data in closer proximity. By extending AWS infrastructure and services to your environments, you can support workloads — including sensitive workloads — that need to remain on premises while leveraging the security and operational capabilities of AWS cloud services.

Read the updated whitepaper to learn why data residency doesn’t mean better data security, and to learn about an additional option available for you to address various data protection needs, including data localization.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Min Hyun

Min is the Global Lead for Growth Strategies at AWS. Her team’s mission is to set the industry bar in thought leadership for security and data privacy assurance in emerging technology, trends and strategy to advance customers’ journeys to AWS. View her other Security Blog publications here.