Today, we’re very happy to announce the general availability of a new region for Regional Services that allows you to limit your traffic to ISO 27001 certified data centers inside the EU. This helps customers with strict requirements about which data centers are allowed to decrypt and service their traffic. Enabling this feature is a one-click operation right on the Cloudflare dashboard.
Regional Services – a recap
In 2020, we saw an increase in prospects asking about data localization: growing regulatory pressure made it harder for them to use vendors that operate at global scale. In response, we launched Regional Services, a new way for customers to use the Cloudflare network that puts them back in control of which data centers decrypt and service their HTTPS traffic. For example, a customer may want only data centers inside the European Union to service traffic. With Regional Services, we still leverage our global network for DDoS protection, but we decrypt traffic and apply Layer 7 products only inside data centers located in the European Union.
With Regional Services, customers get the best of both worlds: we empower them to use our global network for volumetric DDoS protection whilst limiting where traffic is serviced. We do that by accepting the raw TCP connection at the closest data center but forwarding it on to a data center in-region for decryption. That means that only machines of the customer’s choosing actually see the raw HTTP request, which could contain sensitive data such as a customer’s bank account or medical information.
A new region and a new UI
Traditionally we’ve seen requests for data localization largely center around countries or geographic areas. Many types of regulations require companies to make promises about working only with vendors that are capable of restricting where their traffic is serviced geographically. Organizations can have many reasons for being limited in their choices, but they generally fall into two buckets: compliance and contractual commitments.
More recently, we’ve seen more and more companies ask about security requirements. A common question in IT security is: how do you ensure that something is actually secure? For a data center, for instance, you might want to know how physical access is managed, or how often security policies are reviewed and updated. This is where certifications come in. A common certification in IT is ISO 27001:
“ISO/IEC 27001 is the world’s best-known standard for information security management systems (ISMS) and their requirements. Additional best practice in data protection and cyber resilience are covered by more than a dozen standards in the ISO/IEC 27000 family. Together, they enable organizations of all sectors and sizes to manage the security of assets such as financial information, intellectual property, employee data and information entrusted by third parties.”
In short, ISO 27001 certifies that a data center maintains a defined set of security standards to keep it secure. With the new Regional Services region, HTTPS traffic will only be decrypted in data centers that hold the ISO 27001 certification, and products such as WAF, Bot Management, and Workers will only run in those data centers.
The other update we’re excited to announce is a brand-new user interface for configuring the Data Localization Suite. The previous UI was limited: customers had to preconfigure a single region for an entire zone and couldn’t mix and match regions. The new UI allows you to do just that: each individual hostname can be configured for a different region, directly on the DNS tab.
Configuring a region for a particular hostname is now just a single click away, and changes take effect within seconds, making this the easiest way to configure data localization yet. For customers using the Metadata Boundary, we’ve also launched a self-serve UI that lets you configure where your logs flow.
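If you’d rather automate this than click through the dashboard, the same per-hostname configuration can also be driven through the API. Below is a minimal sketch in Go; the endpoint path (/addressing/regional_hostnames), the hostname and region_key fields, and the "eu" region key reflect our reading of the developer docs and may differ from the current API, so treat it as illustrative rather than authoritative.

```go
// A minimal sketch (not an official example) of setting a per-hostname region
// via the Regional Hostnames API. The endpoint path, field names, and the
// "eu" region key are assumptions based on the developer docs; the new
// ISO 27001 region will have its own key.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	zoneID := os.Getenv("CF_ZONE_ID")
	token := os.Getenv("CF_API_TOKEN")

	body, _ := json.Marshal(map[string]string{
		"hostname":   "eu.example.com", // hostname to localize
		"region_key": "eu",             // assumed region key
	})

	url := fmt.Sprintf("https://api.cloudflare.com/client/v4/zones/%s/addressing/regional_hostnames", zoneID)
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```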
We’re excited about these new updates that give customers more flexibility in choosing which of Cloudflare’s data centers to use as well as making it easier than ever to configure them. The new region and existing regions are now a one-click configuration option right from the dashboard. As always, we love getting feedback, especially on what new regions you’d like to see us add in the future. In the meantime, if you’re interested in using the Data Localization Suite, please reach out to your account team.
At Cloudflare, we believe that deploying effective cybersecurity measures is the best way to protect the privacy of personal information and can be more effective than making sure that information stays within a particular jurisdiction. Yet, we hear from customers in Europe, India, Australia, Japan, and many other regions that, as part of their privacy programs, they need solutions to localize data in order to meet their regulatory obligations.
So as we think about Data Privacy Day, which is coming up on January 28, we are in the interesting position of disagreeing with those who believe that data localization is a proxy for better data privacy, while also wanting to support our customers who have to comply with certain regulations.
For this reason, we introduced our Data Localization Suite (DLS) in 2020 to help customers navigate a data protection landscape that focuses more and more on data localization. With the DLS, customers can use Cloudflare’s powerful global network and security measures to protect their businesses, while keeping the data we process on their behalf local. Since its launch, we’ve had many customers adopt the Data Localization Suite. In this blog post we want to share updates about how we’re making the DLS more comprehensive and easier to use.
The confusing state of data protection regulations
We frequently field questions from customers who hear about new local laws or interpretations of existing regulations that seem to limit what they can do with data. This is especially confusing for customers doing business on the global Internet because they have to navigate regulations that suggest customers based in one country can’t use products from companies based in another country, unless extensive measures are put in place.
We don’t think this is any way to regulate the Internet. As we’ll talk more about in our blog post tomorrow about cross-border data transfers, we’re encouraged to see new developments aimed at establishing a common set of data protections across jurisdictions to make these data transfers more seamless.
In the meantime, we have the Data Localization Suite to help our customers navigate these challenges.
A recap of how the Data Localization Suite works
We developed DLS to address three primary customer concerns:
How do I ensure my encryption keys stay in my jurisdiction?
How can I ensure that application services like caching and WAF only run in my jurisdiction?
How can I ensure that logs and metadata are never transferred outside my jurisdiction?
To address these concerns, our DLS has an encryption key component, a component that addresses where content in transit is terminated and inspected, and a component that keeps metadata within a customer’s jurisdiction:
1. Encryption Keys: Cloudflare has long offered Keyless SSL and Geo Key Manager, which ensure that private SSL/TLS key material never leaves the customer’s chosen region. Keyless SSL ensures that Cloudflare never has possession of the private key material at all; Geo Key Manager protects keys with cryptographic access control, so they are stored and used only in the data centers and regions the customer specifies.
2. Regional Services: Regional Services ensures that Cloudflare only decrypts and inspects the content of HTTPS traffic inside a customer’s chosen region. When Regional Services is enabled, traffic may first hit any data center on our global network, but rather than decrypting it there, we forward the TCP stream in encrypted form. Once it reaches a data center inside the customer’s chosen region, we decrypt it and apply our Layer 7 security measures to prevent malicious traffic from reaching our customers’ websites.
3. Customer Metadata Boundary: With this option enabled, no end-user traffic logs (which contain IP addresses) that Cloudflare processes on behalf of our customers will leave the region chosen by the customer. (Currently available only in the EU and US.)
Expanding Data Localization Suite to new regions
Although we launched the Data Localization Suite with Europe and America in mind at first, we quickly realized a lot of our customers were interested in versions specific to the Asia-Pacific region as well. In September of last year, we added support for Regional Services in Japan, Australia, and India.
Then in December 2022, we announced that Geo Key Manager is available in 15 regions. Customers can both allowlist and denylist the regions where their key material may be stored, giving them fine-grained control.
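To give a feel for what that fine-grained control looks like, here is a rough sketch of the kind of allow/deny policy expression involved. The expression syntax, the policy field name, and the idea of sending it alongside a certificate upload are assumptions based on our Geo Key Manager announcement; consult the current API documentation for the authoritative shape.

```go
// A rough sketch of a Geo Key Manager allow/deny policy expression. The
// "policy" field name and the certificate-upload payload shape are
// assumptions for illustration; check the API docs before relying on them.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Allow key material in the EU, but exclude one country (illustrative only).
	policy := "(region: EU) and (not (country: RU))"

	payload := map[string]string{
		"certificate": "<PEM-encoded certificate>", // placeholder
		"private_key": "<PEM-encoded private key>", // placeholder
		"policy":      policy,                      // assumed field carrying the Geo Key Manager policy
	}

	out, _ := json.MarshalIndent(payload, "", "  ")
	// In practice this JSON would accompany a certificate upload; it is
	// printed here only to show the shape of the policy expression.
	fmt.Println(string(out))
}
```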
Regional Services and the Customer Metadata Boundary offer important protections for our customers — but they’ve been too hard to use. Both have required manual steps taken by teams at Cloudflare, and have had confusing (or no) public APIs.
Today, we’re fixing that! We’re excited to announce two big improvements to usability:
Regional Services now has a dedicated UI and API, accessible straight from the DNS tab. Different regions can be set on a per-hostname basis.
Customers who want to use the Metadata Boundary can use our self-service API to enable it (a sketch of such a call follows below).
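As a hedged example, the snippet below sketches what enabling the Metadata Boundary through that API might look like. The endpoint path (/logs/control/cmb/config) and the regions field are our best understanding of the self-service API and may change; check the developer docs before relying on them.

```go
// A hedged sketch of enabling the Customer Metadata Boundary via the
// self-service API. Path and field names are assumptions based on the docs.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
)

func main() {
	accountID := os.Getenv("CF_ACCOUNT_ID")
	token := os.Getenv("CF_API_TOKEN")

	url := fmt.Sprintf("https://api.cloudflare.com/client/v4/accounts/%s/logs/control/cmb/config", accountID)
	body := bytes.NewReader([]byte(`{"regions": "eu"}`)) // keep traffic metadata in the EU

	req, err := http.NewRequest(http.MethodPut, url, body)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```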
We’re excited about making it easier to use the Data Localization Suite and about giving customers more control over exactly which parts of their traffic to localize, and where.
What’s next
The Data Localization Suite is available today for enterprise customers. Please chat with your account representative if you’re interested in using it, and you can find more information about configuring it in our developer docs.
We have lots more planned for the Data Localization Suite this year. We plan to support many more regions for Regional Services and the Metadata Boundary. We also plan to have full data localization support for all of our Zero Trust products. Stay tuned to the blog for more!
Amazon Web Services (AWS) recently released a new whitepaper, Does data localization cause more problems than it solves?, as part of the AWS Innovating Securely briefing series. The whitepaper draws on research from Emily Wu’s paper Sovereignty and Data Localization, published by Harvard University’s Belfer Center, and describes how countries can realize similar data localization objectives through AWS services without incurring the unintended effects highlighted by Wu.
Wu’s research analyzes the intent of data localization policies, and compares that to the reality of the policies’ effects, concluding that data localization policies are often counterproductive to their intended goals of data security, economic competitiveness, and protecting national values.
The new whitepaper explains how you can use the security capabilities of AWS to take advantage of up-to-date technology and help meet your data localization requirements while maintaining full control over the physical location where your data is stored.
AWS offers robust privacy and security services and features that let you implement your own controls. AWS applies lessons learned around the globe at the local level to improve protection against security events. As an AWS customer, after you pick a geographic location to store your data, the cloud infrastructure provides you with greater resiliency and availability than you can achieve with on-premises infrastructure. When you choose an AWS Region, you retain full control over the physical location where your data is stored. AWS also provides resources through the AWS compliance program to help you understand the robust controls in place at AWS to maintain security and compliance in the cloud.
An important finding of Wu’s research is that localization constraints can deter innovation and hurt local economies because they limit which services are available, or increase costs because there are a smaller number of service providers to choose from. Wu concludes that data localization can “raise the barriers [to entrepreneurs] for market entry, which suppresses entrepreneurial activity and reduces the ability for an economy to compete globally.” Data localization policies are especially challenging for companies that trade across national borders. International trade used to be the remit of only big corporations. Current data-driven efficiencies in shipping and logistics mean that international trade is open to companies of all sizes. There has been particular growth for small and medium enterprises involved in services trade (of which cross-border data flows are a key element). In a 2016 worldwide survey conducted by McKinsey, 86 percent of tech-based startups had at least one cross-border activity. The same report showed that cross-border data flows added some US$2.8 trillion to world GDP in 2014.
However, the availability of cloud services supports secure and efficient cross-border data flows, which in turn can contribute to national economic competitiveness. Deloitte Consulting’s report, The cloud imperative: Asia Pacific’s unmissable opportunity, estimates that by 2024, the cloud will contribute $260 billion to GDP across eight regional markets, with more benefit possible in the future. The World Trade Organization’s World Trade Report 2018 estimates that digital technologies, which include advanced cloud services, will account for a 34 percent increase in global trade by 2030.
Wu also cites a link between national data governance policies and governments’ concerns that movement of data outside national borders diminishes their control. However, the technology, storage capacity, and compute power provided by hyperscale cloud service providers like AWS can empower local entrepreneurs.
AWS continually updates practices to meet the evolving needs and expectations of both customers and regulators. This allows AWS customers to use effective tools for processing data, which can help them meet stringent local standards to protect national values and citizens’ rights.
Wu’s research concludes that “data localization is proving ineffective” for meeting intended national goals, and offers practical alternatives for policymakers to consider. Wu has several recommendations, such as continuing to invest in cybersecurity, supporting industry-led initiatives to develop shared standards and protocols, and promoting international cooperation around privacy and innovation. Despite the continued existence of data localization policies, countries can currently realize similar objectives through cloud services. AWS implements rigorous contractual, technical, and organizational measures to protect the confidentiality, integrity, and availability of customer data, regardless of which AWS Region you select to store your data. This means that, as an AWS customer, you can take advantage of the economic benefits and the support for innovation provided by cloud computing, while improving your ability to meet your core security and compliance requirements.
Compliance in the cloud is fraught with myths and misconceptions. This is particularly true when it comes to something as broad as disaster recovery (DR) compliance where the requirements are rarely prescriptive and often based on legacy risk-mitigation techniques that don’t account for the exceptional resilience of modern cloud-based architectures. For regulated entities subject to principles-based supervision such as many financial institutions (FIs), the responsibility lies with the FI to determine what’s necessary to adequately recover from a disaster event. Without clear instructions, FIs are susceptible to making incorrect assumptions regarding their compliance requirements for DR.
In Part 1 of this two-part series, I provided some examples of common misconceptions FIs have about compliance requirements for disaster recovery in the cloud. In Part 2, I outline five steps you can take to avoid these misconceptions when architecting DR-compliant workloads for deployment on Amazon Web Services (AWS).
1. Identify workloads planned for deployment
It’s common for FIs to have a portfolio of workloads they’re considering deploying to the cloud, and they often want to know that they can be compliant across the board. But compliance isn’t a one-size-fits-all domain—it’s based on the characteristics of each workload. For example, does the workload contain personally identifiable information (PII)? Will it be used to store, process, or transmit credit card information? Compliance is dependent on the answers to questions such as these and must be assessed on a case-by-case basis. Therefore, the first step in architecting for compliance is to identify the specific workloads you plan to deploy to the cloud. This way, you can assess the requirements of these specific workloads and not be distracted by aspects of compliance that might not be relevant.
2. Define the workload’s resiliency requirements
Resiliency is the ability of a workload to recover from infrastructure or service disruptions. DR is an important part of your resiliency strategy and concerns how your workload responds to a disaster event. DR strategies on AWS range from simple, low-cost options such as backup and restore, to more complex options such as multi-site active-active, as shown in Figure 1.
The DR strategy you choose for a particular workload is dependent on your organization’s requirements for avoiding loss of data—known as the recovery point objective (RPO)—and reducing downtime where the workload isn’t available—known as the recovery time objective (RTO). RPO and RTO are key factors for determining the minimum architectural specifications necessary to meet the workload’s resiliency requirements. For example, can the workload’s RPO and RTO be achieved using a multi-AZ architecture in a single AWS Region, or do the resiliency requirements necessitate deploying the workload across multiple AWS Regions? Even if your workload is not subject to explicit compliance requirements for resiliency, understanding these requirements is necessary for assessing other aspects of DR compliance, including data residency and geodiversity.
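To make that relationship concrete, the toy sketch below shows how RPO and RTO targets might drive the choice of DR strategy. The strategy names follow the options in the AWS DR whitepaper; the threshold values are invented for illustration and are not prescriptive guidance.

```go
// A toy illustration (not AWS guidance) of how RPO/RTO targets drive the
// choice of DR strategy. Threshold values are arbitrary placeholders.
package main

import (
	"fmt"
	"time"
)

func chooseStrategy(rpo, rto time.Duration) string {
	switch {
	case rpo < time.Minute && rto < time.Minute:
		return "multi-site active-active (multi-Region)"
	case rpo < 15*time.Minute && rto < time.Hour:
		return "warm standby"
	case rto < 4*time.Hour:
		return "pilot light"
	default:
		return "backup and restore"
	}
}

func main() {
	// Example: a workload that can tolerate at most 5 minutes of data loss
	// and 30 minutes of downtime (numbers invented for the example).
	fmt.Println(chooseStrategy(5*time.Minute, 30*time.Minute))
}
```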
3. Confirm the workload’s data residency requirements
As I mentioned in Part 1, data residency requirements might restrict which AWS Region or Regions you can deploy your workload to. Therefore, you need to confirm whether the workload is subject to any data residency requirements within applicable laws and regulations, corporate policies, or contractual obligations.
To properly assess these requirements, you must review their explicit language to understand the specific constraints they impose. You should also consult legal, privacy, and compliance subject-matter specialists to help you interpret these requirements based on the characteristics of the workload. For example, do the requirements specifically state that the data cannot leave the country, or can the requirement be met so long as the data can be accessed from that country? Does the requirement restrict you from storing a copy of the data in another country—for example, for backup and recovery purposes? What if the data is encrypted and can only be read using decryption keys kept within the home country? Consulting subject-matter specialists to help interpret these requirements can help you avoid making overly restrictive assumptions and imposing unnecessary constraints on the workload’s architecture.
4. Confirm the workload’s geodiversity requirements
A single Region, multiple-AZ architecture is often sufficient to meet a workload’s resiliency requirements. However, if the workload is subject to geodiversity requirements, the distance between the AZs in an AWS Region might not conform to the minimum distance between individual data centers specified by the requirements. Therefore, it’s critical to confirm whether any geodiversity requirements apply to the workload.
Like data residency, it’s important to assess the explicit language of geodiversity requirements. Are they written down in a regulation or corporate policy, or are they just a recommended practice? Can the requirements be met if the workload is deployed across three or more AZs even if the minimum distance between those AZs is less than the specified minimum distance between the primary and backup data centers? If it’s a corporate policy, does it allow for exceptions if an alternative method provides equal or greater resiliency than asynchronous replication between two geographically distant data centers? Or perhaps the corporate policy is outdated and should be revised to reflect modern risk mitigation techniques. Understanding these parameters can help you avoid unnecessary constraints as you assess architectural options for your workloads.
5. Assess architectural options to meet the workload’s requirements
Now that you understand the workload’s requirements for resiliency, data residency, and geodiversity, you can assess the architectural options that meet these requirements in the cloud.
As per AWS Well-Architected best practices, you should strive for the simplest architecture necessary to meet your requirements. This includes assessing whether the workload can be accommodated within a single AWS Region. If the workload is constrained by explicit geographic diversity requirements or has resiliency requirements that cannot be accommodated by a single AWS Region, then you might need to architect the workload for deployment across multiple AWS Regions. If the workload is also constrained by explicit data residency requirements, then it might not be possible to deploy to multiple AWS Regions. In cases such as these, you can work with our AWS Solution Architects to assess hybrid options that might meet your compliance requirements, such as using AWS Outposts, Amazon Elastic Container Service (Amazon ECS) Anywhere, or Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere. Another option may be to consider a DR solution in which your on-premises infrastructure is used as a backup for a workload running on AWS. In some cases, this might be a long-term solution. In others, it might be an interim solution until certain constraints can be removed—for example, a change to corporate policy or the introduction of additional AWS Regions in a particular country.
Conclusion
Let’s recap by summarizing some guiding principles for architecting compliant DR workloads as outlined in this two-part series:
Avoid assumptions; confirm the facts. If it’s not written down, it’s unlikely to be considered a mandatory compliance requirement.
Consult the experts. Legal, privacy, and compliance teams, as well as AWS Solutions Architects, AWS security and compliance specialists, and other subject-matter specialists.
Avoid generalities; focus on the specifics. There is no one-size-fits-all approach.
Strive for simplicity, not zero risk. Don’t use multiple AWS Regions when one will suffice.
Don’t get distracted by exceptions. Focus on your current requirements, not workloads you’re not yet prepared to deploy to the cloud.
Compliance in the cloud can seem challenging, especially for organizations in heavily regulated sectors such as financial services. Regulated financial institutions (FIs) must comply with laws and regulations (often in multiple jurisdictions), global security standards, their own corporate policies, and even contractual obligations with their customers and counterparties. These various compliance requirements may impose constraints on how their workloads can be architected for the cloud, and may require interpretation of what FIs must do in order to be compliant. It’s common for FIs to make assumptions regarding their compliance requirements, which can result in unnecessary costs and increased complexity, and might not align with their strategic objectives. A modern, rationalized approach to compliance can help FIs avoid imposing unnecessary constraints while meeting their mandatory requirements.
In my role as an Amazon Web Services (AWS) Compliance Specialist, I work with our financial services customers to identify, assess, and determine solutions to address their compliance requirements as they move to the cloud. One of the most common challenges customers ask me about is how to comply with disaster recovery (DR) requirements for workloads they plan to run in the cloud. In this blog post, I share some of the typical misconceptions FIs have about DR compliance in the cloud. In Part 2, I outline a structured approach to designing compliant architectures for your DR workloads. As my primary market is Canada, the examples in this blog post largely pertain to FIs operating in Canada, but the principles and best practices are relevant to regulated organizations in any country.
“Why isn’t there a checklist for compliance in the cloud?”
Compliance requirements are sometimes prescriptive: “if X, then you must do Y.” When requirements are prescriptive, it’s usually clear what you must do in order to be compliant. For example, the Payment Card Industry Data Security Standard (PCI DSS) requirement 8.2.4 obliges companies that process, store, or transmit credit card information to “change user passwords/passphrases at least once every 90 days.” But in the financial services sector, compliance requirements for managing operational risks can be subjective. When regulators take what is known as a principles-based approach to setting regulatory expectations, each FI is required to assess their specific risks and determine the mitigating controls necessary to conform with the organization’s tolerance for operational risk. Because the rules aren’t prescriptive, there is no “checklist for achieving compliance.” Instead, principles-based requirements are guidelines that FIs are expected to consider as they design and implement technology solutions. They are, by definition, subject to interpretation and can be prone to myths and misconceptions among FIs and their service providers. To illustrate this, let’s look at two aspects of DR that are frequently misunderstood within the Canadian financial services industry: data residency and geodiversity.
“My data has to stay in country X”
Data residency or data localization is a requirement for specific data-sets processed and stored in an IT system to remain within a specific jurisdiction (for example, a country). As discussed in our Policy Perspectives whitepaper, contrary to historical perspectives, data residency doesn’t provide better security. Most cyber-attacks are perpetrated remotely and attackers aren’t deterred by the physical location of their victims. In fact, data residency can run counter to an organization’s objectives for security and resilience. For example, data residency requirements can limit the options our customers have when choosing the AWS Region or Regions in which to run their production workloads. This is especially challenging for customers who want to use multiple Regions for backup and recovery purposes.
It’s common for FIs operating in Canada to assume that they’re required to keep their data—particularly customer data—in Canada. In reality, there’s very little from a statutory perspective that imposes such a constraint. None of the private sector privacy laws include data residency requirements, nor do any of the financial services regulatory guidelines. There are some place of records requirements in Canadian federal financial services legislation such as The Bank Act and The Insurance Companies Act, but these are relatively narrow in scope and apply primarily to corporate records. For most Canadian FIs, their requirements are more often a result of their own corporate policies or contractual obligations, not externally imposed by public policies or regulations.
“My data centers have to be X kilometers apart”
Geodiversity—short for geographic diversity—is the concept of maintaining a minimum distance between primary and backup data processing sites. Geodiversity is based on the principle that requiring a certain distance between data centers mitigates the risk of location-based disruptions such as natural disasters. The principle is still relevant in a cloud computing context, but is not the only consideration when it comes to planning for DR. The cloud allows FIs to define operational resilience requirements instead of limiting themselves to antiquated business continuity planning and DR concepts like physical data center implementation requirements. Legacy disaster recovery solutions and architectures, and lifting and shifting such DR strategies into the cloud, can diminish the potential benefits of using the cloud to improve operational resilience. Modernizing your information technology also means modernizing your organization’s approach to DR.
In the cloud, vast physical distance separation is an anti-pattern—it’s an arbitrary metric that does little to help organizations achieve availability and recovery objectives. At AWS, we design our global infrastructure so that the Availability Zones (AZs) within an AWS Region are far enough apart to support high availability, but close enough to facilitate synchronous replication across those AZs (an AZ being a cluster of data centers). Figure 1 shows the relationship between Regions, AZs, and data centers.
Synchronous replication across multiple AZs enables you to minimize data loss (defined as the recovery point objective or RPO) and reduce the amount of time that workloads are unavailable (defined as the recovery time objective or RTO). However, the low latency required for synchronous replication becomes less achievable as the distance between data centers increases. Therefore, a geodiversity requirement that mandates a minimum distance between data centers that’s too far for synchronous replication might prohibit you from taking advantage of AWS’s multiple-AZ architecture. A multiple-AZ architecture can achieve RTOs and RPOs that aren’t possible with a simple geodiversity mitigation strategy. For more information, refer to the AWS whitepaper Disaster Recovery of Workloads on AWS: Recovery in the Cloud.
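A quick back-of-the-envelope calculation helps illustrate why. The sketch below estimates the theoretical minimum round-trip time between two sites over fiber, which is the floor on the latency added to every synchronously replicated write; the roughly 200,000 km/s propagation speed is an approximation, and real networks add routing and processing overhead on top.

```go
// A back-of-the-envelope sketch of why distance limits synchronous
// replication: it computes the minimum round-trip time over fiber, assuming
// ~200,000 km/s propagation and ignoring routing and processing overhead.
package main

import "fmt"

func minRTTms(distanceKm float64) float64 {
	const fiberSpeedKmPerMs = 200.0 // ~2/3 the speed of light in a vacuum
	return 2 * distanceKm / fiberSpeedKmPerMs
}

func main() {
	for _, km := range []float64{50, 100, 500, 1000, 4000} {
		fmt.Printf("%6.0f km apart -> at least %5.1f ms added per synchronous write\n", km, minRTTms(km))
	}
}
```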
Again, it’s a common perception among Canadian FIs that the disaster recovery architecture for their production workloads must comply with specific geodiversity requirements. However, there are no statutory requirements applicable to FIs operating in Canada that mandate a minimum distance between data centers. Some FIs might have corporate policies or contractual obligations that impose geodiversity requirements, but for most FIs I’ve worked with, geodiversity is usually a recommended practice rather than a formal policy. Informal corporate guidelines can have some value, but they aren’t absolute rules and shouldn’t be treated the same as mandatory compliance requirements. Otherwise, you might be unintentionally restricting yourself from taking advantage of more effective risk management techniques.
“But if it is a compliance requirement, doesn’t that mean I have no choice?”
Both of the previous examples illustrate the importance of not only confirming your compliance requirements, but also recognizing the source of those requirements. It might be infeasible to obtain an exception to an externally-imposed obligation such as a regulatory requirement, but exceptions or even revisions to corporate policies aren’t out of the question if you can demonstrate that modern approaches provide equal or greater protection against a particular risk—for example, the high availability and rapid recoverability supported by a multiple-AZ architecture. Consider whether your compliance requirements provide for some level of flexibility in their application.
Also, because many of these requirements are principles-based, they might be subject to interpretation. You have to consider the specific language of the requirement in the context of the workload. For example, a data residency requirement might not explicitly prohibit you from storing a copy of the content in another country for backup and recovery purposes. For this reason, I recommend that you consult applicable specialists from your legal, privacy, and compliance teams to aid in the interpretation of compliance requirements. Once you understand the legal boundaries of your compliance requirements, AWS Solutions Architects and other financial services industry specialists such as myself can help you assess viable options to meet your needs.
Conclusion
In this first part of a two-part series, I provided some examples of common misconceptions FIs have about compliance requirements for disaster recovery in the cloud. The key is to avoid making assumptions that might impose greater constraints on your architecture than are necessary. In Part 2, I show you a structured approach for architecting compliant DR workloads that can help you to avoid these preventable missteps.
Welcome to Data Privacy Day 2021! Last year at this time, I was writing about how Cloudflare builds privacy into everything we do, with little idea about how dramatically the world was going to change. The tragedy of the COVID-19 pandemic has reshaped the way we go about our daily lives. Our dependence on the Internet grew exponentially in 2020 as we started working from home, attending school from home, and participating in online weddings, concerts, parties, and more. So as we begin this new year, it’s impossible to think about data privacy in 2021 without thinking about how an always-on, always secure, always private Internet is more important than ever.
The pandemic wasn’t the only thing to dramatically shape data privacy conversations last year. We saw a flurry of new activity on data protection legislation around the globe, and a trend toward data localization in a variety of jurisdictions.
I don’t think I’m taking any risks when I say that 2021 looks to be another busy year in the world of privacy and data protection. Let me tell you a bit about what that looks like for us at Cloudflare. We’ll be spending a lot of time in 2021 helping our customers find the solutions they need to meet data protection obligations; enhancing our technical, organizational, and contractual measures to protect the privacy of personal data no matter where in the world it is processed; and continuing to develop privacy-enhancing technologies that can help everyone on the Internet.
Focus on International Data Transfers
One of the biggest stories in data protection in 2020 was the Court of Justice of the European Union’s decision in the “Schrems II” case (Case C-311/18, Data Protection Commissioner v Facebook Ireland and Maximillian Schrems) that invalidated the EU-U.S. Privacy Shield. The court’s interpretation of U.S. surveillance laws meant that data controllers transferring EU personal data to U.S. data processors now have an obligation to make sure additional safeguards are in place to provide the same level of data protection as the General Data Protection Regulation (“GDPR”).
The court decision was followed by draft guidance from the European Data Protection Board (EDPB) that created new expectations and challenges for transfers of EU personal data to processors outside the EU pursuant to the GDPR. In addition, the EU Commission issued new draft standard contractual clauses that further emphasized the need for data transfer impact assessments and due diligence to be completed prior to transferring EU personal data to processors outside the EU. Meanwhile, even before the EDPB and EU Commission weighed in, France’s data protection authority, the CNIL, challenged the use of a U.S. cloud service provider for the processing of certain health data.
This year, the EDPB is poised to issue its final guidance on international data transfers, the EU Commission is set to release a final version of new standard contractual clauses, and the new Biden administration in the United States has already appointed a deputy assistant secretary for services at the U.S. Department of Commerce who will focus on negotiations around a new EU-U.S. Privacy Shield or another data transfer mechanism.
However, the trend to regulate international data transfers isn’t confined to Europe. India’s Personal Data Protection Bill, likely to become law in 2021, would bar certain types of personal data from leaving India. And Brazil’s Lei Geral de Proteção de Dados (“LGPD”), which went into effect in 2020, contains requirements for contractual guarantees that need to be in place for personal data to be processed outside Brazil.
Meanwhile, we’re seeing more data protection regulation across the globe: The California Consumer Privacy Act (“CCPA”) was amended by a new ballot initiative last year. Countries like Japan, China, Singapore, Canada, and New Zealand, which already had data protection legislation in some form, proposed or enacted amendments to strengthen those protections. And even the United States is considering comprehensive federal data privacy regulation.
In light of last year’s developments and those we expect to see in 2021, Cloudflare is thinking a lot about what it means to process personal data outside its home jurisdiction. One of the key messages to come out of Europe in the second half of 2020 was the idea that to be able to transfer EU personal data to the United States, data processors would have to provide additional safeguards to ensure GDPR-level protection for personal data, even in light of the application of U.S. surveillance laws. While we are eagerly awaiting the EDPB’s final guidance on the subject, we aren’t waiting to ensure that we have in place the necessary additional safeguards.
In fact, Cloudflare has long maintained policies to address concerns about access to personal data. We’ve done so because we believe it’s the right thing to do, and because the conflicts of law we are seeing today seemed inevitable. We feel so strongly about our ability to provide that level of protection for data processed in the U.S., that today we are publishing a paper, “Cloudflare’s Policies around Data Privacy and Law Enforcement Requests,” to describe how we address government and other legal requests for data.
Our paper describes our policies around data privacy and data requests, such as providing notice to our customers of any legal process requesting their data, and the measures we take to push back on any legal process requesting data where we believe that legal process creates a conflict of law. The paper also describes our public commitments about how we approach requests for data and public statements about things we have never done and, in CEO Matthew Prince’s words, that we “will fight like hell to never do”:
Cloudflare has never turned over our encryption or authentication keys or our customers’ encryption or authentication keys to anyone.
Cloudflare has never installed any law enforcement software or equipment anywhere on our network.
Cloudflare has never provided any law enforcement organization a feed of our customers’ content transiting our network.
Cloudflare has never modified customer content at the request of law enforcement or another third party.
In 2021, the Cloudflare team will continue to focus on these safeguards to protect all our customers’ personal data.
Addressing Data Localization Challenges
We also recognize that attention to international data transfers isn’t just a jurisdictional issue. Even if jurisdictions don’t require data localization by law, highly regulated industries like banking and healthcare may adopt best practice guidance asserting more requirements for data if it is to be processed outside a data subject’s home country.
With so much activity around data localization trends and international data transfers, companies will continue to struggle to understand regulatory requirements, as well as update products and business processes to meet those requirements and trends. So while we believe that Cloudflare can provide adequate protections for this data regardless of whether it is processed inside or outside its jurisdiction of origin, we also recognize that our customers are dealing with unique compliance challenges that we can help them face.
That means that this year we’ll also continue the work we started with our Cloudflare Data Localization Suite, which we announced during our Privacy & Compliance Week in December 2020. The Data Localization Suite is designed to help customers build local requirements into their global online operations. We help our customers ensure that their data stays as private as they want it to, and only goes where they want it to go in the following ways:
DDoS attacks are detected and mitigated at the data center closest to the end user.
Data centers inside the preferred region decrypt TLS and apply services like WAF, CDN, and Cloudflare Workers.
Keyless SSL and Geo Key Manager store private SSL keys in a user-specified region.
Edge Log Delivery securely transmits logs from the inspection point to the log storage location of your choice.
Doubling Down on Privacy-Enhancing Technologies
Cloudflare’s mission is to “Help Build a Better Internet,” and we’ve said repeatedly that a privacy-respecting Internet is a better Internet. We believe in empowering individuals and entities of all sizes with technological tools to reduce the amount of personal data that gets funnelled into the data ocean — regardless of whether someone lives in a country with laws protecting the privacy of their personal data. If we can build tools to help individuals share less personal data online, then that’s a win for privacy no matter what their country of residence.
For example, when Cloudflare launched the 1.1.1.1 public DNS resolver — the Internet’s fastest, privacy-first public DNS resolver — we committed to our public resolver users that we would not retain any personal data about requests made using our 1.1.1.1 resolver. And because we baked anonymization best practices into the 1.1.1.1 resolver when we built it, we were able to demonstrate that we didn’t have any personal data to sell when we asked independent accountants to conduct a privacy examination of the 1.1.1.1 resolver.
2021 will also see a continuation of a number of initiatives that we announced during Privacy and Compliance Week that are aimed at improving Internet protocols related to user privacy:
Fixing one of the last information leaks in HTTPS through Encrypted Client Hello (ECH), the evolution of Encrypted SNI.
Developing a superior protocol for password authentication, OPAQUE, that makes password breaches less likely to occur.
Making DNS even more private by supporting Oblivious DNS-over-HTTPS (ODoH).
Encrypted Client Hello (ECH)
Under the old TLS handshake, privacy-sensitive parameters were negotiated completely in the clear and available to network observers. One example is the Server Name Indication (SNI), used by the client to indicate to the server the website it wants to reach — this is not information that should be exposed to eavesdroppers. Previously, this problem was mitigated through the Encrypted SNI (ESNI) extension. While ESNI took a significant step forward, it is an incomplete solution; a major shortcoming is that it protects only SNI. The Encrypted Client Hello (ECH) extension aims to close this gap by enabling encryption of the entire ClientHello, thereby protecting all privacy-sensitive handshake parameters. These changes represent a significant upgrade to TLS, one that will help preserve end-user privacy as the protocol continues to evolve. As this work continues, Cloudflare is committed to doing its part, along with close collaborators in the standards process, to ensure this important upgrade for TLS reaches Internet-scale deployment.
OPAQUE
Research has repeatedly shown that passwords are hard for users to manage — and they are also a challenge for servers: passwords are difficult to store securely, and they’re frequently leaked and subsequently brute-forced. As long as people still use passwords, we’d like to make the process as secure as possible. Current methods rely on the risky practice of handling plaintext passwords on the server side while checking their correctness. One potential alternative is to use OPAQUE, an asymmetric Password-Authenticated Key Exchange (aPAKE) protocol that allows secure password login without ever letting the server see the passwords.
With OPAQUE, instead of storing a traditional salted password hash, the server stores a secret envelope associated with the user that is “locked” by two pieces of information: the user’s password (known only by the user), and a random secret key (known only by the server). To log in, the client initiates a cryptographic exchange that reveals the envelope key only to the client (but not to the server). The server then sends this envelope to the user, who now can retrieve the encrypted keys. Once those keys are unlocked, they will serve as parameters for an Authenticated Key Exchange (AKE) protocol, which establishes a secret key for encrypting future communications.
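To make the flow above easier to follow, here is a purely conceptual sketch of a client login in Go. Every function and interface in it (blind, finalize, decryptEnvelope, runAKE, oprfServer) is a hypothetical placeholder for illustration, not the API of our reference implementation.

```go
// A conceptual sketch of the OPAQUE login flow; all names are hypothetical
// placeholders, not the API of any real OPAQUE library.
package main

// oprfServer models the server's role in the oblivious PRF: it evaluates the
// client's blinded password under the server's secret key without ever seeing
// the password itself.
type oprfServer interface {
	BlindEvaluate(blindedElement []byte) []byte
}

// clientLogin sketches the client side of an OPAQUE login.
func clientLogin(password string, envelope []byte, srv oprfServer) []byte {
	// 1. Blind the password and have the server evaluate it; only the client
	//    can unblind the result into the envelope key.
	blinded, unblind := blind(password)
	evaluated := srv.BlindEvaluate(blinded)
	envelopeKey := finalize(evaluated, unblind)

	// 2. Unlock the envelope locally to recover the client's long-term keys.
	//    The server never sees the envelope key or its contents.
	clientKeys := decryptEnvelope(envelope, envelopeKey)

	// 3. Run an authenticated key exchange (AKE) with those keys to derive a
	//    session key for encrypting future communication.
	return runAKE(clientKeys)
}

// --- hypothetical placeholders so the sketch is self-contained ---
func blind(password string) (blinded, unblind []byte) { return []byte(password), nil }
func finalize(evaluated, unblind []byte) []byte       { return evaluated }
func decryptEnvelope(envelope, key []byte) []byte     { return envelope }
func runAKE(clientKeys []byte) []byte                 { return clientKeys }

func main() {}
```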
Cloudflare has been pushing the development of OPAQUE forward, and has released a reference core OPAQUE implementation in Go and a demo TLS integration (with a running version you can try out). A TypeScript client implementation of OPAQUE is coming soon.
Oblivious DNS-over-HTTPS (ODoH)
Encryption is a powerful tool that protects the privacy of personal data. This is why Cloudflare has doubled down on its implementation of DNS over HTTPS (DoH). In the snail mail world, courts have long recognized a distinction between the level of privacy afforded to the contents of a letter vs. the addressing information on an envelope. But we’re not living in an age where the only things someone can tell from the outside of the envelope are the “to” and “from” addresses and the place of postage. The “digital envelopes” of DNS requests can contain much more information about a person than one might expect. Not only is there information about the sender and recipient addresses, but there is specific timestamp information about when requests were submitted, the domains and subdomains visited, and even how long someone stayed on a certain site. Encrypting those requests ensures that only the user and the resolver get that information, and that no one involved in the transit in between sees it. Given that our digital envelopes tell a much more robust story than the envelope in your physical mailbox, we think encrypting these envelopes is just as important as encrypting the messages they carry.
However, there are more ways in which DNS privacy can be enhanced, and Cloudflare took another incremental step in December 2020 by announcing support for Oblivious DoH (ODoH). ODoH is a proposed DNS standard — co-authored by engineers from Cloudflare, Apple, and Fastly — that separates IP addresses from queries, so that no single entity can see both at the same time. ODoH requires a proxy as a key part of the communication path between client and resolver, with encryption ensuring that the proxy does not know the contents of the DNS query (only where to send it), and the resolver knowing what the query is but not who originally requested it (only the proxy’s IP address). Barring collusion between the proxy and the resolver, the identity of the requester and the content of the request are unlinkable.
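The split of knowledge between proxy and target can be sketched as follows. All of the types and functions here (odohQuery, hpkeSeal, hpkeOpen, and so on) are hypothetical placeholders meant to illustrate who can see what, not the API of any real ODoH library.

```go
// A conceptual sketch of the ODoH split; all names are hypothetical.
package main

type odohQuery struct {
	encryptedForTarget []byte // only the target resolver can decrypt this
	targetURL          string // the proxy needs this to know where to forward
}

// Client: encrypt the DNS question under the target's public key, then hand it
// to the proxy. The proxy learns the client's IP and the target, but not the query.
func clientSend(question []byte, targetPubKey []byte) odohQuery {
	return odohQuery{
		encryptedForTarget: hpkeSeal(targetPubKey, question), // hypothetical HPKE-style encryption
		targetURL:          "https://odoh.example/dns-query", // illustrative target
	}
}

// Proxy: forwards the opaque ciphertext. It cannot read the question.
func proxyForward(q odohQuery) []byte {
	return postTo(q.targetURL, q.encryptedForTarget)
}

// Target: decrypts and resolves. It sees the question, but only the proxy's IP.
func targetResolve(ciphertext, targetPrivKey []byte) []byte {
	question := hpkeOpen(targetPrivKey, ciphertext)
	return resolve(question)
}

// --- placeholders so the sketch is self-contained ---
func hpkeSeal(pk, msg []byte) []byte        { return msg }
func hpkeOpen(sk, ct []byte) []byte         { return ct }
func postTo(url string, body []byte) []byte { return body }
func resolve(q []byte) []byte               { return q }

func main() {}
```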
As with DoH, successful deployment requires partners. A key component of ODoH is a proxy that is disjoint from the target resolver. Cloudflare is working with several leading proxy partners — currently PCCW, SURF, and Equinix — who are equally committed to privacy, and hopes to see this list grow.
Post-Quantum Cryptography
Even with all of these encryption measures, we also know that everything encrypted with today’s public key cryptography can likely be decrypted with tomorrow’s quantum computers. This makes deploying post-quantum cryptography a pressing privacy concern. We’re likely 10 to 15 years away from that development, but as our Head of Research Nick Sullivan described in his blog post in December, we’re not waiting for that future. We’ve been paying close attention to the National Institute of Standards and Technology (NIST)’s initiative to define post-quantum cryptography algorithms to replace RSA and ECC. Last year, Cloudflare and Google performed the TLS Post-Quantum Experiment, which involved implementing and supporting new key exchange mechanisms based on post-quantum cryptography for all Cloudflare customers for a period of a few months.
In addition, Cloudflare’s Research Team has been working with researchers from the University of Waterloo and Radboud University on a new protocol called KEMTLS. KEMTLS is designed to be fully post-quantum and relies only on public-key encryption. On the implementation side, Cloudflare has developed high-speed assembly versions of several of the NIST finalists (Kyber, Dilithium), as well as other relevant post-quantum algorithms (CSIDH, SIDH) in our CIRCL cryptography library written in Go. Cloudflare is endeavoring to use post-quantum cryptography for most internal services by the end of 2021, and plans to be among the first services to offer post-quantum cipher suites to customers as standards emerge.
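As a small taste of what using one of these algorithms looks like, here is a minimal key-encapsulation sketch against CIRCL’s Kyber-768 implementation. It assumes CIRCL’s generic KEM interface (GenerateKeyPair, Encapsulate, Decapsulate); package paths and method names may differ between CIRCL versions, so treat it as illustrative rather than authoritative.

```go
// A minimal post-quantum key encapsulation sketch using CIRCL's Kyber-768.
// The kem.Scheme-style method names are assumptions; check the CIRCL docs.
package main

import (
	"bytes"
	"fmt"

	"github.com/cloudflare/circl/kem/kyber/kyber768"
)

func main() {
	scheme := kyber768.Scheme()

	// Receiver generates a key pair and publishes the public key.
	pk, sk, err := scheme.GenerateKeyPair()
	if err != nil {
		panic(err)
	}

	// Sender encapsulates: produces a ciphertext and a shared secret.
	ct, sharedSender, err := scheme.Encapsulate(pk)
	if err != nil {
		panic(err)
	}

	// Receiver decapsulates the ciphertext to recover the same shared secret.
	sharedReceiver, err := scheme.Decapsulate(sk, ct)
	if err != nil {
		panic(err)
	}

	fmt.Println("shared secrets match:", bytes.Equal(sharedSender, sharedReceiver))
}
```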
Looking forward to 2021
If there’s anything 2020 taught us, it’s that our world can change almost overnight. One thing that doesn’t change, though, is that people will always want privacy for their personal data, and regulators will continue to define rules and requirements for what data protection should look like. And as these rules and requirements evolve, Cloudflare will be there every step of the way, developing innovative product and security solutions to protect data, and building privacy into everything we do.
Cloudflare is also celebrating Data Privacy Day on Cloudflare TV. Tune in for a full day of special programming.