All posts by Guest Author

Terraforming Cloudflare: in quest of the optimal setup

Post Syndicated from Guest Author original https://blog.cloudflare.com/terraforming-cloudflare/

This is a guest post by Dimitris Koutsourelis and Alexis Dimitriadis, working for the Security Team at Workable, a company that makes software to help companies find and hire great people.

This post is about our introductory journey into the infrastructure-as-code practice: managing Cloudflare configuration in a declarative, version-controlled way. We’d like to share the experience we gained during this process: our pain points, the limitations we faced, the different approaches we took, and parts of our solution and experimentation.

Terraform world

Terraform is a great tool that fulfills our requirements, and fortunately, Cloudflare maintains its own provider, which lets us manage its service configuration hassle-free.

On top of that, Terragrunt is a thin wrapper that provides extra commands and functionality for keeping Terraform configurations DRY and managing remote state.

The combination of the two leads to a more modular and reusable structure for Cloudflare resources (configuration), by utilizing Terraform and Terragrunt modules.
We’ve chosen to use the latest version of both tools (Terraform v0.12 and Terragrunt v0.19 respectively) and to upgrade constantly, so we can take advantage of valuable new features and functionality that, at this point in time, remove important limitations.
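To make the “DRY remote state” part concrete, a single root terragrunt.hcl can define the backend once for every environment underneath it. The sketch below uses Terragrunt 0.19 syntax; the backend type and bucket name are illustrative, not our actual setup:

remote_state {
  backend = "s3"
  config = {
    # one state object per environment, derived from its path under environments/
    bucket  = "example-terraform-state"
    key     = "${path_relative_to_include()}/terraform.tfstate"
    region  = "us-east-1"
    encrypt = true
  }
}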

Workable context

Our setup includes multiple domains that are grouped into two distinct Cloudflare organisations: production and staging. Our environments have their own purposes and technical requirements (i.e. QA, development, sandbox and production), which translate to slightly different sets of Cloudflare zone configuration.

Our approach

Our main goal was to have a modular setup with the ability to manage any configuration for any zone, while keeping code repetition to a minimum. This is more complex than it sounds; we repeatedly changed our Terraform folder structure, and other technical aspects, during the development period. The following sections illustrate a set of alternatives along our path, along with their pros and cons.

Structure

Terraform configuration is based on the project’s directory structure, so this is the place to start.
Instead of retaining the Cloudflare organisation structure (production and staging as root-level directories containing the zones that belong to each organisation), we decided to group zones that share common configuration under the same directory. This helps keep the code DRY and the setup consistent and readable.
On the downside, this structure adds an extra layer of complexity, as two different sets of credentials need to be handled conditionally, and two state files (at the environments/ root level) must be managed and isolated using workspaces.
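For illustration, one way to handle the two credential sets conditionally is to key the provider configuration off the active Terraform workspace; the variable names below are hypothetical, not our exact setup:

provider "cloudflare" {
  # pick production or staging credentials based on the selected workspace
  email   = terraform.workspace == "production" ? var.cf_email_production : var.cf_email_staging
  api_key = terraform.workspace == "production" ? var.cf_api_key_production : var.cf_api_key_staging
}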
On top of that, we used Terraform modules to keep sets of configuration that are common across zone groups in a single place.
Terraform modules repository

modules/
├── firewall/
│   ├── main.tf
│   └── variables.tf
├── zone_settings/
│   ├── main.tf
│   └── variables.tf
└── [...]

Terragrunt modules repository

environments/
├── [...]
├── dev/
├── qa/
├── demo/
│   ├── zone-8/ (production)
│   │   └── terragrunt.hcl
│   ├── zone-9/ (staging)
│   │   └── terragrunt.hcl
│   ├── config.tfvars
│   ├── main.tf
│   └── variables.tf
├── config.tfvars
├── secrets.tfvars
├── main.tf
├── variables.tf
└── terragrunt.hcl

The Terragrunt modules tree gives us flexibility, since we are able to apply configuration at the zone, zone-group, or organisation level (which is in line with Cloudflare’s configuration capabilities; custom error pages, for example, can also be configured at the organisation level).
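As a rough sketch of what a leaf terragrunt.hcl can look like (the contents below are illustrative, not our exact file), it mostly just inherits the configuration defined above it and supplies zone-specific inputs:

environments/demo/zone-8/terragrunt.hcl

include {
  # pull in the parent terragrunt.hcl (remote state, common settings)
  path = find_in_parent_folders()
}

inputs = {
  zone_name = "zone-8.example.com"   # hypothetical zone name
}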

Resource types

We decided to implement Terraform resources in different ways, to cover our requirements more efficiently.

1. Static resource

The first thought that came to mind was having one or more .tf files implementing all the resources, with hardcoded values assigned to each attribute. It’s simple and straightforward, but it can have a high maintenance cost if it leads to copy/pasting code between environments.
Common settings therefore seem to be a good use case; we chose to implement the access_rules Terraform resources this way:
modules/access_rules/main.tf

resource "cloudflare_access_rule" "no_17" {
notes		= "this is a description"
mode 	= "blacklist"
configuration = {
target	= "ip"
value 	= "x.x.x.x"
}
}
[...]

2. Parametrized resources

Our next step was to add variables to gain flexibility. This is useful when a few attributes of a shared resource configuration differ between multiple zones. Most of the configuration remains the same (as described above), variables are declared in the Terraform module, and their values are fed through the Terragrunt module as input variables or as entries inside .tfvars files. The zone_settings_override resource was implemented accordingly:
modules/zone_settings/main.tf

resource "cloudflare_zone_settings_override" "zone_settings" {
zone_id = var.zone_id
settings {
always_online		= "on"
always_use_https		= "on"
[...]
browser_check		= var.browser_check
mobile_redirect {
mobile_subdomain	= var.mobile_redirect_subdomain
status			= var.mobile_redirect_status
strip_uri			= var.mobile_redirect_uri
}
[...]
waf			= "on"
webp		= "off"
websockets		= "on"
}
}

environments/qa/main.tf

module "zone_settings" {
source		= "[email protected]:foo/modules/zone_settings"
zone_name		= var.zone_name
browser_check	= var.zone_settings_browser_check
[...]
}

environments/qa/config.tfvars

#zone settings
zone_settings_browser_check = "off"
[...]
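For completeness, the matching declaration on the environment side could look roughly like this (a sketch; the default value is illustrative):

environments/qa/variables.tf

variable "zone_settings_browser_check" {
  description = "value for the zone's browser_check setting"
  type        = string
  default     = "on"
}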

3. Dynamic resource

At that point, we thought a more interesting approach would be to create generic resource templates to manage all instances of a given resource in one place. A template is implemented as a Terraform module and creates each resource dynamically, based on its input: data fed through the Terragrunt modules (/environments in our case), or entries in the .tfvars files.
We chose to implement the account_member resource this way.
modules/account_members/variables.tf

variable "users" {
description	= "map of users - roles"
type        	= map(list(string))
}
variable "member_roles" {
description 	= "account role ids"
type        	= map(string)
}

modules/account_members/main.tf


resource "cloudflare_account_member" "account_member" {
for_each     		= var.users
email_address	= each.key
role_ids     		= [for role in each.value : lookup(var.member_roles, role)]
lifecycle {
prevent_destroy = true
}
}

We feed the template with a map of users, where each member is assigned a list of roles. To make the code more readable, we mapped users to role names instead of role ids:
environments/config.tfvars


member_roles = {
  admin       = "000013091sds0193jdskd01d1dsdjhsd1"
  admin_ro    = "0000ds81hd131bdsjd813hh173hds8adh"
  analytics   = "0000hdsa8137djahd81y37318hshdsjhd"
  [...]
  super_admin = "00001534sd1a2123781j5gj18gj511321"
}

users = {
  "[email protected]" = ["super_admin"]
  "[email protected]" = ["analytics", "audit_logs", "cache_purge", "cf_workers"]
  "[email protected]" = ["cf_stream"]
  [...]
  "[email protected]" = ["cf_stream"]
}
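For illustration, the call site then only has to pass the two maps into the template; the module path below is hypothetical:

module "account_members" {
  source       = "../../modules/account_members"   # illustrative path
  users        = var.users
  member_roles = var.member_roles
}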

Another interesting case we dealt with was the rate_limit resource; the variable declaration (a list of objects) and the implementation go as follows:
modules/rate_limit/variables.tf

variable "rate_limits" {
description	= "list of rate limits"
default	= []
type		= list(object(
{
disabled	= bool,
threshold	= number,
description	= string,
period	= number,
match	= object({
request	= object({
url_pattern	= map(string),
schemes		= list(string),
methods 		= list(string)
}),
response 		= object({
statuses		= list(number),
origin_traffic	= bool
})
}),
action	= object({
mode	= string,
timeout	= number
})
}))
}

modules/rate_limit/main.tf

locals {
  […]
}

data "cloudflare_zones" "zone" {
  filter {
    name   = var.zone_name
    status = "active"
    paused = false
  }
}

resource "cloudflare_rate_limit" "rate_limit" {
  count       = length(var.rate_limits)
  zone_id     = lookup(data.cloudflare_zones.zone.zones[0], "id")
  disabled    = var.rate_limits[count.index].disabled
  threshold   = var.rate_limits[count.index].threshold
  description = var.rate_limits[count.index].description
  period      = var.rate_limits[count.index].period

  match {
    request {
      url_pattern = local.url_patterns[count.index]
      schemes     = var.rate_limits[count.index].match.request.schemes
      methods     = var.rate_limits[count.index].match.request.methods
    }
    response {
      statuses       = var.rate_limits[count.index].match.response.statuses
      origin_traffic = var.rate_limits[count.index].match.response.origin_traffic
    }
  }

  action {
    mode    = var.rate_limits[count.index].action.mode
    timeout = var.rate_limits[count.index].action.timeout
  }
}

environments/qa/rate_limit.tfvars

[
  {
    #1
    disabled    = false
    threshold   = 50
    description = "sample description"
    period      = 60
    match = {
      request = {
        url_pattern = {
          "subdomain" = "foo"
          "path"      = "/api/v1/bar"
        }
        schemes = ["ALL"]
        methods = ["GET", "POST"]
      }
      response = {
        statuses       = []
        origin_traffic = true
      }
    }
    action = {
      mode    = "simulate"
      timeout = 3600
    }
  },
  [...]
]

The biggest advantage of this approach is that all common rate_limit rules live in one place, while each environment can add its own rules in its .tfvars. Terraform’s built-in concat() function joins the two lists (common and unique rules), so we wanted to give it a try:

locals {
  rate_limits = concat(var.common_rate_limits, var.unique_rate_limits)
}

There is, however, a drawback: .tfvars files can only contain static values. Since all url attributes, including the zone name itself, have to be set explicitly in the data of each environment, every time a url changes the value has to be copied across all environments and the zone name edited to match each one.
The solution we came up with to make the zone name dynamic was to split the url attribute into three parts: subdomain, domain and path. This works well for the .tfvars, but the added complexity of handling the new variables is non-negligible. The corresponding code illustrates the issue:
modules/rate_limit/main.tf

locals {
  rate_limits  = concat(var.common_rate_limits, var.unique_rate_limits)
  url_patterns = [
    for rate_limit in local.rate_limits :
    "${lookup(rate_limit.match.request.url_pattern, "subdomain", null) != null ? "${lookup(rate_limit.match.request.url_pattern, "subdomain")}." : ""}${lookup(rate_limit.match.request.url_pattern, "domain", null) != null ? lookup(rate_limit.match.request.url_pattern, "domain") : var.zone_name}${lookup(rate_limit.match.request.url_pattern, "path", null) != null ? lookup(rate_limit.match.request.url_pattern, "path") : ""}"
  ]
}

Readability vs functionality: although flexibility is increased and code duplication is reduced, the url transformations have an impact on the code’s readability and on ease of debugging (it took us several minutes to spot a typo). You can imagine this gets even worse if you attempt to implement a more complex resource (such as page_rule, which is a list of maps with four url attributes).
The underlying issue here is that, at the point we were implementing our resources, we had to choose maps over objects because maps allow attributes to be omitted and filled in with defaults via the lookup() function. This is a requirement for certain resources such as page_rules, where only some attributes need to be defined and the rest ignored.
In the end, the context will determine whether more complex resources can be implemented as dynamic resources.
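As a tiny, hypothetical illustration of why maps were the pragmatic choice: with a map, a key that is not set simply falls back to a default via lookup(), whereas the object types available in Terraform 0.12 required every attribute to be present:

locals {
  # var.page_rule_actions is a hypothetical map(string); if "cache_level" is
  # not provided, fall back to a default instead of failing
  cache_level = lookup(var.page_rule_actions, "cache_level", "standard")
}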

4. Sequential resources

The Cloudflare page rule resource has a peculiarity that differentiates it from other resource types: the priority attribute.
When a page rule is applied, it gets a unique id and a priority number that corresponds to the order in which it was submitted. Although the Cloudflare API and the Terraform provider give you the ability to specify the priority explicitly, there is a catch.
Terraform doesn’t respect the order of resources inside a .tf file (not even in a for_each loop!); each resource is picked up in effectively random order and then applied to the provider. So, if page_rule priority is important, as in our case, the submission order counts. The solution is to lock the sequence in which the resources are created through the depends_on meta-argument:

resource "cloudflare_page_rule" "no_3" {
depends_on 	= [cloudflare_page_rule.no_2]
zone_id    	= lookup(data.cloudflare_zones.zone.zones[0], "id")
target     	= "www.${var.zone_name}/foo"
status     	= "active"
priority   	= 3
actions {
forwarding_url {
status_code 	= 301
url        		 = "https://www.${var.zone_name}"
}
}
}
resource "cloudflare_page_rule" "no_2" {
depends_on = [cloudflare_page_rule.no_1]
zone_id   	= lookup(data.cloudflare_zones.zone.zones[0], "id")
target    	= "www.${var.zone_name}/lala*"
status     	= "active"
priority   	= 24
actions {
ssl                 		= "flexible"
cache_level         		= "simplified"
resolve_override    		= "bar.${var.zone_name}"
host_header_override 	= "new.domain.com"
}
}
resource "cloudflare_page_rule" "page_rule_1" {
zone_id    	= lookup(data.cloudflare_zones.zone.zones[0], "id")
target   	= "*.${var.zone_name}/foo/*"
status   	= "active"
priority 	= 1
actions {
forwarding_url {
status_code 	= 301
url         		= "https://foo.${var.zone_name}/$1/$2"
}
}
}

So we had to go with a more static resource configuration, because the depends_on meta-argument only accepts static values (not values computed dynamically at runtime).

Conclusion

After changing our minds several times along the way on the Terraform structure and other technical details, we believe there isn’t a single best solution. It all comes down to your requirements and to keeping a balance between complexity and simplicity. In our case, a mixed approach is a good middle ground.
Terraform is evolving quickly, but at this point it still lacks some common coding capabilities, so over-engineering can be a trap (one we fell into too many times). Keep it simple and as DRY as possible. 🙂

How Castle is Building Codeless Customer Account Protection

Post Syndicated from Guest Author original https://blog.cloudflare.com/castle-building-codeless-customer-account-protection/

How Castle is Building Codeless Customer Account Protection

This is a guest post by Johanna Larsson, of Castle, who designed and built the Castle Cloudflare app and the supporting infrastructure.

Strong security should be easy.

Asking your consumers again and again to take responsibility for their security through robust passwords and other security measures doesn’t work. The responsibility for security needs to shift from end users to the companies that serve them.

Castle is leading the way for companies to better protect their online accounts, with millions of consumers protected every day. Uniquely, Castle extends threat prevention and protection to both pre- and post-login, so you can keep friction low and security high. With real-time responses and automated workflows for account recovery, overwhelmed security teams get a helping hand. However, when you’re that busy, sometimes deploying new solutions takes more time than you have. Reducing time to deployment was a priority, so Castle turned to Cloudflare Workers.

User security and friction

When security is no longer optional and threats are not black or white, security teams are left trying to determine how to allow end-user access and transaction completion when there are hints of risk, or when not all of the information is available. Keeping friction low is important to the customer experience. Castle helps organizations be more dynamic and proactive by making continuous security decisions based on real-time risk and trust.

Some of the challenges with traditional solutions are that they are often focused only on protecting the app, or only on the point of access, protecting against bot access for example. Tools specifically designed for securing user accounts, however, are fundamentally focused on protecting the accounts of end users, whether they are being targeted by humans or bots. Being able to understand end-user behaviors and their devices, both pre- and post-login, is therefore critical to truly protecting each user. The key to protecting users is being able to distinguish between normal and anomalous activity on an individual account and device basis. You also need a playbook to respond to anomalies and attacks with dedicated flows that allow your end users to interact directly and provide feedback around security events.

By understanding the end user and their good behaviors, devices, and transactions, it is possible to automatically respond to account threats in real-time based on risk level and policy. This approach not only reduces end-user friction but enables security teams to feel more confident that they won’t ever be blocking a legitimate login or transaction.

Castle processes tens of millions of events every day through its APIs, including contextual information like headers, IP, and device types. The more information that can be associated with a request the better. This allows us to better recognize abnormalities and protect the end user. Collection of this information is done in two ways. One is done on the web application’s backend side through our SDKs and the other is done on the client side using our mobile SDK or browser script. Our experience shows that any integration of a security service based on user behavior and anomaly detection can involve many different parties across an organization, and it affects multiple layers of the tech stack. On top of the security related roles, it’s not unusual to also have to coordinate between backend, devops, and frontend teams. The information related to an end user session is often spread widely over a code base.

The cost of security

One of the biggest challenges in implementing a user-facing security and risk management solution is the variety of people and teams it needs attention from, each with competing priorities. Security teams are often understaffed and overwhelmed, making it difficult to take on new projects. At the same time, it consumes time from product and engineering personnel on the application side, who are responsible for UX flows and for performing continuous authentication post-login.

We’ve been experimenting with approaches where we can extract that complexity from your application code base, while also reducing the effort of integrating. At Castle, we believe that strong security should be easy.

With Cloudflare we found a service that enables us to create a more friendly, simple, and, in the end, safe integration process by placing the security layer directly between the end user and your application. Security-related logic shouldn’t pollute your app; it should reside in a separate service, or shield, that covers your app. Keeping the two environments separate reduces the time and cost of implementing complex systems, making integration and maintenance less stressful and much easier.

Our integration with Cloudflare aims to solve this implementation challenge, delivering end-to-end account protection for your users, both pre and post login, with the click of a button.

The codeless integration

In our quest for a purely codeless integration, a few key capabilities are required. Every customer application is different, which means every integration is different. We want to solve this problem for you once and for all. To do this, we needed to move the security work away from the implementation details so that we could instead focus on describing the key interactions with the end user, like logins or bank transactions. We also wanted to empower key decision makers to recognize and handle crucial interactions in their systems. Creating a single solution that could be customized to fit each specific use case was a priority.

Building on top of Cloudflare’s platform, we made use of three unique and powerful products: Workers, Apps for Workers, and Workers KV.

Thanks to Workers we have full access to the interactions between the end user and your application. With their impressive performance, we can confidently run inline with website requests without adding noticeable latency. We will never slow down your site. And in order to achieve the flexibility required to match your specific use case, we created an internal configuration format that fully describes the interactions of devices and servers across HTTP, including web and mobile app traffic. It is in this Worker that we’ve implemented an advanced routing engine to match requests and responses to events and collect information about them, directly at the edge. It also fully handles injecting the Castle browser script — one less thing to worry about.

All of this logic is kept separate from your application code, and through the Cloudflare App Store we are able to distribute this Worker, giving you control over when and where it is enabled, as well as what configurations are used. There’s no need to copy/paste code or manage your own Workers.

In order to achieve the required speed while running in distributed edge locations, we needed a high performing low latency datastore, and we found one in the Cloudflare Workers KV Store. Cloudflare Apps are not able to access the KV Store directly, but we’ve solved this by exposing it through a separate Worker that the Castle App connects to. Because traffic between Workers never leaves the Cloudflare network, this is both secure and fast enough to match your requirements. The KV Store allows us to maintain end user sessions across the world, and also gives us a place to store and update the configurations and sessions that drive the Castle App.

In combining these products we have a complete and codeless integration that is fully configurable and that won’t slow you down.

How does it work?

The data flow is straightforward. After installing the Castle App, Cloudflare will route your traffic through the Castle App, which uses the Castle Data Store and our API to intelligently protect your end users. The impact to traffic latency is minimal because most work is done in the background, not blocking the requests. Let’s dig deeper into each technical feature:

Script injection

One of the tools we use to verify user identity is a browser script: Castle.js. It is responsible for gathering device information and UI interaction behavior, and although it is not required for our service to function, it helps improve our verdicts. This means it’s important that it is properly added to every page in your web application. The Castle App, running between the end user and your application, is able to unobtrusively add the script to each page as it is served. In order for the script to also track page interactions, it needs to be able to connect them to your users, which is done through a call to our script and works out of the box with the Cloudflare integration. This removes 100% of the integration work from your frontend teams.

Collect contextual information

The second half of the information that forms the basis of our security analysis is the information related to the request itself, such as IP and headers, as well as timestamps. Gathering this information may seem straightforward, but our experience shows some recurring problems in traditional integrations. IP addresses are easily lost behind reverse proxies, as they need to be carried in separate headers, like `X-Forwarded-For`, and the internal format of headers differs from platform to platform. Headers in general might get cut off based on whitelisting. The Castle App sees the original request as it comes in, with no outside influence or platform differences, enabling it to reliably create the context of the request. This saves your infrastructure and backend engineers huge efforts debugging edge cases.

Advanced routing engine

Finally, in order to reliably recognize important events, like login attempts, we’ve built a fully configurable routing engine. This is fast enough to run inline with your web application, and supports near real-time configuration updates. It is powerful enough to translate requests into actual events in your system, like logins, purchases, profile updates or transactions. Using information from the request, it is then able to send this information to Castle, where you are able to analyze, verify and take action on suspicious activity. What’s even better is that if, at any point in the future, you want Castle to protect a new critical user event, such as a withdrawal or transfer, all it takes is adding a record to the configuration file. You never have to touch application code in order to expand your Castle integration across sensitive events.

We’ve put together an example TypeScript snippet that naively implements the flow and features we’ve discussed. The details are glossed over so that we can focus on the functionality.

addEventListener('fetch', event => event.respondWith(handleEvent(event)));

const handleEvent = async (event: CloudflareEvent) => {
  // You configure the application with your Castle API key
  const { apiKey } = INSTALL_OPTIONS;
  const { request } = event;

  // Configuration is fetched from the KV Store
  const configuration = await getConfiguration(apiKey);

  // The session is also retrieved from the KV Store
  const session = await getUserSession(request);

  // Pass the request through and get the response
  let response = await fetch(request);

  // Using the configuration we can recognize events by running
  // the request+response and configuration through our matching engine
  const securityEvent = getMatchingEvent(request, response, configuration);

  if (securityEvent) {
    // With direct access to the raw request, we can confidently build the context
    // including a device ID generated by the browser script, IP, and headers
    const requestContext = getRequestContext(request);

    // Collecting the relevant information, the data is passed to the Castle API
    event.waitUntil(sendToCastle(securityEvent, session, requestContext));
  }

  // Because we have access to the response HTML page we can safely inject the browser
  // script. If the response is not an HTML page it is passed through untouched.
  response = injectScript(response, session);

  return response;
};

We hope we have inspired you and demonstrated how Workers can provide speed and flexibility when implementing end to end account protection for your end users with Castle. If you are curious about our service, learn more here.

Introducing time.cloudflare.com

Post Syndicated from Guest Author original https://blog.cloudflare.com/secure-time/

Introducing time.cloudflare.com

This is a guest post by Aanchal Malhotra, a Graduate Research Assistant at Boston University and former Cloudflare intern on the Cryptography team.

Cloudflare has always been a leader in deploying secure versions of insecure Internet protocols and making them available for free for anyone to use. In 2014, we launched one of the world’s first free, secure HTTPS services (Universal SSL) to go along with our existing free HTTP plan. When we launched the 1.1.1.1 DNS resolver, we also supported the new secure versions of DNS (DNS over HTTPS and DNS over TLS). Today, as part of Crypto Week 2019, we are doing the same thing for the Network Time Protocol (NTP), the dominant protocol for obtaining time over the Internet.

This announcement is personal for me. I’ve spent the last four years identifying and fixing vulnerabilities in time protocols. Today I’m proud to help introduce a service that would have made my life from 2015 through 2019 a whole lot harder: time.cloudflare.com, a free time service that supports both NTP and the emerging Network Time Security (NTS) protocol for securing NTP. Now, anyone can get time securely from all our datacenters in 180 cities around the world.

You can use time.cloudflare.com as the source of time for all your devices today with NTP, while NTS clients are still under development. NTPsec includes experimental support for NTS. If you’d like to get updates about NTS client development, email us asking to join at [email protected]. To use NTS to secure time synchronization, reach out to your vendors and inquire about NTS support.

A small tale of “time” first

Back in 2015, as a fresh graduate student interested in Internet security, I came across this mostly esoteric Internet protocol called the Network Time Protocol (NTP). NTP was designed to synchronize time between computer systems communicating over unreliable and variable-latency network paths. I was actually studying Internet routing security, in particular attacks against the Resource Public Key Infrastructure (RPKI), and kept hitting a dead end because of a cache-flushing issue. As a last-ditch effort I decided to roll back the time on my computer manually, and the attack worked.

I had discovered the importance of time to computer security. Most cryptography uses timestamps to limit certificate and signature validity periods. When connecting to a website, knowledge of the correct time ensures that the certificate you see is current and is not compromised by an attacker. When looking at logs, time synchronization makes sure that events on different machines can be correlated accurately. Certificates and logging infrastructure can break with minutes, hours or months of time difference. Other applications like caching and Bitcoin are sensitive to even very small differences in time on the order of seconds.

Two-factor authentication using rolling numbers also relies on accurate clocks. This creates the need for computer clocks to have access to reasonably accurate time that is securely delivered. NTP is the most commonly used protocol for time synchronization on the Internet. If an attacker can leverage vulnerabilities in NTP to manipulate time on computer clocks, they can undermine the security guarantees provided by these systems.

Motivated by the severity of the issue, I decided to look deeper into NTP and its security. Since the need for synchronizing time across networks was visible early on, NTP is a very old protocol. The first standardized version of NTP dates back to 1985, while the latest NTP version 4 was completed in 2010 (see RFC5905).

In its most common mode, NTP works by having a client send a query packet to an NTP server, which responds with its clock time. The client then computes an estimate of the difference between its clock and the remote clock, attempting to compensate for network delay. An NTP client queries multiple servers and implements algorithms to select the best estimate, rejecting clearly wrong answers.
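For reference, if T1 and T4 are the times at which the client sends its query and receives the reply, and T2 and T3 are the times at which the server receives the query and sends the reply, a standard NTP client estimates the clock offset as ((T2 - T1) + (T3 - T4)) / 2 and the round-trip delay as (T4 - T1) - (T3 - T2), which implicitly assumes the network path is roughly symmetric.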

[Figure: Request response flow of NTP]

Surprisingly enough, research on NTP and its security was not very active at the time. Before this, in late 2013 and early 2014, high-profile Distributed Denial of Service (DDoS) attacks were carried out by amplifying traffic from NTP servers; attackers able to spoof a victim’s IP address could funnel copious amounts of traffic, overwhelming the targeted domains. This caught the attention of some researchers. However, these attacks did not exploit flaws in the fundamental protocol design. The attackers simply used NTP as a boring bandwidth multiplier. Cloudflare wrote extensively about these attacks and you can read about it here, here, and here.

I found several flaws in the core NTP protocol design and its implementation that can be exploited by network attackers to launch much more devastating attacks by shifting time or denying service to NTP clients. What is even more concerning is that these attackers do not need to be a Monster-In-The-Middle (MITM), where an attacker can modify traffic between the client and the server, to mount these attacks. A set of recent papers authored by one of us showed that an off-path attacker present anywhere on the network can shift time or deny service to NTP clients. One of the ways this is done is by abusing IP fragmentation.

Fragmentation is a feature of the IP layer where a large packet is chopped into several smaller fragments so that it can pass through networks that do not support large packets. Basically, any network element on the path between the client and the server can send a special “ICMP fragmentation needed” packet to the server telling it to fragment the packet to, say, X bytes. Since the server cannot be expected to know the IP addresses of all the network elements on its path, this packet can be sent from any source IP.

[Figure: Fragmentation attack against NTP]

In our attack, the attacker exploits this feature to make the NTP server fragment its NTP response packet for the victim NTP client. The attacker then spoofs carefully crafted overlapping response fragments from off-path that contain the attacker’s timestamp values. By further exploiting the reassembly policies for overlapping fragments the attacker fools the client into assembling a packet with legitimate fragments and the attacker’s insertions. This evades the authenticity checks that rely on values in the original parts of the packet.

NTP’s past and future

At the time of NTP’s creation back in 1985, there were two main design goals for the service provided by NTP. First, they wanted it to be robust enough to handle networking errors and equipment failures, so it was designed as a service where a client can gather timing samples from multiple peers over multiple communication paths and then average them to get a more accurate measurement.

The second goal was load distribution. While every client would like to talk to time servers that are directly attached to high-precision time-keeping devices like atomic clocks or GPS, and thus have more accurate time, the capacity of those devices is limited. So, to reduce protocol load on the network, the service was designed in a hierarchical manner. At the top of the hierarchy are servers connected to non-NTP time sources, which distribute time to other servers, which in turn distribute time to even more servers. Most computers connect to either the second- or third-level servers.

[Figure: The stratum hierarchy of NTP]

The original specification (RFC 958) also states the “non-goals” of the protocol, namely peer authentication and data integrity. Security wasn’t considered critical in the relatively small and trusting early Internet, and the protocols and applications that rely on time for security didn’t exist then. Securing NTP came second to improving the protocol and implementation.

As the Internet has grown, more and more core Internet protocols have been secured through cryptography to protect against abuse: TLS, DNSSEC, and RPKI are all steps toward ensuring the security of all communications on the Internet. These protocols use “time” to provide security guarantees. Since the security of the Internet hinges on the security of NTP, it becomes even more important to secure NTP.

This research clearly showed the need for securing NTP. As a result, there was more work in the Internet Engineering Task Force (IETF), the standards body for Internet protocols, towards cryptographically authenticating NTP. At the time, even though NTPv4 supported both symmetric and asymmetric cryptographic authentication, these were rarely used in practice due to limitations of both approaches.

NTPv4’s symmetric approach to securing synchronization doesn’t scale, as the symmetric key must be pre-shared and configured manually: imagine if every client on earth needed a special secret key for the servers they wanted to get time from; the organizations that run those servers would have to do a great deal of work managing keys. This makes the solution quite cumbersome for public servers that must accept queries from arbitrary clients. For context, NIST operates important public time servers and distributes symmetric keys only to users that register, once per year, via US mail or facsimile; the US Naval Observatory does something similar.

The first attempt to solve the problem of key distribution was the Autokey protocol, described in RFC 5906. Many public NTP servers do not support Autokey (e.g., the NIST and USNO time servers, and many servers in pool.ntp.org). The protocol is badly broken as any network attacker can trivially retrieve the secret key shared between the client and server. The authentication mechanisms are non-standard and quite idiosyncratic.

The future of the Internet is a secure Internet, which means an authenticated and encrypted Internet. But until now NTP has remained mostly insecure, despite continuing protocol development. In the meantime, more and more services have come to depend on it.

[Figure: Timeline of NTP development]

Fixing the problem

Following the release of our paper, there was a lot more enthusiasm for improving the state of NTP security, both within the NTP community at the IETF and outside it. As a short-term fix, the ntpd reference implementation was patched for several vulnerabilities that we found. For a long-term solution, the community realized the dire need for a secure, authenticated time synchronization protocol based on public-key cryptography, which enables encryption and authentication without requiring key material to be shared beforehand. Today we have a Network Time Security (NTS) draft at the IETF, thanks to the work of dozens of dedicated individuals in the NTP working group.

In a nutshell, the NTS protocol is divided into two phases. The first phase is the NTS key exchange, which establishes the necessary key material between the NTP client and the server. This phase uses the Transport Layer Security (TLS) handshake and relies on the same public key infrastructure as the web. Once the keys are exchanged, the TLS channel is closed and the protocol enters the second phase, in which the results of the TLS handshake are used to authenticate NTP time synchronization packets via extension fields. The interested reader can find more information in the Internet draft.

Cloudflare’s new service

Today, Cloudflare announces its free time service to anyone on the Internet. We intend to address the limitations of existing public time services, in particular by increasing availability, robustness, and security.

We use our global network to provide an advantage in latency and accuracy. Our 180 locations around the world all use anycast to automatically route your packets to our closest server. All of our servers are synchronized with stratum 1 time service providers, and then offer NTP to the general public, similar to how other public NTP providers function. The biggest source of inaccuracy for time synchronization protocols is the network asymmetry, leading to a difference in travel times between the client and server and back from the server to the client. However, our servers’ proximity to users means there will be less jitter — a measurement of variance in latency on the network — and possible asymmetry in packet paths. We also hope that in regions with a dearth of NTP servers our service significantly improves the capacity and quality of the NTP ecosystem.

Cloudflare servers obtain authenticated time by using a shared symmetric key with our stratum 1 upstream servers. These upstream servers are geographically spread and ensure that our servers have accurate time in our datacenters. But this approach to securing time doesn’t scale. We had to exchange emails individually with the organizations that run stratum 1 servers, as well as negotiate permission to use them. While this is a solution for us, it isn’t a solution for everyone on the Internet.

As a secure time service provider Cloudflare is proud to announce that we are among the first to offer a free and secure public time service based on Network Time Security. We have implemented the latest NTS IETF draft. As this draft progresses through the Internet standards process we are committed to keeping our service current.

Most NTP implementations are currently working on NTS support, and we expect that the next few months will see broader introduction as well as advancement of the current draft protocol to an RFC. Currently we have interoperability with NTPsec, which has implemented draft 18 of NTS. We hope that our service will spur faster adoption of this important improvement to Internet security. Because this is a new service with no backwards compatibility requirements, we are requiring the use of TLS v1.3 with it to promote adoption of the most secure version of TLS.

Use it

If you have an NTS client, point it at time.cloudflare.com:1234. Otherwise point your NTP client at time.cloudflare.com. More details on configuration are available in the developer docs.
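For a classic (non-NTS) client, the change is typically a single line in your NTP configuration; the exact file and syntax depend on your client, but the line below works for common ntpd and chrony setups:

server time.cloudflare.com iburst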

Conclusion

From our Roughtime service to Universal SSL, Cloudflare has played a role in expanding the availability and use of secure protocols. Now, with our free public time service, we provide a trustworthy, widely available alternative to another insecure legacy protocol. It’s all part of our mission to help make a faster, more reliable, and more secure Internet for everyone.

Thanks to the many other engineers who worked on this project, including Watson Ladd, Gabbi Fisher, and Dina Kozlov.

Introducing time.cloudflare.com

Post Syndicated from Guest Author original https://blog.cloudflare.com/secure-time/

Introducing time.cloudflare.com

This is a guest post by Aanchal Malhotra, a Graduate Research Assistant at Boston University and former Cloudflare intern on the Cryptography team.

Introducing time.cloudflare.com

Cloudflare has always been a leader in deploying secure versions of insecure Internet protocols and making them available for free for anyone to use. In 2014, we launched one of the world’s first free, secure HTTPS service (Universal SSL) to go along with our existing free HTTP plan. When we launched the 1.1.1.1 DNS resolver, we also supported the new secure versions of DNS (DNS over HTTPS and DNS over TLS). Today, as part of Crypto Week 2019, we are doing the same thing for the Network Time Protocol (NTP), the dominant protocol for obtaining time over the Internet.

This announcement is personal for me. I’ve spent the last four years identifying and fixing vulnerabilities in time protocols. Today I’m proud to help introduce a service that would have made my life from 2015 through 2019 a whole lot harder: time.cloudflare.com, a free time service that supports both NTP and the emerging Network Time Security (NTS) protocol for securing NTP. Now, anyone can get time securely from all our datacenters in 180 cities around the world.

You can use time.cloudflare.com as the source of time for all your devices today with NTP, while NTS clients are still under development. NTPsec includes experimental support for NTS. If you’d like to get updates about NTS client development, email us asking to join at [email protected]. To use NTS to secure time synchronization, reach out to your vendors and inquire about NTS support.

A small tale of “time” first

Back in 2015, as a fresh graduate student interested in Internet security, I came across this mostly esoteric Internet protocol called the Network Time Protocol (NTP). NTP was designed to synchronize time between computer systems communicating over unreliable and variable-latency network paths. I was actually studying Internet routing security, in particular attacks against the Resource Public Key Infrastructure (RPKI), and kept hitting a dead end because of a cache-flushing issue. As a last-ditch effort I decided to roll back the time on my computer manually, and the attack worked.

I had discovered the importance of time to computer security. Most cryptography uses timestamps to limit certificate and signature validity periods. When connecting to a website, knowledge of the correct time ensures that the certificate you see is current and is not compromised by an attacker. When looking at logs, time synchronization makes sure that events on different machines can be correlated accurately. Certificates and logging infrastructure can break with minutes, hours or months of time difference. Other applications like caching and Bitcoin are sensitive to even very small differences in time on the order of seconds.

Two factor authentication using rolling numbers also rely on accurate clocks. This then creates the need for computer clocks to have access to reasonably accurate time that is securely delivered. NTP is the most commonly used protocol for time synchronization on the Internet. If an attacker can leverage vulnerabilities in NTP to manipulate time on computer clocks, they can undermine the security guarantees provided by these systems.

Motivated by the severity of the issue, I decided to look deeper into NTP and its security. Since the need for synchronizing time across networks was visible early on, NTP is a very old protocol. The first standardized version of NTP dates back to 1985, while the latest NTP version 4 was completed in 2010 (see RFC5905).

In its most common mode, NTP works by having a client send a query packet out to an NTP server that then responds with its clock time. The client then computes an estimate of the difference between its clock and the remote clock and attempts to compensate for network delay in this. NTP client queries multiple servers and implements algorithms to select the best estimate, and rejects clearly wrong answers.

Introducing time.cloudflare.com

Surprisingly enough, research on NTP and its security was not very active at the time. Before this, in late 2013 and early 2014, high-profile Distributed Denial of Service (DDoS) attacks were carried out by amplifying traffic from NTP servers; attackers able to spoof a victim’s IP address were able to funnel copious amounts of traffic overwhelming the targeted domains. This caught the attention of some researchers. However, these attacks did not exploit flaws in the fundamental protocol design. The attackers simply used NTP as a boring bandwidth multiplier. Cloudflare wrote extensively about these attacks and you can read about it here, here, and here.

I found several flaws in the core NTP protocol design and its implementation that can be exploited by network attackers to launch much more devastating attacks by shifting time or denying service to NTP clients. What is even more concerning was that these attackers do not need to be a Monster-In-The-Middle (MITM), where an attacker can modify traffic between the client and the server, to mount these attacks. A set of recent papers authored by one of us showed that an off-path attacker present anywhere on the network can shift time or deny service to NTP clients. One of the ways this is done is by abusing IP fragmentation.

Fragmentation is a feature of the IP layer where a large packet is chopped into several smaller fragments so that they can pass through the networks that do not support large packets. Basically, any random network element on the path between the client and the server can send a special “ICMP fragmentation needed” packet to the server telling them to fragment the packet to say X bytes. Since the server is not expected to know the IP addresses of all the network elements on its path, this packet can be sent from any source IP.

Introducing time.cloudflare.com
Fragmentation attack against NTP

In our attack, the attacker exploits this feature to make the NTP server fragment its NTP response packet for the victim NTP client. The attacker then spoofs carefully crafted overlapping response fragments from off-path that contain the attacker’s timestamp values. By further exploiting the reassembly policies for overlapping fragments the attacker fools the client into assembling a packet with legitimate fragments and the attacker’s insertions. This evades the authenticity checks that rely on values in the original parts of the packet.

NTP’s past and future

At the time of NTP’s creation back in 1985, there were two main design goals for the service provided by NTP. First, they wanted it to be robust enough to handle networking errors and equipment failures. So it was designed as a service where client can gather timing samples from multiple peers over multiple communication paths and then average them to get more accurate measurement.

The second goal was load distribution. While every client would like to talk to time servers which are directly attached to high precision time-keeping devices like atomic clocks, GPS, etc, and thus have more accurate time, the capacity of those devices is only so much. So, to reduce protocol load on the network, the service was designed in a hierarchical manner. At the top of the hierarchy are servers connected to non-NTP time sources, that distribute time to other servers, that further distribute time to even more servers. Most computers connect to either these second or third level servers.

Introducing time.cloudflare.com
The stratum hierarchy of NTP

The original specification (RFC 958) also states the “non-goals” of the protocol, namely peer authentication and data integrity. Security wasn’t considered critical in the relatively small and trusting early Internet, and the protocols and applications that rely on time for security didn’t exist then. Securing NTP came second to improving the protocol and implementation.

As the Internet has grown, more and more core Internet protocols have been secured through cryptography to protect against abuse: TLS, DNSSEC, RPKI are all steps toward ensuring the security of all communications on the Internet. These protocols use “time” to provide security guarantees. Since security of Internet hinges on the security of NTP, it becomes even more important to secure NTP.

This research perspicuously showed the need for securing NTP. As a result, there was more work in the standards body for Internet Protocols, the Internet Engineering Task Force (IETF) towards cryptographically authenticating NTP. At the time, even though NTPv4 supported both symmetric and asymmetric cryptographic authentication, it was rarely used in practice due to limitations of both approaches.

NTPv4’s symmetric approach to securing synchronization doesn’t scale as the symmetric key must be pre-shared and configured manually: imagine if every client on earth needed a special secret key with the servers they wanted to get time from, the organizations that run those servers would have to do a great deal of work managing keys. This makes this solution quite cumbersome for public servers that must accept queries from arbitrary clients. For context, NIST operates important public time servers and distributes symmetric keys only to users that register, once per year, via US mail or facsimile; the US Naval Office does something similar.

The first attempt to solve the problem of key distribution was the Autokey protocol, described in RFC 5906. Many public NTP servers do not support Autokey (e.g., the NIST and USNO time servers, and many servers in pool.ntp.org). The protocol is badly broken as any network attacker can trivially retrieve the secret key shared between the client and server. The authentication mechanisms are non-standard and quite idiosyncratic.

The future of the Internet is a secure Internet, which means an authenticated and encrypted Internet. But until now NTP remains mostly insecure, despite continuing protocol development. In the meantime more and more services depended on it.

Introducing time.cloudflare.com
Timeline of NTP development

Fixing the problem

Following the release of our paper, there was a lot more enthusiasm in the NTP community at standards body for Internet Protocols, the Internet Engineering Task Force (IETF) and outside for improving the state of NTP security. As a short-term fix, the ntpd reference implementation software was patched for several vulnerabilities that we found. And for a long-term solution, the community realized the dire need for a secure, authenticated time synchronization protocol based on public-key cryptography, which enables encryption and authentication without requiring the sharing of key material beforehand. Today we have a Network Time Security (NTS) draft at the IETF, thanks to the work of dozens of dedicated individuals at the NTP working group.

In a nutshell, the NTS protocol is divided into two-phases. The first phase is the NTS key exchange that establishes the necessary key material between the NTP client and the server. This phase uses the Transport Layer Security (TLS) handshake and relies on the same public key infrastructure as the web. Once the keys are exchanged, the TLS channel is closed and the protocol enters the second phase. In this phase the results of that TLS handshake are used to authenticate NTP time synchronization packets via extension fields. The interested reader can find more information in the Internet draft.

Cloudflare’s new service

Today, Cloudflare announces its free time service to anyone on the Internet. We intend to solve the limitations with the existing public time services, in particular by increasing availability, robustness and security.

We use our global network to provide an advantage in latency and accuracy. Our 180 locations around the world all use anycast to automatically route your packets to our closest server. All of our servers are synchronized with stratum 1 time service providers, and then offer NTP to the general public, similar to how other public NTP providers function. The biggest source of inaccuracy for time synchronization protocols is the network asymmetry, leading to a difference in travel times between the client and server and back from the server to the client. However, our servers’ proximity to users means there will be less jitter — a measurement of variance in latency on the network — and possible asymmetry in packet paths. We also hope that in regions with a dearth of NTP servers our service significantly improves the capacity and quality of the NTP ecosystem.

Cloudflare servers obtain authenticated time by using a shared symmetric key with our stratum 1 upstream servers. These upstream servers are geographically spread and ensure that our servers have accurate time in our datacenters. But this approach to securing time doesn’t scale. We had to exchange emails individually with the organizations that run stratum 1 servers, as well as negotiate permission to use them. While this is a solution for us, it isn’t a solution for everyone on the Internet.

As a secure time service provider Cloudflare is proud to announce that we are among the first to offer a free and secure public time service based on Network Time Security. We have implemented the latest NTS IETF draft. As this draft progresses through the Internet standards process we are committed to keeping our service current.

Most NTP implementations are currently working on NTS support, and we expect that the next few months will see broader introduction as well as advancement of the current draft protocol to an RFC. Currently we have interoperability with NTPsec who have implemented draft 18 of NTS. We hope that our service will spur faster adoption of this important improvement to Internet security. Because this is a new service with no backwards compatibility requirements, we are requiring the use of TLS v1.3 with it to promote adoption of the most secure version of TLS.

Use it

If you have an NTS client, point it at time.cloudflare.com:1234. Otherwise point your NTP client at time.cloudflare.com. More details on configuration are available in the developer docs.

Conclusion

From our Roughtime service to Universal SSL Cloudflare has played a role in expanding the availability and use of secure protocols. Now with our free public time service we provide a trustworthy, widely available alternative to another insecure legacy protocol. It’s all a part of our mission to help make a faster, reliable, and more secure Internet for everyone.

Introducing time.cloudflare.com

Thanks to the many other engineers who worked on this project, including Watson Ladd, Gabbi Fisher, and Dina Kozlov.

Cloudflare Repositories FTW

Post Syndicated from Guest Author original https://blog.cloudflare.com/cloudflare-repositories-ftw/


This is a guest post by Jim “Elwood” O’Gorman, one of the maintainers of Kali Linux. Kali Linux is a Debian based GNU/Linux distribution popular amongst the security research communities.


Kali Linux turned six years old this year!

In this time, Kali has established itself as the de facto standard open source penetration testing platform. On a quarterly basis, we release updated ISOs for multiple platforms, pre-configured virtual machines, Kali Docker, WSL, Azure, and AWS images, tons of ARM devices, Kali NetHunter, and on and on and on. This has led to Kali being trusted and relied upon to always be there for both security professionals and enthusiasts alike.

But that popularity has always led to one complication: How to get Kali to people?

With so many different downloads plus the apt repository, we have to move a lot of data. To accomplish this, we have always relied on our network of first- and third-party mirrors.

The way this works is, we run a master server that pushes out to a number of mirrors. We then pay to host a number of servers that are geographically dispersed and use them as our first-party mirrors. Then, a number of third parties donate storage and bandwidth to operate third-party mirrors, ensuring that we have even more systems that are geographically close to you. When you go to download, you hit a redirector that will send you to a mirror that is close to you, ideally allowing you to download your files quickly.

This solution has always been pretty decent; however, it has some drawbacks. First, our network of first-party mirrors is expensive. Second, some mirrors are not as good as others. Nothing is worse than trying to download Kali and getting sent to a slow mirror, where your download might drag on for hours. Third, we always need more mirrors as Kali continues to grow in popularity.

This situation led to us encountering Cloudflare, thanks to some extremely generous outreach.


I will be honest, we are a bunch of security nerds, so we were a bit skeptical at first. We have some pretty unique needs, we use a lot of bandwidth, syncing an apt repository to a CDN is no small task, and well, we are paranoid. We have an average of 1,000,000 downloads a month on just our ISO images. Add in our apt repos and you are talking some serious, serious traffic. So how much help could we really expect from Cloudflare anyway? Were we really going to be able to put this to use, or would this just be a nice fancy front end to our website and nothing else?

On the other hand, it was a chance to use something new and shiny, and it is an expensive product, so of course we dove right in to play with it.

Initially we had some sync issues. A package repository is a mix of static data (binary and source packages) and dynamic data (package lists are updated every 6 hours). To make things worse, the cryptographic sealing of the metadata means that we need atomic updates of all the metadata (the signed top-level ‘Release’ file contains checksums of all the binary and source package lists).

The default behavior of a CDN is not appropriate for this purpose, as it caches all files for a certain amount of time after they have been fetched for the first time. This means that you could have different versions of various metadata files in the cache, resulting in invalid checksum errors returned by apt-get. So we had to implement a few tweaks to make it work and reap the full benefits of Cloudflare's CDN network.
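
To make the consistency requirement concrete, here is a hedged sketch of the relationship apt relies on; the URL, suite, and component names are illustrative rather than an exact reproduction of our layout:

# The signed Release file pins a checksum for every package list it covers
curl -s https://http.kali.org/kali/dists/kali-rolling/Release | grep -A 5 '^SHA256:'

# If the CDN serves a Release file from one repository snapshot and a Packages
# file from another, the recorded and actual checksums diverge and apt-get bails out
curl -s https://http.kali.org/kali/dists/kali-rolling/main/binary-amd64/Packages.xz | sha256sum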

First we added an “Expires” HTTP header to disable expiration of all files that will never change. Then we added another HTTP header to tag all metadata files so that we could manually purge those files from the CDN cache through an API call that we integrated at the end of the repository update procedure on our backend server.

With nginx in our backend, the configuration looks like this:

location /kali/dists/ {
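    # Cloudflare picks up the Cache-Tag response header from the origin and uses
    # it to group cached objects for purge-by-tag; package metadata changes on
    # every repository update, so tag it here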
    add_header Cache-Tag metadata,dists;
}
location /kali/project/trace/ {
    add_header Cache-Tag metadata,trace;
    expires 1h;
}
location /kali/pool/ {
    add_header Cache-Tag pool;
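    # Published package files never change, so let them be cached indefinitely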
    location ~ \.(deb|udeb|dsc|changes|xz|gz|bz2)$ {
        expires max;
    }
}

The API call is a simple shell script launched by a hook of the repository mirroring script:

#!/bin/sh
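# Purge every cached object tagged "metadata" after a repository update; the
# zone ID, API key, and e-mail address below are placeholders for real credentials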
curl -sS -X POST "https://api.cloudflare.com/client/v4/zones/xxxxxxxxxxx/purge_cache" \
    -H "Content-Type:application/json" \
    -H "X-Auth-Key:XXXXXXXXXXXXX" \
    -H "X-Auth-Email:[email protected]" \
    --data '{"tags":["metadata"]}'

With this simple yet powerful feature, we ensure that the CDN cache always contains consistent versions of the metadata files. Going further, we might want to configure Prefetching so that Cloudflare downloads all the package lists as soon as a user downloads the top-level ‘Release’ file.

In short, we were using this system in a way that was never intended, but it worked! This really reduced the load on our backend, as a single server could feed the entire CDN. Putting the files geographically close to users allows the classic apt dist-upgrade to complete much, much faster than ever before.

It was a huge benefit, and it was not really a lot of work to set up. Sevki Hasirci was there with us the entire time as we worked through this process, ensuring any questions we had were answered promptly. A great win.

However, there was just one problem.

Looking at our logs, while the apt repo was working perfectly, our image distribution was not so great. None of those images were getting cached, and our origin server was dying.

Talking with Sevki, it turned out there were limits to how large a file Cloudflare would cache. He upped our limit to the system capacity, but that still was not enough for how large some of our images are. At this point, we just assumed that was that: we could use this solution for the repo, but it would not help with our image distribution. However, Sevki told us to wait a bit. He had a surprise in the works for us.

After some development time, Cloudflare pushed out an update to address our issue, allowing us to cache very large files. With that in place, everything just worked with no additional tweaking. Even items like partial downloads for users using download accelerators worked just fine. Amazing!

To show an example of what this translated into, let's look at some graphs. Once very large file support was added and we started to push our images out through Cloudflare, you can see that there was no real increase in requests:

[Graph: total requests before and after serving images through Cloudflare]

However, looking at bandwidth, there is a clear increase:

[Graph: total bandwidth before and after serving images through Cloudflare]

After it had been in place for a while, we saw a clear pattern:

[Graphs: request and bandwidth patterns after the full rollout]

This pushed us from around 80 TB a week when we had just the repo, to now around 430 TB a month with both the repo and the images. As you can imagine, that's an amazing bandwidth savings for an open source project such as ours.

Performance is great, and with a cache hit rate of over 97% (amazingly high considering how frequently files in our repo change), we could not be happier.

So what’s next? That’s the question we are asking ourselves. This solution has worked so well, we are looking at other ways to leverage it, and there are a lot of options. One thing is for sure, we are not done with this.

Thanks to Cloudflare, Sevki, Justin, and Matthew for helping us along this path. It is fair to say this is the single largest contribution to Kali that we have received outside of the support by Offensive Security.

The support we received from Cloudflare was amazing. The Kali project and community thank you immensely every time they update their distribution or download an image.