Workers Sites: Extending the Workers platform with our own serverless building blocks

Post Syndicated from Ashley Williams original https://blog.cloudflare.com/extending-the-workers-platform/

As of today, with the Wrangler CLI, you can now deploy entire websites directly to Cloudflare Workers and Workers KV. If you can statically generate the assets for your site, think create-react-app, Jekyll, or even the WP2Static plugin, you can deploy it to our entire global network, which spans 194 cities in more than 90 countries.

While you could deploy an entire site directly to Workers before, it wasn’t the easiest process. So, the Workers Developer Experience Team came up with a solution to make deploying static assets a significantly better experience.

Using our Workers command-line tool Wrangler, we’ve made it possible to deploy any static site to Workers in three easy steps: run wrangler init --site, configure the newly created wrangler.toml file with your account and project details, and then publish it to Cloudflare’s edge with wrangler publish. If you want to explore how this works, check out our new Workers Sites tutorial for create-react-app, where we cover how this new functionality allows you to deploy without needing to write any additional code!

While in hindsight the path we took to get to this point might not seem the most straightforward, it really highlights the flexibility of the entire Workers platform to easily support use cases that we didn’t originally envision. With this in mind, I’ll walk you through the implementation and thinking we did to get to this point. I’ll also talk a bit about how the flexibility of the Workers platform has us excited, both for the ethos it represents, and the future it enables.

So, what went into building Workers Sites?

“Filesystem?! Where we’re going, we don’t need a filesystem!”

The Workers platform is built on v8 isolates, which, while awesome, lack a filesystem. If you’ve ever deployed a static site via FTP, uploaded it to object storage, or used a computer, you’d probably agree that filesystems are important. For many use cases, like building an API or routing, you don’t need a filesystem, but as the vision for Workers grew and our audience grew with it, it became clear to us that this was a limitation we needed to address for new features.

Welcome to the simulation

Without a filesystem, we decided to simulate one on top of Workers KV! Workers KV provides access to a secure key-value store that runs across Cloudflare’s Edge alongside Workers.

When running wrangler preview or wrangler publish, we check your wrangler.toml for the site key. The site key points to a bucket: the KV namespace we’ll use to store your static assets. We then upload each of your assets, where the path relative to the entry directory is the key, and the blob of the file is the value.

When a request from a user comes in, the Worker reads the request’s URI and looks up the asset that matches the segment requested. For example, if a user fetches “my-site.com/about.html”, the Worker looks up the “about.html” key in KV and returns the blob. Behind the scenes, we’ll also detect the mime-type of the requested asset and return the response with the correct content-type headers.
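
To make that concrete, here is a minimal sketch of that lookup written as a Worker (this is illustrative, not the actual Workers Sites code; STATIC_ASSETS is a hypothetical KV namespace binding, and the types come from the @cloudflare/workers-types package):

declare const STATIC_ASSETS: KVNamespace;

addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request: Request): Promise<Response> {
  const url = new URL(request.url);
  // "my-site.com/about.html" -> key "about.html"; "/" falls back to index.html
  const key = url.pathname === "/" ? "index.html" : url.pathname.slice(1);

  const body = await STATIC_ASSETS.get(key, "arrayBuffer");
  if (body === null) {
    return new Response("Not found", { status: 404 });
  }
  return new Response(body, { headers: { "content-type": contentType(key) } });
}

// A tiny extension-based guess; a real implementation uses a fuller mime table.
function contentType(key: string): string {
  if (key.endsWith(".html")) return "text/html;charset=UTF-8";
  if (key.endsWith(".css")) return "text/css";
  if (key.endsWith(".js")) return "application/javascript";
  if (key.endsWith(".png")) return "image/png";
  return "application/octet-stream";
}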

For folks who are used to building static sites or sites with a static asset serving component, this could feel deeply overengineered. Others may argue that, indeed, this is just how filesystems are built! The interesting thing for us is that we had to build one; there wasn’t one already there waiting for us.

It was great that we could put this together with Workers KV, but we still had a problem…

Cache rules everything around me

Workers KV is a database, and so it’s set up for both read and write operations. However, it’s primarily tuned for read-heavy workloads on entries that don’t generally have a long life span. This works well for applications where data is accessed frequently and often updated. But, for static websites, assets are generally written once, and then they are never (or infrequently) written to again. Static site content should be cached for a very long time, if not forever (long live Space Jam). This means we need to cache data much longer than KV is used to.

To fix this, on publish or preview, Wrangler walks the entry-point directory you’ve declared in your wrangler.toml and creates an asset manifest: a map of your filenames to a hash of their content. We use this asset manifest to map requests for a particular filename, say index.html, to the content hash of the most recently uploaded static asset.

You may be familiar with the concept of an asset manifest from using tools like create-react-app. Asset manifests help maintain asset fingerprints for caching in the browser. We took this idea and implemented it in Workers Sites, so that we can leverage the edge cache as well!
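
As a rough illustration (the real manifest format is an internal detail of Wrangler, so treat the shape and hashes below as hypothetical), the manifest is conceptually just a map from original filenames to content-hashed keys:

const ASSET_MANIFEST: Record<string, string> = {
  "index.html": "index.38af1c3b.html",
  "about.html": "about.9d2b40e7.html",
  "css/site.css": "css/site.d41d8cd9.css",
};

Because the key changes whenever a file’s contents change, anything stored under that key can be cached for a very long time without ever serving stale content.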

This now allows us to, after the first read per location, cache the static assets in the Cloudflare cache so that the assets can be stored on the edge indefinitely. This reduces reads to KV to almost nothing; we want to use KV for durability purposes, but we want to use a longer caching strategy for performance. Let’s dive into exactly what this looks like:

How it works

When a new asset is created, wrangler publish will push the new asset to KV and deploy an asset manifest to the edge alongside your Worker.

When someone first accesses your page, the Cloudflare location closest to them will run your Worker. The Worker script will determine the content hash of the asset they’ve requested by looking up that asset in the asset manifest. It will use the filename and content hash as the key to fetch the asset’s contents from KV. At this time it will also insert the asset’s contents into Cloudflare’s edge cache, again keyed by filename and content hash. It will then respond to the request with the asset.
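
Building on the earlier sketches, here is roughly what that first-request path could look like inside the Worker (again illustrative, reusing the hypothetical STATIC_ASSETS binding and ASSET_MANIFEST shape from above):

declare const STATIC_ASSETS: KVNamespace;
declare const ASSET_MANIFEST: Record<string, string>;

async function serveAsset(request: Request): Promise<Response> {
  const path = new URL(request.url).pathname.replace(/^\//, "") || "index.html";
  const versionedKey = ASSET_MANIFEST[path]; // e.g. "index.<contenthash>.html"
  if (versionedKey === undefined) {
    return new Response("Not found", { status: 404 });
  }

  const cache = caches.default;
  const cacheKey = new Request(new URL(`/${versionedKey}`, request.url).toString());

  // Subsequent requests at this location are served straight from the edge cache.
  const cached = await cache.match(cacheKey);
  if (cached !== undefined) {
    return cached;
  }

  // First request: read from KV, then populate the cache keyed by the content hash.
  const body = await STATIC_ASSETS.get(versionedKey, "arrayBuffer");
  if (body === null) {
    return new Response("Not found", { status: 404 });
  }
  const response = new Response(body, {
    headers: { "cache-control": "public, max-age=31536000" }, // safe: the key changes with the content
  });
  await cache.put(cacheKey, response.clone());
  return response;
}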

On subsequent requests, the Worker script will look up the content hash in the asset manifest, and check the cache to see if the asset is there. Since this is a subsequent request, it will find your asset in the cache on the edge and return a response containing the asset without having to fetch the asset contents from KV.

So what happens when you update your “index.html”, or any of your static assets? The process is very similar to what happens on the upload of a new asset. You’ll run wrangler publish with your new asset on your local machine. Wrangler will walk your asset directory and upload the files to KV. At the same time, it will create a new asset manifest containing the filename and a content hash representing the new contents of the asset. When a request comes into your Worker, your Worker will look into the asset manifest and retrieve the new content hash for that asset. The Worker will now look in the cache for the new hash! It will then fetch the new asset from KV, populate the cache, and return the new file to your end user.

Edge caching happens per location across 194 cities around the world, ensuring that the most frequently accessed content on your page is cached in a location closest to those requesting content, reducing latency. All of this happens in *addition* to the browser cache, which means that your assets are nearly always incredibly close to end users!

By being on the edge, a Worker is in a unique position to be able to cache not only static assets like JS, CSS and images, but also HTML assets! Traditional static site solutions use your site’s HTML as the entry point to the static site generator’s asset manifest. With that method of caching your HTML, it would be impossible to bust the cache, because there is no entry point for managing your assets’ fingerprints other than the HTML itself. In a Worker, however, the entry point is your *Worker*! We can then use Wrangler’s asset manifest to look up and serve the correct, cacheable HTML, while still busting the cache whenever the content hash changes.

Making the possible imaginable

“What we have is a crisis of imagination. Albert Einstein said that you cannot solve a problem with the same mind-set that created it.” – Peter Buffett

When building a brand new developer platform, there’s often a vast number of possible applications. However, the sheer number of possibilities often makes each one difficult to imagine. That’s why we think the most important part of any platform is its flexibility to adapt to previously unimagined use cases. And, we don’t mean that just for us. It’s important that everyone has the ability to customize the platform to new and interesting use cases!

At face value, the work we did to implement this feature might seem like another solution for a previously solved problem. However, it’s a great example of how a group of dedicated developers can improve the platform experience for others.

We hope that by paving a way to include static assets in a Worker, developers can use the extra cognitive space to conceive of even more new ways to use Workers that may have been hard to imagine before.

Workers Sites isn’t the end goal, but a stepping stone to continue to think critically about what it means to build a Web Application. We’re excited to give developers the space to explore how simple static applications can grow and evolve, when combined with the dynamic power of edge computing.

Go forth and build something awesome!


Have you built something interesting with Workers? Let us know @CloudflareDev!

The Technical Challenges of Building Cloudflare WARP

Post Syndicated from Zack Bloom original https://blog.cloudflare.com/warp-technical-challenges/

If you have seen our other post you know that we released WARP to the last members of our waiting list today. With WARP our goal was to secure and improve the connection between your mobile devices and the Internet. Along the way we ran into problems with phone and operating system versions, diverse networks, and our own infrastructure, all while working to meet the pent up demand of a waiting list nearly two million people long.

To understand all these problems and how we solved them we first need to give you some background on how the Cloudflare network works:

How Our Network Works

The Cloudflare network is composed of data centers located in 194 cities and in more than 90 countries. Every Cloudflare data center is composed of many servers; each data center receives a continual flood of requests and has to distribute those requests between the servers that handle them. We use a set of routers to perform that operation:

[Diagram: a router distributing incoming requests across the servers in a data center]

Our routers listen on Anycast IP addresses which are advertised over the public Internet. If you have a site on Cloudflare, your site is available via two of these addresses. In this case, I am doing a DNS query for “workers.dev”, a site which is powered by Cloudflare:

➜ dig workers.dev

;; QUESTION SECTION:
;workers.dev.      IN  A

;; ANSWER SECTION:
workers.dev.    161  IN  A  198.41.215.162
workers.dev.    161  IN  A  198.41.214.162

;; SERVER: 1.1.1.1#53(1.1.1.1)

workers.dev is available at two addresses 198.41.215.162 and 198.41.214.162 (along with two IPv6 addresses available via the AAAA DNS query). Those two addresses are advertised from every one of our data centers around the world. When someone connects to any Internet property on Cloudflare, each networking device their packets pass through will choose the shortest path to the nearest Cloudflare data center from their computer or phone.

Once the packets hit our data center, we send them to one of the many servers which operate there. Traditionally, one might use a load balancer to do that type of traffic distribution across multiple machines. Unfortunately putting a set of load balancers capable of handling our volume of traffic in every data center would be exceptionally expensive, and wouldn’t scale as easily as our servers do. Instead, we use devices built for operating on exceptional volumes of traffic: network routers.

Once a packet hits our data center it is processed by a router. That router sends the traffic to one of a set of servers responsible for handling that address using a routing strategy called ECMP (Equal-Cost Multi-Path). ECMP refers to the situation where the router doesn’t have a clear ‘winner’ between multiple routes: it has multiple good next hops, all to the same ultimate destination. In our case we hack that concept a bit. Rather than using ECMP to balance across multiple intermediary links, we make the intermediary link addresses the final destination of our traffic: our servers.

Here’s the configuration of a Juniper-brand router of the type which might be in one of our data centers, and which is configured to balance traffic across three destinations:

[email protected]# show routing-options

static {
  route 172.16.1.0/24 next-hop [ 172.16.2.1 172.16.2.2 172.16.2.3 ];
}
forwarding-table {
  export load-balancing-policy;
}

Since the ‘next-hop’ is our server, traffic will be split across multiple machines very efficiently.

TCP, IP, and ECMP

IP is responsible for sending packets of data from addresses like 93.184.216.34 to 208.80.153.224 (or [2606:2800:220:1:248:1893:25c8:1946] to [2620:0:860:ed1a::1] in the case of IPv6) across the Internet. It’s the “Internet Protocol”.

TCP (Transmission Control Protocol) operates on top of a protocol like IP which can send a packet from one place to another, and makes data transmission reliable and useful for more than one process at a time. It is responsible for taking the unreliable and misordered packets that might arrive over a protocol like IP and delivering them reliably, in the correct order. It also introduces the concept of a ‘port’, a number from 1-65535 which helps route traffic on a computer or phone to a specific service (such as the web or email). Each TCP connection has a source and destination port which is included in the header TCP adds to the beginning of each packet. Without the idea of ports it would not be easy to figure out which messages were destined for which program. For example, both Google Chrome and Mail might wish to send messages over your WiFi connection at the same time, so they will each use their own port.

Here’s an example of making a request for https://cloudflare.com/ at 198.41.215.162, on the default port for HTTPS: 443. My computer has randomly assigned the port 51602, which it will listen on for a response that will (hopefully) contain the contents of the site:

Internet Protocol Version 4, Src: 19.5.7.21, Dst: 198.41.215.162
    Protocol: TCP (6)
    Source: 19.5.7.21
    Destination: 198.41.215.162
Transmission Control Protocol, Src Port: 51602, Dst Port: 443, Seq: 0, Len: 0
    Source Port: 51602
    Destination Port: 443

Looking at the same request from the Cloudflare side will be a mirror image, a request from my public IP address originating at my source port, destined for port 443 (I’m ignoring NAT for the moment, more on that later):

Internet Protocol Version 4, Src: 198.41.215.162, Dst: 19.5.7.21
    Protocol: TCP (6)
    Source: 198.41.215.162
    Destination: 19.5.7.21
Transmission Control Protocol, Src Port: 443, Dst Port: 51602, Seq: 0, Len: 0
    Source Port: 443
    Destination Port: 51602

We can now return to ECMP! It could be theoretically possible to use ECMP to balance packets between servers randomly, but you would almost never want to do that. A message over the Internet is generally composed of multiple TCP packets. If each packet were sent to a different server it would be impossible to reconstruct the original message in any one place and act on it. Even beyond that, it would be terrible for performance: we rely on being able to maintain long-lived TCP and TLS sessions which require a persistent connection to a single server. To provide that persistence, our routers don’t balance traffic randomly, they use a combination of four values: the source address, the source port, the destination address, and the destination port. Traffic with the same combination of those four values will always make it to the same server. In the case of my example above, all of my messages destined to cloudflare.com will make it to a single server which can reconstruct the TCP packets into my request and return packets in a response.
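
As a toy illustration of that hashing (real routers do this in hardware and with their own hash functions, so treat this purely as a sketch of the idea), you can see how any packet carrying the same 4-tuple always lands on the same next hop:

import { createHash } from "node:crypto";

function pickServer(
  srcIp: string, srcPort: number,
  dstIp: string, dstPort: number,
  servers: string[],
): string {
  // Hash the 4-tuple and map it onto one of the equal-cost next hops.
  const digest = createHash("sha256")
    .update(`${srcIp}|${srcPort}|${dstIp}|${dstPort}`)
    .digest();
  return servers[digest.readUInt32BE(0) % servers.length];
}

// The example flow above, balanced across the three next hops from the router config:
console.log(pickServer("19.5.7.21", 51602, "198.41.215.162", 443,
  ["172.16.2.1", "172.16.2.2", "172.16.2.3"]));

As long as none of the four values change, every packet of the flow hashes to the same server; change any one of them and the flow may land somewhere else, which is exactly the problem described next.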

Enter WARP

For a conventional request it is very important that our ECMP routing sends all of your packets to the same server for the duration of your request. Over the web a request commonly lasts less than ten seconds and the system works well. Unfortunately we quickly ran into issues with WARP.

WARP uses a session key negotiated with public-key encryption to secure packets. For a successful connection, both sides must negotiate a connection which is then only valid for that particular client and the specific server they are talking to. This negotiation takes time and has to be completed any time a client talks to a new server. Even worse, if packets get sent which expect one server, and end up at another, they can’t be decrypted, breaking the connection. Detecting those failed packets and restarting the connection from scratch takes so much time that our alpha testers experienced it as a complete loss of their Internet connection. As you can imagine, testers don’t leave WARP on very long when it prevents them from using the Internet.

WARP was experiencing so many failures because devices were switching servers much more often than we expected. If you recall, our ECMP router configuration uses a combination of (Source IP, Source Port, Destination IP, Destination Port) to match a packet to a server. Destination IP doesn’t generally change, WARP clients are always connecting to the same Anycast addresses. Similarly, Destination Port doesn’t change, we always listen on the same port for WARP traffic. The other two values, Source IP and Source Port, were changing much more frequently than we had planned.

One source of these changes was expected. WARP runs on cell phones, and cell phones commonly switch from Cellular to Wi-Fi connections. When you make that switch you suddenly go from communicating over the Internet via your cellular carrier’s (like AT&T or Verizon) IP address space to that of the Internet Service Provider your Wi-Fi connection uses (like Comcast or Google Fiber). It’s essentially impossible that your IP address won’t change when you move between connections.

The port changes, however, occurred even more frequently than switching networks alone could explain. To understand why, we need to introduce one more component of Internet lore: Network Address Translation.

NAT

An IPv4 address is composed of 32 bits (often written as four eight-bit numbers). If you exclude the reserved addresses which can’t be used, you are left with 3,706,452,992 possible addresses. This number has remained constant since IPv4 was deployed on the ARPANET in 1983, even as the number of devices has exploded (although it might go up a bit soon if the 0.0.0.0/8 becomes available). This data is based on Gartner Research predictions and estimates:

[Chart: growth in the number of Internet-connected devices over time, based on Gartner Research estimates]

IPv6 is the definitive solution to this problem. It expands the length of an address from 32 to 128 bits, with 125 bits available in a valid Internet address at the moment (all public IPv6 addresses have the first three bits set to 001; the remaining 87.5% of the IPv6 address space is not considered necessary yet). 2^125 is an impossibly large number and would be more than enough for every device on Earth to have its own address. Unfortunately, 21 years after it was published, IPv6 still remains unsupported on many networks. Much of the Internet still relies on IPv4, and as seen above, there aren’t enough IPv4 addresses for every device to have its own.

To solve this problem, many devices are commonly put behind a single Internet-addressable IP address. A router is used to do Network Address Translation: it takes messages which arrive on that single public IP and forwards them to the appropriate device on the local network. In effect it’s as if everyone in your apartment building had the same street address, and the postal worker was responsible for sorting out which mail was meant for which person.

When your devices send a packet destined for the Internet your router intercepts it. The router then rewrites the source address to the single public Internet address allocated for you, and the source port to a port which is unique among all the messages being sent across all the Internet-connected devices on your network. Just as your computer chooses a random source port for your messages which is unique among all the different processes on your computer, your router chooses a random source port which is unique across all the Internet connections on your entire network. It remembers the port it selected for you as belonging to your connection, and allows the message to continue over the Internet.

When a response arrives destined for the port it has allocated to you, the router matches it to your connection and again rewrites the packet, this time replacing the destination address with your address on the local network, and the destination port with the original source port you specified. It has transparently allowed all the devices on your network to act as if they were one big computer with a single Internet-connected IP address.

This process works very well for the duration of a common request over the Internet. Your router only has so much space however, so it will helpfully delete old port assignments, freeing up space for new ones. It generally waits for the connection to not have any messages for thirty seconds or more before deleting an assignment, making it unlikely a response will arrive which it can no longer direct to the appropriate source. Unfortunately, WARP sessions need to last much longer than thirty seconds.
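
To make the eviction problem concrete, here is a deliberately simplified model of such a NAT table (the names and the thirty-second timeout are illustrative, matching the behaviour described above rather than any particular router):

interface NatEntry { privateIp: string; privatePort: number; lastSeenMs: number; }

const natTable = new Map<number, NatEntry>(); // keyed by the public source port

function translateOutbound(privateIp: string, privatePort: number, nowMs: number): number {
  let publicPort: number;
  do {
    publicPort = 1024 + Math.floor(Math.random() * (65536 - 1024));
  } while (natTable.has(publicPort)); // pick an unused public port
  natTable.set(publicPort, { privateIp, privatePort, lastSeenMs: nowMs });
  return publicPort; // the packet leaves with the shared public IP and this port
}

function translateInbound(publicPort: number, nowMs: number): NatEntry | undefined {
  const entry = natTable.get(publicPort);
  if (entry === undefined || nowMs - entry.lastSeenMs > 30_000) {
    natTable.delete(publicPort); // idle mapping evicted: the reply can no longer be delivered
    return undefined;
  }
  entry.lastSeenMs = nowMs;
  return entry;
}

Once an idle session’s mapping is evicted, the next outbound packet is assigned a brand-new source port, which is exactly what changes the ECMP hash described earlier.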

When you next send a message after your NAT session has expired, you are given a new source port. That new port causes your ECMP mapping (based on source IP, source port, destination IP, destination port) to change, causing us to route your requests to a new machine within the Cloudflare data center your messages are arriving at. This breaks your WARP session, and your Internet connection.

We experimented extensively with methods of keeping your NAT session fresh by periodically sending keep-alive messages which would prevent routers and mobile carriers from evicting mappings. Unfortunately, waking your device’s radio every thirty seconds has serious consequences for your battery life, and it was not entirely successful at preventing port and address changes. We needed a way to always map sessions to the same machine, even as their source port (and even source address) changed.

Fortunately, we had a solution which came from elsewhere at Cloudflare. We don’t use dedicated load balancers, but we do have many of the same problems load balancers solve. We have long needed to map traffic to Cloudflare servers with more control than ECMP alone allows. Rather than deploying an entire tier of load balancers, we use every server in our network as a load balancer, forwarding packets first to an arbitrary machine and then relying on that machine to forward the packet to the appropriate host. This consumes minimal resources and allows us to scale our load balancing infrastructure with each new machine we add. We have a lot more to share on how this infrastructure works and what makes it unique; subscribe to this blog to be notified when that post is released.

To make our load balancing technique work though we needed a way to identify which client a WARP packet was associated with before it could be decrypted. To understand how we did that it’s helpful to understand how WARP encrypts your messages. The industry standard way of connecting a device to a remote network is a VPN. VPNs use a protocol like IPsec to allow your device to send messages securely to a remote network. Unfortunately, VPNs are generally rather disliked. They slow down connections, eat battery life, and their complexity makes them frequently the source of security vulnerabilities. Users of corporate networks which mandate VPNs often hate them, and the idea that we would convince millions of consumers to install one voluntarily seemed ridiculous.

After considering and testing several more modern options, we landed on WireGuard®. WireGuard is a modern, high-performance and, most importantly, simple protocol created by Jason Donenfeld to solve the same problem. Its original codebase is less than 1% the size of a popular IPsec implementation, making it easy for us to understand and secure. We chose Rust as the language most likely to give us the performance and safety we needed, and implemented WireGuard while optimizing the code heavily to run quickly on the platforms we were targeting. Then we open sourced the project.

WireGuard changes two very relevant things about the traffic you send over the Internet. The first is it uses UDP not TCP. The second is it uses a session key negotiated with public-key encryption to secure the contents of that UDP packet.

TCP is the conventional protocol used for loading a website over the Internet. It combines the ability to address ports (which we talked about previously) with reliable delivery and flow control. Reliable delivery ensures that if a message is dropped, TCP will eventually resend the missing data. Flow control gives TCP the tools it needs to handle many clients all sharing the same link who exceed its capacity. UDP is a much simpler protocol which trades those capabilities for simplicity: it makes a best-effort attempt to send a message, and if the message is dropped or there is too much data for the link, the message is simply never heard from again.

UDP’s lack of reliability would normally be a problem while browsing the Internet, but we are not simply sending UDP, we are sending a complete TCP packet _inside_ our UDP packets.

Inside the payload encrypted by WireGuard we have a complete TCP header which contains all the information necessary to ensure reliable delivery. We then wrap it with WireGuard’s encryption and use UDP to (less-than-reliably) send it over the Internet. Should it be dropped, TCP will do its job just as if a network link had lost the message, and resend it. If we instead wrapped our inner, encrypted, TCP session in another TCP packet, as some other protocols do, we would dramatically increase the number of network messages required, destroying performance.

The second interesting component of WireGuard relevant to our discussion is public-key encryption. WireGuard allows you to secure each message you send such that only the specific destination you are sending it to can decrypt it. That is a powerful way of ensuring your security as you browse the Internet, but it means it is impossible to read anything inside the encrypted payload until the message has reached the server which is responsible for your session.

Returning to our load balancing issue, you can see that only three things are accessible to us before we can decrypt the message: the IP header, the UDP header, and the WireGuard header. Neither the IP header nor the UDP header includes the information we need, as we have already failed with the four pieces of information they contain (source IP, source port, destination IP, destination port). That leaves the WireGuard header as the one location where we can find an identifier which can be used to keep track of who the client was before decrypting the message. Unfortunately, there isn’t one. This is the format of the message used to initiate a connection:

[Diagram: WireGuard handshake initiation message format]

sender looks temptingly like a client id, but it’s randomly assigned on every handshake. Handshakes have to be performed every two minutes to rotate keys, making it insufficiently persistent. We could have forked the protocol to add any number of additional fields, but it is important to us to remain wire-compatible with other WireGuard clients. Fortunately, WireGuard has a three-byte block in its header which is not currently used by other clients. We decided to put our identifier in this region and still support messages from other WireGuard clients (albeit with less reliable routing than we can offer). If this reserved section is used for other purposes we can ignore those bits or work with the WireGuard team to extend the protocol in another suitable way.

When we begin a WireGuard session we include our clientid field, which is provided by the authentication server that every client has to talk to before beginning a WARP session:

[Diagram: handshake initiation message including the clientid field]

Data messages similarly include the same field:

[Diagram: data message including the clientid field]

It’s important to note that the clientid is only 24 bits long. That means there are fewer possible clientid values than the current number of users waiting to use WARP. This suits us well, as we don’t need or want the ability to track individual WARP users. clientid is only necessary for load balancing; once it has served its purpose we expunge it from our systems as quickly as we can.

The load balancing system now uses a hash of the clientid to identify which machine a packet should be routed to, meaning  WARP messages always arrive at the same machine even as you change networks or move from Wi-Fi to cellular, and the problem was eliminated.
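
Putting the pieces together, here is a sketch (based purely on the description above, not our production code) of reading the 24-bit clientid from the WireGuard header’s reserved bytes and hashing it to pick a machine:

import { createHash } from "node:crypto";

// A WireGuard message starts with a one-byte type followed by three reserved
// bytes; WARP stores its clientid in those three bytes so it can be read
// before the payload is decrypted.
function readClientId(packet: Uint8Array): number {
  return (packet[1] << 16) | (packet[2] << 8) | packet[3];
}

// Hash the clientid (rather than the volatile 4-tuple) to choose a machine,
// so the session survives changes of source IP and source port.
function pickServerForWarp(clientId: number, servers: string[]): string {
  const digest = createHash("sha256")
    .update(Buffer.from([(clientId >> 16) & 0xff, (clientId >> 8) & 0xff, clientId & 0xff]))
    .digest();
  return servers[digest.readUInt32BE(0) % servers.length];
}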

Client Software

Cloudflare has never developed client software before. We take pride in selling a service anyone can use without needing to buy hardware or provision infrastructure. To make WARP work, however, we needed to deploy our code onto one of the most ubiquitous hardware platforms on Earth: smartphones.

While developing software on mobile devices has gotten steadily easier over the past decade, unfortunately developing low-level networking software remains rather difficult. To consider one example: we began the project using the latest iOS connection API, called Network, introduced in iOS 12. Apple strongly recommends the use of Network; in their words: “Your customers are going to appreciate how much better your connections, how much more reliable your connections are established, and they’ll appreciate the longer battery life from the better performance.”

The Network framework provides a pleasantly high-level API which, as they say, integrates well with the native performance features built into iOS. Creating a UDP connection (connection is a bit of a misnomer, there are no connections in UDP, just packets) is as simple as:

self.connection = NWConnection(host: hostUDP, port: portUDP, using: .udp)

And sending a message can be as easy as:

self.connection?.send(content: content)

Unfortunately, at a certain point code actually gets deployed, and bug reports begin flowing in. The first issue was that the simplicity of the API made it impossible for us to process more than a single UDP packet at a time. We commonly use packets of up to 1500 bytes; running a speed test on my Google Fiber connection currently results in a speed of 370 Mbps, or almost thirty-one thousand packets per second. Attempting to process each packet individually was slowing down connections by as much as 40%. According to Apple, the best solution to get the performance we needed was to fall back to the older NWUDPSession API, introduced in iOS 9.

IPv6

If we compare the code required to create a NWUDPSession to the example above you will notice that we suddenly care which protocol, IPv4 or IPv6, we are using:

let v4Session = NWUDPSession(upgradeFor: self.ipv4Session)
v4Session.setReadHandler(self.filteringReadHandler, maxDatagrams: 32)

In fact, NWUDPSession does not handle many of the more tricky elements of creating connections over the Internet. For example, the Network framework will automatically determine whether a connection should be made over IPv4 or IPv6:

[Illustration: the Network framework automatically choosing between IPv4 and IPv6]

NWUDPSession does not do this for you, so we began creating our own logic to determine which type of connection should be used. Once we began to experiment, it quickly became clear that the two are not created equal. It’s fairly common for a route to the same destination to have very different performance depending on whether you use its IPv4 or IPv6 address. Often this is because IPv4 addresses are fewer in number and have been around longer, which has given the Internet’s infrastructure more time to optimize those routes.

Every Cloudflare product has to support IPv6 as a rule. In 2016, we enabled IPv6 for over 98% of our network, over four million sites, and made a pretty big dent in IPv6 adoption on the web:

[Chart: IPv6 adoption on the web]

We couldn’t release WARP without IPv6 support. We needed to ensure that we were always using the fastest possible connection while still supporting both protocols in equal measure. To solve that we turned to a technology we have used with DNS for years: Happy Eyeballs. As codified in RFC 6555, Happy Eyeballs is the idea that you should look up both an IPv4 and an IPv6 address when doing a DNS lookup. Whichever returns first, wins. That way you can allow IPv6 websites to load quickly even in a world which does not fully support it.

As an example, I am loading the website http://zack.is/. My web browser makes a DNS request for both the IPv4 address (an “A” record) and the IPv6 address (an “AAAA” record) at the same time:

Internet Protocol Version 4, Src: 192.168.7.21, Dst: 1.1.1.1
User Datagram Protocol, Src Port: 47447, Dst Port: 53
Domain Name System (query)
    Queries
        zack.is: type A, class IN

Internet Protocol Version 4, Src: 192.168.7.21, Dst: 1.1.1.1
User Datagram Protocol, Src Port: 49946, Dst Port: 53
Domain Name System (query)
    Queries
        zack.is: type AAAA, class IN

In this case the response to the A query returned more quickly, and the connection is begun using that protocol:

Internet Protocol Version 4, Src: 1.1.1.1, Dst: 192.168.7.21
User Datagram Protocol, Src Port: 53, Dst Port: 47447
Domain Name System (response)
    Queries
        zack.is: type A, class IN
    Answers
        zack.is: type A, class IN, addr 104.24.101.191
       
Internet Protocol Version 4, Src: 192.168.7.21, Dst: 104.24.101.191
Transmission Control Protocol, Src Port: 55244, Dst Port: 80, Seq: 0, Len: 0
    Source Port: 55244
    Destination Port: 80
    Flags: 0x002 (SYN)

We don’t need to do DNS queries to make WARP connections, we know the IP addresses of our data centers already, but we do want to know which of the IPv4 and IPv6 addresses will lead to a faster route over the Internet. To accomplish that we perform the same technique but at the network level: we send a packet over each protocol and use the protocol which returns first for subsequent messages. With some error handling and logging removed for brevity, it appears as:

let raceFinished = Atomic<Bool>(false)

let happyEyeballsRacer: (NWUDPSession, NWUDPSession, String) -> Void = {
    (session, otherSession, name) in
    // Session is the session the racer runs for, otherSession is a session we race against

    let handleMessage: ([Data]) -> Void = { datagrams in
        // This handler will be executed twice, once for the winner, again for the loser.
        // It does not matter what reply we received. Any reply means this connection is working.

        if raceFinished.swap(true) {
            // This racer lost
            return self.filteringReadHandler(data: datagrams, error: nil)
        }

        // The winner becomes the current session
        self.wireguardServerUDPSession = session

        session.setReadHandler(self.readHandler, maxDatagrams: 32)
        otherSession.setReadHandler(self.filteringReadHandler, maxDatagrams: 32)
    }

    session.setReadHandler({ (datagrams) in
        handleMessage(datagrams)
    }, maxDatagrams: 1)

    if !raceFinished.value {
        // Send a handshake message
        session.writeDatagram(onViable())
    }
}

This technique successfully allows us to support IPv6 addressing. In fact, every device which uses WARP instantly supports IPv6 addressing, even on networks which don’t have support. Using WARP takes the 34% of Comcast’s network which doesn’t support IPv6, or the 69% of Charter’s network which doesn’t (as of 2018), and allows those users to communicate with IPv6 servers successfully.

This test shows my phone’s IPv6 support before and after enabling WARP:

[Screenshots: IPv6 test results before and after enabling WARP]

Dying Connections

Nothing is simple, however: with iOS 12.2, NWUDPSession began to trigger errors which killed connections. These errors were only identified with a code ‘55’. After some research, it appears 55 has referred to the same error since the early days of FreeBSD, the operating system OS X was originally built upon. In FreeBSD it’s commonly referred to as ENOBUFS, and it’s returned when the operating system does not have sufficient BUFfer Space to handle the operation being completed. For example, looking at the source of FreeBSD today, you see this code in its IPv6 implementation:

[FreeBSD IPv6 source excerpt: the code path that returns ENOBUFS (error 55) when buffer space cannot be allocated]

In this example, if enough memory cannot be allocated to accommodate the size of an IPv6 and ICMP6 header, the error ENOBUFS (which is mapped to the number 55) will be returned. Apple’s take on FreeBSD is not open source, however: how, when, and why they might be returning the error is a mystery. This error has been experienced by other UDP-based projects, but a resolution is not forthcoming.

What is clear is that once an error 55 begins occurring, the connection is no longer usable. To handle this case we need to reconnect, but repeating the same Happy Eyeballs mechanic we use on initial connection is both unnecessary (as we were already talking over the fastest connection) and a waste of valuable time. Instead we add a second connection method which is only used to recreate an already working session:

/**
Create a new UDP connection to the server using a Happy Eyeballs like heuristic.

This function should be called when first establishing a connection to the edge server.

It will initiate a new connection over IPv4 and IPv6 in parallel, keeping the connection that receives the first response.
*/

func connect(onViable: @escaping () -> Data, onReply: @escaping () -> Void, onFailure: @escaping () -> Void, onDisconnect: @escaping () -> Void)

/**
Recreate the current connections.

This function should be called as a response to error code 55, when a quick connection is required.

Unlike `happyEyeballs`, this function will use viability as its only success criteria.
*/

func reconnect(onViable: @escaping () -> Void, onFailure: @escaping () -> Void, onDisconnect: @escaping () -> Void)

Using reconnect we are able to recreate sessions broken by code 55 errors, but it still adds a latency hit which is not ideal. As with all client software development on a closed-source platform however, we are dependent on the platform to identify and fix platform-level bugs.

Truthfully, this is just one of a long list of platform-specific bugs we ran into building WARP. We hope to continue working with device vendors to get them fixed. There are an unimaginable number of device and connection combinations, and each connection doesn’t just exist at one moment in time, they are always changing, entering and leaving broken states almost faster than we can track. Even now, getting WARP to work on every device and connection on Earth is not a solved problem, we still get daily bug reports which we work to triage and resolve.

WARP+

WARP is meant to be a place where we can apply optimizations which make the Internet better. We have a lot of experience making websites more performant, WARP is our opportunity to experiment with doing the same for all Internet traffic.

At Cloudflare we have a product called Argo. Argo makes websites’ time to first byte more than 30% faster on average by continually monitoring thousands of routes over the Internet between our data centers. That data builds a database which maps every IP address range with the fastest possible route to every destination. When a packet arrives it first reaches the closest data center to the client, then that data center uses data from our tests to discover the route which will get the packet to its destination with the lowest possible latency. You can think of it like a traffic-aware GPS for the Internet.

Argo has historically only operated on HTTP packets. HTTP is the protocol which powers the web, sending messages which load websites on top of TCP and IP. For example, if I load http://zack.is/, an HTTP message is sent inside a TCP packet:

Internet Protocol Version 4, Src: 192.168.7.21, Dst: 104.24.101.191
Transmission Control Protocol, Src Port: 55244, Dst Port: 80
    Source Port: 55244
    Destination Port: 80
    TCP payload (414 bytes)
Hypertext Transfer Protocol
    GET / HTTP/1.1\r\n
    Host: zack.is\r\n
    Connection: keep-alive\r\n
    Accept-Encoding: gzip, deflate\r\n
    Accept-Language: en-US,en;q=0.9\r\n
    \r\n

The modern and secure web presents a problem for us however: When I make the same request over HTTPS (https://zack.is) rather than just HTTP (http://zack.is), I see a very different result over the wire:

Internet Protocol Version 4, Src: 192.168.7.21, Dst: 104.25.151.102
Transmission Control Protocol, Src Port: 55983, Dst Port: 443
    Source Port: 55983
    Destination Port: 443
    Transport Layer Security
    TCP payload (54 bytes)
Transport Layer Security
    TLSv1.2 Record Layer: Application Data Protocol: http-over-tls
        Encrypted Application Data: 82b6dd7be8c5758ad012649fae4f469c2d9e68fe15c17297…

My request has been encrypted! It’s no longer possible for WARP (or anyone but the destination) to tell what is in the payload. It might be HTTP, but it also might be any other protocol. If my site is one of the twenty million which use Cloudflare already, we can decrypt the traffic and accelerate it (along with a long list of other optimizations). But for encrypted traffic destined anywhere else, the existing HTTP-only Argo technology was not going to work.

Fortunately we now have a good amount of experience working with non-HTTP traffic through our Spectrum and Magic Transit products. To solve our problem the Argo team turned to the CONNECT protocol.

As we now know, when a WARP request is made it first communicates over the WireGuard protocol to a server running in one of our 194 data centers around the world. Once the WireGuard message has been decrypted, we examine the destination IP address to see if it is an HTTP request destined for a Cloudflare-powered site, or a request destined elsewhere. If it’s destined for us it enters our standard HTTP serving path; often we can reply to the request directly from our cache in the very same data center.

If it’s not destined for a Cloudflare-powered site we instead forward the packet to a proxy process which runs on each machine. This proxy is responsible for loading the fastest path from our Argo database and beginning an HTTP session with a machine in the data center this traffic should be forwarded to. It uses the CONNECT command to both transmit metadata (as headers) and turn the HTTP session into a connection which can transmit the raw bytes of the payload:

CONNECT 8.54.232.11:5564 HTTP/1.1\r\n
Exit-Tcp-Keepalive-Duration: 15\r\n
Application: warp\r\n
\r\n
<data to send to origin>

Once the message arrives at the destination data center it is either forwarded to another data center (if that is best for performance), or directed directly to the origin which is awaiting the traffic.

Smart routing is just the beginning of WARP+; we have a long list of projects and plans which are all aimed at making your Internet faster, and we couldn’t be more thrilled to finally have a platform to test them with.

Our Mission

Today, after well over a year of development, WARP is available to you and to your friends and family. For us though, this is just the beginning. With the ability to improve full network connection for all traffic, we unlock a whole new world of optimizations and security improvements which were simply impossible before. We couldn’t be more excited to experiment, play, and eventually release, all sorts of new WARP and WARP+ features.

Cloudflare’s mission is to help build a better Internet. If we are willing to experiment and solve hard technical problems together we believe we can help make the future of the Internet better than the Internet of today, and we are all grateful to play a part in that. Thank you for trusting us with your Internet connection.

WARP was built by Vlad Krasnov, Chris Branch, Dane Knecht, Naga Tripirineni, Andrew Plunk, Adam Schwartz, Irtefa, and intern Michelle Chen with support from members of our Austin, San Francisco, Champaign, London, Warsaw, and Lisbon offices.

Inside the Web Browser’s Performance API

Post Syndicated from Young Park original https://blog.cloudflare.com/browser-performance-api/

Building a beautiful, feature-rich website is easier than ever before. Not long ago, you’d have to fire up a text editor and hand-craft a lot of HTML, CSS, and JavaScript. Today, you can use WYSIWYG tools and third-party libraries that make development much simpler. The flip side of this is that it can be hard to see everything that’s going into your website — and the performance can suffer.

The good news is that modern web browsers expose lots of performance data that can help you understand how your web page performs. With the launch of Browser Insights today, we can analyze the performance from the perspective of the web browser and what the end user actually experiences. In this post, we’ll dive into how we think about performance and utilize the timing APIs in the web browser.

How web browsers measure performance

In the old days, the only way for a developer to profile performance was to intercept requests and measure the time from the beginning of the page load until the end of the load event.

Today, we can use Web APIs that are supported by modern browsers. This is part of the web standard called the Performance API. The Performance API consists of a collection of individual APIs that include:

  • Navigation Timing (for timing information that relates to the page and navigation)
  • Resource Timing (for timing data regarding the loading of an application’s resources)
  • Paint Timing (which provides timing information about paint operations during page construction)

In this blog post, we will primarily focus on the Navigation Timing API.

Inside the Performance API

To see what’s collected with the Performance API, you can open the Developer Tools in the Chrome browser and type ‘performance’ into the Console tab (or type performance.timing to get direct access to the PerformanceTiming attribute).

Try expanding the Performance object by clicking on the arrow before the label, and then expand ‘timing’. This is PerformanceTiming, which includes all the timings related to the current page load as UNIX epoch timestamps (in milliseconds). The timing attributes are not shown in the order in which they occur during the load, so for a better understanding, let’s look at the illustration provided by the W3C.

Image from https://www.w3.org/TR/navigation-timing/

As we can see from the diagram, each element (represented as a box above), in order from left to right, represents a stage in the navigation flow of the page load. Each stage has attributes marking its start and end (and some have multiple attributes!) so that we can measure the elapsed time for each one. For example, to get the Request time, you could type a command like the one shown below into the console; in this case it comes out to about 60 milliseconds.

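As a sketch of the kind of command that would be (using the Request metric defined later in this post, responseStart minus requestStart), you can compute it yourself in the console:

// Request time: elapsed between sending the request and the first response byte.
// performance.timing is the legacy PerformanceTiming interface discussed above.
const t = performance.timing;
console.log(`Request time: ${t.responseStart - t.requestStart} ms`);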

How Cloudflare uses performance data

Once your website is proxied through Cloudflare and Browser Insights is enabled, we write and inject a JavaScript beacon into the web page. Our beacon collects metrics from the Performance API to send to our edge, where it can be used to understand where your website is slowing down or having any network problems. The reported data is shown in the Cloudflare dashboard on the Speed Page showing as averages of each timing metric:

[Screenshot: averages of each timing metric on the Speed page of the Cloudflare dashboard]

The metrics we surface are:

  • DNS (domainLookupEnd – domainLookupStart): How long the DNS query takes. This could appear as zero if the connection is reused or the content was stored in the local cache (memory or disk).
  • TCP (connectEnd – connectStart): How long it takes to establish a TCP connection to the server. If the site is served over HTTPS, this process includes TLS negotiation time.
  • Request (responseStart – requestStart): The time elapsed between making an HTTP request and receiving the first byte of the response.
  • Response (responseEnd – responseStart): The time elapsed between the first byte and the last byte of the response received. You can think of this as a resource download time.
  • Processing (domComplete – domLoading): How long it took to render the page. If this number is big, you can optimize your document architecture and resource sizes, or configure settings under the Speed page, such as Auto Minify of the source code. This document processing can be drilled down further with domInteractive, domContentLoadedEventStart, domContentLoadedEventEnd, and domComplete. We plan to provide more detailed analytics on this later on.
  • Load Event (loadEventEnd – loadEventStart): When the browser finishes loading its document and resources, it triggers a `load` event. This duration may be helpful to you if you have additional functions or any logic for the load event.
  • Total Time: The sum of the timing metrics shown on the graph.

If you see any spikes or an unusual shape in the stacked line chart, you can start investigating each element to see what is causing the problem.
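
If you want to eyeball the same numbers for a single page load yourself, a quick sketch using the PerformanceTiming attributes above (run it in the browser console after the page has finished loading) looks like this:

// Rough, illustrative versions of the metrics listed above, computed from the
// legacy performance.timing attributes. All values are in milliseconds.
const timing = performance.timing;
const metrics = {
  dns: timing.domainLookupEnd - timing.domainLookupStart,
  tcp: timing.connectEnd - timing.connectStart, // includes TLS for HTTPS
  request: timing.responseStart - timing.requestStart,
  response: timing.responseEnd - timing.responseStart,
  processing: timing.domComplete - timing.domLoading,
  loadEvent: timing.loadEventEnd - timing.loadEventStart,
};
console.table(metrics);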

For more about how to use Browser Insights, see our announcement blog post.

What’s next

In this blog post, we’ve focused on the Navigation Timing API, because it’s at the heart of our first version of Browser Insights. In the near future, we plan to incorporate metrics from some of the other APIs. For example, we can break down some of the longer timings by looking at individual resource loads, and pointing out what’s taking longer. In addition to that, we plan to track JavaScript errors, provide a way to measure A/B performance, set up monitoring/alerting based on the metrics, and so on. So stay tuned!

Details of the Cloudflare outage on July 2, 2019

Post Syndicated from John Graham-Cumming original https://blog.cloudflare.com/details-of-the-cloudflare-outage-on-july-2-2019/

Almost nine years ago, Cloudflare was a tiny company and I was a customer not an employee. Cloudflare had launched a month earlier and one day alerting told me that my little site, jgc.org, didn’t seem to have working DNS any more. Cloudflare had pushed out a change to its use of Protocol Buffers and it had broken DNS.

I wrote to Matthew Prince directly with an email titled “Where’s my dns?” and he replied with a long, detailed, technical response (you can read the full email exchange here) to which I replied:

From: John Graham-Cumming
Date: Thu, Oct 7, 2010 at 9:14 AM
Subject: Re: Where's my dns?
To: Matthew Prince

Awesome report, thanks. I'll make sure to call you if there's a
problem.  At some point it would probably be good to write this up as
a blog post when you have all the technical details because I think
people really appreciate openness and honesty about these things.
Especially if you couple it with charts showing your post launch
traffic increase.

I have pretty robust monitoring of my sites so I get an SMS when
anything fails.  Monitoring shows I was down from 13:03:07 to
14:04:12.  Tests are made every five minutes.

It was a blip that I'm sure you'll get past.  But are you sure you
don't need someone in Europe? :-)

To which he replied:

From: Matthew Prince
Date: Thu, Oct 7, 2010 at 9:57 AM
Subject: Re: Where's my dns?
To: John Graham-Cumming

Thanks. We've written back to everyone who wrote in. I'm headed in to
the office now and we'll put something on the blog or pin an official
post to the top of our bulletin board system. I agree 100%    
transparency is best.

And so, today, as an employee of a much, much larger Cloudflare, I get to be the one who writes transparently about a mistake we made, its impact, and what we are doing about it.

The events of July 2

On July 2, we deployed a new rule in our WAF Managed Rules that caused CPUs to become exhausted on every CPU core that handles HTTP/HTTPS traffic on the Cloudflare network worldwide. We are constantly improving WAF Managed Rules to respond to new vulnerabilities and threats. In May, for example, we used the speed with which we can update the WAF to push a rule to protect against a serious SharePoint vulnerability. Being able to deploy rules quickly and globally is a critical feature of our WAF.

Unfortunately, last Tuesday’s update contained a regular expression that backtracked enormously and exhausted CPU used for HTTP/HTTPS serving. This brought down Cloudflare’s core proxying, CDN and WAF functionality. The following graph shows CPUs dedicated to serving HTTP/HTTPS traffic spiking to nearly 100% usage across the servers in our network.

CPU utilization in one of our PoPs during the incident

This resulted in our customers (and their customers) seeing a 502 error page when visiting any Cloudflare domain. The 502 errors were generated by the front line Cloudflare web servers that still had CPU cores available but were unable to reach the processes that serve HTTP/HTTPS traffic.

We know how much this hurt our customers. We’re ashamed it happened. It also had a negative impact on our own operations while we were dealing with the incident.

It must have been incredibly stressful, frustrating and frightening if you were one of our customers. It was even more upsetting because we haven’t had a global outage for six years.

The CPU exhaustion was caused by a single WAF rule that contained a poorly written regular expression that ended up creating excessive backtracking. The regular expression that was at the heart of the outage is (?:(?:\"|'|\]|\}|\\|\d|(?:nan|infinity|true|false|null|undefined|symbol|math)|\`|\-|\+)+[)]*;?((?:\s|-|~|!|{}|\|\||\+)*.*(?:.*=.*)))

Although the regular expression itself is of interest to many people (and is discussed more below), the real story of how the Cloudflare service went down for 27 minutes is much more complex than “a regular expression went bad”. We’ve taken the time to write out the series of events that lead to the outage and kept us from responding quickly. And, if you want to know more about regular expression backtracking and what to do about it, then you’ll find it in an appendix at the end of this post.
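
If you want a feel for what backtracking looks like in practice, here is a tiny sketch you can run in a JavaScript or TypeScript console. It uses the textbook (a+)+ pattern rather than the actual WAF rule, but it shows the same failure mode: a pattern with nested quantifiers that takes dramatically longer as a non-matching input grows.

// Nested quantifiers force the engine to try a huge number of ways of splitting
// the input before it can conclude there is no match.
const evil = /^(a+)+$/;

for (const n of [20, 22, 24, 26]) {
  const input = "a".repeat(n) + "b"; // the trailing "b" guarantees the match fails
  const start = performance.now();
  evil.test(input);
  console.log(`n=${n}: ${(performance.now() - start).toFixed(1)} ms`);
}

Each additional character roughly doubles the work; the problematic part of the rule above is similar in spirit, with the nested .*(?:.*=.*) near its end.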

What happened

Let’s begin with the sequence of events. All times in this blog are UTC.

At 13:42 an engineer working on the firewall team deployed a minor change to the rules for XSS detection via an automatic process. This generated a Change Request ticket. We use Jira to manage these tickets.

Three minutes later the first PagerDuty page went out indicating a fault with the WAF. This was a synthetic test that checks the functionality of the WAF (we have hundreds of such tests) from outside Cloudflare to ensure that it is working correctly. This was rapidly followed by pages indicating many other end-to-end tests of Cloudflare services failing, a global traffic drop alert, widespread 502 errors and then many reports from our points-of-presence (PoPs) in cities worldwide indicating there was CPU exhaustion.

Some of these alerts hit my watch and I jumped out of the meeting I was in and was on my way back to my desk when a leader in our Solutions Engineering group told me we had lost 80% of our traffic. I ran over to SRE where the team was debugging the situation. In the initial moments of the outage there was speculation it was an attack of some type we’d never seen before.

Cloudflare’s SRE team is distributed around the world, with continuous, around-the-clock coverage. Alerts like these, the vast majority of which note very specific issues of limited scope in localized areas, are monitored in internal dashboards and addressed many times every day. This pattern of pages and alerts, however, indicated that something gravely serious had happened, and SRE immediately declared a P0 incident and escalated to engineering leadership and systems engineering.

The London engineering team was at that moment in our main event space listening to an internal tech talk. The talk was interrupted and everyone assembled in a large conference room, with others dialing in. This wasn’t a normal problem that SRE could handle alone; it needed every relevant team online at once.

At 14:00 the WAF was identified as the component causing the problem and an attack was dismissed as a possibility. The Performance Team pulled live CPU data from a machine that clearly showed the WAF was responsible. Another team member used strace to confirm. Another team saw error logs indicating the WAF was in trouble. At 14:02 the entire team looked at me when it was proposed that we use a ‘global kill’, a mechanism built into Cloudflare to disable a single component worldwide.

But getting to the global WAF kill was another story. Things stood in our way. We use our own products and with our Access service down we couldn’t authenticate to our internal control panel (and once we were back we’d discover that some members of the team had lost access because of a security feature that disables their credentials if they don’t use the internal control panel frequently).

And we couldn’t get to other internal services like Jira or the build system. To get to them we had to use a bypass mechanism that wasn’t frequently used (another thing to drill on after the event). Eventually, a team member executed the global WAF kill at 14:07 and by 14:09 traffic levels and CPU were back to expected levels worldwide. The rest of Cloudflare’s protection mechanisms continued to operate.

Then we moved on to restoring the WAF functionality. Because of the sensitivity of the situation we performed both negative tests (asking ourselves “was it really that particular change that caused the problem?”) and positive tests (verifying the rollback worked) in a single city using a subset of traffic after removing our paying customers’ traffic from that location.

At 14:52 we were 100% satisfied that we understood the cause and had a fix in place, and the WAF was re-enabled globally.

How Cloudflare operates

Cloudflare has a team of engineers who work on our WAF Managed Rules product; they are constantly working to improve detection rates, lower false positives, and respond rapidly to new threats as they emerge. In the last 60 days, 476 change requests have been handled for the WAF Managed Rules (averaging one every 3 hours).

This particular change was to be deployed in “simulate” mode where real customer traffic passes through the rule but nothing is blocked. We use that mode to test the effectiveness of a rule and measure its false positive and false negative rate. But even in the simulate mode the rules actually need to execute and in this case the rule contained a regular expression that consumed excessive CPU.
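
To make the point concrete, here is a minimal sketch of the difference between simulate and block modes. It is written in Python for illustration (the WAF itself is Lua, as discussed below) and the function and mode names are ours, not the WAF’s; the thing to notice is that the regular expression runs against the request either way.

    # A minimal sketch, in Python rather than the WAF's Lua, of why "simulate" mode
    # still costs CPU: the rule's expression runs against the request either way,
    # only the action taken on a match changes. Names are illustrative.
    import re

    def apply_rule(rule_regex, request_body, mode):
        match = re.search(rule_regex, request_body)   # executes regardless of mode
        if match is None:
            return "pass"
        if mode == "simulate":
            return "log-only"    # recorded to measure false positives/negatives
        return "block"

    print(apply_rule(r"<script>", "GET /?q=<script>alert(1)</script>", "simulate"))  # log-only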

As can be seen from the Change Request above there’s a deployment plan, a rollback plan and a link to the internal Standard Operating Procedure (SOP) for this type of deployment. The SOP for a rule change specifically allows it to be pushed globally. This is very different from all the software we release at Cloudflare where the SOP first pushes software to an internal dogfooding network point of presence (PoP) (which our employees pass through), then to a small number of customers in an isolated location, followed by a push to a large number of customers and finally to the world.

The process for a software release looks like this: We use git internally via BitBucket. Engineers working on changes push code, which is built by TeamCity, and when the build passes, reviewers are assigned. Once a pull request is approved the code is built and the test suite runs (again).

If the build and tests pass then a Change Request Jira is generated and the change has to be approved by the relevant manager or technical lead. Once approved, deployment to what we call the “animal PoPs” occurs: DOG, PIG, and the Canaries.

The DOG PoP is a Cloudflare PoP (just like any of our cities worldwide) but it is used only by Cloudflare employees. This dogfooding PoP enables us to catch problems early before any customer traffic has touched the code. And it frequently does.

If the DOG test passes successfully, code goes to PIG (as in “Guinea Pig”). This is a Cloudflare PoP where a small subset of traffic from non-paying customers passes through the new code.

If that is successful the code moves to the Canaries. We have three Canary PoPs spread across the world, and we run paying and non-paying customer traffic through them on the new code as a final check for errors.

Cloudflare software release process

Once successful in Canary the code is allowed to go live. The entire DOG, PIG, Canary, Global process can take hours or days to complete, depending on the type of code change. The diversity of Cloudflare’s network and customers allows us to test code thoroughly before a release is pushed to all our customers globally. But, by design, the WAF doesn’t use this process because of the need to respond rapidly to threats.
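
Purely as an illustration (the stage names are real, but the code is a hypothetical sketch of the idea rather than our deployment tooling), the staged rollout can be thought of as an ordered promotion pipeline in which a release only advances while the previous stage stays healthy.

    # Hypothetical sketch of a staged rollout pipeline; not Cloudflare's tooling.
    ROLLOUT_STAGES = ["DOG", "PIG", "CANARY-1", "CANARY-2", "CANARY-3", "GLOBAL"]

    def roll_out(release, deploy, healthy):
        """Promote a release one stage at a time, halting on the first unhealthy stage."""
        for stage in ROLLOUT_STAGES:
            deploy(release, stage)
            if not healthy(stage):
                print(f"halting and rolling back {release} at {stage}")
                return False
        return True

    # Example wiring with stand-in functions:
    roll_out("waf-rules-v123",
             deploy=lambda release, stage: print(f"deploying {release} to {stage}"),
             healthy=lambda stage: True)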

WAF Threats

In the last few years we have seen a dramatic increase in vulnerabilities in common applications. This has happened due to the increased availability of software testing tools, such as fuzzing (we just posted a new blog on fuzzing here).

Source: https://cvedetails.com/

What is commonly seen is that a Proof of Concept (PoC) is created and often published on GitHub quickly, so that teams running and maintaining applications can test to make sure they have adequate protections. Because of this, it’s imperative that Cloudflare is able to react as quickly as possible to new attacks to give our customers a chance to patch their software.

A great example of how Cloudflare proactively provided this protection was through the deployment of our protections against the SharePoint vulnerability in May (blog here). Within a short space of time from the publicised announcements, we saw a huge spike in attempts to exploit our customers’ SharePoint installations. Our team continuously monitors for new threats and writes rules to mitigate them on behalf of our customers.

The specific rule that caused last Tuesday’s outage was targeting Cross-site scripting (XSS) attacks. These too have increased dramatically in recent years.

Source: https://cvedetails.com/

The standard procedure for a WAF Managed Rules change indicates that Continuous Integration (CI) tests must pass prior to a global deploy. That happened normally last Tuesday and the rules were deployed. At 13:31 an engineer on the team merged a Pull Request containing the change after it had been approved.

At 13:37 TeamCity built the rules and ran the tests, giving it the green light. The WAF test suite tests that the core functionality of the WAF works and consists of a large collection of unit tests for individual matching functions. After the unit tests run, the individual WAF rules are tested by executing a huge collection of HTTP requests against the WAF. These HTTP requests are designed to test requests that should be blocked by the WAF (to make sure it catches attacks) and those that should be let through (to make sure it isn’t over-blocking and creating false positives). What it didn’t do was test for runaway CPU utilization by the WAF, and examining the log files from previous WAF builds shows that no increase in test suite run time was observed with the rule that would ultimately cause CPU exhaustion on our edge.
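
One way to close that gap, and essentially what the performance profiling work described later in this post adds, is a per-rule CPU budget in the test suite. The sketch below is illustrative only: it assumes each rule’s expression can be run in isolation against sample request bodies, and the function names and budget are made up.

    # Hypothetical sketch of a per-rule performance check; not the real WAF test suite.
    import re
    import time

    CPU_BUDGET_SECONDS = 0.05     # generous budget per rule per sample request

    def rule_is_within_budget(rule_regex, sample_requests):
        compiled = re.compile(rule_regex)
        for body in sample_requests:
            start = time.perf_counter()
            compiled.search(body)
            if time.perf_counter() - start > CPU_BUDGET_SECONDS:
                return False      # fail the build: the rule is too expensive
        return True

    # A rule like r".*.*=.*;" run against a long input without a semicolon would
    # blow this budget on a backtracking engine; a cheap rule passes easily.
    assert rule_is_within_budget(r"=", ["x=" + "x" * 10000])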

With the tests passing, TeamCity automatically began deploying the change at 13:42.

Quicksilver

Because WAF rules are required to address emergent threats, they are deployed using our Quicksilver distributed key-value (KV) store that can push changes globally in seconds. This technology is used by all our customers when making configuration changes in our dashboard or via the API and is the backbone of our service’s ability to respond to changes very, very rapidly.

We haven’t really talked about Quicksilver much. We previously used Kyoto Tycoon as a globally distributed key-value store, but we ran into operational issues with it and wrote our own KV store that is replicated across our more than 180 cities. Quicksilver is how we push changes to customer configuration, update WAF rules, and distribute JavaScript code written by customers using Cloudflare Workers.

From clicking a button in the dashboard or making an API call to change configuration, to that change coming into effect, takes seconds, globally. Customers have come to love this high-speed configurability. And with Workers they expect near-instant, global software deployment. On average Quicksilver distributes about 350 changes per second.

And Quicksilver is very fast. On average we hit a p99 of 2.29s for a change to be distributed to every machine worldwide. Usually, this speed is a great thing. It means that when you enable a feature or purge your cache you know that it’ll be live globally nearly instantly. When you push code with Cloudflare Workers it’s pushed out at the same speed. This is part of the promise of Cloudflare: fast updates when you need them.
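
We have not published Quicksilver’s API, so the sketch below is purely conceptual and every name in it is illustrative: the programming model is that of a replicated key-value store in which a single write at the control plane becomes visible at every PoP within seconds.

    # Conceptual sketch only; these names are illustrative, not Quicksilver's API.
    class ReplicatedKV:
        def __init__(self, pop_names):
            # One local copy of the store per PoP.
            self.replicas = {pop: {} for pop in pop_names}

        def publish(self, key, value):
            # A write at the control plane is fanned out to every PoP.
            # For Quicksilver the p99 for global propagation is about 2.29 seconds.
            for store in self.replicas.values():
                store[key] = value

        def read(self, pop, key):
            return self.replicas[pop].get(key)

    kv = ReplicatedKV(["SJC", "LHR", "NRT"])
    kv.publish("waf:rule:example", {"action": "simulate", "expression": "..."})
    print(kv.read("LHR", "waf:rule:example"))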

However, in this case, that speed meant that a change to the rules went global in seconds. You may notice that the WAF code uses Lua. Cloudflare makes use of Lua extensively in production and details of the Lua in the WAF have been discussed before. The Lua WAF uses PCRE internally; PCRE uses backtracking for matching and has no mechanism to protect against a runaway expression. More on that, and what we’re doing about it, below.
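
The protection referred to in the list of contributing factors below is, in essence, a cap on how much work a single expression is allowed to do. PCRE exposes match and recursion limits for exactly this purpose; the sketch below instead illustrates the simpler wall-clock variant of the idea in Python, and is not the WAF’s actual mechanism.

    # Illustrative sketch of a "runaway regex" guard; not the WAF's real protection.
    # Rather than limiting the engine's internal steps, it refuses to wait more than
    # a fixed time for any single match and gives up instead of burning a CPU core.
    import re
    from multiprocessing import Process, Queue

    def _match_worker(pattern, text, queue):
        queue.put(bool(re.search(pattern, text)))

    def guarded_search(pattern, text, timeout_seconds=0.1):
        queue = Queue()
        worker = Process(target=_match_worker, args=(pattern, text, queue))
        worker.start()
        worker.join(timeout_seconds)
        if worker.is_alive():
            worker.terminate()
            worker.join()
            return None          # signal "rule skipped: too expensive"
        return queue.get()

    if __name__ == "__main__":
        print(guarded_search(r"=x", "x=x"))                      # True
        print(guarded_search(r".*.*=.*;", "x=" + "x" * 50000))   # None: match abandoned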

Everything that occurred up to the point the rules were deployed was done “correctly”: a pull request was raised, it was approved, CI/CD built the code and tested it, a change request was submitted with an SOP detailing rollout and rollback, and the rollout was executed.

Cloudflare WAF deployment process

What went wrong

As noted, we deploy dozens of new rules to the WAF every week, and we have numerous systems in place to prevent any negative impact of that deployment. So when things do go wrong, it’s generally the unlikely convergence of multiple causes. Getting to a single root cause, while satisfying, may obscure the reality. Here are the multiple vulnerabilities that converged to get to the point where Cloudflare’s service for HTTP/HTTPS went offline.

  1. An engineer wrote a regular expression that could easily backtrack enormously.
  2. A protection that would have helped prevent excessive CPU use by a regular expression was removed by mistake during a refactoring of the WAF weeks prior—a refactoring that was part of making the WAF use less CPU.
  3. The regular expression engine being used didn’t have complexity guarantees.
  4. The test suite didn’t have a way of identifying excessive CPU consumption.
  5. The SOP allowed a non-emergency rule change to go globally into production without a staged rollout.
  6. The rollback plan required running the complete WAF build twice, which took too long.
  7. The first alert for the global traffic drop took too long to fire.
  8. We didn’t update our status page quickly enough.
  9. We had difficulty accessing our own systems because of the outage, and the bypass procedure wasn’t one we had trained on frequently.
  10. SREs had lost access to some systems because their credentials had timed out for security reasons.
  11. Our customers were unable to access the Cloudflare Dashboard or API because they pass through the Cloudflare edge.

What’s happened since last Tuesday

Firstly, we stopped all release work on the WAF completely and are doing the following:

  1. Re-introducing the excessive CPU usage protection that got removed. (Done)
  2. Manually inspecting all 3,868 rules in the WAF Managed Rules to find and correct any other instances of possible excessive backtracking. (Inspection complete)
  3. Introducing performance profiling for all rules to the test suite. (ETA: July 19)
  4. Switching to either the re2 or Rust regex engine, both of which have run-time guarantees. (ETA: July 31)
  5. Changing the SOP to do staged rollouts of rules in the same manner used for other software at Cloudflare, while retaining the ability to do emergency global deployment for active attacks.
  6. Putting in place an emergency ability to take the Cloudflare Dashboard and API off Cloudflare’s edge.
  7. Automating updates to the Cloudflare Status page.

In the longer term we are moving away from the Lua WAF that I wrote years ago. We are porting the WAF to use the new firewall engine. This will make the WAF both faster and add yet another layer of protection.

Conclusion

This was an upsetting outage for our customers and for the team. We responded quickly to correct the situation and are now correcting the process deficiencies that allowed the outage to occur. We are also going deeper to protect against any further possible problems with the way we use regular expressions by replacing the underlying technology we use.

We are ashamed of the outage and sorry for the impact on our customers. We believe the changes we’ve made mean such an outage will never recur.

Appendix: About Regular Expression Backtracking

To fully understand how (?:(?:\"|'|\]|\}|\\|\d|(?:nan|infinity|true|false|null|undefined|symbol|math)|\`|\-|\+)+[)]*;?((?:\s|-|~|!|{}|\|\||\+)*.*(?:.*=.*)))  caused CPU exhaustion you need to understand a little about how a standard regular expression engine works. The critical part is .*(?:.*=.*). The (?: and matching ) are a non-capturing group (i.e. the expression inside the parentheses is grouped together as a single expression).

For the purposes of the discussion of why this pattern causes CPU exhaustion we can safely ignore it and treat the pattern as .*.*=.*. When reduced to this, the pattern obviously looks unnecessarily complex; but what’s important is that any “real-world” expression (like the complex ones in our WAF rules) that asks the engine to “match anything followed by anything” can lead to catastrophic backtracking. Here’s why.

In a regular expression, . means match a single character, .* means match zero or more characters greedily (i.e. match as much as possible) so .*.*=.* means match zero or more characters, then match zero or more characters, then find a literal = sign, then match zero or more characters.

Consider the test string x=x. This will match the expression .*.*=.*. The .*.* before the equal can match the first x (one of the .* matches the x, the other matches zero characters). The .* after the = matches the final x.

It takes 23 steps for this match to happen. The first .* in .*.*=.* acts greedily and matches the entire x=x string. The engine moves on to consider the next .*. There are no more characters left to match so the second .* matches zero characters (that’s allowed). Then the engine moves on to the =. As there are no characters left to match (the first .* having consumed all of x=x) the match fails.

At this point the regular expression engine backtracks. It returns to the first .* and matches it against x= (instead of x=x) and then moves onto the second .*. That .* matches the second x and now there are no more characters left to match. So when the engine tries to match the = in .*.*=.* the match fails. The engine backtracks again.

This time it backtracks so that the first .* is still matching x= but the second .* no longer matches x; it matches zero characters. The engine then moves on to try to find the literal = in the .*.*=.* pattern but it fails (because it was already matched against the first .*). The engine backtracks again.

This time the first .* matches just the first x. But the second .* acts greedily and matches =x. You can see what’s coming. When it tries to match the literal = it fails and backtracks again.

The first .* still matches just the first x. Now the second .* matches just =. But, you guessed it, the engine can’t match the literal = because the second .* matched it. So the engine backtracks again. Remember, this is all to match a three character string.

Finally, with the first .* matching just the first x and the second .* matching zero characters, the engine is able to match the literal = in the expression with the = in the string. It moves on and the final .* matches the final x.

23 steps to match x=x. Here’s a short video of that using the Perl Regexp::Debugger showing the steps and backtracking as they occur.

That’s a lot of work but what happens if the string is changed from x=x to x=xx? This time it takes 33 steps to match. And if the input is x=xxx it takes 45. That’s not linear. Here’s a chart showing matching from x=x to x=xxxxxxxxxxxxxxxxxxxx (20 x’s after the =). With 20 x’s after the = the engine takes 555 steps to match! (Worse, if the x= was missing, so the string was just 20 x’s, the engine would take 4,067 steps to find the pattern doesn’t match.)

This video shows all the backtracking necessary to match x=xxxxxxxxxxxxxxxxxxxx:

That’s bad because as the input size goes up the match time goes up super-linearly. But things could have been even worse with a slightly different regular expression. Suppose it had been .*.*=.*; (i.e. there’s a literal semicolon at the end of the pattern). This could easily have been written to try to match an expression like foo=bar;.

This time the backtracking would have been catastrophic. To match x=x takes 90 steps instead of 23. And the number of steps grows very quickly. Matching x= followed by 20 x’s takes 5,353 steps. Here’s the corresponding chart. Look carefully at the Y-axis values compared to the previous chart.

To complete the picture here are all 5,353 steps of failing to match x=xxxxxxxxxxxxxxxxxxxx against .*.*=.*;
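
If you would like to reproduce the shape of these numbers yourself, here is a deliberately naive backtracking matcher (illustrative Python, supporting only literal characters and .*) that counts its matching attempts. Its absolute step counts differ from the Perl debugger figures quoted above, because every engine counts differently, but the behaviour is the same: the failing semicolon variant always does more work, and the work grows super-linearly with the input length.

    # A deliberately naive backtracking matcher, for illustration only.
    # It supports just two kinds of token: a literal character and ".*".
    def tokenize(pattern):
        tokens, i = [], 0
        while i < len(pattern):
            if pattern[i:i + 2] == ".*":
                tokens.append(".*")
                i += 2
            else:
                tokens.append(pattern[i])
                i += 1
        return tokens

    def count_steps(pattern, text):
        """Return (matched, steps) for an anchored, greedy, backtracking match."""
        tokens = tokenize(pattern)
        steps = 0

        def match(ti, si):
            nonlocal steps
            steps += 1
            if ti == len(tokens):
                return si == len(text)
            tok = tokens[ti]
            if tok == ".*":
                # Greedy: try the longest possible match first, then backtrack.
                for end in range(len(text), si - 1, -1):
                    if match(ti + 1, end):
                        return True
                return False
            if si < len(text) and text[si] == tok:
                return match(ti + 1, si + 1)
            return False

        return match(0, 0), steps

    for n in (1, 5, 10, 20):
        s = "x=" + "x" * n
        ok1, steps1 = count_steps(".*.*=.*", s)
        ok2, steps2 = count_steps(".*.*=.*;", s)
        print(f"{len(s):3} chars:  .*.*=.*  matched={ok1} steps={steps1:5}   "
              f".*.*=.*; matched={ok2} steps={steps2:5}")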

Using lazy rather than greedy matches helps control the amount of backtracking that occurs in this case. If the original expression is changed to .*?.*?=.*? then matching x=x takes 11 steps (instead of 23) and so does matching x=xxxxxxxxxxxxxxxxxxxx. That’s because the ? after the .* instructs the engine to match the smallest number of characters first before moving on.

But laziness isn’t the total solution to this backtracking behaviour. Changing the catastrophic example .*.*=.*; to .*?.*?=.*?; doesn’t change its run time at all: x=x still takes 90 steps and x= followed by 20 x’s still takes 5,353 steps.

The only real solution, short of fully re-writing the pattern to be more specific, is to move away from a regular expression engine with this backtracking mechanism. Which we are doing within the next few weeks.

The solution to this problem has been known since 1968 when Ken Thompson wrote a paper titled “Programming Techniques: Regular expression search algorithm”. The paper describes a mechanism for converting a regular expression into an NFA (non-deterministic finite automaton) and then following the state transitions in the NFA using an algorithm that executes in time linear in the size of the string being matched against.

Thompson’s paper doesn’t actually talk about NFA but the linear time algorithm is clearly explained and an ALGOL-60 program that generates assembly language code for the IBM 7094 is presented. The implementation may be arcane but the idea it presents is not.

Here’s what the .*.*=.* regular expression would look like when diagrammed in a similar manner to the pictures in Thompson’s paper.

Figure 0 has five states starting at 0. There are three loops which begin with the states 1, 2 and 3. These three loops correspond to the three .* in the regular expression. The three lozenges with dots in them match a single character. The lozenge with an = sign in it matches the literal = sign. State 4 is the ending state; if it is reached, the regular expression has matched.

To see how such a state diagram can be used to match the regular expression .*.*=.* we’ll examine matching the string x=x. The program starts in state 0 as shown in Figure 1.

The key to making this algorithm work is that the state machine is in multiple states at the same time. The NFA will take every transition it can, simultaneously.

Even before it reads any input, it immediately transitions to both states 1 and 2 as shown in Figure 2.

Looking at Figure 2 we can see what happens when it considers the first x in x=x. The x can match the top dot by transitioning from state 1 and back to state 1. Or the x can match the dot below it by transitioning from state 2 and back to state 2.

So after matching the first x in x=x the states are still 1 and 2. It’s not possible to reach state 3 or 4 because a literal = sign is needed.

Next the algorithm considers the = in x=x. Much like the x before it, it can be matched by either of the top two loops transitioning from state 1 to state 1 or state 2 to state 2, but additionally the literal = can be matched and the algorithm can transition state 2 to state 3 (and immediately state 4). That’s illustrated in Figure 3.

Next the algorithm reaches the final x in x=x. From states 1 and 2 the same transitions are possible back to states 1 and 2. From state 3 the x can match the dot on the right and transition back to state 3.

At that point every character of x=x has been considered and because state 4 has been reached the regular expression matches that string. Each character was processed once so the algorithm was linear in the length of the input string. And no backtracking was needed.

It might also be obvious that once state 4 was reached (after x= was matched) the regular expression had matched and the algorithm could terminate without considering the final x at all.

This algorithm is linear in the size of its input.
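
As a final illustration, here is a small Python sketch of the simulation just described, with the five states of Figure 0 hard-coded (the figures themselves are not reproduced here, and the code is ours, not Thompson’s). Every input character is examined exactly once, no matter how long the string is, and there is no backtracking.

    # A minimal sketch of the linear-time NFA simulation for .*.*=.* (Figure 0).
    ANY = object()    # wildcard label for the "dot" lozenges

    EPSILON = {0: {1}, 1: {2}, 3: {4}}    # spontaneous transitions
    MOVES = {                             # (state, label) -> next states
        (1, ANY): {1},                    # first .* loop
        (2, ANY): {2},                    # second .* loop
        (2, "="): {3},                    # the literal = sign
        (3, ANY): {3},                    # third .* loop
    }
    ACCEPT = 4

    def closure(states):
        """Follow epsilon transitions until no new states appear."""
        stack, seen = list(states), set(states)
        while stack:
            s = stack.pop()
            for t in EPSILON.get(s, ()):
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen

    def matches(text):
        current = closure({0})            # in multiple states at the same time
        for ch in text:
            nxt = set()
            for s in current:
                nxt |= MOVES.get((s, ANY), set())
                nxt |= MOVES.get((s, ch), set())
            current = closure(nxt)        # each character is examined exactly once
        return ACCEPT in current

    print(matches("x=x"))                 # True
    print(matches("xxx"))                 # False: no literal = sign
    print(matches("x=" + "x" * 100000))   # still fast: linear in the input length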

Détails de la panne Cloudflare du 2 juillet 2019

Post Syndicated from John Graham-Cumming original https://blog.cloudflare.com/details-of-the-cloudflare-outage-on-july-2-2019-fr/

Il y a près de neuf ans, Cloudflare était une toute petite entreprise dont j’étais le client, et non l’employé. Cloudflare était sorti depuis un mois et un jour, une notification m’alerte que mon petit site,  jgc.org, semblait ne plus disposer d’un DNS fonctionnel. Cloudflare avait effectué une modification dans l’utilisation de Protocol Buffers qui avait endommagé le DNS.

J’ai contacté directement Matthew Prince avec un e-mail intitulé « Où est mon DNS ? » et il m’a envoyé une longue réponse technique et détaillée (vous pouvez lire tous nos échanges d’e-mails ici) à laquelle j’ai répondu :

De: John Graham-Cumming
Date: Jeudi 7 octobre 2010 à 09:14
Objet: Re: Où est mon DNS?
À: Matthew Prince

Superbe rapport, merci. Je veillerai à vous appeler s’il y a un
problème.  Il serait peut-être judicieux, à un certain moment, d’écrire tout cela dans un article de blog, lorsque vous aurez tous les détails techniques, car je pense que les gens apprécient beaucoup la franchise et l’honnêteté sur ce genre de choses. Surtout si vous y ajoutez les tableaux qui montrent l’augmentation du trafic suite à votre lancement.

Je dispose d’un système robuste de surveillance de mes sites qui m’envoie un SMS en cas de problème.  La surveillance montre une panne de 13:03:07 à 14:04:12.  Des tests sont réalisés toutes les cinq minutes.

Un accident de parcours dont vous vous relèverez certainement.  Mais ne pensez-vous pas qu’il vous faudrait quelqu’un en Europe? :-)

Ce à quoi il a répondu :

De: Matthew Prince
Date: Jeudi 7 octobre 2010 à 09:57
Objet: Re: Où est mon DNS?
À: John Graham-Cumming

Merci. Nous avons répondu à tous ceux qui nous ont contacté. Je suis en route vers le bureau et nous allons publier un article sur le blog ou épingler un post officiel en haut de notre panneau d’affichage. Je suis parfaitement d’accord, la transparence est nécessaire.

Aujourd’hui, en tant qu’employé d’un Cloudflare bien plus grand, je vous écris de manière transparente à propos d’une erreur que nous avons commise, de son impact et de ce que nous faisons pour régler le problème.

Les événements du 2 juillet

Le 2 juillet, nous avons déployé une nouvelle règle dans nos règles gérées du pare-feu applicatif Web (WAF) qui ont engendré un surmenage des processeurs sur chaque cœur de processeur traitant le trafic HTTP/HTTPS sur le réseau Cloudflare à travers le monde. Nous améliorons constamment les règles gérées du WAF pour répondre aux nouvelles menaces et vulnérabilités. En mai, par exemple, nous avons utilisé la vitesse avec laquelle nous pouvons mettre à jour le WAF pour appliquer une règle et nous protéger d’une vulnérabilité SharePoint importante. Être capable de déployer rapidement et globalement des règles est une fonctionnalité essentielle de notre WAF.

Malheureusement, la mise à jour de mardi dernier contenait une expression régulière qui nous a fait reculer énormément et qui a épuisé les processeurs utilisés pour la distribution HTTP/HTTPS. Les fonctionnalités essentielles de mise en proxy, du CDN et du WAF de Cloudflare sont tombées en panne. Le graphique suivant présente les processeurs dédiés à la distribution du trafic HTTP/HTTPS atteindre presque 100 % d’utilisation sur les serveurs de notre réseau.

Utilisation des processeurs pendant l’incident dans l’un de nos PoP

En conséquence, nos clients (et leurs clients) voyaient une page d’erreur 502 lorsqu’ils visitaient n’importe quel domaine Cloudflare. Les erreurs 502 étaient générées par les serveurs Web frontaux Cloudflare dont les cœurs de processeurs étaient encore disponibles mais incapables d’atteindre les processus qui distribuent le trafic HTTP/HTTPS.

Nous réalisons à quel points nos clients ont été affectés. Nous sommes terriblement gênés qu’une telle chose se soit produite. L’impact a été négatif également pour nos propres activités lorsque nous avons traité l’incident.

Cela a du être particulièrement stressant, frustrant et angoissant si vous étiez l’un de nos clients. Le plus regrettable est que nous n’avions pas eu de panne globale depuis six ans.

Le surmenage des processeurs fut causé par une seule règle WAF qui contenait une expression régulière mal écrite et qui a engendré un retour en arrière excessif. L’expression régulière au cœur de la panne est (?:(?:\"|'|\]|\}|\\|\d|(?:nan|infinity|true|false|null|undefined|symbol|math)|\`|\-|\+)+[)]*;?((?:\s|-|~|!|{}|\|\||\+)*.*(?:.*=.*)))

Bien que l’expression régulière soit intéressante pour de nombreuses personnes (et évoquée plus en détails ci-dessous), comprendre ce qui a causé la panne du service Cloudflare pendant 27 minutes est bien plus complexe « qu’une expression régulière qui a mal tourné ». Nous avons pris le temps de noter les séries d’événements qui ont engendré la panne et nous ont empêché de réagir rapidement. Et si vous souhaitez en savoir plus sur le retour en arrière d’une expression régulière et que faire dans ce cas, veuillez consulter l’annexe à la fin de cet article.

Ce qui s’est produit

Commençons par l’ordre des événements. Toutes les heures de ce blog sont en UTC.

À 13:42, un ingénieur de l’équipe pare-feu a déployé une légère modification aux règles de détection XSS via un processus automatique. Cela a généré un ticket de requête de modification. Nous utilisons Jira pour gérer ces tickets (capture d’écran ci-dessous).

Trois minutes plus tard, la première page PagerDuty indiquait une erreur avec le pare-feu applicatif Web (WAF). C’était un test synthétique qui vérifie les fonctionnalités du WAF (nous disposons de centaines de tests de ce type) en dehors de Cloudflare pour garantir qu’il fonctionne correctement. Ce test fut rapidement suivi par des pages indiquant de nombreux échecs d’autres tests de bout-en-bout des services Cloudflare, une alerte de perte de trafic globale, des erreurs 502 généralisées, puis par de nombreux rapports de nos points de présence (PoP) dans des villes à travers le monde indiquant un surmenage des processeurs.

Préoccupé par ces alertes, j’ai quitté la réunion à laquelle j’assistais et en me dirigeant vers mon bureau, un responsable de notre groupe Ingénieur Solutions m’a dit que nous avions perdu 80 % de notre trafic. J’ai couru au département SRE où l’équipe était en train de déboguer la situation. Au tout début de la panne, certains pensaient à un type d’attaque que nous n’avions jamais connu auparavant.

L’équipe SRE de Cloudflare est répartie dans le monde entier pour assurer une couverture continue. Les alertes de ce type, dont la majorité identifient des problèmes très spécifiques sur des cadres limités et dans des secteurs précis, sont surveillées dans des tableaux de bord internes et traitées de nombreuses fois chaque jour. Cependant, ce modèle de pages et d’alertes indiquait que quelque chose de grave s’était produit, et le département SRE a immédiatement déclaré un incident P0 et transmis à la direction de l’ingénierie et à l’ingénierie des systèmes.

À ce moment, l’équipe d’ingénieurs de Londres assistait à une conférence technique interne dans notre espace principal d’événements. La discussion fut interrompue et tout le monde s’est rassemblé dans une grande salle de conférence ; les autres se sont connectés à distance. Ce n’était pas un problème normal que le département SRE pouvait régler seul ; nous avions besoin que toutes les équipes appropriées soient en ligne au même moment.

À 14:00, nous avons identifié que le composant à l’origine du problème était le WAF, et nous avons écarté la possibilité d’une attaque. L’équipe Performances a extrait les données en direct du processeur d’une machine qui montrait clairement que le WAF était responsable. Un autre membre de l’équipe a utilisé Strace pour confirmer. Une autre équipe a observé des journaux d’erreur indiquant que le WAF présentait un problème. À 14:02, toute l’équipe m’a regardé : la proposition était d’utiliser un « global kill », un mécanisme intégré à Cloudflare pour désactiver un composant unique partout dans le monde.

Mais utiliser un « global kill » pour le WAF, c’était une toute autre histoire. Nous avons rencontré plusieurs obstacles. Nous utilisons nos propres produits et avec la panne de notre service Access, nous ne pouvions plus authentifier notre panneau de contrôle interne (une fois de retour en ligne, nous avons découvert que certains membres de l’équipe avaient perdu leur accès à cause d’une fonctionnalité de sécurité qui désactive les identifiants s’ils n’utilisent pas régulièrement le panneau de contrôle interne).

Et nous ne pouvions pas atteindre les autres services internes comme Jira ou le moteur de production. Pour les atteindre, nous avons du employer un mécanisme de contournement très rarement utilisé (un autre point à creuser après l’événement). Finalement, un membre de l’équipe a exécuté le « global kill » du WAF à 14:07 ; à 14:09, les niveaux du trafic et des processeurs sont revenus à la normale à travers le monde. Le reste des mécanismes de protection de Cloudflare a continué de fonctionner.

Nous sommes ensuite passés à la restauration de la fonctionnalité WAF. Le côté sensible de la situation nous a poussé à réaliser des tests négatifs (nous demander « est-ce réellement ce changement particulier qui a causé le problème ? ») et des tests positifs (vérifier le fonctionnement du retour) dans une ville unique à l’aide d’un sous-ensemble de trafic après avoir retiré de cet emplacement le trafic de nos clients d’offres payantes.

À 14:52, nous étions certains à 100 % d’avoir compris la cause, résolu le problème et que le WAF était réactivé partout dans le monde.

Fonctionnement de Cloudflare

Cloudflare dispose d’une équipe d’ingénieurs dédiée à notre solution de règles gérées du WAF ; ils travaillent constamment pour améliorer les taux de détection, réduire les faux positifs et répondre rapidement aux nouvelles menaces dès qu’elles apparaissent. Dans les 60 derniers jours, 476 requêtes de modifications ont été traitées pour les règles gérées du WAF (en moyenne une toutes les 3 heures).

Cette modification spécifique fut déployée en mode « simulation » où le trafic client passe au travers de la règle mais rien n’est bloqué. Nous utilisons ce mode pour tester l’efficacité d’une règle et mesurer son taux de faux positifs et de faux négatifs. Cependant, même en mode simulation, les règles doivent être exécutées, et dans cette situation, la règle contenait une expression régulière qui consommait trop de processeur.

On peut observer dans la requête de modification ci-dessus un plan de déploiement, un plan de retour et un lien vers la procédure opérationnelle standard (POS) interne pour ce type de déploiement. La POS pour une modification de règle lui permet d’être appliquée à l’échelle mondiale. Ce processus est très différent de tous les logiciels que nous publions chez Cloudflare, où la POS applique d’abord le logiciel au réseau interne de dogfooding d’un point de présence (PoP) (au travers duquel passent nos employés), puis à quelques clients sur un emplacement isolé, puis à un plus grand nombre de clients et finalement au monde entier.

Le processus pour le lancement d’un logiciel ressemble à cela : Nous utilisons Git en interne via BitBucket. Les ingénieurs qui travaillent sur les modifications intègrent du code créé par TeamCity ; les examinateurs sont désignés une fois la construction validée. Lorsqu’une requête d’extraction est approuvée, le code est créé et la série de tests exécutée (à nouveau).

Si la construction et les tests sont validés, une requête de modification Jira est générée et la modification doit être approuvée par le responsable approprié ou la direction technique. Une fois approuvée, le déploiement de ce que l’on appelle les « PDP animaux » survient : DOG, PIG et les Canaries

Le PoP DOG est un PoP Cloudflare (au même titre que toutes nos villes à travers le monde) mais qui n’est utilisé que par les employés Cloudflare. Ce PoP de dogfooding nous permet d’identifier les problèmes avant que le trafic client n’atteigne le code. Et il le fait régulièrement.

Si le test DOG réussit, le code est transféré vers PIG (en référence au « cobaye » ou « cochon d’Inde »). C’est un PoP Cloudflare sur lequel un petit sous-ensemble de trafic client provenant de clients de l’offre gratuite passe au travers du nouveau code.

Si le test réussit, le code est transféré vers les Canaries. Nous disposons de trois PoP Canaries répartis dans le monde entier ; nous exécutons du trafic client des offres payantes et gratuite qui traverse ces points sur le nouveau code afin de vérifier une dernière fois qu’il n’y a pas d’erreur.

Processus de lancement d’un logiciel Cloudflare

Une fois validé aux Canaries, le code peut être déployé. L’intégralité du processus DOG, PIG, Canari, Global peut prendre plusieurs heures ou plusieurs jours en fonction du type de modification de code. La diversité du réseau et des clients de Cloudflare nous permet de tester rigoureusement le code avant d’appliquer un lancement à tous nos clients partout dans le monde. Cependant, le WAF n’est pas conçu pour utiliser ce processus car il doit répondre rapidement aux menaces.

Menaces WAF

Ces derniers années, nous avons observé une augmentation considérable des vulnérabilités dans les applications communes. C’est une conséquence directe de la disponibilité augmentée des outils de test de logiciels, comme par exemple le fuzzing (nous venons de publier un nouveau blog sur le fuzzing ici).

Source: https://cvedetails.com/

Le plus souvent, une preuve de concept (PoC, Proof of Concept) est créée et publiée rapidement sur Github, afin que les équipes en charge de l’exécution et de la maintenance des applications puissent effectuer des tests et vérifier qu’ils disposent des protections adéquates. Il est donc essentiel que Cloudflare soit capable de réagir aussi vite que possible aux nouvelles attaques pour offrir à nos clients une chance de corriger leur logiciel.

Le déploiement de nos protections contre la vulnérabilité SharePoint au mois de mai illustre parfaitement la façon dont Cloudflare est capable d’offrir une protection de manière proactive (blog ici). Peu de temps après la médiatisation de nos annonces, nous avons observé un énorme pic de tentatives visant à exploiter les installations Sharepoint de notre client. Notre équipe surveille constamment les nouvelles menaces et rédige des règles pour les atténuer pour le compte de nos clients.

La règle spécifique qui a engendré la panne de mardi dernier ciblait les attaques Cross-Site Scripting (XSS). Ce type d’attaque a aussi considérablement augmenté ces dernières années.

Source: https://cvedetails.com/

La procédure standard pour une modification des règles gérées du WAF indique que les tests d’intégration continue (CI) doivent être validés avant un déploiement à l’échelle mondiale. Cela s’est passé normalement mardi dernier et les règles ont été déployées. À 13:31, un ingénieur de l’équipe fusionnait une requête d’extraction contenant la modification déjà approuvée.

À 13:37, TeamCity a construit les règles et exécuté les tests puis donné son feu vert. La série de tests WAF vérifie que les fonctionnalités principales du WAF fonctionnent ; c’est un vaste ensemble de tests individuels ayant chacun une fonction associée. Une fois les tests individuels réalisés, les règles individuelles du WAF sont testées en exécutant une longue série de requêtes HTTP sur le WAF. Ces requêtes HTTP sont conçues pour tester les requêtes qui doivent être bloquées par le WAF (pour s’assurer qu’il identifie les attaques) et celles qui doivent être transmises (pour s’assurer qu’il ne bloque pas trop et ne crée pas de faux positifs). Il n’a pas vérifié si le WAF utilisait excessivement le processeur, et en examinant les journaux des précédentes constructions du WAF, on peut voir qu’aucune augmentation n’est observée pendant l’exécution de la série de tests avec la règle qui aurait engendré le surmenage du processeur sur notre périphérie.

Après le succès des tests, TeamCity a commencé à déployer automatiquement la modification à 13:42.

Quicksilver

Les règles du WAF doivent répondre à de nouvelles menaces ; elles sont donc déployées à l’aide de notre base de données clé-valeur Quicksilver, capable d’appliquer des modifications à l’échelle mondiale en quelques secondes. Cette technologie est utilisée par tous nos clients pour réaliser des changements de configuration dans notre tableau de bord ou via l’API. C’est la base de notre service : répondre très rapidement aux modifications.

Nous n’avons pas vraiment abordé Quicksilver. Auparavant, nous utilisions Kyoto Tycoon comme base de données clé-valeur distribuées globalement, mais nous avons rencontré des problèmes opérationnels et rédigé notre propre base de données clé-valeur que nous avons reproduite dans plus de 180 villes. Quicksilver nous permet d’appliquer des modifications de configuration client, de mettre à jour les règles du WAF et de distribuer du code JavaScript rédigé par nos clients à l’aide de Cloudflare Workers.

Cliquer sur un bouton dans le tableau de bord, passer un appel d’API pour modifier la configuration et cette modification prendra effet en quelques secondes à l’échelle mondiale. Le clients adorent cette configurabilité haut débit. Avec Workers, ils peuvent profiter d’un déploiement logiciel global quasi instantané. En moyenne, Quicksilver distribue environ 350 modifications par seconde.

Et Quicksilver est très rapide.  En moyenne, nous atteignons un p99 de 2,29 s pour distribuer une modification sur toutes nos machines à travers le monde. En général, cette vitesse est un avantage. Lorsque vous activez une fonctionnalité ou purgez votre cache, vous êtes sûr que ce sera fait à l’échelle mondiale presque instantanément. Lorsque vous intégrez du code avec Cloudflare Workers, celui-ci est appliqué à la même vitesse. La promesse de Cloudflare est de vous offrir des mises à jour rapides quand vous en avez besoin.

Cependant, dans cette situation, cette vitesse a donc appliqué la modification des règles à l’échelle mondiale en quelques secondes. Vous avez peut-être remarqué que le code du WAF utilise Lua. Cloudflare utilise beaucoup Lua en production, et des informations sur Lua dans le WAF ont été évoquées auparavant. Le WAF Lua utilise PCRE en interne, le retour en arrière pour la correspondance, et ne dispose pas de mécanisme pour se protéger d’une expression étendue. Ci-dessous, plus d’informations sur nos actions.

Tout ce qui s’est passé jusqu’au déploiement des règles a été fait « correctement » : une requête d’extraction a été lancée, puis approuvée, CI/CD a créé le code et l’a testé, une requête de modification a été envoyée avec une procédure opérationnelle standard précisant le déploiement et le retour en arrière, puis le déploiement a été exécuté.

Processus de déploiement du WAF Cloudflare

Les causes du problème

Nous déployons des douzaines de nouvelles règles sur le WAF chaque semaine, et nous avons de nombreux systèmes en place pour empêcher tout impact négatif sur ce déploiement. Quand quelque chose tourne mal, c’est donc généralement la convergence improbable de causes multiples. Trouver une cause racine unique est satisfaisant mais peut obscurcir la réalité. Voici les vulnérabilités multiples qui ont convergé pour engendrer la mise hors ligne du service HTTP/HTTPS de Cloudflare.

  • Un ingénieur a rédigé une expression régulière qui pouvait facilement causer un énorme retour en arrière.
  • Une protection qui aurait pu empêcher l’utilisation excessive du processeur par une expression régulière a été retirée par erreur lors d’une refonte du WAF quelques semaines plus tôt (refonte dont l’objectif était de limiter l’utilisation du processeur par le WAF).
  • Le moteur d’expression régulière utilisé n’avait pas de garanties de complexité.
  • La série de tests ne présentait pas de moyen d’identifier une consommation excessive du processeur.
  • La procédure opérationnelle standard a permis la mise en production globale d’une modification de règle non-urgente sans déploiement progressif.
  • Le plan de retour en arrière nécessitait la double exécution de l’intégralité du WAF, ce qui aurait pris trop de temps.
  • La première alerte de chute du trafic global a mis trop de temps à se déclencher.
  • Nous n’avons pas mis à jour suffisamment rapidement notre page d’état.
  • Nous avons eu des problèmes pour accéder à nos propres systèmes en raison de la panne et la procédure de contournement n’était pas bien préparée.
  • Les ingénieurs fiabilité avaient perdu l’accès à certains systèmes car leurs identifiants ont expiré pour des raisons de sécurité.
  • Nos clients ne pouvaient pas accéder au tableau de bord ou à l’API Cloudflare car ils passent par la périphérie Cloudflare.

Ce qui s’est passé depuis mardi dernier

Tout d’abord, nous avons interrompu la totalité du travail de publication sur le WAF et réalisons les actions suivantes :

  • Réintroduction de la protection en cas d’utilisation excessive du processeur qui avait été retirée. (Terminé)
  • Inspection manuelle de toutes les 3 868 règles des règles gérées du WAF pour trouver et corriger tout autre cas d’éventuel retour en arrière excessif. (Inspection terminée)
  • Introduction de profil de performances pour toutes les règles sur la série de tests. (ETA :  19 juillet)
  • Passage au moteur d’expression régulière re2 ou Rust qui disposent de garanties de temps d’exécution. (ETA : 31 juillet)
  • Modifier la procédure opérationnelle standard afin d’effectuer des déploiements progressifs de règles de la même manière que pour les autres logiciels chez Cloudflare tout en conservant la capacité de réaliser des déploiements globaux d’urgence pour les attaques actives.
  • Mettre en place une capacité d’urgence pour retirer le tableau de bord et l’API Cloudflare de la périphérie de Cloudflare.
  • Mettre à jour automatiquement la page d’état Cloudflare.

À long terme, nous nous éloignons du WAF Lua que j’ai rédigé il y a plusieurs années. Nous déplaçons le WAF pour qu’il utilise le nouveau moteur pare-feu. Cela rendra le WAF plus rapide et offrira une couche de protection supplémentaire.

Conclusion

Ce fut une panne regrettable pour nos clients et pour l’équipe. Nous avons réagi rapidement pour régler la situation et nous réparons les défauts de processus qui ont permis à cette panne de se produire. Afin de nous protéger contre d’éventuels problèmes ultérieurs avec notre utilisation des expressions régulières, nous allons remplacer notre technologie fondamentale.

Nous sommes très gênés par cette panne et désolés de l’impact sur nos clients. Nous sommes convaincus que les changements réalisés nous permettront de ne plus jamais subir une telle panne.

Annexe : À propos du retour en arrière d’expression régulière

Pour comprendre comment (?:(?:\"|'|\]|\}|\\|\d|(?:nan|infinity|true|false|null|undefined|symbol|math)|\`|\-|\+)+[)]*;?((?:\s|-|~|!|{}|\|\||\+)*.*(?:.*=.*)))  a causé le surmenage du processeur, vous devez vous familiariser avec le fonctionnement d’un moteur d’expression régulière standard. La partie critique est .*(?:.*=.*). Le(?: et les ) correspondantes sont un groupe de non-capture (l’expression entre parenthèses est regroupée dans une seule expression).

Pour identifier la raison pour laquelle ce modèle engendre le surmenage du processeur, nous pouvons l’ignorer et traiter le modèle .*.*=.*. En le réduisant, le modèle paraît excessivement complexe, mais ce qui est important, c’est que toute expression du « monde réel » (comme les expressions complexes au sein de nos règles WAF) qui demande au moteur de « faire correspondre quelque chose suivi de quelque chose » peut engendrer un retour en arrière catastrophique. Explication.

Dans une expression régulière, . signifie correspondre à un caractère unique, .* signifie correspondre à zéro ou plus de caractères abondamment (c’est à dire faire correspondre autant que possible). .*.*=.* signifie donc correspondre à zéro ou plus de caractères, puis à nouveau zéro ou plus de caractères, puis trouver un signe littéral =, puis correspondre à zéro ou plus de caractères.

Prenons la chaîne test x=x. Cela correspondra à l’expression .*.*=.*. Les .*.* précédant le signe = peuvent correspondre au premier x (l’un des .* correspond au x, l’autre correspond à zéro caractère). Le .* après le signe = correspond au x final.

Il faut 23 étapes pour atteindre cette correspondance. Le premier .* de .*.*=.* agit abondamment et correspond à la chaîne complète x=x. Le moteur avance pour examiner le .* suivant. Il ne reste aucun caractère à faire correspondre, le deuxième .* correspond donc à zéro caractère (c’est autorisé). Le moteur passe ensuite au =. La correspondance échoue car il n’y a plus de caractère à faire correspondre (le premier .* ayant consommé l’intégralité de x=x).

À ce moment, le moteur d’expression régulière retourne en arrière. Il retourne au premier  .* et le fait correspondre à x= (au lieu de x=x) et passe ensuite au deuxième .*. Ce .* correspond au deuxième x et il n’y a maintenant plus aucun caractère à associer. Quand le moteur essaie d’associer le = dans .*.*=.*, la correspondance échoue. Le moteur retourne de nouveau en arrière.

Avec ce retour en arrière, le premier .* correspond toujours à x= mais le deuxième .* ne correspond plus à x ; il correspond à zéro caractère. Le moteur avance ensuite pour essayer de trouver le signe littéral = dans le modèle .*.*=.* mais échoue (car il correspond déjà au premier .*). Le moteur retourne de nouveau en arrière.

Cette fois-ci, le premier .* correspond seulement au premier x. Mais le deuxième .* agit abondamment et correspond à =x. Vous imaginez la suite. Lorsqu’il essaie d’associer le signe littéral =, il échoue et retourne de nouveau en arrière.

Le premier .* correspond toujours seulement au premier x. Désormais, le deuxième .* correspond seulement à =. Comme vous l’avez deviné, le moteur ne peut pas associer le signe littéral = car il correspond déjà au deuxième .*. Le moteur retourne donc de nouveau en arrière. Tout cela pour associer une chaîne de trois caractères.

Enfin, avec le premier .* correspondant uniquement au premier x et le deuxième .* correspondant à zéro caractère, le moteur est capable d’associer le signe littéral = dans l’expression avec le = contenu dans la chaîne. Il avance et le dernier .* correspond au dernier x.

23 étapes pour faire correspondre x=x. Voici une courte vidéo de l’utilisation de Perl Regexp::Debugger présentant les étapes et le retour en arrière.

Cela représente beaucoup de travail, mais que se passe-t-il si la chaîne passe de x=x à x=xx ? La correspondance prend cette fois 33 étapes. Et si l’entrée est x=xxx, 45 étapes. Ce n’est pas linéaire. Voici un graphique présentant la correspondance de x=x à x=xxxxxxxxxxxxxxxxxxxx (20 x après le =). Avec 20 x après le =, le moteur requiert 555 étapes pour associer ! (Pire encore, si le x= était manquant et que la chaîne ne contenait que 20 x, le moteur nécessiterait 4 067 étapes pour réaliser que le modèle ne correspond pas).

Cette vidéo montre tout le retour en arrière nécessaire pour correspondre à  x=xxxxxxxxxxxxxxxxxxxx:

Ce n’est pas bon signe car à mesure que la taille de l’entrée augmente, le temps de correspondance augmente de manière ultra linéaire. Mais cela aurait pu être encore pire avec une expression régulière légèrement différente. Prenons l’exemple .*.*=.*; (avec un point-virgule littéral à la fin du modèle). Nous aurions facilement pu rédiger cela pour tenter d’associer une expression comme foo=bar;.

Le retour en arrière aurait été catastrophique. Associer x=x prend 90 étapes au lieu de 23. Et le nombre d’étapes augmente très rapidement. Associer x= suivi de 20 x prend 5 353 étapes. Voici le graphique correspondant : Observez attentivement les valeurs de l’axe Y par rapport au graphique précédent.

Pour terminer, voici toutes les 5 353 étapes de l’échec de la correspondance de x=xxxxxxxxxxxxxxxxxxxx avec .*.*=.*;

L’utilisation de correspondances fainéantes plutôt que gourmandes (lazy / greedy) permet de contrôler l’étendue du retour en arrière qui survient dans ce cas précis. Si l’expression originale est modifiée en .*?.*?=.*?, la correspondance de x=x prend 11 étapes (au lieu de 23), tout comme la correspondance de x=xxxxxxxxxxxxxxxxxxxx. C’est parce que le ? après le .*ordonne au moteur d’associer le plus petit nombre de caractères avant de commencer à avancer.

Mais la fainéantise n’est pas la solution totale à ce comportement de retour en arrière. Modifier l’exemple catastrophique .*.*=.*; en .*?.*?=.*?; n’affecte pas du tout son temps d’exécution. x=x prend toujours 555 étapes et x= suivi de 20 x prend toujours 5 353 étapes.

La seule vraie solution, mise à part la réécriture complète du modèle pour être plus précis, c’est de s’éloigner du moteur d’expression régulière avec ce mécanisme de retour en arrière. Ce que nous faisons au cours des prochaines semaines.

La solution à ce problème existe depuis 1968, lorsque Ken Thompson écrivait un article intitulé « Techniques de programmation : Algorithme de recherche d’expression régulière ». L’article décrit un mécanisme pour convertir une expression régulière en AFN (automate fini non-déterministe) et suivre les transitions d’état dans l’AFN à l’aide d’un algorithme exécuté en un temps linéaire de la taille de la chaîne que l’on souhaite faire correspondre.

L’article de Thompson n’évoque pas l’AFN mais l’algorithme de temps linéaire est clairement expliqué. Il présente également un programme ALGOL-60 qui génère du code de langage assembleur pour l’IBM 7094. La réalisation paraît ésotérique mais l’idée présentée ne l’est pas.

Voici à quoi ressemblerait l’expression régulière .*.*=.* si elle était schématisée d’une manière similaire aux images de l’article de Thompson.

Le schéma 0 contient 5 états commençant à 0. Il y a trois boucles qui commencent avec les états 1, 2 et 3. Ces trois boucles correspondent aux trois .* dans l’expression régulière. Les trois pastilles avec des points à l’intérieur correspondent à un caractère unique. La pastille avec le signe = à l’intérieur correspond au signe littéral =. L’état 4 est l’état final : s’il est atteint, l’expression régulière a trouvé une correspondance.

Pour voir comment un tel schéma d’état peut être utilisé pour associer l’expression régulière .*.*=.*, nous allons examiner la correspondance de la chaîne x=x. Le programme commence à l’état 0 comme indiqué dans le schéma 1.

La clé pour faire fonctionner cet algorithme est que la machine d’état soit dans plusieurs états au même moment. L’AFN prendra autant de transitions que possible simultanément.

Avant de lire une entrée, il passe immédiatement aux deux états 1 et 2 comme indiqué dans le schéma 2.

En observant le schéma 2, on peut voir ce qui s’est passé lorsqu’il considère le premier x dans x=x. Le x peut correspondre au point supérieur en faisant une transition de l’état 1 et en retournant à l’état 1. Ou le x peut correspondre au point inférieur en faisant une transition de l’état 2 et en retournant à l’état 2.

Après avoir associé le premier x dans x=x, les états restent 1 et 2. On ne peut pas atteindre l’état 3 ou 4 car il faut un signe littéral =.

L’algorithme considère ensuite le = dans x=x. Comme le x avant lui, il peut être associé par l’une des deux boucles supérieures faisant une transition de l’état 1 à l’état 1 ou de l’état 2 à l’état 2. En outre, le signe littéral = peut être associé et l’algorithme peut faire une transition de l’état 2 à l’état 3 (et directement à l’état 4). C’est illustré dans le schéma 3.

Ensuite, l’algorithme atteint le x final dans x=x. Depuis les états 1 et 2, les mêmes transitions sont possibles vers les états 1 et 2. Depuis l’état 3, le x peut correspondre au point sur la droite et retourner à l’état 3.

À ce niveau, chaque caractère de x=x a été considéré, et puisque l’état 4 a été atteint, l’expression régulière correspond à cette chaîne. Chaque caractère a été traité une fois, l’algorithme était donc linéaire avec la longueur de la chaîne saisie. Aucun retour en arrière nécessaire.

Cela peut paraître évident qu’une fois l’état 4 atteint (après la correspondance de x=), l’expression régulière correspond et l’algorithme peut terminer sans prendre en compte le x final.

Cet algorithme est linéaire avec la taille de son entrée.

2019年7月2日に発生したCloudflareの停止に関する詳細

Post Syndicated from John Graham-Cumming original https://blog.cloudflare.com/details-of-the-cloudflare-outage-on-july-2-2019-jp/

9年ほど前のCloudflareは小さな会社で、当時私は一顧客であり従業員ではありませんでした。Cloudflareがひと月前に設立されたという時期のある日、私は自分の小さなサイト、jgc.orgのDNSが動作していないという警告を受け取りました。そしてCloudflareはProtocol Buffersの使用に変更を加えた上でDNSを切断したのです。

私は「私のDNSはどうなったのでしょうか?」という件名のメールを直接Matthew Prince宛に出しました。すると彼は長文かつ詳細な返信をくれたのです。(メールのやり取りの全文はこちらからご覧いただけます)。下記は私のそのメールに対する返信です。

From: John Graham-Cumming
日時:2010/10/7(木)9:14 AM
件名:Re: 私のDNSはどうなったのでしょうか?
To: Matthew Prince

ご報告ありがとうございました。何か問題があれば
ご連絡します。 技術詳細に関する全容が判明したら、
本件をブログに記載するのはいかがでしょうか。
本件に対しての開示や誠実であることを他の人も評価すると思うのです。
特に、ローンチ後のトラフィック増加を示すグラフを
添えていただければと思います。

私は自分のサイトを厳格に監視しているので、何かあれば
SMSを受け取れます。 監視結果では13:03:07から14:04:12までダウンしていたことが
わかりました。 テストは5分おきに実行されています。

本件は大事には至らずに済んでいますし、解決していただけると確信しています。 しかしながら、ヨーロッパには本当に

誰も必要ないとお考えですか?

これに対するMatthewの返信は以下の通りです。
From: Matthew Prince
日時:2010/10/7(木)9:57 AM
件名:Re: 私のDNSはどうなったのでしょうか?
To: John Graham-Cumming

ありがとうございます。Cloudflareではいただいたメールすべてに対して返信しております。私は現在
オフィスに向かっており、ブログへの投稿またはCloudflareの掲示板システムのトップに
公式投稿をピン留めする予定です。透明性が一番だということには
全面的に同意します。

今日、当時より遥かに大規模になったCloudflareの社員として、私は当社が犯した過ちとその影響、対応内容について明らかにします。

7月2日の件について

On July 2, we deployed a new rule in our WAF Managed Rules that caused CPU exhaustion on every CPU core handling HTTP/HTTPS traffic on the Cloudflare network worldwide. We are constantly improving WAF Managed Rules to respond to new vulnerabilities and threats. In May, for example, we used the speed with which the WAF can be updated to push a rule protecting against a serious SharePoint vulnerability. Being able to deploy rules quickly and globally is a critical feature of our WAF.

Unfortunately, last Tuesday's update contained a regular expression that backtracked enormously and exhausted the CPUs used for serving HTTP/HTTPS. This brought down Cloudflare's core proxying, CDN, and WAF functionality. The following graph shows CPUs dedicated to serving HTTP/HTTPS traffic spiking to nearly 100% usage across the servers in our network.

CPU usage at a Cloudflare PoP during the incident

As a result, our customers (and their customers) saw a 502 error page when visiting any Cloudflare domain. The 502 errors were generated by the front-line Cloudflare web servers, which still had CPU cores available but could not reach the processes that serve HTTP/HTTPS traffic.

We know how much this hurt our customers, and we are deeply ashamed of it. The incident also had a negative impact on our own operations while we were dealing with it.

It must also have been incredibly stressful, frustrating, and frightening for our customers. The disruption was all the more upsetting because we had not had a global outage in six years.

The CPU exhaustion was caused by a single WAF rule containing a badly written regular expression that ended up creating excessive backtracking. The regular expression at the heart of the outage is: (?:(?:\"|'|\]|\}|\\|\d|(?:nan|infinity|true|false|null|undefined|symbol|math)|\`|\-|\+)+[)]*;?((?:\s|-|~|!|{}|\|\||\+)*.*(?:.*=.*)))

Although the regular expression itself is of interest to many people (and is discussed in more detail below), the real story of how the Cloudflare service went down for 27 minutes is much more complex than "a regular expression went bad". What follows is a timeline of the sequence of events that caused the outage and kept us from responding quickly. If you would like to know more about regular expression backtracking and what to do about it, see the appendix at the end of this post.

What happened

Let's start with the sequence of events. All times in this post are UTC.

At 13:42, an engineer on the firewall team deployed, via an automated process, a minor change to the rules for detecting XSS. A change request ticket was created for it. We use Jira to manage these tickets; a screenshot of it appears below.

Three minutes later, the first PagerDuty page went out indicating a problem with the WAF. This was a synthetic test, run from outside Cloudflare, that checks the WAF is functioning correctly (we have hundreds of such tests). It was rapidly followed by pages indicating failures of end-to-end tests of Cloudflare services, a global traffic drop alert, widespread 502 errors, and many reports of CPU exhaustion from Points of Presence (PoPs) in cities worldwide.

Having received some of these alerts, I jumped out of a meeting and was heading back to my desk when a leader in our Solutions Engineering group told me we had lost 80% of our traffic. I went over to SRE, where the team was debugging the situation. In the early moments of the outage there was speculation that this was some new kind of attack we had never seen before.

Cloudflare's SRE team is distributed around the world and provides continuous coverage around the clock. Alerts like these (the vast majority of which refer to very specific issues of limited scope in particular regions) are monitored on internal dashboards and are handled many times every day. This pattern of pages and alerts, however, indicated that something very serious was happening, so SRE immediately declared a P0 incident and escalated to engineering leadership and systems engineering.

The London engineering team was at that moment in our main event space listening to an internal TechTalk. The talk was interrupted, everyone assembled in a large conference room, and other employees dialed in. This wasn't the sort of routine problem SRE could handle alone; every relevant team needed to be online at once.

At 14:00, the WAF was identified as the component causing the problem, and the possibility of an attack was dismissed. The Performance Team pulled live CPU data from a machine that clearly showed the WAF was responsible. Another team member confirmed it with strace, and yet another team found error logs indicating the WAF was in trouble. At 14:02, I proposed to the entire team that we use a "global kill", a mechanism built into Cloudflare for disabling a single component worldwide.

But executing the global kill for the WAF was not straightforward either. Another problem appeared. Because we use our own products, with our Access service down we could not authenticate to our internal control panel (and once we were back up, we discovered that team members who rarely use the internal control panel had lost access because a security feature had invalidated their credentials).

Other internal services such as Jira and the build system also became unavailable, and reaching them required a rarely used bypass mechanism (another item to review after the incident). Eventually, a team member executed the WAF global kill at 14:07, and by 14:09 traffic levels and CPU had returned to expected levels worldwide. Cloudflare's other protection mechanisms continued to operate throughout.

We then moved on to restoring WAF functionality. Given the sensitivity of the situation, we performed both negative tests (asking "was this change really the cause?") and positive tests (verifying the rollback worked) within a single city using a portion of traffic, after moving our paying customers' traffic out of that location.

At 14:52, we were 100% satisfied that we understood the cause, had the fix in the right place, and re-enabled the WAF globally.

How Cloudflare operates

Cloudflare has a team of engineers working on our WAF Managed Rules product; they are constantly improving detection rates, lowering false positives, and responding rapidly to new threats. In the last 60 days, 476 change requests have been handled for WAF Managed Rules (an average of one every three hours).

Changes like this are deployed in "simulate" mode, where the rule runs against real customer traffic but nothing is blocked. We use simulate mode to test a rule's effectiveness and measure its false positive and false negative rates. But even in simulate mode the rules actually have to execute, and in this case the rule contained a regular expression that consumed excessive CPU.

As you can see in the change request above, there is a deployment plan, a rollback plan, and a link to the internal Standard Operating Procedure (SOP) for this type of deployment. The SOP for a rule change specifically allows it to be pushed globally. This is very different from the other software we release at Cloudflare, where the SOP first pushes to an internal test-network Point of Presence (PoP), then to a small number of customers in an isolated region, then to a larger number of customers, and finally to the world.

The software release process looks like this: Cloudflare uses git internally via BitBucket. The engineer making a change pushes code, which is built by TeamCity, and when the build passes, reviewers are assigned. Once the pull request is approved, the code is built and the test suite runs (again).

If the build and tests are green, a Jira change request is created and the change has to be approved by the relevant manager or technical lead. Once approved, deployment proceeds to what we call the "animal PoPs": DOG, PIG, and the Canaries.

The DOG PoP is a Cloudflare PoP (just like any of our cities worldwide), but one used only by Cloudflare employees. This dogfooding PoP lets us catch problems early, before any customer traffic touches the code, and it frequently does.

If the DOG test passes successfully, the code moves to PIG (for "experimental" use), a Cloudflare PoP where a small subset of traffic from free customers passes through the new code.

If that is successful, the code moves to the Canaries. We have three Canary PoPs spread across the world, where traffic from paying and free customers runs on the new code as a final check for errors.

Cloudflare's software release process

Once the code is working correctly in the Canaries, it can go live. The entire DOG, PIG, Canary, global process can take hours or days depending on the type of code change. Because of the diversity of Cloudflare's network and customers, we can test code thoroughly before a release reaches all of our customers worldwide. But, by design, the WAF does not use this process, because of the need to respond rapidly to threats.

WAF threats

In the last few years we have seen a dramatic increase in vulnerabilities in common applications, driven by the increased availability of software testing tools such as fuzzers (our new blog post on fuzzing is here).

Source: https://cvedetails.com/

What we commonly see is that a proof of concept (PoC) is created and quickly published to GitHub so that teams running and maintaining applications can test whether they have adequate protection. Because of this, it is imperative that Cloudflare reacts to new attacks as quickly as possible, so that our customers have a chance to patch their software.

The protection we rolled out in May against a SharePoint vulnerability is a great example of Cloudflare providing this kind of proactive protection (blog here). Within a short time of the announcement becoming public, we saw a huge spike in attempts to exploit our customers' SharePoint installations. Our team continuously monitors for new threats and writes rules to mitigate them on behalf of our customers.

The specific rule that caused last Tuesday's outage targeted cross-site scripting (XSS) attacks, which have also increased dramatically in recent years.

Source: https://cvedetails.com/

The standard procedure for a WAF Managed Rules change says that continuous integration (CI) tests must pass before the rules are deployed globally. That happened as normal last Tuesday and the rules were deployed. At 13:31, an engineer on the team merged a pull request containing the approved change.

At 13:37, TeamCity built the rules, ran the tests, and showed green, indicating a pass. The WAF test suite checks that the core functionality of the WAF works and consists of a large number of unit tests for individual matching functions. After the unit tests run, the rules are tested by executing a large collection of HTTP requests against the WAF. These HTTP requests are designed to test both requests that should be blocked by the WAF (to make sure it catches attacks) and requests that should not be blocked (to make sure it doesn't block more than necessary and create false positives). What the WAF test suite did not do was test for runaway CPU utilization, and examining the log files from the WAF build containing the rule that ultimately caused the CPU exhaustion shows no increase in the test suite's run time.

With the tests passing, TeamCity automatically began deploying the change at 13:42.

Quicksilver

Because WAF rules are required to address emerging threats, they are deployed using our Quicksilver distributed key-value store (KVS), which can push changes globally in seconds. This technology is used by all of our customers when making configuration changes in the Cloudflare Dashboard or via the API, and it is what allows us to respond to changes very, very quickly.

We haven't talked much about Quicksilver before. We previously used Kyoto Tycoon globally as our distributed key-value store, but it presented operational problems, so we built our own KVS, which is replicated to our 180+ cities. Quicksilver is how we push changes to customer configuration, update WAF rules, and distribute the JavaScript code customers write using Cloudflare Workers.

From clicking a button in the dashboard or making an API call, a change takes effect worldwide within seconds. Customers have come to love this high-speed configurability, and with Workers it means near-instant global software deployments. On average, Quicksilver distributes about 350 changes per second.

Quicksilver is also very fast: on average it distributes a single change to machines worldwide in 2.29 seconds. Usually this speed is a great thing. It means that when you enable a feature or purge your cache, it takes effect globally almost instantly, and when you push code with Cloudflare Workers, it goes out at the same speed. Fast updates when you need them are part of Cloudflare's promise.

In this case, however, that speed meant the rule change was applied worldwide within seconds. You may also have noticed that the WAF code uses Lua. Lua is used widely in Cloudflare's products, and we have written before about the details of Lua in the WAF. The WAF's Lua uses PCRE internally, and PCRE uses backtracking for matching and has no mechanism to protect against a runaway regular expression. More on that, and what we are doing about it, below.

Up to the point the rule was deployed, everything had been done "correctly": a pull request was raised and approved, CI/CD built and tested the code, a change request was submitted along with an SOP detailing the rollout and rollback, and the rollout was executed.

Cloudflare WAF release process

What went wrong

As described above, we deploy dozens of new rules every week and have numerous systems in place to prevent deployments from having a negative impact. So when something does go wrong, it rarely comes down to a single cause. Arriving at one root cause, while satisfying, can also obscure the reality. Below are the multiple vulnerabilities that combined to take Cloudflare's HTTP/HTTPS service offline.

  1. An engineer wrote a regular expression that could easily backtrack enormously.
  2. A protection that would have prevented excessive CPU use by a regular expression had been mistakenly removed during a refactoring of the WAF weeks earlier, a refactoring that was itself intended to reduce the WAF's CPU consumption.
  3. The regular expression engine being used had no complexity guarantees.
  4. The test suite had no way of identifying excessive CPU consumption.
  5. The SOP allowed a non-emergency rule change to be deployed to production worldwide without a staged rollout.
  6. The rollback plan required running the complete WAF build twice, which took too long.
  7. The first alert for the global traffic drop took too long to fire.
  8. We could not update our status page quickly enough.
  9. Because of unfamiliarity with the outage and bypass procedures, it was difficult to access Cloudflare's own systems from inside Cloudflare.
  10. SREs could not access some systems because their credentials had timed out for security reasons.
  11. Customers whose traffic passes through the Cloudflare edge were unable to access the Cloudflare Dashboard or API.

What's happened since Tuesday

First, we stopped all release work on the WAF completely and are doing the following:

  1. Re-introducing the protection against excessive CPU use that had been removed. (Done)
  2. Manually inspecting all 3,868 rules in WAF Managed Rules to find and correct any other instances of possible excessive backtracking. (Inspection complete)
  3. Introducing performance profiling for all rules to the test suite. (ETA: July 19)
  4. Switching the regular expression engine to either re2 or Rust, both of which have runtime guarantees. (ETA: July 31)
  5. Changing the SOP to do staged rollouts of rules in the same manner as other software at Cloudflare, while retaining the ability to do emergency global deployments against active attacks.
  6. Putting in place an emergency ability to take the Cloudflare Dashboard and API off Cloudflare's own edge.
  7. Automating updates to the Cloudflare Status page.

In the longer term, we are moving away from the Lua WAF that I wrote years ago. We are porting the WAF to use the new firewall engine, which will make the WAF faster and add yet another layer of protection.

Conclusion

This was an upsetting outage for our customers and for the Cloudflare team. We responded quickly to bring the situation under control, and we are now correcting the flaws in the process that allowed the outage to happen and going further to protect against potential future problems by replacing the technology used for our regular expressions.

We are ashamed of this outage and sorry for the impact it had on our customers. We believe the changes we are making mean an outage like this will not happen again.

Appendix: About regular expression backtracking

(?:(?:\"|'|\]|\}|\\|\d|(?:nan|infinity|true|false|null|undefined|symbol|math)|\`|\-|\+)+[)]*;?((?:\s|-|~|!|{}|\|\||\+)*.*(?:.*=.*))) がどのようにCPU枯渇を引き起こすのかを完全に理解するには、正規表現エンジンの動作を少々理解しておく必要があります。重要なのは.*(?:.*=.*)の部分です。(?:と)は非キャプチャグループです(つまり、カッコ内の表現は1つの表現としてグルーピングされています)。

ここではCPU枯渇の原因となったパターンを説明するため、これを無視して.*.*=.*というパターンを見ていきます。ここまでシンプルにすると、このパターンが不要に複雑であることがわかります。しかし、重要なのは「全てに続く全てにマッチする」ものをエンジンに問い合わせた「実際の」表現(CloudflareのWAFルールに記載された複雑な表現のようなもの)により壊滅的なバックトラッキングを引き起こしたという点です。こちらがその理由です。

正規表現では、.は1文字とのマッチを意味し、.*は0文字以上の貪欲な(greedy)マッチング(つまり可能な限りの数と合致すること)を意味するため、.*.*=.*は、0文字以上のマッチ、0文字以上のマッチ、=リテラルの検索、0文字以上のマッチ、という意味になります。

テスト文字列x=xについて考察してみましょう。これは.*.*=.*にマッチする文字列です。イコールの前にある.*.*が1つ目のxにマッチします(.*のうちの1つがxにマッチしもう一方が0文字にマッチするため)。そして=の後にある.*は最後のxにマッチします。
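
You can see that split in Python's re module (the capturing groups below are added purely to show which part of the string each .* ends up with; they are not part of the WAF rule):

import re

m = re.fullmatch(r"(.*)(.*)=(.*)", "x=x")
print(m.groups())   # ('x', '', 'x'): one .* takes the x, the other takes nothing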

It takes 23 steps to reach that match. The first .* in .*.*=.* acts greedily and matches the entire x=x string. The engine then moves on to consider the next .*. There are no characters left to match, so the second .* matches zero characters (that's allowed). Then the engine moves on to the =. With no characters left to match (the first .* having consumed all of x=x), the match fails.

At this point the regular expression engine backtracks. It returns to the first .* and matches it against x= (instead of x=x), then moves on to the second .*. That .* matches the second x, so there are no characters left. When the engine then tries to match the = in .*.*=.*, it fails. The engine backtracks again.

This time it backtracks so that the first .* still matches x= but the second .* matches zero characters instead of the x. It then tries to find the literal = in the .*.*=.* pattern, which fails (because it was already consumed by the first .*). The engine backtracks again.

This time the first .* matches just the first x, but the second .* acts greedily and matches =x. You can see what's coming: when the engine tries to match the literal =, it fails and backtracks once more.

The first .* matches just the first x, and this time the second .* matches only the =. But, as you can guess, the engine cannot then match the literal =, because it was already consumed by the second .*. So it backtracks again. Remember, these are all the steps needed to match a three-character string.

Finally, with the first .* matching only the first x and the second .* matching zero characters, the engine is able to match the = in the expression against the = in the string, and the final .* matches the final x.

That's 23 steps to match x=x. Here is a short video, made with Perl's Regexp::Debugger, showing the steps and the backtracking that takes place.

That's a lot of work, but what happens if the string changes from x=x to x=xx? Matching now takes 33 steps. And if the input becomes x=xxx, 45 steps. The growth is not linear. Here's a chart showing the number of steps needed to match from x=x through x=xxxxxxxxxxxxxxxxxxxx (20 x's after the =). With 20 x's after the =, the engine takes 555 steps to match. (Worse still, if the x= is missing and the string is just 20 x's, it takes 4,067 steps to discover that the pattern doesn't match.)

This video shows the backtracking required to match x=xxxxxxxxxxxxxxxxxxxx.
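
To get a feel for this growth without firing up a real regex engine, here is a minimal sketch in Python of a toy backtracking matcher that understands only literal characters and .*. It is not PCRE, so its step counts won't equal the 23/33/45/555 figures above (those come from Perl's engine), but the same super-linear trend shows up.

# A toy backtracking matcher supporting only literals and ".*".
# Illustrative only: step counts differ from PCRE's, but the
# super-linear growth for patterns like .*.*=.* is the same.

def count_steps(tokens, text):
    """tokens: a list of single characters, or "STAR" standing for '.*'."""
    steps = 0

    def match(pi, ti):
        nonlocal steps
        steps += 1
        if pi == len(tokens):
            return ti == len(text)            # must consume the whole string
        if tokens[pi] == "STAR":
            # Greedy: try the longest possible match first, then backtrack.
            for end in range(len(text), ti - 1, -1):
                if match(pi + 1, end):
                    return True
            return False
        # Literal character.
        if ti < len(text) and text[ti] == tokens[pi]:
            return match(pi + 1, ti + 1)
        return False

    return match(0, 0), steps

pattern = ["STAR", "STAR", "=", "STAR"]       # the simplified .*.*=.*
for n in (1, 2, 3, 10, 20):
    matched, steps = count_steps(pattern, "x=" + "x" * n)
    print(f"x= followed by {n:2d} x's: matched={matched}, steps={steps}")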

Unfortunately, as the input grows, the number of matching steps grows super-linearly. But things get even worse with a small modification to the regular expression. Suppose it had been .*.*=.*; (i.e. with a literal semicolon added to the end of the pattern), written, say, to try to match an expression like foo=bar;.

Now the backtracking is catastrophic. Matching x=x takes 90 steps instead of 23, and the growth in the number of steps is dramatic: with 20 x's after the x=, matching takes 5,353 steps. Here's the corresponding chart. Compare the Y-axis values with the previous chart.

This image shows all 5,353 steps taken in failing to match x=xxxxxxxxxxxxxxxxxxxx against .*.*=.*;.

Using lazy rather than greedy matching helps limit the amount of backtracking in this case. If the original expression is changed to .*?.*?=.*?, matching x=x takes 11 steps (instead of 23), and the same is true for x=xxxxxxxxxxxxxxxxxxxx. That's because the ? after the .* instructs the engine to match the smallest number of characters first before moving on.

But lazy matching isn't a complete solution to this backtracking behavior. Changing the catastrophic example .*.*=.*; to .*?.*?=.*?; doesn't change the number of steps at all: x=x still takes 555 steps, and x= followed by 20 x's still takes 5,353 steps.

The only true solution (short of completely rewriting the pattern to be more specific) is to move the regular expression engine away from backtracking, and that is what we will be doing in the coming weeks.

The solution to this problem has been known since 1968, when Ken Thompson published his paper "Programming Techniques: Regular expression search algorithm". The paper describes a mechanism for converting a regular expression into an NFA (non-deterministic finite automaton) and then stepping through the NFA's state transitions using an algorithm that runs in time linear in the size of the string being matched.

The paper doesn't actually use the term NFA, but the linear-time algorithm is clearly explained, and an ALGOL-60 program that generates assembly language code for the IBM 7094 is presented. The implementation is arcane, but the idea it describes is not.

Here's the regular expression .*.*=.* diagrammed in the same style as the figures in Thompson's paper.

Figure 0 has five states, starting at state 0. There are three loops, beginning at states 1, 2, and 3; these loops correspond to the three .* in the regular expression. The three lozenges containing dots each match a single character. The lozenge with the = sign matches the literal = character. State 4 is the ending state: if it is reached, the regular expression has matched.

To see how such a state diagram can be used to match the regular expression .*.*=.*, we'll walk through matching the string x=x. The program starts in state 0, as shown in Figure 1.

The key to making this algorithm work is that the state machine can be in multiple states at the same time: the NFA takes every transition it can, simultaneously.

Even before reading any input, it immediately transitions to both states 1 and 2, as shown in Figure 2.

Looking at Figure 2, we can see what happens when it considers the first x in x=x. The x can match the top dot by transitioning from state 1 and back to state 1, or it can match the dot below it by transitioning from state 2 and back to state 2.

After matching the first x in x=x, the active states are still 1 and 2. States 3 and 4 cannot be reached because a literal = is required.

Next, the algorithm considers the = in x=x. Like the x before it, it can be matched by either of the two upper loops (state 1 to state 1, or state 2 to state 2), but in addition the literal = can match, allowing the algorithm to transition from state 2 to state 3 (and immediately to state 4). This is illustrated in Figure 3.

The algorithm then reaches the final x in x=x. From states 1 and 2, the same transitions back to states 1 and 2 are possible. From state 3, the x can match the dot on the right and transition back to state 3.

Once every character of x=x has been considered, state 4 has been reached, so the regular expression matches the string. Each character was processed exactly once, so the algorithm is linear in the length of the input string. And no backtracking was needed.

Once state 4 is reached (after x= has matched), the regular expression has matched, and the algorithm could even terminate without considering the final x at all.

This algorithm is linear in the size of its input.
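
As a rough sketch of this idea (our own toy code, hand-built from the figures above for the specific pattern .*.*=.*, not the engine Cloudflare will be deploying), the simulation below tracks the set of active NFA states and looks at each input character exactly once.

# Simulate the Figure 0 NFA for .*.*=.* by tracking a set of active states.

def closure(states):
    """Follow the 'match zero characters' (epsilon) moves of the three .* loops."""
    closed = set(states)
    if 0 in closed:
        closed.add(1)      # first .* may match nothing
    if 1 in closed:
        closed.add(2)      # second .* may match nothing
    if 3 in closed:
        closed.add(4)      # trailing .* may match nothing: state 4 is the accept state
    return closed

def nfa_match(text):
    active = closure({0})                  # states 0, 1 and 2 before any input is read
    for ch in text:
        nxt = set()
        if 1 in active:                    # first .* loop: any character stays in state 1
            nxt.add(1)
        if 2 in active:
            nxt.add(2)                     # second .* loop
            if ch == "=":
                nxt.add(3)                 # the literal = moves from state 2 to state 3
        if 3 in active:
            nxt.add(3)                     # final .* loop
        active = closure(nxt)
    return 4 in active

for s in ["x=x", "x" * 20, "x=" + "x" * 20]:
    print(repr(s), nfa_match(s))           # each character is examined exactly once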

Tags: Post Mortem, Outage, Deep Dive

Cloudflare outage caused by bad software deploy (updated)

Post Syndicated from John Graham-Cumming original https://blog.cloudflare.com/cloudflare-outage/

This is a short placeholder blog and will be replaced with a full post-mortem and disclosure of what happened today.

For about 30 minutes today, visitors to Cloudflare sites received 502 errors caused by a massive spike in CPU utilization on our network. This CPU spike was caused by a bad software deploy that was rolled back. Once rolled back the service returned to normal operation and all domains using Cloudflare returned to normal traffic levels.

This was not an attack (as some have speculated) and we are incredibly sorry that this incident occurred. Internal teams are meeting as I write, performing a full post-mortem to understand how this occurred and how we prevent this from ever occurring again.


Update at 2009 UTC:

Starting at 1342 UTC today we experienced a global outage across our network that resulted in visitors to Cloudflare-proxied domains being shown 502 errors (“Bad Gateway”). The cause of this outage was deployment of a single misconfigured rule within the Cloudflare Web Application Firewall (WAF) during a routine deployment of new Cloudflare WAF Managed rules.

The intent of these new rules was to improve the blocking of inline JavaScript that is used in attacks. These rules were being deployed in a simulated mode where issues are identified and logged by the new rule but no customer traffic is actually blocked so that we can measure false positive rates and ensure that the new rules do not cause problems when they are deployed into full production.

Unfortunately, one of these rules contained a regular expression that caused CPU to spike to 100% on our machines worldwide. This 100% CPU spike caused the 502 errors that our customers saw. At its worst traffic dropped by 82%.

This chart shows CPU spiking in one of our PoPs:

This was an unprecedented CPU exhaustion event: we had never experienced global CPU exhaustion before.

We make software deployments constantly across the network and have automated systems to run test suites and a procedure for deploying progressively to prevent incidents. Unfortunately, these WAF rules were deployed globally in one go and caused today’s outage.

At 1402 UTC we understood what was happening and decided to issue a ‘global kill’ on the WAF Managed Rulesets, which instantly dropped CPU back to normal and restored traffic. That occurred at 1409 UTC.

We then went on to review the offending pull request, roll back the specific rules, test the change to ensure that we were 100% certain that we had the correct fix, and re-enabled the WAF Managed Rulesets at 1452 UTC.

We recognize that an incident like this is very painful for our customers. Our testing processes were insufficient in this case and we are reviewing and making changes to our testing and deployment process to avoid incidents like this in the future.

The deep-dive into how Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Monday

Post Syndicated from Martin J Levy original https://blog.cloudflare.com/the-deep-dive-into-how-verizon-and-a-bgp-optimizer-knocked-large-parts-of-the-internet-offline-monday/

A recap on what happened Monday

The deep-dive into how Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Monday

On Monday we wrote about a painful Internet-wide route leak. We wrote that this should never have happened because Verizon should never have forwarded those routes to the rest of the Internet. That blog entry came out around 19:58 UTC, just over seven hours after the route leak finished (which, as we will see below, was around 12:39 UTC). Today we will dive into the archived routing data and analyze it. The format of the code below is meant to use simple shell commands so that any reader can follow along and, more importantly, do their own investigations on the routing tables.

This was a very public BGP route leak event. It was both reported online via many news outlets and the event’s BGP data was reported via social media as it was happening. Andree Toonk tweeted a quick list of 2,400 ASNs that were affected.


This blog contains a large number of acronyms and those are explained at the end of the blog.

Using RIPE NCC archived data

The RIPE NCC operates a very useful archive of BGP routing. It runs collectors globally and provides an API for querying the data. More can be seen at https://stat.ripe.net/. In the world of BGP all routing is public (within the ability of anyone collecting data to have enough collection points). The archived data is very valuable for research, and that's what we will use it for in this blog. The site can also create some very useful data visualizations.

The deep-dive into how Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Monday

Dumping the RIPEstat data for this event

Presently, the RIPEstat data gets ingested around eight to twelve hours after real-time. It’s not meant to be a real-time service. The data can be queried in many ways, including a full web interface and an API. We are using the API to extract the data in a JSON format.

We are going to focus only on the Cloudflare routes that were leaked. Many other ASNs were leaked (see the tweet above); however, we want to deal with a finite data set and focus on what happened to Cloudflare’s routing. All the commands below can be run with ease on many systems. Both the scripts and the raw data file are now available on GitHub. The following was done on MacBook Pro running macOS Mojave.

First we collect 24 hours of route announcements and AS-PATH data that RIPEstat sees coming from AS13335 (Cloudflare).

$ # Collect 24 hours of data - more than enough
$ ASN="AS13335"
$ START="2019-06-24T00:00:00"
$ END="2019-06-25T00:00:00"
$ ARGS="resource=${ASN}&starttime=${START}&endtime=${END}"
$ URL="https://stat.ripe.net/data/bgp-updates/data.json?${ARGS}"
$ # Fetch the data from RIPEstat
$ curl -sS "${URL}" | jq . > 13335-routes.json
$ ls -l 13335-routes.json
-rw-r--r--  1 martin  staff  339363899 Jun 25 08:47 13335-routes.json
$

That’s 340MB of data – which seems like a lot, but it contains plenty of white space and plenty of data we just don’t need. Our second task is to reduce this raw data down to just the required data – that’s timestamps, actual routes, and AS-PATH. The third item will be very useful. Note we are using jq, which can be installed on macOS with the  brew package manager.

$ # Extract just the times, routes, and AS-PATH
$ jq -rc '.data.updates[]|.timestamp,.attrs.target_prefix,.attrs.path' < 13335-routes.json | paste - - - > 13335-listing-a.txt
$ wc -l 13335-listing-a.txt
691318 13335-listing-a.txt
$

We are down to just below seven hundred thousand routing events. However, that's not just the leak; it's everything that includes Cloudflare's ASN (the number 13335 above). For the leak we need to go back to Monday's blog and recall that it was AS396531 (Allegheny Technologies) that showed up alongside 701 (Verizon) in the leak. Now we reduce the data further:

$ # Extract the route leak 701,396531
$ # AS701 is Verizon and AS396531 is Allegheny Technologies
$ egrep '701,396531' < 13335-listing-a.txt > 13335-listing-b.txt
$ wc -l 13335-listing-b.txt
204568 13335-listing-b.txt
$

At 204 thousand data points, we are looking better. It’s still a lot of data because BGP can be very chatty if topology is changing. A route leak will cause exactly that. Now let’s see how many routes were affected:

$ # Extract the actual routes affected by the route leak
$ cut -f2 < 13335-listing-b.txt | sort -V -u > 13335-listing-c.txt
$ wc -l 13335-listing-c.txt
101 13335-listing-c.txt
$

It's a much smaller number. We now have a listing of at least 101 routes that were leaked via Verizon. This may not be the full list because route collectors like RIPEstat don't have direct feeds from Verizon, so this data is a blended view with Verizon's path and other paths. We can see that if we look at the AS-PATH in the above files. Please note that when this blog was first published, a typo in this script (using sort's -n option instead of -V) meant only 20 routes showed up; the list is now correct with 101 affected routes. Please see this short article from stackoverflow to see the issue.

Here’s a partial listing of affected routes.

$ cat 13335-listing-c.txt
8.39.214.0/24
8.42.245.0/24
8.44.58.0/24
...
104.16.80.0/21
104.17.168.0/21
104.18.32.0/21
104.19.168.0/21
104.20.64.0/21
104.22.8.0/21
104.23.128.0/21
104.24.112.0/21
104.25.144.0/21
104.26.0.0/21
104.27.160.0/21
104.28.16.0/21
104.31.0.0/21
141.101.120.0/23
162.159.224.0/21
172.68.60.0/22
172.69.116.0/22
...
$

This is an interesting list, as some of these routes do not originate from Cloudflare's network; however, they show up with AS13335 (our ASN) as the originator. For example, the 104.26.0.0/21 route is not announced from our network, but we do announce 104.26.0.0/20 (which covers that route). More importantly, we have an IRR (Internet Routing Registries) route object plus an RPKI ROA for that block. Here's the IRR object:

route:          104.26.0.0/20
origin:         AS13335
source:         ARIN

And here’s the RPKI ROA. This ROA has Max Length set to 20, so no smaller route should be accepted.

Prefix:       104.26.0.0/20
Max Length:   /20
ASN:          13335
Trust Anchor: ARIN
Validity:     Thu, 02 Aug 2018 04:00:00 GMT - Sat, 31 Jul 2027 04:00:00 GMT
Emitted:      Thu, 02 Aug 2018 21:45:37 GMT
Name:         535ad55d-dd30-40f9-8434-c17fc413aa99
Key:          4a75b5de16143adbeaa987d6d91e0519106d086e
Parent Key:   a6e7a6b44019cf4e388766d940677599d0c492dc
Path:         rsync://rpki.arin.net/repository/arin-rpki-ta/5e4a23ea-...

The Max Length field in an ROA sets the longest (most specific) prefix length that may be announced for the covered address space. The fact that this is a /20 route with a /20 Max Length means that a /21 (or /22 or /23 or /24) within this IP space isn't allowed. Looking further at the route list above, we get the following listing (a small validation sketch follows the table):

Route Seen            Cloudflare IRR & ROA    ROA Max Length
104.16.80.0/21    ->  104.16.80.0/20          /20
104.17.168.0/21   ->  104.17.160.0/20         /20
104.18.32.0/21    ->  104.18.32.0/20          /20
104.19.168.0/21   ->  104.19.160.0/20         /20
104.20.64.0/21    ->  104.20.64.0/20          /20
104.22.8.0/21     ->  104.22.0.0/20           /20
104.23.128.0/21   ->  104.23.128.0/20         /20
104.24.112.0/21   ->  104.24.112.0/20         /20
104.25.144.0/21   ->  104.25.144.0/20         /20
104.26.0.0/21     ->  104.26.0.0/20           /20
104.27.160.0/21   ->  104.27.160.0/20         /20
104.28.16.0/21    ->  104.28.16.0/20          /20
104.31.0.0/21     ->  104.31.0.0/20           /20
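
As a simplified illustration of why each of these /21s is invalid under the ROA, here is a small Python sketch of the origin and max-length check. The ipaddress module is standard Python; the roa dictionary simply encodes the ROA data shown above, and this is a toy check, not a full RPKI validator.

# Minimal RPKI-style check: a route is valid only if it is covered by the
# ROA prefix, is no more specific than Max Length, and has the right origin ASN.
import ipaddress

roa = {"prefix": ipaddress.ip_network("104.26.0.0/20"), "max_length": 20, "asn": 13335}

def roa_check(prefix, origin_asn):
    p = ipaddress.ip_network(prefix)
    covered = p.subnet_of(roa["prefix"])
    return covered and p.prefixlen <= roa["max_length"] and origin_asn == roa["asn"]

print(roa_check("104.26.0.0/20", 13335))   # True  - the route Cloudflare announces
print(roa_check("104.26.0.0/21", 13335))   # False - too specific: /21 exceeds Max Length /20
print(roa_check("104.26.0.0/20", 396531))  # False - wrong origin ASN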

So how did all these /21’s show up? That’s where we dive into the world of BGP route optimization systems and their propensity to synthesize routes that should not exist. If those routes leak (and it’s very clear after this week that they can), all hell breaks loose. That can be compounded when not one, but two ISPs allow invalid routes to be propagated outside their autonomous network. We will explore the AS-PATH further down this blog.

More than 20 years ago, RFC1997 added the concept of communities to BGP. Communities are a way of tagging or grouping route advertisements. Communities are often used to label routes so that specific handling policies can be applied. RFC1997 includes a small number of universal well-known communities. One of these is the NO_EXPORT community, which has the following specification:

    All routes received carrying a communities attribute
    containing this value MUST NOT be advertised outside a BGP
    confederation boundary (a stand-alone autonomous system that
    is not part of a confederation should be considered a
    confederation itself).

The use of the NO_EXPORT community is very common within BGP enabled networks and is a community tag that would have helped alleviate this route leak immensely.

How BGP route optimization systems work (or don’t work in this case) can be a subject for a whole other blog entry.

Timing of the route leak

As we saved away the timestamps in the JSON file and in the text files, we can confirm the time for every route in the route leak by looking at the first and the last timestamp of a route in the data. We saved data from 00:00:00 UTC until 00:00:00 the next day, so we know we have covered the period of the route leak. We write a script that checks the first and last entry for every route and reports the information sorted by start time:

$ # Extract the timing of the route leak
$ while read cidr
do
  echo $cidr
  fgrep $cidr < 13335-listing-b.txt | head -1 | cut -f1
  fgrep $cidr < 13335-listing-b.txt | tail -1 | cut -f1
done < 13335-listing-c.txt |\
paste - - - | sort -k2,3 | column -t | sed -e 's/2019-06-24T//g'
...
104.25.144.0/21   10:34:25  12:38:54
104.22.8.0/21     10:34:27  12:29:39
104.20.64.0/21    10:34:27  12:30:00
104.23.128.0/21   10:34:27  12:30:34
141.101.120.0/23  10:34:27  12:30:39
162.159.224.0/21  10:34:27  12:30:39
104.18.32.0/21    10:34:29  12:30:34
104.24.112.0/21   10:34:29  12:30:34
104.27.160.0/21   10:34:29  12:30:34
104.28.16.0/21    10:34:29  12:30:34
104.31.0.0/21     10:34:29  12:30:34
8.39.214.0/24     10:34:31  12:19:24
104.26.0.0/21     10:34:36  12:29:53
172.68.60.0/22    10:34:38  12:19:24
172.69.116.0/22   10:34:38  12:19:24
8.44.58.0/24      10:34:38  12:19:24
8.42.245.0/24     11:52:49  11:53:19
104.17.168.0/21   12:00:13  12:29:34
104.16.80.0/21    12:00:13  12:30:00
104.19.168.0/21   12:09:39  12:29:34
...
$

Now we know the times. The route leak started at 10:34:25 UTC (just before lunchtime London time) on 2019-06-24 and ended at 12:38:54 UTC. That’s a hair over two hours. Here’s that same time data in a graphical form showing the near-instant start of the event and the duration of each route leaked:

The deep-dive into how Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Monday

We can also go back to RIPEstat and look at the activity graph for Cloudflare’s AS13335 network:

The deep-dive into how Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Monday

Clearly between 10:30 UTC and 12:40 UTC there’s a lot of route activity – far more than normal.

Note that as we mentioned above, RIPEstat doesn’t get a full view of Verizon’s network routing and hence some of the propagated routes won’t show up.

Drilling down on the AS-PATH part of the data

Having the routes is useful, but now we want to look at the paths of these leaked routes to see which ASNs are involved. We knew the offending ASNs during the route leak on Monday. Now we want to dig deeper using the archived data. This allows us to see the extent and reach of this route leak.

$ # Extract the AS-PATH of the route leak
$ # Use the list of routes to extract the full AS-PATH
$ # Merge the results together to show an amalgamation of paths.
$ # We know (luckily) the last few ASNs in the AS-PATH are consistent
$ cut -f3 < 13335-listing-b.txt | tr -d '[\[\]]' |\
awk '{
  n=split($0, a, ",");
  printf "%50s\n",
    a[n-5] "_" a[n-4] "_" a[n-3] "_" a[n-2] "_" a[n-1] "_" a[n];
}' | sort -u
   174_701_396531_33154_3356_13335
   2497_701_396531_33154_174_13335
   577_701_396531_33154_3356_13335
   6939_701_396531_33154_174_13335
  1239_701_396531_33154_3356_13335
  1273_701_396531_33154_3356_13335
  1280_701_396531_33154_3356_13335
  2497_701_396531_33154_3356_13335
  2516_701_396531_33154_3356_13335
  3320_701_396531_33154_3356_13335
  3491_701_396531_33154_3356_13335
  4134_701_396531_33154_3356_13335
  4637_701_396531_33154_3356_13335
  6453_701_396531_33154_3356_13335
  6461_701_396531_33154_3356_13335
  6762_701_396531_33154_3356_13335
  6830_701_396531_33154_3356_13335
  6939_701_396531_33154_3356_13335
  7738_701_396531_33154_3356_13335
 12956_701_396531_33154_3356_13335
 17639_701_396531_33154_3356_13335
 23148_701_396531_33154_3356_13335
$

This script clearly shows the AS-PATH of the leaked routes. It's very consistent. Reading from the back of the line to the front, we have 13335 (Cloudflare), then 3356 or 174 (Level3/CenturyLink or Cogent – both tier 1 transit providers for Cloudflare). So far, so good. Then we have 33154 (DQE Communications) and 396531 (Allegheny Technologies Inc), which is still technically not a leak, but trending that way. We say it's still technically not a leak because we don't know the relationship between those two ASNs; it's possible they have a mutual transit agreement between them. That's up to them.

Back to the AS-PATHs. Still reading leftwards, we see 701 (Verizon), which is very, very bad and clear evidence of a leak. It's a leak for two reasons. First, this matches the pattern of a transit provider leaking a non-customer route learned from a customer: this Verizon customer does not have 13335 (Cloudflare) as a customer of its own. Second, the route contains a tier 1 ASN within its path. This is the point where the route leak should have been absolutely squashed by filtering on the customer BGP session. Beyond this point there be dragons.

And dragons there be! Everything above is about how Verizon filtered (or didn’t filter) its customer. What follows the 701 (i.e the number to the left of it) is the peers or customers of Verizon that have accepted these leaked routes. They are mainly other tier 1 networks of Verizon in this list: 174 (Cogent), 1239 (Sprint), 1273 (Vodafone), 3320 (DTAG), 3491 (PCCW), 6461 (Zayo), 6762 (Telecom Italia), etc.

What’s missing from that list are three networks worthy of mentioning – 1299 (Telia), 2914 (NTT), and 7018 (AT&T). All three implement a very simple AS-PATH filter which saved the day for their network. They do not allow one tier 1 ISP to send them a route which has another tier 1 further down the path. That’s because when that happens, it’s officially a leak as each tier 1 is fully connected to all other tier 1’s (which is part of the definition of a tier 1 network). The topology of the Internet’s global BGP routing tables simply states that if you see another tier 1 in the path, then it’s a bad route and it should be filtered away.
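
Here is a toy sketch of that sanity check in Python. The tier 1 ASN set below is illustrative and incomplete, and real filters are implemented in router policy rather than like this, but the logic is the same: reject a route learned from one tier 1 network if another tier 1 appears further along the AS-PATH.

# Toy version of the "no tier 1 behind another tier 1" AS-PATH filter.
TIER1_ASNS = {174, 701, 1239, 1273, 1299, 2914, 3320, 3356, 3491, 6453, 6461, 6762, 7018}

def accept_from_tier1_peer(peer_asn, as_path):
    """as_path is ordered from the peer we learned the route from to the origin."""
    assert as_path[0] == peer_asn and peer_asn in TIER1_ASNS
    # Any other tier 1 later in the path means the route leaked through a customer.
    return not any(asn in TIER1_ASNS for asn in as_path[1:])

# One of the leaked paths from the listing above, as received from Verizon (701):
leaked_path = [701, 396531, 33154, 3356, 13335]
print(accept_from_tier1_peer(701, leaked_path))   # False: 3356 (also tier 1) is in the path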

Additionally we know that 7018 (AT&T) operates a network which drops RPKI invalids. Because Cloudflare routes are RPKI signed, this also means that AT&T would have dropped these routes when it receives them from Verizon. This shows a clear win for RPKI (and for AT&T when you see their bandwidth graph below)!


That all said, keep in mind we are still talking about routes that Cloudflare didn’t announce. They all came from the route optimizer.

What should 701 Verizon network accept from their customer 396531?

This is a great question to ask. Normally we would look at the IRR (Internet Routing Registries) to see what policy an ASN wants for its routes.

$ whois -h whois.radb.net AS396531 ; the Verizon customer
%  No entries found for the selected source(s).
$ whois -h whois.radb.net AS33154  ; the downstream of that customer
%  No entries found for the selected source(s).
$ 

That's enough to say that we should not be seeing this ASN anywhere on the Internet; however, we should go further in checking this. As we know the ASN of the network, we can search for any routes that are listed for that ASN. We find one:

$ whois -h whois.radb.net ' -i origin AS396531' | egrep '^route|^origin|^mnt-by|^source'
route:          192.92.159.0/24
origin:         AS396531
mnt-by:         MNT-DCNSL
source:         ARIN
$

More importantly, now we have a maintainer (the owner of the routing registry entries). We can see what else is there for that network and we are specifically looking for this:

$ whois -h whois.radb.net ' -i mnt-by -T as-set MNT-DCNSL' | egrep '^as-set|^members|^mnt-by|^source'
as-set:         AS-DQECUST
members:        AS4130, AS5050, AS11199, AS11360, AS12017, AS14088, AS14162,
                AS14740, AS15327, AS16821, AS18891, AS19749, AS20326,
                AS21764, AS26059, AS26257, AS26461, AS27223, AS30168,
                AS32634, AS33039, AS33154, AS33345, AS33358, AS33504,
                AS33726, AS40549, AS40794, AS54552, AS54559, AS54822,
                AS393456, AS395440, AS396531, AS15204, AS54119, AS62984,
                AS13659, AS54934, AS18572, AS397284
mnt-by:         MNT-DCNSL
source:         ARIN
$

This object is important. It lists all the downstream ASNs that this network is expected to announce to the world. It does not contain Cloudflare’s ASN (or any of the leaked ASNs). Clearly this as-set was not used for any BGP filtering.

Just for completeness the same exercise can be done for the other ASN (the downstream of the customer of Verizon). In this case, we just searched for the maintainer object (as there are plenty of route and route6 objects listed).

$ whois -h whois.radb.net ' -i origin AS33154' | egrep '^mnt-by' | sort -u
mnt-by:         MNT-DCNSL
mnt-by:     MAINT-AS3257
mnt-by:     MAINT-AS5050
$

None of these maintainers are directly related to 33154 (DQE Communications). They have been created by other parties and hence they become a dead-end in that search.

It’s worth doing a secondary search to see if any as-set object exists with 33154 or 396531 included. We turned to the most excellent IRR Explorer website run by NLNOG. It provides deep insight into the routing registry data. We did a simple search for 33154 using http://irrexplorer.nlnog.net/search/33154 and we found these as-set objects.

The deep-dive into how Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Monday

It's interesting to see this ASN listed in other as-sets, but none are important or related to Monday's route leak. Next we looked at 396531.

The deep-dive into how Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Monday

This shows that there’s nowhere else we need to check. AS-DQECUST is the as-set macro that controls (or should control) filtering for any transit provider of their network.

The summary of all the investigation is a solid statement that no Cloudflare routes or ASNs are listed anywhere within the routing registry data for the customer of Verizon. As there were 2,300 ASNs listed in the tweet above, we can conclusively state no filtering was in place and hence this route leak went on its way unabated.

IPv6? Where is the IPv6 route leak?

In what could be considered the only plus from Monday’s route leak, we can confirm that there was no route leak within IPv6 space. Why?

It turns out that 396531 (Allegheny Technologies Inc) is a network without IPv6 enabled. Normally you would hear Cloudflare chastise anyone that’s yet to enable IPv6, however, in this case we are quite happy that one of the two protocol families survived. IPv6 was stable during this route leak, which now can be called an IPv4-only route leak.

Yet that’s not really the whole story. Let’s look at the percentage of traffic Cloudflare sends Verizon that’s IPv6 (vs IPv4). Normally the IPv4/IPv6 percentage holds steady.

The deep-dive into how Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Monday

This uptick in IPv6 traffic could be the direct result of Happy Eyeballs on mobile handsets picking a working IPv6 path into Cloudflare over a non-working IPv4 path. Happy Eyeballs is meant to protect against IPv6 failure; in this case, however, it did a wonderful job of protecting against an IPv4 failure. Yet we have to be careful with this graph: on further thought and investigation, the percentage only increased because IPv4 traffic dropped. Graphs can sometimes be misinterpreted, but Happy Eyeballs still did a good job even as end users were being affected.

Happy Eyeballs, described in RFC8305, is a mechanism where a client (let's say a mobile device) tries to connect to a website over both IPv4 and IPv6 simultaneously, with IPv6 sometimes given a head start. The theory is that, should a failure exist on one of the paths (sadly, IPv6 failure is the norm), then IPv4 will save the day. Monday was a day of opposites for Verizon.
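
For readers unfamiliar with the mechanism, here is a much-simplified Python sketch of the Happy Eyeballs idea. The hostname, port, timeout, and the 300 ms head start are illustrative values of our own choosing, and real RFC 8305 implementations are considerably more involved.

# Simplified Happy Eyeballs: race IPv6 and IPv4 connections, giving IPv6 a head start.
import socket
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

HOST, PORT = "www.cloudflare.com", 443
HEAD_START = 0.3   # seconds of preference given to IPv6

def connect(family):
    family, type_, proto, _, addr = socket.getaddrinfo(HOST, PORT, family,
                                                       socket.SOCK_STREAM)[0]
    sock = socket.socket(family, type_, proto)
    sock.settimeout(5)
    sock.connect(addr)
    return sock, addr

with ThreadPoolExecutor(max_workers=2) as pool:
    attempts = [pool.submit(connect, socket.AF_INET6)]
    time.sleep(HEAD_START)                      # IPv6 gets to try first
    attempts.append(pool.submit(connect, socket.AF_INET))
    for attempt in as_completed(attempts):
        try:
            sock, addr = attempt.result()
            print("connected via", addr)        # whichever family works first wins
            sock.close()
            break
        except OSError:
            continue                            # that family failed; the other may still succeed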

In fact, enabling IPv6 for mobile users is the one area where we can praise the Verizon network this week (or at least the Verizon mobile network), unlike the residential Verizon networks where IPv6 is almost non-existent.

Using bandwidth graphs to confirm routing leaks and stability.

As we have already stated, Verizon impacted its own users and customers. Let's start with their bandwidth graph:

The deep-dive into how Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Monday

The red line is 24 June 2019 (00:00 UTC to 00:00 UTC the next day). The gray lines are previous days for comparison. This graph includes both Verizon fixed-line services like FiOS along with mobile.

The AT&T graph is quite different.

The deep-dive into how Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Monday

There’s no perturbation. This, along with some direct confirmation, shows that 7018 (AT&T) was not affected. This is an important point.

Going back and looking at a third tier 1 network, we can see 6762 (Telecom Italia) affected by this route leak and yet Cloudflare has a direct interconnect with them.

The deep-dive into how Verizon and a BGP Optimizer Knocked Large Parts of the Internet Offline Monday

We will be asking Telecom Italia to improve their route filtering as we now have this data.

Future work that could have helped on Monday

The IETF is doing work in the area of BGP path protection within the Secure Inter-Domain Routing Operations Working Group (sidrops) area. The charter of this IETF group is:

The SIDR Operations Working Group (sidrops) develops guidelines for the operation of SIDR-aware networks, and provides operational guidance on how to deploy and operate SIDR technologies in existing and new networks.

One new effort from this group should be called out to show how important the issue of route leaks like today's event is. The draft document from Alexander Azimov et al named draft-ietf-sidrops-aspa-profile (ASPA stands for Autonomous System Provider Authorization) extends the RPKI data structures to handle BGP path information. This is ongoing work, and Cloudflare and other companies are clearly interested in seeing it progress further.

However, as we said in Monday’s blog and something we should reiterate again and again: Cloudflare encourages all network operators to deploy RPKI now!

Acronyms used in the blog

  • API – Application Programming Interface
  • AS-PATH – The list of ASNs that a route has traversed so far
  • ASN – Autonomous System Number – A unique number assigned for each network on the Internet
  • BGP – Border Gateway Protocol (version 4) – the core routing protocol for the Internet
  • IETF – Internet Engineering Task Force – an open standards organization
  • IPv4 – Internet Protocol version 4
  • IPv6 – Internet Protocol version 6
  • IRR – Internet Routing Registries – a database of Internet route objects
  • ISP – Internet Service Provider
  • JSON – JavaScript Object Notation – a lightweight data-interchange format
  • RFC – Request For Comment – published by the IETF
  • RIPE NCC – Réseaux IP Européens Network Coordination Centre – a regional Internet registry
  • ROA – Route Origin Authorization – a cryptographically signed attestation of a BGP route announcement
  • RPKI – Resource Public Key Infrastructure – a public key infrastructure framework for routing information
  • Tier 1 – A network that has no default route and peers with all other tier 1’s
  • UTC – Coordinated Universal Time – a time standard for clocks and time
  • "there be dragons" – a mistype, as it was meant to be "here be dragons" which means dangerous or unexplored territories

The Quantum Menace

Post Syndicated from Armando Faz-Hernández original https://blog.cloudflare.com/the-quantum-menace/

The Quantum Menace

The Quantum Menace

Over the last few decades, the word ‘quantum’ has become increasingly popular. It is common to find articles, reports, and many people interested in quantum mechanics and the new capabilities and improvements it brings to the scientific community. This topic not only concerns physics, since the development of quantum mechanics impacts on several other fields such as chemistry, economics, artificial intelligence, operations research, and undoubtedly, cryptography.

This post begins a trio of blogs describing the impact of quantum computing on cryptography, and how to use stronger algorithms resistant to the power of quantum computing.

  • This post introduces quantum computing and describes the main aspects of this new computing model and its devastating impact on security standards; it summarizes some approaches to securing information using quantum-resistant algorithms.
  • Due to the relevance of this matter, we present our experiments on a large-scale deployment of quantum-resistant algorithms.
  • Our third post introduces CIRCL, an open-source Go library featuring optimized implementations of quantum-resistant algorithms and elliptic curve-based primitives.

All of this is part of Cloudflare's Crypto Week 2019. Now fasten your seatbelt and get ready to make a quantum leap.

What is Quantum Computing?

Back in 1981, Richard Feynman raised the question about what kind of computers can be used to simulate physics. Although some physical systems can be simulated in a classical computer, the amount of resources used by such a computer can grow exponentially. Then, he conjectured the existence of a computer model that behaves under quantum mechanics rules, which opened a field of research now called quantum computing. To understand the basics of quantum computing, it is necessary to recall how classical computers work, and from that shine a spotlight on the differences between these computational models.

The Quantum Menace
Fellows of the Royal Society: John Maynard Smith, Richard Feynman & Alan Turing

In 1936, Alan Turing and Emil Post independently described models that gave rise to the foundation of the computing model known as the Post-Turing machine, which describes how computers work and allowed further determination of limits for solving problems.

In this model, the units of information are bits, which store one of two possible values, usually denoted by 0 and 1. A computing machine contains a set of bits and performs operations that modify the values of the bits, also known as the machine’s state. Thus, a machine with N bits can be in one of 2ᴺ possible states. With this in mind, the Post-Turing computing model can be abstractly described as a machine of states, in which running a program is translated as machine transitions along the set of states.

A paper David Deutsch published in 1985 describes a computing model that extends the capabilities of a Turing machine based on the theory of quantum mechanics. This computing model introduces several advantages over the Turing model for processing large volumes of information. It also presents unique properties that deviate from the way we understand classical computing. Most of these properties come from the nature of quantum mechanics. We’re going to dive into these details before approaching the concept of quantum computing.

Superposition

One of the most exciting properties of quantum computing that provides an advantage over the classical computing model is superposition. In physics, superposition is the ability to produce valid states from the addition or superposition of several other states that are part of a system.

Applying these concepts to computing information, it means that there is a system in which it is possible to generate a machine state that represents a (weighted) sum of the states 0 and 1; in this case, the term weighted means that the state can keep track of “the quantity of” 0 and 1 present in the state. In the classical computation model, one bit can only store either the state of 0 or 1, not both; even using two bits, they cannot represent the weighted sum of these states. Hence, to make a distinction from the basic states, quantum computing uses the concept of a quantum bit (qubit) — a unit of information to denote the superposition of two states. This is a cornerstone concept of quantum computing as it provides a way of tracking more than a single state per unit of information, making it a powerful tool for processing information.

The Quantum Menace
Classical computing – A bit stores only one of two possible states: ON or OFF.

The Quantum Menace
Quantum computing – A qubit stores a combination of two or more states.

So, a qubit represents the sum of two parts: the 0 or 1 state plus the amount each 0/1 state contributes to produce the state of the qubit.

In mathematical notation, a qubit \( | \Psi \rangle \) is written as an explicit sum indicating that it represents the superposition of the states 0 and 1. This is the Dirac notation used to describe the value of a qubit: \( | \Psi \rangle =  A | 0 \rangle +B | 1 \rangle \), where A and B are complex numbers known as the amplitudes of the states 0 and 1, respectively. The basic states themselves are represented as the qubits \( | 0 \rangle =  1 | 0 \rangle + 0 | 1 \rangle \)  and \( | 1 \rangle =  0 | 0 \rangle + 1 | 1 \rangle \), respectively; \( | 0 \rangle \) and \( | 1 \rangle \) are the abbreviated notation for these special states.

Measurement

In a classical computer, the values 0 and 1 are implemented as digital signals. Measuring the current of the signal automatically reveals the status of a bit. This means that at any moment the value of the bit can be observed or measured.

The state of a qubit is maintained in a physically closed system, meaning that the properties of the system, such as superposition, require no interaction with the environment; otherwise any interaction, like performing a measurement, can cause interference on the state of a qubit.

Measuring a qubit is a probabilistic experiment. The result is a bit of information that depends on the state of the qubit. The bit, obtained by measuring \( | \Psi \rangle =  A | 0 \rangle +B | 1 \rangle \), will be equal to 0 with probability \( |A|^2 \),  and equal to 1 with probability \( |B|^2 \), where \( |x| \) represents the absolute value of \(x\).

From statistics, we know that the sum of the probabilities of all possible events is always equal to 1, so it must hold that \( |A|^2 +|B|^2 =1 \). This last equation motivates representing qubits as points on a circle of radius one and, more generally, as points on the surface of a sphere of radius one, known as the Bloch Sphere.

The Quantum Menace
The qubit state is analogous to a point on a unitary circle.

The Quantum Menace
The Bloch Sphere by Smite-Meister – Own work, CC BY-SA 3.0.

Let's break it down: if you measure a qubit, you also destroy its superposition, resulting in a collapse of the superposed state, where it assumes one of the basic states, providing your final result.

Another way to think about superposition and measurement is through the coin tossing experiment.

Toss a coin in the air and you give people a random choice between two options: heads or tails. Now, don’t focus on the randomness of the experiment, instead note that while the coin is rotating in the air, participants are uncertain which side will face up when the coin lands. Conversely, once the coin stops with a random side facing up, participants are 100% certain of the status.

The Quantum Menace

How does it relate? Qubits are similar to the participants. When a qubit is in a superposition of states, it is tracking the probability of heads or tails, which is the participants’ uncertainty quotient while the coin is in the air. However, once you start to measure the qubit to retrieve its value, the superposition vanishes, and a classical bit value sticks: heads or tails. Measurement is that moment when the coin is static with only one side facing up.

A fair coin is a coin that is not biased. Each side (assume 0=heads and 1=tails) of a fair coin has the same probability of sticking after a measurement is performed. The qubit \( \tfrac{1}{\sqrt{2}}|0\rangle + \tfrac{1}{\sqrt{2}}|1\rangle \) describes the probabilities of tossing a fair coin. Note that squaring either of the amplitudes results in ½, indicating that there is a 50% chance either heads or tails sticks.
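
To make the coin analogy concrete, here is a small classical-simulation sketch in Python (using numpy; the sampling code and names are our own illustration, not a quantum library) that represents a qubit as a pair of amplitudes and simulates repeated measurements of the fair-coin qubit \( \tfrac{1}{\sqrt{2}}|0\rangle + \tfrac{1}{\sqrt{2}}|1\rangle \).

# A qubit as a pair of amplitudes (A, B) with |A|^2 + |B|^2 = 1; measuring it
# returns 0 with probability |A|^2 and 1 with probability |B|^2.
import numpy as np

def measure(qubit, rng, shots=1):
    probs = np.abs(qubit) ** 2               # |A|^2 and |B|^2
    assert np.isclose(probs.sum(), 1.0)       # amplitudes must be normalized
    return rng.choice([0, 1], size=shots, p=probs)

rng = np.random.default_rng(seed=0)
fair_coin = np.array([1, 1]) / np.sqrt(2)     # (1/sqrt(2))|0> + (1/sqrt(2))|1>
results = measure(fair_coin, rng, shots=1000)
print("heads (0):", (results == 0).sum(), " tails (1):", (results == 1).sum())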

It would be interesting to be able to bias a fair coin at will while it is in the air. Although this is the magic of a professional illusionist, this task, in fact, can be achieved by performing operations over qubits. So, get ready to become the next quantum magician!

The Quantum Menace

Quantum Gates

A logic gate represents a Boolean function operating over a set of inputs (on the left) and producing an output (on the right). A logic circuit is a set of connected logic gates, a convenient way to represent bit operations.

The Quantum Menace
The NOT gate is a single-bit operation that flips the value of the input bit.

Other gates include AND, OR, XOR, and NAND. A set of gates is universal if it can generate all other gates. For example, the NOR and NAND gates are universal since any circuit can be constructed using only these gates.

Quantum computing also admits a description using circuits. Quantum gates operate over qubits, modifying the superposition of the states. For example, there is a quantum gate analogous to the NOT gate, the X gate.

The X quantum gate interchanges the amplitudes of the states of the input qubit.

The Quantum Menace

The Z quantum gate flips the sign of the amplitude of state 1:

The Quantum Menace

Another quantum gate is the Hadamard gate, which generates an equiprobable superposition of the basic states.

The Quantum Menace

Using our coin tossing analogy, the Hadamard gate has the action of tossing a fair coin to the air. In quantum circuits, a triangle represents measuring a qubit, and the resulting bit is indicated by a double-wire.

The Quantum Menace

Other gates, such as the CNOT gate, Pauli’s gates, Toffoli gate, Deutsch gate, are slightly more advanced. Quirk, the open-source playground, is a fun sandbox where you can construct quantum circuits using all of these gates.
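
As an illustration of how these gates act on amplitudes, here is a small numpy sketch (our own toy vectors and names, not a quantum SDK) applying the X, Z, and Hadamard gates, written as 2x2 matrices, to the basic states.

# Single-qubit gates as 2x2 matrices acting on amplitude vectors.
import numpy as np

X = np.array([[0, 1],
              [1, 0]])                      # swaps the amplitudes of |0> and |1>
Z = np.array([[1, 0],
              [0, -1]])                     # flips the sign of the |1> amplitude
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)        # Hadamard: equiprobable superposition

zero = np.array([1, 0])                     # |0>
one = np.array([0, 1])                      # |1>

print("X|0> =", X @ zero)                   # -> |1>
print("Z|1> =", Z @ one)                    # -> -|1>
print("H|0> =", H @ zero)                   # -> (|0> + |1>)/sqrt(2), the 'fair coin'
print("H(H|0>) =", H @ (H @ zero))          # H is its own inverse: back to |0>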

Reversibility

An operation is reversible if there exists another operation that rolls back the output state to the initial state. For instance, a NOT gate is reversible since applying a second NOT gate recovers the initial input.

The Quantum Menace

In contrast, AND, OR, NAND gates are not reversible. This means that some classical computations cannot be reversed by a classic circuit that uses only the output bits. However, if you insert additional bits of information, the operation can be reversed.

Quantum computing mainly focuses on reversible computations, because there’s always a way to construct a reversible circuit to perform an irreversible computation. The reversible version of a circuit could require the use of ancillary qubits as auxiliary (but not temporary) variables.

Due to the nature of composed systems, it is possible for these ancillas (extra qubits) to become correlated with the qubits of the main computation. This correlation makes it infeasible to reuse ancillas, since any modification could have a side effect on the operation of the reversible circuit. This is like memory assigned to a process by the operating system: the process cannot use memory from other processes or it could cause memory corruption, and processes cannot release their assigned memory to other processes. You could use garbage collection mechanisms for ancillas, but performing reversible computations increases your qubit budget.

Composed Systems

In quantum mechanics, a single qubit can be described as a single closed system: a system that has no interaction with the environment nor with other qubits. Letting qubits interact with others leads to a composed system in which more states can be represented. The state of a 2-qubit composite system is denoted as \(A_0|00\rangle+A_1|01\rangle+A_2|10\rangle+A_3|11\rangle \), where the \( A_i \) values correspond to the amplitudes of the four basic states 00, 01, 10, and 11. The state \( \tfrac{1}{2}|00\rangle+\tfrac{1}{2}|01\rangle+\tfrac{1}{2}|10\rangle+\tfrac{1}{2}|11\rangle \) represents the superposition of these basic states, with each of the four equally likely to be obtained when the two qubits are measured.

In the classical case, the state of N bits represents only one of 2ᴺ possible states, whereas a composed state of N qubits represents all the 2ᴺ states but in superposition. This is one big difference between these computing models as it carries two important properties: entanglement and quantum parallelism.
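
A small sketch of a composed system, again with numpy as an illustration: the joint state of two qubits is the Kronecker (tensor) product of the individual states, and applying a Hadamard to each qubit of \( |00\rangle \) produces the equal superposition of the four basic states mentioned above.

# Two-qubit composed states via the Kronecker product.
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
zero = np.array([1, 0])

# Joint state |00> is the tensor product of two |0> states.
state_00 = np.kron(zero, zero)               # amplitudes for |00>, |01>, |10>, |11>

# Apply H to each qubit: the joint operator is H tensor H.
HH = np.kron(H, H)
superposition = HH @ state_00
print(superposition)                          # [0.5, 0.5, 0.5, 0.5]
print(np.abs(superposition) ** 2)             # each basic state measured with probability 1/4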

Entanglement

According to the theory behind quantum mechanics, some composed states can be described through the descriptions of their constituents. However, there are composed states for which no such description is possible; these are known as entangled states.

The Quantum Menace
Bell states are entangled qubit examples

The entanglement phenomenon was pointed out by Einstein, Podolsky, and Rosen in the so-called EPR paradox. Suppose there is a composed system of two entangled qubits, in which performing a measurement on one qubit causes interference in the measurement of the second. This interference occurs even when the qubits are separated by a long distance, which would mean that some information transfer happens faster than the speed of light. This is how quantum entanglement conflicts with the theory of relativity, where information cannot travel faster than the speed of light. The EPR paradox motivated further investigation aimed at deriving new interpretations of quantum mechanics and resolving the paradox.

Quantum entanglement can help to transfer information at a distance by following a communication protocol. The following protocol examples rely on the fact that Alice and Bob separately possess one of two entangled qubits:

  • The superdense coding protocol allows Alice to communicate a 2-bit message \(m_0,m_1\) to Bob using a quantum communication channel, for example, using fiber optics to transmit photons. All Alice has to do is operate on her qubit according to the value of the message and send the resulting qubit to Bob. Once Bob receives the qubit, he measures both qubits, noting that the collapsed 2-bit state corresponds to Alice’s message.

The Quantum Menace
Superdense coding protocol.

  • The quantum teleportation protocol allows Alice to transmit a qubit to Bob without using a quantum communication channel. Alice measures the qubit to send Bob and her entangled qubit resulting in two bits. Alice sends these bits to Bob, who operates on his entangled qubit according to the bits received and notes that the result state matches the original state of Alice’s qubit.

The Quantum Menace
Quantum teleportation protocol.

Quantum Parallelism

Composed systems of qubits allow representation of more information per composed state. Note that operating on a composed state of N qubits is equivalent to operating over a set of 2ᴺ states in superposition. This procedure is quantum parallelism. In this setting, operating over a large volume of information gives the intuition of performing operations in parallel, like in the parallel computing paradigm; one big caveat is that superposition is not equivalent to parallelism.

Remember that a composed state is a superposition of several states, so a computation that takes a composed state as input will produce a composed state of outputs. The main divergence between classical and quantum parallelism is that quantum parallelism lets you obtain only one of the processed outputs: measuring the output of a composed state causes the qubits to collapse to just one of the outputs, making it impossible to read out all of the computed values.

The Quantum Menace

Although quantum parallelism does not match precisely with the traditional notion of parallel computing, you can still leverage this computational power to get related information.

Deutsch-Jozsa Problem: Assume \(F\) is a function that takes as input N bits, outputs one bit, and is either constant (always outputs the same value for all inputs) or balanced (outputs 0 for half of the inputs and 1 for the other half). The problem is to determine if \(F\) is constant or balanced.

The quantum algorithm that solves the Deutsch-Jozsa problem uses quantum parallelism. First, N qubits are initialized in a superposition of 2ᴺ states. Then, in a single shot, it evaluates \(F\) for all of these states.

(note that some factors were omitted for simplicity)

The result of applying \(F\) appears in the exponent of the amplitude of the all-zero state. Only when \(F\) is constant is this amplitude either +1 or -1. If the result of measuring the N qubits is the all-zeros bitstring, then there is 100% certainty that \(F\) is constant. Any other result indicates that \(F\) is balanced.

A deterministic classical algorithm solves this problem using \( 2^{N-1}+1\) evaluations of \(F\) in the worst case, whereas the quantum algorithm requires only one evaluation. The Deutsch-Jozsa problem exemplifies the exponential advantage of a quantum algorithm over deterministic classical algorithms.
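
As a sanity check of the argument above, here is a small state-vector simulation of the Deutsch-Jozsa algorithm in Python/NumPy; the phase-oracle formulation and the toy functions are assumptions made for illustration:

import numpy as np

def deutsch_jozsa(f, n):
    # Decide whether f: {0,1}^n -> {0,1} is constant or balanced using a
    # single phase-oracle evaluation, simulated on a classical state vector.
    dim = 2 ** n
    state = np.full(dim, 1 / np.sqrt(dim))            # H^n applied to |0...0>
    state = np.array([(-1) ** f(x) for x in range(dim)]) * state   # phase oracle
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)                           # build the n-qubit Hadamard
    state = Hn @ state
    p_all_zeros = abs(state[0]) ** 2                  # probability of measuring |0...0>
    return "constant" if np.isclose(p_all_zeros, 1.0) else "balanced"

n = 3
print(deutsch_jozsa(lambda x: 1, n))                          # constant
print(deutsch_jozsa(lambda x: bin(x).count("1") % 2, n))      # balanced (parity)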

Quantum Computers

The theory of quantum computing is supported by investigations in the field of quantum mechanics. However, constructing a quantum machine requires a physical system that allows representing qubits and manipulating states in a reliable and precise way.

The DiVincenzo Criteria require that a physical implementation of a quantum computer must:

  1. Be scalable and have well-defined qubits.
  2. Be able to initialize qubits to a state.
  3. Have long decoherence times to apply quantum error-correcting codes. Decoherence of a qubit happens when the qubit interacts with the environment, for example, when a measurement is performed.
  4. Use a universal set of quantum gates.
  5. Be able to measure single qubits without modifying others.

Physical implementations of quantum computers face huge engineering obstacles to satisfy these requirements. The most important challenge is to guarantee low error rates during computation and measurement. Lowering these rates requires error-correction techniques, which add a significant number of qubits specialized in this task. For this reason, the qubit count of a quantum computer should not be read the way we read bit counts of classical systems. In a classical computer, all bits are effective for performing a calculation, whereas the number of qubits is the sum of the effective qubits (those used to make calculations), the ancillas (used for reversible computations), and the error-correction qubits.

Current implementations of quantum computers satisfy the DiVincenzo criteria only partially. Quantum adiabatic computers fit in this category since they do not operate using quantum gates; for this reason, they are not considered universal quantum computers.

Quantum Adiabatic Computers

A recurrent problem in optimization is to find the global minimum of an objective function. For example, a route-traffic control system can be modeled as a function whose routing cost must be minimized. Simulated annealing is a heuristic procedure that provides a good solution to these types of problems: it finds the solution state by slowly introducing changes (the adiabatic process) to the variables that govern the system.
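
For intuition, a minimal simulated-annealing sketch in Python (the objective function, cooling schedule, and step size are arbitrary choices for illustration):

import math, random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.999, steps=20000):
    # Minimal sketch: accept worse moves with a probability that shrinks
    # as the "temperature" t is slowly lowered.
    x, t = x0, t0
    for _ in range(steps):
        y = neighbor(x)
        delta = cost(y) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = y
        t *= cooling
    return x

cost = lambda x: x * x - 10 * math.cos(2 * math.pi * x) + 10   # global minimum at x = 0
neighbor = lambda x: x + random.gauss(0, 0.5)
print(simulated_annealing(cost, neighbor, x0=8.0))             # typically close to 0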

Quantum annealing is the quantum analogue of simulated annealing. A set of qubits is initialized into a superposition of states representing all possible solutions to the problem. The Hamiltonian operator, the sum of the potential and kinetic energies of the system, is used here: the objective function is encoded in this operator, which describes the evolution of the system over time. Then, if the system is allowed to evolve very slowly, it will eventually land on a final state representing the optimal value of the objective function.

Adiabatic computers already exist on the market, such as the D-Wave systems, featuring hundreds of qubits; however, their capabilities are limited to problems that can be modeled as optimization problems. The limits of adiabatic computers were studied by van Dam et al, showing that although they can solve local search problems and even some instances of the max-SAT problem, there exist harder search problems this computing model cannot efficiently solve.

Nuclear Magnetic Resonance

Nuclear Magnetic Resonance (NMR) is a physical phenomenon that can be used to represent qubits: the spins of the atomic nuclei of molecules are perturbed by an oscillating magnetic field. A 2001 report describes a successful implementation of Shor’s algorithm on a 7-qubit NMR quantum computer, an iconic result since this computer was able to factor the number 15.

Nucleus spinning induced by a magnetic field, by Darekk2, CC BY-SA 3.0

NMR spectrometer by UCSB

Superconducting Quantum Computers

One way to physically construct qubits is based on superconductors, materials that conduct electric current with zero resistance when exposed to temperatures close to absolute zero.


The Josephson effect, in which current flows across the junction of two superconductors separated by a non-superconducting material, is used to physically implement a superposition of states.

A Josephson junction – Public Domain

When a magnetic flux is applied to this junction, the current flows continuously in one direction. But, depending on the quantity of magnetic flux applied, the current can also flow in the opposite direction. There exists a quantum superposition of currents flowing clockwise and counterclockwise, leading to a physical implementation of a qubit called a flux qubit. The complete device is known as a Superconducting Quantum Interference Device (SQUID) and can be easily coupled to others to scale the number of qubits. Thus, SQUIDs are like the transistors of a quantum computer.

SQUID: Superconducting Quantum Interference Device. Image by Kurzweil Network and original source.

Examples of superconducting computers are:

  • D-Wave’s adiabatic computers perform quantum annealing to solve diverse optimization problems.
  • Google’s 72-qubit computer was recently announced, although several engineering issues remain, such as achieving lower operating temperatures.
  • IBM’s IBM-Q Tokyo, a 20-qubit gate-based computer, and IBM Q Experience, a cloud-based system for exploring quantum circuits.

D-Wave Cooling System by D-Wave Systems Inc.

IBM Q System

IBM Q System One cryostat at CES.

The Imminent Threat of Quantum Algorithms

The quantum zoo website tracks problems that can be solved using quantum algorithms. As of mid-2018, more than 60 problems appear on this list, targeting diverse applications in the areas of number theory, approximation, simulation, and searching. As terrific as it sounds, some of the problems that quantum computers solve easily lie at the heart of information security.

Grover’s Algorithm

Tales of a quantum detective (fragment). A couple of detectives have the mission of finding the one culprit in a group of suspects who always answer this question honestly: “are you guilty?”.
Detective C follows a classic interrogation method, interviewing every person one at a time until finding the first one who confesses.
Detective Q proceeds in a different way: first, he gathers all suspects in a completely dark room, and then asks them — are you guilty? — A steady sound comes from the room saying “No!” while, at the same time, a single voice mixed into the air responds “Yes!”. Since everybody is immersed in darkness, the detective cannot see the culprit. However, detective Q knows that, as the interrogation advances, the culprit will grow desperate and start to speak louder and louder, so he keeps asking the same question. Suddenly, detective Q turns on the lights, enters the room, and captures the culprit. How did he do it?

The task of the detective can be modeled as a searching problem. Given a Boolean function \( f\) that takes N bits and produces one bit, to find the unique input \(x\) such that \( f(x)=1\).

A classical algorithm (detective C) finds \(x\) using \(2^N-1\) function evaluations in the worst case. However, the quantum algorithm devised by Grover, corresponding to detective Q, searches quadratically faster using around \(2^{N/2}\) function evaluations.

The key intuition of Grover’s algorithm is increasing the amplitude of the state that represents the solution while maintaining the other states in a lower amplitude. In this way, a system of N qubits, which is a superposition of 2ᴺ possible inputs, can be continuously updated using this intuition until the solution state has an amplitude closer to 1. Hence, after updating the qubits many times, there will be a high probability to measure the solution state.

Initially, a superposition of 2ᴺ states (horizontal axis) is set, each state has an amplitude (vertical axis) close to 0. The qubits are updated so that the amplitude of the solution state increases more than the amplitude of other states. By repeating the update step, the amplitude of the solution state gets closer to 1, which boosts the probability of collapsing to the solution state after measuring.

Image taken from D. Bernstein’s slides.

Grover’s Algorithm (pseudo-code):

  1. Prepare an N-qubit register \(|x\rangle \) as a uniform superposition of 2ᴺ states.
  2. Update the qubits by performing this core operation. $$ |x\rangle \mapsto (-1)^{f(x)} |x\rangle $$ The result of \( f(x) \) flips the amplitude of the searched state only.
  3. Invert the amplitudes of the N-qubit register about their average.
  4. Repeat Steps 2 and 3 approximately \( (\tfrac{\pi}{4})  2^{ N/2} \) times.
  5. Measure the register and return the bits obtained.

Alternatively, the second step can be better understood as a conditional statement:

IF f(x) = 1 THEN
     Negate the amplitude of the solution state.
ELSE
     /* nothing */
ENDIF
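
Putting the pseudo-code together, here is a minimal classical simulation of the amplitude updates in Python/NumPy; it is only a sketch for small N, with the marked value chosen arbitrarily:

import numpy as np

def grover_search(f, n):
    # Sketch of Grover's algorithm on a classical state vector for an n-bit
    # predicate f with a single marked input (illustration only).
    dim = 2 ** n
    amp = np.full(dim, 1 / np.sqrt(dim))                       # Step 1: uniform superposition
    for _ in range(int(round((np.pi / 4) * np.sqrt(dim)))):    # Step 4: ~ (pi/4) 2^(n/2) rounds
        for x in range(dim):                                   # Step 2: oracle flips the sign
            if f(x):                                           #         of the marked amplitude
                amp[x] = -amp[x]
        amp = 2 * amp.mean() - amp                             # Step 3: inversion about the mean
    p = amp ** 2                                               # Step 5: measure
    p /= p.sum()
    return int(np.random.choice(dim, p=p))

n = 10
secret = 0b1011001110
found = grover_search(lambda x: x == secret, n)
print(found == secret)        # True with probability close to 1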

Grover’s algorithm treats the function \(f\) as a black box, so with slight modifications it can also be used to find collisions on the function. This implies that Grover’s algorithm can find a collision using asymptotically fewer operations than a brute-force algorithm.

The power of Grover’s algorithm can be turned against cryptographic hash functions. For instance, a quantum computer running Grover’s algorithm could find a preimage of a SHA-256 digest performing only 2¹²⁸ evaluations of a reversible circuit of SHA-256. The natural protection for hash functions is to double the output size. More generally, most symmetric-key encryption algorithms will survive the power of Grover’s algorithm by doubling the size of their keys.

The scenario for public-key algorithms is devastating in the face of Peter Shor’s algorithm.

Shor’s Algorithm

Multiplying integers is an easy task to accomplish; however, finding the factors that compose an integer is difficult. The integer factorization problem is to decompose a given integer into its prime factors. For example, 42 has three prime factors, 2, 3, and 7, since \( 2\times 3\times 7 = 42\). As numbers get bigger, integer factorization becomes more difficult to solve, and the hardest instances are those in which the factors are only two different large primes. Thus, given an integer \(N\), finding primes \(p\) and \(q\) such that \( N = p \times q\) is known as integer splitting.

Factoring integers is like cutting wood, and the specific task of splitting integers is analogous to using an axe for splitting the log in two parts. There exist many different tools (algorithms) for accomplishing each task.


Common algorithms for integer factorization include trial division, Pollard’s Rho method, and the elliptic curve method. Fermat’s method and the quadratic and rational sieves lead to the (general) number field sieve (NFS) algorithm for integer splitting. The latter relies on finding a congruence of squares, that is, writing \(N\) as a difference of squares such that $$ N = x^2 – y^2 = (x+y)\times(x-y) $$ The complexity of NFS is mainly tied to the number of pairs \((x, y)\) that must be examined before finding a pair that factors \(N\). The NFS algorithm has subexponential complexity in the size of \(N\), meaning that the time required for splitting an integer increases significantly as the size of \(N\) grows. For large integers, the problem becomes intractable for classical computers.
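
As a tiny illustration of the difference-of-squares idea, here is Fermat’s method in Python (a toy sketch; the example number is chosen so that its two prime factors are close):

from math import isqrt

def fermat_split(N):
    # Fermat's method: search for x such that x^2 - N is a perfect square,
    # then N = (x - y)(x + y). Works well when the two factors are close.
    x = isqrt(N)
    if x * x < N:
        x += 1
    while True:
        y2 = x * x - N
        y = isqrt(y2)
        if y * y == y2:
            return x - y, x + y
        x += 1

print(fermat_split(5959))   # (59, 101)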

The Axe of Thor Shor

Olaf Tryggvason – Public Domain

The many different guesses of the NFS algorithm are analogous to hitting the log with a dulled axe; after subexponentially many tries, the log is split in half. However, using a sharper axe allows you to split the log faster. This sharpened axe is the quantum algorithm proposed by Shor in 1994.

Let \(x\) be an integer less than \(N\) with multiplicative order \(k\) modulo \(N\). Then, if \(k\) is even, there exists an integer \(q\) so that \(qN\) can be factored as follows.

$$ qN = x^k - 1 = (x^{k/2} - 1)\times(x^{k/2} + 1) $$

This approach has some issues: the factorization could correspond to \(q\), not \(N\), and the order of \(x\) is not known in advance. This is where Shor’s algorithm enters the picture, finding the order of \(x\).

The internals of Shor’s algorithm rely on encoding the order \(k\) into a periodic function, so that its period can be obtained using the quantum version of the Fourier transform (QFT). The order of \(x\) can be found using a polynomial number of quantum evaluations of Shor’s algorithm. Therefore, splitting integers using this quantum approach has polynomial complexity in the size of \(N\).
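
Only the order-finding step needs a quantum computer; the surrounding reduction is classical. Here is a sketch of that reduction in Python, with the order computed by brute force for a toy \(N\) (purely illustrative):

from math import gcd

def order(x, N):
    # Brute-force multiplicative order of x mod N; this is the step that
    # Shor's algorithm performs efficiently with the quantum Fourier transform.
    k, y = 1, x % N
    while y != 1:
        y = (y * x) % N
        k += 1
    return k

def split(N, x):
    # Classical reduction from order finding to integer splitting.
    g = gcd(x, N)
    if g != 1:
        return g, N // g                  # x already shares a factor with N
    k = order(x, N)
    if k % 2 == 1:
        return None                       # odd order: try another x
    y = pow(x, k // 2, N)
    if y == N - 1:
        return None                       # trivial square root: try another x
    p = gcd(y - 1, N)
    return (p, N // p) if 1 < p < N else None

print(split(15, 7))   # (3, 5)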

Shor’s algorithm carries strong implications on the security of the RSA encryption scheme because its security relies on integer factorization. A large-enough quantum computer can efficiently break RSA for current instances.

Alternatively, one may resort to elliptic curves, used in cryptographic protocols like ECDSA or ECDH. Moreover, TLS ciphersuites use a combination of elliptic curve groups, large prime groups, and RSA and DSA signatures. Unfortunately, these algorithms all succumb to Shor’s algorithm: it only takes a few modifications for Shor’s algorithm to solve the discrete logarithm problem in finite groups. This sounds like a catastrophic story where all of our encrypted data and privacy are no longer secure with the advent of a quantum computer, and in some sense this is true.

On one hand, it is a fact that the quantum computers constructed as of 2019 are not large enough to run, for instance, Shor’s algorithm for the RSA key sizes used in standard protocols. For example, a 2018 report shows experiments on the factorization of a 19-bit number using 94 qubits, and estimates that 147456 qubits would be needed to factor a 768-bit number. Hence, these numbers indicate that we are still far from breaking RSA.

What if we increment RSA key sizes to be resistant to quantum algorithms, just like for symmetric algorithms?

Bernstein et al. estimated that RSA public keys would need to be as large as 1 terabyte to keep RSA secure in the presence of quantum factoring algorithms. So, for public-key algorithms, increasing the size of keys does not help.

A recent investigation by Gidney and Ekerå shows improvements that accelerate quantum factorization. In their report, the cost of factoring 2048-bit integers is estimated to take a few hours using a quantum machine of 20 million qubits, which is far from any current development. Worth noting is that this number of qubits is two orders of magnitude smaller than the estimates given in previous works developed in this decade. Under these estimates, current encryption algorithms will remain secure for several more years; however, consider the following not-so-unrealistic situation.

Information encrypted today with, for example, RSA can be easily decrypted with a quantum computer in the future. Now, suppose that someone records encrypted information and stores it until a quantum computer is able to decrypt the ciphertexts. Although this could be as far as 20 years from now, the forward-secrecy principle is violated. A 20-year gap into the future is sometimes difficult to imagine, so let’s think backwards: what would happen if everything you did on the Internet at the end of the 1990s could be revealed 20 years later — today? How does this impact the security of your personal information? What if the ciphertexts were company secrets or business deals? In 1999, most of us were concerned about the effects of the Y2K problem; now we’re facing Y2Q (years to quantum): the advent of quantum computers.

Post-Quantum Cryptography

Although the current capacity of physical implementations of quantum computers is far from being a real threat to secure communications, a transition to stronger problems for protecting information has already started. This wave emerged as post-quantum cryptography (PQC). The core idea of PQC is to base cryptosystems on problems hard enough that no quantum (or classical) algorithm can solve them efficiently.

A recurrent question is: what does a problem that even a quantum computer cannot solve look like?

These so-called quantum-resistant algorithms rely on different hard mathematical assumptions, some of them as old as RSA, others proposed more recently. For example, the McEliece cryptosystem, formulated in the late 70s, relies on the hardness of decoding a linear code (in the sense of coding theory). Its practical use never became widespread, since over time other cryptosystems superseded it in efficiency. Fortunately, the McEliece cryptosystem remains immune to Shor’s algorithm, which gives it renewed relevance in the post-quantum era.

Post-quantum cryptography presents alternatives:

  1. Lattice-based Cryptography
  2. Hash-based Cryptography
  3. Isogeny-based Cryptography
  4. Code-based Cryptography
  5. Multivariate-based Cryptography


In 2017, NIST started an evaluation process that tracks possible alternatives for next-generation secure algorithms. From a practical perspective, all candidates present different trade-offs in implementation and usage. The time and space requirements are diverse; at this moment, it’s too early to say which will succeed RSA and elliptic curves. An initial round collected 70 algorithms for deploying key encapsulation mechanisms and digital signatures. As of early 2019, 26 of these survive and are currently in the analysis, investigation, and experimentation phase.

Cloudflare’s mission is to help build a better Internet. As a proactive action, our cryptography team is preparing experiments on the deployment of post-quantum algorithms at Cloudflare scale. Watch our blog post for more details.

Towards Post-Quantum Cryptography in TLS

Post Syndicated from Kris Kwiatkowski original https://blog.cloudflare.com/towards-post-quantum-cryptography-in-tls/

Towards Post-Quantum Cryptography in TLS


We live in a completely connected society. A society connected by a variety of devices: laptops, mobile phones, wearables, self-driving or self-flying things. We have standards for a common language that allows these devices to communicate with each other. This is critical for wide-scale deployment – especially in cryptography where the smallest detail has great importance.

One of the most important standards-setting organizations is the National Institute of Standards and Technology (NIST), which is hugely influential in determining which standardized cryptographic systems see worldwide adoption. At the end of 2016, NIST announced it would hold a multi-year open project with the goal of standardizing new post-quantum (PQ) cryptographic algorithms secure against both quantum and classical computers.

Many of our devices have very different requirements and capabilities, so it may not be possible to select a “one-size-fits-all” algorithm during the process. NIST mathematician Dustin Moody indicated that the institute will likely select more than one algorithm:

“There are several systems in use that could be broken by a quantum computer – public-key encryption and digital signatures, to take two examples – and we will need different solutions for each of those systems.”

Initially, NIST selected 82 candidates for further consideration from all submitted algorithms. At the beginning of 2019, this process entered its second stage. Today, there are 26 algorithms still in contention.

Post-quantum cryptography: what is it really and why do I need it?

In 1994, Peter Shor made a significant discovery in quantum computation. He found an algorithm for integer factorization and computing discrete logarithms, both believed to be hard to solve in classical settings. Since then it has become clear that the ‘hard problems’ on which cryptosystems like RSA and elliptic curve cryptography (ECC) rely – integer factoring and computing discrete logarithms, respectively – are efficiently solvable with quantum computing.

A quantum computer can help to solve some of the problems that are intractable on a classical computer. In theory, it could efficiently solve some fundamental problems in mathematics. This amazing computing power would be highly beneficial, which is why companies are actually trying to build quantum computers. At first, Shor’s algorithm was merely a theoretical result – quantum computers powerful enough to execute it did not exist – but this is quickly changing. In March 2018, Google announced a 72-qubit universal quantum computer. While this is not enough to break, say, RSA-2048 (many more qubits are needed), many fundamental engineering problems have already been solved.

In anticipation of widespread quantum computing, we must start the transition from classical public-key cryptography primitives to post-quantum (PQ) alternatives. It may be that consumers will never get to hold a quantum computer, but a few powerful attackers who do get one can still pose a serious threat. Moreover, under the assumption that current TLS handshakes and ciphertexts are being captured and stored, a future attacker could crack the stored individual session keys and use those results to decrypt the corresponding ciphertexts. Even strong security guarantees, like forward secrecy, do not help much there.

In 2006, the academic research community launched a conference series dedicated to finding alternatives to RSA and ECC. This so-called post-quantum cryptography should run efficiently on a classical computer, but it should also be secure against attacks performed by a quantum computer. As a research field, it has grown substantially in popularity.

Several companies, including Google, Microsoft, Digicert and Thales, are already testing the impact of deploying PQ cryptography. Cloudflare is involved in some of this, but we want to be a company that leads in this direction. The first thing we need to do is understand the real costs of deploying PQ cryptography, and that’s not obvious at all.

What options do we have?

Many submissions to the NIST project are still under study. Some are very new and little understood; others are more mature and already standardized as RFCs. Some have been broken or withdrawn from the process; others are more conservative or illustrate how far classical cryptography would need to be pushed so that a quantum computer could not crack it within a reasonable cost. Some are very slow and big; others are not. But most cryptographic schemes can be categorized into these families: lattice-based, multivariate, hash-based (signatures only), code-based and isogeny-based.

For some algorithms, nevertheless, there is a fear they may be too inconvenient to use with today’s Internet. We must also be able to integrate new cryptographic schemes with existing protocols, such as SSH or TLS. To do that, designers of PQ cryptosystems must consider these characteristics:

  • Latency caused by encryption and decryption on both ends of the communication channel, assuming a variety of devices from big and fast servers to slow and memory constrained IoT (Internet of Things) devices
  • Small public keys and signatures to minimize bandwidth
  • Clear design that allows cryptanalysis and determining weaknesses that could be exploited
  • Use of existing hardware for fast implementation

The work on post-quantum public key cryptosystems must be done in a full view of organizations, governments, cryptographers, and the public. Emerging ideas must be properly vetted by this community to ensure widespread support.

Helping Build a Better Internet


To better understand the post-quantum world, Cloudflare began experimenting with these algorithms and used them to provide confidentiality in TLS connections.

With Google, we are proposing a wide-scale experiment that combines client- and server-side data collection to evaluate the performance of key-exchange algorithms on actual users’ devices. We hope that this experiment helps choose an algorithm with the best characteristics for the future of the Internet. With Cloudflare’s highly distributed network of access points and Google’s Chrome browser, both companies are in a very good position to perform this experiment.

Our goal is to understand how these algorithms act when used by real clients over real networks, particularly candidate algorithms with significant differences in public-key or ciphertext sizes. Our focus is on how different key sizes affect handshake time in the context of Transport Layer Security (TLS) as used on the web over HTTPS.

Our primary candidates are an NTRU-based construction called HRSS-SXY (by Hülsing – Rijneveld – Schanck – Schwabe, and Tsunekazu Saito – Keita Xagawa – Takashi Yamakawa) and an isogeny-based Supersingular Isogeny Key Encapsulation (SIKE). Both algorithms are described in more detail below in the section “Dive into post-quantum cryptography”. The following table shows a few characteristics of both algorithms. Performance timings were obtained by running the BoringSSL speed test on an Intel Skylake CPU.

KEM       | Public key size (bytes) | Ciphertext (bytes) | Secret size (bytes) | KeyGen (op/sec) | Encaps (op/sec) | Decaps (op/sec) | NIST level
HRSS-SXY  | 1138                    | 1138               | 32                  | 3952.3          | 76034.7         | 21905.8         | 1
SIKE/p434 | 330                     | 346                | 16                  | 367.1           | 228.0           | 209.3           | 1

Currently, the most commonly used key exchange algorithm (according to Cloudflare’s data) is the (non-post-quantum) X25519. Its public keys are 32 bytes, and BoringSSL can generate 49301.2 key pairs and perform 19628.6 key agreements every second on the same Skylake CPU.

Note that HRSS-SXY shows a significant speed advantage, while SIKE has a size advantage. In our experiment, we will deploy these two algorithms on both the server side using Cloudflare’s infrastructure, and the client side using Chrome Canary; both sides will collect telemetry information about TLS handshakes using these two PQ algorithms to see how they perform in practice.

What do we expect to find?

In 2018, Adam Langley conducted an experiment with the goal of evaluating the likely latency impact of a post-quantum key exchange in TLS. Chrome was augmented with the ability to include a dummy, arbitrarily-sized extension in the TLS ClientHello (a fixed number of bytes of random noise). After taking into account the performance and key sizes offered by different types of key-exchange schemes, he concluded that constructs based on structured lattices may be most suitable for future use in TLS.

However, Langley also observed a peculiar phenomenon; client connections measured at 95th percentile had much higher latency than the median. It means that in those cases, isogeny-based systems may be a better choice. In the “Dive into post-quantum cryptography”, we describe the difference between isogeny-based SIKE and lattice-based NTRU cryptosystems.

In our experiment, we want to more thoroughly evaluate and ascribe root causes to these unexpected latency increases. We would particularly like to learn more about the characteristics of those networks: What causes the increased latency? How does the performance cost of isogeny-based algorithms impact the TLS handshake? We want to answer key questions, like:

  • What is a good ratio for speed-to-key size (or how much faster could SIKE get to achieve the client-perceived performance of HRSS)?
  • How do network middleboxes behave when clients use new PQ algorithms, and which networks have problematic middleboxes?
  • How do the different properties of client networks affect TLS performance with different PQ key exchanges? Can we identify specific autonomous systems, device configurations, or network configurations that favor one algorithm over another? How is performance affected in the long tail?

Experiment Design

Our experiment will involve both server- and client-side performance statistics collection from real users around the world (all the data is anonymized). Cloudflare is operating the server-side TLS connections. We will enable the CECPQ2 (HRSS + X25519) and CECPQ2b (SIKE + X25519) key-agreement algorithms on all TLS-terminating edge servers.

In this experiment, the ClientHello will contain a CECPQ2 or CECPQ2b public key (but never both). Additionally, Chrome will always include X25519 for servers that do not support post-quantum key exchange. The post-quantum key exchange will only be negotiated in TLS version 1.3 when both sides support it.

Since Cloudflare only measures the server side of the connection, it is impossible to determine the time it takes for a ClientHello sent from Chrome to reach Cloudflare’s edge servers; however, we can measure the time it takes for the TLS ServerHello message containing the post-quantum key exchange to reach the client and for the client to respond.

On the client side, Chrome Canary will operate the TLS connection. Google will enable either CECPQ2 or CECPQ2b in Chrome for the following mix of architecture and OSes:

  • x86-64: Windows, Linux, macOS, ChromeOS
  • aarch64: Android

Our high-level expectation is to get similar results as Langley’s original experiment in 2018 — slightly increased latency for the 50th percentile and higher latency for the 95th. Unfortunately, data collected purely from real users’ connections may not suffice for diagnosing the root causes of why some clients experience excessive slowdown. To this end, we will perform follow-up experiments based on per-client information we collect server-side.

Our primary hypothesis is that excessive slowdowns, like those Langley observed, are largely due to in-network events, such as middleboxes or bloated/lossy links. As a first-pass analysis, we will investigate whether the slowed-down clients share common network features, like common ASes, common transit networks, common link types, and so on. To determine this, we will run a traceroute from vantage points close to our servers back toward the clients (not overloading any particular links or hosts) and study whether some client locations are subject to slowdowns for all destinations or just for some.

Dive into post-quantum cryptography

Be warned: the details of PQ cryptography may be quite complicated. In some cases it builds on classical cryptography, and in other cases it is completely different math. It would be rather hard to describe details in a single blog post. Instead, we are giving you an intuition of post-quantum cryptography, rather than provide deep academic-level descriptions. We’re skipping a lot of details for the sake of brevity. Nevertheless, settle in for a bit of an epic journey because we have a lot to cover.

Key encapsulation mechanism

NIST requires that all key-agreement algorithms have a form of key-encapsulation mechanism (KEM). The KEM is a simplified form of public-key encryption (PKE). Like PKE, it also allows agreement on a secret, but in a slightly different way: the session key is an output of the encryption algorithm, conversely to public-key encryption schemes where the session key is an input to the algorithm. In a KEM, Alice generates a random key and uses the pre-generated public key from Bob to encrypt (encapsulate) it. This results in a ciphertext sent to Bob. Bob uses his private key to decrypt (decapsulate) the ciphertext and retrieve the random key. The idea was initially introduced by Cramer and Shoup. Experience shows that such constructs are easier to design, analyze, and implement, as the scheme is limited to communicating a fixed-size session key. Leonardo Da Vinci said, “Simplicity is the ultimate sophistication,” which is very true in cryptography.

The key exchange (KEX) protocol, like Diffie-Hellman, is yet a different construct: it allows two parties to agree on a shared secret that can be used as a symmetric encryption key. For example, Alice generates a key pair and sends a public key to Bob. Bob does the same and uses his own key pair with Alice’s public key to generate the shared secret. He then sends his public key to Alice who can now generate the same shared secret. What’s worth noticing is that both Alice and Bob perform exactly the same operations.

KEM construction can be converted to KEX. Alice performs key generation and sends the public key to Bob. Bob uses it to encapsulate a symmetric session key and sends it back to Alice. Alice decapsulates the ciphertext received from Bob and gets the symmetric key. This is actually what we do in our experiment to make integration with the TLS protocol less complicated.
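
Here is a minimal sketch of that KEM-as-key-exchange flow in Python. The keygen/encapsulate/decapsulate functions below are a deliberately insecure toy placeholder, only to show which messages travel in which direction; a real deployment would plug in an actual KEM such as HRSS-SXY or SIKE:

import os

KEY_LEN = 32

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Toy, INSECURE placeholder "KEM": it exists only to show the message flow.
def keygen():
    sk = os.urandom(KEY_LEN)
    return sk, sk                        # a real KEM derives pk from sk one-way

def encapsulate(pk):
    ss = os.urandom(KEY_LEN)             # random session key chosen by the sender
    return xor(ss, pk), ss               # ciphertext, shared secret

def decapsulate(ct, sk):
    return xor(ct, sk)                   # recover the session key

# KEM used as a key exchange, as described above
alice_pk, alice_sk = keygen()            # Alice -> Bob: alice_pk
ct, bob_ss = encapsulate(alice_pk)       # Bob -> Alice: ct
alice_ss = decapsulate(ct, alice_sk)     # both now share the same session key
assert alice_ss == bob_ss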

NTRU Lattice-based Encryption  

We will enable the CECPQ2 implemented by Adam Langley from Google on our servers. He described this implementation in detail here. This key exchange uses the HRSS algorithm, which is based on the NTRU (N-Th Degree TRUncated Polynomial Ring) algorithm. Foregoing too much detail, I am going to explain how NTRU works and give simplified examples, and finally, compare it to HRSS.


NTRU is a cryptosystem based on a polynomial ring. This means that we do not operate on numbers modulo a prime (like in RSA), but on polynomials of degree \( N \) , where the degree of a polynomial is the highest exponent of its variable. For example, \(x^7 + 6x^3 + 11x^2 \) has degree of 7.

One can add polynomials in the ring in the usual way, by simply adding their coefficients modulo some integer. In NTRU this integer is called \( q \). Polynomials can also be multiplied, but remember, you are operating in the ring, therefore the result of a multiplication is always a polynomial of degree less than \(N\). It basically means that the exponents of the resulting polynomial are reduced modulo \(N\).


In other words, polynomial ring arithmetic is very similar to modular arithmetic, but instead of working with a set of numbers less than N, you are working with a set of polynomials with a degree less than N.
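
A small sketch of multiplication in the ring \(Z_q[x]/(x^N - 1)\), i.e. cyclic convolution of coefficient vectors, using toy parameters chosen only for illustration:

def ring_mul(a, b, N, q):
    # Multiply two polynomials (coefficient lists of length N) in Z_q[x]/(x^N - 1):
    # ordinary multiplication, with exponents wrapped modulo N.
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % N] = (c[(i + j) % N] + ai * bj) % q
    return c

# Toy parameters, far too small to be secure
N, q = 5, 32
f = [1, 0, 31, 0, 1]     # 1 - x^2 + x^4   (31 represents -1 mod 32)
g = [0, 1, 1, 0, 0]      # x + x^2
print(ring_mul(f, g, N, q))   # [1, 2, 1, 31, 31]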

To instantiate the NTRU cryptosystem, three domain parameters must be chosen:

  • \(N\) – degree of the polynomial ring, in NTRU the principal objects are polynomials of degree \(N-1\).
  • \(p\) – small modulus used during key generation and decryption for reducing message coefficients.
  • \(q\) – large modulus used during algorithm execution for reducing coefficients of the polynomials.

First, we generate a pair of public and private keys. To do that, two polynomials \(f\) and \(g\) are chosen from the ring in a way that their randomly generated coefficients are much smaller than \(q\). Then key generation computes two inverses of the polynomial: $$ f_p= f^{-1} \bmod{p}   \\  f_q= f^{-1} \bmod{q} $$

The last step is to compute $$ pk = p\cdot f_q\cdot g \bmod q $$, which we will use as public key pk. The private key consists of \(f\) and \(f_p\). The \(f_q\) is not part of any key, however it must remain secret.

It might be the case that after choosing \(f\), the inverses modulo \(p\) and \( q \) do not exist. In this case, the algorithm has to start from the beginning and generate another \(f\). That’s unfortunate because calculating the inverse of a polynomial is a costly operation. HRSS brings an improvement to this issue since it ensures that those inverses always exist, making key generation faster than as proposed initially in NTRU.

The encryption of a message \(m\) proceeds as follows. First, the message \(m\) is converted to a ring element \(pt\) (there exists an algorithm for performing this conversion in both directions). During encryption, NTRU randomly chooses one polynomial \(b\) called the blinder. The goal of the blinder is to produce different ciphertexts per encryption. Thus, the ciphertext \(ct\) is obtained as $$ ct = (b\cdot pk + pt ) \bmod q $$ Decryption looks a bit more complicated but it can also be easily understood. Decryption uses both secret values \(f\) and \(f_p\) to recover the plaintext as $$ v =  f \cdot ct \bmod q \\ pt = v \cdot f_p \bmod p $$

This diagram demonstrates why and how decryption works.

Step-by-step correctness of decryption procedure.
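
Written out, and assuming the coefficients of \( p\cdot b\cdot g + f\cdot pt \) stay small enough that no wrap-around modulo \(q\) occurs, the correctness argument the diagram illustrates is:

$$ v = f \cdot ct = f\cdot b\cdot p\cdot f_q\cdot g + f\cdot pt \equiv p\cdot b\cdot g + f\cdot pt \pmod{q} $$
$$ v \cdot f_p \equiv p\cdot b\cdot g\cdot f_p + f\cdot f_p\cdot pt \equiv pt \pmod{p} $$

using \( f\cdot f_q \equiv 1 \bmod q \) and \( f\cdot f_p \equiv 1 \bmod p \).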

After obtaining \(pt\), the message \(m\) is recovered by inverting the conversion function.

The underlying hard assumption is that given two polynomials: \(f\) and \(g\) whose coefficients are short compared to the modulus \(q\), it is difficult to distinguish \(pk = \frac{f}{g} \) from a random element in the ring. It means that it’s hard to find \(f\) and \(g\) given only public key pk.

Lattices

The NTRU cryptosystem is a grandfather of lattice-based encryption schemes. The idea of using hard lattice problems for cryptographic purposes is due to Ajtai. His work evolved into a whole area of research with the goal of creating more practical, lattice-based cryptosystems.

What is a lattice, and why can it be used for post-quantum crypto?

The picture below visualizes lattice as points in a two-dimensional space. A lattice is defined by the origin \(O\) and base vectors \( \{ b_1 , b_2\} \). Every point on the lattice is represented as a linear combination of the base vectors, for example  \(V = -2b_1+b_2\).


There are two classical NP-hard problems in lattice-based cryptography:

  1. Shortest Vector Problem (SVP): Given a lattice, to find the shortest non-zero vector in the lattice. In the graph, the vector \(s\) is the shortest one. The SVP problem is NP-hard only under some assumptions.
  2. Closest Vector Problem (CVP). Given a lattice and a vector \(V\) (not necessarily in the lattice), to find the closest vector to \(V\). For example, the closest vector to \(t\) is \(z\).

In the graph above, it is easy for us to solve SVP and CVP by simple inspection. However, the lattices used in cryptography have higher dimensions, say above 1000, as well as highly non-orthogonal basis vectors. On these instances, the problems get extremely hard to solve. It’s even believed future quantum computers will have it tough.

NTRU vs HRSS

HRSS, which we use in our experiment, is based on NTRU, but a slightly better instantiation. The main improvements are:

  • Faster key generation algorithm.
  • NTRU encryption can produce ciphertexts that are impossible to decrypt (true for many lattice-based schemes). But HRSS fixes this problem.
  • HRSS is a key encapsulation mechanism.

CECPQ2b – Isogeny-based Post-Quantum TLS

Following CECPQ2, we have integrated into BoringSSL another hybrid key exchange mechanism relying on SIKE. It is called CECPQ2b, and we will use it in our experimentation in TLS 1.3. SIKE is a key encapsulation method based on Supersingular Isogeny Diffie-Hellman (SIDH). Read more about SIDH in our previous post. The math behind SIDH is related to elliptic curves; a comparison between SIDH and classical Elliptic Curve Diffie-Hellman (ECDH) is given below.

An elliptic curve is a set of points that satisfy a specific mathematical equation. The equation of an elliptic curve may have multiple forms, the standard form is called the Weierstrass equation $$ y^2 = x^3 +ax +b  $$ and its shape can look like the red curve.


An interesting fact about elliptic curves is that they have a group structure. That is, the set of points on the curve has an associated binary operation called point addition, and the set is closed under it: adding two points results in another point that is also on the elliptic curve.


If we can add two different points on a curve, then we can also add one point to itself. If we do this multiple times, the resulting operation is known as scalar multiplication and denoted as \(Q = k\cdot P = P+P+\dots+P\) for an integer \(k\).

Multiplication of scalars is commutative, meaning that two scalar multiplications can be evaluated in any order: \( \color{darkred}{k_a}\cdot\color{darkgreen}{k_b} =   \color{darkgreen}{k_b}\cdot\color{darkred}{k_a} \); this is an important property that makes ECDH possible.

It turns out that if the elliptic curve is chosen carefully, scalar multiplication is easy to compute but extremely hard to reverse. That is, given two points \(Q\) and \(P\) such that \(Q=k\cdot P\), finding the integer \(k\) is a difficult task known as the Elliptic Curve Discrete Logarithm Problem (ECDLP). This problem is suitable for cryptographic purposes.

Alice and Bob agree on a secret key as follows. Alice generates a private key \( k_a\). Then, she uses some publicly known point \(P\) and calculates her public key as \( Q_a = k_a\cdot P\). Bob proceeds in similar fashion and gets \(k_b\) and \(Q_b = k_b\cdot P\). To agree on a shared secret, each party multiplies their private key with the public key of the other party. The result of this is the shared secret. Key agreement as described above, works thanks to the fact that scalars can commute:
$$  \color{darkgreen}{k_a} \cdot Q_b = \color{darkgreen}{k_a} \cdot  \color{darkred}{k_b} \cdot P \iff \color{darkred}{k_b} \cdot \color{darkgreen}{k_a} \cdot P = \color{darkred}{k_b} \cdot Q_a $$
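
For reference, here is a toy ECDH implementation in Python over a deliberately tiny curve (the curve, base point, and scalars are arbitrary illustrative choices; real deployments use standardized curves such as Curve25519 or P-256):

# Toy ECDH over the tiny curve y^2 = x^3 + 2x + 3 mod 97 (illustration only).
P_MOD, A = 97, 2
G = (3, 6)                                # a base point on the curve

def inv(x):
    return pow(x, P_MOD - 2, P_MOD)       # modular inverse (P_MOD is prime)

def add(P, Q):
    if P is None: return Q                # None plays the role of the point at infinity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if P == Q:
        s = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD
    else:
        s = (y2 - y1) * inv(x2 - x1) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def mul(k, P):                            # scalar multiplication by double-and-add
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

ka, kb = 2, 3                             # Alice's and Bob's private scalars
Qa, Qb = mul(ka, G), mul(kb, G)           # exchanged public keys
assert mul(ka, Qb) == mul(kb, Qa)         # both sides derive the same shared point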

There is a vast theory behind elliptic curves. An introduction to elliptic curve cryptography was posted before, and more details can be found in this book. Now, let’s describe SIDH and compare it with ECDH.

Isogenies on Elliptic Curves

Before explaining the details of SIDH key exchange, I’ll explain the 3 most important concepts, namely: j-invariant, isogeny and its kernel.

Each curve has a number that can be associated to it. Let’s call this number a j-invariant. This number is not unique per curve, meaning many curves have the same value of j-invariant, but it can be viewed as a way to group multiple elliptic curves into disjoint sets. We say that two curves are isomorphic if they are in the same set, called the isomorphism class. The j-invariant is a simple criterion to determine whether two curves are isomorphic. The j-invariant of a curve \(E\) in Weierstrass form \( y^2 = x^3 + ax + b\) is given as $$ j(E) = 1728\frac{4a^3}{4a^3 +27b^2} $$
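
Computing the j-invariant over a prime field is a one-liner given the formula above; a small sketch in Python, reusing the toy curve from the ECDH example (a non-singular curve is assumed):

def j_invariant(a, b, p):
    # j-invariant of y^2 = x^3 + ax + b over F_p (assumes a non-singular curve).
    num = (1728 * 4 * pow(a, 3, p)) % p
    den = (4 * pow(a, 3, p) + 27 * pow(b, 2, p)) % p
    return (num * pow(den, p - 2, p)) % p      # divide via the modular inverse

print(j_invariant(2, 3, 97))   # 36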

When it comes to isogeny, think about it as a map between two curves. Each point on some curve \( E \) is mapped by isogeny to the point on isogenous curve \( E’ \). We denote mapping from curve \( E \) to \( E’ \) by isogeny \( \phi \) as:

$$\phi: E \rightarrow E’ $$

Whether those two curves are isomorphic or not depends on the map. An isogeny can be visualised as:


There may exist many such mappings; each curve used in SIDH has a small number of isogenies to other curves. A natural question is how to compute such an isogeny, and this is where the kernel of an isogeny comes in. The kernel uniquely determines an isogeny (up to isomorphism). Formulas for calculating an isogeny from its kernel were initially given by J. Vélu, and the idea of calculating them efficiently was later extended.

To finish, I will summarize what was said above with a picture.


There are two isomorphism classes in the picture above. Both curves \(E_1\) and \(E_2\) are isomorphic and have j-invariant = 6. As curves \(E_3\) and \(E_4\) have j-invariant = 13, they are in a different isomorphism class. There exists an isogeny \(\phi_2\) between curves \(E_3\) and \(E_2\), so they are isogenous. Curves \( E_1 \) and \( E_2 \) are isomorphic, and there is an isogeny \( \phi_1 \) between them. Curves \( E_1\) and \(E_4\) are not isomorphic.

For brevity I’m skipping many important details, like details of the finite field, the fact that isogenies must be separable and that the kernel is finite. But curious readers can find a number of academic research papers available on the Internet.

Big picture: similarities with ECDH

Let’s generalize the ECDH algorithm described above, so that we can swap some elements and try to use Supersingular Isogeny Diffie-Hellman.

Note that what actually happens during an ECDH key exchange is:

  • We have a set of points on an elliptic curve, set S
  • We have another group of integers used for point multiplication, G
  • We use an element from G to act on an element from S to get another element from S:

$$ G \cdot S \rightarrow S $$

Now the question is: what is our G and S in an SIDH setting? For SIDH to work, we need a big set of elements and something secret that will act on the elements from that set. This “group action” must also be resistant to attacks performed by quantum computers.

In the SIDH setting, those two sets are defined as:

  • Set S is a set (graph) of j-invariants, such that all the curves are supersingular: \( S = [j(E_1), j(E_2), j(E_3), …. , j(E_n)]\)
  • Set G is a set of isogenies acting on elliptic curves and transforming, for example, the elliptic curve \(E_1\) into \(E_n\).

Random walk on supersingular graph

When we talk about Isogeny Based Cryptography, as a topic distinct from Elliptic Curve Cryptography, we usually mean algorithms and protocols that rely fundamentally on the structure of isogeny graphs. An example of such a (small) graph is pictured below.

Animation based on Chloe Martindale’s slide deck

Each vertex of the graph represents a different j-invariant of a set of supersingular curves. The edges between vertices represent isogenies converting one elliptic curve to another. As you can notice, the graph is strongly connected, meaning every vertex can be reached from every other vertex. In the context of isogeny-based crypto, we call such a graph a supersingular isogeny graph. I’ll skip some technical details about the construction of this graph (look for those here or here), but instead describe ideas about how it can be used.

As the graph is strongly connected, it is possible to walk a whole graph by starting from any vertex, randomly choosing an edge, following it to the next vertex and then start the process again on a new vertex. Such a way of visiting edges of this graph is called a random walk.

The random walk is a key concept that makes isogeny-based crypto feasible. When you look closely at the graph, you can notice that each vertex has a small number of edges incident to it, which is why we can compute the isogenies efficiently. But it also means that for any vertex there is only a limited number of isogenies to choose from, which doesn’t look like a good base for a cryptographic scheme. So where exactly does the security of the scheme come from? To get it, it is necessary to visit a couple of hundred vertices. What this means in practice is that the secret isogeny (of large degree) is constructed as a composition of multiple isogenies (of small, prime degree). In other words, the secret isogeny is:

$$ \phi = \phi_n \circ \phi_{n-1} \circ \dots \circ \phi_1 $$

This property, together with the properties of the isogeny graph, is what makes some of us believe the scheme has a good chance of being secure. More specifically, there is no efficient way of finding a path that connects \( E_0 \) with \( E_n \), even with a quantum computer at hand. The security level of the system depends on the value n – the number of steps taken during the walk.

The random walk is a core process used both when generating public keys and when computing shared secrets. It starts with a party generating a random value m (see more below), a starting curve \(E_0\), and points P and Q on this curve. Those values are used to compute the kernel of an isogeny \( R_1 \) in the following way:

$$ R_1 = P + m \cdot Q $$

Thanks to formulas given by Vélu, we can now use the point \( R_1 \) to compute the isogeny along which the party will move from one vertex to another. After the isogeny \( \phi_{R_1} \) is calculated, it is applied to \( E_0 \), which results in a new curve \( E_1 \):

$$ \phi_{R_1}: E_0 \rightarrow E_1 $$

The isogeny is also applied to the points P and Q. Once on \( E_1 \), the process is repeated. This process is applied n times, and at the end the party ends up on some curve \( E_n \), which defines an isomorphism class and therefore also a j-invariant.

Supersingular Isogeny Diffie-Hellman

The core idea in SIDH is to compose two random walks on an isogeny graph of elliptic curves in such a way that both compositions end at the same node.

In order to do this, the scheme sets public parameters – a starting curve \( E_0 \) and 2 pairs of base points on this curve, \( (PA,QA) \) and \( (PB,QB) \). Alice generates her random secret key m and calculates a secret isogeny \( \phi_a \) by performing a random walk as described above. The walk finishes with 3 values: the elliptic curve \( E_a \) she has ended up on and the pair of points \( \phi_a(PB) \) and \( \phi_a(QB) \) obtained by pushing PB and QB through Alice’s secret isogeny. Bob proceeds analogously, which results in the triple \( {E_b, \phi_b(PA), \phi_b(QA)} \). The triple forms a public key which is exchanged between the parties.

The picture below visualizes the operation. The black dots represent curves grouped in the same isomorphism classes represented by light blue circles. Alice takes the orange path ending up on a curve \( E_a \) in a separate isomorphism class than Bob after taking his dark blue path ending on \( E_b \). SIDH is parametrized in a way that Alice and Bob will always end up in different isomorphism classes.


Upon receipt of the triple \( { E_a, \phi_a(PB), \phi_a(QB) } \) from Alice, Bob will use his secret value m to calculate a new kernel – but instead of using the points \(PB\) and \(QB\) to calculate an isogeny kernel, he will now use the images \( \phi_a(PB) \) and \( \phi_a(QB) \) received from Alice:

$$ R’_1 = \phi_a(PB) + m \cdot \phi_a(QB) $$

Afterwards, he uses \( R’_1 \) to start the walk again, resulting in the isogeny \( \phi’_b: E_a \rightarrow E_{ab} \). Alice proceeds analogously, resulting in the isogeny \(\phi’_a: E_b \rightarrow E_{ba} \). With isogenies calculated this way, both Alice and Bob converge in the same isomorphism class. The math may seem complicated; hopefully the picture below makes it easier to understand.


Bob computes a new isogeny and starts his random walk from \( E_a \) received from Alice. He ends up on some curve \(E_{ba}\). Similarly, Alice calculates a new isogeny, applies it to \( E_b \) received from Bob, and her random walk ends on some curve \(E_{ab}\). Curves \(E_{ab}\) and \(E_{ba}\) are not likely to be the same, but the construction guarantees that they are isomorphic. As mentioned earlier, isomorphic curves have the same value of j-invariant, hence the shared secret is the j-invariant \(j(E_{ab})\).

Coming back to differences between SIDH and ECDH – we can split them into four categories: the elements of the group we are operating on, the cornerstone computation required to agree on a shared secret, the elements representing secret values, and the difficult problem on which the security relies.

Comparison based on Craig Costello’s slide deck.

In ECDH there is a secret key which is an integer scalar, in case of SIDH it is a secret isogeny, which also is generated from an integer scalar. In the case of ECDH one multiplies a point on a curve by a scalar, in the case of SIDH it is a random walk in an isogeny graph. In the case of ECDH, the public key is a point on a curve, in the case of SIDH, the public part is a curve itself and the image of some points after applying isogeny. The shared secret in the case of ECDH is a point on a curve, in the case of SIDH it is a j-invariant.

SIKE: Supersingular Isogeny Key Encapsulation

SIDH could potentially be used as a drop-in replacement of the ECDH protocol. We have actually implemented a proof-of-concept and added it to our implementation of TLS 1.3 in the tls-tris library and described (together with Mozilla) implementation details in this draft. Nevertheless, there is a problem with SIDH – the keys can be used only once. In 2016, a few researchers came up with an active attack on SIDH which works only when public keys are reused. In the context of TLS, it is not a big problem, because for each session a fresh key pair is generated (ephemeral keys), but it may not be true for other applications.

SIKE is an isogeny-based key encapsulation which solves this problem. Bob can generate SIKE keys, upload the public part somewhere on the Internet, and then anybody can use it whenever they want to communicate with Bob securely. SIKE reuses SIDH – internally, both sides of the connection always perform SIDH key generation and SIDH key agreement, and apply some other cryptographic primitives in order to convert SIDH into a KEM. SIKE is implemented in a few variants – each variant corresponds to a security level using 128-, 192- or 256-bit secret keys. A higher security level means a longer running time. More details about SIKE can be found here.

SIKE is also one of the candidates in the NIST post-quantum “competition“.

I’ve skipped many important details to give a brief description of how isogeny based crypto works. If you’re curious and hungry for details, look at either of these Cloudflare meetups, where Deirdre Connolly talked about isogeny-based cryptography or this talk by Chloe Martindale during PQ Crypto School 2017. And if you would like to know more about quantum attacks on this scheme, I highly recommend this work.

Conclusion

Quantum computers that can break meaningful cryptographic parameter settings do not exist, yet. They won’t be built for at least the next few years. Nevertheless, they have already changed the way we look at current cryptographic deployments. There are at least two reasons it’s worth investing in PQ cryptography:

  • It takes a lot of time to build secure cryptography, and we don’t actually know when today’s classical cryptography will be broken. There is a need for a good mathematical base: an initial idea of what may be secure against something that doesn’t exist yet. If you have an idea, you also need a good implementation: constant time, resistant to things like timing and cache side-channels, DFA, DPA, EM, and a bunch of other abbreviations indicating side-channel resistance. There is also deployment time to consider: for example, algorithms based on elliptic curves were introduced in ’85, but only started to really be used in production during the last decade, 20 or so years later. Obviously, the implementation must be blazingly fast! Last, but not least, integration: we need time to develop standards to allow integration of PQ cryptography with protocols like TLS.
  • Even though efficient quantum computers probably won’t exist for another few years, the threat is real. Data encrypted with current cryptographic algorithms can be recorded now with hopes of being broken in the future.

Cloudflare is motivated to help build the Internet of tomorrow with the tools at hand today. Our interest is in cryptographic techniques that can be integrated into existing protocols and widely deployed on the Internet as seamlessly as possible. PQ cryptography, like the rest of cryptography, includes many cryptosystems that can be used for communications in today’s Internet; Alice and Bob need to perform some computation, but they do not need to buy new hardware to do that.

Cloudflare sees great potential in those algorithms and believes that some of them can be used as a safe replacement for classical public-key cryptosystems. Time will tell if we’re justified in this belief!


AWS Online Tech Talks – June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-june-2018/

AWS Online Tech Talks – June 2018

Join us this month to learn about AWS services and solutions. New this month, we have a fireside chat with the GM of Amazon WorkSpaces and our 2nd episode of the “How to re:Invent” series. We’ll also cover best practices, deep dives, use cases and more! Join us and register today!

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

 

Analytics & Big Data

June 18, 2018 | 11:00 AM – 11:45 AM PT – Get Started with Real-Time Streaming Data in Under 5 Minutes – Learn how to use Amazon Kinesis to capture, store, and analyze streaming data in real-time including IoT device data, VPC flow logs, and clickstream data.
June 20, 2018 | 11:00 AM – 11:45 AM PT – Insights For Everyone – Deploying Data across your Organization – Learn how to deploy data at scale using AWS Analytics and QuickSight’s new reader role and usage based pricing.

 

AWS re:Invent
June 13, 2018 | 05:00 PM – 05:30 PM PT – Episode 2: AWS re:Invent Breakout Content Secret Sauce – Hear from one of our own AWS content experts as we dive deep into the re:Invent content strategy and how we maintain a high bar.
Compute

June 25, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Containerized Workloads with Amazon EC2 Spot Instances – Learn how to efficiently deploy containerized workloads and easily manage clusters at any scale at a fraction of the cost with Spot Instances.

June 26, 2018 | 01:00 PM – 01:45 PM PT – Ensuring Your Windows Server Workloads Are Well-Architected – Get the benefits, best practices and tools on running your Microsoft Workloads on AWS leveraging a well-architected approach.

 

Containers
June 25, 2018 | 09:00 AM – 09:45 AM PT – Running Kubernetes on AWS – Learn about the basics of running Kubernetes on AWS, including how to set up masters, networking, and security, and how to add auto-scaling to your cluster.

 

Databases

June 18, 2018 | 01:00 PM – 01:45 PM PT – Oracle to Amazon Aurora Migration, Step by Step – Learn how to migrate your Oracle database to Amazon Aurora.
DevOps

June 20, 2018 | 09:00 AM – 09:45 AM PT – Set Up a CI/CD Pipeline for Deploying Containers Using the AWS Developer Tools – Learn how to set up a CI/CD pipeline for deploying containers using the AWS Developer Tools.

 

Enterprise & Hybrid
June 18, 2018 | 09:00 AM – 09:45 AM PT – De-risking Enterprise Migration with AWS Managed Services – Learn how enterprise customers are de-risking cloud adoption with AWS Managed Services.

June 19, 2018 | 11:00 AM – 11:45 AM PT – Launch AWS Faster using Automated Landing Zones – Learn how the AWS Landing Zone can automate the set up of best practice baselines when setting up new AWS Environments.

June 21, 2018 | 11:00 AM – 11:45 AM PT – Leading Your Team Through a Cloud Transformation – Learn how you can help lead your organization through a cloud transformation.

June 21, 2018 | 01:00 PM – 01:45 PM PT – Enabling New Retail Customer Experiences with Big Data – Learn how AWS can help retailers realize actual value from their big data and deliver on differentiated retail customer experiences.

June 28, 2018 | 01:00 PM – 01:45 PM PT – Fireside Chat: End User Collaboration on AWS – Learn how End User Compute services can help you deliver access to desktops and applications anywhere, anytime, using any device.
IoT

June 27, 2018 | 11:00 AM – 11:45 AM PT – AWS IoT in the Connected Home – Learn how to use AWS IoT to build innovative Connected Home products.

 

Machine Learning

June 19, 2018 | 09:00 AM – 09:45 AM PT – Integrating Amazon SageMaker into your Enterprise – Learn how to integrate Amazon SageMaker and other AWS Services within an Enterprise environment.

June 21, 2018 | 09:00 AM – 09:45 AM PT – Building Text Analytics Applications on AWS using Amazon Comprehend – Learn how you can unlock the value of your unstructured data with NLP-based text analytics.

 

Management Tools

June 20, 2018 | 01:00 PM – 01:45 PM PT – Optimizing Application Performance and Costs with Auto Scaling – Learn how selecting the right scaling option can help optimize application performance and costs.

 

Mobile
June 25, 2018 | 11:00 AM – 11:45 AM PT – Drive User Engagement with Amazon Pinpoint – Learn how Amazon Pinpoint simplifies and streamlines effective user engagement.

 

Security, Identity & Compliance

June 26, 2018 | 09:00 AM – 09:45 AM PT – Understanding AWS Secrets Manager – Learn how AWS Secrets Manager helps you rotate and manage access to secrets centrally.
June 28, 2018 | 09:00 AM – 09:45 AM PT – Using Amazon Inspector to Discover Potential Security Issues – See how Amazon Inspector can be used to discover security issues of your instances.

 

Serverless

June 19, 2018 | 01:00 PM – 01:45 PM PT – Productionize Serverless Application Building and Deployments with AWS SAM – Learn expert tips and techniques for building and deploying serverless applications at scale with AWS SAM.

 

Storage

June 26, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive: Hybrid Cloud Storage with AWS Storage Gateway – Learn how you can reduce your on-premises infrastructure by using the AWS Storage Gateway to connect your applications to the scalable and reliable AWS storage services.
June 27, 2018 | 01:00 PM – 01:45 PM PT – Changing the Game: Extending Compute Capabilities to the Edge – Discover how to change the game for IIoT and edge analytics applications with AWS Snowball Edge plus enhanced Compute instances.
June 28, 2018 | 11:00 AM – 11:45 AM PT – Big Data and Analytics Workloads on Amazon EFS – Get best practices and deployment advice for running big data and analytics workloads on Amazon EFS.

HackSpace magazine 7: Internet of Everything

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/hackspace-magazine-7-internet-of-everything/

We’re usually averse to buzzwords at HackSpace magazine, but not this month: in issue 7, we’re taking a deep dive into the Internet of Things.

HackSpace magazine issue 7 cover

Internet of Things (IoT)

To many people, IoT is a shady term used by companies to sell you something you already own, but this time with WiFi; to us, it’s a way to make our builds smarter, more useful, and more connected. In HackSpace magazine #7, you can join us on a tour of the boards that power IoT projects, marvel at the ways in which other makers are using IoT, and get started with your first IoT project!

Awesome projects

DIY retro computing: this issue, we’re taking our collective hat off to Spencer Owen. He stuck his home-brew computer on Tindie thinking he might make a bit of beer money — now he’s paying the mortgage with his making skills and inviting others to build modules for his machine. And if that tickles your fancy, why not take a crack at our Z80 tutorial? Get out your breadboard, assemble your jumper wires, and prepare to build a real-life computer!

Inside HackSpace magazine issue 7

Shameless patriotism: combine Lego, Arduino, and the car of choice for 1960s gold bullion thieves, and you’ve got yourself a groovy weekend project. We proudly present to you one man’s epic quest to add LED lights (controllable via a smartphone!) to his daughter’s LEGO Mini Cooper.

Makerspaces

Patriotism intensifies: for the last 200-odd years, the Black Country has been a hotbed of making. Urban Hax, based in Walsall, is the latest makerspace to show off its riches in the coveted Space of the Month pages. Every space has its own way of doing things, but not every space has a portrait of Rob Halford on the wall. All hail!

Inside HackSpace magazine issue 7

Diversity: advice on diversity often boils down to ‘Be nice to people’, which might feel more vague than actionable. This is where we come in to help: it is truly worth making the effort to give people of all backgrounds access to your makerspace, so we take a look at why it’s nice to be nice, and at the ways in which one makerspace has put niceness into practice — with great results.

And there’s more!

We also show you how to easily calculate the size and radius of laser-cut gears, use a bank of LEDs to etch PCBs in your own mini factory, and use chemistry to mess with your lunch menu.

Inside HackSpace magazine issue 7
Helen Steer inside HackSpace magazine issue 7
Inside HackSpace magazine issue 7

All this plus much, much more waits for you in HackSpace magazine issue 7!

Get your copy of HackSpace magazine

If you like the sound of that, you can find HackSpace magazine in WHSmith, Tesco, Sainsbury’s, and independent newsagents in the UK. If you live in the US, check out your local Barnes & Noble, Fry’s, or Micro Center next week. We’re also shipping to stores in Australia, Hong Kong, Canada, Singapore, Belgium, and Brazil, so be sure to ask your local newsagent whether they’ll be getting HackSpace magazine.

And if you can’t get to the shops, fear not: you can subscribe from £4 an issue from our online shop. And if you’d rather try before you buy, you can always download the free PDF. Happy reading, and happy making!

The post HackSpace magazine 7: Internet of Everything appeared first on Raspberry Pi.

AWS Online Tech Talks – May and Early June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-may-and-early-june-2018/

AWS Online Tech Talks – May and Early June 2018  

Join us this month to learn about some of the exciting new services and solution best practices at AWS. We also have our first re:Invent 2018 webinar series, “How to re:Invent”. Sign up now to learn more; we look forward to seeing you.

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

Analytics & Big Data

May 21, 2018 | 11:00 AM – 11:45 AM PT Integrating Amazon Elasticsearch with your DevOps Tooling – Learn how you can easily integrate Amazon Elasticsearch Service into your DevOps tooling and gain valuable insight from your log data.

May 23, 2018 | 11:00 AM – 11:45 AM PT – Data Warehousing and Data Lake Analytics, Together – Learn how to query data across your data warehouse and data lake without moving data.

May 24, 2018 | 11:00 AM – 11:45 AM PT – Data Transformation Patterns in AWS – Discover how to perform common data transformations on the AWS Data Lake.

Compute

May 29, 2018 | 01:00 PM – 01:45 PM PT – Creating and Managing a WordPress Website with Amazon Lightsail – Learn about Amazon Lightsail and how you can create, run and manage your WordPress websites with Amazon’s simple compute platform.

May 30, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Life Sciences with HPC on AWS – Learn how you can accelerate your Life Sciences research workloads by harnessing the power of high performance computing on AWS.

Containers

May 24, 2018 | 01:00 PM – 01:45 PM PT – Building Microservices with the 12 Factor App Pattern on AWS – Learn best practices for building containerized microservices on AWS, and how traditional software design patterns evolve in the context of containers.

Databases

May 21, 2018 | 01:00 PM – 01:45 PM PT – How to Migrate from Cassandra to Amazon DynamoDB – Get the benefits, best practices and guides on how to migrate your Cassandra databases to Amazon DynamoDB.

May 23, 2018 | 01:00 PM – 01:45 PM PT – 5 Hacks for Optimizing MySQL in the Cloud – Learn how to optimize your MySQL databases for high availability, performance, and disaster resilience using RDS.

DevOps

May 23, 2018 | 09:00 AM – 09:45 AM PT – .NET Serverless Development on AWS – Learn how to build a modern serverless application in .NET Core 2.0.

Enterprise & Hybrid

May 22, 2018 | 11:00 AM – 11:45 AM PT – Hybrid Cloud Customer Use Cases on AWS – Learn how customers are leveraging AWS hybrid cloud capabilities to easily extend their datacenter capacity, deliver new services and applications, and ensure business continuity and disaster recovery.

IoT

May 31, 2018 | 11:00 AM – 11:45 AM PT – Using AWS IoT for Industrial Applications – Discover how you can quickly onboard your fleet of connected devices, keep them secure, and build predictive analytics with AWS IoT.

Machine Learning

May 22, 2018 | 09:00 AM – 09:45 AM PT – Using Apache Spark with Amazon SageMaker – Discover how to use Apache Spark with Amazon SageMaker for training jobs and application integration.

May 24, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS DeepLens – Learn how AWS DeepLens provides a new way for developers to learn machine learning by pairing the physical device with a broad set of tutorials, examples, source code, and integration with familiar AWS services.

Management Tools

May 21, 2018 | 09:00 AM – 09:45 AM PT – Gaining Better Observability of Your VMs with Amazon CloudWatch – Learn how CloudWatch Agent makes it easy for customers like Rackspace to monitor their VMs.

Mobile

May 29, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive on Amazon Pinpoint Segmentation and Endpoint Management – See how segmentation and endpoint management with Amazon Pinpoint can help you target the right audience.

Networking

May 31, 2018 | 09:00 AM – 09:45 AM PT – Making Private Connectivity the New Norm via AWS PrivateLink – See how PrivateLink enables service owners to offer private endpoints to customers outside their company.

Security, Identity, & Compliance

May 30, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Certificate Manager Private Certificate Authority (CA) – Learn how AWS Certificate Manager (ACM) Private Certificate Authority (CA), a managed private CA service, helps you easily and securely manage the lifecycle of your private certificates.

June 1, 2018 | 09:00 AM – 09:45 AM PT – Introducing AWS Firewall Manager – Centrally configure and manage AWS WAF rules across your accounts and applications.

Serverless

May 22, 2018 | 01:00 PM – 01:45 PM PT – Building API-Driven Microservices with Amazon API Gateway – Learn how to build a secure, scalable API for your application in our tech talk about API-driven microservices.

Storage

May 30, 2018 | 11:00 AM – 11:45 AM PT – Accelerate Productivity by Computing at the Edge – Learn how AWS Snowball Edge support for compute instances helps accelerate data transfers, execute custom applications, and reduce overall storage costs.

June 1, 2018 | 11:00 AM – 11:45 AM PT – Learn to Build a Cloud-Scale Website Powered by Amazon EFS – Technical deep dive where you’ll learn tips and tricks for integrating WordPress, Drupal and Magento with Amazon EFS.


AWS Online Tech Talks – April & Early May 2018

Post Syndicated from Betsy Chernoff original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-april-early-may-2018/

We have several upcoming tech talks in the month of April and early May. Come join us to learn about AWS services and solution offerings. We’ll have AWS experts online to help answer questions in real-time. Sign up now to learn more; we look forward to seeing you.

Note – All sessions are free and in Pacific Time.

April & early May — 2018 Schedule

Compute

April 30, 2018 | 01:00 PM – 01:45 PM PT – Best Practices for Running Amazon EC2 Spot Instances with Amazon EMR (300) – Learn about the best practices for scaling big data workloads as well as process, store, and analyze big data securely and cost effectively with Amazon EMR and Amazon EC2 Spot Instances.

May 1, 2018 | 01:00 PM – 01:45 PM PT – How to Bring Microsoft Apps to AWS (300) – Learn more about how to save significant money by bringing your Microsoft workloads to AWS.

May 2, 2018 | 01:00 PM – 01:45 PM PT – Deep Dive on Amazon EC2 Accelerated Computing (300) – Get a technical deep dive on how AWS’ GPU and FPGA-based compute services can help you to optimize and accelerate your ML/DL and HPC workloads in the cloud.

Containers

April 23, 2018 | 11:00 AM – 11:45 AM PT – New Features for Building Powerful Containerized Microservices on AWS (300) – Learn about how this new feature works and how you can start using it to build and run modern, containerized applications on AWS.

Databases

April 23, 2018 | 01:00 PM – 01:45 PM PT – ElastiCache: Deep Dive Best Practices and Usage Patterns (200) – Learn about Redis-compatible in-memory data store and cache with Amazon ElastiCache.

April 25, 2018 | 01:00 PM – 01:45 PM PT – Intro to Open Source Databases on AWS (200) – Learn how to tap the benefits of open source databases on AWS without the administrative hassle.

DevOps

April 25, 2018 | 09:00 AM – 09:45 AM PT – Debug your Container and Serverless Applications with AWS X-Ray in 5 Minutes (300) – Learn how AWS X-Ray makes debugging your Container and Serverless applications fun.

Enterprise & Hybrid

April 23, 2018 | 09:00 AM – 09:45 AM PT – An Overview of Best Practices of Large-Scale Migrations (300) – Learn about the tools and best practices on how to migrate to AWS at scale.

April 24, 2018 | 11:00 AM – 11:45 AM PT – Deploy your Desktops and Apps on AWS (300) – Learn how to deploy your desktops and apps on AWS with Amazon WorkSpaces and Amazon AppStream 2.0.

IoT

May 2, 2018 | 11:00 AM – 11:45 AM PT – How to Easily and Securely Connect Devices to AWS IoT (200) – Learn how to easily and securely connect devices to the cloud and reliably scale to billions of devices and trillions of messages with AWS IoT.

Machine Learning

April 24, 2018 | 09:00 AM – 09:45 AM PT Automate for Efficiency with Amazon Transcribe and Amazon Translate (200) – Learn how you can increase the efficiency and reach your operations with Amazon Translate and Amazon Transcribe.

April 26, 2018 | 09:00 AM – 09:45 AM PT Perform Machine Learning at the IoT Edge using AWS Greengrass and Amazon Sagemaker (200) – Learn more about developing machine learning applications for the IoT edge.

Mobile

April 30, 2018 | 11:00 AM – 11:45 AM PT – Offline GraphQL Apps with AWS AppSync (300) – Come learn how to enable real-time and offline data in your applications with GraphQL using AWS AppSync.

Networking

May 2, 2018 | 09:00 AM – 09:45 AM PT Taking Serverless to the Edge (300) – Learn how to run your code closer to your end users in a serverless fashion. Also, David Von Lehman from Aerobatic will discuss how they used Lambda@Edge to reduce latency and cloud costs for their customers’ websites.

Security, Identity & Compliance

April 30, 2018 | 09:00 AM – 09:45 AM PT – Amazon GuardDuty – Let’s Attack My Account! (300) – Amazon GuardDuty Test Drive – Practical steps on generating test findings.

May 3, 2018 | 09:00 AM – 09:45 AM PT – Protect Your Game Servers from DDoS Attacks (200) – Learn how to use the new AWS Shield Advanced for EC2 to protect your internet-facing game servers against network layer DDoS attacks and application layer attacks of all kinds.

Serverless

April 24, 2018 | 01:00 PM – 01:45 PM PT – Tips and Tricks for Building and Deploying Serverless Apps In Minutes (200) – Learn how to build and deploy apps in minutes.

Storage

May 1, 2018 | 11:00 AM – 11:45 AM PT – Building Data Lakes That Cost Less and Deliver Results Faster (300) – Learn how Amazon S3 Select and Amazon Glacier Select increase application performance by up to 400% and reduce total cost of ownership by extending your data lake into cost-effective archive storage.

May 3, 2018 | 11:00 AM – 11:45 AM PT – Integrating On-Premises Vendors with AWS for Backup (300) – Learn how to work with AWS and technology partners to build backup & restore solutions for your on-premises, hybrid, and cloud native environments.

Engineering deep dive: Encoding of SCTs in certificates

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org/2018/04/04/sct-encoding.html

Let’s Encrypt recently launched SCT embedding in certificates. This feature allows browsers to check that a certificate was submitted to a Certificate Transparency log. As part of the launch, we did a thorough review to confirm that the encoding of Signed Certificate Timestamps (SCTs) in our certificates matches the relevant specifications. In this post, I’ll dive into the details. You’ll learn more about X.509, ASN.1, DER, and TLS encoding, with references to the relevant RFCs.

Certificate Transparency offers three ways to deliver SCTs to a browser: in a TLS extension, in stapled OCSP, or embedded in a certificate. We chose to implement the embedding method because it would just work for Let’s Encrypt subscribers without additional work. In the SCT embedding method, we submit a “precertificate” with a poison extension (see footnote 1) to a set of CT logs, and get back SCTs. We then issue a real certificate based on the precertificate, with two changes: the poison extension is removed, and the SCTs obtained earlier are added in another extension.

Given a certificate, let’s first look for the SCT list extension. According to RFC 6962 section 3.3, the extension OID for a list of SCTs is 1.3.6.1.4.1.11129.2.4.2. An OID (object ID) is a series of integers, hierarchically assigned and globally unique. They are used extensively in X.509, for instance to uniquely identify extensions.

We can download an example certificate (https://acme-v01.api.letsencrypt.org/acme/cert/031f2484307c9bc511b3123cb236a480d451), and view it using OpenSSL (if your OpenSSL is old, it may not display the detailed information):

$ openssl x509 -noout -text -inform der -in Downloads/031f2484307c9bc511b3123cb236a480d451

CT Precertificate SCTs:
Signed Certificate Timestamp:
Version : v1(0)
Log ID : DB:74:AF:EE:CB:29:EC:B1:FE:CA:3E:71:6D:2C:E5:B9:
AA:BB:36:F7:84:71:83:C7:5D:9D:4F:37:B6:1F:BF:64
Timestamp : Mar 29 18:45:07.993 2018 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:44:02:20:7E:1F:CD:1E:9A:2B:D2:A5:0A:0C:81:E7:
13:03:3A:07:62:34:0D:A8:F9:1E:F2:7A:48:B3:81:76:
40:15:9C:D3:02:20:65:9F:E9:F1:D8:80:E2:E8:F6:B3:
25:BE:9F:18:95:6D:17:C6:CA:8A:6F:2B:12:CB:0F:55:
FB:70:F7:59:A4:19
Signed Certificate Timestamp:
Version : v1(0)
Log ID : 29:3C:51:96:54:C8:39:65:BA:AA:50:FC:58:07:D4:B7:
6F:BF:58:7A:29:72:DC:A4:C3:0C:F4:E5:45:47:F4:78
Timestamp : Mar 29 18:45:08.010 2018 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:46:02:21:00:AB:72:F1:E4:D6:22:3E:F8:7F:C6:84:
91:C2:08:D2:9D:4D:57:EB:F4:75:88:BB:75:44:D3:2F:
95:37:E2:CE:C1:02:21:00:8A:FF:C4:0C:C6:C4:E3:B2:
45:78:DA:DE:4F:81:5E:CB:CE:2D:57:A5:79:34:21:19:
A1:E6:5B:C7:E5:E6:9C:E2

Now let’s go a little deeper. How is that extension represented in the certificate? Certificates are expressed in ASN.1, which generally refers to both a language for expressing data structures and a set of formats for encoding them. The most common format, DER, is a tag-length-value format. That is, to encode an object, first you write down a tag representing its type (usually one byte), then you write down a number expressing how long the object is, then you write down the object contents. This is recursive: an object can contain multiple objects within it, each of which has its own tag, length, and value.

One of the cool things about DER and other tag-length-value formats is that you can decode them to some degree without knowing what they mean. For instance, I can tell you that 0x30 means the data type “SEQUENCE” (a struct, in ASN.1 terms), and 0x02 means “INTEGER”, then give you this hex byte sequence to decode:

30 06 02 01 03 02 01 0A

You could tell me right away that decodes to:

SEQUENCE
INTEGER 3
INTEGER 10

Try it yourself with this great JavaScript ASN.1 decoder: https://lapo.it/asn1js/#300602010302010A. However, you wouldn’t know what those integers represent without the corresponding ASN.1 schema (or “module”). For instance, if you knew that this was a piece of DogData, and the schema was:

DogData ::= SEQUENCE {
legs INTEGER,
cutenessLevel INTEGER
}

You’d know this referred to a three-legged dog with a cuteness level of 10.
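
To make that concrete, here is a minimal sketch of such a decoder in Python. It only understands SEQUENCE, INTEGER, and short-form lengths, so it is nowhere near a real DER parser, but it is enough to walk the example above:

def decode(data, depth=0):
    i = 0
    while i < len(data):
        tag, length = data[i], data[i + 1]          # short-form length only
        value = data[i + 2:i + 2 + length]
        if tag == 0x30:                             # SEQUENCE: recurse into the contents
            print("  " * depth + "SEQUENCE")
            decode(value, depth + 1)
        elif tag == 0x02:                           # INTEGER: big-endian two's complement
            print("  " * depth + "INTEGER " + str(int.from_bytes(value, "big", signed=True)))
        i += 2 + length

decode(bytes.fromhex("300602010302010A"))   # prints SEQUENCE / INTEGER 3 / INTEGER 10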

We can take some of this knowledge and apply it to our certificates. As a first step, convert the above certificate to hex with xxd -ps < Downloads/031f2484307c9bc511b3123cb236a480d451. You can then copy and paste the result into lapo.it/asn1js. You can also run openssl asn1parse -i -inform der -in Downloads/031f2484307c9bc511b3123cb236a480d451 to use OpenSSL’s parser, which is less easy to use in some ways, but easier to copy and paste.
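
If you would rather poke at the certificate programmatically, a short sketch along these lines (using the pyca/cryptography library, and assuming a recent version where the backend argument to load_der_x509_certificate is optional) will list the extension OIDs; 1.3.6.1.4.1.11129.2.4.2 should show up among them:

# Sketch: list the extension OIDs in the example certificate with pyca/cryptography.
from cryptography import x509

with open("Downloads/031f2484307c9bc511b3123cb236a480d451", "rb") as f:
    cert = x509.load_der_x509_certificate(f.read())

for ext in cert.extensions:
    print(ext.oid.dotted_string)    # the SCT list extension prints as 1.3.6.1.4.1.11129.2.4.2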

In the decoded data, we can find the OID 1.3.6.1.4.1.11129.2.4.2, indicating the SCT list extension. Per RFC 5280, section 4.1, an extension is defined:

Extension ::= SEQUENCE {
extnID OBJECT IDENTIFIER,
critical BOOLEAN DEFAULT FALSE,
extnValue OCTET STRING
-- contains the DER encoding of an ASN.1 value
-- corresponding to the extension type identified
-- by extnID
}

We’ve found the extnID. The “critical” field is omitted because it has the default value (false). Next up is the extnValue. This has the type OCTET STRING, which has the tag “0x04”. OCTET STRING means “here’s a bunch of bytes!” In this case, as described by the spec, those bytes happen to contain more DER. This is a fairly common pattern in X.509 to deal with parameterized data. For instance, this allows defining a structure for extensions without knowing ahead of time all the structures that a future extension might want to carry in its value. If you’re a C programmer, think of it as a void* for data structures. If you prefer Go, think of it as an interface{}.

Here’s that extnValue:

04 81 F5 0481F200F0007500DB74AFEECB29ECB1FECA3E716D2CE5B9AABB36F7847183C75D9D4F37B61FBF64000001627313EB19000004030046304402207E1FCD1E9A2BD2A50A0C81E713033A0762340DA8F91EF27A48B3817640159CD30220659FE9F1D880E2E8F6B325BE9F18956D17C6CA8A6F2B12CB0F55FB70F759A419007700293C519654C83965BAAA50FC5807D4B76FBF587A2972DCA4C30CF4E54547F478000001627313EB2A0000040300483046022100AB72F1E4D6223EF87FC68491C208D29D4D57EBF47588BB7544D32F9537E2CEC10221008AFFC40CC6C4E3B24578DADE4F815ECBCE2D57A579342119A1E65BC7E5E69CE2

That’s tag “0x04”, meaning OCTET STRING, followed by “0x81 0xF5”, meaning “this string is 245 bytes long” (the 0x81 prefix is part of variable length number encoding; see footnote 2).

According to RFC 6962, section 3.3, “obtained SCTs can be directly embedded in the final certificate, by encoding the SignedCertificateTimestampList structure as an ASN.1 OCTET STRING and inserting the resulting data in the TBSCertificate as an X.509v3 certificate extension”.

So, we have an OCTET STRING, all’s good, right? Except if you remove the tag and length from extnValue to get its value, you’re left with:

04 81 F2 00F0007500DB74AFEEC…

There’s that “0x04” tag again, but with a shorter length. Why do we nest one OCTET STRING inside another? It’s because the contents of extnValue are required by RFC 5280 to be valid DER, but a SignedCertificateTimestampList is not encoded using DER (more on that in a minute). So, by RFC 6962, a SignedCertificateTimestampList is wrapped in an OCTET STRING, which is wrapped in another OCTET STRING (the extnValue).

Once we decode that second OCTET STRING, we’re left with the contents:

00F0007500DB74AFEEC…

“0x00” isn’t a valid tag in DER. What is this? It’s TLS encoding. This is defined in RFC 5246, section 4 (the TLS 1.2 RFC). TLS encoding, like ASN.1, has both a way to define data structures and a way to encode those structures. TLS encoding differs from DER in that there are no tags, and lengths are only encoded when necessary for variable-length arrays. Within an encoded structure, the type of a field is determined by its position, rather than by a tag. This means that TLS-encoded structures are more compact than DER structures, but also that they can’t be processed without knowing the corresponding schema. For instance, here’s the top-level schema from RFC 6962, section 3.3:

The contents of the ASN.1 OCTET STRING embedded in an OCSP extension
or X509v3 certificate extension are as follows:

opaque SerializedSCT<1..2^16-1>;

struct {
SerializedSCT sct_list <1..2^16-1>;
} SignedCertificateTimestampList;

Here, "SerializedSCT" is an opaque byte string that contains the
serialized TLS structure.

Right away, we’ve found one of those variable-length arrays. The length of such an array (in bytes) is always represented by a length field just big enough to hold the max array size. The max size of an sct_list is 65535 bytes, so the length field is two bytes wide. Sure enough, those first two bytes are “0x00 0xF0”, or 240 in decimal. In other words, this sct_list will have 240 bytes. We don’t yet know how many SCTs will be in it. That will become clear only by continuing to parse the encoded data and seeing where each struct ends (spoiler alert: there are two SCTs!).
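
If you wanted to automate that outer layer, a small sketch like the following (plain Python, no error handling, operating on the 240-byte payload plus its two-byte length prefix) splits the list into individual SerializedSCT byte strings:

import struct

def split_sct_list(sct_list_bytes):
    # sct_list_bytes starts with the two-byte sct_list length: 00 F0 ...
    (total_len,) = struct.unpack(">H", sct_list_bytes[:2])
    body = sct_list_bytes[2:2 + total_len]
    scts, offset = [], 0
    while offset < len(body):
        # each SerializedSCT is itself prefixed by its own two-byte length
        (sct_len,) = struct.unpack(">H", body[offset:offset + 2])
        scts.append(body[offset + 2:offset + 2 + sct_len])
        offset += 2 + sct_len
    return scts

# Run on the extension payload above and you get two SCTs, 0x75 and 0x77 bytes long.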

Now we know the first SerializedSCT starts with 0075…. SerializedSCT is itself a variable-length field, this time containing opaque bytes (much like OCTET STRING back in the ASN.1 world). Like SignedCertificateTimestampList, it has a max size of 65535 bytes, so we pull off the first two bytes and discover that the first SerializedSCT is 0x0075 (117 decimal) bytes long. Here’s the whole thing, in hex:

00DB74AFEECB29ECB1FECA3E716D2CE5B9AABB36F7847183C75D9D4F37B61FBF64000001627313EB19000004030046304402207E1FCD1E9A2BD2A50A0C81E713033A0762340DA8F91EF27A48B3817640159CD30220659FE9F1D880E2E8F6B325BE9F18956D17C6CA8A6F2B12CB0F55FB70F759A419

This can be decoded using the TLS encoding struct defined in RFC 6962, section 3.2:

enum { v1(0), (255) }
Version;

struct {
opaque key_id[32];
} LogID;

opaque CtExtensions<0..2^16-1>;

struct {
Version sct_version;
LogID id;
uint64 timestamp;
CtExtensions extensions;
digitally-signed struct {
Version sct_version;
SignatureType signature_type = certificate_timestamp;
uint64 timestamp;
LogEntryType entry_type;
select(entry_type) {
case x509_entry: ASN.1Cert;
case precert_entry: PreCert;
} signed_entry;
CtExtensions extensions;
};
} SignedCertificateTimestamp;

Breaking that down:

# Version sct_version v1(0)
00
# LogID id (aka opaque key_id[32])
DB74AFEECB29ECB1FECA3E716D2CE5B9AABB36F7847183C75D9D4F37B61FBF64
# uint64 timestamp (milliseconds since the epoch)
000001627313EB19
# CtExtensions extensions (zero-length array)
0000
# digitally-signed struct
04030046304402207E1FCD1E9A2BD2A50A0C81E713033A0762340DA8F91EF27A48B3817640159CD30220659FE9F1D880E2E8F6B325BE9F18956D17C6CA8A6F2B12CB0F55FB70F759A419
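
The same field-by-field parse can be sketched in a few lines of Python. This is only an illustration of the fixed-width layout, with no error handling or signature verification:

import struct

def parse_sct(sct):
    version = sct[0]                                # Version sct_version
    log_id = sct[1:33]                              # opaque key_id[32]
    (timestamp,) = struct.unpack(">Q", sct[33:41])  # uint64 timestamp, ms since the epoch
    (ext_len,) = struct.unpack(">H", sct[41:43])    # CtExtensions length
    extensions = sct[43:43 + ext_len]
    digitally_signed = sct[43 + ext_len:]           # SignatureAndHashAlgorithm + signature
    return version, log_id, timestamp, extensions, digitally_signed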

To understand the “digitally-signed struct,” we need to turn back to RFC 5246, section 4.7. It says:

A digitally-signed element is encoded as a struct DigitallySigned:

struct {
SignatureAndHashAlgorithm algorithm;
opaque signature<0..2^16-1>;
} DigitallySigned;

And in section 7.4.1.4.1:

enum {
none(0), md5(1), sha1(2), sha224(3), sha256(4), sha384(5),
sha512(6), (255)
} HashAlgorithm;

enum { anonymous(0), rsa(1), dsa(2), ecdsa(3), (255) }
SignatureAlgorithm;

struct {
HashAlgorithm hash;
SignatureAlgorithm signature;
} SignatureAndHashAlgorithm;

We have “0x0403”, which corresponds to sha256(4) and ecdsa(3). The next two bytes, “0x0046”, tell us the length of the “opaque signature” field, 70 bytes in decimal. To decode the signature, we reference RFC 4492 section 5.4, which says:

The digitally-signed element is encoded as an opaque vector <0..2^16-1>, the
contents of which are the DER encoding corresponding to the
following ASN.1 notation.

Ecdsa-Sig-Value ::= SEQUENCE {
r INTEGER,
s INTEGER
}
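
Continuing the sketch, the digitally-signed blob can be peeled apart the same way; whatever is left after the two algorithm bytes and the two-byte length is the DER-encoded Ecdsa-Sig-Value, which goes back to an ASN.1 parser:

import struct

def parse_digitally_signed(blob):
    hash_alg = blob[0]                            # 4 = sha256
    sig_alg = blob[1]                             # 3 = ecdsa
    (sig_len,) = struct.unpack(">H", blob[2:4])   # length of the opaque signature
    signature_der = blob[4:4 + sig_len]           # DER-encoded Ecdsa-Sig-Value
    return hash_alg, sig_alg, signature_der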

Having dived through two layers of TLS encoding, we are now back in ASN.1 land! We decode the remaining bytes into a SEQUENCE containing two INTEGERS. And we’re done! Here’s the whole extension decoded:

# Extension SEQUENCE – RFC 5280
30
# length 0x0104 bytes (260 decimal)
820104
# OBJECT IDENTIFIER
06
# length 0x0A bytes (10 decimal)
0A
# value (1.3.6.1.4.1.11129.2.4.2)
2B06010401D679020402
# OCTET STRING
04
# length 0xF5 bytes (245 decimal)
81F5
# OCTET STRING (embedded) – RFC 6962
04
# length 0xF2 bytes (242 decimal)
81F2
# Beginning of TLS encoded SignedCertificateTimestampList – RFC 5246 / 6962
# length 0xF0 bytes
00F0
# opaque SerializedSCT<1..2^16-1>
# length 0x75 bytes
0075
# Version sct_version v1(0)
00
# LogID id (aka opaque key_id[32])
DB74AFEECB29ECB1FECA3E716D2CE5B9AABB36F7847183C75D9D4F37B61FBF64
# uint64 timestamp (milliseconds since the epoch)
000001627313EB19
# CtExtensions extensions (zero-length array)
0000
# digitally-signed struct – RFC 5246
# SignatureAndHashAlgorithm (ecdsa-sha256)
0403
# opaque signature<0..2^16-1>;
# length 0x0046
0046
# DER-encoded Ecdsa-Sig-Value – RFC 4492
30 # SEQUENCE
44 # length 0x44 bytes
02 # r INTEGER
20 # length 0x20 bytes
# value
7E1FCD1E9A2BD2A50A0C81E713033A0762340DA8F91EF27A48B3817640159CD3
02 # s INTEGER
20 # length 0x20 bytes
# value
659FE9F1D880E2E8F6B325BE9F18956D17C6CA8A6F2B12CB0F55FB70F759A419
# opaque SerializedSCT<1..2^16-1>
# length 0x77 bytes
0077
# Version sct_version v1(0)
00
# LogID id (aka opaque key_id[32])
293C519654C83965BAAA50FC5807D4B76FBF587A2972DCA4C30CF4E54547F478
# uint64 timestamp (milliseconds since the epoch)
000001627313EB2A
# CtExtensions extensions (zero-length array)
0000
# digitally-signed struct – RFC 5246
# SignatureAndHashAlgorithm (ecdsa-sha256)
0403
# opaque signature<0..2^16-1>;
# length 0x0048
0048
# DER-encoded Ecdsa-Sig-Value – RFC 4492
30 # SEQUENCE
46 # length 0x46 bytes
02 # r INTEGER
21 # length 0x21 bytes
# value
00AB72F1E4D6223EF87FC68491C208D29D4D57EBF47588BB7544D32F9537E2CEC1
02 # s INTEGER
21 # length 0x21 bytes
# value
008AFFC40CC6C4E3B24578DADE4F815ECBCE2D57A579342119A1E65BC7E5E69CE2

One surprising thing you might notice: in the first SCT, r and s are 32 bytes (0x20) long. In the second SCT, they are both 33 bytes (0x21) long, and have a leading zero. Integers in DER are two’s complement, so if the leftmost bit is set, they are interpreted as negative. Since r and s are positive, if the leftmost bit would be a 1, an extra byte has to be added so that the leftmost bit can be 0.
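
You can see the same rule at work in Python, which interprets the bytes just as DER does when told they are signed:

r = bytes.fromhex("AB72F1E4D6223EF87FC68491C208D29D4D57EBF47588BB7544D32F9537E2CEC1")
int.from_bytes(r, "big", signed=True)             # negative, because the leftmost bit is 1
int.from_bytes(b"\x00" + r, "big", signed=True)   # positive, thanks to the leading zero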

This is a little taste of what goes into encoding a certificate. I hope it was informative! If you’d like to learn more, I recommend “A Layman’s Guide to a Subset of ASN.1, BER, and DER” (http://luca.ntop.org/Teaching/Appunti/asn1.html).

Footnote 1: A “poison extension” is defined by RFC 6962 section 3.1:

The Precertificate is constructed from the certificate to be issued by adding a special
critical poison extension (OID 1.3.6.1.4.1.11129.2.4.3, whose
extnValue OCTET STRING contains ASN.1 NULL data (0x05 0x00))

In other words, it’s an empty extension whose only purpose is to ensure that certificate processors will not accept precertificates as valid certificates. The specification ensures this by setting the “critical” bit on the extension, which ensures that code that doesn’t recognize the extension will reject the whole certificate. Code that does recognize the extension specifically as poison will also reject the certificate.

Footnote 2: Lengths from 0-127 are represented by a single byte (short form). To express longer lengths, more bytes are used (long form). The high bit (0x80) on the first byte is set to distinguish long form from short form. The remaining bits are used to express how many more bytes to read for the length. For instance, 0x81F5 means “this is long form because the length is greater than 127, but there’s still only one byte of length (0xF5) to decode.”
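
Here’s that rule as a tiny Python helper, returning the decoded length and how many bytes of the input it consumed:

def der_length(data):
    first = data[0]
    if first < 0x80:                       # short form: the byte is the length itself
        return first, 1
    num_len_bytes = first & 0x7F           # long form: low bits count the length bytes that follow
    return int.from_bytes(data[1:1 + num_len_bytes], "big"), 1 + num_len_bytes

der_length(bytes.fromhex("81F5"))   # (245, 2)
der_length(bytes.fromhex("2A"))     # (42, 1)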

Join Us for AWS Security Week February 20–23 in San Francisco!

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/join-us-for-aws-security-week-february-20-23-in-san-francisco/

AWS Pop-up Loft image

Join us for AWS Security Week, February 20–23 at the AWS Pop-up Loft in San Francisco, where you can participate in four days of themed content that will help you secure your workloads on AWS. Each day will highlight a different security and compliance topic, and will include an overview session, a customer or partner speaker, a deep dive into the day’s topic, and a hands-on lab or demos of relevant AWS or partner services.

Tuesday (February 20) will kick off the week with a day devoted to identity and governance. On Wednesday, we will dig into secure configuration and automation, including a discussion about upcoming General Data Protection Regulation (GDPR) requirements. On Thursday, we will cover threat detection and remediation, which will include an Amazon GuardDuty lab. And on Friday, we will discuss incident response on AWS.

Sessions, demos, and labs about each of these topics will be led by seasoned security professionals from AWS, who will help you understand not just the basics, but also the nuances of building applications in the AWS Cloud in a robust and secure manner. AWS subject-matter experts will be available for “Ask the Experts” sessions during breaks.

Register today!

– Craig

AWS Online Tech Talks – January 2018

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-january-2018/

Happy New Year! Kick off 2018 right by expanding your AWS knowledge with a great batch of new Tech Talks. We’re covering some of the biggest launches from re:Invent including Amazon Neptune, Amazon Rekognition Video, AWS Fargate, AWS Cloud9, Amazon Kinesis Video Streams, AWS PrivateLink, AWS Single Sign-On and more!

January 2018 – Schedule

Noted below are the upcoming scheduled live, online technical sessions being held during the month of January. Make sure to register ahead of time so you won’t miss out on these free talks conducted by AWS subject matter experts.

Webinars featured this month are:

Monday January 22

Analytics & Big Data
11:00 AM – 11:45 AM PT Analyze your Data Lake, Fast @ Any Scale  Lvl 300

Database
01:00 PM – 01:45 PM PT Deep Dive on Amazon Neptune Lvl 200

Tuesday, January 23

Artificial Intelligence
9:00 AM – 09:45 AM PT  How to get the most out of Amazon Rekognition Video, a deep learning based video analysis service Lvl 300

Containers

11:00 AM – 11:45 AM PT Introducing AWS Fargate Lvl 200

Serverless
01:00 PM – 02:00 PM PT Overview of Serverless Application Deployment Patterns Lvl 400

Wednesday, January 24

DevOps
09:00 AM – 09:45 AM PT Introducing AWS Cloud9  Lvl 200

Analytics & Big Data
11:00 AM – 11:45 AM PT Deep Dive: Amazon Kinesis Video Streams Lvl 300

Database
01:00 PM – 01:45 PM PT Introducing Amazon Aurora with PostgreSQL Compatibility Lvl 200

Thursday, January 25

Artificial Intelligence
09:00 AM – 09:45 AM PT Introducing Amazon SageMaker Lvl 200

Mobile
11:00 AM – 11:45 AM PT Ionic and React Hybrid Web/Native Mobile Applications with Mobile Hub Lvl 200

IoT
01:00 PM – 01:45 PM PT Connected Product Development: Secure Cloud & Local Connectivity for Microcontroller-based Devices Lvl 200

Monday, January 29

Enterprise
11:00 AM – 11:45 AM PT Enterprise Solutions Best Practices 100 Achieving Business Value with AWS Lvl 100

Compute
01:00 PM – 01:45 PM PT Introduction to Amazon Lightsail Lvl 200

Tuesday, January 30

Security, Identity & Compliance
09:00 AM – 09:45 AM PT Introducing Managed Rules for AWS WAF Lvl 200

Storage
11:00 AM – 11:45 AM PT  Improving Backup & DR – AWS Storage Gateway Lvl 300

Compute
01:00 PM – 01:45 PM PT  Introducing the New Simplified Access Model for EC2 Spot Instances Lvl 200

Wednesday, January 31

Networking
09:00 AM – 09:45 AM PT  Deep Dive on AWS PrivateLink Lvl 300

Enterprise
11:00 AM – 11:45 AM PT Preparing Your Team for a Cloud Transformation Lvl 200

Compute
01:00 PM – 01:45 PM PT  The Nitro Project: Next-Generation EC2 Infrastructure Lvl 300

Thursday, February 1

Security, Identity & Compliance
09:00 AM – 09:45 AM PT  Deep Dive on AWS Single Sign-On Lvl 300

Storage
11:00 AM – 11:45 AM PT How to Build a Data Lake in Amazon S3 & Amazon Glacier Lvl 300

The deep learning Santa/Not Santa detector

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/deep-learning-santa-detector/

Did you see Mommy kissing Santa Claus? Or was it simply an imposter? The Not Santa detector is here to help solve the mystery once and for all.

Building a “Not Santa” detector on the Raspberry Pi using deep learning, Keras, and Python

The video is a demo of my “Not Santa” detector that I deployed to the Raspberry Pi. I trained the detector using deep learning, Keras, and Python. You can find the full source code and tutorial here: https://www.pyimagesearch.com/2017/12/18/keras-deep-learning-raspberry-pi/

Ho-ho-how does it work?

Note: Adrian Rosebrock is not Santa. But he does a good enough impression of the jolly old fellow that his disguise can fool a Raspberry Pi into thinking otherwise.

Raspberry Pi 'Not Santa' detector

We jest, but has anyone seen Adrian and Santa in the same room together?
Image c/o Adrian Rosebrock

But how is the Raspberry Pi able to detect the Santa-ness or Not-Santa-ness of people who walk into the frame?

Two words: deep learning

If you’re not sure what deep learning is, you’re not alone. It’s a hefty topic, and one that Adrian has written a book about, so I grilled him for a bluffers’ guide. In his words, deep learning is:

…a subfield of machine learning, which is, in turn a subfield of artificial intelligence (AI). While AI embodies a large, diverse set of techniques and algorithms related to automatic reasoning (inference, planning, heuristics, etc), the machine learning subfields are specifically interested in pattern recognition and learning from data.

Artificial Neural Networks (ANNs) are a class of machine learning algorithms that can learn from data. We have been using ANNs successfully for over 60 years, but something special happened in the past 5 years — (1) we’ve been able to accumulate massive datasets, orders of magnitude larger than previous datasets, and (2) we have access to specialized hardware to train networks faster (i.e., GPUs).

Given these large datasets and specialized hardware, deeper neural networks can be trained, leading to the term “deep learning”.

So now we have a bird’s-eye view of deep learning, how does the detector detect?

Cameras and twinkly lights

Adrian used a model he had trained on two datasets to detect whether or not an image contains Santa. He deployed the Not Santa detector code to a Raspberry Pi, then attached a camera, speakers, and The Pi Hut’s 3D Xmas Tree.
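
If you’re curious what such a model can look like in code, here’s a generic sketch of a tiny binary image classifier in Keras. It’s an illustration only, not Adrian’s actual network or training code (his full walkthrough is on PyImageSearch):

# A generic, minimal Keras binary classifier -- a sketch, not Adrian's actual model.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 3)),
    MaxPooling2D(),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D(),
    Flatten(),
    Dense(64, activation="relu"),
    Dense(1, activation="sigmoid"),   # probability that the frame contains Santa
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(...) would then be trained on the Santa / Not Santa image datasets.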

Raspberry Pi 'Not Santa' detector

Components for Santa detection
Image c/o Adrian Rosebrock

The camera captures footage of Santa in the wild, while the Christmas tree add-on provides a twinkly notification, accompanied by a resonant ho, ho, ho from the speakers.

A deeper deep dive into deep learning

A full breakdown of the project and the workings of the Not Santa detector can be found on Adrian’s blog, PyImageSearch, which includes links to other deep learning and image classification tutorials using TensorFlow and Keras. It’s an excellent place to start if you’d like to understand more about deep learning.

Build your own Santa detector

Santa might catch on to Adrian’s clever detector and start avoiding the camera, and for that eventuality, we have our own Santa detector. It uses motion detection to notify you of his presence (and your presents!).

Raspberry Pi Santa detector

Check out our Santa Detector resource here and use a passive infrared sensor, Raspberry Pi, and Scratch to catch the big man in action.

The post The deep learning Santa/Not Santa detector appeared first on Raspberry Pi.