One of the more interesting features introduced by TLS 1.3, the latest revision of the TLS protocol, was the so-called “zero roundtrip time connection resumption”, a mode of operation that allows a client to start sending application data, such as HTTP requests, without having to wait for the TLS handshake to complete, thus reducing the latency penalty of establishing a new connection.
The basic idea behind 0-RTT connection resumption is that if the client and server have previously established a TLS connection, they can use information cached from that session to establish a new one without negotiating the connection’s parameters from scratch. Notably, this allows the client to compute the encryption keys needed to protect application data before even talking to the server.
However, in the case of TLS, “zero roundtrip” only refers to the TLS handshake itself: the client and server are still required to first establish a TCP connection in order to be able to exchange TLS data.
Zero means zero
QUIC goes a step further, and allows clients to send application data in the very first roundtrip of the connection, without requiring any other handshake to be completed beforehand.
Unfortunately, 0-RTT connection resumption is not all smooth sailing, and it comes with caveats and risks, which is why Cloudflare does not enable 0-RTT connection resumption by default. Users should consider the risks involved and decide whether to use this feature or not.
For starters, 0-RTT connection resumption does not provide forward secrecy: a compromise of the secret parameters of a connection will trivially allow an attacker to decrypt the application data sent during the 0-RTT phase of new connections resumed from it. Data sent after the 0-RTT phase, that is, after the handshake has completed, is still safe, as TLS 1.3 (and QUIC) will still perform the normal, forward-secret key exchange for data sent after handshake completion.
More worryingly, application data sent during 0-RTT can be captured by an on-path attacker and then replayed multiple times to the same server. In many cases this is not a problem, as the attacker wouldn’t be able to decrypt the data, which is why 0-RTT connection resumption is useful, but in some cases this can be dangerous.
For example, imagine a bank that allows an authenticated user (e.g. using HTTP cookies, or other HTTP authentication mechanisms) to send money from their account to another user by making an HTTP request to a specific API endpoint. If an attacker were able to capture that request when 0-RTT connection resumption was used, they wouldn’t be able to see the plaintext and get the user’s credentials, because they wouldn’t know the secret key used to encrypt the data; however, they could still potentially drain that user’s bank account by replaying the same request over and over.
Of course this problem is not specific to banking APIs: any non-idempotent request has the potential to cause undesired side effects, ranging from slight malfunctions to serious security breaches.
In order to help mitigate this risk, Cloudflare will always reject 0-RTT requests made with non-safe HTTP methods (like POST or PUT), but in the end it’s up to the application sitting behind Cloudflare to decide which requests can and cannot be allowed with 0-RTT connection resumption, as even innocuous-looking ones can have side effects on the origin server.
To help origins detect and potentially disallow specific requests, Cloudflare also follows the techniques described in RFC8470. Notably, Cloudflare will add the Early-Data: 1 HTTP header to requests received during 0-RTT resumption that are forwarded to origins.
Origins able to understand this header can then decide to answer the request with the 425 (Too Early) HTTP status code, which instructs the client that originated the request to retry the same request after the TLS or QUIC handshake has fully completed, at which point there is no longer any risk of replay attacks. This could even be implemented as part of a Cloudflare Worker.
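For illustration, here is a minimal sketch of what such a Worker might look like (the /api/ path check is a hypothetical example policy, not a Cloudflare default):

// Hypothetical Worker sketch: ask clients to retry replay-sensitive
// requests that arrived as 0-RTT early data (per RFC 8470).
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const isEarlyData = request.headers.get('Early-Data') === '1';
  const isSafeMethod = request.method === 'GET' || request.method === 'HEAD';
  // Example-only policy: treat API endpoints as replay-sensitive.
  const isSensitive = new URL(request.url).pathname.startsWith('/api/');
  if (isEarlyData && (!isSafeMethod || isSensitive)) {
    return new Response('Retry after handshake', { status: 425 });
  }
  return fetch(request); // safe to forward to the origin
}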
This makes it possible for origins to allow 0-RTT requests for endpoints that are safe, such as a website’s index page which is where 0-RTT is most useful, as that is typically the first request a browser makes after establishing a connection, while still protecting other endpoints such as APIs and form submissions. But if an origin does not provide any of those non-idempotent endpoints, no action is required.
One stop shop for all your 0-RTT needs
Just like we previously did for TLS 1.3, we now support 0-RTT resumption for QUIC as well. In honor of this event, we have dusted off the user-interface controls that allow Cloudflare users to enable this feature for their websites, and introduced a dedicated toggle to control whether 0-RTT connection resumption is enabled or not, which can be found under the “Network” tab on the Cloudflare dashboard:
When TLS 1.3 and/or QUIC (via the HTTP/3 toggle) are enabled, 0-RTT connection resumption will be automatically offered to clients that support it, and the replay mitigation mentioned above will also be applied to the connections making use of this feature.
In addition, if you are a user of our open-source HTTP/3 patch for NGINX, after updating the patch to the latest version, you’ll be able to enable support for 0-RTT connection resumption in your own NGINX-based HTTP/3 deployment by using the built-in “ssl_early_data” option, which will work for both TLS 1.3 and QUIC+HTTP/3.
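As a rough sketch, a server block for such a deployment might look like the following (the HTTP/3 listen syntax depends on the patch version in use, and the paths are placeholders):

# Sketch of an NGINX server block with 0-RTT enabled for TLS 1.3 and QUIC.
server {
    listen 443 ssl;              # TCP: TLS 1.3
    listen 443 quic reuseport;   # UDP: QUIC + HTTP/3 (patched NGINX)

    ssl_protocols       TLSv1.3;
    ssl_certificate     /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    ssl_early_data on;           # enable 0-RTT connection resumption
}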
It was a scorching Monday on July 22 as temperatures soared above 37°C (99°F) in Austin, TX, the live music capital of the world. Only hours earlier, the last crowds dispersed from the historic East 6th Street entertainment district. A few blocks away, Cloudflarians were starting to make their way to the office. Little did those early arrivers know that they would soon be unknowingly participating in a Cloudflare time-honored tradition of dogfooding new services before releasing them to the wild.
East 6th Street, Austin, Texas
Dogfooding is when an organization uses its own products. In this case, we dogfed our newest cloud service, Magic Transit, which both protects and accelerates our customers’ entire network infrastructure—not just their web properties or TCP/UDP applications. With Magic Transit, Cloudflare announces your IP prefixes via BGP, attracts (routes) your traffic to our global network edge, blocks bad packets, and delivers good packets to your data centers via Anycast GRE.
We decided to use Austin’s network because we wanted to test the new service on a live network with real traffic from real people and apps. With the target identified, we began onboarding the Austin office in an always-on routing topology.
In an always-on routing mode, Cloudflare data centers constantly advertise Austin’s prefix, resulting in faster, almost immediate mitigation. As opposed to traditional on-demand scrubbing center solutions with limited networks, Cloudflare operates within 100 milliseconds of 99% of the Internet-connected population in the developed world. For our customers, this means that always-on DDoS mitigation doesn’t sacrifice performance due to suboptimal routing. On the contrary, Magic Transit can actually improve your performance due to our network’s reach.
Cloudflare’s Global Network
Now that we’ve completed onboarding Austin to Magic Transit, all we needed was a motivated attacker to launch a DDoS attack. Luckily, we found more than a few willing volunteers on our Site Reliability Engineering (SRE) team to execute the attack. While the teams were still assembling in multiple locations around the world, our SRE volunteer started firing packets at our target from an undisclosed location.
Without Magic Transit, the Austin office would’ve been hit directly with the packet flood. Two things could have happened in this case (not mutually exclusive):
Austin’s on-premise equipment (routers, firewalls, servers, etc.) would have been overwhelmed and failed
Austin’s service providers would have dropped packets that exceeded the office’s bandwidth allowance
Both cases would result in a very bad day for everyone.
Cloudflare DDoS Mitigation
Instead, when our SRE attacker launched the flood, the packets were automatically routed via BGP to Cloudflare’s network. The packets reached the closest data center via Anycast and encountered multiple defenses in the form of XDP, eBPF and iptables. Those defenses are populated with pre-configured static firewall rules as well as dynamic rules generated by our DDoS mitigation systems.
Static rules can vary from straightforward IP blocking and rate-limiting to more sophisticated expressions that match against specific packet attributes. Dynamic rules, on the other hand, are generated automatically in real time. To play fair with our attacker, we didn’t pre-configure any special rules against the attack; we wanted to give them a fair opportunity to take Austin down. Given our multi-layered protection approach, though, the odds were never actually in their favor.
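To make the static side concrete, rules of that kind could look like the following illustrative iptables commands (examples only, not Cloudflare’s actual rules):

# Drop all traffic from a known-bad source address.
iptables -A INPUT -s 198.51.100.7 -j DROP
# Rate-limit a UDP flood aimed at a specific port.
iptables -A INPUT -p udp --dport 53 -m hashlimit \
    --hashlimit-above 10000/sec --hashlimit-name dns-flood -j DROP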
Generating Dynamic Rules
As part of our multi-layered protection approach, dynamic rules are generated on-the-fly by analyzing the packets that route through our network. While the packets are being routed, flow data is asynchronously sampled, collected, and analyzed by two main detection systems. The first is called Gatebot and runs across the entire Cloudflare network; the second is our newly deployed DoSD (denial of service daemon), which operates locally within each data center. DoSD is an exciting improvement that we’ve just recently rolled out, and we look forward to writing more about its technical details here soon. DoSD samples at a much higher rate (1/100 packets) than Gatebot (~1/8000 packets), allowing it to detect even more attacks and block them faster.
The asynchronous attack detection lifecycle is represented as the dotted lines in the diagram below. Attacks are detected out of path to ensure that we don’t add any latency, and mitigation rules are pushed in line and removed as needed.
Multiple packet attributes and correlations are taken into consideration during analysis and detection. Gatebot and DoSD search for both new network anomalies and already known attacks. Once an attack is detected, rules are automatically generated, propagated, and applied in the optimal location within 10 seconds or less. Just to give you an idea of the scale, we’re talking about hundreds of thousands of dynamic rules that are applied and removed every second across the entire Cloudflare network.
One of the beauties of Gatebot and DoSD is that they don’t require a traffic learning period. Once a customer is onboarded, they’re protected immediately. They don’t need to sample traffic for weeks before kicking in. While we can always apply specific firewall rules if requested by the customer, no manual configuration is required by the customer or our teams. It just works.
What this mitigation process looks like in practice
Let’s look at what happened when one of our SREs tried to DDoS Austin and failed. During one of the first attempts, before DoSD had been rolled out globally, Austin employees on video calls noticed a few seconds of degraded audio and video quality before Gatebot kicked in. As soon as it did, quality was immediately restored. If we hadn’t had Magic Transit in-line, the degradation would have worsened to the point of full denial of service. Austin would have been offline, and our Austin colleagues wouldn’t have had a very productive day.
On a subsequent attack attempt which took place after DoSD was deployed, our SRE launched a SYN flood on Austin. The attack targeted multiple IP addresses in Austin’s prefix and peaked just above 250,000 packets per second. DoSD detected the attack and blocked it in approximately 3 seconds. DoSD’s quick response resulted in no degradation of service for the Austin team.
What We Learned
Dogfooding Magic Transit served as a valuable experiment for us, with lots of lessons learned on both the engineering and procedural fronts. On the engineering front, we fine-tuned our mitigations and optimized routing. On the procedural front, we drilled members of multiple teams, including the Security Operations Center and Solution Engineering teams, to help refine our runbooks. By doing so, we reduced the onboarding duration from days to hours, ensuring a quick and smooth onboarding experience for our customers.
Want To Learn More?
Request a demo and learn how you can protect and accelerate your network with Cloudflare.
This week we celebrated Cloudflare’s 9th birthday by launching a variety of new offerings that support our mission: to help build a better Internet. Below is a summary recap of how we celebrated Birthday Week 2019.
Every day Cloudflare protects over 20 million Internet properties from malicious bots, and this week you were invited to join in the fight! Now you can enable “bot fight mode” in the Firewall settings of the Cloudflare Dashboard and we’ll start deploying CPU-intensive code to traffic originating from malicious bots. This wastes the bots’ CPU resources and makes it more difficult and costly for perpetrators to deploy malicious bots at scale. We’ll also share the IP addresses of malicious bot traffic with our Bandwidth Alliance partners, who can help kick malicious bots offline. Join us in the battle against bad bots – and, as you can read here – you can help the climate too!
Speed matters, and if you manage a website or app, you want to make sure that you’re delivering a high performing website to all of your global end users. Now you can enable Browser Insights in the Speed section of the Cloudflare Dashboard to analyze website performance from the perspective of your users’ web browsers.
Several months ago we announced WARP, a free mobile app purpose-built to address the security and performance challenges of the mobile Internet, while also respecting user privacy. After months of testing and development, this week we (finally) rolled out WARP to approximately 2 million wait-list customers. We also enabled WARP Plus, a WARP experience that uses Argo routing technology to route your mobile traffic across faster, less-congested routes through the Internet. WARP and WARP Plus (WARP+) are now available in the iOS and Android app stores and we can’t wait for you to give them a try!
Last year we announced early support for QUIC, a UDP-based protocol that aims to make everything on the Internet work faster, with built-in encryption. The IETF subsequently decided that QUIC should be the foundation of the next generation of the HTTP protocol, HTTP/3. This week, Cloudflare was the first to introduce support for HTTP/3, in partnership with Google Chrome and Mozilla.
Finally, to wrap up our birthday week announcements, we announced Workers Sites. The Workers serverless platform continues to grow and evolve, and every day we discover new and innovative ways to help developers build and optimize their applications. Workers Sites enables developers to easily deploy lightweight static sites across Cloudflare’s global cloud platform without having to build out the traditional backend server infrastructure to support these sites.
We look forward to Birthday Week every year, as a chance to showcase some of our exciting new offerings — but we all know building a better Internet is about more than one week. It’s an effort that takes place all year long, and requires the help of our partners, employees and especially you — our customers. Thank you for being a customer, providing valuable feedback and helping us stay focused on our mission to help build a better Internet.
Can’t get enough of this week’s announcements, or want to learn more? Register for next week’s Birthday Week Recap webinar to get the inside scoop on every announcement.
Today, after a longer than expected wait, we’re opening WARP and WARP Plus to the general public. If you haven’t heard about it yet, WARP is a mobile app designed for everyone which uses our global network to secure all of your phone’s Internet traffic.
We announced WARP on April 1 of this year and expected to roll it out over the next few months at a fairly steady clip and get it released to everyone who wanted to use it by July. That didn’t happen. It turned out that building a next generation service to secure consumer mobile connections without slowing them down or burning battery was… harder than we originally thought.
Before today, there were approximately two million people on the waitlist to try WARP. That demand blew us away. It also embarrassed us. The common refrain is that consumers don’t care about their security and privacy, but the attention WARP got proved to us how wrong that assumption actually is.
This post is an explanation of why releasing WARP took so long, what we’ve learned along the way, and an apology for those who have been eagerly waiting. It also talks briefly about the rationale for why we built WARP as well as the privacy principles we’ve committed to. However, if you want a deeper dive on those last two topics, I encourage you to read our original launch announcement.
And, if you just want to jump in and try it, you can download and start using WARP on your iOS or Android devices for free through the following links:
If you’ve already installed the 1.1.1.1 App on your device, you may need to update to the latest version in order to get the option to enable WARP.
Let me start with the apology. We are sorry that making WARP available took far longer than we ever intended. As a way of hopefully making amends, we’re giving 10 GB of WARP Plus — the even faster version of WARP that uses Cloudflare’s Argo network — to everyone who was on the waitlist before today and has been patiently waiting.
For people just signing up today, the basic WARP service is free, without bandwidth caps or limitations. The unlimited version, WARP Plus, is available for a monthly subscription fee that varies by region and is designed to approximate what a McDonald’s Big Mac would cost in that region. On iOS, WARP Plus pricing is still being adjusted on a regional basis as of the publication of this post, but that should settle out in the next couple of days.
WARP Plus uses Cloudflare’s virtual private backbone, known as Argo, to achieve higher speeds and ensure your connection is encrypted across the long haul of the Internet. We charge for it because it costs us more to provide. However, in order to help spread the word about WARP, you can earn 1GB of WARP Plus for every friend you refer to sign up for WARP. And everyone you refer gets 1GB of WARP Plus for free to get started as well.
Okay, Thanks, That’s Nice, But What Took You So Long?
So what took us so long?
WARP is an ambitious project. We set out to secure Internet connections from mobile devices to the edge of Cloudflare’s network. In doing so, however, we didn’t want to slow devices down or burn excess battery. We wanted it to just work. We also wanted to bet on the technology of the future, not the technology of the past. Specifically, we wanted to build not around legacy protocols like IPsec, but instead around the hyper-efficient WireGuard protocol.
At some level, we thought it would be easy. We already had the 1.1.1.1 App that was securing DNS requests running on millions of mobile devices. That worked great. How much harder could securing all the rest of the requests on a device be? Right??
It turns out, a lot. Zack Bloom has written up a great technical post describing many of the challenges we faced and the solutions we had to invent to deal with them. If you’re interested, I encourage you to check it out.
Apple threw us a curveball by releasing iOS 12.2 just days before the April 1 planned roll out. The new version of iOS significantly changed the underlying network stack implementation in a way that made some of what we were doing to implement WARP unstable. Ultimately we had to find work-arounds in our networking code, costing us valuable time.
We had a version of the WARP app that (kind of) worked on April 1. But when we started to invite people from outside of Cloudflare to use it, we quickly realized that the mobile Internet around the world was far more wild and varied than we’d anticipated. The Internet is made up of diverse network components that do not always play nicely; we knew that. What we didn’t expect was how much more pain is introduced by the diversity of mobile carriers, mobile operating systems, and mobile device models.
And, while phones in our testbed were relatively stationary, phones in the real world move around — a lot. When they do, their network settings can change wildly. That doesn’t matter much for simple, stateless DNS queries, but for the rest of Internet traffic it makes things complicated. Keeping WireGuard fast requires long-lived sessions between your phone and a server in our network, and maintaining those sessions for hours and days is hard. Beyond even that, we use a technology called Anycast to route your traffic to our network, which means your traffic can move not just between machines but between entire data centers. That added yet another layer of complexity.
But there is a huge difference between hard and impossible. From long before the announcement, the team has been hard at work and I’m deeply proud of what they’ve accomplished. We changed our roll out plan to focus on iOS and solidify the shared underpinnings of the app to ensure it would work even with future network stack upgrades. We invited beta users not in the order of when they signed up, but instead based on networks where we didn’t yet have information to help us discover as many corner cases as possible. And we invented new technologies to keep session state even when the wild west of mobile networks and Anycast routing collide.
I’ve been running WARP on my phone since April 1. The first few months were… rough. Really rough. But, today, WARP has blended into the background of my mobile. And I sleep better knowing that my Internet connections from my phone are secure. Using my phone is as fast, and in some cases faster, than without WARP. In other words, WARP today does what we set out to accomplish: securing your mobile Internet connection and otherwise getting out of the way.
There Will Be Bugs
While WARP is a lot better than it was when we first announced it, we know there are still bugs. The most common bug we’re seeing these days is when WARP is significantly slower than using the mobile Internet without WARP. This is usually due to traffic being misrouted. For instance, we discovered a network in Turkey earlier this week that was being routed to London rather than our local Turkish facility. Once we’re aware of these routing issues we can typically fix them quickly.
Other common bugs involve captive portals — the pages where you have to enter information when connecting to, for instance, a hotel WiFi network. We’ve fixed a lot of them, but we haven’t had WARP users connect to every hotel WiFi yet, so there will inevitably still be some that are broken.
We’ve made it easy to report issues that you discover. From the 1.1.1.1 App you can click on the little bug icon near the top of the screen, or just shake your phone with the app open, and quickly send us a report. We expect, over the weeks ahead, we’ll be squashing many of the bugs that you report.
Even Faster With Plus
WARP is not just a product, it’s a testbed for all of the Internet-improving technology we have spent years developing. One dream was to use our Argo routing technology to allow all of your Internet traffic to use faster, less-congested routes through the Internet. Over the past several years, Argo has improved the speed of the Cloudflare customer websites that use it by an average of over 30%. Through some hard work of the team, we are making that technology available to you as WARP Plus.
The WARP Plus technology is not without cost for us. Routing your traffic over our network often costs us more than if we release it directly to the Internet. To cover those costs we charge a monthly fee — $4.99/month or less — for WARP Plus. The fee depends on the region that you’re in and is intended to approximate what a Big Mac would cost in the same region.
Basic WARP is free. Our first priority is not to make money off of WARP; rather, we want to grow it to secure every single phone. To help make that happen, we wanted to give you an incentive to share WARP with your friends. You can earn 1GB of free WARP Plus for every person you share WARP with, and everyone you refer also gets 1GB of WARP Plus for free. There is no limit on how much WARP Plus data you can earn by sharing.
The free consumer security space has traditionally not been the most reputable. Many companies have promised to keep consumers’ data safe but instead built businesses around selling it or using it to help target you with advertising. We think that’s disgusting. That is not Cloudflare’s business model and it never will be. WARP continues all the strong privacy protections that 1.1.1.1 launched with, including:
We don’t write user-identifiable log data to disk;
We will never sell your browsing data or use it in any way to target you with advertising;
You don’t need to provide any personal information — not your name, phone number, or email address — in order to use WARP or WARP Plus; and
We will regularly work with outside auditors to ensure we’re living up to these promises.
What WARP Is Not
From a technical perspective, WARP is a VPN. But it is designed for a very different audience than a traditional VPN. WARP is not designed to allow you to access geo-restricted content when you’re traveling. It will not hide your IP address from the websites you visit. If you’re looking for that kind of high-security protection, then a traditional VPN or a service like Tor is likely a better choice for you.
WARP, instead, is built for the average consumer. It’s built to ensure that your data is secured while it’s in transit. So the networks between you and the applications you’re using can’t spy on you. It will help protect you from people sniffing your data while you’re at a local coffee shop. It will also help ensure that your ISP isn’t hoovering up data on your browsing patterns to sell to advertisers.
WARP isn’t designed for the ultra-techie who wants to specify exactly what server their traffic will be routed through. There’s basically only one button in the WARP interface: ON or OFF. It’s simple on purpose. It’s designed for my mom and dad who ask me every holiday dinner what they can do to be a bit safer online. I’m excited this year to have something easy for them to do: install the 1.1.1.1 App, enable WARP, and rest a bit easier.
How Fast Is It?
Once we got WARP to a stable place, this was my first question. My initial inclination was to go to one of the many Speed Test sites and see the results. And the results were… weird. Sometimes much faster, sometimes much slower. Overall, they didn’t make a lot of sense. The reason why is that these sites are designed to measure the speed of your ISP. WARP is different, so these test sites don’t give particularly accurate readings.
The better test is to visit common sites around the Internet and see how they load, in real conditions, on WARP versus off. We’ve built a tool that does this. Generally, in our tests, WARP is around the same speed as non-WARP connections when you’re on a high performance network. As network conditions get worse, WARP will often improve performance more. But your experience will depend on the particular conditions of your network.
We plan, in the next few weeks, to expose the test tool within the 1.1.1.1 App so you can see how your device loads a set of popular sites without WARP, with WARP, and with WARP Plus. And, again, if you’re seeing particularly poor performance, please report it to us. Our goal is to provide security without slowing you down or burning excess battery. We can already do that for many networks and devices and we won’t rest until we can do it for everyone.
Here’s to a More Secure, Fast Internet
Cloudflare’s mission is to help build a better Internet. We’ve done that by securing millions of Internet properties and making them more performant since we launched almost exactly 9 years ago. WARP furthers Cloudflare’s mission by extending our network to help make every consumer’s mobile device a bit more secure. Our team is proud of what we’ve built with WARP — albeit a bit embarrassed it took us so long to get it into your hands. We hope you’ll forgive us for the delay, give WARP a try, and let us know what you think.
The good news is that modern web browsers expose lots of performance data that can help you understand how your web page performs. With the launch of Browser Insights today, we can analyze the performance from the perspective of the web browser and what the end user actually experiences. In this post, we’ll dive into how we think about performance and utilize the timing APIs in the web browser.
How web browsers measure performance
In the old days, the only way for a developer to profile performance was to intercept requests and measure the time from the beginning of the page load until the end of the load event.
Today, we can use Web APIs that are supported by modern browsers. This is part of the web standard called the Performance API. The Performance API consists of a collection of individual APIs that include:
Navigation Timing (for timing information related to the page and navigation)
Resource Timing (for timing data regarding the loading of an application’s resources)
Paint Timing (for timing information about paint operations during page construction)
In this blog post, we will primarily focus on the Navigation Timing API.
Inside the Performance API
To see what’s collected with the Performance API, you can open the Developer Tools in the Chrome browser and type ‘performance’ in the console tab (or type performance.timing to get direct access to the PerformanceTiming attribute).
Try expanding the Performance object by clicking on the arrow before the label, and then expand ‘timing’. This is called PerformanceTiming, and it includes all the timings that relate to the current page load as UNIX epoch timestamps (in milliseconds). The timing attributes are not shown in load order, so for a better understanding, let’s look at the illustration provided by the W3C.
As we can see from the diagram, each element (represented as a box), in order from left to right, represents a stage in the navigation flow of the page load. Each element has attributes marking its start and end (and some have multiple attributes!), so we can measure the elapsed time for each stage. For example, to get the Request time, you could run a command like the one shown below in the console; in this example it comes out to about 60 milliseconds.
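// Request time: the elapsed milliseconds between sending the request
// and receiving the first byte of the response.
performance.timing.responseStart - performance.timing.requestStart;
// -> 60 (in this example)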
How Cloudflare uses performance data
The metrics we surface are:
DNS (domainLookupEnd – domainLookupStart): How long the DNS query takes. This could appear as zero if the connection is reused or the content was stored in the local cache (memory or disk).
TCP (connectEnd – connectStart): How long it takes to establish a TCP connection to the server. For HTTPS, this process includes TLS negotiation time.
Request (responseStart – requestStart): The time elapsed between making an HTTP request and receiving the first byte of the response.
Response (responseEnd – responseStart): The time elapsed between the first byte and the last byte of the response received. You can think of this as a resource download time.
Processing (domComplete – domLoading): How long it took to render the page. If this number is big, you can optimize your document architecture and resource sizes, or configure settings on the Speed page, such as Auto Minify of the source code. This processing phase can be broken down further with domInteractive, domContentLoadedEventStart, domContentLoadedEventEnd, and domComplete. We plan to provide more detailed analytics on this later on.
Load Event (loadEventEnd – loadEventStart): When the browser finishes loading its document and resources, it triggers a `load` event. This duration may be helpful to you if you have additional functions or any logic for the load event.
Total Time: The sum of the timing metrics shown on the graph.
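As a quick sketch, you can recompute these durations yourself from the same PerformanceTiming attributes; run this in the console after the load event has fired (otherwise loadEventEnd will still be zero):

// Recompute the surfaced metrics from PerformanceTiming.
const t = performance.timing;
console.table({
  dns:        t.domainLookupEnd - t.domainLookupStart,
  tcp:        t.connectEnd - t.connectStart,        // includes TLS for HTTPS
  request:    t.responseStart - t.requestStart,
  response:   t.responseEnd - t.responseStart,
  processing: t.domComplete - t.domLoading,
  loadEvent:  t.loadEventEnd - t.loadEventStart,
});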
If you see any spikes or an unusual shape in the stacked line chart, you can start investigating each metric to see what is causing the problem.
Speed matters. We know that when your website or app gets faster, users have a better experience and you get more conversions and more revenue. At Cloudflare, we spend our days obsessing about speed and building new features to squeeze out as much performance as possible.
But to improve speed, you first need to measure it. That’s why we’re launching Browser Insights: a new tool that measures the performance of your website from the perspective of your users. Browser Insights lets you dive in to understand where, when, and why web pages are slow. And you can enable it today, for free, with one click.
Why did we build Browser Insights?
Let’s say you run an e-commerce site, and you want to make your conversion rates better. You’ve noticed that there’s a lot of traffic from visitors in Peru, but they have worse conversion than users in North America. Maybe you theorize that it takes a long time to load your checkout page, which causes customers to drop off before checking out. How would you verify that this is happening?
There are a few ways you could do this: you could check your server logs to look at timing information, or you could load the page a few times in your browser to see what’s slow.
These approaches have a few downsides though:
If you only look at server-side data, you miss factors that impact the end-user experience — how long did it take for the web browser to load all the necessary scripts, execute them, and paint the page?
If you only measure from one computer (or a small number of them), you miss the diversity of the computing population — for example, “how does this work on a phone on a 3G connection?”
To solve these problems, we use Real User Monitoring. This gives us the best of both worlds: we can run a timer inside real web browsers. This timer captures how long it takes web pages to load, from your actual users.
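In miniature, that timer is just a small script running in the page. The sketch below shows the general shape of the idea (the collection endpoint is hypothetical, not the actual Browser Insights beacon):

// Hypothetical RUM sketch: after the page loads, read the browser's
// timing data and beacon a summary back to a collection endpoint.
window.addEventListener('load', () => {
  setTimeout(() => { // defer one tick so loadEventEnd is populated
    const t = performance.timing;
    navigator.sendBeacon('https://collector.example.com/rum', JSON.stringify({
      page:  location.pathname,
      total: t.loadEventEnd - t.navigationStart, // full page load time
    }));
  }, 0);
});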
How does it work?
Browser Insights can be enabled with the flip of a switch in the “Speed” section of the dashboard:
There’s a lot of info in this graph! At a high level, there are two main types of metrics:
Request-level metrics, like TCP connection time or Request time. These metrics are counted on every page load and are impacted by Internet infrastructure, like the mobile network of your end users or the speed of your servers.
Page-level metrics, like Processing time, which measure how long the browser took to parse, render, and paint the page once the response arrived.
In addition to seeing several metrics about your web page performance, it’s helpful to drill into the dimensions that impact performance like URL and Country. This means you can filter down to the performance of a specific page (like your home page or checkout page), and you can see the locations where your site loads the fastest and slowest.
Going back to our example above, we want to see how performance in Peru compares to North America:
Sure enough, we can confirm that there’s significant traffic from Peru, but web pages take about 13 seconds to load on average — compared with just 4.2 seconds for users in the US. Theory confirmed! Now we can filter all of our metrics to just Peru to better understand what’s happening:
Note that “Processing” has increased the most, all the way to 12 seconds. Request times are higher as well, likely because users in Peru are connecting to an origin server in the US. Web pages are made of many individual requests, so it makes sense that, when combined, they lead to slower load times. In this example, serving more content from cache would probably lead to significantly faster page loads.
What’s coming next?
Our launch today is just the tip of the iceberg for Browser Insights. In the near future we want to add much more information that will help you understand exactly what’s slowing down your website, and what you can do to make it faster. We plan to add:
More metrics and dimensions, including page-level metrics like Time to First Paint and more dimensions like browser and network type
Subresource analytics. The average web page loads over 100 subresources, and we can provide a waterfall chart to show exactly which one is slow.
A/B testing, to show you how potential configuration changes will impact the performance of your own traffic
Alerting so that you know when performance falls below a pre-defined threshold
Insights powered by Cloudflare that tell you why something might be slow – for example, how your cache hit ratio impacts page load time
Protecting user privacy
Cloudflare’s mission to help build a better Internet is based on the importance we place on establishing trust with our customers, our customers’ end users, and the Internet community globally. We have a transparent business model that aligns with the interests of our customers — we make money from protecting and speeding up our customers’ Internet properties. We do not sell our customers’ (or their end users’) data.
Browser Insights requires that end users’ browsers report timing information back to Cloudflare. We designed Browser Insights so that it reports only the bare-minimum information needed to show our customers how their websites are performing. The only metrics Browser Insights collects are about timing. We do not track individual end users across our customers’ Internet properties. We encourage you to open up the Inspector in your favorite web browser to see what we’re sending back!
Try Browser Insights today
Last May we announced the all-new Speed Page. Our mission with the Speed Page is to show you how fast your website is, and what you can do to make it faster. Today, we’re excited to announce that the new Speed Page is available for everyone!
Browser Insights will be available on the Speed page in early access and we’ll be working hard to bring it to everyone as soon as possible in the coming weeks. Watch this space for updates!
Today we’re excited to announce Cloudflare Magic Transit. Magic Transit provides secure, performant, and reliable IP connectivity to the Internet. Out-of-the-box, Magic Transit deployed in front of your on-premise network protects it from DDoS attack and enables provisioning of a full suite of virtual network functions, including advanced packet filtering, load balancing, and traffic management tools.
Magic Transit is built on the standards and networking primitives you are familiar with, but delivered from Cloudflare’s global edge network as a service. Traffic is ingested by the Cloudflare Network with anycast and BGP, announcing your company’s IP address space and extending your network presence globally. Today, our anycast edge network spans 193 cities in more than 90 countries around the world.
Once packets hit our network, traffic is inspected for attacks, filtered, steered, accelerated, and sent onward to the origin. Magic Transit will connect back to your origin infrastructure over Generic Routing Encapsulation (GRE) tunnels, private network interconnects (PNI), or other forms of peering.
Enterprises are often forced to pick between performance and security when deploying IP network services. Magic Transit is designed from the ground up to minimize these trade-offs: performance and security are better together. Magic Transit deploys IP security services across our entire global network. This means no more diverting traffic to small numbers of distant “scrubbing centers” or relying on on-premise hardware to mitigate attacks on your infrastructure.
We’ve been laying the groundwork for Magic Transit for as long as Cloudflare has been in existence, since 2010. Scaling and securing the IP network Cloudflare is built on has required tooling that would have been impossible or exorbitantly expensive to buy. So we built the tools ourselves! We grew up in the age of software-defined networking and network function virtualization, and the principles behind these modern concepts run through everything we do.
When we talk to our customers managing on-premise networks, we consistently hear a few things: building and managing their networks is expensive and painful, and those on-premise networks aren’t going away anytime soon.
Traditionally, CIOs trying to connect their IP networks to the Internet do this in two steps:
Source connectivity to the Internet from transit providers (ISPs).
Purchase, operate, and maintain network-function-specific hardware appliances. Think hardware load balancers, firewalls, DDoS mitigation equipment, WAN optimization, and more.
Each of these boxes costs time and money to maintain, not to mention the skilled, expensive people required to properly run them. Each additional link in the chain makes a network harder to manage.
This all sounded familiar to us. We had an aha! moment: we had the same issues managing our datacenter networks that power all of our products, and we had spent significant time and effort building solutions to those problems. Now, nine years later, we had a robust set of tools we could turn into products for our own customers.
Magic Transit aims to bring the traditional datacenter hardware model into the cloud, packaging transit with all the network “hardware” you might need to keep your network fast, reliable, and secure. Once deployed, Magic Transit allows seamless provisioning of virtualized network functions, including routing, DDoS mitigation, firewalling, load balancing, and traffic acceleration services.
Magic Transit is your network’s on-ramp to the Internet
Magic Transit delivers its connectivity, security, and performance benefits by serving as the “front door” to your IP network. This means it accepts IP packets destined for your network, processes them, and then outputs them to your origin infrastructure.
Connecting to the Internet via Cloudflare offers numerous benefits. Starting with the most basic, Cloudflare is one of the most extensively connected networks on the Internet. We work with carriers, Internet exchanges, and peering partners around the world to ensure that a bit placed on our network will reach its destination quickly and reliably, no matter the destination.
An example deployment: Acme Corp
Let’s walk through how a customer might deploy Magic Transit. Customer Acme Corp. owns the IP prefix 203.0.113.0/24, which they use to address a rack of hardware they run in their own physical datacenter. Acme currently announces routes to the Internet from their customer-premise equipment (CPE, aka a router at the perimeter of their datacenter), telling the world 203.0.113.0/24 is reachable from their autonomous system number, AS64512. Acme has DDoS mitigation and firewall hardware appliances on-premise.
Acme wants to connect to the Cloudflare Network to improve the security and performance of their own network. Specifically, they’ve been the target of distributed denial of service attacks, and want to sleep soundly at night without relying on on-premise hardware. This is where Cloudflare comes in.
Deploying Magic Transit in front of their network is simple:
Cloudflare uses Border Gateway Protocol (BGP) to announce Acme’s 203.0.113.0/24 prefix from Cloudflare’s edge, with Acme’s permission.
Cloudflare begins ingesting packets destined for the Acme IP prefix.
Magic Transit applies DDoS mitigation and firewall rules to the network traffic. After it is ingested by the Cloudflare network, traffic that would benefit from HTTPS caching and WAF inspection can be “upgraded” to our Layer 7 HTTPS pipeline without incurring additional network hops.
Acme would like Cloudflare to use Generic Routing Encapsulation (GRE) to tunnel traffic from the Cloudflare Network back to Acme’s datacenter. GRE tunnels are initiated from anycast endpoints back to Acme’s premise. Through the magic of anycast, the tunnels are constantly and simultaneously connected to hundreds of network locations, ensuring the tunnels are highly available and resilient to network failures that would bring down traditionally formed GRE tunnels. (A sketch of a customer-side tunnel configuration follows these steps.)
Cloudflare egresses packets bound for Acme over these GRE tunnels.
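On the customer side, terminating such a tunnel can be as simple as configuring a standard Linux GRE interface. The commands below are an illustrative sketch only, with documentation addresses standing in for the real Cloudflare and Acme endpoints:

# Illustrative customer-side GRE tunnel (addresses are examples).
ip tunnel add cf-gre0 mode gre remote 192.0.2.1 local 203.0.113.1 ttl 255
ip addr add 10.10.10.2/31 dev cf-gre0
ip link set cf-gre0 up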
Let’s dive deeper on how the DDoS mitigation included in Magic Transit works.
Magic Transit protects networks from DDoS attack
Customers deploying Cloudflare Magic Transit instantly get access to the same IP-layer DDoS protection system that has protected the Cloudflare Network for the past 9 years. This is the same mitigation system that stopped a 942Gbps attack dead in its tracks, in seconds. This is the same mitigation system that knew how to stop memcached amplification attacks days before a 1.3Tbps attack took down Github, which did not have Cloudflare watching its back. This is the same mitigation we trust every day to protect Cloudflare, and now it protects your network.
Cloudflare has historically protected Layer 7 HTTP and HTTPS applications from attacks at all layers of the OSI model. The DDoS protection our customers have come to know and love relies on a blend of techniques, but can be broken into a few complementary defenses:
Anycast and a network presence in 193 cities around the world allows our network to get close to users and attackers, allowing us to soak up traffic close to the source without introducing significant latency.
30+Tbps of network capacity allows us to soak up a lot of traffic close to the source. Cloudflare’s network has more capacity to stop DDoS attacks than that of Akamai Prolexic, Imperva, Neustar, and Radware — combined.
Our HTTPS reverse proxy absorbs L3 (IP layer) and L4 (TCP layer) attacks by terminating connections and re-establishing them to the origin. This stops most spurious packet transmissions from ever getting close to a customer origin server.
Layer 7 mitigations and rate limiting stop floods at the HTTPS application layer.
Looking at the above description carefully, you might notice something: our reverse proxy servers protect our customers by terminating connections, but our network and servers still get slammed by the L3 and L4 attacks we stop on behalf of our customers. How do we protect our own infrastructure from these attacks?
Gatebot is a suite of software running on every one of our servers inside each of our datacenters in the 193 cities we operate, constantly analyzing and blocking attack traffic. Part of Gatebot’s beauty is its simple architecture; it sits silently, in wait, sampling packets as they pass from the network card into the kernel and onward into userspace. Gatebot does not have a learning or warm-up period. As soon as it detects an attack, it instructs the kernel of the machine it is running on to drop the packet, log its decision, and move on.
Historically, if you wanted to protect your network from a DDoS attack, you might have purchased a specialized piece of hardware to sit at the perimeter of your network. This hardware box (let’s call it “The DDoS Protection Box”) would have been fantastically expensive, pretty to look at (as pretty as a 2U hardware box could get), and required a ton of recurring effort and money to stay on its feet, keep its license up to date, and keep its attack detection system accurate and trained.
For one thing, it would have to be carefully monitored to make sure it was stopping attacks but not stopping legitimate traffic. For another, if an attacker managed to generate enough traffic to saturate your datacenter’s transit links to the Internet, you were out of luck; no box sitting inside your datacenter can protect you from an attack generating enough traffic to congest the links running from the outside world to the datacenter itself.
Early on, Cloudflare considered buying The DDoS Protection Box(es) to protect our various network locations, but ruled them out quickly. Buying hardware would have incurred substantial cost and complexity. In addition, buying, racking, and managing specialized pieces of hardware makes a network hard to scale. There had to be a better way. We set out to solve this problem ourselves, starting from first principles and modern technology.
To make our modern approach to DDoS mitigation work, we had to invent a suite of tools and techniques to allow us to do ultra-high performance networking on a generic x86 server running Linux.
At the core of our network data plane are the eXpress Data Path (XDP) and the extended Berkeley Packet Filter (eBPF), a set of APIs that allow us to build ultra-high performance networking applications in the Linux kernel. My colleagues have written extensively about how we use XDP and eBPF to stop DDoS attacks.
At the end of the day, we ended up with a DDoS mitigation system that:
Is delivered by our entire network, spread across 193 cities around the world. To put this another way, our network doesn’t have the concept of “scrubbing centers” — every single one of our network locations is always mitigating attacks, all the time. This means faster attack mitigation and minimal latency impact for your users.
Has exceptionally fast times to mitigate, with most attacks mitigated in 10 seconds or less.
Was built in-house, giving us deep visibility into its behavior and the ability to rapidly develop new mitigations as we see new attack types.
Is deployed as a service, and is horizontally scalable. Adding x86 hardware running our DDoS mitigation software stack to a datacenter (or adding another network location) instantly brings more DDoS mitigation capacity online.
Gatebot is designed to protect Cloudflare infrastructure from attack. And today, as part of Magic Transit, customers operating their own IP networks and infrastructure can rely on Gatebot to protect their own network.
Magic Transit puts your network hardware in the cloud
We’ve covered how Cloudflare Magic Transit connects your network to the Internet, and how it protects you from DDoS attack. If you were running your network the old-fashioned way, this is where you’d stop to buy firewall hardware, and maybe another box to do load balancing.
With Magic Transit, you don’t need those boxes. We have a long track record of delivering common network functions (firewalls, load balancers, etc.) as services. Up until this point, customers deploying our services have relied on DNS to bring traffic to our edge, after which our Layer 3 (IP), Layer 4 (TCP & UDP), and Layer 7 (HTTP, HTTPS, and DNS) stacks take over and deliver performance and security to our customers.
Magic Transit is designed to handle your entire network, but does not enforce a one-size-fits-all approach to which services get applied to which portion of your traffic. To revisit Acme, our example customer from above, they have brought 203.0.113.0/24 to the Cloudflare Network. This represents 256 IPv4 addresses, some of which (e.g. 203.0.113.8/30) might front load balancers and HTTP servers, others mail servers, and others still custom UDP-based applications.
Each of these sub-ranges may have different security and traffic management requirements. Magic Transit allows you to configure specific IP addresses with their own suite of services, or apply the same configuration to large portions (or all) of your block.
Taking the above example, Acme may want the 203.0.113.8/30 block, which contains HTTP services currently fronted by a traditional hardware load balancer, to instead use the Cloudflare Load Balancer, with HTTP traffic analyzed by Cloudflare’s WAF and content cached by our CDN. With Magic Transit, deploying these network functions is straightforward — a few clicks in our dashboard or API calls will have your traffic handled at a higher layer of network abstraction, with all the attendant goodies that application-level load balancing, firewall, and caching logic bring.
This is just one example of a deployment customers might pursue. We’ve worked with several who just want pure IP passthrough, with DDoS mitigation applied to specific IP addresses. Want that? We got you!
Magic Transit runs on the entire Cloudflare Global Network. Or, no more scrubs!
When you connect your network to Cloudflare Magic Transit, you get access to the entire Cloudflare network. This means all of our network locations become your network locations. Our network capacity becomes your network capacity, at your disposal to power your experiences, deliver your content, and mitigate attacks on your infrastructure.
How expansive is the Cloudflare Network? We’re in 193 cities worldwide, with more than 30Tbps of network capacity spread across them. Cloudflare operates within 100 milliseconds of 98% of the Internet-connected population in the developed world, and 93% of the Internet-connected population globally (for context, the blink of an eye is 300-400 milliseconds).
Just as we built our own products in house, we also built our network in house. Every product runs in every datacenter, meaning our entire network delivers all of our services. This might not have been the case if we had assembled our product portfolio piecemeal through acquisition, or not had completeness of vision when we set out to build our current suite of services.
The end result for customers of Magic Transit: a network presence around the globe as soon as you come on board. Full access to a diverse set of services worldwide. All delivered with latency and performance in mind.
We’ll be sharing a lot more technical detail on how we deliver Magic Transit in the coming weeks and months.
Magic Transit lowers total cost of ownership
Traditional network services don’t come cheap; they require high capital outlays up front, investment in staff to operate, and ongoing maintenance contracts to stay functional. Just as our product aims to be disruptive technically, we want to disrupt traditional network cost-structures as well.
Magic Transit is delivered and billed as a service. You pay for what you use, and can add services at any time. Your team will thank you for its ease of management; your management will thank you for its ease of accounting. That sounds pretty good to us!
Magic Transit is available today
We’ve worked hard over the past nine years to get our network, management tools, and network functions as a service into the state they’re in today. We’re excited to get the tools we use every day in customers’ hands.
So that brings us to naming. When we showed this to customers the most common word they used was ‘whoa.’ When we pressed what they meant by that they almost all said: ‘It’s so much better than any solution we’ve seen before. It’s, like, magic!’ So it seems only natural, if a bit cheesy, that we call this product what it is: Magic Transit.
Today we announced Cloudflare Magic Transit, which makes Cloudflare’s network available to any IP traffic on the Internet. Up until now, Cloudflare has primarily operated proxy services: our servers terminate HTTP, TCP, and UDP sessions with Internet users and pass that data through new sessions they create with origin servers. With Magic Transit, we are now also operating at the IP layer: in addition to terminating sessions, our servers are applying a suite of network functions (DoS mitigation, firewalling, routing, and so on) on a packet-by-packet basis.
Over the past nine years, we’ve built a robust, scalable global network that currently spans 193 cities in over 90 countries and is ever growing. All Cloudflare customers benefit from this scale thanks to two important techniques. The first is anycast networking. Cloudflare was an early adopter of anycast, using this routing technique to distribute Internet traffic across our data centers. It means that any data center can handle any customer’s traffic, and we can spin up new data centers without needing to acquire and provision new IP addresses. The second technique is homogeneous server architecture. Every server in each of our edge data centers is capable of running every task. We build our servers on commodity hardware, making it easy to quickly increase our processing capacity by adding new servers to existing data centers. Having no specialty hardware to depend on has also led us to develop an expertise in pushing the limits of what’s possible in networking using modern Linux kernel techniques.
Magic Transit is built on the same network using the same techniques, meaning our customers can now run their network functions at Cloudflare scale. Our fast, secure, reliable global edge becomes our customers’ edge. To explore how this works, let’s follow the journey of a packet from a user on the Internet to a Magic Transit customer’s network.
Putting our DoS mitigation to work… for you!
In the announcement blog post we describe an example deployment for Acme Corp. Let’s continue with this example here. When Acme brings their IP prefix 203.0.113.0/24 to Cloudflare, we start announcing that prefix to our transit providers, peers, and to Internet exchanges in each of our data centers around the globe. Additionally, Acme stops announcing the prefix to their own ISPs. This means that any IP packet on the Internet with a destination address within Acme’s prefix is delivered to a nearby Cloudflare data center, not to Acme’s router.
Let’s say I want to access Acme’s FTP server on 203.0.113.100 from my computer in Cloudflare’s office in Champaign, IL. My computer generates a TCP SYN packet with destination address 203.0.113.100 and sends it out to the Internet. Thanks to anycast, that packet ends up at Cloudflare’s data center in Chicago, the closest data center (in terms of Internet routing distance) to Champaign. The packet arrives at the data center’s router, which uses ECMP (Equal Cost Multi-Path) routing to select a server to handle the packet and dispatches it to that server.
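As a rough illustration of what ECMP means in Linux terms (this is not our routers’ actual configuration, and the next-hop addresses are made up), a multipath route spreading one prefix across several equal-cost paths looks like this:

    # Spread traffic for 203.0.113.0/24 across three equal-cost next hops.
    # The kernel hashes each flow to pick a path, so packets belonging to
    # the same connection consistently land on the same next hop.
    ip route add 203.0.113.0/24 \
        nexthop via 10.0.0.10 weight 1 \
        nexthop via 10.0.0.11 weight 1 \
        nexthop via 10.0.0.12 weight 1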
Once at the server, the packet flows through our XDP- and iptables-based DoS detection and mitigation functions. If this TCP SYN packet were determined to be part of an attack, it would be dropped and that would be the end of it. Fortunately for me, the packet is permitted to pass.
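Our real mitigations are dynamically generated and live largely in XDP for speed, but to give a flavor of the iptables layer, a crude SYN-flood mitigation might look something like this (a sketch only; the threshold and rule are illustrative, not Cloudflare’s actual configuration):

    # Drop SYNs from any source IP sending more than 10,000 SYNs per
    # second, tracked per source address by the hashlimit module.
    iptables -A INPUT -p tcp --syn \
        -m hashlimit --hashlimit-mode srcip \
        --hashlimit-above 10000/second --hashlimit-name syn-flood \
        -j DROP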
So far, this looks exactly like any other traffic on Cloudflare’s network. Because of our expertise in running a global anycast network we’re able to attract Magic Transit customer traffic to every data center and apply the same DoS mitigation solution that has been protecting Cloudflare for years. Our DoS solution has handled some of the largest attacks ever recorded, including a 942 Gbps SYN flood in 2018, and recently mitigated a SYN flood of 300M packets per second. Our architecture lets us scale to stop the largest attacks.
Network namespaces for isolation and control
The above looked identical to how all other Cloudflare traffic is processed, but this is where the similarities end. For our other services, the TCP SYN packet would now be dispatched to a local proxy process (e.g. our nginx-based HTTP/S stack). For Magic Transit, we instead want to dynamically provision and apply customer-defined network functions like firewalls and routing. We needed a way to quickly spin up and configure these network functions while also providing inter-network isolation. For that, we turned to network namespaces.
Namespaces are a collection of Linux kernel features for creating lightweight virtual instances of system resources that can be shared among a group of processes. Namespaces are a fundamental building block for containerization in Linux. Notably, Docker is built on Linux namespaces. A network namespace is an isolated instance of the Linux network stack, including its own network interfaces (with their own eBPF hooks), routing tables, netfilter configuration, and so on. Network namespaces give us a low-cost mechanism to rapidly apply customer-defined network configurations in isolation, all with built-in Linux kernel features so there’s no performance hit from userspace packet forwarding or proxying.
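You can poke at this isolation on any modern Linux machine (the namespace name here is just for illustration):

    # Create a namespace and look inside: it has its own network stack,
    # containing nothing but a loopback interface and an empty routing table.
    ip netns add acme
    ip netns exec acme ip link show
    ip netns exec acme ip route show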
When a new customer starts using Magic Transit, we create a brand new network namespace for that customer on every server across our edge network (did I mention that every server can run every task?). We built a daemon that runs on our servers and is responsible for managing these network namespaces and their configurations. This daemon is constantly reading configuration updates from Quicksilver, our globally distributed key-value store, and applying customer-defined configurations for firewalls, routing, etc., inside the customer’s namespace. For example, if Acme wants to provision a firewall rule to allow FTP traffic (TCP ports 20 and 21) to 203.0.113.100, that configuration is propagated globally through Quicksilver and the Magic Transit daemon applies the firewall rule by adding an nftables rule to the Acme customer namespace. A sketch of what that rule could look like (the table, chain, and namespace names are illustrative):
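    # Assumes an inet-family table "filter" with an "input" chain already
    # exists inside the namespace. Allow FTP (TCP ports 20 and 21) to the
    # server at 203.0.113.100.
    ip netns exec acme nft add rule inet filter input \
        ip daddr 203.0.113.100 tcp dport { 20, 21 } accept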
Getting the customer’s traffic to their network namespace requires a little routing configuration in the default network namespace. When a network namespace is created, a pair of virtual ethernet (veth) interfaces is also created: one in the default namespace and one in the newly created namespace. This interface pair creates a “virtual wire” for delivering network traffic into and out of the new network namespace. In the default network namespace, we maintain a routing table that forwards Magic Transit customer IP prefixes to the veths corresponding to those customers’ namespaces. We use iptables to mark the packets that are destined for Magic Transit customer prefixes, and we have a routing rule that specifies that these specially marked packets should use the Magic Transit routing table.
(Why go to the trouble of marking packets in iptables and maintaining a separate routing table? Isolation. By keeping Magic Transit routing configurations separate we reduce the risk of accidentally modifying the default routing table in a way that affects how non-Magic Transit traffic flows through our edge.)
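Pulling those pieces together, the plumbing might look roughly like this (interface names, the mark value, and the table number are all illustrative):

    # A veth pair: veth-acme stays in the default namespace, veth0 moves
    # into Acme's namespace, forming the "virtual wire" between the two.
    ip link add veth-acme type veth peer name veth0
    ip link set veth0 netns acme
    ip link set veth-acme up

    # Mark packets destined for Acme's prefix...
    iptables -t mangle -A PREROUTING -d 203.0.113.0/24 -j MARK --set-mark 100

    # ...and route marked packets via a dedicated Magic Transit table that
    # points at the customer's side of the virtual wire.
    ip rule add fwmark 100 lookup 100
    ip route add 203.0.113.0/24 dev veth-acme table 100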
Network namespaces provide a lightweight environment where a Magic Transit customer can run and manage network functions in isolation, letting us put full control in the customer’s hands.
GRE + anycast = magic
After passing through the edge network functions, the TCP SYN packet is finally ready to be delivered back to the customer’s network infrastructure. Because Acme Corp. does not have a network footprint in a colocation facility with Cloudflare, we need to deliver their network traffic over the public Internet.
This poses a problem. The destination address of the TCP SYN packet is 203.0.113.100, but the only network announcing the IP prefix 203.0.113.0/24 on the Internet is Cloudflare. This means that we can’t simply forward this packet out to the Internet—it will boomerang right back to us! In order to deliver this packet to Acme we need to use a technique called tunneling.
Tunneling is a method of carrying traffic from one network over another network. In our case, it involves encapsulating Acme’s IP packets inside of IP packets that can be delivered to Acme’s router over the Internet. There are a number of common tunneling protocols, but Generic Routing Encapsulation (GRE) is often used for its simplicity and widespread vendor support.
GRE tunnel endpoints are configured both on Cloudflare’s servers (inside of Acme’s network namespace) and on Acme’s router. Cloudflare servers then encapsulate IP packets destined for 203.0.113.0/24 inside of IP packets destined for a publicly-routable IP address for Acme’s router, which decapsulates the packets and emits them into Acme’s internal network.
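If Acme’s router happened to be a Linux box, its side of the tunnel could be brought up along these lines (every address here is a documentation example, not a real endpoint):

    # 192.0.2.1 stands in for Cloudflare's anycast GRE endpoint and
    # 198.51.100.10 for the public address of Acme's router.
    ip tunnel add cf-gre mode gre local 198.51.100.10 remote 192.0.2.1 ttl 255
    ip addr add 10.10.10.2/31 dev cf-gre   # inner, tunnel-internal addressing
    ip link set cf-gre up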
Now, I’ve omitted an important detail so far: the IP address of Cloudflare’s side of the GRE tunnel. Configuring a GRE tunnel requires specifying an IP address for each side, and the outer IP header for packets sent over the tunnel must use these specific addresses. But Cloudflare has thousands of servers, each of which may need to deliver packets to the customer through a tunnel. So how many Cloudflare IP addresses (and GRE tunnels) does the customer need to talk to? The answer: just one, thanks to the magic of anycast.
Cloudflare uses anycast IP addresses for our GRE tunnel endpoints, meaning that any server in any data center is capable of encapsulating and decapsulating packets for the same GRE tunnel. How is this possible? Isn’t a tunnel a point-to-point link? The GRE protocol itself is stateless: each packet is processed independently, without any negotiation or coordination between tunnel endpoints. While the tunnel is technically bound to an IP address, it need not be bound to a specific device. Any device that can strip off the outer headers and then route the inner packet can handle any GRE packet sent over the tunnel. In fact, in the context of anycast the term “tunnel” is misleading, since it implies a link between two fixed points. With Cloudflare’s Anycast GRE, a single “tunnel” gives you a conduit to every server in every data center on Cloudflare’s global edge.
One very powerful consequence of Anycast GRE is that it eliminates single points of failure. Traditionally, GRE-over-Internet can be problematic because an Internet outage between the two GRE endpoints fully breaks the “tunnel”. This means reliable data delivery requires going through the headache of setting up and maintaining redundant GRE tunnels terminating at different physical sites and rerouting traffic when one of the tunnels breaks. But because Cloudflare is encapsulating and delivering customer traffic from every server in every data center, there is no single “tunnel” to break. This means Magic Transit customers can enjoy the redundancy and reliability of terminating tunnels at multiple physical sites while only setting up and maintaining a single GRE endpoint, making their jobs simpler.
Our scale is now your scale
Magic Transit is a powerful new way to deploy network functions at scale. We’re not just giving you a virtual instance, we’re giving you a global virtual edge. Magic Transit takes the hardware appliances you would typically rack in your on-prem network and distributes them across every server in every data center in Cloudflare’s network. This gives you access to our global anycast network, our fleet of servers capable of running your tasks, and our engineering expertise building fast, reliable, secure networks. Our scale is now your scale.