All posts by Sam Marsh

Recapping Speed Week 2023

Post Syndicated from Sam Marsh original http://blog.cloudflare.com/recapping-speed-week-2023/

Speed Week 2023 is officially a wrap.

In our Welcome to Speed Week 2023 blog post, we set a clear goal:

“This week we will help you measure what matters. We’ll help you gain insight into your performance, from Zero Trust and APIs to websites and applications. And finally we’ll help you get faster. Quickly.”

This week we published five posts on how to measure performance, explaining which metrics and approaches make sense and why. We had a deep dive on the latest Core Web Vital, “Interaction to Next Paint”, what it means and how we can help. There was a post on Time To First Byte (TTFB) and why it isn't a good way to measure good web performance. We also wrote about how to measure Zero Trust performance, and announced the Internet Quality page of Cloudflare Radar – giving everyone the ability to compare Internet connection quality across Internet Service Providers, countries, and more.

We launched new products such as Observatory, Digital Experience Monitoring and Timing Insights. These products give an incredible window into how your applications and websites are performing through the eyes of website visitors and your employees.

Next, we showed how we continue to be the fastest, with fresh posts on how we have the fastest network, fastest Secure Web Gateway, fastest Zero Trust Network Access and fastest Remote Browser Isolation solutions. There was even an update on how our global network grew to 300 cities. The Cloudflare network is at the center of everything we do, and every product we build benefits from the speed and scale it provides and the proximity to the user.

There were also a number of great product announcements which make speed simple: single-button performance boosts to accelerate your traffic. Smart Hints, HTTP/3 Prioritization, Argo for UDP, Brotli end-to-end, LL-HLS for Stream and Ricochet for API Gateway all make speed simple, giving you an immediate speed boost on your traffic for minimal, if any, configuration.

We also showed how AI / ML continue to play a big part at Cloudflare, with posts discussing why running AI inference on Cloudflare's network makes performance sense, and how we both scale and run machine learning at the microsecond level.

Finally, we wrote about how we are making it easier than ever for customers to migrate to Cloudflare from legacy vendors via our Turpentine and Descaler programs.

We’re on a mission to be the fastest at everything we do, and to make it simple for our customers to get the best performance.

In case you missed any of the announcements, take a look at the summary and navigation guide below.

AI / Machine Learning

Globally distributed AI and a Constellation update – Announcing new Constellation features, explaining why it’s the first globally distributed AI platform and why deploying your machine learning tasks in our global network is advantageous.
Every request, every second: scalable machine learning at Cloudflare – Describing the technical strategies that have enabled us to expand the number of machine learning features and models, all while substantially reducing the processing time for each HTTP request on our network.
How Orpheus automatically routes around bad Internet weather – A little less than two years ago, Cloudflare made Orpheus automatically available to all customers for free. Since then, Orpheus has saved 132 billion Internet requests from failing by intelligently routing them around connectivity outages, prevented 50+ Internet incidents from impacting our customers, and made our customers’ origins more reachable to everyone on the Internet. Let’s dive into how Orpheus accomplished these feats over the last year.
How Cloudflare runs machine learning inference in microseconds – How we optimized bot management’s machine learning model execution. To reduce processing latency, we've undertaken a project to rewrite our bot management technology, porting it from Lua to Rust, and applying a number of performance optimizations. This post focuses on optimizations applied to the machine-learning detections within the bot management module, which account for approximately 15% of the latency added by bot detection. By switching away from a garbage collected language, removing memory allocations, and optimizing our parsers, we reduced the P50 latency of the bot management module by 79μs – a 20% reduction.

Zero Trust

Spotlight on Zero Trust: we're fastest and here's the proof – Cloudflare is the fastest Secure Web Gateway in 42% of testing scenarios, the most of any provider. Cloudflare is 46% faster than Zscaler, 56% faster than Netskope, and 10% faster than Palo Alto for ZTNA, and 64% faster than Zscaler for RBI scenarios.
Understanding end user-connectivity and performance with Digital Experience Monitoring, now available in beta – DEX allows administrators to monitor their WARP deployment and create predefined application tests. Features include live team & device analytics, server and traceroute tests, Synthetic Application Monitoring, and Fleet Status for real-time insights on WARP deployment.
Descale your network with Cloudflare’s enhanced Descaler Program – The speed at which customers are able to move from Zscaler ZIA to Cloudflare Gateway continually gets faster. It usually takes more time to set up a meeting with the right technical administrators than to migrate settings, configurations, lists, policies and more to Cloudflare.
Donning a MASQUE: building a new protocol into Cloudflare WARP – Announcing support for MASQUE, a cutting-edge new protocol, in the beta version of our consumer WARP iOS app.
How we think about Zero Trust Performance – There are many ways to view network performance. However, at Cloudflare we believe the best way to measure performance is to use end-to-end HTTP response measurements. In this blog, we’re going to talk about why end-to-end performance is the most important thing to look at, why other methods like proxy latency and decrypted latency SLAs are insufficient for performance evaluations, and how you can measure your Zero Trust performance like we do.

Measuring what matters

Introducing the Cloudflare Radar Internet Quality Page – The new Internet Quality page on Cloudflare Radar provides both country and network (autonomous system) level insight into Internet connection performance (bandwidth) and quality (latency, jitter) over time, based on benchmark test data as well as speed.cloudflare.com test results.
Network performance update: Speed Week 2023 – A blog post that shares the most recent network performance updates, and tells you about the tools and processes we use to monitor and improve our network performance.
Introducing Timing Insights: new performance metrics via our GraphQL API – If you care about the performance of your website or APIs, it’s critical to understand why things are slow. We're introducing new analytics tools to help you understand what is contributing to the "Time to First Byte" (TTFB) of Cloudflare and your origin. But wait – maybe you've heard that you should stop worrying about TTFB? Isn't Cloudflare moving away from TTFB as a metric? Read on to understand why there are still situations where TTFB matters.
Are you measuring what matters? A fresh look at Time To First Byte – Time To First Byte (TTFB) is not a good way to measure your website's performance. In this blog we’ll cover what TTFB is a good indicator of, what it's not great for, and what you should be using instead.
INP. Get ready for the new Core Web Vital – On May 10, 2023, Google announced that INP will replace FID in the Core Web Vitals in March 2024. The Core Web Vitals play a role in the Google Search algorithm. Website owners who care about Search Engine Optimization (SEO) should prepare for the change. In this post we outline what INP is, and how you can prepare.

Speed made simple

Faster website, more customers: Cloudflare Observatory can help your business grow – Cloudflare users can now easily monitor website performance using Real User Monitoring (RUM) data along with scheduled tests from different regions in a single dashboard. This will identify any performance issues your website may have. Once we’ve identified any issues, Observatory will highlight customized recommendations to resolve these issues, all with a single click.
Smart Hints make code-free performance simple – We’re excited to announce we’re making Early Hints and Fetch Priorities automatic using the power of Cloudflare’s network.
Introducing HTTP/3 Prioritization – Announcing full support for HTTP/3 Extensible Priorities, a new standard that speeds up the loading of web pages by up to 37%.
Argo Smart Routing for UDP: speeding up gaming, real-time communications and more – Announcing we’re bringing traffic acceleration to customers’ UDP traffic. Now, users can improve the latency of UDP-based applications like video games, voice calls, and video meetings by up to 17%.
All the way up to 11: Serve Brotli from origin and Introducing Compression Rules – Enhancing our support for Brotli compression, enabling end-to-end Brotli compression for web content. Compression plays a vital role in reducing bytes during transfers, ensuring quicker downloads and seamless browsing.
How to use Cloudflare Observatory for performance experiments – Introducing Cloudflare's Performance Experiments in Observatory: safely test code, improve website speed, and minimize risk.
Introducing Low-Latency HLS Support for Cloudflare Stream – Broadcast live to websites and applications with less than 10 seconds of latency using Low-Latency HTTP Live Streaming (LL-HLS), now in beta with Cloudflare Stream.
Speeding up APIs with Ricochet for API Gateway – Announcing Ricochet for API Gateway, the easiest way for Cloudflare customers to achieve faster API responses through automatic, intelligent API response caching.

But wait, there’s more

Cloudflare Snippets is now available in alpha – Cloudflare Snippets are available in alpha. Snippets are a simple way of executing a small piece of JavaScript on select HTTP requests, using the ruleset engine filtering logic.
Making Cloudflare Pages the fastest way to serve your sites – Pages is now the fastest way to serve your sites compared to Netlify, Vercel and many others.
Cloudflare's global network grows to 300 cities and ever closer to end users with connections to 12,000 networks – We are pleased to announce that Cloudflare is now connected to over 12,000 Internet networks in over 300 cities around the world.
It's never been easier to migrate thanks to Cloudflare's new Migration Hub – Relaunching Turpentine, a service for moving away from Varnish Configuration Language (VCL), and introducing Cloudflare's new Migration Hub. The Migration Hub serves as a one-stop-shop for all migration needs, featuring brand-new migration guides that bring transparency and simplicity to the process.
Part 2: Rethinking cache purge with a new architecture – Discussing the architecture improvements we’ve made so far for cache purge and what we’re working on now.
Speeding up your (WordPress) website is a few clicks away – In this blog, we explain where the opportunities exist to improve website performance, how to check if a specific site can improve performance, and provide a small JavaScript snippet which can be used with Cloudflare Workers to do this optimization for you.
Benchmarking dashboard performance – The Cloudflare dashboard is a single page application that houses all of the UI for our wide portfolio of existing products, as well as the new features we're releasing every day.
Workers KV is faster than ever with a new architecture – With the new architecture powering Workers KV, our service will become faster and more scalable than ever. We have significantly reduced cold read probability, and enabled KV to serve over a trillion requests a month.
How Kinsta used Workers and Workers KV to improve cache hit rates by 56% – Kinsta delivers tailored cloud hosting solutions to over 26,000 companies across 128 countries. Learn how they used Workers and Workers KV to improve cache performance and performance for their customers.
A step-by-step guide to transferring domains to Cloudflare – Transferring your domains to a new registrar isn’t something you do every day, and getting any step of the process wrong could mean downtime and disruption. We’ve built a domain transfer checklist to help you quickly and safely transfer your domains to Cloudflare.
How we scaled and protected Eurovision 2023 voting with Pages and Turnstile – More than 162 million fans tuned in to the 2023 Eurovision Song Contest, the first year that non-participating countries could also vote. Cloudflare helped scale and protect the voting application using our rapid DNS infrastructure, CDN, Cloudflare Pages and Turnstile.

Watch on Cloudflare TV

If you missed any of the announcements or want to also view the associated Cloudflare TV segments, where blog authors went through each announcement, you can now watch all the Speed Week videos on Cloudflare TV.

It’s never been easier to migrate thanks to Cloudflare’s new Migration Hub

Post Syndicated from Sam Marsh original http://blog.cloudflare.com/turpentine-v2-migration-program/

We understand the pain points associated with CDN migrations. That's why in late 2021 we introduced Turpentine, a project to automate the process of translating the old Varnish Configuration Language (VCL) into Cloudflare Workers with just a push of a button. After nearly two years of testing and user feedback, we’ve tailored the migration processes for different user groups.

Today, we are thrilled to relaunch Turpentine, and introduce Cloudflare's new Migration Hub. The Migration Hub serves as a one-stop-shop for all migration needs, featuring brand-new migration guides that bring transparency and simplicity to the process.

We also know that a large number of customers aren't comfortable doing migrations themselves. Years of built-up business logic make unpacking and translating CDN configurations between different vendors difficult, locking businesses into subpar products and services. To help these customers we have established a Professional Services group to ensure smooth migrations for customers transitioning to Cloudflare’s first-class products. Going forward, we plan to continue to invest resources in Turpentine to ensure that moving to any part of Cloudflare is easy and you have the help you need.

Why choose Cloudflare?

Cloudflare has gained immense popularity among businesses seeking to improve website performance, security, and reliability. The demand for Cloudflare's CDN services has skyrocketed, with an ever-increasing number of companies wanting to use our services to help protect their web properties. It became evident that a more streamlined approach was needed to empower customers to self-guide through the onboarding process if they wanted.

That’s why we’ve shipped guides to help bring transparency to the migration process, compare Cloudflare's Rules or Workers to VCL or XML configurations, and provide mappings of different products between vendors. This resource serves as a repository of information and step-by-step guidance for those seeking to move to Cloudflare. These guides are designed to empower customers to take control of their onboarding journey by providing them with the tools and resources they need to understand how to successfully implement Cloudflare's first-class products without needing to talk to anyone.

As new features and enhancements are introduced to Cloudflare, the landing page will be updated to reflect these changes.

However, undertaking the onboarding process independently can be daunting for some businesses. We understand that every organization is unique, with specific requirements and challenges. To address this concern, Cloudflare has established a dedicated Professional Services team. This team of experts works closely with customers, taking the time to understand their environments, assess their needs, and provide tailored guidance and support throughout the migration process. With the help of the Professional Services team, businesses can transition to Cloudflare guided by an experienced team, ensuring a timely, smooth and successful migration. Using the Migration Hub, you can get in contact with the Professional Services team to help with your migration journey.

Whether you prefer self-guided exploration or expert guidance, the Cloudflare Migration Hub has everything you need to make your migration journey a success.

Self-serve guides

Our commitment to transparency and empowering our customers led us to create comprehensive public-facing guides that provide valuable insights into how CDN products compare and overlap. With these guides, you can gain a clear understanding of the features and capabilities offered by Cloudflare, and how they map between CDN offerings you might be more familiar with.

(Image: an example of mapping Fastly products to Cloudflare.)

The migration guides include product maps that show how Cloudflare features match up to Akamai or Fastly features and how to configure them. Using this information, migration should just be a matter of matching up rules and implementing them, instead of translating feature names between vendors or fiddling with ChatGPT prompts to correctly (or incorrectly!) translate code from one vendor to the other. There are also numerous examples of how certain configurations have been accomplished, with code examples that help customers understand their current configuration and translate it into Cloudflare products easily. Check them out here.

Not only that, but Cloudflare’s commitment to providing numerous free tools across our network means anyone can sign up and get access to much of our platform without needing to talk to anyone. We believe in giving you the tools and knowledge you need to navigate the migration and testing process independently, while knowing that our support is just a click away whenever you need it.

Let us do it for you with Professional Services

We're also incredibly excited to introduce our dedicated team of migration experts, known as Professional Services, who are here to assist you throughout the entire process. The Professional Service team will work closely with you, offering their expertise and guiding you through each step to ensure a seamless transition onto Cloudflare’s products.

Too often, we meet with customers who have been intimidated by the complexity of their current CDN vendor. They had a third party help set it up, and have experienced the nervousness of trying to change things without knowing what impact it could have downstream. This is compounded by different CDNs using different terminology for essentially the same concepts.

Professional Services is here to help guide your onboarding experience and cut through that uncertainty.

From providing in-depth knowledge about the migration process and tooling to addressing any specific challenges you may encounter, our Professional Services team is committed to making your migration experience as smooth and efficient as possible. With Cloudflare's Professional Services, you can confidently embark on your migration journey, knowing that our experts will handle the complexities while empowering you to drive the migration process forward.

Success Stories

By leveraging Cloudflare's migration solutions, numerous businesses have achieved remarkable results, including improved performance, enhanced security, and streamlined pricing. These success stories serve as a testament to the effectiveness and reliability of Cloudflare's migration offerings.

Improve cost and performance by migrating to Cloudflare

A mobile communications leader successfully migrated its public website, after 20 years with Akamai, to Cloudflare for a better digital experience plus >20% cost savings.

The company’s decision to decentralize purchasing of CDN services illuminated the high cost of using Akamai for its public-facing websites.

A short proof of concept of the Cloudflare application performance suite resulted in measurable cost savings and performance improvements. It was also determined that the flexibility to integrate additional Cloudflare tools, like Workers for serverless compute, would enable the organization to scale further when ready.

Avoid reliability concerns by migrating to Cloudflare

A UK sporting giant with a devoted international fan community was deeply concerned about the spiky traffic associated with game days. Often these matches saw 10x the normal website traffic. Unfortunately, incumbent vendors weren’t up to the challenge of providing the performance and uptime reliability fans needed during these game-day traffic spikes.

After migrating to Cloudflare, the results spoke for themselves. In one 24-hour match day, the site received over 11 million requests. Cloudflare’s cache served over 93% of them with ease while providing a 100% uptime guarantee.

Get started today

We invite you to visit our Migration Hub and explore our comprehensive offerings.

Migrating from one CDN to another can be a daunting task, but with Cloudflare's Migration Hub and Professional Services, the process becomes more straightforward and hassle-free. We are committed to empowering our customers with the resources, support, and expertise needed to transition smoothly to Cloudflare's advanced solutions.

Are you measuring what matters? A fresh look at Time To First Byte

Post Syndicated from Sam Marsh original http://blog.cloudflare.com/ttfb-is-not-what-it-used-to-be/

Today, we’re making the case for why Time To First Byte (TTFB) is not a good metric for evaluating how fast web pages load. There are better metrics out there that give a more accurate representation of how well a server or content delivery network performs for end users. In this blog, we’ll go over the ambiguity of measuring TTFB, touch on more meaningful metrics such as Core Web Vitals that should be used instead, and finish on scenarios where TTFB still makes sense to measure.

Many of our customers ask what the best way would be to evaluate how well a network like ours works. This is a good question! Measuring performance is difficult. It’s easy to simplify the question to “How close is Cloudflare to end users?” The predominant metric that’s been used to measure that is round trip time (RTT). This is the time it takes for one network packet to travel from an end user to Cloudflare and back. We measure this metric and mention it from time to time: Cloudflare has an average RTT of 50 milliseconds for 95% of the Internet-connected population.

Whilst RTT is a relatively good indicator of the quality of a network, it doesn’t necessarily tell you much about how well that network actually delivers websites to end users. For instance, what if the web server is really slow? A user might be very close to the data center that serves the traffic, but if it takes a long time to actually grab the asset from disk and serve it, the result will still be a poor experience.

This is where TTFB comes in. It measures the time from when a request is sent by an end user until the very first byte of the response is received. This sounds great on paper! However, it doesn’t capture how a webpage or web application loads, and what happens after the first byte is received.

In this blog we’ll cover what TTFB is a good indicator of, what it's not great for, and what you should be using instead.

What is TTFB?

TTFB is a metric which reports the duration between the client sending a request to a server for a given file and the receipt of the first byte of said file. For example, if you were to download the Cloudflare logo from our website, the TTFB would be how long it took to receive the first byte of that image. Similarly, if you were to measure the TTFB of a request to cloudflare.com, the metric would report how long it took from sending the request to receiving the first byte of the first HTTP response – not how long it took for the image to be fully visible or for the web page to be loaded in a state that allowed a user to begin using it.

The simplest answer therefore is to look at the diametrically opposite measurement, Time to Last Byte (TTLB). TTLB, as you’d expect, measures how long it takes until the last byte of data is received from the server. For the Cloudflare logo file this would make sense, as until the image is fully downloaded it's not exactly useful. But what about for a webpage? Do you really need to wait until every single file is fully downloaded, even those images at the bottom of the page you can't immediately see? TTLB is fine for measuring how long it took to download a single file from a CDN / server. However for multi-faceted traffic, like web pages, it is too conservative, as it doesn’t tell you how long it took for the web page to be usable.
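For concreteness, both values can be read in the browser with the Navigation Timing API. The sketch below is purely illustrative and only covers the main HTML document, not every asset on the page:

const [nav] = performance.getEntriesByType("navigation");
if (nav) {
  // Time from sending the request to receiving the first byte of the response.
  const ttfb = nav.responseStart - nav.requestStart;
  // Time from sending the request to receiving the last byte of the response.
  const ttlb = nav.responseEnd - nav.requestStart;
  console.log(`TTFB: ${ttfb.toFixed(1)} ms, TTLB: ${ttlb.toFixed(1)} ms`);
}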

As an analogy, we can look at measuring how long it takes to process an incoming airplane full of passengers. What's important is to understand how long it takes for those passengers to disembark, pass through passport control, collect their baggage and leave the terminal (assuming no onward journeys). TTFB would measure success as how long it took to get the first passenger off of the airplane. TTLB would measure how long it took the last passenger to leave the terminal, even if that passenger remained in the terminal for hours afterwards due to passport issues or getting lost. Neither is a good measure of success for the airline.

Why TTFB doesn't make sense

TTFB is a widely used metric because it is easy to understand and it is a great signal for connection setup time, server time and network latency. It can help website owners identify when performance issues originate from their server. But is TTFB a good signal for how real users experience the loading speed of a web page in a browser?

When a web page loads in a browser, the user’s perception of speed isn’t related to the moment the browser first receives bytes of data. It is related to when the user starts to see the page rendering on the screen.

The loading of a web page in a browser is a very complex process. Almost all of this process happens after TTFB is reported. After the first byte has been received, the browser still has to load the main HTML file. It also has to load fonts, stylesheets, JavaScript, images and other resources. Often these resources link to other resources that also must be downloaded. Often these resources entirely block the rendering of the page. Alongside all these downloads, the browser is also parsing the HTML, CSS and JavaScript. It is building data structures that represent the content of the web page as well as how it is styled. All of this is in preparation to start rendering the final page onto the screen for the user.

When the user starts seeing the web page actually rendered on the screen, TTFB has become a distant memory for the browser. For a metric that signals the loading speed as perceived by the user, TTFB falls dramatically short.

Receiving the first byte isn't sufficient to determine a good end user experience as most pages have additional render blocking resources that get loaded after the initial document request. Render-blocking resources are scripts, stylesheets, and HTML imports that prevent a web page from loading quickly. From a TTFB perspective it means the client could stop the ‘TTFB clock’ on receipt of the first byte of one of these files, but the web browser is blocked from showing anything to the user until the remaining critical assets are downloaded.

This is because browsers need instructions for what to render and what resources need to be fetched to complete “painting” a given web page. These instructions come from a server response. But the servers sending these responses often need time to compile these resources — this is known as “server think time.” While the servers are busy during this time… browsers sit idle and wait. And the TTFB counter goes up.

There have been a number of attempts over the years to benefit from this “think time”. First came Server Push, which was superseded last year by Early Hints. Early Hints take advantage of “server think time” to asynchronously send instructions to the browser to begin loading resources while the origin server is compiling the full response. By sending these hints to a browser before the full response is prepared, the browser can figure out what it needs to do to load the webpage faster for the end user. It also stops the TTFB clock, meaning a lower TTFB. This helps ensure the browser gets the critical files sooner to begin loading the webpage, and it also means the first byte is delivered sooner as there is no waiting on the server for the whole dataset to be prepared and ready to send. Even with Early Hints, though, TTFB doesn’t accurately define how long it took the web page to be in a usable state.
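As a rough sketch of how this looks in practice, an origin (here a Workers-style handler standing in for one, with an invented asset path) advertises its critical assets via a Link header on the response; with Early Hints enabled, the edge can surface that hint to the browser as a 103 response while the rest of the page is still being generated:

export default {
  async fetch() {
    const html = '<!doctype html><link rel="stylesheet" href="/assets/main.css"><h1>Hello</h1>';
    return new Response(html, {
      headers: {
        "content-type": "text/html;charset=UTF-8",
        // The preload hint that can be sent to the browser early,
        // before the full 200 response is ready.
        "link": "</assets/main.css>; rel=preload; as=style",
      },
    });
  },
};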

TTFB also does not take into account the multiplexing benefits of HTTP/2 and HTTP/3, which allow browsers to load files in parallel. Nor does it account for compression on the origin, which can result in a higher TTFB (the server spends time compressing the assets) but a quicker page load overall, because far fewer bytes are sent over the network.

Cloudflare offers many features that can improve the loading speed of a website but don’t necessarily impact the TTFB score. These features include Zaraz, Rocket Loader, HTTP/2 and HTTP/3 Prioritization, Mirage, Polish, Image Resizing, Auto Minify and Cache. They improve the loading time of a webpage through a series of enhancements, from image optimization and compression to eliminating render blocking by sending assets from the server to the browser in the best possible order.

More comprehensive metrics are required to illustrate the full loading process of a web page, and the benefit provided by these features. This is where Real User Monitoring helps. At Cloudflare we are all-in on Real User Monitoring (RUM) as the future of website performance. We’re investing heavily in it, both from an observation standpoint and from an optimization one.

For those unfamiliar with RUM, we typically optimize websites for three main metrics – known as the “Core Web Vitals”. This is a set of key metrics which are believed to be the best and most accurate representation of a poorly performing website vs a well performing one. These key metrics are Largest Contentful Paint, First Input Delay and Cumulative Layout Shift.

(Core Web Vitals diagram source: https://addyosmani.com/blog/web-vitals-extension/)

LCP measures loading performance; typically how long it takes to load the largest image or text block visible in the browser. FID measures interactivity. For example, the time between when a user clicks or taps on a button and when the browser responds and starts doing something. Finally, CLS measures visual stability. A good (or rather, bad) example of CLS is when you go to a website on your mobile phone, tap on a link and the page moves at the last second, meaning you tap something you didn't want to. That would result in a poor CLS score, as it's a bad user experience.
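If you want to see these metrics for your own pages, the open source web-vitals JavaScript library reports them from real page views. Here is a minimal sketch; recent versions of the library expose the on-prefixed functions, and where you send the values is up to you:

import { onCLS, onFID, onLCP } from "web-vitals";

// Log each Core Web Vital as it becomes available for the current page view.
function report(metric) {
  // metric.value is in milliseconds for LCP and FID, and unitless for CLS.
  console.log(`${metric.name}: ${metric.value} (${metric.rating})`);
}

onCLS(report);
onFID(report);
onLCP(report);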

Looking at these metrics gives us a good idea of how the end user is truly experiencing your website (RUM) vs. how quickly the first byte of the file was retrieved from the nearest Cloudflare data center (TTFB).

Good TTFB, bad user experience

One of the “sub parts” that comprise LCP is TTFB. That means a poor TTFB is very likely to result in a poor LCP. If it takes you 20 seconds to retrieve the first byte of the first image, your user isn't going to have a good experience – regardless of your outlook on TTFB vs RUM.

Conversely, we found that a good TTFB does not always mean a good LCP, FID or CLS score. We ran a query to collect RUM metrics for web pages we served which had a good TTFB score, with good defined as a TTFB of less than 800 ms. This allowed us to ask the question: TTFB says these websites are good. Does the RUM data support that?

We took four distinct samples from our RUM data in June. Each sample had a different date-range and sample-rate combination. In each sample we queried for 200,000 page views. From these 200,000 page views we filtered for only the page views that reported a 'Good' TTFB. Across the samples, of all page views that have a good TTFB, about 21% of them did not have a “good” LCP score. 46% of them did not have a “good” FID score. And 57% of them did not have a good CLS score.
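The shape of that check can be sketched in a few lines: keep only the page views with a good TTFB, then count how many of those miss the commonly used “good” thresholds for each vital. The record layout and sample data below are invented for illustration; only the 800 ms TTFB threshold comes from the analysis above:

// Commonly used "good" thresholds: TTFB < 800 ms, LCP <= 2500 ms, FID <= 100 ms, CLS <= 0.1.
const GOOD = { ttfb: 800, lcp: 2500, fid: 100, cls: 0.1 };

function shareNotGood(pageViews, metric) {
  const goodTtfb = pageViews.filter((v) => v.ttfb < GOOD.ttfb);
  const notGood = goodTtfb.filter((v) => v[metric] > GOOD[metric]);
  return goodTtfb.length ? notGood.length / goodTtfb.length : 0;
}

// Toy sample: both views have a good TTFB, but only the first has a good LCP.
const sample = [
  { ttfb: 350, lcp: 1900, fid: 40, cls: 0.02 },
  { ttfb: 600, lcp: 3400, fid: 180, cls: 0.25 },
];
console.log(shareNotGood(sample, "lcp")); // 0.5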

This clearly shows the disparity between measuring the time it takes to receive the first byte of traffic, vs the time it takes for a webpage to become stable and interactive. In summary, LCP includes TTFB but also includes other parts of the loading experience. LCP is a more comprehensive, user-centric metric.

TTFB is not all bad

Reading this post and others from Speed Week 2023 you may conclude we really don't like TTFB and you should stop using it. That isn't the case.

There are a few situations where TTFB does matter. For starters, there are many applications that aren’t websites. File servers, APIs and all sorts of streaming protocols don’t have the same semantics as web pages and the best way to objectively measure performance is to in fact look at exactly when the first byte is returned from a server.

To help optimize TTFB for these scenarios we are announcing Timing Insights, a new analytics tool to help you understand what is contributing to "Time to First Byte" (TTFB) of Cloudflare and your origin. Timing Insights breaks down TTFB from the perspective of our servers to help you understand what is slow, so that you can begin addressing it.

Get started with RUM today

To help you understand the real user experience of your website, we have today launched Cloudflare Observatory, the new home of performance at Cloudflare.

Cloudflare users can now easily monitor website performance using Real User Monitoring (RUM) data along with scheduled synthetic tests from different regions in a single dashboard. This will identify any performance issues your website may have. The best bit? Once we’ve identified any issues, Observatory will highlight customized recommendations to resolve these issues, all with a single click.

Start making your website faster today with Observatory.

Welcome to Speed Week 2023

Post Syndicated from Sam Marsh original http://blog.cloudflare.com/welcome-to-speed-week-2023/

What we consider ‘fast’ is changing. In just over a century we’ve cut the time taken to travel to the other side of the world from 28 days to 17 hours. We developed a vaccine for a virus causing a global pandemic in just one year – 10% of the typical time. AI has reduced the time taken to complete software development tasks by 55%. As a society, we are driven by metrics – and the need to beat what existed before.

At Cloudflare we don't focus on metrics of days gone by. We’re not aiming for “faster horses”. Instead we are driven by questions such as “What does it actually look like for users?”, “How is this actually speeding up the Internet?”, and “How does this make the customer faster?”.

This innovation week we are helping users measure what matters. We will cover a range of topics including how we are fastest at Zero Trust, how we have the fastest network, and a deep dive on cache purge and why global purge latency might not be the gold star it's made out to be. We’ll also cover why Time to First Byte is generally a bad measurement, and what you should care about instead.

Woven amongst these topics are a number of great new products and features that genuinely make you and your customers faster. From API acceleration and end-to-end Brotli 11 compression, to reducing page load times by 30% with one click. Plus a brand new home for application performance.

This week we will help you measure what matters. We’ll help you gain insight into your performance, from Zero Trust and APIs to websites and applications. And finally we’ll help you get faster. Quickly.

We are proving we are the fastest at what we do. And we are making it as easy as possible for our users to attain those numbers.

Welcome to Speed Week.

More megapixels?

You don't have to go far in the real world to find examples of highly-touted metrics that likely don't capture what you really care about.

If you read the announcements each year you’d be forgiven for believing smartphones have devolved into cameras with apps and an antenna. With each new model announced the press releases reference megapixel improvements between the previous and latest model.

The number of megapixels alone does not guarantee better image quality. Factors such as sensor size, lens quality, image processing algorithms, and low-light performance also play significant roles in determining the overall camera performance. This has been a widely accepted view for over a decade now, so why do companies keep pushing it as a metric – and why do users feel it's important?

Similarly, marketing collateral from Internet Service Providers would have you believe that “bandwidth is king”.

However, it has been categorically proven that bandwidth is not the sole indicator of speed. Just two months ago we published a blog on “Making home Internet faster has little to do with speed”, concluding: “While bandwidth plays a part, the latency of the connection – the real Internet “speed” – is more important.” The post references a recent paper by two researchers from MIT which supports this point, showing that beyond a point of diminishing returns around 20 Mbps, more bandwidth doesn't make a webpage load much faster.

Again, the advertised, and generally accepted comparison metric amongst consumers, is incorrect. More bandwidth does not equal faster Internet speeds.

Simply put – are you really measuring what matters to you when reviewing your product choices and vendors? Or on reflection have your choices been influenced by dogma for far too long?

Measuring what matters on the Internet

Similarly to the smartphone and ISP industries, we at Cloudflare operate in industries where users often compare us against competitors using metrics that likely don't measure what matters to them.

Large enterprises use software to selectively shift traffic between Content Delivery Networks (CDNs) based on the lowest possible Time to First Byte (TTFB) score per region. This means if Cloudflare suddenly were to cut its TTFB in half in Africa, for example, we could see a huge influx of traffic in this region from these enterprise customers – likely not doing anything to improve the actual visitor experience of a website.

TTFB is often used as a measure of how quickly a web server responds to a request and common web testing services report it. The faster it is the better the web server (in theory). We have known for years, however, that TTFB is not on its own a fair reflection of real world performance.

Receiving the first byte isn't sufficient to determine a good end user experience as most pages have additional render blocking resources that get loaded after the initial document request. TTFB does not take into account multiplexing benefits of HTTP/2 and HTTP/3 which allow browsers to load files in parallel. It also doesn't account for features like Early Hints, Zaraz, Rocket Loader, HTTP/2 and soon HTTP/3 Prioritization which eliminate render blocking.

As SiteSpect wrote last year, “TTFB is a measure of how fast a web server is able to respond to a request, and how long it takes for that request to traverse various layers of networking to reach a user’s browser. It is a measure of speed for delivery of content, but it is not a measurement for how long end-users are effectively waiting before they can start interacting with your website. TTFB completely ignores everything that happens after that network layer: loading, downloading of resources, rendering, etc. In other words, TTFB is not a user-centric measurement, it’s a networking measurement.”

At Cloudflare we are all-in on Real User Monitoring (RUM) as the future of website performance. We’re investing heavily in it, both from an observation standpoint and from an optimization one. This week we will be releasing a series of new products aimed at helping users understand the actual experience of their end users (i.e. website visitors), and provide suggestions on how to improve it.

For those unfamiliar with RUM, we typically optimize websites for three main metrics – known as the “Core Web Vitals”. This is a set of key metrics which are believed to be the best and most accurate representation of a poorly performing website vs a well performing one. These key metrics are Largest Contentful Paint, First Input Delay and Cumulative Layout Shift.

(Core Web Vitals diagram credit: https://addyosmani.com/blog/web-vitals-extension/)

LCP measures loading performance; typically how long it takes to load the largest image or text block visible in the browser. FID measures interactivity. For example, the time between when a user clicks or taps on a button and when the browser responds and starts doing something. Finally, CLS measures visual stability. A good (or rather, bad) example of CLS is when you go to a website on your mobile phone, tap on a link and the page moves at the last second, meaning you tap something you didn't want to. That would result in a poor CLS score, as it's a bad user experience.

Looking at these metrics gives us a good idea of how the end user is truly experiencing your website (RUM) vs. how quickly the nearest Cloudflare data center begins to return data (TTFB).

The other benefit of Real User Monitoring is that it includes the speed advantage of new protocols and features designed to improve the customer experience. For example, Time to First Byte is measured over a single connection between the client and the nearest Cloudflare server. This is nothing like how a web browser connects to a website, which uses multiplexing to fetch multiple files in parallel. There are also products like Early Hints which are designed to take advantage of the “server think time” to send instructions to the browser to begin loading readily-available resources while the server finishes compiling the full response.

In Speed Week we will be going deep into why TTFB is a bad metric to care about for websites and web applications and why RUM is the future, and we’ll have a blog post on the latest Core Web Vital – “Interaction to Next Paint” (INP) – and what it means to you as a business.

We will also be unveiling a brand-new product which will be the new home of application performance on Cloudflare. The new product will augment synthetic tests from various global locations with real user monitoring data to give administrators the best possible understanding of how their website is performing in the real world. This product will be available for all plan levels.

We’re the fastest, and we can prove it

It's no secret that Cloudflare is fast.

However, it might not be obvious to the everyday reader just how fast we are, and in just how many areas. Fastest compute. Fastest DNS. Fastest network. Fastest Zero Trust Network Access (ZTNA). Fastest Secure Web Gateway (SWG). Fastest object storage. And where we are not yet empirically the fastest, we are looking to prove we are number one.

We’re also finding ways to migrate customers from legacy providers and applications to Cloudflare as fast as possible. These legacy vendors have locked in companies using confusing terminology and esoteric features, trapping them on sub-par products and making them too afraid to move away. We’re helping those customers escape. Super Slurper helps customers move away from S3, Turpentine helps migrate legacy VCL setups to Cloudflare, and our Descaler program helps to migrate customers from Zscaler to Cloudflare in a matter of hours. We are building tools and products to help those customers who want to move to the fastest network but are locked in.

In Speed Week we’ll cover the latest on these programs and how we are relentlessly pushing to make the migration process as quick and easy as possible for customers who want to move to Cloudflare and put their business on the fastest network around.

Performance matters, whatever the product area

Generally, when you hear of performance improvements it's typically through the lens of making websites faster. But speed comes in many forms. Take Zero Trust as an example.

Measuring Zero Trust performance matters because it impacts your employees' experience and their ability to get their job done. Whether it’s accessing services through access control products, connecting out to the public Internet through a Secure Web Gateway, or securing risky external sites through Remote Browser Isolation, all of these experiences need to be frictionless. But what if your company's Secure Web Gateway is in London and you are in Johannesburg? This can mean a painful, slow, and frustrating employee experience whilst they wait for traffic to be sent to and from London. Slack becomes slow. Zoom becomes slow. Employees become frustrated.

The bigger concern, however, is not knowing about these performance issues. For example, if all of your employees are physically located in an office and the connection to critical business systems like Salesforce or Workday worsens, the likelihood is it will become evident quickly. But what about a remote workforce with employees globally distributed? As a business, you need the ability to understand how your employees are experiencing critical business systems and identify any connection and performance issues they may be experiencing to ensure they get addressed quickly. In Speed Week we’ll unveil our latest Zero Trust offering which will give CIOs and businesses incredible insight into the performance experience of their workforce.

Speed Week will show Cloudflare is the fastest Zero Trust provider. Our analysis will provide updated benchmark comparisons and include additional competitors to show how we outperform everyone to give employees the fastest Zero Trust experience.

Another area we will shine a spotlight on this week is cache purge. When you think of CDNs it's common to look at them as a large distributed cache. Visitor-requested files are retrieved from an origin and stored on globally distributed CDN servers. This allows visitors to download the file in the quickest possible time by retrieving it from their nearest Cloudflare data center rather than having to traverse the Internet to and from the origin. TTFB will measure the time taken to receive a single file from the nearest location. RUM will measure the time it takes to receive multiple files, cached and uncached, and put them together into the webpage requested. But what about when the file changes on origin?

In the scenario where a business is hosting a pricebook as a downloadable file on its website, it is very important to understand how long it takes to remove old copies from Cloudflare cache to ensure customers don't see incorrect prices. This is where measuring cache purge times becomes important. The time taken to remove the invalidated file (old file) from every server in every data center in the CDN is known as the ‘global purge time’. In Speed Week we will explain how we have built our new cache purge architecture to be lightning-fast and what the performance numbers are as a result (spoiler: they are insanely fast).
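For reference, purging a single changed file like that pricebook is one call to Cloudflare's cache purge API endpoint; the zone ID, token and URL below are placeholders:

// Purge one URL from Cloudflare's cache so the next request fetches the fresh copy.
const ZONE_ID = "your-zone-id";     // placeholder
const API_TOKEN = "your-api-token"; // placeholder

const res = await fetch(`https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/purge_cache`, {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${API_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ files: ["https://example.com/pricebook.pdf"] }),
});
console.log(await res.json());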

These are just a few examples of what we have in store for the week. We also have blogs on AI, API acceleration, developer platform, networking, protocols, compression, streaming, UI optimization and more.

Speed at the heart of Cloudflare

At Cloudflare we put performance at the heart of everything we do.

Make sure to follow the Cloudflare blog and social media accounts for all of the week's news, and join us on Cloudflare TV each day for a live discussion of the day's announcements.

Welcome to Speed Week.

The most programmable Supercloud with Cloudflare Snippets

Post Syndicated from Sam Marsh original https://blog.cloudflare.com/snippets-announcement/

Your traffic, how you like it

Cloudflare is used by a highly diverse customer base. We offer simple-to-use products for everything from setting HTTP headers to rewriting the URI path and performing URL redirects. Sometimes customers need more than the out-of-the-box functionality: not just adding an HTTP header, but performing some advanced calculation to create the output. Today they would need to create a feature request and wait for it to be shipped, write a Cloudflare Worker, or keep this modification ‘on origin’ – on their own infrastructure.

To simplify this, we are delighted to announce Cloudflare Snippets. Snippets are a new way to perform traffic modifications that users either cannot do via our productised offerings, or want to do programmatically. The best part? The vast majority of customers will pay nothing extra for using Snippets.

Users now have a choice. Perform the action via a rule. Or, if more functionality is needed, write a Snippet.  Neither will mean waiting. Neither will incur additional cost (although a high fair usage cap will apply). Snippets unblocks users to do what they want, when they want. All on Cloudflare.

Snippets will support the import of code written in various languages, such as JavaScript (modern), VCL (legacy) and Apache .htaccess files (legacy). This allows customers to migrate legacy operational code onto our platform – whilst also consolidating their JavaScript operations.

Please use the sign-up form to join the waitlist for Snippets if you are interested in testing. We hope to begin admitting users into the closed beta in early 2023.

Why build Snippets?

Over the past 18 months we have released a number of new rules products such as Transform Rules, Cache Rules, Origin Rules, Config Rules and Redirect Rules. These new products give more control to customers on how we process their traffic as it flows through our global network. The feedback on these products so far has been overwhelmingly positive. However, our customers still occasionally need the ability to do more than the out-of-the-box functionality allows.

There are always some use cases where a product doesn’t provide the functionality that a customer needs for their specific situation.  For example, whilst thousands of our customers are now using Transform Rules to solve their HTTP header modification use cases, there remains a small number of use cases that are not possible, such as setting dynamic expiry times with cookies or hashing tokens with a key.

This is where Cloudflare Snippets help. Customers will no longer need to use the full Cloudflare Workers platform to implement these relatively simple use cases. Nor will they need to wait for us to build their feature requests. Instead, they will be able to run a Snippet of JavaScript.
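To give a flavour of the kind of thing we mean, here is a sketch of a handler that signs an incoming token with HMAC-SHA-256 via the Web Crypto API and attaches the result as a response header. The header names and hard-coded secret are invented for illustration, and the exact Snippets interface may differ from this Workers-style handler:

export default {
  async fetch(request) {
    const encoder = new TextEncoder();
    // Import a demo secret as an HMAC signing key (in practice it would not be hard-coded).
    const key = await crypto.subtle.importKey(
      "raw",
      encoder.encode("demo-secret"),
      { name: "HMAC", hash: "SHA-256" },
      false,
      ["sign"]
    );
    const token = request.headers.get("x-token") ?? "";
    const signature = await crypto.subtle.sign("HMAC", key, encoder.encode(token));
    const hex = [...new Uint8Array(signature)]
      .map((b) => b.toString(16).padStart(2, "0"))
      .join("");

    const response = await fetch(request);
    const out = new Response(response.body, response);
    out.headers.set("x-token-signature", hex); // illustrative header name
    return out;
  },
};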

Migrating legacy code to Snippets

Varnish Configuration Language (VCL) is only used within the context of Varnish. Launched around 16 years ago, it has historically been used to configure traffic and routing for Content Delivery Networks as it was extensible to a wide range of use cases.

There are still a good number of businesses out there using VCL to perform routing and traffic modification actions. Whilst other providers are deprecating support for VCL, we want to make sure those of you comfortable using it are still supported.

Snippets won’t run pure VCL. Instead, we will convert VCL into easy to maintain rules or Snippets. To achieve this we’re building a simple-to-use, self-serve VCL converter that analyzes uploaded VCL code and auto-generates suggested Snippets, and if we can find a match, also generates suggested rules for products such as Transform Rules or Cache Rules.

This topic was initially handled via Project Turpentine, a suite of tools used by Cloudflare employees to parse a customer’s VCL into a suggested JavaScript configuration. This JavaScript could then be loaded into a Worker, or series of Workers.

Snippets takes the idea and principles of Turpentine further. Much further. By building a parser directly in the dashboard it puts the power directly into the hands of users and gives them a choice. You can tell us to migrate everything we can into Rules with the remaining code migrated into Snippets, or, you can choose to tell us to migrate everything into discrete Snippets. It’s your call.
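To give a feel for the output, a simple VCL URL rewrite (say, mapping /old/* to /new/*) might come out the other side as a Snippet along these lines; the paths are examples and the generated code will naturally differ:

export default {
  async fetch(request) {
    const url = new URL(request.url);
    // Equivalent of a VCL regsub that rewrites /old/* to /new/* before hitting the origin.
    if (url.pathname.startsWith("/old/")) {
      url.pathname = "/new/" + url.pathname.slice("/old/".length);
      return fetch(new Request(url, request));
    }
    return fetch(request);
  },
};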

We’ll give Apache .htaccess and NGINX configuration files the same treatment. The goal is for users to simply upload the files from their website’s Apache or NGINX configuration, and we generate suggested Snippets and/or rules.

The days of having to use legacy code for operational tasks are coming to an end. Snippets allow users to migrate these workloads to Cloudflare, and let them focus on the bigger problems of the business vs maintaining legacy systems.

The difference between Snippets and Workers

Most readers will already be familiar with Cloudflare Workers, our powerful developer platform which allows businesses to run and build entire products and solutions on Cloudflare’s global network. Snippets is also built on this platform, but has a few key differences.

The first major difference is that a Snippet will run as part of the Ruleset Engine as dedicated new phases, similar to Transform Rules and Cache Rules. Customers will be able to select and execute a Snippet based on any ruleset engine filter. This allows customers to run a Snippet against every request, or filter for specific HTTP traffic based on the fields we offer, such as traffic with a certain bot score, originating from a specific country, or with a specific cookie. Snippets will be additive, meaning users can have one Snippet to add an HTTP header, and another to rewrite the URL, and both will execute if they match.

Another major difference – Cloudflare Snippets are available for all plan levels, at no additional cost. 99% of users won’t pay a single cent, ever, to use this solution. This allows customers to migrate their simple workloads from legacy solutions like VCL to the Cloudflare platform, and actively reduce their monthly spend.

Snippets available per plan:
Free Plans: 5 Snippets per zone
Pro Plans: 20 Snippets per zone
Business Plans: 50 Snippets per zone
Enterprise Plans: 200 Snippets per zone (customers can speak with their Customer Success team to have this increased)

Cloudflare Snippets are lightweight when compared with Workers, offering 5ms maximum execution time, 2MB maximum memory and 32KB total package size. This comparably small footprint allows us to offer this to 99% of users at no additional cost, whilst also being sufficient for the identified use cases like HTTP header modification, URL rewriting and traffic routing – all of which don’t need the vast resources offered by Cloudflare Workers.

Cloudflare Snippets vs. Cloudflare Workers Unbound (for comparison):
Runtime support: JavaScript / JavaScript and WASM
Execution location: Global, all Cloudflare locations / Global, all Cloudflare locations
Triggers supported: Ruleset Engine filters / HTTP Request, HTTP Response, Cron Triggers
Maximum execution time: 5ms / 30 seconds (HTTP), 15 minutes (Cron Triggers)
Maximum memory: 2MB / 128MB
Total package size: 32KB / 5MB
Environment variables: 8 per Snippet / 64 per Worker
Environment variable size: 1KB / 5KB
Subrequests: 1 per request / 1,000 per request
Terraform Support
Wrangler Support
Cron Triggers
Key Value Store
Durable Objects
R2 Integration

What will you be able to build with Cloudflare Snippets?

Snippets will allow customers to migrate their existing workloads to Cloudflare. They will also open up a number of new possible use cases for customers. We have highlighted three common examples below, however there are many more to choose from.

Example 1: Sending suspect bots to a honeypot

When creating Snippets, customers will be able to access Cloudflare features available in the Workers runtime, such as the bot score field. This enables customers to forward an HTTP request to a honeypot, or use the RegExp JavaScript function to change the URL construct sent back to the end user, when traffic is assigned a bot score below a certain threshold, e.g. 29 or lower.

…
if (request.cf.botManagement.score < 30) {
  const honeypot = "https://example.com/";
  return await fetch(honeypot, request);
}
…
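
To make the fragment above easier to picture in context, here is a minimal sketch of a complete handler. It assumes Snippets will accept the same module-style fetch handler as Workers, which is our assumption rather than confirmed syntax, and example.com is a placeholder honeypot address:

export default {
  async fetch(request) {
    // Assumption: Snippets expose the same request.cf bot score as Workers.
    if (request.cf.botManagement.score < 30) {
      const honeypot = "https://example.com/"; // placeholder honeypot URL
      return fetch(honeypot, request);
    }
    // Traffic with a higher bot score passes through untouched.
    return fetch(request);
  },
};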

Example 2: Cookie modification

Another common use case we foresee Snippets addressing is cookie modification. Usage can range from simply setting an expiry five minutes in the future using the getTime and setTime JavaScript functions, to setting a dynamic cookie based on user request attributes for A/B testing purposes.

…
{
  let res = await fetch(request);
  res = new Response(res.body, res);
  // 7 days, where 24h * 60m * 60s * 1000ms = 86400000ms per day
  const expiry = new Date(Date.now() + 7 * 86400000).toUTCString();
  const group = request.headers.get("userGroup") == "premium" ? "A" : "B";
  res.headers.append(
    "Set-Cookie",
    `testGroup=${group}; Expires=${expiry}; path=/`
  );
}
…
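
For the five-minute expiry mentioned above, the same pattern applies with a smaller offset, for example const expiry = new Date(Date.now() + 5 * 60 * 1000).toUTCString();.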

Example 3: URI query management

Customers can also deploy Cloudflare Snippets to do complex operations such as splicing the URI query value to selectively remove or inject additional parameters. Query string manipulation is typically done using Transform Rules. However, with Transform Rules the set action is effectively a replace action. When applied to the URI query string, it will remove the entire value if there is one and set it to what the user specifies, thus overwriting it. This is a problem for customers who wish to selectively inject specific query parameters for matching traffic, for example setting an additional query parameter such as ?utm_campaign=facebook when common social media platforms are detected in the user agent. With Snippets, customers will be able to do this selective removal and insertion using a simple piece of JavaScript, e.g.

…
if (userAgent.includes("Facebook")) {
  const url = new URL(request.url);
  const params = new URLSearchParams(url.search);
  params.set("utm_campaign", "facebook");
  url.search = params.toString();
  const transformedRequest = new Request(url, request);
}
…

We are excited to see what other use cases Cloudflare Snippets unlock for our customers.

Will you stop adding actions to rulesets?

The simple answer is no! We will continue to build out our no-code actions within the ruleset engine, developing new products to solve customer needs.

It may sound obvious – but a core component of feature improvement is talking to customers. Talking to Snippets users will help us understand what real-life use cases Snippets help solve and highlight feature gaps in our product suite. We can then review whether it makes sense to productize that use case, or leave it as a Snippets use case.

We also understand that not everyone is a software developer. We are therefore exploring how we can make Snippets accessible to all by creating selectable templates available in a library that can be copied and modified by customers, with minimum coding knowledge required. With Snippets, powerful won’t mean difficult.

Accessing Cloudflare Snippets

Snippets are currently under development — you can sign up here to join the waitlist for access.

We hope to begin admitting users into the closed beta in early 2023, with an open beta to follow.

Dynamic URL redirects: 301 to the future

Post Syndicated from Sam Marsh original https://blog.cloudflare.com/dynamic-redirect-rules/

The Internet is a dynamic place. Websites are constantly changing as technologies and business practices evolve. What was front-page news is quickly moved into a sub-directory. To ensure website visitors continue to see the correct webpage even if it has been moved, administrators often implement URL redirects.

A URL redirect is a mapping from one location on the Internet to another, effectively telling the visitor’s browser that the location of the page has changed, and where they can now find it. This is achieved by providing a virtual ‘link’ between the content’s original and new location.

URL Redirects have typically been implemented as Page Rules within Cloudflare; however, Page Rules only match on the URL, rather than other elements such as the visitor’s source country or preferred language. This limitation meant customers with a need for more dynamic URL redirects had to implement alternative solutions such as Cloudflare Workers to achieve their goals.

To simplify the management of these more complex use cases we have created Dynamic Redirects. With Dynamic Redirects, users can redirect visitors to another webpage or website based upon hundreds of options such as the visitor’s country of origin or language, without having to write a single line of code.

More than a URL

For nine years users were limited to 125 URL redirects per zone. This limitation meant those with a need for more URL redirects had to implement alternative solutions such as Cloudflare Workers to achieve their goals.

In December 2021, we launched Bulk Redirects, allowing up to 100,000 URL redirects per account at the time. In April 2022 we increased this maximum to over six million URL redirects per account. However, a gap in the ‘URL redirect’ product offering remained unfulfilled. Until now.

Bulk Redirects, much like the ‘Forwarding URL’ Page Rule, are prescriptive URL redirects. You tell us what URL to look for, and where to redirect the user to when they visit it. We can support this use case at a huge scale.

If a visitor asks for…                      Redirect them to…
https://www.cloudflare.com/r2-storage       https://www.cloudflare.com/products/r2
https://www.cloudflare.com/apishield        https://www.cloudflare.com/products/api-gateway
https://www.cloudflare.com/welcome-center   https://developers.cloudflare.com/fundamentals/get-started/

That’s a simple concept to understand, however user needs have evolved. What if a user wanted to redirect visitors to a localized version of the requested page based on their preferred language? What if a user wanted to redirect visitors to their local subsidiary on the website? Or direct them to an optimized site when they visit from a mobile device? Suddenly, this well understood concept doesn’t work – and they have to deploy code in Workers to solve what is actually a common problem. And common problems deserve to be productized.

This is where Dynamic Redirects can help. The new product provides the same consistent user interface as Transform Rules, Custom Rules, Bulk Redirects, etc. and provides a new action allowing for the target URL to be dynamically created, much like the dynamic rewrite action offered in Transform Rules.

This dynamic action frees the user from having to define explicitly what the target URL should look like, and instead provides them with a full gamut of fields and functions to custom generate the target URL based upon the parameters of the request. For example, rather than redirecting all traffic for www.example.com/shop to www.example.com/en/shop, users can conceptually redirect the traffic to www.example.com/{PREFERRED_LANGUAGE}/shop (not actual syntax!). With this, traffic from a browser with a preferred language of French will be redirected to www.example.com/fr/shop, likewise those with a preferred language of German will be redirected to www.example.com/de/shop.

Dynamic URL redirects: 301 to the future

The other big difference between Dynamic Redirects and  Page Rules is in the filtering. Page Rules are limited to filtering on a URL, or a URL with asterisks as wildcards. Dynamic Redirects is built atop our lightning-fast Rulesets Engine, which also runs products such as Transform Rules, Custom Rules (WAF), Bulk Redirects and API Shield.

Due to this, Dynamic Redirects offers almost the entire suite of Ruleset Engine fields for use in filtering; from http.request.full_uri for the whole URL, to ip.geoip.country (where is the visitor located) and http.request.accepted_languages[] (the language preferred by the visitor). The possibilities are endless.
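
For instance, an expression such as ip.geoip.country eq "DE" and starts_with(http.request.uri.path, "/shop") could scope a redirect to German visitors of the shop section (an illustrative expression only; exact field and function availability varies by plan).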

Users can also now use logical operators such as ‘OR’. Previously, if a user wanted to redirect five distinct URLs to the same URL they would need to deploy five Page Rules. Today, they can simply use an ‘OR’ to consolidate this use case into just one Dynamic Redirect rule:

#  URL                                                          Destination URL
1  https://www.cloudflare.com/partners/integrations/            www.cloudflare.com/partners/
2  https://www.cloudflare.com/partners/become-a-partner/        www.cloudflare.com/partners/
3  https://www.cloudflare.com/partners/digital-agency/          www.cloudflare.com/partners/
4  https://www.cloudflare.com/partners/technology-integrator/   www.cloudflare.com/partners/
5  https://www.cloudflare.com/partners/view-partners/           www.cloudflare.com/partners/

#  Expression                                                                                                Destination URL
1  (http.request.full_uri eq "https://www.cloudflare.com/partners/integrations/") or (http.request.full_uri eq "https://www.cloudflare.com/partners/become-a-partner/") or (http.request.full_uri eq "https://www.cloudflare.com/partners/digital-agency/") or (http.request.full_uri eq "https://www.cloudflare.com/partners/technology-integrator/") or (http.request.full_uri eq "https://www.cloudflare.com/partners/view-partners/")   www.cloudflare.com/partners/

We can further simplify this use case in the future by adding hostname lists, allowing users to add URLs to a list and reference it from within the rule expression, similar to IP Lists. This allows an expression like (http.request.full_uri in $vanity_urls), for example.

A dedicated quota, just for U(RL)

Page Rules are used to set everything from configuration and caching behaviour to header modification and also URL redirection (Forwarding URL). This means that users tend to run out of available rules quickly.

To address this, we’re matching the Page Rule quota in each of the four new products that are being announced today. This means where in Page Rules an Enterprise customer would get 125 Page Rules to share amongst the aforementioned functions, in Dynamic Redirects they have 125 rules just for redirects. This number can be increased for Enterprise customers, also.

We’re also increasing the quota for free plans; where today free plans get three Page Rules, they will now get 10 rules for dynamic redirects, along with each of the other three products (cache, origin, config rules). That’s a net increase of 37 additional rules!

Plan         Page Rules   Dynamic Redirects
Enterprise   125          125+
Business     50           50
Pro          20           25
Free         3            10

Users can now get more out of their Cloudflare setup, being more specific about when traffic is redirected, optimizing cache and adjusting which settings are and aren’t applied – without having to trade off between these areas due to a limit in rules quota.

Localized redirects

One of the examples covered earlier is being able to redirect visitors to localized content depending on their preferred language.

This can be done by analyzing the ‘Accept-Language’ header sent by the browser, which is stored as an array in the field http.request.accepted_languages[]. This field is an array of the values received by Cloudflare within the Accept-Language HTTP request header, sorted in descending weight order. This header is calculated based on the preferences set by the visitor in the ‘Language’ section of their web browser.

For example, if the visitor’s browser sends an ‘Accept-Language’ header containing “Accept-Language: fr-CH, fr;q=0.8, en;q=0.9, de;q=0.7, *;q=0.5”, then the field http.request.accepted_languages[0] would contain “fr-CH” (the entry with the highest implicit weight), with http.request.accepted_languages[1] containing “en” and so forth.

With this information, we can create a dynamic redirect using the action:

Dynamic URL redirects: 301 to the future

With this rule in place, traffic from visitors with a preferred language of English (en) will be redirected to www.example.com/en/shop. The rule can be duplicated for other languages also, ensuring those with a preferred language of French (fr) will be redirected to www.example.com/fr/shop.
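
As an alternative to duplicating the rule per language, a single dynamic target could be built with the Rules language concat() function, for example concat("https://www.example.com/", http.request.accepted_languages[0], "/shop") (an illustration on our part, not the exact dashboard syntax). In practice the language tag may need normalizing, since a visitor preferring fr-CH would otherwise be redirected to /fr-CH/shop rather than /fr/shop.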

There are so many use cases for Dynamic Redirects we couldn’t fit them all in this blog.

Other possible use cases include mobile redirects, redirects based on cookies, redirects to different endpoints based on request headers for live testing. The potential list is huge, and we can’t wait to hear what you come up with. Try it today!

The future of Page Rules

Post Syndicated from Sam Marsh original https://blog.cloudflare.com/future-of-page-rules/

Page Rules is one of our most well-used products. Adopted by millions of users, it is used for configuring everything from cache to security levels. It is the ‘If This Then That’ of Cloudflare. Where the ‘If…’ is a URL, and the ‘Then That’ is changing how we handle traffic to specific parts of a ‘zone’. But it’s not without its limitations.

Page Rules can only trigger on a URL or URL pattern. There is a maximum of 125 Page Rules per zone. Page Rules are also tricky to debug. Even the idea of a “Page” sounds rather old-fashioned now.

We’re replacing Page Rules with four new dedicated products, offering increased rules quota, more functionality, and better granularity. These products are available immediately for testing. Page Rules is not going away yet, but we do anticipate being able to formally begin the end-of-life process soon.

Why change?

In the 10 years since it launched, Page Rules has become a well established product, and a very well adopted one. One million Page Rules have been deployed in the past three months alone.

Page Rules are used to tune how long files should be cached. Page Rules are used to override zone-wide settings for certain URLs. Page Rules are used to create simple URL redirects. Page Rules are used to selectively add/remove HTTP headers. Page Rules is a multitool of a product.

The future of Page Rules
Photo by Andrey Matveev on Unsplash

Like multitools and other generalist products, Page Rules does a good job at many things, but is not best-of-breed at anything. This is the trade-off of generalism. As we have grown as a company our customers have rightfully demanded more. Filtering on the URL alone is no longer sufficient; users are demanding more – and today we are delivering.

Over the last two years we have been working on the future of Page Rules and distilling hundreds of pieces of feedback into common themes, such as:

  1. I need more than 125 Page Rules
  2. I need to be able to trigger Page Rules on more than just the URL
  3. I need to be able to use regular expressions in my Page Rules
  4. It’s hard for me to understand how different Page Rules interact with one another
  5. Page Rules is hard to debug
  6. I want more actions in Page Rules

Analyzing these themes we came to the conclusion that the best thing for Page Rules was to disassemble it and create new discrete products, each of which could be best-of-breed for their relevant areas. This dissolution would also provide better clarity regarding interoperation (cache vs configuration vs …), and make debugging simpler.

Today, we announce those new products:

1. Cache Rules: A dedicated product for setting and tuning ‘everything caching’.

2. Configuration Rules: A dedicated product for setting and selectively enabling, disabling and overriding zone-wide settings.

3. Dynamic Redirects: Like ‘Forwarding URL’ but turned up to 11. Redirect based on the visitor’s country, preferred language or device type, use regular expressions (plan level dependent), and more.

4. Origin Rules: A dedicated product for ‘where does this traffic go when it leaves Cloudflare’. Not only have we added host header and resolve override into this new product (ENT only), we’ve also productized another common Workers use case by enabling customers to selectively override the destination port. We’ve also added the ability to override the Server Name Indication (SNI).

All four of these products are available for use now via the dashboard, API and Terraform – and, sitting alongside Transform Rules, they provide the replacement suite of products that will eventually enable a Page Rules end-of-life announcement.

We have dedicated blogs for each of these product launches with more information on what they offer and problems they address.

Order of execution

One of the main benefits of this new product suite is clarity.

Page Rules is a black box, where traffic goes in, ‘things happen’, and traffic comes out. It’s hard to debug the interplay between cache, configuration, header modification etc and it can vary from zone to zone as it’s entirely user defined.

By having discrete, separate areas per ‘function’, it makes visualizing the flow of an HTTP request much easier:

The future of Page Rules

Rather than a single lozenge of Page Rules, we now have visibility that Origin Rules will run first, then Cache Rules, then Configuration Rules, and finally Dynamic Redirects. This means we will modify host headers first, before tuning cache settings. And we will tune the cache parameters before we modify which settings are enabled for the specific traffic.

We have integrated these new products into the Traffic Sequence dashboard element also.

(For zones using both Page Rules and this new suite of products: The new products will take precedence over Page Rules. This means that Page Rules will be overridden if a conflict occurs).

I need more than 125 Page Rules

One of the limitations of Page Rules was how each Page Rule was stored and executed on our backend architecture. We are unable to offer more than 125 Page Rules per zone before we begin to see a performance hit – latency on HTTP requests begins to increase as evaluating them against the Page Rules takes longer and longer. To work around this limitation users moved simple workloads to Workers, or split the zone into multiple subdomains, each with its own quota of 125 Page Rules. Neither of these is ideal for the customer.

To combat this limitation we have built all of the replacement products atop our lightning-fast Rulesets Engine, which also runs products such as Transform Rules, Custom Rules (WAF), Bulk Redirects and API Shield.

This allows us to offer much more quota to customers, as this engine is built to scale well beyond 125 rules per product. The table below summarizes the before and after impact of these new products, showing the default rules quota per plan:

Plan         Page Rules   Origin Rules   Cache Rules   Config Rules   Dynamic Redirects
Enterprise   125          125+           125+          125+           125+
Business     50           50             50            50             50
Pro          20           25             25            25             25
Free         3            10             10            10             10

Additional rules cannot be purchased for these new products.

This means zones on the Enterprise plan now have a minimum of 500 rules to use, where before they had 125 via Page Rules. For Enterprise customers the quota for the new products is negotiable. Pro plan zones go from 20 Page Rules to 100. Combined with the fine-grained control that the ruleset engine offers, these changes allow customers to customize their zone’s traffic to the finest of margins.

The other benefit of building all of these products on the Rulesets Engine is extensibility. Currently there are over 30 products built and running on the Rulesets Engine. Each of these products is essentially a logical bucket called a ‘phase’ which contains a single ruleset scoped to that product. Each phase is restricted to specific actions and fields. For example, the field cf.bot_management.score is unavailable in http_request_transform as we have not calculated the bot score at the time we perform URL rewrites, and only the rewrite action is permitted there, whereas in Origin Rules (http_request_origin) we only allow the route action.

When we create new capabilities for a product that is built atop the Rulesets Engine it is trivially simple for us to extend that new capability to other products at a later date.

For example, we added a new ‘field’, http.request.accepted_languages to Transform Rules earlier in the year. Until recently this was only available in Transform Rules. However, as both products are built on the Rulesets Engine it was trivial to enable this feature for Dynamic Redirects. This allows customers to perform URL redirects based on the visitor’s language preference, and the cost to us from an engineering perspective is negligible as the field is already implemented.

This also means in the future should a new field be created for Cache Rules due to a customer request, e.g. http.request.super_cool_field, in a matter of clicks we can enable this field for any of the other 30 products rather than have to duplicate effort across multiple platforms.

Simply put, the more products we build on top of the Rulesets Engine, the faster we can move and the more functionality we can put into users’ hands.

A single user experience

The most important benefit of all is consistency. Each of these products has a consistent and predictable API. A consistent and predictable Terraform configuration, and a consistent and predictable user experience in the dashboard. The ruleset engine allows us to keep everything the same except for the ‘action’. The filtering stays the same, the API stays the same, the UI stays (largely) the same, the only change is the ‘…then’, the action section of the rule.

This ensures that as a user when you are clicking around the dashboard setting up a new zone you aren’t having to learn each individual product’s page and how to navigate it. The only part you need to learn is what makes that product unique, its actions:

The future of Page Rules

Finally, when we add a new product, extending the Terraform provider to support it is trivial. That consistent experience has been a north star for us during this project and will continue to be in the future.

Try them now

We are replacing Page Rules with a new suite of products, each built to be best-of-breed and put more power into the hands of our users.

Read more about the new products in each of their dedicated blogs. Then, try them for yourself!

Managed Transforms: templated HTTP header modifications

Post Syndicated from Sam Marsh original https://blog.cloudflare.com/managed-transforms-templated-http-header-modifications/

Managed Transforms is the next step on a journey to make HTTP header modification a trivial task for our customers. In early 2021 the only way for Cloudflare customers to modify HTTP headers was by writing a Cloudflare Worker. We heard from numerous customers who wanted a simpler way.

In June 2021 we introduced Transform Rules, giving customers a simple UI to specify the custom HTTP header’s name and value: either a static string (e.g. X-My-CDN: Cloudflare) or a dynamically populated value (e.g. X-Bot-Score: cf.bot_management.score).

This made the job much simpler, however there is still a good amount of thought required—with a number of potential drop-off points on the user journey. For example, in order to dynamically populate the bot score into the value of an HTTP request header, the user needs to know the correct field name. To find that they’ll need to go to the documentation site, find the correct section, etc.

When we analyzed how our customers use Transform Rules we found a set of very common use cases in the data. Four of the top eight fields used related to bot management; customers wanting to have the bot score, JA3 hash, etc. of each request added as an HTTP header. A further three of the top 10 fields related to the geographic location of the visitor (their city, country and ASN). We also saw over 400 Transform Rules being used just to remove X-Powered-By. That means potentially 400 users went to the same part of the dashboard, typed the same header name, read the same documentation and selected the same action.

Much as we set out to productize the common Cloudflare Worker use case of HTTP header modification into Transform Rules, we have now set out to productize and further simplify the common Transform Rules use cases into Managed Transforms.

The intention is to continue to identify common reasons for use of a Transform Rule and where possible package them up into a single click solution.

We always want to make our user’s lives as easy as possible, and finding a way to stop hundreds of people typing the same thing and clicking the same buttons, to achieve the exact same outcome, seems a great way to continue that mission.

An even simpler solution

Managed Transforms is a dedicated section of Transform Rules offering one-click HTTP header modifications. Want to add relevant Cloudflare Bot Management information as custom HTTP headers? One click. Want to add geographic information (country, etc.) as custom HTTP headers? One click.

Managed Transforms: templated HTTP header modifications

Managed Transforms can be found under ‘Rules > Transform Rules’, by clicking on the ‘Managed Transforms’ button. To benefit from Managed Transforms, users simply toggle the appropriate settings, and we take care of the rest.

For example, to enrich every HTTP request with Cloudflare’s Bot Management information users would enable ‘Add bot protection headers’. This setting ensures we add four new HTTP request headers to every HTTP request. SIEM (Security Information and Event Management) products can then be configured to correctly collect and chart these new headers, allowing customers to see the bot score of every HTTP request, how many requests are coming from verified bots, and so on.

Another great use case is the ‘Add security headers’ toggle. On a completely standard, default zone, a user can improve their website’s security score from an F to a C in just one click. Enabling HSTS improves the score to a B (scores correct as of June 7, 2022).

Managed Transforms: templated HTTP header modifications

Adding a Content-Security-Policy (used to mitigate Cross-Site Scripting (‘XSS’) attacks) or a Permission-Policy (used to give websites the ability to allow or block the use of browser features) increases the score to an ‘A’; the addition of both improves the score to the maximum: A+.

Managed Transforms: templated HTTP header modifications

During the design of Managed Transforms we chose not to include default Content-Security-Policy and Permission-Policy HTTP response headers within the ‘Add security headers’ toggle as we found these particular headers to be very specific to each individual website. Any default policies we tried either caused incorrect loading of the website content, or were too open to be of any value. So we decided to remove them from scope.

However, users can still add these HTTP response headers and their appropriate values in a handful of clicks by creating a new Transform Rule:

Managed Transforms: templated HTTP header modifications

The HTTP response headers entered here will be added alongside the HTTP response headers added by Managed Transforms to give an A+ score.
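
For those looking for a starting point, the values entered are ordinary HTTP response headers; something like Content-Security-Policy: default-src 'self' together with Permissions-Policy: geolocation=(), camera=(), microphone=() is a common, conservative baseline, though as noted above both policies almost always need tailoring to the specific website before they can be deployed safely.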

Try it now

Managed Transforms can be used to improve operations, remove sensitive data, and increase security, amongst other common use cases.

Try out Managed Transforms yourself today.

Maximum redirects, minimum effort: Announcing Bulk Redirects

Post Syndicated from Sam Marsh original https://blog.cloudflare.com/maximum-redirects-minimum-effort-announcing-bulk-redirects/

The Internet is a dynamic place. Websites are constantly changing as technologies and business practices evolve. What was front-page news is quickly moved into a sub-directory. To ensure website visitors continue to see the correct webpage even if it has been moved, administrators often implement URL redirects.

A URL redirect is a mapping from one location on the Internet to another, effectively telling the visitor’s browser that the location of the page has changed, and where they can now find it. This is achieved by providing a virtual ‘link’ between the content’s original and new location.

URL Redirects have typically been implemented as Page Rules within Cloudflare, up to a maximum of 125 URL redirects per zone. This limitation meant customers with a need for more URL redirects had to implement alternative solutions such as Cloudflare Workers to achieve their goals.

To simplify the management and implementation of URL redirects at scale we have created Bulk Redirects. Bulk Redirects is a new product that allows an administrator to upload and enable hundreds of thousands of URL redirects within minutes, without having to write a single line of code.

We’ve moved!

Mail forwarding is a product offered by postal services such as USPS and Royal Mail that allows you to continue to receive letters and parcels even if they are sent to an address where you no longer reside.

The postal services achieve this by effectively maintaining a register of your new location and your old location. This allows the systems to detect ‘this letter is for Sam Marsh at address A, but he now lives at address B, therefore send the mail there’.

Maximum redirects, minimum effort: Announcing Bulk Redirects
Photo by Christian Lue on Unsplash

This problem can be solved by manually updating my bank, online shops, etc. and having them send the parcels and letters directly to my new address. However, that assumes I know of every person and business who has my address. And it also relies on those people to remember I moved address and make updates on their side. For example, Grandma Marsh might have forgotten about my new address — I’ve moved a lot — and she may send my birthday card to my old address. Or all those Christmas cards from people who I don’t speak with regularly. Those will go to my old address also.  To solve this, I can use mail forwarding to ensure I still receive my cards and other mail, even though I no longer live at that address.

URL redirects are the Internet equivalent of mail forwarding.

URL redirects are effectively a table with two columns; what traffic am I looking for, and where should I send that traffic to? This mapping allows an administrator to define “whenever visitors go to https://www.cloudflare.com/bots I want to redirect them to the new location https://www.cloudflare.com/pg-lp/bot-mitigation-fight-mode”.

With this technology, our sales and marketing teams can use the vanity URL all across the Internet, safe in the knowledge that should the backend systems change they won’t need to go to all the places this URL has been posted and update it. Instead, the intermediary system that handles the URL redirects can be updated. One location. Not thousands.

Why use URL redirects?

URL redirects are used to solve a number of use cases. One such common use case is to use URL redirects to force all visitors to connect to the website over a secure HTTPS connection, instead of via plain HTTP, to improve security. It’s such a common use case we created a toggle in the Cloudflare dashboard, “Always use HTTPS”, which redirects all HTTP requests to HTTPS when enabled.

URL redirects are also used for vanity domains and hyperlinks. In these scenarios, URL redirects are deployed to provide a mapping of short, user-friendly URLs to long, server-friendly URLs. Not only are shorter URLs more memorable, but they also score better from an SEO perspective. According to Backlinko, ‘Excessively long URLs may hurt a page’s search engine visibility. In fact, several industry studies have found that short URLs tend to have a slight edge in Google’s search results.’

Maximum redirects, minimum effort: Announcing Bulk Redirects
Photo by Hal Gatewood on Unsplash

Another use case is where a company may have a local domain for each of their markets, which they want to redirect back to the main website, e.g., redirect www.example.fr and www.example.de to www.example.com/eu/fr and www.example.com/eu/de, respectively.

This also covers company acquisition, where a company is acquired and the acquiring company wants to redirect hyperlinks to the relevant pages on their own website, e.g., redirect www.example.com to www.companyB.com/portfolio/example.

Finally, one of the most common use cases for URL redirects is to maintain uptime during a website migration. As companies migrate their websites from one platform to another, or one domain to another, URL redirects ensure visitors continue to see the correct content. Without these URL redirects, hyperlinks in emails, blogs, marketing brochures, etc. would fail to load, potentially costing the business revenue in lost sales and brand damage. For example, www.example.com/products/golf/product-goes-here would redirect to the new website at products.example.com/golf/product-goes-here.

How are URL redirects implemented today?

Ensuring these URL redirects are executed correctly is often the job of the reverse proxy — a server which sits between the client and the origin whose job is, amongst many others, to re-route received traffic to the correct destination.

For example, when using NGINX, a popular web server, the administrator would have a line in the config similar to the one below to implement a URL redirect:

`rewrite ^/oldpage$ http://www.example.com/newpage permanent;`
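
For comparison, a broadly equivalent redirect in an Apache configuration or .htaccess file might be a single directive such as `Redirect permanent /oldpage http://www.example.com/newpage`.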

Historically, these web servers were located physically within a company’s data center. Administrators then had full control over the URLs received, and could create the redirect rules as and when needed.

As the world rapidly migrates on-premise applications and solutions to the cloud, administrators can find themselves in a situation where they can no longer do what they previously could. Not being responsible for the origin has a number of benefits, but it also comes with drawbacks such as lack of ‘control’. Previously, an administrator could quickly add a few config lines to the web server in front of their ecommerce platform. Moving to an online hosted platform makes this much more difficult to do.

As such, administrators have moved to platforms like Cloudflare where functionality such as URL redirects can be implemented in the cloud without the need to have administrator access to the origin.

The first way to implement a URL Redirect in Cloudflare is via a Forwarding URL Page Rule. Users can create a Page Rule which matches on a specific URL and redirects matching traffic to another specific URL, along with a status code — either a permanent redirect (301) or a temporary redirect (302):

Maximum redirects, minimum effort: Announcing Bulk Redirects

Another method is to use Cloudflare Workers to implement URL redirects, either individually or as a map. For example, the code below is used to create a URL redirect map which runs when the Worker is invoked:

const redirectMap = new Map([
 ["/bulk1", "https://" + externalHostname + "/redirect2"],
 ["/bulk2", "https://" + externalHostname + "/redirect3"],
 ["/bulk3", "https://" + externalHostname + "/redirect4"],
 ["/bulk4", "https://google.com"],
])

This snippet is taken from the Cloudflare Workers examples library and can be used to scale beyond the 125 URL redirect limit of Page Rules. However, it does require the administrator to be comfortable working with code and correctly configuring their Cloudflare Workers.
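
For readers curious how such a map might be wired up, here is a minimal sketch of a surrounding handler. The handler shape is our illustration rather than the code from the examples library, and it assumes the redirectMap above is in scope:

export default {
  async fetch(request) {
    // Look up the requested path in the redirect map defined above.
    const path = new URL(request.url).pathname;
    const location = redirectMap.get(path);
    if (location) {
      // Issue a permanent (301) redirect to the mapped location.
      return Response.redirect(location, 301);
    }
    // No mapping found: pass the request through to the origin.
    return fetch(request);
  },
};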

Introducing: Bulk Redirects

Speaking with Cloudflare users about URL redirects and their experience with our product offerings, “Give me a product which lets me upload thousands of URL redirects to Cloudflare via a GUI” was a very common request. Customers we interviewed typically wanted a simple way to upload a list of ‘from,to,response code’ without having to write a single line of code. And that’s what we are announcing today.

Bulk Redirects is now available for all Cloudflare plans. It is an account/organization-level product capable of supporting hundreds of thousands of URL redirects, all configured via the dashboard without having to write a single line of code.

The system is implemented in two parts. The first part is the Bulk Redirect List. This is effectively the redirect map, or ‘edge dictionary’, where users can upload their URL redirects:

Maximum redirects, minimum effort: Announcing Bulk Redirects

Each URL redirect within the list contains three main elements. The first two elements are Source URL (the URL we are looking for) and Target URL (the URL we are going to redirect matching traffic to).

There is also the Status code. This is the ‘type’ of redirect. In addition to 301 (Moved Permanently) and 302 (Moved Temporarily) redirects, we have added support for the newer 307 (Temporary Redirect) and 308 (Permanent Redirect) redirect status codes.

We have added support for specifying destination ports within the Target URL field also, allowing URL redirects to non-standard ports, e.g., “Target URL: www.example.com:8443”.

If you have many URL redirects, you can upload them via a CSV file.

There are also four additional parameters available for each individual URL redirect.

Firstly, we have added two options to replace the ambiguity and confusion caused by the use of asterisks as wildcards. Take this source URL as an example: *.example.com/a/b. Would you expect www.example.com/a/b to match? How about example.com/a/b, or www.example.com/path*? Asterisks used as wildcards cause confusion and misunderstanding, and also increase the cost of implementation and maintenance from an engineering perspective. Therefore, we are not implementing them in Bulk Redirects.

Instead, we have added two discrete options: Include subdomains and Subpath matching. The Include subdomains option, once enabled, will match all subdomains to the left of the domain portion of the URL as well as the domain specified. For example, if there is a URL redirect with a source URL of example.com/a then traffic to b.example.com/a and c.b.example.com/a will also be redirected.

The Subpath matching option focuses on the opposite end of the URL. If this option is enabled, the redirect applies to the URL as well as all its subpaths. For example, if we have a URL redirect on www.example.com/foo with subpath matching enabled, we will match on that specific URL as well as all subpaths, e.g., www.example.com/foo/a, www.example.com/foo/a/, www.example.com/foo/a/b/c, etc., but not www.example.com/foobar.

These options provide a tremendous amount of flexibility and granularity for each URL redirect. However, for most use cases only the source URL, target URL, and status code options will need to be set.

Secondly, we have added two options relating to retaining portions of the original HTTP request: Preserve path suffix and Preserve query string. If subpath matching is enabled, Preserve path suffix can be used to copy the URI path from the originally requested URL and add it to the destination URL. For example, if there is a URL redirect of Source URL: example.co.uk, Target URL: www.example.com/a, then requests to example.co.uk/target will be redirected to www.example.com/a/target with both options enabled. Preserve query string can be used independently of the other options, and carries forward the URI query from the originally requested URL to the new URL.

Lists by themselves do not provide any redirection, they are simply the ‘lookup table’. To enable them we need to reference them via a Bulk Redirect Rule.

The rules themselves are very simple. By default, the user experience is to provide a name for the rule, a description, and select the Bulk Redirect List that should be invoked.

Maximum redirects, minimum effort: Announcing Bulk Redirects

For users who require more granularity and control there are additional settings available under the Advanced options toggle. Within this section there are two editable sections:  Expression and Key.

Maximum redirects, minimum effort: Announcing Bulk Redirects

The first field, Expression, specifies the conditions that must be met in order for the rule to run. By default, all URL redirects of the specified list will apply.

The second field, Key, is closely related to the expression. The key is used in combination with the specified list to select the URL redirect to apply. The field used for the key should always be the same as the field used in the expression, i.e., the key should be http.request.full_uri if the field in the expression is http.request.full_uri, or conversely, the key should be raw.http.request.full_uri if the field in the expression is raw.http.request.full_uri.

There are two main use cases for modifying these settings. Firstly, users can edit these options to increase specificity in the trigger, e.g., ip.src.country == "GB" and http.request.full_uri in $redirect_list. This is useful for ensuring Bulk Redirect Lists are only applied when a visitor comes from specific countries, subnets, or ASNs — or also only applying a URL redirect list if the visitor is a verified bot, or the bot score is >35.

Secondly, users can edit these options to amend the URL being matched and used as a lookup in the given list, i.e., the user may choose to have URL redirects in their list(s) specifically for URLs that would be normalized, e.g., URLs containing specific percent-encoding.  To ensure these URL redirects still trigger, the settings in Advanced options should be used to edit the expression and key to use the raw.http.request.full_uri field instead.

Automating via the API

Another way to manage bulk redirects is via our API. Customers wishing to automate the addition of bulk redirects can use the API to either simply add URL redirects to an existing list, or automate the entire workflow — creating a list, adding URL redirects to the list, and enabling the list via a new redirect rule.

There are three main calls when creating bulk redirects via the API:

  1. Create the redirect list
  2. Load with URL redirects
  3. Enable via a rule (You will also need to create the ruleset if doing this for the first time).

For step 1, first create a mass redirect list via the API call:

curl --location --request POST 'https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/rules/lists' \
--header 'X-Auth-Email: <EMAIL_ADDRESS>' \
--header 'Content-Type: application/json' \
--header 'X-Auth-Key: <API_KEY>' \
--data-raw '{
 "name": "my_redirect_list_2",
 "description": "My redirect list 2",
 "kind": "redirect"
}'

The output will look similar to:

{
  "result": {
    "id": "499b94da726d4dbc9ce6bf6c96ef8b6a",
    "name": "my_redirect_list_2",
    "description": "My redirect list 2",
    "kind": "redirect",
    "num_items": 0,
    "num_referencing_filters": 0,
    "created_on": "2021-12-04T06:43:43Z",
    "modified_on": "2021-12-04T06:43:43Z"
  },
  "success": true,
  "errors": [],
  "messages": []
}

Capture the value of “id”, as this is the list id we will then add URL redirects to.

Next, in step 2 we will add URL redirects to our newly created list by executing a POST call to the id we captured previously – with our URL redirects in the body:

curl --location --request POST 'https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/rules/lists/<LIST_ID>/items' \
--header 'X-Auth-Email: <EMAIL_ADDRESS>' \
--header 'Content-Type: application/json' \
--header 'X-Auth-Key: <API_KEY>' \
--data-raw '[
 {
   "redirect": {
     "source_url": "www.example.com/a",
     "target_url": "https://www.example.net/a"
   }
 },
 {
   "redirect": {
     "source_url": "www.example.com/b",
     "target_url": "https://www.example.net/a/b",
     "status_code": 307,
     "include_subdomains": true
   }
 },
 {
   "redirect": {
     "source_url": "www.example.com/c",
     "target_url": "www.example.net/c",
     "status_code": 307,
     "include_subdomains": true
   }   
 }
]'

The output will look similar to:

{
  "result": {
    "operation_id": "491ab6411acf4a12a6c72df1385b095a"
  },
  "success": true,
  "errors": [],
  "messages": []
}

In step 3 we enable this list by creating a new mass redirect rule within the mass redirect account-level ruleset.

Note, if this is the first time you are creating a redirect rule you will need to use a different API call to create the ruleset. See the documentation here for more details. All subsequent updates to the rulesets are made by calls similar to below.

Firstly, we need to find our account-level rulesets id. To do this we need to get a list of all account-level rulesets and look for the ruleset with the phase http_request_redirect:

curl --location --request GET 'https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/rulesets' \
--header 'X-Auth-Email: <EMAIL_ADDRESS>' \
--header 'Content-Type: application/json' \
--header 'X-Auth-Key: <API_KEY>'

The output will look similar to:

{
   "result": [
       {
           "id": "efb7b8c949ac4650a09736fc376e9aee",
           "name": "Cloudflare Managed Ruleset",
           "description": "Created by the Cloudflare security team, this ruleset is designed to provide fast and effective protection for all your applications. It is frequently updated to cover new vulnerabilities and reduce false positives.",
           "source": "firewall_managed",
           "kind": "managed",
           "version": "34",
           "last_updated": "2021-10-25T18:33:27.512161Z",
           "phase": "http_request_firewall_managed"
       },
       {
           "id": "4814384a9e5d4991b9815dcfc25d2f1f",
           "name": "Cloudflare OWASP Core Ruleset",
           "description": "Cloudflare's implementation of the Open Web Application Security Project (OWASP) ModSecurity Core Rule Set. We routinely monitor for updates from OWASP based on the latest version available from the official code repository",
           "source": "firewall_managed",
           "kind": "managed",
           "version": "33",
           "last_updated": "2021-10-25T18:33:29.023088Z",
           "phase": "http_request_firewall_managed"
       },
       {
           "id": "5ff4477e596448749d67da859230ac3d",
           "name": "My redirect ruleset",
           "description": "",
           "kind": "root",
           "version": "1",
           "last_updated": "2021-12-04T06:32:58.058744Z",
           "phase": "http_request_redirect"
       }
   ],
   "success": true,
   "errors": [],
   "messages": []
}

Our redirect ruleset is at the bottom of the output. Next we will add our new bulk redirect rule to this ruleset:

curl --location --request PUT 'https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/rulesets/<RULESET_ID>' \
--header 'X-Auth-Email: <EMAIL_ADDRESS>' \
--header 'Content-Type: application/json' \
--header 'X-Auth-Key: <API_KEY>' \
--data-raw '{
     "rules": [
   {
     "expression": "http.request.full_uri in $my_redirect_list",
     "description": "Bulk Redirect rule 2",
     "action": "redirect",
     "action_parameters": {
       "from_list": {
         "name": "my_redirect_list_2",
         "key": "http.request.full_uri"
       }
     }
   }
 ]
}'

The output will look similar to:

{
  "result": {
    "id": "5ff4477e596448749d67da859230ac3d",
    "name": "My redirect ruleset",
    "description": "",
    "kind": "root",
    "version": "2",
    "rules": [
      {
        "id": "615cf6ac24c04f439138fdc16bd20535",
        "version": "1",
        "action": "redirect",
        "action_parameters": {
          "from_list": {
            "name": "my_redirect_list_2",
            "key": "http.request.full_uri"
          }
        },
        "expression": "http.request.full_uri in $my_redirect_list",
        "description": "Bulk Redirect rule 2",
        "last_updated": "2021-12-04T07:04:16.701379Z",
        "ref": "615cf6ac24c04f439138fdc16bd20535",
        "enabled": true
      }
    ],
    "last_updated": "2021-12-04T07:04:16.701379Z",
    "phase": "http_request_redirect"
  },
  "success": true,
  "errors": [],
  "messages": []
}

With those API calls executed, our new list is created, loaded with URL redirects and enabled by the bulk redirect rule. Visitors to the URLs specified in our list will now be redirected appropriately.

Account-level benefits

One of the driving forces behind this product is the desire to make life easier for those customers with a large number of zones on Cloudflare. For these customers, URL redirects are a pain point when using Page Rules, as they need to navigate into each zone and configure URL redirects one at a time. This doesn’t scale very well.

Bulk Redirects add real value for customers in this situation. Instead of having to navigate into 400 zones and create one or two Page Rules for each, an administrator can now create and upload a single Bulk Redirect List, which contains all the URL redirects for the zones under management.

This means that if the customer simply wants 399 of those 400 zones to redirect to the “primary zone”, they can create a bulk redirect list with 399 entries, all pointing to example.com, and enable the Subpath matching and Include subdomains options on each. This vastly simplifies the management of the estate.

The same premise also applies to SSL for SaaS customers. For example, if example.com has 20 custom hostnames in their zone, customers can now create a Bulk Redirect List and Rule for each custom hostname, grouping each customer’s URL redirects into their own logical buckets.

Bulk Redirects is a game changer for companies with a large number of zones and customers under management.

Allowances

Bulk Redirects are available for all accounts. The packaging model for Bulk Redirects closely resembles that of “IP Lists”. Accounts are entitled to a set number of Edge Rules (from which “Bulk Redirect Rules” draws down), Bulk Redirect Lists, and URL Redirects depending on the highest Cloudflare plan within their account.

Feature                                       Enterprise   Business   Pro    Free
Edge Rules (for use of Bulk Redirect Rules)   50+          15         15     15
Bulk Redirect Lists                           25+          5          5      5
URL Redirects                                 10,000+      500        500    20

For example, an account with ten zones, all on the Free plan, would be entitled to 15 Edge Rules, 5 Bulk Redirect Lists, and 20 URL Redirects that can be stored within those lists.

An account with one Pro zone and 2 Free plan zones would be entitled to 15 Edge Rules, 5 Bulk Redirect Lists, and 500 URL Redirects that can be stored within those lists.

Enterprise customers have a default of 10,000 URL Redirects to be used across 25 lists. However, these numbers are negotiable on enquiry.

Planned enhancements

We intend to make a number of incremental improvements to the product in the coming months, specifically to the list experience to allow for the editing of URL redirects and also for searching within lists.

In the near future we intend to bring to market a product to fulfill the other common request for URL redirects, and deliver ‘Dynamic URL Redirects’. Whilst Bulk Redirects supports hundreds of thousands of URL redirects, those URL redirects are relatively prescriptive — from a.com/b to b.com/a, for example.  There is still a requirement for supporting more complex, rich URL redirects, e.g., device-specific URL redirects, country-specific URL redirects, URL redirects that allow regular expressions in their target URL, and so forth. We aspire to offer a full range of functionality to support as many use cases as possible.

Try it now

Bulk Redirects can be used to improve operations, simplify complex configurations, and ease website management, amongst many other use cases.  Try out Bulk Redirects yourself today.

Modifying HTTP response headers with Transform Rules

Post Syndicated from Sam Marsh original https://blog.cloudflare.com/transform-http-response-headers/

HTTP headers are central to how the web works. They are used for passing additional information between the client and server, such as which security permissions to apply and information about the client, allowing the correct content to be served.

Today we are announcing the immediate availability of the third action within Transform Rules, “HTTP Response Header Modification”, available for all Cloudflare plans. This new functionality provides Cloudflare users the ability to set or remove HTTP response headers as traffic returns through Cloudflare back to the client. This allows customers to enrich responses with information about how their request was handled, debugging information and even recruitment messages.

Previously, HTTP response header modification was done using a Cloudflare Worker. Today we’re introducing an easier way to do this without writing a single line of code.

Luggage tags of the World Wide Web

Modifying HTTP response headers with Transform Rules

Think of HTTP headers as the “luggage tag” attached to your bags when you check in at the airport.

Generally, you don’t need to know what those numbers and words mean. You just know they are important in getting your suitcase from the boarding desk, to the correct airplane, and back to the correct luggage carousel at your destination.

These tags contain information about the weight of the suitcase, the destination airport code, baggage tag number, airline carrier, customs handling information, and more. These attributes are all essential, not only for ensuring that your luggage arrives at the correct destination, but also it does so in the safest, most efficient manner.

HTTP headers are the luggage tags of the Internet. They are essential to ensuring the request from your browser arrives at the correct destination, and that traffic is returned to your browser using the correct settings also in the safest, most efficient manner.

How are HTTP response headers used?

HTTP headers are set on both the ‘request’ and ‘response’ interactions; ‘request’ being when the client asks for the file and ‘response’ being what the server returns as a result. The functionality announced today pertains specifically to HTTP response headers.

HTTP response headers are used to ensure the correct data is returned to the browser along with information which helps the browser handle the data correctly. Common response headers include “Content-Type” which tells the browser the type of the content returned, e.g. “Content-Type: text/html” or “Content-Type: image/png”. Another common header is “Server:” which contains information about the software used to handle the HTTP request, e.g. “Server: cloudflare”.

Outside of basic HTTP traffic handling there are many other uses for these response headers. One such example is to improve security. Security mechanisms such as Content Security Policy (CSP), Cross Origin Resource Sharing (CORS) and HTTP Strict Transport Security (HSTS) are all implemented as response headers to improve and harden security for website visitors.

For example, the primary goal of CSP is to mitigate and report Cross-Site Scripting (XSS) attacks. XSS attacks occur when a malicious script is injected into a trusted website, allowing an attacker to use an application to send malicious code such as a browser-side script to a different end user. This script can then be used to compromise the end user’s interactions with the website or application, siphoning sensitive information such as passwords to a third party.

To prevent this, CSP is added by the website administrator as a HTTP response header. The CSP response header specifies the domains that the browser should consider to be valid sources of executable scripts. A CSP compatible browser will then only execute scripts loaded in files received from those permitted domains, ignoring all other scripts.

CSP is added to the HTTP response by setting the ‘Content-Security-Policy’ header along with the policy which is contained in the value. For example, when using NGINX, a popular web server, the administrator would have a line in the config similar to:

add_header Content-Security-Policy "default-src 'self';" always;

When using Cloudflare Workers, the code would be similar to:

response.headers.set("Content-Security-Policy", "default-src 'self' example.com *.example.com")

When the browser receives the HTTP response it will now detect the presence of the Content-Security-Policy header and act appropriately.

Dynamic modification of HTTP response headers

Ensuring these headers are present on the HTTP response is often the job of the reverse proxy: a server which sits between the client and the origin and which, amongst many other things, enriches the HTTP response data returned to the client.

“HTTP Response Header Modification” is now available for all Cloudflare plans, within Transform Rules. It provides the ability to modify HTTP response headers before they are returned to the visitor, all within Cloudflare. This is especially important when the response is coming from an origin the administrator does not have total control over, such as a SaaS provider or other third party service.

[Screenshot: HTTP Response Header Modification within Transform Rules]

Transform Rules allows users to modify up to ten HTTP response headers per rule using one of three options:

[Screenshot: the three header modification options in the rule builder]

‘Set dynamic’ should be used when the value of an HTTP response header needs to be populated dynamically for each HTTP response. Examples include adding the Cloudflare Bot Management ‘bot score’ or the visitor’s country to each HTTP response:

[Screenshot: a ‘Set dynamic’ response header rule]

Note: These values are calculated using the corresponding HTTP request, meaning the bot score returned in the response header will be the one calculated for that HTTP request. Similarly, the ‘ip.src.country’ value will be the country of the website visitor, not the country of the origin server that sent the response.

‘Set static’ should be used to populate the value of a header with a static, literal string. This option should be used for simple header creation such as setting the CORS or CSP policies:

[Screenshot: a ‘Set static’ response header rule]

In both ‘set’ examples, if a header with the specified name already exists in the HTTP response, its value will be removed and replaced with the given value.

‘Remove’ is the final option, which should be used to remove all HTTP response headers with the specified name. For example, if you wanted to ensure the ‘Link’ HTTP response header was removed, you would use a rule similar to the following one:

[Screenshot: a ‘Remove’ response header rule]
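For readers more comfortable with code, a rough Workers equivalent of these three operations might look like the sketch below. The header names are arbitrary examples, and it assumes Bot Management is enabled on the zone so that request.cf.botManagement is populated:

export default {
  async fetch(request) {
    const originResponse = await fetch(request);
    const response = new Response(originResponse.body, originResponse);

    // 'Set dynamic': derive the value from the corresponding request
    response.headers.set("X-Bot-Score", String(request.cf?.botManagement?.score ?? ""));
    response.headers.set("X-Visitor-Country", request.cf?.country ?? "");

    // 'Set static': a literal string value
    response.headers.set("X-Source", "Cloudflare");

    // 'Remove': delete all headers with this name
    response.headers.delete("Link");

    return response;
  },
};

Transform Rules achieve the same result declaratively, without having to deploy or maintain any Worker code.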

Cloudflare functions can be used within ‘set dynamic’ header modifications. These functions include:

  • concat()
  • regex_replace()
  • to_string()
  • lower()

A common example is using concat() and to_string() together to combine different data types into a single header value. For example, `concat("score=", to_string(cf.bot_management.score))` would result in a header value like `score=85`.

Note: regular expression functions are only available for customers on Business and Enterprise plans.
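For example, ‘set dynamic’ values could be built with these functions along the following lines. The path pattern and resulting header values here are made up for illustration, and the exact expressions should be validated in the rule builder:

concat("country=", lower(ip.src.country))
regex_replace(http.request.uri.path, "^/product/([0-9]+)$", "product-${1}")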

Optimizing for your website

One other huge benefit of moving HTTP response header modification into Cloudflare is the level of filtering provided in the rule builder. Typically, technologies like CORS and CSP are set as response headers on the entire website or, at best, on a per-directory basis.

With Transform Rules, administrators can set headers based upon a number of parameters including the visitor’s country of origin, bot score, user agent, requested filename / file extension, request method and more.

This gives administrators the ability to implement setups such as applying different Content Security Policies to verified bots and to unverified or low bot score traffic.

Try it now

HTTP Response Header Modification can be used to improve operations, remove sensitive data, and increase security, amongst many other use cases. Try out the latest Transform Rule yourself today.

Traffic Sequence: Which Product Runs First?

Post Syndicated from Sam Marsh original https://blog.cloudflare.com/traffic-sequence-which-product-runs-first/


“Which came first, the chicken or the egg?” It’s one of life’s great questions. There are hundreds of articles published which conclude with eggs predating chickens by millions of years. Unfortunately, Cloudflare users don’t have New Scientist on hand to answer similar questions.

Which runs first, Firewall Rules or Workers? Page Rules or Transform Rules? Whilst not as philosophically challenging, the answers to these questions are key to setting up your Cloudflare zone correctly. Answering them has become increasingly difficult as more and more functionality is added, thanks to our incredible rate of shipping products. What was once a relatively easy-to-understand traffic flow exploded in complexity with the introduction of products such as Workers, Load Balancing Rules and Transform Rules. And this big bang of product announcements is only accelerating each year.

To begin addressing this problem, we developed Traffic Sequence. Traffic Sequence is a simple dashboard illustration which shows a default, high-level overview of how Cloudflare products interact. Think of this as your atlas, rather than your black cab driver’s “Knowledge”. This helps you understand that London is in the south east of the UK, but not that it’s quicker to walk than use the London Underground between Leicester Square and Covent Garden.

[Screenshot: the Traffic Sequence visualisation in the dashboard]

Traffic Sequence is now enabled for all zones by default, appearing on ten product pages within the dashboard. Traffic Sequence highlights in blue the product area you are currently configuring, showing where within the HTTP request lifecycle the specific product sits. This provides context and allows users to understand which products will see the impact of changes here, and which products will not.

Traffic Sequence is designed to make Cloudflare’s edge clearer to our customers, allowing users to understand how products fit together and how HTTP requests flow between them.

Dear Cloudflare, which runs first?

How traffic is routed through Cloudflare has been one of the most common questions from Cloudflare staff and customers alike.

But why does it matter? Let’s go through a simple example.

Released in April 2021, “Transform Rules” lets users rewrite URLs of HTTP requests as they proxy through their zone — for example, rewriting /login.php to /super/secret/login-page.php, all invisible to the end user.

In this scenario, the administrator also has a Firewall Rule blocking requests to the URI Path /login.php when the visitor is coming from a country other than the United States. What they would see, however, is that visitors from these other countries are still reaching the /login.php page on their servers. Why is this?

This is because URL rewrites happen before Firewall Rules, meaning the Firewall Rules product won’t see a URI Path of /login.php. Instead it will see HTTP requests with the rewritten URI Path of /super/secret/login-page.php. Thus, when Firewall Rules evaluates the customer’s rule, it checks:

  1. Is this from a country that is not the USA? Yes
  2. AND – Is this request going to a URI Path of /login.php? No.

As the criteria do not all evaluate to ‘true’, the rule does not match and the traffic is allowed on its journey.

This is why it is so important to know how Cloudflare’s products interoperate to get the most out of your plan, and achieve your goals without having to dig through mountains of documentation.

In an alternate timeline, Traffic Sequence is used to highlight that Firewall Rules run after URL rewriting occurs, and therefore see the rewritten value in the URI Path. With this information, the customer can then configure a Firewall Rule that looks for the rewritten value in the URI Path and accomplish their desired setup.
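As a sketch, using fields from the Firewall Rules language and the rewritten path from the scenario above, the corrected rule could use a filter expression along these lines, with an action of ‘Block’:

(ip.geoip.country ne "US" and http.request.uri.path eq "/super/secret/login-page.php")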

From napkin to working prototype

Traffic Sequence was originally born out of a “back of the napkin” idea during the creation of Transform Rules and URL Normalization, in an attempt to show where these transformations were happening:

[Image: the original “back of the napkin” sketch]

The idea might have started from a need of our own, but it ended up addressing well known customer and internal problems: whenever we build a new product everyone wants to understand how it fits into the big picture. So we pushed the idea further, proposing it to other teams and soliciting feedback.

This project was a great example of how an idea brought to the table at the right level of fidelity can evolve into an opportunity to ship and learn. Something that was initially meant as an explainer diagram for one rule type has become an almost bespoke part of the dashboard: it is unique to each customer’s Cloudflare environment, displaying only the products available for use in that zone. We offer many options and routes to products, but until now we didn’t have a straightforward view that customers could rely on, focused only on what they have set up and have access to.

As part of the design process, we try to focus on asking lots of questions rather than just finding an answer. Some of the considerations we had were:

  • What if we show customers a product they aren’t using?
  • What if we show customers a product they aren’t entitled to on their plan level?
  • Why aren’t we showing “this product”?
  • Do we have this visualisation on by default?

After gaining internal momentum to flesh out this project, we decided to focus on three areas:

  1. Simplifying a complex ecosystem – what is a useful simplification?
  2. Value that this will add beyond this first application
  3. Opportunity to test out different navigation and mental models.

After all, this is not just a map of our system, but a new way of navigating it entirely.

Positive early internal feedback not only aligned with our goals, but allowed us to iterate on points that needed improvement. We knew that this could be a game-changer for promoting clarity, improving discoverability and saving time with navigation: going for one click instead of three for most items.


A couple of iterations later, we were ready to put this in the hands of our users for early testing.


Thanks to our incredible community, we had a high level of interest in the first week. Responses to our Traffic Sequence survey form provided insight into how this feature would be used in the real world, and answered the ultimate question of this experiment: “Does this solve the problem of understanding how Cloudflare handles HTTP requests?”

  • “I didn’t know where my requests were going… until now.”
  • “It’s always been confusing which products/features affect which other products/features.”
  • “It’s really handy to be able to explain the ordering that these are happening in, and I like the deeplink into the relevant area.”

These were all a great reminder that what triggered this work was ingrained in real customer needs.

Other feedback was rapidly incorporated into the prototype; specifically splitting Transform Rules into two separate sections to highlight that URL rewrites and header modifications occur at different parts of the request flow. We also added features which our users deemed important for clarity, such as IP Access Rules.

Traffic Sequence for all

Thanks to the great feedback and participation of all testers, both internal and external, we are now in a position where we are comfortable to take the covers off and make Traffic Sequence available to all users.


The visualisation can be hidden easily by clicking on the “hide” button, and the display automatically hides to preserve critical whitespace when needed:

[Screenshot: hiding the Traffic Sequence visualisation]

When new products are added, or updates to products occur which modify the traffic order, this diagram will be updated accordingly.

Evolving Traffic Sequence

We know this is a high level, generic overview of how Cloudflare products interact. There is a level of nuance underneath, and a number of products and features not shown in the Traffic Sequence illustration which play an important part in keeping users safe and secure.

In the future we have aspirations to build “the other side of the coin”. Traffic Sequence provides a simple to understand view of how the products work by default at a high level. We also want to create a detailed, almost traceroute-like feature which allows users to see exactly what happens to their traffic — which products it goes via and what happens within those products, and potentially a lot more. Stay tuned!

Try it now

This feature is now enabled by default on all customer zones, and is visible within the dashboard locations outlined above.

Please do try it out and let us know what you think via the Cloudflare Community.

Modify HTTP request headers with Transform Rules

Post Syndicated from Sam Marsh original https://blog.cloudflare.com/transform-http-request-headers/


HTTP headers are central to how the web works. They are used for passing additional information between the client and server, such as which security permissions to apply and information about the client, allowing the correct content to be served.

Today we are announcing the immediate availability of the second action within Transform Rules, “HTTP Request Header Modification”, available for all Cloudflare plans. This new functionality provides Cloudflare administrators with the ability to easily set or remove HTTP request headers as traffic flows through Cloudflare. This allows customers to enrich requests with information such as the Cloudflare Bot Management bot score before they are sent to their servers. Previously, HTTP request header modification was done using a Cloudflare Worker. Today we’re introducing an easier way to do this without writing a single line of code.

Luggage tags of the World Wide Web

Photo by Markus Spiske on Unsplash

Think of HTTP headers as the “luggage tag” attached to your bags when you check in at the airport.

Generally, you don’t need to know what those numbers and words mean. You just know they are important in getting your suitcase from the boarding desk, to the correct airplane, and back to the correct luggage carousel at your destination.

These tags contain information about the weight of the suitcase, the destination airport code, baggage tag number, airline carrier, customs handling information, and more. These attributes are all essential, not only for ensuring that your luggage arrives at the correct destination, but also that it does so in the safest, most efficient manner.

HTTP headers are the luggage tags of the Internet. They are essential to ensuring the request from your browser arrives at the correct destination, and that traffic is returned to your browser with the correct settings, in the safest, most efficient manner.

How are HTTP request headers used?

HTTP headers are set on both the ‘request’ and ‘response’ interactions; ‘request’ being when the client asks for the file and ‘response’ being what the server returns as a result. The functionality announced today pertains specifically to HTTP request headers only.

Many organizations use HTTP request headers to ensure visitor requests are served correctly. They are used to route requests to different clusters and to serve mobile-friendly or legacy-browser-friendly content.

HTTP request headers are also used for security purposes, namely authentication and authorization. Simple examples include adding a static, pre-shared key as a custom header which adds an additional security check to all inbound HTTP requests.

Ensuring these headers are present on the HTTP request is often the job of the reverse proxy — a server which sits between the client and the server whose job is, amongst many others, to enrich the HTTP request data sent to the server.

For example, when using NGINX, a popular web server used as a reverse proxy, the administrator would have a line in the config similar to:

proxy_set_header X-Header-Name "custom";

When using Cloudflare Workers, the code would be similar to:

request.headers.set("X-Header-Name", "custom")

Each of these lines of code would add a custom HTTP request header to the next-hop destination with a name of ‘X-Header-Name’ and a value of ‘custom’.
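Because the headers on the incoming request are immutable inside a Worker, a slightly fuller sketch would copy the request before modifying it and then forward it on:

export default {
  async fetch(request) {
    // The incoming request's headers are immutable, so make a mutable copy
    const newRequest = new Request(request);

    // Add the custom header before forwarding the request to the origin
    newRequest.headers.set("X-Header-Name", "custom");

    return fetch(newRequest);
  },
};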

Dynamic modification of HTTP request headers

“HTTP Request Header Modification” is now available for all Cloudflare plans, within Transform Rules. It gives administrators the ability to modify HTTP request headers before they are sent to their own origin servers or to third-party services such as SaaS providers.

[Screenshot: HTTP Request Header Modification within Transform Rules]

Transform Rules allows users to modify up to 10 HTTP request headers per rule using one of three options:

[Screenshot: the three header modification options in the rule builder]

‘Set dynamic’ should be used when the value of an HTTP request header needs to be populated dynamically for each HTTP request. Examples include adding the Cloudflare Bot Management ‘bot score’ or the visitor’s country to each HTTP request:

[Screenshot: a ‘Set dynamic’ request header rule]

‘Set static’ should be used to populate the value of a header with a static, literal string. This option should be used for simple header creation such as setting the source CDN (Cloudflare) or a shared secret:

[Screenshot: a ‘Set static’ request header rule]

In both “set” examples, if a header with the specified name already exists in the HTTP request, its value will be removed and replaced with the given value.

‘Remove’ is the final option, which should be used to remove all HTTP request headers with the specified name. For example, if you wanted to ensure the ‘cf-connecting-ip’ HTTP request header was removed, you would use a rule similar to the following one:

[Screenshot: a ‘Remove’ request header rule]

Cloudflare functions can be used within ‘set dynamic’ header modifications. These functions include:

  • concat()
  • regex_replace()
  • to_string()
  • lower()

A common example is using concat() and to_string() together to combine different data types into a single header value. For example, concat("score=", to_string(cf.bot_management.score)) would result in a header value of ‘score=85’.

Note: regular expression functions are only available for customers on Business and Enterprise plans.

Try it now

HTTP Request Header Modification can be used to improve operations, remove sensitive data, and increase security, amongst many other use cases. Try out the latest Transform Rule yourself today.
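If you prefer to manage configuration as code, the sketch below shows roughly what creating an HTTP Request Header Modification rule via the Rulesets API could look like from JavaScript. The phase name, endpoint path and payload shape are assumptions to verify against the current Rulesets API documentation before use, and the zone ID, token, header names and filter expression are placeholders:

// Sketch only: confirm the phase name ("http_request_late_transform") and
// payload shape against the Rulesets API documentation before relying on this.
const zoneId = "YOUR_ZONE_ID";     // placeholder
const apiToken = "YOUR_API_TOKEN"; // placeholder

const payload = {
  rules: [
    {
      action: "rewrite",
      description: "Add a static header and strip cf-connecting-ip on API traffic",
      expression: '(starts_with(http.request.uri.path, "/api/"))',
      action_parameters: {
        headers: {
          "X-Source": { operation: "set", value: "cloudflare" },
          "cf-connecting-ip": { operation: "remove" },
        },
      },
    },
  ],
};

const res = await fetch(
  `https://api.cloudflare.com/client/v4/zones/${zoneId}/rulesets/phases/http_request_late_transform/entrypoint`,
  {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${apiToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(payload),
  }
);
console.log(await res.json());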