All posts by Matt Bullock

Cloudflare Fonts: enhancing website font privacy and speed

Post Syndicated from Matt Bullock original http://blog.cloudflare.com/cloudflare-fonts-enhancing-website-privacy-speed/

We are thrilled to introduce Cloudflare Fonts! In the coming weeks sites that use Google Fonts will be able to effortlessly load their fonts from the site’s own domain rather than from Google. All at a click of a button. This enhances both privacy and performance. It enhances users' privacy by eliminating the need to load fonts from Google’s third-party servers. It boosts a site's performance by bringing fonts closer to end users, reducing the time spent on DNS lookups and TLS connections.

Sites that currently use Google Fonts will not need to self-host fonts or make complex code changes to benefit – Cloudflare Fonts streamlines the entire process, making it a breeze.

Fonts and privacy

When you load fonts from Google, your website initiates a data exchange with Google's servers. This means that your visitors' browsers send requests directly to Google. Consequently, Google has the potential to accumulate a range of data, including IP addresses, user agents (formatted descriptions of the browser and operating system), the referer (the page on which the Google font is to be displayed) and how often each IP makes requests to Google. While Google states that they do not use this data for targeted advertising or set cookies, any time you can prevent sharing your end user’s personal data unnecessarily is a win for privacy.

With Cloudflare Fonts, you serve fonts directly from your own domain. This means no font requests are sent to third-party domains like Google, which some privacy regulators have found to be a problem in the past. Our pro-privacy approach means your end user's IP address and other data are not sent to another domain; all that information stays within your control, within your domain. In addition, because Cloudflare Fonts eliminates data transmission to third-party servers like Google's, it can enhance your ability to comply with any potential data localization requirements.

Faster Google Font delivery through Cloudflare

Now that we have established that Cloudflare Fonts can improve your privacy, let's flip to the other side of the coin – how Cloudflare Fonts will improve your performance.

To do this, we first need to delve into how Google Fonts affects your website's performance. Subsequently, we'll explore how Cloudflare Fonts addresses and rectifies these performance challenges.

Google Fonts is a fantastic resource that offers website owners a range of royalty-free fonts for website usage. When you decide on the fonts you would like to incorporate, it’s super easy to integrate. You just add a snippet of HTML to your site. You then add styles to apply these fonts to various parts of your page:

<link href="https://fonts.googleapis.com/css?family=Open+Sans|Roboto+Slab" rel="stylesheet">
<style>
  body {
    font-family: 'Open Sans', sans-serif;
  }
  h1 {
    font-family: 'Roboto Slab', serif;
  }
</style>

But this ease of use comes with a performance penalty.

Upon loading your webpage, your visitors' browser fetches the CSS file as soon as the HTML starts to be parsed. Then, when the browser starts rendering the page and identifies the need for fonts in different text sections, it requests the required font files.

This is where the performance problem arises. Google Fonts employs a two-domain system: the CSS resides on one domain – fonts.googleapis.com – while the font files reside on another domain – fonts.gstatic.com.

This separation results in a minimum of four round trips to the third-party servers for each resource request. These round trips are DNS lookup, socket connection establishment, TLS negotiation (for HTTPS), and the final round trip for the actual resource request. Ultimately, getting a font from Google servers to a browser requires eight round trips.

You can see this yourself: if your site uses Google Fonts, open your browser's network tab and filter for these Google domains.

You can visually see the impact of the extra DNS request and TLS connection on your website experience. For example, on my WordPress site, whose theme natively uses Google Fonts, they add an extra ~150ms.

Fast fonts

Cloudflare Fonts streamlines this process, by reducing the number of round trips from eight to one. Two sets of DNS lookups, socket connections and TLS negotiations to third-parties are no longer required because there is no longer a third-party server involved in serving the CSS or the fonts. The only round trip involves serving the font files directly from the same domain where the HTML is hosted. This approach offers an additional advantage: it allows fonts to be transmitted over the same HTTP/2 or HTTP/3 connection as other page resources, benefiting from proper prioritization and preventing bandwidth contention.

The eagle-eyed amongst you might be thinking “Surely it is still two round trips – what about the CSS request?”. Well, with Cloudflare Fonts, we have also removed the need for a separate CSS request. This means there really is only one round-trip – fetching the font itself.

To achieve both the home-routing of font requests and the removal of the CSS request, we rewrite the HTML as it passes through Cloudflare’s global network. The CSS response is embedded, and font URL transformations are performed within the embedded CSS.

These transformations adjust the font URLs to align with the same domain as the HTML content. These modified responses seamlessly pass through Cloudflare's caching infrastructure, where they are automatically cached for a substantial performance boost. In the event of any cache misses, we use Fontsource and NPM to load these fonts and cache them within the Cloudflare infrastructure. This approach ensures that there's no inadvertent data exposure to Google's infrastructure, maintaining both performance and data privacy.
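
Conceptually, the rewrite resembles the following sketch. The function names and regular expressions here are hypothetical, for illustration only; the real transformation is performed by Cloudflare's streaming HTML rewriter as responses pass through the network, not by JavaScript like this.

```javascript
// Illustrative sketch of the Cloudflare Fonts transformation:
// rewrite Google-hosted font URLs inside the CSS so they point at
// the site's own /cf-fonts path, then inline that CSS in place of
// the external Google Fonts <link>, removing the third-party request.

function rewriteFontUrls(css) {
  // Point font files at the first-party /cf-fonts path.
  return css.replace(/https:\/\/fonts\.gstatic\.com\//g, "/cf-fonts/");
}

function inlineFontCss(html, fontCss) {
  // Replace the external stylesheet <link> with an inline <style>
  // containing the rewritten CSS.
  return html.replace(
    /<link[^>]*fonts\.googleapis\.com[^>]*>/,
    `<style>${rewriteFontUrls(fontCss)}</style>`
  );
}
```

After this transformation, the browser makes no request to either Google domain: the CSS arrives inline with the HTML, and the font files are fetched from the site's own hostname.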

With Cloudflare Fonts enabled, you can see within your Network tab that font files are now loaded from your own hostname under the /cf-fonts path and served from Cloudflare's closest cache to the user, as indicated by cf-cache-status: HIT.

Additionally, you can see that the timings section in the browser no longer includes an extra DNS lookup for the hostname or the setup of a TLS connection. This is because the content is served from your own hostname, for which the browser has already cached the DNS response and has an open TLS connection.

Finally, you can see the real-world performance benefits of Cloudflare Fonts. We conducted synthetic Google Lighthouse tests before enabling Cloudflare Fonts on a straightforward page that displays text. First Contentful Paint (FCP), which represents the time it takes for the first content element to appear on the page, was measured at 0.9 seconds in the Google fonts tests. After enabling Cloudflare Fonts, the First Contentful Paint (FCP) was reduced to 0.3 seconds, and our overall Lighthouse performance score improved from 98 to a perfect 100 out of 100.

Making Cloudflare Fonts fast with ROFL

In order to make Cloudflare Fonts this performant, we needed to make blazing-fast HTML alterations as responses stream through Cloudflare’s network. This has been made possible by leveraging one of Cloudflare’s more recent technologies.

Earlier this year, we finished rewriting one of Cloudflare's oldest components, which played a crucial role in dynamically altering HTML content. As described in this blog post, a new solution was required to replace the old: a memory-safe solution able to scale to Cloudflare's ever-increasing load.

This new module is known as ROFL (Response Overseer for FL). It now powers various Cloudflare products that need to alter HTML as it streams, such as Email Obfuscation, Rocket Loader, and HTML Minification.

ROFL was developed entirely in Rust. This decision was driven by Rust's memory safety, performance, and security. The memory-safety features of Rust are indispensable to ensure airtight protection against memory leaks while we process a staggering volume of requests, measuring in the millions per second. Rust's compiled nature allows us to finely optimize our code for specific hardware configurations, delivering impressive performance gains compared to interpreted languages.

ROFL paved the way for the development of Cloudflare Fonts. The performance of ROFL allows us to rewrite HTML on-the-fly and modify the Google Fonts links quickly, safely and efficiently. This speed helps us reduce any additional latency added by processing the HTML file and improve the performance of your website.

Unlock the power of Cloudflare Fonts today! 🚀

Cloudflare Fonts will be available to all Cloudflare customers in October. If you're using Google Fonts, you will be able to supercharge your site's privacy and speed. By enabling this feature, you can seamlessly enhance your website's performance while safeguarding your users' privacy.

Traffic transparency: unleashing the power of Cloudflare Trace

Post Syndicated from Matt Bullock original http://blog.cloudflare.com/traffic-transparency-unleashing-cloudflare-trace/

Today, we are excited to announce Cloudflare Trace! Cloudflare Trace is available to all our customers. Cloudflare Trace enables you to understand how HTTP requests traverse your zone's configuration and what Cloudflare Rules are being applied to the request.

For many Cloudflare customers, the journey their customers' traffic embarks on through the Cloudflare ecosystem has been a mysterious black box. It's a complex voyage, routed through various products, each capable of introducing modifications to the request.

Consider this scenario: your web traffic could get blocked by WAF Custom Rules or Managed Rules (WAF); it might face rate limiting, or undergo modifications via Transform Rules. Where a Cloudflare account has many admins modifying different things, it can be akin to a game of "hit and hope": the outcome of your web traffic's journey is uncertain, because you don't know how another admin's rule will impact the request before or after yours. While Cloudflare's individual products are designed to be intuitive, their interoperation (how they work together) hasn't always been as transparent as our customers need it to be. Cloudflare Trace changes this.

Running a trace

Cloudflare Trace lets you set a number of request variables, allowing you to tailor your trace precisely to your needs. A basic trace requires two settings: a URL that is proxied through Cloudflare, and an HTTP method such as GET. You can also set request headers, add a request body, and even set a bot score to validate the correct behavior of your security rules.

Once a trace is initiated, the dashboard returns a visualization of the products that matched the request, such as Configuration Rules, Transform Rules, and Firewall Rules, along with the specific rules inside these phases that were applied. You can then view further details of the filters and actions each rule undertakes. Clicking a rule ID takes you directly to that rule in the Cloudflare Dashboard, allowing you to edit its filters and actions if needed.

The user interface also generates a programmatic version of the trace that can be used by customers to run traces via a command line. This enables customers to use tools like jq to further investigate the extensive details returned via the trace output.
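
A programmatic trace is simply an authenticated API call carrying the trace parameters. The sketch below shows one plausible way to build such a request; the endpoint path and field names here are assumptions for illustration, so consult Cloudflare's API documentation for the authoritative schema.

```javascript
// Sketch of building a Cloudflare Trace API request programmatically.
// NOTE: the endpoint path and body field names are assumptions based
// on the options described above, not a confirmed schema.

function buildTraceRequest(accountId, options) {
  return {
    // Assumed endpoint layout under the account scope.
    url: `https://api.cloudflare.com/client/v4/accounts/${accountId}/request-tracer/trace`,
    method: "POST",
    body: JSON.stringify({
      url: options.url,                // the proxied URL to trace
      method: options.method ?? "GET", // HTTP method for the trace
      headers: options.headers ?? {},  // optional request headers
      bot_score: options.botScore,     // optional: simulate a bot score
    }),
  };
}
```

The JSON response this call returns is what you would then pipe through a tool like jq to drill into specific phases.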

The life of a Cloudflare request

Understanding the intricate journey that your traffic embarks on within Cloudflare can be a challenging task for many of our customers and even for Cloudflare employees. This complexity often leads to questions within our Cloudflare Community or direct inquiries to our support team. Internally, over the past 13 years at Cloudflare, numerous individuals have attempted to explain this journey through diagrams. We maintain an internal Wiki page titled 'Life of a Request Museum.' This page archives all the attempts made over the years by some of the first Cloudflare engineers, heads of product, and our marketing team, where the following image was used in our 2018 marketing slides.

The "problem" (a rather positive one) is that Cloudflare is innovating so rapidly. New products are being added, code is removed, and existing products are continually enhanced. As a result, a diagram created just a few weeks ago can quickly become outdated and challenging to keep up to date.

Finding a happy medium

However, customers still want to understand, “The life of a request.” Striking the ideal balance between providing just enough detail without overwhelming our users posed a problem akin to the Goldilocks principle. One of our first attempts to detail the ordering of Cloudflare products was Traffic Sequence, a straightforward dashboard illustration that provides a basic, high-level overview of the interactions between Cloudflare products. While it does not detail every intricacy, it helps our customers understand the order and flow of products that interacted with an HTTP request and was a welcome addition to the Cloudflare dashboard.

However, customers still requested further insights and details, especially around debugging issues. Internally, Cloudflare teams use a number of self-created tools to trace a request. One of these is Flute. It gives a verbose output of all rules, Cloudflare features, and code paths a request undertakes, allowing our engineers and support teams to investigate an issue and identify if something is awry. For example, in the following Flute trace image you can see how a request for my domain is evaluated against Single Redirects, Waiting Room, Configuration Settings, Snippets, and Origin Rules.

The Flute tool became one of the key focal points in the development of Cloudflare Trace. However, it can be quite intricate and packed with extensive details, potentially leading to more questions than solutions if copied verbatim and exposed to our customers.

To find this happy medium in developing Cloudflare Trace, we closely collaborated with our Support team to gain a deeper understanding of the challenges our customers faced, specifically around Cloudflare Rulesets. The primary challenge centered on understanding which rules applied to specific requests. Customers often raised queries, and in certain instances these inquiries had to be escalated to investigate why a request behaved a certain way. By empowering our customers to independently investigate and understand these issues, we identified a second area where Cloudflare Trace proves invaluable: reducing the workload of our support team, enabling them to operate more efficiently while focusing on other support tickets.

For customers encountering genuine problems, they have the capability to export the JSON response of a trace, which can then be directly uploaded to a support ticket. This streamlined process significantly reduces the time required to investigate and resolve support tickets.

Trace examples

Cloudflare Trace has been available via API for the last nine months. We have been working with a number of customers and stakeholders to understand where tracing is beneficial and solving customer problems. Here are some of the real world examples that we have solved using Cloudflare Trace.

Transform Rules inconsistently matching

A customer encountered an issue while attempting to rewrite a URL to their origin for specific paths using Transform Rules. An administrator on the Cloudflare account had created a filter that employed a regex to match against a specific path.

A systems administrator monitoring their web server observed in their logs that the URLs for a small percentage of requests were not transforming correctly, causing disruptions to the application. They decided to investigate by comparing a correctly functioning request with one that was not, and subsequently conducted traces.

In the problematic trace, only one rule matched the trace parameters and was setting incorrect parameters.

Whereas on the other URL the rule that contained the regex matched as intended and set the correct URL parameters.

This allowed the sysadmin to pinpoint the problem: the regex was specifically designed to handle requests with subdirectories, but it failed to address cases where requests directly targeted the root or a non-subdirectory path. After identifying this issue within the traces, the sysadmin updated the filter. Subsequently, both cases matched successfully, leading to the resolution of the problem.

What origin?

When a request encounters a Cloudflare ruleset, such as Origin Rules, all the rules are evaluated, and any rule that is matched is applied in sequential order of priority. This means that multiple settings could be applied from different rules. For example, a Host Header could be set in rule 1, and a DNS origin could be assigned in rule 3. This means that the request will exit the Origin Rules phase with a new Host Header and be routed to a different origin. Cloudflare Trace allows users to easily see all the rules that matched and altered the request.
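
The cumulative evaluation described above can be sketched in a few lines. This is an illustrative model only (the rule and field names are hypothetical, not the Ruleset Engine's real data structures): every matching rule is applied in priority order, and a later match overwrites an earlier one for the same setting.

```javascript
// Illustrative model of cumulative rule application within a phase
// such as Origin Rules: all matching rules apply in priority order,
// later matches overwriting earlier settings for the same field.

function applyPhase(rules, request) {
  return rules
    .filter((rule) => rule.matches(request))
    .reduce((settings, rule) => ({ ...settings, ...rule.actions }), {});
}
```

For the example in the text, a rule setting a Host Header and a later rule setting a DNS origin both apply, so the request leaves the phase with both changes.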

Tracing the future

Cloudflare Trace will be available to all our customers over the coming week, located within the Account section of your Cloudflare Dashboard for all plans. We are excited to introduce additional features and products to Cloudflare Trace in the coming months. In the future we will also be developing scheduling and alerts, which will enable you to monitor whether a newly deployed rule is impacting the critical path of your application. As with all our products, we value your feedback. Within the Trace dashboard, you'll find a form for providing feedback and feature requests to help us enhance the product before its general release.

All the way up to 11: Serve Brotli from origin and Introducing Compression Rules

Post Syndicated from Matt Bullock original http://blog.cloudflare.com/this-is-brotli-from-origin/

This post is also available in 简体中文, 日本語, Español and Deutsch.

Throughout Speed Week, we have talked about the importance of optimizing performance. Compression plays a crucial role by reducing file sizes transmitted over the Internet. Smaller file sizes lead to faster downloads, quicker website loading, and an improved user experience.

Take household cleaning products as a real-world example. It is estimated that "a typical bottle of cleaner is 90% water and less than 10% actual valuable ingredients". Removing that 90% from a typical 500ml bottle of household cleaner reduces the weight from 600g to 60g. This means only a 60g parcel, with instructions to rehydrate on receipt, needs to be sent. Extrapolated across the gallons businesses ship, this weight reduction soon becomes a huge shipping saving. Not to mention the environmental impact.

This is how compression works. The sender compresses the file to its smallest possible size, and then sends the smaller file with instructions on how to handle it when received. By reducing the size of the files sent, compression ensures the amount of bandwidth needed to send files over the Internet is a lot less. Where files are stored in expensive cloud providers like AWS, reducing the size of files sent can directly equate to significant cost savings on bandwidth.

Smaller file sizes are also particularly beneficial for end users with limited Internet connections, such as mobile devices on cellular networks or users in areas with slow network speeds.

Cloudflare has always supported compression in the form of Gzip. Gzip is a widely used compression algorithm that has been around since 1992 and provides file compression for all Cloudflare users. However, in 2013 Google introduced Brotli which supports higher compression levels and better performance overall. Switching from gzip to Brotli results in smaller file sizes and faster load times for web pages. We have supported Brotli since 2017 for the connection between Cloudflare and client browsers. Today we are announcing end-to-end Brotli support for web content: support for Brotli compression, at the highest possible levels, from the origin server to the client.

If your origin server supports Brotli, turn it on, crank up the compression level, and enjoy the performance boost.

Brotli compression to 11

Brotli has 12 levels of compression ranging from 0 to 11, with 0 providing the fastest compression speed but the lowest compression ratio, and 11 offering the highest compression ratio but requiring more computational resources and time. During our initial implementation of Brotli five years ago, we identified that compression level 4 offered the balance between bytes saved and compression time without compromising performance.

Since 2017, Cloudflare has been using a maximum compression of Brotli level 4 for all compressible assets based on the end user's "accept-encoding" header. However, one issue was that Cloudflare only requested Gzip compression from the origin, even if the origin supported Brotli. Furthermore, Cloudflare would always decompress the content received from the origin before compressing and sending it to the end user, resulting in additional processing time. As a result, customers were unable to fully leverage the benefits offered by Brotli compression.

Old world

With Cloudflare now fully supporting Brotli end to end, customers will start seeing our updated accept-encoding header arriving at their origins. Once available, customers can transfer, cache and serve heavily compressed Brotli files directly to us, all the way up to the maximum level of 11. This will help reduce latency and bandwidth consumption. If the end user's device does not support Brotli compression, we will automatically decompress the file and serve it either in its decompressed format or as a Gzip-compressed file, depending on the Accept-Encoding header.
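
The fallback decision can be sketched as a simple content-negotiation function. This is an illustrative simplification (real Accept-Encoding handling also weighs q-values rather than just stripping them): prefer Brotli when the client accepts it, fall back to Gzip, and serve the response uncompressed otherwise.

```javascript
// Sketch of the encoding decision described above: pass Brotli
// through when the client accepts it, otherwise fall back to Gzip
// or an uncompressed ("identity") response.

function chooseEncoding(acceptEncoding) {
  const accepted = (acceptEncoding ?? "")
    .split(",")
    .map((e) => e.trim().split(";")[0]); // drop any q-value annotations
  if (accepted.includes("br")) return "br";
  if (accepted.includes("gzip")) return "gzip";
  return "identity";
}
```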

Full end-to-end Brotli compression support

End user cannot support Brotli compression

Customers can implement Brotli compression at their origin by referring to the appropriate online materials. For example, customers using NGINX can implement Brotli by following this tutorial and setting compression to level 11 within the nginx.conf configuration file as follows:

brotli on;
brotli_comp_level 11;
brotli_static on;
brotli_types text/plain text/css application/javascript application/x-javascript text/xml 
application/xml application/xml+rss text/javascript image/x-icon 
image/vnd.microsoft.icon image/bmp image/svg+xml;

Cloudflare will then serve these assets to the client at the exact same compression level (11) for the file types listed in brotli_types. This means any SVG or BMP images will be sent to the client compressed at Brotli level 11.

Testing

We applied compression against a simple CSS file, measuring the impact of various compression algorithms and levels. Our goal was to identify potential improvements that users could experience by optimizing compression techniques. These results can be seen in the following table:

Test                                            | Size (bytes) | % reduction of original (higher is better)
Uncompressed response (no compression used)     | 2,747        | –
Cloudflare default Gzip compression (level 8)   | 1,121        | 59.21%
Cloudflare default Brotli compression (level 4) | 1,110        | 59.58%
Compressed with max Gzip level (level 9)        | 1,121        | 59.21%
Compressed with max Brotli level (level 11)     | 909          | 66.94%

By compressing with Brotli at level 11, users can reduce their file sizes by 19% compared to the best Gzip compression level. Additionally, the strongest Brotli compression level is around 18% smaller than the default level used by Cloudflare. This highlights the significant size reduction achieved by using Brotli compression, particularly at its highest levels, which can lead to improved website performance, faster page load times and an overall reduction in egress fees.

To take advantage of higher end-to-end compression rates, the following Cloudflare proxy features need to be disabled:

  • Email Obfuscation
  • Rocket Loader
  • Server Side Excludes (SSE)
  • Mirage
  • HTML Minification – JavaScript and CSS can be left enabled.
  • Automatic HTTPS Rewrites

This is due to Cloudflare needing to decompress and access the body to apply the requested settings. Alternatively a customer can disable these features for specific paths using Configuration Rules.

If any of these rewrite features are enabled, your origin can still send Brotli compression at higher levels. However, we will decompress, apply the Cloudflare feature(s) enabled, and recompress on the fly using Cloudflare’s default Brotli level 4 or Gzip level 8 depending on the user's accept-encoding header.

For browsers that do not accept Brotli compression, we will continue to decompress and send either Gzip-compressed or uncompressed responses.

Implementation

The initial step towards implementing Brotli from the origin involved constructing a decompression module that could be integrated into Cloudflare's software stack. It allows us to efficiently convert the compressed bits received from the origin into the original, uncompressed file. This step was crucial, as numerous features such as Email Obfuscation, as well as Cloudflare Workers customers, rely on accessing the body of a response to apply customizations.

We integrated the decompressor into the core reverse web proxy of Cloudflare. This integration ensures that all Cloudflare products and features can access Brotli decompression effortlessly. It also allowed our Cloudflare Workers team to incorporate Brotli directly into Cloudflare Workers, so our Workers customers can interact with responses returned in Brotli or pass them through to the end user unmodified.

Introducing Compression rules – Granular control of compression to end users

By default, Cloudflare compresses certain content types based on the Content-Type header of the file. Today we are also announcing Compression Rules for our Enterprise customers. With Compression Rules, you gain enhanced control over Cloudflare's compression capabilities, enabling you to customize how and which content Cloudflare compresses to optimize your website's performance.

For example, by using Cloudflare's Compression Rules for .ktx files, customers can optimize the delivery of textures in WebGL applications, enhancing the overall user experience. Enabling compression minimizes bandwidth usage and ensures that WebGL applications load quickly and smoothly, even when dealing with large and detailed textures.

Alternatively, customers can disable compression or specify a preference for how we compress. Another example could be an infrastructure company wanting to support only Gzip for their IoT devices while allowing Brotli compression for all other hostnames.

Compression Rules use the same filters that our other Rules products are built on, with the added fields of Media Type and file Extension, allowing you to easily specify the content you wish to compress.
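
A compression rule can be thought of as a filter plus an ordered encoding preference. The sketch below models the .ktx example from earlier; the rule representation and field names are hypothetical, invented purely to illustrate the matching-plus-preference idea.

```javascript
// Illustrative model of a compression rule: match requests by file
// extension and express an ordered compression preference.
// (Hypothetical representation, not Cloudflare's actual rule schema.)

const rule = {
  matches: (path) => path.endsWith(".ktx"),
  preference: ["br", "gzip", "identity"], // assumed ordering format
};

function compressionFor(path, acceptedEncodings) {
  if (!rule.matches(path)) return "identity";
  // Pick the first preferred encoding the client actually accepts.
  return rule.preference.find((e) => acceptedEncodings.includes(e)) ?? "identity";
}
```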

Deprecating the Brotli toggle

Brotli has been supported by some web browsers since 2016, and Cloudflare added Brotli support in 2017. As with any new web technology, adoption was initially uncertain, so we gave customers the ability to selectively enable or disable Brotli via the API and our UI.

Now that Brotli has matured and is supported by all major browsers, we plan to enable Brotli on all zones by default in the coming months, mirroring the Gzip behavior we currently support, and remove the toggle from our dashboard. If a browser does not support Brotli, Cloudflare will continue to serve its accepted encoding types, such as Gzip or uncompressed, and Enterprise customers will still be able to use Compression Rules to granularly control how we compress data towards their users.

The future of web compression

We've seen great adoption and great performance for Brotli as the new compression technique for the web. Looking forward, we are closely following trends and new compression algorithms such as zstd as a possible next-generation compression algorithm.

At the same time, we're looking to improve Brotli directly where we can. One development that we're particularly focused on is shared dictionaries with Brotli. Whenever you compress an asset, you use a "dictionary" that helps the compression to be more efficient. A simple analogy of this is typing OMW into an iPhone message. The iPhone will automatically translate it into On My Way using its own internal dictionary.

OMW → On My Way

This internal dictionary has taken three characters and expanded them into nine characters (including spaces). The dictionary has saved six characters, which translates into performance benefits for users.
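
The substitution idea can be sketched in a few lines: both sides hold the same dictionary, the sender transmits the short token, and the receiver expands it back.

```javascript
// Toy illustration of dictionary-based substitution: a dictionary
// shared by sender and receiver lets the sender transmit a short
// token that the receiver expands back into the full phrase.

const sharedDictionary = { OMW: "On My Way" };

function expand(token) {
  // Unknown tokens pass through unchanged.
  return sharedDictionary[token] ?? token;
}
```

Real Brotli dictionaries work on byte sequences rather than whole words, but the principle of trading a shared lookup table for smaller transfers is the same.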

By default, the Brotli RFC defines a static dictionary that both clients and origin servers use. The static dictionary was designed to be general purpose and apply to everyone, keeping the dictionary small while still generating good compression results. However, what if an origin could generate a bespoke dictionary tailored to a specific website? For example, a Cloudflare-specific dictionary would allow us to compress the words and phrases that appear repeatedly on our site, such as the word "Cloudflare". The bespoke dictionary would be designed to compress these as heavily as possible, and a browser using the same dictionary would be able to translate them back.

A new proposal by the Web Incubator CG aims to do just that, allowing you to specify your own dictionaries that browsers can use to allow websites to optimize compression further. We're excited about contributing to this proposal and plan on publishing our research soon.

Try it now

Compression Rules are available now, with end-to-end Brotli being rolled out over the coming weeks, allowing you to improve performance, reduce bandwidth, and granularly control how Cloudflare handles compression for your end users.

Cloudflare Snippets is now available in alpha

Post Syndicated from Matt Bullock original http://blog.cloudflare.com/cloudflare-snippets-alpha/

Today we are excited to announce that Cloudflare Snippets is available in alpha. In the coming weeks we will be opening access to our waiting list.

What are Snippets?

Over the past two years we have released a number of new rules products such as Transform Rules, Cache Rules, Origin Rules, Config Rules and Redirect Rules. These new products give customers more control over how we process their traffic as it flows through our global network. The feedback on these products so far has been overwhelmingly positive. However, our customers still occasionally need the ability to do more than the out-of-the-box functionality allows: not just adding an HTTP header, but performing some advanced calculation to create its value.

For these cases, Cloudflare Snippets comes to the rescue. Snippets are small pieces of user-created JavaScript that are run by Cloudflare before your website, API or application is served to the user. If you're familiar with Cloudflare Workers, our robust developer platform, then you'll find Snippets to be a familiar addition. For those who are not, Snippets are designed to be easily created, tested, and deployed, giving you the ability to deploy your custom JavaScript Snippet to our global network in a matter of seconds.
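For readers new to the Workers model, a Snippet is essentially a small fetch handler. The sketch below shows the shape; the header name and stub body are illustrative. In a deployed Snippet the handler object is the module's default export and the origin response comes from `await fetch(request)`, rather than the stub used here to keep the example self-contained.

```javascript
// Sketch of a Snippet: a Workers-style fetch handler that decorates the response.
const snippet = {
  async fetch(request) {
    // In a real Snippet: const originResponse = await fetch(request);
    const originResponse = new Response("stub origin body");

    // Re-wrap the response so its headers are mutable, then add a header.
    const response = new Response(originResponse.body, originResponse);
    response.headers.set("x-snippet-demo", "hello from the edge");
    return response;
  },
};
```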

While Snippets are built on top of the Workers Platform, they do have a number of differences. The first lies in how Snippets operate within the Ruleset Engine as a dedicated new phase, similar to Transform Rules and Cache Rules. This means that customers can select and execute a Snippet based on any Ruleset Engine filter. This gives customers the flexibility to run a Snippet on every request or apply it selectively based on various criteria they provide, such as specific bot scores, country of origin, or certain cookies.

Moreover, Snippets are cumulative in nature, allowing users to have multiple Snippets that execute if they meet the defined conditions. For instance, one Snippet could add an HTTP header and another rewrite the URL, both of which will be executed if their respective conditions are met.

Users now have the flexibility to choose between using a rule for simple, no-code-required tasks, such as adding a basic response Cookie header with Transform Rules, or writing a Cloudflare Snippet for more complex cookie functionality, such as dynamically changing the host or date within the cookie value. Snippets empower customers to get the job done quickly and effortlessly within the Cloudflare ecosystem, without incurring extra expenses (though a fair usage cap applies).

The difference between Snippets and Workers

Another significant advantage is that Cloudflare Snippets are available across all plan levels at no extra cost. This empowers customers to migrate their simple workloads from legacy solutions like VCL to the Cloudflare platform, actively reducing their monthly expenses.

Whether you're on the Free, Pro, Business, or Enterprise plan, Snippets are at your disposal. Free plan users have access to five Snippets per zone, while Pro, Business, and Enterprise plans offer 10, 25, and 50 Snippets per zone, respectively.

In terms of resources, Cloudflare Snippets are lightweight compared to Workers. They have a maximum execution time of 5ms, a maximum memory of 2MB, and a total package size of 32KB. These limits are more than sufficient for common use cases like modifying HTTP headers, rewriting URLs, and routing traffic: tasks that do not require the additional features and resources Cloudflare Workers has to offer.

Snippets also run before Workers; this means that users will be able to move simple logic out of a Cloudflare Worker into Snippets, or use Cloudflare Workers and its features to further modify a request. The Traffic Sequence UI has also been updated to incorporate Snippets, allowing you to easily see how all the products fit together and how HTTP requests flow between them.

What can you build with Cloudflare Snippets?

Snippets allow customers to migrate their existing workloads to Cloudflare. For example, customers that wish to set a dynamic cookie on their responses for a percentage of requests can use the `Math.random()` function within their Snippet.
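A minimal sketch of that idea follows. The cookie name, sampling rate, and stub origin response are all illustrative; a deployed Snippet would obtain the real response with `await fetch(request)` and export the handler object as its default export.

```javascript
// Sketch: set a cookie on roughly 5% of responses using Math.random().
const snippet = {
  async fetch(request) {
    // In a real Snippet: const originResponse = await fetch(request);
    const originResponse = new Response("stub origin body");
    const response = new Response(originResponse.body, originResponse);

    if (Math.random() < 0.05) {
      // Tag roughly 1 in 20 visitors, e.g. for an experiment bucket.
      response.headers.set("Set-Cookie", "ab_bucket=test; Max-Age=3600; Path=/");
    }
    return response;
  },
};
```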

By leveraging the Ruleset Engine, we can improve the implementation by moving the set-cookie condition into the rule, instead of executing the Snippet on every response or handling the condition within the Snippet itself. For example, if I only want to set this cookie on my shop subdomain, and only for German or UK customers, I can create the following rule.
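Expressed in the Ruleset Engine filter syntax, such a rule might look like the following (the hostname is illustrative):

```
http.host eq "shop.example.com" and ip.geoip.country in {"DE" "GB"}
```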

This approach ensures that the snippet will only execute when necessary, minimizing additional processing and reducing the complexity of the code required.

We are excited to see what other use cases Cloudflare Snippets unlock for our customers.

Using Snippets

Snippets are located within the Rules section of the Cloudflare Dashboard. Here customers can use the UI to write, preview and deploy their first Snippet.

As with all Cloudflare products, users can deploy their Snippets via the API and Terraform, making it easy to incorporate Snippets within CI/CD pipelines. An added benefit of using the Ruleset Engine is that users can test their code on a subset of traffic: for example, by specifying an office IP or a secret header within the filter, the Snippet will only trigger when that condition is present. Finally, we will be integrating Snippets within the Account Request Tracer, allowing users to easily identify all rules that execute on a specific request.

How did we build Snippets?

During Developer Week, we discussed the process of Building Cloudflare on Cloudflare, using our Cloudflare Workers developer platform to enhance our products in terms of speed, robustness, and ease of development. Snippets, the latest Cloudflare product, is built on top of Workers for Platforms.

A Snippet is a piece of user-defined JavaScript that, upon creation, is assigned a unique Snippet ID. This Snippet ID is then associated with a user-defined rule created using the Ruleset Engine syntax. When a rule is created, a unique Rule ID is assigned to it, and the Snippet ID and Rule ID are linked in a one-to-one relationship. Customers can create multiple Snippets and rules, each with its own unique Snippet ID and Rule ID. Customers with multiple Snippets can easily prioritize them within the UI or via the API, similar to our other rules-based products.

When a customer's request reaches Cloudflare, we evaluate the request parameters against the created Snippet rules within a user's zone. If a Snippet rule is matched, the corresponding unique Snippet ID is added to a Snippet table. Once all the rules have been evaluated and the Snippet table has been compiled, the completed table is passed to the Snippets Internal Worker Service.

This Worker receives all the Snippet IDs stored within the table and executes them sequentially. The system's design keeps Snippets simple: users manage individual Snippets independently, even when several execute on the same request. This grants users the freedom to control and fine-tune each Snippet rather than merging them into a single entity.

Each Snippet receives the modified request from the previous Snippet and applies its own modifications. After the final Snippet has executed, the resulting modified request is passed back to FL for the next step of request processing.

Snip into action

We are excited to see the innovative use cases that our customers will create with Snippets. In the upcoming weeks, we will start granting access to the alpha version to those on our waitlist. If you haven't joined the waitlist yet, you can still sign up; an open beta will follow later this year.

How to use Cloudflare Observatory for performance experiments

Post Syndicated from Matt Bullock original http://blog.cloudflare.com/performance-experiments-with-cloudflare/

Website performance is crucial to the success of online businesses. Study after study has shown that an increased load time directly affects sales. But how do you test products that could improve your website speed without incurring an element of risk?

In today's digital landscape, it is easy to find code optimizations on the Internet, including in our own developer documentation, to improve the performance of your website or web applications. However, implementing these changes without knowing the impact they’ll have can be daunting. It could also cause an outage, taking websites or applications offline entirely and leaving admins scrambling to remove the offending code and get the business back online.

Users need a way to see the impact of these improvements on their websites without impacting uptime. They want to understand “If I enabled this, what performance boost should I expect to get?”.

Today, we are excited to announce Performance Experiments in Cloudflare Observatory. Performance Experiments gives users a safe place to experiment and determine the best setup to improve their website performance before pushing it live for all visitors to benefit from. Cloudflare users will be able to simply enter the desired code, run our Observatory testing suite and view the impact it would have on their Lighthouse score. If they are satisfied with the results, they can push the experiment live with the click of a button.

Experimenting within Observatory

Cloudflare Observatory, announced today, allows users to easily monitor website performance by integrating Real User Monitoring (RUM) data and synthetic tests in one location. This makes it easy to identify areas for optimization and leverage Cloudflare's features to address performance issues.

Observatory's recommendations leverage insights from these Lighthouse tests and RUM data, enabling precise identification of issues and offering tailored Cloudflare settings for enhanced performance. For example, when a Lighthouse report suggests image optimization improvements, Cloudflare recommends enabling Polish or utilizing Image Resizing. These recommendations can be implemented with a single click, allowing customers to boost their performance score effortlessly.

Fine tuning with Experiments

Cloudflare’s Observatory allows customers to easily enable recommended Cloudflare settings. However, through Cloudflare Workers, web performance advocates have been able to create and share JavaScript examples of how to improve and optimize a website.

A great example of this is Fast Fonts. Google Fonts are slow due to how they are served. When using Google Fonts on your website, you include a stylesheet URL that contains the font styles you want to use. The CSS file is hosted on one domain (fonts.googleapis.com), while the font files are on another domain (fonts.gstatic.com). This separation means that each resource requires at least four round trips to the server for DNS lookup, establishing the socket connection, negotiating TLS encryption (for https), and making the request itself.

These requests cannot be done in parallel because the fonts are not known until after the CSS is downloaded and applied to the page. In the best-case scenario, this leads to eight round trips before the text can be displayed. On a slower 3G connection with a 300ms round-trip time, this delay can add up to 2.4 seconds. To fix this issue Cloudflare Workers can be used to reduce the performance penalties of serving Google Fonts directly from Google by 81%.

Another issue is resource prioritization. When all requests come from the same domain on the same HTTP/2 connection, critical resources like CSS and fonts can be prioritized and delivered before lower priority resources like images. However, since Google Fonts (and most third-party resources) are served from a different domain than the main page resources, they cannot be prioritized and end up competing with each other for download bandwidth. This competition can result in significantly longer fetch times than the best-case scenario of eight round trips.
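The heart of that Worker's approach can be sketched as a URL rewrite: references to Google's font domains are mapped onto same-origin paths, so the stylesheet and font files ride the connection the browser already has. The `/fonts/...` path prefixes below are illustrative, not the actual paths the Fast Fonts Worker uses.

```javascript
// Sketch: map Google Fonts URLs onto same-origin paths so the CSS and
// font files are fetched from your own domain. Path prefixes are illustrative.
function rewriteFontUrl(url) {
  const u = new URL(url);
  if (u.hostname === "fonts.googleapis.com") {
    // The stylesheet request, e.g. /css2?family=Roboto
    return "/fonts/css" + u.pathname + u.search;
  }
  if (u.hostname === "fonts.gstatic.com") {
    // The font files referenced by that stylesheet.
    return "/fonts/static" + u.pathname;
  }
  return url; // everything else is untouched
}

console.log(rewriteFontUrl("https://fonts.googleapis.com/css2?family=Roboto"));
// → "/fonts/css/css2?family=Roboto"
```

A Worker applying this rewrite to the HTML and CSS it proxies then serves both resource types from the site's own hostname.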

To implement this, users first create a Cloudflare Worker, deploy the code from the GitHub repository using Wrangler, and then run manual tests to check that performance has improved and that there are no issues with the website loading. Users can choose to implement the Worker on a test path that may not be a true reflection of production, or complicate the Worker further by implementing an A/B test that could still affect end users. So how can users test code on their website to easily see whether it will improve performance, without any adverse impact on end users?

Introducing Performance Experiments

Last year we announced Cloudflare Snippets. Snippets is a platform for running discrete pieces of JavaScript code on Cloudflare before your website is served to the user. They provide a convenient way to customize and enhance your website's functionality. If you are already familiar with Cloudflare Workers, our developer platform, you'll find Snippets to be a familiar and welcome addition to your toolkit. With Snippets, you can easily execute small pieces of user-created JavaScript code to modify the behavior of your website and improve performance, security, and user experience.

Combining Snippets with Observatory lets users easily run experiments and get instant feedback on the performance impact. Users will be able to find a piece of JavaScript, insert it into the Experiments window and hit test. Observatory will then automatically run multiple Lighthouse tests with the experiment disabled and then enabled. The results will show the before and after scores allowing users to determine the impact of the experiment e.g. “If I put this JavaScript on my website, my Lighthouse score would improve by 15 points”.

This allows users to understand whether the JavaScript has had a positive performance impact on their website. Users can then deploy this JavaScript, via Snippets, against all requests or on a specific subset of traffic. For example, if I only wanted it to run on traffic from the UK or my office IPs, I would use the rule below:
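In the Ruleset Engine filter syntax, such a rule might look like the following (the office IPs are illustrative addresses from a documentation range):

```
ip.geoip.country eq "GB" or ip.src in {203.0.113.7 203.0.113.8}
```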

Alternatively, if the results show a negative performance impact, users can safely discard the experiment or try another example, all without real visitors to the website ever being impacted or at risk.

Accessing Performance Experiments

Performance Experiments are currently under development — you can sign up here to join the waitlist for access.

We hope to begin admitting users later in the year, with an open beta to follow.

Faster website, more customers: Cloudflare Observatory can help your business grow

Post Syndicated from Matt Bullock original http://blog.cloudflare.com/cloudflare-observatory-generally-available/

This post is also available in 简体中文, 日本語 and Español.

Website performance is crucial to the success of online businesses. Study after study has shown that an increased load time directly affects sales. In highly competitive markets the performance of a website is crucial for success. Just like a physical shop situated in a remote area faces challenges in attracting customers, a slow website encounters similar difficulties in attracting traffic. It is vital to measure and improve website performance to enhance user experience and maximize online engagement. Results from testing at home don’t take into account how your customers in different countries, on different devices, with different Internet connections experience your website.

Simply put, you might not know how your website is performing. And that could be costing your business money every single day.

Today we are excited to announce Cloudflare Observatory – the new home of performance at Cloudflare.

Cloudflare users can now easily monitor website performance using Real User Monitoring (RUM) data along with scheduled tests from different regions in a single dashboard. This will identify any performance issues your website may have. The best bit? Once we’ve identified any issues, Observatory will highlight customized recommendations to resolve these issues, all with a single click.

Making your website faster just got a lot easier.

I feel the need. The need for speed!

Having a fast website is crucial for achieving online success. According to Google, even a one-second improvement in load time can boost mobile conversions by up to 27%.

A study from Deloitte found “With a 0.1s improvement in site speed, we observed that retail consumers spent almost 10% more”. Another study, from Google, found “53% will leave a mobile site if it takes more than 3 seconds to load”. There is a very real link between website performance and business success.

In today's digital landscape, customers expect instant access to information and seamless browsing experiences. We have all encountered the frustration of waiting for a website to load, often leading us to click the back button and click on the next link. For ecommerce sites, this delay directly translates to lost revenue as users quickly navigate elsewhere.

This importance is further amplified in the world of Search Engine Optimization (SEO). In May 2021, Google announced that page speed would be incorporated into their ranking algorithm, highlighting the significance of fast-loading web pages for higher search engine rankings.

Introducing Observatory

In 2019, we launched the new Speed Tab with the mission to address two crucial questions: "How fast is my website after moving to Cloudflare?" and "How fast could it be?" This tab allowed customers to compare their website's performance before and after enabling Cloudflare features. However, it required users to delve into analytics and analyze traffic patterns and cache hit ratios to optimize their sites, which proved challenging for new Cloudflare users.

To address this, we developed Observatory, a fresh approach to performance monitoring at Cloudflare. Observatory fills the gap that previously existed in understanding website performance and simplifies the process of addressing performance issues by providing tailored recommendations.

Observatory integrates Real-User Monitoring (RUM) data, which enables users to understand their website's performance as experienced by their end users across the globe. By leveraging RUM data we can show valuable insight into the areas of the website that can be optimized and surface Cloudflare features and functionality that can address these issues.

Additionally, Observatory incorporates Google Lighthouse, the industry standard tool for evaluating web performance. We replaced WebPageTest with Lighthouse due to its versatility and widespread adoption in the performance community. With Lighthouse, users can run, schedule, and access Lighthouse performance reports directly in the Cloudflare dashboard.

Observatory also enables regional testing, recognizing the importance of understanding performance variations across different locations. By simulating website performance in different regions, users can understand if their webpage performs well in certain countries and poorly in others. This enables users to optimize their websites for a global audience, ensuring consistent and fast user experiences regardless of location.

Observatory becomes your unified place within the Cloudflare dashboard for website performance by bringing together RUM data, Lighthouse insights, and regional testing. Users can gain a comprehensive understanding of their website's performance and implement Cloudflare recommendations based on this data with just a click of a button.

Measuring performance in Cloudflare Observatory

We support the two main methods of testing website performance. These are synthetic tests and Real User Monitoring (RUM) tests.

Synthetic tests involve simulating user interactions and monitoring performance under controlled environments. These tests can provide valuable baseline measurements and help identify potential issues before deploying changes.

On the other hand, RUM tests involve collecting data directly from real users as they interact with the website, capturing their actual experiences in different environments and network conditions. RUM tests offer insights into the true end-user perspective. By combining both synthetic and RUM tests, website owners can gain a holistic view of performance, understanding how changes and optimizations affect both simulated and real user experiences.

Cloudflare Observatory combines both of these in one location. The integration of Google Lighthouse within the Observatory gives Cloudflare users a simple way to synthetically measure and understand their site's performance. Google Lighthouse measures several key performance metrics that impact user experience and search engine ranking. The generated report provides an overall performance score ranging from 0 (least performant) to 100 (most performant), making it easy for website owners to understand their site's performance.

Observatory offers a user-friendly interface that presents each Lighthouse metric in a traffic light system, indicating the result of the tested metric. One critical metric is Largest Contentful Paint (LCP), which measures the loading performance of a page's primary content. An optimal LCP score is less than 2.5 seconds, indicating satisfactory loading speed for the user. Through Observatory, website owners can easily see their LCP score and other metrics, identify opportunities for improvement, and make informed decisions to enhance their site's performance and user experience.

New Smarter Recommendations

Recommendations from Observatory have become smarter by leveraging the insights gathered from Lighthouse and RUM testing. This enables us to precisely identify issues and offer tailored Cloudflare settings to enhance performance. For instance, when you receive a Lighthouse report it will highlight areas in which your website can be improved. In the provided report, several enhancements for image optimization are suggested. Cloudflare takes this feedback into account and provides product recommendations, such as enabling Polish or utilizing Image Resizing. This empowers our customers to enhance their performance score with just a single click.

Customers will have the convenience of viewing these recommendations within the Cloudflare dashboard, directly linked to the audit. The dashboard will encompass a wide range of Cloudflare features and functionalities, continually improving over time. With the addition of Cache Rules recommendations for uncached static content and a comprehensive testing suite, users will gain valuable insights into the benefits of implementing specific Cloudflare features before enabling them.

By knowing the performance impact of a product or feature before it is enabled, customers can make informed decisions and optimize their website's performance with confidence.

More tests, multiple regions and recurring tests

A significant piece of feedback we received from our old Speed Tab and beta testing was regarding the number and location of tests. We're thrilled to announce that we have addressed this feedback by increasing the number of tests allowed and enabling all plan types to schedule at least one recurring test, originating from a US region.

Customers on our Pro, Business, and Enterprise family of subscriptions can run tests from various regions to understand their site's performance in those areas. For instance, if a website is solely hosted in Iowa, USA, and a visitor is accessing it from Sydney, Australia, they will experience a slower page load due to the time it takes for an uncached file to be sent and rendered by the user's browser over a distance of 14,000 kilometers. By running tests from various regions, our customers can gain valuable insights into their website's performance and make informed decisions to optimize it for a better user experience – and an improved page load time.

The higher your plan type the more tests you are able to run and the more regions you are able to use. For example Pro customers can set up five recurring tests for their most important page from five different locations. These test runs will then be stored within the Observatory history tab allowing them to understand their Page Speed score from around the globe. Below is a table detailing the number of tests each plan type can run and the regions available to them.

Plan        Ad-hoc tests   Recurring tests   Frequency of recurring tests
Free        5              1                 Weekly
Pro         10             5                 Daily
Business    20             10                Daily
Enterprise  50             15                Daily

Regions supported: the Free plan tests from Iowa, USA. Pro, Business, and Enterprise plans add: South Carolina, USA; North Virginia, USA; Dallas, USA; Oregon, USA; Hamina, Finland; Madrid, Spain; St. Ghislain, Belgium; Eemshaven, Netherlands; Milan, Italy; Paris, France; Changhua County, Taiwan; Tokyo, Japan; Osaka, Japan; Tel Aviv, Israel; London, England; Jurong West, Singapore; Sydney, Australia; Frankfurt, Germany; Mumbai, India; São Paulo, Brazil.

Incorporating RUM

Cloudflare’s RUM service provides insights into a user's browser or device, tracking metrics such as page load times, response times, and other user interactions. Cloudflare collects RUM data through its Browser Insights feature, which inserts a JavaScript "beacon" into HTML pages. This beacon sends information back to Cloudflare about the performance of a website from the perspective of real users, including metrics such as page load time, time to first byte, and other Web Vitals.

While you can always try a few page loads on your own laptop and see the results, gathering data from real users is the only way to take into account real-life device performance and network conditions.

Observatory now incorporates RUM data matched against your tested paths, allowing you to easily see how real users experience your site across the globe. This data is surfaced in the Observatory tab alongside your tested paths, letting you view synthetic test data directly against Real User metrics.

Our RUM provider already incorporates the Interaction to Next Paint (INP) score. In 2022, Google announced Interaction to Next Paint (INP), promoting it as the new Core Web Vital metric for responsiveness, replacing First Input Delay (FID). FID measures the delay between a user's first interaction with a web page and the browser's response to that interaction. INP measures the delay for any user interaction on a website, not just the first input. This change reflects a more comprehensive approach to evaluating the responsiveness of a website.

If you don't have Web Analytics enabled on your Cloudflare zone then we will be unable to collect and display RUM data within Observatory. Enabling this feature is very simple and instructions can be found here.

One click optimizations

Observatory now includes an enhanced Optimization layout, which introduces a one-click recommendations center. Enabling these features on your Cloudflare zone enhances optimization for the latest HTTP protocols, including HTTP/3. Additionally, Image Delivery is improved by converting PNGs and JPEGs to the efficient WebP format. Finally, Cloudflare performance tools are also enabled, allowing users to seamlessly implement new technologies such as Early Hints. These features are designed to contribute to improved website speed and overall performance.

As we release new features that we believe are beneficial to our customers, we will continue to add them to the One Click Optimizations. We have also made changes to the overall layout of the tab, splitting our products into subcategories to allow easy navigation to the individual performance products.

Available now

Observatory is available now! Become the Web Performance advocate in your organization by taking advantage of the Observatory features such as Google Lighthouse integration, RUM data, and multi-region testing, all available now. You will be able to gain valuable insights into your website's performance and make informed decisions to optimize and improve your site's performance.

In the coming months, we will continue expanding the Recommendations engine, introducing more products that empower you to continually enhance your website's performance. Additionally, we will provide the capability to simulate requests for specific features, giving you a comprehensive understanding of the real-world performance benefits before implementing them on your website.

Introducing Configuration Rules

Post Syndicated from Matt Bullock original https://blog.cloudflare.com/configuration-rules/

A powerful new set of tools

In 2012, we introduced Page Rules to the world, announcing:

“Page Rules is a powerful new set of tools that allows you to control how CloudFlare works on your site on a page-by-page basis.”

Ten years later, and with all F’s lowercase, we are excited to introduce Configuration Rules — a Page Rules successor and a much improved way of controlling Cloudflare features and settings. With Configuration Rules, users can selectively turn on/off features which would typically be applied to every HTTP request going through the zone. They can do this based on URLs – and more, such as cookies or country of origin.

Configuration Rules opens up a wide range of use cases for our users that previously were impossible without writing custom code in a Cloudflare Worker. Use cases such as A/B testing configuration, or enabling features only for a set of file extensions, are now made possible thanks to the rich filtering capabilities of the product.

Configuration Rules are available for use immediately across all plan levels.

Turn it on, but only when…

As each HTTP request enters a Cloudflare zone we apply a configuration. This configuration tells the Cloudflare server handling the HTTP request which features the HTTP request should ‘go’ through, and with what settings/options. This is defined by the user, typically via the dashboard.

The issue arises when users want to enable these features, such as Polish or Auto Minify, only on a subset of the traffic to their website. For example, users may want to disable Email Obfuscation, but only for a specific page on their website, so that contact information is shown correctly to visitors. To do this, they can deploy a Configuration Rule.


Configuration Rules lets users selectively enable or disable features based on one or more ruleset engine fields.

Currently, there are 16 available actions within Configuration Rules. These actions range from Disable Apps, Disable Railgun and Disable Zaraz to Auto Minify, Polish and Mirage.

These actions effectively ‘override’ the corresponding zone-wide setting for matching traffic. For example, Rocket Loader may be enabled for the zone example.com:


If the user, however, does not want Rocket Loader to be enabled on their checkout page due to an issue it causes with a specific JavaScript element, they could create a Configuration Rule to selectively disable Rocket Loader:


This interplay between zone level settings and Configuration Rules allows users to selectively enable features, allowing them to test Rocket Loader on staging.example.com prior to flipping the zone-level toggle.
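As an illustration, the Rocket Loader example above could be expressed as a rule payload. This is a minimal sketch only: the `set_config` action and `rocket_loader` parameter names follow the Rulesets API documentation, but treat the exact schema as an assumption and check the current API reference before relying on it.

```python
import json

# Hedged sketch of a Configuration Rule as a Rulesets API payload.
# The field names are illustrative, not an authoritative schema.
rule = {
    "expression": 'starts_with(http.request.uri.path, "/checkout")',
    "action": "set_config",
    # overrides the zone-wide Rocket Loader toggle for matching traffic
    "action_parameters": {"rocket_loader": False},
    "description": "Disable Rocket Loader on checkout pages",
}

payload = json.dumps({"rules": [rule]})
```

A payload like this would be deployed to the zone's configuration-settings ruleset via the API or Terraform.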

With Configuration Rules, users also have access to various other non-URL related fields. For example, users could use the ip.geoip.country field to ensure that visitors from specific countries always have the ‘Security Level’ set to ‘I’m under attack’.

Historically, these configuration overrides were achieved with the setting of a Page Rule.

Page Rules is the ‘If This Then That’ of Cloudflare. Where the ‘If…’ is a URL, and the ‘Then That’ is changing how we handle traffic to specific parts of a ‘zone’. It allows users to selectively change how traffic is handled, and in this case specifically, which settings are and aren’t applied. It is very well adopted, with over one million Page Rules in the past three months alone.

Page Rules, however, are limited to performing actions based upon the requested URL. This means if users want to disable Rocket Loader for certain traffic, they need to make that decision based on the URL alone. This can be challenging for users who may want to perform this decision-making on more nuanced aspects, like the user agent of the visitor or on the presence of a specific cookie.

For example, users might want to set the ‘Security Level’ to ‘I’m under attack’ when the HTTP request originates in certain countries. This is where Configuration Rules help.

Use case: A/B testing

A/B testing is the term used to describe the comparison of two versions of a single website or application. It allows users to create a copy of their current website (‘A’), change it (‘B’) and compare the difference.

In a Cloudflare context, users might want to A/B test the effect of features such as Mirage or Polish prior to enabling them for all traffic to the website. With Page Rules, this was impractical. Users would have to create Page Rules matching on specific URI query strings and A/B test by appending those query strings to every HTTP request.


With Configuration Rules, this task is much simpler. Leveraging one or more fields, users can filter on other parameters of a HTTP request to define which features and products to enable.

For example, by using the expression any(http.request.cookies["app"][*] == "test") a user can ensure that Auto Minify, Mirage and Polish are enabled only when this cookie is present on the HTTP request. This allows comparison testing to happen before enabling these products either globally, or on a wider set of traffic. All without impacting existing production traffic.
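To make the cookie matching concrete, here is a toy re-implementation of that expression's logic. The real evaluation happens inside Cloudflare's ruleset engine; this helper only illustrates what the expression tests against an incoming request.

```python
# Illustrative stand-in for the filter expression
# any(http.request.cookies["app"][*] == "test").
# Not Cloudflare code -- just a model of the matching behaviour.
def matches_ab_cookie(cookie_header: str) -> bool:
    """Return True if any 'app' cookie on the request has the value 'test'."""
    cookies: dict[str, list[str]] = {}
    for part in cookie_header.split(";"):
        if "=" not in part:
            continue
        name, _, value = part.strip().partition("=")
        cookies.setdefault(name, []).append(value)
    return any(v == "test" for v in cookies.get("app", []))
```

Requests carrying `app=test` would match the rule and get the test configuration; all other traffic keeps the zone defaults.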


Use case: augmenting URLs

Configuration Rules can also be used to augment existing requirements. One example given in ‘The Future of Page Rules’ blog is increasing the Security Level to ‘High’ for visitors trying to access the contact page of a website, to reduce the number of malicious visitors to that page.

In Page Rules, this would be done by specifying the contact page URL pattern, e.g. example.com/contact*, and setting the security level. This ensures that any “visitors that exhibit threatening behavior within the last 14 days” are served with a challenge prior to having that page load.

Configuration Rules can take this use case and augment it with additional fields, such as whether the source IP address is in a Cloudflare Managed IP List. This allows users to be more specific about when the security level is changed to ‘High’, such as only when the request is also marked as coming from open HTTP and SOCKS proxy endpoints, which are frequently used to launch attacks and hide attackers’ identities.


This reduces the chance of a false positive, and a genuine visitor to the contact form being served with a challenge.
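Combined, the rule might look like the following sketch. The `$cf.open_proxies` managed list name appears in Cloudflare's documentation, but the payload shape and the `security_level` value are assumptions to be checked against the current API reference.

```python
# Hedged sketch: a Configuration Rule combining a URI path check with a
# Cloudflare Managed IP List. Field names are illustrative.
rule = {
    "expression": (
        'starts_with(http.request.uri.path, "/contact") '
        "and ip.src in $cf.open_proxies"
    ),
    "action": "set_config",
    # raise the security level only for likely-proxy traffic to /contact
    "action_parameters": {"security_level": "high"},
    "description": "Challenge open proxies on the contact page",
}
```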

Try it now

Configuration Rules are available now via API, UI, and Terraform for all Cloudflare plans! We are excited to see how you will use them in conjunction with all our new rules releases from this week.

Where to? Introducing Origin Rules

Post Syndicated from Matt Bullock original https://blog.cloudflare.com/origin-rules/


Host headers are key


The host header of an HTTP request tells the receiving server (‘origin’) which website or application a client wants to access.

When an origin receives an HTTP request, it checks the value of this ‘host’ header to see if it is responsible for that traffic. If it finds a match the request will be routed appropriately and the correct data will be returned to the visitor. If it doesn’t find a match, it will return an error telling the visitor it doesn’t have an application or website that matches what they are asking for.

In simple setups this is often not an issue. All requests for example.com are sent to the same origin, which sees the host header example.com and returns the relevant files. However, not all setups are as straightforward. SaaS (Software-as-a-Service) platforms use host headers to route visitors to the correct instance or S3-compatible bucket.

To ensure the correct content is still loaded, the host header must equal the name of this instance or bucket to allow the receiving origin to route it correctly. This means at some point in the traffic flow, the host header must be changed to match the instance or bucket name, before being sent to the SaaS platform.

Another common issue is when web applications on an origin are listening on a non-standard port, e.g. 8001. Requests sent via HTTPS will by default arrive on port 443. To ensure the traffic isn’t subsequently sent to port 443 on the origin, it must be intercepted and have its destination port changed to 8001. This ensures the origin receives traffic where it expects it. Previously this would be done with a Cloudflare Worker, a Cloudflare Spectrum application, or a dedicated application running on the origin.

Both of these scenarios require customers to write and maintain code to intercept HTTP requests and parse them to ensure they go to the correct origin location, the correct port on that origin, and with the correct host header. This is a burden for administrators to maintain, particularly as legacy applications are migrated away from on-premise and into SaaS.

Cloudflare users want more control on where their traffic goes to – when it goes there – and what it looks like when it arrives. And they want this to be simple to set up and maintain.

To meet those demands we are today announcing Origin Rules, a new product which allows for overriding the host header, the Server Name Indication (SNI), destination port and DNS resolution of matching HTTP requests.

Origin Rules is now the one-stop destination for users who want to change which origin traffic goes to, when this should happen, and what that traffic looks like when it arrives – all without ever having to write a single line of code.

One hostname, many origins

Setting up your service on Cloudflare is very simple. You tell us your domain name, example.com, and where traffic should be sent to when we receive requests that match it. Often this is an IP address. You can also create subdomains, e.g. shop.example.com, and follow the same pattern.

This allows for the web server running www.example.com to live on the IP address 98.51.100.12, and the web server responsible for running shop.example.com to live on a different IP address, e.g. 203.0.113.34. When Cloudflare receives a request for shop.example.com, we send that traffic to the web server at 203.0.113.34 with the host header shop.example.com.


As most web servers commonly serve multiple websites, this host header is used to ensure the correct content is loaded. The web server looks at the request it receives, checks the host header, and tries to match it against websites it’s been told to serve. If it finds a match, it will route this request to the corresponding website’s configuration and the correct files are returned to the visitor.

This has been a foundational principle of the Internet for many years now. Unsurprisingly however, new solutions emerge and user needs evolve.

We have heard from users who want to be able to send different URLs to different origins, such as a SaaS provider for their ecommerce platform and a SaaS provider for their support desk. To achieve this, users could, and do, decide to run and manage their own reverse proxy at this IP address to act as a router. This allows a user to send all traffic for example.com to a single IP address, and let the reverse proxy determine where it goes next:

    location ~ ^/shop {
        proxy_set_header   Host $http_host;
        # no URI part on proxy_pass: the original request path is passed through
        proxy_pass         https://203.0.113.34;
    }

This reverse proxy would detect the traffic sent with the host header example.com with a URI path starting with /shop, and send those matching HTTP requests to the correct SaaS application.

This is potentially a complex system to maintain, however, and as it is an ‘extra hop’, there is an increase in latency as requests first go through Cloudflare, to the origin server, then back to the SaaS provider – who may also be on Cloudflare. In a world rapidly migrating away from on-premise software to SaaS platforms, running your own server to do this specific function goes against the grain.

Users therefore want a way to tell Cloudflare – ‘for all traffic to www.example.com, send it to 98.51.100.12. BUT, if you see any traffic to www.example.com/shop, send it to 203.0.113.34’. This is what we call a resolve override. It is essentially a DNS override.

With a resolve override in place, HTTP requests to www.example.com/shop are now correctly sent by Cloudflare to 203.0.113.34 as requested. And they fail. The web server says it doesn’t know what to do with the HTTP request. This is because the host header is still www.example.com, and the web server does not have any knowledge of that website.


To fix this, we need to make sure these requests are sent to 203.0.113.34 with a host header of shop.example.com. This is what is known as a host header override. Now, requests to www.example.com/shop are not only correctly routed to 203.0.113.34, but the host header is changed to one that the ecommerce software is expecting – and thus the request is correctly routed, and the visitor sees the correct content.
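The combined effect of the resolve override and host header override can be modelled in a few lines. Cloudflare applies these overrides internally; this toy function (with the post's placeholder hostnames and IPs) only illustrates what the rules do to a matching request.

```python
# Toy model of a resolve override plus host header override.
# Hostnames and IPs are the illustrative values from the post.
def route(host: str, path: str) -> tuple[str, str]:
    """Return (origin_ip, host_header) for an incoming request."""
    if host == "www.example.com" and path.startswith("/shop"):
        # resolve override sends traffic to the shop origin;
        # host header override makes that origin recognise the request
        return ("203.0.113.34", "shop.example.com")
    return ("98.51.100.12", host)  # default: normal DNS answer, host untouched
```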


The management of these selective overrides, and other overrides, is achieved via Origin Rules.

Origin Rules allow users to route HTTP traffic to different destinations and override certain request characteristics based on a number of criteria such as the visitor’s country, IP address or HTTP request headers.

Route on more than a URL

Origin Rules is built on top of our ruleset engine. This gives users the ability to perform routing decisions based on many fields, including the requested URL, the visitor’s country, specific request headers, and more.

Using a combination of one or more of these available fields, users can ensure traffic is routed to specific backends, only when specific criteria are met such as host, URI path, visitor’s country, and HTTP request headers.

Historically, host header override and resolve override were achieved with the setting of a Page Rule.


Page Rules is the ‘If This Then That’ of Cloudflare. Where the ‘If…’ is a URL, and the ‘Then That’ is changing how we handle traffic to specific parts of a ‘zone’. It allows users to selectively change how traffic is handled, or in this case, where traffic is sent. It is very well adopted, with over one million Page Rules in the past three months alone.

Page Rules, however, are limited to performing actions based upon the requested URL. This means if users want to change the backend a HTTP request goes to, they need to make that decision based on the URL alone. This can be challenging for users who may want to perform this decision-making on more nuanced aspects, like the user agent of the visitor or on the presence of a specific cookie.

With Origin Rules, users can perform host header override, resolve override, destination port override and SNI overrides – based on any number of criteria – not only the requested URL. This unlocks a number of interesting use cases.

Example use case: integration with cloud storage endpoints

One such use case is using a cloud storage provider as a backend for static assets, such as images. Enterprise zones can use a combination of host header override and resolve override actions to override the destination of outgoing HTTP requests. This allows all traffic for example.net to be sent to 98.51.100.12, but requests to example.net/*.jpg to be sent to a publicly accessible S3-compatible bucket.


To do this, the user would create an Origin Rule setting the resolve override value to be a DNS record on their own zone, pointing to the S3 provider’s URL. This ensures that requests matching the pattern are routed to the S3 URL. However, when the cloud storage provider receives the request it will drop it – as it does not know how to route requests for the host example.net. Therefore, users also need to deploy a host header override, changing this value to match the bucket name – e.g. bucket.example.net.

Combined, this ensures requests matching the pattern correctly reach the cloud storage provider – with a host header it can use to correctly route the request to the correct bucket.
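As a sketch, the combined Origin Rule could be expressed like this. The `route` action and its parameter shape follow the Rulesets API documentation, but treat them as illustrative; `bucket.example.net` is a placeholder for the DNS record pointing at the bucket.

```python
# Hedged sketch of an Origin Rule combining a resolve override with a
# host header override. Field names are illustrative.
rule = {
    "expression": 'ends_with(http.request.uri.path, ".jpg")',
    "action": "route",
    "action_parameters": {
        "origin": {"host": "bucket.example.net"},  # resolve override
        "host_header": "bucket.example.net",       # host header override
    },
}
```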

Origin Rules also enable new use cases. For example, a user can use Origin Rules to A/B test different cloud providers prior to a cut over. This is possible by using the field http.request.cookies and routing traffic to a new, test bucket or cloud provider based on the presence of a specific cookie on the request.

Users with multiple storage regions can also use the ip.geoip.country field within a filter expression to route users to the closest storage instance, reducing latency and time to load for these requests.

Destination port override

Cloudflare listens on 13 ports; seven ports for HTTP, six ports for HTTPS. This means if a request is sent to a URL with the destination port of 443, as is standard for HTTPS, it will be sent to the origin server with a destination port of 443. The same 1:1 mapping applies to the other twelve ports.
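For reference, the thirteen proxied ports (per Cloudflare's published network documentation) break down as follows:

```python
# The 1:1 port mapping described above: ports Cloudflare proxies by default.
HTTP_PORTS = {80, 8080, 8880, 2052, 2082, 2086, 2095}   # seven HTTP ports
HTTPS_PORTS = {443, 2053, 2083, 2087, 2096, 8443}       # six HTTPS ports
```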

But what if a user wanted to change that mapping? For example, when the backend origin server is listening on port 8001. In this scenario, an intermediate service is required to listen for requests on port 443 and create a sub-request with the destination port set to 8001.

Historically this was done on the origin server itself – with a reverse proxy server listening for requests on 443 and other ports and proxying those requests to another port.

Apache
    <VirtualHost *:443>
        ServerName example.com
        ProxyPreserveHost On
        ProxyPass / http://127.0.0.1:8001/
        ProxyPassReverse / http://127.0.0.1:8001/
    </VirtualHost>

NGINX
server {
  listen 443;
  server_name example.com;
  location / {
    proxy_pass http://127.0.0.1:8001;
  }
}

More recently, users have deployed Cloudflare Workers to perform this service, modifying the destination port before HTTP requests ever reach their servers.

Origin Rules simplifies destination port modifications, letting users change the destination port via a simple rules experience without ever having to write a single line of code or configuration:


This destination port modification can also be triggered on almost any field available in the ruleset engine, allowing users to change which port requests are sent to based on the URL, URI path, the presence of an HTTP request header, and more.
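Expressed as a rule payload, a destination port override might look like the sketch below. The `origin.port` parameter follows the Rulesets API documentation, but the schema should be treated as an assumption; `app.example.com` is a placeholder hostname.

```python
# Hedged sketch of a destination port override as an Origin Rule.
rule = {
    "expression": 'http.host eq "app.example.com"',
    "action": "route",
    # send matching traffic to the origin's non-standard port
    "action_parameters": {"origin": {"port": 8001}},
}
```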

Server Name Indication

Server Name Indication (SNI) is an addition to the TLS encryption protocol. It enables a client device to specify the domain name it is trying to reach in the first step of the TLS handshake, preventing common “name mismatch” errors. Customers using Cloudflare for SaaS may have millions of hostnames pointing to Cloudflare. However, the origin that these requests are sent to may not have an individual certificate for each of the hostnames.

Users today have the option of doing this on a per-custom-hostname basis using custom origins in SSL for SaaS; however, for Enterprise customers not using this setup, overriding the SNI was previously impossible.

Enterprise users can use Origin Rules to override the value of the SNI, provided it matches any other zone in their account. This removes the need for users to manage multiple certificates on the origin, or to leave connections from Cloudflare to the origin unencrypted.
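An SNI override rule might be sketched as follows. The `sni` parameter shape is taken from the Rulesets API documentation but should be verified against the current reference; `other.example.com` stands in for another zone on the same account, as the constraint above requires.

```python
# Hedged sketch of an SNI override as an Origin Rule.
rule = {
    "expression": 'http.host eq "legacy.example.com"',
    "action": "route",
    # present a different SNI value on the TLS connection to the origin
    "action_parameters": {"sni": {"value": "other.example.com"}},
}
```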

Try it now

Origin Rules are available to use now via API, Terraform, and our dashboard. Further details can be found on our Developers Docs. Currently, destination port rewriting is available for all our customers as part of Origin Rules. Resolve Override, Host Header Override and SNI overrides are available to our Enterprise users.