Tag Archives: Rules

Traffic transparency: unleashing the power of Cloudflare Trace

Post Syndicated from Matt Bullock original http://blog.cloudflare.com/traffic-transparency-unleashing-cloudflare-trace/

Today, we are excited to announce Cloudflare Trace, now available to all our customers. Cloudflare Trace enables you to understand how HTTP requests traverse your zone's configuration and which Cloudflare Rules are applied to the request.

For many Cloudflare customers, the journey their customers' traffic embarks on through the Cloudflare ecosystem has been a mysterious black box. It's a complex voyage, routed through various products, each capable of introducing modifications to the request.

Consider this scenario: your web traffic could be blocked by WAF Custom Rules or Managed Rules, face rate limiting, or undergo modifications via Transform Rules. Where a Cloudflare account has many admins modifying different things, it can be akin to a game of "hit and hope": the outcome of your web traffic's journey is uncertain because you do not know how another admin's rule will impact the request before or after yours. While Cloudflare's individual products are designed to be intuitive, how they work together hasn't always been as transparent as our customers need it to be. Cloudflare Trace changes this.

Running a trace

Cloudflare Trace lets you set a number of request variables so you can tailor your trace precisely to your needs. A basic trace requires two settings: a URL that is proxied through Cloudflare and an HTTP method such as GET. However, you can also set request headers, add a request body, and even set a bot score to validate the correct behavior of your security rules.

Once a trace is initiated, the dashboard returns a visualization of the products that matched the request, such as Configuration Rules, Transform Rules, and Firewall Rules, along with the specific rules inside these phases that were applied. Customers can then view further details of the filters and actions a specific rule applies. Clicking the rule ID takes you directly to that rule in the Cloudflare Dashboard, where you can edit its filters and actions if needed.

The user interface also generates a programmatic version of the trace that can be used by customers to run traces via a command line. This enables customers to use tools like jq to further investigate the extensive details returned via the trace output.
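As an illustration, the generated command-line trace looks roughly like the following; the endpoint path and body fields shown here are illustrative, so copy the exact command from the dashboard or the API documentation rather than this sketch:

curl -X POST "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/request-tracer/trace" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
    "method": "GET",
    "url": "https://example.com/some/path",
    "headers": { "x-debug": "1" }
  }' | jq '.result'

Piping the response through jq, as above, makes it easier to drill into the phases and rules returned in the trace output.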

The life of a Cloudflare request

Understanding the intricate journey that your traffic embarks on within Cloudflare can be a challenging task for many of our customers, and even for Cloudflare employees. This complexity often leads to questions within our Cloudflare Community or direct inquiries to our support team. Internally, over the past 13 years at Cloudflare, numerous individuals have attempted to explain this journey through diagrams. We maintain an internal Wiki page titled 'Life of a Request Museum', which archives the attempts made over the years by some of the first Cloudflare engineers, heads of product, and our marketing team; the following image, for example, was used in our 2018 marketing slides.

[Image: an early "life of a request" diagram from Cloudflare's 2018 marketing slides]

The "problem" (a rather positive one) is that Cloudflare is innovating so rapidly. New products are being added, code is removed, and existing products are continually enhanced. As a result, a diagram created just a few weeks ago can quickly become outdated and challenging to keep up to date.

Finding a happy medium

However, customers still want to understand, “The life of a request.” Striking the ideal balance between providing just enough detail without overwhelming our users posed a problem akin to the Goldilocks principle. One of our first attempts to detail the ordering of Cloudflare products was Traffic Sequence, a straightforward dashboard illustration that provides a basic, high-level overview of the interactions between Cloudflare products. While it does not detail every intricacy, it helps our customers understand the order and flow of products that interacted with an HTTP request and was a welcome addition to the Cloudflare dashboard.

However, customers still requested further insights and details, especially around debugging issues. Internally, Cloudflare teams use a number of self-built tools to trace a request. One of these tools is Flute, which gives a verbose output of all rules, Cloudflare features, and codepaths a request goes through. This allows our engineers and support teams to investigate an issue and identify if something is awry. For example, in the following Flute trace image you can see how a request for my domain is evaluated against Single Redirects, Waiting Room, Configuration Settings, Snippets and Origin Rules.

[Image: a Flute trace showing a request evaluated against Single Redirects, Waiting Room, Configuration Settings, Snippets, and Origin Rules]

The Flute tool became one of the key focal points in the development of Cloudflare Trace. However, it can be quite intricate and packed with extensive details, potentially leading to more questions than solutions if copied verbatim and exposed to our customers.

To find the happy medium in developing Cloudflare Trace, we closely collaborated with our Support team to gain a deeper understanding of the challenges our customers faced, specifically around Cloudflare Rulesets. The primary challenge centered around understanding which rules were applicable to specific requests. Customers often raised queries, and in certain instances these inquiries had to be escalated for further investigation into the reasons behind a request's specific behavior. By empowering our customers to independently investigate and understand these issues, we identified a second area where Cloudflare Trace proves invaluable: reducing the workload of our support team and enabling them to operate more efficiently while focusing on other support tickets.

Customers encountering genuine problems can export the JSON response of a trace and upload it directly to a support ticket. This streamlined process significantly reduces the time required to investigate and resolve support tickets.

Trace examples

Cloudflare Trace has been available via API for the last nine months. We have been working with a number of customers and stakeholders to understand where tracing is beneficial and solves customer problems. Here are some of the real-world examples that we have solved using Cloudflare Trace.

Transform Rules inconsistently matching

A customer encountered an issue while attempting to rewrite a URL to their origin for specific paths using Transform Rules. An administrator on the Cloudflare account created a filter that used a regex to match against a specific path.

[Image: the Transform Rule filter using a regex to match a specific path]

A systems administrator monitoring their web server observed in their logs that the URLs for a small percentage of requests were not transforming correctly, causing disruptions to the application. They decided to investigate by comparing a correctly functioning request with one that was not, and subsequently conducted traces.

In the problematic trace, only one rule matched the trace parameters, and it was setting incorrect URL parameters.

On the other URL, the rule containing the regex matched as intended and set the correct URL parameters.

This allowed the sysadmin to pinpoint the problem: the regex was specifically designed to handle requests with subdirectories, but it failed to address cases where requests directly targeted the root or a non-subdirectory path. After identifying this issue within the traces, the sysadmin updated the filter. Subsequently, both cases matched successfully, leading to the resolution of the problem.
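As a hypothetical illustration of the kind of fix involved (the customer's actual filter is not shown in this post), the original Transform Rule filter might have required a subdirectory segment, while the corrected version makes that segment optional:

Original filter (only matches paths that contain a subdirectory):
http.request.uri.path matches "^/shop/[^/]+/.+"

Updated filter (the subdirectory segment is optional, so root-level paths also match):
http.request.uri.path matches "^/shop/([^/]+/)?.+"

Running a trace against both URL shapes after the change confirms that each request now matches the intended rule.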

What origin?

When a request encounters a Cloudflare ruleset, such as Origin Rules, all the rules are evaluated, and any rule that is matched is applied in sequential order of priority. This means that multiple settings could be applied from different rules. For example, a Host Header could be set in rule 1, and a DNS origin could be assigned in rule 3. This means that the request will exit the Origin Rules phase with a new Host Header and be routed to a different origin. Cloudflare Trace allows users to easily see all the rules that matched and altered the request.
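For example (rule numbers and values here are hypothetical), a trace might show both of the following Origin Rules matching, so the request leaves the phase with the Host header from rule 1 and the DNS origin from rule 3:

Rule 1: expression starts_with(http.request.uri.path, "/api") sets the Host header to "api.internal.example.com"
Rule 3: expression http.host eq "www.example.com" overrides the DNS origin to "backend.example.net"

The trace output lists each matching rule in order, making it straightforward to see which rule contributed each setting.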

Tracing the future

Cloudflare Trace will be available to all our customers over the coming week, located within the Account section of your Cloudflare Dashboard for all plans. We are excited to introduce additional features and products to Cloudflare Trace in the coming months. In the future, we will also be developing scheduling and alerts, which will enable you to monitor whether a newly deployed rule is impacting the critical path of your application. As with all our products, we value your feedback. Within the Trace dashboard, you'll find a form for providing feedback and feature requests to help us enhance the product before its general release.

Introducing Waiting Room Bypass Rules

Post Syndicated from Arielle Olache original https://blog.cloudflare.com/waiting-room-bypass-rules/

Leveraging the power and versatility of Cloudflare’s Ruleset Engine, Waiting Room now offers customers more fine-tuned control over their waiting room traffic. Queue only the traffic you want to with Waiting Room Bypass Rules, now available to all Enterprise customers with an Advanced Purchase of Waiting Room.

Customers depend on Waiting Room for always-on protection from unexpected and overwhelming traffic surges that would otherwise bring their site down. Waiting Room places excess users in a fully customizable virtual waiting room, admitting new visitors dynamically as spots become available on a customer’s site. Instead of throwing error pages or delivering poorly-performing site pages, Waiting Room empowers customers to take control of their end-user experience during unmanageable traffic surges.

Take control of your customer experience with a fully customizable virtual waiting room

Additionally, customers use Waiting Room Event Scheduling to manage user flow and ensure reliable site performance before, during, and after online events such as product restocks, seasonal sales, and ticket sales. With Event Scheduling, customers schedule changes to their waiting rooms’ settings and custom queuing page ahead of time, with options to pre-queue early arrivers and offload event traffic from their origins after the event has concluded.

As part of the simple, no-coding-necessary process for deploying a waiting room, customers specify a hostname and path combination, which defines the location of their waiting room on their site. When a site visitor makes a preliminary request to that hostname and path or any of its subpaths, they will be issued a waiting room cookie and placed in the queue if the waiting room is queuing at that time.

The hostname and path approach to defining the placement of a waiting room is intuitive and makes it easy to deploy a waiting room to a site. However, many customers needed more granular control over which traffic, and which parts of their site, a waiting room did or did not cover under its configured hostname and path. Use cases for allowing specific traffic to bypass a waiting room were varied.

Examples included allowing internal administrators site access at all times, never blocking internal user agents performing operations like synthetic monitoring, not applying a waiting room to specific subpaths or query strings, or queuing only traffic from certain countries. Given the diversity of our customers' requests to exclude traffic from a waiting room's coverage, we built a bypass feature that gives customers the versatility necessary to deploy waiting rooms aligned with their existing site architecture and use cases. With the release of Event Scheduling, Waiting Room customers could queue when they wanted to; now, we are excited to announce that customers have more flexibility to queue who and where they want to with Waiting Room Bypass Rules.

Who gets queued is up to you

Waiting Room Bypass Rules allow customers to write expressions that define what traffic should bypass their waiting room. A waiting room will not apply to incoming requests matching conditions based on one or more available fields, like IP address, URI path, query string, and country. Bypass rules supersede all Waiting Room features; even in Queue-all mode, event pre-queuing, or any other waiting room state, traffic matching your enabled rules’ expressions will never be queued or issued a waiting room cookie. Waiting Room rules are created via the Waiting Room API or the Waiting Room dashboard using a familiar rule management interface found throughout the Cloudflare dashboard. Waiting Room rules are managed at the individual waiting room level for precise control over each waiting room’s traffic.
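For instance, a single bypass expression can combine several of these fields. The following sketch uses field names from Cloudflare's Rules language with illustrative values; the exact fields available to Waiting Room rules are listed in the developer documentation:

ip.src in {203.0.113.10 198.51.100.0/24} or http.request.uri.path contains "/status" or ip.geoip.country eq "CA"

Any request matching this expression would skip the waiting room entirely, regardless of its queueing state.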

Use the familiar rule builder found throughout the Cloudflare dashboard to define which traffic should bypass your waiting room.

Manage bypass rules at the individual waiting room level for precise control over each waiting room’s traffic coverage.

Built with the Ruleset Engine

We love building Cloudflare products on top of Cloudflare products; we knew that the versatility we wanted to offer to our customers with regard to what traffic a waiting room should apply to would best be achieved by integrating with Cloudflare’s powerful Ruleset Engine. The Ruleset Engine provides the infrastructure for many of our highly customizable products, such as the new Origin Rules feature and our WAF Managed Rulesets, by providing a unified way to represent the concept of “rules” and an easy-to-integrate software library for consistently executing those rules. This allows all sorts of Cloudflare products to provide extremely similar rules capabilities, with the same language for defining conditions (which can grow quite complex!) and very similar APIs.

We’ve even found that some of Waiting Room’s product functionality can be best implemented on top of the Ruleset Engine; earlier this year we migrated some select core Waiting Room logic into the Ruleset Engine, implemented as rulesets that we now transparently deploy to every zone with a waiting room. This newly migrated implementation has been running for months, and the flexibility of the Ruleset Engine will make certain future Waiting Room features easier than ever to build.

The Ruleset Engine works on two major concepts: rulesets and rules. A ruleset is a list of rules that the engine executes in order. A rule consists of a condition and an action, with the action being executed if the condition evaluates to “true”.

Under the hood, when you create the first rule for a waiting room, we create a hidden ruleset attached to that waiting room. We put together a ruleset of our own which runs on every request to your zone, dispatching to your custom ruleset if a request matches the attached waiting room. This all sounds somewhat complicated, but don’t worry. You simply manage rules at the individual waiting room level, while we abstract away the complexity of the underlying rulesets.

The Waiting Room Bypass action’s core implementation is quite simple: when the condition on one of your bypass rules is true for a request, the bypass action simply clears the flag on the request that would have told the waiting room code to kick in. This way, the bypass action ensures that Waiting Room doesn’t touch your request and the request doesn’t affect your waiting room’s statistics or queuing.

Bypass rules in action

Creating a bypass rule is easy and requires no coding or application changes. Let’s walk through a couple of real-world scenarios to demonstrate how easy it is to deploy a bypass rule from the Waiting Room dashboard or API.

Set up an Administrative IP Bypass Rule via the Waiting Room Dashboard

As mentioned before, many customers wanted to ensure that their site administrators and other internal employees–which they identify by IP address–could access their site at all times, regardless of a waiting room’s queueing status. Allowing unrestricted access to specific internal employees is especially important before online events when customers pre-queue early site visitors ahead of an event’s start time. Without the ability to bypass the waiting room before the event starts, internal employees could not review their event pages in a production environment, adding friction and uncertainty to their review process. Let’s see how we can fix that with a Waiting Room bypass rule.

Before setting up your administrator bypass rule, you can create an IP list if you have more than a handful of IPs you’d like to bypass your waiting room. Then, you will need to configure a waiting room from the Waiting Room dashboard. From that waiting room’s expanded view, navigate to Manage rules, where you can create, disable, and delete rules for a waiting room.

Once you have created your waiting room, from that waiting room’s expanded view, select Manage rules to create Waiting Room bypass rules.

Using the rule builder, give your rule a descriptive name and build your expression. To build an expression for this example, we will select “IP Source Address” from the Field drop-down, “is in list” from the Operator drop-down, and then select the name of the IP list we created earlier.  Once we’ve built the expression, we can either save this rule as a draft and deploy it later or save the rule and deploy it now.

Allow site administrators to bypass your waiting room by creating a Waiting Room bypass rule via the Waiting Room dashboard.

Once saved, the rule will appear on this waiting room’s rules management page along with all other rules for this waiting room. All requests for which this rule’s expression evaluates to true will bypass the waiting room. Thus, any user with an IP address from this managed list will never be queued when this rule is enabled.
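For reference, the expression this rule builder selection generates is equivalent to something like the following, where $admin_ips stands in for the name of the IP list created earlier (the list name is hypothetical):

ip.src in $admin_ips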

Enable or disable Waiting Room rules from an individual waiting room’s rule management dashboard.

For added oversight, there are indicators on the Waiting Room dashboard that clearly signal if a waiting room has any bypass rules enabled. Now that we have deployed the admin bypass rule, we can see from the Waiting Room table that there is an active rule for this waiting room.

Easily glean which waiting rooms have bypass rules active from the Waiting Room table.

Bypass rules via the Waiting Room API – path bypass example

Another common customer request now achievable using Waiting Room Bypass Rules is path exclusions. As mentioned previously, a waiting room applies to all requests hitting the hostname and path combination of the configured waiting room and any URLs under that path. For many Waiting Room customers, there were specific URLs, paths, or query strings that they did not want a waiting room to apply to under their configured hostname and path. There are various reasons why a customer would want to make exceptions like this but let’s consider the following use case to illustrate the utility of bypassing specific parts of a site or application.

Consider a movie ticketing platform that wants to protect its ticketing web application from purchasing surges due to blockbuster releases. They create a waiting room to cover their ticketing web app by placing it at ticketing.example.com/. After tickets are purchased, the ticketing platform sends movie-goers an email or a text which links back to a URL under ticketing.example.com/. The URL the user receives via text or email is: ticketing.example.com/myaccount/mobiletickets/userverified?ticketid=<ticketID>

This link opens a page in the user’s mobile browser containing a QR code that the movie theater will scan in place of a physical ticket. They want to ensure that the waiting room does not apply to customers trying to open their mobile tickets.

To do this, they create a bypass rule via the Waiting Room API.

With a waiting room already configured at ticketing.example.com/, use the following API call to create a bypass rule:

curl -X POST "https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/waiting_rooms/<ROOM_ID>/rules" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -d '{
    "description": "ticket holders bypass waiting room",
    "expression": "ends_with(http.request.uri.path, \"/userverified\")",
    "action": "bypass_waiting_room"
  }'

Let’s break down this call. First, there is the URL, which needs to be populated with the zone id and the waiting room id. Then we pass our API token in the Authorization header. This call will create a new Waiting Room rule that will be added after any existing rules for this waiting room.

Within the body of the call, first define an optional description parameter. Give the rule an optional description to indicate the purpose of the rule for easy reference later.

"description": "ticket holders bypass waiting room"

Next, write the expression, which defines exactly which traffic should bypass the waiting room. In this example, customers using the direct link sent via text should bypass the waiting room. These direct links end with the path userverified, so we create an ends_with condition.

"expression": "ends_with(http.request.uri.path, \"/userverified\")

When creating a bypass rule for path, query string, or URLs, make sure to include in the expression exclusions for subrequests that load assets on the pages covered by a waiting room. Let’s assume in this example we are hosting assets on a different subdomain not covered by the waiting room. Therefore, we do not need to include these subrequests in the expression.
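If, for instance, the ticket pages did load assets from a path under the waiting room's coverage, the expression could be extended along these lines (the /assets/ path is hypothetical):

"expression": "ends_with(http.request.uri.path, \"/userverified\") or starts_with(http.request.uri.path, \"/assets/\")"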

Lastly, set the action parameter to bypass_waiting_room to indicate that this traffic should bypass the waiting room and deploy the rule. This customer’s waiting room now covers precisely the parts of their application they want to cover. Their waiting room will protect their web application from ticket purchasing traffic while ensuring that customers who need to display their mobile tickets at the movie theater can do so reliably without being placed in a Waiting Room queue.

With the addition of Waiting Room Bypass Rules, customers now have more flexibility to deploy waiting rooms on their terms, to cover exactly the traffic they want. For more on Waiting Room Bypass Rules and Waiting Room, check out our developer documentation.

Not using Cloudflare yet? Start now with our Business plan which includes one basic Waiting Room or contact us about our advanced Waiting Room with Event Scheduling, Customized Templates, and Waiting Room Bypass Rules available on our Enterprise plan.

Zone Versioning is now generally available

Post Syndicated from Garrett Galow original https://blog.cloudflare.com/zone-versioning-ga/

Today we are announcing the general availability of Zone Versioning for enterprise customers. Zone Versioning allows you to safely manage zone configuration by versioning changes and choosing how and when to deploy those changes to defined environments of traffic. Previously announced as HTTP Applications, we have redesigned the experience based on testing and feedback to provide a seamless experience for customers looking to safely rollout configuration changes.

Problems with making configuration changes

There are two problems we have heard from customers that Zone Versioning aims to solve:

  1. How do I test changes to my zone safely?
  2. If I do end up making a change that impacts my traffic negatively, how can I quickly revert that change?

Customers have worked out various ways of solving these problems. For problem #1, customers will create staging zones that live on a different hostname, often taking the form staging.example.com, that they make changes on first to ensure that those changes will work when deployed to their production zone. When making more than one change, this can become troublesome, as they now need to keep track of all the changes made in order to apply the exact same set of changes on the production zone. It is also possible that something tested in staging never makes it to production yet is not rolled back, so the two environments now differ in configuration.

For problem #2, customers often keep track of what changes were made and when they were deployed in a ticketing system like JIRA, such that in case of an incident an on-call engineer can more easily find the changes they may need to roll back by manually modifying the configuration of the zone. This requires the on-call to be able to easily get to the list of what changes were made.

Altogether, this means customers are more reluctant to make changes to configuration or turn on new features that may benefit them because they do not feel confident in the ability to validate the changes safely.

How Zone Versioning solves those problems

Zone Versioning provides two new fundamental aspects to managing configuration that allow a customer to safely test, deploy and rollback configuration changes: Versions and Environments.

Versions are independent sets of zone configuration. They can be created anytime from a previous version or the initial configuration of the zone and changes to one version will not affect another version. Initially, a version affects none of a zone’s traffic, so any changes made are safe by definition. When first enabling zone versioning, we create Version 1 that is based on the current configuration of the zone (referred to as the baseline configuration).

[Image: the Zone Versioning dashboard showing zone versions and the environments they are deployed to]

From there any changes that you make to Version 1 will be safely stored and propagated to our global network, but will not affect any traffic. Making changes to a version is no different from before, just select the version to edit and modify the configuration of that feature as normal. Once you have made the set of changes desired for a given version, to deploy that version on live traffic in your zone, you will need to deploy the version to an Environment.

Environments are a way of mapping segments of your zone’s traffic to versions of configuration. Powered by our Ruleset Engine, that powers the likes of Custom WAF Rules and Cache Rules, Environments give you the ability to create filters based on a wide range of parameters such as hostname, client IP, location, or cookie. When a version is applied to an Environment, any traffic matching the filter will use that version’s configuration.

By default, we create three environments to get started with:

  • Development – Applies to traffic sent with a specific cookie for development
  • Staging – Applies to traffic sent to Cloudflare’s staging IPs
  • Production – Applies to all traffic on the zone

You can create additional environments or modify the pre-defined environments except for Production. Any newly created environment will begin in an unassigned state meaning traffic will fall back to the baseline configuration of the zone. In the above image, we have deployed Version 2 to both the Development and Staging environments. Once we have tested Version 2 in staging, then we can ‘Promote’ Version 2 to Production which means all traffic on the zone will receive the configuration in Version 2 except for Development and Staging traffic. If something goes wrong after deploying to Production, then we can use the ‘Rollback’ action to revert to the configuration of Version 1.
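As an illustration, a custom environment for canary traffic could use a filter along these lines (the hostname and cookie value are hypothetical), so that only matching requests receive whichever version is deployed to that environment:

http.host eq "canary.example.com" or http.cookie contains "zone_version=canary"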

How promotion and rollbacks work

It is worth going into a bit more detail about how configuration changes, promotions, and rollbacks are realized in our global network. Whenever a configuration change is made to a version, we store that change in our system of record for the service and push that change to our global network so that it is available to be used at any time.

Importantly, and unlike ordinary zone changes that take effect automatically, that change will not be used until the version is deployed to an environment that is receiving traffic. The same is true when a version is promoted or rolled back between environments. Because all the configuration we need for a given version is already available in our global network, we only need to push a single, atomic change to tell our network that traffic matching the filter for a given environment should now use the newly defined configuration version.

This means that promotions and more importantly rollbacks occur as quickly as you are used to with any configuration change in Cloudflare. No need to wait five or ten minutes for us to roll back a bad deployment, if something goes wrong you can return to a last known good configuration in seconds. Slow rollbacks can make ongoing incidents drag on leading to extended customer impact, so the ability to quickly execute a rollback was a critical capability.

Get started with Zone Versioning today

Enterprise Customers can get started with Zone Versioning today for their zones on the Cloudflare dashboard. Customers will need to be using the new Managed WAF rules in order to enable Zone Versioning. You can find more information about Zone Versioning in our Developer Docs.

Happy versioning!

The most programmable Supercloud with Cloudflare Snippets

Post Syndicated from Sam Marsh original https://blog.cloudflare.com/snippets-announcement/

Your traffic, how you like it

Cloudflare is used by a highly diverse customer base. We offer simple-to-use products for everything from setting HTTP headers to rewriting the URI path and performing URL redirects. Sometimes customers need more than the out-of-the-box functionality: not just adding an HTTP header, but performing some advanced calculation to create its value. Today they would need to create a feature request and wait for it to be shipped, write a Cloudflare Worker, or keep this modification ‘on origin’ – on their own infrastructure.

To simplify this, we are delighted to announce Cloudflare Snippets. Snippets are a new way to perform traffic modifications that users either cannot do via our productised offerings, or want to do programmatically. The best part? The vast majority of customers will pay nothing extra for using Snippets.

Users now have a choice. Perform the action via a rule. Or, if more functionality is needed, write a Snippet.  Neither will mean waiting. Neither will incur additional cost (although a high fair usage cap will apply). Snippets unblocks users to do what they want, when they want. All on Cloudflare.

Snippets will support the import of code written in various languages, such as JavaScript (modern), VCL (legacy) and Apache .htaccess files (legacy). This allows customers to migrate legacy operational code onto our platform – whilst also consolidating their JavaScript operations.

Please use the sign-up form to join the waitlist for Snippets if you are interested in testing. We hope to begin admitting users into the closed beta early 2023.

Why build Snippets?

Over the past 18 months we have released a number of new rules products such as Transform Rules, Cache Rules, Origin Rules, Config Rules and Redirect Rules. These new products give more control to customers on how we process their traffic as it flows through our global network. The feedback on these products so far has been overwhelmingly positive. However, our customers still occasionally need the ability to do more than the out-of-the-box functionality allows.

There are always some use cases where a product doesn’t provide the functionality that a customer needs for their specific situation.  For example, whilst thousands of our customers are now using Transform Rules to solve their HTTP header modification use cases, there remains a small number of use cases that are not possible, such as setting dynamic expiry times with cookies or hashing tokens with a key.

This is where Cloudflare Snippets help. Customers will no longer need to use the full Cloudflare Workers platform to implement these relatively simple use cases. Nor will they need to wait for us to build their feature requests. Instead, they will be able to run a Snippet of JavaScript.

Migrating legacy code to Snippets

Varnish Control Language (VCL) is only used within the context of Varnish. Launched around 16 years ago, it has historically been used to configure traffic and routing for Content Delivery Networks as it was extensible to a wide range of use cases.

There are still a good number of businesses out there using VCL to perform routing and traffic modification actions. Whilst other providers are deprecating support for VCL, we want to make sure those of you comfortable using it are still supported.

Snippets won’t run pure VCL. Instead, we will convert VCL into easy to maintain rules or Snippets. To achieve this we’re building a simple-to-use, self-serve VCL converter that analyzes uploaded VCL code and auto-generates suggested Snippets, and if we can find a match, also generates suggested rules for products such as Transform Rules or Cache Rules.

This topic was initially handled via Project Turpentine, a suite of tools used by Cloudflare employees to parse a customer’s VCL into a suggested JavaScript configuration. This JavaScript could then be loaded into a Worker, or series of Workers.

Snippets takes the idea and principles of Turpentine further. Much further. By building a parser directly in the dashboard it puts the power directly into the hands of users and gives them a choice. You can tell us to migrate everything we can into Rules with the remaining code migrated into Snippets, or, you can choose to tell us to migrate everything into discrete Snippets. It’s your call.

We’ll give Apache .htaccess and NGINX configuration files the same treatment. The goal is that users simply upload the files from their website’s Apache or NGINX configuration, and we generate suggested Snippets and/or rules.

The days of having to use legacy code for operational tasks are coming to an end. Snippets allow users to migrate these workloads to Cloudflare, and let them focus on the bigger problems of the business vs maintaining legacy systems.

The difference between Snippets and Workers

Most readers will already be familiar with Cloudflare Workers, our powerful developer platform which allows businesses to run and build entire products and solutions on Cloudflare’s global network. Snippets is also built on this platform, but has a few key differences.

The first major difference is that a Snippet will run as part of the Ruleset Engine as dedicated new phases, similar to Transform Rules and Cache Rules. Customers will be able to select and execute a Snippet based on any ruleset engine filter. This allows customers to run a Snippet against every request, or filter for specific HTTP traffic based on the fields we offer, such as traffic with a certain bot score, originating from a specific country, or with a specific cookie. Snippets will be additive, meaning users can have one Snippet to add an HTTP header, and another to rewrite the URL, and both will execute if they match:

[Image: diagram of multiple Snippets executing in sequence when their filters match a request]
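As an illustration of the filter side, a Snippet could be scoped with an expression along these lines; the field names come from Cloudflare's Rules language, and the threshold and country value are purely illustrative:

cf.bot_management.score le 29 and ip.geoip.country eq "FR"

Only requests matching the filter would invoke the Snippet; everything else passes through untouched.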

Another major difference – Cloudflare Snippets are available for all plan levels, at no additional cost. 99% of users won’t pay a single cent, ever, to use this solution. This allows customers to migrate their simple workloads from legacy solutions like VCL to the Cloudflare platform, and actively reduce their monthly spend.

Plan                Snippets available
Free Plans          5 Snippets per zone
Pro Plans           20 Snippets per zone
Business Plans      50 Snippets per zone
Enterprise Plans    200 Snippets per zone*

*Customers can speak with their Customer Success team to have this increased.

Cloudflare Snippets are lightweight when compared with Workers, offering 5ms maximum execution time, 2MB maximum memory and 32KB total package size. This comparably small footprint allows us to offer this to 99% of users at no additional cost, whilst also being sufficient for the identified use cases like HTTP header modification, URL rewriting and traffic routing – all of which don’t need the vast resources offered by Cloudflare Workers.

                            Cloudflare Snippets                  Cloudflare Workers Unbound (for comparison)
Runtime support             JavaScript                           JavaScript and WASM
Execution location          Global – All Cloudflare locations    Global – All Cloudflare locations
Triggers supported          Ruleset Engine Filters               HTTP Request, HTTP Response, Cron Triggers
Maximum execution time      5ms                                  30 seconds (HTTP), 15 minutes (Cron Trigger)
Maximum memory              2MB                                  128MB
Total package size          32KB                                 5MB
Environment variables       8/Snippet                            64/Worker
Environment variable size   1KB                                  5KB
Subrequests                 1/request                            1000/request
Terraform Support
Wrangler Support
Cron Triggers
Key Value Store
Durable Objects
R2 Integration

What will you be able to build with Cloudflare Snippets?

Snippets will allow customers to migrate their existing workloads to Cloudflare. They will also open up a number of new possible use cases for customers. We have highlighted three common examples below, however there are many more to choose from.

Example 1: Sending suspect bots to a honeypot

When creating Snippets, customers will be able to access Cloudflare features available in the Workers runtime, such as the bot score field. This enables customers to forward an HTTP request to a honeypot, or to use the RegExp JavaScript function to change the URL construct sent back to the end user, when traffic is assigned a bot score below a certain threshold, e.g. 29 and lower.

…
// Requests with a bot score below 30 are treated as likely automated and proxied to a honeypot
if (request.cf.botManagement.score < 30) {
  const honeypot = "https://example.com/";
  return await fetch(honeypot, request);
  …
}

Example 2: Cookie modification

Another common use case we foresee Snippets addressing is cookie modification. Usage can range from simply setting an expiry five minutes in the future using the getTime and setTime JavaScript functions, to setting a dynamic cookie based on user request attributes for A/B testing purposes.

…
{
  let res = await fetch(request);
  res = new Response(res.body, res);
  // 24h * 60m * 60s * 1000ms = 86400000ms; expire the cookie 7 days from now
  const expiry = new Date(Date.now() + 7 * 86400000).toUTCString();
  // Assign an A/B test group based on a request header
  const group = request.headers.get("userGroup") == "premium" ? "A" : "B";
  res.headers.append(
    "Set-Cookie",
    `testGroup=${group}; Expires=${expiry}; path=/`
  );
  …

Example 3: URI query management

Customers can also deploy Cloudflare Snippets to do complex operations such as splicing the URI query value to selectively remove or inject additional parameters. Query string manipulation is typically done using Transform Rules. However, with Transform Rules the set action is effectively a replace action: when applied to the URI query string, it removes the entire value if there is one and sets it to what the user specifies, overwriting it. This is a problem for customers who wish to selectively inject specific query parameters for matching traffic, for example setting an additional query parameter, e.g. ?utm_campaign=facebook, when common social media platforms are detected in the user agent. With Snippets, customers will be able to do this selective removal and insertion using a simple piece of JavaScript, e.g.

…
// userAgent is assumed to hold the request's User-Agent header value
if (userAgent.includes("Facebook")) {
  const url = new URL(request.url);
  const params = new URLSearchParams(url.search);
  params.set("utm_campaign", "facebook");
  url.search = params.toString();
  const transformedRequest = new Request(url, request);
  …
}

We are excited to see what other use cases Cloudflare Snippets unlock for our customers.

Will you stop adding actions to rulesets?

The simple answer is no! We will continue to build out our no-code actions within the ruleset engine, developing new products to solve customer needs.

It may sound obvious – but a core component to feature improvement is talking to customers. Talking to Snippet users will help us understand what real life use cases Snippets help solve and highlight feature gaps we have in our product suite. We can then review if it makes sense to productise that use case, or leave it requiring Snippets.

We also understand that not everyone is a software developer. We are therefore exploring how we can make Snippets accessible to all by creating selectable templates available in a library that can be copied and modified by customers, with minimum coding knowledge required. With Snippets, powerful won’t mean difficult.

Accessing Cloudflare Snippets

Snippets are currently under development — you can sign up here to join the waitlist for access.

We hope to begin admitting users into the closed beta in early 2023, with an open beta to follow.

Introducing Cache Rules: precision caching at your fingertips

Post Syndicated from Alex Krivit original https://blog.cloudflare.com/introducing-cache-rules/

Ten years ago, in 2012, we released a product that put “a powerful new set of tools” in the hands of Cloudflare customers, allowing website owners to control how Cloudflare would cache, apply security controls, manipulate headers, implement redirects, and more on any page of their website. This product is called Page Rules and since its introduction, it has grown substantially in terms of popularity and functionality.

Page Rules are a common choice for customers that want to have fine-grained control over how Cloudflare should cache their content. There are more than 3.5 million caching Page Rules currently deployed that help websites customize their content. We have spent the last ten years learning how customers use those rules to cache content, and it’s clear the time is ripe for evolving rules-based caching on Cloudflare. This evolution will allow for greater flexibility in caching different types of content through additional rule configurability, while providing more visibility into when and how different rules interact across Cloudflare’s ecosystem.

Today, we’ve announced that Page Rules will be re-imagined into four product-specific rule sets: Origin Rules, Cache Rules, Configuration Rules, and Redirect Rules.

In this blog we’re going to discuss Cache Rules, and how we’re applying ten years of product iteration and learning from Page Rules to give you the tools and options to best optimize your cache.

Activating Page Rules, then and now

Adding a Page Rule is very simple: users either make an API call or navigate to the dashboard, enter a full or wildcard URL pattern (e.g. example.com/images/scr1.png or example.com/images/scr*), and tell us which actions to perform when we see that pattern. For example, a Page Rule could tell browsers to keep a copy of the response for longer via “Browser Cache TTL”, or tell our cache to do the same via “Edge Cache TTL”. Low effort, high impact. All this is accomplished without fighting origin configuration or writing a single line of code.

Under the hood, a lot is happening to make that rule scale: we turn every rule condition into regexes, matching them against the tens of millions of requests per second across 275+ data centers globally. The compute necessary to process and apply new values on the fly across the globe is immense and corresponds directly to the number of rules we are able to offer to users. By moving cache actions from Page Rules to Cache Rules we can allow for users to not only set more rules, but also to trigger these rules more precisely.

More than a URL

Users of Page Rules are limited to specific URLs or URL patterns to define how browsers or Cloudflare cache their websites files. Cache Rules allows users to set caching behavior on additional criteria, such as the HTTP request headers or the requested file type. Users can continue to match on the requested URL also, as used in our Page Rules example earlier. With Cache Rules, users can now define this behavior on one or more fields available.

For example, if a user wanted to specify cache behavior for all image/png content-types, it’s now as simple as pushing a few buttons in the UI or writing a small expression in the API. Cache Rules give users precise control over when and how Cloudflare and browsers cache their content. Cache Rules allow for rules to be triggered on request header values that can be simply defined like

any(http.request.headers["content-type"][*] == "image/png")

Which triggers the Cache Rule to be applied to all image/png media types. Additionally, users may also leverage other request headers like cookie values, user-agents, or hostnames.

As a plus, these matching criteria can be stacked and configured with operators like AND and OR, providing additional simplicity in building complex rules from many discrete blocks, e.g. if you would like to target both image/png AND image/jpeg.
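For example, targeting both content types from the earlier example could look like this:

any(http.request.headers["content-type"][*] == "image/png") or any(http.request.headers["content-type"][*] == "image/jpeg")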

For the full list of fields available conditionals you can apply Cache Rules to, please refer to the Cache Rules documentation.

Visibility into how and when Rules are applied

Our current offerings of Page Rules, Workers, and Transform Rules can all manipulate caching functionality for our users’ content. Often, there is some trial and error required to make sure that the confluence of several rules and/or Workers are behaving in an expected manner.

As part of upgrading Page Rules we have separated it into four new products:

  1. Origin Rules
  2. Cache Rules
  3. Configuration Rules
  4. Redirect Rules

This gives users a better understanding of how and when different parts of the Cloudflare stack are activated, reducing spin-up and debug time. We will also be providing additional visibility in the dashboard for when rules are activated as they pass through Cloudflare. As a sneak peek, please see:

[Image: preview of the dashboard view showing when rules are activated as a request passes through Cloudflare]

Our users may take advantage of this strict precedence by chaining the results of one product into another. For example, the output of URL rewrites in Transform Rules will feed into the actions of Cache Rules, and the output of Cache Rules will feed into IP Access Rules, and so on.

In the future, we plan to increase this visibility further to allow for inputs and outputs across the rules products to be observed so that the modifications made on our network can be observed before the rule is even deployed.

Cache Rules. What are they? Are they improved? Let’s find out!

To start, Cache Rules will have all the caching functionality currently available in Page Rules. Users will be able to:

  • Tell Cloudflare to cache an asset or not,
  • Alter how long Cloudflare should cache an asset,
  • Alter how long a browser should cache an asset,
  • Define a custom cache key for an asset,
  • Configure how Cloudflare serves stale, revalidates, or otherwise uses header values to direct cache freshness and content continuity,

And so much more.

Cache Rules are intuitive and work similarly to our other ruleset engine-based products announced today: API or UI conditionals for URL or request headers are evaluated, and if matching, Cloudflare and browser caching options are configured on behalf of the user. For all the different options available, see our Cache Rules documentation.
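As a rough sketch of the API form (the zone ID, expression, and TTL values are illustrative, and the exact phase and action parameters are covered in the Cache Rules documentation), adding a rule to the cache settings entrypoint ruleset looks something like this:

curl -X POST "https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/rulesets/<RULESET_ID>/rules" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -d '{
    "expression": "any(http.request.headers[\"content-type\"][*] == \"image/png\")",
    "action": "set_cache_settings",
    "action_parameters": {
      "cache": true,
      "edge_ttl": { "mode": "override_origin", "default": 86400 }
    }
  }'

Here <RULESET_ID> is the ID of the zone's entrypoint ruleset for the http_request_cache_settings phase.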

Under the hood, Cache Rules apply targeted rule applications so that additional rules can be supported per user and across the whole engine. What this means for our users is that by consuming less CPU for rule evaluations, we’re able to support more rules per user. For specifics on how many additional Cache Rules you’ll be able to use, please see the Future of Rules Blog.

How can you use Cache Rules today?

Cache Rules are available today in beta and can be configured via the API, Terraform, or UI in the Caching tab of the dashboard. We welcome you to try the functionality and provide us feedback on how they are working or what additional features you’d like to see, via community posts or however else you generally get our attention 🙂.

If you have Page Rules implemented for caching on the same path, Cache Rules will take precedence by design. For our more patient users, we plan on releasing a one-click migration tool for Page Rules in the near future.

What’s in store for the future of Cache Rules?

In addition to granular control and increased visibility, the new rules products also open the door to more complex features: recommending rules to help customers achieve better cache hit ratios and reduce their egress costs; adding additional caching actions and visibility, so you can see precisely how Cache Rules will alter the headers that Cloudflare uses to cache content; and allowing customers to run experiments with different rule configurations and see the outcome firsthand. These possibilities represent the tip of the iceberg for the next iteration of how customers will use rules on Cloudflare.

Try it out!

We look forward to you trying Cache Rules and providing feedback on what you’d like to see us build next.