Tag Archives: Product News

We’ve shipped so many products the Cloudflare dashboard needed its own search engine

Post Syndicated from Emily Flannery original https://blog.cloudflare.com/quick-search-beta/

Today we’re proud to announce our first release of quick search for the Cloudflare dashboard, a beta version of our first ever cross-dashboard search tool to help you navigate our products and features. This first release is now available to a small percentage of our customers. Want to request early access? Let us know by filling out this form.

What we’re launching

We’re launching quick search to speed up common interactions with the Cloudflare dashboard. Our dashboard allows you to configure Cloudflare’s full suite of products and features, and quick search gives you a shortcut.

To get started, you can access the quick search tool from anywhere within the Cloudflare dashboard by clicking the magnifying glass button in the top navigation, or by hitting Ctrl + K on Linux and Windows or ⌘ + K on Mac. (If you find yourself forgetting which key combination it is, just remember that it’s ⌘-K or Ctrl-K-wik.) From there, enter a search term and then select from the results shown below.

Access quick search from the top navigation bar, or use keyboard shortcuts Ctrl + K on Linux and Windows or ⌘ + K on Mac.

Current supported functionality

What functionality will you have access to? Below you’ll learn about the three core capabilities of quick search that are included in this release, as well as helpful tips for using the tool.

Search for a page in the dashboard

Start typing in the name of the product you’re looking for, and we’ll load matching terms after each key press. You will see results for any dashboard page that currently exists in your sidebar navigation. Then, just click the desired result to navigate directly there.

Search for “page” and you’ll see results categorized into “website-only products” and “account-wide products.”
Search for “ddos” and you’ll see results categorized into “websites,” “website-only products” and “account-wide products.”

Search for website-only products

If you manage a website or domain in Cloudflare, you have access to a multitude of Cloudflare products and features to enhance your website’s security, performance and reliability. Quick search can be used to easily find those products and features, regardless of where you currently are in the dashboard (even from within another website!).

You can easily search for your website by name to navigate to its Overview page.

You can also navigate to the product and feature pages within your specific website(s). Note that you can perform a website-specific search from anywhere in your core dashboard using one of two approaches, explained below.

First, you can search for your website by name, then navigate the search results from there.

Alternatively, you can search first for the product or feature you’re looking for, then filter down by your website.

Search for account-wide products

Many Cloudflare products and features are not tied directly to a website or domain that you have set up in Cloudflare, like Workers, R2, Magic Transit—not to mention their related sub-pages. Now, you may use quick search to more easily navigate to those sections of the dashboard.

Here’s an overview of what’s next on our quick search roadmap (and not yet supported today):

  • Search results do not currently include product- and feature-specific names or configurations, such as Worker names, specific DNS records, IP addresses, or Firewall Rules.
  • Search results do not currently include results from within the Zero Trust dashboard.
  • Search results do not currently include Cloudflare content living outside the dashboard, such as Support or Developer documentation.

We’d love to hear what you think. What would you like to see added next? Let us know using the feedback link found at the bottom of the search window.

Our vision for the future of the dashboard

We’re excited to launch quick search and to continue improving our dashboard experience for all customers. Over time, we’ll mature our search functionality to index any and all content you might be looking for, including all product content and Support and Developer docs, and to extend search across accounts, cache your recent searches, and more.

Quick search is one of many important user experience improvements we are planning to tackle over the coming weeks, months and years. The dashboard is central to your Cloudflare experience, and we’re fully committed to making your experience delightful, useful, and easy. Stay tuned for an upcoming blog post outlining the vision for the Cloudflare dashboard, from our in-app home experience to our global navigation and beyond.

For now, keep your eye out for the little search icon that will help you in your day-to-day responsibilities in Cloudflare, and if you don’t see it yet, don’t worry—we can’t wait to ship it to you soon.

If you don’t yet see quick search in your Cloudflare dashboard, you can request early access by filling out this form.

Introducing Configuration Rules

Post Syndicated from Matt Bullock original https://blog.cloudflare.com/configuration-rules/

In 2012, we introduced Page Rules to the world, announcing:

“Page Rules is a powerful new set of tools that allows you to control how CloudFlare works on your site on a page-by-page basis.”

Ten years later, and with all F’s lowercase, we are excited to introduce Configuration Rules — a Page Rules successor and a much improved way of controlling Cloudflare features and settings. With Configuration Rules, users can selectively turn on/off features which would typically be applied to every HTTP request going through the zone. They can do this based on URLs – and more, such as cookies or country of origin.

Configuration Rules opens up a wide range of use cases that were previously impossible without writing custom code in a Cloudflare Worker. Use cases such as A/B testing configuration changes, or enabling features only for a specific set of file extensions, are now possible thanks to the product’s rich filtering capabilities.

Configuration Rules are available for use immediately across all plan levels.

Turn it on, but only when…

As each HTTP request enters a Cloudflare zone we apply a configuration. This configuration tells the Cloudflare server handling the HTTP request which features the HTTP request should ‘go’ through, and with what settings/options. This is defined by the user, typically via the dashboard.

The issue arises when users want to enable these features, such as Polish or Auto Minify, only on a subset of the traffic to their website. For example, users may want to disable Email Obfuscation, but only for a specific page on their website, so that contact information is shown correctly to visitors. To do this, they can deploy a Configuration Rule.

Configuration Rules lets users selectively enable or disable features based on one or more ruleset engine fields.
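
For API users, each Configuration Rule lives in the zone’s http_config_settings phase of the ruleset engine. Here is a minimal sketch of deploying the Email Obfuscation example above via the Rulesets API (the phase name and parameter names follow our reading of the public Rulesets docs and should be verified before use; <ZONE_ID> and <API_TOKEN> are placeholders):

curl -X PUT "https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/rulesets/phases/http_config_settings/entrypoint" \
-H "Authorization: Bearer <API_TOKEN>" \
-d '{
  "rules": [
    {
      "description": "Disable Email Obfuscation on the contact page only",
      "expression": "http.request.uri.path eq \"/contact\"",
      "action": "set_config",
      "action_parameters": { "email_obfuscation": false }
    }
  ]
}'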

Currently, there are 16 available actions within Configuration Rules. These actions range from Disable Apps, Disable Railgun and Disable Zaraz to Auto Minify, Polish and Mirage.

These actions effectively ‘override’ the corresponding zone-wide setting for matching traffic. For example, Rocket Loader may be enabled for the zone example.com.

If the user, however, does not want Rocket Loader to be enabled on their checkout page due to an issue it causes with a specific JavaScript element, they could create a Configuration Rule to selectively disable Rocket Loader.

This interplay between zone level settings and Configuration Rules allows users to selectively enable features, allowing them to test Rocket Loader on staging.example.com prior to flipping the zone-level toggle.

With Configuration Rules, users also have access to various other non-URL related fields. For example, users could use the ip.geoip.country field to ensure that visitors from specific countries always have the ‘Security Level’ set to ‘I’m under attack’.
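
As a sketch, such a rule body might look like the following (this assumes set_config accepts a security_level parameter mirroring the zone-level setting; "XX" and "YY" are placeholder country codes):

{
  "description": "Force Under Attack mode for visitors from selected countries",
  "expression": "ip.geoip.country in {\"XX\" \"YY\"}",
  "action": "set_config",
  "action_parameters": { "security_level": "under_attack" }
}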

Historically, these configuration overrides were achieved with the setting of a Page Rule.

Page Rules is the ‘If This Then That’ of Cloudflare. Where the ‘If…’ is a URL, and the ‘Then That’ is changing how we handle traffic to specific parts of a ‘zone’. It allows users to selectively change how traffic is handled, and in this case specifically, which settings are and aren’t applied. It is very well adopted, with over one million Page Rules created in the past three months alone.

Page Rules, however, are limited to performing actions based upon the requested URL. This means if users want to disable Rocket Loader for certain traffic, they need to make that decision based on the URL alone. This can be challenging for users who may want to base this decision on more nuanced aspects, like the user agent of the visitor or the presence of a specific cookie.

For example, users might want to set the ‘Security Level’ to ‘I’m under attack’ when the HTTP request originates in certain countries. This is where Configuration Rules help.

Use case: A/B testing

A/B testing is the term used to describe the comparison of two versions of a single website or application. It allows users to create a copy of their current website (‘A’), change it (‘B’) and compare the difference.

In a Cloudflare context, users might want to A/B test the effect of features such as Mirage or Polish prior to enabling them for all traffic to the website. With Page Rules, this was impractical. Users would have to create Page Rules matching on specific URI query strings and A/B test by appending those query strings to every HTTP request.

With Configuration Rules, this task is much simpler. Leveraging one or more fields, users can filter on other parameters of an HTTP request to define which features and products to enable.

For example, by using the expression any(http.request.cookies["app"][*] == "test") a user can ensure that Auto Minify, Mirage and Polish are enabled only when this cookie is present on the HTTP request. This allows comparison testing to happen before enabling these products either globally, or on a wider set of traffic. All without impacting existing production traffic.
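
Sketched as a rule body, that might look like the following (the parameter names and value formats, such as Polish’s "lossless" and Auto Minify’s per-file-type flags, are assumptions to check against the current API):

{
  "description": "Enable Auto Minify, Mirage and Polish only for test-cookie traffic",
  "expression": "any(http.request.cookies[\"app\"][*] == \"test\")",
  "action": "set_config",
  "action_parameters": {
    "autominify": { "html": true, "css": true, "js": true },
    "mirage": true,
    "polish": "lossless"
  }
}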

Use case: augmenting URLs

Configuration Rules can also be used to augment existing setups. One example given in ‘The Future of Page Rules’ blog is increasing the Security Level to ‘High’ for visitors trying to access the contact page of a website, to reduce the number of malicious visitors to that page.

In Page Rules, this would be done by specifying the contact page URL, e.g. example.com/contact*, and the desired security level. This ensures that any “visitors that exhibit threatening behavior within the last 14 days” are served with a challenge prior to having that page load.

Configuration Rules can take this use case and augment it with additional fields, such as whether the source IP address is in a Cloudflare Managed IP List. This allows users to be more specific about when the security level is changed to ‘High’, for example only when the request is also marked as coming from an open HTTP or SOCKS proxy endpoint; these are frequently used to launch attacks and hide attackers’ identities:
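
A sketch of such an expression follows, assuming the account has access to the managed Open Proxies list (referenced here as $cf.open_proxies; verify the list name against your account):

(starts_with(http.request.uri.path, "/contact")) and (ip.src in $cf.open_proxies)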

This reduces the chance of a false positive, and a genuine visitor to the contact form being served with a challenge.

Try it now

Configuration Rules are available now via API, UI, and Terraform for all Cloudflare plans! We are excited to see how you will use them in conjunction with all our new rules releases from this week.

Where to? Introducing Origin Rules

Post Syndicated from Matt Bullock original https://blog.cloudflare.com/origin-rules/

The host header of an HTTP request tells the receiving server (‘origin’) which website or application a client wants to access.

When an origin receives an HTTP request, it checks the value of this ‘host’ header to see if it is responsible for that traffic. If it finds a match the request will be routed appropriately and the correct data will be returned to the visitor. If it doesn’t find a match, it will return an error telling the visitor it doesn’t have an application or website that matches what they are asking for.

In simple setups this is often not an issue. All requests for example.com are sent to the same origin, which sees the host header example.com and returns the relevant files. However, not all setups are as straightforward. SaaS (Software-as-a-Service) platforms use host headers to route visitors to the correct instance or S3-compatible bucket.

To ensure the correct content is still loaded, the host header must equal the name of this instance or bucket to allow the receiving origin to route it correctly. This means at some point in the traffic flow, the host header must be changed to match the instance or bucket name, before being sent to the SaaS platform.

Another common issue arises when web applications on an origin are listening on a non-standard port, e.g. 8001. Requests sent via HTTPS will by default arrive on port 443. To ensure the traffic isn’t subsequently sent to port 443 on the origin, the traffic must be intercepted and have its destination port changed to 8001. This ensures the origin receives traffic where it expects it. Previously this would be done with a Cloudflare Worker, a Cloudflare Spectrum application, or by running a dedicated application on the origin.

Both of these scenarios require customers to write and maintain code to intercept HTTP requests and parse them to ensure they go to the correct origin location, the correct port on that origin, and with the correct host header. This is a burden for administrators to maintain, particularly as legacy applications are migrated away from on-premise and into SaaS.

Cloudflare users want more control over where their traffic goes, when it goes there, and what it looks like when it arrives. And they want this to be simple to set up and maintain.

To meet those demands we are today announcing Origin Rules, a new product which allows for overriding the host header, the Server Name Indication (SNI), destination port and DNS resolution of matching HTTP requests.

Origin Rules is now the one-stop destination for users who want to change which origin traffic goes to, when this should happen, and what that traffic looks like when it arrives – all without ever having to write a single line of code.

One hostname, many origins

Setting up your service on Cloudflare is very simple. You tell us your domain name, example.com, and where traffic should be sent to when we receive requests that match it. Often this is an IP address. You can also create subdomains, e.g. shop.example.com, and follow the same pattern.

This allows for the web server running www.example.com to live on the IP address 98.51.100.12, and the web server responsible for running shop.example.com to live on a different IP address, e.g. 203.0.113.34. When Cloudflare receives a request for shop.example.com, we send that traffic to the web server at 203.0.113.34 with the host header shop.example.com.
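
For illustration, this mapping is just a proxied DNS record, which can be created in the dashboard or via the DNS records API; a quick sketch (zone ID and token are placeholders):

curl -X POST "https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/dns_records" \
-H "Authorization: Bearer <API_TOKEN>" \
-d '{
  "type": "A",
  "name": "shop.example.com",
  "content": "203.0.113.34",
  "proxied": true
}'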

As most web servers commonly serve multiple websites, this host header is used to ensure the correct content is loaded. The web server looks at the request it receives, checks the host header, and tries to match it against websites it’s been told to serve. If it finds a match, it will route this request to the corresponding website’s configuration and the correct files are returned to the visitor.

This has been a foundational principle of the Internet for many years now. Unsurprisingly however, new solutions emerge and user needs evolve.

We have heard from users who want to be able to send different URLs to different origins, such as a SaaS provider for their ecommerce platform and a SaaS provider for their support desk. To achieve this, users could, and do, run and manage their own reverse proxy at this IP address to act as a router. This allows a user to send all traffic for example.com to a single IP address, and let the reverse proxy determine where it goes next:

    location ~ ^/shop(.*)$ {
        # capture the rest of the path so $1 is defined below
        proxy_set_header   Host $http_host;
        proxy_pass         "https://203.0.113.34$1";
    }

This reverse proxy would detect the traffic sent with the host header example.com with a URI path starting with /shop, and send those matching HTTP requests to the correct SaaS application.

This is potentially a complex system to maintain, however, and as it is an ‘extra hop’, there is an increase in latency as requests first go through Cloudflare, to the origin server, then back to the SaaS provider – who may also be on Cloudflare. In a world rapidly migrating away from on-premise software to SaaS platforms, running your own server to do this specific function goes against the grain.

Users therefore want a way to tell Cloudflare – ‘for all traffic to www.example.com, send it to 98.51.100.12. BUT, if you see any traffic to www.example.com/shop, send it to 203.0.113.34’. This is what we call a resolve override. It is essentially a DNS override.

With a resolve override in place, HTTP requests to www.example.com/shop are now correctly sent by Cloudflare to 203.0.113.34 as requested. And they fail. The web server says it doesn’t know what to do with the HTTP request. This is because the host header is still www.example.com, and the web server does not have any knowledge of that website.

To fix this, we need to make sure these requests are sent to 203.0.113.34 with a host header of shop.example.com. This is what is known as a host header override. Now, requests to www.example.com/shop are not only correctly routed to 203.0.113.34, but the host header is changed to one that the ecommerce software is expecting – and thus the request is correctly routed, and the visitor sees the correct content.

The management of these selective overrides, and other overrides, is achieved via Origin Rules.

Origin Rules allow users to route HTTP traffic to different destinations and override certain request characteristics based on a number of criteria such as the visitor’s country, IP address or HTTP request headers.

Route on more than a URL

Origin Rules is built on top of our ruleset engine. This gives users the ability to perform routing decisions based on many fields, including the requested URL, the visitor’s country, specific request headers, and more.

Using a combination of one or more of these available fields, users can ensure traffic is routed to specific backends, only when specific criteria are met such as host, URI path, visitor’s country, and HTTP request headers.

Historically, host header override and resolve override were achieved with the setting of a Page Rule.

Page Rules is the ‘If This Then That’ of Cloudflare. Where the ‘If…’ is a URL, and the ‘Then That’ is changing how we handle traffic to specific parts of a ‘zone’. It allows users to selectively change how traffic is handled, or in this case, where traffic is sent. It is very well adopted, with over one million Page Rules created in the past three months alone.

Page Rules, however, are limited to performing actions based upon the requested URL. This means if users want to change the backend an HTTP request goes to, they need to make that decision based on the URL alone. This can be challenging for users who may want to base this decision on more nuanced aspects, like the user agent of the visitor or the presence of a specific cookie.

With Origin Rules, users can perform host header, resolve, destination port and SNI overrides based on any number of criteria, not only the requested URL. This unlocks a number of interesting use cases.

Example use case: integration with cloud storage endpoints

One such use case is using a cloud storage provider as a backend for static assets, such as images. Enterprise zones can use a combination of host header override and resolve override actions to override the destination of outgoing HTTP requests. This allows all traffic to example.net to be sent to 98.51.100.12, but requests to example.net/*.jpg to be sent to a publicly accessible S3-compatible bucket.

To do this, the user would create an Origin Rule setting the resolve override value to be a DNS record on their own zone, pointing to the S3 provider’s URL. This ensures that requests matching the pattern are routed to the S3 URL. However, when the cloud storage provider receives the request it will drop it – as it does not know how to route requests for the host example.net. Therefore, users also need to deploy a host header override, changing this value to match the bucket name – e.g. bucket.example.net.

Combined, this ensures requests matching the pattern correctly reach the cloud storage provider – with a host header it can use to correctly route the request to the correct bucket.
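
Sketched via the Rulesets API, assuming the http_request_origin phase and the route action parameter names below (verify against the Origin Rules documentation; hostnames and IDs are placeholders):

curl -X PUT "https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/rulesets/phases/http_request_origin/entrypoint" \
-H "Authorization: Bearer <API_TOKEN>" \
-d '{
  "rules": [
    {
      "description": "Serve .jpg requests from the S3-compatible bucket",
      "expression": "ends_with(http.request.uri.path, \".jpg\")",
      "action": "route",
      "action_parameters": {
        "host_header": "bucket.example.net",
        "origin": { "host": "bucket.example.net" }
      }
    }
  ]
}'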

Origin Rules also enable new use cases. For example, a user can use Origin Rules to A/B test different cloud providers prior to a cut over. This is possible by using the field http.request.cookies and routing traffic to a new, test bucket or cloud provider based on the presence of a specific cookie on the request.

Users with multiple storage regions can also use the ip.geoip.country field within a filter expression to route users to the closest storage instance, reducing latency and time to load for these requests.

Destination port override

Cloudflare listens on 13 ports; seven ports for HTTP, six ports for HTTPS. This means if a request is sent to a URL with the destination port of 443, as is standard for HTTPS, it will be sent to the origin server with a destination port of 443. The same 1:1 mapping applies to the other twelve ports.

But what if a user wanted to change that mapping, for example when the backend origin server is listening on port 8001? In this scenario, an intermediate service is required to listen for requests on port 443 and create a sub-request with the destination port set to 8001.

Historically this was done on the origin server itself – with a reverse proxy server listening for requests on 443 and other ports and proxying those requests to another port.

Apache

<VirtualHost *:*>
    ProxyPreserveHost On
    ProxyPass / http://0.0.0.0:8001/
    ProxyPassReverse / http://0.0.0.0:8001/
    ServerName example.com
</VirtualHost>

NGINX

server {
    listen 443;
    server_name example.com;
    location / {
        proxy_pass http://0.0.0.0:8001;
    }
}

More recently, users have deployed Cloudflare Workers to perform this service, modifying the destination port before HTTP requests ever reach their servers.

Origin Rules simplifies destination port modifications, letting users change the destination port via a simple rules experience, without ever having to write a single line of code or configuration.

This destination port modification can also be triggered on almost any field available in the ruleset engine, allowing users to change which port to send requests to based on the URL, URI path, the presence of an HTTP request header, and more.
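
For API and Terraform users, a minimal sketch of such a rule follows, assuming the same route action as above with an origin.port parameter (names to verify against the docs):

{
  "description": "Send traffic for this hostname to the app listening on port 8001",
  "expression": "http.host eq \"example.com\"",
  "action": "route",
  "action_parameters": { "origin": { "port": 8001 } }
}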

Server Name Indication

Server Name Indication (SNI) is an addition to the TLS encryption protocol. It enables a client device to specify the domain name it is trying to reach in the first step of the TLS handshake, preventing common “name mismatch” errors. Customers using Cloudflare for SaaS may have millions of hostnames pointing to Cloudflare. However, the origin that these requests are sent to may not have an individual certificate for each of the hostnames.

Users today can do this on a per-custom-hostname basis using custom origins in SSL for SaaS; for Enterprise customers not using this setup, however, it was previously an impossible task.

Enterprise users can use Origin Rules to override the value of the SNI, providing it matches any other zone in their account. This removes the need for users to manage multiple certificates on the origin or choose not to encrypt connections from Cloudflare to the origin.
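
A sketch of an SNI override rule follows; the sni parameter name is an assumption to verify, and the value must belong to another zone on the same account:

{
  "description": "Override the SNI sent to the origin",
  "expression": "http.host eq \"app.example.com\"",
  "action": "route",
  "action_parameters": { "sni": { "value": "other-zone.example.com" } }
}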

Try it now

Origin Rules are available to use now via API, Terraform, and our dashboard. Further details can be found on our Developers Docs. Currently, destination port rewriting is available for all our customers as part of Origin Rules. Resolve Override, Host Header Override and SNI overrides are available to our Enterprise users.

WebRTC live streaming to unlimited viewers, with sub-second latency

Post Syndicated from Kyle Boutette original https://blog.cloudflare.com/webrtc-whip-whep-cloudflare-stream/

Creators and broadcasters expect to be able to go live from anywhere, on any device. Viewers expect “live” to mean “real-time”. The protocols that power most live streams are unable to meet these growing expectations.

In talking to developers building live streaming into their apps and websites, we’ve heard near universal frustration with the limitations of existing live streaming technologies. Developers in 2022 rightly expect to be able to deliver low latency to viewers, broadcast reliably, and use web standards rather than old protocols that date back to the era of Flash.

Today, we’re excited to announce in open beta that Cloudflare Stream now supports live video streaming over WebRTC, with sub-second latency, to unlimited concurrent viewers. This is a new feature of Cloudflare Stream, and you can start using it right now in the Cloudflare Dashboard — read the docs to get started.

WebRTC with Cloudflare Stream leapfrogs existing tools and protocols, exclusively uses open standards with zero dependency on a specific SDK, and empowers any developer to build both low latency live streaming and playback into their website or app.

The status quo of streaming live video is broken

The status quo of streaming live video has high latency, depends on archaic protocols and is incompatible with the way developers build apps and websites. A reasonable person’s expectations of what the Internet should be able to deliver in 2022 are simply unmet by the dominant set of protocols carried over from past eras.

Viewers increasingly expect “live” to mean “real-time”. People want to place bets on sports broadcasts in real-time, interact and ask questions to presenters in real-time, and never feel behind their friends at a live event.

In practice, the HLS and DASH standards used to deliver video have 10+ seconds of latency. LL-HLS and LL-DASH bring this down to closer to 5 seconds, but only as a hack on top of the existing protocol that delivers segments of video in individual HTTP requests. Sending mini video clips over TCP simply cannot deliver video in real-time. HLS and DASH are here to stay, but aren’t the future of real-time live video.

Creators and broadcasters expect to be able to go live from anywhere, on any device.

In practice, people creating live content are stuck with a limited set of native apps, and can’t go live using RTMP from a web browser. Because it’s built on top of TCP, the RTMP broadcasting protocol struggles under even the slightest network disruption, making it a poor or often unworkable option when broadcasting from mobile networks. RTMP, originally built for use with Adobe Flash Player, was last updated in 2012, and while Stream supports the newer SRT protocol, creators need an option that works natively on the web and can more easily be integrated in native apps.

Developers expect to be able to build using standard APIs that are built into web browsers and native apps.

In practice, RTMP can’t be used from a web browser, and creating a native app that supports RTMP broadcasting typically requires diving into lower-level programming languages like C and Rust. Only those with expertise in both live video protocols and these languages have full access to the tools needed to create novel live streaming client applications.

We’re solving this by using new open WebRTC standards: WHIP and WHEP

WebRTC is the real-time communications protocol, supported across all web browsers, that powers video calling services like Zoom and Google Meet. Since inception it’s been designed for real-time, ultra low-latency communications.

While WebRTC is well established, for most of its history it’s lacked standards for:

  • Ingestion — how broadcasters should send media content (akin to RTMP today)
  • Egress — how viewers request and receive media content (akin to DASH or HLS today)

As a result, developers have had to implement this on their own, and client applications on both sides are often tightly coupled to provider-specific implementations. Developers we talk to often express frustration, having sunk months of engineering work into building around a specific vendor’s SDK, unable to switch without a significant rewrite of their client apps.

At Cloudflare, our mission is broader — we’re helping to build a better Internet. Today we’re launching not just a new feature of Cloudflare Stream, but a vote of confidence in new WebRTC standards for both ingestion and egress. We think you should be able to start using Stream without feeling locked into an SDK or implementation specific to Cloudflare, and we’re committed to using open standards whenever possible.

For ingestion, WHIP is an IETF draft on the Standards Track, with many applications already successfully using it in production. For delivery (egress), WHEP is an IETF draft with broad agreement. Combined, they provide a standardized end-to-end way to broadcast one-to-many over WebRTC at scale.

Cloudflare Stream is the first cloud service to let you both broadcast using WHIP and playback using WHEP — no vendor-specific SDK needed. Here’s how it works:

Cloudflare Stream is already built on top of the Cloudflare developer platform, using Workers and Durable Objects running on Cloudflare’s global network, within 50ms of 95% of the world’s Internet-connected population.

Our WebRTC implementation extends this to relay WebRTC video through our network. Broadcasters stream video using WHIP to the point of presence closest to their location, which tells the Durable Object where the live stream can be found. Viewers request streaming video from the point of presence closest to them, which asks the Durable Object where to find the stream, and video is routed through Cloudflare’s network, all with sub-second latency.

Using Durable Objects, we achieve this with zero centralized state. And just like the rest of Cloudflare Stream, you never have to think about regions, both in terms of pricing and product development.

While existing ultra low-latency streaming providers charge significantly more to stream over WebRTC, because Stream runs on Cloudflare’s global network, we’re able to offer WebRTC streaming at the same price as delivering video over HLS or DASH. We don’t think you should be penalized with higher pricing when choosing which technology to rely on to stream live video. Once generally available, WebRTC streaming will cost $1 per 1000 minutes of video delivered, just like the rest of Stream.

What does sub-second latency let you build?

Ultra low latency unlocks interactivity within your website or app, removing the time delay between creators, in-person attendees, and those watching remotely.

Developers we talk to are building everything from live sports betting, to live auctions, to live viewer Q&A and even real-time collaboration in video post-production. Even streams without in-app interactivity can benefit from real-time — no sports fan wants to get a text from their friend at the game that ruins the moment, before they’ve had a chance to watch the final play. Whether you’re bringing an existing app or have a new idea in mind, we’re excited to see what you build.

If you can write JavaScript, you can let your users go live from their browser

While hobbyist and professional creators might take the time to download and learn how to use an application like OBS Studio, most Internet users won’t get past the friction of learning new tools and copying RTMP keys from one tool to another. To empower more people to go live, they need to be able to broadcast from within your website or app, just by enabling access to the camera and microphone.

Cloudflare Stream with WebRTC lets you build live streaming into your app as a front-end developer, without any special knowledge of video protocols. And our approach, using the WHIP and WHEP open standards, means you can do this with zero dependencies, using 100% your own code that you control.

Go live from a web browser with just a few lines of code

You can go live right now, from your web browser, by creating a live input in the Cloudflare Stream dashboard, and pasting a URL into the example linked below.

Read the docs or run the example code below in your browser using StackBlitz.

<video id="input-video" autoplay autoplay muted></video>

import WHIPClient from "./WHIPClient.js";

// the WebRTC URL from the live input you created in the Stream dashboard
const url = "<WEBRTC_URL_FROM_YOUR_LIVE_INPUT>";
// the <video> element that previews the camera while broadcasting
const videoElement = document.getElementById("input-video");
const client = new WHIPClient(url, videoElement);

This example uses an example WHIP client, written in just 100 lines of JavaScript, using APIs that are native to web browsers, with zero dependencies. But because WHIP is an open standard, you can use any WHIP client you choose. Support for WHIP is growing across the video streaming industry — it has recently been added to GStreamer, and one of the authors of the WHIP specification has written a JavaScript client implementation. We intend to support the full WHIP specification, including Trickle ICE for fast NAT traversal.

Play a live stream in a browser, with sub-second latency, no SDK required

Once you’ve started streaming, copy the playback URL from the live input you just created, and paste it into the example linked below.

Read the docs or run the example code below in your browser using StackBlitz.

<video id="playback" controls autoplay muted></video>

import WHEPClient from './WHEPClient.js';

// the WebRTC playback URL from your live input
const url = "<WEBRTC_PLAYBACK_URL_FROM_YOUR_LIVE_INPUT>";
// the <video> element that will play the live stream
const videoElement = document.getElementById("playback");
const client = new WHEPClient(url, videoElement);

Just like the WHIP example before, this one uses an example WHEP client we’ve written that has zero dependencies. WHEP is an earlier IETF draft than WHIP, published in July of this year, but adoption is moving quickly. People in the community have already written open-source client implementations in both JavaScript and C, with more to come.

Start experimenting with real-time live video, in open beta today

WebRTC streaming is in open beta today, ready for you to use as an integrated feature of Cloudflare Stream. Once Generally Available, WebRTC streaming will be priced like the rest of Cloudflare Stream, based on minutes of video delivered and minutes stored.

Read the docs to get started.

Logpush: now lower cost and with more visibility

Post Syndicated from Duc Nguyen original https://blog.cloudflare.com/logpush-filters-alerts/

Logs are a critical part of every successful application. Cloudflare products and services around the world generate massive amounts of logs upon which customers of all sizes depend. Structured logs from our products are used by customers for purposes including analytics, debugging performance issues, monitoring application health, maintaining security standards for compliance reasons, and much more.

Logpush is Cloudflare’s product for pushing these critical logs to customer systems for consumption and analysis. Whenever our products generate logs as a result of traffic or data passing through our systems from anywhere in the world, we buffer these logs and push them directly to customer-defined destinations like Cloudflare R2, Splunk, AWS S3, and many more.

Today we are announcing three key new features for Cloudflare’s Logpush product. First, the ability to send only logs matching certain criteria. Second, the ability to get alerted when logs fail to push because the customer destination is having issues, or because of network issues between Cloudflare and the destination. Third, the ability to query analytics around the health of Logpush jobs, such as how many bytes and records were pushed and the number of successful and failing pushes.

Filtering logs before they are pushed

Because logs are both critical and generated in high volume, many customers have to maintain complex infrastructure just to ingest and store logs, as well as deal with ever-increasing related costs. On a typical day, one real example customer receives about 21 billion records, or 2.1 terabytes (about 24.9 TB uncompressed) of gzip-compressed logs. Over the course of a month, that could easily be hundreds of billions of events and hundreds of terabytes of data.

It is often unnecessary to store and analyze all of this data, and customers could get by with specific subsets of the data matching certain criteria. For example, a customer might want just the set of HTTP data that had status code >= 400, or the set of firewall data where the action taken was to block the user.

We can now achieve this in our Logpush jobs by setting specific filters on the fields of the log messages themselves. You can use either our API or the Cloudflare dashboard to set up filters.

To do this in the dashboard, either create a new Logpush job or modify an existing job. You will see the option to set certain filters. For example, an ecommerce customer might want to receive logs only for the checkout page where the bot score was non-zero:

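A sketch of what that filter could look like in the job configuration follows; the field names (ClientRequestPath, BotScore) come from the HTTP requests dataset, and the exact operator spellings are assumptions to verify against the Logpush filter documentation:

"filter": "{\"where\":{\"and\":[{\"key\":\"ClientRequestPath\",\"operator\":\"contains\",\"value\":\"/checkout\"},{\"key\":\"BotScore\",\"operator\":\"gt\",\"value\":0}]}}"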

Logpush job alerting

When logs are a critical part of your infrastructure, you want peace of mind that logging infrastructure is healthy. With that in mind, we are announcing the ability to get notified when your Logpush jobs have been retrying to push and failing for 24 hours.

To set up alerts in the Cloudflare dashboard:

1. First, navigate to “Notifications” in the left panel of the account view

2. Next, click the “Add” button

3. Select the alert “Failing Logpush Job Disabled”

4. Configure the alert and click Save.

That’s it — you will receive an email alert if your Logpush job is disabled.

Logpush Job Health API

We have also added the ability to query stats related to the health of your Logpush jobs via our GraphQL API. Customers can now query for things like the number of bytes pushed, the number of compressed bytes pushed, the number of records pushed, the status of each push, and much more. Using these stats, customers gain greater visibility into a core part of their infrastructure. The GraphQL API is self-documenting, so full details about the new logpushHealthAdaptiveGroups node can be found using any GraphQL client, but head to the GraphQL docs for more information.
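
The queries below can be sent to the GraphQL endpoint with any HTTP client. For example, a sketch using curl (the endpoint accepts a JSON body with query and variables keys; the token, zone ID, and query are placeholders):

curl -s -X POST 'https://api.cloudflare.com/client/v4/graphql' \
-H "Authorization: Bearer <API_TOKEN>" \
-d '{"query": "<GRAPHQL_QUERY>", "variables": {"zoneTag": "<ZONE_ID>"}}' | jq .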

Below are a couple of example queries showing how you can use the GraphQL API to find stats related to your Logpush jobs.

Query for number of pushes to S3 that resulted in status code != 200

query($zoneTag: string)
{
  viewer
  {
    zones(filter: { zoneTag: $zoneTag})
    {
      logpushHealthAdaptiveGroups(filter: {
        datetime_gt:"2022-08-15T00:00:00Z",
        destinationType:"s3",
        status_neq:200
      }, 
      limit:10)
      {
        count,
        dimensions {
          jobId,
          status,
          destinationType
        }
      }
    }
  }
}

Getting the number of bytes, compressed bytes and records that were pushed

query($zoneTag: string)
{
  viewer
  {
    zones(filter: { zoneTag: $zoneTag})
    {
      logpushHealthAdaptiveGroups(filter: {
        datetime_gt:"2022-08-15T00:00:00Z",
        destinationType:"s3",
        status:200
      }, 
      limit:10)
      {
        sum {
          bytes,
          bytesCompressed,
          records
        }
      }
    }
  }
}

Summary

Logpush is a robust and flexible platform for customers who need to integrate their own logging and monitoring systems with Cloudflare. Different Logpush jobs can be deployed to support multiple destinations or, with filtering, multiple subsets of logs.

Customers who haven’t created Logpush jobs are encouraged to do so. Try pushing your logs to R2 for safe-keeping! For customers who don’t currently have access to this powerful tool, consider upgrading your plan.

Regional Services comes to India, Japan and Australia

Post Syndicated from Achiel van der Mandele original https://blog.cloudflare.com/regional-services-comes-to-apac/

We announced the Data Localization Suite in 2020, when requirements for data localization were already important in the European Union. Since then, we’ve witnessed a growing trend toward localization globally. We are thrilled to expand our coverage to India, Japan and Australia, allowing more customers to use Cloudflare by giving them precise control over which parts of the Cloudflare network are able to perform advanced functions, like WAF or Bot Management, that require inspecting traffic.

Regional Services, a recap

In 2020, we introduced Regional Services, a new way for customers to use Cloudflare. With Regional Services, customers can limit which data centers actually decrypt and inspect traffic. This helps because certain customers are affected by regulations on where they are allowed to service traffic. Others have agreements with their customers as part of contracts specifying exactly where traffic is allowed to be decrypted and inspected.

As one German bank told us: “We can look at the rules and regulations and debate them all we want. As long as you promise me that no machine outside the European Union will see a decrypted bank account number belonging to one of my customers, we’re happy to use Cloudflare in any capacity”.

Under normal operation, Cloudflare uses its entire network to perform all functions. This is what most customers want: leverage all of Cloudflare’s data centers so that you always service traffic to eyeballs as quickly as possible. Increasingly, we are seeing customers that wish to strictly limit which data centers service their traffic. With Regional Services, customers can use Cloudflare’s network but limit which data centers perform the actual decryption. Products that require decryption, such as WAF, Bot Management and Workers will only be applied within those data centers.

How does Regional Services work?

You might be asking yourself: how does that even work? Doesn’t Cloudflare operate an anycast network? Cloudflare was built from the bottom up to leverage anycast, a routing and addressing technique. All of Cloudflare’s data centers advertise the same IP addresses through the Border Gateway Protocol. Whichever data center is closest to you from a network point of view is the one that you’ll hit.

This is great for two reasons. The first is that the closer the data center to you, the faster the reply. The second great benefit is that this comes in very handy when dealing with large DDoS attacks. Volumetric DDoS attacks throw a lot of bogus traffic at you, which overwhelms network capacity. Cloudflare’s anycast network is great at taking on these attacks because they get distributed across the entire network.

Anycast doesn’t respect regional borders; it doesn’t even know about them. That is why, out of the box, Cloudflare can’t guarantee that traffic inside a country will also be serviced there. Although typically you’ll hit a data center inside your country, it’s very possible that your Internet Service Provider will send traffic to a network that might route it to a different country.

Regional Services solves that: when turned on, each data center becomes aware of which region it is operating in. If a user from a country hits a data center that doesn’t match the region that the customer has selected, we simply forward the raw TCP stream in encrypted form. Once it reaches a data center inside the right region, we decrypt and apply all Layer 7 products. This covers products such as CDN, WAF, Bot Management and Workers.

Let’s take an example. A user is in Kerala, India and their Internet Service Provider has determined that the fastest path to one of our data centers is to Colombo, Sri Lanka. In this example, a customer may have selected India as the sole region within which traffic should be serviced. The Colombo data center sees that this traffic is meant for the India region. It does not decrypt, but instead forwards it to the closest data center inside India. There, we decrypt and products such as WAF and Workers are applied as if the traffic had hit the data center directly.

Bringing Regional Services to Asia

Historically, we’ve seen most interest in Regional Services in geographic regions such as the European Union and the Americas. Over the past few years, however, we have seen a lot of interest from Asia Pacific. Based on customer feedback and analysis of regulations, we quickly concluded there were three key regions we needed to support: India, Japan and Australia. We’re proud to say that all three are now generally available for use today.

But we’re not done yet! We realize there are many more customers that require localization to their particular region. We’re looking to add many more in the near future and are working hard to make it easier to support more of them. If you have a region in mind, we’d love to hear it!

India, Japan and Australia are all live today! If you’re interested in using the Data Localization Suite, contact your account team!

Store and retrieve your logs on R2

Post Syndicated from Shelley Jones original https://blog.cloudflare.com/store-and-retrieve-logs-on-r2/

Following today’s announcement of General Availability of Cloudflare R2 object storage, we’re excited to announce that customers can also store and retrieve their logs on R2.

Cloudflare’s Logging and Analytics products provide vital insights into customers’ applications. Though we have a breadth of capabilities, logs in particular play a pivotal role in understanding what occurs at a granular level: we produce detailed logs containing metadata generated by Cloudflare products via events flowing through our network, and customers depend on them to illustrate or investigate anything (and everything) from the general performance or health of applications to security incidents.

Until today, we have only provided customers with the ability to export logs to third-party destinations, for both storage and analysis. However, with Log Storage on R2 we are able to offer customers a cost-effective solution to store event logs for any of our products.

The cost conundrum

We’ve unpacked the commercial impact in a previous blog post, but to recap, the cost of storage can vary broadly depending on the volume of requests Internet properties receive. On top of that, and specifically pertaining to logs, there are usually additional, more expensive fees to access that data whenever the need arises. This can be incredibly problematic, especially when customers are having to balance their budget with the need to access their logs, whether to mitigate a potential catastrophe or just out of curiosity.

With R2, not only do we not charge customers egress costs, but we also provide the opportunity to make further operational savings by centralizing storage and retrieval. Though, most of all, we just want to make it easy and convenient for customers to access their logs via our Retrieval API – all you need to do is provide a time range!

Logs on R2: get started!

Why would you want to store your logs on Cloudflare R2? First, R2 is S3 API compatible, so your existing tooling will continue to work as is. Second, not only is R2 cost-effective for storage, we also do not charge any egress fees if you want to get your logs out of Cloudflare to be ingested into your own systems. You can store logs for any Cloudflare product, and you can also store what you need for as long as you need; retention is completely within your control.
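
Because R2 speaks the S3 API, existing S3 tooling works against it. As a quick sketch, the AWS CLI can list and fetch stored log files, assuming your R2 access key and secret are configured as AWS credentials (bucket name, prefix, file name and account ID are placeholders):

aws s3 ls "s3://<LOG_BUCKET>/<PREFIX>/" \
  --endpoint-url "https://<ACCOUNT_ID>.r2.cloudflarestorage.com"

aws s3 cp "s3://<LOG_BUCKET>/<PREFIX>/<LOG_FILE>.log.gz" . \
  --endpoint-url "https://<ACCOUNT_ID>.r2.cloudflarestorage.com"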

Storing Logs on R2

To create Logpush jobs pushing to R2, you can use either the dashboard or Cloudflare API. Using the dashboard, you can create a job and select R2 as the destination during configuration:

To use the Cloudflare API to create the job, do something like:

curl -s -X POST 'https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs' \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <API_KEY>" \
-d '{
  "name": "<DOMAIN_NAME>",
  "destination_conf": "r2://<BUCKET_PATH>/{DATE}?account-id=<ACCOUNT_ID>&access-key-id=<R2_ACCESS_KEY_ID>&secret-access-key=<R2_SECRET_ACCESS_KEY>",
  "dataset": "http_requests",
  "logpull_options": "fields=ClientIP,ClientRequestHost,ClientRequestMethod,ClientRequestURI,EdgeEndTimestamp,EdgeResponseBytes,EdgeResponseStatus,EdgeStartTimestamp,RayID&timestamps=rfc3339",
  "kind": "edge"
}' | jq .

Please see Logpush over R2 docs for more information.

Log Retrieval on R2

If you have your logs pushed to R2, you could use the Cloudflare API to retrieve logs in specific time ranges like the following:

curl -s -g -X GET 'https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/logs/retrieve?start=2022-09-25T16:00:00Z&end=2022-09-25T16:05:00Z&bucket=<YOUR_BUCKET>&prefix=<YOUR_FILE_PREFIX>/{DATE}' \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <API_KEY>" \
-H "R2-Access-Key-Id: <R2_ACCESS_KEY_ID>" \
-H "R2-Secret-Access-Key: <R2_SECRET_ACCESS_KEY>" | jq .

See Log Retrieval API for more details.

Now that you have critical logging infrastructure on Cloudflare, you probably want to be able to monitor the health of these Logpush jobs as well as get relevant alerts when something needs your attention.

Looking forward

While we have a vision to build out log analysis and forensics capabilities on top of R2 – and a roadmap to get us there – we’d still love to hear your thoughts on any improvements we can make, particularly to our retrieval options.

Get setup on R2 to start pushing logs today! If your current plan doesn’t include Logpush, storing logs on R2 is another great reason to upgrade!

Stream Live is now Generally Available

Post Syndicated from Brendan Irvine-Broque original https://blog.cloudflare.com/stream-live-ga/

Today, we’re excited to announce that Stream Live is out of beta, available to everyone, and ready for production traffic at scale. Stream Live is a feature of Cloudflare Stream that allows developers to build live video features in websites and native apps.

Since its beta launch, developers have used Stream to broadcast live concerts from some of the world’s most popular artists directly to fans, build brand-new video creator platforms, operate a global 24/7 live OTT service, and more. While in beta, Stream has ingested millions of minutes of live video and delivered it to viewers all over the world.

Bring your big live events, ambitious new video subscription service, or the next mobile video app with millions of users — we’re ready for it.

Streaming live video at scale is hard

Live video uses a massive amount of bandwidth. For example, a one-hour live stream at 1080p at 8Mbps is 3.6GB. At typical cloud provider egress prices, even a little egress can break the bank.
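(That’s 8 megabits per second × 3,600 seconds = 28,800 megabits, or roughly 3,600 megabytes, i.e. 3.6GB.)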

Live video must be encoded on-the-fly, in real-time. People expect to be able to watch live video on their phone, while connected to mobile networks with less bandwidth, higher latency and frequently interrupted connections. To support this, live video must be re-encoded in real-time into multiple resolutions, allowing someone’s phone to drop to a lower resolution and continue playback. This can be complex (Which bitrates? Which codecs? How many?) and costly: running a fleet of virtual machines doesn’t come cheap.

Ingest location matters — Live streaming protocols like RTMPS send video over TCP. If a single packet is dropped or lost, the entire connection is brought to a halt while the packet is found and re-transmitted. This is known as “head of line blocking”. The further away the broadcaster is from the ingest server, the more network hops, and the more likely packets will be dropped or lost, ultimately resulting in latency and buffering for viewers.

Delivery location matters — Live video must be cached and served from points of presence as close to viewers as possible. The longer the network round trips, the more likely videos will buffer or drop to a lower quality.

Broadcasting protocols are in flux — The most widely used protocol for streaming live video, RTMPS, was abandoned in 2012, and dates back to the era of Flash video in the early 2000s. A new emerging standard, SRT, is not yet supported everywhere. And WebRTC has only recently evolved into an option for high definition one-to-many broadcasting at scale.

The old way to solve this has been to stitch together separate cloud services from different vendors. One vendor provides excellent content delivery, but no encoding. Another provides APIs or hardware to encode, but leaves you to fend for yourself and build your own storage layer. As a developer, you have to learn, manage and write a layer of glue code around the esoteric details of video streaming protocols, codecs, encoding settings and delivery pipelines.

We built Stream Live to make streaming live video easy, like adding an <img> tag to a website. Live video is now a fundamental building block of Internet content, and we think any developer should have the tools to add it to their website or native app.

With Stream, you or your users stream live video directly to Cloudflare, and Cloudflare delivers video directly to your viewers. You never have to manage internal encoding, storage, or delivery systems — it’s just live video in and live video out.

Our network, our hardware = a solution only Cloudflare can provide

We’re not the only ones building APIs for live video — but we are the only ones with our own global network and hardware that we control and optimize for video. That lets us do things that others can’t, like sub-second glass-to-glass latency using RTMPS and SRT playback at scale.

Newer video codecs require specialized hardware encoders, and while others are beholden to the hardware limitations of public cloud providers, we’re already hard at work installing the latest encoding hardware in our own racks, so that you can deliver high resolution video with even less bandwidth. Our goal is to make what is otherwise only available to video giants available directly to you — stay tuned for some exciting updates on this next week.

Most providers limit how many destinations you can restream or simulcast to. Because we operate our own network, we’ve never even worried about this, and let you restream to as many destinations as you need.

Operating our own network lets us price Stream based on minutes of video delivered — unlike others, we don’t pay someone else for bandwidth and then pass along their costs to you at a markup. The status quo of charging for bandwidth or per-GB storage penalizes you for delivering or storing high resolution content. Ask “why” a few times, and most of the time you’ll discover that others are pushing their own cost structures onto you.

Encoding video is compute-intensive, delivering video is bandwidth-intensive, and location matters when ingesting live video. When you use Stream, you don’t need to worry about optimizing performance, finding a CDN, or tweaking configuration endlessly. Stream takes care of all of this for you.

Free your live video from the business models of big platforms

Nearly every business uses live video, whether to engage with customers, broadcast events or monetize live content. But few have the specialized engineering resources to deliver live video at scale on their own, and wire together multiple low level cloud services. To date, many of the largest content creators have been forced to depend on a shortlist of social media apps and streaming services to deliver live content at scale.

Unlike the incumbent platforms, which force you to put your live video in their apps and services and fit their business models, Stream gives you full control of your live video, on your website or app, on any device, at scale, without pushing your users to someone else’s service.

Free encoding. Free ingestion. Free analytics. Simple per-minute pricing.

                  Others                  Stream
Encoding          $ per minute            Free
Ingestion         $ per GB                Free
Analytics         Separate product        Free
Live recordings   Minutes or hours later  Instant
Storage           $ per GB                Per minute stored
Delivery          $ per GB                Per minute delivered

Other platforms charge for ingestion and encoding. Many even force you to consider where you’re streaming to and from, the bitrate and frames per second of your video, and even which of their datacenters you’re using.

With Stream, encoding and ingestion are free. Other platforms charge for delivery based on bandwidth, penalizing you for delivering high quality video to your viewers. If you stream at a high resolution, you pay more.

With Stream, you don’t pay a penalty for delivering high resolution video. Stream’s pricing is simple — minutes of video delivered and stored. Because you pay per minute, not per gigabyte, you can stream at the ideal resolution for your viewers without worrying about bandwidth costs.
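
To see why this matters, here is a quick back-of-the-envelope sketch. Both prices below are made-up placeholders, purely to illustrate the shape of the two billing models:

// Compare per-GB vs. per-minute billing for one hour of live video.
const perGbPrice = 0.09; // assumed per-GB delivery price, illustration only
const perMinutePrice = 0.005; // assumed per-minute delivery price, illustration only

function gbPerHour(bitrateMbps: number): number {
  return (bitrateMbps * 3600) / 8 / 1000; // megabits over an hour -> GB
}

for (const mbps of [2, 8]) { // roughly SD vs. 1080p
  const gb = gbPerHour(mbps);
  const perGb = (gb * perGbPrice).toFixed(2);
  const perMinute = (60 * perMinutePrice).toFixed(2);
  console.log(`${mbps} Mbps: ${gb.toFixed(1)} GB -> $${perGb} per-GB vs. $${perMinute} per-minute`);
}
// 2 Mbps: 0.9 GB -> $0.08 per-GB vs. $0.30 per-minute
// 8 Mbps: 3.6 GB -> $0.32 per-GB vs. $0.30 per-minute (unchanged)

With per-GB billing, quadrupling the bitrate quadruples the bill; with per-minute billing, the cost of an hour stays the same regardless of resolution.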

Other platforms charge separately for analytics, requiring you to buy another product to get metrics about your live streams.

With Stream, analytics are free. Stream provides an API and dashboard for both server-side and client-side analytics that can be queried on a per-video, per-creator or per-country basis, and more. You can use analytics to identify which creators in your app have the most viewed live streams, inform how much to bill your customers for their own usage, identify where content is going viral, and more.
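
As a sketch of what such a query can look like, here is a call to Cloudflare’s GraphQL Analytics API. The dataset and field names (streamMinutesViewedAdaptiveGroups, minutesViewed) are assumptions drawn from the public docs; check the schema for the exact names before relying on them:

// Query Stream viewership via the GraphQL Analytics API.
const ACCOUNT_ID = "<YOUR_CLOUDFLARE_ACCOUNT_ID>";
const API_TOKEN = "<YOUR_API_TOKEN>";

const query = `{
  viewer {
    accounts(filter: { accountTag: "${ACCOUNT_ID}" }) {
      streamMinutesViewedAdaptiveGroups(
        filter: { date_geq: "2022-09-01", date_leq: "2022-09-30" }
        orderBy: [sum_minutesViewed_DESC]
        limit: 10
      ) {
        dimensions { uid }    # per-video; country/creator dimensions also exist
        sum { minutesViewed }
      }
    }
  }
}`;

const res = await fetch("https://api.cloudflare.com/client/v4/graphql", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${API_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ query }),
});
console.log(await res.json());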

Other platforms tack on live recordings or DVR mode as a separate add-on feature, and recordings only become available minutes or even hours after a live stream ends.

With Stream, live recordings are a built-in feature, made available instantly after a live stream ends. Once the recording is available, it works just like any other video uploaded to Stream, letting you seamlessly use the same APIs for managing both pre-recorded and live content.

Build live video into your website or app in minutes

Stream Live is now Generally Available

Cloudflare Stream enables you or your users to go live using the same protocols and tools that broadcasters big and small use to go live to YouTube or Twitch, but gives you full control over access and presentation of live streams.

Step 1: Create a live input

Create a new live input from the Stream Dashboard or use the Stream API:

Request

curl -X POST \
-H "Authorization: Bearer <YOUR_API_TOKEN>" \
-d '{"recording": { "mode": "automatic" }}' \
https://api.cloudflare.com/client/v4/accounts/<YOUR_CLOUDFLARE_ACCOUNT_ID>/stream/live_inputs

Response

{
  "result": {
    "uid": "<UID_OF_YOUR_LIVE_INPUT>",
    "rtmps": {
      "url": "rtmps://live.cloudflare.com:443/live/",
      "streamKey": "<PRIVATE_RTMPS_STREAM_KEY>"
    },
    ...
  }
}

Step 2: Use the RTMPS key with any live broadcasting software, or in your own native app

Copy the RTMPS URL and key, and use them with your live streaming application. We recommend using Open Broadcaster Software (OBS) to get started, but any RTMPS or SRT compatible software should be able to interoperate with Stream Live.

Enter the Stream RTMPS URL and the Stream Key from Step 1:

Stream Live is now Generally Available

Step 3: Preview your live stream in the Cloudflare Dashboard

In the Stream Dashboard, within seconds of going live, you will see a preview of what your viewers will see, along with the real-time connection status of your live stream.

Stream Live is now Generally Available

Step 4: Add live video playback to your website or app

Stream your video using our Stream Player embed code, or use any video player that supports HLS or DASH — live streams can be played on both websites and in native iOS and Android apps.
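
On the web, one option is the open-source hls.js library. Here’s a minimal sketch that attaches a live HLS manifest to a video element; the manifest URL and element ID are placeholders:

// Play a Stream Live HLS manifest in the browser with hls.js
// (https://github.com/video-dev/hls.js). Safari and iOS play HLS natively.
import Hls from "hls.js";

const manifestUrl =
  "https://customer-<CODE>.cloudflarestream.com/<LIVE_INPUT_UID>/manifest/video.m3u8";
const video = document.getElementById("player") as HTMLVideoElement;

if (video.canPlayType("application/vnd.apple.mpegurl")) {
  video.src = manifestUrl; // native HLS support
} else if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource(manifestUrl);
  hls.attachMedia(video);
}
video.play();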

For example, on iOS, all you need to do is provide AVPlayer with the URL to the HLS manifest for your live input, which you can find via the API or in the Stream Dashboard.

import SwiftUI
import AVKit

struct MyView: View {
    // Change the url to the Cloudflare Stream HLS manifest URL
    private let player = AVPlayer(url: URL(string: "https://customer-9cbb9x7nxdw5hb57.cloudflarestream.com/8f92fe7d2c1c0983767649e065e691fc/manifest/video.m3u8")!)

    var body: some View {
        VideoPlayer(player: player)
            .onAppear() {
                player.play()
            }
    }
}

struct MyView_Previews: PreviewProvider {
    static var previews: some View {
        MyView()
    }
}

To run a complete example app in Xcode, follow this guide in the Stream Developer docs.

Companies are building whole new video platforms on top of Stream

Developers want control, but most don’t have time to become video experts. And even video experts building innovative new platforms don’t want to manage live streaming infrastructure.

Switcher Studio’s whole business is live video — their iOS app allows creators and businesses to produce their own branded, multi-camera live streams. Switcher uses Stream as an essential part of their live streaming infrastructure. In their own words:

“Since 2014, Switcher has helped creators connect to audiences with livestreams. Now, our users create over 100,000 streams per month. As we grew, we needed a scalable content delivery solution. Cloudflare offers secure, fast delivery, and even allowed us to offer new features, like multistreaming. Trusting Cloudflare Stream lets our team focus on the live production tools that make Switcher unique.”

While Stream Live has been in beta, we’ve worked with many customers like Switcher, where live video isn’t just one product feature, it is the core of their product. Even as experts in live video, they choose to use Stream, so that they can focus on the unique value they create for their customers, leaving the infrastructure of ingesting, encoding, recording and delivering live video to Cloudflare.

Start building live video into your website or app today

It takes just a few minutes to sign up and start your first live stream using the Cloudflare Dashboard. No code is required to get started, and APIs are ready for when you want to build your own live video features. Give it a try — we’re ready for you, no matter the size of your audience.

Introducing Advanced DDoS Alerts

Post Syndicated from Omer Yoachimik original https://blog.cloudflare.com/advanced-ddos-alerts/

Introducing Advanced DDoS Alerts


We’re pleased to introduce Advanced DDoS Alerts. Advanced DDoS Alerts are customizable and provide users the flexibility they need when managing many Internet properties. Users can easily define which alerts they want to receive — for which DDoS attack sizes, protocols and for which Internet properties.

This release includes two types of Advanced DDoS Alerts:

  1. Advanced HTTP DDoS Attack Alerts – Available to WAF/CDN customers on the Enterprise plan, who have also subscribed to the Advanced DDoS Protection service.
  2. Advanced L3/4 DDoS Attack Alerts – Available to Magic Transit and Spectrum BYOIP customers on the Enterprise plan.

Standard DDoS Alerts are available to customers on all plans, including the Free plan. Advanced DDoS Alerts are part of Cloudflare’s Advanced DDoS service.

Why alerts?

Distributed Denial of Service attacks are cyber attacks that aim to take down your Internet properties and make them unavailable for your users. Back in 2017, Cloudflare pioneered Unmetered DDoS Protection to provide all customers with DDoS protection, without limits, ensuring that their Internet properties remain available. We’re able to provide this level of commitment to our customers thanks to our automated DDoS protection systems. But if the systems operate automatically, why even be alerted?

Well, to put it plainly, when our DDoS protection systems kick in, they insert ephemeral rules inline to mitigate the attack. Many of our customers operate business-critical applications and services. When our systems make a decision to insert a rule, customers might want to verify that all the malicious traffic is mitigated, and that legitimate user traffic is not. Our DDoS alerts begin firing as soon as our systems make a mitigation decision. By informing our customers about a decision to insert a rule in real time, they can observe and verify that their Internet properties are both protected and available.

Managing many Internet properties

The standard DDoS Alerts notify you of DDoS attacks that target any and all of your Cloudflare-protected Internet properties. However, some of our customers may manage large numbers of Internet properties, ranging from hundreds to hundreds of thousands. The standard DDoS Alerts would notify users every time one of those properties comes under attack, which can become very noisy.

The Advanced DDoS Alerts address this concern by allowing users to select the specific Internet properties that they want to be notified about; zones and hostnames for WAF/CDN customers, and IP prefixes for Magic Transit and Spectrum BYOIP customers.

Introducing Advanced DDoS Alerts
Creating an Advanced HTTP DDoS Attack Alert: selecting zones and hostnames
Introducing Advanced DDoS Alerts
Creating an Advanced L3/4 DDoS Attack Alert: selecting prefixes

One (attack) size doesn’t fit all

The standard DDoS Alerts fire on DDoS attacks of any size. Well, almost any size: we implemented minimal alert thresholds to avoid spamming our customers’ email inboxes, but those limits are very small and not customer-configurable. As we’ve seen in the recent DDoS trends report, most attacks are very small, which is another reason the standard DDoS Alerts can become noisy for customers that only care about very large attacks. On the opposite end of the spectrum, they may be too quiet for customers that do want to be notified about smaller attacks.

The Advanced DDoS Alerts let customers choose their own alert threshold. WAF/CDN customers can define the minimum request-per-second rate of an HTTP DDoS attack alert. Magic Transit and Spectrum BYOIP customers can define the packet-per-second and megabit-per-second rates of an L3/4 DDoS attack alert.

Introducing Advanced DDoS Alerts
Creating an Advanced HTTP DDoS Attack Alert: defining request rate
Introducing Advanced DDoS Alerts
Creating an Advanced L3/4 DDoS Attack Alert: defining packet/bit rate

Not all protocols are created equal

As part of the Advanced L3/4 DDoS Alerts, we also let our users define the protocols to be alerted on. If a Magic Transit customer manages mostly UDP applications, they may not care about TCP-based DDoS attacks targeting them. Similarly, if a Spectrum BYOIP customer only cares about HTTP/TCP traffic, attacks over other protocols may be of no concern to them.

Introducing Advanced DDoS Alerts
Creating an Advanced L3/4 DDoS Attack Alert: selecting the protocols

Creating an Advanced DDoS Alert

Here we’ll show how to create an Advanced HTTP DDoS Alert; the process for creating an L3/4 alert is similar. You can view a more detailed guide on our developer site.

First, log in to your Cloudflare account, navigate to Notifications and click Add. Then select the Advanced HTTP DDoS Attack Alert or Advanced L3/4 DDoS Attack Alert (based on your eligibility). Give your alert a name and an optional description, add your preferred delivery method (e.g., Webhook) and click Next.

Introducing Advanced DDoS Alerts
Step 1: Creating an Advanced HTTP DDoS Attack Alert

Second, select the domains you’d like to be alerted on. You can also narrow it down to specific hostnames. Define the minimum request-per-second rate to be alerted on, click Save, and voilà.

Introducing Advanced DDoS Alerts
Step 2: Defining the Advanced HTTP DDoS Attack Alert conditions
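
If you’d rather automate this, alert policies can also be created through Cloudflare’s Notifications API. The sketch below is illustrative only: the alert type string and the threshold filter field are assumptions, so check the Notifications API documentation for the exact values used by Advanced DDoS Alerts:

// Sketch: create an alert policy via the Notifications API
// (POST /accounts/:account_id/alerting/v3/policies).
const ACCOUNT_ID = "<YOUR_CLOUDFLARE_ACCOUNT_ID>";
const API_TOKEN = "<YOUR_API_TOKEN>";

const policy = {
  name: "Advanced HTTP DDoS Attack Alert",
  alert_type: "<ALERT_TYPE_FROM_DOCS>", // exact string is in the API docs
  enabled: true,
  mechanisms: { email: [{ email: "<YOUR_EMAIL>" }] },
  filters: {
    zones: ["<ZONE_ID>"], // which Internet properties to watch
    // plus the minimum request-per-second threshold field from the docs
  },
};

const res = await fetch(
  `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/alerting/v3/policies`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(policy),
  }
);
console.log(await res.json());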

Actionable alerts for making better decisions

Cloudflare Advanced DDoS Alerts aim to provide our customers with configurable controls to make better decisions for their own environments. Customers can now be alerted on attacks based on which domain/prefix is being attacked, the size of the attack, and the protocol of the attack. We recognize that the power to configure and control DDoS attack alerts should ultimately be left up to our customers, and we are excited to announce the availability of this functionality.

Want to learn more about Advanced DDoS Alerts? Visit our developer site.

Interested in upgrading to get Advanced DDoS Alerts? Contact your account team.

New to Cloudflare? Speak to a Cloudflare expert.

Introducing Cloudflare Adaptive DDoS Protection – our new traffic profiling system for mitigating DDoS attacks

Post Syndicated from Omer Yoachimik original https://blog.cloudflare.com/adaptive-ddos-protection/

Introducing Cloudflare Adaptive DDoS Protection - our new traffic profiling system for mitigating DDoS attacks


Every Internet property is unique, with its own traffic behaviors and patterns. For example, a website may only expect user traffic from certain geographies, and a network might only expect to see a limited set of protocols.

Understanding that the traffic patterns of each Internet property are unique is what led us to develop the Adaptive DDoS Protection system. Adaptive DDoS Protection joins our existing suite of automated DDoS defenses and takes it to the next level. The new system learns your unique traffic patterns and adapts to protect against sophisticated DDoS attacks.

Adaptive DDoS Protection is now generally available to Enterprise customers:

  • HTTP Adaptive DDoS Protection – available to WAF/CDN customers on the Enterprise plan, who have also subscribed to the Advanced DDoS Protection service.
  • L3/4 Adaptive DDoS Protection – available to Magic Transit and Spectrum customers on an Enterprise plan.

Adaptive DDoS Protection learns your traffic patterns

The Adaptive DDoS Protection system creates a traffic profile by looking at a customer’s maximal traffic rates for each day of the past seven days. The profiles are recalculated every day over that rolling seven-day history. We then store the maximal traffic rates seen for every predefined dimension value. Every profile uses one dimension; these dimensions include the source country of the request, the country where the Cloudflare data center that received the IP packet is located, the user agent, the IP protocol, the destination port and more.

So, for example, for the profile that uses the source country as a dimension, the system will log the maximal traffic rates seen per country, e.g., 2,000 requests per second (rps) for Germany, 3,000 rps for France, 10,000 rps for Brazil, and so on. This example is for HTTP traffic, but Adaptive DDoS Protection also profiles L3/4 traffic for our Magic Transit and Spectrum Enterprise customers.

One more note on the maximal rates: we use the 95th percentile. This means we look at the observed rates and discard the top 5% of the highest values, which eliminates outliers from the calculations.
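
As a rough sketch of the idea (not Cloudflare’s actual implementation), a 95th-percentile maximal rate can be computed like this:

// Sort the observed rates, drop the top ~5% as outliers, and take the
// highest remaining value.
function p95MaxRate(rates: number[]): number {
  const sorted = [...rates].sort((a, b) => a - b);
  const keep = Math.max(1, Math.floor(sorted.length * 0.95)); // discard top ~5%
  return sorted[keep - 1];
}

// Daily maximal rps for one country over a window of samples:
console.log(p95MaxRate([1800, 2100, 1900, 2000, 2050, 1950, 9000])); // 2100; the 9000 outlier is dropped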

Calculating traffic profiles is done asynchronously, meaning that it does not add any latency to our customers’ traffic. The system then distributes a compact profile representation across our network, which our DDoS protection systems consume to detect and mitigate DDoS attacks in a much more cost-efficient manner.

In addition to the traffic profiles, Adaptive DDoS Protection also leverages Cloudflare’s machine-learning-generated Bot Scores as an additional signal. The purpose of using these scores is to differentiate between legitimate spikes in user traffic that deviate from the traffic profile, and spikes of automated and potentially malicious traffic.

Out of the box and easy to use

Adaptive DDoS Protection just works out of the box. It automatically creates the profiles, and then customers can tweak and tune the settings as they need via DDoS Managed Rules. Customers can change the sensitivity level, leverage expression fields to create overrides (e.g. exclude this type of traffic), and change the mitigation action to tailor the behavior of the system to their specific needs and traffic patterns.

Introducing Cloudflare Adaptive DDoS Protection - our new traffic profiling system for mitigating DDoS attacks

Adaptive DDoS Protection complements the existing DDoS protection systems, which leverage dynamic fingerprinting to detect and mitigate DDoS attacks. The two work in tandem to protect our customers. When customers onboard a new Internet property to Cloudflare, dynamic fingerprinting protects them automatically and out of the box, without requiring any user action. Once Adaptive DDoS Protection learns their legitimate traffic patterns and creates a profile, users can turn it on to provide an extra layer of protection.

Rules included as part of the Adaptive DDoS Protection

As part of this release, we’re pleased to announce the following capabilities as part of Cloudflare’s Adaptive DDoS Protection:

Profiling Dimension                               WAF/CDN Enterprise with Advanced DDoS   Magic Transit & Spectrum Enterprise
Origin errors                                     Yes                                     N/A
Client IP Country & region                        Yes                                     Coming soon
User Agent (globally, not per customer*)          Yes                                     N/A
IP Protocol                                       N/A                                     Yes
Combination of IP Protocol and Destination Port   N/A                                     Coming soon

*The User-Agent-aware feature analyzes, learns and profiles all the top user agents that we see across the Cloudflare network. This feature helps us identify DDoS attacks that leverage legacy or wrongly configured user agents.

Excluding UA-aware DDoS Protection, Adaptive DDoS Protection rules are deployed in Log mode. Customers can observe the traffic that’s flagged, tweak the sensitivity if needed, and then deploy the rules in mitigation mode. You can follow the steps outlined in this guide to do so.

Making the impact of DDoS attacks a thing of the past

Our mission at Cloudflare is to help build a better Internet. The DDoS Protection team’s vision is derived from this mission: our goal is to make the impact of DDoS attacks a thing of the past. Cloudflare’s Adaptive DDoS Protection takes us one step closer to achieving that vision: making Cloudflare’s DDoS protection even more intelligent, sophisticated, and tailored to our customers’ unique traffic patterns and individual needs.

Want to learn more about Cloudflare’s Adaptive DDoS Protection? Visit our developer site.

Interested in upgrading to get access to Adaptive DDoS Protection? Contact your account team.

New to Cloudflare? Speak to a Cloudflare expert.

Using Cloudflare R2 as an apt/yum repository

Post Syndicated from Sudarsan Reddy original https://blog.cloudflare.com/using-cloudflare-r2-as-an-apt-yum-repository/

Using Cloudflare R2 as an apt/yum repository


In this blog post, we’re going to talk about how we use Cloudflare R2 as an apt/yum repository to bring cloudflared (the Cloudflare Tunnel daemon) to your Debian/Ubuntu and CentOS/RHEL systems and how you can do it for your own distributable in a few easy steps!

I work on Cloudflare Tunnel, a product which enables customers to quickly connect their private networks and services through the Cloudflare global network without needing to expose any public IPs or ports through their firewall. Cloudflare Tunnel is managed for users by cloudflared, a tool that runs on the same network as the private services. It proxies traffic for these services via Cloudflare, and users can then access these services securely through the Cloudflare network.

Our connector, cloudflared, was designed to be lightweight and flexible enough to be effectively deployed on a Raspberry Pi, a router, your laptop, or a server in a data center, with applications ranging from IoT control to private networking. Naturally, this means cloudflared comes built for a myriad of operating systems, architectures and package distributions: you can download the appropriate package from our GitHub releases, brew install it, or apt/yum install it (https://pkg.cloudflare.com).

In the rest of this blog post, I’ll use cloudflared as an example of how to create an apt/yum repository backed by Cloudflare’s object storage service R2. Note that this can be any binary/distributable. I simply use cloudflared as an example because this is something we recently did and actually use in production.

How apt-get works

Let’s see what happens when you run something like this on your terminal.

$ apt-get install cloudflared

Let’s also assume that apt sources were already added like so:

  $ echo 'deb [signed-by=/usr/share/keyrings/cloudflare-main.gpg] https://pkg.cloudflare.com/cloudflared buster main' | sudo tee /etc/apt/sources.list.d/cloudflared.list


$ apt-get update

From the sources.list above, apt first looks up the Release file (or InRelease if the repository is signed, as cloudflared’s is, but we will ignore this for the sake of simplicity).

A Release file lists the supported architectures and the MD5, SHA1 and SHA256 checksums of the repository’s index (Packages) files. It looks something like this:

$ curl https://pkg.cloudflare.com/cloudflared/dists/buster/Release
Origin: cloudflared
Label: cloudflared
Codename: buster
Date: Thu, 11 Aug 2022 08:40:18 UTC
Architectures: amd64 386 arm64 arm armhf
Components: main
Description: apt repository for cloudflared - buster
MD5Sum:
 c14a4a1cbe9437d6575ae788008a1ef4 549 main/binary-amd64/Packages
 6165bff172dd91fa658ca17a9556f3c8 374 main/binary-amd64/Packages.gz
 9cd622402eabed0b1b83f086976a8e01 128 main/binary-amd64/Release
 5d2929c46648ea8dbeb1c5f695d2ef6b 545 main/binary-386/Packages
 7419d40e4c22feb19937dce49b0b5a3d 371 main/binary-386/Packages.gz
 1770db5634dddaea0a5fedb4b078e7ef 126 main/binary-386/Release
 b0f5ccfe3c3acee38ba058d9d78a8f5f 549 main/binary-arm64/Packages
 48ba719b3b7127de21807f0dfc02cc19 376 main/binary-arm64/Packages.gz
 4f95fe1d9afd0124a32923ddb9187104 128 main/binary-arm64/Release
 8c50559a267962a7da631f000afc6e20 545 main/binary-arm/Packages
 4baed71e49ae3a5d895822837bead606 372 main/binary-arm/Packages.gz
 e472c3517a0091b30ab27926587c92f9 126 main/binary-arm/Release
 bb6d18be81e52e57bc3b729cbc80c1b5 549 main/binary-armhf/Packages
 31fd71fec8acc969a6128ac1489bd8ee 371 main/binary-armhf/Packages.gz
 8fbe2ff17eb40eacd64a82c46114d9e4 128 main/binary-armhf/Release
SHA1:
…
SHA256:
…

Depending on your system’s architecture, apt picks the appropriate Packages location. A Packages file contains metadata about the binary apt wants to install, including its location and checksum. Let’s say it’s an amd64 machine. This means we’ll go here next:

$ curl https://pkg.cloudflare.com/cloudflared/dists/buster/main/binary-amd64/Packages
Package: cloudflared
Version: 2022.8.0
License: Apache License Version 2.0
Vendor: Cloudflare
Architecture: amd64
Maintainer: Cloudflare <[email protected]>
Installed-Size: 30736
Homepage: https://github.com/cloudflare/cloudflared
Priority: extra
Section: default
Filename: pool/main/c/cloudflared/cloudflared_2022.8.0_amd64.deb
Size: 15286808
SHA256: c47ca10a3c60ccbc34aa5750ad49f9207f855032eb1034a4de2d26916258ccc3
SHA1: 1655dd22fb069b8438b88b24cb2a80d03e31baea
MD5sum: 3aca53ccf2f9b2f584f066080557c01e
Description: Cloudflare Tunnel daemon

Notice the Filename field. This is where apt gets the deb from before running a dpkg command on it. What all of this means is that an apt repository (and a yum repository, too) is basically a structured file system of mostly plaintext index files plus the debs themselves.

The deb file is a Debian software package that contains two things: installable data (in our case, the cloudflared binary) and metadata about the installable.
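
To tie the whole flow together, here is a toy sketch of what a repository client conceptually does with the files above: fetch the Packages index, extract the Filename and SHA256 fields, then download and verify the deb. Real apt also verifies the signed (In)Release file, which this sketch skips:

// Toy sketch of apt's lookup flow against the layout above.
const repo = "https://pkg.cloudflare.com/cloudflared";

const packagesUrl = `${repo}/dists/buster/main/binary-amd64/Packages`;
const packages = await (await fetch(packagesUrl)).text();

// Grab the first stanza's Filename and SHA256 fields.
const filename = /^Filename: (.+)$/m.exec(packages)?.[1];
const expected = /^SHA256: (.+)$/m.exec(packages)?.[1];
if (!filename || !expected) throw new Error("malformed Packages file");

// Filename is relative to the repository root.
const deb = await (await fetch(`${repo}/${filename}`)).arrayBuffer();

// Verify the checksum before handing the deb to dpkg.
const digest = await crypto.subtle.digest("SHA-256", deb);
const actual = [...new Uint8Array(digest)]
  .map((b) => b.toString(16).padStart(2, "0"))
  .join("");
if (actual !== expected) throw new Error("checksum mismatch: refusing to install");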

Building our own apt repository

Now that we know what happens when an apt-get install runs, let’s work our way backwards to construct the repository.

Create a deb file out of the binary

Note: Signing your packages is optional, but recommended. See the section about how apt verifies packages here.

Deb files can be built with the dpkg-buildpackage tool in a Linux or Debian environment. We use a handy command-line tool called fpm (https://fpm.readthedocs.io/en/v1.13.1/) to do this because it works for both rpm and deb.

$ fpm -s dir -t deb -C /path/to/project --name <project_name> --version <version>

This yields a .deb file.

Create the plaintext files apt needs to look up downloads

Again, these files can be built by hand, but there are multiple tools available to generate them. We use reprepro.

Using it is as simple as:

$ reprepro includedeb buster <path/to/the/deb>

reprepro neatly creates a bunch of folders in the structure we looked into above.

Upload them to Cloudflare R2

We now use Cloudflare R2 as the host for this structured file system. R2 lets us upload and serve objects in this structured format, which means we just need to upload the files in the same structure reprepro created them in.

Here is a copyable example of how we do it for cloudflared.

Serve them from an R2 worker

For fine-grained control, we’ll write a very lightweight Cloudflare Worker to serve as the front-end API that apt talks to. For an apt repository, we only need it to handle GET requests.

Here’s an example of how to do this: https://developers.cloudflare.com/r2/examples/demo-worker/
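
Here’s roughly what such a Worker can look like, modeled on the demo linked above. It assumes an R2 bucket bound to the Worker as MY_BUCKET (the binding name is up to you), with types from @cloudflare/workers-types:

// A minimal read-only Worker in front of R2: map the URL path to an
// object key, stream the object back, and reject anything but GET.
export interface Env {
  MY_BUCKET: R2Bucket;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== "GET") {
      return new Response("Method Not Allowed", {
        status: 405,
        headers: { Allow: "GET" },
      });
    }

    const key = new URL(request.url).pathname.slice(1); // drop leading "/"
    const object = await env.MY_BUCKET.get(key);
    if (object === null) {
      return new Response("Not Found", { status: 404 });
    }

    const headers = new Headers();
    object.writeHttpMetadata(headers); // content-type etc. set at upload
    headers.set("etag", object.httpEtag);
    return new Response(object.body, { headers });
  },
};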

Putting it all together

Here is a handy script we use to push cloudflared to R2 ready for apt/yum downloads and includes signing and publishing the pubkey.

And that’s it! You now have your own apt/yum repo with no server to maintain, no egress fees for downloads, and the Cloudflare global network protecting it against high request volumes. You can automate many of these steps to make them part of a release process.

Today, this is how cloudflared is distributed on the apt and yum repositories: https://pkg.cloudflare.com/

Introducing thresholds in Security Event Alerting: a z-score love story

Post Syndicated from Kristina Galicova original https://blog.cloudflare.com/introducing-thresholds-in-security-event-alerting-a-z-score-love-story/

Introducing thresholds in Security Event Alerting: a z-score love story


Today we are excited to announce thresholds for our Security Event Alerts: a new and improved way of detecting anomalous spikes of security events on your Internet properties. Previously, our calculations were based on z-score methodology alone, which was able to determine most of the significant spikes. By introducing a threshold, we are able to make alerts more accurate and only notify you when it truly matters. One can think of it as a romance between the two strategies. This is the story of how they met.

Author’s note: as an intern at Cloudflare I got to work on this project from start to finish from investigation all the way to the final product.

Once upon a time

In the beginning, there were Security Event Alerts. Security Event Alerts are notifications that are sent whenever we detect a threat to your Internet property. As the name suggests, they track the number of security events, which are requests to your application that match security rules. For example, you can configure a security rule that blocks access from certain countries. Every time a user from that country tries to access your Internet property, it will log as a security event. While a security event may be harmless and fired as a result of the natural flow of traffic, it is important to alert on instances when a rule is fired more times than usual. Anomalous spikes of too many security events in a short period of time can indicate an attack. To find these anomalies and distinguish between the natural number of security events and that which poses a threat, we need a good strategy.

The lonely life of a z-score

Before a threshold entered the picture, our strategy worked on the basis of a z-score alone. A z-score measures how many standard deviations a data point is from the mean. In our current configuration, if a spike crosses the z-score value of 3.5, we send you an alert. This value was decided on after careful analysis of our customers’ data, which found it the most effective at identifying legitimate alerts. Any lower, and notifications get noisy for smaller spikes. Any higher, and we may miss significant events. You can read more about our z-score methodology in this blog post.

The following graphs are an example of how the z-score method works. The first graph shows the number of security events over time, with a recent spike.

Introducing thresholds in Security Event Alerting: a z-score love story

To determine whether this spike is significant, we calculate the z-score and check if the value is above 3.5:

Introducing thresholds in Security Event Alerting: a z-score love story

As the graph shows, the deviation is above 3.5 and so an alert is triggered.

However, relying on z-score becomes tricky for domains that experience no security events for a long period of time. With many security events at zero, the mean and standard deviation depress to zero as well. When a non-zero value finally appears, it will always be infinite standard deviations away from the mean. As a result, it will always trigger an alert even on spikes that do not pose any threat to your domain, such as the below:

Introducing thresholds in Security Event Alerting: a z-score love story

With five security events, you are likely going to ignore this spike, as it is too low to indicate a meaningful threat. However, the z-score in this instance will be infinite:

Introducing thresholds in Security Event Alerting: a z-score love story

Since a z-score of infinity is greater than 3.5, an alert will be triggered. This means that customers with few security events would often be overwhelmed by event alerts that are not worth worrying about.

Letting go of zeros

To avoid the mean and standard deviation becoming zero and thus alerting on every non-zero spike, zero values can be ignored in the calculation. In other words, to calculate the mean and standard deviation, only data points that are higher than zero will be considered.

With those conditions, the same spike to five security events will now generate a different z-score:

Introducing thresholds in Security Event Alerting: a z-score love story

Great! With the z-score at zero, it will no longer trigger an alert on the harmless spike!

But what about spikes that could be harmful? When calculations ignore zeros, we need enough non-zero data points to accurately determine the mean and standard deviation. If only one non-zero value is present, that data point determines the mean and standard deviation. As such, the mean will always be equal to the spike, z-score will always be zero and an alert will never be triggered:

Introducing thresholds in Security Event Alerting: a z-score love story

For a spike of 1000 events, we can tell that there is something wrong and we should trigger an alert. However, because there is only one non-zero data point, the z-score will remain zero:

Introducing thresholds in Security Event Alerting: a z-score love story

The z-score does not cross the value 3.5 and an alert will not be triggered.

So what’s better? Including zeros in our calculations can skew the results for domains with too many zero events and alert them every time a spike appears. Not including zeros is mathematically wrong and will never alert on these spikes.

Threshold, the prince charming

Clearly, a z-score is not enough on its own.

Instead, we paired up the z-score with a threshold. The threshold represents the raw number of security events an Internet property can have, below which an alert will not be sent. While z-score checks whether the spike is at least 3.5 standard deviations above the mean, the threshold makes sure it is above a certain static value. If both of these conditions are met, we will send you an alert:

Introducing thresholds in Security Event Alerting: a z-score love story

The above spike crosses the threshold of 200 security events. We now have to check that the z-score is above 3.5:

Introducing thresholds in Security Event Alerting: a z-score love story

The z-score value crosses 3.5 and an alert will be sent.

A threshold for the number of security events comes as the perfect complement. By itself, the threshold cannot determine whether something is a spike, and would simply alert on any value crossing it. This blog post describes in more detail why thresholds alone do not work. However, when paired with z-score, they are able to share their strengths and cover for each other’s weaknesses. If the z-score falsely detects an insignificant spike, the threshold will stop the alert from triggering. Conversely, if a value does cross the security events threshold, the z-score ensures there is a reasonable variance from the data average before allowing an alert to be sent.
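
Condensed into code, the combined check looks something like the sketch below. This is an illustration of the logic described in this post, not our production implementation:

// Alert only when the spike clears a static threshold AND sits at least
// 3.5 standard deviations above the mean of the non-zero history.
const Z_LIMIT = 3.5;
const THRESHOLD = 200; // minimum number of security events

function shouldAlert(history: number[], spike: number): boolean {
  const nonZero = history.filter((x) => x > 0); // let go of zeros
  if (spike < THRESHOLD || nonZero.length < 2) return false;

  const mean = nonZero.reduce((a, b) => a + b, 0) / nonZero.length;
  const variance =
    nonZero.reduce((acc, x) => acc + (x - mean) ** 2, 0) / nonZero.length;
  const std = Math.sqrt(variance);
  if (std === 0) return false; // flat history: no variance to measure against

  return (spike - mean) / std >= Z_LIMIT;
}

console.log(shouldAlert([0, 0, 0, 0, 5], 5));        // false: below threshold
console.log(shouldAlert([40, 55, 35, 50, 45], 900)); // true: large, anomalous spike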

The invaluable value

To foster a successful relationship between the z-score and security events threshold, we needed to determine the most effective threshold value. After careful analysis of our previous attacks on customers, we set the value to 200. This number is high enough to filter out the smaller, noisier spikes, but low enough to expose any threats.

Am I invited to the wedding?

Yes, you are! The z-score and threshold relationship is already enabled for all WAF customers, so all you need to do is sit back and relax. For enterprise customers, the threshold will be applied to each type of alert enabled on your domain.

Happily ever after

The story certainly does not end here. We are constantly iterating on our alerts, so keep an eye out for future updates on the road to make our algorithms even more personalized for your Internet properties!

Crawler Hints supports Microsoft’s IndexNow in helping users find new content

Post Syndicated from Alex Krivit original https://blog.cloudflare.com/crawler-hints-supports-microsofts-indexnow-in-helping-users-find-new-content/

Crawler Hints supports Microsoft’s IndexNow in helping users find new content


The web is constantly changing. Whether it’s news or updates to your social feed, it’s a constant flow of information. As a user, that’s great. But have you ever stopped to think how search engines deal with all the change?

It turns out, they “index” the web on a regular basis — sending bots out to constantly crawl webpages, looking for changes. Today, bot traffic accounts for about 30% of total traffic on the Internet, and given how foundational search is to using the Internet, it should come as no surprise that search engine bots make up a large proportion of that. What might come as a surprise, though, is how inefficient the model is: we estimate that over 50% of crawler traffic is wasted effort.

This has a huge impact. There’s all the additional capacity that owners of websites need to bake into their site to absorb the bots crawling all over it. There’s the transmission of the data. There’s the CPU cost of running the bots. And when you’re running at the scale of the Internet, all of this has a pretty big environmental footprint.

Part of the problem, though, is nobody had really stopped to ask: maybe there’s a better way?

Right now, the model for indexing websites is the same as it has been since the 1990s: a “pull” model, where the search engine sends a crawler out to a website after a predetermined amount of time. During Impact Week last year, we asked: what about flipping the model on its head? What about moving to a push model, where a website could simply ping a search engine to let it know an update had been made?

There are a heap of advantages to such a model. The website wins: it’s not dealing with unnecessary crawls. It also makes sure that as soon as there’s an update to its content, it’s reflected in the search engine — it doesn’t need to wait for the next crawl. The website owner wins because they don’t need to manage distinct search engine crawl submissions. The search engine wins, too: it saves money on crawl costs, and it can make sure it gets the latest content.

Of course, this needs work to be done on both sides of the equation. The websites need a mechanism to alert the search engines; and the search engines need a mechanism to receive the alert, so they know when to do the crawl.

Crawler Hints — Cloudflare’s Solution for Websites

Solving this problem is why we launched Crawler Hints. Cloudflare sits in a unique position on the Internet — we serve, on average, 36 million HTTP requests per second. That represents a lot of websites. It also means we’re uniquely positioned to help solve this problem: to give crawlers hints about when they should recrawl, whether new content has been added or content on a site has recently changed.

With Crawler Hints, we send signals to web indexers based on cache data and origin status codes to help them understand when content has likely changed or been added to a site. The aim is to increase the number of relevant crawls as well as drastically reduce the number of crawls that don’t find fresh content, saving bandwidth and compute for both indexers and sites alike, and improving the experience of using the search engines.

But, of course, that’s just half the equation.

IndexNow Protocol — the Search Engine Moves from Pull to Push

Websites alerting the search engine about changes is useless if the search engines aren’t listening — and they simply continue to crawl the way they always have. Of course, search engines are incredibly complicated, and changing the way they operate is no easy task.

The IndexNow Protocol is a standard developed by Microsoft, Seznam.cz and Yandex, and it represents a major shift in the way search engines operate. Using IndexNow, search engines have a mechanism by which they can receive signals from Crawler Hints. Once they have that signal, they can shift their crawlers from a pull model to a push model.

In a recent update, Microsoft announced that millions of websites now use IndexNow to signal to search engine crawlers when their content needs to be crawled, and that IndexNow was used to index/crawl about 7% of all new URLs clicked when someone selects from web search results.

On the Cloudflare side, since the release of Crawler Hints in October 2021, Crawler Hints has processed about 600 billion signals to IndexNow.

That’s a lot of saved crawls.

How to enable Crawler Hints

By enabling Crawler Hints on your website, with the simple click of a button, Cloudflare will take care of signaling to these search engines when your content has changed via the IndexNow API. You don’t need to do anything else!

Crawler Hints is free to use and available to all Cloudflare customers. If you’d like to see how Crawler Hints can benefit how your website is indexed by the world’s biggest search engines, you can opt into the service as follows:

  1. Sign in to your Cloudflare account.
  2. In the dashboard, navigate to the Cache tab.
  3. Click on the Configuration section.
  4. Locate Crawler Hints and enable it.
Crawler Hints supports Microsoft’s IndexNow in helping users find new content

Upon enabling Crawler Hints, Cloudflare will share when content on your site has changed and needs to be re-crawled with search engines using the IndexNow protocol (this blog can help if you’re interested in finding out more about how the mechanism works).
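
Under the hood, an IndexNow submission is a single HTTP call, as described in the protocol spec at indexnow.org. Here’s a sketch of what a manual submission looks like; with Crawler Hints enabled, Cloudflare makes these calls for you:

// A manual IndexNow submission, per the protocol spec.
// The key is a value you generate and host on your own site.
const submission = {
  host: "example.com",
  key: "<YOUR_INDEXNOW_KEY>",
  urlList: ["https://example.com/new-post"], // URLs added or changed
};

const res = await fetch("https://api.indexnow.org/indexnow", {
  method: "POST",
  headers: { "Content-Type": "application/json; charset=utf-8" },
  body: JSON.stringify(submission),
});
console.log(res.status); // 200/202 indicates the submission was accepted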

What’s Next?

Going forward, because the benefits are so substantial for site owners, search operators, and the environment, we plan to start defaulting Crawler Hints on for all our customers. We’re also hopeful that Google, the world’s largest search engine and most wasteful user of Internet resources, will adopt IndexNow or a similar standard and lower the burden of search crawling on the planet.

When we think of helping to build a better Internet, this is exactly what comes to mind: creating and supporting standards that make it operate better, greener, faster. We’re really excited about the work to date, and will continue to work to improve the signaling to ensure the most valuable information is being sent to the search engines in a timely manner. This includes incorporating additional signals such as etags, last-modified headers, and content hash differences. Adding these signals will help further inform crawlers when they should reindex sites, and how often they need to return to a particular site to check if it’s been changed. This is only the beginning. We will continue testing more signals and working with industry partners so that we can help crawlers run efficiently with these hints.

And finally: if you’re on Cloudflare, and you’d like to be part of this revolution in how search engines operate on the web (it’s free!), simply follow the instructions in the section above.

35,000 new trees in Nova Scotia

Post Syndicated from Patrick Day original https://blog.cloudflare.com/35-000-new-trees-in-nova-scotia/

35,000 new trees in Nova Scotia

Cloudflare is proud to announce the first 35,000 trees from our commitment to help clean up bad bots (and the climate) have been planted.

35,000 new trees in Nova Scotia

Working with our partners at One Tree Planted (OTP), Cloudflare was able to support the restoration of 20 hectares of land at Victoria Park in Nova Scotia, Canada. The 130-year-old natural woodland park is located in the heart of Truro, NS, and includes over 3,000 acres of hiking and biking trails through natural gorges, rivers, and waterfalls, as well as an old-growth eastern hemlock forest.

The planting projects added red spruce, black spruce, eastern white pine, eastern larch, northern red oak, sugar maple, yellow birch, and jack pine to two areas of the park. The first area was a section of the park that recently lost a number of old conifers due to insect attacks. The second was an area previously used as a municipal dump, which has since been covered by a clay cap and topsoil.

35,000 new trees in Nova Scotia

Our tree commitment began far from the Canadian woodlands. In 2019, we launched an ambitious tool called Bot Fight Mode, which for the first time fought back against bots, targeting scrapers and other automated actors.

Our idea was simple: preoccupy bad bots with nonsense tasks, so they cannot attack real sites. Even better, make these tasks computationally expensive to engage with. This approach is effective, but it forces bad actors to consume more energy and likely emit more greenhouse gases (GHG). So in addition to launching Bot Fight Mode, we also committed to supporting tree planting projects to account for any potential environmental impact.

What is Bot Fight Mode?

As soon as Bot Fight Mode is enabled, it immediately starts challenging bots that visit your site. It is available to all Cloudflare customers for free, regardless of plan.

35,000 new trees in Nova Scotia

When Bot Fight Mode identifies a bot, it issues a computationally expensive challenge to exhaust it (also called “tarpitting”). Our aim is to disincentivize attackers, so they have to find a new hobby altogether. When we tarpit a bot, we require a significant amount of compute time that will stall its progress and result in a hefty server bill. Sorry not sorry.

We do this because bots are leeches. They draw resources, slow down sites, and abuse online platforms. They also hack into accounts and steal personal data. Of course, we allowlist a small number of bots that are well-behaved, like Slack and Google. And Bot Fight Mode only acts on traffic from cloud and hosting providers (because that is where bots usually originate from).

Over 550,000 sites use Bot Fight Mode today! We believe this makes it the most widely deployed bot management solution in the world (though this is impossible to validate). Free customers can enable the tool from the dashboard and paid customers can use a special version, known as Super Bot Fight Mode.

How many trees? Let’s do the math 🚀

Now, the hard part: how can we translate bot challenges into a specific number of trees that should be planted? Fortunately, we can use a series of unit conversions, similar to those we use to calculate Cloudflare’s total GHG emissions.

We started with the following assumptions.

Table 1.

Measure                            Quantity               Scaled                 Source
Energy used by a standard server   1,760.3 kWh / year     0.2 kWh / hour         Go Climate
Emissions factor                   0.33852 kgCO2e / kWh   338.52 gCO2e / kWh     Go Climate
CO2 absorbed by a mature tree      48 lbsCO2e / year      21 kgCO2e / year       One Tree Planted

Next, we selected a high-traffic day to model the rate and duration of bot challenges on our network. On May 23, 2021, Bot Fight Mode issued 2,878,622 challenges, which lasted an average of 50 seconds each. In total, bots spent 39,981 hours engaging with our network defenses, or more than four years of challenges in a single day!

We then converted that time value into kilowatt-hours (kWh) of energy based on the rate of power consumed by our generic server listed in Table 1 above.

39,981 (hours) x 0.2 (kWh/hour) = 7,996 (kWh)

Once we knew the total amount of energy consumed by bad bot servers, we used an emissions factor (the amount of greenhouse gasses emitted per unit of energy consumed) to determine total emissions.

7,996 (kWh) x 338.52 (gCO2e/kWh) = 2,706,805 (gCO2e)

If you have made it this far, clearly you like to geek out like we do, so for the sake of completeness, the unit commonly used in emissions calculations is carbon dioxide equivalent (CO2e), which is a composite unit for all six GHGs listed in the Kyoto Protocol weighted by Global Warming Potential.

The last conversion we needed was from emissions to trees. Our partners at OTP found that a mature tree absorbs roughly 21 kgCO2e per year. Based on our total emissions, that translates to roughly 47,000 trees per server, or 840 trees per CPU core. However, in our original post, we also noted that given the time it takes for a newly planted tree to reach maturity, we would multiply our donation by a factor of 25.

In the end, over the first two years of the program, we calculated that we would need approximately 42,000 trees to account for all the individual CPU cores engaged in Bot Fight Mode. For good measure, we rounded up to an even 50,000.
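
For the fellow geeks, here is the same conversion chain condensed into a runnable sketch, using the figures from Table 1 and the May 23 sample (the x25 sapling multiplier comes from our original commitment):

// Challenge-hours -> kWh -> kgCO2e -> mature-tree-years -> saplings.
const hours = 39_981; // bot time spent in challenges on May 23, 2021
const kwhPerHour = 0.2; // standard server draw (Table 1)
const gCO2ePerKwh = 338.52; // emissions factor (Table 1)
const kgCO2ePerTreeYear = 21; // absorption of one mature tree

const kwh = hours * kwhPerHour; // ~7,996 kWh
const kgCO2e = (kwh * gCO2ePerKwh) / 1000; // ~2,707 kgCO2e
const matureTreeYears = kgCO2e / kgCO2ePerTreeYear; // ~129
const saplings = Math.ceil(matureTreeYears * 25); // ~3,223 for that one day

console.log({ kwh, kgCO2e, matureTreeYears, saplings });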

We are proud that most of these trees are already in the ground, and we look forward to providing an update when the final 15,000 are planted.

A piece of the puzzle

“Planting trees will benefit species diversity of the existing forest, animal habitat, greening of reclamation areas as well as community recreation areas, and visual benefits along popular hiking/biking trail networks.”  
Stephanie Clement, One Tree Planted, Project Manager North America

Reforestation is an important part of protecting healthy ecosystems and promoting biodiversity. Trees and forests are also a fundamental part of helping to slow the growth of global GHG emissions.

However, we recognize there is no single solution to the climate crisis. As part of our mission to help build a better, more sustainable Internet, Cloudflare is investing in renewable energy, tools that help our customers understand and mitigate their own carbon footprints on our network, and projects that will help offset or remove historical emissions associated with powering our network by 2025.

Want to be part of our bots & trees effort? Enable Bot Fight Mode today! It’s available on our free plan and takes only a few seconds. By the time we made our first donation to OTP in 2021, Bot Fight Mode had already kept bad bots busy for more than 3,000 years of compute time.

Help us defeat bad bots and improve our planet today!

35,000 new trees in Nova Scotia

----
For more information on Victoria Park, please visit https://www.victoriaparktruro.ca
For more information on One Tree Planted, please visit https://onetreeplanted.org
For more information on sustainability at Cloudflare, please visit www.cloudflare.com/impact

Waiting Room Event Scheduling protects your site during online events

Post Syndicated from Arielle Olache original https://blog.cloudflare.com/waiting-room-event-scheduling/

Waiting Room Event Scheduling protects your site during online events


You’ve got big plans for your ecommerce strategy in the form of online events — seasonal sales, open registration periods, product drops, ticket sales, and more. With all the hype you’ve generated, you’ll get a lot of site traffic, and that’s a good thing! With Waiting Room Event Scheduling, you can protect your servers from being overloaded during your event while delivering a user experience that is unique to the occasion and consistent with your brand. Available now to enterprise customers with an advanced Waiting Room subscription, Event Scheduling allows you to plan changes to your waiting room’s settings and custom queueing page ahead of time, ensuring flawless execution of your online event.

More than always-on protection

We launched Waiting Room to protect our customers’ servers during traffic spikes. Waiting Room sends excess visitors to a virtual queue during traffic surges, letting visitors in dynamically as spots become available on your site. By automatically queuing traffic that exceeds your site’s capacity, Waiting Room protects your origin servers and your customer experience. Additionally, the Waiting Room’s queuing page can be customized to match the look and feel of your site so that your users never feel as though they have left your application, ensuring a seamless customer experience.

While many of our customers use Waiting Room as an always-on failsafe against potential traffic spikes, some customers use Waiting Room to manage traffic during time-boxed online events. While these events can undoubtedly result in traffic spikes that Waiting Room safeguards against, they present unique challenges for our customers.

In the lifecycle of an online event, various stages of the event generally require the settings of a waiting room to be updated. While each customer’s event requirements are unique, consider the following customer use cases. To prevent a mad rush to a landing page that could overwhelm their site ahead of an event, some customers want to queue early arrivers in the days or hours leading up to the event. During an event, some customers want to impose stricter limits on how long visitors have to browse and complete transactions to ensure that as many visitors as possible get a fair chance to partake. After an event has concluded, many customers want to offload event traffic, blocking access to the event pages while informing users that the event has ended.

For each of these scenarios, our customers want to communicate expectations to their users and customize the look and feel of their queuing page to ensure a seamless, on-brand user experience. Combine all the use cases in the example above into one timeline, and you’ve got at least three event stages that would require waiting room settings and queuing pages to be updated, all with perfect timing.

Waiting Room Event Scheduling protects your site during online events

While these use cases were technically feasible with Waiting Room, they required on-the-spot updates to its configuration settings. This strategy was not ideal or practical when customers needed to be absolutely sure their waiting room would update in lockstep with the timeline of their event. In short, many customers needed to schedule changes to the behavior of their waiting room ahead of time. We built Event Scheduling to give our customers flexibility and control over when and how their waiting room settings change, ensuring that these changes will happen automatically as planned.

Introducing Waiting Room Event Scheduling

With Event Scheduling, you can schedule cascading changes to your waiting room ahead of time as discrete events. For each waiting room event, you can customize traffic thresholds, session duration, queuing method, and the content and styling of your queuing page. Refer to the Create Event API documentation for a complete list of customizable event settings. A sketch of an API call to schedule an event follows.
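
For those automating their event lifecycle, here’s what scheduling an event through the API can look like. The endpoint follows the Create Event API documentation referenced above, and the field values shown (times, thresholds, durations) are illustrative assumptions:

// Sketch: schedule a waiting room event via the Create Event API
// (POST /zones/:zone_id/waiting_rooms/:waiting_room_id/events).
const ZONE_ID = "<ZONE_ID>";
const ROOM_ID = "<WAITING_ROOM_ID>";
const API_TOKEN = "<YOUR_API_TOKEN>";

const event = {
  name: "spring_flash_sale",
  prequeue_start_time: "2022-09-28T14:00:00Z", // optional pre-queue
  event_start_time: "2022-09-28T15:00:00Z",
  event_end_time: "2022-09-29T15:00:00Z",
  shuffle_at_event_start: true, // randomize pre-queued users' order
  total_active_users: 300, // override the room's traffic threshold
  session_duration: 10, // stricter browsing window, in minutes
};

const res = await fetch(
  `https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/waiting_rooms/${ROOM_ID}/events`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(event),
  }
);
console.log(await res.json());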

New Queuing Methods

Giving our customers the ability to schedule changes to a waiting room’s settings is a game-changer for customers with event-based Waiting Room requirements, but we didn’t stop there. We’ve also added two new queueing methods — Reject and Passthrough — to give our customers more options for controlling user flow before, during and after their online events.

In the example where customers wanted to offload site traffic after an event, the Reject queuing method would do just that! A waiting room with the Reject queuing method configured will offload traffic from your site, presenting users with a static, fully customizable HTML page. Conversely, the Passthrough queuing method allows all visitors unrestricted access to your site. Unlike simply disabling your waiting room to achieve this, Passthrough has the advantage of being scheduled to turn on or off ahead of time and providing user traffic stats through the Waiting Room status endpoint.

Waiting Room Event Scheduling protects your site during online events

If you prefer to keep the waiting room in a completely passive state, leaving it on and configured with the Passthrough queuing method allows you to turn on Queue All quickly. Queue All places all new site visitors in a queue, which is a life-saver in the case of unexpected site downtime or any other crisis. Before deactivating Queue All, you can see how many users are waiting to enter your site. Queue All also overrides any active waiting room event, giving you authoritative, fast control in an emergency.

Waiting Room Event Scheduling protects your site during online events

Event Pre-queuing

As part of an event’s configuration, you can also enable a pre-queue, a virtual holding area for visitors that arrive at your event before its start time. Pre-queues add extra protection against traffic spikes that can cause your site to crash.

To illustrate how, imagine your customer is a devoted fan trying to get tickets to their favorite band’s upcoming concert. Ticket sales open in one hour, so they visit your sales page about ten minutes before sales open. There is a static landing page on your site where the ticket sales page will be. The fan starts refreshing the page in the minutes leading up to the start time, hoping to get access as soon as sales open. Now multiply that one hopeful concert-goer by many thousands of fans, and before your sale has even begun, your site is already overwhelmed and at risk of crashing. Having a pre-queue in place protects your site from this type of user activity that has the potential to overwhelm your site at a time when stakes are very high for your brand. And with the ability to fully customize the pre-queuing page, you can still generate the same excitement you would have with your event’s landing page.

Taking it a step further, you can elect to shuffle the pre-queue, randomly assigning users who reach your application during the pre-queue period a place in line when the event starts. If your event uses the First In First Out queuing method, randomizing your pre-queue can help promote fairness, especially if your pre-queuing period spans many time zones. Like the Random queuing method, a randomized pre-queue neutralizes the advantage customers in earlier time zones would otherwise have to grab a place in line ahead of the event’s start time. Ultimately, what counts as fairness for your event depends on your and your customers’ perspectives and needs. With the order-of-entry options available for both the pre-queue and overflow queuing during your event, you can manage fairness of entry to align with your unique requirements.
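
To make the difference concrete, here is a toy Python illustration, not Cloudflare’s implementation, of how a shuffled pre-queue changes entry order compared with strict first-come ordering:

    import random

    # Visitors who arrived during the pre-queue, in arrival order.
    prequeue = ["amara", "bruno", "chen", "dana", "emeka"]

    # First In First Out: arrival order is entry order, which favors
    # whoever could show up earliest (for example, earlier time zones).
    fifo_order = list(prequeue)

    # Shuffle at event start: every pre-queued visitor gets a random
    # position in line, regardless of when they arrived.
    shuffled_order = list(prequeue)
    random.shuffle(shuffled_order)

    print("FIFO entry order:    ", fifo_order)
    print("Shuffled entry order:", shuffled_order)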

Creating a waiting room event

As with configuring a waiting room, scheduling events with Waiting Room is incredibly easy and requires no coding or application changes. You will first need a baseline waiting room configured. Then, you can schedule events for that waiting room from the Waiting Room dashboard. In the event creation workflow, you’ll indicate when you would like the event to start and end and configure an optional pre-queue.

Waiting Room Event Scheduling protects your site during online events

Unless specified otherwise, your event will always inherit the configuration settings of its associated waiting room. That way, you only need to update the waiting room settings that you would like to change for the duration of the event. You can optionally create a queuing page for your users that is unique to the event and preview what your event queuing page will look like in different queuing states and browsers, ensuring that your end users never see garbled CSS or broken-looking pages!

Before saving your event, you will be able to review your waiting room and event settings side by side, making it easy to verify how the behavior of your waiting room will change for the duration of the event.

Waiting Room Event Scheduling protects your site during online events

Once created, your event will appear in the Waiting Room dashboard nested under its associated waiting room. The date of each waiting room’s next event is indicated in the dashboard’s default view so that you can tell at a glance if any waiting room’s settings may change due to an upcoming event. You can expand each waiting room’s row to list upcoming events and associated durations. Additionally, if an event is live, a green dot will appear next to this date, giving you extra assurance that the event has kicked in.

Waiting Room Event Scheduling protects your site during online events

Event Scheduling in action

Tying it all together, let’s walk through a real-world scenario to demonstrate the versatility and practicality of Event Scheduling. In this example, we have a major women’s fashion retailer, let’s call them Shopflare, with an upcoming flash sale. A flash sale differs from a regular sale in that it lasts only a brief time and offers substantial discounts or limited stock. Often, retailers target a specific audience with marketing campaigns in the days leading up to a flash sale.

In our example, Shopflare’s marketing team plans to send an email campaign to a set of target customers, promoting their Spring Flash Sale, where they will be offering free shipping and 40% off their freshest spring arrivals for one day only! How could Shopflare use Waiting Room and Event Scheduling to help this sales event go off without a hitch?

Preparing for the flash sale

One week before the sale, Shopflare’s web team creates a landing page with a countdown for their spring flash sale at example.com/sales/spring_flash_sale. They place a waiting room at this URL with a First In First Out queuing method and their desired traffic thresholds to allow traffic while ensuring their site remains performant. They then send an email campaign to their target audience directly linking to the sale’s landing page. With their baseline waiting room in place, early traffic to the URL will not overwhelm their site. Shopflare’s team also prepares for the upcoming sale by scheduling two cascading waiting room events ahead of time. Let’s review Shopflare’s flash sale requirements related to Waiting Room and walk through the steps they would take with Event Scheduling to satisfy them.

Pre-queueing and event overflow queuing

A few hours before the sale starts, Shopflare wants to allow shoppers to start “lining up” to secure a spot ahead of those who arrive after the event start time. They want to create a lottery for these early arrivals, randomly assigning them a place in line when the sale starts to mitigate the advantage that customers from earlier time zones have in securing a spot in line. To do this, they would create an event for the waiting room they have already configured.

Waiting Room Event Scheduling protects your site during online events

The event creation workflow consists of four main steps: Details, Settings, Customization, and Review. On the Details page of the event creation workflow, they would enter their sale start and end times, set the start time of the pre-queue and enable “Shuffle at Event Start” to create a randomized pre-queue.

Waiting Room Event Scheduling protects your site during online events

While the sale is in progress, Shopflare wants an overflow queue to protect their site from being overwhelmed by traffic in excess of their waiting room limits, letting these users in First In First Out as spots open up on their site. Since their underlying waiting room is already configured with the traffic thresholds they want to enforce for the duration of the event, they would simply leave the Settings page of the event creation workflow unchanged and proceed to Customization.

Waiting Room Event Scheduling protects your site during online events

On the Customization step, Shopflare will create a custom queuing experience for their sale by uploading a custom HTML template that contains the HTML for both their pre-queuing page and their overflow queue.

Waiting Room Event Scheduling protects your site during online events

Shopflare wants their pre-queuing page to get shoppers excited about the beginning of the sale. They ensure it is branded and unique to the flash sale while setting clear expectations for shoppers. For their overflow queue, they want the same look and feel as their pre-queuing page, with updated messaging that gives shoppers an estimated wait time and explains the reason for queuing. Check out the two sample queuing pages below to see how they create a unique and informative experience for their queued customers in both the pre-queue and overflow queue.

Waiting Room Event Scheduling protects your site during online events
Waiting Room Event Scheduling protects your site during online events

Sale conclusion

Once the sale has ended, Shopflare wants to allow active shoppers a five-minute grace period to complete their purchases without admitting any more new visitors. For 48 hours post-sale, they would like to present all visitors with a static page letting them know the sale has concluded while providing a redirect link back to their homepage. To achieve this, Shopflare would create another event for the baseline waiting room, starting when the previous event ends, with no pre-queue enabled.

Waiting Room Event Scheduling protects your site during online events

To offload all new site traffic after the sale has ended while giving active shoppers a five-minute grace period, they would go to the Settings page of the event creation workflow, set the session duration to five minutes, disable session renewal, and select the Reject All queuing method.
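
In API terms, that second event might look something like the sketch below. As before, the field names follow our reading of the event documentation, and the IDs, token, and timestamps are placeholders:

    import requests

    API = "https://api.cloudflare.com/client/v4"
    ZONE_ID = "your_zone_id"          # placeholder
    ROOM_ID = "your_waiting_room_id"  # placeholder
    HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

    post_sale_event = {
        "name": "spring_flash_sale_wrap_up",
        "event_start_time": "2022-04-02T12:00:00Z",  # the moment the sale event ends
        "event_end_time": "2022-04-04T12:00:00Z",    # 48 hours later
        "queueing_method": "reject",                 # offload all new traffic
        "session_duration": 5,                       # five-minute grace period
        "disable_session_renewal": True,
    }

    resp = requests.post(
        f"{API}/zones/{ZONE_ID}/waiting_rooms/{ROOM_ID}/events",
        headers=HEADERS,
        json=post_sale_event,
    )
    resp.raise_for_status()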

Waiting Room Event Scheduling protects your site during online events

Once again, on the Customization tab, they would elect to override the underlying waiting room template with a custom event template and upload their custom Reject page HTML. They would then review and save their event, and it would appear alongside their previously created event in the Waiting Room dashboard.

Waiting Room Event Scheduling protects your site during online events

And that’s it! With their waiting room events in place, Shopflare can rest assured that their site will be protected and that their customers will have an on-brand and transparent shopping experience on the big day. Each customer and online event is unique. However you choose to manage user traffic for your online event, Event Scheduling for Cloudflare Waiting Room offers the options necessary to deliver a stellar and fair user experience while protecting your application. We can’t wait to support you in your next online event!

For more on Event Scheduling and Waiting Room, check out our developer documentation.



Introducing Location-Aware DDoS Protection

Post Syndicated from Omer Yoachimik original https://blog.cloudflare.com/location-aware-ddos-protection/

Introducing Location-Aware DDoS Protection

Introducing Location-Aware DDoS Protection

We’re thrilled to introduce Cloudflare’s Location-Aware DDoS Protection.

Distributed Denial of Service (DDoS) attacks are cyber attacks that aim to make your Internet property unavailable by flooding it with more traffic than it can handle. For this reason, attackers usually aim to generate as much attack traffic as they can — from as many locations as they can. With Location-Aware DDoS Protection, we take this distributed characteristic of the attack, which is usually thought to work in the attacker’s favor, and turn it on its head, making it a disadvantage.

Location-Aware DDoS Protection is now available in beta for Cloudflare Enterprise customers that are subscribed to the Advanced DDoS service.

Introducing Location-Aware DDoS Protection

Distributed attacks lose their edge

Cloudflare’s Location-Aware DDoS Protection takes the attacker’s advantage and uses it against them. By learning where your traffic comes from, the system becomes location-aware and constantly asks “Does this make sense for your website?” whenever it sees new traffic.

For example, if you operate an e-commerce website that mostly serves German consumers, most of your traffic would likely originate from within Germany, some from neighboring European countries, and a decreasing amount from countries and geographies farther away. If sudden spikes of traffic arrive from unexpected locations outside your main geographies, the system will flag and mitigate the unwanted traffic.

Location-Aware DDoS Protection also leverages Cloudflare’s Machine Learning models to identify traffic that is likely automated. This is used as an additional signal to provide more accurate protection.

Enabling Location-Aware Protection

Enterprise customers subscribed to the Advanced DDoS service can customize and enable the Location-Aware DDoS Protection system. By default, the system runs in a show-only mode: it flags what it deems suspicious traffic based on your P95 rates from the last seven days, bucketed by client country and region and recalculated every 24 hours.
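
Conceptually, and purely as an illustration rather than our production logic, you can think of the baseline as a per-country percentile threshold:

    from statistics import quantiles

    # Requests-per-minute samples over the last seven days, bucketed by
    # the client's country (illustrative numbers only).
    weekly_rates = {
        "DE": [820, 900, 760, 1010, 880, 950, 840],
        "BR": [3, 5, 2, 4, 6, 3, 5],
    }

    def p95(samples):
        # 95th percentile of the observed rates.
        return quantiles(samples, n=100)[94]

    baselines = {country: p95(rates) for country, rates in weekly_rates.items()}

    def is_suspicious(country, current_rate):
        # Traffic well above the country's learned baseline gets flagged.
        return current_rate > baselines.get(country, 0)

    print(is_suspicious("DE", 900))   # False: within normal German traffic
    print(is_suspicious("BR", 4000))  # True: a spike from an unexpected geography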

Customers can view what the system flagged in the Security Overview dashboard.

Introducing Location-Aware DDoS Protection

Location-Aware DDoS Protection is exposed to customers as a new HTTP DDoS Managed rule within the existing ruleset. To enable it, change the action to Managed Challenge or Block. You can adjust the rule’s sensitivity level to define how much tolerance to permit for traffic that deviates from your observed geographies; the lower the sensitivity, the higher the tolerance.
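
If you prefer to configure this programmatically, the Rulesets API supports overrides for managed rules. The sketch below shows the general shape of such an override; the managed ruleset and rule IDs are placeholders you would take from the developer docs, and the exact fields should be verified there as well:

    import requests

    API = "https://api.cloudflare.com/client/v4"
    ZONE_ID = "your_zone_id"  # placeholder
    HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

    # Override the location-aware rule inside the HTTP DDoS managed
    # ruleset at the zone's ddos_l7 phase entry point.
    body = {
        "rules": [
            {
                "action": "execute",
                "expression": "true",
                "action_parameters": {
                    "id": "HTTP_DDOS_MANAGED_RULESET_ID",  # placeholder, see docs
                    "overrides": {
                        "rules": [
                            {
                                "id": "LOCATION_AWARE_RULE_ID",  # placeholder, see docs
                                "action": "managed_challenge",
                                "sensitivity_level": "low",  # lower sensitivity = higher tolerance
                            }
                        ]
                    },
                },
            }
        ]
    }

    resp = requests.put(
        f"{API}/zones/{ZONE_ID}/rulesets/phases/ddos_l7/entrypoint",
        headers=HEADERS,
        json=body,
    )
    resp.raise_for_status()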

Introducing Location-Aware DDoS Protection

To learn how to view flagged traffic and how to configure the Location-Aware DDoS Protection, visit our developer docs site.

Making the impact of DDoS attacks a thing of the past

Our mission at Cloudflare is to help build a better Internet. The DDoS Protection team’s vision is derived from this mission: our goal is to make the impact of DDoS attacks a thing of the past. Location-aware protection is only the first step to making Cloudflare’s DDoS protection even more intelligent, sophisticated, and tailored to individual needs.

Not using Cloudflare yet? Start now with our Free and Pro plans to protect your websites, or contact us to learn more about the Enterprise Advanced DDoS Protection package.

New WAF intelligence feeds

Post Syndicated from Daniele Molteni original https://blog.cloudflare.com/new-waf-intelligence-feeds/

New WAF intelligence feeds

New WAF intelligence feeds

Cloudflare is expanding our WAF’s threat intelligence capabilities by adding four new managed IP lists that can be used as part of any custom firewall rule.

Managed lists are created and maintained by Cloudflare and are built based on threat intelligence feeds collected by analyzing patterns and trends observed across the Internet. Enterprise customers can already use the Open SOCKS Proxy list (launched in March 2021) and today we are adding four new IP lists: “VPNs”, “Botnets, Command and Control Servers”, “Malware” and “Anonymizers”.

New WAF intelligence feeds
You can check which lists are available in your plan by navigating to Manage Account → Configuration → Lists.

Customers can reference these lists when creating a custom firewall rule or in Advanced Rate Limiting. For example, you can choose to block all traffic generated by IPs we categorize as VPNs, or rate limit traffic generated by all Anonymizers. You can simply incorporate managed IP lists in the powerful firewall rule builder. Of course, you can also use your own custom IP list.
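
As a sketch, here is how a managed list might be referenced when creating a firewall rule through the API. The $cf.anonymizer list name and endpoint shape reflect the documentation as we understand it; the zone ID and token are placeholders:

    import requests

    API = "https://api.cloudflare.com/client/v4"
    ZONE_ID = "your_zone_id"  # placeholder
    HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

    # Block any request whose source IP is in the managed Anonymizers list.
    rules = [
        {
            "action": "block",
            "description": "Block traffic from known anonymizers",
            "filter": {"expression": "(ip.src in $cf.anonymizer)"},
        }
    ]

    resp = requests.post(
        f"{API}/zones/{ZONE_ID}/firewall/rules",
        headers=HEADERS,
        json=rules,
    )
    resp.raise_for_status()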

New WAF intelligence feeds
Managed IP Lists can be used in WAF rules to manage incoming traffic from these IPs.

Where do these feeds come from?

These lists are based on Cloudflare-generated threat feeds which are made available as IP lists to be easily consumed in the WAF. Each IP is categorized by combining open source data with an analysis of each IP’s behavior, leveraging the scale and reach of Cloudflare’s network. After an IP has been included in one of these feeds, we verify its categorization, feed this information back into our security systems, and make it available to our customers in the form of a managed IP list. The content of each list is updated multiple times a day.

In addition to generating IP classifications based on Cloudflare’s internal data, Cloudflare curates and combines several data sources that we believe provide reliable coverage of active security threats with a low false positive rate. Freshness matters because IP reputation changes quickly: an IP belonging to a cloud provider might be distributing malware today, but tomorrow it might be a critical resource for your company.

Some IP address classifications are publicly available OSINT data, for example Tor exit nodes, and Cloudflare takes care of integrating these into our Anonymizer list so that you don’t have to manage that integration for every asset in your network. Other classifications are determined or vetted using a variety of DNS techniques, such as forward lookups, PTR record lookups, and observing passive DNS from Cloudflare’s network.

Our malware and command-and-control focused lists are generated from curated partnerships; one thing we look for when selecting partners is data sources that identify security threats with no DNS records associated with them.

Our Anonymizer list encompasses several types of services that perform anonymization, including VPNs, open proxies, and Tor nodes. It is a superset of the more narrowly focused VPN list (known commercial VPN nodes), and the Cloudflare Open Proxies list (proxies that relay traffic without requiring authentication).

In dashboard IP annotations

Using these lists to deploy a preventative security policy for these IPs is great, but what about knowing if an IP that is interacting with your website or application is part of a Botnet or VPN? We first released contextual information for Anonymizers as part of Security Week 2022, but we are now closing the circle by extending this feature to cover all new lists.

As part of Cloudflare’s threat intelligence feeds, we are exposing the IP category directly in the dashboard. Say you are investigating requests that were blocked by the WAF and that looked to be probing your application for known software vulnerabilities. If the source IP of these requests matches one of our feeds (for example, it is part of a VPN), contextual information will appear directly on the analytics page.

New WAF intelligence feeds
When the source IP of a WAF event matches one of the threat feeds, we provide contextual information directly in the Cloudflare dashboard.

This information can help you see patterns and decide whether you need to use the managed lists to handle the traffic from these IPs in a particular way, for example by creating a rate limiting rule that reduces the amount of requests these actors can perform over a period of time.

Who gets this?

The following table summarizes which plans have access to each of these features. All paid plans have access to the contextual in-dashboard information, while Enterprise plans can use the managed lists. Managed lists can be used only on Enterprise zones within an Enterprise account.

                                FREE   PRO   BIZ   ENT   Advanced ENT *
Annotations                      –     ✓     ✓     ✓     ✓
Open Proxies                     –     –     –     ✓     ✓
Anonymizers                      –     –     –     –     ✓
VPNs                             –     –     –     –     ✓
Botnets, command and control     –     –     –     –     ✓
Malware                          –     –     –     –     ✓

(✓ = available on the plan, – = not available)

* Contact your customer success manager to learn how to get access to these lists.

Future releases

We are working on enriching our threat feeds even further. In the coming months we are going to provide more IP lists; specifically, we are looking into lists for cloud providers and Carrier-Grade Network Address Translation (CG-NAT).

Connect to private network services with Browser Isolation

Post Syndicated from Tim Obezuk original https://blog.cloudflare.com/browser-isolation-private-network/

Connect to private network services with Browser Isolation

Connect to private network services with Browser Isolation

If you’re working in an IT organization that has relied on virtual desktops but is looking to get rid of them, we have some good news: starting today, you can connect your users to your private network via isolated remote browsers. This means you can deliver sensitive internal web applications, reducing costs without sacrificing security.

Browser Isolation with private network connectivity enables your users to securely access private web services without installing any software or agents on an endpoint device or absorbing the management and cost overhead of serving virtual desktops. What’s even better: Browser Isolation is natively integrated into Cloudflare’s Zero Trust platform, making it easy to control and monitor who can access what private services from a remote browser without sacrificing performance or security.

Deprecating virtual desktops for web apps

The presence of virtual desktops in the workplace tells an interesting story about the evolution of deploying and securing enterprise applications. Serving a full virtual desktop to end users is an expensive decision: each user requires a dedicated virtual machine with multiple CPU cores and gigabytes of memory to run a full operating system. This cost was offset by the benefit of streamlining desktop app distribution and the security benefit of isolating unmanaged devices from aging applications.

Then the launch of Chromium/V8 surprised everyone by demonstrating that desktop-grade applications could be built entirely with web-based technologies. Today, the vast majority of applications — either SaaS or private — exist within a web browser. With most Virtual Desktop Infrastructure (VDI) users connecting to a remote desktop just to open a web browser, VDI is no longer needed for distributing applications and has become a tremendously expensive way to securely host a web browser.

Browser Isolation with private network connectivity enables businesses to maintain the security benefits of VDI, without the costs of hosting and operating legacy virtual desktops.

Transparent end-user experience

But it doesn’t just have a better ROI; Browser Isolation offers a better experience for your end users, too. Serving web applications via virtual desktops is a clunky experience: users first need to connect to their virtual desktop (either through a desktop application or a web portal) and then open an embedded web browser. This model requires users to context-switch between local and remote web applications, which adds friction and hurts productivity.

With Browser Isolation, users simply navigate to the isolated private application in their preferred web browser and use the service as if they were browsing it directly.

How it works

Browser Isolation with private network connectivity works by unifying two of our Zero Trust products: Cloudflare Access and Cloudflare Tunnel.

Cloudflare Access authorizes your users via your preferred Identity Provider and connects them to a remote browser without installing any software on their device. Cloudflare Tunnel securely connects your private network to remote browsers hosted on Cloudflare’s network without opening any inbound ports on your firewall.

Monitor third-party users on private networks

Ever needed to give a contractor or vendor access to your network to remotely manage a web UI? Simply add the user to your Clientless Web Isolation policy, and they can connect to your internal service without installing any client software on their device. All requests to private IPs are filtered, inspected, and logged through Cloudflare Gateway.

Apply data protection controls

All traffic from remote browsers into your network is inspected and filtered. Data protection controls such as disabling the clipboard, printing, and file upload/download can be granularly applied to high-risk user groups and sensitive applications.

Get started

Connect your network to Cloudflare Zero Trust

It’s ridiculously easy to connect any network with outbound Internet access.

Engineers needing a web environment to debug and test services inside a private network just need to run a single command to connect their network to Browser Isolation using Cloudflare Tunnel.

Enable Clientless Web Isolation

Clientless Web Isolation allows users to connect to a remote browser without installing any software on the endpoint device. That means company-wide deployment is seamless and transparent to end users. Follow these steps to enable Clientless Web Isolation and define what users are allowed to connect to a remote browser.

Browse private IP resources

Now that your network is connected to Cloudflare and your users are connected to remote browsers, it’s easy for a user to reach any RFC 1918 address in a remote browser. Simply navigate to your isolation endpoint, and you’ll be connected to your private network.

For example, if you want a user to manage a router hosted at http://192.0.2.1, prefix this URL with your isolation endpoint, such as:

https://<authdomain>.cloudflareaccess.com/browser/http://192.0.2.1

That’s it! Users are automatically served a remote browser in a nearby Cloudflare data center.

Remote browser connected to a private web service with data loss prevention policies enabled

Define policies

At this point, your users can connect to any private resource inside your network. You may want to further control what endpoints your users can reach. To do this, navigate to Gateway → Policies → HTTP and allow / block or apply data protection controls for any private resource based on identity or destination IP address. See our developer documentation for more information.

Connect to private network services with Browser Isolation

Additionally, isolation policies can be defined to control how users can interact with the remote browser to disable the clipboard, printing or file upload / downloads. See our developer documentation for more information.

Logging and visibility

Finally, all remote browser traffic is logged by the Secure Web Gateway. Navigate to Logs → Gateway → HTTP and filter by identity or destination IP address.

Connect to private network services with Browser Isolation

What’s next?

We’re excited to learn how people use Browser Isolation to enable remote access to private networks and protect sensitive apps. As always, we’re just getting started, so stay tuned for improvements to configuring remote browsers and deeper connectivity with Access applications. Click here to get started with Browser Isolation.

Announcing Gateway + CASB

Post Syndicated from Corey Mahan original https://blog.cloudflare.com/announcing-gateway-and-casb/

Announcing Gateway + CASB

This post is also available in 简体中文, 日本語, Español.

Announcing Gateway + CASB

Shadow IT and managing access to sanctioned or unsanctioned SaaS applications remain one of the biggest pain points for IT administrators in the era of the cloud.

We’re excited to announce that starting today, Cloudflare’s Secure Web Gateway and our new API-driven Cloud Access Security Broker (CASB) work seamlessly together to help IT and security teams go from finding Shadow IT to fixing it in minutes.

Detect security issues within SaaS applications

Cloudflare’s API-driven CASB starts by providing comprehensive visibility into SaaS applications, so you can easily prevent data leaks and compliance violations. Setup takes just a few clicks to integrate with your organization’s SaaS services, like Google Workspace and Microsoft 365. From there, IT and security teams can see what applications and services their users are logging into and how company data is being shared.

So you’ve found the issues. But what happens next?

Identify and detect, but then what?

Customer feedback from the API-driven CASB beta has followed a similar theme: it was super easy to set up and detect all my security issues, but how do I fix this stuff?

After investigating the most critical issues, it makes sense to want to start taking action almost immediately. Whether it’s detecting an unknown application being used for Shadow IT or wanting to limit the functionality, access, or behaviors of a known but unapproved application, remediation is front of mind.

This left customers feeling like they had a bunch of useful data in front of them, but no clear next step for fixing the issues.

Create Gateway policies from CASB security findings

To solve this problem, we’re allowing you to easily create Gateway policies from CASB security findings. Security findings are issues detected within SaaS applications that involve users, data at rest, and settings; each is assigned a Low, Medium, High, or Critical severity per integration.

Using the security findings from CASB allows for fine-grained Gateway policies which prevent future unwanted behavior while still allowing usage that aligns with company security policy. This means going from viewing a CASB security issue, like the use of an unapproved SaaS application, to preventing or controlling access in minutes. This seamless cross-product experience all happens from a single, unified platform.

For example, take the CASB Google Workspace security finding around third-party apps, which detects sign-ins or other permission sharing from a user’s account. In just a few clicks, you can create a Gateway policy to block some or all of the activity, like uploads or downloads, to the detected SaaS application. This policy can be applied to some or all users, based on what access has been granted to the user’s account.

By surfacing the exact behavior with CASB, you can take swift and targeted action to better protect your organization with Gateway.

Announcing Gateway + CASB

Get started today with Cloudflare One

This post highlights one of the many ways the Cloudflare One suite of solutions works seamlessly together as a unified platform to find and fix security issues across SaaS applications.

Get started now with Cloudflare’s Secure Web Gateway by signing up here. Cloudflare’s API-driven CASB is in closed beta with new customers being onboarded each week. You can request access here to try out this exciting new cross-product feature.

Early Hints update: How Cloudflare, Google, and Shopify are working together to build a faster Internet for everyone

Post Syndicated from Alex Krivit original https://blog.cloudflare.com/early-hints-performance/

Early Hints update: How Cloudflare, Google, and Shopify are working together to build a faster Internet for everyone

Early Hints update: How Cloudflare, Google, and Shopify are working together to build a faster Internet for everyone

A few months ago, we wrote a post focused on a product we were building that could vastly improve page load performance. That product, known as Early Hints, has seen wide adoption since that original post. In early benchmarking experiments with Early Hints, we saw performance improvements that were as high as 30%.

Now, with over 100,000 customers using Early Hints on Cloudflare, we are excited to talk about how much Early Hints have improved page loads for our customers in production, how customers can get the most out of Early Hints, and provide an update on the next iteration of Early Hints we’re building.

What Are Early Hints again?

As a reminder, the browser you’re using right now to read this page needed instructions for what to render and which resources (like images, fonts, and scripts) to fetch from elsewhere in order to complete loading this (or any given) web page. When you decide you want to see a page, your browser sends a request to a server, and the instructions for what to load come from the server’s response. These responses are generally composed of a multitude of resources that tell the browser what content to load and how to display it to the user. The servers sending these instructions to your browser often need time to gather up all of the resources in order to compile the whole webpage. This period is known as “server think time.” Traditionally, during the “server think time,” the browser would sit waiting until the server had finished gathering all the required resources and was able to return the full response.

Early Hints was designed to take advantage of this “server think time” to send instructions to the browser to begin loading readily-available resources while the server finishes compiling the full response. Concretely, the server sends two responses: the first instructs the browser on what it can begin loading right away, and the second is the full response with the remaining information. By sending these hints to a browser before the full response is prepared, the browser can figure out what it needs to do to load the webpage faster for the end user.

Early Hints uses the HTTP status code 103 as the first response to the client. The “hints” are HTTP headers attached to the 103 response that are likely to appear in the final response, indicating (with the Link header) resources the browser should begin loading while the server prepares the final response. Sending hints on which assets to expect before the entire response is compiled allows the browser to use this “think time” (when it would otherwise have been sitting idle) to fetch needed assets, prepare parts of the displayed page, and otherwise get ready for the full response to be returned.
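
To make this concrete, a hinted exchange looks roughly like this on the wire, following the 103 format defined in RFC 8297 (the asset paths here are illustrative):

    HTTP/1.1 103 Early Hints
    Link: </assets/app.css>; rel=preload; as=style
    Link: <https://cdn.example.com>; rel=preconnect

    ...server think time...

    HTTP/1.1 200 OK
    Content-Type: text/html
    Link: </assets/app.css>; rel=preload; as=style

    <!doctype html>
    ...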

Early Hints on Cloudflare accomplishes performance improvements in three ways:

  • By sending a response where resources are directed to be preloaded by the browser. Preloaded resources direct the browser to begin loading the specified resources as they will be needed soon to load the full page. For example, if the browser needs to fetch a font resource from a third party, that fetch can happen before the full response is returned, so the font is already waiting to be used on the page when the full response returns from the server.
  • By using preconnect to initiate a connection to places where content will be returned from an origin server. For example, if a Shopify storefront needs content from a Shopify origin to finish loading the page, preconnect will warm up the connection which improves the performance for when the origin returns the content.
  • By caching and emitting Early Hints on Cloudflare, we make efficient use of the full waiting period – not just server think time – which includes transit latency to the origin. Cloudflare sits within 50 milliseconds of 95% of the Internet-connected population globally. So while a request is routed to an origin and the final response is being compiled, Cloudflare can send an Early Hint from much closer, and the browser can begin loading.
Early Hints update: How Cloudflare, Google, and Shopify are working together to build a faster Internet for everyone

Early Hints is like multitasking across the Internet – at the same time the origin is compiling resources for the final response and making calls to databases or other servers, the browser is already beginning to load assets for the end user.

What’s new with Early Hints?

While developing Early Hints, we’ve been fortunate to work with Google and Shopify to collect data on the performance impact. Chrome provided web developers with experimental access to both preload and preconnect support for Link headers in Early Hints. Shopify worked with us to guide the development by providing test frameworks, which were invaluable for getting real performance data.

Today is a big day for Early Hints. Google announced that Early Hints is available in Chrome version 103 with support for preload and preconnect to start. Previously, Early Hints was available via an origin trial so that Chrome could measure the full performance benefit (A/B test). Now that the data has been collected and analyzed, and we’ve been able to prove a substantial improvement to page load, we’re excited that Chrome’s full support of Early Hints will mean that many more requests will see the performance benefits.

That’s not the only big news coming out about Early Hints. Shopify battle-tested Cloudflare’s implementation of Early Hints during Black Friday/Cyber Monday 2021 and is sharing the performance benefits they saw during the busiest shopping time of the year:


While talking to the audience at Cloudflare Connect London last week, Colin Bendell, Director of Performance Engineering at Shopify, summarized it best: “when a buyer visits a website, if that first page that (they) experience is just 10% faster, on average there is a 7% increase in conversion.” The beauty of Early Hints is that you can get that sort of speedup easily, and with Smart Early Hints it can be one click away.

You can see his entire talk here:

The headline here is that during a time of vast uncertainty due to the global pandemic, a time when everyone was more online than ever before, when people needed their Internet to be reliably fast — Cloudflare, Google, and Shopify all came together to build and test Early Hints so that the whole Internet would be a faster, better, and more efficient place.

So how much did Early Hints improve performance of customers’ websites?

Performance Improvement with Early Hints

In our simple tests back in September, we were able to accelerate the Largest Contentful Paint (LCP) by 20-30%. Granted, this result was on an artificial page with mostly large images where Early Hints impact could be maximized. As for Shopify, we also knew their storefronts were particularly good candidates for Early Hints. Each mom-and-pop.shop page depends on many assets served from cdn.shopify.com – speeding up a preconnect to that host should meaningfully accelerate loading those assets.

But what about other zones? We expected most origins already using Link preload and preconnect headers to see at least modest improvements if they turned on Early Hints. We wanted to assess performance impact for other uses of Early Hints beyond Shopify’s.

However, getting good data on web page performance impact can be tricky. Not every 103 response from Cloudflare will result in a subsequent request through our network. Some hints tell the browser to preload assets on important third-party origins, for example. And not every Cloudflare zone may have Browser Insights enabled to gather Real User Monitoring data.

Ultimately, we decided to do some lab testing with WebPageTest of a sample of the most popular websites (top 1,000 by request volume) using Early Hints on their URLs with preload and preconnect Link headers. WebPageTest (which we’ve written about in the past) is an excellent tool to visualize and collect metrics on web page performance across a variety of device and connectivity settings.

Lab Testing

In our earlier blog post, we were mainly focused on Largest Contentful Paint (LCP), which is the time at which the browser renders the largest visible image or text block, relative to the start of the page load. Here we’ll focus on improvements not only to LCP, but also FCP (First Contentful Paint), which is the time at which the browser first renders visible content relative to the start of the page load.

We compared test runs with Early Hints support off and on (in Chrome), across four different simulated environments: desktop with a cable connection (5Mbps download / 28ms RTT), mobile with 3G (1.6Mbps / 300ms RTT), mobile with low-latency 3G (1.6Mbps / 150ms RTT) and mobile with 4G (9Mbps / 170ms RTT). After running the tests, we cleaned the data to remove URLs with no visual completeness metrics or less than five DOM elements. (These usually indicated document fragments vs. a page a user might actually navigate to.) This gave us a final sample population of a little more than 750 URLs, each from distinct zones.

In the box plots below, we’re comparing FCP and LCP percentiles between the timing data control runs (no Early Hints) and the runs with Early Hints enabled. Our sample population represents a variety of zones, some of which load relatively quickly and some far slower, thus the long whiskers and string of outlier points climbing the y-axis. The y-axis is constrained to the max p99 of the dataset, to ensure 99% of the data are reflected in the graph while still letting us focus on the p25 / p50 / p75 differences.

Early Hints update: How Cloudflare, Google, and Shopify are working together to build a faster Internet for everyone
Early Hints update: How Cloudflare, Google, and Shopify are working together to build a faster Internet for everyone

The relative shift in the box plot quantiles suggests we should expect modest benefits from Early Hints for the majority of web pages. By comparing the FCP / LCP percentage improvement of the web pages from their respective baselines, we can quantify what those median and p75 improvements would look like:

Early Hints update: How Cloudflare, Google, and Shopify are working together to build a faster Internet for everyone
Early Hints update: How Cloudflare, Google, and Shopify are working together to build a faster Internet for everyone

A couple observations:

  • From the p50 values, we see that for 50% of web pages on desktop, Early Hints improved FCP by more than 9.47% and LCP by more than 6.03%. For the p75, or the upper 25%, FCP improved by more than 20.4% and LCP by more than 15.97%.
  • The sizable improvements in First Contentful Paint suggest many hints are for render-blocking assets (such as critical but dynamic stylesheets and scripts that can’t be embedded in the HTML document itself).
  • We see a greater percentage impact on desktop over cable and on mobile over 4G. In theory, the impact of Early Hints is bounded by the load time of the linked asset (i.e. ideally we could preload the entire asset before the browser requires it), so we might expect the FCP / LCP reduction to increase in step with latency. Instead, it appears to be the other way around. There could be many variables at play here – for example, the extra bandwidth the 4G connection provides seems to be more influential than the decreased latency between the two 3G connection settings. Likely that wider bandwidth pipe is especially helpful for URLs we observed that preloaded larger assets such as JS bundles or font files. We also found examples of pages that performed consistently worse on lower-grade connections (see our note on “over-hinting” below).
  • Quite a few sample zones cached their HTML pages on Cloudflare (~15% of the sample). For CDN cache hits, we’d expect Early Hints to be less influential on the final result (because the “server think time” is drastically shorter). Filtering them out from the sample, however, yielded almost identical relative improvement metrics.

The relative distributions between control and Early Hints runs, as well as the per-site baseline improvements, show us Early Hints can be broadly beneficial for use cases beyond Shopify’s. As suggested by the p75+ values, we also still find plenty of case studies showing a more substantial potential impact on LCP (and FCP), like the one we observed from our artificial test case, as indicated by these WebPageTest waterfall diagrams:

Early Hints update: How Cloudflare, Google, and Shopify are working together to build a faster Internet for everyone

These diagrams show the network and rendering activity on the same web page (which, bucking the trend, had some of its best results over mobile – 3G settings, shown here) for its first ten resources. Compare the WebPageTest waterfall view above (with Early Hints disabled) with the waterfall below (Early Hints enabled). The first green vertical line in each indicates First Contentful Paint. The page configures Link preload headers for a few JS / CSS assets, as well as a handful of key images. When Early Hints is on, those assets (numbered 2 through 9 below) get a significant head start from the preload hints. In this case, FCP and LCP improved by 33%!

Early Hints update: How Cloudflare, Google, and Shopify are working together to build a faster Internet for everyone

Early Hints Best Practices and Strategies for Better Performance

The effect of Early Hints can vary widely on a case-by-case basis. We noticed particularly successful zones had one or more of the following:

  • Preconnect Link headers to important third-party origins (e.g. an origin hosting the pages’ assets, or Google Fonts).
  • Preload Link headers for a handful of critical render-blocking resources.
  • Scripts and stylesheets split into chunks, enumerated in preload Links.
  • A preload Link for the LCP asset, e.g. the featured image on a blog post.

It’s quite possible these strategies are already familiar to you if you work on web performance! Essentially the best practices that apply to using Link headers or <link> elements in the HTML <head> also apply to Early Hints. That is to say: if your web page is already using preload or preconnect Link headers, using Early Hints should amplify those benefits.
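
For example, an origin following the checklist above might attach Link headers like these to its HTML responses (the paths and hosts are illustrative), which Cloudflare can then harvest and replay as a 103:

    Link: <https://fonts.googleapis.com>; rel=preconnect
    Link: </css/critical.css>; rel=preload; as=style
    Link: </js/app.chunk.1.js>; rel=preload; as=script
    Link: </images/featured-post.jpg>; rel=preload; as=image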

A cautionary note here: while it may be safer to aggressively send assets in Early Hints versus Server Push (as the hints won’t arbitrarily send browser-cached content the way Server Push might), it is still possible to over-hint non-critical assets and saturate network bandwidth in a similar manner to overpushing. For example, one page in our sample listed well over 50 images in its 103 response (but not one of its render-blocking JS scripts). It saw improvements over cable, but was consistently worse off in the higher latency, lower bandwidth mobile connection settings.

Google has great guidelines for configuring Link headers at your origin in their blog post. As for emitting these Links as Early Hints, Cloudflare can take care of that for you!

How to enable on Cloudflare

  • To enable Early Hints on Cloudflare, simply sign in to your account and select the domain you’d like to enable it on.
  • Navigate to the Speed Tab of the dashboard.
  • Enable Early Hints.

Enabling Early Hints means that we will harvest the preload and preconnect Link headers from your origin responses, cache them, and send them as 103 Early Hints for subsequent requests so that future visitors will be able to gain an even greater performance benefit.

For more information about our Early Hints feature, please refer to our announcement post or our documentation.

Smart Early Hints update

In our original blog post, we also mentioned our intention to ship a product improvement to Early Hints that would generate the 103 on your behalf.

Smart Early Hints will generate Early Hints even when there isn’t a Link header present in the origin response from which we can harvest a 103. The goal is a no-code, no-configuration experience with massive improvements to page load. Smart Early Hints will infer which assets can be preloaded or prioritized in different ways by analyzing responses coming from our customers’ origins. It will be your one-button web performance guru, completely dedicated to making sure your site is loading as fast as possible.

This work is still under development, but we look forward to getting it built before the end of the year.

Try it out!

The promise Early Hints holds has only started to be explored, and we’re excited to continue to build products and features that make web performance reliably fast.

We’ll continue to update you along our journey as we develop Early Hints and look forward to your feedback (special thanks to the Cloudflare Community members who have already been invaluable) as we move to bring Early Hints to everyone.