
Give us a ping. (Cloudflare) One ping only

Post Syndicated from Abe Carryl original https://blog.cloudflare.com/the-most-exciting-ping-release/

Ping was born in 1983, when the Internet needed a simple, effective way to measure reachability and distance. In short, ping (and subsequent utilities like traceroute and MTR) provides users with a quick way to validate whether one machine can communicate with another. Fast-forward to today, and these network utility tools have become ubiquitous. Not only are they the de facto standard for troubleshooting connectivity and network performance issues, but they also improve our overall quality of life by acting as a common suite of tools that almost all Internet users are comfortable employing in their day-to-day roles and responsibilities.

Making network utility tools work as expected is very important to us, especially now as more and more customers are building their private networks on Cloudflare. Over 10,000 teams now run a private network on Cloudflare. Some of these teams are among the world’s largest enterprises, some are small crews, and yet others are hobbyists, but they all want to know – can I reach that?

That’s why today we’re excited to incorporate support for these utilities into our already expansive troubleshooting toolkit for Cloudflare Zero Trust. To get started, sign up to receive beta access and start using the familiar debugging tools that we all know and love like ping, traceroute, and MTR to test connectivity to private network destinations running behind Tunnel.

Cloudflare Zero Trust

With Cloudflare Zero Trust, we’ve made it ridiculously easy to build your private network on Cloudflare. In fact, it takes just three steps to get started. First, download Cloudflare’s device client, WARP, to connect your users to Cloudflare. Then, create identity and device aware policies to determine who can reach what within your network. And finally, connect your network to Cloudflare with Tunnel directly from the Zero Trust dashboard.

We’ve designed Cloudflare Zero Trust to act as a single pane of glass for your organization. This means that after you’ve deployed any part of our Zero Trust solution, whether that be ZTNA or SWG, you are clicks, not months, away from deploying Browser Isolation, Data Loss Prevention, Cloud Access Security Broker, and Email Security. This is a stark contrast from other solutions on the market which may require distinct implementations or have limited interoperability across their portfolio of services.

It’s that simple, but if you’re looking for more prescriptive guidance watch our demo below to get started:

To get started, sign-up for early access to the closed beta. If you’re interested in learning more about how it works and what else we will be launching in the future, keep scrolling.

So, how do these network utilities actually work?

Ping, traceroute and MTR are all powered by the same underlying protocol, ICMP. Every ICMP message has 8-bit type and code fields, which define the purpose and semantics of the message. While ICMP has many types of messages, the network diagnostic tools mentioned above make specific use of the echo request and echo reply message types.

Every ICMP message has a type, code and checksum. As you may have guessed from the name, an echo reply is generated in response to the receipt of an echo request, and critically, the request and reply have matching identifiers and sequence numbers. Make a mental note of this fact as it will be useful context later in this blog post.
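As a concrete illustration (not Cloudflare's implementation), here is a minimal Python sketch of an ICMP echo message: an 8-byte header carrying the type, code, checksum, identifier, and sequence number, followed by an arbitrary payload. The checksum is the standard RFC 1071 ones'-complement sum, which is why recomputing it over a correctly checksummed packet yields zero:

```python
import struct

ICMP_ECHO_REQUEST = 8  # type 8, code 0
ICMP_ECHO_REPLY = 0    # type 0, code 0

def icmp_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum over 16-bit big-endian words."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF

def build_echo_request(identifier: int, sequence: int, payload: bytes = b"ping") -> bytes:
    # Header layout: type (8 bits), code (8), checksum (16), identifier (16), sequence (16)
    header = struct.pack("!BBHHH", ICMP_ECHO_REQUEST, 0, 0, identifier, sequence)
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", ICMP_ECHO_REQUEST, 0, csum, identifier, sequence) + payload

def parse_echo(packet: bytes):
    ptype, code, _csum, identifier, sequence = struct.unpack("!BBHHH", packet[:8])
    return ptype, code, identifier, sequence
```

A conformant endpoint would answer this with an echo reply (type 0) carrying the same identifier and sequence number, which is exactly the matching property the rest of this post relies on.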

A crash course in ping, traceroute, and MTR

As you may expect, each one of these utilities comes with its own unique nuances, but don’t worry. We’re going to provide a quick refresher on each before getting into the nitty-gritty details.


Ping works by sending a sequence of echo request packets to the destination. Each router hop between the sender and destination decrements the TTL field of the IP packet containing the ICMP message and forwards the packet to the next hop. If a hop decrements the TTL to 0 before reaching the destination, or doesn’t have a next hop to forward to, it will return an ICMP error message – “TTL exceeded” or “Destination host unreachable” respectively – to the sender. A destination which speaks ICMP will receive these echo request packets and return matching echo replies to the sender. The same process of traversing routers and TTL decrementing takes place on the return trip. On the sender’s machine, ping reports the final TTL of these replies, as well as the roundtrip latency of sending and receiving the ICMP messages to the destination. From this information a user can determine the distance between themselves and the origin server, both in terms of number of network hops and time.

Traceroute and MTR

As we’ve just outlined, the output provided by ping is helpful but relatively simple. It does provide some useful information, but we will generally want to follow up with a traceroute to learn more about the specific path to a given destination. Similar to ping, a traceroute starts by sending an ICMP echo request. However, it handles TTL a bit differently. You can learn more about why that is the case in our Learning Center, but the important takeaway is that this is how traceroute is able to map and capture the IP address of each unique hop on the network path. This output makes traceroute an incredibly powerful tool for understanding not only whether a machine can connect to another, but also how it will get there!

And finally, we’ll cover MTR. We’ve grouped traceroute and MTR together because they operate in an extremely similar fashion. In short, the output of MTR provides everything traceroute can, plus additional aggregate statistics for each unique hop. MTR also runs until explicitly stopped, allowing users to receive a statistical average for each hop on the path.
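The TTL mechanic behind traceroute can be modeled with a toy simulation (hypothetical hop names, no real networking): probes are sent with increasing TTLs, and whichever hop decrements the TTL to zero reveals itself with a "time exceeded" error, until one probe finally reaches the destination:

```python
def probe(path, destination, ttl):
    """Simulate one probe: each router on the path decrements the TTL;
    if it hits 0 before the destination, that hop answers with an
    ICMP 'time exceeded' error instead of forwarding."""
    for hop in path:
        ttl -= 1
        if ttl == 0 and hop != destination:
            return hop, "time-exceeded"
    return destination, "echo-reply"

def traceroute(path, destination, max_hops=64):
    """Send probes with TTL 1, 2, 3, ... to discover each hop in order."""
    discovered = []
    for ttl in range(1, max_hops + 1):
        hop, kind = probe(path, destination, ttl)
        discovered.append(hop)
        if kind == "echo-reply":
            break
    return discovered
```

Running `traceroute(["cloudflare-edge", "cloudflared", "app-server"], "app-server")` in this model discovers the hops in order, mirroring the three-hop lab output shown below.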

Checking connectivity to the origin

Now that we’ve had a quick refresher, let’s say I cannot connect to my private application server. With ICMP support enabled on my Zero Trust account, I could run a traceroute to see if the server is online.

Here is a simple example from one of our lab environments:

Then, if my server is online, traceroute should output something like the following:

traceroute -I
traceroute to (, 64 hops max, 72 byte packets
 1 (  20.782 ms  12.070 ms  15.888 ms
 2 (  31.508 ms  30.657 ms  29.478 ms
 3 (  40.158 ms  55.719 ms  27.603 ms

Let’s examine this a bit deeper. Here, the first hop is the Cloudflare data center where my Cloudflare WARP device is connected via our Anycast network. Keep in mind this IP may look different depending on your location. The second hop will be the server running cloudflared. And finally, the last hop is my application server.

Conversely, if I could not connect to my app server I would expect traceroute to output the following:

traceroute -I
traceroute to (, 64 hops max, 72 byte packets
 1 (  20.782 ms  12.070 ms  15.888 ms
 2  * * *
 3  * * *

In the example above, this means the ICMP echo requests are not reaching cloudflared. To troubleshoot, I will first make sure cloudflared is running by checking the status of the Tunnel in the Zero Trust dashboard. Then I will check whether the Tunnel has a route to the destination IP, which can be found in the Routes column of the Tunnels table in the dashboard. If it does not, I will add a route to my Tunnel and see if this changes the output of my traceroute.

Once I have confirmed that cloudflared is running and the Tunnel has a route to my app server, traceroute will show the following:

traceroute -I
traceroute to (, 64 hops max, 72 byte packets
 1 (  20.782 ms  12.070 ms  15.888 ms
 2 (  31.508 ms  30.657 ms  29.478 ms
 3  * * *

However, it looks like we still can’t quite reach the application server. This means the ICMP echo requests reached cloudflared, but my application server isn’t returning echo replies. Now I can narrow the problem down to my application server, or to the communication between cloudflared and the app server. Perhaps the machine needs to be rebooted or there is a firewall rule in place, but either way we have what we need to start troubleshooting the last hop. With ICMP support, we now have many network tools at our disposal to troubleshoot connectivity end-to-end.

Note that the route from cloudflared to the origin is always shown as a single hop, even if there are one or more routers between the two. This is because cloudflared creates its own echo request to the origin instead of forwarding the original packets. In the next section we will explain the technical reason behind this.

What makes ICMP traffic unique?

A few quarters ago, Cloudflare Zero Trust extended end-to-end support to UDP as well. Since UDP and ICMP are both datagram-based protocols, within the Cloudflare network we can reuse the same infrastructure to proxy both. To do this, we send the individual datagrams for either protocol as QUIC datagrams over the QUIC connection between Cloudflare and the cloudflared instances within your network.

With UDP, we establish and maintain a session per client/destination pair, such that we are able to send only the UDP payload and a session identifier in datagrams. In this way, we don’t need to send the IP and port to which the UDP payload should be forwarded with every single packet.
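To make the contrast concrete, here is a hypothetical framing sketch. The 16-byte session ID and the overall layout are illustrative assumptions for this post, not Cloudflare's actual wire format: a registered UDP session lets each datagram carry only (session ID, payload), because the destination IP and port were recorded once at session setup:

```python
SESSION_ID_LEN = 16  # assumed size for illustration only

def encode_udp_datagram(session_id: bytes, payload: bytes) -> bytes:
    """Session mode: only the session ID and UDP payload go on the wire."""
    assert len(session_id) == SESSION_ID_LEN
    return session_id + payload

def decode_udp_datagram(datagram: bytes, sessions: dict):
    """Recover the forwarding target from the session table rather than
    from per-packet headers."""
    sid, payload = datagram[:SESSION_ID_LEN], datagram[SESSION_ID_LEN:]
    dest_ip, dest_port = sessions[sid]  # looked up once at session setup
    return dest_ip, dest_port, payload
```

For ICMP, as the next paragraph explains, there is no session table at all: the entire IP packet travels inside the datagram, so the receiver can read the destination straight from the IP header.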

However, with ICMP we decided that establishing a session like this is far too much overhead, given that typically only a handful of ICMP packets are exchanged between endpoints. Instead, we send the entire IP packet (with the ICMP payload inside) as a single datagram.

What this means is that cloudflared can read the destination of the ICMP packet from the IP header it receives. While this conveys the eventual destination of the packet to cloudflared, there is still work to be done to actually send the packet. Cloudflared cannot simply send out the IP packet it receives without modification, because the source IP in the packet is still the original client IP, and not a source that is routable to the cloudflared instance itself.

To receive ICMP echo replies in response to the ICMP packets it forwards, cloudflared must apply a source NAT to the packet. This means that when cloudflared receives an IP packet, it must complete the following:

  • Read the destination IP address of the packet
  • Strip off the IP header to get the ICMP payload
  • Send the ICMP payload to the destination, meaning the source address of the ICMP packet will be the IP of a network interface to which cloudflared can bind
  • When cloudflared receives replies on this address, it must rewrite the destination address of the received packet (destination because the direction of the packet is reversed) to the original client source address
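The first two steps above can be sketched in a few lines of Python. This is a simplified parser that assumes a well-formed IPv4 packet and skips all validation, purely to show where the destination and the ICMP payload live:

```python
import ipaddress

def split_ipv4_packet(packet: bytes):
    """Steps 1-2 above: read the destination IP from the fixed header
    offsets, then strip the IP header to recover the ICMP payload."""
    ihl = (packet[0] & 0x0F) * 4                    # header length in bytes
    dst = str(ipaddress.IPv4Address(packet[16:20])) # destination address field
    return dst, packet[ihl:]
```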

Network Address Translation like this is done all the time for TCP and UDP, but is much easier in those cases because ports can be used to disambiguate cases where the source and destination IPs are the same. Since ICMP packets do not have ports associated with them, we needed to find a way to map packets received from the upstream back to the original source which sent cloudflared those packets.

For example, imagine that two clients both send an ICMP echo request to the same destination. As we previously outlined, cloudflared must rewrite the source IPs of these packets to a source address to which it can bind. In this scenario, when the echo replies come back, their IP headers will be identical: the source is the shared destination and the destination is cloudflared’s own IP. So, how can cloudflared determine which reply needs to have its destination rewritten back to the first client and which to the second?

To solve this problem, we use fields of the ICMP packet to track packet flows, in the same way that ports are used in TCP/UDP NAT. The field we use for this purpose is the echo identifier. When an echo request is received, conformant ICMP endpoints return an echo reply with the same identifier that was received in the request. This means we can forward one client’s packet with echo ID 23 and the other’s with echo ID 45, and when replies come back carrying IDs 23 and 45, we know which reply corresponds to which original source.
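A toy version of this NAT table, with the echo ID standing in for the port that TCP/UDP NAT would use, might look like the following (illustrative only; the real cloudflared implementation is more involved):

```python
class IcmpNat:
    """Toy source-NAT table: the echo ID on the wire identifies the flow,
    the way a port would in TCP/UDP NAT."""
    def __init__(self):
        self._flows = {}  # echo ID used on the wire -> original client IP

    def outbound(self, client_ip: str, echo_id: int) -> int:
        """Record the flow when forwarding an echo request."""
        self._flows[echo_id] = client_ip
        return echo_id

    def inbound(self, echo_id: int) -> str:
        """A conformant endpoint echoes the ID back unchanged, so the
        reply's echo ID identifies the original client."""
        return self._flows[echo_id]
```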

Of course this strategy only works for ICMP echo requests, which make up a relatively small percentage of the available ICMP message types. For security reasons, however, and owing to the fact that these message types are sufficient to implement the ubiquitous ping and traceroute functionality that we’re after, these are the only message types we currently support. We’ll talk through the security reasons for this choice in the next section.

How to proxy ICMP without elevated permissions

Generally, applications send ICMP packets through raw sockets. A raw socket gives the application control over the IP header, so it requires elevated privileges to open; by contrast, for TCP and UDP sockets the operating system adds the IP header on send and removes it on receive. To adhere to security best practices, we don’t want to run cloudflared with additional privileges, so we needed a better solution. We found inspiration in the ping utility, which you’ll note can be run by any user without elevated permissions. So how does ping send ICMP echo requests and listen for echo replies as a normal user program? The answer is less satisfying than you might hope: it depends (on the platform). And since cloudflared supports all the following platforms, we needed to answer this question for each.


On Linux, ping opens a datagram socket for the ICMP protocol with the syscall socket(PF_INET, SOCK_DGRAM, IPPROTO_ICMP). This type of socket can only be opened if the group ID of the user running the program falls within the range in /proc/sys/net/ipv4/ping_group_range, but critically, the user does not need to be root. This socket is “special” in that it can only send ICMP echo requests and receive echo replies. Great! It also has a conceptual “port” associated with it, despite the fact that ICMP does not use ports: the identifier field of echo requests sent through the socket is rewritten to the “port” assigned to the socket. Reciprocally, echo replies received by the kernel with the same identifier are delivered to the socket which sent the request.
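For illustration, here is a hedged Python sketch of the same idea. The socket call mirrors ping's; actually sending requires the caller's group ID to fall inside ping_group_range, so the pure packet helpers are kept separate from the socket itself:

```python
import socket
import struct

def open_ping_socket() -> socket.socket:
    """Linux unprivileged ICMP socket; opening it succeeds only when the
    caller's group ID is inside /proc/sys/net/ipv4/ping_group_range
    (no root needed)."""
    return socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_ICMP)

def echo_request(seq: int) -> bytes:
    """type=8 (echo request), code=0. Checksum and identifier are left
    zero: on this socket type the kernel fixes up the checksum and
    rewrites the ID to the socket's assigned 'port'."""
    return struct.pack("!BBHHH", 8, 0, 0, 0, seq) + b"payload"

def is_echo_reply(data: bytes, seq: int) -> bool:
    ptype, code, _csum, _ident, got_seq = struct.unpack("!BBHHH", data[:8])
    return ptype == 0 and code == 0 and got_seq == seq
```

A plausible usage, where permitted, would be `sock = open_ping_socket()`, then `sock.sendto(echo_request(1), ("127.0.0.1", 0))` and checking the response with `is_echo_reply(sock.recv(1024), 1)`.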

Therefore, on Linux cloudflared is able to perform source NAT for ICMP packets simply by opening a unique socket per source IP address. This rewrites the identifier field and source address of the request. Replies are delivered to the same socket, meaning cloudflared can easily rewrite the destination IP address (destination because the packets are now flowing back to the client) and echo identifier back to the original values received from the client.


On Darwin (the UNIX-based core of macOS), things are similar in that we can open an unprivileged ICMP socket with the same syscall, socket(PF_INET, SOCK_DGRAM, IPPROTO_ICMP). However, there is an important difference: on Darwin the kernel does not allocate a conceptual “port” for this socket, and thus does not rewrite the echo ID when sending echo requests as it does on Linux. Further, and more importantly for our purposes, the kernel does not use the echo identifier to demultiplex ICMP echo replies to the socket which sent the corresponding request. This means that on macOS we effectively need to perform the echo ID rewriting manually. In practice, when cloudflared receives an echo request on macOS, it must choose an echo ID which is unique for the destination. Cloudflared then adds a key of (chosen echo ID, destination IP) to a mapping it maintains, with a value of (original echo ID, original source IP). Cloudflared rewrites the echo ID in the echo request packet to the one it chose and forwards it to the destination. When it receives a reply, it uses the source IP address and echo ID to look up the original client address and echo ID, rewriting both the echo ID and the destination address in the reply packet before forwarding it back to the client.
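A simplified model of this bookkeeping (illustrative only; the real implementation must also handle expiry, concurrency, and ID exhaustion) could look like:

```python
class DarwinIcmpNat:
    """Sketch of the manual echo-ID rewriting described for macOS:
    (chosen echo ID, destination IP) -> (original echo ID, original source IP)."""
    def __init__(self):
        self._mapping = {}
        self._next_id = {}  # per-destination counter to keep chosen IDs unique

    def rewrite_request(self, src_ip: str, dst_ip: str, echo_id: int) -> int:
        """Pick an ID unique for this destination, remember the original
        (ID, client) pair, and return the ID to put on the wire."""
        chosen = self._next_id.get(dst_ip, 0)
        self._next_id[dst_ip] = (chosen + 1) % 0x10000
        self._mapping[(chosen, dst_ip)] = (echo_id, src_ip)
        return chosen

    def rewrite_reply(self, reply_src_ip: str, chosen_id: int):
        """Replies arrive from the destination; restore the original
        echo ID and client address for the return trip."""
        return self._mapping[(chosen_id, reply_src_ip)]
```

Note how two clients pinging the same destination with the same original echo ID still get distinct IDs on the wire, which is exactly the collision the Linux kernel resolves for us but Darwin does not.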


Finally, we arrive at Windows, which conveniently provides a Win32 API, IcmpSendEcho, that sends an echo request and returns the echo reply, a timeout, or an error. For ICMPv6 we just had to use Icmp6SendEcho. The APIs are in C, but cloudflared can call them through cgo without a problem. If you also need to call these APIs in a Go program, check out our wrapper for inspiration.

And there you have it! That’s how we built the most exciting ping release since 1983. Overall, we’re thrilled to announce this new feature and can’t wait to get your feedback on ways we can continue improving our implementation moving forward.

What’s next

Support for these ICMP-based utilities is just the beginning of how we’re thinking about improving our Zero Trust administrator experience. Our goal is to continue providing tools which make it easy to identify issues within the network that impact connectivity and performance.

Looking forward, we plan to add more dials and knobs for observability with announcements like Digital Experience Monitoring across our Zero Trust platform to help users proactively monitor and stay alert to changing network conditions. In the meantime, try applying Zero Trust controls to your private network for free by signing up today.

CIO Week 2023 recap

Post Syndicated from James Chang original https://blog.cloudflare.com/cio-week-2023-recap/

In our Welcome to CIO Week 2023 post, we talked about wanting to start the year by celebrating the work Chief Information Officers do to keep their organizations safe and productive.

Over the past week, you learned about announcements addressing all facets of your technology stack – including new services, betas, strategic partnerships, third party integrations, and more. This recap blog summarizes each announcement and labels what capability is generally available (GA), in beta, or on our roadmap.

We delivered on critical capabilities requested by our customers – such as even more comprehensive phishing protection and deeper integrations with the Microsoft ecosystem. Looking ahead, we also described our roadmap for emerging technology categories like Digital Experience Monitoring and our vision to make it exceedingly simple to route traffic from any source to any destination through Cloudflare’s network.

Everything we launched is designed to help CIOs accelerate their pursuit of digital transformation. In this blog, we organized our announcement summaries based on the three feelings we want CIOs to have when they consider partnering with Cloudflare:

  1. CIOs now have a simpler roadmap to Zero Trust and SASE: We announced new capabilities and tighter integrations that make it easier for organizations to adopt Zero Trust security best practices and move towards aspirational architectures like Secure Access Service Edge (SASE).
  2. CIOs have access to the right technology and channel partners: We announced integrations and programming to help organizations access the right expertise to modernize IT and security at their own pace with the technologies they already use.
  3. CIOs can streamline a multi-cloud strategy with ease: We announced new ways to connect, secure, and accelerate traffic across diverse cloud environments.

Thank you for following CIO Week, Cloudflare’s first of many Innovation Weeks in 2023. It can be hard to keep up with our pace of innovation sometimes, but we hope that reading this blog and registering for our recap webinar will help!

If you want to speak with us about how to modernize your IT and security and make life easier for your organization’s CIO, fill out the form here.

Simplifying your journey to Zero Trust and SASE

Securing access
These blog posts are focused on making it faster, easier, and safer to connect any user to any application with the granular controls and comprehensive visibility needed to achieve Zero Trust.

Blog Summary
Beta: Introducing Digital Experience Monitoring Cloudflare Digital Experience Monitoring will be an all-in-one dashboard that helps CIOs understand how critical applications and Internet services are performing across their entire corporate network. Sign up for beta access.
Beta: Weave your own global, private, virtual Zero Trust network on Cloudflare with WARP-to-WARP With a single click, any device running Cloudflare’s device client, WARP, in your organization can reach any other device running WARP over a private network. Sign up for beta access.
GA: New ways to troubleshoot Cloudflare Access ‘blocked’ messages Investigate ‘allow’ or ‘block’ decisions based on how a connection was made with the same level of ease that you can troubleshoot user identity within Cloudflare’s Zero Trust platform.
Beta: One-click data security for your internal and SaaS applications Secure sensitive data by running application sessions in an isolated browser and control how users interact with sensitive data – now with just one click. Sign up for beta access.
GA: Announcing SCIM support for Cloudflare Access & Gateway Cloudflare’s ZTNA (Access) and SWG (Gateway) services now support the System for Cross-domain Identity Management (SCIM) protocol, making it easier for administrators to manage identity records across systems.
GA: Cloudflare Zero Trust: The Most Exciting Ping Release Since 1983 Cloudflare Zero Trust administrators can use familiar debugging tools that use the ICMP protocol (like Ping, Traceroute, and MTR) to test connectivity to private network destinations.

Threat defense
These blog posts are focused on helping organizations filter, inspect, and isolate traffic to protect users from phishing, ransomware, and other Internet threats.

Blog Summary
GA: Email Link Isolation: your safety net for the latest phishing attacks Email Link Isolation is your safety net for the suspicious links that end up in inboxes and that users may click. This added protection turns Cloudflare Area 1 into the most comprehensive email security solution when it comes to protecting against phishing attacks.
GA: Bring your own certificates to Cloudflare Gateway Administrators can use their own custom certificates to apply HTTP, DNS, CASB, DLP, RBI and other filtering policies.
GA: Announcing Custom DLP profiles Cloudflare’s Data Loss Prevention (DLP) service now offers the ability to create custom detections, so that organizations can inspect traffic for their most sensitive data.
GA: Cloudflare Zero Trust for Managed Service Providers Learn how the U.S. Federal Government and other large Managed Service Providers (MSPs) are using Cloudflare’s Tenant API to apply security policies like DNS filtering across the organizations they manage.

Secure SaaS environments
These blog posts are focused on maintaining consistent security and visibility across SaaS application environments, in particular to protect leaks of sensitive data.

Blog Summary
Roadmap: How Cloudflare CASB and DLP work together to protect your data Cloudflare Zero Trust will introduce capabilities between our CASB and DLP services that will enable administrators to peer into the files stored in their SaaS applications and identify sensitive data inside them.
Roadmap: How Cloudflare Area 1 and DLP work together to protect data in email Cloudflare is combining capabilities from Area 1 Email Security and Data Loss Prevention (DLP) to provide complete data protection for corporate email.
GA: Cloudflare CASB: Scan Salesforce and Box for security issues Cloudflare CASB now integrates with Salesforce and Box, enabling IT and security teams to scan these SaaS environments for security risks.

Accelerating and securing connectivity
In addition to product capabilities, blog posts in this section highlight speed and other strategic benefits that organizations realize with Cloudflare.

Blog Summary
Why do CIOs choose Cloudflare One? As part of CIO Week, we spoke with the leaders of some of our largest customers to better understand why they selected Cloudflare One. Learn six thematic reasons why.
Cloudflare is faster than Zscaler Cloudflare is 38-55% faster at delivering Zero Trust experiences than Zscaler, as validated by third party testing.
GA: Network detection and settings profiles for the Cloudflare One agent Cloudflare’s device client (WARP) can now securely detect pre-configured locations and route traffic based on the needs of the organization for that location.

Making Cloudflare easier to use
These blog posts highlight innovations across the Cloudflare portfolio, and outside the Zero Trust and SASE categories, to help organizations secure and accelerate traffic with ease.

Blog Summary
Preview any Cloudflare product today Enterprise customers can now start previewing non-contracted services with a single click in the dashboard.
GA: Improved access controls: API access can now be selectively disabled Cloudflare is making it easier for account owners to view and manage the access their users have on an account by allowing them to restrict API access to the account.
GA: Zone Versioning is now generally available Zone Versioning allows customers to safely manage zone configuration by versioning changes and choosing how and when to deploy those changes to defined environments of traffic.
Roadmap: Cloudflare Application Services for private networks: do more with the tools you already love Cloudflare is unlocking operational efficiencies by working on integrations between our Application Services to protect Internet-facing websites and our Cloudflare One platform to protect corporate networks.

Collaborating with the right partners

In addition to new programming for our channel partners, these blog posts describe deeper technical integrations that help organizations work more efficiently with the IT and security tools they already use.

Blog Summary
GA: Expanding our Microsoft collaboration: Proactive and automated Zero Trust security for customers Cloudflare announced four new integrations between Microsoft Azure Active Directory (Azure AD) and Cloudflare Zero Trust that reduce risk proactively. These integrated offerings increase automation, allowing security teams to focus on threats versus implementation and maintenance.
Beta: API-based email scanning Now, Microsoft Office 365 customers can deploy Area 1 cloud email security via Microsoft Graph API. This feature enables O365 customers to quickly deploy the Area 1 product via API, with onboarding through the Microsoft Marketplace coming in the near future.
GA: China Express: Cloudflare partners to boost performance in China for corporate networks China Express is a suite of offerings designed to simplify connectivity and improve performance for users in China and developed in partnership with China Mobile International and China Broadband Communications.
Beta: Announcing the Authorized Partner Service Delivery Track for Cloudflare One Cloudflare announced the limited availability of a new specialization track for our channel and implementation partners, designed to help develop their expertise in delivering Cloudflare One services.

Streamlining your multi-cloud strategy

These blog posts highlight innovations that make it easier for organizations to simply ‘plug into’ Cloudflare’s network and send traffic from any source to any destination.

Blog Summary
Beta: Announcing the Magic WAN Connector: the easiest on-ramp to your next generation network Cloudflare is making it even easier to get connected with the Magic WAN Connector: a lightweight software package you can install in any physical or cloud network to automatically connect, steer, and shape any IP traffic. Sign up for early access.
GA: Cloud CNI privately connects your clouds to Cloudflare Customers using Google Cloud Platform, Azure, Oracle Cloud, IBM Cloud, and Amazon Web Services can now open direct connections from their private cloud instances into Cloudflare.
Cloudflare protection for all your cardinal directions This blog post recaps how definitions of corporate network traffic have shifted and how Cloudflare One provides protection for all traffic flows, regardless of source or destination.

Expanding our Microsoft collaboration: proactive and automated Zero Trust security for customers

Post Syndicated from Abhi Das original https://blog.cloudflare.com/expanding-our-collaboration-with-microsoft-proactive-and-automated-zero-trust-security/

As CIOs navigate the complexities of stitching together multiple solutions, we are extending our partnership with Microsoft to create one of the best Zero Trust solutions available. Today, we are announcing four new integrations between Azure AD and Cloudflare Zero Trust that reduce risk proactively. These integrated offerings increase automation allowing security teams to focus on threats versus implementation and maintenance.

What is Zero Trust and why is it important?

Zero Trust is an overused term in the industry and creates a lot of confusion, so let’s break it down. Zero Trust architecture emphasizes the “never trust, always verify” approach. One way to think about it: in the traditional security perimeter or “castle and moat” model, you have access to all the rooms inside the building (e.g., apps) simply by having access to the main door (typically a VPN). In the Zero Trust model, you need to obtain access to each locked room (or app) individually rather than relying on access through the main door. Some key components of the Zero Trust model are:

  • Identity, e.g. Azure AD (who is requesting access)
  • Applications, e.g. an SAP instance or a custom app on Azure (what is being accessed)
  • Policies, e.g. Cloudflare Access rules (who can access which application)
  • Devices, e.g. a laptop managed by Microsoft Intune (the security posture of the endpoint requesting access)
  • Other contextual signals

Zero Trust is even more important today since companies of all sizes are faced with an accelerating digital transformation and an increasingly distributed workforce. Moving away from the castle and moat model, to the Internet becoming your corporate network, requires security checks for every user accessing every resource. As a result, all companies, especially those whose use of Microsoft’s broad cloud portfolio is increasing, are adopting a Zero Trust architecture as an essential part of their cloud journey.

Cloudflare’s Zero Trust platform provides a modern approach to authentication for internal and SaaS applications. Most companies likely have a mix of corporate applications – some that are SaaS and some that are hosted on-premise or on Azure. Cloudflare’s Zero Trust Network Access (ZTNA) product as part of our Zero Trust platform makes these applications feel like SaaS applications, allowing employees to access them with a simple and consistent flow. Cloudflare Access acts as a unified reverse proxy to enforce access control by making sure every request is authenticated, authorized, and encrypted.

Cloudflare Zero Trust and Microsoft Azure Active Directory

We have thousands of customers using Azure AD and Cloudflare Access as part of their Zero Trust architecture. Our partnership with Microsoft, announced last year, strengthened security without compromising performance for our joint customers. Cloudflare’s Zero Trust platform integrates with Azure AD, providing a seamless application access experience for your organization’s hybrid workforce.

As a recap, the integrations we launched solved two key problems:

  1. For on-premise legacy applications, Cloudflare’s participation as Azure AD secure hybrid access partner enabled customers to centrally manage access to their legacy on-premise applications using SSO authentication without incremental development. Joint customers now easily use Cloudflare Access as an additional layer of security with built-in performance in front of their legacy applications.
  2. For apps that run on Microsoft Azure, joint customers can integrate Azure AD with Cloudflare Zero Trust and build rules based on user identity, group membership, and Azure AD Conditional Access policies. Users will authenticate with their Azure AD credentials and connect to Cloudflare Access in just a few simple steps using Cloudflare's app connector, Cloudflare Tunnel, which can expose applications running on Azure. See our guide to install and configure Cloudflare Tunnel.

Recognizing Cloudflare’s innovative approach to Zero Trust and Security solutions, Microsoft awarded us the Security Software Innovator award at the 2022 Microsoft Security Excellence Awards, a prestigious classification in the Microsoft partner community.

But we aren’t done innovating. We listened to our customers’ feedback, and to address their pain points we are announcing several new integrations.

Microsoft integrations we are announcing today

The four new integrations we are announcing today are:

1. Per-application conditional access: Azure AD customers can use their existing Conditional Access policies in Cloudflare Zero Trust.


Azure AD allows administrators to create and enforce policies on both applications and users using Conditional Access. It provides a wide range of parameters that can be used to control user access to applications (e.g. user risk level, sign-in risk level, device platform, location, client apps, etc.). Cloudflare Access now supports Azure AD Conditional Access policies per application. This allows security teams to define their security conditions in Azure AD and enforce them in Cloudflare Access.

For example, customers might require tighter control for an internal payroll application and hence apply specific Conditional Access policies to it in Azure AD. For a general-information application such as an internal wiki, customers might enforce less stringent Conditional Access policies. In either case, both app groups and the relevant Azure AD Conditional Access policies can be plugged directly into Cloudflare Zero Trust seamlessly, without any code changes.
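To make the idea concrete, here is a sketch of what two such Access policies could look like as API payloads. This is illustrative only: the field names (`azure_ad_group`, `azure_ad_conditional_access`) and the IDs are assumptions for the example, not Cloudflare's documented schema.

```python
# Illustrative sketch only: the field names below are assumptions,
# not the documented Cloudflare Access API schema.

def build_access_policy(app_name, azure_group_id, conditional_access_id):
    """Build a hypothetical Access policy that requires both an Azure AD
    group and a specific Azure AD Conditional Access policy."""
    return {
        "name": f"{app_name} - restricted access",
        "decision": "allow",  # allow only when all requirements pass
        "include": [
            {"azure_ad_group": {"id": azure_group_id}},
        ],
        "require": [
            # Tighter apps (e.g. payroll) reference a stricter Conditional
            # Access policy; an internal wiki references a looser one.
            {"azure_ad_conditional_access": {"id": conditional_access_id}},
        ],
    }

payroll = build_access_policy("Payroll", "grp-finance", "ca-high-assurance")
wiki = build_access_policy("Wiki", "grp-everyone", "ca-baseline")
```

The point of the shape above is that the security conditions themselves live in Azure AD; the Access policy only references them per application.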

2. SCIM: Automatically synchronize Azure AD groups between Cloudflare Zero Trust and Azure AD, saving hundreds of hours for IT organizations.


Cloudflare Access policies can use Azure AD to verify a user’s identity and provide information about that user (e.g., first/last name, email, group membership, etc.). These user attributes are not constant and can change over time. If a user retains access to sensitive resources after they should have lost it, the consequences can be serious.

Often when user attributes change, an administrator needs to review and update all access policies that may include the user in question. This makes for a tedious process and an error-prone outcome.

The SCIM (System for Cross-domain Identity Management) specification ensures that user identities across entities using it are always up-to-date. We are excited to announce that joint customers of Azure AD and Cloudflare Access can now enable SCIM user and group provisioning and deprovisioning. It will accomplish the following:

  • The IdP policy group selectors are now pre-populated with Azure AD groups and will remain in sync. Any changes made to the policy group will instantly reflect in Access without any overhead for administrators.

  • When a user is deprovisioned on Azure AD, all the user’s access is revoked across Cloudflare Access and Gateway. This ensures that change is made in near real time thereby reducing security risks.
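Under the hood, SCIM changes travel as small JSON PATCH documents defined by RFC 7644. The sketch below shows roughly what a deprovisioning and a group-membership update look like on the wire; the exact attributes an identity provider sends can vary, so treat this as a shape, not Azure AD's literal payload.

```python
import json

SCIM_PATCH_SCHEMA = "urn:ietf:params:scim:api:messages:2.0:PatchOp"

def build_deprovision_patch():
    """SCIM 2.0 (RFC 7644) PATCH body that marks a user inactive. An IdP
    sends a request of roughly this shape when a user is deprovisioned."""
    return {
        "schemas": [SCIM_PATCH_SCHEMA],
        "Operations": [
            {"op": "replace", "path": "active", "value": False},
        ],
    }

def build_group_member_patch(user_id, add=True):
    """PATCH body that adds or removes a user from a SCIM group, which is
    what keeps IdP policy group selectors in sync."""
    return {
        "schemas": [SCIM_PATCH_SCHEMA],
        "Operations": [
            {
                "op": "add" if add else "remove",
                "path": "members",
                "value": [{"value": user_id}],
            },
        ],
    }

body = json.dumps(build_deprovision_patch())
```

Because the receiving side applies these patches as they arrive, group changes and revocations propagate in near real time without an administrator touching any policy.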

3. Risky user isolation: Helps joint customers add an extra layer of security by isolating high-risk users (based on Azure AD signals), such as contractors, in browser-isolated sessions via Cloudflare’s Remote Browser Isolation (RBI) product.


Azure AD classifies users into low, medium, and high risk based on the many data points it analyzes. Users may move from one risk group to another based on their activities. Users can be deemed risky for many reasons, such as the nature of their employment (e.g., contractors), risky sign-in behavior, credential leaks, etc. While these users are high-risk, there is a low-risk way to provide access to resources/apps while the user is assessed further.

We now support integrating Azure AD groups with Cloudflare Browser Isolation. When a user is classified as high-risk on Azure AD, we use this signal to automatically isolate their traffic with our Azure AD integration. This means a high-risk user can access resources through a secure and isolated browser. If the user were to move from high-risk to low-risk, the user would no longer be subjected to the isolation policy applied to high-risk users.

4. Secure joint Government Cloud customers: Helps Government Cloud customers achieve better security with centralized identity and access management via Azure AD, plus an additional layer of security by connecting their applications to the Cloudflare global network rather than opening them up to the whole Internet.

Via the Secure Hybrid Access (SHA) program, Government Cloud (‘GCC’) customers will soon be able to integrate Azure AD with Cloudflare Zero Trust and build rules based on user identity, group membership, and Azure AD Conditional Access policies. Users will authenticate with their Azure AD credentials and connect to Cloudflare Access in just a few simple steps using Cloudflare Tunnel, which can expose applications running on Microsoft Azure.

“Digital transformation has created a new security paradigm resulting in organizations accelerating their adoption of Zero Trust. The Cloudflare Zero Trust and Azure Active Directory joint solution has been a growth enabler for Swiss Re by easing Zero Trust deployments across our workforce allowing us to focus on our core business. Together, the joint solution enables us to go beyond SSO to empower our adaptive workforce with frictionless, secure access to applications from anywhere. The joint solution also delivers us a holistic Zero Trust solution that encompasses people, devices, and networks.”
– Botond Szakács, Director, Swiss Re

“A cloud-native Zero Trust security model has become an absolute necessity as enterprises continue to adopt a cloud-first strategy. Cloudflare and Microsoft have jointly developed robust product integrations to help security and IT leaders prevent attacks proactively, dynamically control policy and risk, and increase automation in alignment with Zero Trust best practices.”
– Joy Chik, President, Identity & Network Access, Microsoft

Try it now

Interested in learning more about how our Zero Trust products integrate with Azure Active Directory? Take a look at this extensive reference architecture that can help you get started on your Zero Trust journey, then add the specific use cases above as required. Also, check out this webinar with Microsoft that highlights our joint Zero Trust solution and how you can get started.

What next

We are just getting started. We want to keep innovating so that the Cloudflare Zero Trust and Microsoft Security joint solution solves your problems. Please give us feedback on what else you would like us to build as you continue using this joint solution.

Zone Versioning is now generally available

Post Syndicated from Garrett Galow original https://blog.cloudflare.com/zone-versioning-ga/


Today we are announcing the general availability of Zone Versioning for enterprise customers. Zone Versioning allows you to safely manage zone configuration by versioning changes and choosing how and when to deploy those changes to defined environments of traffic. Previously announced as HTTP Applications, the capability has been redesigned based on testing and feedback to provide a seamless experience for customers looking to safely roll out configuration changes.

Problems with making configuration changes

There are two problems we have heard from customers that Zone Versioning aims to solve:

  1. How do I test changes to my zone safely?
  2. If I do end up making a change that impacts my traffic negatively, how can I quickly revert that change?

Customers have worked out various ways of solving these problems. For problem #1, customers will create staging zones that live on a different hostname, often taking the form staging.example.com, and make changes there first to ensure those changes will work when deployed to the production zone. When making more than one change this can become troublesome, as they now need to keep track of all the changes made in staging in order to apply the exact same set of changes to the production zone. It is also possible that something tested in staging never makes it to production yet is not rolled back, so the two environments drift apart in configuration.

For problem #2, customers often keep track of what changes were made and when they were deployed in a ticketing system like JIRA, so that in case of an incident an on-call engineer can more easily find the changes they may need to roll back by manually modifying the configuration of the zone. This requires that the on-call engineer can quickly get to the list of changes that were made.

Altogether, this means customers are more reluctant to make changes to configuration or turn on new features that may benefit them because they do not feel confident in the ability to validate the changes safely.

How Zone Versioning solves those problems

Zone Versioning provides two new fundamental aspects to managing configuration that allow a customer to safely test, deploy and rollback configuration changes: Versions and Environments.

Versions are independent sets of zone configuration. They can be created anytime from a previous version or the initial configuration of the zone and changes to one version will not affect another version. Initially, a version affects none of a zone’s traffic, so any changes made are safe by definition. When first enabling zone versioning, we create Version 1 that is based on the current configuration of the zone (referred to as the baseline configuration).

Zone Versioning is now generally available

From there any changes that you make to Version 1 will be safely stored and propagated to our global network, but will not affect any traffic. Making changes to a version is no different from before, just select the version to edit and modify the configuration of that feature as normal. Once you have made the set of changes desired for a given version, to deploy that version on live traffic in your zone, you will need to deploy the version to an Environment.

Environments are a way of mapping segments of your zone’s traffic to versions of configuration. Powered by our Ruleset Engine, which also powers Custom WAF Rules and Cache Rules, Environments give you the ability to create filters based on a wide range of parameters such as hostname, client IP, location, or cookie. When a version is applied to an Environment, any traffic matching the filter will use that version’s configuration.

By default, we create three environments to get started with:

  • Development – Applies to traffic sent with a specific cookie for development
  • Staging – Applies to traffic sent to Cloudflare’s staging IPs
  • Production – Applies to all traffic on the zone
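As a rough illustration, an environment's filter might be expressed in the same rules language used elsewhere on the Cloudflare platform. The expressions below are sketches: the hostname, cookie value, and IP range are made up, and the exact fields available to Zone Versioning environments may differ.

```
# A development environment keyed off a special cookie
http.cookie contains "zone_version=dev"

# A custom environment scoped to a staging hostname
http.host eq "staging.example.com"

# A canary environment for a specific client IP range
ip.src in {198.51.100.0/24}
```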

You can create additional environments or modify the pre-defined environments except for Production. Any newly created environment will begin in an unassigned state meaning traffic will fall back to the baseline configuration of the zone. In the above image, we have deployed Version 2 to both the Development and Staging environments. Once we have tested Version 2 in staging, then we can ‘Promote’ Version 2 to Production which means all traffic on the zone will receive the configuration in Version 2 except for Development and Staging traffic. If something goes wrong after deploying to Production, then we can use the ‘Rollback’ action to revert to the configuration of Version 1.

How promotion and rollbacks work

It is worth going into a bit more detail about how configuration changes, promotions, and rollbacks are realized in our global network. Whenever a configuration change is made to a version, we store that change in our system of record for the service and push that change to our global network so that it is available to be used at any time.

Importantly, and unlike ordinary zone changes that take effect automatically, that change will not be used until the version is deployed to an environment that is receiving traffic. The same is true for when a version is promoted or rolled back between environments. Because all the configuration we need for a given version is already available in our global network, we only need to push a single, atomic change to tell our network that traffic matching the filter for a given environment should now use the newly defined configuration version.

This means that promotions and, more importantly, rollbacks occur as quickly as you are used to with any configuration change in Cloudflare. No need to wait five or ten minutes for us to roll back a bad deployment; if something goes wrong you can return to a last known good configuration in seconds. Slow rollbacks can make ongoing incidents drag on, leading to extended customer impact, so the ability to quickly execute a rollback was a critical capability.
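Conceptually, the mechanism resembles an atomic pointer swap: every version's full configuration is already present on the network, and a promotion or rollback only repoints which version an environment uses. A minimal Python sketch of the idea (a toy model, not Cloudflare's actual implementation):

```python
class Environment:
    """Toy model: all versions are pre-propagated; deploys and rollbacks
    only swap a single reference, so they take effect immediately."""

    def __init__(self, baseline):
        self.versions = {1: baseline}  # every version's full config, already "on the network"
        self.live = 1                  # pointer to the active version
        self.previous = None

    def save(self, version_id, config):
        # Storing a version propagates config but changes no live traffic.
        self.versions[version_id] = config

    def promote(self, version_id):
        # One atomic change: repoint live traffic at the new version.
        self.previous, self.live = self.live, version_id

    def rollback(self):
        # The last known good config is already everywhere, so this is instant.
        if self.previous is not None:
            self.live, self.previous = self.previous, self.live

prod = Environment({"waf": "off"})
prod.save(2, {"waf": "on"})   # safe: stored but not live
prod.promote(2)               # version 2 goes live in one step
prod.rollback()               # instantly back to version 1
```

The expensive work (propagating configuration) happens before the deploy, which is why the deploy itself is a single cheap operation.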

Get started with Zone Versioning today

Enterprise Customers can get started with Zone Versioning today for their zones on the Cloudflare dashboard. Customers will need to be using the new Managed WAF rules in order to enable Zone Versioning. You can find more information about Zone Versioning in our Developer Docs.

Happy versioning!

API-based email scanning

Post Syndicated from Ayush Kumar original https://blog.cloudflare.com/api-based-email-scanning/


The landscape of email security is constantly changing. One aspect that remains consistent is the reliance on email as the starting point for the majority of threat campaigns. Attackers often start with a phishing campaign to gather employee credentials which, if successful, are used to exfiltrate data, siphon money, or perform other malicious activities. This threat remains ever present even as companies move their email to the cloud using providers like Microsoft 365 or Google Workspace.

In our pursuit to help build a better Internet and tackle online threats, Cloudflare offers email security via our Area 1 product to protect all types of email inboxes – from cloud to on-premise. The Area 1 product analyzes every email an organization receives and uses our threat models to assess whether the message poses a risk to the customer. For messages that are deemed malicious, the Area 1 platform will even prevent the email from landing in the recipient’s inbox, ensuring that there is no chance for the attempted attack to be successful.

We try to provide customers with the flexibility to deploy our solution in whatever way they find easiest. Continuing in this pursuit to make our solution as turnkey as possible, we are excited to announce our open beta for Microsoft 365 domain onboarding via the Microsoft Graph API. We know that domains onboarded via API offer quicker deployment times and more flexibility. This onboarding method is one of many, so customers can now deploy domains how they see fit without losing Area 1 protection.

Onboarding Microsoft 365 Domains via API

Cloudflare Area 1 provides customers with many deployment options. Whether it is Journaling + BCC (where customers send a copy of each email to Area 1), Inline/MX records (where another hop is added via MX records), or Secure Email Gateway Connectors (where Area 1 directly interacts with a SEG), Area 1 provides customers with flexibility in how they want to deploy our solution. However, we have always recommended that customers deploy using MX records.


Adding this extra hop and having domains be pointed to Area 1 allows the service to provide protection with sinkholing, making sure that malicious emails don’t reach the destination email inbox. However, we recognized that configuring Area 1 as the first hop (i.e. changing the MX records) may require sign-offs from other teams inside organizations and can add extra cycles. Organizations also have to wait for this inline change to propagate through DNS (known as DNS propagation time). We know our customers want to be protected ASAP while they make these necessary adjustments.

With Microsoft 365 onboarding, the process of adding protection requires fewer configuration steps and less waiting time. We now use the Microsoft Graph API to evaluate all messages associated with a domain. This allows greater flexibility for operations teams deploying Area 1.

For example, a customer of Area 1 who is heavily involved in M&A transactions due to the nature of their industry benefits from being able to deploy Area 1 quickly using the Microsoft API. Before API onboarding, IT teams spent time juggling the handover of various acquisition assets. Assigning new access rights, handing over ownership, and other tasks took time to execute, leaving mailboxes unsecured. Now, when the customer acquires a new entity, they can use API onboarding to quickly add protection for the domains they just acquired. This allows them to protect the email addresses associated with the new domain while they work on completing the other tasks at hand. How our API onboarding process works can be seen below.


Once we are authorized to read incoming messages from Microsoft 365, we will start processing emails and firing detections on suspicious emails. This new onboarding process is significantly faster and only requires a few clicks to get started.
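For the curious, reading mail through the Microsoft Graph API looks roughly like the sketch below. The `/users/{mailbox}/messages` endpoint is a real Graph endpoint, but the token acquisition is elided and the exact permissions and query options Area 1 uses are not public, so treat this as an illustration.

```python
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def build_messages_request(mailbox, access_token, top=10):
    """Prepare a Microsoft Graph request listing a mailbox's most recent
    messages. Actually sending it requires an app registration with the
    Mail.Read application permission (token acquisition elided here)."""
    url = f"{GRAPH}/users/{mailbox}/messages?$top={top}&$select=subject,from"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {access_token}"}
    )

req = build_messages_request("alice@example.com", "ACCESS_TOKEN")
# urllib.request.urlopen(req) would return a JSON page of messages
```

Because the scan happens via API reads rather than mail routing, no MX record change (and therefore no DNS propagation wait) is involved.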

To start the process, choose which domain you would like to onboard via API. Then within the UI, you can navigate to “Domains & Routing” within the settings. After adding a new domain and choosing API scan, you can follow our setup wizard to authorize Area 1 to start reading messages.


Within a few minutes of authorization, your organization will now be protected by Area 1.


Looking Ahead

This onboarding process is part of our continual efforts to provide customers with best-in-class email protection. With our API onboarding we provide customers with increased flexibility to deploy our solution. Looking forward, our Microsoft 365 API onboarding opens the door for other capabilities.

Our team is now looking to add the ability to retroactively scan emails that were sent before Area 1 was installed. This provides the opportunity for new customers to clean up any old emails that could still pose a risk for the organization. We are also looking to provide more levers for organizations who want to have more control on which mailboxes are scanned with Area 1. Soon customers will be able to designate within the UI which mailboxes will have their incoming email scanned by Area 1.

We also currently limit the deployment type of each domain to one type (i.e. a domain can either be onboarded using MX records or API). However, we are now looking at providing customers with the ability to do hybrid deployments using both API + MX records. This combined approach provides both the greatest flexibility and the maximum coverage.

There are many things in the pipeline that the Area 1 team is looking to bring to customers in 2023 and this open beta lets us build these new capabilities.

All customers can join the open beta, so if you are interested in onboarding a new domain using this method, follow the steps above and get Area 1 protection on your Microsoft 365 domains.

New: Scan Salesforce and Box for security issues

Post Syndicated from Alex Dunbrack original https://blog.cloudflare.com/casb-adds-salesforce-and-box-integrations/


Today, we’re sharing the release of two new SaaS integrations for Cloudflare CASB – Salesforce and Box – to help CIOs, IT leaders, and security admins swiftly identify looming security issues across the exact types of tools that house business-critical data.

Recap: What is Cloudflare CASB?

Released in September, Cloudflare’s API CASB has already proven to organizations from around the world that security risks – like insecure settings and inappropriate file sharing – can often exist across the friendly SaaS apps we all know and love, and indeed pose a threat. By giving operators a comprehensive view of the issues plaguing their SaaS environments, Cloudflare CASB has allowed them to effortlessly remediate problems in a timely manner before they can be leveraged against them.

But as both we and other forward-thinking administrators have come to realize, it’s not always Microsoft 365, Google Workspace, and business chat tools like Slack that contain an organization’s most sensitive information.

Scan Salesforce with Cloudflare CASB

Salesforce, the first Software-as-a-Service and a sprawling, intricate, hard-to-contain Customer Relationship Management (CRM) platform, gives workforces a flexible hub from which they can do just as the software describes: manage customer relationships. Whether it be tracking deals and selling opportunities, managing customer conversations, or storing contractual agreements, Salesforce has truly become the ubiquitous solution for organizations looking for a way to manage every customer-facing interaction they have.

This reliance, however, also makes Salesforce a business data goldmine for bad actors.


With CASB’s new integration for Salesforce, IT and security operators will be able to quickly connect their environments and scan them for the kind of issues putting their sensitive business data at risk. Spot uploaded files that have been shared publicly with anyone who has the link. Identify default permissions that give employees access to records that should be need-to-know only. You can even see employees who are sending out emails as other Salesforce users!

Using this new integration, we’re excited to help close the security visibility gap for yet another SaaS app serving as the lifeblood for teams out in the field making business happen.

Scan Box with Cloudflare CASB

Box is the leading Content Cloud that enables organizations to accelerate business processes, power workplace collaboration, and protect their most valuable information, all while working with a best-of-breed enterprise IT stack like Cloudflare.

A platform used to store everything – from contracts and financials to product roadmaps and employee records – Box has given collaborative organizations a single place to convene and share information that, in a growing remote-first world, has no better place to be stored.

So where are disgruntled employees and people with malicious intent going to look when they want to unveil private business files?


With Cloudflare CASB’s new integration for Box, security and IT teams alike can now link their admin accounts and scan them for under-the-radar security issues that leave them prone to compromise and data exfiltration. In addition to Box’s built-in content and collaboration security, Cloudflare CASB gives you another added layer of protection where you can catch files and folders shared publicly or with users outside your organization. By providing security admins with a single view to see employees who aren’t following security policies, we make it harder for bad actors to get inside and do damage.

With Cloudflare’s status as an official Box Technology Partner, we’re looking forward to offering both Cloudflare and Box users a robust, yet easy-to-use toolset that can help stop pressing, real-world data security incidents right in their tracks.

“Organizations today need products that are inherently secure to support employees working from anywhere,” said Areg Alimian, Head of Security Products at Box. “At Box, we continuously strive to improve our integrations with third-party apps so that it’s easier than ever for customers to use Box alongside best-in-class solutions. With today’s integration with Cloudflare CASB, we enable our joint customers to have a single pane of glass view allowing them to consistently enforce security policies and protect leakage of sensitive information across all their apps.”

Taking action on your business data security

Salesforce and Box are certainly not the only SaaS applications managing this type of sensitive organizational data. At Cloudflare, we strive to make our products as widely compatible as possible so that organizations can continue to place their trust and confidence in us to help keep them secure.

Today, Cloudflare CASB supports integrations with Google Workspace, Microsoft 365, Slack, GitHub, Salesforce, and Box, with a growing list of other critical applications on their way, so if there’s one in particular you’d like to see soon, let us know!

For those not already using Cloudflare Zero Trust, don’t hesitate to get started today – see the platform yourself with 50 free seats by signing up here, then get in touch with our team here to learn more about how Cloudflare CASB can help your organization lock down its SaaS apps.

How Cloudflare Area 1 and DLP work together to protect data in email

Post Syndicated from Ayush Kumar original https://blog.cloudflare.com/dlp-area1-to-protect-data-in-email/


Threat prevention is not limited to keeping external actors out; it is also about keeping sensitive data in. Most organizations do not realize how much confidential information resides within their email inboxes. Employees handle vast amounts of sensitive data on a daily basis, such as intellectual property, internal documentation, PII, or payment information, and often share this information internally via email, making email one of the largest stores of confidential information within a company. It comes as no shock that organizations worry about the accidental or malicious egress of sensitive data and often address these concerns by instituting strong Data Loss Prevention policies. Cloudflare makes it easy for customers to manage the data in their email inboxes with Area 1 Email Security and Cloudflare One.

Cloudflare One, our SASE platform that delivers network-as-a-service (NaaS) with Zero Trust security natively built in, connects users to enterprise resources and offers a wide variety of ways to secure corporate traffic, including the inspection of data transferred to your corporate email. Area 1 Email Security, as part of our composable Cloudflare One platform, delivers the most complete data protection for your inbox and offers a cohesive solution when combined with additional services such as Data Loss Prevention (DLP). With the ability to easily adopt and implement Zero Trust services as needed, customers have the flexibility to layer on defenses based on their most critical use cases. In the case of Area 1 + DLP, the combination can collectively and preemptively address the most pressing use cases that represent high-risk areas of exposure for organizations. Combining these products provides defense in depth for your corporate data.

Preventing egress of cloud email data via HTTPS

Email provides a readily available outlet for corporate data, so why let sensitive data reach email in the first place? An employee can accidentally attach an internal file rather than a public white paper in a customer email, or worse, attach a document with the wrong customers’ information to an email.

With Cloudflare Data Loss Prevention (DLP) you can prevent the upload of sensitive information, such as PII or intellectual property, to your corporate email. DLP is offered as part of Cloudflare One, which runs traffic from data centers, offices, and remote users through the Cloudflare network.  As traffic traverses Cloudflare, we offer protections including validating identity and device posture and filtering corporate traffic.


Cloudflare One offers HTTP(S) filtering, enabling you to inspect and route traffic to your corporate applications. Cloudflare Data Loss Prevention (DLP) leverages the HTTP filtering abilities of Cloudflare One. You can apply rules to your corporate traffic and route traffic based on information in an HTTP request. There are a wide variety of options for filtering, such as domain, URL, application, HTTP method, and many more. You can use these options to segment the traffic you wish to scan with DLP. All of this is done with the performance of our global network and managed from one control plane.

You can apply DLP policies to corporate email applications, such as Google Workspace or Microsoft 365. As an employee attempts to upload an attachment to an email, the upload is inspected for sensitive data and then allowed or blocked according to your policy.
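To give a flavor of the kind of check such an inspection performs, here is a minimal detector for payment card numbers using the Luhn checksum. This is a deliberately simplified toy, not Cloudflare DLP's actual detection logic.

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum used by payment card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2            # double every second digit from the right
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def contains_card_number(text: str) -> bool:
    # Find 13-16 digit runs (allowing spaces/dashes) and Luhn-check them.
    for match in re.finditer(r"(?:\d[ -]?){13,16}", text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            return True
    return False
```

A real DLP engine layers many such detectors (plus context, proximity keywords, and validation) over every upload, but the basic block/allow decision follows the same pattern.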


Inside your corporate email, Area 1 extends core data protection principles in the following ways:

Enforcing data security between partners

With Cloudflare’s Area 1, you can also enforce strong TLS standards. Having TLS configured adds an extra layer of security, as it ensures that emails are encrypted, preventing attackers from reading sensitive information or changing the message if they intercept the email in transit (an on-path attack). This is especially useful for G Suite customers whose internal emails still go out over the open Internet in front of prying eyes, or for customers who have contractual obligations to communicate with partners over SSL/TLS.

Area 1 makes it easy to enforce SSL/TLS inspections. From the Area 1 portal, you can configure Partner Domains TLS by navigating to “Partner Domains TLS” within “Domains & Routing” and adding a partner domain with which you want to enforce TLS. If TLS is required, then all emails from that domain without TLS will be automatically dropped. Our enforcement requires strong TLS rather than best effort, making sure that all traffic is encrypted with strong ciphers and preventing a malicious attacker from decrypting any intercepted emails.
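"Strong TLS rather than best effort" means requiring a modern protocol version and cipher suite instead of accepting whatever the peer offers. As an analogy (not Area 1's implementation), the equivalent stance in Python's ssl module looks like this:

```python
import ssl

# Build a context that refuses legacy protocols outright.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Restrict to strong AEAD cipher suites; a peer offering anything
# weaker fails the handshake instead of silently downgrading.
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")
```

With a context like this, a connection to a mail server that only speaks TLS 1.0 or offers weak ciphers simply never completes, which is the same effect as Area 1 dropping non-TLS mail from a partner domain.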


Stopping passive email data loss

Organizations often forget that exfiltration also can be done without ever sending any email. Attackers who are able to compromise a company account are able to passively sit by, monitoring all communications and picking out information manually.

Once an attacker has reached this stage, it is incredibly difficult to know an account is compromised and what information is being tracked. Indicators like email volume, IP address changes, and others do not work since the attacker is not taking any actions that would cause suspicion. At Cloudflare, we have a strong thesis on preventing these account takeovers before they take place, so no attacker is able to fly under the radar.

To stop account takeovers before they happen, we place great emphasis on filtering emails that put employee credentials at risk. The most common attack vector used by malicious actors is the phishing email. Given its high impact in accessing confidential data when successful, it’s no shock that this is the go-to tool in the attacker’s toolkit. Phishing emails pose little threat to an inbox protected by Cloudflare’s Area 1 product. Area 1’s models assess whether a message is a suspected phishing email by analyzing its metadata. Anomalies the models detect, such as domain proximity (how close a domain is to the legitimate one) or the sentiment of the email, can quickly determine whether a message is legitimate. If Area 1 determines an email to be a phishing attempt, we automatically retract it before it reaches the recipient’s inbox, ensuring that the employee’s account remains uncompromised and cannot be used to exfiltrate data.
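Domain proximity can be illustrated with a plain edit-distance check. This is only a toy sketch with an arbitrary threshold; Area 1's actual models are far more sophisticated:

```python
# Toy illustration of domain proximity: flag sender domains within a small
# edit distance of a protected domain. Real detection models go far beyond this.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def looks_like(sender: str, legitimate: str, threshold: int = 2) -> bool:
    """A near-miss of the legitimate domain is suspicious; an exact match is not."""
    d = edit_distance(sender, legitimate)
    return 0 < d <= threshold

print(looks_like("examp1e.com", "example.com"))    # True: one-character swap
print(looks_like("example.com", "example.com"))    # False: the real domain
print(looks_like("unrelated.org", "example.com"))  # False: too far away
```

A lookalike domain like examp1e.com sits one substitution away from the legitimate domain, which is exactly the kind of anomaly a proximity signal surfaces.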

Attackers looking to exfiltrate data from an organization also often rely on employees clicking links sent to them via email. These links can point to online forms that look innocuous on the surface but exist to gather sensitive information. Attackers can use these websites to run scripts that collect information about the visitor without any interaction from the employee, so a single errant click can lead to the exfiltration of sensitive data. Other malicious links contain exact copies of websites the user is accustomed to accessing; these are a form of phishing where the credentials entered by the employee are sent to the attacker rather than logging them into the website.

Area 1 covers this risk by providing Email Link Isolation as part of our email security offering. With Email Link Isolation, Area 1 looks at every link sent and assesses the reputation of its domain. For anything on the margin (a link we cannot confidently say is safe), Area 1 launches a headless Chromium browser and opens the link there with no interruption to the user. This way, any malicious scripts that execute will run on an isolated instance far from the company’s infrastructure, stopping the attacker from getting company information. This is all accomplished instantaneously and reliably.

Stopping ransomware

Attackers have many tools in their arsenal to try to compromise employee accounts. As we mentioned above, phishing is a common threat vector, but it’s far from the only one. At Area 1, we are also vigilant in preventing the propagation of ransomware.

A common mechanism attackers use to disseminate ransomware is to disguise attachments by renaming them. A ransomware payload could be renamed from petya.7z to Invoice.pdf to trick an employee into downloading the file. Depending on how urgent the email makes the invoice seem, the employee might blindly open the attachment, subjecting the organization to a ransomware attack. Area 1’s models detect these mismatches between file content and extension and stop malicious attachments from reaching the target’s inbox.
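One signal behind such a check is that a file's leading bytes rarely match a renamed extension. A simplified sketch with a deliberately tiny, illustrative signature table (real scanners cover far more file types):

```python
# Simplified extension/content mismatch check: compare a file's magic bytes
# against what its claimed extension implies.

MAGIC = {
    ".pdf": b"%PDF",
    ".7z":  b"7z\xbc\xaf\x27\x1c",
    ".zip": b"PK\x03\x04",
}

def extension_mismatch(filename: str, content: bytes) -> bool:
    """True when the claimed extension has a known signature the content lacks."""
    for ext, magic in MAGIC.items():
        if filename.lower().endswith(ext):
            return not content.startswith(magic)
    return False  # unknown extension: no opinion

# petya.7z renamed to Invoice.pdf still begins with the 7z signature.
payload = b"7z\xbc\xaf\x27\x1c" + b"\x00" * 16
print(extension_mismatch("Invoice.pdf", payload))         # True: flagged
print(extension_mismatch("report.pdf", b"%PDF-1.7 ..."))  # False: consistent
```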

A successful ransomware campaign can not only stunt the daily operations of any company but also lead to permanent data loss if the encryption cannot be reversed. Cloudflare’s Area 1 product has dedicated payload models that analyze not only attachment extensions but also the hash of each attachment, comparing it against known ransomware campaigns. Once Area 1 identifies an attachment as ransomware, we stop the email from going any further.
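The hash comparison can be sketched in a few lines, assuming a known-bad SHA-256 set (the sample hash below is computed from a dummy payload, not a real campaign):

```python
# Sketch of matching an attachment's SHA-256 digest against known-bad hashes.
import hashlib

def is_known_ransomware(attachment: bytes, known_bad: set[str]) -> bool:
    """True if the attachment's digest appears in the known-bad set."""
    return hashlib.sha256(attachment).hexdigest() in known_bad

dummy_payload = b"not actually ransomware"
known_bad = {hashlib.sha256(dummy_payload).hexdigest()}

print(is_known_ransomware(dummy_payload, known_bad))      # True
print(is_known_ransomware(b"benign invoice", known_bad))  # False
```

Hash matching catches exact copies of known payloads, which is why it complements rather than replaces the content-mismatch signal above.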

Cloudflare’s DLP vision

We aim for Cloudflare products to give you the layered security you need to protect your organization, whether that’s keeping malicious actors out or keeping sensitive data in. As email remains the largest threat surface for corporate data, it is crucial for companies to have strong DLP policies in place to prevent data loss. With Area 1 and Cloudflare One working together, Cloudflare can give organizations more confidence in their DLP posture.

If you are interested in these email security or DLP services, contact us for a conversation about your security and data protection needs.

Or if you currently subscribe to Cloudflare services, consider reaching out to your Cloudflare customer success manager to discuss adding additional email security or DLP protection.

Preview any Cloudflare product today

Post Syndicated from Angie Kim original https://blog.cloudflare.com/preview-today/


With Cloudflare’s pace of innovation, customers want to see how our products work, sooner, and address their needs without having to contact someone. Now they can, without any commitments, monetary limits, or usage caps.

Ready to get started? Here’s how it works.

For any product* that is not currently part of an enterprise contract, users with administrative access can enable the product on the Cloudflare dashboard. With a single click, they can start configuring any required features within seconds.


You will have access to resources to help you get started, along with the ongoing support of your sales team. Otherwise, you are free to enjoy the product, and our team members will be in contact after about two weeks. We always look to collect feedback, and we can also discuss how to add the product to your contract. If you need more time in the evaluation phase, no problem. If you decide the product is not the right fit, we will offboard it without any penalties.

We are working on offering more and more self-service capabilities that traditionally have not been offered to our enterprise customers. We’ll also be enhancing this overall experience over the next few months to increase visibility and improve the self-guided journey.

Log into the dashboard to start exploring any products today!

*There are some products that will never be fully self-service, but we will look for opportunities to streamline the onboarding as much as possible. Examples include access to our China Network and Registrar. Support for the following products is still on the roadmap: R2, Cache Reserve, Image Resizing, CASB, DLP and BYOIP.

Weave your own global, private, virtual Zero Trust network on Cloudflare with WARP-to-WARP

Post Syndicated from Abe Carryl original https://blog.cloudflare.com/warp-to-warp/


Millions of users rely on Cloudflare WARP to connect to the Internet through Cloudflare’s network. Individuals download the mobile or desktop application and rely on its WireGuard-based tunnel to make their browsing faster and more private. Thousands of enterprises trust Cloudflare WARP to connect employees to our Secure Web Gateway and other Zero Trust services as they navigate the Internet.

We’ve heard from both groups of users that they also want to connect to other devices running WARP. Teams can build a private network on Cloudflare’s network today by connecting WARP on one side to a Cloudflare Tunnel, GRE tunnels, or IPSec tunnels on the other end. However, what if both devices already run WARP?

Starting today, we’re excited to make it even easier to build a network on Cloudflare with the launch of WARP-to-WARP connectivity. With a single click, any device running WARP in your organization can reach any other device running WARP. Developers can connect to a teammate’s machine to test a web server. Administrators can reach employee devices to troubleshoot issues. The feature works with our existing private network on-ramps, like the tunnel options listed above. All with Zero Trust rules built in.

To get started, sign up to receive early access to our closed beta. If you’re interested in learning more about how it works and what else we will be launching in the future, keep scrolling.

The bridge to Zero Trust

We understand that adopting a Zero Trust architecture can feel overwhelming at times. With Cloudflare One, our mission is to make Zero Trust prescriptive and approachable regardless of where you are on your journey today. To help users navigate the uncertainty, we created resources like our vendor-agnostic Zero Trust Roadmap, which lays out a battle-tested path to Zero Trust. Within our own products and services, we’ve launched a number of features to bridge the gap between the networks you manage today and the network you hope to build for your organization in the future.

Ultimately, our goal is to enable you to overlay your network on Cloudflare however you want, whether that be with existing hardware in the field, a carrier you already partner with, existing technology standards like IPsec tunnels, or more Zero Trust approaches like WARP or Tunnel. It shouldn’t matter which method you choose to start with; the point is that you have the flexibility to get started no matter where you are in this journey. We call these connectivity options on-ramps and off-ramps.

A recap of WARP-to-Tunnel

The model laid out above allows users to start by defining their specific needs and then customize their deployment by choosing from a set of fully composable on-ramps and off-ramps to connect their users and devices to Cloudflare. This means customers can combine any of these solutions to route traffic seamlessly between devices, offices, data centers, cloud environments, and self-hosted or SaaS applications.

One example of a deployment we’ve seen thousands of customers be successful with is what we call WARP-to-Tunnel. In this deployment, the on-ramp Cloudflare WARP ensures end-user traffic reaches Cloudflare’s global network in a secure and performant manner. The off-ramp Cloudflare Tunnel then ensures that, after your Zero Trust rules have been enforced, we have secure, redundant, and reliable paths to land user traffic back in your distributed, private network.

This is a great example of a deployment that is ideal for users who need to support public-to-private traffic flows (i.e. North-South).

But what happens when you need to support private-to-private traffic flows (i.e. East-West) within this deployment?

With WARP-to-WARP, connecting just got easier

Starting today, devices on-ramping to Cloudflare with WARP will also be able to off-ramp to each other. With this announcement, we’re adding yet another tool to leverage in new or existing deployments, giving users a stronger network fabric to connect users, devices, and autonomous systems.

This means any of your Zero Trust-enrolled devices will be able to securely connect to any other device on your Cloudflare-defined network, regardless of physical location or network configuration. This unlocks the ability for you to address any device running WARP in the exact same way you are able to send traffic to services behind a Cloudflare Tunnel today. Naturally, all of this traffic flows through our in-line Zero Trust services, regardless of how it gets to Cloudflare, and this new connectivity announced today is no exception.

To power all of this, we now track which locations in Cloudflare’s global network each WARP device is connected to, the same way we do for Cloudflare Tunnel. Traffic meant for a specific WARP device is relayed across our network using Argo Smart Routing and piped through the transport that routes IP packets to the appropriate WARP device. Since this traffic goes through our Zero Trust Secure Web Gateway — allowing various types of filtering — it means we upgrade and downgrade traffic from purely routed IP packets to fully proxied TLS connections (as well as other protocols). In the case of using SSH to remotely access a colleague’s WARP device, this means that your traffic is eligible for SSH command auditing as well.

Get started today with these use cases

If you already deployed Cloudflare WARP to your organization, then your IT department will be excited to learn they can use this new connectivity to reach out to any device running Cloudflare WARP. Connecting via SSH, RDP, SMB, or any other service running on the device is now simpler than ever. All of this provides Zero Trust access for the IT team members, with their actions being secured in-line, audited, and pushed to your organization’s logs.

Or maybe you have just finished designing a new function of an existing product and want to let your team members check it out at their own convenience. Sending them a link with your private IP, assigned by Cloudflare, will do the job. Their devices will see your machine as if they were on the same physical network, despite being on the other side of the world.

The usefulness doesn’t end with humans on both sides of the interaction: the weekend has arrived, and you have finally set out to move your local NAS to a hosting provider where you run a virtual machine. By running Cloudflare WARP on it, just like on your laptop, you can now access your photos using the virtual machine’s private IP. This was already possible with WARP-to-Tunnel, but with WARP-to-WARP you also get connectivity in the reverse direction: the virtual machine can periodically rsync or scp files from your laptop as well. This means any server can initiate traffic towards the rest of your Zero Trust organization with this new type of connectivity.

What’s next?

This feature will be available on all plans at no additional cost. To get started with this new feature, add your name to the closed beta, and we’ll notify you once you’ve been enrolled. Then, you’ll simply ensure that at least two devices are enrolled in Cloudflare Zero Trust and have the latest version of Cloudflare WARP installed.

This new feature builds upon the existing benefits of Cloudflare Zero Trust, which include enhanced connectivity, improved performance, and streamlined access controls. With the ability to connect to any other device in their deployment, Zero Trust users will be able to take advantage of even more robust security and connectivity options.

To get started in minutes, create a Zero Trust account, download the WARP agent, enroll these devices into your Zero Trust organization, and start creating Zero Trust policies to establish fast, secure connectivity between these devices. That’s it.

Introducing Digital Experience Monitoring

Post Syndicated from Abe Carryl original https://blog.cloudflare.com/introducing-digital-experience-monitoring/

This post is also available in 简体中文, 日本語, Français and Español.

Today, organizations of all shapes and sizes lack visibility and insight into the digital experiences of their end-users. This often leaves IT and network administrators feeling vulnerable to issues beyond their control which hinder productivity across their organization. When issues inevitably arise, teams are left with a finger-pointing exercise. They’re unsure whether the root cause lies within the first, middle, or last mile and are forced to file a ticket for the respective owners of each. Ideally, each team then sprints into an investigation to find the needle in the haystack; however, once each side has exhausted all resources, the finger-pointing simply moves upstream. To help solve this problem, we’re building a new product, Digital Experience Monitoring, which will enable administrators to pinpoint and resolve issues impacting end-user connectivity and performance.

To get started, sign up to receive early access. If you’re interested in learning more about how it works and what else we will be launching in the near future, keep scrolling.

Our vision

Over the last year, we’ve received an overwhelming amount of feedback: users want to see the intelligence that Cloudflare possesses, thanks to our unique perspective helping power the Internet, embedded within our Zero Trust platform. Today, we’re excited to announce just that. Throughout the coming weeks, we will be releasing a number of features for our Digital Experience Monitoring product which will provide you with unparalleled visibility into the performance and connectivity of your users, applications, and networks.

With data centers in more than 275 cities across the globe, Cloudflare handles an average of 39 million HTTP requests and 22 million DNS requests every second. And with more than one billion unique IP addresses connecting to our network, we have one of the most representative views of Internet traffic on the planet. This unique vantage point will be able to provide you with deep insight into the digital experience of your users. You can think of Digital Experience Monitoring as the air traffic control tower of your Zero Trust deployment, providing you with the data-driven insights you need to help each user arrive at their destination as quickly and smoothly as possible.

What is Digital Experience Monitoring?

When we began to research Digital Experience Monitoring, we started with you: the user. Users want a single dashboard to monitor user, application, and network availability and performance. Ultimately, this dashboard needs to help users cohesively understand the minute-by-minute experiences of their end-users so that they can quickly and easily resolve issues impacting productivity. Simply put, users want hop-by-hop visibility into the network traffic paths of each and every user in their organization.

From our conversations with our users, we understand that providing this level of insight has become even more critical and challenging in an increasingly work-from-anywhere world.

With this product, we want to empower you to answer the hard questions: the questions behind the kind of tickets we all wish we could avoid seeing in the queue, like “Why can’t the CEO reach SharePoint while traveling abroad?” Could it have been poor Wi-Fi signal strength in the hotel? High CPU on the device? Or something else entirely?

Without the proper tools, it’s nearly impossible to answer these questions. Regardless, it’s all but certain that this investigation will be a time-consuming endeavor whether it has a happy ending or not. Traditionally, the investigation goes something like this: IT professionals start by looking into the first mile, which may include profiling the health of the endpoint (i.e. CPU or RAM utilization), Wi-Fi signal strength, or local network congestion. With any luck, the issue is identified, and the pain stops here.

Unfortunately, teams rarely have the tools required to prove these theories out, so, frustrated, they move on to everything in between the user and the application. Here we might be looking for an outage or a similar issue with a local Internet Service Provider (ISP). Again, even if we have reason to believe this is the issue, it can be difficult to prove beyond a reasonable doubt.

Reluctantly, we move on to the last mile. Here we’ll be looking to validate that the application in question is available and, if so, how quickly a meaningful connection can be established (Time to First Byte, First Contentful Paint, packet loss). More often than not, the lead investigator is left with more questions than answers after attempting to account for hop-by-hop degradation. By the time the ticket can be closed, the CEO has boarded a flight back home and the issue is no longer relevant.

With Digital Experience Monitoring, we’ve set out to build the tools you need to quickly find the needle in the haystack and resolve issues related to performance and connectivity. However, we also understand that availability and performance are just shorthand measures for gauging the complete experience of our customers. Of course, there is much more to a good user experience than just insights and analytics. We will continue to pay close attention to other key metrics around the volume of support tickets, contact rate, and time to resolution as other significant indicators of a healthy deployment. Internally, when shared with Cloudflare, this telemetry data will help enable our support teams to quickly validate and report issues to continuously improve the overall Zero Trust experience.

“As CIO, I am focused on outfitting Cintas with technology and systems that help us deliver on our promises for the 1 million plus businesses we serve across North America.  As we leverage more cloud based technology to create differentiated experiences for our customers, Cloudflare is an integral part of delivering on that promise.”  
Matthew Hough, CIO, Cintas

A look ahead

In the coming weeks, we’ll be launching three new features. Here is a look ahead at what you can expect when you sign up for early access.

Zero Trust Fleet Status

One of the common challenges of deploying software is understanding how it is performing in the wild. For Zero Trust, this might mean trying to answer how many of your end-users are running our device agent, Cloudflare WARP, for instance. Then, of those users, you may want to see how many users have enabled, paused, or disabled the agent during the early phases of a deployment. Shortly after finding these answers, you may want to see if there is any correlation between the users who pause their WARP agent and the data center through which they are connected to Cloudflare. These are the kinds of answers you will be able to find with Zero Trust Fleet Status. These insights will be available at both an organizational and per-user level.

Synthetic Application Monitoring

Oftentimes, the issues reported to IT professionals fall outside their control. For instance, an outage of a popular SaaS application can derail an otherwise perfectly productive day. These issues become much easier to address if you know about them before your users begin to report them: that foresight allows you to proactively communicate issues to the organization and get ahead of the flood of IT tickets destined for your inbox. With Synthetic Application Monitoring, we’ll be providing Zero Trust administrators the ability to create synthetic application tests against public-facing endpoints.

With this tool, users can initiate periodic traceroute and HTTP GET requests destined for a given public IP or hostname. In the dashboard, we’ll then surface global and user-level analytics enabling administrators to easily identify trends across their organization. Users will also have the ability to filter results down to identify individual users or devices who are most impacted by these outages.
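A bare-bones version of such an HTTP probe can be sketched with the standard library. This is only an illustration, not the Zero Trust implementation; it spins up a throwaway local server so the example is self-contained, whereas a real probe would target a public endpoint:

```python
# Minimal synthetic HTTP GET probe: issue a request and record the response
# status plus time-to-first-byte.
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def probe(url: str) -> dict:
    """GET the URL and report status code and time-to-first-byte in seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=5) as resp:
        resp.read(1)  # wait for the first byte of the body
        return {"status": resp.status, "ttfb_s": time.monotonic() - start}

class _Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # silence request logging for the demo
        pass

# Throwaway local server on a random free port.
server = HTTPServer(("127.0.0.1", 0), _Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

result = probe(f"http://127.0.0.1:{server.server_port}/")
server.shutdown()
print(result["status"], result["ttfb_s"] > 0)  # 200 True
```

Run periodically (and paired with a traceroute), measurements like these are what let a dashboard surface per-user and per-endpoint trends.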

Network Path Visualization

Once an issue with a given user or device is identified through the Synthetic Application Monitoring reports highlighted above, administrators will be able to view hop-by-hop telemetry data outlining the critical path to public facing endpoints. Administrators will have the ability to view this data represented graphically and export any data which may be relevant outside the context of Zero Trust.

What’s next

According to Gartner®, “by 2026 at least 60% of I&O leaders will use Digital Experience Monitoring (DEM) to measure application, services and endpoint performance from the user’s viewpoint, up from less than 20% in 2021.” The items at the top of our roadmap will be just the beginning to Cloudflare’s approach to bringing our intelligence into your Zero Trust deployments.

Perhaps what we’re most excited about with this product is that users on all Zero Trust plans will be able to get started at no additional cost and then upgrade their plans for more advanced features and usage moving forward. Join our waitlist to be notified when these initial capabilities are available and receive early access.

Gartner Market Guide for Digital Experience Monitoring, 03/28/2022, Mrudula Bangera, Padraig Byrne, Gregg Siegfried.
GARTNER is the registered trademark and service mark of Gartner Inc., and/or its affiliates in the U.S. and/or internationally and has been used herein with permission. All rights reserved.

One of our most requested features is here: DNS record comments and tags

Post Syndicated from Hannes Gerhart original https://blog.cloudflare.com/dns-record-comments/

Starting today, we’re adding support on all zone plans to add custom comments on your DNS records. Users on the Pro, Business and Enterprise plan will also be able to tag DNS records.

DNS records are important

DNS records play an essential role in operating a website or a web application. In general, they are used to map human-readable hostnames to machine-readable information, most commonly IP addresses. Beyond mapping hostnames to IP addresses, they fulfill many other use cases, such as:

  • Ensuring emails can reach your inbox, by setting up MX records.
  • Avoiding email spoofing and phishing by configuring SPF, DMARC and DKIM policies as TXT records.
  • Validating a TLS certificate by adding a TXT (or CNAME) record.
  • Specifying allowed certificate authorities that can issue certificates on behalf of your domain by creating a CAA record.
  • Validating ownership of your domain for other web services (website hosting, email hosting, web storage, etc.) – usually by creating a TXT record.
  • And many more.

With all these different use cases, it is easy to forget what a particular DNS record is for, and it is not always possible to derive its purpose from the record’s name, type, and content. Validation TXT records tend to live on seemingly arbitrary names with rather cryptic content. When you then also throw multiple people or teams into the mix, all with access to the same domain and all creating and updating DNS records, it can quickly happen that someone modifies or even deletes a record, causing the on-call person to get paged in the middle of the night.

Enter: DNS record comments & tags 📝

Starting today, everyone with a zone on Cloudflare can add custom comments on each of their DNS records via the API and through the Cloudflare dashboard.

To add a comment, just click on the Edit action of the respective DNS record and fill out the Comment field. Once you hit Save, a small icon will appear next to the record name to remind you that this record has a comment. Hovering over the icon will allow you to take a quick glance at it without having to open the edit panel.

What you can also see in the screenshot above is the new Tags field. All users on the Pro, Business, or Enterprise plans now have the option to add custom tags to their records. These tags can be just a key like “important” or a key-value pair like “team:DNS”, separated by a colon. Neither comments nor tags have any impact on the resolution or propagation of the particular DNS record, and they’re only visible to people with access to the zone.

We know that many of our users love automation via our API. So if you want to create a number of zones and populate all their DNS records by uploading a zone file as part of your script, you can directly include the DNS record comments and tags in that zone file. And when you export a zone file, either to back up all records of your zone or to easily move your zone to another account on Cloudflare, it will also contain comments and tags. Learn more about importing and exporting comments and tags in our developer documentation.

;; A Records
*.mycoolwebpage.xyz.     1      IN  A
mycoolwebpage.xyz.       1      IN  A ; Contact Hannes for details.
sub1.mycoolwebpage.xyz.  1      IN  A ; Test origin server. Can be deleted eventually. cf_tags=testing
sub1.mycoolwebpage.xyz.  1      IN  A ; Production origin server. cf_tags=important,prod,team:DNS

;; MX Records
mycoolwebpage.xyz.       1      IN  MX   1 mailserver1.example.
mycoolwebpage.xyz.       1      IN  MX   2 mailserver2.example.

;; TXT Records
mycoolwebpage.xyz.       86400	IN  TXT  "v=spf1 ip4: -all" ; cf_tags=important,team:EMAIL
sub1.mycoolwebpage.xyz.  86400  IN  TXT  "hBeFxN3qZT40" ; Verification record for service XYZ. cf_tags=team:API
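A comment and tags can also be set on a single record through the API. A hedged sketch follows: the zone ID, record ID, and token are placeholders, the field names follow the public v4 API documentation, and the request is constructed but deliberately not sent so the snippet runs without live credentials:

```python
# Sketch of updating a DNS record's comment and tags via the Cloudflare v4 API.
# ZONE_ID, RECORD_ID, and API_TOKEN are placeholders.
import json
import urllib.request

ZONE_ID = "023e105f4ecef8ad9ca31a8372d0c353"    # placeholder
RECORD_ID = "372e67954025e0ba6aaa6d586b9e0b59"  # placeholder
API_TOKEN = "YOUR_API_TOKEN"                    # placeholder

payload = {
    "comment": "Production origin server.",
    "tags": ["important", "prod", "team:DNS"],
}

request = urllib.request.Request(
    url=f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}"
        f"/dns_records/{RECORD_ID}",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    method="PATCH",
)
# urllib.request.urlopen(request) would send the update; omitted here
# so the sketch does not require live credentials.
print(request.get_method(), json.loads(request.data)["tags"])
```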

New filters

It might be that your zone has hundreds or thousands of DNS records, so how on earth would you find all the records that belong to the same team or that are needed for one particular application?

For this we created a new filter option in the dashboard. This allows you to not only filter for comments or tags but also for other record data like name, type, content, or proxy status. The general search bar for a quick and broader search will still be available, but it cannot (yet) be used in conjunction with the new filters.

By clicking on the “Add filter” button, you can select individual filters that are connected with a logical AND. For example, to only look at TXT records that are tagged as important, I would add a filter for the record type TXT and another for the tag “important”.
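The AND semantics of those filters can also be mimicked client-side over exported records. A tiny illustration (the record list here is invented for the example):

```python
# Illustration of combining DNS record filters with logical AND.
# The sample records are made up.
records = [
    {"name": "a.example.com", "type": "TXT", "tags": ["important"]},
    {"name": "b.example.com", "type": "TXT", "tags": []},
    {"name": "c.example.com", "type": "A",   "tags": ["important"]},
]

filters = [
    lambda r: r["type"] == "TXT",
    lambda r: "important" in r["tags"],
]

# A record matches only when every filter accepts it (logical AND).
matches = [r for r in records if all(f(r) for f in filters)]
print([r["name"] for r in matches])  # ['a.example.com']
```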

One more thing (or two)

Another change we made is to replace the Advanced button with two individual actions: Import and Export, and Dashboard Display Settings.

You can find them in the top right corner under DNS management. When you click on Import and Export you have the option to either export all existing DNS records (including their comments and tags) into a zone file or import new DNS records to your zone by uploading a zone file.

The action Dashboard Display Settings allows you to select which special record types are shown in the UI. And there is an option to toggle showing the record tags inline under the respective DNS record or just showing an icon if there are tags present on the record.

And last but not least, we increased the width of the DNS record table as part of this release. The new table makes better use of the existing horizontal space and allows you to see more details of your DNS records, especially if you have longer subdomain names or content.

Try it now

DNS record comments and tags are available today. Just navigate to the DNS tab of your zone in the Cloudflare dashboard and create your first comment or tag. If you are not yet using Cloudflare DNS, sign up for free in just a few minutes.

Learn more about DNS record comments and tags on our developer documentation.

ICYMI: Developer Week 2022 announcements

Post Syndicated from Dawn Parzych original https://blog.cloudflare.com/icymi-developer-week-2022-announcements/

Developer Week 2022 has come to a close. Over the last week we’ve shared with you 31 posts on what you can build on Cloudflare and our vision and roadmap on where we’re headed. We shared product announcements, customer and partner stories, and provided technical deep dives. In case you missed any of the posts here’s a handy recap.

Product and feature announcements

Announcement | Summary
Welcome to the Supercloud (and Developer Week 2022) | Our vision of the cloud — a model of cloud computing that promises to make developers highly productive at scaling from one to Internet-scale in the most flexible, efficient, and economical way.
Build applications of any size on Cloudflare with the Queues open beta | Build performant and resilient distributed applications with Queues. Available to all developers with a paid Workers plan.
Migrate from S3 easily with the R2 Super Slurper | A tool to easily and efficiently move objects from your existing storage provider to R2.
Get started with Cloudflare Workers with ready-made templates | See what’s possible with Workers and get building faster with these starter templates.
Reduce origin load, save on cloud egress fees, and maximize cache hits with Cache Reserve | Cache Reserve is graduating to open beta – users can now test and integrate it into their content delivery strategy without any additional waiting.
Store and process your Cloudflare Logs… with Cloudflare | Query Cloudflare logs stored on R2.
UPDATE Supercloud SET status = ‘open alpha’ WHERE product = ‘D1’ | D1, our first global relational database, is in open alpha. Start building and share your feedback with us.
Automate an isolated browser instance with just a few lines of code | The Browser Rendering API is an out of the box solution to run browser automation tasks with Puppeteer in Workers.
Bringing authentication and identification to Workers through Mutual TLS | Send outbound requests with Workers through a mutually authenticated channel.
Spice up your sites on Cloudflare Pages with Pages Functions General Availability | Easily add dynamic content to your Pages projects with Functions.
Announcing the first Workers Launchpad cohort and growth of the program to $2 billion | We were blown away by the interest in the Workers Launchpad Funding Program and are proud to introduce the first cohort.
The most programmable Supercloud with Cloudflare Snippets | Modify traffic routed through the Cloudflare CDN without having to write a Worker.
Keep track of Workers’ code and configuration changes with Deployments | Track your changes to a Worker configuration, binding, and code.
Send Cloudflare Workers logs to a destination of your choice with Workers Trace Events Logpush | Gain visibility into your Workers when logs are sent to your analytics platform or object storage. Available to all users on a Workers paid plan.
Improved Workers TypeScript support | Based on feedback from users we’ve improved our types and are open-sourcing the automatic generation scripts.

Technical deep dives

Announcement | Summary
The road to a more standards-compliant Workers API | An update on the work the WinterCG is doing on the creation of common API standards in JavaScript runtimes and how Workers is implementing them.
Indexing millions of HTTP requests using Durable Objects | Indexing and querying millions of logs stored in R2 using Workers, Durable Objects, and the Streams API.
Iteration isn’t just for code: here are our latest API docs | We’ve revamped our API reference documentation to standardize our API content and improve the overall developer experience when using the Cloudflare APIs.
Making static sites dynamic with D1 | A template to build a D1-based comments API.
The Cloudflare API now uses OpenAPI schemas | OpenAPI schemas are now available for the Cloudflare API.
Server-side render full stack applications with Pages Functions | Run server-side rendering in a Function using a variety of frameworks including Qwik, Astro, and SolidStart.
Incremental adoption of micro-frontends with Cloudflare Workers | How to replace selected elements of a legacy client-side rendered application with server-side rendered fragments using Workers.
How we built it: the technology behind Cloudflare Radar 2.0 | Details on how we rebuilt Radar using Pages, Remix, Workers, and R2.
How Cloudflare uses Terraform to manage Cloudflare | How we made it easier for our developers to make changes with the Cloudflare Terraform provider.
Network Performance Update: Developer Week 2022 | See how fast Cloudflare Workers are compared to other solutions.
How Cloudflare instruments services using Workers Analytics Engine | Instrumentation with Analytics Engine provides data to find bugs and helps us prioritize new features.
Doubling down on local development with Workers: Miniflare meets workerd | Improving local development using Miniflare 3, now powered by workerd.

Customer and partner stories

Announcement | Summary
Cloudflare Workers scale too well and broke our infrastructure, so we are rebuilding it on Workers | How DevCycle re-architected their feature management tool using Workers.
Easy Postgres integration with Workers and Neon.tech | Neon.tech solves the challenges of connecting to Postgres from Workers.
Xata Workers: client-side database access without client-side secrets | Xata uses Workers for Platforms to reduce security risks of running untrusted code.
Twilio Segment Edge SDK powered by Cloudflare Workers | The Segment Edge SDK, built on Workers, helps applications collect and track events from the client, and get access to realtime user state to personalize experiences.


And that’s it for Developer Week 2022. But you can keep the conversation going by joining our Discord Community.

Send Cloudflare Workers logs to a destination of your choice with Workers Trace Events Logpush

Post Syndicated from Tanushree Sharma original https://blog.cloudflare.com/workers-logpush-ga/

When writing code, you can only move as fast as you can debug.

Our goal at Cloudflare is to give our developers the tools to deploy applications faster than ever before. This means giving you tools to do everything from initializing your Workers project to having visibility into your application successfully serving production traffic.

Last year we introduced wrangler tail, letting you access a live stream of Workers logs to help pinpoint errors to debug your applications. Workers Trace Events Logpush (or just Workers Logpush for short) extends this functionality – you can use it to send Workers logs to an object storage destination or analytics platform of your choice.

Workers Logpush is now available to everyone on the Workers Paid plan! Read on to learn how to get started and for pricing information.

Move fast and don’t break things

With the rise of platforms like Cloudflare Workers over containers and VMs, it now takes just minutes to deploy applications. But, when building an application, any tech stack that you choose comes with its own set of trade-offs.

As a developer, choosing Workers means you don’t need to worry about any of the underlying architecture. You just write code, and it works (hopefully!). A common criticism of this style of platform is that observability becomes more difficult.

We want to change that.

Over the years, we’ve made improvements to the testing and debugging tools that we offer — wrangler dev, Miniflare, and most recently our open-sourced runtime, workerd. These improvements have made debugging locally and running unit tests much easier. However, there will always be edge cases or bugs that are only replicated in production environments.

If something does break…enter Workers Logpush

Wrangler tail lets you view logs in real time, but we’ve heard from developers that you would also like to set up monitoring for your services and have a historical record to look back on. Workers Logpush includes metadata about requests, console.log() messages and any uncaught exceptions. To give you an idea of what it looks like, below is a simplified sample log line:

{
  "Event": {
    "EventTimestampMs": 1668126313568,
    "EventType": "fetch",
    "Exceptions": [],
    "Logs": [
      {
        "Level": "log",
        "Message": ["please work!"],
        "TimestampMs": 1668126313568
      }
    ],
    "Outcome": "ok",
    "ScriptName": "example-script"
  }
}

Logpush has support for the most popular observability tools. Send logs to Datadog, New Relic or even R2 for storage and ad hoc querying.


Workers Logpush is available to customers on both our Workers Paid and Enterprise plans. We wanted this to be very affordable for our developers. Workers Logpush is priced at $0.05 per million requests, and we only charge you for requests that result in logs delivered to an end destination after any filtering or sampling is applied. It also comes with an included usage of 10M requests each month.
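To make the pricing concrete, here is a quick sketch of the arithmetic (the figures come from the paragraph above; the function name is ours):

```javascript
// Estimate the monthly Workers Logpush bill: $0.05 per million logged
// requests, with the first 10M requests each month included at no charge.
function logpushCostUSD(loggedRequestsPerMonth) {
  const includedRequests = 10_000_000;
  const billable = Math.max(0, loggedRequestsPerMonth - includedRequests);
  return (billable / 1_000_000) * 0.05;
}

console.log(logpushCostUSD(50_000_000)); // -> 2 (i.e. $2.00 for the month)
```

So a service logging 50M requests a month would pay for only the 40M above the included allowance.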


Logpush is incredibly simple to set up.

1. Create a Logpush job. The following example sends Workers logs to R2.

curl -X POST 'https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/logpush/jobs' \
-H 'X-Auth-Key: <API_KEY>' \
-H 'X-Auth-Email: <EMAIL>' \
-H 'Content-Type: application/json' \
-d '{
"name": "workers-logpush",
"logpull_options": "fields=Event,EventTimestampMs,Outcome,Exceptions,Logs,ScriptName",
"destination_conf": "r2://<BUCKET_PATH>/{DATE}?account-id=<ACCOUNT_ID>&access-key-id=<R2_ACCESS_KEY_ID>&secret-access-key=<R2_SECRET_ACCESS_KEY>",
"dataset": "workers_trace_events",
"enabled": true
}'| jq .

In Logpush, you can also configure filters and a sampling rate to have more control of the volume of data that is sent to your configured destination. For example, if you only want to receive logs for requests that resulted in an exception, you could add the following under logpull_options:

"filter":"{\"where\": {\"key\":\"Outcome\",\"operator\":\"eq\",\"value\":\"exception\"}}"
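Sampling is expressed directly in the logpull_options string. For example, keeping roughly 10% of log lines might look like this (a sketch; check the Logpush documentation for the exact syntax):

```
"logpull_options": "fields=Event,EventTimestampMs,Outcome,Exceptions,Logs,ScriptName&sample=0.1"
```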

2. Enable logging on your Workers script

You can do this by adding a new property, logpush = true, to your wrangler.toml file. This can be added either in the top level configuration or under an environment. Any new scripts with this property will automatically get picked up by the Logpush job.
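At the top level, that might look like this (a minimal sketch; the worker name, entry point, and compatibility date are placeholders):

```toml
name = "my-worker"
main = "src/index.ts"
compatibility_date = "2022-11-15"

# Send this Worker's trace events to the configured Logpush job
logpush = true
```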

Get started today!

Customers on both our Workers Paid and Enterprise plans can get started with Workers Logpush now! The full guide on how to get started is here.

Automate an isolated browser instance with just a few lines of code

Post Syndicated from Tanushree Sharma original https://blog.cloudflare.com/introducing-workers-browser-rendering-api/

If you’ve ever created a website that shows any kind of analytics, you’ve probably also thought about adding a “Save Image” or “Save as PDF” button to store and share results. This isn’t as easy as it seems (I can attest to this firsthand) and it’s not long before you go down a rabbit hole of trying 10 different libraries, hoping one will work.

This is why we’re excited to announce a private beta of the Workers Browser Rendering API, improving the browser automation experience for developers. With browser automation, you can programmatically do anything that a user can do when interacting with a browser.

The Workers Browser Rendering API, or just Rendering API for short, is our out-of-the-box solution for simplifying developer workflows, including capturing images or screenshots, by running browser automation in Workers.

Browser automation, everywhere

As with many of the best Cloudflare products, Rendering API was born out of an internal need. Many of our teams were setting up or wanted to set up their own tools to perform what sounds like an incredibly simple task: taking automated screenshots.

When gathering use cases, we realized that much of what our internal teams wanted would also be useful for our customers. Some notable ones are:

  • Taking screenshots for social sharing thumbnails or preview images
  • Emailing daily screenshots of dashboards (requested specifically by our SVP of Engineering)
  • Reporting bugs on websites and sending them directly to frontend teams

Not to mention use cases for other browser automation functions like:

Testing UI/UX Flows
End-to-end (E2E) testing is used to mimic user behavior and can identify bugs that unit tests or integration tests have missed. And let’s be honest – no developer wants to manually check the user flow each time they make changes to their application. E2E tests can be especially useful to verify logic on your customer’s critical path like account creation, authentication or checkout.

Performance Tests
Application performance metrics, such as page load time, directly impact your user’s experience and your SEO rankings. To avoid performance regressions, you want to test impact on latency in conditions that are as close as possible to your production environment before you merge. By automating performance testing you can measure whether your proposed changes will result in a degraded experience for your users and make improvements accordingly.

Unlocking a new building block

One of the most common browser automation frameworks is Puppeteer. It’s common to run Puppeteer in a containerization tool like Docker or in a serverless environment. Taking automated screenshots should be as easy as writing some code, hitting deploy and having it run when a particular event is triggered or on a regular schedule.

It should be, but it’s not.

Even on a serverless solution like AWS Lambda, running Puppeteer means packaging it, making sure dependencies are covered, uploading packages to S3 and deploying using Layers. Whether using Docker or something like Lambda, it’s clear that this is not easy to set up.

One of the pillars of Cloudflare’s development platform is to provide our developers with tools that are incredibly simple, yet powerful to build on. Rendering API is our out-of-the-box solution for running Puppeteer in Workers.

Screenshotting made simple

To start, the Rendering API will have support for navigating to a webpage and taking a screenshot, with more functions to follow. To use it, all you need to do is add our new browser binding to your project’s wrangler.toml file:


bindings = [
 { name = "my_browser", type = "browser" }
]

From there, taking a screenshot and saving it to R2 is as simple as:


import puppeteer from '@cloudflare/puppeteer'

export default {
    async fetch(request: Request, env: Env): Promise<Response> {
        const browser = await puppeteer.launch({
            browserBinding: env.MY_BROWSER
        })
        const page = await browser.newPage()

        await page.goto("https://example.com/")
        const img = await page.screenshot() as Buffer
        await browser.close()

        // upload to R2
        try {
            await env.MY_BUCKET.put("screenshot.jpg", img);
            return new Response(`Success!`);
        } catch (e) {
            return new Response('', { status: 400 })
        }
    }
}

Down the line, we have plans to add full Puppeteer support, including functions like page.type, page.click, page.evaluate!

What’s happening under the hood?

Remote browser isolation technology is an integral part of our Zero Trust product offering. Remote browser isolation lets users interact with a web browser that instead of running on the client’s device, runs in a remote environment. The Rendering API repurposes this under the hood!

We’ve wrapped the Puppeteer library so that it can be run directly from your own Worker. You can think of your Worker as the client. Each of Cloudflare’s data centers has a pool of warm browsers ready to go and when a Worker requests a browser, the browser is instantly returned and is connected to via a WebSocket. Once the WebSocket connection is established, our internal browser API Worker handles all communication to the browser session via the Chrome Devtools Protocol.

To ensure the security of your Worker, individual remote browsers are run as disposable instances – one instance per request, and never shared. They are secured using gVisor to protect against kernel level exploits. On top of that, the browser is running sandboxed processes with the lowest privilege level using a Linux seccomp profile.

The Rendering API should be used when you’re building and testing your applications. To prevent abuse, Cloudflare Bot Management has baked-in signals indicating that a request is coming from a Worker running Puppeteer. For Cloudflare Bot Management customers, these requests will automatically be added to your blocklist, with the option to explicitly opt in and allow them.

How can you get started?

We’re introducing the Workers Browser Rendering API in closed beta. If you’re interested, please tell us a bit about your use case and join the waitlist. We would love to hear what else you want to build using the Workers Browser Rendering API, let us know in the Workers channel on the Cloudflare Developers Discord!

Store and process your Cloudflare Logs… with Cloudflare

Post Syndicated from Jon Levine original https://blog.cloudflare.com/announcing-logs-engine/

Millions of customers trust Cloudflare to accelerate their website, protect their network, or as a platform to build their own applications. But, once you’re running in production, how do you know what’s going on with your application? You need logs from Cloudflare – a record of what happened on our network when your customers interacted with your product that uses Cloudflare.

Cloudflare Logs are an indispensable tool for debugging applications, identifying security vulnerabilities, or just understanding how users are interacting with your product. However, our customers generate petabytes of logs, and store them for months or years at a time. Log data is tantalizing: all those answers, just waiting to be revealed with the right query! But until now, it’s been too hard for customers to actually store, search, and understand their logs without expensive and cumbersome third party tools.

Today we’re announcing Cloudflare Logs Engine: a new product to enable any kind of investigation with Cloudflare Logs — all within Cloudflare.

Starting today, Cloudflare customers who push their logs to R2 can retrieve them by time range and unique identifier. Over the coming months we want to enable customers to:

  • Store logs for any Cloudflare dataset, for as long as you want, with a few clicks
  • Access logs no matter what plan you use, without relying on third party tools
  • Write queries that include multiple datasets
  • Quickly identify the logs you need and take action based on what you find

Why Cloudflare Logs?

When it comes to visibility into your traffic, most customers start with analytics. The Cloudflare dashboard is full of analytics about all of our products, which give a high-level overview of what’s happening: for example, the number of requests served, the ratio of cache hits, or the amount of CPU time used.

But sometimes, more detail is needed. Developers especially need to be able to read individual log lines to debug applications. For example, suppose you notice a problem where your application throws an error in an unexpected way – you need to know the cause of that error and see every request with that pattern.

Cloudflare offers tools like Instant Logs and wrangler tail which excel at real-time debugging. These are incredibly helpful if you’re making changes on the fly, or if the problem occurs frequently enough that it will appear during your debugging session.

In other cases, you need to find that needle in a haystack — the one rare event that causes everything to go wrong. Or you might have identified a security issue and want to make sure you’ve identified every time that issue could have been exploited in your application’s history.

When this happens, you need logs. In particular, you need forensics: the ability to search the entire history of your logs.

A brief overview of log analysis

Before we take a look at Logs Engine itself, I want to briefly talk about alternatives – how have our customers been dealing with their logs so far?

Cloudflare has long offered Logpull and Logpush. Logpull enables enterprise customers to store their HTTP logs on Cloudflare for up to seven days, and retrieve them by either time or RayID. Logpush can send your Cloudflare logs just about anywhere on the Internet, quickly and reliably. While Logpush provides more flexibility, it’s been up to customers to actually store and analyze those logs.

Cloudflare has a number of partnerships with SIEMs and data warehouses/data lakes. Many of these tools even have pre-built Cloudflare dashboards for easy visibility. And third party tools have a big advantage in that you can store and search across many log sources, not just Cloudflare.

That said, we’ve heard from customers that they have some challenges with these solutions.

First, third party log tooling can be expensive! Most tools require that you pay not just for storage, but for indexing all of that data when it’s ingested. While that enables powerful search functionality later on, Cloudflare (by its nature) is often one of the largest emitters of logs that a developer will have. If you were to store and index every log line we generate, it can cost more money to analyze the logs than to deliver the actual service.

Second, these tools can be hard to use. Logs are often used to track down an issue that customers discover via analytics in the Cloudflare dashboard. After finding what you need in logs, it can be hard to get back to the right part of the Cloudflare dashboard to make the appropriate configuration changes.

Finally, Logpush was previously limited to Enterprise plans. Soon, we will start offering these services to customers at any scale, regardless of plan type or how they choose to pay.

Why Logs Engine?

With Logs Engine, we wanted to solve these problems. We wanted to build something affordable, easy to use, and accessible to any Cloudflare customer. And we wanted it to work for any Cloudflare logs dataset, for any span of time.

Our first insight was that to make logs affordable, we need to separate storage and compute. The cost of storage is actually quite low! Thanks to R2, there’s no reason many of our customers can’t store all of their logs for long periods of time. At the same time, we want to separate out the analysis of logs so that customers only pay for the compute of logs they analyze – not every line ingested. While we’re still developing our query pricing, our aim is to be predictable, transparent and upfront. You should never be surprised by the cost of a query (or run up a huge bill by accident).

It’s great to separate storage and compute. But if you need to scan all of your logs anyway to answer the first question you have, you haven’t gained any benefit from this separation. In order to realize cost savings, it’s critical to narrow down your search before executing a query. That’s where our next big idea came in: a tight integration with analytics.

Most of the time, when analyzing logs, you don’t know what you’re looking for. For example, if you’re trying to find the cause of a specific origin status code, you may need to spend some time understanding which origins are impacted, which clients are sending them, and the time range in which these errors happened. Thanks to our ABR analytics, we can provide a good summary of the data very quickly – but not the exact details of what happened. By integrating with analytics, we can help customers narrow down their queries, then switch to Logs Engine once you know exactly what you’re looking for.

Finally, we wanted to make logs accessible to anyone. That means all plan types – not just Enterprise.

Additionally, we want to make it easy to both set up log storage and analysis, and also to take action on logs once you find problems. With Logs Engine, it will be possible to search logs right from the dashboard, and to immediately create rules based on the patterns you find there.

What’s available today and our roadmap

Today, Enterprise customers can store logs in R2 and retrieve them via time range. Currently in beta, we also allow customers to retrieve logs by RayID (see our companion blog post) — to join the beta, please email [email protected].

Coming soon, we will enable customers on all plan types — not just Enterprise — to ingest logs into Logs Engine. Details on pricing will follow soon.

We also plan to build more powerful querying capability, beyond time range and RayID lookup. For example, we plan to support arbitrary filtering on any column, plus more expressive queries that can look across datasets or aggregate data.

But why stop at logs? This foundation lays the groundwork to support other types of data sources and queries one day. We are just getting started. Over the long term, we’re also exploring the ability to ingest data sources outside of Cloudflare and query them. Paired with Analytics Engine this is a formidable way to explore any data set in a cost-effective way!

Reduce origin load, save on cloud egress fees, and maximize cache hits with Cache Reserve

Post Syndicated from Alex Krivit original https://blog.cloudflare.com/cache-reserve-open-beta/

Earlier this year, we introduced Cache Reserve. Cache Reserve helps users serve content from Cloudflare’s cache for longer by using R2’s persistent data storage. Serving content from Cloudflare’s cache benefits website operators by reducing their bills for egress fees from origins, while also benefiting website visitors by having content load faster.

Cache Reserve has been in closed beta for a few months while we’ve collected feedback from our initial users and continued to develop the product. After several rounds of iterating on this feedback, today we’re extremely excited to announce that Cache Reserve is graduating to open beta – users will now be able to test it and integrate it into their content delivery strategy without any additional waiting.

If you want to see the benefits of Cache Reserve for yourself and give us some feedback, you can go to the Cloudflare dashboard, navigate to the Caching section, and enable Cache Reserve by pushing one button.

How does Cache Reserve fit into the larger picture?

Content served from Cloudflare’s cache begins its journey at an origin server, where the content is hosted. When a request reaches the origin, the origin compiles the content needed for the response and sends it back to the visitor.

The distance between the visitor and the origin can affect the performance of the asset as it may travel a long distance for the response. This is also where the user is charged a fee to move the content from where it’s stored on the origin to the visitor requesting the content. These fees, known as “bandwidth” or “egress” fees, are familiar monthly line items on the invoices for users that host their content on cloud providers.

Cloudflare’s CDN sits between the origin and visitor and evaluates the origin’s response to see if it can be cached. If it can be added to Cloudflare’s cache, then the next time a request comes in for that content, Cloudflare can respond with the cached asset, which means there’s no need to send the request to the origin, reducing egress fees for our customers. We also cache content in data centers close to the visitor to improve the performance and cut down on the transit time for a response.

To help assets remain cached for longer, a few years ago we introduced Tiered Cache which organizes all of our 250+ global data centers into a hierarchy of lower-tiers (generally closer to visitors) and upper-tiers (generally closer to origins). When a request for content cannot be served from a lower-tier’s cache, the upper-tier is checked before going to the origin for a fresh copy of the content. Organizing our data centers into tiers helps us cache content in the right places for longer by putting multiple caches between the visitor’s request and the origin.

Why do cache misses occur?
Misses occur when Cloudflare cannot serve the content from cache and must go back to the origin to retrieve a fresh copy. This can happen when a customer sets the cache-control time to signify when the content is out of date (stale) and needs to be revalidated. The other element at play – how long the network wants content to remain cached – is a bit more complicated and can fluctuate depending on eviction criteria.
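For example, an origin can declare how long content may be considered fresh with a standard response header (an illustrative value):

```
Cache-Control: public, max-age=86400
```

Here the asset is cacheable for a day, but whether it actually stays in cache that long also depends on the network's eviction criteria.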

CDNs must consider whether they need to evict content early to optimize storage of other assets when cache space is full. At Cloudflare, we prioritize eviction based on how recently a piece of cached content was requested by using an algorithm called “least recently used” or LRU. This means that even if cache-control signifies that a piece of content should be cached for many days, we may still need to evict it earlier (if it is least-requested in that cache) to cache more popular content.
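The core idea of LRU can be shown with a toy cache (a simplified sketch for illustration only; this is not Cloudflare's production eviction code):

```javascript
// Toy least-recently-used (LRU) cache. A Map preserves insertion order,
// so the first key in the map is always the least recently used entry.
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    // Re-insert to mark this entry as most recently used.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // Evict the least recently used entry, even if it is still "fresh".
      const oldest = this.map.keys().next().value;
      this.map.delete(oldest);
    }
  }
}
```

Note that eviction here is driven purely by access order and capacity, which is exactly why a rarely requested asset can disappear from cache long before its cache-control lifetime expires.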

This works well for most customers and website visitors, but is often a point of confusion for people wondering why content is unexpectedly displaying a miss. If eviction did not happen then content would need to be cached in data centers that were further away from visitors requesting that data, harming the performance of the asset and injecting inefficiencies into how Cloudflare’s network operates.

Some customers, however, have large libraries of content that may not be requested for long periods of time. Using the traditional cache, these assets would likely be evicted and, if requested again, served from the origin. Keeping assets in cache requires that they remain popular on the Internet which is hard given what’s popular or current is constantly changing. Evicting content that becomes cold means additional origin egress for the customer if that content needs to be pulled repeatedly from the origin.

Enter Cache Reserve
This is where Cache Reserve shines. Cache Reserve serves as the ultimate upper-tier data center for content that might otherwise be evicted from cache. Once admitted to Cache Reserve, content can be stored for a much longer period of time: 30 days by default. If another request comes in during that period, it can be extended for another 30 days (and so on) or until cache-control signifies that we should no longer serve that content from cache. Cache Reserve serves as a safety net to backstop all cacheable content, so customers don’t have to worry about unwanted cache eviction and origin egress fees.

How does Cache Reserve save egress?

The promise of Cache Reserve is that hit ratios will increase and egress fees from origins will decrease for long tail content that is rarely requested and may be evicted from cache.

However, there are additional egress savings built into the product. For example, objects are written to Cache Reserve on misses. This means that when fetching the content from the origin on a cache miss, we use that response both to reply to the request and to write the asset to Cache Reserve, so customers won’t experience egress from serving that asset for a long time.

Cache Reserve is designed to be used with tiered cache enabled for maximum origin shielding. When there is a cache miss in both the lower and upper tiers, Cache Reserve is checked and if there is a hit, the response will be cached in both the lower and upper tier on its way back to the visitor without the origin needing to see the request or serve any additional data.

Cache Reserve accomplishes these origin egress savings for a low price, based on R2 costs. For more information on Cache Reserve prices and operations, please see the documentation here.

Scaling Cache Reserve on Cloudflare’s developer platform

When we first announced Cache Reserve, the response was overwhelming. Over 20,000 users wanted access to the beta, and we quickly made several interesting discoveries about how people wanted to use Cache Reserve.

The first big challenge we found was that users hated egress fees as much as we do and wanted to make sure that as much content as possible was in Cache Reserve. During the closed beta we saw usage above 8,000 PUT operations per second sustained, and objects served at a rate of over 3,000 GETs per second. We were also caching around 600 TB for some of our large customers. We knew that we wanted to open the product up to anyone that wanted to use it, and in order to scale to meet this demand, we needed to make several changes quickly. So we turned to Cloudflare’s developer platform.

Cache Reserve stores data on R2 using its S3-compatible API. Under the hood, R2 handles all the complexity of an object storage system using our performant and scalable developer primitives: Workers and Durable Objects. We decided to use developer platform tools because they would allow us to implement different scaling strategies quickly. The advantage of building on the Cloudflare developer platform is that we could easily experiment with how best to distribute the high load we were seeing, all while shielding users from the complexity of how Cache Reserve works.

With the single press of a button, Cache Reserve performs these functions:

  • On a cache miss, Pingora (our new L7 proxy) reaches out to the origin for the content and writes the response to R2. This happens while the content continues its trip back to the visitor (thereby avoiding needless latency).
  • Inside R2, a Worker writes the content to R2’s persistent data storage while also keeping track of the important metadata that Pingora sends about the object (like origin headers, freshness values, and retention information) using Durable Objects storage.
  • When the content is next requested, Pingora looks up where the data is stored in R2 by computing the cache key. The cache key’s hash determines both the object name in R2 and which bucket it was written to, as each zone’s assets are sharded across multiple buckets to distribute load.
  • Once found, Pingora attaches the relevant metadata and sends the content from R2 to the nearest upper-tier to be cached, then to the lower-tier and finally back to the visitor.
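The cache-key sharding in the third step might look like this sketch; the bucket count and hash function here are illustrative, not Cloudflare’s internals:

```python
import hashlib

def locate(cache_key: str, num_buckets: int = 16) -> tuple[int, str]:
    """Derive both the R2 bucket shard and the object name from the cache
    key's hash, as described above."""
    digest = hashlib.sha256(cache_key.encode()).hexdigest()
    bucket = int(digest, 16) % num_buckets  # which of the zone's buckets
    return bucket, digest                   # the digest doubles as the object name
```

Because the mapping is deterministic, any Pingora instance can compute where an asset lives without a central index.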

This is magic! None of the above needs to be managed by the user. By bringing together R2, Workers, Durable Objects, Pingora, and Tiered Cache we were able to quickly build and make changes to Cache Reserve to scale as needed…

What’s next for Cache Reserve

In addition to the work we’ve done to scale Cache Reserve, opening the product up also opens the door to more features and integrations across Cloudflare. We plan on putting additional analytics and metrics in the hands of Cache Reserve users, so they know precisely what’s in Cache Reserve and how much egress it’s saving them. We also plan on building out more complex integrations with R2 so if customers want to begin managing their storage, they are able to easily make that transition. Finally, we’re going to be looking into providing more options for customers to control precisely what is eligible for Cache Reserve. These features represent just the beginning for how customers will control and customize their cache on Cloudflare.

What’s some of the feedback been so far?

As a long time Cloudflare customer, we were eager to deploy Cache Reserve to provide cost savings and improved performance for our end users. Ensuring our application always performs optimally for our global partners and delivery riders is a primary focus of Delivery Hero. With Cache Reserve our cache hit ratio improved by 5% enabling us to scale back our infrastructure and simplify what is needed to operate our global site and provide additional cost savings.
Wai Hang Tang, Director of Engineering at Delivery Hero

Anthology uses Cloudflare’s global cache to drastically improve the performance of content for our end users at schools and universities. By pushing a single button to enable Cache Reserve, we were able to provide a great experience for teachers and students and reduce two-thirds of our daily egress traffic.
Paul Pearcy, Senior Staff Engineer at Anthology

At Enjoei we’re always looking for ways to help make our end-user sites faster and more efficient. By using Cloudflare Cache Reserve, we were able to drastically improve our cache hit ratio by more than 10% which reduced our origin egress costs. Cache Reserve also improved the performance for many of our merchants’ sites in South America, which improved their SEO and discoverability across the Internet (Google, Criteo, Facebook, Tiktok)– and it took no time to set it up.
Elomar Correia, Head of DevOps SRE | Enterprise Solutions Architect at Enjoei

In the live events industry, the size and demand for our cacheable content can be extremely volatile, which causes unpredictable swings in our egress fees. Additionally, keeping data as close to our users as possible is critical for customer experience in the high traffic and low bandwidth scenarios our products are used in, such as conventions and music festivals. Cache Reserve helps us mitigate both of these problems with minimal impact on our engineering teams, giving us more predictable costs and lower latency than existing solutions.
Jarrett Hawrylak, VP of Engineering | Enterprise Ticketing at Patron Technology

How can I use it today?

As of today, Cache Reserve is in open beta, meaning that it’s available to anyone who wants to use it.

To use Cache Reserve:

  • Simply go to the Caching tile in the dashboard.
  • Navigate to the Cache Reserve page and push the enable data sync button (or purchase button).

Enterprise Customers can work with their Cloudflare Account team to access Cache Reserve.

Customers can ensure Cache Reserve is working by looking at the baseline metrics regarding how much data is cached and how many operations we’ve seen in the Cache Reserve section of the dashboard. Specific requests served by Cache Reserve are available by using Logpush v2 and finding HTTP requests with the field “CacheReserveUsed.”
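Filtering Logpush output for that field can be sketched in a few lines. The “CacheReserveUsed” field comes from the text above; the other field name and the sample records are illustrative, not real Logpush output:

```python
import json

def served_from_reserve(log_lines):
    """Return the request URIs whose responses were served by Cache Reserve,
    by checking the CacheReserveUsed field on each Logpush record."""
    uris = []
    for line in log_lines:
        record = json.loads(line)
        if record.get("CacheReserveUsed"):
            uris.append(record["ClientRequestURI"])
    return uris

# Hypothetical Logpush records, trimmed to two fields for the example:
sample = [
    '{"ClientRequestURI": "/assets/hero.jpg", "CacheReserveUsed": true}',
    '{"ClientRequestURI": "/index.html", "CacheReserveUsed": false}',
]
```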

We will continue to make sure that we are quickly triaging the feedback you give us and making improvements to help ensure Cache Reserve is easy to use, massively beneficial, and your choice for reducing egress fees for cached content.

Try it out

We’ve been so excited to get Cache Reserve in more people’s hands. There will be more exciting developments to Cache Reserve as we continue to invest in giving you all the tools you need to build your perfect cache.

Try Cache Reserve today and let us know what you think.

Migrate from S3 easily with the R2 Super Slurper

Post Syndicated from Aly Cabral original https://blog.cloudflare.com/cloudflare-r2-super-slurper/

R2 is S3-compatible, globally distributed object storage that allows developers to store large amounts of unstructured data without the costly egress bandwidth fees commonly found with other providers.

To enjoy this egress freedom, you first have to move the data you currently store somewhere else into R2. You might want to do it all at once, moving as much data as quickly as possible while ensuring data consistency. Or you might prefer moving the data to R2 slowly, gradually shifting your reads from your old provider to R2, and only then deciding whether to cut off your old storage or keep it as a backup for new objects in R2.

There are multiple options for architecture and implementations for this movement, but taking terabytes of data from one cloud storage provider to another is always problematic, always involves planning, and likely requires staffing.

And that was hard. But not anymore.

Today we’re announcing the R2 Super Slurper, the feature that will enable you to move all your data to R2 in one giant slurp or sip by sip — all in a friendly, intuitive UI and API.

The first step: R2 Super Slurper Private Beta

One giant slurp

The very first iteration of the R2 Super Slurper allows you to target an S3 bucket and import the objects you have stored there into your R2 bucket. It’s a simple, one-time import that covers the most common scenarios. Point to your existing S3 source, grant the R2 Super Slurper permissions to read the objects you want to migrate, and an asynchronous job will take care of the rest.

You’ll also be able to save the definitions and credentials used to access your source bucket, so you can migrate different folders from within the bucket in new operations without having to define URLs and credentials all over again. This alone will save you from scripting your way through buckets with many paths you’d like to validate for consistency. During the beta stages, with your feedback, we will evolve the R2 Super Slurper to the point where anyone can achieve an entirely consistent super slurp with the click of just a few buttons.

Automatic sip by sip migration

Other future development includes automatic sip by sip migration, which incrementally copies objects to R2 as end users request them. It allows you to start serving objects from R2 as they migrate, saving you money immediately.

The flow of the requests and object migration will look like this:

  • Check for Object — A request arrives at Cloudflare (1), and we check the R2 bucket for the requested object (2). If the object exists, R2 serves it (3).
  • Copy the Object — If the object does not exist in R2, a request for the object flows to the origin bucket (2a). Once there’s an answer with an object, we serve it and copy it into R2 (2b).
  • Serve the Object — R2 serves all future requests for the object (3).
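The three steps above can be sketched as code, with dicts standing in for the R2 bucket and the old provider’s bucket (illustrative only):

```python
def serve(key, r2, old_bucket, copied):
    """Check R2 for the object (2); on a miss, fetch from the old provider's
    bucket (2a) and copy it into R2 while serving it (2b); afterwards R2
    serves the object directly (3)."""
    if key in r2:
        return r2[key], "r2"
    body = old_bucket[key]   # request flows to the origin bucket
    r2[key] = body           # serve it and copy it into R2
    copied.append(key)
    return body, "old-provider"
```

The first request for an object pays the old provider’s egress once; every subsequent request is served from R2.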

With this capability you can copy your objects, previously scattered through one or even multiple buckets from other vendors, while ensuring that everything requested from the end-user side gets served from R2. And because you will only need to use the R2 Super Slurper to sip the object from elsewhere on the first request, you will start saving on those egress fees for any subsequent ones.

We are targeting S3-compatible buckets for now, but you can expect other sources to become available during 2023.

Join the waitlist for the R2 Super Slurper private beta

To access the R2 Super Slurper, you must be an R2 user first and sign up for the R2 Super Slurper waitlist here.

We will collaborate closely with many early users in the private beta stage to refine and test the service. Soon, we’ll announce an open beta where users can sign up for the service.

Make sure to join our Discord server and get in touch with a fantastic community of users and Cloudflare staff for all R2-related topics!

Cloudflare Pages gets even faster with Early Hints

Post Syndicated from Greg Brimble original https://blog.cloudflare.com/early-hints-on-cloudflare-pages/

Last year, we demonstrated what we meant by “lightning fast”, showing Pages’ first-class performance in all parts of the world, and today, we’re thrilled to announce an integration that takes this commitment to speed even further – introducing Pages support for Early Hints! Early Hints allow you to unblock the loading of page critical resources, ahead of any slow-to-deliver HTML pages. Early Hints can be used to improve the loading experience for your visitors by significantly reducing key performance metrics such as the largest contentful paint (LCP).

What is Early Hints?

Early Hints is a new web standard, supported in Chrome since version 103, that Cloudflare has made generally available for websites using our network. Early Hints supersedes Server Push as a mechanism to “hint” to a browser about critical resources on your page (e.g. fonts, CSS, and above-the-fold images). The browser can immediately start loading these resources rather than waiting for the full HTML response, putting to use time that was previously wasted: before Early Hints, no work could start until the browser received the first byte of the response. Early Hints can bring significant improvements to the performance of your website, particularly for metrics such as LCP.

How Early Hints works

Cloudflare caches any preload and preconnect type Link headers sent from your 200 OK response, and sends them early for any subsequent requests as a 103 Early Hints response.

In practical terms, an HTTP conversation now looks like this:


Request

GET / HTTP/1.1
Host: example.com

Early Hints Response

103 Early Hints
Link: </styles.css>; rel=preload; as=style

Full Response

200 OK
Content-Type: text/html; charset=utf-8
Link: </styles.css>; rel=preload; as=style

<!-- ... -->
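The header-caching step (keep only the preload and preconnect Link headers from the 200 OK, replay them as a 103) can be sketched like this, assuming headers arrive as (name, value) pairs; this is an illustration, not Cloudflare’s implementation:

```python
def early_hint_links(response_headers):
    """Pick out the preload/preconnect Link headers from a 200 OK response;
    these are the values replayed in a 103 Early Hints response."""
    hints = []
    for name, value in response_headers:
        rels = value.replace('"', "")  # tolerate rel="preload" quoting
        if name.lower() == "link" and ("rel=preload" in rels or "rel=preconnect" in rels):
            hints.append(value)
    return hints
```

Other Link relations (such as prefetch) are left out, since only preload and preconnect are cached for Early Hints.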

Early Hints on Cloudflare Pages

Websites hosted on Cloudflare Pages can particularly benefit from Early Hints. If you’re using Pages Functions to generate dynamic server-side rendered (SSR) pages, there’s a good chance that Early Hints will make a significant improvement on your website.

Performance Testing

We created a simple demonstration e-commerce website in order to evaluate the performance of Early Hints.

This landing page has the price of each item, as well as a remaining stock counter. The page itself is just hand-crafted HTML and CSS, but these pricing and inventory values are being templated in live for every request with Pages Functions. To simulate loading from an external data-source (possibly backed by KV, Durable Objects, D1, or even an external API like Shopify) we’ve added a fixed delay before this data resolves. We include preload links in our response to some critical resources:

  • an external CSS stylesheet,
  • the image of the t-shirt,
  • the image of the cap,
  • and the image of the keycap.

The very first request produces a waterfall like you might expect. The request is blocked for a considerable amount of time while we resolve the pricing and inventory data. Once loaded, the browser parses the HTML, pulls out the external resources, and makes subsequent requests for their contents. The CSS and images extend the loading time considerably, given their large dimensions and high quality. The largest contentful paint (LCP) occurs when the t-shirt image loads, and the document finishes once all requests are fulfilled.

Subsequent requests are where things get interesting! These preload links are cached on Cloudflare’s global network, and are sent ahead of the document in a 103 Early Hints response. Now, the waterfall looks much different. The initial request goes out the same, but now, requests for the CSS and images slide much further left since they can be started as soon as the 103 response is delivered. The browser starts fetching those resources while waiting for the original request to finish server-side rendering. The LCP again occurs once the t-shirt image has loaded, but this time, it’s brought forward by 530ms because it started loading 752ms faster, and the document is fully loaded 562ms faster, again because the external resources could all start loading faster.

The final four requests (highlighted in yellow) come back as 304 Not Modified responses, using an If-None-Match header. By default, Cloudflare Pages requires the browser to confirm that all assets are fresh, so, on the off chance that they were updated between the Early Hints response and their use, the browser checks whether they have changed. Since they haven’t, there’s no contentful body to download, and the response completes quickly. This can be avoided by setting a custom Cache-Control header on these assets using a _headers file. For example, you could cache these images for one minute with a rule like:

# _headers

/images/*
  Cache-Control: max-age=60

We could take this performance audit further by exploring other features that Cloudflare offers, such as automatic CSS minification, Cloudflare Images, and Image Resizing.

We already serve Cloudflare Pages from one of the fastest networks in the world — Early Hints simply allows developers to take advantage of our global network even further.

Using Early Hints and Cloudflare Pages

The Early Hints feature on Cloudflare is currently restricted to caching Link headers in a webpage’s response. Typically, this would mean that Cloudflare Pages users would either need to use the _headers file, or Pages Functions to apply these headers. However, for your convenience, we’ve also added support to transform any <link> HTML elements you include in your body into Link headers. This allows you to directly control the Early Hints you send, straight from the same document where you reference these resources – no need to come out of HTML to take advantage of Early Hints.

For example, the following HTML document will generate an Early Hints response:

HTML Document

<!DOCTYPE html>
    <link rel="preload" as="style" href="/styles.css" />
    <!-- ... -->

Early Hints Response

103 Early Hints
Link: </styles.css>; rel=preload; as=style

As previously mentioned, Link headers can also be set with a _headers file if you prefer:

# _headers

/*
  Link: </styles.css>; rel=preload; as=style
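The &lt;link&gt;-to-Link-header transformation can be sketched with Python’s html.parser as a stand-in for whatever Pages actually uses (an illustrative sketch, not Pages’ parser):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Turn <link rel="preload"/"preconnect"> elements into Link header
    values, mirroring the transformation described above."""
    def __init__(self):
        super().__init__()
        self.headers = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") in ("preload", "preconnect"):
            header = f"<{a['href']}>; rel={a['rel']}"
            if "as" in a:
                header += f"; as={a['as']}"
            self.headers.append(header)

def link_headers(html: str) -> list[str]:
    parser = LinkCollector()
    parser.feed(html)
    return parser.headers
```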

Early Hints (and the automatic HTML <link> parsing) has already been enabled automatically for all pages.dev domains. If you have any custom domains configured on your Pages project, make sure to enable Early Hints on that domain in the Cloudflare dashboard under the “Speed” tab. More information can be found in our documentation.

Additionally, in the future, we hope to support the Smart Early Hints feature. Smart Early Hints will enable Cloudflare to automatically generate Early Hints, even when no Link header or <link> elements exist, by analyzing website traffic and inferring which resources are important for a given page. We’ll be sharing more about Smart Early Hints soon.

In the meantime, try out Early Hints on Pages today! Let us know how much of a loading improvement you see in our Discord server.

Bringing the best live video experience to Cloudflare Stream with AV1

Post Syndicated from Renan Dincer original https://blog.cloudflare.com/av1-cloudflare-stream-beta/

Consumer hardware is pushing the limits of consumers’ bandwidth.

VR headsets support 5760 x 3840 resolution — 22.1 million pixels per frame of video. Nearly all new TVs and smartphones sold today now support 4K — 8.8 million pixels per frame. It’s now normal for most people on a subway to be casually streaming video on their phone, even as they pass through a tunnel. People expect all of this to just work, and get frustrated when it doesn’t.

Consumer Internet bandwidth hasn’t kept up. Even advanced mobile carriers still limit streaming video resolution to prevent network congestion. Many mobile users still have to monitor and limit their mobile data usage. Higher Internet speeds require expensive infrastructure upgrades, and 30% of Americans still say they often have problems simply connecting to the Internet at home.

We talk to developers every day who are pushing up against these limits, trying to deliver the highest quality streaming video without buffering or jitter, challenged by viewers’ expectations and bandwidth. Developers building live video experiences hit these limits the hardest — buffering doesn’t just delay video playback, it can cause the viewer to get out of sync with the live event. Buffering can cause a sports fan to miss a key moment as playback suddenly skips ahead, or find out in a text message about the outcome of the final play, before they’ve had a chance to watch.

Today we’re announcing a big step towards breaking the ceiling of these limits — support in Cloudflare Stream for the AV1 codec for live videos and their recordings, available today to all Cloudflare Stream customers in open beta. Read the docs to get started, or watch an AV1 video from Cloudflare Stream in your web browser. AV1 is an open and royalty-free video codec that uses 46% less bandwidth than H.264, the most commonly used video codec on the web today.

What is AV1, and how does it improve live video streaming?

Every piece of information that travels across the Internet, from web pages to photos, requires data to be transmitted between two computers. A single character usually takes one byte, so a two-page letter would be 3600 bytes or 3.6 kilobytes of data transferred.

One pixel in a photo takes 3 bytes, one each for red, green, and blue. A 4K photo has 3840 × 2160 = 8,294,400 pixels, so it would take 24,883,200 bytes, or about 24.9 Megabytes. A video is like a photo that changes 30 times a second, which works out to almost 45 Gigabytes per minute of uncompressed video. That’s a lot!
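For reference, the uncompressed-video arithmetic (3 bytes per pixel, 3840 × 2160, 30 frames per second) can be checked in a few lines:

```python
bytes_per_pixel = 3              # one byte each for red, green, blue
pixels_per_frame = 3840 * 2160   # pixels in a 4K frame
frame_bytes = pixels_per_frame * bytes_per_pixel   # bytes per uncompressed frame
per_minute = frame_bytes * 30 * 60                 # 30 fps for 60 seconds
```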

To reduce the amount of bandwidth needed to stream video, before video is sent to your device, it is compressed using a codec. When your device receives video, it decodes this into the pixels displayed on your screen. These codecs are essential to both streaming and storing video.

Video compression codecs combine multiple advanced techniques, and are able to compress video to one percent of the original size, with your eyes barely noticing a difference. This also makes video codecs computationally intensive and hard to run. Smartphones, laptops and TVs have specific media decoding hardware, separate from the main CPU, optimized to decode specific protocols quickly, using the minimum amount of battery life and power.

Every few years, as researchers invent more efficient compression techniques, standards bodies release new codecs that take advantage of these improvements. Each generation of improvements in compression technology increases the requirements for computers that run them. With higher requirements, new chips are made available with increased compute capacity. These new chips allow your device to display higher quality video while using less bandwidth.

AV1 takes advantage of recent advances in compute to deliver video with dramatically fewer bytes, even compared to other relatively recent video protocols like VP9 and HEVC.

AV1 leverages the power of new smartphone chips

One of the biggest developments of the past few years has been the rise of custom chip designs for smartphones. Much of what’s driven the development of these chips is the need for advanced on-device image and video processing, as companies compete on the basis of which smartphone has the best camera.

This means the phones we carry around have an incredible amount of compute power. One way to think about AV1 is that it shifts work from the network to the viewer’s device. AV1 is fewer bytes over the wire, but computationally harder to decode than prior formats. When AV1 was first announced in 2018, it was dismissed by some as too slow to encode and decode, but smartphone chips have become radically faster in the past four years, more quickly than many saw coming.

AV1 hardware decoding is already built into the latest Google Pixel smartphones as part of the Tensor chip design. The Samsung Exynos 2200 and MediaTek Dimensity 1000 SoC mobile chipsets both support hardware accelerated AV1 decoding. It appears that Google will require that all devices that support Android 14 support decoding AV1. And AVPlayer, the media playback API built into iOS and tvOS, now includes an option for AV1, which hints at future support. It’s clear that the industry is heading towards hardware-accelerated AV1 decoding in the most popular consumer devices.

With hardware decoding comes battery life savings — essential for both today’s smartphones and tomorrow’s VR headsets. For example, a Google Pixel 6 with AV1 hardware decoding uses only minimal battery and CPU to decode and play our test video.

AV1 encoding requires even more compute power

Just as decoding is significantly harder for end-user devices, it is also significantly harder to encode video using AV1. When AV1 was announced in 2018, many doubted whether hardware would be able to encode it efficiently enough for the protocol to be adopted quickly enough.

To demonstrate this, we encoded the 4K rendering of Big Buck Bunny (a classic among video engineers!) into AV1, using an AMD EPYC 7642 48-Core Processor with 256 GB RAM. This CPU continues to be a workhorse of our compute fleet, as we have written about previously. We used the following command to re-encode the video, based on the example in the ffmpeg AV1 documentation:

ffmpeg -i bbb_sunflower_2160p_30fps_normal.mp4 -c:v libaom-av1 -crf 30 -b:v 0 -strict -2 av1_test.mkv

Using a single core, encoding just two seconds of video at 30fps took over 30 minutes. Even if all 48 cores were used, encoding those two seconds would still take over 43 seconds at a minimum. Live encoding using only CPUs would require over 20 servers running at full capacity.

Special-purpose AV1 software encoders like rav1e and SVT-AV1 that run on general purpose CPUs can encode somewhat faster than libaom-av1 with ffmpeg, but still consume a huge amount of compute power to encode AV1 in real-time, requiring multiple servers running at full capacity in many scenarios.

Cloudflare Stream encodes your video to AV1 in real-time

At Cloudflare, we control both the hardware and software on our network. So to solve the CPU constraint, we’ve installed dedicated AV1 hardware encoders, designed specifically to encode AV1 at blazing fast speeds. This end-to-end control is what lets us encode your video to AV1 in real-time. It is entirely out of reach for most public cloud customers, including the video infrastructure providers who depend on them for compute power.

Encoding in real-time means you can use AV1 for live video streaming, where saving bandwidth matters most. With a pre-recorded video, the client video player can fetch future segments of video well in advance, relying on a buffer that can be many tens of seconds long. With live video, buffering is constrained by latency — it’s not possible to build up a large buffer when viewing a live stream. There is less margin for error with live streaming, and every byte saved means that if a viewer’s connection is interrupted, it takes less time to recover before the buffer is empty.

Stream lets you support AV1 with no additional work

AV1 has a chicken-and-egg dilemma, and we’re helping solve it.

Companies with large video libraries often re-encode their entire content library to a new codec before using it. But AV1 is so computationally intensive that re-encoding whole libraries has been cost prohibitive. Companies have to choose specific videos to re-encode, and guess which content will be most viewed ahead of time. This is particularly challenging for apps with user generated content, where content can suddenly go viral, and viewer patterns are hard to anticipate.

This has slowed down the adoption of AV1 — content providers wait for more devices to support AV1, and device manufacturers wait for more content to use AV1. Which will come first?

With Cloudflare Stream there is no need to manually trigger re-encoding, re-upload video, or manage the bulk encoding of a large video library. This is a unique approach that is made possible by integrating encoding and delivery into a single product — it is not possible to encode on-demand using the old way of encoding first, and then pointing a CDN at a bucket of pre-encoded files.

We think this approach can accelerate the adoption of AV1. Consider a video app with millions of minutes of user-generated video. Most videos will never be watched again. In the old model, developers would have to spend huge sums of money to encode upfront, or pick and choose which videos to re-encode. With Stream, we can help anyone incrementally adopt AV1, without re-encoding upfront. As we work towards making AV1 Generally Available, we’ll be working to make supporting AV1 simple and painless, even for videos already uploaded to Stream, with no special configuration necessary.

Open, royalty-free, and widely supported

At Cloudflare, we are committed to open standards and fighting patent trolls. While there are multiple competing options for new video codecs, we chose to support AV1 first in part because it is open source and has royalty-free licensing.

Other encoding codecs force device manufacturers to pay royalty fees in order to adopt their standard in consumer hardware, and their backers have been quick to file lawsuits against competing video codecs. The group behind the open and royalty-free VP8 and VP9 codecs has been pushing back against this model for more than a decade, and AV1 is the successor to these codecs, with support from all the biggest technology companies, both software and hardware. Beyond its technical accomplishments, AV1 is a clear message from the industry that the future of video encoding should be open, royalty-free, and free from patent litigation.

Try AV1 right now with your live stream or live recording

Support for AV1 is currently in open beta. You can try using AV1 on your own live video with Cloudflare Stream right now — just add the ?betaCodecSuggestion=av1 query parameter to the HLS or DASH manifest URL for any live stream or live recording created after October 1st in Cloudflare Stream. Read the docs to get started. If you don’t yet have a Cloudflare account, you can sign up here and start using Cloudflare Stream in just a few minutes.

We also have a recording of a live video, encoded using AV1, that you can watch here. Note that Safari does not yet support AV1.

We encourage you to try AV1 with your test streams, and we’d love your feedback. Join our Discord channel and tell us what you’re building, and what kinds of video you’re interested in using AV1 with. We’d love to hear from you!

Automatic (secure) transmission: taking the pain out of origin connection security

Post Syndicated from Alex Krivit original https://blog.cloudflare.com/securing-origin-connectivity/

In 2014, Cloudflare set out to encrypt the Internet by introducing Universal SSL. It made getting an SSL/TLS certificate free and easy at a time when doing so was neither free, nor easy. Overnight millions of websites had a secure connection between the user’s browser and Cloudflare.

But getting the connection encrypted from Cloudflare to the customer’s origin server was more complex. Since Cloudflare and all browsers supported SSL/TLS, the connection between the browser and Cloudflare could be instantly secured. But back in 2014 configuring an origin server with an SSL/TLS certificate was complex, expensive, and sometimes not even possible.

And so we relied on users to configure the best security level for their origin server. Later we added a service that detects and recommends the highest level of security for the connection between Cloudflare and the origin server. We also introduced free origin server certificates for customers who didn’t want to get a certificate elsewhere.

Today, we’re going even further. Cloudflare will shortly find the most secure connection possible to our customers’ origin servers and use it, automatically. Doing this correctly, at scale, while not breaking a customer’s service is very complicated. This blog post explains how we are automatically achieving the highest level of security possible for those customers who don’t want to spend time configuring their SSL/TLS setup manually.

Why configuring origin SSL automatically is so hard

When we announced Universal SSL, we knew the backend security of the connection between Cloudflare and the origin was a different and harder problem to solve.

In order to configure the tightest security, customers had to procure a certificate from a third party and upload it to their origin. Then they had to indicate to Cloudflare that we should use this certificate to verify the identity of the server, while also indicating the connection security capabilities of their origin. This could be an expensive and tedious process. To help alleviate this high setup cost, in 2015 Cloudflare launched a beta Origin CA service that provided free, limited-function certificates for customer origin servers. We also provided guidance on how to correctly configure and upload the certificates, so that secure connections between Cloudflare and a customer’s origin could be established quickly and easily.

What we discovered, though, is that while this service was useful to customers, it still required a lot of configuration. We didn’t see the same adoption shift we saw with Universal SSL, because customers still had to fight with their origins to upload certificates and test that they had configured everything correctly. And when you throw things like load balancers into the mix, or servers mapped to different subdomains, handling server-side SSL/TLS gets even more complicated.

Around the same time as that announcement, Let’s Encrypt and other services began offering certificates as a public CA for free, making TLS easier and paving the way for widespread adoption. Let’s Encrypt and Cloudflare had come to the same conclusion: by offering certificates for free, simplifying server configuration for the user, and working to streamline certificate renewal, they could make a tangible impact on the overall security of the web.

Automatic (secure) transmission: taking the pain out of origin connection security

The announcements of free and easy-to-configure certificates coincided with increased attention on origin-facing security. Cloudflare customers began requesting more documentation for configuring origin-facing certificates and SSL/TLS communication that was performant and intuitive. In response, in 2016 we announced the general availability of Origin CA, providing cheap and easy origin certificates along with guidance on how to best configure backend security for any website.

The increased customer demand and attention helped pave the way for additional features that focused on backend security on Cloudflare. For example, authenticated origin pull ensures that only HTTPS requests from Cloudflare will receive a response from your origin, preventing an origin response from requests outside of Cloudflare. Another option, Cloudflare Tunnel can be set up to run on the origin servers, proactively establishing secure and private tunnels to the nearest Cloudflare data center. This configuration allows customers to completely lock down their origin servers to only receive requests routed through our network. For customers unable to lock down their origins using this method, we still encourage adopting the strongest possible security when configuring how Cloudflare should connect to an origin server.

Cloudflare currently offers five options for SSL/TLS configurability that we use when communicating with origins:

  • In Off mode, as you might expect, traffic from browsers to Cloudflare and from Cloudflare to origins is not encrypted and will use plain text HTTP.
  • In Flexible mode, traffic from browsers to Cloudflare can be encrypted via HTTPS, but traffic from Cloudflare to the site’s origin server is not. This is a common selection for origins that cannot support TLS, even though we recommend upgrading this origin configuration wherever possible. A guide for upgrading can be found here.
  • In Full mode, Cloudflare follows whatever is happening with the browser request and uses that same option to connect to the origin. For example, if the browser uses HTTP to connect to Cloudflare, we’ll establish a connection with the origin over HTTP. If the browser uses HTTPS, we’ll use HTTPS to communicate with the origin; however we will not validate the certificate on the origin to prove the identity and trustworthiness of the server.
  • In Full (strict) mode, traffic between Cloudflare and the origin follows the same pattern as in Full mode, however Full (strict) mode adds validation of the origin server’s certificate. The origin certificate can either be issued by a public CA like Let’s Encrypt or by Cloudflare Origin CA.
  • In Strict mode, traffic from the browser to Cloudflare that is HTTP or HTTPS will always be connected to the origin over HTTPS with a validation of the origin server’s certificate.
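The five modes above can be summarized as a mapping from (mode, scheme the browser used) to (scheme used toward the origin, whether the origin certificate is validated). The sketch below is an illustration of that mapping, not Cloudflare's actual implementation; the mode names are shortened identifiers.

```python
def origin_connection(mode: str, browser_scheme: str) -> tuple:
    """Return (origin_scheme, validate_cert) for a given SSL/TLS mode
    and the scheme ("http" or "https") the browser used.
    A sketch of the five modes described above."""
    if mode in ("off", "flexible"):
        # Origin connection is always plain HTTP, no certificate involved.
        return ("http", False)
    if mode == "full":
        # Mirror whatever the browser used, but never validate the certificate.
        return (browser_scheme, False)
    if mode == "full_strict":
        # Mirror the browser; validate the origin certificate when HTTPS is used.
        return (browser_scheme, browser_scheme == "https")
    if mode == "strict":
        # Always HTTPS to the origin, always validated.
        return ("https", True)
    raise ValueError(f"unknown mode: {mode!r}")
```

Note how Flexible and Off look identical from the origin's point of view: both leave the backend leg unencrypted, which is exactly the gap the automatic upgrades aim to close.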

What we have found in a lot of cases is that when customers initially signed up for Cloudflare, the origin they were using could not support the most advanced versions of encryption, resulting in origin-facing communication using unencrypted HTTP. These default values persisted over time, even though the origin has become more capable. We think the time is ripe to re-evaluate the entire concept of default SSL/TLS levels.

That’s why we will reduce the configuration burden for origin-facing security by automatically managing this on behalf of our customers. Cloudflare will provide a zero configuration option for how we will communicate with origins: we will simply look at an origin and use the most-secure option available to communicate with it.

Re-evaluating default SSL/TLS modes is only the beginning. Not only will we automatically upgrade sites to their best security setting, we will also open up all SSL/TLS modes to all plan levels. Historically, Strict mode was reserved for enterprise customers only. This was because we released this mode in 2014 when few people had origins that were able to communicate over SSL/TLS, and we were nervous about customers breaking their configurations. But this is 2022, and we think that Strict mode should be available to anyone who wants to use it. So we will be opening it up to everyone with the launch of the automatic upgrades.

How will automatic upgrading work?

To upgrade the origin-facing security of websites, we first need to determine the highest security level the origin can use. To make this determination, we will use the SSL/TLS Recommender tool that we released a year ago.

The recommender performs a series of requests from Cloudflare to the customer’s origin(s) to determine if the backend communication can be upgraded beyond what is currently configured. The recommender accomplishes this by:

  • Crawling the website to collect links on different pages of the site. For websites with large numbers of links, the recommender will only examine a subset. Similarly, for sites where the crawl turns up an insufficient number of links, we augment our results with a sample of links from recent visitor requests to the zone. All of this is to get a representative sample of where requests are going, so we know how responses are served from the origin.
  • The crawler uses the user agent Cloudflare-SSLDetector and has been added to Cloudflare’s list of known “good bots”.
  • Next, the recommender downloads the content of each link over both HTTP and HTTPS. The recommender makes only idempotent GET requests when scanning origin servers to avoid modifying server resource state.
  • Following this, the recommender runs a content similarity algorithm to determine if the content collected over HTTP and HTTPS matches.
  • If the content that is downloaded over HTTP matches the content downloaded over HTTPS, then it’s known that we can upgrade the security of the website without negative consequences.
  • If the website is already configured to Full mode, we will perform a certificate validation (without the additional need for crawling the site) to determine whether it can be updated to Full (strict) mode or higher.
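The upgrade decision in the steps above boils down to: fetch each sampled link over both schemes, check the bodies match, and only recommend an upgrade if every sampled page agrees. The sketch below illustrates that logic; the `difflib` ratio and the 0.9 threshold are stand-ins for the recommender's actual content-similarity algorithm, which is not public.

```python
import difflib

def bodies_match(http_body: str, https_body: str, threshold: float = 0.90) -> bool:
    """Decide whether content served over HTTP and HTTPS is effectively
    the same. difflib's similarity ratio is an illustrative stand-in for
    the recommender's real content-similarity algorithm."""
    ratio = difflib.SequenceMatcher(None, http_body, https_body).ratio()
    return ratio >= threshold

def recommend(current_mode: str, sampled_pages) -> str:
    """sampled_pages: iterable of (http_body, https_body) pairs collected
    from the crawl. Recommend an upgrade to Full only if every sampled
    page serves matching content over both schemes; otherwise keep the
    current mode unchanged."""
    if current_mode in ("off", "flexible"):
        if sampled_pages and all(bodies_match(h, s) for h, s in sampled_pages):
            return "full"
    return current_mode
```

A site already in Full mode skips this crawl entirely; as noted above, only a certificate validation is needed to decide whether it can move to Full (strict).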

If we determine that the customer’s origin can be upgraded without breaking anything, we will upgrade the origin-facing security automatically.

But that’s not all. Not only are we removing the configuration burden for services on Cloudflare, but we’re also providing more precise security settings by moving from per-zone SSL/TLS settings to per-origin SSL/TLS settings.

The current implementation of the backend SSL/TLS service is related to an entire website, which works well for those with a single origin. For those that have more complex setups however, this can mean that origin-facing security is defined by the lowest capable origin serving a part of the traffic for that service. For example, if a website uses img.example.com and api.example.com, and these subdomains are served by different origins that have different security capabilities, we would not want to limit the SSL/TLS capabilities of both subdomains to the least secure origin. By using our new service, we will be able to set per-origin security more precisely to allow us to maximize the security posture of each origin.
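The img.example.com / api.example.com scenario above can be pictured as a lookup table: per-origin results take precedence, and the zone-wide setting remains as the fallback. The hostnames and modes below are hypothetical, purely to illustrate the resolution order.

```python
# Legacy behavior: one zone-wide setting constrains every origin.
ZONE_MODE = "flexible"

# Per-origin results from scanning each origin independently
# (hypothetical hostnames and outcomes).
PER_ORIGIN_MODE = {
    "img.example.com": "full",         # origin speaks TLS, cert not trusted
    "api.example.com": "full_strict",  # origin presents a valid certificate
}

def effective_mode(hostname: str) -> str:
    """Resolve the SSL/TLS mode for one hostname: prefer the per-origin
    result, fall back to the zone-wide setting otherwise."""
    return PER_ORIGIN_MODE.get(hostname, ZONE_MODE)
```

Under the old per-zone model, the least capable origin would have pinned the whole zone to Flexible; with per-origin settings, api.example.com can run at Full (strict) while its siblings stay wherever their own capabilities allow.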

The goal of this is to maximize the origin-facing security of everything on Cloudflare. However, if an origin blocks the SSL/TLS Recommender, is non-functional, or its owner opts out of this service, we will not complete the scans and will not be able to upgrade security. Details on how to opt out will be provided via email announcements soon.

Opting out

There are a number of reasons why someone might want to configure a lower-than-optimal security setting for their website. One common reason customers provide is a fear that having higher security settings will negatively impact the performance of their site. Others may want to set a suboptimal security setting for testing purposes or to debug some behavior. Whatever the reason, we will provide the tools needed to continue to configure the SSL/TLS mode you want, even if that’s different from what we think is the best.

When is this going to happen?

We will begin to roll this change out before the end of the year. If you read this and want to make sure you’re at the highest level of backend security already, we recommend Full (strict) or Strict mode. If you prefer to wait for us to automatically upgrade your origin security for you, please keep an eye on your inbox for the date we will begin rolling out this change for your group.

At Cloudflare, we believe that the Internet needs to be secure and private. If you’d like to help us achieve that, we’re hiring across the engineering organization.