Being a bad guy on the Internet is a really good business. Phishing is the root cause of more than 90% of cybersecurity incidents, and during this third week of August alone, phishing attacks were reported against the U.S. elections, in the geopolitical conflict between the U.S., Israel, and Iran, and as the cause of $60M in corporate losses.
You might think that after 30 years of email being the top vector for attack and risk, we are helpless to do anything about it. But that would give too much credit to bad actors, and it would misunderstand how defenders focused on detections can take control and win.
Phishing isn't exclusively about email, or any specific protocol for that matter. Simply put, it is an attempt to get a person, like you or me, to take an action that unwittingly leads to damages. These attacks work because they appear to be authentic, visually or organizationally, such as pretending to be the CEO or CFO of your company. When you break it down, there are three main attack vectors that Cloudflare has seen do the most damage across the bad emails we protect our customers from:

1. Clicking links (deceptive links are 35.6% of threat indicators)

2. Downloading files or malware (malicious attachments are 1.9% of threat indicators)

3. Business email compromise (BEC): phishing that elicits money or intellectual property with no links or files (0.5% of threat indicators)
Today, we at Cloudflare see an increase in what we've termed multi-channel phishing. What other channels can deliver links and files, or elicit BEC actions? There's SMS (text messaging) and public and private messaging applications, increasingly common attack vectors that take advantage of the ability to send links over those channels and of how people consume information and work. There's cloud collaboration, where attackers rely on links, files, and BEC phishing on commonly used collaboration tools like Google Workspace, Atlassian, and Microsoft Office 365. And finally, there's web and social phishing targeting people on LinkedIn and X. Ultimately, any attempt to stop phishing needs to be comprehensive enough to detect and protect against these different vectors.
A real example
It’s one thing to tell you this, but we’d love to give you an example of how a multi-channel phish plays out with a sophisticated attacker.
Here’s an email message that an executive notices is in their junk folder. That’s because our Email Security product noticed there’s something off about it and moved it there, but it relates to a project the executive is working on, so the executive thinks it’s legitimate. There’s a request for a company org chart, and the attacker knows that this is the kind of thing that’s going to be caught if they continue on email, so they include a link to a real Google form:
The executive clicks the link, and because it is a legitimate Google form, it displays the following:
There’s a request to upload the org chart here, and that’s what they try to do:
The executive drags it in, but the upload never finishes: the document carries an "internal only" watermark that our Gateway and data loss prevention (DLP) engine detected, which in turn blocked the upload.
Sophisticated attackers use urgency to drive better outcomes. Here, the attackers know the executive has an upcoming deadline for the consultant to report back to the CEO. Unable to upload the document, the executive replies to the attacker, who suggests trying another upload method or, in the worst case, sending the document over WhatsApp.
The executive attempts to upload the org chart to the website provided in the second email, not knowing that this site would have loaded malware. Because the site was loaded in Cloudflare's Browser Isolation, the executive's device stayed safe. Most importantly, when the executive tries to upload the sensitive company document, the action is stopped again:
Finally, they try WhatsApp, and again, we block it:
Ease of use
Setting up a security solution and maintaining it is critical to long-term protection. However, having IT administration teams constantly tweak each product and configuration and monitor each user's needs is not only costly but risky, as it puts a large amount of overhead on those teams.
Protecting the executive in the example above required just four steps:
1. Install and log in to Cloudflare's device agent for protection
With just a few clicks, anyone with the device agent client installed can be protected against multi-channel phishing, making protection easy for end users and administrators alike. For organizations that don't allow clients to be installed, an agentless deployment is also available.
2. Configure policies that apply to all your user traffic routed through our secure web gateway. These policies can block access outright to high risk sites, such as those known to participate in phishing campaigns. For sites that may be suspicious, such as newly registered domains, isolated browser access allows users to access the website, but limits their interaction.
The executive was also unable to upload the org chart to a free cloud storage service because their organization uses Cloudflare One's Gateway and Browser Isolation solutions, configured to load any free cloud storage website in a remote isolated environment, which not only prevented the upload but also removed the ability to copy and paste information.
Also, while the executive was able to converse with the bad actor over WhatsApp, their files were blocked because of Cloudflare One’s Gateway solution, configured by the administrator to block all uploads and downloads on WhatsApp.
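To make this concrete, here is a hedged sketch of creating such an isolation policy via the Zero Trust Gateway rules API. The rule name, the category ID placeholder, and the selector expression are illustrative assumptions; the Gateway documentation has the authoritative fields and IDs:

# Illustrative only: isolate, rather than block, newly registered domains
curl -X POST \
  https://api.cloudflare.com/client/v4/accounts/<account-id>/gateway/rules \
  -H "Authorization: Bearer <Token>" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Isolate newly registered domains",
    "action": "isolate",
    "enabled": true,
    "filters": ["http"],
    "traffic": "any(http.request.uri.content_category[*] in {<new-domains-category-id>})"
  }'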
3. Set up DLP policies based on what shouldn’t be uploaded, typed, or copied and pasted.
The executive was unable to upload the org chart to the Google form because the organization is using Cloudflare One’s Gateway and DLP solutions. This protection is implemented by configuring Gateway to block any DLP infraction, even on a valid website like Google.
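As a rough sketch under the same assumptions as above, that protection can be expressed as a Gateway rule that blocks any request matching a DLP profile. The profile ID placeholder and the dlp.profiles selector syntax are illustrative; check the DLP documentation for the exact format:

# Illustrative only: block any request that triggers a configured DLP profile,
# even on an otherwise-trusted destination like Google
curl -X POST \
  https://api.cloudflare.com/client/v4/accounts/<account-id>/gateway/rules \
  -H "Authorization: Bearer <Token>" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Block sensitive document uploads",
    "action": "block",
    "enabled": true,
    "filters": ["http"],
    "traffic": "any(dlp.profiles[*] in {\"<dlp-profile-id>\"})"
  }'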
4. Deploy Email Security and set up auto-move rules based on the types of emails detected.
In the example above, the executive never saw most of the malicious emails sent to them, because Cloudflare's Email Security was protecting their inbox. The phishing email that did arrive was moved to their Junk folder: it impersonated someone who didn't match the signature in the email, and Email Security moved it there automatically thanks to a one-click configuration set by the executive's IT administrator.
But even with best-in-class detections, it's important to be able to drill down on any metric to learn about the individual users being impacted by an ongoing attack. Below is a mockup of our upcoming improved Email Security monitoring dashboard.
What’s next
Phishing, despite being around for three decades, continues to be a clear and present danger, and effective detections in a seamless, comprehensive solution are really the only way to stay protected these days.
If you’re simply thinking about purchasing email security by itself, you can see why that just isn’t enough. Multi-layered protection is absolutely necessary to protect modern workforces, because work and data don’t just sit in email. They’re everywhere and on every device. Your phishing protection needs to be as well.
While you can do this by stitching together multiple vendors, the pieces just won't work together. And beyond the cost, a multi-vendor approach usually increases the overhead of investigation, maintenance, and keeping policies uniform for IT teams that are already stretched thin.
Whether or not you are at the start of your journey with Cloudflare, you can see how adopting different parts of the Cloudflare One product suite can help holistically with phishing. And if you are already deep in your journey with Cloudflare and are looking for 99.99% effective email detections trusted by the Fortune 500, global organizations, and even government entities, you can see how our Email Security helps.
If you’re running Office 365, and you’d like to see what we can catch that your current provider cannot, you can start right now with Retro Scan.
And if you are using our Email Security solution already, you can learn more about our comprehensive protection here.
In 2023, Cloudflare introduced a new load balancing solution supporting Local Traffic Management (LTM). This year, we took it a step further by introducing support for layer 4 load balancing to private networks via Spectrum. Now, organizations can seamlessly balance public HTTP(S), TCP, and UDP traffic to their privately hosted applications. Today, we’re thrilled to unveil our latest enhancement: support for end-to-end private traffic flows as well as WARP authenticated device traffic, eliminating the need for dedicated hardware load balancers! These groundbreaking features are powered by the enhanced integration of Cloudflare load balancing with our Cloudflare One platform, and are available to our enterprise customers. With this upgrade, our customers can now utilize Cloudflare load balancers for both public and private traffic directed at private networks.
Cloudflare Load Balancing today
Before discussing the new features, let’s review Cloudflare’s existing load balancing support and the challenges customers face.
Cloudflare currently supports four main load balancing traffic flows:
Internet-facing load balancers connecting to publicly accessible endpoints at layer 7, supporting HTTP(S)
Internet-facing load balancers connecting to publicly accessible endpoints at layer 4 (Spectrum), supporting TCP and UDP services
Internet-facing load balancers connecting to private endpoints at layer 7, supporting HTTP(S) via Cloudflare Tunnels
Internet-facing load balancers connecting to private endpoints at layer 4 (Spectrum), supporting TCP and UDP services via Cloudflare Tunnels
One of the biggest advantages of Cloudflare’s load balancing solutions is the elimination of hardware costs and maintenance. Unlike hardware-based load balancers, which are costly to purchase, license, operate, and upgrade, Cloudflare’s solution requires no hardware. There’s no need to buy additional modules or new licenses, and you won’t face end-of-life issues with equipment that necessitate costly replacements.
With Cloudflare, you can focus on innovation and growth. Load balancers are deployed in every Cloudflare data center across the globe, in over 300 cities, providing virtually unlimited scale and capacity. You never need to worry about bandwidth constraints, deployment locations, extra hardware modules, downtime, upgrades, or supply chain constraints. Cloudflare’s global Anycast network ensures that every customer connects to a nearby data center and load balancer, where policies, rules, and steering are applied efficiently. And now, the resilience, scale, and simplicity of Cloudflare load balancers can be integrated into your private networks! We have worked hard to ensure that Cloudflare load balancers are highly available and disaster ready, from the core to the edge – even when datacenters lose power.
Keeping private resources private with Magic WAN
Before today's announcement, all of Cloudflare's load balancers operating at layer 4 have been connected to the public Internet. Customers have been able to secure the traffic flowing to their load balancers with WAF rules and Zero Trust policies, but some customers would prefer to keep certain resources private and under no circumstances exposed to the Internet. Origin servers and endpoints can already be isolated this way, living on private networks that are only accessible via Cloudflare Tunnels. And as of today, we can offer a similar level of isolation for customers' layer 4 load balancers.
In our previous LTM blog post, we discussed connecting these internal or private resources to the Cloudflare global network and how Cloudflare would soon introduce load balancers that are accessible via private IP addresses. Unlike other Cloudflare load balancers, these do not have an associated hostname. Rather, they are accessible via an RFC 1918 private IP address. In the land of load balancers, this is often referred to as a virtual IP (VIP). As of today, load balancers that are accessible at private IPs can now be used within a virtual network to isolate traffic to a certain set of Cloudflare tunnels, enabling customers to load balance traffic within their private network without exposing applications to the public Internet.
The question you might be asking is, “If I have a private IP load balancer and privately hosted applications, how do I or my users actually reach these now-private services?”
Cloudflare Magic WAN can now be used as an on-ramp in tandem with Cloudflare load balancers that are accessible via an assigned private IP address. Magic WAN provides a secure and high-performance connection to internal resources, ensuring that traffic remains private and optimized across our global network. With Magic WAN, customers can connect their corporate networks directly to Cloudflare’s global network with GRE or IPSec tunnels, maintaining privacy and security while enjoying seamless connectivity. The Magic WAN Connector easily establishes connectivity to Cloudflare without the need to configure network gear, and it can be deployed at any physical or cloud location! With the enhancements to Cloudflare’s load balancing solution, customers can confidently keep their corporate applications resilient while maintaining the end-to-end privacy and security of their resources.
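To make the on-ramp concrete, here is a minimal sketch of the customer side of a GRE tunnel on a Linux router. All addresses are placeholders; Cloudflare assigns the anycast endpoint and tunnel addressing during onboarding:

# Point a GRE tunnel at Cloudflare's anycast endpoint (addresses illustrative)
ip tunnel add cf_gre mode gre local 203.0.113.10 remote 198.51.100.4 ttl 64
ip addr add 10.212.0.2/31 dev cf_gre   # tunnel interface address pair
ip link set cf_gre up mtu 1476         # leave headroom for the GRE overhead
# Send traffic destined for private load balancer VIPs through the tunnel
ip route add 10.128.0.0/16 dev cf_gre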
This enhancement opens up numerous use cases for internal load balancing, such as managing traffic between different data centers, efficiently routing traffic for internally hosted applications, optimizing resource allocation for critical applications, and ensuring high availability for internal services. Organizations can now replace traditional hardware-based load balancers, reducing complexity and lowering costs associated with maintaining physical infrastructure. By leveraging Cloudflare load balancing and Magic WAN, companies can achieve greater flexibility and scalability, adapting quickly to changing network demands without the need for additional hardware investments.
But what about latency? Load balancing is all about keeping your applications resilient and performant and Cloudflare was built with speed at its core. There is a Cloudflare datacenter within 50ms of 95% of the Internet-connected population globally! Now, we support all Cloudflare One on-ramps to not only provide seamless and secure connectivity, but also to dramatically reduce latency compared to legacy solutions. Load balancing also works seamlessly with Argo Smart Routing to intelligently route around network congestion to improve your application performance by up to 30%! Check out the blogs here and here to read more about how Cloudflare One can reduce application latency.
Supporting distributed users with Cloudflare WARP
But what about when users are distributed and not connected to the local corporate network? Cloudflare WARP can now be used as an on-ramp to reach Cloudflare load balancers that are configured with private IP addresses. The Cloudflare WARP client allows you to protect corporate devices by securely and privately sending traffic from those devices to Cloudflare’s global network, where Cloudflare Gateway can apply advanced web filtering. The WARP client also makes it possible to apply advanced Zero Trust policies that check a device’s health before it connects to corporate applications.
In this load balancing use case, WARP pairs perfectly with Cloudflare Tunnels, so customers can place their private origins within virtual networks to isolate traffic or handle overlapping private IP addresses. Once these virtual networks are defined, administrators can configure WARP profiles that allow their users to connect to the proper virtual networks. After a user connects, WARP takes the virtual network configuration and installs routes on the end user's device, telling it how to reach the Cloudflare load balancer that was created with a private, non-publicly-routable IP address. The administrator can then create a DNS record locally that points to that private IP address. When DNS resolves locally, the device routes all subsequent traffic over the WARP connection. This is all seamless to the user and occurs with minimal latency.
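As a quick sketch of what this looks like from a WARP-enrolled Linux device, assuming an illustrative internal hostname and VIP:

# The administrator created app.internal.example.com -> 10.128.0.10, where
# 10.128.0.10 is the load balancer's private VIP (both values illustrative)
dig +short app.internal.example.com
# 10.128.0.10

# With WARP connected, the device already holds a route for that private
# address, so traffic to the VIP rides the WARP tunnel to Cloudflare
ip route get 10.128.0.10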
How we connected load balancing to Cloudflare One
In contrast to public L4 or L7 load balancers, private L4 load balancers are not going to have publicly addressable hostnames or IP addresses, but we still need to be able to handle their traffic. To make this possible, we had to integrate existing load balancing services with private networking services created by our Cloudflare One team. To do this, upon creation of a private load balancer, we now assign a private IP address within the customer's virtual network. When traffic destined for a private load balancer enters Cloudflare, our private networking services make a request to load balancing to determine which endpoint to connect to. The information in the response from load balancing is used to connect directly to a privately hosted endpoint via a variety of secure traffic off-ramps. This differs significantly from our public load balancers, where traffic is off-ramped to the public Internet. In fact, we can now direct traffic from any on-ramp to any off-ramp! This allows for significant flexibility in architecture. For example, not only can we direct WARP traffic to an endpoint connected via GRE or IPSec, but we can also off-ramp this traffic to Cloudflare Tunnel, a CNI connection, or out to the public Internet! Now, instead of purchasing a bespoke load balancing solution for each traffic type, like an application or network load balancer, you can configure a single load balancing solution to handle virtually any permutation of traffic that your business needs to run!
Getting started with internal load balancing
We are excited to be releasing these new load balancing features that solve critical connectivity issues for our customers and effectively eliminate the need for a hardware load balancer. Cloudflare load balancers now support end-to-end private traffic flows with Cloudflare One. To get started with configuring this feature, take a look at our load balancing documentation.
We are just getting started with our local traffic management load balancing support. There is so much more to come including user experience changes, enhanced layer 4 session affinity, new steering methods, refined control of egress ports, and more.
We’re excited to announce that BastionZero, a Zero Trust infrastructure access platform, has joined Cloudflare. This acquisition extends our Zero Trust Network Access (ZTNA) flows with native access management for infrastructure like servers, Kubernetes clusters, and databases.
Security teams often prioritize application and Internet access because these are the primary vectors through which users interact with corporate resources and external threats infiltrate networks. Applications are typically the most visible and accessible part of an organization’s digital footprint, making them frequent targets for cyberattacks. Securing application access through methods like Single Sign-On (SSO) and Multi-Factor Authentication (MFA) can yield immediate and tangible improvements in user security.
However, infrastructure access is equally critical, and many teams still rely on castle-and-moat style network controls and local resource permissions to protect infrastructure like servers, databases, Kubernetes clusters, and more. This is difficult and fraught with risk because the security controls are fragmented across hundreds or thousands of targets. Bad actors are increasingly targeting infrastructure resources as a way to take down huge swaths of applications at once or steal sensitive data. We are excited to extend Cloudflare One's Zero Trust Network Access to natively protect infrastructure with user- and device-based policies along with multi-factor authentication.
Application vs. infrastructure access
Application access typically involves interacting with web-based or client-server applications. These applications often support modern authentication mechanisms such as Single Sign-On (SSO), which streamline user authentication and enhance security. SSO integrates with identity providers (IdPs) to offer a seamless and secure login experience, reducing the risk of password fatigue and credential theft.
Infrastructure access, on the other hand, encompasses a broader and more diverse range of systems, including servers, databases, and network devices. These systems often rely on protocols such as SSH (Secure Shell), RDP (Remote Desktop Protocol), and Kubectl (Kubernetes) for administrative access. The nature of these protocols introduces additional complexities that make securing infrastructure access more challenging.
SSH Authentication: SSH is a fundamental tool for accessing Linux and Unix-based systems. SSH access is typically facilitated through public key authentication, in which a user is issued a public/private key pair that a target system is configured to accept. These keys must be distributed to trusted users, rotated frequently, and monitored for any leakage; a key that leaks can grant a bad actor direct control over the SSH-accessible resource (see the sketch after this list).
RDP Authentication: RDP is widely used for remote access to Windows-based systems. While RDP supports various authentication methods, including password-based and certificate-based authentication, it is often targeted by brute force and credential stuffing attacks.
Kubernetes Authentication: Kubernetes, as a container orchestration platform, introduces its own set of authentication challenges. Access to Kubernetes clusters involves managing roles, service accounts, and kubeconfig files along with user certificates.
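To make the SSH key management burden concrete, here is the typical lifecycle, sketched with standard OpenSSH tooling (the file name and host are placeholders):

# Issue a key pair for a user
ssh-keygen -t ed25519 -f ~/.ssh/id_deploy -C "deploy access"
# Distribute the public key to every target the user should reach
ssh-copy-id -i ~/.ssh/id_deploy.pub admin@10.0.3.7
# Each copy must then be rotated, audited, and, if the private key ever
# leaks, revoked from the authorized_keys file of every host it reached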
Infrastructure access with Cloudflare One today
Cloudflare One facilitates Zero Trust Network Access (ZTNA) for infrastructure resources with an approach superior to traditional VPNs. An administrator can define a set of identity, device, and network-aware policies that dictate if a user can access a specific IP address, hostname, and/or port combination. This allows you to create policies like “Only users in the identity provider group ‘developers’ can access resources over port 22 (default SSH port) in our corporate network,” which is already much finer control than a VPN with basic firewall policies would allow.
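As a hedged sketch, that policy could be expressed as a Gateway network rule like the one below. The endpoint is the Zero Trust Gateway rules API; the traffic and identity expressions are illustrative assumptions, with the exact selector syntax documented in the Gateway policy reference:

# Illustrative only: block SSH to the corporate network for anyone who is
# not in the "developers" IdP group
curl -X POST \
  https://api.cloudflare.com/client/v4/accounts/<account-id>/gateway/rules \
  -H "Authorization: Bearer <Token>" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "SSH for developers only",
    "action": "block",
    "enabled": true,
    "filters": ["l4"],
    "traffic": "net.dst.port == 22 and net.dst.ip in {10.0.0.0/8}",
    "identity": "not(any(identity.groups.name[*] in {\"developers\"}))"
  }'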
However, this approach still has limitations, as it relies on a set of assumptions about how corporate infrastructure is provisioned and managed. If an infrastructure resource is configured outside of the assumed network structure (e.g., SSH is allowed over a non-standard port), all network-level controls may be bypassed. This leaves only the native authentication protections of the specific protocol protecting that resource, and it is often how leaked SSH keys or database credentials lead to a wider system outage or breach.
Many organizations will leverage more complex network structures like a bastion host model or complex Privileged Access Management (PAM) solutions as an added defense-in-depth strategy. However, this leads to significantly more cost and management overhead for IT security teams and sometimes complicates challenges related to least-privileged access. Tools like bastion hosts or PAM solutions end up eroding least-privilege over time because policies expand, change, or drift from a company’s security stance. This leads to users incorrectly retaining access to sensitive infrastructure.
How BastionZero fits in
While our goal for years has been to help organizations of any size replace their VPNs as simply and quickly as possible, BastionZero expands the scope of Cloudflare’s VPN replacement solution beyond apps and networks to provide the same level of simplicity for extending Zero Trust controls to infrastructure resources. This helps security teams centralize the management of even more of their hybrid IT environment, while using standard Zero Trust practices to keep DevOps teams productive and secure. Together, Cloudflare and BastionZero can help organizations replace not only VPNs but also bastion hosts; SSH, Kubernetes, or database key management systems; and redundant PAM solutions.
BastionZero provides native integration to major infrastructure access protocols and targets like SSH, RDP, Kubernetes, database servers, and more to ensure that a target resource is configured to accept connections for that specific user, instead of relying on network level controls. This allows administrators to think in terms of resources and targets, not IP addresses and ports. Additionally, BastionZero leverages OpenPubKey, an open source library that uses two forms of authentication to generate an OpenID Connect (OIDC) token, which avoids single point of failure risks inherent to a standalone Identity Provider.
BastionZero will add the following capabilities to Cloudflare’s SASE platform:
The elimination of long-lived keys/credentials through frictionless infrastructure privileged access management (PAM) capabilities that modernize credential management (e.g., SSH keys, kubeconfig files, database passwords) with a new ephemeral, decentralized approach.
A DevOps-based approach for securing SSH connections to support least privilege access that records sessions and logs every command for better visibility to support compliance requirements. Teams can operate in terms of auto-discovered targets, not IP addresses or networks, as they define just-in-time access policies and automate workflows.
Clientless RDP to support access to desktop environments without the overhead and hassle of installing a client on a user’s device.
What’s next for BastionZero
The BastionZero team will be focused on integrating their infrastructure access controls directly into Cloudflare One. During the third and fourth quarters of this year, we will be announcing a number of new features to facilitate Zero Trust infrastructure access via Cloudflare One. All functionality delivered this year will be included in the Cloudflare One free tier for organizations with fewer than 50 users. We believe that everyone should have access to world-class security controls.
We are looking for early beta testers and teams to provide feedback about what they would like to see with respect to infrastructure access. If you are interested in learning more, please sign up here.
Managing risk posture — how your business assesses, prioritizes, and mitigates risks — has never been easy. But as attack surfaces continue to expand rapidly, doing that job has become increasingly complex and inefficient. (One global survey found that SOC team members spend, on average, one-third of their workday on incidents that pose no threat).
But what if you could mitigate risk with less effort and less noise?
This post covers:

Why this approach helps protect more of your attack surface, while also reducing SecOps effort

Three key use cases, including enforcing Zero Trust with our expanded CrowdStrike partnership

Other new projects we're exploring based on these capabilities
Cloudflare for Unified Risk Posture
Today, we're announcing Cloudflare for Unified Risk Posture, a new suite of cybersecurity risk management capabilities that helps enterprises automate and dynamically enforce risk posture across their expanding attack surface. One unified platform enables organizations to:
Evaluate risk across people and applications: Cloudflare evaluates risk posed by people via user entity and behavior analytics (UEBA) models and risks to apps, APIs, and sites via malicious payload, zero-day threat, and bot detection models.
Enforce automated risk controls at scale: Based on these dynamic first- and third-party risk scores, Cloudflare enforces consistent risk controls for people and apps across any location around the world.
Figure 1: Unified Risk Posture Diagram
As mentioned above, this suite converges capabilities from our SASE and WAAP security portfolios onto our global network. Customers can now take advantage of built-in risk management functionality packaged as part of these existing portfolios.
This launch builds on our progressive efforts to extend first-party visibility and controls, as well as third-party integrations, that make it easier for organizations to adapt to evolving risks. For example, as part of 2024 Security Week, we announced the general availability of behavior-based user risk scoring and the beta availability of an AI-enabled assistant to help you analyze risks facing your applications. And in a fall 2023 integration, we announced that our cloud email security customers can ingest and display our threat detections within the CrowdStrike Falcon® Next-Gen SIEM dashboard.
To further manage your risk posture, you will be able to take advantage of new Cloudflare capabilities and integrations, including:
A new integration to share Cloudflare Zero Trust and email log data with the CrowdStrike Falcon Next-Gen SIEM (available now)
A new integration to share Cloudflare’s user risk score with Okta to enforce access policies (coming by the end of Q2 2024)
New first-party UEBA models, including user risk scores based on device posture checks (coming by the end of Q2 2024)
Unifying the evaluation, exchange, and enforcement stages of risk management onto Cloudflare’s platform helps security leaders mitigate risk with less effort. As a cybersecurity vendor defending both public-facing and internal infrastructure, Cloudflare is uniquely positioned to protect wide swathes of your expanding attack surface. Bringing together dynamic first-party risk scoring, flexible integrations, and automated enforcement helps drive two primary business outcomes:
Reducing effort in SecOps with less manual policy building and greater agility in responding to incidents. This means fewer clicks to build policies, more automated workflows, and lower mean times to detect (MTTD) and mean times to respond (MTTR) to incidents.
Reducing cyber risk with visibility and controls that span people and apps. This means fewer critical incidents and more threats blocked automatically.
Customers like Indeed, the #1 job site in the world, are already seeing these impacts by partnering with Cloudflare:
“Cloudflare is helping us mitigate risk more effectively with less effort and simplifies how we deliver Zero Trust across my organization.” — Anthony Moisant, SVP, Chief Information Officer and Chief Security Officer at Indeed.
Problem: Too many risks across too much attack surface
Managing risk posture is an inherently broad challenge, covering internal dangers and external threats across attack vectors. Below is just a sampling of the risk factors CISOs and their security teams track across three everyday dimensions: people, apps, and data:
People risks: phishing, social engineering, malware, ransomware, remote access, insider threats, physical access compromise, third party / supply chain, mobile devices / BYOD
App risks: denial of service, zero-day exploits, SQL injection, cross-site scripting, remote code execution, credential stuffing, account takeover, shadow IT usage, API abuse
Data risks: data loss / exposure, data theft / breach, privacy violation, compliance violation, data tampering
Point solutions emerged to lock down some of these specific risks and attack vectors. But over time, organizations have accumulated many services with a limited ability to talk to one another and build a more holistic view of risk. The granular telemetry generated by each tool has led to information overload for security staff who are often stretched thin already. Security Information and Event Management (SIEM) and Extended Detection & Response (XDR) platforms play a critical role in aggregating risk data across environments and mitigating threats based on analysis, but these tools still demand time, resources, and expertise to operate effectively. All these challenges have gotten worse as attack surfaces have expanded rapidly, as businesses embrace hybrid work, build new digital apps, and more recently, experiment with AI.
How Cloudflare helps manage risk posture
To help restore control over this complexity, Cloudflare for Unified Risk Posture provides one platform to evaluate risk, exchange indicators, and enforce dynamic controls throughout IT environments and around the world, all while complementing the security tools your business already relies on.
Although the specific risks Cloudflare can mitigate are wide-ranging (including all those in the sample bullets above), the following three use cases represent the full range of our capabilities, which you can start taking advantage of today.
Use Case #1: Enforce Zero Trust with Cloudflare & CrowdStrike
This first use case spotlights the flexibility with which Cloudflare fits into your current security ecosystem to make it easier to adopt Zero Trust best practices.
Cloudflare integrates with and ingests security signals from best-in-class EPP and IDP partners to enforce identity and device posture checks for any access request to any destination. You can even onboard multiple providers at once to enforce different policies in different contexts. For example, by integrating with CrowdStrike Falcon®, joint customers can enforce policies based on the Falcon Zero Trust Assessment (ZTA) score, which delivers continuous real-time security posture assessments across all endpoints in an organization regardless of the location, network or user. Plus, customers can then push activity logs generated by Cloudflare, including all access requests, to whichever cloud storage or analytics providers they prefer.
Today, we are announcing an expanded partnership with CrowdStrike for a new integration that enables organizations to share logs with Falcon Next-Gen SIEM for deeper analysis and further investigation. Falcon Next-Gen SIEM unifies first- and third-party data, native threat intelligence, AI, and workflow automation to drive SOC transformation and enforce better threat protection. The integration of Cloudflare Zero Trust and email logs with Falcon Next-Gen SIEM allows joint customers to identify and investigate Zero Trust networking and email risks and analyze data with other log sources to uncover hidden threats.
“CrowdStrike Falcon Next-Gen SIEM delivers up to 150x faster search performance over legacy SIEMs and products positioned as SIEM alternatives. Our transformative telemetry, paired with Cloudflare’s robust Zero Trust capabilities provides an unprecedented partnership. Together, we are converging two of the most critical pieces of the risk management puzzle that organizations of every size must address in order to combat today’s growing threats.” — Daniel Bernard, Chief Business Officer at CrowdStrike
Below is a sample workflow of how Cloudflare and CrowdStrike work together to enforce Zero Trust policies and mitigate emerging risks. Together, Cloudflare and CrowdStrike complement each other by exchanging activity and risk data and enforcing risk-based policies and remediation steps.
Figure 2: Enforce Zero Trust with Cloudflare & CrowdStrike
Phase 1: Automated investigation
Phase 2: Zero Trust enforcement
Phase 3: Remediation
Cloudflare and CrowdStrike help an organization detect that a user is compromised.
In this example, Cloudflare has recently blocked web browsing to risky websites and phishing emails, serving as the first line of defense. Those logs are then sent to CrowdStrike Falcon Next-Gen SIEM, which alerts your organization’s analyst about suspicious activity.
At the same time, CrowdStrike Falcon Insight XDR automatically scans that user’s device and detects that it is infected. As a result, the Falcon ZTA score reflecting the device’s health is lowered.
This org has set up device posture checks via Cloudflare’s Zero Trust Network Access (ZTNA), only allowing access when the Falcon ZTA risk score is above a specific threshold they have defined.
Our ZTNA denies the user’s next request to access an application because the Falcon ZTA score falls below that threshold.
Because of this failed device posture check, Cloudflare increases the risk score for that user, which places them in a group with more restrictive controls.
In parallel, CrowdStrike's Falcon Next-Gen SIEM has continued to analyze the specific user's activity and broader risks throughout the organization's environment. Using machine learning models, CrowdStrike surfaces top risks and proposes solutions for each one to your analyst.
The analyst can then review and select remediation tactics — for example, quarantining the user’s device — to further reduce risk throughout the organization.
Use Case #2: Protect apps, APIs, & websites
This next use case is focused on protecting apps, APIs, and websites from threat actors and bots. Many customers first adopt Cloudflare for this use case, but may not be aware of the risk evaluation algorithms underpinning their protection.
These risk models are trained largely on telemetry from Cloudflare’s global network, which is used as a reverse proxy by nearly 20% of all websites and sees about 3 trillion DNS queries per day. This unique real-time visibility powers threat intelligence and even enables us to detect and mitigate zero-days before others.
Unlike other vendors, Cloudflare’s network architecture enables risk evaluation models and security controls on public-facing and internal infrastructure to be shared across all of our services. This means that organizations can apply protections against app vulnerability exploits, DDoS, and bots in front of internal apps like self-hosted Jira and Confluence servers, protecting them from emerging and even zero-day threats.
Organizations can review the potential misconfigurations, data leakage risks, and vulnerabilities that impact the risk posture for their apps, APIs, and websites within Cloudflare Security Center. We are investing in this centralized view of risk posture management by integrating alerts and insights across our security portfolio. In fact, we recently announced updates focused on highlighting where gaps exist in how your organization has deployed Cloudflare services.
Finally, we are also making it easier for organizations to investigate security events directly and recently announced beta availability of Log Explorer. In this beta, security teams can view all of their HTTP traffic in one place with search, analytics dashboards, and filters built-in. These capabilities can help customers monitor more risk factors within the Cloudflare platform versus exporting to third party tools.
Use Case #3: Protect sensitive data with UEBA
This third use case summarizes one common way many customers plan to leverage our user risk / UEBA scores to prevent leaks and mishandling of sensitive data:
Phase 1: In this example, the security team has already configured data loss prevention (DLP) policies to detect and block traffic with sensitive data. These policies prevent one user’s multiple, repeated attempts to upload source code to a public GitHub repository.
Phase 2: Because this user has now violated a high number of DLP policies within a short time frame, Cloudflare scores that suspicious user as high risk, regardless of whether those uploads had malicious or benign intent. The security team can now further investigate that specific user, including reviewing all of their recent log activity.
Phase 3: For that specific high-risk user or for a group of high-risk users, administrators can then set ZTNA or even browser isolation rules to block or isolate access to applications that contain other sensitive data.
Altogether, this workflow highlights how Cloudflare’s risk posture controls adapt to suspicious behavior from evaluation through to enforcement.
How to get started with unified risk posture management
The above use cases reflect how our customers are unifying risk management with Cloudflare. Through these customer conversations, a few themes emerged for why they feel confident in our vision to help them manage risk across their expanding attack surface:
The simplicity of our unified platform: We bring together SASE and WAAP risk scoring and controls for people and apps. Plus, with a single API for all Cloudflare services, organizations can easily automate and customize workflows with infrastructure-as-code tools like Terraform.
The flexibility of our integrations: We exchange risk signals with the EPP, IDP, XDR, and SIEM providers you already use, so you can do more with your tools and data. Plus, with one-time integrations that work across all our services, you can extend controls across your IT environments with agility.
The scale of our global network: Every security service is available for customers to run in every location across our network spanning 320+ locations and 13K+ interconnects. In this way, single-pass inspection and risk policy enforcement is always fast, consistent, and resilient, delivered close to your users and apps.
To continue learning more about how Cloudflare can help you evaluate risk, exchange risk indicators, and enforce risk controls, explore more resources on our website.
In the ever-evolving domain of enterprise security, CISOs and CIOs have to tirelessly build new enterprise networks and maintain old ones to achieve performant any-to-any connectivity. For their team of network architects, surveying their own environment to keep up with changing needs is half the job. The other is often unearthing new, innovative solutions which integrate seamlessly into the existing landscape. This continuous cycle of construction and fortification in the pursuit of secure, flexible infrastructure is exactly what Cloudflare’s SASE offering, Cloudflare One, was built for.
Cloudflare One has progressively evolved based on feedback from customers and analysts. Today, we are thrilled to introduce the public availability of the Cloudflare WARP Connector, a new tool that makes bidirectional, site-to-site, and mesh-like connectivity even easier to secure, without requiring disruptive changes to existing network infrastructure.
Bridging a gap in Cloudflare’s Zero Trust story
Cloudflare’s approach has always been focused on offering a breadth of products, acknowledging that there is no one-size-fits-all solution for network connectivity. Our vision is simple: any-to-any connectivity, any way you want it.
Prior to the WARP Connector, one of the easiest ways to connect your infrastructure to Cloudflare, whether that be a local HTTP server, web services served by a Kubernetes cluster, or a private network segment, was through the Cloudflare Tunnel app connector, cloudflared. In many cases this works great, but over time customers surfaced a long tail of use cases that could not be supported by the underlying architecture of cloudflared. These include situations where customers use VOIP phones, requiring a SIP server to establish outgoing connections to users' softphones, or a CI/CD server sending notifications to relevant stakeholders at each stage of the CI/CD pipeline. Later in this blog post, we explore these use cases in detail.
As cloudflared proxies at Layer 4 of the OSI model, its design was optimized specifically to proxy requests to origin services; it was not designed to be an active listener handling requests from origin services. This design trade-off means that cloudflared needs to source NAT all requests it proxies to the application server. This setup is convenient where customers don't need to update routing tables to deploy cloudflared in front of their origin services. However, it also means that customers can't see the true source IP of the client sending the requests. This matters when a network firewall is logging all network traffic: the source IP of every request will be cloudflared's IP address, so the customer loses visibility into the true client source.
Build or borrow
To solve this problem, we identified two potential solutions: start from scratch by building a new connector, or borrow from an existing connector, likely in either cloudflared or WARP.
The following table provides an overview of the tradeoffs of the two approaches:
Bidirectional traffic flows
Build in cloudflared: Limited by the Layer 4 proxying constraints described in the earlier section.
Borrow from WARP: Proxies at Layer 3, so it can act as the default gateway for a subnet and support traffic flows in both directions.

User experience
Build in cloudflared: Cloudflare One customers would have to work with two distinct products (cloudflared and WARP) to connect their services and users.
Borrow from WARP: Cloudflare One customers only have to get familiar with a single product to connect their users as well as their networks.

Site-to-site connectivity between branches, data centers (on-premises and cloud), and headquarters
Build in cloudflared: Not recommended.
Borrow from WARP: For sites where running agents on each device is not feasible, this could easily connect those sites to users running WARP clients in other sites/branches/data centers. This works seamlessly where the underlying tunnels are all the same.

Visibility into true source IP
Build in cloudflared: Source NATs all traffic, so the true source IP is lost.
Borrow from WARP: Acts as the default gateway, so it preserves the true source IP address for any traffic flow.

High availability
Build in cloudflared: Inherently reliable by design and supports replicas for failover scenarios.
Borrow from WARP: Reliability requirements are very different for a default gateway than for an endpoint device agent, so there is opportunity to innovate here.
Introducing WARP Connector
Starting today, the introduction of the WARP Connector opens up new possibilities: server-initiated (SIP/VOIP) flows; site-to-site connectivity between branches, headquarters, and cloud platforms; and even mesh-like networking with WARP-to-WARP. Under the hood, this new connector is an extension of the WARP client that can act as a virtual router for any subnet within the network, on- and off-ramping traffic through Cloudflare.
By building on WARP, we were able to take advantage of its design, where it creates a virtual network interface on the host to logically subdivide the physical interface (NIC) for the purpose of routing IP traffic. This enables us to send bidirectional traffic through the WireGuard/MASQUE tunnel that’s maintained between the host and Cloudflare edge. By virtue of this architecture, customers also get the added benefit of visibility into the true source IP of the client.
WARP Connector can be easily deployed on the default gateway without any additional routing changes. Alternatively, static routes can be configured for specific CIDRs that need to be routed via WARP Connector, and the static routes can be configured on the default gateway or on every host in that subnet.
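For example, on a Linux host or router in the local subnet, a static route like the following (with illustrative addresses) sends a remote site's CIDR through the WARP Connector:

# 10.0.2.0/24 is the remote site; 10.0.1.5 is the host running the
# WARP Connector in the local 10.0.1.0/24 subnet (addresses illustrative)
sudo ip route add 10.0.2.0/24 via 10.0.1.5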
Private network use cases
Here we’ll walk through a couple of key reasons why you may want to deploy our new connector, but remember that this solution can support numerous services, such as Microsoft’s System Center Configuration Manager (SCCM), Active Directory server updates, VOIP and SIP traffic, and developer workflows with complex CI/CD pipeline interaction. It’s also important to note this connector can either be run alongside cloudflared and Magic WAN, or can be a standalone remote access and site-to-site connector to the Cloudflare Global network.
Softphone and VOIP servers
For users to establish a voice or video call over a VOIP software service, typically a SIP server within the private network brokers the connection using the last known IP address of the end-user. However, if traffic is proxied anywhere along the path, this often results in participants only receiving partial voice or data signals. With the WARP Connector, customers can now apply granular policies to these services for secure access, fortifying VOIP infrastructure within their Zero Trust framework.
Securing access to CI/CD pipeline
An organization's DevOps ecosystem is generally built out of many parts, but a CI/CD server such as Jenkins or TeamCity is the epicenter of all development activity. Hence, securing that CI/CD server is critical. With the WARP Connector and WARP client, organizations can secure the entire CI/CD pipeline and streamline it easily.
Let’s look at a typical CI/CD pipeline for a Kubernetes application. The environment is set up as depicted in the diagram above, with WARP clients on the developer and QA laptops and a WARP Connector securely connecting the CI/CD server and staging servers on different networks:
Typically, the CI/CD pipeline is triggered when a developer commits their code change, invoking a webhook on the CI/CD server.
Once the images are built, it’s time to deploy the code, which is typically done in stages: test, staging and production.
Notifications are sent to the developer and QA engineer to notify them when the images are ready in the test/staging environments.
QA engineers receive the notifications via webhook from the CI/CD servers to kick-start their monitoring and troubleshooting workflow.
With WARP Connector, customers can easily connect their developers to the tools in the DevOps ecosystem by keeping the ecosystem private and not exposing it to the public. Once the DevOps ecosystem is securely connected to Cloudflare, granular security policies can be easily applied to secure access to the CI/CD pipeline.
True source IP address preservation
Organizations running Microsoft AD Servers or non-web application servers often need to identify the true source IP address for auditing or policy application. If these requirements exist, WARP Connector simplifies this, offering solutions without adding NAT boundaries. This can be useful to rate-limit unhealthy source IP addresses, for ACL-based policies within the perimeter, or to collect additional diagnostics from end-users.
Getting started with WARP Connector
As part of this launch, we’re making some changes to the Cloudflare One Dashboard to better highlight our different network on/off ramp options. As of today, a new “Network” tab will appear on your dashboard. This will be the new home for the Cloudflare Tunnel UI.
We are also introducing the new "Routes" tab next to "Tunnels". This page presents an organizational view of a customer's virtual networks, Cloudflare Tunnels, and the routes associated with them. It helps answer customers' questions about their network configurations, such as: "Which Cloudflare Tunnel has the route to my host 192.168.1.2?", "If a route for CIDR 192.168.2.1/28 exists, how can it be accessed?", or "What are the overlapping CIDRs in my environment, and which VNETs do they belong to?" This is extremely useful for customers with very complex enterprise networks who use the Cloudflare dashboard to troubleshoot connectivity issues.
Embarking on your WARP Connector journey is straightforward. The WARP Connector is currently deployable on Linux hosts: select "create a Tunnel" in the dashboard and pick either cloudflared or WARP to deploy. Follow our developer documentation to get started in a few easy steps. In the near future, we will be adding support for more platforms where WARP Connectors can be deployed.
What’s next?
Thank you to all of our private beta customers for their invaluable feedback. Moving forward, our immediate focus in the coming quarters is on simplifying deployment, mirroring that of cloudflared, and enhancing high availability through redundancy and failover mechanisms.
Stay tuned for more updates as we continue our journey in innovating and enhancing the Cloudflare One platform. We’re excited to see how our customers leverage WARP Connector to transform their connectivity and security landscape.
Since the discovery of CRIME, BREACH, TIME, LUCKY-13, and similar attacks, length-based side-channel attacks have been considered practical. Even though packets are encrypted, attackers can infer information about the underlying plaintext by analyzing metadata like packet length or timing information.
Cloudflare was recently contacted by a group of researchers at Ben Gurion University who wrote a paper titled “What Was Your Prompt? A Remote Keylogging Attack on AI Assistants” that describes “a novel side-channel that can be used to read encrypted responses from AI Assistants over the web”. The Workers AI and AI Gateway team collaborated closely with these security researchers through our Public Bug Bounty program, discovering and fully patching a vulnerability that affects LLM providers. You can read the detailed research paper here.
Since being notified about this vulnerability, we’ve implemented a mitigation to help secure all Workers AI and AI Gateway customers. As far as we could assess, there was no outstanding risk to Workers AI and AI Gateway customers.
How does the side-channel attack work?
In the paper, the authors describe a method in which they intercept the stream of a chat session with an LLM provider, use the network packet headers to infer the length of each token, extract and segment their sequence, and then use their own dedicated LLMs to infer the response.
The two main requirements for a successful attack are an AI chat client running in streaming mode and a malicious actor capable of capturing network traffic between the client and the AI chat service. In streaming mode, the LLM tokens are emitted sequentially, introducing a token-length side-channel. Malicious actors could eavesdrop on packets via public networks or within an ISP.
An example request vulnerable to the side-channel attack looks like this:
curl -X POST \
https://api.cloudflare.com/client/v4/accounts/<account-id>/ai/run/@cf/meta/llama-2-7b-chat-int8 \
-H "Authorization: Bearer <Token>" \
-d '{"stream":true,"prompt":"tell me something about portugal"}'
Let’s use Wireshark to inspect the network packets on the LLM chat session while streaming:
The first packet has a length of 95 and corresponds to the token “Port” which has a length of four. The second packet has a length of 93 and corresponds to the token “ug” which has a length of two, and so on. By removing the likely token envelope from the network packet length, it is easy to infer how many tokens were transmitted and their sequence and individual length just by sniffing encrypted network data.
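As a rough sketch of that measurement step (not the researchers' actual tooling), the packet sizes can be pulled from a capture with standard tools, and no decryption is involved:

# List the sizes of encrypted TLS application data records in a captured
# streaming session (capture file name illustrative)
tshark -r llm-session.pcap -Y "tls.app_data" -T fields -e frame.len
# 95
# 93
# ...
# Subtracting the roughly constant envelope overhead (about 91 bytes in the
# example above) recovers per-token lengths: 95 - 91 = 4 ("Port"), 93 - 91 = 2 ("ug")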
Since the attacker needs the sequence of individual token lengths, this vulnerability only affects text generation models using streaming. This means that AI inference providers that use streaming (the most common way of interacting with LLMs), like Workers AI, are potentially vulnerable.
This method requires that the attacker is on the same network, or otherwise in a position to observe the communication traffic, and its accuracy depends on knowing the target LLM's writing style. In ideal conditions, the researchers claim that their system "can reconstruct 29% of an AI assistant's responses and successfully infer the topic from 55% of them". It's also important to note that, unlike other side-channel attacks, in this case the attacker has no way of evaluating its prediction against the ground truth. That means we are as likely to get a sentence with near-perfect accuracy as one where the only words that match are conjunctions.
Mitigating LLM side-channel attacks
Since this type of attack relies on inferring token lengths from packet sizes, it can be mitigated just as easily by obscuring token size. The researchers suggested a few strategies to mitigate these side-channel attacks; the simplest is padding the token responses with random-length noise to obscure each token's length, so that responses cannot be inferred from the packets. While we immediately added the mitigation to our own inference product, Workers AI, we wanted to help customers secure their LLMs regardless of where they are running them by adding it to our AI Gateway.
As of today, all users of Workers AI and AI Gateway are now automatically protected from this side-channel attack.
What we did
Once we got word of this research work and how exploiting the technique could potentially impact our AI products, we did what we always do in situations like this: we assembled a team of systems engineers, security engineers, and product managers and started discussing risk mitigation strategies and next steps. We also had a call with the researchers, who kindly attended, presented their conclusions, and answered questions from our teams.
Unfortunately, at this point, this research does not include actual code that we can use to reproduce the claims or verify the effectiveness and accuracy of the described side-channel attack. However, we think that the paper has theoretical merit, that it provides enough detail and explanation, and that the risks are not negligible.
We decided to incorporate the first mitigation suggested in the paper: adding random padding to each message to hide the actual length of tokens in the stream, thereby complicating attempts to infer information based solely on network packet size.
Workers AI, our inference product, is now protected
With our inference-as-a-service product, anyone can use the Workers AI platform and make API calls to our supported AI models. This means that we oversee the inference requests being made to and from the models. As such, we have a responsibility to ensure that the service is secure and protected from potential vulnerabilities. We rolled out a fix as soon as we were notified of the research, and all Workers AI customers are now automatically protected from this side-channel attack. We have seen no malicious attacks exploiting this vulnerability; the only known exploitation is the ethical testing by the researchers.
Our solution for Workers AI is a variation of the mitigation strategy suggested in the research document. Since we stream JSON objects rather than raw tokens, instead of padding the tokens with whitespace characters we added a new property, “p” (for padding), whose string value has a variable, random length.
This has the advantage that no modifications are required in the SDK or the client code, the changes are invisible to end users, and no action is required from our customers. By adding a random, variable-length property to the JSON objects, we introduce the same network-level variability, and the attacker essentially loses the required input signal. Customers can continue using Workers AI as usual while benefiting from this protection.
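A quick illustrative sketch of the idea in TypeScript (not our production code): append a random-length “p” property to each streamed JSON object so that the encrypted message size no longer tracks the token length. The 1-32 character padding range here is an assumption for illustration.

function padChunk(chunk: { response: string }): string {
  // Random 1-32 character padding; the exact bounds are illustrative.
  const pad = "x".repeat(Math.floor(Math.random() * 32) + 1);
  // Clients ignore the extra "p" property, so no SDK or client changes are needed.
  return JSON.stringify({ ...chunk, p: pad });
}

console.log(padChunk({ response: "Port" }));
// e.g. {"response":"Port","p":"xxxxxxxxxxx"} (same token, unpredictable size)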
One step further: AI Gateway protects users of any inference provider
We added protection to our AI inference product, but we also have a product that proxies requests to any provider — AI Gateway. AI Gateway acts as a proxy between a user and supported inference providers, helping developers gain control, performance, and observability over their AI applications. In line with our mission to help build a better Internet, we wanted to quickly roll out a fix that can help all our customers using text generation AIs, regardless of which provider they use or if they have mitigations to prevent this attack. To do this, we implemented a similar solution that pads all streaming responses proxied through AI Gateway with random noise of variable length.
Our AI Gateway customers are now automatically protected against this side-channel attack, even if the upstream inference providers have not yet mitigated the vulnerability. If you are unsure if your inference provider has patched this vulnerability yet, use AI Gateway to proxy your requests and ensure that you are protected.
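If you currently call a provider directly, switching to AI Gateway is typically just a change of base URL. The sketch below follows the URL pattern from the AI Gateway documentation; the account ID, gateway name, token, and model are placeholders:

// Route an OpenAI streaming request through AI Gateway so the gateway's
// response padding applies, instead of calling the provider directly.
const response = await fetch(
  "https://gateway.ai.cloudflare.com/v1/<account-id>/<gateway-name>/openai/chat/completions",
  {
    method: "POST",
    headers: {
      Authorization: "Bearer <OpenAI-Token>",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      stream: true,
      messages: [{ role: "user", content: "tell me something about portugal" }],
    }),
  }
);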
Conclusion
At Cloudflare, our mission is to help build a better Internet – that means that we care about all citizens of the Internet, regardless of what their tech stack looks like. We are proud to be able to improve the security of our AI products in a way that is transparent and requires no action from our customers.
We are grateful to the researchers who discovered this vulnerability and have been very collaborative in helping us understand the problem space. If you are a security researcher who is interested in helping us make our products more secure, check out our Bug Bounty program at hackerone.com/cloudflare.
Today we are excited to announce Magic Cloud Networking, supercharged by Cloudflare’s recent acquisition of Nefeli Networks’ innovative technology. These new capabilities to visualize and automate cloud networks will give our customers secure, easy, and seamless connection to public cloud environments.
Public clouds offer organizations a scalable and on-demand IT infrastructure without the overhead and expense of running their own datacenter. Cloud networking is foundational to applications that have been migrated to the cloud, but is difficult to manage without automation software, especially when operating at scale across multiple cloud accounts. Magic Cloud Networking uses familiar concepts to provide a single interface that controls and unifies multiple cloud providers’ native network capabilities to create reliable, cost-effective, and secure cloud networks.
Nefeli’s approach to multi-cloud networking solves the problem of building and operating end-to-end networks within and across public clouds, allowing organizations to securely leverage applications spanning any combination of internal and external resources. Adding Nefeli’s technology will make it easier than ever for our customers to connect and protect their users, private networks and applications.
Why is cloud networking difficult?
Compared with a traditional on-premises data center network, cloud networking promises simplicity:
Much of the complexity of physical networking is abstracted away from users because the physical and Ethernet layers are not part of the network service exposed by the cloud provider.
There are fewer control plane protocols; instead, the cloud providers deliver a simplified software-defined network (SDN) that is fully programmable via API.
There is capacity — from zero up to very large — available instantly and on demand, charged only for what you use.
However, that promise has not yet been fully realized. Our customers have described several reasons cloud networking is difficult:
Poor end-to-end visibility: Cloud network visibility tools are difficult to use, and even within a single cloud provider there are silos that impede end-to-end monitoring and troubleshooting.
Faster pace: Traditional IT management approaches clash with the promise of the cloud: instant deployment available on-demand. Familiar ClickOps and CLI-driven procedures must be replaced by automation to meet the needs of the business.
Different technology: Established network architectures in on-premises environments do not seamlessly transition to a public cloud. The Ethernet layer and advanced control plane protocols that were critical in many network designs are missing in the cloud.
New cost models: The dynamic, pay-as-you-go, usage-based cost models of the public clouds are not compatible with established approaches built around fixed-cost circuits and five-year depreciation. Network solutions are often architected around financial constraints, and accordingly, different architectural approaches are sensible in the cloud.
New security risks: Securing public clouds with true zero trust and least-privilege demands mature operating processes and automation, and familiarity with cloud-specific policies and IAM controls.
Multi-vendor: Oftentimes enterprise networks have used single-vendor sourcing to facilitate interoperability, operational efficiency, and targeted hiring and training. Operating a network that extends beyond a single cloud, into other clouds or on-premises environments, is a multi-vendor scenario.
Nefeli considered all these problems and the tensions between different customer perspectives to identify where the problem should be solved.
Trains, planes, and automation
Consider a train system. To operate effectively it has three key layers:
tracks and trains
electronic signals
a company to manage the system and sell tickets.
A train system with good tracks, trains, and signals could still be operating below its full potential because its agents are unable to keep up with passenger demand. The result is that passengers cannot plan itineraries or purchase tickets.
The train company eliminates bottlenecks in process flow by simplifying the schedules, simplifying the pricing, providing agents with better booking systems, and installing automated ticket machines. Now the same fast and reliable infrastructure of tracks, trains, and signals can be used to its full potential.
Solve the right problem
In networking, there are an analogous set of three layers, called the networking planes:
Data Plane: the network paths that transport data (in the form of packets) from source to destination.
Control Plane: protocols and logic that change how packets are steered across the data plane.
Management Plane: the configuration and monitoring interfaces for the data plane and control plane.
In public cloud networks, these layers map to:
Cloud Data Plane: The underlying cables and devices are exposed to users as the Virtual Private Cloud (VPC) or Virtual Network (VNet) service that includes subnets, routing tables, security groups/ACLs and additional services such as load-balancers and VPN gateways.
Cloud Control Plane: In place of distributed protocols, the cloud control plane is a software defined network (SDN) that, for example, programs static route tables. (There is limited use of traditional control plane protocols, such as BGP to interface with external networks and ARP to interface with VMs.)
Cloud Management Plane: An administrative interface with a UI and API which allows the admin to fully configure the data and control planes. It also provides a variety of monitoring and logging capabilities that can be enabled and integrated with 3rd party systems.
Like our train example, most of the problems that our customers experience with cloud networking are in the third layer: the management plane.
Nefeli simplifies, unifies, and automates cloud network management and operations.
Avoid cost and complexity
One common approach to tackle management problems in cloud networks is introducing Virtual Network Functions (VNFs), which are virtual machines (VMs) that do packet forwarding, in place of native cloud data plane constructs. Some VNFs are routers, firewalls, or load-balancers ported from a traditional network vendor’s hardware appliances, while others are software-based proxies often built on open-source projects like NGINX or Envoy. Because VNFs mimic their physical counterparts, IT teams could continue using familiar management tooling, but VNFs have downsides:
VMs do not have custom network silicon and so instead rely on raw compute power. The VM is sized for the peak anticipated load and then typically runs 24x7x365. This drives a high cost of compute regardless of the actual utilization.
High-availability (HA) relies on fragile, costly, and complex network configuration.
Service insertion — the configuration to put a VNF into the packet flow — often forces packet paths that incur additional bandwidth charges.
VNFs are typically licensed similarly to their on-premises counterparts and are expensive.
VNFs lock in the enterprise and can exclude it from benefiting from improvements in the cloud’s native data plane offerings.
For these reasons, enterprises are turning away from VNF-based solutions and increasingly looking to rely on the native network capabilities of their cloud service providers. The built-in public cloud networking is elastic, performant, robust, and priced on usage, with high-availability options integrated and backed by the cloud provider’s service level agreement.
In our train example, the tracks and trains are good. Likewise, the cloud network data plane is highly capable. Changing the data plane to solve management plane problems is the wrong approach. To make this work at scale, organizations need a solution that works together with the native network capabilities of cloud service providers.
Nefeli leverages native cloud data plane constructs rather than third party VNFs.
Introducing Magic Cloud Networking
The Nefeli team has joined Cloudflare to integrate cloud network management functionality with Cloudflare One. This capability is called Magic Cloud Networking and with it, enterprises can use the Cloudflare dashboard and API to manage their public cloud networks and connect with Cloudflare One.
End-to-end
Just as train providers are focused only on completing train journeys in their own network, cloud service providers deliver network connectivity and tools within a single cloud account. Many large enterprises have hundreds of cloud accounts across multiple cloud providers. In an end-to-end network this creates disconnected networking silos which introduce operational inefficiencies and risk.
Imagine you are trying to organize a train journey across Europe, and no single train company serves both your origin and destination. You know they all offer the same basic service: a seat on a train. However, your trip is difficult to arrange because it involves multiple trains operated by different companies with their own schedules and ticketing rates, all in different languages!
Magic Cloud Networking is like an online travel agent that aggregates multiple transportation options, books multiple tickets, facilitates changes after booking, and then delivers travel status updates.
Through the Cloudflare dashboard, you can discover all of your network resources across accounts and cloud providers and visualize your end-to-end network in a single interface. Once Magic Cloud Networking discovers your networks, you can build a scalable network through a fully automated and simple workflow.
Taming per-cloud complexity
Public clouds are used to deliver applications and services. Each cloud provider offers a composable stack of modular building blocks (resources) that start with the foundation of a billing account and then add on security controls. The next foundational layer, for server-based applications, is VPC networking. Additional resources are built on the VPC network foundation until you have compute, storage, and network infrastructure to host the enterprise application and data. Even relatively simple architectures can be composed of hundreds of resources.
The trouble is that these resources expose abstractions that differ from the building blocks you would use to build a service on-prem, the abstractions differ between cloud providers, and they form a web of dependencies with complex rules about how configuration changes are made (rules which differ between resource types and cloud providers). For example, say I create 100 VMs and connect them to an IP network. Can I make changes to the IP network while the VMs are using it? The answer: it depends.
Magic Cloud Networking handles these differences and complexities for you. It configures native cloud constructs such as VPN gateways, routes, and security groups to securely connect your cloud VPC network to Cloudflare One without having to learn each cloud’s incantations for creating VPN connections and hubs.
Continuous, coordinated automation
Returning to our train system example, what if the railway maintenance staff find a dangerous fault on the railroad track? They manually set the signal to a stop light to prevent any oncoming trains from using the faulty section of track. Then what if, by unfortunate coincidence, the scheduling office changes the signal schedule and remotely resets the signals, clearing the safety measure put in place by the maintenance crew? Now there is a problem that no one knows about, and the root cause is that multiple authorities can change the signals via different interfaces without coordination.
The same problem exists in cloud networks: configuration changes are made by different teams using different automation and configuration interfaces across a spectrum of roles such as billing, support, security, networking, firewalls, database, and application development.
Once your network is deployed, Magic Cloud Networking monitors its configuration and health, enabling you to be confident that the security and connectivity you put in place yesterday is still in place today. It tracks the cloud resources it is responsible for, automatically reverting drift if they are changed out-of-band, while allowing you to manage other resources, like storage buckets and application servers, with other automation tools. And, as you change your network, Cloudflare takes care of route management, injecting and withdrawing routes globally across Cloudflare and all connected cloud provider networks.
Magic Cloud Networking is fully programmable via API, and can be integrated into existing automation toolchains.
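As a generic illustration of the drift-reversion idea described above (this is a conceptual sketch, not Cloudflare’s implementation), a reconciliation loop compares the desired configuration against what the provider currently reports and re-applies anything changed out-of-band. An in-memory map stands in for the cloud provider’s API:

// Desired vs. observed state; in practice both sides come from provider APIs.
type Config = Record<string, string>;

const desiredState: Config = { "vpc-route-1": "10.0.0.0/8 via cloudflare" };
const observedState: Config = { "vpc-route-1": "10.0.0.0/8 via vnf-appliance" }; // out-of-band change

function reconcile(desired: Config, observed: Config): void {
  for (const [resource, wanted] of Object.entries(desired)) {
    if (observed[resource] !== wanted) {
      console.log(`drift detected on ${resource}; reverting`);
      observed[resource] = wanted; // in practice: re-apply config via the provider API
    }
  }
}

reconcile(desiredState, observedState);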
Ready to start conquering cloud networking?
We are thrilled to introduce Magic Cloud Networking as another pivotal step to fulfilling the promise of the Connectivity Cloud. This marks our initial stride in empowering customers to seamlessly integrate Cloudflare with their public clouds to get securely connected, stay securely connected, and gain flexibility and cost savings as they go.
Join us on this journey for early access: learn more and sign up here.
We understand that your VeloCloud deployment may be partial or even complete. You may be experiencing discomfort from SASE anxiety. Symptoms include:
Irregular priorities and strategies – VMware has reorganized its various networking and security products into different business units several times, and it’s now about to embark on yet another reorganization as Broadcom pursues single-vendor SASE.
If you’re a VeloCloud customer, we are here to help you with your transition to Magic WAN, with planning, products, and services. You’ve experienced the turbulence, and that’s why we are taking steps to help. First, we illustrate what’s fundamentally wrong with the architecture-by-acquisition model in order to define the right path forward. Second, we document the steps involved in making a transition from VeloCloud to Cloudflare. Third, we offer a helping hand to get VeloCloud customers’ SASE strategies back on track.
Architecture is the key to SASE
Your IT organization must deliver stability across your information systems, because the future of your business depends on the decisions that you make today. You need to make sure that your SASE journey is backed by vendors that you can depend on. Indecisive vendors and unclear strategies rarely inspire confidence, and that is driving organizations to reconsider these relationships.
It’s not just VeloCloud that’s pivoting. Many vendors are chasing the brass ring to meet the requirement for Single Vendor SASE, and they’re trying to reduce their time to market by acquiring features on their checklist rather than taking the time to build the right architecture for consistent management and user experience. It’s led to rapid consolidation of both startups and larger product stacks, but now we’re seeing many instances of vendors having to rationalize their overlapping product lines. Strange days indeed.
But the thing is, Single Vendor SASE is not a feature checklist game. It’s not like shopping for PC antivirus software, where the most attractive option was the one with the most checkboxes. It doesn’t matter if you acquire a large stack of product acronyms (ZTNA, SD-WAN, SWG, CASB, DLP, and FWaaS, to name but a few) if the result is just as convoluted as the technology it aims to replace.
If organizations are new to SASE, it can be difficult to know what to look for. However, one clear sign of trouble is taking an SSE designed by one vendor and combining it with SD-WAN from another, because you can’t get a converged platform out of two fundamentally incongruent technologies.
Why SASE Math Doesn’t Work
The conceptual model for SASE typically illustrates two half circles, with one consisting of cloud-delivered networking and the other being cloud-delivered security. With this picture in mind, it’s easy to see how one might think that combining an implementation of cloud-delivered networking (VeloCloud SD-WAN) and an implementation of cloud-delivered security (Symantec Network Protection – SSE) might satisfy the requirements. Does Single Vendor SASE = SD-WAN + SSE?
In practice, networking and network security do not exist in separate universes, but SD-WAN and SSE implementations do, especially when they were designed by different vendors. That’s why the math doesn’t work: even with the requisite SASE functionality, the implementations don’t fit together. SD-WAN is designed for network connectivity between sites over the SD-WAN fabric, whereas SSE largely focuses on enforcing security policy for user-to-application traffic from remote users, or for traffic leaving (rather than traversing) the SD-WAN fabric. Bringing these two worlds together therefore yields inconsistent security, proxy chains that add latency, or security implemented at the edge rather than in the cloud.
Why Cloudflare is different
At Cloudflare, our approach to single vendor SASE starts from building a global network designed with private data centers, overprovisioned network and compute capacity, and a private backbone built to deliver our customers’ traffic to any destination. It’s what we call any-to-any connectivity. We don’t use the public cloud for SASE services, because the public cloud was designed as a destination for traffic rather than optimized for transit. We are in full control of the design of our data centers and network, and we’re obsessed with making them even better every day.
It’s from this network that we deliver networking and security services. Conceptually, we implement a philosophy of composability, where the fundamental network connection between the customer’s site and the Cloudflare data center remains the same across different use cases. In practice, and unlike traditional approaches, it means no downtime for service insertion when you need more functionality — the connection to Cloudflare remains the same. It’s the services and the onboarding of additional destinations that changes as organizations expand their use of Cloudflare.
From the perspective of branch connectivity, use Magic WAN for the connectivity that ties your business together, no matter which way traffic passes. That’s because we don’t treat the directions of your network traffic as independent problems. We solve for consistency by on-ramping all traffic through one of Cloudflare’s 310+ anycasted data centers (whether inbound, outbound, or east-west) for enforcement of security policy. We solve for latency by providing full compute services in every data center, eliminating the need to forward traffic to a separate compute location. We implement SASE using a light edge / heavy cloud model, with services delivered within the Cloudflare connectivity cloud rather than on-prem.
How to transition from VeloCloud to Cloudflare
Start by contacting us to get a consultation session with our solutions architecture team. Our architects specialize in network modernization and can map your SASE goals across a series of smaller projects. We’ve worked with hundreds of organizations to achieve their SASE goals with the Cloudflare connectivity cloud and can build a plan that your team can execute on.
For product education, join one of our product workshops on Magic WAN to get a deep dive into how it’s built and how it can be rolled out to your locations. Magic WAN uses a light edge, heavy cloud model with multiple network insertion options (a tunnel from an existing device, our turnkey Magic WAN Connector, or a virtual appliance) that can work in parallel with or as a replacement for your branch connectivity, allowing you to migrate at your own pace. Our specialist teams can help you mitigate transitional hardware and license costs as you phase out VeloCloud and accelerate your rollout of Magic WAN.
The Magic WAN technical engineers have a number of resources to help you build product knowledge as well. These include reference architectures and quick start guides that address your organization’s connectivity goals, whether that means sizing down your on-prem network in favor of the emerging “coffee shop networking” philosophy, retiring legacy SD-WAN, or fully replacing conventional MPLS.
For services, our customer success teams are ready to support your transition, with services that are tailored specifically for Magic WAN migrations both large and small.
Your next move
Interested in learning more? Contact us to get started with your SASE journey, and we’ll show you how to replace VeloCloud with Cloudflare Magic WAN and use our network as an extension of yours.
We are excited to announce two enhancements to Cloudflare’s Data Loss Prevention (DLP) service: support for Optical Character Recognition (OCR) and predefined source code detections. These two highly requested DLP features make it easier for organizations to protect their sensitive data with granularity and reduce the risks of breaches, regulatory non-compliance, and reputational damage:
With OCR, customers can efficiently identify and classify sensitive information contained within images or scanned documents.
With predefined source code detections, organizations can scan inline traffic for common code languages and block those HTTP requests to prevent data leaks, as well as detect the storage of code in repositories such as Google Drive.
OCR enables the extraction of text from images. Unlike the images themselves, the extracted text is readable data that can be easily edited, searched, or analyzed.
Sensitive data regularly appears in image files. For example, employees are often asked to provide images of identification cards, passports, or documents as proof of identity or work status. Those images can contain a plethora of sensitive and regulated classes of data, including Personally Identifiable Information (PII) — for example, passport numbers, driver’s license numbers, birthdates, tax identification numbers, and much more.
OCR can be leveraged within DLP policies to prevent the unauthorized sharing or leakage of sensitive information contained within images. Policies can detect when sensitive text content is being uploaded to cloud storage or shared through other communication channels, and block the transaction to prevent data loss. This assists in enforcing compliance with regulatory requirements related to data protection and privacy.
About source code detection
Source code fuels digital business and contains high-value intellectual property, including proprietary algorithms and encrypted secrets about a company’s infrastructure. Source code has been and will continue to be a target for theft by external attackers, but customers are also increasingly concerned about the inadvertent exposure of this information by internal users. For example, developers may accidentally upload source code to a publicly available GitHub repository or to generative AI tools like ChatGPT. While these tools have their place (like using AI to help with debugging), security teams want greater visibility and more precise control over what data flows to and from these tools.
To help customers, Cloudflare now offers predefined DLP profiles for common code languages — specifically C, C++, C#, Go, Haskell, Java, JavaScript, Lua, Python, R, Rust, and Swift. These machine-learning-based detections are trained on public code repositories, ensuring they remain up to date. Cloudflare’s DLP inspects the HTTP body of requests for these DLP profiles, and security teams can block matching traffic to prevent data leaks.
How to use these capabilities
Cloudflare offers you flexibility to determine what data you are interested in detecting via DLP policies. You can use predefined profiles created by Cloudflare for common types of sensitive or regulated data (e.g. credentials, financial data, health data, identifiers), or you can create your own custom detections.
To implement inline blocking of source code, simply select the DLP profiles for the languages you want to detect. For example, if my organization uses Rust, Go, and JavaScript, I would turn on those detections:
I would then create a blocking policy via our secure web gateway to prevent traffic containing source code. Here, we block source code from being uploaded to ChatGPT:
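For teams that manage policies as code, a roughly equivalent rule could be created through the Gateway rules API. This is a hedged sketch: the DLP profile ID and hostname are placeholders, and the field names and expression syntax should be verified against the current API documentation before use.

// Sketch: create a Gateway HTTP policy that blocks uploads matching a
// source-code DLP profile. Profile UUID and host are illustrative.
await fetch("https://api.cloudflare.com/client/v4/accounts/<account-id>/gateway/rules", {
  method: "POST",
  headers: {
    Authorization: "Bearer <Token>",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    name: "Block source code uploads to ChatGPT",
    action: "block",
    filters: ["http"],
    traffic:
      'http.request.host == "chat.openai.com" and any(dlp.profiles[*] in {"<source-code-profile-uuid>"})',
  }),
});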
Adding OCR to any detection is similarly easy. Below is a profile looking for sensitive data that could be stored in scanned documents.
With the detections selected, simply enable the OCR toggle, and wherever you are applying DLP inspections, images in your content will be scanned for sensitive data. The detections work the same in images as they do in text, including Match Counts and Context Analysis, so no additional logic or settings are needed.
Consistency across use cases is a core principle of our DLP solution, so as always, this feature is available for both data at rest, available via CASB, and data in transit, available via Gateway.
Generative AI has captured the imagination of the world by being able to produce poetry, screenplays, or imagery. These tools can be used to improve human productivity for good causes, but they can also be employed by malicious actors to carry out sophisticated attacks.
We are witnessing phishing attacks and social engineering becoming more sophisticated as attackers tap into powerful new tools to generate credible content or interact with humans as if they were a real person. Attackers can use AI to build boutique tooling made for attacking specific sites with the intent of harvesting proprietary data and taking over user accounts.
To protect against these new challenges, we need new and more sophisticated security tools: this is how Defensive AI was born. Defensive AI is the framework Cloudflare uses when thinking about how intelligent systems can improve the effectiveness of our security solutions. The key to Defensive AI is data generated by Cloudflare’s vast network, whether generally across our entire network or specific to individual customer traffic.
At Cloudflare, we use AI to increase the level of protection across all security areas, ranging from application security to email security and our Zero Trust platform. This includes creating customized protection for every customer for API or email security, or using our huge amount of attack data to train models to detect application attacks that haven’t been discovered yet.
In the following sections, we will provide examples of how we designed the latest generation of security products that leverage AI to secure against AI-powered attacks.
Protecting APIs with anomaly detection
APIs power the modern Web, comprising 57% of dynamic traffic across the Cloudflare network, up from 52% in 2021. While APIs aren’t a new technology, securing them differs from securing a traditional web application. Because APIs offer easy programmatic access by design and are growing in popularity, fraudsters and threat actors have pivoted to targeting APIs. Security teams must now counter this rising threat. Importantly, each API is usually unique in its purpose and usage, and therefore securing APIs can take an inordinate amount of time.
Cloudflare is announcing the development of API Anomaly Detection for API Gateway to protect APIs from attacks designed to damage applications, take over accounts, or exfiltrate data. API Gateway provides a layer of protection between your hosted APIs and every device that interfaces with them, giving you the visibility, control, and security tools you need to manage your APIs.
API Anomaly Detection is an upcoming, ML-powered feature in our API Gateway product suite and a natural successor to Sequence Analytics. In order to protect APIs at scale, API Anomaly Detection learns an application’s business logic by analyzing client API request sequences. It then builds a model of what a sequence of expected requests looks like for that application. The resulting traffic model is used to identify attacks that deviate from the expected client behavior. As a result, API Gateway can use its Sequence Mitigation functionality to enforce the learned model of the application’s intended business logic, stopping attacks.
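As a deliberately simplified illustration of the idea (Cloudflare’s production model is far more sophisticated), one could count the endpoint-to-endpoint transitions observed in legitimate sessions and flag any sequence containing a transition that was never seen during learning:

// Learn which endpoint-to-endpoint transitions legitimate clients perform.
function learnTransitions(sessions: string[][]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const session of sessions) {
    for (let i = 0; i < session.length - 1; i++) {
      const key = `${session[i]} -> ${session[i + 1]}`;
      counts.set(key, (counts.get(key) ?? 0) + 1);
    }
  }
  return counts;
}

// Flag sequences that contain a transition never observed during learning.
function isAnomalous(session: string[], model: Map<string, number>): boolean {
  for (let i = 0; i < session.length - 1; i++) {
    if (!model.has(`${session[i]} -> ${session[i + 1]}`)) return true;
  }
  return false;
}

const model = learnTransitions([
  ["GET /login", "GET /account", "POST /order"],
  ["GET /login", "GET /catalog", "POST /order"],
]);
// "GET /catalog -> GET /account" was never seen in legitimate sessions.
console.log(isAnomalous(["GET /catalog", "GET /account"], model)); // true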
While we’re still developing API Anomaly Detection, API Gateway customers can sign up here to be included in the beta for API Anomaly Detection. Today, customers can get started with Sequence Analytics and Sequence Mitigation by reviewing the docs. Enterprise customers that haven’t purchased API Gateway can self-start a trial in the Cloudflare Dashboard, or contact their account manager for more information.
Identifying unknown application vulnerabilities
Another area where AI improves security is in our Web Application Firewall (WAF). Cloudflare processes 55 million HTTP requests per second on average and has unparalleled visibility into attacks and exploits across the world targeting a wide range of applications.
One of the big challenges with a WAF is adding protections for new vulnerabilities while avoiding false positives. A WAF is a collection of rules designed to identify attacks directed at web applications. New vulnerabilities are discovered daily, and at Cloudflare we have a team of security analysts who create new rules when vulnerabilities are discovered. However, manually creating rules takes time — usually hours — leaving applications potentially vulnerable until a protection is in place. The other problem is that attackers continuously evolve and mutate existing attack payloads, which can potentially bypass existing rules.
This is why Cloudflare has, for years, leveraged machine learning models that constantly learn from the latest attacks, deploying mitigations without the need for manual rule creation. This can be seen, for example, in our WAF Attack Score solution. WAF Attack Score is based on an ML model trained on attack traffic identified on the Cloudflare network. The resulting classifier allows us to identify variations and bypasses of existing attacks as well as extending the protection to new and undiscovered attacks. Recently, we have made Attack Score available to all Enterprise and Business plans.
While the contribution of security analysts is indispensable, in the era of AI and rapidly evolving attack payloads, a robust security posture demands solutions that do not rely on human operators to write rules for each novel threat. Combining Attack Score with traditional signature-based rules is an example of how intelligent systems can support tasks carried out by humans. Attack Score identifies new malicious payloads which can be used by analysts to optimize rules that, in turn, provide better training data for our AI models. This creates a reinforcing positive feedback loop improving the overall protection and response time of our WAF.
Long term, we will adapt the AI model to account for customer-specific traffic characteristics to better identify deviations from normal and benign traffic.
Using AI to fight phishing
Email is one of the most effective vectors leveraged by bad actors, with the US Cybersecurity and Infrastructure Security Agency (CISA) reporting that 90% of cyber attacks start with phishing, and Cloudflare Email Security marking 2.6% of 2023’s emails as malicious. The rise of AI-enhanced attacks is making traditional email security providers obsolete, as threat actors can now craft phishing emails that are more credible than ever, with little to no language errors.
Cloudflare Email Security is a cloud-native service that stops phishing attacks across all threat vectors. Cloudflare’s email security product continues to protect customers with its AI models, even as trends like generative AI continue to evolve. Cloudflare’s models analyze all parts of a phishing attack to determine the risk posed to the end user. Some of our AI models are personalized for each customer, while others are trained holistically. Privacy is paramount at Cloudflare, so only non-personally identifiable information is used by our tools for training. In 2023, Cloudflare processed approximately 13 billion emails and blocked 3.4 billion of them, giving the email security product a rich dataset for training AI models.
Two detections that are part of our portfolio are Honeycomb and Labyrinth.
Honeycomb is a patented email sender domain reputation model. This service builds a graph of who is sending messages and uses it to model risk. Models are trained on specific customer traffic patterns, so every customer has AI models trained on what their good traffic looks like.
Labyrinth uses machine learning to protect on a per-customer basis. Attackers attempt to spoof emails from our clients’ valid partner companies. We gather a list, with statistics, of known good email senders for each of our clients, and can then detect spoofing attempts when an email is sent from an unverified domain but the domain mentioned in the email itself is a verified reference domain.
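Reduced to its simplest form (the production detection is statistical and ML-based, not a lookup), the core signal looks something like this; the domain list and helper are illustrative:

// Verified partner senders for one client; illustrative values.
const verifiedSenders = new Set(["partner.example.com", "vendor.example.org"]);

// An unverified sender whose message name-drops a verified domain is suspicious.
function looksLikeSpoof(senderDomain: string, body: string): boolean {
  if (verifiedSenders.has(senderDomain)) return false;
  return [...verifiedSenders].some((domain) => body.includes(domain));
}

console.log(looksLikeSpoof("partner-example.net", "Invoice from partner.example.com attached"));
// true: unverified sender referencing a verified partner domain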
AI remains at the core of our email security product, and we are constantly improving the ways we leverage it. If you want more information about how we are using our AI models to stop AI-enhanced phishing attacks, check out our blog post here.
Zero-Trust security protected and powered by AI
Cloudflare Zero Trust provides administrators the tools to protect access to their IT infrastructure by enforcing strict identity verification for every person and device regardless of whether they are sitting within or outside the network perimeter.
One of the big challenges is to enforce strict access control while reducing the friction introduced by frequent verifications. Existing solutions also put pressure on IT teams that need to analyze log data to track how risk is evolving within their infrastructure. Sifting through a huge amount of data to find rare attacks requires large teams and substantial budgets.
Cloudflare simplifies this process by introducing behavior-based user risk scoring. Leveraging AI, we analyze real-time data to identify anomalies in user behavior and signals that could lead to harm for the organization. This provides administrators with recommendations on how to tailor their security posture based on user behavior.
Zero Trust user risk scoring detects user activity and behaviors that could introduce risk to your organizations, systems, and data and assigns a score of Low, Medium, or High to the user involved. This approach is sometimes referred to as user and entity behavior analytics (UEBA) and enables teams to detect and remediate possible account compromise, company policy violations, and other risky activity.
The first contextual behavior we are launching is “impossible travel”, which helps identify when a user’s credentials are used in two locations that the user could not have traveled between in that period of time. These risk scores can be extended in the future to highlight personalized behavioral risks based on contextual information such as time-of-day usage patterns and access patterns, flagging any anomalous behavior. And since all traffic is proxied through your SWG, scoring can also be extended to the resources being accessed, like an internal company repo.
From application and email security to network security and Zero Trust, we are witnessing attackers leveraging new technologies to be more effective in achieving their goals. In the last few years, multiple Cloudflare product and engineering teams have adopted intelligent systems to better identify abuses and increase protection.
Beyond the generative AI craze, AI is already a crucial part of how we defend digital assets against attacks and discourage bad actors.
Cloudflare One, our secure access service edge (SASE) platform, is introducing new capabilities to detect risk based on user behavior so that you can improve security posture across your organization.
Traditionally, security and IT teams spend a lot of time, labor, and money analyzing log data to track how risk is changing within their business and to stay on top of threats. Sifting through such large volumes of data – the majority of which may well be benign user activity – can feel like finding a needle in a haystack.
Cloudflare’s approach simplifies this process with user risk scoring. Using AI and machine learning techniques, we analyze the real-time telemetry of user activities and behaviors that pass through our network to identify abnormal behavior and potential indicators of compromise. Your security teams can then lock down suspicious activity and adapt your security posture in the face of changing risk factors and sophisticated threats.
User risk scoring
The concept of trust in cybersecurity has evolved dramatically. The old model of “trust but verify” has given way to a Zero Trust approach, where trust is never assumed and verification is continuous, as each network request is scrutinized. This form of continuous evaluation enables administrators to grant access based not just on the contents of a request and its metadata, but on its context — such as whether the user typically logs in at that time or location.
Previously, this kind of contextual risk assessment was time-consuming and required expertise to parse through log data. Now, we’re excited to introduce Zero Trust user risk scoring, which does this automatically, allowing administrators to specify behavioral rules — like monitoring for anomalous “impossible travel” or custom Data Loss Prevention (DLP) triggers — and use these to generate dynamic user risk scores.
Zero Trust user risk scoring detects user activity and behaviors that could introduce risk to your organizations, systems, and data and assigns a score of Low, Medium, or High to the user involved. This approach is sometimes referred to as user and entity behavior analytics (UEBA) and enables teams to detect and remediate possible account compromise, company policy violations, and other risky activity.
How risk scoring works and detecting user risk
User risk scoring is built to examine behaviors. Behaviors are actions taken or completed by a user and observed by Cloudflare One, our SASE platform that helps organizations implement Zero Trust.
Once tracking for a particular behavior is enabled, the Zero Trust risk scoring engine immediately starts to review existing logs generated within your Zero Trust account. Then, after a user in your account performs a behavior that matches one of the enabled risk behaviors based on observed log data, Cloudflare assigns a risk score — Low, Medium, or High — to the user who performed the behavior.
Behaviors are built using log data from within your Cloudflare account. No additional user data is being collected, tracked or stored beyond what is already available in the existing Zero Trust logs (which adhere to the log retention timeframes).
A popular priority amongst security and insider threat teams is detecting when a user performs so-called “impossible travel”. Impossible travel, available as a predefined risk behavior today, is when a user completes a login from two different locations that the user could not have traveled to in that period of time. For example, if Alice is in Seattle and logs into her organization’s finance application that is protected by Cloudflare Access and only a few minutes later is seen logging into her organization’s business suite from Sydney, Australia, impossible travel would be triggered and Alice would be assigned a risk level of High.
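A simplified sketch of that check: compute the great-circle distance between consecutive logins and flag the user when the implied travel speed is implausible. The 900 km/h threshold is an assumption for illustration, not Cloudflare’s actual parameter:

// Roughly the fastest sustained speed a commercial flight could achieve.
const MAX_PLAUSIBLE_KMH = 900;

// Great-circle distance between two coordinates via the haversine formula.
function haversineKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.asin(Math.sqrt(h)); // Earth radius ~6371 km
}

function isImpossibleTravel(
  a: { lat: number; lon: number; time: Date },
  b: { lat: number; lon: number; time: Date }
): boolean {
  const hours = Math.abs(b.time.getTime() - a.time.getTime()) / 3_600_000;
  return haversineKm(a.lat, a.lon, b.lat, b.lon) / hours > MAX_PLAUSIBLE_KMH;
}

// Seattle login followed minutes later by a Sydney login: flagged as High risk.
console.log(
  isImpossibleTravel(
    { lat: 47.61, lon: -122.33, time: new Date("2024-03-04T10:00:00Z") },
    { lat: -33.87, lon: 151.21, time: new Date("2024-03-04T10:05:00Z") }
  )
); // true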
Users observed performing multiple risk behaviors are assigned the highest-level risk behavior they have triggered. This real-time risk assessment empowers your security teams to act swiftly and decisively.
Enabling predefined risk behaviors
Behaviors can be enabled and disabled at any time, but are disabled by default. Therefore, users will not be assigned risk scores until you have decided what is considered a risk to your organization and how urgent that risk is.
To start detecting a given risk behavior, an administrator must first ensure the behavior requirements are met (for instance, to detect whether a user has triggered a high number of DLP policies, you’ll need to first set up a DLP profile). From there, simply enable the behavior in the Zero Trust dashboard.
After a behavior has been enabled, Cloudflare will start analyzing behaviors to flag users with the corresponding risk when detected. The risk level of any behavior can be changed by an administrator. You have the freedom to enable behaviors that are relevant to your security posture as well as adjust the default risk score (Low, Medium, or High) from an out-of-the-box assignment.
And for security administrators who have investigated a user and need to clear a user’s risk score, simply go to Risk score > User risk scoring, choose the appropriate user, and select ‘Reset user risk’ followed by ‘Confirm.’ Once a user’s risk score is reset, they disappear from the risk table — until or unless they trigger another risk behavior.
How do I get started?
User risk scoring and DLP are part of Cloudflare One, which converges Zero Trust security and network connectivity services on one unified platform and global control plane.
As more organizations collectively progress toward adopting a SASE architecture, it has become clear that the traditional SASE market definition (SSE + SD-WAN) is not enough. It forces some teams to work with multiple vendors to address their specific needs, introducing performance and security tradeoffs. More worrisome, it draws focus to a checklist of services rather than to a vendor’s underlying architecture. Even the most advanced individual security services or traffic on-ramps don’t matter if organizations ultimately send their traffic through a fragmented, flawed network.
Single-vendor SASE is a critical trend to converge disparate security and networking technologies, yet enterprise “any-to-any connectivity” needs true network modernization for SASE to work for all teams. Over the past few years, Cloudflare has launched capabilities to help organizations modernize their networks as they navigate their short- and long-term roadmaps of SASE use cases. We’ve helped simplify SASE implementation, regardless of the team leading the initiative.
Announcing (even more!) flexible on-ramps for single-vendor SASE
Today, we are announcing a series of updates to our SASE platform, Cloudflare One, that further the promise of a single-vendor SASE architecture. Through these new capabilities, Cloudflare makes SASE networking more flexible and accessible for security teams, more efficient for traditional networking teams, and uniquely extends its reach to an underserved technical team in the larger SASE connectivity conversation: DevOps.
These platform updates include:
Flexible on-ramps for site-to-site connectivity that enable both agent/proxy-based and appliance/routing-based implementations, simplifying SASE networking for both security and networking teams.
New WAN-as-a-service (WANaaS) capabilities like high availability, application awareness, a virtual machine deployment option, and enhanced visibility and analytics that boost operational efficiency while reducing network costs through a “light branch, heavy cloud” approach.
Zero Trust connectivity for DevOps: mesh and peer-to-peer (P2P) secure networking capabilities that extend ZTNA to support service-to-service workflows and bidirectional traffic.
Cloudflare offers a wide range of SASE on- and off-ramps — including connectors for your WAN, applications, services, systems, devices, or any other internal network resources — to more easily route traffic to and from Cloudflare services. This helps organizations align with their best fit connectivity paradigm, based on existing environment, technical familiarity, and job role.
We recently dove into the Magic WAN Connector in a separate blog post and have explained how all our on-ramps fit together in our SASE reference architecture, including our new WARP Connector. This blog focuses on the main impact those technologies have for customers approaching SASE networking from different angles.
More flexible and accessible for security teams
The process of implementing a SASE architecture can challenge an organization’s status quo for internal responsibilities and collaboration across IT, security, and networking. Different teams own various security or networking technologies whose replacement cycles are not necessarily aligned, which can reduce the organization’s willingness to support particular projects.
Security and IT practitioners need to be able to protect resources no matter where they reside. Sometimes a small connectivity change would help them more efficiently protect a given resource, but the task is outside their domain of control. Security teams don’t want to feel reliant on their networking teams in order to do their jobs, and yet they also don’t want to cause downstream trouble with existing network infrastructure. They need an easier way to connect subnets, for instance, without feeling held back by bureaucracy.
Agent/proxy-based site-to-site connectivity
To help push these security-led projects past the challenges associated with traditional siloes, Cloudflare offers both agent/proxy-based and appliance/routing-based implementations for site-to-site or subnet-to-subnet connectivity. This way, networking teams can pursue the traditional networking concepts with which they are familiar through our appliance/routing-based WANaaS — a modern architecture vs. legacy SD-WAN overlays. Simultaneously, security/IT teams can achieve connectivity through agent/proxy-based software connectors (like the WARP Connector) that may be more approachable to implement. This agent-based approach blurs the lines between industry norms for branch connectors and app connectors, bringing WAN and ZTNA technology closer together to help achieve least-privileged access everywhere.
Agent/proxy-based connectivity may be a complementary fit for a subset of an organization’s total network connectivity. These software-driven site-to-site use cases could include microsites with no router or firewall, or perhaps cases in which teams are unable to configure IPsec or GRE tunnels like in tightly regulated managed networks or cloud environments like Kubernetes. Organizations can mix and match traffic on-ramps to fit their needs; all options can be used composably and concurrently.
Our agent/proxy-based approach to site-to-site connectivity uses the same underlying technology that helps security teams fully replace VPNs, supporting ZTNA for apps with server-initiated or bidirectional traffic. These include services such as Voice over Internet Protocol (VoIP) and Session Initiation Protocol (SIP) traffic, Microsoft’s System Center Configuration Manager (SCCM), Active Directory (AD) domain replication, and as detailed later in this blog, DevOps workflows.
This new Cloudflare on-ramp enables site-to-site, bidirectional, and mesh networking connectivity without requiring changes to underlying network routing infrastructure, acting as a router for the subnet within the private network to on-ramp and off-ramp traffic through Cloudflare.
More efficient for networking teams
Meanwhile, for networking teams who prefer a network-layer appliance/routing-based implementation for site-to-site connectivity, the industry norms still force too many tradeoffs between security, performance, cost, and reliability. Many (if not most) large enterprises still rely on legacy forms of private connectivity such as MPLS. MPLS is generally considered expensive and inflexible, but it is highly reliable and has features such as quality of service (QoS) that are used for bandwidth management.
Commodity Internet connectivity is widely available in most parts of the inhabited world, but it has a number of challenges that make it an imperfect replacement for MPLS. In many countries, high-speed Internet is fast and cheap, but this is not universally true: speed and costs depend on the local infrastructure and the market for regional service providers. In general, broadband Internet is also not as reliable as MPLS. Outages and slowdowns are not unusual, and customers have varying degrees of tolerance to the frequency and duration of disrupted service. For businesses, outages and slowdowns are not tolerable: disruptions to network service mean lost business, unhappy customers, lower productivity, and frustrated employees. Thus, despite the fact that a significant share of corporate traffic has already shifted to the Internet, many organizations find it difficult to migrate away from MPLS.
SD-WAN introduced an alternative to MPLS that is transport neutral and improves networking stability over conventional broadband alone. However, it introduces new topology and security challenges. For example, many SD-WAN implementations can increase risk if they bypass inspection between branches. There are also implementation-specific challenges, such as how to address scaling and the use and control (or, more precisely, the lack) of a middle mile. Thus, the promise of making a full cutover to Internet connectivity and eliminating MPLS remains unfulfilled for many organizations. These issues are also not very apparent to some customers at the time of purchase and require continuing market education.
Evolution of the enterprise WAN
Cloudflare Magic WAN follows a different paradigm built from the ground up in Cloudflare’s connectivity cloud; it takes a “light branch, heavy cloud” approach to augment and eventually replace existing network architectures including MPLS circuits and SD-WAN overlays. While Magic WAN has similar cloud-native routing and configuration controls to what customers would expect from traditional SD-WAN, it is easier to deploy, manage, and consume. It scales with changing business requirements, with security built in. Customers like Solocal agree that the benefits of this architecture ultimately improve their total cost of ownership:
“Cloudflare’s Magic WAN Connector offers a centralized and automated management of network and security infrastructure, in an intuitive approach. As part of Cloudflare’s SASE platform, it provides a consistent and homogeneous single-vendor architecture, founded on market standards and best practices. Control over all data flows is ensured, and risks of breaches or security gaps are reduced. It is obvious to Solocal that it should provide us with significant savings, by reducing all costs related to acquiring, installing, maintaining, and upgrading our branch network appliances by up to 40%. A high-potential connectivity solution for our IT to modernize our network.” – Maxime Lacour, Network Operations Manager, Solocal
This is quite different from other single-vendor SASE approaches, which have been trying to reconcile acquisitions designed around fundamentally different philosophies. These “stitched together” solutions lead to a non-converged experience due to their fragmented architectures, similar to what organizations might see if they were managing multiple separate vendors anyway. Consolidating the components of SASE with a vendor that has built a unified, integrated solution, rather than piecing together different solutions for networking and security, significantly simplifies deployment and management by reducing complexity, security bypasses, and potential integration or connectivity challenges.
Magic WAN can automatically establish IPsec tunnels to Cloudflare via our Connector device, manually via Anycast IPsec or GRE Tunnels initiated on a customer’s edge router or firewall, or via Cloudflare Network Interconnect (CNI) at private peering locations or public cloud instances. It pushes beyond “integration” claims with SSE to truly converge security and networking functionality and help organizations more efficiently modernize their networks.
New Magic WAN Connector capabilities
In October 2023, we announced the general availability of the Magic WAN Connector, a lightweight device that customers can drop into existing network environments for zero-touch connectivity to Cloudflare One, ultimately replacing other networking hardware such as legacy SD-WAN devices, routers, and firewalls. Today, we’re excited to announce new capabilities of the Magic WAN Connector, including:
High Availability (HA) configurations for critical environments: In enterprise deployments, organizations generally desire support for high availability to mitigate the risk of hardware failure. High availability uses a pair of Magic WAN Connectors (running as a VM or on a supported hardware device) that work in conjunction with one another to seamlessly resume operation if one device fails. Customers can manage HA configuration, like all other aspects of the Magic WAN Connector, from the unified Cloudflare One dashboard.
Application awareness: One of the central differentiating features of SD-WAN vs. more traditional networking devices has been the ability to create traffic policies based on well-known applications, in addition to network-layer attributes like IP and port ranges. Application-aware policies provide easier management and more granularity over traffic flows. Cloudflare’s implementation of application awareness leverages the intelligence of our global network, using the same categorization/classification already shared across security tools like our Secure Web Gateway, so IT and security teams can expect consistent behavior across routing and inspection decisions – a capability not available in dual-vendor or stitched-together SASE solutions.
Virtual machine deployment option: The Magic WAN Connector is now available as a virtual appliance software image that can be downloaded for immediate deployment on any supported virtualization platform / hypervisor. The virtual Magic WAN Connector has the same ultra-low-touch deployment model and centralized fleet management experience as the hardware appliance, and is offered to all Magic WAN customers at no additional cost.
Enhanced visibility and analytics: The Magic WAN Connector features enhanced visibility into key metrics such as connectivity status, CPU utilization, memory consumption, and device temperature. These analytics are available via the dashboard and API so operations teams can integrate the data into their NOCs.
Extending SASE’s reach to DevOps
Continuous integration and continuous delivery (CI/CD) pipelines are famous for being agile, so the connectivity and security supporting these workflows should match. Yet DevOps teams too often rely on traditional VPNs for remote access to development and operational tools. VPNs are cumbersome to manage, susceptible to exploitation via known or zero-day vulnerabilities, and built on a legacy hub-and-spoke connectivity model that is too slow for modern workflows.
Of any employee group, developers are particularly capable of finding creative workarounds that decrease friction in their daily workflows, so all corporate security measures need to “just work,” without getting in their way. Ideally, all users and servers across build, staging, and production environments should be orchestrated through centralized, Zero Trust access controls, no matter what components and tools are used and no matter where they are located. Ad hoc policy changes should be accommodated, as well as temporary Zero Trust access for contractors or even emergency responders during a production server incident.
Zero Trust connectivity for DevOps
ZTNA works well as an industry paradigm for secure, least-privileged user-to-app access, but it should extend further to secure networking use cases that involve server-initiated or bidirectional traffic. This follows an emerging trend that imagines an overlay mesh connectivity model across clouds, VPCs, or network segments without a reliance on routers. For true any-to-any connectivity, customers need flexibility to cover all of their network connectivity and application access use cases. Not every SASE vendor’s network on-ramps can extend beyond client-initiated traffic without requiring network routing changes or making security tradeoffs, so generic “any-to-any connectivity” claims may not be what they initially seem.
Cloudflare extends the reach of ZTNA to ensure all user-to-app use cases are covered, plus mesh and P2P secure networking to make connectivity options as broad and flexible as possible. DevOps service-to-service workflows can run efficiently on the same platform that accomplishes ZTNA, VPN replacement, or enterprise-class SASE. Cloudflare acts as the connectivity “glue” across all DevOps users and resources, regardless of the flow of traffic at each step. This same technology, the WARP Connector, enables admins to manage different private networks with overlapping IP ranges (VPC and RFC 1918), support server-initiated traffic and P2P apps (e.g., SCCM, AD, VoIP, and SIP traffic) over existing private networks, build P2P private networks (e.g., CI/CD resource flows), and deterministically route traffic. Organizations can also automate management of their SASE platform with Cloudflare’s Terraform provider.
The Cloudflare difference
Cloudflare’s single-vendor SASE platform, Cloudflare One, is built on our connectivity cloud — the next evolution of the public cloud, providing a unified, intelligent platform of programmable, composable services that enable connectivity between all networks (enterprise and Internet), clouds, apps, and users. Our connectivity cloud is flexible enough to make “any-to-any connectivity” a more approachable reality for organizations implementing a SASE architecture, accommodating deployment preferences alongside prescriptive guidance. Cloudflare is built to offer the breadth and depth needed to help organizations regain IT control through single-vendor SASE and beyond, while simplifying workflows for every team that contributes along the way.
Other SASE vendors designed their data centers for egress traffic to the Internet. They weren’t designed to handle or secure East-West traffic, providing neither middle mile nor security services for traffic passing from branch to HQ or branch to branch. Cloudflare’s middle mile global backbone supports security and networking for any-to-any connectivity, whether users are on-prem or remote, and whether apps are in the data center or in the cloud.
Today, we’re announcing the general availability of the Magic WAN Connector, a key component of our SASE platform, Cloudflare One. Magic WAN Connector is the glue between your existing network hardware and Cloudflare’s network — it provides a super simplified software solution that comes pre-installed on Cloudflare-certified hardware, and is entirely managed from the Cloudflare One dashboard.
It takes only a few minutes from unboxing to seeing your network traffic automatically routed to the closest Cloudflare location, where it flows through a full stack of Zero Trust security controls before taking an accelerated path to its destination, whether that’s another location on your private network, a SaaS app, or any application on the open Internet.
Since we announced our beta earlier this year, organizations around the world have deployed the Magic WAN Connector to connect and secure their network locations. We’re excited for the general availability of the Magic WAN Connector to accelerate SASE transformation at scale.
When customers tell us about their journey to embrace SASE, one of the most common stories we hear is:
We started with our remote workforce, deploying modern solutions to secure access to internal apps and Internet resources. But now, we’re looking at the broader landscape of our enterprise network connectivity and security, and it’s daunting. We want to shift to a cloud and Internet-centric model for all of our infrastructure, but we’re struggling to figure out how to start.
The Magic WAN Connector was created to address this problem.
Zero-touch connectivity to your new corporate WAN
Cloudflare One enables organizations of any size to connect and secure all of their users, devices, applications, networks, and data with a unified platform delivered by our global connectivity cloud. Magic WAN is the network connectivity “glue” of Cloudflare One, allowing our customers to migrate away from legacy private circuits and use our network as an extension of their own.
Previously, customers have connected their locations to Magic WAN with Anycast GRE or IPsec tunnels configured on their edge network equipment (usually existing routers or firewalls), or plugged into us directly with CNI. But for the past few years, we’ve heard requests from hundreds of customers asking for a zero-touch approach to connecting their branches: We just want something we can plug in and turn on, and it handles the rest.
The Magic WAN Connector is exactly this. Customers receive Cloudflare-certified hardware with our software pre-installed on it, and everything is controlled via the Cloudflare dashboard. What was once a time-consuming, complex process now takes a matter of minutes, enabling robust Zero Trust protection for all of your traffic.
In addition to automatically configuring tunnels and routing policies to direct your network traffic to Cloudflare, the Magic WAN Connector will also handle traffic steering, shaping and failover to make sure your packets always take the best path available to the closest Cloudflare network location — which is likely only milliseconds away. You’ll also get enhanced visibility into all your traffic flows in analytics and logs, providing a unified observability experience across both your branches and the traffic through Cloudflare’s network.
Zero Trust security for all your traffic
Once the Magic WAN Connector is deployed at your network location, you have automatic access to enforce Zero Trust security policies across both public and private traffic.
A secure on-ramp to the Internet
An easy first step to improving your organization’s security posture after connecting network locations to Cloudflare is creating Secure Web Gateway policies to defend against ransomware, phishing, and other threats for faster, safer Internet browsing. By default, all Internet traffic from locations with the Magic WAN Connector will route through Cloudflare Gateway, providing a unified management plane for traffic from physical locations and remote employees.
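As a hedged sketch of what that first step can look like when automated, the snippet below creates a Gateway DNS policy through Cloudflare’s API. The endpoint, field names, filter expression syntax, and category IDs reflect our reading of the Gateway API docs and are illustrative assumptions, not a definitive reference.

```typescript
// Hedged sketch: creating a Gateway DNS policy that blocks security threats.
// Endpoint shape, "traffic" expression, and category IDs are assumptions.
const ACCOUNT_ID = "your-account-id";
const API_TOKEN = "your-api-token";

const resp = await fetch(
  `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/gateway/rules`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      name: "Block security threats",
      action: "block",
      enabled: true,
      filters: ["dns"],
      // Block DNS queries classified into security-risk categories
      // (the numeric category IDs here are placeholders).
      traffic: "any(dns.security_category[*] in {80 83})",
    }),
  }
);
console.log(await resp.json());
```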
A more secure private network
The Magic WAN Connector also enables routing private traffic between your network locations, with multiple layers of network and Zero Trust security controls in place. Unlike a traditional network architecture, which requires deploying and managing a stack of security hardware and backhauling branch traffic through a central location for filtering, a SASE architecture provides private traffic filtering and control built-in: enforced across a distributed network, but managed from a single dashboard interface or API.
A simpler approach for hybrid cloud
Cloudflare One enables connectivity for any physical or cloud network with easy on-ramps depending on location type. The Magic WAN Connector provides easy connectivity for branches, and also automatic connectivity to other networks, including VPCs connected using cloud-native constructs (e.g., VPN gateways) or direct cloud connectivity (via Cloud CNI). With a unified connectivity and control plane across physical and cloud infrastructure, IT and security teams can reduce the overhead and cost of managing multi- and hybrid cloud networks.
Single-vendor SASE dramatically reduces cost and complexity
With the general availability of the Magic WAN Connector, we’ve put the final piece in place to deliver a unified SASE platform, developed and fully integrated from the ground up. Deploying and managing all the components of SASE with a single vendor, versus piecing together different solutions for networking and security, significantly simplifies deployment and management by reducing complexity and potential integration challenges. Many vendors that market a full SASE solution have actually stitched together separate products through acquisition, leading to an un-integrated experience similar to what you would see deploying and managing multiple separate vendors. In contrast, Cloudflare One (now with the Magic WAN Connector for simplified branch functions) enables organizations to achieve the true promise of SASE: a simplified, efficient, and highly secure network and security infrastructure that reduces your total cost of ownership and adapts to the evolving needs of the modern digital landscape.
Evolving beyond SD-WAN
Cloudflare One addresses many of the challenges that were left behind as organizations deployed SD-WAN to help simplify networking operations. SD-WAN provides orchestration capabilities to help manage devices and configuration in one place, as well as last mile traffic management to steer and shape traffic based on more sophisticated logic than is possible in traditional routers. But SD-WAN devices generally don't have embedded security controls, leaving teams to stitch together a patchwork of hardware, virtualized and cloud-based tools to keep their networks secure. They can make decisions about the best way to send traffic out from a customer’s branch, but they have no way to influence traffic hops between the last mile and the traffic's destination. And while some SD-WAN providers have surfaced virtualized versions of their appliances that can be deployed in cloud environments, they don't support native cloud connectivity and can complicate rather than ease the transition to cloud.
Cloudflare One represents the next evolution of enterprise networking, and has a fundamentally different architecture from either legacy networking or SD-WAN. It's based on a "light branch, heavy cloud" principle: deploy the minimum required hardware within physical locations (or virtual hardware within virtual networks, e.g., cloud VPCs) and use low-cost Internet connectivity to reach the nearest "service edge" location. At those locations, traffic can flow through security controls and be optimized on the way to its destination, whether that's another location within the customer's private network or an application on the public Internet. This architecture also enables remote user access to connected networks.
This shift — moving most of the "smarts" from the branch to a distributed global network edge, and leaving only the functions at the branch that absolutely require local presence, delivered by the Magic WAN Connector — solves our customers’ current problems and sets them up for easier management and a stronger security posture as the connectivity and attack landscape continues to evolve.
| Aspect | Example | MPLS/VPN Service | SD-WAN | SASE with Cloudflare One |
| --- | --- | --- | --- | --- |
| Configuration | New site setup, configuration and management | By MSP through service request | Simplified orchestration and management via centralized controller | Automated orchestration via SaaS portal (single dashboard) |
| Last mile traffic control | Traffic balancing, QoS, and failover | Covered by MPLS SLAs | Best path selection available in SD-WAN appliance | Minimal on-prem deployment to control local decision making |
| Middle mile traffic control | Traffic steering around middle mile congestion | Covered by MPLS SLAs | “Tunnel spaghetti” and still no control over the middle mile | Integrated traffic management & private backbone controls in a unified dashboard |
| Cloud integration | Connectivity for cloud migration | Centralized breakout | Decentralized breakout | Native connectivity with Cloud Network Interconnect |
| Security | Filter in- & outbound Internet traffic for malware | Patchwork of hardware controls | Patchwork of hardware and/or software controls | Native integration with user, data, application & network security tools |
| Cost | Maximize ROI for network investments | High cost for hardware and connectivity | Optimized connectivity costs at the expense of increased hardware and software costs | Decreased hardware and connectivity costs for maximized ROI |

Summary of legacy, SD-WAN based, and SASE architecture considerations
Love and want to keep your current SD-WAN vendor? No problem – you can still use any appliance that supports IPsec or GRE as an on-ramp for Cloudflare One.
Ready to simplify your SASE journey?
You can learn more about the Magic WAN Connector, including device specs, specific feature info, onboarding process details, and more at our dev docs, or contact us to get started today.
The most famous data breaches, the ones that keep security practitioners up at night, involved the leak of millions of user records. Companies have lost names, addresses, email addresses, Social Security numbers, passwords, and a wealth of other sensitive information. Protecting this data is the highest priority of most security teams, yet many teams still struggle to actually detect these leaks.
Cloudflare’s Data Loss Prevention suite already includes the ability to identify sensitive data like credit card numbers, but with the volume of data being transferred every day, it can be challenging to understand which of the transactions that include sensitive data are actually problematic. We hear customers tell us, “I don’t care when one of my employees uses a personal credit card to buy something online. Tell me when one of my customers’ credit cards is leaked.”
In response, we looked for a method to distinguish between any credit card and one belonging to a specific customer. We are excited to announce the launch of our newest Data Loss Prevention feature, Exact Data Match. With Exact Data Match (EDM), customers securely tell us what data they want to protect, and then we identify, log, and block the presence or movement of that data. For example, if you provide us with a set of credit card numbers, we will scan your traffic or repositories for only those cards. This allows you to create targeted DLP detections for your organization.
What is Exact Data Match?
Many Data Loss Prevention (DLP) detections begin with a generic identification of a pattern, often using a regular expression, and then are validated by additional criteria. Validation can leverage a wide range of techniques from checksums to machine learning models. However, this validates that the pattern is a credit card, not that it is your credit card.
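To make the distinction concrete, here is a minimal sketch of the generic approach: a regular expression finds candidate card numbers, then the well-known Luhn checksum validates them. The regex and sample string are ours, purely for illustration.

```typescript
// Generic DLP-style detection: regex finds candidates, Luhn validates them.
// This proves a string is *a* valid card number, not *your* card number,
// which is exactly the gap Exact Data Match closes.
const CANDIDATE = /\b(?:\d[ -]?){13,19}\b/g;

function luhnValid(candidate: string): boolean {
  const digits = candidate.replace(/\D/g, "");
  let sum = 0;
  let double = false;
  // Walk right-to-left, doubling every second digit per the Luhn algorithm.
  for (let i = digits.length - 1; i >= 0; i--) {
    let d = Number(digits[i]);
    if (double) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
    double = !double;
  }
  return digits.length >= 13 && sum % 10 === 0;
}

// Example: matches the test number 4111 1111 1111 1111.
const hits = "pay with 4111 1111 1111 1111".match(CANDIDATE)?.filter(luhnValid);
```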
With Exact Data Match, you tell us exactly the data you want to protect, but we never see it in cleartext. You provide a list of data of your choosing, such as a list of names, addresses, or credit card numbers, and that data is hashed before ever reaching Cloudflare. We store the hashes and scan your traffic or content for matches of the hashes. When we find a match, we log or block it according to your policy.
By using a finite list of data, we drastically reduce false positives compared to generic pattern matching. Meanwhile, hashing the data maintains your data privacy. Our goal is to meet your data protection and privacy needs.
How do I use it?
We now offer you the ability to upload DLP datasets. These allow you to provide batches of data to be used for your DLP detections.
When creating a dataset, provide a name, description, and a file containing the data to match.
When you upload the file, Cloudflare one-way hashes the data right in your browser. The hashed data is then transferred via API to Cloudflare, while the cleartext data never leaves the browser.
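As a minimal sketch of what in-browser one-way hashing can look like, the snippet below hashes each entry with SHA-256 via the Web Crypto API before anything is sent. The choice of SHA-256, the trimming, and the hex encoding are our assumptions for illustration; Cloudflare’s actual EDM hashing scheme may differ.

```typescript
// Illustrative client-side hashing before upload (not Cloudflare's exact scheme).
async function hashEntries(entries: string[]): Promise<string[]> {
  const encoder = new TextEncoder();
  return Promise.all(
    entries.map(async (entry) => {
      // One-way hash each entry in the browser.
      const digest = await crypto.subtle.digest("SHA-256", encoder.encode(entry.trim()));
      // Hex-encode the digest so it can travel to the API as text.
      return [...new Uint8Array(digest)]
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    })
  );
}

// Only the hashes leave the browser; the cleartext list never does.
// const hashes = await hashEntries(["4111 1111 1111 1111" /*, ... */]);
```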
You can see the status of the upload in the datasets table.
The dataset can now be added to a DLP profile for detection. You can also add other predefined and custom entries to the same DLP profile.
Exact Data Match is now available for every DLP customer. If you are not a DLP customer but would like to learn more about Cloudflare One and DLP, reach out for a consultation.
What’s next?
Customers have many different formats to store data, and many different ways in which they want to monitor it. Our goal is to offer as much flexibility as your organization needs to meet your data protection goals.
One of the best feelings as a developer is seeing your idea come to life. You want to move fast and Cloudflare’s developer platform gives you the tools to take your applications from 0 to 100 within minutes.
One thing that we’ve heard slows developers down is the question: “What databases can be used with Workers?”. Developers stumble when it comes to finding the databases that Workers can connect to, choosing the right library or driver that’s compatible with Workers, and translating boilerplate examples into something that can run on our developer platform.
Today we’re announcing Database Integrations – making it seamless to connect to your database of choice on Workers. To start, we’ve added some of the most popular databases that support HTTP connections: Neon, PlanetScale and Supabase with more (like Prisma, Fauna, MongoDB Atlas) to come!
Focus more on code, less on config
Our serverless SQL database, D1, launched in open alpha last year, and we’re continuing to invest in making it production ready (stay tuned for an exciting update later this week!). We also recognize that there are plenty of flavours of databases, and we want developers to have the freedom to select what’s best for them and pair it with our powerful compute offering.
On our second day of this Developer Week 2023, data is in the spotlight. We’re taking huge strides in making it possible and more performant to connect to databases from Workers (spoiler alert!).
Making it possible and performant is just the start; we also want to make connecting to databases painless. Databases have specific protocols, drivers, APIs, and vendor-specific features that you need to understand in order to get up and running. With Database Integrations, we want to make this process foolproof.
Whether you’re working on your first project or your hundredth project, you should be able to connect to your database of choice with your eyes closed. With Database Integrations, you can spend less time focusing on configuration and more on doing what you love – building your applications!
What does this experience look like?
Discoverability
If you’re starting a project from scratch or want to connect Workers to an existing database, you want to know “What are my options?”.
Workers supports connections to a wide array of database providers over HTTP. With newly released outbound TCP support, the databases that you can connect to on Workers will only grow!
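For the curious, here is roughly what that outbound TCP support looks like from a Worker, as we understand the API from its announcement; the hostname and port are placeholders and the interface may evolve.

```typescript
// Hedged sketch of the Workers outbound TCP API.
import { connect } from "cloudflare:sockets";

export default {
  async fetch(): Promise<Response> {
    // Open a raw TCP connection (placeholder echo service).
    const socket = connect({ hostname: "tcp-echo.example.com", port: 7 });
    const writer = socket.writable.getWriter();
    await writer.write(new TextEncoder().encode("ping"));
    await socket.close();
    return new Response("ok");
  },
};
```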
In the new “Integrations” tab, you’ll be able to view all the databases that we support and add the integration to your Worker directly from here. To start, we have support for Neon, PlanetScale and Supabase with many more coming soon.
Authentication
You should never have to copy and paste your database credentials or other parts of the connection string.
Once you hit “Add Integration”, we take you through an OAuth2 flow that automatically fetches the right configuration from your database provider and adds the values as encrypted environment variables to your Worker.
Once you have credentials set up, check out our documentation for examples on how to get started using the data platform’s client library. What’s more – we have templates coming that will allow you to get started even faster!
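As a rough sketch, a Worker using an integration might look like the snippet below, here with PlanetScale’s serverless driver. The environment variable names and the query are assumptions for illustration; the integration defines the actual bindings it injects.

```typescript
// Illustrative Worker using the PlanetScale serverless driver over HTTP.
import { connect } from "@planetscale/database";

interface Env {
  // Assumed names; the integration injects the real encrypted variables.
  DATABASE_HOST: string;
  DATABASE_USERNAME: string;
  DATABASE_PASSWORD: string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const conn = connect({
      host: env.DATABASE_HOST,
      username: env.DATABASE_USERNAME,
      password: env.DATABASE_PASSWORD,
    });
    // The driver speaks HTTP, so no TCP socket support is required.
    const result = await conn.execute("SELECT id, name FROM products LIMIT 10");
    return Response.json(result.rows);
  },
};
```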
That’s it! With database integrations, you can connect your Worker with your database in just a few clicks. Head to your Worker > Settings > Integrations to try it out today.
What’s next?
We’ve only just scratched the surface with Database Integrations and there’s a ton more coming soon!
While we’ll be continuing to add support for more popular data platforms we also know that it's impossible for us to keep up in a moving landscape. We’ve been working on an integrations platform so that any database provider can easily build their own integration with Workers. As a developer, this means that you can start tinkering with the next new database right away on Workers.
Additionally, we’re working on adding wrangler support, so you can create integrations directly from the CLI. We’ll also be adding support for account level environment variables in order for you to share integrations across the Workers in your account.
We’re really excited about the potential here and to see all the new creations from our developers! Be sure to join Cloudflare’s Developer Discord and share your projects. Happy building!
There’s an important debate happening in Europe that could affect the future of the Internet. The European Commission is considering new rules for how networks connect to each other on the Internet. It’s considering proposals that – no hyperbole – will slow the Internet for consumers and are dangerous for the Internet.
The large incumbent telcos are complaining loudly to anyone who wants to listen that they aren’t being adequately compensated for the capital investments they’re making. These telcos are a set of previously regulated monopolies that still constitute the largest telcos by revenue in Europe in today’s competitive market. They say traffic volumes, largely due to video streaming, are growing rapidly, implying they need to make capital investments to keep up. And they call for new charges on big US tech companies: a “fair share” contribution that those networks should make to European Internet infrastructure investment.
In response to this campaign, in February the European Commission released a set of recommended actions and proposals “aimed to make Gigabit connectivity available to all citizens and businesses across the EU by 2030.” The Commission goes on to say that “Reliable, fast and secure connectivity is a must for everybody and everywhere in the Union, including in rural and remote areas.” While this goal is certainly the right one, our agreement with the European Commission’s approach, unfortunately, ends right there. A close reading of the Commission’s exploratory consultation that accompanies the Gigabit connectivity proposals shows that the ultimate goal is to intervene in the market for how networks interconnect, with the intention to extract fees from large tech companies and funnel them to large incumbent telcos.
This debate has been characterised as a fight between Big Tech and Big European Telco. But it’s about much more than that. Contrary to their intent, these proposals would give the biggest technology companies preferred access to the largest European ISPs. European consumers and small businesses, when accessing anything on the Internet outside Big Tech (Netflix, Google, Meta, Amazon, etc.), would get the slow lane. Below we’ll explain why Cloudflare, although we are not currently targeted for extra fees, still feels strongly that these fees are dangerous for the Internet:
Network usage fees would create fast lanes for Big Tech content, and slow lanes for everything else, slowing the Internet for European consumers;
Small businesses, Internet startups, and consumers are the beneficiaries of Europe’s low wholesale bandwidth prices. Regulatory intervention in this market would lead to higher prices that would be passed onto SMEs and consumers;
The Internet works best – fastest and most reliably – when networks connect freely and frequently, bringing content and service as close to consumers as possible. Network usage fees artificially disincentivize efforts to bring content close to users, making the Internet experience worse for consumers.
Why network interconnection matters
Understanding why the debate in Europe matters for the future of the Internet requires understanding how Internet traffic gets to end users, as well as the steps that can be taken to improve Internet performance.
At Cloudflare, we know a lot about this. According to Hurricane Electric, Cloudflare connects with other networks at 287 Internet exchange points (IXPs), the second most of any network on the planet. And we’re directly connected to other networks on the Internet in more than 285 cities in over 100 countries. So when we see a proposal to change how networks interconnect, we take notice. What the European Commission is considering might appear to be targeting the direct relationship between telcos and large tech companies, but we know it will have much broader effects.
There are different ways in which networks exchange data on the Internet. In some cases, networks connect directly to exchange data between users of each network. This is called peering. Cloudflare has an open peering policy; we’ll peer with any other network. Peering is one hop between networks – it’s the gold standard. Fewer hops from start to end generally means faster and more reliable data delivery. We peer with more than 12,000 networks around the world on a settlement-free basis, which means neither network pays the other to send traffic. This settlement-free peering is one of the aspects of Cloudflare’s business that allows us to offer a free version of our services to millions of users globally, permitting individuals and small businesses to have websites that load quickly and efficiently and are better protected from cyberattacks. We’ll talk more about the benefits of settlement-free peering below.
When networks don’t connect directly, they might pay a third-party IP transit network to deliver traffic on their behalf. No network is connected to every other network on the Internet, so transit networks play an important role making sure any network can reach any other network. They’re compensated for doing so; generally a network will pay their transit provider based on how much traffic they ask the transit provider to deliver. Cloudflare is connected to more than 12,000 other networks, but there are over 100,000 Autonomous Systems (networks) on the Internet, so we use transit networks to reach the “long tail”. For example, the Cloudflare network (AS 13335) provides the website cloudflare.com to any network that requests it. If a user of a small ISP with whom Cloudflare doesn’t have direct connections requests cloudflare.com from their browser, it’s likely that their ISP will use a transit provider to send that request to Cloudflare. Then Cloudflare would respond to the request, sending the website content back to the user via a transit provider.
In Europe, transit providers play a critical role because many of the largest incumbent telcos won’t do settlement-free direct peering connections. Therefore, many European consumers that use large incumbent telcos for their Internet service interact with Cloudflare’s services through third party transit networks. It isn’t the gold standard of network interconnection (which is peering, and would be faster and more reliable) but it works well enough most of the time.
Cloudflare would of course be happy to directly connect with EU telcos because we have an open peering policy. As we’ll show, the performance and reliability improvement for their subscribers and our customers’ content and services would significantly improve. And if the telcos offered us transit – the ability to send traffic to their network and onwards to the Internet – at market rates, we would consider use of that service as part of competitive supplier selection. While it’s unfortunate that incumbent telcos haven’t offered services at market-competitive prices, overall the interconnection market in Europe – indeed the Internet itself – currently works well. Others agree. BEREC, the body of European telecommunications regulators, wrote recently in a preliminary assessment:
BEREC's experience shows that the internet has proven its ability to cope with increasing traffic volumes, changes in demand patterns, technology, business models, as well as in the (relative) market power between market players. These developments are reflected in the IP interconnection mechanisms governing the internet which evolved without a need for regulatory intervention. The internet’s ability to self-adapt has been and still is essential for its success and its innovative capability.
There is a competitive market for IP transit. According to market analysis firm Telegeography’s State of the Network 2023 report, “The lowest [prices on offer for] 100 GigE [IP transit services in Europe] were $0.06 per Mbps per month.” These prices are consistent with what Cloudflare sees in the market. In our view, the Commission should be proud of the effective competition in this market, and it should protect it. These prices are comparable to IP transit prices in the United States and signal, overall, a healthy Internet ecosystem. Competitive wholesale bandwidth prices (transit prices) mean it is easier for small independent telcos to enter the market, and lower prices for all types of Internet applications and services. In our view, regulatory intervention in this well-functioning market has significant downside risks.
Large incumbent telcos are seeking regulatory intervention in part because they are not willing to accept the fair market prices for transit. Very Large Telcos and Content and Application Providers (CAPs) – the term the European Commission uses for networks that have the content and services consumers want to see – negotiate freely for transit and peering. In our experience, large incumbent telcos ask for paid peering fees that are many multiples of what a CAP could pay to transit networks for a similar service. At the prices offered, many networks – including Cloudflare – continue to use transit providers instead of paying incumbent telcos for peering. Telcos are trying to use regulation to force CAPs into these relationships at artificially high prices.
If the Commission’s proposal is adopted, the price for interconnection in Europe would likely be set by this regulation, not the market. Once there’s a price for interconnection between CAPs and telcos, whether that price is found via negotiation, or more likely arbitrators set the price, that is likely to become the de facto price for all interconnection. After all, if telcos can achieve artificially high prices from the largest CAPs, why would they accept much lower rates from any other network – including transits – to connect with them? Instead of falling wholesale prices spurring Internet innovation as is happening now in Europe and the United States, rising wholesale prices will be passed onto small businesses and consumers.
Network usage fees would give Big Tech a fast lane, at the expense of consumers and smaller service providers
If network fees become a reality, the current Internet experience for users in Europe will deteriorate. Notwithstanding existing net neutrality regulations, we already see large telcos relegate content from transit providers to more congested connections. If the biggest CAPs pay for interconnection, consumer traffic to other networks will be relegated to a slow and/or congested lane. Networks that aren’t paying would still use transit providers to reach the large incumbent telcos, but those transit links would be second class citizens to the paid traffic. Existing transit links will become (more) slow and congested. By targeting only the largest CAPs, a proposal based on network fees would perversely, and contrary to intent, cement those CAPs’ position at the top by improving the consumer experience for those networks at the expense of all others. By mandating that the CAPs pay the large incumbent telcos for peering, the European Commission would therefore be facilitating discrimination against services using smaller networks and organisations that cannot match the resources of the large CAPs.
Indeed, we already see evidence that some of the large incumbent telcos treat transit networks as second-class citizens when it comes to Internet traffic. In November 2022, HWSW, a Hungarian tech news site, reported on recurring Internet problems for users of Magyar Telekom, a subsidiary of Deutsche Telekom, because of congestion between Deutsche Telekom and its transit networks:
Network problem that exists during the fairly well-defined period, mostly between 4 p.m. and midnight Hungarian time, … due to congestion in the connection (Level3) between Deutsche Telekom, the parent company that operates Magyar Telekom's international peering routes, and Cloudflare, therefore it does not only affect Hungarian subscribers, but occurs to a greater or lesser extent at all DT subsidiaries that, like Magyar Telekom, are linked to the parent company. (translated by Google Translate)
Going back many years, large telcos have demonstrated that traffic reaching them through transit networks is not a high priority to maintain quality. In 2015, Cogent, a transit provider, sued Deutsche Telekom over interconnection, writing, “Deutsche Telekom has interfered with the free flow of internet traffic between Cogent customers and Deutsche Telekom customers by refusing to increase the capacity of the interconnection ports that allow the exchange of traffic”.
Beyond the effect on consumers, the implementation of Network Usage Fees would seem to violate the European Union’s Open Internet Regulation, sometimes referred to as the net neutrality provision. Article 3(3) of the Open Internet Regulation states:
Providers of internet access services shall treat all traffic equally, when providing internet access services, without discrimination, restriction or interference, and irrespective of the sender and receiver, the content accessed or distributed, the applications or services used or provided, or the terminal equipment used. (emphasis added)
Fees from certain sources of content in exchange for private paths between the CAP and large incumbent telcos would seem to be a plain-language violation of this provision.
Network usage fees would endanger the benefits of Settlement-Free Peering
Let’s now talk about the ecosystem that leads to a thriving Internet. We first talked about transit; now we’ll move on to peering, which is quietly central to how the Internet works. “Peering” is the practice of two networks directly interconnecting (they could be backbones, CDNs, mobile networks, or broadband telcos) to exchange traffic. Almost always, networks peer without any payments (“settlement-free”) in recognition of the performance benefits and resiliency we’re about to discuss. A recent survey of over 10,000 ISPs shows that 99.99% of their exchanged traffic is on settlement-free terms. The Internet works best when these peering arrangements happen freely and frequently.
These types of peering arrangements and network interconnection also significantly improve latency for the end-user of services delivered via the Internet. The speed of an Internet connection depends more on latency (the time it takes for a consumer to request data and receive the response) than on bandwidth (the maximum amount of data that is flowing at any one time over a connection). Latency is critical to many Internet use-cases. A recent technical paper used the example of a mapping application that responds to user scrolling. The application wouldn’t need to pre-load unnecessary data if it can quickly get a small amount of data in response to a user swiping in a certain direction.
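For a rough sense of the arithmetic (our illustrative numbers, not from the cited paper): total fetch time is approximately the number of sequential round trips times the RTT, plus the transfer size divided by bandwidth. Ten sequential 50 ms round trips cost 500 ms no matter how fat the pipe, while transferring 1 MB over a 100 Mbps link adds only about 80 ms. Halving RTT by peering closer to the user therefore saves far more than doubling bandwidth would.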
In recognition of the myriad benefits, settlement-free peering between CDNs and terminating ISPs is the global norm in the industry. Most networks understand that through settlement-free peering, (1) customers get the best experience through local traffic delivery, (2) networks have increased resilience through multiple traffic paths, and (3) data is exchanged locally instead of backhauled and aggregated in larger volumes at regional Internet hubs. By contrast, paid peering is rare, and is usually employed by networks that operate in markets without robust competition. Unfortunately, when an incumbent telco achieves a dominant market position or has no significant competition, they may be less concerned about the performance penalty they impose on their own users by refusing to peer directly.
As an example, consider the map in Figure 2. This map shows the situation in Germany, where most traffic is exchanged via transit providers at the Internet hub in Frankfurt. Consumers are losing in this situation for two reasons: First, the farther they are from Frankfurt, the higher latency they will experience for Cloudflare services. For customers in northeast Germany, for example, the distance from Cloudflare’s servers in Frankfurt means they will experience nearly double the latency of consumers closer to Cloudflare geographically. Second, the reliance on a small number of transit providers exposes their traffic to congestion and reliability risks. The remedy is obvious: if large telcos would interconnect (“peer”) with Cloudflare in all five cities where Cloudflare has points of presence, every consumer, regardless of where they are in Germany, would have the same excellent Internet experience.
We’ve shown that local settlement-free interconnection benefits consumers by improving the speed of their Internet experience, but local interconnection also reduces the amount of traffic that aggregates at regional Internet hubs. If a telco interconnects with a large video provider in a single regional hub, the telco needs to carry their subscribers’ request for content through their network to the hub. Data will be exchanged at the hub, then the telco needs to carry the data back through their “backbone” network to the subscriber. (While this situation can result in large traffic volumes, modern networks can easily expand the capacity between themselves at almost no cost by adding additional port capacity. The fibre-optic cable capacity in this “backbone” part of the Internet is not constrained.)
Local settlement-free peering is one way to reduce the traffic across those interconnection points. Another way is to use embedded caches, which are offered by most CDNs, including Cloudflare. In this scenario, a CDN sends hardware to the telco, which installs it in their network at local aggregation points that are private to the telco. When their subscriber requests data from the CDN, the telco can find that content at a local infrastructure point and send it back to the subscriber. The data doesn’t need to aggregate on backhaul links, or ever reach a regional Internet hub. This approach is common. Cloudflare has hundreds of these deployments with telcos globally.
Conclusion: make your views known to the European Commission!
In conclusion, it’s our view that despite the unwillingness of many large European incumbents to peer on a settlement-free basis, the IP interconnection market is healthy, which benefits European consumers. We believe regulatory intervention that forces content and application providers into paid peering agreements would have the effect of relegating all other traffic to a slow, congested lane. Further, we fear this intervention will do nothing to meet Europe’s Digital Decade goals, and instead will make the Internet experience worse for consumers and small businesses.
If you agree that major intervention in how networks interconnect in Europe is unnecessary, and even harmful, consider reading more about the European Commission’s consultation. While the consultation itself may look intimidating, anyone can submit a narrative response (deadline: 19 May). Consider telling the European Commission that their goals of ubiquitous connectivity are the right ones, but that the approach they are considering goes in the wrong direction.
Today we wanted to focus on LATAM and show how our network performed against Zscaler and Netskope in Argentina, Brazil, Chile, Colombia, Costa Rica, Ecuador, Mexico, Peru, Uruguay and Venezuela.
With 47 data centers across Latin America and the Caribbean, Cloudflare has the largest number of SASE points of presence of any vendor, meaning we can offer our Zero Trust services closer to the end user and reduce unwanted latency.
We’ve run a series of tests comparing our Zero Trust Network Access product against Zscaler and Netskope’s comparable products.
For each of these tests, we used 95th percentile Time to First Byte and Response measurements, which capture the time it takes for a user to make a request and receive the start of the response (Time to First Byte) and the end of the response (Response). These tests were designed to measure performance from an end-user perspective.
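For readers unfamiliar with the metric, a 95th percentile reports the latency below which 95% of sampled requests completed: near-worst-case experience rather than the average. A minimal sketch of the computation (ours, not Miercom’s exact aggregation):

```typescript
// Nearest-rank 95th percentile over latency samples in milliseconds.
function p95(samplesMs: number[]): number {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  // Index of the value at or below which 95% of samples fall.
  const idx = Math.ceil(0.95 * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// p95([120, 95, 400, 130]) reflects the slow tail, ignoring the worst 5%.
```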
In this blog we’re going to talk about why performance matters and do a deep dive on what we’re measuring to show that we’re faster.
Why does performance matter?
Performance matters because it impacts your employees’ experience and their ability to get their job done. For example, if Anna is connecting to a hosted, protected application like Salesforce to complete some work, she doesn’t want to be constantly waiting for pages to load or for her requests to be authenticated. In an access-controlled application, the first thing you do when you connect is log in. If that login takes a long time, you may get distracted by a random message from a coworker. And even once you’re authenticated, you still want your normal application experience to be snappy and smooth: users should never notice Zero Trust when it’s at its best.
If these products or experiences are slow, then something worse might happen than your users complaining: they may find ways to turn off the products or bypass them, which puts your company at risk. A Zero Trust product suite is completely ineffective if no one is using it because it’s slow.
Ensuring Zero Trust is fast is critical to the effectiveness of a Zero Trust solution: employees won’t want to turn it off and put themselves at risk if they barely know it’s there at all. Services like Zscaler or Netskope may outperform many older, antiquated solutions, but their networks still fail to measure up to a highly performant, optimized network like Cloudflare’s.
Cloudflare Access: the fastest Zero Trust proxy
Access control needs to be seamless and transparent to the user: the best compliment for a Zero Trust solution is employees barely notice it’s there. Services like Cloudflare Access and Zscaler Private Access (ZPA) allow users to cache authentication information on the provider network, ensuring applications can be accessed securely and quickly to give users that seamless experience they want. So having a network that minimizes the number of logins required while also reducing the latency of your application requests by delivering the service closer to the user will help keep your Internet experience snappy and reactive.
For these tests, Cloudflare contracted Miercom, a third party who performed a set of tests intended to replicate an end user connecting to a resource protected by Cloudflare, Zscaler, and Netskope. Miercom set up application instances in 14 locations around the world and devised a test that would log into the application through various Zero Trust providers to access certain content. The methodology is summarized below, and you can read the full Miercom report detailing it here:
1. User connects to the application from a browser mimicked by a Catchpoint instance – new session
2. User authenticates against their identity provider
3. User accesses resource
4. User refreshes the browser page and tries to access the same resource but with credentials already present – existing session
In this test we evaluated Cloudflare against Zscaler and Netskope accessing applications hosted in two specific regions (Brazil and the US south-west). We tested the response time for an existing session, when a user has already been authenticated and that authentication information can be cached.
Here’s how this data looks for each of the 10 countries we tested across LATAM:
Argentina
Zero Trust Access – Time to First Byte (App in Brazil)

| Provider | 95th Percentile (ms) |
| --- | --- |
| Cloudflare | 1,203 |
| Netskope | 8,319 |
| Zscaler | 5,961 |
When we drill into the data, we see that Cloudflare is faster when connecting from Argentina to an app hosted in Brazil. Cloudflare’s 95th percentile time to first byte times are 75% faster than Zscaler and 85% faster than Netskope.
Cloudflare is also faster when connecting from Argentina to an app hosted in the United States (South West Region). Cloudflare’s 95th percentile time to first byte times are 70% faster than Zscaler and 68% faster than Netskope:
Zero Trust Access – Time to First Byte (App in US West)

| Provider | 95th Percentile (ms) |
| --- | --- |
| Cloudflare | 1,587 |
| Netskope | 5,082 |
| Zscaler | 5,299 |
Brazil
Zero Trust Access – Time to First Byte (App in Brazil)

| Provider | 95th Percentile (ms) |
| --- | --- |
| Cloudflare | 1,525 |
| Netskope | 3,799 |
| Zscaler | 3,916 |
When we drill into the data, we see that Cloudflare is faster when connecting from Brazil to an app hosted in Brazil. Cloudflare’s 95th percentile time to first byte times are 61% faster than Zscaler and 59% faster than Netskope.
Cloudflare is also faster when connecting from Brazil to an app hosted in the United States (South West Region). Cloudflare’s 95th percentile time to first byte times are 58% faster than Zscaler and 59% faster than Netskope:
Zero Trust Access – Time to First Byte (App in US West)

| Provider | 95th Percentile (ms) |
| --- | --- |
| Cloudflare | 1,603 |
| Netskope | 3,989 |
| Zscaler | 3,894 |
Chile
Zero Trust Access – Time to First Byte (App in Brazil)

| Provider | 95th Percentile (ms) |
| --- | --- |
| Cloudflare | 714 |
| Netskope | 3,000 |
| Zscaler | 3,157 |
When we drill into the data, we see that Cloudflare is faster when connecting from Chile to an app hosted in Brazil. Cloudflare’s 95th percentile time to first byte times are 77% faster than Zscaler and 76% faster than Netskope.
Cloudflare is also faster when connecting from Chile to an app hosted in the United States (South West Region). Cloudflare’s 95th percentile time to first byte times are 80% faster than Zscaler and 79% faster than Netskope:
Zero Trust Access – Time to First Byte (App in US West)

| Provider | 95th Percentile (ms) |
| --- | --- |
| Cloudflare | 648 |
| Netskope | 3,113 |
| Zscaler | 3,290 |
Colombia
Zero Trust Access – Time to First Byte (App in Brazil)

| Provider | 95th Percentile (ms) |
| --- | --- |
| Cloudflare | 1,628 |
| Netskope | 2,699 |
| Zscaler | 4,763 |
When we drill into the data, we see that Cloudflare is faster when connecting from Colombia to an app hosted in Brazil. Cloudflare’s 95th percentile time to first byte times are 65% faster than Zscaler and 39% faster than Netskope.
Cloudflare is also faster when connecting from Colombia to an app hosted in the United States (South West Region). Cloudflare’s 95th percentile time to first byte times are 59% faster than Zscaler and 56% faster than Netskope:
Zero Trust Access – Time to First Byte (App in US West)

| Provider | 95th Percentile (ms) |
| --- | --- |
| Cloudflare | 1,466 |
| Netskope | 3,351 |
| Zscaler | 3,623 |
Costa Rica
Zero Trust Access – Time to First Byte (App in Brazil)

| Provider | 95th Percentile (ms) |
| --- | --- |
| Cloudflare | 1,432 |
| Netskope | 2,036 |
| Zscaler | 2,110 |
When we drill into the data, we see that Cloudflare is faster when connecting from Costa Rica to an app hosted in Brazil. Cloudflare’s 95th percentile time to first byte times are 32% faster than Zscaler and 29% faster than Netskope.
Cloudflare is also faster when connecting from Costa Rica to an app hosted in the United States (South West Region). Cloudflare’s 95th percentile time to first byte times are 36% faster than Zscaler and 32% faster than Netskope:
Zero Trust Access – Time to First Byte (App in US West)

| Provider | 95th Percentile (ms) |
| --- | --- |
| Cloudflare | 1,387 |
| Netskope | 2,044 |
| Zscaler | 2,191 |
Ecuador
Zero Trust Access – Time to First Byte (App in Brazil)

| Provider | 95th Percentile (ms) |
| --- | --- |
| Cloudflare | 1,134 |
| Netskope | 2,002 |
| Zscaler | 2,206 |
When we drill into the data, we see that Cloudflare is faster when connecting from Ecuador to an app hosted in Brazil. Cloudflare’s 95th percentile time to first byte times are 48% faster than Zscaler and 43% faster than Netskope.
Cloudflare is also faster when connecting from Ecuador to an app hosted in the United States (South West Region). Cloudflare’s 95th percentile time to first byte times are 46% faster than Zscaler and 40% faster than Netskope:
Zero Trust Access – Time to First Byte (App in US West)

| Provider | 95th Percentile (ms) |
| --- | --- |
| Cloudflare | 1,179 |
| Netskope | 1,976 |
| Zscaler | 2,210 |
Mexico
Zero Trust Access – Time to First Byte (App in Brazil)

| Provider | 95th Percentile (ms) |
| --- | --- |
| Cloudflare | 2,334 |
| Netskope | 2,882 |
| Zscaler | 3,537 |
When we drill into the data, we see that Cloudflare is faster when connecting from Mexico to an app hosted in Brazil. Cloudflare’s 95th percentile time to first byte times are 34% faster than Zscaler and 19% faster than Netskope.
Cloudflare is also faster when connecting from Mexico to an app hosted in the United States (South West Region). Cloudflare’s 95th percentile time to first byte times are 56% faster than Zscaler and 53% faster than Netskope:
Zero Trust Access – Time to First Byte (App in US West)

| Provider | 95th Percentile (ms) |
| --- | --- |
| Cloudflare | 1,249 |
| Netskope | 2,679 |
| Zscaler | 2,880 |
Peru
Zero Trust Access – Time to First Byte (App in Brazil)

| Provider | 95th Percentile (ms) |
| --- | --- |
| Cloudflare | 609 |
| Netskope | 2,425 |
| Zscaler | 2,992 |
When we drill into the data, we see that Cloudflare is faster when connecting from Peru to an app hosted in Brazil. Cloudflare’s 95th percentile time to first byte times are 79% faster than Zscaler and 74% faster than Netskope.
Cloudflare is also faster when connecting from Peru to an app hosted in the United States (South West Region). Cloudflare’s 95th percentile time to first byte times are 80% faster than Zscaler and 73% faster than Netskope:
Zero Trust Access – Time to First Byte (App in US West)

| Provider | 95th Percentile (ms) |
| --- | --- |
| Cloudflare | 827 |
| Netskope | 3,108 |
| Zscaler | 4,189 |
Uruguay
Zero Trust Access – Time to First Byte (App in Brazil)

| Provider | 95th Percentile (ms) |
| --- | --- |
| Cloudflare | 1,242 |
| Netskope | 3,556 |
| Zscaler | 4,467 |
When we drill into the data, we see that Cloudflare is faster when connecting from Uruguay to an app hosted in Brazil. Cloudflare’s 95th percentile time to first byte times are 72% faster than Zscaler and 65% faster than Netskope.
Cloudflare is also faster when connecting from Uruguay to an app hosted in the United States (South West Region). Cloudflare’s 95th percentile time to first byte times are 60% faster than Zscaler and 48% faster than Netskope:
Zero Trust Access – Time to First Byte (App in US West)

| Provider | 95th Percentile (ms) |
| --- | --- |
| Cloudflare | 1,078 |
| Netskope | 2,101 |
| Zscaler | 2,726 |
Venezuela
Zero Trust Access – Time to First Byte (App in Brazil)

| Provider | 95th Percentile (ms) |
| --- | --- |
| Cloudflare | 1,272 |
| Netskope | 3,451 |
| Zscaler | 3,800 |
When we drill into the data, we see that Cloudflare is faster when connecting from Venezuela to an app hosted in Brazil. Cloudflare’s 95th percentile time to first byte times are 66% faster than Zscaler and 63% faster than Netskope.
Cloudflare is also faster when connecting from Venezuela to an app hosted in the United States (South West Region). Cloudflare’s 95th percentile time to first byte times are 85% faster than Zscaler and 77% faster than Netskope:
Zero Trust Access – Time to First Byte (App in US West)
Gartner has recognized Cloudflare in the 2023 “Gartner® Magic Quadrant™ for Security Service Edge (SSE)” report for its ability to execute and completeness of vision. We are excited to share that the Cloudflare Zero Trust solution, part of our Cloudflare One platform, is one of only ten vendors recognized in the report.
Of the 10 companies named to this year’s Gartner® Magic Quadrant™ report, Cloudflare is the only new vendor addition. You can read more about our position in the report and what customers say about using Cloudflare One here.
Cloudflare is also the newest vendor when measured by the time since our first products in the SSE space launched. We launched Cloudflare Access, our best-in-class Zero Trust access control product, a little less than five years ago. Since then, we have released hundreds of features and shipped nearly a dozen more products to create a comprehensive SSE solution that over 10,000 organizations trust to keep their data, devices, and teams both safe and fast. We moved that quickly because we built Cloudflare One on top of the same network that already secures and accelerates large segments of the Internet today.
We deliver our SSE services on the same servers and in the same locations that serve some of the world’s largest Internet properties. We combined existing advantages like the world’s fastest DNS resolver, Cloudflare’s serverless compute platform, and our ability to route and accelerate traffic around the globe. We might be new to the report, but customers who select Cloudflare One are not betting on an upstart provider; they are choosing an industry-leading solution made possible by a network that already secures millions of destinations and billions of users every day.
We are flattered by the recognition from Gartner this week and even more thrilled by the customer outcomes we make possible today. That said, we are not done and we are only going faster.
What is a Security Service Edge?
A Security Service Edge (SSE) “secures access to the web, cloud services and private applications. Capabilities include access control, threat protection, data security, security monitoring, and acceptable-use control enforced by network-based and API-based integration. SSE is primarily delivered as a cloud-based service, and may include on-premises or agent-based components.”1
The SSE space developed to meet organizations as they encountered a new class of security problems. Years ago, teams could keep their devices, services, and data safe by hiding from the rest of the world behind a figurative castle-and-moat. The defense perimeter for an enterprise corresponded to the literal walls of their office. Applications ran in server closets or self-managed data centers. Businesses could deploy firewalls, proxies, and filtering appliances in the form of on-premise hardware. Remote users suffered through the setup by backhauling their traffic through the physical office with a legacy virtual private network (VPN) client.
That model began to break down when applications started to leave the building. Teams began migrating to SaaS tools and public cloud providers. They could no longer control security by placing physical appliances in the flow of their one path to the Internet.
Meanwhile, users also left the office, placing stress on the ability of a self-managed private network to scale with the traffic. Performance and availability suffered while costs increased as organizations carried more traffic and deployed more bandaids to try and buy time.
Bad actors also evolved. Attacks became more sophisticated and exploited the migration away from a classic security perimeter. The legacy appliances deployed could not keep up with the changes in attack patterns and scale of attacks.
SSE vendors provide organizations with a cloud-based solution to those challenges. SSE providers deploy and maintain security services in their own points of presence or in a public cloud provider, giving enterprises a secure first hop before they connect to the rest of the Internet or to their internal tools. IT teams can deprecate the physical or virtual appliances that they spent days maintaining. Security teams benefit from filtering and policies that update constantly to defend against new threats.
Some SSE features target remote access replacement by offering customers the ability to connect users to internal tools with Zero Trust access control rules. Other parts of an SSE platform focus on applying Zero Trust scrutiny to the rest of the Internet, replacing the on-premise filtering appliances of an enterprise with cloud-based firewalls, resolvers, and proxies that filter and log traffic leaving a device closer to the user instead of forcing a backhaul to a centralized location.
What about SASE?
You might also be familiar with the term Secure Access Service Edge (SASE). We hear customers talk about their “SASE” goals more often than “SSE” alone. SASE extends the definition of SSE to include managing the connectivity of the traffic being secured. Network-as-a-Service vendors help enterprises connect their users, devices, sites, and services. SSE providers secure that traffic.
Most vendors focus on one side of the equation. Network-as-a-service companies sell software-defined wide area network (SD-WAN), interconnection, and traffic optimization solutions to help enterprises manage and accelerate connectivity, but those enterprises wind up losing those benefits by sending all that traffic to an SSE provider for filtering. SSE providers deliver security tools for traffic of nearly any type, but they still need customers to buy additional networking services to get that traffic to their locations.
Cloudflare One is a single vendor SASE platform. Cloudflare offers enterprises a comprehensive network-as-a-service: teams send all their traffic to Cloudflare’s network, and we help them manage connectivity and improve performance. Enterprises can choose from flexible on-ramps, like their existing hardware routers, agents running on laptops and mobile devices, physical and virtual interconnects, or Cloudflare’s own last mile connector.
When that traffic reaches Cloudflare’s network, our SSE services apply security filtering in the same locations where we manage and route connectivity. Cloudflare’s SSE solution does not add additional hops; we deliver filtering and logging in-line with the traffic we accelerate for our customers. The value of our single vendor SASE solution is just another outcome of an obsession we’ve had since we first launched our reverse proxy over ten years ago: customers should not have to compromise performance for security and vice versa.
So where does Cloudflare One fit?
Cloudflare One connects enterprises to the tools they need while securing their devices, applications and data without compromising on performance. The platform consists of two primary components: our Cloudflare Zero Trust products, which represent our SSE offering, and our network-as-a-service solution. As much as today’s announcement separates out those features, we prefer to talk about how they work together.
Cloudflare’s network-as-a-service offering, our Magic WAN solution, extends our network for customers to use as their own. Enterprises can take advantage of the investments we have made over more than a decade to build out one of the world’s most peered, most performant, and most available networks. Teams can connect individual roaming devices, offices and physical sites, or entire networks and data centers through Cloudflare to the rest of the Internet or internal destinations.
We want to make it as easy as possible for customers to send us their traffic, so we provide many flexible “on-ramps” to easily fit into their existing infrastructure. Enterprises can use our roaming agent to connect user devices, our Cloudflare Tunnel service for application-level connectivity, network-level tunnels from our Magic WAN Connector or their existing router or SD-WAN hardware, and/or direct physical or virtual interconnections for dedicated connectivity to on-prem or cloud infrastructure at 1,600+ locations around the world. When packets arrive at the closest Cloudflare location, we provide optimization, acceleration and logging to give customers visibility into their traffic flows.
Instead of sending that accelerated traffic to an additional intermediary for security filtering, our Cloudflare Zero Trust platform can take over to provide SSE security filtering in the same location – generally on the exact same server – as our network-as-a-service functions. Enterprises can pick and choose what SSE features they want to enable to strengthen their security posture over time.
Cloudflare One and the SSE feature set
The security features inside of Cloudflare One provide comprehensive SSE coverage to enterprises operating at any scale. Customers just need to send traffic to a Cloudflare location within a few milliseconds of their users, and Cloudflare Zero Trust handles everything else.
Cloudflare One SSE Capabilities
Zero Trust Access Control
Cloudflare provides a Zero Trust VPN replacement for teams that host and control their own resources. Customers can deploy a private network inside of Cloudflare’s network for more traditional connectivity or extend access to contractors without any agent required. Regardless of how users connect, and for any type of destination they need, Cloudflare’s network gives administrators the ability to build granular rules on a per-resource or global basis. Teams can combine one or more identity providers, device posture inputs, and other sources of signal to determine when and how a user should be able to connect.
Organizations can also extend these types of Zero Trust access control rules to the SaaS applications where they do not control the hosting by introducing Cloudflare’s identity proxy into the login flow. They can continue to use their existing identity provider but layer on additional checks like device posture, country, and multifactor method.
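To make that concrete, the sketch below shows how a default-deny access decision might combine identity, device posture, and context signals. Everything in it is a hypothetical illustration of the model, not Cloudflare’s actual policy schema; real rules are built in the Cloudflare One dashboard or API.

```python
# Illustrative sketch of a Zero Trust access decision that combines
# identity, device posture, and context signals. All names and rules
# are hypothetical, not Cloudflare's policy schema.
from dataclasses import dataclass

@dataclass
class Signals:
    idp_verified: bool    # user authenticated with an approved identity provider
    email_domain: str     # domain asserted by the identity provider
    disk_encrypted: bool  # device posture check reported by an endpoint agent
    country: str          # ISO country code of the connecting client
    mfa_method: str       # e.g. "webauthn", "totp", "sms"

def allow_access(s: Signals) -> bool:
    """Default-deny: every condition must hold on every request."""
    return (
        s.idp_verified
        and s.email_domain == "example.com"  # hypothetical corporate domain
        and s.disk_encrypted                 # require healthy device posture
        and s.country in {"US", "GB"}        # hypothetical geo restriction
        and s.mfa_method == "webauthn"       # require phishing-resistant MFA
    )

print(allow_access(Signals(True, "example.com", True, "US", "webauthn")))   # True
print(allow_access(Signals(True, "example.com", False, "US", "webauthn")))  # False
```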
DNS filtering
Cloudflare’s DNS filtering solution runs on the world’s fastest DNS resolver, filtering and logging the DNS queries leaving individual devices or some of the world’s largest networks.
Network firewall
Organizations that maintain on-premise hardware firewalls or cloud-based equivalents can deprecate their boxes by sending traffic through Cloudflare, where our firewall-as-a-service can filter and log traffic. Our Network Firewall includes L3-L7 filtering, Intrusion Detection, and direct integrations with our Threat Intelligence feeds and the rest of our SSE suite. It enables security teams to build sophisticated policies without any of the headaches of traditional hardware: no capacity or redundancy planning, no throughput restrictions, no manual patches or upgrades.
Secure Web Gateway
Cloudflare’s Secure Web Gateway (SWG) service inspects, filters, and logs traffic in a Cloudflare PoP close to a user regardless of where they work. The SWG can block HTTP requests bound for dangerous destinations, scan traffic for viruses and malware, and control how traffic routes to the rest of the Internet without the need for additional hardware or virtualized services.
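Conceptually, the per-request logic looks something like the sketch below. The category names and helper functions are hypothetical stand-ins for Cloudflare’s threat intelligence feeds and payload scanning, not real APIs.

```python
# Hypothetical sketch of the per-request decision a Secure Web Gateway makes.
# threat_category() and contains_malware() stand in for real threat-intel
# lookups and AV scanning; they are not actual Cloudflare APIs.

BLOCKED_CATEGORIES = {"phishing", "malware", "newly-registered-domain"}

def threat_category(hostname: str) -> str:
    # Placeholder: a real SWG consults continuously updated threat feeds.
    return "phishing" if hostname.endswith(".bad.example") else "benign"

def contains_malware(payload: bytes) -> bool:
    # Placeholder: a real SWG runs anti-virus and sandbox scanning.
    return b"EICAR" in payload

def filter_request(hostname: str, payload: bytes) -> str:
    if threat_category(hostname) in BLOCKED_CATEGORIES:
        return "block"       # stop the request before it leaves the PoP
    if contains_malware(payload):
        return "quarantine"  # strip or hold the dangerous payload
    return "allow"           # forward, log, and measure

print(filter_request("login.bad.example", b""))  # block
print(filter_request("example.com", b"EICAR"))   # quarantine
```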
In-line Cloud Access Security Broker and Shadow IT
The proliferation of SaaS applications can help teams cut costs but poses a real risk; sometimes users prefer tools other than the ones selected by their IT or Security teams. Cloudflare’s in-line Cloud Access Security Broker (CASB) gives administrators the tools to make sure employees use SaaS applications as intended. Teams can build tenant control rules that restrict employees from logging into personal accounts, policies that only allow file uploads of certain types to approved SaaS applications, and filters that restrict employees from using unapproved services.
Cloudflare’s “Shadow IT” service scans and catalogs user traffic to the Internet to help IT and Security teams detect and monitor the unauthorized use of SaaS applications. For example, teams can ensure that their approved cloud storage is the only place where users can upload materials.
API-driven Cloud Access Security Broker
Cloudflare’s superpower is our network, but sometimes the worst attacks start with data sitting still. Teams that adopt SaaS applications can share work products and collaborate from any location; that same convenience makes it simple for mistakes or bad actors to cause a serious data breach.
In some cases, employees might overshare a document with sensitive information by selecting the wrong button in the “Share” menu. With just one click, a spreadsheet with customer contact data could become public on the Internet. In other situations, users might share a report with their personal account without realizing they just violated internal compliance rules.
Regardless of how the potential data breach started, Cloudflare’s API-driven CASB constantly scans the SaaS applications that your team uses for potential misconfiguration and data loss. Once detected, Cloudflare’s CASB will alert administrators and provide a comprehensive guide to remediating the incident.
Data Loss Prevention
Cloudflare’s Data Loss Prevention service scans traffic to detect and block potential data loss. Administrators can select from common predefined profiles, like Social Security numbers or credit card numbers, create their own criteria using regular expressions, or integrate with existing Microsoft Information Protection labels.
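As an illustration of the regular-expression approach, the patterns below show what a simple custom profile might detect. They are deliberately simplified examples rather than Cloudflare’s built-in profile definitions; production detections typically pair patterns with validation such as a Luhn checksum to cut false positives.

```python
import re

# Simplified, illustrative DLP detection patterns; these are examples,
# not Cloudflare's built-in profiles.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")    # e.g. 123-45-6789
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # loose card-number shape

def luhn_valid(candidate: str) -> bool:
    """Checksum used to reduce false positives on card-number matches."""
    digits = [int(d) for d in re.sub(r"\D", "", candidate)][::-1]
    total = sum(digits[0::2])
    total += sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

def scan(text: str) -> list[str]:
    findings = [f"ssn:{m.group()}" for m in SSN_RE.finditer(text)]
    findings += [f"card:{m.group().strip()}" for m in CARD_RE.finditer(text)
                 if luhn_valid(m.group())]
    return findings

print(scan("SSN 123-45-6789 and card 4111 1111 1111 1111"))
```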
Remote Browser Isolation
Cloudflare’s browser isolation service runs a browser inside of our network, in a data center just milliseconds from the user, and sends the vector rendering of the web page to the local device. Team members can use any modern browser and, unlike other approaches, the Internet just feels like the Internet. Administrators can isolate sites on the fly, choosing to only isolate unknown destinations or providing contractors with an agentless workstation. Security teams can add additional protection like blocking copy-paste or printing.
Security beyond the SSE
Many of the customers who talk to us about their SSE goals are not ready to begin adopting every security service in the category from Day 1. Instead, they tend to have strategic SSE goals and tactical immediate problems. That’s fine. We can meet customers wherever they begin on their journey and sometimes that journey starts with pain points that sit just a bit outside of the current SSE definition. We can help in those areas, too.
Many of the types of attacks that an SSE model aims to prevent begin with email, but email falls outside the traditional SSE definition. Attackers target specific employees or entire workforces with phishing links or malware that the default filtering available from email providers today misses.
We want to help customers stop these attacks at the inbox, before SSE features like DNS or SWG filtering need to apply. Cloudflare One includes industry-leading email security through our Area 1 product to protect teams regardless of their email provider. Area 1 is not just a standalone solution bundled into our SSE; Cloudflare Zero Trust features work better alongside it. Links in suspicious emails can open in an isolated browser, for example, to give customers a defense-in-depth security model without the risk of more IT help desk tickets.
Cloudflare One customers can also take advantage of another Gartner-recognized platform in Cloudflare, our application security suite. Cloudflare’s industry-leading application security features, like our Web Application Firewall and DDoS mitigation service, can be deployed in-line with our Zero Trust security features. Teams can add bot management alerts, API protection, and faster caching to their internal tools with a single click.
Why Cloudflare?
Over 10,000 organizations trust Cloudflare One to connect and secure their enterprise. Cloudflare One helps protect and accelerate teams from the world’s largest IT organization, the US Federal Government, to thousands of small groups who rely on our free plan. A couple of months ago we spoke with customers as part of our CIO Week to listen to the reasons they select Cloudflare One. Their feedback followed a few consistent themes.
1) Cloudflare One delivers more complete security
Nearly every SSE vendor offers improved security compared to a traditional castle-and-moat model, but that is a low bar. We built the security features in Cloudflare One to be best in class. Our industry-leading access control solution provides more built-in options to control who can connect to the tools that power your business.
We partner with leading identity providers and endpoint protection platforms, like Microsoft and CrowdStrike, to provide a Zero Trust VPN replacement that is better than anything else on the market. On the outbound filtering side, every filtering option relies on threat intelligence gathered and curated by Cloudforce One, our dedicated threat research team.
2) Cloudflare One makes your team faster
Cloudflare One accelerates your end users from the first moment they connect to the Internet, starting with the world’s fastest DNS resolver. End users send those DNS queries and establish connectivity over a secure tunnel optimized based on feedback from the millions of users who rely on our popular consumer forward proxy. Entire sites connect through a variety of tunnel options to Cloudflare’s network, where we are the fastest connectivity provider for more of the world’s 3,000 largest networks than any other provider.
We compete with and measure ourselves against pure connectivity providers. When we measure ourselves against pure SSE providers, like Zscaler, we outperform them significantly, by 38% to 59% depending on the use case.
3) Cloudflare One is easier to manage
The Cloudflare Zero Trust products are unique in the SSE market in that we offer a free plan that covers nearly every feature. We make these services available at no cost to groups of up to 50 users because we believe that security on the Internet should be accessible to anyone on any budget.
A consequence of that commitment is that we built products that have to be easy to use. Unlike other SSE providers who only sell to the enterprise and can rely on large systems integrators for deployment, we had to create a solution that any team could deploy, from human rights organizations without full-time IT departments to startups that want to spend more time building and less time worrying about vulnerabilities.
We also know that administrators want more options than just an intuitive dashboard. We provide API support for managing every Cloudflare One feature, and we maintain a Terraform provider for teams that need the option for peer-reviewed configuration-as-code management.
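As a sketch of what that looks like in practice, the call below creates a hypothetical Gateway DNS block rule through the API. The endpoint path, payload fields, and filter expression reflect our reading of the public API documentation and may not match the current schema; verify against the docs (or use the Terraform provider) before relying on them.

```python
import os
import requests

# Sketch: create a DNS block rule via the Cloudflare API. The endpoint,
# payload fields, and traffic expression are assumptions based on the
# public docs; confirm against the current API reference before use.
ACCOUNT_ID = os.environ["CF_ACCOUNT_ID"]
API_TOKEN = os.environ["CF_API_TOKEN"]

rule = {
    "name": "Block known-bad domain",
    "action": "block",
    "enabled": True,
    "filters": ["dns"],
    # Hypothetical example domain; Gateway uses a wirefilter-style syntax.
    "traffic": 'any(dns.domains[*] == "malware.example")',
}

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/gateway/rules",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=rule,
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["result"]["id"])
```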
4) Cloudflare One is the most cost-efficient comprehensive SASE offering
Cloudflare is responsible for delivering and securing millions of websites on the Internet every day. To support that volume of traffic, we had to build our network for scale and cost-efficiency.
The largest enterprises’ internal network traffic does not (yet) match the volume of even moderately popular Internet properties. When those teams send traffic to Cloudflare One, we rely on the same hardware and the same data centers that power our application services business to apply security and networking features. As a result, we can help deliver comprehensive security to any team at a price point that is made possible by our existing investment in our network.
5) Cloudflare can be your single, consolidated security vendor
Cloudflare One is only the most recent part of the Cloudflare platform to be recognized in industry analyst reports. In 2022, Gartner named Cloudflare a Leader in Web Application and API Protection (WAAP). When customers select Cloudflare to solve their SSE challenges, they have the opportunity to add best-in-class solutions all from the same vendor.
Dozens of independent analyst firms continue to recognize Cloudflare for our ability to deliver results to our customers on services ranging from DDoS protection, CDN and edge computing to bot management.
What’s next?
When customers choose Cloudflare One, they trust our network to secure the most sensitive aspects of their enterprise without slowing down their business. We are grateful to the more than 10,000 organizations who have selected us as their vendor in the last five years, from small teams on our free plan to Fortune 500 companies and government agencies.
Today’s announcement only accelerates the momentum in Cloudflare One. We are focused on building the next wave of security and connectivity features our customers need to focus on their own mission. We’re going to keep going faster to help more and more organizations. Want to get started on that journey with us? Let us know here and we’ll reach out.
1. Gartner, “Magic Quadrant for Security Service Edge”, Analysts: Charlie Winckless, Aaron McQuaid, John Watts, Craig Lawson, Thomas Lintemuth, Dale Koeppen, April 10, 2023.
GARTNER is a registered trademark and service mark, and Magic Quadrant is a registered trademark, of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and they are used herein with permission. All rights reserved.
Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Over the last few years the topic of cyber security has moved from the IT department to the board room. The current climate of geopolitical and economic uncertainty has made the threat of cyber attacks all the more pressing, with businesses of all sizes and across all industries feeling the impact. From the potential for a crippling ransomware attack to a data breach that could compromise sensitive consumer information, the risks are real and potentially catastrophic. Organizations are recognizing the need for better resilience and preparation regarding cybersecurity. It is not enough to simply react to attacks as they happen; companies must proactively prepare for the inevitable in their approach to cybersecurity.
The security approach that has gained the most traction in recent years is the concept of Zero Trust. The basic principle behind Zero Trust is simple: don’t trust anything; verify everything. The impetus for a modern Zero Trust architecture is that traditional perimeter-based (castle-and-moat) security models are no longer sufficient in today’s digitally distributed landscape. Organizations must adopt a holistic approach to security based on verifying the identity and trustworthiness of all users, devices, and systems that access their networks and data.
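In code terms, the shift is from granting trust based on network location to a default-deny check on every request. The comparison below is a conceptual illustration only; the names and checks are hypothetical and do not represent any particular product’s policy engine.

```python
# Conceptual contrast between castle-and-moat and Zero Trust authorization.
# All names are hypothetical; this is not a real policy engine.

def castle_and_moat_allows(src_ip: str) -> bool:
    # Legacy model: being "inside" the network perimeter implies trust.
    return src_ip.startswith("10.")

def zero_trust_allows(user_verified: bool, device_healthy: bool,
                      authorized_for_resource: bool) -> bool:
    # Zero Trust: deny by default; verify identity, device posture,
    # and per-resource authorization on every single request.
    return user_verified and device_healthy and authorized_for_resource

# A request from inside the old perimeter is no longer trusted by default:
print(castle_and_moat_allows("10.1.2.3"))    # True under the legacy model
print(zero_trust_allows(False, True, True))  # False under Zero Trust
```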
Zero Trust has been on the radar of business leaders and board members for some time now. However, Zero Trust is no longer just a concept being discussed; it’s now a mandate. With remote or hybrid work now the norm and cyber attacks continuing to escalate, businesses realize they must take a fundamentally different approach to security. But as with any significant shift in strategy, implementation can be challenging, and efforts can sometimes stall. Although many firms have begun implementing Zero Trust methods and technologies, few have fully implemented them throughout the organization. For many large companies, this is the current status of their Zero Trust initiatives: stuck in the implementation phase.
A new leadership role emerges
But what if there was a missing piece in the cybersecurity puzzle that could change everything? Enter the role of “Chief Zero Trust Officer” (CZTO) – a new position that we believe will become increasingly common in large organizations over the next year.
The idea of companies potentially creating the role of Chief Zero Trust Officer evolved from conversations last year between Cloudflare’s Field CTO team members and US federal government agencies. A similar job function was first noted in the White House memorandum directing federal agencies to “move toward Zero Trust cybersecurity principles” and requiring that agencies “designate and identify a Zero Trust strategy implementation lead for their organization” within 30 days. In government, a role like this is often called a “czar,” but the title “chief” is more appropriate within a business.
Large organizations need strong leaders to efficiently get things done. Businesses assign the ultimate leadership responsibility to people with titles that begin with the word chief, such as Chief Executive Officer (CEO) or Chief Financial Officer (CFO). These positions exist to provide direction, set strategy, make critical decisions, and manage day-to-day operations, and they are often accountable to the board for overall performance and success.
Why a C-level for Zero Trust, and why now?
An old saying goes, “when everyone is responsible, no one is responsible.” As we consider the challenges in implementing Zero Trust within an enterprise, it appears that a lack of clear leadership and accountability is a significant issue. The question remains: who *exactly* is responsible for driving the adoption and execution of Zero Trust within the organization?
Large enterprises need a single person responsible for driving the Zero Trust journey. This leader should be empowered with a clear mandate and have a singular focus: getting the enterprise to Zero Trust. This is where the idea of the Chief Zero Trust Officer was born. “Chief Zero Trust Officer” may seem like just a title, but it holds a lot of weight. It commands attention and can overcome many obstacles to Zero Trust.
Barriers to adoption
Implementing Zero Trust can be hindered by various technological challenges. Understanding and implementing the complex architecture of some vendors can take time, demand extensive training, or require a professional services engagement to acquire the necessary expertise. Identifying and verifying users and devices in a Zero Trust environment can also be a challenge. It requires an accurate inventory of the organization’s user base, groups they’re a part of, and their applications and devices.
On the organizational side, coordination between different teams is crucial for effectively implementing Zero Trust. Breaking down the silos between IT, cybersecurity, and networking groups, establishing clear communication channels, and holding regular meetings between team members can help achieve a cohesive security strategy. General resistance to change can also be a significant obstacle. To mitigate it, leaders should use techniques such as leading by example, transparent communication, and involving employees in the change process. Proactively addressing concerns, providing support, and creating employee training opportunities can also help ease the transition.
Responsibility and accountability – no matter what you call it
But why does an organization need a CZTO? Is another C-level role essential? Why not assign someone already managing security within the CISO organization? Of course, these are all valid questions. Think about it this way: companies should assign the title based on the initiative’s strategic importance to the company. So, whether it’s Chief Zero Trust Officer, Head of Zero Trust, VP of Zero Trust, or something else, the title must command attention and come with the power to break down silos and cut through bureaucracy.
New C-level titles aren’t without precedent. In recent years, we’ve seen the emergence of titles such as Chief Digital Transformation Officer, Chief eXperience Officer, Chief Customer Officer, and Chief Data Scientist. The Chief Zero Trust Officer title is likely not even a permanent role. What’s crucial is that the person holding the role has the authority and vision to drive the Zero Trust initiative forward, with the support of company leadership and the board of directors.
Getting to Zero Trust in 2023
Getting to Zero Trust security is now a mandate for many companies, as the traditional perimeter-based security model is no longer enough to protect against today’s sophisticated threats. To navigate the technical and organizational challenges that come with Zero Trust implementation, the leadership of a CZTO is crucial. The CZTO will lead the Zero Trust initiative, align teams and break down barriers to achieve a smooth rollout. The role of CZTO in the C-suite emphasizes the importance of Zero Trust in the company. It ensures that the Zero Trust initiative is given the necessary attention and resources to succeed. Organizations that appoint a CZTO now will be the ones that come out on top in the future.
Cloudflare One for Zero Trust
Cloudflare One is Cloudflare’s Zero Trust platform; it is easy to deploy and integrates seamlessly with existing tools and vendors. Built on the principle of Zero Trust, it provides organizations with a comprehensive security solution that works globally. Cloudflare One is delivered on Cloudflare’s global network, which means that it works seamlessly across multiple geographies, countries, network providers, and devices. With Cloudflare’s massive global presence, traffic is secured, routed, and filtered over an optimized backbone that uses real-time Internet intelligence to protect against the latest threats and route traffic around bad Internet weather and outages. Additionally, Cloudflare One integrates with best-of-breed identity management and endpoint device security solutions, creating a complete solution that encompasses the entire corporate network of today and tomorrow. If you’d like to know more, let us know here, and we’ll reach out.
Prefer not to talk to someone just yet? Nearly every feature in Cloudflare One is available at no cost for up to 50 users. Many of our largest enterprise customers start by exploring our Zero Trust products themselves on our free plan, and we invite you to do so by following the link here.
Working with another Zero Trust vendor?
Cloudflare’s security experts have built a vendor-neutral roadmap that lays out a Zero Trust architecture and an example implementation timeline. The Zero Trust Roadmap (https://zerotrustroadmap.org/) is an excellent resource for organizations that want to learn more about the benefits and best practices of implementing Zero Trust. And if you feel stuck on your current Zero Trust journey, have your Chief Zero Trust Officer give us a call at Cloudflare!
Cloudflare has been helping global organizations offer their users a consistent experience all over the world. This includes mainland China, a market our global customers cannot ignore but that continues to be challenging for infrastructure teams trying to ensure performance, security and reliability for their applications and users both in and outside mainland China. We are excited to announce China Express — a new suite of capabilities and best practices in partnership with our partners China Mobile International (CMI) and CBC Tech — that help address some of these performance challenges and ensure a consistent experience for customers and employees everywhere.
Cloudflare has been providing Application Services to users in mainland China since 2015, improving performance and security using in-country data centers and caching. Today, we have a presence in 30 cities in mainland China thanks to our strategic partnership with JD Cloud. While this delivers significant performance improvements, some requests still need to go back to origin servers that may live outside mainland China. With limited international Internet gateways and restrictive cross-border regulations, international traffic suffers very high latency and packet loss in and out of China. This results in inconsistent cached content within China and a poor experience for users trying to access dynamic content that requires frequent access to the origin.
Last month, we expanded our Cloudflare One, Zero Trust network-as-a-service platform to users and organizations in China with additional connectivity options. This has received tremendous interest from customers, so we’re looking at what else we could do to further improve the user experience for customers with employees or offices in China.
What is China Express?
China Express is a suite of connectivity and performance offerings designed to simplify connectivity and improve performance for users in China. To understand these better, let’s take an example of Acme Corp, a global company with offices in Shanghai and Beijing — with origin data centers in London and Ashburn. And let’s see how we can help their infrastructure teams better serve employees and users in mainland China.
China Express Premium DIA
Premium Dedicated Internet Access (DIA) is an optimized, high-quality public Internet circuit for cross-border connectivity provided by our local partners CMI and CBC Tech. With this service, traffic from mainland China arrives at our partner data center in Hong Kong using a fixed NAT IP. Customers do not need to worry about compliance issues because their traffic still goes through the public Internet with all regulatory controls in place.
Acme Corp can use Premium DIA to improve origin performance for their Cloudflare service in mainland China. Requests to the origin data centers in Ashburn and London traverse the Premium DIA connection, which offers more bandwidth and lower packet loss, resulting in a performance improvement of more than 60%.
Acme employees in mainland China would also see an improvement while accessing SaaS applications such as Microsoft 365 over the Internet when these apps are delivered from outside China. They would also notice an improvement in Internet speed in general.
China Express Private Link
While Premium DIA offers Acme performance improvements over the public Internet, they may want to keep some mission-critical application traffic on a private network for security reasons. Private Link offers a dedicated private tunnel between Acme’s locations in China and their data centers outside of China. Private Link can also be used to establish dedicated private connectivity to SaaS data centers like Salesforce.
Private Link is a highly regulated area in China and depending on your use case, there might be additional requirements from our partners to implement it.
China Express Travel SIM
Acme Corp might have employees who visit China on a regular basis and need access to their corporate apps on mobile devices, including phones and tablets. Their IT teams not only have to procure and provision mobile Internet connectivity for these users, but also enforce consistent Zero Trust security controls.
Cloudflare is pleased to announce that the Travel SIM, provided by Cloudflare’s partner CMI, delivers network connectivity automatically and can be used together with the Cloudflare WARP client on mobile devices to provide Cloudflare’s suite of Zero Trust security services. Using the same Zero Trust profiles assigned to the user, the WARP client automatically uses the available 4G LTE network and establishes a WireGuard tunnel to the closest Cloudflare data center outside of China. The data connection can also be shared with other devices using the hotspot function on the mobile device.
With the Travel SIM, users traveling to China enjoy the same Cloudflare global service as the rest of the world, and IT and security teams no longer need to worry about purchasing or deploying additional Zero Trust seats and device clients to secure employees’ Internet connections and enforce security policies.
China Express — Extending Cloudflare One to China
As mentioned in a previous blog post, we are extending Cloudflare One, our Zero Trust network-as-a-service product, to mainland China through our strategic partnerships. Acme Corp can now ensure that employees both inside and outside China use consistent Zero Trust security policies via the Cloudflare WARP device client. In addition, they can connect their physical offices in China to their global private WAN using Magic WAN, with consistent security policies applied globally.
Get started today
Cloudflare is excited to work with our partners to help our customers solve connectivity and performance challenges in mainland China. All the above solutions are easy and fast to deploy and are available now. If you’d like to get started, contact us here or reach out to your account team.