Security is a top priority for organizations looking to keep pace with a changing threat landscape and build customer trust. However, the traditional approach of defined security perimeters that separate trusted from untrusted network zones has proven to be inadequate as hybrid work models accelerate digital transformation.
Today’s distributed enterprise requires a new approach to ensuring the right levels of security and accessibility for systems and data. Security experts increasingly recommend Zero Trust as the solution, but security teams can get confused when Zero Trust is presented as a product, rather than as a security model. We’re excited to share a whitepaper we recently authored with SANS Institute called Zero Trust: Charting a Path To Stronger Security, which addresses common misconceptions and explores Zero Trust opportunities.
Gartner predicts that by 2025, over 60% of organizations will embrace Zero Trust as a starting place for security.
The whitepaper includes context and analysis that can help you move past Zero Trust marketing hype and learn about these key considerations for implementing a successful Zero Trust strategy:
Zero Trust definition and guiding principles
Six foundational capabilities to establish
Four fallacies to avoid
Six Zero Trust use cases
Metrics for measuring Zero Trust ROI
The journey to Zero Trust is an iterative process that is different for every organization. We encourage you to download the whitepaper, and gain insight into how you can chart a path to a multi-layered security strategy that adapts to the modern environment and meaningfully improves your technical and business outcomes. We look forward to your feedback and to continuing the journey together.
If you have feedback about this post, submit comments in the Comments section below.
Want more AWS Security news? Follow us on Twitter.
Today, we’re announcing the general availability of the Magic WAN Connector, a key component of our SASE platform, Cloudflare One. Magic WAN Connector is the glue between your existing network hardware and Cloudflare’s network — it provides a super simplified software solution that comes pre-installed on Cloudflare-certified hardware, and is entirely managed from the Cloudflare One dashboard.
It takes only a few minutes from unboxing to seeing your network traffic automatically routed to the closest Cloudflare location, where it flows through a full stack of Zero Trust security controls before taking an accelerated path to its destination, whether that’s another location on your private network, a SaaS app, or any application on the open Internet.
Since we announced our beta earlier this year, organizations around the world have deployed the Magic WAN Connector to connect and secure their network locations. We’re excited for the general availability of the Magic WAN Connector to accelerate SASE transformation at scale.
When customers tell us about their journey to embrace SASE, one of the most common stories we hear is:
We started with our remote workforce, deploying modern solutions to secure access to internal apps and Internet resources. But now, we’re looking at the broader landscape of our enterprise network connectivity and security, and it’s daunting. We want to shift to a cloud and Internet-centric model for all of our infrastructure, but we’re struggling to figure out how to start.
The Magic WAN Connector was created to address this problem.
Zero-touch connectivity to your new corporate WAN
Cloudflare One enables organizations of any size to connect and secure all of their users, devices, applications, networks, and data with a unified platform delivered by our global connectivity cloud. Magic WAN is the network connectivity “glue” of Cloudflare One, allowing our customers to migrate away from legacy private circuits and use our network as an extension of their own.
Previously, customers have connected their locations to Magic WAN with Anycast GRE or IPsec tunnels configured on their edge network equipment (usually existing routers or firewalls), or plugged into us directly with CNI. But for the past few years, we’ve heard requests from hundreds of customers asking for a zero-touch approach to connecting their branches: We just want something we can plug in and turn on, and it handles the rest.
The Magic WAN Connector is exactly this. Customers receive Cloudflare-certified hardware with our software pre-installed on it, and everything is controlled via the Cloudflare dashboard. What was once a time-consuming, complex process now takes a matter of minutes, enabling robust Zero Trust protection for all of your traffic.
In addition to automatically configuring tunnels and routing policies to direct your network traffic to Cloudflare, the Magic WAN Connector will also handle traffic steering, shaping and failover to make sure your packets always take the best path available to the closest Cloudflare network location — which is likely only milliseconds away. You’ll also get enhanced visibility into all your traffic flows in analytics and logs, providing a unified observability experience across both your branches and the traffic through Cloudflare’s network.
Zero Trust security for all your traffic
Once the Magic WAN Connector is deployed at your network location, you have automatic access to enforce Zero Trust security policies across both public and private traffic.
A secure on-ramp to the Internet
An easy first step to improving your organization’s security posture after connecting network locations to Cloudflare is creating Secure Web Gateway policies to defend against ransomware, phishing, and other threats for faster, safer Internet browsing. By default, all Internet traffic from locations with the Magic WAN Connector will route through Cloudflare Gateway, providing a unified management plane for traffic from physical locations and remote employees.
A more secure private network
The Magic WAN Connector also enables routing private traffic between your network locations, with multiple layers of network and Zero Trust security controls in place. Unlike a traditional network architecture, which requires deploying and managing a stack of security hardware and backhauling branch traffic through a central location for filtering, a SASE architecture provides private traffic filtering and control built-in: enforced across a distributed network, but managed from a single dashboard interface or API.
A simpler approach for hybrid cloud
Cloudflare One enables connectivity for any physical or cloud network with easy on-ramps depending on location type. The Magic WAN Connector provides easy connectivity for branches, but also provides automatic connectivity to other networks including VPCs connected using cloud-native constructs (e.g., VPN Gateways) or direct cloud connectivity (via Cloud CNI). With a unified connectivity and control plane across physical and cloud infrastructure, IT and security teams can reduce overhead and cost of managing multi- and hybrid cloud networks.
Single-vendor SASE dramatically reduces cost and complexity
With the general availability of the Magic WAN Connector, we’ve put the final piece in place to deliver a unified SASE platform, developed and fully integrated from the ground up. Deploying and managing all the components of SASE with a single vendor, versus piecing together different solutions for networking and security, significantly simplifies deployment and management by reducing complexity and potential integration challenges. Many vendors that market a full SASE solution have actually stitched together separate products through acquisition, leading to an un-integrated experience similar to what you would see deploying and managing multiple separate vendors. In contrast, Cloudflare One (now with the Magic WAN Connector for simplified branch functions) enables organizations to achieve the true promise of SASE: a simplified, efficient, and highly secure network and security infrastructure that reduces your total cost of ownership and adapts to the evolving needs of the modern digital landscape.
Evolving beyond SD-WAN
Cloudflare One addresses many of the challenges that were left behind as organizations deployed SD-WAN to help simplify networking operations. SD-WAN provides orchestration capabilities to help manage devices and configuration in one place, as well as last mile traffic management to steer and shape traffic based on more sophisticated logic than is possible in traditional routers. But SD-WAN devices generally don't have embedded security controls, leaving teams to stitch together a patchwork of hardware, virtualized and cloud-based tools to keep their networks secure. They can make decisions about the best way to send traffic out from a customer’s branch, but they have no way to influence traffic hops between the last mile and the traffic's destination. And while some SD-WAN providers have surfaced virtualized versions of their appliances that can be deployed in cloud environments, they don't support native cloud connectivity and can complicate rather than ease the transition to cloud.
Cloudflare One represents the next evolution of enterprise networking, and has a fundamentally different architecture from either legacy networking or SD-WAN. It's based on a "light branch, heavy cloud" principle: deploy the minimum required hardware within physical locations (or virtual hardware within virtual networks, e.g., cloud VPCs) and use low-cost Internet connectivity to reach the nearest "service edge" location. At those locations, traffic can flow through security controls and be optimized on the way to its destination, whether that's another location within the customer's private network or an application on the public Internet. This architecture also enables remote user access to connected networks.
This shift — moving most of the "smarts" from the branch to a distributed global network edge, and leaving only the functions at the branch that absolutely require local presence, delivered by the Magic WAN Connector — solves our customers’ current problems and sets them up for easier management and a stronger security posture as the connectivity and attack landscape continues to evolve.
| Aspect | Example | MPLS/VPN Service | SD-WAN | SASE with Cloudflare One |
| --- | --- | --- | --- | --- |
| Configuration | New site setup, configuration, and management | By MSP through service request | Simplified orchestration and management via centralized controller | Automated orchestration via SaaS portal; single dashboard |
| Last mile traffic control | Traffic balancing, QoS, and failover | Covered by MPLS SLAs | Best path selection available in SD-WAN appliance | Minimal on-prem deployment to control local decision making |
| Middle mile traffic control | Traffic steering around middle mile congestion | Covered by MPLS SLAs | “Tunnel spaghetti” and still no control over the middle mile | Integrated traffic management and private backbone controls in a unified dashboard |
| Cloud integration | Connectivity for cloud migration | Centralized breakout | Decentralized breakout | Native connectivity with Cloud Network Interconnect |
| Security | Filter inbound and outbound Internet traffic for malware | Patchwork of hardware controls | Patchwork of hardware and/or software controls | Native integration with user, data, application, and network security tools |
| Cost | Maximize ROI for network investments | High cost for hardware and connectivity | Optimized connectivity costs at the expense of increased hardware and software costs | Decreased hardware and connectivity costs for maximized ROI |
Summary of legacy, SD-WAN based, and SASE architecture considerations
Love and want to keep your current SD-WAN vendor? No problem – you can still use any appliance that supports IPsec or GRE as an on-ramp for Cloudflare One.
Ready to simplify your SASE journey?
You can learn more about the Magic WAN Connector, including device specs, specific feature info, onboarding process details, and more at our dev docs, or contact us to get started today.
Data continues to explode in volume, variety, and velocity, and security teams at organizations of all sizes are challenged to keep up. Businesses face escalating risks posed by varied SaaS environments and the emergence of generative artificial intelligence (AI) tools, and the exposure and theft of valuable source code continue to keep CISOs and Data Officers up at night.
Over the past few years, Cloudflare has launched capabilities to help organizations navigate these risks and gain visibility and controls over their data — including the launches of our data loss prevention (DLP) and cloud access security broker (CASB) services in the fall of 2022.
Announcing Cloudflare One’s data protection suite
Today, we are building on that momentum and announcing Cloudflare One for Data Protection — our unified suite to protect data everywhere across web, SaaS, and private applications. Built on and delivered across our entire global network, Cloudflare One’s data protection suite is architected for the risks of modern coding and increased usage of AI.
A separate blog post published today looks back on what technologies and features we delivered over the past year and previews new functionality that customers can look forward to.
In this blog, we focus more on what impact those technologies and features have for customers in addressing modern data risks — with examples of practical use cases. We believe that Cloudflare One is uniquely positioned to deliver better data protection that addresses modern data risks. And by “better,” we mean:
Helping security teams be more effective protecting data by simplifying inline and API connectivity together with policy management
Helping employees be more productive by ensuring fast, reliable, and consistent user experiences
Helping organizations be more agile by innovating rapidly to meet evolving data security and privacy requirements
Harder than ever to secure data
Data spans more environments than most organizations can keep track of. In conversations with customers, three distinctly modern risks stick out:
The growing diversity of cloud and SaaS environments: The apps where knowledge workers spend most of their time — like cloud email inboxes, shared cloud storage folders and documents, SaaS productivity and collaboration suites like Microsoft 365 — are increasingly targeted by threat actors for data exfiltration.
Emerging AI tools: Business leaders are concerned about users oversharing sensitive information with opaque large language model tools like ChatGPT, but at the same time, want to leverage the benefits of AI.
Source code exposure or theft: Developer code fuels digital business, but that same high-value source code can be exposed or targeted for theft across many developer tools like GitHub, including in plain sight locations like public repositories.
These latter two risks, in particular, are already intersecting. Companies like Amazon, Apple, Verizon, Deutsche Bank, and more are blocking employees from using tools like ChatGPT for fear of losing confidential data, and Samsung recently had an engineer accidentally upload sensitive code to the tool. As organizations prioritize new digital services and experiences, developers face mounting pressure to work faster and smarter. AI tools can help unlock that productivity, but the long-term consequences of oversharing sensitive data with these tools are still unknown.
Altogether, data risks are only primed to escalate, particularly as organizations accelerate digital transformation initiatives and hybrid work and development continue to expand attack surfaces. At the same time, regulatory compliance will only become more demanding, as more countries and states adopt more stringent data privacy laws.
Traditional DLP services are not equipped to keep up with these modern risks. A combination of high setup and operational complexity plus negative user experiences means that, in practice, DLP controls are often underutilized or bypassed entirely. Whether deployed as a standalone platform or integrated into security products or SaaS applications, DLP products can often become expensive shelfware. And backhauling traffic through on-premises data protection hardware (whether DLP, firewall, and SWG appliances, or otherwise) creates costs and slow user experiences that hold businesses back in the long run.
Figure 1: Modern data risks
How customers use Cloudflare for data protection
Today, customers are increasingly turning to Cloudflare to address these data risks, including a Fortune 500 natural gas company, a major US job site, a regional US airline, an Australian healthcare company and more. Across these customer engagements, three use cases are standing out as common focus areas when deploying Cloudflare One for data protection.
Use case #1: Securing AI tools and developer code (Applied Systems)
Applied Systems, an insurance technology & software company, recently deployed Cloudflare One to secure data in AI environments.
Specifically, the company runs the public instance of ChatGPT in an isolated browser, so that the security team can apply copy-paste blocks: preventing users from copying sensitive information (including developer code) from other apps into the AI tool. According to Chief Information Security Officer Tanner Randolph, “We wanted to let employees take advantage of AI while keeping it safe.”
This use case was just one of several Applied Systems tackled when migrating from Zscaler and Cisco to Cloudflare, but we see a growing interest in securing AI and developer code among our customers.
Use case #2: Data exposure visibility
Customers are leveraging Cloudflare One to regain visibility and controls over data exposure risks across their sprawling app environments. For many, the first step is analyzing unsanctioned app usage, and then taking steps to allow, block, isolate, or apply other controls to those resources. A second and increasingly popular step is scanning SaaS apps for misconfigurations and sensitive data via a CASB and DLP service, and then taking prescriptive steps to remediate via SWG policies.
A UK ecommerce giant with 7,500 employees turned to Cloudflare for this latter step. As part of a broader migration strategy from Zscaler to Cloudflare, this company quickly set up API integrations between its SaaS environments and Cloudflare’s CASB and began scanning for misconfigurations. Plus, during this integration process, the company was able to sync DLP policies with Microsoft Purview Information Protection sensitivity labels, so that it could use its existing framework to prioritize what data to protect. All in all, the company was able to begin identifying data exposure risks within a day.
Use case #3: Compliance with regulations
Comprehensive data regulations like GDPR, CCPA, HIPAA, and GLBA have been in our lives for some time now. But new laws are quickly emerging: for example, 11 U.S. states now have comprehensive privacy laws, up from just 3 in 2021. And updates to existing laws like PCI DSS now include stricter, more expansive requirements.
Customers are increasingly turning to Cloudflare One for compliance, in particular by ensuring they can monitor and protect regulated data (e.g. financial data, health data, PII, exact data matches, and more). Common steps include: first, detecting and applying controls to sensitive data via DLP; next, maintaining detailed audit trails via logs and further SIEM analysis; and finally, reducing overall risk with a comprehensive Zero Trust security posture.
Let’s look at a concrete example. One Zero Trust best practice that is increasingly required is multi-factor authentication (MFA). In the payment cards industry, PCI DSS v4.0, which takes effect in 2025, requires that MFA be enforced for every access request to the cardholder data environment, for every user and for every location, including cloud environments, on-prem apps, workstations, and more (requirement 8.4.2). Plus, those MFA systems must be configured to prevent misuse, including replay attacks and bypass attempts, and must require at least two different factors, all of which must succeed (requirement 8.5). To help organizations comply with both of these requirements, Cloudflare makes it possible to enforce MFA across all apps and users; in fact, we use these same services to enforce hard key authentication for our own employees.
Figure 2: Data protection use cases
The Cloudflare difference
Cloudflare One’s data protection suite is built to stay at the forefront of modern data risks to address these and other evolving use cases.
With Cloudflare, DLP is not just integrated with other typically distinct security services, like CASB, SWG, ZTNA, RBI, and email security, but converged onto a single platform with one control plane and one interface. Beyond the acronym soup, our network architecture is really what enables us to help organizations be more effective, more productive, and more agile with protecting data.
We simplify connectivity, with flexible options for you to send traffic to Cloudflare for enforcement. Those options include API-based scans of SaaS suites for misconfigurations and sensitive data. Unlike solutions that require security teams to get full app permissions from IT or business teams, Cloudflare can find risk exposure with read-only app permissions. Clientless deployments of ZTNA to secure application access and of browser isolation to control data within websites and apps are scalable for all users — employees and third-parties like contractors — for the largest enterprises. And when you do want to forward proxy traffic, Cloudflare offers one device client with self-enrollment permissions or wide area network on-ramps across security services. With so many practical ways to deploy, your data protection approach will be effective and functional — not shelfware.
Just like your data, our global network is everywhere, now spanning over 300 cities in over 100 countries. We have proven that we enforce controls faster than vendors like Zscaler, Netskope, and Palo Alto Networks — all with single-pass inspection. We ensure security is quick, reliable, and unintrusive, so you can layer on data controls without disrupting workforce productivity.
Our programmable network architecture enables us to build new capabilities quickly. And we rapidly adopt new security standards and protocols (like IPv6-only connections or HTTP/3 encryption) to ensure data protection remains effective. Altogether, this architecture equips us to evolve alongside changing data protection use cases, like protecting code in AI environments, and quickly deploy AI and machine learning models across our network locations to enforce higher precision, context-driven detections.
Figure 3: Unified data protection with Cloudflare
How to get started
Modern data risks demand modern security. We feel that Cloudflare One’s unified data protection suite is architected to help organizations navigate their priority risks today and in the future — whether that is securing developer code and AI tools, regaining visibility over SaaS apps, or staying compliant with evolving regulations.
If you’re ready to explore how Cloudflare can protect your data, request a workshop with our experts today.
In the announcement post, we focused on how the data protection suite helps customers navigate modern data risks, with recommended use cases and real-world customer examples.
In this companion blog post, we recap the capabilities built into the Cloudflare One suite over the past year and preview new functionality that customers can look forward to. This blog is best for practitioners interested in protecting data and SaaS environments using Cloudflare One.
DLP & CASB capabilities launched in the past year
Cloudflare launched both DLP and CASB services in September 2022, and since then we have rapidly built functionality to meet the growing needs of organizations of all sizes. Before previewing how these services will evolve, it is worth recapping the many enhancements added in the past year.
Cloudflare’s DLP solution helps organizations detect and protect sensitive data across their environment based on several characteristics of that data. DLP controls can be critical in preventing (and detecting) damaging leaks and ensuring compliance for regulated classes of data like financial, health, and personally identifiable information.
Improvements to DLP detections and policies can be characterized by three major themes:
Customization: making it easy for administrators to design DLP policies with the flexibility they want.
Deep detections: equipping administrators with increasingly granular controls over what data they protect and how.
Detailed detections: providing administrators with more detailed visibility and logs to analyze the efficacy of their DLP policies.
Cloudflare’s CASB helps organizations connect to, scan, and monitor third-party SaaS applications for misconfigurations, improper data sharing, and other security risks — all via lightweight API integrations. In this way, organizations can regain visibility and controls over their growing investments in SaaS apps.
CASB product enhancements can similarly be summarized by three themes:
Expanding API integrations: Today, our CASB integrates with 18 of the most popular SaaS apps — Microsoft 365 (including OneDrive), Google Workspace (including Drive), Salesforce, GitHub, and more. Setting up these API integrations takes fewer clicks than first-generation CASB solutions, with comparable coverage to other vendors in the Security Services Edge (SSE) space.
Strengthening findings of CASB scans: We have made it easier to remediate the misconfigurations identified by these CASB scans with both prescriptive guides and in-line policy actions built into the dashboard.
Converging CASB & DLP functionality: We started enabling organizations to scan SaaS apps for sensitive data, as classified by DLP policies. For example, this helps organizations detect when credit cards or social security numbers are in Google documents or spreadsheets that have been made publicly available to anyone on the Internet.
This last theme, in particular, speaks to the value of unifying data protection capabilities on a single platform for simple, streamlined workflows. The below table highlights some major capabilities launched since our general availability announcements last September.
Table 1: Select DLP and CASB capabilities shipped since 2022 Q4
After a quick API integration, Cloudflare syncs continuously with the Microsoft Information Protection (MIP) labels you already use to streamline how you build DLP policies.
Administrators can create custom detections using the same regex policy builder used across our entire Zero Trust platform for a consistent configuration experience across services.
Administrators can set minimum thresholds for the number of times a detection is made before an action (like block or log) is triggered. This way, customers can create policies that allow individual transactions but block up/downloads with high volumes of sensitive data.
Context analysis helps reduce false positive detections by analyzing proximity keywords (for example: seeing “expiration date” near a credit card number increases the likelihood of triggering a detection).
Cloudflare now captures more wide-ranging and granular details of DLP-related activity in logs, including payload analysis, file names, and higher fidelity details of individual files. A large percentage of our customers prefer to push these logs to SIEM tools like Datadog and Sumo Logic.
Today, Cloudflare integrates with 18 of the most widely used SaaS apps, including productivity suites, cloud storage, chat tools, and more. API-based scans not only reveal misconfigurations, but also offer built-in HTTP policy creation workflows and step-by-step remediation guides.
Today, organizations can set up CASB to scan every publicly accessible file in Google Workspace for text that matches a DLP profile (financial data, personal identifiers, etc.).
New and upcoming DLP & CASB functionality
Today’s launch of Cloudflare One’s data protection suite crystallizes our commitment to keep investing in DLP and CASB functionality across these thematic areas. Below we preview a few new and upcoming capabilities on the Cloudflare One data protection suite roadmap that will become available in the coming weeks for further visibility and controls across data environments.
Exact data matching with custom wordlists
Already shipped: Exact Data Match moves from beta to general availability, allowing customers to tell Cloudflare’s DLP exactly what data to look for by uploading a dataset, which could include names, phone numbers, or anything else.
Next 30 days: Customers will soon be able to upload a list of specific words, create DLP policies to search for those important keywords in files, and block and log that activity.
How customers benefit: Administrators can be more specific about what they need to protect and save time creating policies by bulk uploading the data and terms that they care most about. Over time, many organizations have amassed long lists of terms configured for incumbent DLP services, and these customizable upload capabilities streamline migration from other vendors to Cloudflare. Just as with all other DLP profiles, Cloudflare searches for these custom lists and keywords within in-line traffic and in integrated SaaS apps.
Detecting source code and health data
Next 30 days: Soon, Cloudflare’s DLP will include predefined profiles to detect developer source code and protected health information (PHI). Initially, code data will include languages like Python, JavaScript, Java, and C++ — four of the most popular languages today — and PHI data will include medication and diagnosis names — two highly sensitive medical topics.
How customers benefit: These predefined profiles expand coverage to some of the most valuable — and in the case of PHI, one of the most regulated — types of data within an organization.
Converging API-driven CASB & DLP for data-at-rest protections
Next 30 days: Soon, organizations will be able to scan for sensitive data at rest in Microsoft 365 (e.g. OneDrive). API-based scans of these environments will flag, for example, whether credit card numbers, source code, or other data configured via DLP policies reside within publicly accessible files. Administrators can then take prescriptive steps to remediate via in-line CASB gateway policies.
Shipping by the end of the year: Within the next few months, this same integration will be available with GitHub.
How customers benefit: Between the existing Google Workspace integration and this upcoming Microsoft 365 integration, customers can scan for sensitive data across two of the most prominent cloud productivity suites — where users spend much of their time and where large percentages of organizational data lives. This new Microsoft integration represents a continued investment in streamlining security workflows across the Microsoft ecosystem — whether for managing identity and application access, enforcing device posture, or isolating risky users.
The GitHub integration also restores visibility over one of the most critical developer environments that is also increasingly a risk for data leaks. In fact, according to GitGuardian, 10 million hard-coded secrets were exposed in public GitHub commits in 2022, a figure that is up 67% from 2021 and only expected to grow. Preventing source code exposure on GitHub is a problem area our product team regularly hears from our customers, and we will continue to prioritize securing developer environments.
Layering on Zero Trust context: User Risk Score
Next 30 days: Cloudflare will introduce a risk score based on user behavior and activities that have been detected across Cloudflare One’s services. Organizations will be able to detect user behaviors that introduce risk, such as an Impossible Travel anomaly or too many DLP violations in a given period of time. Shortly after the detection capabilities, we will add the option to take preventative or remediative policy actions within the wider Cloudflare One suite. In this way, organizations can control access to sensitive data and applications based on changing risk factors and real-time context.
How customers benefit: Today, intensive time, labor, and money are spent on analyzing large volumes of log data to identify patterns of risk. Cloudflare's ‘out-of-the-box’ risk score simplifies that process, helping organizations gain visibility into and lock down suspicious activity with speed and efficiency.
How to get started
These are just some of the capabilities on our short-term roadmap, and we can’t wait to share more with you as the data protection suite evolves. If you’re ready to explore how Cloudflare One can protect your data, request a workshop with our experts today.
On June 19, 2023, AWS Verified Access introduced improved logging functionality; Verified Access now logs more extensive user context information received from the trust providers. This improved logging feature simplifies administration and troubleshooting of application access policies while adhering to zero-trust principles.
In this blog post, we will show you how to manage the Verified Access logging configuration and how to use Verified Access logs to write and troubleshoot access policies faster. We provide an example showing the user context information that was logged before and after the improved logging functionality and how you can use that information to transform a high-level policy into a fine-grained policy.
Overview of AWS Verified Access
AWS Verified Access helps enterprises to provide secure access to their corporate applications without using a virtual private network (VPN). Using Verified Access, you can configure fine-grained access policies to help limit application access only to users who meet the specified security requirements (for example, user identity and device security status). These policies are written in Cedar, a new policy language developed and open-sourced by AWS.
Verified Access validates each request based on access policies that you set. You can use user context—such as user, group, and device risk score—from your existing third-party identity and device security services to define access policies. In addition, Verified Access provides you an option to log every access attempt to help you respond quickly to security incidents and audit requests. These logs also contain user context sent from your identity and device security services and can help you to match the expected outcomes with the actual outcomes of your policies. To capture these logs, you need to enable logging from the Verified Access console.
Figure 1: Overview of AWS Verified Access architecture showing Verified Access connected to an application
After a Verified Access administrator attaches a trust provider to a Verified Access instance, they can write policies using the user context information from the trust provider. This user context information is custom to an organization, and you need to gather it from different sources when writing or troubleshooting policies that require more extensive user context.
Now, with the improved logging functionality, the Verified Access logs record more extensive user context information from the trust providers. This eliminates the need to gather information from different sources. With the detailed context available in the logs, you have more information to help validate and troubleshoot your policies.
To improve the preceding policy and make it more granular, you can include checks for various user and device details. For example, you can check if the user belongs to a particular group, has a verified email, should be logging in from a device with an OS that has an assessment score greater than 50, and has an overall device score greater than 15.
Modify the Verified Access instance logging configuration
Open the Verified Access console and select Verified Access instances.
Select the instance that you want to modify, and then, on the Verified Access instance logging configuration tab, select Modify Verified Access instance logging configuration.
Under Update log version, select ocsf-1.0.0-rc.2, turn on Include trust context, and select where the logs should be delivered.
Figure 3: Verified Access log version and trust context
After you’ve completed the preceding steps, Verified Access will start logging more extensive user context information from the trust providers for every request that Verified Access receives. This context information can have sensitive information. To learn more about how to protect this sensitive information, see Protect Sensitive Data with Amazon CloudWatch Logs.
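If you prefer to script this configuration change instead of using the console, the same update can be made through the EC2 API. The following is a minimal sketch using boto3; the instance ID and log group are placeholders, and the AccessLogs field names are assumed to mirror the console options above (log version and trust context), so confirm them against the EC2 API reference before relying on them.

import boto3

ec2 = boto3.client("ec2")

# Hypothetical instance ID and CloudWatch Logs group, for illustration only.
response = ec2.modify_verified_access_instance_logging_configuration(
    VerifiedAccessInstanceId="vai-0123456789abcdef0",
    AccessLogs={
        # Deliver logs to CloudWatch Logs; S3 and Kinesis Data Firehose
        # destinations are configured in the same structure.
        "CloudWatchLogs": {
            "Enabled": True,
            "LogGroup": "verified-access-logs",
        },
        # Assumed to mirror the console options shown in Figure 3:
        # newer OCSF log version plus trust context from the providers.
        "LogVersion": "ocsf-1.0.0-rc.2",
        "IncludeTrustContext": True,
    },
)
print(response)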
The following example log shows information received from the IAM Identity Center identity provider (IdP) and the device provider CrowdStrike.
The following example log shows the user context information received from the OpenID Connect (OIDC) trust provider Okta. You can see the difference in the information provided by the two different trust providers: IAM Identity Center and Okta.
The following is a sample policy written using the information received from the trust providers.
permit(principal, action, resource)
when {
    context.idcpolicy.groups has "<hr-group-id>" &&
    context.idcpolicy.user.email.address like "*@example.com" &&
    context.idcpolicy.user.email.verified == true &&
    context has "crdstrikepolicy" &&
    context.crdstrikepolicy.assessment.os > 50 &&
    context.crdstrikepolicy.assessment.overall > 15
};
This policy only grants access to users who belong to a particular group, have a verified email address, and have a corporate email domain. Also, users can only access the application from a device with an OS that has an assessment score greater than 50, and has an overall device score greater than 15.
Conclusion
In this post, you learned how to manage Verified Access logging configuration from the Verified Access console and how to use improved logging information to write AWS Verified Access policies. To get started with Verified Access, see the Amazon VPC console.
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.
Want more AWS Security news? Follow us on Twitter.
Cloudflare's Zero Trust platform helps organizations map and adopt a strong security posture. This ranges from Zero Trust Network Access and a Secure Web Gateway to help filter traffic, to Cloud Access Security Broker and Data Loss Prevention to protect data in transit and in the cloud. Customers use Cloudflare to verify, isolate, and inspect all devices managed by IT. Our composable, in-line solutions offer a simplified approach to security and a comprehensive set of logs.
We’ve heard from many of our customers that they aggregate these logs into Datadog’s Cloud SIEM product. Datadog Cloud SIEM provides threat detection, investigation, and automated response for dynamic, cloud-scale environments. Cloud SIEM analyzes operational and security logs in real time – regardless of volume – while utilizing out-of-the-box integrations and rules to detect threats and investigate them. It also automates response and remediation through out-of-the-box workflow blueprints. Developers, security, and operations teams can also leverage detailed observability data and efficiently collaborate to accelerate security investigations in a single, unified platform. We previously had an out-of-the-box dashboard for Cloudflare CDN available on Datadog. It helps our customers gain valuable insights into product usage and performance metrics such as response times, HTTP status codes, and cache hit rate. Customers can collect, visualize, and alert on key Cloudflare metrics.
Today, we are very excited to announce the general availability of the Cloudflare Zero Trust Integration with Datadog. This deeper integration offers the Cloudflare Content Pack within Cloud SIEM, which includes an out-of-the-box dashboard and detection rules that help customers ingest Zero Trust logs into Datadog and gain greatly improved security insight into their Zero Trust landscape.
“Our Datadog SIEM integration with Cloudflare delivers a holistic view of activity across Cloudflare Zero Trust integrations–helping security and dev teams quickly identify and respond to anomalous activity across app, device, and users within the Cloudflare Zero Trust ecosystem. The integration offers detection rules that automatically generate signals based on CASB (cloud access security broker) findings, and impossible travel scenarios, a revamped dashboard for easy spotting of anomalies, and accelerates response and remediation to quickly contain an attacker’s activity through out-of-the-box workflow automation blueprints.” – Yash Kumar, Senior Director of Product, Datadog
How to get started
Set up Logpush jobs to your Datadog destination
Use the Cloudflare dashboard or API to create a Logpush job with all fields enabled for each dataset you’d like to ingest on Datadog. We have eight account-scoped datasets available to use today (Access Requests, Audit logs, CASB findings, Gateway logs including DNS, Network, HTTP; Zero Trust Session Logs) that can be ingested into Datadog.
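If you would rather script this step than use the dashboard, a Logpush job can be created with a single API call per dataset. Below is a minimal sketch in Python; the account ID, API token, Datadog API key, and dataset name are placeholders, and the exact dataset identifiers, field options, and destination parameters are described in the Cloudflare Logpush and Datadog documentation.

import requests

ACCOUNT_ID = "<your-account-id>"
API_TOKEN = "<api-token-with-logpush-edit-permissions>"

# Datadog destination: the Datadog logs intake endpoint plus your Datadog API key.
destination = (
    "datadog://http-intake.logs.datadoghq.com/api/v2/logs"
    "?header_DD-API-KEY=<datadog-api-key>&ddsource=cloudflare"
)

job = {
    "name": "zero-trust-gateway-http",
    "dataset": "gateway_http",       # repeat for each dataset you want to ingest
    "destination_conf": destination,
    "enabled": True,
    # Add output_options (or logpull_options) to enable all fields for the
    # dataset, per the Logpush documentation.
}

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/logpush/jobs",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=job,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())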
Install the Cloudflare Tile in Datadog
In your Datadog dashboard, locate and install the Cloudflare Tile within the Datadog Integration catalog. At this stage, Datadog’s out-of-the-box log processing pipeline will automatically parse and normalize your Cloudflare Zero Trust logs.
Analyze and correlate your Zero Trust logs with Datadog Cloud SIEM's out-of-the-box content
Our new and improved integration with Datadog enables security teams to quickly and easily monitor their Zero Trust components with the Cloudflare Content Pack. This includes the out-of-the-box dashboard that now features a Zero Trust section highlighting various widgets about activity across the applications, devices, and users in your Cloudflare Zero Trust ecosystem. This section gives you a holistic view, helping you spot and respond to anomalies quickly.
Security detections built for CASB
As enterprises use more SaaS applications, it becomes more critical to have insight into and control over data at rest. Cloudflare CASB findings do just that by providing security risk insights for all integrated SaaS applications.
With this new integration, Datadog now offers an out-of-the-box detection rule that detects any CASB findings. The alert is triggered at different severity levels for any CASB security finding that could indicate suspicious activity within an integrated SaaS app, like Microsoft 365 and Google Workspace. In the example below, the CASB finding points to an asset whose Google Workspace Domain Record is missing.
This detection is helpful in identifying and remedying misconfigurations or other security issues, saving time and reducing the possibility of security breaches.
Security detections for Impossible Travel
One of the most common security issues can show up in surprisingly simple ways. For example, a user seemingly logs in from one location, only to log in shortly afterward from a location physically too far away to have traveled to in that time. Datadog’s new Impossible Travel detection rule addresses exactly this scenario. If Datadog Cloud SIEM determines that two consecutive log lines for a user indicate impossible travel of more than 500 km at over 1,000 km/h, a security alert is triggered. An admin can then determine whether it is a security breach and take action accordingly.
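The logic behind this rule is easy to reason about: compare the distance between two consecutive login locations with the time between them. The sketch below illustrates that idea using the thresholds mentioned above; it is a simplified illustration, not Datadog’s actual detection rule.

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def is_impossible_travel(prev_login, next_login, min_km=500, max_kmh=1000):
    """Flag two consecutive logins that would require travelling more than
    min_km at a speed above max_kmh. Timestamps are in seconds."""
    distance_km = haversine_km(prev_login["lat"], prev_login["lon"],
                               next_login["lat"], next_login["lon"])
    hours = (next_login["ts"] - prev_login["ts"]) / 3600
    if hours <= 0:
        return distance_km > min_km
    return distance_km > min_km and distance_km / hours > max_kmh

# Example: a login from London, then one from Sydney 30 minutes later.
a = {"lat": 51.5, "lon": -0.1, "ts": 1_700_000_000}
b = {"lat": -33.9, "lon": 151.2, "ts": 1_700_001_800}
print(is_impossible_travel(a, b))  # True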
What’s next
Customers of Cloudflare and Datadog can now gain a more comprehensive view of their products and security posture with the enhanced dashboards and the new detection rules. We are excited to work on adding more value for our customers and develop unique detection rules.
If you are a Cloudflare customer using Datadog, explore the new integration starting today.
The most famous data breaches–the ones that keep security practitioners up at night–involved the leak of millions of user records. Companies have lost names, addresses, email addresses, Social Security numbers, passwords, and a wealth of other sensitive information. Protecting this data is the highest priority of most security teams, yet many teams still struggle to actually detect these leaks.
Cloudflare’s Data Loss Prevention suite already includes the ability to identify sensitive data like credit card numbers, but with the volume of data being transferred every day, it can be challenging to understand which of the transactions that include sensitive data are actually problematic. We hear customers tell us, “I don’t care when one of my employees uses a personal credit card to buy something online. Tell me when one of my customers’ credit cards are leaked.”
In response, we looked for a method to distinguish between any credit card and one belonging to a specific customer. We are excited to announce the launch of our newest Data Loss Prevention feature, Exact Data Match. With Exact Data Match (EDM), customers securely tell us what data they want to protect, and then we identify, log, and block the presence or movement of that data. For example, if you provide us with a set of credit card numbers, we will DLP scan your traffic or repositories for only those cards. This allows you to create targeted DLP detections for your organization.
What is Exact Data Match?
Many Data Loss Prevention (DLP) detections begin with a generic identification of a pattern, often using a regular expression, and then are validated by additional criteria. Validation can leverage a wide range of techniques from checksums to machine learning models. However, this validates that the pattern is a credit card, not that it is your credit card.
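As a rough illustration of that two-step approach (a generic sketch, not Cloudflare’s detection engine), a card-number detection might pair a loose pattern match with a Luhn checksum to discard random digit strings:

import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,19}\b")  # loose card-like pattern

def luhn_valid(number: str) -> bool:
    """Luhn checksum: filters out strings that merely look like card numbers."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_candidates(text: str):
    for match in CARD_PATTERN.finditer(text):
        candidate = match.group()
        if luhn_valid(candidate):
            yield candidate

print(list(find_card_candidates("order ref 4111 1111 1111 1111, total $42")))
# ['4111 1111 1111 1111']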
With Exact Data Match, you tell us exactly the data you want to protect, but we never see it in cleartext. You provide a list of data of your choosing, such as a list of names, addresses, or credit card numbers, and that data is hashed before ever reaching Cloudflare. We store the hashes and scan your traffic or content for matches of the hashes. When we find a match, we log or block it according to your policy.
By using a finite list of data, we drastically reduce false positives compared to generic pattern matching. Meanwhile, hashing the data maintains your data privacy. Our goal is to meet your data protection and privacy needs.
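Conceptually, the matching works on digests rather than raw values: you hash your dataset locally, and the scanner compares hashes of candidate values against that set. The sketch below illustrates the idea with a plain salted SHA-256 lookup; it is not Cloudflare’s actual EDM implementation.

import hashlib

def digest(value: str, salt: str = "per-dataset-salt") -> str:
    """One-way hash of a normalized value (illustrative only)."""
    normalized = "".join(ch for ch in value if ch.isalnum()).lower()
    return hashlib.sha256((salt + normalized).encode()).hexdigest()

# 1. The customer hashes their dataset locally; only digests are shared.
customer_cards = ["4111 1111 1111 1111", "5500 0000 0000 0004"]
protected = {digest(card) for card in customer_cards}

# 2. The scanner hashes candidate values found in traffic and checks membership.
def is_protected(candidate: str) -> bool:
    return digest(candidate) in protected

print(is_protected("4111-1111-1111-1111"))  # True: same card, different formatting
print(is_protected("4242 4242 4242 4242"))  # False: a card, but not one of yours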
How do I use it?
We now offer you the ability to upload DLP datasets. These allow you to provide batches of data to be used for your DLP detections.
When creating a dataset, provide a name, description, and a file containing the data to match.
When you upload the file, Cloudflare one-way hashes the data right in your browser. The hashed data is then transferred via API to Cloudflare, while the cleartext data never leaves the browser.
You can see the status of the upload in the datasets table.
The dataset can now be added to a DLP profile for detection. You can also add other predefined and custom entries to the same DLP profile.
Exact data match is now available for every DLP customer. If you are not a DLP customer but would like to learn more about Cloudflare One and DLP, reach out for a consultation.
What’s next?
Customers have many different formats to store data, and many different ways in which they want to monitor it. Our goal is to offer as much flexibility as your organization needs to meet your data protection goals.
In January and in March we posted blogs outlining how Cloudflare performed against others in Zero Trust. The conclusion in both cases was that Cloudflare was faster than Zscaler and Netskope in a variety of Zero Trust scenarios. For Speed Week, we’re bringing back these tests and upping the ante: we’re testing more providers against more public Internet endpoints in more regions than we have in the past.
We tested three Zero Trust scenarios: Secure Web Gateway (SWG), Zero Trust Network Access (ZTNA), and Remote Browser Isolation (RBI). We tested against three competitors: Zscaler, Netskope, and Palo Alto Networks. We tested these scenarios from 12 regions around the world, up from the four we’d previously tested with. The results are that Cloudflare is the fastest Secure Web Gateway in 42% of testing scenarios, the most of any provider. Cloudflare is 46% faster than Zscaler, 56% faster than Netskope, and 10% faster than Palo Alto Networks for ZTNA, and 64% faster than Zscaler for RBI scenarios.
In this blog, we’ll provide a refresher on why performance matters, do a deep dive on how we’re faster for each scenario, and we’ll talk about how we measured performance for each product.
Performance is a threat vector
Performance in Zero Trust matters: when Zero Trust performs poorly, users disable it, opening organizations to risk. Zero Trust services should be unobtrusive; when the services become noticeable, they prevent users from getting their job done.
Zero Trust services may have lots of bells and whistles that help protect customers, but none of that matters if employees can’t use the services to do their job quickly and efficiently. Fast performance helps drive adoption and makes security feel transparent to the end users. At Cloudflare, we prioritize making our products fast and frictionless, and the results speak for themselves. So now let’s turn it over to the results, starting with our secure web gateway.
Cloudflare Gateway: security at the Internet
A secure web gateway needs to be fast because it acts as a funnel for all of an organization’s Internet-bound traffic. If a secure web gateway is slow, then any traffic from users out to the Internet will be slow. If traffic out to the Internet is slow, users may see web pages load slowly, experience jitter or loss on video calls, or generally be unable to do their jobs. Users may decide to turn off the gateway, putting the organization at risk of attack.
In addition to being close to users, a performant web gateway needs to also be well-peered with the rest of the Internet to avoid slow paths out to websites users want to access. Many websites use CDNs to accelerate their content and provide a better experience. These CDNs are often well-peered and embedded in last mile networks. But traffic through a secure web gateway follows a forward proxy path: users connect to the proxy, and the proxy connects to the websites users are trying to access. If that proxy isn’t as well-peered as the destination websites are, the user traffic could travel farther to get to the proxy than it would have needed to if it was just going to the website itself, creating a hairpin, as seen in the diagram below:
A well-connected proxy ensures that user traffic travels a shorter distance, making it as fast as possible.
To compare secure web gateway products, we pitted the Cloudflare Gateway and WARP client against Zscaler, Netskope, and Palo Alto Networks, which all have products that perform the same functions. Cloudflare users benefit from Gateway and Cloudflare’s network being embedded deep into last mile networks close to users, peered with over 12,000 networks. That heightened connectivity shows, with Cloudflare Gateway being the fastest in 42% of tested scenarios:
Number of testing scenarios where each provider is fastest for 95th percentile HTTP Response time (higher is better)
| Provider | Scenarios where this provider is fastest |
| --- | --- |
| Cloudflare | 48 |
| Zscaler | 14 |
| Netskope | 10 |
| Palo Alto Networks | 42 |
This data shows that we are faster to more websites from more places than any of our competitors. To measure this, we look at the 95th percentile HTTP response time: how long it takes for a user to go through the proxy, have the proxy make a request to a website on the Internet, and finally return the response. This measurement is important because it’s an accurate representation of what users see. When we look at the 95th percentile across all tests, we see that Cloudflare is 2.5% faster than Palo Alto Networks, 13% faster than Zscaler, and 6.5% faster than Netskope.
95th percentile HTTP response across all tests
| Provider | 95th percentile response (ms) |
| --- | --- |
| Cloudflare | 515 |
| Zscaler | 595 |
| Netskope | 550 |
| Palo Alto Networks | 529 |
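For reference, the 95th percentile figures in these tables are taken from the distribution of individual response-time samples collected during testing; the calculation itself is simple, as this illustrative sketch shows (not the exact analysis pipeline used for these tests):

def percentile(samples, pct):
    """Return the pct-th percentile of a list of samples using linear interpolation."""
    ordered = sorted(samples)
    if not ordered:
        raise ValueError("no samples")
    rank = (pct / 100) * (len(ordered) - 1)
    lower = int(rank)
    upper = min(lower + 1, len(ordered) - 1)
    fraction = rank - lower
    return ordered[lower] + (ordered[upper] - ordered[lower]) * fraction

# response_times_ms would hold one HTTP response time per test request.
response_times_ms = [480, 495, 510, 525, 610, 880, 1400]
print(percentile(response_times_ms, 95))  # ~1244 ms for this small sample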
Cloudflare wins out here because our exceptional peering allows us to succeed in places where others cannot. We are able to get locally peered in hard-to-reach places on the globe, giving us an edge. For example, take a look at how Cloudflare performs against the others in Australia, where we are 30% faster than the next fastest provider:
Cloudflare establishes great peering relationships in countries around the world: in Australia we are locally peered with all of the major Australian Internet providers, and as such we are able to provide a fast experience to many users around the world. Globally, we are peered with over 12,000 networks, getting as close to end users as we can to shorten the time requests spend on the public Internet. This work has previously allowed us to deliver content quickly to users, but in a Zero Trust world, it shortens the path users take to get to their SWG, meaning they can quickly get to the services they need.
Previously when we performed these tests, we only tested from a single Azure region to five websites. Existing testing frameworks like Catchpoint are unsuitable for this task because performance testing requires that you run the SWG client on the testing endpoint. We also needed to make sure that all of the tests are running on similar machines in the same places to measure performance as well as possible. This allows us to measure the end-to-end responses coming from the same location where both test environments are running.
In our testing configuration for this round of evaluations, we put four VMs in 12 cloud regions side by side: one running Cloudflare WARP connecting to our gateway, one running ZIA, one running Netskope, and one running Palo Alto Networks. These VMs made requests every five minutes to the 11 different websites mentioned below and logged the HTTP browser timings for how long each request took. Based on this, we are able to get a user-facing view of performance that is meaningful. Here is a full matrix of locations that we tested from, what websites we tested against, and which provider was faster:
Endpoints tested: Shopify, Walmart, Zendesk, ServiceNow, Azure Site, Slack, Zoom, Box, M365, GitHub, Bitbucket.

Fastest provider per SWG region, listed in the endpoint order above. Results are omitted where a test did not report valid data (see the note below), so shorter rows do not map one-to-one to the endpoint list.

East US: Cloudflare, Cloudflare, Palo Alto Networks, Cloudflare, Palo Alto Networks, Cloudflare, Palo Alto Networks, Cloudflare
West US: Palo Alto Networks, Palo Alto Networks, Cloudflare, Cloudflare, Palo Alto Networks, Cloudflare, Palo Alto Networks, Cloudflare
South Central US: Cloudflare, Cloudflare, Palo Alto Networks, Cloudflare, Palo Alto Networks, Cloudflare, Palo Alto Networks, Cloudflare
Brazil South: Cloudflare, Palo Alto Networks, Palo Alto Networks, Palo Alto Networks, Zscaler, Zscaler, Zscaler, Palo Alto Networks, Cloudflare, Palo Alto Networks, Palo Alto Networks
UK South: Cloudflare, Palo Alto Networks, Palo Alto Networks, Palo Alto Networks, Palo Alto Networks, Palo Alto Networks, Palo Alto Networks, Cloudflare, Palo Alto Networks, Palo Alto Networks, Palo Alto Networks
Central India: Cloudflare, Cloudflare, Cloudflare, Palo Alto Networks, Palo Alto Networks, Cloudflare, Cloudflare, Cloudflare
Southeast Asia: Cloudflare, Cloudflare, Cloudflare, Cloudflare, Palo Alto Networks, Cloudflare, Cloudflare, Cloudflare
Canada Central: Cloudflare, Cloudflare, Palo Alto Networks, Cloudflare, Cloudflare, Palo Alto Networks, Palo Alto Networks, Palo Alto Networks, Zscaler, Cloudflare, Zscaler
Switzerland North: Netskope, Zscaler, Zscaler, Cloudflare, Netskope, Netskope, Netskope, Netskope, Cloudflare, Cloudflare, Netskope
Australia East: Cloudflare, Cloudflare, Netskope, Cloudflare, Cloudflare, Cloudflare, Cloudflare, Cloudflare
UAE Dubai: Zscaler, Zscaler, Cloudflare, Cloudflare, Zscaler, Netskope, Palo Alto Networks, Zscaler, Zscaler, Netskope, Netskope
South Africa North: Palo Alto Networks, Palo Alto Networks, Palo Alto Networks, Zscaler, Palo Alto Networks, Palo Alto Networks, Palo Alto Networks, Palo Alto Networks, Zscaler, Palo Alto Networks, Palo Alto Networks
Omitted results indicate that tests to that particular website did not report accurate results or experienced failures for over 50% of the testing period. Based on this data, Cloudflare is generally faster, but we’re not as fast as we’d like to be. There are still some areas where we need to improve, specifically in South Africa, UAE, and Brazil. By Birthday Week in September, we want to be the fastest to all of these websites in each of these regions, which will bring our number up from fastest in 54% of tests to fastest in 79% of tests.
To summarize, Cloudflare’s Gateway is still the fastest SWG on the Internet. But Zero Trust isn’t all about SWG. Let’s talk about how Cloudflare performs in Zero Trust Network Access scenarios.
Instant (Zero Trust) access
Access control needs to be seamless and transparent to the user: the best compliment for a Zero Trust solution is for employees to barely notice it’s there. Services like Cloudflare Access protect applications over the public Internet, allowing for role-based access controls instead of relying on something like a VPN to restrict and secure applications. This form of access management is more secure, and with a performant ZTNA service it can be faster, too.
Cloudflare outperforms our competitors in this space, being 46% faster than Zscaler, 56% faster than Netskope, and 10% faster than Palo Alto Networks:
Zero Trust Network Access P95 HTTP Response times
| Provider | P95 HTTP response (ms) |
| --- | --- |
| Cloudflare | 1252 |
| Zscaler | 2388 |
| Netskope | 2974 |
| Palo Alto Networks | 1471 |
For this test, we created applications hosted in three different clouds (AWS, GCP, and Azure) across 12 different locations. Palo Alto Networks was the exception: due to logistical challenges with setting up testing, we were only able to measure them against applications hosted in one cloud, in two regions (US East and Singapore).
For each of these applications, we created tests from Catchpoint that accessed the application from 400 locations around the world. Each of these Catchpoint nodes attempted two actions:
New Session: log into an application and receive an authentication token
Existing Session: refresh the page and log in passing the previously obtained credentials
We like to measure these scenarios separately, because when we look at 95th percentile values, we would almost always be looking at new sessions if we combined new and existing sessions together. For the sake of completeness though, we will also show the 95th percentile latency of both new and existing sessions combined.
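As a small illustration of why the split matters, the sketch below computes the 95th percentile for new sessions, existing sessions, and the combined set. The sample numbers are made up, and the `p95` helper uses a simple nearest-rank method rather than whatever interpolation a particular analytics tool applies.

```python
# Why new and existing sessions are aggregated separately: the combined P95 is
# dominated by the slower new-session samples.
import math

def p95(values: list[float]) -> float:
    """Nearest-rank 95th percentile."""
    ordered = sorted(values)
    rank = math.ceil(0.95 * len(ordered)) - 1
    return ordered[rank]

# (session_type, response_ms) pairs; illustrative values only
samples = [("new", 1800.0), ("existing", 950.0), ("new", 2400.0), ("existing", 1100.0)]

new_times = [ms for kind, ms in samples if kind == "new"]
existing_times = [ms for kind, ms in samples if kind == "existing"]

print("P95 new sessions:", p95(new_times))             # includes identity provider round trips
print("P95 existing sessions:", p95(existing_times))   # closer to pure proxy latency
print("P95 combined:", p95([ms for _, ms in samples])) # mostly reflects new sessions
```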
Cloudflare was faster in both US East and Singapore, but it’s worth spotlighting a couple of regions in more detail. Let’s start with a region where resources are heavily and equally interconnected across competitors: US East, specifically Ashburn, Virginia.
In Ashburn, Virginia, Cloudflare handily beats Zscaler and Netskope for ZTNA 95th percentile HTTP Response:
95th percentile HTTP Response times (ms) for applications hosted in Ashburn, VA
| Cloud | Provider | Total (ms) | New Sessions (ms) | Existing Sessions (ms) |
| --- | --- | --- | --- | --- |
| AWS East US | Cloudflare | 2849 | 1749 | 1353 |
| AWS East US | Zscaler | 5340 | 2953 | 2491 |
| AWS East US | Netskope | 6513 | 3748 | 2897 |
| AWS East US | Palo Alto Networks | | | |
| Azure East US | Cloudflare | 1692 | 989 | 1169 |
| Azure East US | Zscaler | 5403 | 2951 | 2412 |
| Azure East US | Netskope | 6601 | 3805 | 2964 |
| Azure East US | Palo Alto Networks | | | |
| GCP East US | Cloudflare | 2811 | 1615 | 1320 |
| GCP East US | Zscaler | | | |
| GCP East US | Netskope | 6694 | 3819 | 3023 |
| GCP East US | Palo Alto Networks | 2258 | 894 | 1464 |
You might notice that Palo Alto Networks appears to come out ahead of Cloudflare for new sessions (and therefore for the overall 95th percentile). But these numbers are misleading, because Palo Alto Networks’ ZTNA behavior is slightly different from ours, Zscaler’s, or Netskope’s: when it handles a new session, it performs a full connection intercept and returns a response from its own processors instead of directing users to the login page of the application they are trying to access.
This means that Palo Alto Networks’ new session response times don’t actually measure the end-to-end latency we’re looking for, so their new session and total latency numbers aren’t comparable; we can only meaningfully compare ourselves to them on existing session latency. For existing sessions, where Palo Alto Networks acts as a pass-through, Cloudflare still comes out ahead by 10%.
This is true in Singapore as well, where Cloudflare is 50% faster than Zscaler and Netskope, and also 10% faster than Palo Alto Networks for Existing Sessions:
95th percentile HTTP Response times (ms) for applications hosted in Singapore
| Cloud | Provider | Total (ms) | New Sessions (ms) | Existing Sessions (ms) |
| --- | --- | --- | --- | --- |
| AWS Singapore | Cloudflare | 2748 | 1568 | 1310 |
| AWS Singapore | Zscaler | 5349 | 3033 | 2500 |
| AWS Singapore | Netskope | 6402 | 3598 | 2990 |
| AWS Singapore | Palo Alto Networks | | | |
| Azure Singapore | Cloudflare | 1831 | 1022 | 1181 |
| Azure Singapore | Zscaler | 5699 | 3037 | 2577 |
| Azure Singapore | Netskope | 6722 | 3834 | 3040 |
| Azure Singapore | Palo Alto Networks | | | |
| GCP Singapore | Cloudflare | 2820 | 1641 | 1355 |
| GCP Singapore | Zscaler | 5499 | 3037 | 2412 |
| GCP Singapore | Netskope | 6525 | 3713 | 2992 |
| GCP Singapore | Palo Alto Networks | 2293 | 922 | 1476 |
One critique of this data could be that we’re aggregating the times of all Catchpoint nodes together at P95, and we’re not looking at the 95th percentile of Catchpoint nodes in the same region as the application. We looked at that, too, and Cloudflare’s ZTNA performance is still better. Looking at only North America-based Catchpoint nodes, Cloudflare performs 50% better than Netskope, 40% better than Zscaler, and 10% better than Palo Alto Networks at P95 for warm connections:
Zero Trust Network Access 95th percentile HTTP Response times for warm connections with testing locations in North America
| Provider | P95 HTTP response (ms) |
| --- | --- |
| Cloudflare | 810 |
| Zscaler | 1290 |
| Netskope | 1351 |
| Palo Alto Networks | 871 |
Finally, we wanted to show how well Cloudflare’s ZTNA performs per cloud provider and per region. The chart below shows the matrix of cloud providers and tested regions:
Fastest ZTNA provider in each cloud provider and region by 95th percentile HTTP Response
| Region | AWS | Azure | GCP |
| --- | --- | --- | --- |
| Australia East | Cloudflare | Cloudflare | Cloudflare |
| Brazil South | Cloudflare | Cloudflare | N/A |
| Canada Central | Cloudflare | Cloudflare | Cloudflare |
| Central India | Cloudflare | Cloudflare | Cloudflare |
| East US | Cloudflare | Cloudflare | Cloudflare |
| South Africa North | Cloudflare | Cloudflare | N/A |
| South Central US | N/A | Cloudflare | Zscaler |
| Southeast Asia | Cloudflare | Cloudflare | Cloudflare |
| Switzerland North | N/A | N/A | Cloudflare |
| UAE Dubai | Cloudflare | Cloudflare | Cloudflare |
| UK South | Cloudflare | Cloudflare | Netskope |
| West US | Cloudflare | Cloudflare | N/A |
Some VMs in some clouds malfunctioned and didn’t report accurate data (marked N/A above). But out of the 30 cloud and region combinations where we had accurate data, Cloudflare was the fastest Zero Trust provider in 28 of them, meaning we were fastest in 93% of the tested combinations.
To summarize, Cloudflare also provides the best experience when evaluating Zero Trust Network Access. But what about another piece of the puzzle: Remote Browser Isolation (RBI)?
Remote Browser Isolation: a secure browser hosted in the cloud
Remote browser isolation products depend heavily on the public Internet: if your connection to the isolation service isn’t good, your browsing experience will feel strange and slow. RBI is extraordinarily dependent on performance to feel smooth and seamless: when everything is as fast as it should be, users shouldn’t even notice that they’re using browser isolation.
For this test, we’re again pitting Cloudflare against Zscaler. While Netskope does have an RBI product, we were unable to test it because it requires a SWG client, which would have prevented us from using the same breadth of testing locations we used for Cloudflare and Zscaler. Our tests showed that Cloudflare is 64% faster than Zscaler for remote browsing scenarios. Here’s a matrix of the fastest provider per cloud and per region for our RBI tests:
Fastest RBI provider in each cloud provider and region by 95th percentile HTTP Response
| Region | AWS | Azure | GCP |
| --- | --- | --- | --- |
| Australia East | Cloudflare | Cloudflare | Cloudflare |
| Brazil South | Cloudflare | Cloudflare | Cloudflare |
| Canada Central | Cloudflare | Cloudflare | Cloudflare |
| Central India | Cloudflare | Cloudflare | Cloudflare |
| East US | Cloudflare | Cloudflare | Cloudflare |
| South Africa North | Cloudflare | Cloudflare | |
| South Central US | | Cloudflare | Cloudflare |
| Southeast Asia | Cloudflare | Cloudflare | Cloudflare |
| Switzerland North | Cloudflare | Cloudflare | Cloudflare |
| UAE Dubai | Cloudflare | Cloudflare | Cloudflare |
| UK South | Cloudflare | Cloudflare | Cloudflare |
| West US | Cloudflare | Cloudflare | Cloudflare |
This chart shows the results of all of the tests run against Cloudflare and Zscaler to applications hosted on three different clouds in 12 different locations from the same 400 Catchpoint nodes as the ZTNA tests. In every scenario, Cloudflare was faster. In fact, no test against a Cloudflare-protected endpoint had a 95th percentile HTTP Response of above 2105 ms, while no Zscaler-protected endpoint had a 95th percentile HTTP response of below 5000 ms.
To get this data, we leveraged the same VMs to host applications accessed through RBI services. Each Catchpoint node would attempt to log into the application through RBI, receive authentication credentials, and then try to access the page by passing the credentials. We look at the same new and existing sessions that we do for ZTNA, and Cloudflare is faster in both new sessions and existing session scenarios as well.
Gotta go fast(er)
Our Zero Trust customers want us to be fast, not because they want the fastest Internet access, but because they want to know that employee productivity won’t be impacted by switching to Cloudflare. That doesn’t necessarily mean that the most important thing for us is being faster than our competitors, although we are. The most important thing for us is improving our experience so that our users feel confident we take their experience seriously. When we publish new numbers for Birthday Week in September and we’re faster than we were before, it won’t mean that we simply made the numbers go up: it will mean that we are constantly evaluating and improving our service to provide the best experience for our customers. We care more that our customers in the UAE have an improved experience with Office 365 than about beating a competitor in a test. We show these numbers so you can see that we take performance seriously, and we’re committed to providing the best experience for you, wherever you are.
A collection of tools from Cloudflare One to help your teams use AI services safely
Cloudflare One gives teams of any size the ability to safely use the best tools on the Internet without management headaches or performance challenges. We’re excited to announce Cloudflare One for AI, a new collection of features that help your team build with the latest AI services while still maintaining a Zero Trust security posture.
Large Language Models, Larger Security Challenges
A Large Language Model (LLM), like OpenAI’s GPT or Google’s Bard, consists of a neural network trained against a set of data to predict and generate text based on a prompt. Users can ask questions, solicit feedback, and lean on the service to create output from poetry to Cloudflare Workers applications.
Conversations with these tools can also bear an uncanny resemblance to conversations with a real human, and as in some real-life personal conversations, oversharing can become a serious problem with these AI services. This risk multiplies due to the types of use cases where LLM models thrive. These tools can help developers solve difficult coding challenges or information workers create succinct reports from a mess of notes. While helpful, every input fed into a prompt becomes a piece of data leaving your organization’s control.
Some organizations have responded to tools like ChatGPT by trying to ban the service outright, whether at a corporate level or across an entire nation. We don’t think you should have to do that. Cloudflare One’s goal is to allow you to safely use the tools you need, wherever they live, without compromising performance. These features will feel familiar to anyone already using the Zero Trust products in Cloudflare One, but we’re excited to walk through cases where you can use the tools available right now to allow your team to take advantage of the latest LLM features.
Measure usage
SaaS applications make it easy for any user to sign up and start testing. That convenience also makes these tools a liability for IT budgets and security policies. Teams refer to this problem as “Shadow IT” – the adoption of applications and services outside the approved channels in an organization.
In terms of budget, we have heard from early adopter customers who know that their team members are beginning to experiment with LLMs, but they are not sure how to approach making a commercial licensing decision. What services and features do their users need and how many seats should they purchase?
On the security side, the AIs can be revolutionary for getting work done but terrifying for data control policies. Team members treat these AIs like sounding boards for painful problems. The services invite users to come with their questions or challenges. Sometimes the context inside those prompts can contain sensitive information that should never leave an organization. Even if teams select and approve a single vendor, members of your organization might prefer another AI and continue to use it in their workflow.
Cloudflare One customers on any plan can now review the usage of AIs. Your IT department can deploy Cloudflare Gateway and passively observe how many users are selecting which services as a way to start scoping out enterprise licensing plans.
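As a rough sketch of what that passive observation can feed into, the snippet below tallies unique users per AI service from an exported log file. The file name and field names are hypothetical placeholders for whatever your log export actually contains, and the domain list is an example set rather than an exhaustive catalog.

```python
# Illustrative only: count unique users per AI service from an exported log.
import json
from collections import defaultdict

AI_DOMAINS = {"chat.openai.com", "bard.google.com", "api.openai.com"}

users_per_service: dict[str, set[str]] = defaultdict(set)

with open("gateway_http_logs.jsonl") as fh:      # hypothetical JSON-lines export
    for line in fh:
        event = json.loads(line)
        host = event.get("host", "")             # hypothetical field name
        if host in AI_DOMAINS:
            users_per_service[host].add(event.get("user_email", "unknown"))

for host, users in sorted(users_per_service.items()):
    print(f"{host}: {len(users)} unique users")
```

A tally like this is enough to answer the licensing question above: which services your team actually uses, and how many seats would cover them.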
Administrators can also block the use of these services with a single click, but that is not our goal today. You might want to use this feature if you select ChatGPT as your approved model, and you want to make sure team members don’t continue to use alternatives, but we hope you don’t block all of these services outright. Cloudflare’s priority is to give you the ability to use these tools safely.
Control API access
When our teams began experimenting with OpenAI’s ChatGPT service, we were astonished by what it already knew about Cloudflare. We asked ChatGPT to create applications with Cloudflare Workers or guide us through how to configure a Cloudflare Access policy and, in most cases, the results were accurate and helpful.
In some cases the results missed the mark. The AIs were using outdated information, or we were asking questions about features that had only launched recently. Thankfully, these AIs can learn and we can help. We can train these models with scoped inputs and connect plug-ins to provide our customers with better AI-guided experiences when using Cloudflare services.
We heard from customers who want to do the same thing and, like us, they need to securely share training data and grant plug-in access for an AI service. Cloudflare One’s security suite extends beyond human users and can give teams the ability to securely share Zero Trust access to sensitive data over APIs.
First, teams can create service tokens that external services must present to reach data made available through Cloudflare One. Administrators can provide these tokens to systems making API requests and log every single request. As needed, teams can revoke these tokens with a single click.
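For example, a system fetching training data through an Access-protected hostname presents its service token on every request, roughly as in the sketch below. The hostname, path, and environment variable names are placeholders; the CF-Access-Client-Id and CF-Access-Client-Secret headers are the ones Access checks for service token authentication.

```python
# Minimal sketch: a non-human system presenting a Cloudflare Access service token.
import os
import requests

resp = requests.get(
    "https://training-data.example.com/exports/latest.jsonl",  # Access-protected app (placeholder)
    headers={
        "CF-Access-Client-Id": os.environ["ACCESS_CLIENT_ID"],
        "CF-Access-Client-Secret": os.environ["ACCESS_CLIENT_SECRET"],
    },
    timeout=30,
)
resp.raise_for_status()
print(f"Fetched {len(resp.content)} bytes of training data")
```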
After creating and issuing service tokens, administrators can create policies to allow specific services access to their training data. These policies will verify the service token and can be extended to verify country, IP address or an mTLS certificate. Policies can also be created to require human users to authenticate with an identity provider and complete an MFA prompt before accessing sensitive training data or services.
When teams are ready to allow an AI service to connect to their infrastructure, they can do so without poking holes in their firewalls by using Cloudflare Tunnel. Cloudflare Tunnel will create an encrypted, outbound-only connection to Cloudflare’s network where every request will be checked against the access rules configured for one or more services protected by Cloudflare One.
Cloudflare’s Zero Trust access control gives you the ability to enforce authentication on each and every request made to the data your organization decides to provide to these tools. That still leaves a gap in the data your team members might overshare on their own.
Restrict data uploads
Administrators can select an AI service, block Shadow IT alternatives, and carefully gate access to their training material, but humans are still involved in these AI experiments. Any one of us can accidentally cause a security incident by oversharing information in the process of using an AI service – even an approved service.
We expect AI playgrounds to continue to evolve to feature more data management capabilities, but we don’t think you should have to wait for that to begin adopting these services as part of your workflow. Cloudflare’s Data Loss Prevention (DLP) service can provide a safeguard to stop oversharing before it becomes an incident for your security team.
First, tell us what data you care about. We provide simple, preconfigured options that give you the ability to check for things that look like social security numbers or credit card numbers. Cloudflare DLP can also scan for patterns based on regular expressions configured by your team.
Once you have defined the data that should never leave your organization, you can build granular rules about how it can and cannot be shared with AI services. Maybe some users are approved to experiment with projects that contain sensitive data, in which case you can build a rule that only allows an Active Directory or Okta group to upload that kind of information while everyone else is blocked.
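As a conceptual illustration (not Cloudflare’s actual detection logic), the sketch below shows the kind of pattern matching a DLP profile performs on an outbound prompt before it leaves your organization. Real profiles add validation such as Luhn checks for card numbers and tie the verdict to user groups and policies; the regexes here are deliberately simplified.

```python
# Simplified illustration of DLP-style pattern matching on an outbound prompt.
import re

PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def findings(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

prompt = "Customer 123-45-6789 reported an issue with card 4111 1111 1111 1111"
hits = findings(prompt)
if hits:
    print("Blocked upload, matched profiles:", hits)  # block or log, per policy
```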
Control use without a proxy
The tools in today’s blog post focus on features that apply to data-in-motion. We also want to make sure that misconfigurations in the applications don’t lead to security violations. For example, the new plug-in feature in ChatGPT brings the knowledge and workflows of external services into the AI interaction flow. However, that can also lead to the services behind plug-ins having more access than you want them to have.
Cloudflare’s Cloud Access Security Broker (CASB) scans your SaaS applications for potential issues that can occur when users make changes. Whether it’s alerting you to files that someone accidentally made public on the Internet or checking that your GitHub repositories have the right membership controls, Cloudflare’s CASB removes the manual effort required to check each and every setting for potential issues in your SaaS applications.
We are also working on new integrations with popular AI services to check for misconfigurations; these will be available soon. Like most users of these services, we’re still learning where potential accidents can occur, and we are excited to provide administrators who use our CASB with our first wave of controls for AI services.
What’s next?
The usefulness of these tools will only accelerate. The ability of AI services to coach and generate output will continue to make it easier for builders from any background to create the next big thing.
We share a similar goal. The Cloudflare products focused on helping users build applications and services, like our Workers platform, remove hassles such as worrying about where to deploy your application or how to scale your services. Cloudflare solves those headaches so that users can focus on creating. Combined with AI services, we expect to see thousands of new builders launch the next wave of products built on Cloudflare and inspired by AI coaching and generation.
We have already seen dozens of projects flourish that were built on Cloudflare Workers using guidance from tools like ChatGPT. We plan to launch new integrations with these models to make this even more seamless, bringing better Cloudflare-specific guidance to the chat experience.
We also know that the security risk of these tools will grow. We will continue to bring functionality into Cloudflare One that aims to stay one step ahead of the risks as they evolve with these services. Ready to get started? Sign up here to begin using Cloudflare One at no cost for teams of up to 50 users.
Gartner has recognized Cloudflare in the 2023 “Gartner® Magic Quadrant™ for Security Service Edge (SSE)” report for its ability to execute and completeness of vision. We are excited to share that the Cloudflare Zero Trust solution, part of our Cloudflare One platform, is one of only ten vendors recognized in the report.
Of the 10 companies named to this year’s Gartner® Magic Quadrant™ report, Cloudflare is the only new vendor addition. You can read more about our position in the report and what customers say about using Cloudflare One here.
Cloudflare is also the newest vendor when measured by the time since our first products in the SSE space launched. We launched Cloudflare Access, our best-in-class Zero Trust access control product, a little less than five years ago. Since then, we have released hundreds of features and shipped nearly a dozen more products to create a comprehensive SSE solution that over 10,000 organizations trust to keep their data, devices, and teams both safe and fast. We moved that quickly because we built Cloudflare One on top of the same network that already secures and accelerates large segments of the Internet today.
We deliver our SSE services on the same servers and in the same locations that serve some of the world’s largest Internet properties. We combined existing advantages like the world’s fastest DNS resolver, Cloudflare’s serverless compute platform, and our ability to route and accelerate traffic around the globe. We might be new to the report, but customers who select Cloudflare One are not betting on an upstart provider; they are choosing an industry-leading solution made possible by a network that already secures millions of destinations and billions of users every day.
We are flattered by the recognition from Gartner this week and even more thrilled by the customer outcomes we make possible today. That said, we are not done and we are only going faster.
What is a Security Service Edge?
A Security Service Edge (SSE) “secures access to the web, cloud services and private applications. Capabilities include access control, threat protection, data security, security monitoring, and acceptable-use control enforced by network-based and API-based integration. SSE is primarily delivered as a cloud-based service, and may include on-premises or agent-based components.”1
The SSE space developed to meet organizations as they encountered a new class of security problems. Years ago, teams could keep their devices, services, and data safe by hiding from the rest of the world behind a figurative castle-and-moat. The defense perimeter for an enterprise corresponded to the literal walls of their office. Applications ran in server closets or self-managed data centers. Businesses could deploy firewalls, proxies, and filtering appliances in the form of on-premise hardware. Remote users suffered through the setup by backhauling their traffic through the physical office with a legacy virtual private network (VPN) client.
That model began to break down when applications started to leave the building. Teams began migrating to SaaS tools and public cloud providers. They could no longer control security by placing physical appliances in the flow of their one path to the Internet.
Meanwhile, users also left the office, placing stress on the ability of a self-managed private network to scale with the traffic. Performance and availability suffered while costs increased as organizations carried more traffic and deployed more bandaids to try and buy time.
Bad actors also evolved. Attacks became more sophisticated and exploited the migration away from a classic security perimeter. The legacy appliances deployed could not keep up with the changes in attack patterns and scale of attacks.
SSE vendors provide organizations with a cloud-based solution to those challenges. SSE providers deploy and maintain security services in their own points of presence or in a public cloud provider, giving enterprises a secure first hop before they connect to the rest of the Internet or to their internal tools. IT teams can deprecate the physical or virtual appliances that they spent days maintaining. Security teams benefit from filtering and policies that update constantly to defend against new threats.
Some SSE features target remote access replacement by offering customers the ability to connect users to internal tools with Zero Trust access control rules. Other parts of an SSE platform focus on applying Zero Trust scrutiny to the rest of the Internet, replacing the on-premise filtering appliances of an enterprise with cloud-based firewalls, resolvers, and proxies that filter and log traffic leaving a device closer to the user instead of forcing a backhaul to a centralized location.
What about SASE?
You might also be familiar with the term Secure Access Service Edge (SASE). We hear customers talk about their “SASE” goals more often than “SSE” alone. SASE extends the definition of SSE to include managing the connectivity of the traffic being secured. Network-as-a-Service vendors help enterprises connect their users, devices, sites, and services. SSE providers secure that traffic.
Most vendors focus on one side of the equation. Network-as-a-service companies sell software-defined wide area network (SD-WAN), interconnection, and traffic optimization solutions to help enterprises manage and accelerate connectivity, but those enterprises wind up losing those benefits by sending all that traffic to an SSE provider for filtering. SSE providers deliver security tools for traffic of nearly any type, but they still need customers to buy additional networking services to get that traffic to their locations.
Cloudflare One is a single vendor SASE platform. Cloudflare offers enterprises a comprehensive network-as-a-service where teams can send all traffic to Cloudflare’s network, where we can help teams manage connectivity and improve performance. Enterprises can choose from flexible on-ramps, like their existing hardware routers, agents running on laptops and mobile devices, physical and virtual interconnects, or Cloudflare’s own last mile connector.
When that traffic reaches Cloudflare’s network, our SSE services apply security filtering in the same locations where we manage and route connectivity. Cloudflare’s SSE solution does not add additional hops; we deliver filtering and logging in-line with the traffic we accelerate for our customers. The value of our single vendor SASE solution is just another outcome of an obsession we’ve had since we first launched our reverse proxy over ten years ago: customers should not have to compromise performance for security and vice versa.
So where does Cloudflare One fit?
Cloudflare One connects enterprises to the tools they need while securing their devices, applications and data without compromising on performance. The platform consists of two primary components: our Cloudflare Zero Trust products, which represent our SSE offering, and our network-as-a-service solution. As much as today’s announcement separates out those features, we prefer to talk about how they work together.
Cloudflare’s network-as-a-service offering, our Magic WAN solution, extends our network for customers to use as their own. Enterprises can take advantage of the investments we have made over more than a decade to build out one of the world’s most peered, most performant, and most available networks. Teams can connect individual roaming devices, offices and physical sites, or entire networks and data centers through Cloudflare to the rest of the Internet or internal destinations.
We want to make it as easy as possible for customers to send us their traffic, so we provide many flexible “on-ramps” to easily fit into their existing infrastructure. Enterprises can use our roaming agent to connect user devices, our Cloudflare Tunnel service for application-level connectivity, network-level tunnels from our Magic WAN Connector or their existing router or SD-WAN hardware, and/or direct physical or virtual interconnections for dedicated connectivity to on-prem or cloud infrastructure at 1,600+ locations around the world. When packets arrive at the closest Cloudflare location, we provide optimization, acceleration and logging to give customers visibility into their traffic flows.
Instead of sending that accelerated traffic to an additional intermediary for security filtering, our Cloudflare Zero Trust platform can take over to provide SSE security filtering in the same location – generally on the exact same server – as our network-as-a-service functions. Enterprises can pick and choose what SSE features they want to enable to strengthen their security posture over time.
Cloudflare One and the SSE feature set
The security features inside of Cloudflare One provide comprehensive SSE coverage to enterprises operating at any scale. Customers just need to send traffic to a Cloudflare location within a few milliseconds of their users and Cloudflare Zero Trust handles everything else.
Cloudflare One SSE Capabilities
Zero Trust Access Control: Cloudflare provides a Zero Trust VPN replacement for teams that host and control their own resources. Customers can deploy a private network inside of Cloudflare’s network for more traditional connectivity or extend access to contractors without any agent required. Regardless of how users connect, and for any type of destination they need, Cloudflare’s network gives administrators the ability to build granular rules on a per-resource or global basis. Teams can combine one or more identity providers, device posture inputs, and other sources of signal to determine when and how a user should be able to connect.
Organizations can also extend these types of Zero Trust access control rules to the SaaS applications where they do not control the hosting by introducing Cloudflare’s identity proxy into the login flow. They can continue to use their existing identity provider but layer on additional checks like device posture, country, and multifactor method.
DNS filtering: Cloudflare’s DNS filtering solution runs on the world’s fastest DNS resolver, filtering and logging the DNS queries leaving individual devices or some of the world’s largest networks.
Network firewall: Organizations that maintain on-premise hardware firewalls or cloud-based equivalents can deprecate their boxes by sending traffic through Cloudflare, where our firewall-as-a-service can filter and log traffic. Our Network Firewall includes L3-L7 filtering, Intrusion Detection, and direct integrations with our Threat Intelligence feeds and the rest of our SSE suite. It enables security teams to build sophisticated policies without any of the headaches of traditional hardware: no capacity or redundancy planning, no throughput restrictions, no manual patches or upgrades.
Secure Web Gateway: Cloudflare’s Secure Web Gateway (SWG) service inspects, filters, and logs traffic in a Cloudflare PoP close to a user regardless of where they work. The SWG can block HTTP requests bound for dangerous destinations, scan traffic for viruses and malware, and control how traffic routes to the rest of the Internet without the need for additional hardware or virtualized services.
In-line Cloud Access Security Broker and Shadow IT: The proliferation of SaaS applications can help teams cut costs but poses a real risk; sometimes users prefer tools other than the ones selected by their IT or Security teams. Cloudflare’s in-line Cloud Access Security Broker (CASB) gives administrators the tools to make sure employees use SaaS applications as intended. Teams can build tenant control rules that restrict employees from logging into personal accounts, policies that only allow file uploads of certain types to approved SaaS applications, and filters that restrict employees from using unapproved services.
Cloudflare’s “Shadow IT” service scans and catalogs user traffic to the Internet to help IT and Security teams detect and monitor the unauthorized use of SaaS applications. For example, teams can ensure that their approved cloud storage is the only place where users can upload materials.
API-driven Cloud Access Security Broker: Cloudflare’s superpower is our network, but sometimes the worst attacks start with data sitting still. Teams that adopt SaaS applications can share work products and collaborate together from any location; that same convenience makes it simple for mistakes or bad actors to cause a serious data breach.
In some cases, employees might overshare a document with sensitive information by selecting the wrong button in the “Share” menu. With just one click, a spreadsheet with customer contact data could become public on the Internet. In other situations, users might share a report with their personal account without realizing they just violated internal compliance rules.
Regardless of how the potential data breach started, Cloudflare’s API-driven CASB constantly scans the SaaS applications that your team uses for potential misconfiguration and data loss. Once detected, Cloudflare’s CASB will alert administrators and provide a comprehensive guide to remediating the incident.
Data Loss Prevention: Cloudflare’s Data Loss Prevention service scans traffic to detect and block potential data loss. Administrators can select from common precreated profiles, like social security numbers or credit card numbers, create their own criteria using regular expressions, or integrate with existing Microsoft Information Protection labels.
Remote Browser Isolation: Cloudflare’s browser isolation service runs a browser inside of our network, in a data center just milliseconds from the user, and sends the vector rendering of the web page to the local device. Team members can use any modern browser and, unlike other approaches, the Internet just feels like the Internet. Administrators can isolate sites on the fly, choosing to only isolate unknown destinations or providing contractors with an agentless workstation. Security teams can add additional protection like blocking copy-paste or printing.
Security beyond the SSE
Many of the customers who talk to us about their SSE goals are not ready to begin adopting every security service in the category from Day 1. Instead, they tend to have strategic SSE goals and tactical immediate problems. That’s fine. We can meet customers wherever they begin on their journey and sometimes that journey starts with pain points that sit just a bit outside of the current SSE definition. We can help in those areas, too.
Many of the types of attacks that an SSE model aims to prevent begin with email, but email falls outside of the traditional SSE definition. Attackers will target specific employees or entire workforces with phishing links or malware that the default filtering available from email providers today misses.
We want to help customers stop these attacks at the inbox before SSE features like DNS or SWG filtering need to apply. Cloudflare One includes industry-leading email security through our Area 1 product to protect teams regardless of their email provider. Area 1 is not just a standalone solution bundled into our SSE; Cloudflare Zero Trust features work better together alongside Area 1. Suspicious emails can open links in an isolated browser, for example, to give customers a defense-in-depth security model without the risk of more IT help desk tickets.
Cloudflare One customers can also take advantage of another Gartner-recognized platform in Cloudflare, our application security suite. Cloudflare’s industry-leading application security features, like our Web Application Firewall and DDoS mitigation service, can be deployed in-line with our Zero Trust security features. Teams can add bot management alerts, API protection, and faster caching to their internal tools with a single click.
Why Cloudflare?
Over 10,000 organizations trust Cloudflare One to connect and secure their enterprise. Cloudflare One helps protect and accelerate teams from the world’s largest IT organization, the US Federal Government, to thousands of small groups who rely on our free plan. A couple of months ago we spoke with customers as part of our CIO Week to listen to the reasons they select Cloudflare One. Their feedback followed a few consistent themes.
1) Cloudflare One delivers more complete security. Nearly every SSE vendor offers improved security compared to a traditional castle-and-moat model, but that is a low bar. We built the security features in Cloudflare One to be best in class. Our industry-leading access control solution provides more built-in options to control who can connect to the tools that power your business.
We partner with leading identity providers and endpoint protection platforms, like Microsoft and CrowdStrike, to provide a Zero Trust VPN replacement that is better than anything else on the market. On the outbound filtering side, every filtering option relies on threat intelligence gathered and curated by Cloudforce One, our dedicated threat research team.
2) Cloudflare One makes your team faster. Cloudflare One accelerates your end users from the first moment they connect to the Internet, starting with the world’s fastest DNS resolver. End users send those DNS queries and establish connectivity over a secure tunnel optimized with feedback from the millions of users who rely on our popular consumer forward proxy. Entire sites connect through a variety of tunnel options to Cloudflare’s network, where we are the fastest connectivity provider in more of the world’s 3,000 largest networks than any other provider.
We compete with and measure ourselves against pure connectivity providers. When we measure ourselves against pure SSE providers, like Zscaler, we significantly outperform them, by 38% to 59% depending on the use case.
3) Cloudflare One is easier to manage. The Cloudflare Zero Trust products are unique in the SSE market in that we offer a free plan that covers nearly every feature. We make these services available at no cost to groups of up to 50 users because we believe that security on the Internet should be accessible to anyone on any budget.
A consequence of that commitment is that we had to build products that are easy to use. Unlike other SSE providers, who only sell to the enterprise and can rely on large systems integrators for deployment, we had to create a solution that any team could deploy, from human rights organizations without full-time IT departments to startups that want to spend more time building and less time worrying about vulnerabilities.
We also know that administrators want more options than just an intuitive dashboard. We provide API support for managing every Cloudflare One feature, and we maintain a Terraform provider for teams that need the option for peer reviewed configuration-as-code management.
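As one hedged example of what configuration-as-code against the API can look like, the sketch below lists Access applications in an account using an API token. The environment variable names are placeholders, and the endpoint shown follows the account-scoped pattern of the Cloudflare v4 API; the Terraform provider wraps the same underlying API.

```python
# Sketch: read Cloudflare One (Access) configuration via the API instead of the dashboard.
import os
import requests

API_BASE = "https://api.cloudflare.com/client/v4"
ACCOUNT_ID = os.environ["CF_ACCOUNT_ID"]

resp = requests.get(
    f"{API_BASE}/accounts/{ACCOUNT_ID}/access/apps",
    headers={"Authorization": f"Bearer {os.environ['CF_API_TOKEN']}"},
    timeout=30,
)
resp.raise_for_status()
for app in resp.json().get("result", []):
    print(app.get("name"), "->", app.get("domain"))
```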
4) Cloudflare One is the most cost-efficient comprehensive SASE offering. Cloudflare is responsible for delivering and securing millions of websites on the Internet every day. To support that volume of traffic, we had to build our network for scale and cost-efficiency.
The largest enterprises’ internal network traffic does not (yet) match the volume of even moderately popular Internet properties. When those teams send traffic to Cloudflare One, we rely on the same hardware and the same data centers that power our application services business to apply security and networking features. As a result, we can help deliver comprehensive security to any team at a price point that is made possible by our existing investment in our network.
5) Cloudflare can be your single, consolidated security vendor. Cloudflare One is only the most recent part of the Cloudflare platform to be recognized in industry analyst reports. In 2022, Gartner named Cloudflare a Leader in Web Application and API Protection (WAAP). When customers select Cloudflare to solve their SSE challenges, they have the opportunity to add best-in-class solutions, all from the same vendor.
Dozens of independent analyst firms continue to recognize Cloudflare for our ability to deliver results to our customers on services ranging from DDoS protection, CDN and edge computing to bot management.
What’s next?
When customers choose Cloudflare One, they trust our network to secure the most sensitive aspects of their enterprise without slowing down their business. We are grateful to the more than 10,000 organizations who have selected us as their vendor in the last five years, from small teams on our free plan to Fortune 500 companies and government agencies.
Today’s announcement only accelerates the momentum in Cloudflare One. We are focused on building the next wave of security and connectivity features our customers need to focus on their own mission. We’re going to keep going faster to help more and more organizations. Want to get started on that journey with us? Let us know here and we’ll reach out.
Gartner, “Magic Quadrant for Security Service Edge”, Analyst(s): Charlie Winckless, Aaron McQuaid, John Watts, Craig Lawson, Thomas Lintemuth, Dale Koeppen, April 10, 2023.
GARTNER is a registered trademark and service mark of Gartner and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.
Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Security Week 2023 is officially in the books. In our welcome post last Saturday, I talked about Cloudflare’s years-long evolution from protecting websites, to protecting applications, to protecting people. Our goal this week was to help our customers solve a broader range of problems, reduce external points of vulnerability, and make their jobs easier.
We announced 34 new tools and integrations that will do just that. Combined, these announcements will help you do five key things faster and easier:
Making it easier to deploy and manage Zero Trust everywhere
Reducing the number of third parties customers must use
Leveraging machine learning to let humans focus on critical thinking
Opening up more proprietary Cloudflare threat intelligence to our customers
Making it harder for humans to make mistakes
And to help you respond to the most current attacks in real time, we reported on how we’re seeing scammers use the Silicon Valley Bank news to phish new victims, and what you can do to protect yourself.
In case you missed any of the announcements, take a look at the summary and navigation guide below.
Today we have released insights from our global network on the top 50 brands used in phishing attacks coupled with the tools customers need to stay safer. Our new phishing and brand protection capabilities, part of Security Center, let customers better preserve brand trust by detecting and even blocking “confusable” and lookalike domains involved in phishing campaigns.
Phishing attacks come in all sorts of ways to fool people. Email is definitely the most common, but there are others. Following up on our Top 50 brands in phishing attacks post, here are some tips to help you catch these scams before you fall for them.
Page Shield now ensures that only vetted and secure JavaScript is executed by browsers, stopping unwanted or malicious JavaScript from loading and keeping end user data safer.
With Aegis, customers can now get dedicated IPs from Cloudflare that we use to send them traffic. This allows customers to lock down services and applications at an IP level and build a protected environment that is application-, protocol-, and even IP-aware.
mTLS support for Workers allows for communication with resources that enforce an mTLS connection. mTLS provides greater security for those building on Workers, because they can identify and authenticate both the client and the server, which helps protect sensitive data.
We have introduced an innovative new approach to secure hosted applications via Cloudflare Access without the need for any installed software or custom code on application servers.
Cloudflare is excited to launch the Descaler Program, a frictionless path to migrate existing Zscaler customers to Cloudflare One. With this announcement, Cloudflare is making it even easier for enterprise customers to make the switch to a faster, simpler, and more agile foundation for security and network transformation.
Today we’re excited to announce the expansion of support for automated normalization and correlation of Zero Trust logs for Logpush in Sumo Logic’s Cloud SIEM. Joint customers will reduce alert fatigue and accelerate the triage process by converging security and network data into high-fidelity insights.
Cloudflare One now offers Data Loss Prevention (DLP) detections for Microsoft Purview Information Protection labels. This extends the power of Microsoft’s labels to any of your corporate traffic in just a few clicks.
We are unveiling two new integrations for Cloudflare CASB: one for Atlassian Confluence and the other for Atlassian Jira. Security teams can begin scanning for Confluence- and Jira-specific security issues that may be leaving sensitive corporate data at risk.
Cloudflare Access and Ping Identity offer a powerful solution for organizations looking to implement Zero Trust security controls to protect their applications and data. Cloudflare now offers full integration support, so Ping Identity customers can easily integrate their identity management solutions with Cloudflare Access to provide a comprehensive security solution for their applications.
We are excited to announce Cloudflare Fraud Detection, which will provide precise, easy-to-use tools that can be deployed in seconds to detect and categorize fraud such as fake account creation, card testing, and fraudulent transactions. Fraud Detection will be in early access later this year; those interested can sign up here.
Customers can use these new features to enforce a positive security model on their API endpoints even if they have little-to-no information about their existing APIs today.
With our new Cloudflare Sequence Analytics for APIs, organizations can view the most important sequences of API requests to their endpoints to better understand potential abuse and where to apply protections first.
Read our post on how we keep users and organizations safer with machine learning models that detect attackers attempting to evade detection with DNS tunneling and domain generation algorithms.
We are making the machine learning-powered WAF and Security Analytics view available to our Business plan customers, to help detect and stop attacks before they are known.
We have made Cloudflare Radar’s newest free tool, URL Scanner, available, providing an under-the-hood look at any webpage to make the Internet more transparent and secure for all.
One of our core beliefs is that privacy is a human right. To achieve that right, we are announcing that our implementations of post-quantum cryptography will be available to everyone, free of charge, forever.
The recent news reports of AI cracking post-quantum cryptography are greatly exaggerated. In this blog, we take a deep dive into the world of side-channel attacks and how AI has already been used for more than a decade to aid them.
IBM and Cloudflare continue to partner to help customers meet the unique security, performance, resiliency, and compliance needs of their customers through the addition of exciting new product and service offerings.
Customers will now be able to use our Cloudflare Tunnels product to send traffic to the key server through a secure channel, without publicly exposing it to the rest of the Internet.
Brand impersonation continues to be a big problem globally. Setting SPF, DKIM and DMARC policies is a great way to reduce that risk, and protect your domains from being used in spoofing emails. But maintaining a correct SPF configuration can be very costly and time consuming, and that’s why we’re launching Cloudflare DMARC Management.
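For illustration only, and separate from DMARC Management itself, here is a small sketch of inspecting a domain’s existing SPF and DMARC TXT records with the dnspython package; example.com is a placeholder domain.

```python
# Check whether a domain publishes SPF and DMARC records (requires dnspython).
import dns.resolver

def txt_records(name: str) -> list[str]:
    try:
        return [rdata.to_text().strip('"') for rdata in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

domain = "example.com"
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

print("SPF:", spf or "missing")
print("DMARC:", dmarc or "missing")
```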
At Cloudflare, we use the Workers platform and our product stack to build new services. Read how we made the new DMARC Management solution entirely on top of our APIs.
Cloudflare’s cloud email security solution now integrates with KnowBe4, allowing mutual customers to offer real-time coaching to employees when a phishing campaign is detected by Cloudflare.
We are excited to announce new options to customize the user experience in Access, including customizable login pages, block pages, and the application launcher.
Cloudflare Access is 75% faster than Netskope and 50% faster than Zscaler, and our network is faster than other providers in 48% of last mile networks.
Cloudflare announces one-click ISO certified region, a super easy way for customers to limit where traffic is serviced to ISO 27001 certified data centers inside the European Union.
All WAF customers will benefit from Account Security Analytics and Events. These give organizations a new, holistic view of activity across their entire account in the Cloudflare dashboard. No matter how many zones you manage, they are all there!
We are thrilled to announce full support for wildcard and multi-hostname application definitions in Cloudflare Access. Until now, Access had limitations that restricted applications to a single hostname or a limited set of wildcards.
While that’s it for Security Week 2023, you all know by now that Innovation weeks never end for Cloudflare. Stay tuned for a week full of new developer tools coming soon, and a week dedicated to making the Internet faster later in the year.
During every Innovation Week, Cloudflare looks at our network’s performance versus our competitors’. In past weeks, we’ve focused on how much faster we are compared to reverse proxies like Akamai, or platforms that sell serverless compute comparable to our Supercloud, like Fastly and AWS. This week, we’d like to provide an update on how we compare to other reverse proxies, as well as an update on our Zero Trust access comparison against Zscaler and Netskope. That product is part of our Zero Trust platform, which helps secure applications and Internet experiences out to the public Internet, as opposed to our reverse proxy, which protects your websites from outside users.
In addition to our previous post showing how our Zero Trust platform compares against Zscaler, we have also shared extensive network benchmarking results for reverse proxies from 3,000 last mile networks around the world. It’s been a while since we’ve shown you our progress towards being #1 in every last mile network, so we want to revisit that data as well as our series of tests comparing Cloudflare Access to Zscaler Private Access and Netskope Private Access. For our overall network tests, Cloudflare is #1 in 47% of the top 3,000 most reported networks. For our Zero Trust application access tests, Cloudflare is 50% faster than Zscaler and 75% faster than Netskope.
In this blog we’re going to talk about why performance matters for our products, do a deep dive on what we’re measuring to show that we’re faster, and we’ll talk about how we measured performance for each product.
Why does performance matter?
We talked about it in our last blog, but performance matters because it impacts your employees’ experience and their ability to get their job done. Whether it’s accessing services through access control products, connecting out to the public Internet through a Secure Web Gateway, or securing risky external sites through Remote Browser Isolation, all of these experiences need to be frictionless.
A quick example: say Bob at Acme Corporation is connecting from Johannesburg out to Slack or Zoom to get some work done. If Acme’s Secure Web Gateway is located far away from Bob, in London, then Bob’s traffic may go out of Johannesburg to London and then back again before reaching those services. If Bob tries to do something like a voice call on Slack or Zoom, his performance may be painfully slow while his traffic takes that detour. Zoom and Slack both recommend low latency for optimal performance, and the extra hop Bob has to take through his gateway can decrease throughput and increase his latency, giving Bob a bad experience.
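As a back-of-the-envelope illustration, with made-up but plausible round-trip times, the arithmetic looks something like this:

```python
# Rough model of the backhaul penalty: illustrative round-trip times only.
DIRECT_JNB_TO_APP_MS = 40    # Johannesburg to a nearby Slack/Zoom edge
JNB_TO_LONDON_RTT_MS = 160   # extra round trip to a far-away gateway

backhauled = JNB_TO_LONDON_RTT_MS + DIRECT_JNB_TO_APP_MS  # hairpin through London first
print(f"direct: {DIRECT_JNB_TO_APP_MS} ms, via a London gateway: {backhauled} ms")
# Every request Bob makes pays that extra ~160 ms before any filtering even happens.
```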
As we’ve discussed before, if these products or experiences are slow, then something worse might happen than your users complaining: they may find ways to turn off the products or bypass them, which puts your company at risk. A Zero Trust product suite is completely ineffective if no one is using it because it’s slow. Ensuring Zero Trust is fast is critical to the effectiveness of a Zero Trust solution: employees won’t want to turn it off and put themselves at risk if they barely know it’s there at all.
Much like Zscaler, Netskope may outperform many older, antiquated solutions, but their network still fails to measure up to a highly performant, optimized network like Cloudflare’s. We’ve tested all of our Zero Trust products against Netskope equivalents, and we’re even bringing back Zscaler to show you how Zscaler compares against them as well. So let’s dig into the data and show you how and why we’re faster in a critical Zero Trust scenario, comparing Cloudflare Access to Zscaler Private Access and Netskope Private Access.
Cloudflare Access: the fastest Zero Trust proxy
Access control needs to be seamless and transparent to the user: the best compliment for a Zero Trust solution is that employees barely notice it’s there. These services allow users to cache authentication information on the provider network, ensuring applications can be accessed securely and quickly to give users the seamless experience they want. So having a network that minimizes the number of logins required while also reducing the latency of your application requests will help keep your Internet experience snappy and reactive.
Cloudflare Access does all that 75% faster than Netskope and 50% faster than Zscaler, ensuring that no matter where you are in the world, you’ll get a fast, secure application experience:
Cloudflare measured application access across Cloudflare, Zscaler, and Netskope from 300 different locations around the world connecting to six distinct application servers in Hong Kong, Toronto, Johannesburg, São Paulo, Phoenix, and Switzerland. In each of these locations, Cloudflare’s P95 response time was faster than Zscaler’s and Netskope’s. Let’s take a look at the data when the application is hosted in Toronto, an area where Zscaler and Netskope should do well since it sits in a heavily interconnected region: North America.
ZT Access – Response time (95th Percentile) – Toronto
| Provider | 95th Percentile Response (ms) |
| --- | --- |
| Cloudflare | 2,182 |
| Zscaler | 4,071 |
| Netskope | 6,072 |
Cloudflare really stands out in regions with more diverse connectivity options, like South America or Asia Pacific, where Zscaler compares more closely to Netskope than it does to Cloudflare.
When we look at application servers hosted locally in South America, Cloudflare stands out:
ZT Access – Response time (95th Percentile) – South America
| Provider | 95th Percentile Response (ms) |
| --- | --- |
| Cloudflare | 2,961 |
| Zscaler | 9,271 |
| Netskope | 8,223 |
Cloudflare’s network shines here, allowing us to ingress connections close to the users. You can see this by looking at the Connect times in South America:
ZT Access – Connect time (95th Percentile) – South America
| Provider | 95th Percentile Connect (ms) |
| --- | --- |
| Cloudflare | 369 |
| Zscaler | 1,753 |
| Netskope | 1,160 |
Cloudflare’s network sets us apart here because we’re able to get users onto our network faster and find the optimal routes around the world back to the application host. Because of that superpower, our connect times here are nearly five times faster than Zscaler’s and roughly three times faster than Netskope’s, and across all the different tests, Cloudflare’s connect times were consistently faster at all 300 testing nodes.
In our last blog, we looked at two distinct scenarios that need to be measured individually when we compared Cloudflare and Zscaler. The first scenario is when a user logs into their application and has to authenticate. In this case, the Zero Trust Access service will direct the user to a login page, the user will authenticate, and then be redirected to their application.
This is called a new session, because no authentication information is cached or exists on the Access network. The second scenario is called an existing session, when a user has already been authenticated and that authentication information can be cached. This scenario is usually much faster, because it doesn’t require an extra call to an identity provider to complete.
We like to measure these scenarios separately, because when we look at 95th percentile values, we would almost always be looking at new sessions if we combined new and existing sessions together. But across both scenarios, Cloudflare is consistently faster in every region. Let’s go back to an application hosted in Toronto, where users connect faster through Cloudflare than through Zscaler or Netskope for both new and existing sessions.
ZT Access – Response Time (95th Percentile) – Toronto
| Provider | New Sessions (ms) | Existing Sessions (ms) |
| --- | --- | --- |
| Cloudflare | 1,276 | 1,022 |
| Zscaler | 2,415 | 1,797 |
| Netskope | 5,741 | 1,822 |
You can see that new sessions are generally slower as expected, but Cloudflare’s network and optimized software stack provides a consistently fast user experience. In scenarios where end-to-end connectivity can be more challenging, Cloudflare stands out even more. Let’s take a look at users in Asia connecting through to an application in Hong Kong.
ZT Access – Response Time (95th Percentile) – Hong Kong

Provider      New Sessions (ms)    Existing Sessions (ms)
Cloudflare    2,582                2,075
Zscaler       4,956                3,617
Netskope      5,139                3,902
One interesting thing that stands out here is that while Cloudflare’s network is hyper-optimized for performance, Zscaler more closely compares to Netskope on performance than it does to Cloudflare. Netskope also performs poorly on new sessions, which indicates that its service does not react well when users are establishing new sessions.
We like to separate these new and existing sessions because it’s important to look at similar request paths to do a proper comparison. For example, if we’re comparing a request via Zscaler on an existing session and a request via Cloudflare on a new session, we could see that Cloudflare was much slower than Zscaler because of the need to authenticate. So when we contracted a third party to design these tests, we made sure that they took that into account.
For these tests, Cloudflare configured five application instances hosted in Toronto, Los Angeles, São Paulo, and Hong Kong. Cloudflare then used 300 different Catchpoint nodes from around the world to mimic a browser login as follows (a simplified sketch of this flow appears after the list):
User connects to the application from a browser mimicked by a Catchpoint instance – new session
User authenticates against their identity provider
User accesses resource
User refreshes the browser page and tries to access the same resource but with credentials already present – existing session
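To make that flow concrete, here is a heavily simplified, hedged approximation in Python. The real tests run in browsers from Catchpoint vantage points and complete an interactive identity provider login; this sketch only times two successive requests that share a cookie jar, and the application URL is a placeholder.

```python
# A heavily simplified approximation of the new-session vs existing-session
# measurement flow. The application URL is hypothetical, and the interactive
# identity provider login performed by the real tests is omitted here.
import time
import requests

APP_URL = "https://app.example.com/protected/resource"  # hypothetical app

def timed_get(session: requests.Session, url: str) -> float:
    """Return the total time in milliseconds to fetch the URL, following redirects."""
    start = time.perf_counter()
    response = session.get(url, allow_redirects=True, timeout=30)
    response.raise_for_status()
    return (time.perf_counter() - start) * 1000

session = requests.Session()

# First request: no cached authentication state, so the Zero Trust service
# redirects through a login flow (the "new session" scenario).
new_session_ms = timed_get(session, APP_URL)

# Second request: cookies set during the first request are reused, so no
# extra round trip to the identity provider is needed ("existing session").
existing_session_ms = timed_get(session, APP_URL)

print(f"new session: {new_session_ms:.0f} ms, existing session: {existing_session_ms:.0f} ms")
```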
This allows us to look at Cloudflare versus all the other products for application performance for both new and existing sessions, and we’ve shown that we’re faster. As we’ve mentioned, a lot of that is due to our network and how we get close to our users. So now we’re going to talk about how we compare to other large networks and how we get close to you.
Network effects make the user experience better
Getting closer to users improves the last mile Round Trip Time (RTT). As we discussed in the Access comparison, having a low RTT improves customer performance because new and existing sessions don’t have to travel very far to get to Cloudflare’s Zero Trust network. Embedding ourselves in these last mile networks helps us get closer to our users, which doesn’t just help Zero Trust performance, it helps web performance and developer performance, as we’ve discussed in prior blogs.
To quantify network performance, we have to get enough data from around the world, across all manner of different networks, comparing ourselves with other providers. We used Real User Measurements (RUM) to fetch a 100 kB file from several different providers. Users around the world report the performance of different providers. The more users who report the data, the higher the fidelity of the signal. The goal is to provide an accurate picture of where different providers are faster and, more importantly, where Cloudflare can improve. You can read more about the methodology in the original Speed Week 2021 blog post here.
We are constantly going through the process of figuring out why we were slow — and then improving. The challenges we faced were unique to each network and highlighted a variety of issues that are prevalent on the Internet. Below is an overview of some of the efforts we have made to improve performance for our users.
But before we do, here are the results of our efforts since Developer Week 2022, the last time we showed off these numbers. Out of the top 3,000 networks in the world (by number of IPv4 addresses advertised), here’s a breakdown of the number of networks where each provider is number one in p95 TCP Connection Time, which represents the time it takes for a user on a given network to connect to the provider:
Here’s what those numbers look like as of this week, Security Week 2023:
As you can see, Cloudflare has extended its lead in being faster in more networks, while other networks that previously were faster like Akamai and Fastly lost their lead. This translates to the effects we see on the World Map. Here’s what that world map looked like in Developer Week 2022:
Here’s how that world map looks today during Security Week 2023:
As you can see, Cloudflare has gotten faster in Brazil, many countries in Africa including South Africa, Ethiopia, and Nigeria, as well as Indonesia in Asia, and Norway, Sweden, and the UK in Europe.
A lot of these countries benefited from the Edge Partner Program that we discussed in the Impact Week blog. A quick refresher: the Edge Partner Program encourages last mile ISPs to partner with Cloudflare to deploy Cloudflare locations that are embedded in the last mile ISP. This improves the last mile RTT and improves performance for things like Access. Since we last showed you this map, Cloudflare has deployed more partner locations in places like Nigeria and Saudi Arabia, which have improved performance for users in all scenarios. Efforts like the Edge Partner Program help improve not just the Zero Trust scenarios like we described above, but also the general web browsing experience for end users who use websites protected by Cloudflare.
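As an aside on methodology, here is a hedged sketch of how a per-network breakdown like the one described above can be computed from RUM samples: for each network, find the provider with the lowest p95 TCP connect time, then count how many networks each provider "wins". The column names and data source are illustrative assumptions, not Cloudflare's internal schema.

```python
# A hedged sketch of computing the per-network "fastest provider" breakdown
# from RUM samples. Assumes a DataFrame of individual measurements with
# "network" (ASN), "provider", and "tcp_connect_ms" columns (illustrative).
import pandas as pd

measurements = pd.read_csv("rum_samples.csv")  # hypothetical export

# p95 TCP connect time per (network, provider) pair.
p95 = (
    measurements
    .groupby(["network", "provider"])["tcp_connect_ms"]
    .quantile(0.95)
    .reset_index()
)

# For each network, keep the provider with the lowest p95 connect time...
winners = p95.loc[p95.groupby("network")["tcp_connect_ms"].idxmin()]

# ...and count how many networks each provider wins.
print(winners["provider"].value_counts())
```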
Next-generation performance in a Zero Trust world
In a non-Zero Trust world, you and your IT teams were the network operator — which gave you the ability to control performance. While this control was comforting, it was also a huge burden on your IT teams who had to manage middle mile connections between offices and resources. But in a Zero Trust world, your network is now… well, it’s the public Internet. This means less work for your teams — but a lot more responsibility on your Zero Trust provider, which has to manage performance for every single one of your users. The better your Zero Trust provider is at improving end-to-end performance, the better an experience your users will have and the less risk you expose yourself to. For real-time applications like authentication and secure web gateways, having a snappy user experience is critical.
A Zero Trust provider needs to not only secure your users on the public Internet, but it also needs to optimize the public Internet to make sure that your users continuously stay protected. Moving to Zero Trust doesn’t just reduce the need for corporate networks, it also allows user traffic to flow to resources more naturally. However, given your Zero Trust provider is going to be the gatekeeper for all your users and all your applications, performance is a critical aspect to evaluate to reduce friction for your users and reduce the likelihood that users will complain, be less productive, or turn the solutions off. Cloudflare is constantly improving our network to ensure that users always have the best experience, through programs like the Edge Partner Program and constantly improving our peering and interconnectivity. It’s this tireless effort that makes us the fastest Zero Trust provider.
Today, we are very excited to announce that Cloudflare’s cloud email security solution, Area 1, now integrates with KnowBe4, a leading security awareness training and simulated phishing platform. This integration allows mutual customers to offer real-time coaching to their employees when a phishing campaign is detected by Cloudflare’s email security solution.
We are all aware that phishing attacks often use email as a vector to deliver the fraudulent message. Cybercriminals use a range of tactics, such as posing as a trustworthy organization, using urgent or threatening language, or creating a sense of urgency to entice the recipient to click on a link or download an attachment.
Despite the increasing sophistication of these attacks and the solutions to stop them, human error remains the weakest link in this chain of events. This is because humans can be easily manipulated or deceived, especially when they are distracted or rushed. For example, an employee might accidentally click on a link in an email that looks legitimate but is actually a phishing attempt, or they might enter their password into a fake login page without realizing it. According to the 2021 Verizon Data Breach Investigations Report, phishing was the most common form of social engineering attack, accounting for 36% of all breaches. The report also noted that 85% of all breaches involved a human element, such as human error or social engineering.
Therefore, it is essential to educate and train individuals on how to recognize and avoid phishing attacks. This includes raising awareness of common phishing tactics and training individuals to scrutinize emails carefully before clicking on any links or downloading attachments.
Area 1 integrates with KnowBe4
Our integration seamlessly combines Cloudflare’s advanced email security capabilities with KnowBe4’s security awareness training platform, KMSAT, and its real-time coaching product, SecurityCoach. This means that organizations using both products can now benefit from an added layer of security that detects and prevents email-based threats in real-time while also training employees to recognize and avoid such threats.
Organizations can offer real-time security coaching to their employees whenever our email security solution detects four types of events: malicious attachments, malicious links, spoofed emails, and suspicious emails. IT or security professionals can configure their real-time coaching campaigns to immediately deliver relevant training to their users related to a detected event.
“KnowBe4 is proud to partner with Cloudflare to provide a seamless integration with our new SecurityCoach product, which aims to deliver real-time security coaching and advice to help end users enhance their cybersecurity knowledge and strengthen their role in contributing to a strong security culture. KnowBe4 is actively working with Cloudflare to provide an API-based integration to connect our platform with systems that IT/security professionals already utilize, making rolling out new products to their teams an easy and unified process.” – Stu Sjouwerman, CEO, KnowBe4
By using the integration, organizations can ensure that their employees are not only protected by advanced security technology that detects and blocks malicious emails, but are also educated on how to identify and avoid these threats. This has been a frequently requested feature from our customers, and we have made it simple for them to implement.
How it works
Create private key and public key in the Area 1 dashboard
Before you can set up this integration in your KnowBe4 (KMSAT) console, you will need to create a private key and public key with Cloudflare.
Log in to your Cloudflare Area 1 email security console as an admin.
Click the gear icon in the top-right corner of the page, and then navigate to the Service Accounts tab.
Click + Add Service Account.
In the NAME field, enter a name for your new service account.
Click + Create Service Account.
In the pop-up window that opens, copy and save the private key somewhere that you can easily access. You will need this key to complete the setup process in the Set Up the Integration in your KnowBe4 (KMSAT) Console section below.
Set up the integration in your KnowBe4 (KMSAT) Console
Once you have created a private key and public key in your Cloudflare Area 1 email security console, you can set up the integration in your KMSAT console. To register Cloudflare Area 1 email security with SecurityCoach in your KMSAT console, follow the steps below:
Log in to your KMSAT console and navigate to SecurityCoach > Setup > Security Vendor Integrations.
Locate Cloudflare Area 1 Email Security and click Configure.
Enter the public key and private key that you saved in the ‘Create private key and public key in the Area 1 dashboard’ section above.
Click Authorize. Once you’ve successfully authorized this integration, you can manage detection rules for Cloudflare Area 1 on the Detection Rules subtab of SecurityCoach.
SecurityCoach in action
Now that SecurityCoach is set up, users within your organization will receive messages when Area 1 detects that a malicious email was sent to them. An example can be seen below.
This message not only prompts the user to scrutinize incoming emails more carefully, since they now know they are being actively targeted, but also provides follow-up steps they can take to keep their account as safe as possible. The image and text that appear in the message can be configured from the KnowBe4 console, giving customers full flexibility over what to communicate to their employees.
What’s next
We’ll be expanding this integration with KnowBe4 to our other Zero Trust products in the coming months. If you have any questions or feedback on this integration, please contact your account team at Cloudflare. We’re excited to continue closely working with technology partners to expand existing and create new integrations that help customers on their Zero Trust journey.
Cloudflare secures outbound Internet traffic for thousands of organizations every day, protecting users, devices, and data from threats like ransomware and phishing. One way we do this is by intelligently classifying what Internet destinations are risky using the domain name system (DNS). DNS is essential to Internet navigation because it enables users to look up addresses using human-friendly names, like cloudflare.com. For websites, this means translating a domain name into the IP address of the server that can deliver the content for that site.
However, attackers can exploit the DNS system itself, and often use techniques to evade detection and control using domain names that look like random strings. In this blog, we will discuss two techniques threat actors use – DNS tunneling and domain generation algorithms – and explain how Cloudflare uses machine learning to detect them.
Domain Generation Algorithm (DGA)
Most websites don’t change their domain name very often. That is the point, after all: a stable, human-friendly name for connecting to a resource on the Internet. As a side effect, however, stable domain names become a point of control, allowing network administrators to enforce policies based on domain names, for example by blocking access to malicious websites. Cloudflare Gateway – our secure web gateway service for threat defense – makes this easy to do by allowing administrators to block risky and suspicious domains based on integrated threat intelligence.
But what if instead of using a stable domain name, an attacker targeting your users generated random domain names to communicate with, making it more difficult to know in advance what domains to block? This is the idea of Domain Generation Algorithm domains (MITRE ATT&CK technique T1568.002).
After initial installation, malware reaches out to a command-and-control server to receive further instructions; this is called “command and control” (MITRE ATT&CK tactic TA0011). The attacker may send instructions to perform actions such as gathering and transmitting information about the infected device, downloading additional stages of malware, stealing credentials and private data and sending them to the server, or operating as a bot within a network to perform denial-of-service attacks. Using a domain generation algorithm to frequently generate random domain names for command-and-control communication gives malware a way to bypass blocks on fixed domains or IP addresses. Each day the malware generates a random set of domain names. To rendezvous with the malware, the attacker registers one of these domain names and awaits communication from the infected device.
Speed in identifying these domains is important to disrupting an attack. Because the domains rotate each day, by the time the malicious disposition of a domain propagates through the cybersecurity community, the malware may have rotated to a new domain name. However, the random nature of these domain names (they are literally a random string of letters!) also gives us an opportunity to detect them using machine learning.
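To make the idea concrete, here is a deliberately simplistic, hedged sketch of a date-seeded generator. Real DGA families vary widely; the hashing scheme, label length, and TLD below are purely illustrative, and the point is only that the malware and its operator can independently derive the same daily list.

```python
# An illustrative (and deliberately simplistic) domain generation algorithm:
# the seed is derived from the current UTC date, so infected hosts and the
# attacker can compute the same daily list of rendezvous domains.
import hashlib
from datetime import datetime, timezone

def daily_domains(count: int = 10, length: int = 16, tld: str = ".com"):
    seed = datetime.now(timezone.utc).strftime("%Y-%m-%d").encode()
    domains = []
    for i in range(count):
        digest = hashlib.sha256(seed + i.to_bytes(4, "big")).hexdigest()
        # Map hex digits to letters so the label looks like a random string.
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:length])
        domains.append(label + tld)
    return domains

print(daily_domains())
# The defender must predict or detect these random-looking names,
# while the attacker only needs to register one of them each day.
```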
The machine learning model
To identify DGA domains, we trained a model that extends a pre-trained transformers-based neural network. Transformers-based neural networks are the state-of-the-art technique in natural language processing, and underlie large language models and services like ChatGPT. They are trained by using adjacent words and context around a word or character to “learn” what is likely to come next.
Domain names largely contain words and abbreviations that are meaningful in human language. Looking at the top domains on Cloudflare Radar, we see that they are largely composed of words and common abbreviations, “face” and “book” for example, or “cloud” and “flare”. This makes the knowledge of human language encoded in transformer models a powerful tool for detecting random domain names.
For DGA models, we curated ground truth data that consisted of domain names observed from Cloudflare’s 1.1.1.1 DNS resolver for the negative class, and we used domain names from known domain generation algorithms for the positive class (all uses of DNS resolver data is completed in accordance with our privacy commitments).
Our final training set contained over 250,000 domain names and was weighted to include more negative cases (non-DGA domains) than positive ones. We trained three different versions of the model with different architectures: an LSTM (Long Short-Term Memory) neural network, LightGBM (binary classification), and a transformer-based model. We selected the transformer-based model because it had the highest accuracy and F1 score, with an accuracy of over 99% on the test data. (The F1 score is a measure of model fit that penalizes large gaps between precision and recall; on an imbalanced data set, the highest-accuracy model might simply predict the majority class for everything, which is not what we want.)
To compute the score for a new domain never seen before by the model, the domain name is tokenized (i.e. broken up into individual components, in this case characters), and the sequence of characters is passed to the model. The transformers Python package from Hugging Face makes it easy to use these types of models for a variety of applications. The library supports summarization, question answering, translation, text generation, classification, and more. In this case we use sequence classification, together with a model that was customized for this task. The output of the model is a score indicating the chance that the domain was generated by a domain generation algorithm. If the score is over our threshold, we label the domain as a domain generation algorithm domain.
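As a rough illustration of this scoring flow, the sketch below uses the transformers text-classification pipeline. The model path, label name, and threshold are placeholders, since Cloudflare's fine-tuned model and tokenizer are not public.

```python
# Minimal sketch of scoring a domain with a sequence classification model
# via the Hugging Face transformers library. The checkpoint path and the
# "DGA" label are hypothetical placeholders.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="path/to/fine-tuned-dga-classifier",  # hypothetical checkpoint
)

def is_dga(domain: str, threshold: float = 0.9) -> bool:
    # Character-level tokenization is handled by the model's own tokenizer;
    # we simply pass the raw domain name string.
    result = classifier(domain)[0]
    return result["label"] == "DGA" and result["score"] >= threshold

print(is_dga("kjq3xf7vz9qpl2mw.example"))  # likely True for random-looking names
print(is_dga("cloudflare.com"))            # likely False
```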
Deployment
The expansive view of domain names Cloudflare has from our 1.1.1.1 resolver means we can quickly observe DGA domains after they become active. We run this model over all DNS query names that successfully resolve, so a single successful resolution of a domain name anywhere in Cloudflare’s public resolver network can be detected.
From the queries observed on 1.1.1.1, we filter down first to new and newly seen domain names. We then apply our DGA classifier to the new and newly seen domain names, allowing us to detect activated command and control domains as soon as they are observed anywhere in the world by the 1.1.1.1 resolver.
DNS Tunneling detection
In issuing commands or extracting data from an installed piece of malware, attackers seek to avoid detection. One way to send data and bypass traditional detection methods is to encode data within another protocol. When the attacker controls the authoritative name server for a domain, information can be encoded as DNS queries and responses. Instead of making a DNS query for a simple domain name, such as www.cloudflare.com, and getting a response like 104.16.124.96, attackers can send and receive long DNS queries and responses that contain encoded data.
Here is an example query made by an application performing DNS tunneling (query shortened and partially redacted):
The response data to a query like the one above can vary in length based on the response record type the server uses and the recursive DNS resolvers in the path. Generally, it is at most 255 characters per response record and looks like a random string of characters, for example this TXT answer:

TXT    jdqjtv64k2w4iudbe6b7t2abgubis
This ability to take an arbitrary set of bytes and send it to the server as a DNS query and receive a response in the answer data creates a bi-directional communication channel that can be used to transmit any data. The malware running on the infected host encodes the data it wants to transmit as a DNS query name and the infected host sends the DNS query to its resolver.
Since this query is not a true hostname, but actually encodes some data the malware wishes to transmit, the query is very likely to be unique, and is passed on to the authoritative DNS server for that domain.
The authoritative DNS server decodes the query back into the original data, and if necessary can transmit it elsewhere on the Internet. Responses go back in the other direction: the response data is encoded as a query response (for example, a TXT record) and sent back to the malware running on the infected host.
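To illustrate the encoding side of this channel, here is a hedged conceptual sketch that packs arbitrary bytes into DNS labels under an attacker-controlled domain. The domain name is a placeholder, and the snippet deliberately stops short of sending any queries; it only shows how a payload becomes a random-looking query name.

```python
# A conceptual sketch of how arbitrary bytes can be smuggled inside a DNS
# query name: the payload is base32-encoded and split into labels under a
# domain whose authoritative name server the attacker controls.
import base64

CONTROLLED_DOMAIN = "tunnel.example.com"  # hypothetical attacker domain
MAX_LABEL = 63  # individual DNS labels are limited to 63 characters

def encode_query_name(payload: bytes) -> str:
    encoded = base64.b32encode(payload).decode().rstrip("=").lower()
    labels = [encoded[i:i + MAX_LABEL] for i in range(0, len(encoded), MAX_LABEL)]
    return ".".join(labels + [CONTROLLED_DOMAIN])

query_name = encode_query_name(b"c2 check-in data or exfiltrated bytes")
print(query_name)
# The resolver forwards this unique name to the attacker's authoritative
# server, which decodes the labels; responses flow back the same way,
# encoded in answer records such as TXT.
```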
One challenge with identifying this type of traffic, however, is that there are also many benign applications that use the DNS system to encode or transmit data as well. An example of a query that was classified as not DNS tunneling:
As humans, we can see that the leading portion of this DNS query is a UUID. Queries like this are often used by security and monitoring applications and network appliances to check in. The leading portion of the query might be the unique id of the device or installation that is performing the check-in.
During the research and training phase our researchers identified a wide variety of different applications that use a large number of random looking DNS queries. Some examples of this include subdomains of content delivery networks, video streaming, advertising and tracking, security appliances, as well as DNS tunneling. Our researchers investigated and labeled many of these domains, and while doing so, identified features that can be used to distinguish between benign applications and true DNS tunneling.
The model
For this application, we trained a two-stage model. The first stage makes quick yes/no decisions about whether the domain might be a DNS tunneling domain. The second stage of the model makes finer-grained distinctions between legitimate domains that have large numbers of subdomains, such as security appliances or AV false-positive control, and malicious DNS tunneling.
The first stage is a gradient boosted decision tree that gives us an initial classification based on minimal information. A decision tree model is like playing 20 questions – each layer of the decision tree asks a yes or no question, which gets you closer to the final answer. Decision tree models are good at predicting binary yes/no results, incorporate binary or nominal attributes into a prediction well, and are fast and lightweight to execute, making them a good fit for this application. Gradient boosting is a reliable technique for training models that is particularly good at combining several attributes with weak predictive power into a strong predictor. It can be used to train multiple types of models, including decision trees, and works for both classification and numeric prediction.
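As a hedged sketch of what such a first stage could look like, the snippet below trains a gradient boosted classifier on a handful of lightweight per-query features. The feature names, files, labels, and threshold are illustrative assumptions, not Cloudflare's production pipeline.

```python
# A hedged sketch of a first-stage gradient boosted decision tree classifier:
# a fast yes/no screen over lightweight per-query features. All file names,
# feature names, and the threshold are placeholders.
import lightgbm as lgb
import pandas as pd

train = pd.read_csv("labeled_queries.csv")  # hypothetical labeled data
features = ["query_length", "subdomain_entropy", "label_count", "numeric_ratio"]

model = lgb.LGBMClassifier(objective="binary", n_estimators=200)
model.fit(train[features], train["is_potential_tunneling"])

# Probability that a new query is potential DNS tunneling; anything above
# the chosen threshold is handed to the slower, finer-grained second stage.
new_queries = pd.read_csv("new_queries.csv")
scores = model.predict_proba(new_queries[features])[:, 1]
candidates = new_queries[scores > 0.5]
```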
If the first stage classifies the domain as “yes, potential DNS tunneling”, it is checked against the second stage, which incorporates data observed from Cloudflare’s 1.1.1.1 DNS resolver. This second model is a neural network model and refines the categorization of the first, in order to distinguish legitimate applications.
In this model, the neural network takes 28 features as input and classifies the domain into one of 17 applications, such as DNS tunneling, IT appliance beacons, or email delivery and spam related. Figure 2 shows a diagram generated with the popular Python package Keras showing the layers of this neural network. We see the 28 input features at the top layer and, at the bottom layer, the 17 output values indicating the prediction value for each type of application. This neural network is very small, having about 2,000 individual weights that can be set during the training process. By contrast, the DGA model described earlier is based on a state-of-the-art pretrained model from a model family that has tens to hundreds of millions of predefined weights.
Fig. 2, The keras.utils.plot_model() function draws a diagram of the neural network layers.
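For a concrete sense of scale, here is a hedged, illustrative sketch of a comparably small network in Keras: 28 input features, 17 output classes, and on the order of 2,000 trainable weights. The layer sizes and activations are assumptions, not Cloudflare's production architecture.

```python
# A hedged, illustrative sketch of a small second-stage classifier:
# 28 input features, 17 output classes, roughly 1,700 trainable weights
# at these (assumed) layer sizes.
import tensorflow as tf

NUM_FEATURES = 28  # per-domain features computed from observed DNS queries
NUM_CLASSES = 17   # application categories, one of which is DNS tunneling

inputs = tf.keras.Input(shape=(NUM_FEATURES,))
hidden = tf.keras.layers.Dense(24, activation="relu")(inputs)
hidden = tf.keras.layers.Dense(24, activation="relu")(hidden)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(hidden)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # prints the per-layer weight counts

# A layer diagram like Fig. 2 can be produced with (requires pydot/graphviz):
# tf.keras.utils.plot_model(model, show_shapes=True)
```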
Figure 3 shows a plot of the feature values of the applications we are trying to distinguish in polar coordinates. Each color is the feature values of all the domains the model classified as a single type of application over a sample period. The position around the circle (theta) is the feature, and the distance from the center (rho) is the value of that feature. We can see how many of the applications have similar feature values.
When we observe a new domain and compute its feature values, our model uses those feature values to give us a prediction about which application the new domain resembles. As mentioned, the neural network has 28 inputs, each of which is the value of a single feature, and 17 outputs. The 17 output values represent the prediction that the domain is each of those 17 different types of applications, with malicious DNS tunneling being one of them. The job of the model is to convert the sometimes small differences between the feature values into a prediction. If the value of the malicious DNS tunneling output of the neural network is higher than the other outputs, the domain is labeled as a security threat.
Fig. 3, Domains containing high-entropy DNS subdomains, visualized as feature plots. Each section around the circumference of the plot represents a different feature of the observed DNS queries. The distance from the center represents the value of that feature. Each color line is a distinct application, and machine learning helps us distinguish between these and classify them.
Deployment
For the DNS tunneling model, our system consumes the logs from our secure web gateway service. The first stage model is applied to all DNS queries. Domains that are flagged as possible DNS tunneling are then sent to the second stage where the prediction is refined using additional features.
Looking forward: combining machine learning with human expertise
In September 2022, Cloudflare announced the general availability of our threat operations and research team, Cloudforce One, which allows our in-house experts to share insights directly with customers. Layering this human element on top of the ML models that we have already developed helps Cloudflare deliver additional threat protection for our customers, as we plan to explain in the next article in this blog series.
Until then, click here to create a free account, with no time limit for up to 50 users, and point just your DNS traffic, or all traffic (layers 4 to 7), to Cloudflare to protect your team, devices, and data with machine learning-driven threat defense.
A picture is worth a thousand words and the same is true when it comes to getting visualizations, trends, and data in the form of a ready-made security dashboard.
Today we’re excited to announce the expansion of support for automated normalization and correlation of Zero Trust logs for Logpush in Sumo Logic’s Cloud SIEM. As a Cloudflare technology partner, Sumo Logic is the pioneer in continuous intelligence, a new category of software which enables organizations of all sizes to address the data challenges and opportunities presented by digital transformation, modern applications, and cloud computing.
The updated content in Sumo Logic Cloud SIEM helps joint Cloudflare customers reduce alert fatigue tied to Zero Trust logs and accelerates the triage process for security analysts by converging security and network data into high-fidelity insights. This new functionality complements the existing Cloudflare App for Sumo Logic designed to help IT and security teams gain insights, understand anomalous activity, and better trend security and network performance data over time.
Deeper integration to deliver Zero Trust insights
Using Cloudflare Zero Trust helps protect users, devices, and data, and in the process can create a large volume of logs. These logs are helpful and important because they provide the who, what, when, and where for activity happening within and across an organization. They contain information such as what website was accessed, who signed in to an application, or what data may have been shared from a SaaS service.
Up until now, our integrations with Sumo Logic only allowed automated correlation of security signals for Cloudflare’s core services. While it’s critical to ensure collection of WAF and bot detection events across your fabric, extended visibility into Zero Trust components has become more important than ever with the explosion of distributed work and the adoption of hybrid and multi-cloud infrastructure architectures.
With the expanded Zero Trust logs now available in Sumo Logic Cloud SIEM, customers can get deeper security insights thanks to the broad set of network and security logs produced by Cloudflare products:
“As a long time Cloudflare partner, we’ve worked together to help joint customers analyze events and trends from their websites and applications to provide end-to-end visibility and improve digital experiences. We’re excited to expand this partnership to provide real-time insights into the Zero Trust security posture of mutual customers in Sumo Logic’s Cloud SIEM.” – John Coyle, Vice President of Business Development, Sumo Logic
How to get started
To take advantage of the suite of integrations for Sumo Logic and Cloudflare logs available via Logpush, first enable Logpush to Sumo Logic, which will ship logs directly to Sumo Logic’s cloud-native platform. Then, install the Cloudflare App and (for Cloud SIEM customers) enable forwarding of these logs to Cloud SIEM for automated normalization and correlation of security insights.
Note that Cloudflare’s Logpush service is only available to Enterprise customers. If you are interested in upgrading, please contact us here.
Enable Logpush to Sumo Logic
Cloudflare Logpush supports pushing logs directly to Sumo Logic via the Cloudflare dashboard or via API.
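As a rough illustration of the API route, the sketch below creates an account-level Logpush job with Python. The account ID, API token, dataset name, and especially the Sumo Logic destination_conf value are placeholders; consult the Logpush and Sumo Logic documentation for the exact dataset names and destination format for your HTTP source.

```python
# A hedged sketch of creating a Logpush job via the Cloudflare API.
# All identifiers and the destination_conf value are placeholders.
import requests

ACCOUNT_ID = "YOUR_ACCOUNT_ID"   # placeholder
API_TOKEN = "YOUR_API_TOKEN"     # placeholder

job = {
    "name": "zero-trust-to-sumo",
    "dataset": "gateway_http",                         # check docs for dataset names
    "destination_conf": "SUMO_LOGIC_HTTP_SOURCE_URL",  # placeholder destination
    "enabled": True,
}

response = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/logpush/jobs",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=job,
    timeout=30,
)
response.raise_for_status()
print(response.json())
```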
Install the Cloudflare App for Sumo Logic
Locate and install the Cloudflare app from the App Catalog, linked above. If you want to see a preview of the dashboards included with the app before installing, click Preview Dashboards. Once installed, you can now view key information in the Cloudflare Dashboards for all core services.
(Cloud SIEM Customers) Forward logs to Cloud SIEM
After the steps above, enable the updated parser for Cloudflare logs by adding the _parser field to your S3 source created when installing the Cloudflare App.
What’s next
As more organizations move towards a Zero Trust model for security, it’s increasingly important to have visibility into every aspect of the network with logs playing a crucial role in this effort.
If your organization is just getting started and is not already using a tool like Sumo Logic, Cloudflare R2 is worth considering: it offers a scalable, cost-effective solution for log storage.
We’re excited to continue closely working with technology partners to expand existing and create new integrations that help customers on their Zero Trust journey.
Today, Cloudflare is excited to launch the Descaler Program, a frictionless path to migrate existing Zscaler customers to Cloudflare One. With this announcement, Cloudflare is making it even easier for enterprise customers to make the switch to a faster, simpler, and more agile foundation for security and network transformation.
Zscaler customers are increasingly telling us that they’re unhappy with the way in which they have to manage multiple solutions to achieve their goals and with the commercial terms they are being offered. Cloudflare One offers a larger network, a ‘single stack’ solution with no service chaining that enables innovation at an incredible rate, meaning lots of new product and feature releases.
At its core, the Descaler Program helps derisk change. It’s designed to be simple and straightforward, with technical resources to ensure a smooth transition and strategic consultation to ensure the migration achieves your organization’s goals. Customers can expect to be up and running on Cloudflare One in a matter of weeks without disruption to their business operations.
What makes up the Descaler Program?
Knowledgeable people. Clear process. Like-magic technology. Getting the people, process, and technology right is critical for any successful change. That’s why we’ve brought together the best of each to help customers experience a frictionless migration to Cloudflare One.
Cloudflare One is our Secure Access Service Edge (SASE) platform that combines network connectivity services with Zero Trust security services on one of the fastest, most resilient and most composable global networks. The platform dynamically connects users to enterprise resources, with identity-based security controls delivered close to users, wherever they are.
Eligibility
Enterprise organizations who use competitive security products from Zscaler, such as ZIA or ZPA, and have 1,000 employees or more are eligible to participate. The Descaler Program builds in resources and touch points with Cloudflare experts on two related paths – one focused on technical success, the other focused on business success.
Technology success
Administrators rejoice. The Descaler Program includes the tools, process and partners you need for a frictionless technical migration.
1. Architecture workshops. Our experts and yours will take a fresh look at where you are and where you need to go over the next two to three years to enable digital transformation. This interactive session with Cloudflare experts will help us focus together on the most meaningful migration paths for your organization and dive into the supporting technologies available to make the transition to Cloudflare even easier.
Outcomes from this mutual investment of time will include a custom migration plan, access to the Descaler toolkit, and dedicated resources from Cloudflare to facilitate a seamless cutover while sharpening focus on your short, medium, and long term business goals facilitated through networking and security technology. You will leave with a better understanding of your migration path to an Internet-native SASE platform, but more importantly, how you can make Zero Trust and SASE concepts tangible for your business.
2. Technical migration tools. In addition to providing people and processes focused on supporting your migration, Cloudflare can help you leverage a suite of technical tools and scripts that, in just a few clicks, automatically export settings and configurations of already deployed Zscaler products to be migrated into Cloudflare One. This toolkit can save countless hours of unnecessary point-and-click work.
The magic of this flow is in its simplicity. Following extract, transform, and load (ETL) best practices, using supported and documented API calls to your current account, the Descaler toolkit will export your current configuration and settings from ZIA or ZPA, transform them to be Cloudflare One-compatible before migrating into a new Cloudflare One account.
Take a ZPA application, for example: the Descaler toolkit will look at existing settings around Application name, Domain/SNI, IPs, Ports allowed, Protocols allowed, User groups, and more before exporting, transforming, and importing them into a new Cloudflare One account. In situations where time is of the essence, quick time-to-value migration paths can be taken. For example, if faced with an urgent ZIA migration, it’s simply a matter of switching over DNS to get a baseline of protection, turning off Zscaler, and then managing the process to deploy WARP and a full Secure Web Gateway in short order.
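To give a flavor of the transform step in that extract-transform-load flow, here is a hedged sketch that maps an exported ZPA application definition into a Cloudflare One-ready shape. The field names on both sides are illustrative; the real toolkit works from the documented APIs of each product and handles many more settings.

```python
# A hedged illustration of the "transform" step: mapping an exported ZPA
# application definition into a shape that can be loaded into Cloudflare One.
# Field names on both sides are illustrative placeholders.
def transform_zpa_application(zpa_app: dict) -> dict:
    return {
        "name": zpa_app["name"],                                # Application name
        "domain": zpa_app["domain_names"][0],                   # Domain / SNI
        "destination_ips": zpa_app.get("ip_ranges", []),        # IPs
        "allowed_ports": zpa_app.get("tcp_port_ranges", []),    # Ports allowed
        "allowed_protocols": zpa_app.get("protocols", ["tcp"]), # Protocols allowed
        "allowed_groups": zpa_app.get("user_groups", []),       # User groups
    }

exported = {
    "name": "internal-wiki",
    "domain_names": ["wiki.internal.example.com"],
    "ip_ranges": ["10.1.2.0/24"],
    "tcp_port_ranges": ["443"],
    "protocols": ["tcp"],
    "user_groups": ["engineering"],
}
print(transform_zpa_application(exported))
```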
Getting started with the toolkit
You’ll first be asked to create a new API key in your ZIA or ZPA account. From there, the Cloudflare team will share the toolkit to be run locally by one of your system administrators, with members of the Cloudflare team alongside to help in case there are any questions. Cloudflare won’t ever need or ask for your API key, just the outputs. Cloudflare will then use the output to transform and load the configurations into a newly provisioned Cloudflare One account.
The Descaler toolkit only performs read and list API requests to your Zscaler account. In scenarios where systems or services you wish to migrate do not map 1:1, the Cloudflare team and our Authorized Partners will be standing by to assist in making the migration process as smooth as possible.
3. Trusted partner engagements. The Cloudflare Partner Network includes service and implementation partners who deliver security, reliability and performance solutions with a broad range of value-added services. Our Technology Partners offer customers complementary solutions within the cloud stack for hands-on keyboard assistance when desired. Back in January we announced the Authorized Partner Service Delivery Track for Cloudflare One and are excited to connect customers to authorized partners that meet Cloudflare’s high standards for professional services delivery.
As the Descaler Program continues to grow, additional capabilities are being explored to make the transition process even easier, such as full technical training with customer certification courses, along with support for in-house and authorized partner professional services delivery. This is only the beginning of the technical resources being made available to customers looking to make the switch to Cloudflare.
Business components
For CxOs, the value couldn’t be more clear: tangible business value and cost savings that impact your business’s bottom line.
Return On Investment (ROI) calculation. We value showing, not just telling, when it comes to the value of Cloudflare One. We want to make sure migrating customers recognize the quantifiable business impacts that can potentially be realized by moving to the Cloudflare One platform.
Escape hatch for your current contract. Don’t let your existing contract be a blocker to your long-term security modernization. Cloudflare is committed to making the migration process as cost-effective as possible – which means tools and flexible financial options for customers to reach escape velocity from Zscaler and land safely with Cloudflare. You won’t regret this decision come renewal time.
Zero Trust roadmap assessment. Going from zero to Zero Trust means looking ahead to what’s next with a concrete understanding of where you are today. For business leaders, that means using resources like our vendor-agnostic Zero Trust Roadmap to map out future initiatives today with help from architects, engineers and other business leaders.
If your Internet pipes are all clogged up, use the Descaler Program to get a faster flow:
Why migrating from Zscaler to Cloudflare One just makes sense
More and more organizations are choosing Cloudflare over Zscaler to modernize security, and when they do, they typically cite our strengths across a few key evaluation criteria:
User experience: IT and security administrators have found our services easier to deploy and simpler to manage. End users benefit from faster performance across security services. Whereas Zscaler’s fragmented clouds and piecemeal services add management complexity over time, Cloudflare offers a single, unified control plane that keeps your organization progressing quickly towards its security goals.
Connectivity: Customers value the reliability and scalability of our larger global network footprint to secure any traffic. Plus, unlike Zscaler, Cloudflare’s network is designed to run every service in every location to ensure consistent protections for users around the world.
Agility for the future: Customers recognize that progressing towards Zero Trust and SASE requires long-term partnerships. For that journey, they trust in Cloudflare’s track record of rapid innovation and value our flexible architecture to adopt new security standards and technologies and stay ahead of the curve.
These are just a few reasons why organizations choose Cloudflare – and if you’re looking for even more reasons and customer stories, we encourage you to check out this recent blog post.
If you’re looking to motivate your colleagues to take advantage of the Descaler Program, we encourage you to explore more direct comparisons with this infographic or our website.
How to get started
Joining the Descaler Program is as easy as signing up using the link below. From there, the Cloudflare team will reach out to you for further enrollment details. By providing details about your current Zscaler deployments, ongoing challenges and your future Zero Trust or SASE goals we’ll be able to hit the ground running.
With the Descaler Program we’re excited to offer a clear path for customers to make the switch to Cloudflare One. To get started, sign up here.
Realizing the goals of Zero Trust is a journey: moving from a world of static networking and hardware concepts to organization-based access and continuous validation is not a one-step process. This challenge is never more real than when dealing with IP addresses. For years, companies on the Internet have built hardened systems based on the idea that only users with certain IP addresses can access certain resources. This implies that IP addresses are tied to identity, which is a kludge and can actually open websites up to attack in some cases. For large companies with many origins and applications that need to be protected in a Zero Trust model, it’s important to be able to support their transition to Zero Trust using mTLS, Access, or Tunnel. To make the transition, some organizations may need dedicated IP addresses.
Today we’re introducing Cloudflare Aegis: dedicated IPs that we use to send you traffic. This allows you to lock down your services and applications at an IP level and build a protected environment that is application-aware, protocol-aware, and even IP-aware. Aegis is available today through Early Access for Enterprise customers, and you can talk to your account team if you want to learn more about it.
We’re going to talk about what Aegis is, give an example of how customers are using it today to secure their networks and services, and talk about how it can integrate with existing products and services to help protect you on your Zero Trust journey. But before we get into what Aegis is, let’s talk about why we built it.
Protecting your services at scale
Cloudflare protects your networks and services from attackers and improves your application performance, but protecting your origin on its own is still an important challenge that must be tackled. To help, Cloudflare built mTLS support and enforcement in conjunction with API Shield, Cloudflare Access, and Cloudflare Tunnel to help enforce a Zero Trust approach to security: the only entities who can access your origins are ones with the proper certificates, which are configured in Cloudflare and revalidated on a regular basis. Bad traffic is explicitly blocked because the networks and services are set up to only receive encrypted, authenticated traffic.
While mTLS and Access are great for protecting networks and applications regardless of what IP addresses are being used, it isn’t always feasible to deploy at large scale in a short amount of time, especially if you haven’t already configured it for every application or service you build. For some customers who have hundreds, maybe even thousands of applications or services protected behind Cloudflare, adding mTLS or Access for every single origin is a significant task. Some customers might have an additional problem: they can’t keep track of every service, so they don’t know where to put mTLS configurations. Enforcing good security behavior can take years in this case, and may leave a long tail of unprotected origins vulnerable to potential attacks that spoof Cloudflare IPs to gain access to customer networks and user data.
How does Cloudflare Aegis protect you?
What our customers want to be able to do is lock down their entire network by getting dedicated egress IPs from Cloudflare: a small list of IP addresses that Cloudflare uses to send traffic, reserved only for them, which they can configure in their L3 firewalls while blocking everything else. By ensuring that only a single customer’s traffic will use those dedicated IP addresses, customers essentially get blanket protection for their network, plus an additional layer of security for their networks and applications once mTLS is set up. To outline how Cloudflare Aegis might help protect a customer, let’s consider Blank Bank, a fictional customer.
Blank Bank has about 900 applications and services scattered across different instances using a mix of on-premise equipment and cloud services. Blank Bank relies on Cloudflare for L7 services like CDN, DDoS, WAF, and Bot Management, but does not implement mTLS to any of their origins today. During a recent security audit, Blank Bank was told that all new feature development would stop until they were able to secure all of their applications and services to prevent outside traffic from reaching any of the services behind Cloudflare. The audit found that existing services did not implement sufficient security measures at the application layer, and allowlisting Cloudflare IPs was not enough to secure the services because potential attackers could use Workers to access Blank Bank services outside the prescribed APIs and data flows. Blank Bank was told to apply security precautions as soon as possible. But adding mTLS to each of their 900 applications and services could take years as each service must be configured individually, and they want to keep improving their service now.
Cloudflare Aegis helps solve this problem by scoping the number of IPs we use to talk to Blank Bank from millions down to one: the private egress IP we allocated for them and only them. This IP address ensures that the only traffic that should be reaching Blank Bank servers comes from an IP meant for only Blank Bank traffic: no other Cloudflare customer attempting to reach Blank Bank will have this IP address. Furthermore, this IP is not publicly listed making it harder for an attacker to figure out what IP Cloudflare is using to speak to Blank Bank. With this, Blank Bank can restrict their network Access Control Lists (ACLs) to only allow traffic coming from this IP into their network. Here’s how their network firewall looks before Aegis:
After getting an Aegis IP, they can completely lock down their firewalls to only allow traffic from the Aegis IP that is reserved for them:
Simply by making a change of egress IP, we’ve been able to better protect Blank Bank’s entire network, ensuring they can keep developing new features and improving their already stellar customer experience, while keeping their endpoints safe until they are able to deploy mTLS to every single origin they need to.
Every sword needs a shield
Cloudflare Aegis pairs really well with any of our products to provide heightened application security and protection while allowing you to get things done. Let’s talk about how it can work with some of our products to improve security posture, such as Cloudflare Access, Cloudflare Network Interconnect, and Cloudflare Workers.
Cloudflare Access + CNI
Cloudflare Aegis works really well with Access and CNI to provide a completely secure application access framework that doesn’t even use the public Internet. Access provides the authorization security and caching to ensure that your policies are always being enforced from beyond the application’s server. Aegis ensures that all requests for your application come through a dedicated IP that we assign you. And finally, Cloudflare Network Interconnect provides the private path from Cloudflare over to your application, where you can apply L3 firewall policies to completely protect your network and applications.
This setup of protecting the path to your services sounds a lot like another product we offer: Cloudflare Tunnel. Cloudflare Tunnel encrypts and protects traffic from Cloudflare to an origin network by installing a daemon on the server-side machines. In terms of the goal of protecting the origin network by creating private network concepts, Tunnel and this setup are very comparable. However, some customers might not want to expose the public endpoints that Tunnel requires. This setup can protect your origin servers without needing to expose anything to the public Internet. It is also easier to configure from an application point of view: you don’t need to configure JWT validation or install Tunnel on your origin; you can configure a firewall policy instead. This makes setting up Access across an organization very easy.
Workers
Aegis and Workers (and the rest of our developer platform) pair incredibly well together. Whenever our developer platform needs to access your services, when paired with Aegis, they’ll use dedicated IPs. This gives your network extra protection and ensures that only the Workers you designate will access your endpoints.
Shields up
Many people view the Internet like the wild west, where anything can happen. Attackers can DDoS origins, and they can spoof IP addresses and pretend to be someone else. But with Cloudflare Aegis, you get an extra shield to protect your origin network so that attackers can’t get in. The IPs that you receive traffic from are reserved only for you and no one else, ensuring that the only users that access your network are the ones that you want to access it, and come through those IP addresses.
If you’re interested in better locking down your networks and applications with Cloudflare Aegis, reach out to your account team today to get started and give yourself a shield you can use to defend yourself.
On March 20, 2023, we will be launching an updated navigation in the Zero Trust dashboard, offering all of our Zero Trust users a more seamless experience across Cloudflare as a whole. This change will allow you to more easily manage your Zero Trust organization alongside your application and network services, developer tools, and more.
As part of this upcoming release, you will see three key changes:
Quicker navigation
Instead of opening another window or typing in a URL, you can go back to the Cloudflare dashboard in one click.
Switch accounts with ease
View and switch accounts at the top of your sidebar.
Resources and support
Find helpful links to our Community, developer documentation, and support team at the top of your navigation bar.
Why we’re updating the Zero Trust navigation
In 2020, Gateway was broadly released as the first Cloudflare product that didn’t require a site hosted on Cloudflare’s infrastructure. In other words, Gateway was unconstrained by the site-specific model most other Cloudflare products relied on at the time, while also used in close conjunction with Access. And so, the Cloudflare for Teams dashboard was built on a new model, designed from scratch, to give our customers a designated home—consolidated under a single roof—to manage their Teams products and accounts.
Fast forward to today and Zero Trust has grown tremendously, both in capability and reach. Many of our customers are using multiple Cloudflare products together, including Cloudflare One and Zero Trust products. Our home has grown, and this navigation change is one step toward expanding our roof to cover Cloudflare’s rapidly expanding footprint.
A focus on user experience
We have heard from many of you about the pains you experience when using multiple Cloudflare products, including Zero Trust. Your voice matters to us, and we’re invested in building a world-class user experience to make your time with Cloudflare an easy and enjoyable one. Our user experience improvements are based on three core principles: Consistency, Interconnectivity, and Discoverability.
We aim to offer a consistent and predictable user experience across the entire Cloudflare ecosystem so you never have to think twice about where you are in your journey, whether performing your familiar daily tasks or discovering our new ground-breaking products and features.
What else?
This navigation change we’re announcing today isn’t the only user experience improvement we’ve built! You may have noticed a few more optimizations recently:
User authorization and loading experience
Remember the days of the recurrent loading screen? Or perhaps when your Zero Trust account didn’t match the one you had logged in with to manage, say, your DNS? Those days are over! Our team has built a smarter, faster, and more seamless user and account authorization experience.
New tables
Tables are table stakes when it comes to presenting large quantities of data and information. (Yes, pun intended.) Tables are a common UI element across Cloudflare, and now Zero Trust uses the same tables UI as you will see when managing other products and features.
UI consistency
A slight change in color scheme and page layout brings the Zero Trust dashboard into the same visual family as the broader Cloudflare experience. Now, when you navigate to Zero Trust, we want you to know that you’re still under our one single Cloudflare roof.
We’re as excited about these improvements as you are! And we hope the upcoming navigation and page improvements come as a welcome addition to the changes noted above.
What’s next?
The user experience changes we’ve covered today go a long way toward creating a more consistent, seamless and user-friendly interface to make your work on Cloudflare as easy and efficient as possible. We know there’s always room for further improvement (we already have quite a few big improvements on our radar!).
To ensure we’re solving your biggest problems, we’d like to hear from you. Please consider filling out a short survey to share the most pressing user experience improvements you’d like to see next.
On Thursday, March 2, 2023, the Biden-Harris Administration released the National Cybersecurity Strategy aimed at securing the Internet. Cloudflare welcomes the Strategy, and congratulates the White House on this comprehensive, much-needed policy initiative. The goal of the Strategy is to make the digital ecosystem defensible, resistant, and values-aligned. This is a goal that Cloudflare fully supports. The Strategy recognizes the vital role that the private sector has to play in defending the United States against cyber attacks.
The Strategy aims to make a fundamental shift and transformation of roles, responsibilities, and resources in cyberspace by (1) rebalancing the responsibility to defend cyberspace by shifting the burden away from individuals, small businesses, and local governments, and onto organizations that are most capable and best-positioned to reduce risks, like data holders and technology providers; and (2) realigning incentives to favor long-term investments by balancing defending the United States against urgent threats today and simultaneously investing in a resilient future. The Strategy envisions attaining these goals through five collaborative pillars:
Pillar One: defending critical infrastructure;
Pillar Two: disrupting and dismantling threat actors;
Pillar Three: shaping market forces to drive security and resilience;
Pillar Four: investing in a resilient future; and
Pillar Five: forging international partnerships to pursue shared goals.
Through the Strategy, the U.S. Government is committed to preserving and extending the open, free, global, interoperable, reliable, and secure Internet. Cloudflare shares this commitment, and has built tools and products that are easily deployed and accessible to everyone that help make it a reality. Here are a few things that stand out to us in the Strategy, and how Cloudflare has contributed to the goals we share.
Defending Critical Infrastructure: Shields Up and Zero Trust
Importantly, Pillar One of the Strategy is focused on defending critical infrastructure. Critical infrastructure is vital to the functioning of society, and includes things like gas pipelines, railways, utilities, clean water, hospitals, and electricity, among others. In the aftermath of Russia’s invasion of Ukraine, the United States, the United Kingdom, Japan, and others issued warnings about the increased risk of cyber attacks. Private sector and government cybersecurity experts were widely concerned about potential retaliation in the United States for the sanctions that followed the Russian invasion of Ukraine. In response, the Cybersecurity and Infrastructure Security Agency (CISA) announced its Shields Up initiative. When Shields Up was announced, we wrote about the essential tools that Cloudflare offers – for free – for protecting an online presence. We also published a Shields Up reading list.
One way we responded to the increased risk to critical infrastructure was the Critical Infrastructure Defense Project (CIDP), which we launched in partnership with CrowdStrike and Ping Identity to offer a broad suite of products, free for four months, to any United States-based hospital, energy utility, or water utility. Thankfully, the retaliation did not materialize at the level experts and officials were expecting. But that does not mean the concern was unfounded, or that malicious actors have stopped targeting critical infrastructure in the United States and around the world.
In addition to Shields Up, the Strategy doubles down on Zero Trust as a framework for guarding against cyber attacks, an approach the White House first mandated in January 2022 when it instructed federal agencies to move toward Zero Trust cybersecurity principles. These principles are rooted in the idea of “never trust, always verify”: no one is trusted by default from inside or outside of a network, and verification is required from everyone trying to gain access to resources on the network.
We could not agree more with the US government’s decision to modernize its federal defenses by grounding them in Zero Trust principles. Zero Trust is not just a buzzword. Cloudflare has been championing Zero Trust for years, and we think it is so important for cybersecurity that we believe a Chief Zero Trust Officer will become increasingly common over the next year. And because we know how important Zero Trust tools are, we recently announced that civil society and government participants in Project Galileo and the Athenian Project will have free access to our Zero Trust products. We believe that qualified, vulnerable public interest organizations should have access to Enterprise-level cybersecurity products no matter their size and budget.
Disrupting and Dismantling Threat Actors
Pillar Two of the Strategy is focused on disrupting and dismantling threat actors. As a member of the Joint Cyber Defense Collaborative, Cloudflare partners with the US government and cyber defenders from organizations across the Internet ecosystem to help increase visibility of malicious activity and threats, and to drive collective action. Our network is large and global, and it learns from each attack, providing the best defense against attacks: the more attacks we handle, the more we know about how to stop them, and the easier it becomes to find and deal with new threats. We block an average of 136 billion cyber threats per day. Just last month, Cloudflare mitigated a record-breaking 71 million request-per-second (rps) DDoS attack, the largest reported HTTP DDoS attack on record and more than 54% higher than the previous reported record of 46 million rps in June 2022.
Privacy-Preserving Technologies
Pillar Four focuses on investing in a resilient future, partly through supporting privacy-preserving technologies. The Internet was not built with privacy and security in mind, but a more private Internet is a better Internet. Even with encryption, information about consumers’ IP addresses and the names of the websites they visit leaks from protocols that weren’t designed to preserve privacy. We believe that reducing the availability of that information can help consumers regain control over their data.
Cloudflare has therefore worked to develop technologies to help build a more privacy-preserving Internet. We’ve been working on technologies that encourage and enable website operators and app developers to build privacy into their products at the protocol level. We’ve released or support a number of services that deploy state-of-the-art, privacy-enhancing technologies for DNS and other communications to help individuals, large corporations, small businesses, and governments alike. These products include:
Privacy Gateway, a fully managed, scalable, and performant Oblivious HTTP (OHTTP) relay, designed so that Internet Service Providers don’t know which websites their subscribers are visiting, and websites likewise don’t know the true IP address of their visitors;
Private Relay, a version of Privacy Gateway that adds a second relay server to convey data to websites and applications while hiding a device’s true IP address;
Cloudflare WARP, a free proxy application that encrypts traffic on the user’s device, routes it through the Cloudflare network, and then sends it on to its intended destination; and
1.1.1.1, our free, public Domain Name System (DNS) resolver, which helps make Internet traffic more private.
Preparing for the Post-Quantum Future and Safer Internet Protocols
As part of its goal of investing in a resilient future, one of the Strategic Objectives of the Strategy is to prepare for the post-quantum future, with the government increasing its investment in post-quantum cryptography. Likewise, the US government encourages the private sector to prepare its systems for that future. Cloudflare is already prepared: although practical quantum computers have not yet arrived, we are helping to make sure the Internet is ready for when they do. Here and here, we describe the impact of quantum computing on cryptography, and how to use stronger algorithms that resist the power of quantum computing. In October, we announced that, by default, all websites and APIs served through Cloudflare now support post-quantum hybrid key agreement. And because we strongly believe that post-quantum security should be the new baseline for the Internet, we offer this post-quantum cryptography free of charge.
We were happy to see some focus in the Strategy on improving Internet protocols, which are important for ensuring that the Internet is functional, safe, and secure. The Strategy envisions a “clean-up effort” of the technical foundations of the Internet, including Border Gateway Protocol (BGP) vulnerabilities, unencrypted DNS, and the slow adoption of IPv6. Cloudflare has been a long-time supporter of security and privacy improvements to these foundational protocols, and wholeheartedly endorses this clean-up effort. We have written about our support for improving the security of these protocols, including securing BGP through the use of RPKI and improving DNS privacy by launching support for DNS over HTTPS, DNS over TLS, and Oblivious DNS over HTTPS.
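To make the DNS privacy piece concrete, here is a minimal sketch of a DNS over HTTPS lookup against Cloudflare’s public 1.1.1.1 resolver using its JSON API; the queried name and record type are arbitrary examples.

```python
# Minimal sketch: a DNS over HTTPS (DoH) lookup via Cloudflare's 1.1.1.1
# resolver and its JSON API. The queried name and record type are examples.
import requests

resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "cloudflare.com", "type": "AAAA"},
    headers={"accept": "application/dns-json"},
    timeout=10,
)
resp.raise_for_status()

# Each answer record carries the queried name, a TTL, and the record data
# (here, IPv6 addresses).
for record in resp.json().get("Answer", []):
    print(record["name"], record["TTL"], record["data"])
```

Because the query travels inside an ordinary HTTPS connection, an on-path observer sees only encrypted traffic to the resolver rather than the plaintext DNS question.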
Building International Partnerships and Assisting Allies and Partners
Pillar Five of the Strategy commits the United States to forging international partnerships to pursue shared goals. Cyber attacks are by their very nature borderless, which means that protecting against them cannot mean protecting only the entities within one’s own borders. Cyber defense is an international effort, and we cannot preserve and extend the open, free, global, interoperable, reliable, and secure Internet if we do not help to defend, as well as build the capacity of, other countries through coalition building. The Strategy aims to assist allies and partners. With the invasion of Ukraine, Cloudflare has directly witnessed the importance of private sector collaboration in efforts to assist allies and partners. Cloudflare is proud of the role we have played in helping protect Ukraine from cyber attacks, which we described here, here, and here. Another way that we are working to support vulnerable infrastructure outside of the United States is through Project Safekeeping, modeled after CIDP. In December, as part of Impact Week, we announced that we would be providing our enterprise-level Zero Trust cybersecurity solution to eligible entities in Australia, Germany, Japan, Portugal, and the United Kingdom, at no cost and with no time limit.
We again congratulate the White House on the National Cybersecurity Strategy. We have partnered with the US government in the past to help the federal government defend itself against cyber attacks, and we look forward to continuing our collaboration with the US government and other private sector entities for a safer and more secure Internet.
Before identity-driven Zero Trust rules, some SaaS applications on the public Internet relied on the IP address of a connecting user as a security model. Users would connect from known office locations, with fixed IP address ranges, and the SaaS application would check their address in addition to their login credentials.
Many systems still offer that second-factor method. Customers of Cloudflare One can use a dedicated egress IP for this purpose as part of their journey to a Zero Trust model. Unlike other solutions, customers using this option do not need to deploy any infrastructure of their own. However, not all traffic needs to use those dedicated egress IPs.
Today, we are announcing policies that give administrators control over when Cloudflare uses their dedicated egress IPs. Specifically, administrators can use a rule builder in the Cloudflare dashboard to determine which egress IP is used and when, based on attributes like identity, application, IP address, and geolocation. This capability is available to any enterprise-contracted customer that adds on dedicated egress IPs to their Zero Trust subscription.
Why did we build this?
In today’s hybrid work environment, organizations want more consistent security and IT experiences for managing their employees’ traffic as it egresses from offices, data centers, and roaming users. To deliver a more streamlined experience, many organizations are adopting modern, cloud-delivered proxy services like secure web gateways (SWGs) and deprecating their complex mix of on-premise appliances.
One traditional convenience of these legacy tools has been the ability to create allowlist policies based on static source IPs. When users were primarily in one place, verifying traffic based on egress location was easy and reliable enough. Many organizations want, or are required, to maintain this method of traffic validation even as their users have become more distributed.
So far, Cloudflare has supported these organizations by providing dedicated egress IPs as an add-on to our proxy-based Zero Trust services. Unlike the default egress IPs, these dedicated egress IPs are not shared with any other Gateway accounts and are used only to egress proxied traffic for the designated account.
As discussed in a previous blog post, customers are already using Cloudflare’s dedicated egress IPs to phase out their VPNs, using the IPs to identify their users’ proxied traffic or adding them to allowlists with third-party providers. These organizations benefit from the simplicity of still using fixed, known IPs, and their traffic avoids the bottlenecks and backhauling of traditional on-premise appliances.
When to use egress policies
The Gateway egress policy builder gives administrators fine-grained control over which egress IP is used, based on the user’s identity, device posture, source/destination IP address, and more.
A common use case is egressing traffic from specific geolocations to provide geo-specific experiences (e.g. language formats, regional page differences) for select user groups. For example, Cloudflare is currently working with the marketing department of a global media conglomerate. Their designers and web experts, based in India, often need to verify the layout of advertisements and websites that are running in different countries.
However, those websites restrict or change access based on the geolocation of the user’s source IP address, which previously forced the team to use an additional VPN service just for this purpose. With egress policies, administrators can create a rule that matches the site’s IP address or destination country and the marketing user group, and egresses that traffic from a dedicated egress IP geo-located in the country where the team needs to verify the site. The security team can rest easy: they no longer have to maintain a separate VPN service for marketing as a hole in their perimeter defense, and they can apply all of their other filtering capabilities to this traffic.
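To make the shape of that rule concrete, here is a minimal sketch of the decision such a geolocation-based egress policy encodes, written as plain Python rather than actual Gateway configuration; the group name and IP addresses are placeholders.

```python
# Illustrative sketch only: the routing decision a geolocation-based egress
# policy encodes. Group names and IP addresses are placeholders, not real
# Gateway configuration or syntax.
DEFAULT_EGRESS = "shared Cloudflare egress pool"
DEDICATED_EGRESS_INDIA = "203.0.113.50"  # dedicated IP geo-located to India (placeholder)

def select_egress(user_groups: set, destination_country: str) -> str:
    """Return the egress source for a proxied request."""
    if "marketing" in user_groups and destination_country == "IN":
        return DEDICATED_EGRESS_INDIA
    return DEFAULT_EGRESS

# A marketing designer verifying an India-targeted page egresses from the
# geo-located dedicated IP; everyone else uses the default pool.
print(select_egress({"marketing"}, "IN"))    # -> 203.0.113.50
print(select_egress({"engineering"}, "IN"))  # -> shared Cloudflare egress pool
```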
Another example use case is allowlisting access to applications or services maintained by a third party. While security administrators can control how their teams access their own resources, and can even apply filtering to that traffic, they often can’t change the security controls enforced by third parties. For example, a large credit processor we work with uses a third-party service to verify the riskiness of transactions routed through its Zero Trust network. That third party required the processor to allowlist its source IPs.
To meet this goal, the customer could have simply used dedicated egress IPs for everything and called it a day, but that would route all of their traffic through the data center hosting those dedicated IPs. A user browsing any other site would get a subpar experience, since their traffic might not take the most efficient path to the upstream. With egress policies, the customer can instead apply the dedicated egress IP only to traffic bound for this third-party provider and let all other user traffic egress via the default Gateway egress IPs.
Building egress policies
To demonstrate how easy it is for an administrator to configure a policy, let’s walk through that last scenario. My organization uses a third-party service that, in addition to a username/password login, requires us to access its domain from a static source IP or network range.
To set this up, I just have to navigate to Egress Policies under Gateway on the Zero Trust dashboard. Once there, I can select ‘Create egress policy’.
Most of my users accessing this third-party service are located in Portugal, so I’ll use my dedicated egress IPs assigned to Montijo, Portugal. The users will access example.com, hosted at 203.0.113.10, so I’ll use the destination IP selector to match all traffic to this site.
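A rough sketch of what that configuration amounts to, expressed against the Gateway rules API rather than the dashboard, is shown below. The account ID, API token, dedicated IP values, and the exact payload fields are placeholders and assumptions, not an authoritative schema; check the current API documentation before relying on them.

```python
# Hedged sketch: creating the walkthrough's egress policy via the Cloudflare
# API instead of the dashboard. The account ID, token, dedicated IPs, and
# payload field names are placeholders/assumptions for illustration.
import requests

ACCOUNT_ID = "YOUR_ACCOUNT_ID"  # placeholder
API_TOKEN = "YOUR_API_TOKEN"    # placeholder

rule = {
    "name": "Third-party service via dedicated egress",
    "description": "Egress traffic to example.com through the Portugal dedicated IPs",
    "enabled": True,
    "action": "egress",
    "filters": ["egress"],
    # Match traffic destined for the third party's host (placeholder address).
    "traffic": 'net.dst.ip == 203.0.113.10',
    "rule_settings": {
        "egress": {
            "ipv4": "198.51.100.2",        # primary dedicated IPv4 (placeholder)
            "ipv6": "2001:db8:aaaa::/64",  # dedicated IPv6 range (placeholder)
        }
    },
}

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/gateway/rules",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=rule,
    timeout=30,
)
resp.raise_for_status()
print("Created egress policy:", resp.json()["result"]["id"])
```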
Once that policy is created, I’ll add one more as a catch-all for my organization, to make sure no dedicated egress IPs are used for destinations not associated with this third-party service. This is key because it ensures my users get the most performant network experience while still maintaining their privacy by egressing via our shared Enterprise pool of IPs.
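The catch-all rule follows the same shape: it matches the remaining egress traffic and leaves the dedicated IPs unset so the default Cloudflare egress pool is used. As in the previous sketch, the selector syntax and fields are assumptions for illustration only.

```python
# Hedged sketch of the catch-all policy, ordered after the dedicated-IP rule.
# Selector syntax and payload fields are assumptions for illustration only.
catch_all_rule = {
    "name": "Default egress for all other traffic",
    "enabled": True,
    "action": "egress",
    "filters": ["egress"],
    # Everything that is not the third party's host egresses via the default pool.
    "traffic": 'net.dst.ip != 203.0.113.10',
    # No dedicated IPs are configured here; in the dashboard this corresponds to
    # the default Cloudflare egress option (exact API representation assumed).
    "rule_settings": {"egress": {}},
}
# This dict can be POSTed to the same gateway/rules endpoint as in the previous sketch.
```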
Looking at the egress policy list, we can see that both policies are enabled. Now, when my users access example.com, their traffic will egress from either the primary or secondary dedicated IPv4 address or the dedicated IPv6 range. All other traffic will use the default Cloudflare egress IPs.
Next steps
We recognize that as organizations migrate away from on-premise appliances, they want continued simplicity and control as they proxy more traffic through their cloud security stack. With Gateway egress policies, administrators can now control traffic flows for their increasingly distributed workforces.
If you are interested in building policies around Cloudflare’s dedicated egress IPs, you can add them onto a Cloudflare Zero Trust Enterprise plan or contact your account manager.