Tag Archives: Zero-Trust

Cloudflare One vs Zscaler Zero Trust Exchange: who is most feature complete? It’s not who you might expect

Post Syndicated from Ben Munroe original https://blog.cloudflare.com/cloudflare-one-vs-zscaler-zero-trust-exchange/

Zscaler has been building out its security offerings for 15 years. Cloudflare is 13 years old, and we have been delivering Zero Trust for the last four. This sounds like we are a late starter — but in this post, we’re going to show that on total Zero Trust, SSE, SASE and beyond, Cloudflare One functionality surpasses that of Zscaler Zero Trust Exchange.

Functional Criteria Group | Cloudflare | Zscaler
Internet-native network platform | 100% (5 of 5) | 20% (1 of 5)
Cloud-native service platform | 100% (4 of 4) | 25% (1 of 4)
Services to adopt SASE | 83% (5 of 6) | 66% (4 of 6)
Services to extend ZT, SSE, SASE and beyond | 66% (8 of 12) | 58% (7 of 12)
Network on-ramps | 90% (9 of 10) | 50% (5 of 10)

This may come as a surprise to many folks. When we’ve shared this with customers, the question we’ve often received is: How? How has Cloudflare been able to build out a competitive offering so quickly?

Having built out the world’s largest programmable Anycast network has certainly been a big advantage. This was the foundation for Cloudflare’s existing application services business — which delivers secure, performant web and application experiences to customers all around the world. It’s given us deep insight into security and performance on the Internet. Not only was our infrastructure ready to address real customer problems at scale; our serverless compute development platform — Workers — was specifically designed to build globally distributed applications with security, reliability, and performance built in. We’ve been able to build on top of our platform to deliver Zero Trust services at an unmatched velocity — a velocity we only expect to continue.

But we’ve also had another advantage that this timeline belies. So much has changed in the enterprise security space in the past 15 years. The idea of a performant global network like ours, for example, was not an assumption that could be made back then. When we started building out our Zero Trust offering, we had the benefit of a complete blank slate, and we’ve built it on completely modern cloud assumptions.

But we know the reason you’re here — you want to see the proof. Here it is: we have released a new functional deep dive on our public page comparing the Zscaler and Cloudflare platforms. Let’s share a sneak peek of two of the five criteria groups – services to adopt SASE and network on-ramps. Many criteria include footnotes in the PDF (indicated by an *) for added context and clarity.

Services to adopt SASE | Cloudflare | Zscaler
Zero Trust Network Access (ZTNA) | YES | YES
Cloud Access Security Broker (CASB) | YES | YES
Secure Web Gateway (SWG) | YES | YES
Firewall as a Service (FWaaS) | YES | YES
WAN as a Service with L3-7 traffic acceleration* | YES | NO
On-premise SD-WAN* | NO – partner | NO – partner

Network on-ramps | Cloudflare | Zscaler
Clientless browser-based access | YES | YES
Device client software | YES | YES
Application connector software* | YES | YES
Branch connector software* | NO | YES
Anycast DNS, GRE, IPsec, QUIC, Wireguard tunnels* | YES | NO
Private network interconnect for data centers & offices | YES | NO
Inbound IP transit (BYOIP) | YES | NO
IPv6-only connection support* | YES | NO
Recursive DNS resolvers | YES | YES
Device clients and DNS resolvers freely open to public* | YES | NO

While the deep dive comparison of 37 functional criteria shows we’re out in front, and our page explains why our architecture is simpler, more trusted, and faster to innovate — we also know there’s more to a product than a list of features. Given that zero trust gets rolled out across an entire organization, the experience of using the product is paramount. Here are three key areas where Cloudflare One surpasses the Zscaler Zero Trust Exchange for both end-users and administrators.

1) Every service is built to run in every location at enterprise scale

Claim: Zscaler claims to run the “largest security cloud on the planet,” yet its network is broken into at least eight distinct clouds, according to its own configuration resources (zscalertwo.net and zscalerthree.net, for example). On the front end, from a usability perspective, many clouds don’t make for a seamless administrator experience: each of Zscaler’s key offerings comes with its own portal and login, meaning you interact with each like a separate product rather than with one single “security cloud.”

The Cloudflare One advantage: We are transparent about the size of our massive, global Anycast network, and we report on the number of cities, not data centers. The location of our customers matters, and so does their ability to access every one of our services no matter where they are. We have data centers in more than 270 cities (all in the same cloud network) compared to Zscaler’s 55 cities (and remember — not all of those cities are in the same cloud network). Every service (and its updates and new features) on Cloudflare One is built to run on every server in every data center in every city, and is available to every one of our customers. And on the frontend, Cloudflare One provides one dashboard for all Zero Trust — ZTNA, CASB, SWG, RBI, DLP, and much more — solving the swivel-chair problem of manually aligning policies and analytics across separate screens.

2) More throughput for improved end-user experience

It’s no good offering great security if it slows and degrades the user experience; seamless, frictionless, and fast access is critical to successful Zero Trust deployments — otherwise you will find your users looking for workarounds before you know it.

Zscaler states that it supports “… a maximum bandwidth of 1 Gbps for each GRE [IP] tunnel if its internal IP addresses aren’t behind NAT.” While most Internet applications and connections would hit a 1 Gbps network bottleneck somewhere in their path to the end user, some applications require more bandwidth and have been designed to support it — for example, users expect video streams or large file sharing to be as instant as anything else on the Internet. Assuming there will always be a bottleneck places an artificial cap on achievable throughput, even when link speeds and connectivity can be guaranteed.

The Cloudflare One advantage: We have spent a lot of time testing, and the results are clear: from an end-user perspective, performance on Cloudflare One is exceptional, and exceeds that of Zscaler. We tested the throughput between two devices that were running a high-bandwidth application. These devices were located in different VPCs within a public cloud’s network, but they could also be on different subnets within an on-premise private network. Each VPC was configured to use Cloudflare’s Anycast IP tunnel as an on-ramp to Cloudflare’s network, thereby enabling both devices to connect securely over Cloudflare One. The throughput recorded in both directions was 6 Gbps, significantly more capacity than the limits imposed by Zscaler and others. So, your organization doesn’t need to worry that your new high-bandwidth application will be constrained by the Zero Trust platform you adopted.

3) Better connected to the rest of the Internet

Zscaler claims to be the “fastest onramp to the Internet.” But this is a sleight of hand: an on-ramp is only one part of the equation; your data needs to transit the network, and also exit when it reaches its destination. Without fast, effective connectivity capabilities beyond the on-ramp, Zscaler is just an SSE platform and does not extend to SASE — translating this from initialism to English, Zscaler has not focused on the networking part of the platform.

The Cloudflare One advantage: We have over 10,500 interconnection peers, roughly an order of magnitude more than Zscaler. We don’t hand customers off at the edge like Zscaler does; you can use Cloudflare’s virtual backbone for transit. The Cloudflare network routes over 3 trillion requests per day — providing Argo Smart Routing with a unique vantage point to detect real-time congestion and route IP packets across the fastest and most reliable network paths.

We started this blog with the importance of functionality, so let’s end there. All the peering and proven throughput advantages don’t matter as much without considering the services offered. And while Zscaler claims to eliminate the need for regional DC hubs by offering services such as SWG and ZTNA, it completely misses addressing organizations’ need to protect their cloud applications or on-premise servers end-to-end — including inbound traffic when they’re exposed to the Internet — using Web Application Firewalls, Load Balancing, Authoritative DNS, and DDoS Protection: exactly the space in which Cloudflare had its beginnings and now leads the pack.

In four years, we have surpassed Zscaler in completeness of offering, including deployment simplicity, network resiliency, and innovation velocity; read the details here for yourself, and join us as we look to the next four years and beyond.

How Cloudflare Security does Zero Trust

Post Syndicated from Noelle Gotthardt original https://blog.cloudflare.com/how-cloudflare-security-does-zero-trust/

Throughout Cloudflare One week, we provided playbooks on how to replace your legacy appliances with Zero Trust services. Using our own products is part of our team’s culture, and we want to share our experiences when we implemented Zero Trust.

Our journey was similar to many of our customers. Not only did we want better security solutions, but the tools we were using made our work more difficult than it needed to be. This started with just a search for an alternative to remotely connecting on a clunky VPN, but soon we were deploying Zero Trust solutions to protect our employees’ web browsing and email. Next, we are looking forward to upgrading our SaaS security with our new CASB product.

We know that getting started with Zero Trust can seem daunting, so we hope that you can learn from our own journey and see how it benefited us.

Replacing a VPN: launching Cloudflare Access

Back in 2015, all of Cloudflare’s internally-hosted applications were reached via a hardware-based VPN. On-call engineers would fire up a client on their laptop, connect to the VPN, and log on to Grafana. This process was frustrating and slow.

Many of the products we build are a direct result of the challenges our own team is facing, and Access is a perfect example. Launching as an internal project in 2015, Access enabled employees to access internal applications through our identity provider. We started with just one application behind Access with the goal of improving incident response times. Engineers who received a notification on their phones could tap a link and, after authenticating via their browser, would immediately have the access they needed. As soon as people started working with the new authentication flow, they wanted it everywhere. Eventually our security team mandated that we move our apps behind Access, but for a long time it was totally organic: teams were eager to use it.

With authentication occurring at our network edge, we were able to support a globally-distributed workforce without the latency of a VPN, and we were able to do so securely. Moreover, our team is committed to protecting our internal applications with the most secure and usable authentication mechanisms, and two-factor authentication is one of the most important security controls that can be implemented. With Cloudflare Access, we’re able to rely on the strong two-factor authentication mechanisms of our identity provider.

Not all second factors of authentication deliver the same level of security. Some methods are still vulnerable to man-in-the-middle (MITM) attacks. These attacks often feature bad actors stealing one-time passwords, commonly through phishing, to gain access to private resources. To eliminate that possibility, we implemented FIDO2-compliant security keys. FIDO2 is an authenticator protocol designed to prevent phishing, and we saw it as an improvement over our reliance on soft tokens at the time.

While the implementation of FIDO2 can present compatibility challenges, we were enthusiastic to improve our security posture. Cloudflare Access enabled us to limit access to our systems to FIDO2 only. Cloudflare employees are now required to use their hardware keys to reach our applications. Onboarding Access was not only a huge win for ease of use; enforcing security keys was a massive improvement to our security posture.

Mitigate threats & prevent data exfiltration: Gateway and Remote Browser Isolation

Deploying secure DNS in our offices

A few years later, in 2020, many customers’ security teams were struggling to extend the controls they had enabled in the office to their remote workers. In response, we launched Cloudflare Gateway, offering customers protection from malware, ransomware, phishing, command & control, shadow IT, and other Internet risks over all ports and protocols. Gateway directs and filters traffic according to the policies implemented by the customer.

Our security team started with Gateway to implement DNS filtering in all of our offices. Since Gateway was built on top of the same network as 1.1.1.1, the world’s fastest DNS resolver, any current or future Cloudflare office will have DNS filtering without incurring additional latency. Each office connects to the nearest data center and is protected.

Deploying secure DNS for our remote users

Cloudflare’s WARP client was also built on top of our DNS resolver. It extends the security and performance offered in offices to remote corporate devices. With the WARP client deployed, corporate devices connect to the nearest Cloudflare data center and are routed to Cloudflare Gateway. By sitting between the corporate device and the Internet, the entire connection from the device is secure, while also offering improved speed and privacy.

We sought to extend secure DNS filtering to our remote workforce and deployed the Cloudflare WARP client to our fleet of endpoint devices. The deployment enabled our security teams to better preserve our users’ privacy by encrypting DNS traffic with DNS over HTTPS (DoH). Meanwhile, Cloudflare Gateway categorizes domains based on Radar, our own threat intelligence platform, enabling us to block high-risk and suspicious domains for users anywhere in the world.

Adding on HTTPS filtering and Browser Isolation

DNS filtering is a valuable security tool, but it is limited to blocking entire domains. Our team wanted a more precise instrument to block only malicious URLs, not the full domain. Since Cloudflare One is an integrated platform, most of the deployment was already complete. All we needed was to add the Cloudflare Root CA to our endpoints and then enable HTTP filtering in the Zero Trust dashboard. With those few simple steps, we were able to implement more granular blocking controls.

In addition to precision blocking, HTTP filtering enables us to implement tenant control. With tenant control, Gateway HTTP policies regulate access to corporate SaaS applications. Policies are implemented using custom HTTP headers. If the custom request header is present and the request is headed to an organizational account, access is granted. If the request header is present and the request goes to a non-organizational account, such as a personal account, the request can be blocked or opened in an isolated browser.

After protecting our users’ traffic at the DNS and HTTP layers, we implemented Browser Isolation. When Browser Isolation is implemented, all browser code executes in the cloud on Cloudflare’s network. This isolates our endpoints from malicious attacks and common data exfiltration techniques. Some remote browser isolation products introduce latency and frustrate users. Cloudflare’s Browser Isolation uses the power of our network to offer a seamless experience for our employees. It quickly improved our security posture without compromising user experience.

Preventing phishing attacks: Onboarding Area 1 email security

Also in early 2020, we saw an uptick in employee-reported phishing attempts. Our cloud-based email provider had strong spam filtering, but it fell short at blocking malicious threats and other advanced attacks. As we experienced increasing phishing attack volume and frequency, we felt it was time to explore more thorough email protection options.

The team looked for four main things in a vendor: the ability to scan email attachments, the ability to analyze suspected malicious links, business email compromise protection, and strong APIs into cloud-native email providers. After testing many vendors, Area 1 became the clear choice to protect our employees. We implemented Area 1’s solution in early 2020, and the results have been fantastic.

Given the overwhelmingly positive response to the product and the desire to build out our Zero Trust portfolio, Cloudflare acquired Area 1 Email Security in April 2022. We are excited to offer our customers the same protections we use ourselves.

What’s next: Getting started with Cloudflare’s CASB

Cloudflare acquired Vectrix in February 2022, and Vectrix’s CASB offers functionality we are excited to add to Cloudflare One. SaaS security is an increasing concern for many security teams. SaaS tools are storing more and more sensitive corporate data, so misconfigurations and external access can be a significant threat. However, securing these platforms can present a significant resource challenge. Manual reviews for misconfigurations or externally shared files are time-consuming yet necessary processes for many customers. CASB reduces this burden by scanning SaaS instances and identifying misconfigurations and vulnerabilities with just a few clicks.

We want to ensure we maintain the best practices for SaaS security, and like many of our customers, we have many SaaS applications to secure. We are always seeking opportunities to make our processes more efficient, so we are excited to onboard one of our newest Zero Trust products.

Always striving for improvement

Cloudflare takes pride in deploying and testing our own products. Our security team works directly with Product to dogfood our own products first. It’s our mission to help build a better Internet — and that means providing valuable feedback from our internal teams. As the number one consumer of Cloudflare’s products, the Security team is not only helping keep the company safer, but also contributing to building better products for our customers.

We hope you have enjoyed Cloudflare One week. We really enjoyed sharing our stories with you. To check out our recap of the week, please visit our Cloudflare TV segment.

Kubectl with Cloudflare Zero Trust

Post Syndicated from Terin Stock original https://blog.cloudflare.com/kubectl-with-zero-trust/

Cloudflare is a heavy user of Kubernetes for engineering workloads: it’s used to power the backend of our APIs, to handle batch-processing such as analytics aggregation and bot detection, and engineering tools such as our CI/CD pipelines. But between load balancers, API servers, etcd, ingresses, and pods, the surface area exposed by Kubernetes can be rather large.

In this post, we share a little bit about how our engineering team dogfoods Cloudflare Zero Trust to secure Kubernetes — and enables kubectl without proxies.

Our General Approach to Kubernetes Security

As part of our security measures, we heavily limit what can access our clusters over the network. Where a network service is exposed, we add additional protections, such as requiring Cloudflare Access authentication or Mutual TLS (or both) to access ingress resources.

These network restrictions include access to the cluster’s API server. Without such access, engineers at Cloudflare would not be able to use tools like kubectl to introspect their team’s resources. While we believe Continuous Deployment and GitOps are best practices, allowing developers to use the Kubernetes API aids troubleshooting and increases developer velocity. Not having access would have been a deal breaker.

To satisfy our security requirements, we’re using Cloudflare Zero Trust, and we wanted to share how we’re using it, and the process that brought us here.

Before Zero Trust

In the world before Zero Trust, engineers could access the Kubernetes API by connecting to a VPN appliance. While this is common across the industry, and it does allow access to the API, it also dropped engineers as clients into the internal network: they had much more network access than necessary.

We weren’t happy with this situation, but it was the status quo for several years. At the beginning of 2020, we retired our VPN and thus the Kubernetes team needed to find another solution.

Kubernetes with Argo Tunnels

At the time we worked closely with the team developing Cloudflare Tunnels to add support for handling kubectl connections using Access and cloudflared tunnels.

While this worked for our engineering users, it was a significant hurdle to onboarding new employees. Each Kubernetes cluster required its own tunnel connection from the engineer’s device, which made shuffling between clusters annoying. While kubectl supported connecting through SOCKS proxies, this support was not universal across tools in the Kubernetes ecosystem.

We continued using this solution internally while we worked towards a better solution.

Kubernetes with Zero Trust

Since the launch of Cloudflare One, we’ve been dogfooding the Zero Trust agent in various configurations. At first we’d been using it to implement secure DNS with 1.1.1.1. As time went on, we began to use it to dogfood additional Zero Trust features.

We’re now leveraging the private network routing in Cloudflare Zero Trust to allow engineers to access the Kubernetes APIs without needing to set up cloudflared tunnels or configure kubectl and other Kubernetes ecosystem tools to use tunnels. This isn’t specific to Cloudflare; you can do this for your team today!

Configuring Zero Trust

We use a configuration management tool for our Zero Trust configuration to enable infrastructure-as-code, which we’ve adapted below. However, the same configuration can be achieved using the Cloudflare Zero Trust dashboard.

The first thing we need to do is create a new tunnel. This tunnel will be used to connect the Cloudflare edge network to the Kubernetes API. We run the tunnel endpoints within Kubernetes, using configuration shown later in this post.

resource "cloudflare_argo_tunnel" "k8s_zero_trust_tunnel" {
  account_id = var.account_id
  name       = "k8s_zero_trust_tunnel"
  secret     = var.tunnel_secret
}

The “tunnel_secret” secret should be a 32-byte random number, which you should temporarily save, as we’ll reuse it for the Kubernetes setup later.
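For example, a suitable value can be generated with openssl (an assumption on our part; any cryptographically secure source of 32 random bytes works equally well):

```shell
# Generate 32 random bytes, base64-encoded, to use as var.tunnel_secret.
TUNNEL_SECRET="$(openssl rand -base64 32)"
echo "$TUNNEL_SECRET"
```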

After we’ve created the tunnel, we need to create the routes so the Cloudflare network knows what traffic to send through the tunnel.

resource "cloudflare_tunnel_route" "k8s_zero_trust_tunnel_ipv4" {
  account_id = var.account_id
  tunnel_id  = cloudflare_argo_tunnel.k8s_zero_trust_tunnel.id
  network    = ""
  comment    = "Kubernetes API Server (IPv4)"
}

resource "cloudflare_tunnel_route" "k8s_zero_trust_tunnel_ipv6" {
  account_id = var.account_id
  tunnel_id  = cloudflare_argo_tunnel.k8s_zero_trust_tunnel.id
  network    = "2001:DB8::101/128"
  comment    = "Kubernetes API Server (IPv6)"
}

We support accessing the Kubernetes API via both IPv4 and IPv6 addresses, so we configure routes for both. If you’re connecting to your API server via a hostname, these IP addresses should match what is returned via a DNS lookup.

Next we’ll configure settings for Cloudflare Gateway so that it’s compatible with the API servers and clients.

resource "cloudflare_teams_list" "k8s_apiserver_ips" {
  account_id = var.account_id
  name       = "Kubernetes API IPs"
  type       = "IP"
  items      = ["", "2001:DB8::101/128"]
}

resource "cloudflare_teams_rule" "k8s_apiserver_zero_trust_http" {
  account_id  = var.account_id
  name        = "Don't inspect Kubernetes API"
  description = "Allow connections from kubectl to API"
  precedence  = 10000
  action      = "off"
  enabled     = true
  filters     = ["http"]
  traffic     = format("any(http.conn.dst_ip[*] in $%s)", replace(cloudflare_teams_list.k8s_apiserver_ips.id, "-", ""))
}

As we use mutual TLS between clients and the API server, and not all the traffic between kubectl and the API servers is HTTP, we’ve disabled HTTP inspection for these connections.
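The `traffic` expression may look cryptic: Gateway filter expressions reference an IP list by its UUID with the dashes stripped, prefixed with `$`, which is exactly what the `format()`/`replace()` pair produces. The same transformation can be sketched in shell (the list UUID below is a hypothetical placeholder; the real value comes from the created list):

```shell
# Hypothetical Teams list UUID.
LIST_ID="f2a81b32-9c5d-4fd8-b387-0a3f5dd55ab1"
# Mirror Terraform's format("any(http.conn.dst_ip[*] in $%s)", replace(id, "-", ""))
EXPR="any(http.conn.dst_ip[*] in \$${LIST_ID//-/})"
echo "$EXPR"
```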

You can pair these rules with additional Zero Trust rules, such as device attestation, session lifetimes, and user and group access policies to further customize your security.

Deploying Tunnels

Once you have your tunnels created and configured, you can deploy their endpoints into your network. We’ve chosen to deploy the tunnels as pods, as this allows us to use Kubernetes’s deployment strategies for rolling out upgrades and handling node failures.

apiVersion: v1
kind: ConfigMap
metadata:
  name: tunnel-zt
  namespace: example
  labels:
    tunnel: api-zt
data:
  config.yaml: |
    tunnel: 8e343b13-a087-48ea-825f-9783931ff2a5
    credentials-file: /opt/zt/creds/creds.json
    warp-routing:
      enabled: true

This creates a Kubernetes ConfigMap with a basic configuration that enables WARP routing for the tunnel ID specified. You can get this tunnel ID from your configuration management system, the Cloudflare Zero Trust dashboard, or by running the following command from another device logged into the same account.

cloudflared tunnel list

Next, we’ll need to create a secret for our tunnel credentials. While you should use a secret management system, for simplicity we’ll create one directly here.

jq -cn --arg accountTag $CF_ACCOUNT_TAG \
       --arg tunnelID $CF_TUNNEL_ID \
       --arg tunnelName $CF_TUNNEL_NAME \
       --arg tunnelSecret $CF_TUNNEL_SECRET \
   '{AccountTag: $accountTag, TunnelID: $tunnelID, TunnelName: $tunnelName, TunnelSecret: $tunnelSecret}' | \
kubectl create secret generic -n example tunnel-creds --from-file=creds.json=/dev/stdin

This creates a new secret “tunnel-creds” in the “example” namespace with the credentials file the tunnel expects.
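Before mounting the secret, it’s worth sanity-checking the file, since cloudflared expects all four fields to be present. A sketch using jq, with hypothetical placeholder values standing in for your real account tag, tunnel ID, name, and secret:

```shell
# Build a creds.json the same way as above, with placeholder values.
jq -cn --arg accountTag "0123456789abcdef0123456789abcdef" \
       --arg tunnelID "8e343b13-a087-48ea-825f-9783931ff2a5" \
       --arg tunnelName "k8s_zero_trust_tunnel" \
       --arg tunnelSecret "$(openssl rand -base64 32)" \
   '{AccountTag: $accountTag, TunnelID: $tunnelID, TunnelName: $tunnelName, TunnelSecret: $tunnelSecret}' \
   > creds.json

# Every field must be a non-empty string, or cloudflared will reject the file.
jq -e 'all(.AccountTag, .TunnelID, .TunnelName, .TunnelSecret; type == "string" and length > 0)' creds.json
```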

Now we can deploy the tunnels themselves. We deploy multiple replicas to ensure some are always available, even while nodes are being drained.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    tunnel: api-zt
  name: tunnel-api-zt
  namespace: example
spec:
  replicas: 3
  selector:
    matchLabels:
      tunnel: api-zt
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
  template:
    metadata:
      labels:
        tunnel: api-zt
    spec:
      containers:
        - args:
            - tunnel
            - --config
            - /opt/zt/config/config.yaml
            - run
          env:
            - name: GOMAXPROCS
              value: "2"
            - name: TZ
              value: UTC
          image: cloudflare/cloudflared:2022.5.3
          livenessProbe:
            failureThreshold: 1
            httpGet:
              path: /ready
              port: 8081
            initialDelaySeconds: 10
            periodSeconds: 10
          name: tunnel
          ports:
            - containerPort: 8081
              name: http-metrics
          resources:
            limits:
              cpu: "1"
              memory: 100Mi
          volumeMounts:
            - mountPath: /opt/zt/config
              name: config
              readOnly: true
            - mountPath: /opt/zt/creds
              name: creds
              readOnly: true
      volumes:
        - name: creds
          secret:
            secretName: tunnel-creds
        - name: config
          configMap:
            name: tunnel-zt

Using Kubectl with Cloudflare Zero Trust

After deploying the Cloudflare Zero Trust agent, members of your team can now access the Kubernetes API without needing to set up any special SOCKS tunnels!

kubectl version --short
Client Version: v1.24.1
Server Version: v1.24.1

What’s next?

If you try this out, send us your feedback! We’re continuing to improve Zero Trust for non-HTTP workflows.

Decommissioning your VDI

Post Syndicated from James Chang original https://blog.cloudflare.com/decommissioning-virtual-desktop/

This blog offers Cloudflare’s perspective on how remote browser isolation can help organizations offload internal web application use cases currently secured by virtual desktop infrastructure (VDI). VDI has historically been useful to secure remote work, particularly when users relied on desktop applications. However, as web-based apps have become more popular than desktop apps, the drawbacks of VDI – high costs, unresponsive user experience, and complexity – have become harder to ignore. In response, we offer practical recommendations and a phased approach to transition away from VDI, so that organizations can lower cost and unlock productivity by improving employee experiences and simplifying administrative overhead.

Modern Virtual Desktop usage

Background on Virtual Desktop Infrastructure (VDI)

Virtual Desktop Infrastructure describes running desktop environments on virtual computers hosted in a data center. When users access resources within VDI, video streams from those virtual desktops are delivered securely to endpoint devices over a network. Today, VDI is predominantly hosted on-premise in data centers and either managed directly by organizations themselves or by third-party Desktop-as-a-Service (DaaS) providers. Even as web application usage grows at the expense of desktop applications, DaaS is expanding, with Gartner® recently projecting DaaS spending to double by 2024.

Both flavors of VDI promise benefits to support remote work. For security, VDI offers a way to centralize configuration for many dispersed users and to keep sensitive data far away from devices. Business executives are often attracted to VDI because of potential cost savings over purchasing and distributing devices to every user. The theory is that when processing is shifted to centralized servers, IT teams can save money shipping out fewer managed laptops and instead support bring-your-own-device (BYOD). When hardware is needed, they can purchase less expensive devices and even extend the lifespan of older devices.

Challenges with VDI

High costs

The reality of VDI is often quite different. In particular, it ends up being much more costly than organizations anticipate for both capital and operational expenditures. Gartner® projects that “by 2024, more than 90% of desktop virtualization projects deployed primarily to save cost will fail to meet their objectives.”

The reasons are multiple. On-premise VDI comes with significant upfront capital expenditures (CapEx) in servers. DaaS deployments require organizations to make opaque decisions about virtual machines (e.g. number, region, service levels, etc.) and their specifications (e.g. persistent vs. pooled, always-on vs. on-demand, etc.). In either scenario, the operational expenditures (OpEx) from maintenance and failing to rightsize capacity can lead to surprises and overruns. For both flavors, the more organizations commit to virtualization, the more they are locked into high ongoing compute expenses, particularly as workforces grow remotely.

Poor user experience

VDI also delivers a subpar user experience. Expectations for frictionless IT experiences have only increased during remote work, and users can still tell the difference between accessing apps directly versus from within a virtual desktop. VDI environments that are not rightsized can lead to clunky, latent, and unresponsive performance. Poor experiences can negatively impact productivity, security (as users seek workarounds outside of VDI), and employee retention (as users grow disaffected).


Complexity

Overall, VDI is notoriously complex. Initial setup is multi-faceted and labor-intensive, with steps including investing in servers and end user licenses, planning VM requirements and capacity, virtualizing apps, setting up network connectivity, and rolling out VDI thin clients. Establishing security policies is often the last step, and for this reason, can sometimes be overlooked, leading to security gaps.

Moving VDI into full production not only requires cross-functional coordination across typical teams like IT, security, and infrastructure & operations, but also typically requires highly specialized talent, often known as virtual desktop administrators. These skills are hard to find and retain, which can be risky to rely on during this current high-turnover labor market.

Even still, administrators often need to build their own logging, auditing, inspection, and identity-based access policies on top of these virtualized environments. This means additional overhead of configuring separate services like secure web gateways.

Some organizations deploy VDI primarily to avoid the shipping costs, logistical hassles, and regulatory headaches of sending out managed laptops to their global workforce. But with VDI, what seemed like a fix for one problem can quickly create more overhead and frustration. Wrestling with VDI’s complexity is likely not worthwhile, particularly if users only need to access a select few internal web services.

Offloading Virtual Desktop use cases with Remote Browser Isolation

To avoid these frictions, organizations are exploring ways to shift use cases away from VDI, particularly when on-prem. Most applications that workforces rely on today are accessible via the browser and are hosted in public or hybrid cloud or SaaS environments, and even occasionally in legacy data centers. As a result, modern services like remote browser isolation (RBI) increasingly make sense as alternatives to begin offloading VDI workloads and shift security to the cloud.

Like VDI, Cloudflare Browser Isolation minimizes attack surface by running all app and web code away from endpoints — in this case, on Cloudflare’s global network. In the process, Cloudflare can secure data-in-use within a browser from untrusted users and devices, plus insulate those endpoints from threats like ransomware, phishing and even zero-day attacks. Within an isolated browser, administrators can set policies to protect sensitive data on any web-based or SaaS app, just as they would with VDI. Sample controls include restrictions on file uploads / downloads, copy and paste, keyboard inputs, and printing functionality.

This comparable security comes with more achievable business benefits, starting with helping employees be more productive:

  1. End users benefit from a faster and more transparent experience than with VDI. Our browser isolation is designed to run across our 270+ locations, so that isolated sessions are served as close to end users as possible. Unlike with VDI, there is no backhauling user traffic to centralized data centers. Plus, Cloudflare’s Network Vector Rendering (NVR) approach ensures that the in-app experience feels like a native, local browser – without bandwidth-intensive pixel-pushing techniques.
  2. Administrators benefit because they can skip all the up-front planning, ongoing overhead, and scaling pains associated with VDI. Instead, administrators turn on isolation policies from a single dashboard and let Cloudflare handle scaling to users and devices. Plus, native integrations with ZTNA, SWG, CASB, and other security services make it easy to begin modernizing VDI-adjacent use cases.

On the cost side, expenses associated with browser isolation are overall lower, smoother, and more predictable than with VDI. In fact, Gartner® recently highlighted that “RBI is cheaper than using VDI for isolation if the only application being isolated is the browser.”

Unlike on-prem VDI, there are no capital expenditures on VM capacity, and unlike DaaS subscriptions, Cloudflare offers simple, seat-based pricing with no add-on fees for configurations. Organizations also can skip purchasing standalone point solutions because Cloudflare’s RBI comes natively integrated with other services in the Cloudflare Zero Trust platform. Most notably, we do not charge for cloud consumption, which is a common source of VDI surprise.

Transitioning to Cloudflare Browser Isolation

[Diagram and table: Decommissioning your VDI]

Customer story: PensionBee

PensionBee, a leading online pension provider in the UK, recognized this opportunity to offload virtual desktop use cases and switch to RBI. In response to the pandemic, PensionBee initially onboarded a DaaS solution (Amazon WorkSpaces) to help employees access internal resources remotely. Specifically, CTO Jonathan Lister Parsons was most concerned about securing Salesforce, where PensionBee held its customers’ sensitive pension data.

The DaaS supported access controls similar to those PensionBee had configured for employees when they were previously in the office (e.g. allowlisting the IPs of the virtual desktops). But shortly after rollout, Lister Parsons began developing concerns about the unresponsive user experience. In a recent webinar, he estimated that “users are generally about 10% less productive when they’re using the DaaS to do their work.” This negative experience increased the support burden on PensionBee’s IT staff to the point where they had to build an automated tool to reboot an employee’s DaaS service whenever it was acting up.

“From a usability perspective, it’s clearly better if employees can have a native browsing experience that people are used to compared to a remote desktop. That’s sort of a no-brainer,” Lister Parsons said. “But typically, it’s been hard to deliver that while keeping security in place, costs low, and setup complexity down.”

When Lister Parsons encountered Cloudflare Browser Isolation, he was impressed with the service’s performance and lightweight user experience. Because PensionBee employees accessed the vast majority of their apps (including Salesforce) via a browser, RBI was a strong fit. Cloudflare’s controls over copy/paste and file downloads reduced the risk of customer pension details in Salesforce reaching local devices.

“We started using Cloudflare Zero Trust with Browser Isolation to help provide the best security for our customers’ data and protect employees from malware,” he said. “It worked so well I forgot it was on.”

PensionBee is just one of many organizations developing a roadmap for this transition from VDI. In the next section, we provide Cloudflare’s recommendations for planning and executing that journey.

Practical recommendations

Pre-implementation planning

Understanding where to start this transition requires some forethought. Specifically, cross-functional teams – across groups like IT, security, and infrastructure & operations (I&O) – should develop a collective understanding of how VDI is used today, what use cases should be offloaded first, and what impact any changes will have on both end users and administrators.

In our own consultations, we start by asking about the needs and expectations of end users because their consistent adoption will dictate an initiative’s success. Based on that foundation, we then typically help organizations map out and prioritize the applications and data they need to secure. Last but not least, we strategize around the ‘how:’ what administrators and expertise will be needed not only for the initial configuration of new services, but also for the ongoing improvement. Below are select questions we ask customers to consider across those key dimensions to help them navigate their VDI transition.

Questions to consider

[Table: Decommissioning your VDI]

Migration from VDI to RBI

Organizations can leverage Cloudflare Browser Isolation and other Zero Trust services to begin offloading VDI use cases and realize cost savings and productivity gains within days of rollout. Our recommended three-phase approach focuses on securing the most critical services with the least disruption to user experience, while also prioritizing quick time-to-value.

Phase 1: Configure clientless web isolation for web-based applications

Using our clientless web isolation approach, administrators can send users to their private web application served in an isolated browser environment with just a hyperlink – without any software needed on endpoints. Then, administrators can build data protection rules preventing risky user actions within these isolated browser-based apps. Plus, because administrators avoid rolling out endpoint clients, scaling access to employees, contractors, or third parties even on unmanaged devices is as easy as sending a link.
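In practice, the hyperlink-based flow means each isolated app is reachable at a URL served from your team’s Cloudflare Access domain. Below is a minimal sketch in Python of constructing such a link; the team name is hypothetical, and the exact path format for your account may differ from this illustration.

```python
from urllib.parse import quote

def isolation_link(team_name: str, app_url: str) -> str:
    # Hypothetical link shape: clientless isolation serves a target URL
    # through <team>.cloudflareaccess.com. The "/browser/" path prefix
    # here is illustrative; confirm the exact format for your account.
    return f"https://{team_name}.cloudflareaccess.com/browser/{quote(app_url, safe=':/')}"

link = isolation_link("exampleco", "https://wiki.internal.example.com")
```

Because the link is just a URL, distributing access to contractors or BYOD users reduces to sharing it through whatever channel you already use.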

These isolated links can exist in parallel with your existing VDI, enabling a graceful migration to this new approach longer term. Comparing the different experiences side by side can help your internal stakeholders evangelize the RBI-based approach over time. Cross-functional communication is critical throughout this phased rollout: for example, in prioritizing what web apps to isolate before configuration, and after configuration, articulating how those changes will affect end users.

Phase 2: Shift SSH- and VNC-based apps from VDI to Cloudflare

Clientless isolation is a great fit to secure web apps. This next phase helps secure non-web apps within VDI environments, which are commonly accessed via an SSH or VNC connection. For example, privileged administrators often use SSH to control remote desktops and fulfill service requests. Other less technical employees may need VNC’s graphical user interface to work in legacy apps inaccessible from a modern operating system.

Cloudflare enables access to these SSH and VNC environments through a browser – again without requiring any software installed on endpoints. Both the SSH and VNC setups are similar in that administrators create a secure outbound-only connection between a machine and Cloudflare’s network before a terminal is rendered in a browser. By sending traffic to our network, Cloudflare can authenticate access to apps based on identity check and other granular policies and can provide detailed audits of each user session. (You can read more about the SSH and VNC experience in prior blog posts.)
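The outbound-only connection described above is typically defined by ingress rules in the connector’s configuration. As a sketch, the Python below mirrors the shape of a cloudflared config.yml ingress section; the hostnames and ports are hypothetical, though the final `http_status:404` catch-all rule is a real cloudflared requirement.

```python
def cloudflared_ingress(hostname_to_service: dict) -> list:
    # Build the `ingress` section of a cloudflared config.yml as Python
    # data: one rule per public hostname, routing to a local service.
    rules = [{"hostname": host, "service": svc}
             for host, svc in hostname_to_service.items()]
    # cloudflared requires a final catch-all rule for unmatched requests.
    rules.append({"service": "http_status:404"})
    return rules

rules = cloudflared_ingress({
    "ssh.example.com": "ssh://localhost:22",    # browser-rendered SSH terminal
    "vnc.example.com": "tcp://localhost:5901",  # browser-rendered VNC session
})
```

Each hostname then becomes a browser-accessible entry point, with access gated by the identity policies described below.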

We recommend first securing SSH apps to support privileged administrators, who can provide valuable feedback. Then, move to support the broader range of users who rely on VNC. Administrators will set up connections and policies using our ZTNA service from the same management panel used for RBI. Altogether, this browser-based experience should reduce latency and have users feeling more at home and productive than in their virtualized desktops.

Phase 3: Progress towards Zero Trust security posture

Step 3A: Set up identity verification policies per application
With phases 1 and 2, you have been using Cloudflare to progressively secure access to web and non-web apps for select VDI use cases. In phase 3, build on that foundation by adopting ZTNA for all your applications, not just ones accessed through VDI.

Administrators use the same Cloudflare policy builder to add more granular conditional access rules in line with Zero Trust security best practices, including checking for an identity provider (IdP). Cloudflare integrates with multiple IdPs simultaneously and can federate multiple instances of the same IdP, enabling flexibility to support any variety of users. After setting up IdP verification, we see administrators often enhance security by requiring MFA. These types of identity checks can also be set up within VDI environments, which can build confidence in adopting Zero Trust before deprecating VDI entirely.
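As a sketch of what such a conditional access rule might look like as an API payload, the snippet below builds a policy requiring a specific IdP plus MFA. The field names follow the general shape of Access policies but are illustrative, not an exact schema.

```python
def access_policy(app_name: str, idp_id: str) -> dict:
    # Illustrative Access policy payload: allow only logins through a
    # specific identity provider, and additionally require MFA.
    # Field names are assumptions for illustration; consult the Access
    # API reference for the authoritative schema.
    return {
        "name": f"{app_name} - require IdP + MFA",
        "decision": "allow",
        "include": [{"login_method": {"id": idp_id}}],  # which IdP may log in
        "require": [{"auth_method": "mfa"}],            # layered MFA requirement
    }

policy = access_policy("Salesforce", "idp-1234")
```

The same rule can first be applied to apps still reached through VDI, so teams build confidence in the policy model before retiring virtual desktops.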

Step 3B: Rebuild confidence in user devices by layering in device posture checks
So far, the practical steps we’ve recommended do not require any Cloudflare software on endpoints – which optimizes for deployment speed in offloading VDI use cases. But longer term, there are security, visibility, and productivity benefits to deploying Cloudflare’s device client where it makes sense.

Cloudflare’s device client (aka WARP) works across all major operating systems and is optimized for flexible deployment. For managed devices, use any script-based method with popular mobile device management (MDM) software, and self-enrollment is a useful option for third-party users. With WARP deployed, administrators can enhance application access policies by first checking for the presence of specific programs or files, disk encryption status, the right OS version, and other additional attributes. Plus, if your organization uses endpoint protection (EPP) providers like Crowdstrike, SentinelOne, and more, verify access by first checking for the presence of that software or examining device health.
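Posture signals can be layered onto existing access rules. Below is a hedged sketch of adding posture requirements to a policy payload; the `device_posture` field names and check IDs are assumptions for illustration only.

```python
def add_posture_requirements(policy: dict, posture_check_ids: list) -> dict:
    # Layer device posture checks (e.g. disk encryption, OS version,
    # EPP presence) onto an existing policy dict. The field names here
    # mirror the general shape of Zero Trust posture rules and are
    # illustrative rather than authoritative.
    requirements = policy.setdefault("require", [])
    for uid in posture_check_ids:
        requirements.append({"device_posture": {"integration_uid": uid}})
    return policy

policy = add_posture_requirements(
    {"name": "internal-wiki", "decision": "allow"},
    ["disk-encryption-check", "os-version-check"],  # hypothetical check IDs
)
```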

Altogether, adding device posture signals both levels up security and enables more granular visibility for both managed and BYOD devices. As with identity verification, administrators can start by enabling device posture checks for users still using virtual desktops. Over time, as administrators build more confidence in user devices, they should begin routing users on managed devices to apps directly, as opposed to through the slower VDI experience.

Step 3C: Progressively shift security services away from virtualized environments to Zero Trust
Rethinking application access use cases in prior phases has reduced reliance on complex VDI. By now, administrators should already be building comfort with Zero Trust policies, as enabled by Cloudflare. Our final recommendation in this article is to continue that journey away from virtualization and towards Zero Trust Network Access.

Instead of sending users into virtualized apps inside virtualized desktops, organizations can eliminate that overhead entirely and embrace cloud-delivered ZTNA to protect one-to-one connections between all users and all apps in any cloud environment. The more apps secured with Cloudflare vs. VDI, the greater the consistency of controls, visibility, and end user experience.

Virtualization has provided a powerful technology to bridge the gap between our hardware-centric legacy investments and IT’s cloud-first future. At this point, however, reliance on virtualization puts undue pressure on your administrators and risks diminishing end user productivity. As apps, users, and data accelerate their migration to the cloud, it only makes sense to shift security controls there too with cloud-native, not virtualized services.

As longer term steps, organizations can explore taking advantage of Cloudflare’s other natively-integrated services, such as our Secure Web Gateway (SWG), Cloud Access Security Broker (CASB), and email security. Other blogs this week outline how to transition to these Cloudflare services from other legacy technologies.

Summary table

[Table: Decommissioning your VDI]

Best practices and progress metrics

Below are sample best practices we recommend for achieving as smooth a transition as possible, followed by sample metrics to track progress on your initiative:

  • Be attuned to end user experiences: Whatever replaces VDI needs to perform better than what came before. When trying to change user habits and drive adoption, administrators must closely track what users like and dislike about the new services.
  • Prioritize cross-functional collaboration: Sunsetting VDI will inevitably involve coordination across diverse teams, including IT, security, infrastructure, and virtual desktop administrators. It is critical to establish shared ways of working and trust to overcome any road bumps.
  • Roll out incrementally and learn: Test out each step with a subset of users and apps before rolling out more widely to figure out what works (and does not). Start by testing out clientless web isolation for select apps to gain buy-in from users and executives.

Sample progress metrics

[Table: Decommissioning your VDI]

Explore your VDI transition

Cloudflare Zero Trust makes it easy to begin sunsetting your VDI, beginning with leveraging our clientless browser isolation to secure web apps.

To learn more about how to move towards Zero Trust and away from virtualized desktops, request a consultation today. Replacing your VDI is a great project to fit into your overall Zero Trust roadmap. For a full summary of Cloudflare One Week and what’s new, tune in to our recap webinar.

HTTP/3 inspection on Cloudflare Gateway

Post Syndicated from Ankur Aggarwal original https://blog.cloudflare.com/cloudflare-gateway-http3-inspection/


Today, we’re excited to announce upcoming support for HTTP/3 inspection through Cloudflare Gateway, our comprehensive secure web gateway. HTTP/3 currently powers 25% of the Internet and delivers a faster browsing experience, without compromising security. Until now, administrators seeking to filter and inspect HTTP/3-enabled websites or APIs needed to either compromise on performance by falling back to HTTP/2 or lose visibility by bypassing inspection. With HTTP/3 support in Cloudflare Gateway, you can have full visibility on all traffic and provide the fastest browsing experience for your users.

Why is the web moving to HTTP/3?

HTTP is one of the oldest technologies that powers the Internet. All the way back in 1996, security and performance were afterthoughts and encryption was left to the transport layer to manage. This model doesn’t scale to the performance needs of the modern Internet and has led to HTTP being upgraded to HTTP/2 and now HTTP/3.

HTTP/3 accelerates browsing activity by using QUIC, a modern transport protocol that is always encrypted by default. This delivers faster performance by reducing round-trips between the user and the web server and is more performant for users with unreliable connections. For further information about HTTP/3’s performance advantages take a look at our previous blog here.

HTTP/3 development and adoption

Cloudflare’s mission is to help build a better Internet. We see HTTP/3 as an important building block to make the Internet faster and more secure. We worked closely with the IETF to iterate on the HTTP/3 and QUIC standards documents. These efforts, combined with progress by popular browsers like Chrome and Firefox to enable QUIC by default, have translated into HTTP/3 now being used by over 25% of all websites.

We’ve advocated for HTTP/3 extensively over the past few years. We first introduced support for the underlying transport layer QUIC in September 2018 and then from there worked to introduce HTTP/3 support for our reverse proxy services the following year in September of 2019. Since then our efforts haven’t slowed down and today we support the latest revision of HTTP/3, using the final “h3” identifier matching RFC 9114.

HTTP/3 inspection hurdles

But while there are many advantages to HTTP/3, its introduction has created deployment complexity and security tradeoffs for administrators seeking to filter and inspect HTTP traffic on their networks. HTTP/3 offers familiar HTTP request and response semantics, but the use of QUIC changes how it looks and behaves “on the wire”. Since QUIC runs atop UDP, it is architecturally distinct from legacy TCP-based protocols and has poor support from legacy secure web gateways. The combination of these two factors has made it challenging for administrators to keep up with the evolving technological landscape while maintaining the users’ performance expectations and ensuring visibility and control over Internet traffic.

Without proper secure web gateway support for HTTP/3, administrators have needed to choose whether to compromise on security and/or performance for their users. Security tradeoffs include not inspecting UDP traffic, or, even worse, forgoing critical security capabilities such as inline anti-virus scanning, data loss prevention, browser isolation, and/or traffic logging. Naturally, for any security-conscious organization, discarding security and visibility is not an acceptable approach, and this has led administrators to proactively disable HTTP/3 on their end user devices. This introduces deployment complexity and sacrifices performance, as it requires disabling QUIC support within users’ web browsers.

How to enable HTTP/3 Inspection

Once support for HTTP/3 inspection is available for select browsers later this year, you’ll be able to enable HTTP/3 inspection through the dashboard. After logging into the Zero Trust dashboard, toggle on proxying, check the box for UDP traffic, and enable TLS decryption under Settings > Network > Firewall. Once these settings have been enabled, AV scanning, remote browser isolation, DLP, and HTTP filtering can be applied via HTTP policies to all of your organization’s proxied HTTP traffic.
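The dashboard settings above can also be expressed as an API-style configuration payload. The sketch below is illustrative of the general shape (proxy TCP and UDP, enable TLS decryption); the field names are assumptions, not an exact schema.

```python
import json

def gateway_settings_payload() -> dict:
    # Illustrative Gateway configuration payload mirroring the dashboard
    # toggles under Settings > Network > Firewall. Field names are
    # assumptions for illustration; check the API reference for the
    # authoritative schema.
    return {
        "settings": {
            "proxy": {"tcp": True, "udp": True},  # UDP proxying covers QUIC/HTTP3 traffic
            "tls_decrypt": {"enabled": True},     # required for inline inspection
        }
    }

body = json.dumps(gateway_settings_payload())
```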


What’s next

Administrators will no longer need to make security tradeoffs based on the evolving technological landscape and can focus on protecting their organization and teams. We’ll reach out to all Cloudflare One customers once HTTP/3 inspection is available and are excited to simplify secure web gateway deployments for administrators.

HTTP/3 traffic inspection will be available to all administrators of all plan types; if you have not signed up already click here to get started.

Announcing Gateway + CASB

Post Syndicated from Corey Mahan original https://blog.cloudflare.com/announcing-gateway-and-casb/


This post is also available in 简体中文, 日本語, Español.


Shadow IT and managing access to sanctioned or unsanctioned SaaS applications remain among the biggest pain points for IT administrators in the era of the cloud.

We’re excited to announce that starting today, Cloudflare’s Secure Web Gateway and our new API-driven Cloud Access Security Broker (CASB) work seamlessly together to help IT and security teams go from finding Shadow IT to fixing it in minutes.

Detect security issues within SaaS applications

Cloudflare’s API-driven CASB starts by providing comprehensive visibility into SaaS applications, so you can easily prevent data leaks and compliance violations. Setup takes just a few clicks to integrate with your organization’s SaaS services, like Google Workspace and Microsoft 365. From there, IT and security teams can see what applications and services their users are logging into and how company data is being shared.

So you’ve found the issues. But what happens next?

Identify and detect, but then what?

Customer feedback from the API-driven CASB beta has followed a similar theme: it was super easy to set up and detect all my security issues, but how do I fix this stuff?

Almost immediately after investigating the most critical issues, it makes sense to want to start taking action. Whether it be detecting an unknown application being used for Shadow IT or wanting to limit functionality, access, or behaviors to a known but unapproved application, remediation is front of mind.

This led to customers feeling like they had a bunch of useful data in front of them, but no clear action to take to get started on fixing them.

Create Gateway policies from CASB security findings

To solve this problem, we’re allowing you to easily create Gateway policies from CASB security findings. Security findings are issues detected within SaaS applications that involve users, data at rest, and settings that are assigned a Low, Medium, High or Critical severity per integration.

Using the security findings from CASB allows for fine-grained Gateway policies which prevent future unwanted behavior while still allowing usage that aligns to company security policy. This means going from viewing a CASB security issue, like the use of an unapproved SaaS application, to preventing or controlling access in minutes. This seamless cross-product experience all happens from a single, unified platform.

For example, take the CASB Google Workspace security finding around third-party apps which detects sign-ins or other permission sharing from a user’s account. In just a few clicks, you can create a Gateway policy to block some or all of the activity, like uploads or downloads, to the detected SaaS application. This policy can be applied to some or all users, based on what access has been granted to the user’s account.

By surfacing the exact behavior with CASB, you can take swift and targeted action to better protect your organization with Gateway.
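As a sketch of what a Gateway rule generated from a CASB finding might look like, the snippet below builds a block policy for uploads to a detected app. The expression syntax and field names are illustrative of the general shape of Gateway HTTP rules, not the exact API schema, and the app ID is hypothetical.

```python
def block_app_uploads_rule(app_id: int) -> dict:
    # Illustrative Gateway HTTP rule: block file uploads to a SaaS app
    # surfaced by a CASB security finding. The wirefilter-style traffic
    # expression and field names are assumptions for illustration.
    return {
        "name": f"Block uploads to unapproved app {app_id}",
        "action": "block",
        "filters": ["http"],
        # Match requests to the detected app that carry an upload.
        "traffic": f'any(app.ids[*] in {{{app_id}}}) and http.request.method == "POST"',
        "enabled": True,
    }

rule = block_app_uploads_rule(606)  # 606: hypothetical app ID from a CASB finding
```

A narrower variant could scope the same rule to specific user groups, matching the per-user control described above.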


Get started today with Cloudflare One

This post highlights one of the many ways the Cloudflare One suite of solutions work seamlessly together as a unified platform to find and fix security issues across SaaS applications.

Get started now with Cloudflare’s Secure Web Gateway by signing up here. Cloudflare’s API-driven CASB is in closed beta with new customers being onboarded each week. You can request access here to try out this exciting new cross-product feature.

A stronger bridge to Zero Trust

Post Syndicated from Annika Garbers original https://blog.cloudflare.com/stronger-bridge-to-zero-trust/


We know that migration to Zero Trust architecture won’t be an overnight process for most organizations, especially those with years of traditional hardware deployments and networks stitched together through M&A. But part of why we’re so excited about Cloudflare One is that it provides a bridge to Zero Trust for companies migrating from legacy network architectures.

Today, we’re doubling down on this — announcing more enhancements to the Cloudflare One platform that make a transition from legacy architecture to the Zero Trust network of the future easier than ever: new plumbing for more Cloudflare One on-ramps, expanded support for additional IPsec parameters, and easier on-ramps from your existing SD-WAN appliances.

Any on- or off-ramp: fully composable and interoperable

When we announced our vision for Cloudflare One, we emphasized the importance of allowing customers to connect to our network however they want — with hardware devices they’ve already deployed, with any carrier they already have in place, with existing technology standards like IPsec tunnels or more Zero Trust approaches like our lightweight application connector. In hundreds of customer conversations since that launch, we’ve heard you reiterate the importance of this flexibility. You need a platform that meets you where you are today and gives you a smooth path to your future network architecture by acting as a global router with a single control plane for any way you want to connect and manage your network traffic.

We’re excited to share that over the past few months, the last pieces of this puzzle have fallen into place, and customers can now use any Cloudflare One on-ramp and off-ramp together to route traffic seamlessly between devices, offices, data centers, cloud properties, and self-hosted or SaaS applications. This includes (new since our last announcement, and rounding out the compatibility matrix below) the ability to route traffic from networks connected with a GRE tunnel, IPsec tunnel, or CNI to applications connected with Cloudflare Tunnel.

Fully composable Cloudflare One on-ramps

[Table: Cloudflare One on-ramp and off-ramp compatibility matrix — rows (from): WARP client, GRE tunnel, IPsec tunnel; columns (to): BYOIP, WARP client, CNI, GRE tunnel, IPsec tunnel, Cloudflare Tunnel]

This interoperability is key to organizations’ strategy for migrating from legacy network architecture to Zero Trust. You can start by improving performance and enhancing security using technologies that look similar to what you’re used to today, and incrementally upgrade to Zero Trust at a pace that makes sense for your organization.

Expanded options and easier management of Anycast IPsec tunnels

We’ve seen incredibly exciting demand since our launch of Anycast IPsec as an on-ramp for Cloudflare One back in December. Since IPsec has been the industry standard for encrypted network connectivity for almost thirty years, there are many implementations and parameters available to choose from, and our customers are using a wide variety of network devices to terminate these tunnels. To make the process of setting up and managing IPsec tunnels from any network easier, we’ve built on top of our initial release with support for new parameters, a new UI and Terraform provider support, and step-by-step guides for popular implementations.

  • Expanded support for additional configuration parameters: We started with a small set of default parameters based on industry best practices, and have expanded from there – you can see the up-to-date list in our developer docs. Since we wrote our own IPsec implementation from scratch (read more about why in our announcement blog), we’re able to add support for new parameters with just a single (quick!) development cycle. If the settings you’re looking for aren’t on our list yet, contact us to learn about our plans for supporting them.
  • Configure and manage tunnels from the Cloudflare dashboard: Anycast IPsec and GRE tunnel configuration can be managed with just a few clicks from the Cloudflare dashboard. After creating a tunnel, you can view connectivity to it from every Cloudflare location worldwide and run traceroutes or packet captures on-demand to get a more in-depth view of your traffic for troubleshooting.
  • Terraform provider support to manage your network as code: Busy IT teams love the fact that they can manage all their network configuration from a single place with Terraform.
  • Step-by-step guides for setup with your existing devices: We’ve developed and will continue to add new guides in our developer docs to walk you through establishing IPsec tunnels with Cloudflare from a variety of devices.
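The same tunnel configuration managed through the dashboard or Terraform can also be expressed as an API payload. Below is a sketch modeled on the general shape of the Anycast IPsec tunnel API; the field names are illustrative, and all addresses are RFC 5737 documentation values rather than real assignments.

```python
def ipsec_tunnel_payload(name: str, customer_endpoint: str,
                         interface_address: str) -> dict:
    # Illustrative payload for creating an Anycast IPsec tunnel.
    # Field names approximate the Magic IPsec tunnels API; the Cloudflare
    # endpoint address is assigned per account (placeholder shown here).
    return {
        "ipsec_tunnels": [{
            "name": name,
            "customer_endpoint": customer_endpoint,  # your device's public IP
            "cloudflare_endpoint": "203.0.113.1",    # placeholder anycast address
            "interface_address": interface_address,  # interior /31 for the tunnel
        }]
    }

payload = ipsec_tunnel_payload("branch-office-1", "198.51.100.10", "10.0.0.0/31")
```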

(Even) easier on-ramp from your existing SD-WAN appliances

We’ve heard from you consistently that you want to be able to use whatever hardware you have in place today to connect to Cloudflare One. One of the easiest on-ramp methods is leveraging your existing SD-WAN appliances to connect to us, especially for organizations with many locations. Previously, we announced partnerships with leading SD-WAN providers to make on-ramp configuration even smoother; today, we’re expanding on this by introducing new integration guides for additional devices and tunnel mechanisms including Cisco Viptela. Your IT team can follow these verified step-by-step instructions to easily configure connectivity to Cloudflare’s network.

Get started on your Zero Trust journey today

Our team is helping thousands of organizations like yours transition from legacy network architecture to Zero Trust – and we love hearing from you about the new products and features we can continue building to make this journey even easier. Learn more about Cloudflare One or reach out to your account team to talk about how we can partner to transform your network, starting today!

Cloudflare Gateway dedicated egress and egress policies

Post Syndicated from Ankur Aggarwal original https://blog.cloudflare.com/gateway-dedicated-egress-policies/


Today, we are highlighting how Cloudflare enables administrators to create security policies while using dedicated source IPs. With on-premise appliances like legacy VPNs, firewalls, and secure web gateways (SWGs), it has been convenient for organizations to rely on allowlist policies based on static source IPs. But these hardware appliances are hard to manage and scale, come with inherent vulnerabilities, and struggle to support globally distributed traffic from remote workers.

Throughout this week, we’ve written about how to transition away from these legacy tools towards Internet-native Zero Trust security offered by services like Cloudflare Gateway, our SWG. As a critical service natively integrated with the rest of our broader Zero Trust platform, Cloudflare Gateway also enables traffic filtering and routing for recursive DNS, Zero Trust network access, remote browser isolation, and inline CASB, among other functions.

Nevertheless, we recognize that administrators want to maintain the convenience of source IPs as organizations transition to cloud-based proxy services. In this blog, we describe our approach to offering dedicated IPs for egressing traffic and share some upcoming functionality to empower administrators with even greater control.

Cloudflare’s dedicated egress IPs

Source IPs are still a popular method of verifying that traffic originates from a known organization/user when accessing applications and third party destinations on the Internet. When organizations use Cloudflare as a secure web gateway, user traffic is proxied through our global network, where we apply filtering and routing policies at the closest data center to the user. This is especially powerful for globally distributed workforces or roaming users. Administrators do not have to make updates to static IP lists as users travel, and no single location becomes a bottleneck for user traffic.

Today the source IP for proxied traffic is one of two options:

  • Device client (WARP) Proxy IP – Cloudflare forward proxies traffic from the user using an IP from the default IP range shared across all Zero Trust accounts
  • Dedicated egress IP – Cloudflare provides customers with a dedicated IP (IPv4 and IPv6) or range of IPs geolocated to one or more Cloudflare network locations

The WARP Proxy IP range is the default egress method for all Cloudflare Zero Trust customers. It is a great way to preserve the privacy of your organization as user traffic is sent to the nearest Cloudflare network location which ensures the most performant Internet experience. But setting source IP security policies based on this default IP range does not provide the granularity that admins often require to filter their user traffic.

Dedicated egress IPs are useful in situations where administrators want to allowlist traffic based on a persistent identifier. As their name suggests, these dedicated egress IPs are exclusively available to the assigned customer—and not used by any other customers routing traffic through Cloudflare’s network.

Additionally, leasing these dedicated egress IPs from Cloudflare helps avoid the privacy concerns that arise when carving them out from an organization’s own IP ranges. Furthermore, it alleviates the need to protect the IP ranges assigned to your on-premises VPN appliances from DDoS attacks or other abuse.

Dedicated egress IPs are available as an add-on for any Cloudflare Zero Trust enterprise-contracted customer. Contract customers can select the specific Cloudflare data centers used for their dedicated egress, and all subscribing customers receive at least two IPs to start, so user traffic is always routed to the closest dedicated egress data center for performance and resiliency. Finally, organizations can egress their traffic through Cloudflare’s dedicated IPs via their preferred on-ramps. These include Cloudflare’s device client (WARP), proxy endpoints, GRE and IPsec on-ramps, or any of our 1600+ peering network locations, including major ISPs, cloud providers, and enterprises.

Customer use cases today

Cloudflare customers around the world are taking advantage of Gateway dedicated egress IPs to streamline application access. Below are the three most common use cases we’ve seen deployed by customers of varying sizes and across industries:

  • Allowlisting access to apps from third parties: Users often need to access tools controlled by suppliers, partners, and other third party organizations. Many of those external organizations still rely on source IP to authenticate traffic. Dedicated egress IPs make it easy for those third parties to fit within these existing constraints.
  • Allowlisting access to SaaS apps: Source IPs are still commonly used as a defense-in-depth layer for how users access SaaS apps, alongside other more advanced measures like multi-factor authentication and identity provider checks.
  • Deprecating VPN usage: Often hosted VPNs will be allocated IPs within the customer’s advertised IP range. The security flaws, performance limitations, and administrative complexities of VPNs are well-documented on the Cloudflare blog. To ease migration, customers often choose to maintain the IP allowlist processes they have in place today.

Through this, administrators are able to maintain the convenience of building policies with fixed, known IPs, while accelerating performance for end users by routing through Cloudflare’s global network.
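As a concrete (and hypothetical) sketch of the allowlist pattern described above, a third-party or origin server might check that requests arrive from the organization’s dedicated egress range before granting access. A minimal example using Python’s standard ipaddress module; the ranges shown are documentation placeholders, not real Cloudflare assignments:

```python
import ipaddress

# Placeholder dedicated egress ranges (documentation prefixes); real values
# come from your Cloudflare Zero Trust account, not from this example.
DEDICATED_EGRESS = [
    ipaddress.ip_network("203.0.113.0/30"),
    ipaddress.ip_network("2001:db8:100::/64"),
]

def from_dedicated_egress(source_ip: str) -> bool:
    """Return True if the request's source IP falls inside an allowlisted range."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in DEDICATED_EGRESS)

print(from_dedicated_egress("203.0.113.2"))   # True: inside the IPv4 range
print(from_dedicated_egress("198.51.100.7"))  # False: outside every range
```

In practice this check usually lives in a firewall rule or a SaaS provider’s IP allowlist settings rather than in application code, but the logic is the same.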

Cloudflare Zero Trust egress policies

Today, we are excited to announce an upcoming way to build more granular policies using Cloudflare’s dedicated egress IPs. With a forthcoming egress IP policy builder in the Cloudflare Zero Trust dashboard, administrators can specify which IP is used for egress traffic based on identity, application, network and geolocation attributes.

Administrators often want to route only certain traffic through dedicated egress IPs, whether for specific applications, Internet destinations, or user groups. Soon, administrators will be able to set their preferred egress method based on a wide variety of selectors such as application, content category, domain, user group, destination IP, and more. This flexibility helps organizations take a layered approach to security, while also maintaining high performance (often via dedicated IPs) to the most critical destinations.

Furthermore, administrators will be able to use the egress IP policy builder to geolocate traffic to any country or region where Cloudflare has a presence. This geolocation capability is particularly useful for globally distributed teams which require geo-specific experiences.

For example, a large media conglomerate has marketing teams that would verify the layouts of digital advertisements running across multiple regions. Prior to partnering with Cloudflare, these teams had clunky, manual processes to verify their ads were displaying as expected in local markets: either they had to ask colleagues in those local markets to check, or they had to spin up a VPN service to proxy traffic to the region. With an egress policy these teams would simply be able to match a custom test domain for each region and egress using their dedicated IP deployed there.
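Although the egress policy builder is still forthcoming, the first-match selection it describes can be modeled roughly as follows. The selector names, rule format, and IPs here are hypothetical illustrations for this sketch, not Cloudflare’s actual API:

```python
# Illustrative model of first-match egress policy selection; selector names
# and the rule format are hypothetical, not Cloudflare's actual API.
POLICIES = [
    {"domain_suffix": ".ads-test.example", "user_group": "marketing",
     "egress_ip": "203.0.113.10"},   # dedicated IP geolocated to a test region
    {"domain_suffix": ".internal.example", "user_group": None,
     "egress_ip": "203.0.113.20"},   # dedicated IP for partner allowlists
]
DEFAULT_EGRESS = "warp-shared-range"  # default shared WARP proxy range

def select_egress(domain: str, user_group: str) -> str:
    """Return the egress for the first policy whose selectors all match."""
    for rule in POLICIES:
        if not domain.endswith(rule["domain_suffix"]):
            continue
        if rule["user_group"] not in (None, user_group):
            continue
        return rule["egress_ip"]
    return DEFAULT_EGRESS

print(select_egress("de.ads-test.example", "marketing"))  # 203.0.113.10
print(select_egress("news.example.com", "marketing"))     # warp-shared-range
```

With rules like these, a marketing team’s regional test domain would match a rule that egresses through the dedicated IP deployed in that region, while all other traffic falls through to the default shared range.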

What’s Next

You can take advantage of Cloudflare’s dedicated egress IPs by adding them onto a Cloudflare Zero Trust Enterprise plan or contacting your account team. If you would like to be contacted when we release the Gateway egress policy builder, join the waitlist here.

MPLS to Zero Trust in 30 days

Post Syndicated from Adi Mukadam original https://blog.cloudflare.com/mpls-to-zerotrust/

Employees returning to the office are finding that their corporate networks are much slower than what they’ve been using at home. This is partly due to outdated line speeds, and partly due to security requirements that force all traffic to be backhauled through centralized data centers. While 44% of the US currently has access to fiber-based broadband Internet with speeds reaching 1 Gbps, many MPLS sites are still on old 1.5 Mbps circuits. This is a reality check and a reminder that current MPLS-based networks are unable to support the shift from centralized applications in the data center to a distributed SaaS and hybrid multi-cloud world.

In this post, we are going to outline the steps required to take your network from MPLS to Zero Trust. But, before we do — a little about how we ended up in this situation.

Enterprise networks today

Over the past 10 years, most enterprise networks have evolved from perimeter hub and spoke networks into franken-networks as a means to solve connectivity and security issues. We have not had a chance to redesign them holistically for distributed application access. The band-aid and point solutions have only pushed the problems further down the road — to a future day for someone else to solve.

The advent of cloud adoption put additional pressure on the already ailing legacy WAN. Increased Internet use for business, mining data for actionable insights, and advanced security monitoring have multiplied bandwidth demand at customer branches, putting further pressure on companies seeking to manage their WAN costs. Business loss due to growing bandwidth needs can be expressed as:

Business loss = (X) cost of project delay  + (Y) loss of productivity due to outages
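As a toy worked example of the formula above, with entirely hypothetical figures:

```python
# Toy worked example of the business-loss formula above; the dollar figures
# are hypothetical placeholders, not benchmarks.
def business_loss(project_delay_cost: float, outage_productivity_loss: float) -> float:
    """Business loss = (X) cost of project delay + (Y) productivity lost to outages."""
    return project_delay_cost + outage_productivity_loss

# e.g. one bandwidth upgrade delayed a quarter, plus two WAN outages in a year
print(business_loss(120_000, 35_000))  # 155000
```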

Excitement about SD-WAN

Organizations have been looking to Software-Defined WAN (SD-WAN) to solve some of these challenges. It allows organizations to shift from MPLS private lines to broadband Internet and significantly reduce their cost per Mbps. SD-WAN offers other valuable features like application-aware intelligent routing based on path quality. Orchestrator and analytics help to provide much-needed deployment speed and network visibility, respectively.

Despite the incremental improvement that SD-WAN offers over traditional network architectures, some fundamental challenges remain. SD-WAN is a hardware-dependent edge routing technology that does not always account for the middle mile. While broadband Internet is reasonably fast and available everywhere, it doesn’t offer the end-to-end security and reliability that mission-critical applications require. Further, managing security policies and Internet breakouts across hundreds of edge devices is complex, and many organizations are still choosing to backhaul traffic to centralized data centers. We require a new architecture — with security, speed, and reliability built-in.

Cloudflare Magic WAN

Cloudflare Magic WAN simplifies legacy WAN architectures by enabling customers to use the Cloudflare global network to interconnect their branch offices, data centers, and public cloud services. It includes Zero Trust security services that can be enabled as needed, improves performance, and is managed through a single dashboard.

Magic WAN has many advantages over traditional WAN architectures. It eliminates the need to manage a mesh of tunnels: a single Anycast IPsec or GRE tunnel from a site provides connectivity to all other sites and applications, with the Cloudflare network acting as the network hub and simplifying operational overhead. It also removes the requirement to backhaul all traffic to a centralized data center to enforce security policies. Cloud-native firewall-as-a-service (FWaaS) for inbound and site-to-site traffic and a secure web gateway (SWG) for outbound traffic are available at the same data centers where WAN traffic enters the Cloudflare network.

Organizations can deploy consistent security policies globally, enforced at the Cloudflare data center closest to the user in any of our 270+ cities. SaaS and consumer application traffic can be routed directly to the Internet from the edge of the network, and with Cloudflare serving millions of websites, the destination might even be available on the same server, resulting in better performance for users.

Furthermore, with no appliances to manage or scale, Magic WAN gives you an elastic WAN with zero capital investment that you can quickly scale up or down depending on business needs.

Bridge to Zero Trust

The ultimate goal for many organizations is to move their network and security architecture from a castle & moat model to a Zero Trust model where there’s no longer a hard boundary between “private” and “public” networks. Instead, security is enforced at the user and the application level, using identity, endpoint health and location as key attributes. So an employee on a managed laptop in their home country may have access to all corporate applications, but if they log in from a personal laptop, they might have limited access to only certain applications. Or if the network detects malware on their managed laptop, their access can be quickly revoked, preventing the spread of ransomware, for example, through their organization.

This requires a WAN that is intelligent enough to understand user identities and endpoint health and make intelligent enforcement decisions based on these attributes. This also requires enforcement points that can apply consistent security policies regardless of whether the users are coming from a corporate branch office or from a home office over the Internet.
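An enforcement decision like the ones described above can be thought of as a pure function of identity, endpoint health, and location. The following is an illustrative model only; the attribute names and rules are invented for this sketch and do not reflect Cloudflare’s actual policy engine:

```python
from dataclasses import dataclass

@dataclass
class Session:
    user_group: str         # from the identity provider
    managed_device: bool    # from the endpoint protection platform
    malware_detected: bool  # endpoint health signal
    country: str            # location attribute

def allowed_apps(s: Session) -> set[str]:
    """Illustrative Zero Trust rules; returns the apps this session may reach."""
    if s.malware_detected:
        return set()                    # revoke all access immediately
    if s.user_group == "contractor":
        return {"wiki"}                 # identity-based restriction
    if not s.managed_device:
        return {"webmail"}              # personal laptop: limited access
    if s.country != "home-country":
        return {"webmail", "wiki"}      # managed but roaming: partial access
    return {"webmail", "wiki", "erp"}   # managed, in-country: full access
```

The point of the sketch is that the decision depends on the session’s attributes, not on which network segment the request happens to arrive from.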

Cloudflare Magic WAN, part of the Cloudflare One product suite, enables this transition to a true Zero Trust architecture by building in security natively into the network.

Prep work for successful transformation from MPLS to Zero Trust

Planning leads to awareness, while preparation leads to readiness.

MPLS to Zero Trust transformation is a team effort. Traditionally, network managers are responsible for the WAN; security managers for the security perimeter and policies; the infrastructure team for the cloud; and application teams for application development. The transformed future state builds security in for seamless, on-demand, secure, and reliable distributed application access.

1) Network, security, infrastructure, and application project management teams should collectively discuss and document the current and future state. A sample document is below.

| Area | Attribute | Current state | Future state |
|---|---|---|---|
| Applications | List | Example: 1600 apps | Example: 2400 apps |
| Applications | Location | Local: 300, DC: 600, Public cloud: 400, Private cloud: 100, SaaS: 200 | TBD |
| Applications | Regional application needs | Local file servers | Cloud |
| Location/branch | # of branch locations | 80 | 85 |
| Location/branch | Availability | Example: Platinum 99%, Gold 95%, Silver 90%, Bronze best effort | Platinum 100%, Gold 99%, Silver 95%, Bronze best effort |
| Location/branch | Current set up | Platinum: dual MPLS; Gold: MPLS + Internet, etc. | Platinum: 2 x 1G DIA; Gold: 2 x 1G DIA, etc. |
| Location/branch | Bandwidth | Platinum: 100M; Gold: 50M, etc. | Platinum: 1G; Gold: 500M, etc. |
| CSP with location | Azure/GCP/AWS | 1G ExpressRoute, 1G Direct Connect | 10G, 10G |
| Internet breakout | Capacity | 500M | On demand |
| Internet breakout | Set up | DC: XXX, firewall HA | Cloud-based local breakout |
| Internet breakout | Features | Limited security control | Identity-based granular Zero Trust policies |
| Remote access | Quantity | 1000 seats | 2000 seats |
| Remote access | Technology | SSL VPN | Zero Trust Network Access |
| Remote access | Cloud security | None | CASB, RBI |
| Remote access | Device posture | None | Yes |

2) Conduct a transformation workshop to:

  • Map all combinations of future traffic flows: device type – user profile – application – enforcement technology – Zero Trust rules
  • Use these traffic flows to determine the future architecture baseline

3) Invite vendors, partners, and providers for discussion to validate the design and identify technology readiness to support traffic flows and architecture.

4) Carry out budgeting exercises and develop a business plan mapping current pain points to solutions and pricing. Involve specialized experts to develop business plans if needed.

5) Form a special project team that includes project managers, engineering point of contact from all technical groups, local site contacts, escalation team, stakeholder representatives, business owners.

Transition plan

A transition plan is a critical step toward a successful transformation. A good transition and project plan will ensure minimal downtime, while a bad plan will result in outages, business disruption, increased transition time, and cost. The plan should include detailed steps and milestones.

Sample transition plan below:

  1. Identify bridging point

    • The bridging point acts as a bridge between transitioned and non-transitioned branch locations.
    • Ideally, regional and global data centers are the preferred bridging points between the existing MPLS network and the new Cloudflare-based WAN.
  2. Create user acceptance test (UAT)

    • Collaborate with internal teams and site contacts to create a UAT.
    • Perform UAT before and after cutover for each site to ensure users can access their applications with the expected performance after the transition.
  3. Migration schedule

    • Develop a migration schedule to ensure minimal business impact.
  4. Prep for Magic WAN

    • Connect applications: Leverage Cloudflare on-ramp options to connect your various applications to the Cloudflare platform.
    • Connect branches: Configure your WAN edge device (router, SD-WAN device, firewall, etc.) and connect it to the Cloudflare platform.
    • Refer to https://developers.cloudflare.com/magic-wan/ for detailed step-by-step instructions to configure Magic WAN.

Note: The steps above will NOT impact existing traffic flows via the existing MPLS path. Take precautions to ensure no production impact, follow your change control guidelines, and request a maintenance window if applicable.
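The UAT step above boils down to capturing application reachability before cutover and diffing it afterwards. A minimal sketch of that comparison (the probe itself, e.g. an HTTP health check per application, is left abstract; the application names are hypothetical):

```python
def uat_diff(before: dict[str, bool], after: dict[str, bool]) -> list[str]:
    """Return applications reachable before cutover but not after (regressions)."""
    return sorted(app for app, ok in before.items() if ok and not after.get(app, False))

# Hypothetical probe results keyed by application name
before = {"erp": True, "wiki": True, "legacy-crm": False}
after  = {"erp": True, "wiki": False, "legacy-crm": False}
print(uat_diff(before, after))  # ['wiki'] -> investigate before closing the window
```

An empty diff is a reasonable gate for declaring a site’s cutover complete; any regression should be investigated, or the site rolled back, within the maintenance window.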

  1. Ready for cutover

    • After completing the preparation steps above, we are ready for cutover, i.e., ready to migrate and transition branches to the Cloudflare-based network.
  2. Cutover window

    • During the cutover window, production traffic will stop traversing the existing MPLS path and transition to the new Cloudflare-based network.
    • Perform UAT before and after cutover.
  3. Disconnect MPLS

    • MPLS circuits can be disconnected as sites are migrated.


  • Retire legacy VPN
    • Customers can leverage Cloudflare’s Zero Trust Network Access to access their applications and retire legacy VPN based access.
  • Assumption
    • Customer is responsible for Internet circuit procurement and installation to replace MPLS circuits.

We’re proud of how we’ve been able to help some of Cloudflare’s customers reinvent their corporate networks. It makes sense to close with their own words.

Replacing MPLS and modernizing the network and network security to provide business agility is a must for the digital future. The move to Zero Trust is inevitable for most organizations. Temporary band-aids and point solutions have resulted in business losses, poor employee experience, and increased security risk. Moving from MPLS to Zero Trust sounds like a daunting task, but teamwork, proper planning, preparation, and the right solution make the transformation achievable and manageable.

If you’d like to get started, contact us today.

Replacing MPLS lines is a great project to fit into your overall Zero Trust roadmap. For a full summary of Cloudflare One Week and what’s new, tune in to our recap webinar.

Announcing the Cloudflare One Partner Program

Post Syndicated from Matthew Harrell original https://blog.cloudflare.com/cloudflare-one-partner-program/

This post is also available in 简体中文, 日本語, Deutsch, Français.

Today marks the launch of the Cloudflare One Partner Program, a program built around our Zero Trust, Network as a Service and Cloud Email Security offerings. The program helps channel partners deliver on the promise of Zero Trust while monetizing this important architecture in tangible ways – with a comprehensive set of solutions, enablement and incentives. We are delighted to have such broad support for the program from IT Service companies, Distributors, Value Added Resellers, Managed Service Providers and other solution providers.

This represents both a new go-to-market channel for Cloudflare, and a new way for companies of all sizes to adopt Zero Trust solutions that have previously been difficult to procure, implement and support.

The Cloudflare One Partner Program consists of the following elements:

  • New, fully cloud-native Cloudflare One product suites that help partners streamline and accelerate the design of holistic Zero Trust solutions that are easier to implement. The product suites include our Zero Trust products and Cloud Email Security products from our recent acquisition of Area 1 Security.
  • All program elements are fully operationalized through Cloudflare’s Distributors to make it easier to evaluate, quote and deliver Cloudflare One solutions in a consistent and predictable way.
  • The launch of new Partner Accreditations to enable partners to assess, implement and support Zero Trust solutions for their customers. This includes a robust set of training to help partners deliver the margin-rich services their customers need to realize the full value of their Zero Trust investments.
  • One of the most robust partner incentive structures in the industry, rewarding partners for the value they add throughout the entire customer lifecycle.

For more details, visit the Cloudflare One Partner Program page on our website. For partners, we’ve added a dedicated Cloudflare One page in the Partner Portal.

“TD Synnex has been working hand-in-hand with Cloudflare on the launch of their new Cloudflare One Partner Program for Zero Trust. This program takes Zero Trust from a term that’s broadly and loosely used and cuts through the hype with the solution bundles, enablement resources, and incentives that help the channel deliver true business value,” said Tracy Holtz, Vice President, Security and Networking at TD Synnex. “TD Synnex, as the world’s leading IT distributor and solutions aggregator, is thrilled to be furthering our partnership with Cloudflare to build and enable this program of partners, as it encompasses the solution that all organizations need today.”

Why is Cloudflare making this investment in the Cloudflare One Partner Program now?

The Cloudflare One Partner Program is launching to address the explosive demand to implement Zero Trust architectures that help organizations of all sizes safely and securely accelerate their digital transformations. In the face of ever-increasing cyber threats, Zero Trust moves from a concept to an imperative. Cloudflare is in a unique position to make this happen, with one of the richest Zero Trust product suites in the industry, including a Secure Web Gateway, ZTNA access management, CASB, Browser Isolation, DLP, and Cloud Email Security. These products are tightly integrated and easy to use, enabling a holistic, implementable solution.

Additionally, our Zero Trust suite has a comprehensive tech partner ecosystem that makes it easy for our customers to integrate our solutions in their existing tech stack. We integrate and closely partner with industry leaders across all major categories — identity, endpoint detection and response, mobile device management, and email service providers — to make Cloudflare One flexible and robust for our diverse customer base. Our strategic partners include Microsoft, CrowdStrike, SentinelOne, Mandiant, and others.

Enterprises have come to terms with the notion of a disintegrating traditional perimeter. The distributed and dynamic perimeter of today requires a fundamentally new approach to security. In partnership with Cloudflare, our AI-powered cybersecurity platform offers modern organizations a robust Zero Trust security solution that spans devices, network, and mission-critical applications.” said Chuck Fontana, Senior Vice President, Business Development, SentinelOne

But it takes more than just the products to realize the promise of Zero Trust. It requires the skills and expertise of the channel, as trusted advisors to their customers, to optimize the solutions to drive the specific required business outcomes and time-to-value for the customer’s investment.

“We’ve been humbled by how our existing partners have contributed to the explosive growth of our Zero Trust business, but increased customer demand is creating an opportunity for our partners to play a bigger role in how we go to market. More than ever before we are relying on our partners to help customers evaluate, implement and support Zero Trust solutions”, said Matthew Prince, CEO of Cloudflare.

By furthering our partnership with Cloudflare in the new Cloudflare One Partner Program, Rackspace Technology is able to deliver Cloudflare’s leading Zero Trust solutions paired with Rackspace Elastic Engineering and professional services at their massive scale and with continued implementation support,” said Gary Alterson, Vice President, Security Solutions at Rackspace Technology. “Since partnering with Cloudflare to develop Zero Trust solutions, we’ve already seen strong engagement with clients and prospects such as the likes of one of the world’s largest creative companies.

With the launch of this new Cloudflare One Partner Program including integrated zero trust focused solution bundles and partner enablement, we look forward to further expanding our go-to-market with Cloudflare and helping customers smoothly and quickly transform their network security by adopting a zero trust strategy for protecting their infrastructure, teams and applications,” stated Deborah Jones, Senior Product Marketing Manager, Alliances, IBM Security Services.

Assurance Data’s charter is to deliver integrated security solutions for next-generation cyber defense. We’re thrilled to work with Cloudflare, adding their innovative, 100% cloud-native Zero Trust solutions to our technology portfolio and appreciate the significant investment they are making in the partner channel, with deep partner enablement and service delivery support along with rich incentives.  The new Cloudflare One Partner Program is truly a triple win: a win for us, for our Cloudflare partnership and for our customers,” stated Randy Stephens, COO, Assurance Data.

Zero Trust is no-brainer, but many people still believe it’s too complex,” stated Scott McCrady, CEO, SolCyber. “Cloudflare has made it easy with the new Cloudflare One Partner Program. We love it because it helps our customers get integrated Zero Trust solutions in place fast, with all the enablement and incentives you would expect from a first-rate partner program.”

How is the Cloudflare One Partner program different from Cloudflare’s general Partner Program?

This new program builds on top of the benefits of the existing partner program, so all the current benefits provided to partners remain available, with a few valuable additions for Cloudflare One partners:

  • Product suites are listed with Distribution partners and available for VARs and other partners to quote and fulfill.
  • New Accreditations and training packages give partners rich resources on which to build and enhance their own service practices.
  • Partner incentives are enhanced with well-structured discounts off the list prices available at our Distribution partners, including extra incentives that follow a “reward for value” model.

As a member of AVANT’s Security Council, Cloudflare has been a close innovation partner of AVANT’s as we enable our network of Trusted Advisors to help their customers adopt the very latest in cloud technologies,” stated Shane McNamara, EVP, Engineering and Operations, AVANT Communications. “With this new Cloudflare One Partner Program for Zero Trust, Cloudflare has launched a first-of-kind set of integrated product suites and partner services packages that will give our Trusted Advisors a compelling set of solutions to take to market.

Cloudflare’s product suite has an important role to play in advanced threat detection and in Wipro’s Zero Trust offers to clients,” said Tony Buffomante, SVP, Global CRS Leader of Wipro. “The Cloudflare One Partner Program has provided a quick ramp to build our practice. We’re already seeing significant market use cases from our partnership, with Wipro CyberSecurists providing application security, implementation services and ongoing managed services from Wipro’s 16 global cyber defense centers.

Cloudflare has made Zero Trust adoption easy, with these integrated product bundles and partner services speeding customers’ journeys to comprehensive, Zero Trust-based security for teams, infrastructure and applications. We’re excited to be one of Cloudflare’s initial launch partners for these innovative solutions,” stated Dave Trader, Field CISO, Presidio.

We are a services provider delivering cybersecurity and IT transformation solutions to private equity and mid-market organizations. The Cloudflare One Partner Program fits with our integrated services and support model, and we’re already seeing strong customer interest in the Cloudflare One product suites. We’re excited to be one of Cloudflare’s initial partners for this strategic new channel program,” stated Chris Hueneke, Chief Information Security Officer, RKON.

We’re thrilled to announce that we officially provide managed services to support Cloudflare One solutions to help customers mitigate cyber security threats with a holistic Zero Trust approach to security,” according to Joey Campione, Managing Director, Opticca Security.

Cloudflare is making it easy for us to design and deliver a Zero Trust solution, especially for our mid-market customers where the bundles ensure a complete, integrated solution,” said Katie Hanahan, vCISO and Vice President, Cybersecurity Strategy at ITsavvy, a leading IT solution provider. “And we love the investment in tools and training to help us build out our own professional services offerings to help drive the best possible outcomes for our clients.

A program built around comprehensive Zero Trust product suites

Cloudflare One offers comprehensive Zero Trust solutions that raise visibility, eliminate complexity, and reduce risks as remote and office users connect to applications and the Internet. In a single-pass architecture, traffic is verified, filtered, inspected, and isolated from threats. There is no performance trade-off: users connect through data centers nearby in 270+ cities in over 100 countries.

Cloudflare Access augments or replaces corporate VPN clients by securing SaaS and internal applications. Access works with your identity providers and endpoint protection platforms to enforce default-deny, Zero Trust rules limiting access to corporate applications, private IP spaces, and hostnames.

Cloudflare Gateway is our threat and data protection solution. It keeps data safe from malware, ransomware, phishing, command and control, Shadow IT, and other Internet risks over all ports and protocols.

Cloudflare Area 1 Email Security crawls the Internet to stop phishing, Business Email Compromise (BEC), and email supply chain attacks at the earliest stage of the attack cycle, and enhances built-in security from cloud email providers.

Cloudflare Browser Isolation makes web browsing safer and faster, running in the cloud away from your network and endpoints, insulating devices from attacks.

Cloudflare CASB (Cloud Access Security Broker) gives customers comprehensive visibility and control over SaaS apps to easily prevent data leaks, block insider threats, and avoid compliance violations.

Cloudflare Data Loss Prevention enables customers to detect and prevent data exfiltration or data destruction. It analyzes network traffic and internal “endpoint” devices to identify leakage or loss of confidential information, helping customers stay compliant with industry and data privacy regulations.

For more information on the program and Zero Trust product suites go here.

What’s Next?

Today’s launch of the Cloudflare One Partner Program represents just one step in a multi-step journey to invest in our partners and help customers implement and support Zero Trust solutions. Over the coming months we will be expanding the program internationally and continuing to add training resources around Cloudflare Zero Trust accreditations. We are also hosting a series of partner webinars on this new program. Please check the Partner Portal for details and future partner events.

How to augment or replace your VPN with Cloudflare

Post Syndicated from Michael Keane original https://blog.cloudflare.com/how-to-augment-or-replace-your-vpn/

“Never trust, always verify.”

Almost everyone we speak to these days understands and agrees with this fundamental principle of Zero Trust. So what’s stopping folks? The biggest gripe we hear: they simply aren’t sure where to start. Security tools and network infrastructure have often been in place for years, and a murky implementation journey involving applications that people rely on to do their work every day can feel intimidating.

While there’s no universal answer, several of our customers have agreed that offloading key applications from their traditional VPN to a cloud-native Zero Trust Network Access (ZTNA) solution like Cloudflare Access is a great place to start—providing an approachable, meaningful upgrade for their business.

In fact, Gartner predicted that “by 2025, at least 70% of new remote access deployments will be served predominantly by ZTNA as opposed to VPN services, up from less than 10% at the end of 2021.”1 By prioritizing a ZTNA project, IT and Security executives can better shield their business from attacks like ransomware while simultaneously improving their employees’ daily workflows. The trade-off between security and user experience is an outmoded view of the world; organizations can truly improve both if they go down the ZTNA route.

You can get started here with Cloudflare Access for free, and in this guide we’ll show you why, and how.

Why nobody likes their VPN

The network-level access and default trust granted by VPNs create avoidable security gaps by inviting the possibility of lateral movement within your network. Attackers may enter your network through a less-sensitive entry point after stealing credentials, and then traverse to find more business-critical information to exploit. In the face of rising attacks, the threat here is too real, and the path to mitigating it too readily within reach, to ignore.

Meanwhile, VPN performance feels stuck in the 90s… and not in a fun, nostalgic way. Employees suffer through slow and unreliable connections that simply weren’t built for today’s scale of remote access. In the age of the “Great Reshuffle” and the current recruiting landscape, providing subpar experiences for teams based on legacy tech doesn’t have a great ROI. And when IT/security practitioners have plenty of other job opportunities readily available, they may not want to put up with manual, avoidable tasks born from an outdated technology stack. From both security and usability angles, moving toward VPN replacement is well worth the pursuit.

Make least-privilege access the default

Instead of authenticating a user and providing access to everything on your corporate network, a ZTNA implementation or “software-defined perimeter” authorizes access per resource, effectively eliminating the potential for lateral movement. Each access attempt is evaluated against Zero Trust rules based on identity, device posture, geolocation, and other contextual information. Users are continuously re-evaluated as context changes, and all events are logged to help improve visibility across all types of applications.

As co-founder of Udaan, Amod Malviya, noted, “VPNs are frustrating and lead to countless wasted cycles for employees and the IT staff supporting them. Furthermore, conventional VPNs can lull people into a false sense of security. With Cloudflare Access, we have a far more reliable, intuitive, secure solution that operates on a per user, per access basis. I think of it as Authentication 2.0 — even 3.0.”

Better security and user experience haven’t always co-existed, but the fundamental architecture of ZTNA really does improve both compared to legacy VPNs. Whether your users are accessing Office 365 or your custom, on-prem HR app, every login experience is treated the same. With Zero Trust rules being checked behind the scenes, suddenly every app feels like a SaaS app to your end users. Like our friends at OneTrust said when they implemented ZTNA, “employees can connect to the tools they need, so simply that teams don’t even know Cloudflare is powering the backend. It just works.”

Assembling a ZTNA project plan

VPNs are so entrenched in an organization’s infrastructure that fully replacing one may take a considerable amount of time, depending on the total number of users and applications served. However, there still is significant business value in making incremental progress. You can migrate away from your VPN at your own pace and let ZTNA and your VPN co-exist for some time, but it is important to at least get started.

Consider which one or two applications behind your VPN would be most valuable for a ZTNA pilot, like one with known complaints or numerous IT support tickets associated with it. Otherwise, consider internal apps that are heavily used or are visited by particularly critical or high-risk users. If you have any upcoming hardware upgrades or license renewals planned for your VPN(s), apps behind the accompanying infrastructure may also be a sensible fit for a modernization pilot.

As you start to plan your project, it’s important to involve the right stakeholders. For your ZTNA pilot, your core team should at minimum involve an identity admin and/or admin who manages internal apps used by employees, plus a network admin who understands your organization’s traffic flow as it relates to your VPN. These perspectives will help to holistically consider the implications of your project rollout, especially if the scope feels dynamic.

Executing a transition plan for a pilot app

Step 1: Connect your internal app to Cloudflare’s network
The Zero Trust dashboard guides you through a few simple steps to set up our app connector, no virtual machines required. Within minutes, you can create a tunnel for your application traffic and route it based on public hostnames or your private network routes. The dashboard will provide a string of commands to copy and paste into your command line to facilitate initial routing configurations. From there, Cloudflare will manage your configuration automatically.

A pilot web app may be the most straightforward place to start here, but you can also extend to SSH, VNC, RDP, or internal IPs and hostnames through the same workflow. With your tunnel up and running, you’ve created the means through which your users will securely access your resources and have essentially eliminated the potential for lateral movement within your network. Your application is not visible to the public Internet, significantly reducing your attack surface.
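For a sense of what the resulting connector configuration looks like, here is a minimal sketch of a cloudflared config.yml. The tunnel ID, hostname, and local service address are placeholders, not values from this post:

```yaml
# Hypothetical cloudflared config.yml for a pilot web app.
# Tunnel ID, credentials path, and hostnames are illustrative only.
tunnel: 6ff42ae2-765d-4adf-8112-31c55c1551ef
credentials-file: /etc/cloudflared/6ff42ae2-765d-4adf-8112-31c55c1551ef.json

ingress:
  # Route a public hostname to the internal app's local address
  - hostname: wiki.example.com
    service: http://localhost:8080
  # Ingress rules require a catch-all as the final entry
  - service: http_status:404
```

Running `cloudflared tunnel run` with a configuration like this keeps the origin invisible to the public Internet while Access enforces policy in front of it.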

Step 2: Integrate identity and endpoint protection
Cloudflare Access acts as an aggregation layer for your existing security tools. With support for over a dozen identity providers (IdPs) like Okta, Microsoft Azure AD, Ping Identity, or OneLogin, you can link multiple simultaneous IdPs or separate tenants from one IdP. This can be particularly useful for companies undergoing mergers or acquisitions or perhaps going through compliance updates, e.g. incorporating a separate FedRAMP tenant.

In a ZTNA implementation, this linkage lets both tools play to their strengths. The IdP houses user stores and performs the identity authentication check, while Cloudflare Access controls the broader Zero Trust rules that ultimately decide access permissions to a broad range of resources.

Similarly, admins can integrate common endpoint protection providers like Crowdstrike, SentinelOne, Tanium or VMware Carbon Black to incorporate device posture into Zero Trust rulesets. Access decisions can incorporate device posture risk scores for tighter granularity.

You might find shortcut approaches to this step if you plan on using simpler authentication like one-time pins or social identity providers with external users like partners or contractors. As you mature your ZTNA rollout, you can incorporate additional IdPs or endpoint protection providers at any time without altering your fundamental setup. Each integration only adds to your source list of contextual signals at your disposal.

Step 3: Configure Zero Trust rules
Depending on your assurance levels for each app, you can customize your Zero Trust policies to appropriately restrict access to authorized users using contextual signals. For example, a low-risk app may simply require email addresses ending in “@company.com” and a successful SMS or email multifactor authentication (MFA) prompt. Higher risk apps could require hard token MFA specifically, plus a device posture check or other custom validation check using external APIs.

MFA in particular can be difficult to implement with legacy on-prem apps natively using traditional single sign-on tools. Using Cloudflare Access as a reverse proxy helps provide an aggregation layer to simplify rollout of MFA to all your resources, no matter where they live.
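To make the low-risk versus high-risk distinction concrete, here is a sketch of what a higher-assurance policy might look like expressed as data. The selector names below mirror the general shape of Access policy rules (include/require with identity and device posture checks), but treat the exact field names and values as illustrative:

```json
{
  "name": "HR app - high assurance",
  "decision": "allow",
  "include": [
    { "email_domain": { "domain": "company.com" } }
  ],
  "require": [
    { "auth_method": { "auth_method": "hwk" } },
    { "device_posture": { "integration_uid": "crowdstrike-posture-check" } }
  ]
}
```

The include block defines who may attempt access, while every entry in require must also pass, which is how a hard-token MFA requirement and a device posture check compose into one rule set.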

Step 4: Test clientless access right away
After connecting an app to Cloudflare and configuring your desired level of authorization rules, end users in most cases can test web, SSH, or VNC access without using a device client. With no downloads or mobile device management (MDM) rollouts required, this can help accelerate ZTNA adoption for key apps and be particularly useful for enabling third-party access.

Note that a device client can still be used to unlock other use cases like protecting SMB or thick client applications, verifying device posture, or enabling private routing. Cloudflare Access can handle any arbitrary L4-7 TCP or UDP traffic, and through bridges to WAN-as-a-service it can offload VPN use cases like ICMP or server-to-client initiated protocol traffic like VoIP as well.

At this stage for the pilot app, you are up and running with ZTNA! Top priority apps can be offloaded from your VPN one at a time at any pace that feels comfortable to help modernize your access security. Still, augmenting and fully replacing a VPN are two very different things.

Moving toward full VPN replacement

While a few top resource candidates for VPN offloading might be clear for your company, the total scope could be overwhelming, with potentially thousands of internal IPs and domains to consider. You can configure the local domain fallback entries within Cloudflare Access to point to your internal DNS resolver for selected internal hostnames. This can help you more efficiently disseminate access to resources made available over your Intranet.

It can also be difficult for admins to granularly understand the full reach of their current VPN usage. Potential visibility issues aside, the full scope of applications and users may be in dynamic flux especially at large organizations. You can use the private network discovery report within Cloudflare Access to passively vet the state of traffic on your network over time. For discovered apps requiring more protection, Access workflows help you tighten Zero Trust rules as needed.

Both of these capabilities can help reduce anxiety around fully retiring a VPN. By starting to build your private network on top of Cloudflare’s network, you’re bringing your organization closer to achieving Zero Trust security.

The business impact our customers are seeing

Offloading applications from your VPN and moving toward ZTNA can have measurable benefits for your business even in the short term. Many of our customers speak to improvements in their IT team’s efficiency, onboarding new employees faster and spending less time on access-related help tickets. For example, after implementing Cloudflare Access, eTeacher Group reduced its employee onboarding time by 60%, helping all teams get up to speed faster.

Even if you plan to co-exist with your VPN alongside a slower modernization cadence, you can still track IT tickets for the specific apps you’ve transitioned to ZTNA to help quantify the impact. Are overall ticket numbers down? Did time to resolve decrease? Over time, you can also partner with HR for qualitative feedback through employee engagement surveys. Are employees feeling empowered with their current toolset? Do they feel their productivity has improved or complaints have been addressed?

Of course, improvements to security posture also help mitigate the risk of expensive data breaches and their lingering, damaging effects to brand reputation. Pinpointing narrow cause-and-effect relationships for the cost benefits of each small improvement may feel more art than science here, with too many variables to count. Still, reducing reliance on your VPN is a great step toward reducing your attack surface and contributes to your macro return on investment, however long your full Zero Trust journey may last.

Start the clock toward replacing your VPN

Our obsession with product simplicity has helped many of our customers sunset their VPNs already, and we can’t wait to do more.

You can get started here with Cloudflare Access for free to begin augmenting your VPN. Follow the steps outlined above with your prioritized ZTNA test cases, and for a sense of broader timing you can create your own Zero Trust roadmap as well to figure out what project should come next.

For a full summary of Cloudflare One Week and what’s new, tune in to our recap webinar.


1Nat Smith, Mark Wah, Christian Canales. (2022, April 08). Emerging Technologies: Adoption Growth Insights for Zero Trust Network Access. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

Introducing Private Network Discovery

Post Syndicated from Abe Carryl original https://blog.cloudflare.com/introducing-network-discovery/

With Cloudflare One, building your private network on Cloudflare is easy. What is not so easy is maintaining the security of your private network over time. Resources are constantly being spun up and down with new users being added and removed on a daily basis, making it painful to manage over time.

That’s why today we’re opening a closed beta for our new Zero Trust network discovery tool. With Private Network Discovery, our Zero Trust platform will now start passively cataloging both the resources being accessed and the users who are accessing them without any additional configuration required. No third party tools, commands, or clicks necessary.

To get started, sign up for early access to the closed beta and gain instant visibility into your network today. If you’re interested in learning more about how it works and what else we will be launching in the future for general availability, keep scrolling.

One of the most laborious aspects of migrating to Zero Trust is replicating the security policies which are active within your network today. Even if you do have a point-in-time understanding of your environment, networks are constantly evolving with new resources being spun up dynamically for various operations. This results in a constant cycle to discover and secure applications which creates an endless backlog of due diligence for security teams.

That’s why we built Private Network Discovery. With Private Network Discovery, organizations can easily gain complete visibility into the users and applications that live on their network without any additional effort on their part. Simply connect your private network to Cloudflare, and we will surface any unique traffic we discover on your network to allow you to seamlessly translate them into Cloudflare Access applications.

Building your private network on Cloudflare

Building out a private network has two primary components: the infrastructure side, and the client side.

The infrastructure side of the equation is powered by Cloudflare Tunnel, which simply connects your infrastructure (whether that be a single application, many applications, or an entire network segment) to Cloudflare. This is made possible by running a simple command-line daemon in your environment to establish multiple secure, outbound-only links to Cloudflare. Simply put, Tunnel is what connects your network to Cloudflare.

On the other side of this equation, you need your end users to be able to easily connect to Cloudflare and, more importantly, your network. This connection is handled by our robust device agent, Cloudflare WARP. This agent can be rolled out to your entire organization in just a few minutes using your in-house MDM tooling, and it establishes a secure connection from your users’ devices to the Cloudflare network.

Now that we have your infrastructure and your users connected to Cloudflare, it becomes easy to tag your applications and layer on Zero Trust security controls to verify both identity and device-centric rules for each and every request on your network.

How it works

As we mentioned earlier, we built this feature to help your team gain visibility into your network by passively cataloging unique traffic destined for an RFC 1918 or RFC 4193 address space. By design, this tool operates in an observability mode whereby all applications are surfaced, but are tagged with a base state of “Unreviewed.”
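The private-range check described above can be sketched in a few lines. This is an illustrative classifier, not the tool's actual implementation; it covers the three RFC 1918 IPv4 blocks and the RFC 4193 fc00::/7 unique local IPv6 range:

```javascript
// Sketch: decide whether a destination address falls in the private
// ranges the discovery tool watches (RFC 1918 IPv4, RFC 4193 IPv6 ULA).
function isPrivateDestination(ip) {
  if (ip.includes(':')) {
    // RFC 4193 unique local addresses: fc00::/7 (in practice fd00::/8)
    return /^f[cd]/i.test(ip);
  }
  const [a, b] = ip.split('.').map(Number);
  return (
    a === 10 ||                          // 10.0.0.0/8
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168)             // 192.168.0.0/16
  );
}

console.log(isPrivateDestination('10.1.2.3')); // true
console.log(isPrivateDestination('8.8.8.8'));  // false
```

Traffic matching a check like this is what gets surfaced as an "Unreviewed" origin; everything else is ordinary Internet-bound traffic and is ignored by the report.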

The Network Discovery tool surfaces all origins within your network, defined as any unique IP address, port, or protocol. You can review the details of any given origin and then create a Cloudflare Access application to control access to that origin. It’s also worth noting that Access applications may be composed of more than one origin.

Let’s take, for example, a privately hosted video conferencing service, Jitsi. I’m using this example as our team actually uses this service internally to test our new features before pushing them into production. In this scenario, we know the private address our self-hosted instance of Jitsi lives at. However, as this is a video conferencing application, it communicates over both TCP and UDP. Here we would select one origin and assign it an application name.

As a note, during the closed beta you will not be able to view this application in the Cloudflare Access application table. For now, these application names will only be reflected in the discovered origins table of the Private Network Discovery report. You will see them reflected in the Application name column exclusively. However, when this feature goes into general availability you’ll find all the applications you have created under Zero Trust > Access > Applications as well.

After you have assigned an application name and added your first origin (the TCP one), you can then follow the same pattern to add the other origin (the UDP one) as well. This allows you to create logical groupings of origins to create a more accurate representation of the resources on your network.
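The grouping described above can be sketched as a simple fold over discovered origins. The field names and the sample IP/ports below are made up for illustration (Jitsi conventionally uses TCP 443 and UDP 10000):

```javascript
// Sketch: group discovered origins (protocol, IP, port) under a
// friendly application name, mirroring the Jitsi example above.
function groupOrigins(origins) {
  const apps = new Map();
  for (const origin of origins) {
    if (!apps.has(origin.appName)) apps.set(origin.appName, []);
    apps.get(origin.appName).push(`${origin.protocol}:${origin.ip}:${origin.port}`);
  }
  return apps;
}

const apps = groupOrigins([
  { appName: 'Jitsi', protocol: 'tcp', ip: '10.0.0.5', port: 443 },
  { appName: 'Jitsi', protocol: 'udp', ip: '10.0.0.5', port: 10000 },
]);
console.log(apps.get('Jitsi')); // ['tcp:10.0.0.5:443', 'udp:10.0.0.5:10000']
```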

By creating an application, our Network Discovery tool will automatically update the status of these individual origins from “Unreviewed” to “In-Review.” This will allow your team to easily track the origin’s status. From there, you can drill further down to review the number of unique users accessing a particular origin as well as the total number of requests each user has made. This will help equip your team with the information it needs to create identity and device-driven Zero Trust policies. Once your team is comfortable with a given application’s usage, you can then manually update the status of a given application to be either “Approved” or “Unapproved”.

What’s next

Our closed beta launch is just the beginning. While the closed beta release supports creating friendly names for your private network applications, those names do not currently appear in the Cloudflare Zero Trust policy builder.

As we move towards general availability, our top priority will be making it easier to secure your private network based on what is surfaced by the Private Network Discovery tool. With the general availability launch, you will be able to create Access applications directly from your Private Network Discovery report, reference your private network applications in Cloudflare Access and create Zero Trust security policies for those applications, all in one singular workflow.

As you can see, we have exciting plans for this tool and will continue investing in Private Network Discovery in the future. If you’re interested in gaining access to the closed beta, sign up here and be among the first users to try it out!

Infinitely extensible Access policies

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/access-external-validation-rules/

Zero Trust application security means that every request to an application is denied unless it passes a specific set of defined security policies. Most Zero Trust solutions allow the use of a user’s identity, device, and location as variables to define these security policies.

We heard from customers that they wanted more control and more customizability in defining their Zero Trust policies.

Starting today, we’re excited that Access policies can consider anything before allowing a user access to an application. And by anything, we really do mean absolutely anything. You can now build infinitely customizable policies through the External Evaluation rule option, which allows you to call any API during the evaluation of an Access policy.

Why we built external evaluation rules

Over the past few years we added the ability to check location and device posture information in Access. However, there are always additional signals that can be considered depending on the application and specific requirements of an organization. We set out to give customers the ability to check whatever signal they require, without needing direct support for that signal in Access policies.

The Cloudflare security team, as an example, needed the ability to verify a user’s mTLS certificate against a registry to ensure applications can only be accessed by the right user from a corporate device. Originally, they considered using a Worker to check the user’s certificate after Access evaluated the request. However, this was going to take custom software development and maintenance over time. With External Evaluation rules, an API call can be made to verify whether a user is presenting the correct certificate for their device. The API call is made to a Worker that stores the mapping of mTLS certificates and user devices. The Worker executes the custom logic and then returns a true or false to Access.

How it works

Cloudflare Access is a reverse proxy in front of any web application. If a user has not yet authenticated, they will be presented with a login screen to authenticate. The user must meet the criteria defined in your Access policy. A typical policy would look something like:

  • The user’s email ends in @example.com
  • The user authenticated with a hardware based token
  • The user logged in from the United States

If the user passes the policy, they are granted a cookie that will give them access to the application until their session expires.
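The three example criteria above amount to a conjunction of predicates over the login's claims. Here is a minimal sketch of that evaluation; the claims shape (identity, authMethod, country) is an assumption for illustration, not the actual Access payload:

```javascript
// Sketch: evaluate the example policy above against a login's claims.
// The claims field names here are hypothetical.
function evaluatePolicy(claims) {
  const rules = [
    (c) => c.identity.email.endsWith('@example.com'), // email domain check
    (c) => c.authMethod === 'hwk',                    // hardware-based token
    (c) => c.country === 'US',                        // logged in from the US
  ];
  // Every rule must pass for access to be granted.
  return rules.every((rule) => rule(claims));
}

console.log(evaluatePolicy({
  identity: { email: 'alice@example.com' },
  authMethod: 'hwk',
  country: 'US',
})); // true
```

If all rules pass, the session cookie is issued; if any fails, the request is denied, which is the "deny unless" posture described at the top of this post.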

To evaluate the user on other custom criteria, you can add an external evaluation rule to the Access policy. The external evaluation rule requires two values: an API endpoint to call and a key to verify that any request response is coming from a trusted source.

After the user authenticates with your identity provider, all information about the user, device and location is passed to your external API. The API returns a pass or fail response to Access which will then either allow or deny access to the user.

Example logic for the API would look like this:

/**
 * Where your business logic should go
 * @param {*} claims
 * @returns boolean
 */
async function externalEvaluation(claims) {
  return claims.identity.email === 'user@example.com'
}

Where the claims object contains all the information about the user, device and network making the request. This externalEvaluation function can be extended to perform any desired business logic. We have made an open-source repository available with example code for consuming the Access claims and verifying the signing keys from Access.
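To make this less abstract, here is a sketch of the mTLS registry check described earlier, written as custom logic inside externalEvaluation. The claims field for the certificate serial and the registry contents are assumptions for illustration; in a real deployment the signed claims from Access must be verified before any of this runs:

```javascript
// Sketch of custom business logic for an External Evaluation endpoint.
// The registry and the mtls_cert_serial claim are hypothetical.
const certRegistry = new Map([
  // mTLS certificate serial -> user who owns the enrolled device
  ['A1B2C3', 'alice@example.com'],
  ['D4E5F6', 'bob@example.com'],
]);

async function externalEvaluation(claims) {
  const owner = certRegistry.get(claims.mtls_cert_serial);
  // Pass only if the presented certificate belongs to this user.
  return owner === claims.identity.email;
}
```

The endpoint would return this boolean (in a signed response) to Access, which then allows or denies the request.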

This is really powerful! Any Access policy can now be infinitely extended to consider any information before allowing a user access. Potential examples include:

  • Integrating with endpoint protection tools we don’t yet integrate with by building a middleware that checks the endpoint protection tool’s API.
  • Checking IP addresses against external threat feeds
  • Calling industry-specific user registries
  • And much more!

We’re just getting started with extending Access policies. In the future we’ll make it easier to programmatically decide how a user should be treated before accessing an application, not just allow or deny access.

This feature is available in the Cloudflare Zero Trust dashboard today. Follow this guide to get started!

How Cloudflare One solves your observability problems

Post Syndicated from Chris Draper original https://blog.cloudflare.com/cloudflare-one-observability/

Today, we’re excited to announce Cloudflare One Observability. Cloudflare One Observability will help customers work across Cloudflare One applications to troubleshoot network connectivity, security policies, and performance issues to ensure a consistent experience for employees everywhere. Cloudflare One, our comprehensive SASE platform, already includes visibility for individual products; Cloudflare One Observability is the next step in bringing data together across the Cloudflare One platform.

Network taps and legacy enterprise networks

Traditional enterprise networks operated like a castle protected by a moat. Employees working from a physical office location authenticated themselves at the beginning of their session, they were protected by an extensive office firewall, and the majority of the applications they accessed were on-premise.

Many enterprise networks had a strictly defined number of “entrances” for employees at office locations. Network taps (devices used to measure and report events on a local network) monitored each entrance point, and these devices gave network administrators and engineers complete visibility into their operations.

Learn more about the old castle-and-moat network security model.

Incomplete observability in today’s enterprise network

Today’s enterprise networks have expanded beyond the traditional on-premise model and have become extremely fragmented. Now, employees can work from anywhere. People access enterprise networks from across the Internet, and the applications they use every day are a mix of on-premise and SaaS cloud instances.

SaaS applications are hosted outside the enterprise network, leaving your security teams with limited observability into how users access those applications and move data through them. Without observability on the applications your employees are using, you can’t control how sensitive data is stored, shared, or exposed to third parties.

Now that enterprise networks have become more fragmented, it is increasingly difficult to understand how the various fragments are operating. To even gain limited observability, you have to implement a disorganized combination of network taps, flow data, synthetic probes, and dashboards that fail to share data across one another.

Total observability across an enterprise & cloud network built on Cloudflare One

Cloudflare One Observability is built to solve today’s issue of network fragmentation in a zero trust world. Instead of having data spread across multiple network tools, Cloudflare One Observability will combine data from different Cloudflare One functions into a single experience. Customers will be able to go to one place to troubleshoot any issues they’re experiencing with their enterprise applications and networks.

In today’s world of fragmented enterprise networks, there are some questions that can be difficult to answer. Let’s break down a couple of customer examples and walk through how Cloudflare One Observability will simplify the troubleshooting process.

Troubleshooting bandwidth issues across branch locations

A customer may want to know, “What applications are using up the majority of my bandwidth across multiple office locations?” In a typical enterprise network, a network engineer would need to install a network tap or collect flow data at each office location, aggregate the information across separate networks, then build a custom tool to visualize the bandwidth data.

Instead, for Cloudflare One customers, Cloudflare will automatically do all the upfront data collection and aggregation. Customers will be able to skip straight to troubleshooting and solving their bandwidth problem by using Cloudflare One Observability to visualize bandwidth usage across office locations.

Identifying network vulnerabilities

Another challenging question that customers face is, “What attack trends are popular, and is my network vulnerable?” Assessing a network’s vulnerability is time-consuming as administrators dive into separate applications for VPNs, firewalls, user policies, and endpoints to understand their network’s security posture.

Cloudflare One is built from the ground up to simplify this problem. Observability is straightforward when your network on-ramps, firewalls, user policies, and endpoint protection are all managed within the same platform. Customers will be able to go to the Cloudflare One Observability experience to see security patches that are automatically applied by Cloudflare so that customers don’t have to worry. Cloudflare One lets you know whether you’ve been targeted by an attack and gives you confidence that you’re protected.

Troubleshooting slow network performance

Many people have experienced logging into a slow enterprise network. The general problem of “my network is slow when I access an on-premise or SaaS application” can be tough to solve. If employees are working remotely, a network engineer would need to dig through different applications to troubleshoot latency and jitter between VPNs, firewalls, user policies, and endpoint connections.

Cloudflare One Observability simplifies this time-consuming troubleshooting process. When your on-ramps, firewalls, user policies, and endpoint monitoring are all configured on the same platform, you only need to go to one place to troubleshoot these network functions. Cloudflare One’s architecture is built on the concept of single pass inspection. When a request lands on a Cloudflare server, that request passes through instances of Cloudflare One services all on that same single server. This makes it easy to visualize end-to-end network request handling, so customers can seamlessly analyze traffic and identify a network bottleneck or misconfiguration.

Observability powered by Cloudflare’s network

Cloudflare One Observability is built on Cloudflare’s best-in-class network. We have data centers in more than 270 cities across over 100 countries. Since every Cloudflare One product runs on every server, we can provide an unparalleled fast and consistent experience to customers everywhere. Cloudflare built its network and security applications from the ground up on the same infrastructure. Unlike our competitors that have strung together a Zero Trust platform by building siloed applications or through acquisitions, Cloudflare One applications are seamlessly integrated and designed from day one to share data with one another.

As our applications are all built on the same infrastructure, so are our data pipelines and logging services. When you use Cloudflare One, you get the full benefits of our advanced data tools, like Instant Logs for delivering live network data as it arrives and ABR for analyzing network data at scale.

How Cloudflare One solves your observability problems

Delivering the Zero Trust observability customers need today

Since 2009, Cloudflare has built one of the fastest, most reliable, and most secure networks in the world. We’ve built Cloudflare One and Cloudflare One Observability on top of this network, and we’re extending its power to meet the challenges of any company. The move to Zero Trust is a paradigm shift, and we believe the security benefits of this new paradigm will make it inevitable for every company. We’re proud of how we have helped and continue to help existing and new customers reinvent their corporate networks.

Construction of Cloudflare One Observability is still in progress. If you’re excited about this new product, you can sign up for our wait list now!

Next generation intrusion detection: an update on Cloudflare’s IDS capabilities

Post Syndicated from Annika Garbers original https://blog.cloudflare.com/intrusion-detection/

Next generation intrusion detection: an update on Cloudflare’s IDS capabilities

Next generation intrusion detection: an update on Cloudflare’s IDS capabilities

In an ideal world, intrusion detection would apply across your entire network – data centers, cloud properties, and branch locations. It wouldn’t impact the performance of your traffic. And there’d be no capacity constraints. Today, we’re excited to bring this one step closer to reality by announcing the private beta of Cloudflare’s intrusion detection capabilities: live monitoring for threats across all of your network traffic, delivered as-a-service — with none of the constraints of legacy hardware approaches.

Cloudflare’s Network Services, part of Cloudflare One, help you connect and secure your entire corporate network — data center, cloud, or hybrid — from DDoS attacks and other malicious traffic. You can apply Firewall rules to keep unwanted traffic out or enforce a positive security model, and integrate custom or managed IP lists into your firewall policies to block traffic associated with known malware, bots, or anonymizers. Our new Intrusion Detection System (IDS) capabilities expand on these critical security controls by actively monitoring for a wide range of known threat signatures in your traffic.

What is an IDS?

Intrusion Detection Systems are traditionally deployed as standalone appliances, but are often incorporated as features in more modern or higher-end firewalls. They expand the security coverage of traditional firewalls – which focus on blocking traffic you know you don’t want in your network – by analyzing traffic against a broader threat database, detecting a variety of sophisticated attacks such as ransomware, data exfiltration, and network scanning based on signatures or “fingerprints” in network traffic. Many IDSs also incorporate anomaly detection, monitoring activity against a baseline to identify unexpected traffic patterns that could indicate malicious activity. (If you’re interested in the evolution of network firewall capabilities, we recommend this post, where we dive deeper into the topic.)
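Signature matching, the core technique described above, can be sketched in a few lines: scan each payload against a library of known threat fingerprints. The signatures here are made-up byte patterns for illustration, not real Suricata rules.

```python
import re

# Illustrative threat "fingerprints"; a production IDS would load
# thousands of community- and vendor-maintained signatures.
SIGNATURES = {
    "network-scan": re.compile(rb"\x00\x00\x00\x00USER anonymous"),
    "exfil-marker": re.compile(rb"BEGIN-DUMP:[0-9a-f]{8}"),
}

def inspect(payload: bytes):
    """Return the names of all signatures matching the payload."""
    return [name for name, sig in SIGNATURES.items() if sig.search(payload)]
```

Anomaly detection complements this by flagging traffic that matches no signature but deviates from a learned baseline.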

What problems have users encountered with existing IDS solutions?

We’ve interviewed tons of customers about their experiences deploying IDS and the pain points they’re hoping we can solve. Customers have mentioned the full list of historical problems we frequently hear about other hardware-based security solutions: capacity planning; location planning and backhauling traffic through a central location for monitoring; downtime for installation, maintenance, and upgrades; and vulnerability to congestion or failure under large volumes of traffic (e.g. DDoS attacks).

Customers we talked to also consistently cited challenges making trade-off decisions between security and performance for their network traffic. One network engineer explained:

“I know my security team hates me for this, but I can’t let them enable the IDS function on our on-prem firewalls – in the tests my team ran, it cut my throughput by almost a third. I know we have this gap in our security now, and we’re looking for an alternative way to get IDS coverage for our traffic, but I can’t justify slowing down the network for everyone in order to catch some theoretical bad traffic.”

Finally, customers who did choose to take the performance hit and invest in an IDS appliance reported that they often mute or ignore the feed of alerts coming into their SOC after turning it on. With the amount of noise on the Internet and the potential risk of missing an important signal, IDSs can end up generating a lot of false positives or non-actionable notifications. This volume can lead busy SOC teams to get alert fatigue and end up silencing potentially important signals buried in the noise.
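One common way SOC teams tame this noise is to deduplicate repeated alerts within a time window, emitting one alert per signature per window instead of a flood. The sketch below is our own illustration of that pattern, not a Cloudflare feature.

```python
import time
from collections import defaultdict

class AlertDeduper:
    """Suppress repeats of the same signature within a time window."""

    def __init__(self, window_seconds=300):
        self.window = window_seconds
        self.last_seen = {}
        self.suppressed = defaultdict(int)

    def should_alert(self, signature, now=None):
        now = time.time() if now is None else now
        last = self.last_seen.get(signature)
        if last is not None and now - last < self.window:
            self.suppressed[signature] += 1
            return False  # duplicate within window: hold it back
        self.last_seen[signature] = now
        return True
```

The `suppressed` counts can still be surfaced in a periodic digest, so the signal is summarized rather than silently discarded.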

How is Cloudflare tackling these problems?

We believe there’s a more elegant, efficient, and effective way to monitor all of your network traffic for threats without introducing performance bottlenecks or burning your team out with non-actionable alerts. Over the past year and a half, we’ve learned from your feedback, experimented with different technology approaches, and developed a solution that takes those tough trade-off decisions out of the picture.

Next generation intrusion detection: an update on Cloudflare’s IDS capabilities

One interface across all your traffic

Cloudflare’s IDS capabilities operate across all of your network traffic – any IP, port, or protocol – whether it flows to IPs we advertise on your behalf, IPs we lease to you, or soon, traffic within your private network. You can enforce consistent monitoring and security control across your entire network in one place.

No more hardware headaches

Like all of our security functions, we built our IDS from scratch in software, and it is deployed across every server on Cloudflare’s global Anycast network. This means:

  • No more capacity planning: Cloudflare’s entire global network capacity is now the capacity of your IDS – currently 142 Tbps and counting.
  • No more location planning: No more picking regions, backhauling traffic to central locations, or deploying primary and backup appliances – because every server runs our IDS software and traffic is automatically attracted to the closest network location to its source, redundancy and failover are built in.
  • No maintenance downtime: Improvements to Cloudflare’s IDS capabilities, like all of our products, are deployed continuously across our global network.

Threat intelligence from across our interconnected global network

The attack landscape is constantly evolving, and you need an IDS that stays ahead of it. Because Cloudflare’s IDS is delivered in software we wrote from the ground up and maintain, we’re able to continuously feed threat intelligence from the 20+ million Internet properties on Cloudflare back into our policies, keeping you protected from both known and new attack patterns.

Our threat intelligence combines open-source feeds that are maintained and trusted by the security community – like Suricata threat signatures – with information collected from our unique vantage point as an incredibly interconnected network carrying a significant percentage of all Internet traffic. Not only do we share these insights publicly through tools like Cloudflare Radar; we also feed them back into our security tools including IDS so that our customers are protected as quickly as possible from emerging threats. Cloudflare’s newly announced Threat Intel team will augment these capabilities even further, applying additional expertise to understanding and deriving insights from our network data.

Excited to get started?

If you’re an Advanced Magic Firewall customer, you can get access to these features in private beta starting now. You can reach out to your account team to learn more or get started now – we can’t wait to hear your feedback as we continue to develop these capabilities!

Launching In-Line Data Loss Prevention

Post Syndicated from Noelle Gotthardt original https://blog.cloudflare.com/inline-data-loss-prevention/

Launching In-Line Data Loss Prevention

Launching In-Line Data Loss Prevention

Data Loss Prevention (DLP) enables you to protect your data based on its characteristics — or what it is. Today, we are very excited to announce that Data Loss Prevention is arriving as a native part of the Cloudflare One platform. If you’re interested in early access, please see the bottom of this post!

In the process of building Cloudflare One’s DLP solution, we talked to customers of all sizes and across dozens of industries. We focused on learning about their experiences, what products they are using, and what solutions they lack. The answers revealed significant customer challenges and frustrations. We are excited to deliver a product to put those problems in the past — and to do so as part of a comprehensive Zero Trust solution.

Customers are struggling to understand their data flow

Some customers have been using DLP solutions in their organizations for many years. They have deployed endpoint agents, crafted custom rulesets, and created incident response pipelines. Some built homemade tools to trace credit card numbers on the corporate network or rulesets to track hundreds of thousands of exact data match hashes.

Meanwhile, other customers are brand new to the space. They have small, scrappy teams supporting many IT and security functions. They do not have readily available resources to allocate to DLP and do not want to deprioritize other work to get started.

Still, many told the same story: the meteoric rise of SaaS tools left them unsure of where their data is moving and living. The migration of data off of corporate servers and into the cloud resulted in a loss of visibility and control. Even teams with established data protection programs strive for better visibility on the network. They are all asking the same types of questions:

  • Where is the data going?
  • Are uploads and downloads moving to and from corporate or personal SaaS instances?
  • What applications are storing sensitive data?
  • Who has access to those applications?
  • Can we see and block large downloads from file repositories?

Many customers seem to feel as though they have fallen behind because they haven’t solved these problems — and yet many customers are reporting the exact same story. However, these struggles do not mean anyone is behind — just that a better solution is needed. This told us that building a DLP product was the right choice, but why build it within Cloudflare One?

Launching In-Line Data Loss Prevention

How Data Loss Prevention ties in to Zero Trust

A Zero Trust network architecture is fundamentally designed to secure your data. By checking every attempt to access a protected app, machine, or remote desktop, your data is protected on the basis of identity and device posture. With DNS and HTTP filtering, your data is protected based on content category and reputation. By adding an API-driven CASB, your data is protected based on your applications’ configurations, too.

With each piece of the architecture, your data is protected based on a new identifier. The identifiers above help you understand: who accessed the data, who owned the device that accessed it, where the data went, and how the destination was configured. However, what was the data that was moved?

Data Loss Prevention enables you to protect your data based on its characteristics, or what it is. For example, sensitive or confidential data can be identified a number of ways, such as keywords, patterns, or file types. These indicators help you understand the information being transmitted across or out of the network.
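As a rough illustration of pattern-based detection, the sketch below pairs a card-number regex with a Luhn checksum to cut down on false positives. Real DLP profiles are more sophisticated; this is not Cloudflare's implementation.

```python
import re

# Match 13-16 digits, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: weeds out random digit runs that merely look like PANs."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str):
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"\D", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits
```

Keywords and file-type checks layer on top of patterns like this, so a match can be weighed in the context of the document carrying it.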

With DLP embedded in Cloudflare One, you can combine these identifiers to create rules catered to your organization. You get to specify the who, how, where, and what that meets your needs. We aim to deliver a comprehensive, detailed understanding of your network and your data, as well as allow you to easily implement protection.

How It Works

First: Identify the Data

DLP Profiles are being added to the Zero Trust dashboard. These profiles are where you define what data you want to protect. You will be able to add keywords and craft regexes to identify the presence of sensitive data. Profiles for common detections, such as credit card numbers, will be provided by Cloudflare.

Next: Create an HTTP Policy

After configuring a DLP Profile, you can then create a Cloudflare Gateway HTTP policy to allow or block the sensitive data from leaving your organization. Gateway will parse and scan your HTTP traffic for strings matching the keywords or regexes specified in the DLP profile.

Why Cloudflare

We know DLP is a big challenge to do comprehensively and at scale. Those are the types of problems we excel at. Our network securely delivers traffic to 95% of the world’s Internet-connected population within 50ms. It also supports our market-leading products that send and protect customer traffic at unimaginable speed and scale. We are using that powerful network and our experience solving problems like this to take on Data Loss Prevention, and we’re very excited by our results.

Join the waitlist

We are launching a closed beta of our Data Loss Prevention product. If you’re interested in early access, you can join the waitlist today by filling out this form.

What’s next?

We’re just getting started with DLP! We already have many plans for growth and integration with other Cloudflare One products, such as Remote Browser Isolation.

Area 1 threat indicators now available in Cloudflare Zero Trust

Post Syndicated from Jesse Kipp original https://blog.cloudflare.com/phishing-threat-indicators-in-zero-trust/

Area 1 threat indicators now available in Cloudflare Zero Trust

Area 1 threat indicators now available in Cloudflare Zero Trust

Over the last several years, both Area 1 and Cloudflare built pipelines for ingesting threat indicator data, for use within our products. During the acquisition process we compared notes, and we discovered that the overlap of indicators between our two respective systems was smaller than we expected. This presented us with an opportunity: as one of our first tasks in bringing the two companies together, we have started bringing Area 1’s threat indicator data into the Cloudflare suite of products. This means that all the products today that use indicator data from Cloudflare’s own pipeline now get the benefit of Area 1’s data, too.

Area 1 threat indicators now available in Cloudflare Zero Trust

Area 1 built a data pipeline focused on identifying new and active phishing threats, which now supplements the Phishing category available today in Gateway. If you have a policy that references this category, you’re already benefiting from this additional threat coverage.

How Cloudflare identifies potential phishing threats

Cloudflare is able to combine the data, procedures and techniques developed independently by both the Cloudflare team and the Area 1 team prior to acquisition. Customers are able to benefit from the work of both teams across the suite of Cloudflare products.

Cloudflare curates a set of data feeds both from our own network traffic, OSINT sources, and numerous partnerships, and applies custom false positive control. Customers who rely on Cloudflare are spared the software development effort as well as the operational workload to distribute and update these feeds. Cloudflare handles this automatically, with updates happening as often as every minute.

Cloudflare is able to go beyond this and work to proactively identify phishing infrastructure in multiple ways. With the Area 1 acquisition, Cloudflare is now able to apply the adversary-focused threat research approach of Area 1 across our network. A team of threat researchers tracks state-sponsored and financially motivated threat actors, newly disclosed CVEs, and current phishing trends.

Cloudflare now operates mail exchange servers for hundreds of organizations around the world, in addition to its DNS resolvers, Zero Trust suite, and network services. Each of these products generates data that is used to enhance the security of all of Cloudflare’s products. For example, as part of mail delivery, the mail engine performs domain lookups, scores potential phishing indicators via machine learning, and fetches URLs, generating data that can now be used across Cloudflare’s offerings.

How Cloudflare Area 1 identifies potential phishing threats

The Cloudflare Area 1 team operates a suite of web crawling tools designed to identify phishing pages, capture phishing kits, and highlight attacker infrastructure. In addition, Cloudflare Area 1 threat models assess campaigns based on signals gathered from threat actor campaigns; and the associated IOCs of these campaign messages are further used to enrich Cloudflare Area 1 threat data for future campaign discovery. Together these techniques give Cloudflare Area 1 a leg up on identifying the indicators of compromise for an attacker prior to their attacks against our customers. As part of this proactive approach, Cloudflare Area 1 also houses a team of threat researchers that track state-sponsored and financially motivated threat actors, newly disclosed CVEs, and current phishing trends. Through this research, analysts regularly insert phishing indicators into an extensive indicator management system that may be used for our email product or any other product that may query it.

Cloudflare Area 1 also collects information about phishing threats during our normal operation as the mail exchange server for hundreds of organizations across the world. As part of that role, the mail engine performs domain lookups, scores potential phishing indicators via machine learning, and fetches URLs. For those emails found to be malicious, the indicators associated with the email are inserted into our indicator management system as part of a feedback loop for subsequent message evaluation.

How Cloudflare data will be used to improve phishing detection

In order to support Cloudflare products, including Gateway and Page Shield, Cloudflare has a data pipeline that ingests data from partnerships and OSINT sources, as well as threat intelligence generated in-house at Cloudflare. We are always working to curate a threat intelligence data set that is relevant to our customers and actionable in the products Cloudflare supports. This is our North Star: what data can we provide that enhances our customers’ security without requiring them to manage the complexity of data, relationships, and configuration? We offer a variety of security threat categories, but some major focus areas include:

  • Malware distribution
  • Malware and Botnet Command & Control
  • Phishing
  • New and newly seen domains

Phishing is a threat regardless of how the potential phishing link gets entry into an organization, whether via email, SMS, calendar invite or shared document, or other means. As such, detecting and blocking phishing domains has been an area of active development for Cloudflare’s threat data team since almost its inception.

Looking forward, we will be able to incorporate that work into Cloudflare Area 1’s phishing email detection process. Cloudflare’s list of phishing domains can help identify malicious email when those domains appear in the sender, delivery headers, message body or links of an email.
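The kind of matching described here can be sketched as follows: pull candidate domains out of a message's sender and body links, then intersect them with a blocklist of known phishing domains. The blocklist entries, regex, and message fields are illustrative, not the production detection logic.

```python
import re
from email.message import EmailMessage

# Illustrative blocklist; in practice this would be a large, continuously
# updated feed of phishing domains.
PHISHING_DOMAINS = {"login-verify.example", "secure-update.invalid"}

# Capture the host of a URL, or the domain after an "@" in an address.
DOMAIN_RE = re.compile(r"https?://([A-Za-z0-9.-]+)|@([A-Za-z0-9.-]+)")

def extract_domains(msg: EmailMessage):
    found = set()
    text = msg.get("From", "") + "\n" + msg.get_content()
    for m in DOMAIN_RE.finditer(text):
        found.add((m.group(1) or m.group(2)).lower())
    return found

def is_phishy(msg):
    return bool(extract_domains(msg) & PHISHING_DOMAINS)
```

A fuller implementation would also walk the delivery headers and unwrap redirectors, but the core check is the same set intersection.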

1+1 = 3: Greater dataset sharing between Cloudflare and Area 1

Threat actors have long had an unfair advantage — and that advantage is rooted in the knowledge of their target, and the time they have to set up specific campaigns against their targets. That dimension of time allows threat actors to set up the right infrastructure, perform reconnaissance, stage campaigns, perform test probes, observe their results, iterate, improve and then launch their ‘production’ campaigns. This precise element of time gives us the opportunity to discover, assess and proactively filter out campaign infrastructure prior to campaigns reaching critical mass. But to do that effectively, we need visibility and knowledge of threat activity across the public IP space.

With Cloudflare’s extensive network and global insight into the origins of DNS, email or web traffic, combined with Cloudflare Area 1’s datasets of campaign tactics, techniques, and procedures (TTPs), seed infrastructure and threat models — we are now better positioned than ever to help organizations secure themselves against sophisticated threat actor activity, and regain the advantage that for so long has been heavily weighted towards the bad guys.

If you’d like to extend Zero Trust to your email security to block advanced threats, contact your Customer Success manager, or request a Phishing Risk Assessment here.

How to replace your email gateway with Cloudflare Area 1

Post Syndicated from Shalabh Mohan original https://blog.cloudflare.com/replace-your-email-gateway-with-area-1/

How to replace your email gateway with Cloudflare Area 1

How to replace your email gateway with Cloudflare Area 1

Leaders and practitioners responsible for email security are faced with a few truths every day. It’s likely true that their email is cloud-delivered and comes with some built-in protection that does an OK job of stopping spam and commodity malware. It’s likely true that they have spent considerable time, money, and staffing on their Secure Email Gateway (SEG) to stop phishing, malware, and other email-borne threats. Despite this, it’s also true that email continues to be the most frequent source of Internet threats, with Deloitte research finding that 91% of all cyber attacks begin with phishing.

If anti-phishing and SEG services have both been around for so long, why do so many phish still get through? If you’re sympathetic to Occam’s razor, it’s because the SEG was not designed to protect the email environments of today, nor is it effective at reliably stopping today’s phishing attacks.

But if you need a stronger case than Occam delivers — then keep on reading.

Why the world has moved past the SEG

The most prominent change within the email market is also what makes a traditional SEG redundant – the move to cloud-native email services. More than 85% of organizations are expected to embrace a “cloud-first” strategy by 2025, according to Gartner®. Organizations that expect cloud-native scale, resiliency, and flexibility from their security controls are not going to get it from legacy devices such as SEGs.

When it comes to email specifically, Gartner® notes that, “Advanced email security capabilities are increasingly being deployed as integrated cloud email security solutions rather than as a gateway” – with at least 40% of organizations using built-in protection capabilities from cloud email providers instead of a SEG by 2023. Today, email comes from everywhere and goes everywhere – putting a SEG in front of your Exchange server is anachronistic, and putting a SEG in front of cloud inboxes in a mobile and remote-first world is intractable. Email security today should follow your user, should be close to your inbox, and should “be everywhere”.

Apart from being architecturally out of time, a SEG also falls short at detecting advanced phishing and socially engineered attacks. This is because a SEG was originally designed to stop spam – a high-volume problem that needs large attack samples to detect and nullify. But today’s phishing attacks are more sniper than scattergun. They are low volume, highly targeted, and exploit our implicit trust in email communications to steal money and data. Detecting modern phishing attacks requires compute-intensive advanced email analysis and threat detection algorithms that a SEG cannot perform at scale.

Nowhere is a SEG’s outdated detection philosophy more laid bare than when admins are confronted with a mountain of email threat policies to create and tune. Unlike most other cyber attacks, email phishing and Business Email Compromise (BEC) have too many “fuzzy” signals and cannot solely be detected by deterministic if-then statements. Moreover, attackers don’t stand still while you create email threat policies – they adapt fast and modify techniques to bypass the rules you just created. Relying on SEG tuning to stop phishing is like playing a game of Whack-A-Mole rigged in the attacker’s favor.

How to replace your email gateway with Cloudflare Area 1

To stop phishing, look ahead

Traditional email security defenses rely on knowledge of yesterday’s active attack characteristics, such as reputation data and threat signatures, to detect the next attack, and therefore can’t reliably defend against modern phishing attacks that continually evolve.

What’s needed is forward-looking security technology that is aware not only of yesterday’s active phishing payloads, websites, and techniques — but also has insight into the threat actors’ next moves. Which sites and accounts are they compromising or establishing for use in tomorrow’s attacks? What payloads and techniques are they preparing to use in those attacks? Where are they prodding and probing before an attack?

Cloudflare Area 1 proactively scans the Internet for attacker infrastructure and phishing campaigns that are under construction. Area 1’s threat-focused web crawlers dynamically analyze suspicious web pages and payloads, and continuously update detection models as attacker tactics evolve – all to stop phishing attacks days before they reach the inbox.

When combined with the 1T+ daily DNS requests observed by Cloudflare Gateway, this corpus of threat intelligence enables customers to stop phishing threats at the earliest stage of the attack cycle. In addition, the use of deep contextual analytics to understand message sentiment, tone, tenor and thread variations allows Area 1 to understand and distinguish between valid business process messages and sophisticated impersonation campaigns.

While we are big believers in layering security, the layers should not be redundant. A SEG duplicates a lot of capabilities that customers now get bundled in with their cloud email offering. Area 1 is built to enhance – not duplicate – native email security and stop phishing attacks that get past initial layers of defense.

How to replace your email gateway with Cloudflare Area 1

Planning for your SEG replacement project

The best way to get started with your SEG replacement project is deciding whether it’s a straight replacement or an eventual replacement that starts with augmentation. While Cloudflare Area 1 has plenty of customers that have replaced their SEG (more on that later), we have also seen scenarios where customers prefer to run Cloudflare Area 1 downstream of their SEG initially, assess the efficacy of both services, and then make a more final determination. We make the process straightforward either way!

As you start the project, it’s important to involve the right stakeholders. At a minimum, you should involve an IT admin to ensure email delivery and productivity isn’t impacted and a security admin to monitor detection efficacy. Other stakeholders might include your channel partner if that’s your preferred procurement process and someone from the privacy and compliance team to verify proper handling of data.

Next, you should decide your preferred Cloudflare Area 1 deployment architecture. Cloudflare Area 1 can be deployed as the MX record, over APIs, and can even run in multi-mode deployment. We recommend deploying Cloudflare Area 1 as the MX record for the most effective protection against external threats, but the service fits into your world based on your business logic and specific needs.

The final piece of preparation involves mapping out your email flow. If you have multiple domains, identify where emails from each of your domains route to. Check your different routing layers (e.g. are there MTAs that relay inbound messages?). Having a good understanding of the logical and physical SMTP layers within the organization will ensure proper routing of messages. Discuss what email traffic Cloudflare Area 1 should scan (north/south, east/west, both) and where it fits with your existing email policies.

Executing the transition plan

Step 1: Implement email protection
Here are the broad steps you should follow if Cloudflare Area 1 is configured as the MX record (time estimate: ~30 minutes):

  • Configure the downstream service to accept mail from Cloudflare Area 1.
  • Ensure that Cloudflare Area 1’s egress IPs are not rate limited or blocked as this would affect delivery of messages.
  • If the email server is on-premises, update firewall rules to allow Cloudflare Area 1 to deliver to these systems.
  • Configure remediation rules (e.g. quarantine, add subject or message body prefix, etc.).
  • Test the message flow by injecting messages into Cloudflare Area 1 to confirm proper delivery. (Our team can assist with this step.)
  • Update MX records to point to Cloudflare Area 1.
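The test-injection step above could be scripted with standard SMTP tooling. A sketch using Python's smtplib, where the ingest host is a placeholder you would replace with the address your account team provides:

```python
import smtplib
from email.message import EmailMessage

def build_test_message(sender, recipient):
    msg = EmailMessage()
    msg["From"], msg["To"] = sender, recipient
    msg["Subject"] = "Area 1 delivery test"
    msg.set_content("Test injection to confirm downstream delivery.")
    return msg

def inject(msg, host, port=25):
    # "host" is a placeholder: point it at the Cloudflare Area 1
    # ingest address for your deployment.
    with smtplib.SMTP(host, port, timeout=10) as smtp:
        smtp.send_message(msg)

msg = build_test_message("it-admin@example.com", "test-user@example.com")
# inject(msg, "mail.area1.example")  # uncomment with a real ingest host
```

Confirm the message lands in the test recipient's inbox with the expected headers before cutting over MX records.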

Here are the steps if Cloudflare Area 1 is deployed downstream from an existing email security solution (time estimate: ~30 minutes):

  • Configure the proper lookback hops on Cloudflare Area 1, so that Cloudflare Area 1 can detect the original sender IP address.
  • If your email server is on-premises, update firewall rules to allow Cloudflare Area 1 to deliver to the email server.
  • Configure remediation rules (e.g. quarantine, add subject or message body prefix, etc.).
  • Test the message flow by injecting messages into Cloudflare Area 1 to confirm proper delivery. (Our team can assist with this step.)
  • Update the delivery routes on your SEG to deliver all mail to Cloudflare Area 1, instead of the email servers.

Step 2: Integrate DNS
One of the most common post-email steps customers follow is to integrate Cloudflare Area 1 with their DNS service. If you’re a Cloudflare Gateway customer, good news – Cloudflare Area 1 now uses Cloudflare Gateway as its recursive DNS to protect end users from accessing phishing and malicious sites through email links or web browsing.

Step 3: Integrate with downstream security monitoring and remediation services
Cloudflare Area 1’s detailed and customizable reporting allows for at-a-glance visibility into threats. By integrating with SIEMs through our robust APIs, you can easily correlate Cloudflare Area 1 detections with events from network, endpoint and other security tools for simplified incident management.
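As an example of the kind of normalization a SIEM integration involves, the sketch below converts a detection record into a CEF line that most SIEMs can ingest. The detection field names are hypothetical, not Cloudflare Area 1's actual API schema.

```python
def to_cef(detection: dict) -> str:
    """Render a detection record (hypothetical fields) as a CEF event line."""
    header = "CEF:0|Cloudflare|Area1|1.0|{sig}|{name}|{sev}".format(
        sig=detection["threat_type"],
        name=detection["subject"].replace("|", r"\|"),  # escape CEF delimiter
        sev=detection.get("severity", 5),
    )
    extension = "suser={sender} duser={recipient}".format(
        sender=detection["from"], recipient=detection["to"]
    )
    return header + "|" + extension

event = to_cef({
    "threat_type": "phish",
    "subject": "Urgent: password reset",
    "from": "attacker@bad.example",
    "to": "victim@example.com",
})
```

Lines like this can be forwarded over syslog or an HTTP collector, letting the SIEM correlate email detections with endpoint and network events.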

While Cloudflare Area 1 provides built-in remediation and message retraction to allow customers to respond to threats directly within the Cloudflare Area 1 dashboard, many organizations also choose to integrate with orchestration tools for custom response playbooks. Many customers leverage our API hooks to integrate with SOAR services to manage response processes across their organization.

How to replace your email gateway with Cloudflare Area 1

Metrics to measure success

How will you know your SEG replacement project has been successful and had the desired impact? We recommend measuring metrics relevant to both detection efficacy and operational simplicity.

On the detection front, the obvious metric to measure is the number and nature of phishing attacks blocked before and after the project. Are you seeing new types of phishing attacks being blocked that you weren’t seeing before? Are you getting visibility into campaigns that hit multiple mailboxes? The other detection-based metric to keep in mind is the number of false positives.

On the operational front, it’s critical that email productivity isn’t impacted. A good proxy for this is measuring the number of IT tickets related to email delivery. The availability and uptime of the email security service is another key lever to keep an eye on.

Finally, and perhaps most importantly, measure how much time your security team is spending on email security. Hopefully it’s much less than before! A SEG is known to be a heavy lift, from service deployment to ongoing maintenance. If Cloudflare Area 1 can free up your team’s time to work on other pressing security concerns, that’s as meaningful as stopping the phish themselves.

You have lots of company

The reason we are articulating a SEG replacement plan here is because many of our customers have done it already and are happy with the outcomes.

For example, a Fortune 50 global insurance provider that serves 90 million customers in over 60 countries found their SEG to be insufficient in stopping phishing attacks. Specifically, it was an onerous process to search for “missed phish” once they got past the SEG and reached the inbox. They needed an email security service that could catch these phishing attacks and support a hybrid architecture with both cloud and on-premises mailboxes.

After deploying Cloudflare Area 1 downstream of their Microsoft 365 and SEG layers, our customer was protected against more than 14,000 phishing threats within the first month; none of those phishing messages reached a user’s inbox. A one-step integration with existing email infrastructure meant that maintenance and operational issues were next to none. Cloudflare Area 1’s automated message retraction and post-delivery protection also enabled the insurance provider to easily search for and remediate any missed phish.

If you are interested in speaking with any of our customers that have augmented or replaced their SEG with Cloudflare Area 1, please reach out to your account team to learn more! If you’d like to see Cloudflare Area 1 in action, sign up for a Phishing Risk Assessment here.

Replacing a SEG is a great project to fit into your overall Zero Trust roadmap. For a full summary of Cloudflare One Week and what’s new, tune in to our recap webinar.

1Gartner Press Release, “Gartner Says Cloud Will Be the Centerpiece of New Digital Experiences”, 11 November 2021
2Gartner, “Market Guide for Email Security,” 7 October 2021, Mark Harris, Peter Firstbrook, Ravisha Chugh, Mario de Boer
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

Introducing browser isolation for email links to stop modern phishing threats

Post Syndicated from Shalabh Mohan original https://blog.cloudflare.com/email-link-isolation/


This post is also available in 简体中文, 日本語 and Español.


There is an implicit and unearned trust we place in our email communications. This realization — that an organization can’t truly have a Zero Trust security posture without including email — was the driving force behind Cloudflare’s acquisition of Area 1 Security earlier this year.  Today, we have taken our first step in this exciting journey of integrating Cloudflare Area 1 email security into our broader Cloudflare One platform. Cloudflare Secure Web Gateway customers can soon enable Remote Browser Isolation (RBI) for email links, giving them an unmatched level of protection from modern multi-channel email-based attacks.

Research from Cloudflare Area 1 found that nearly 10% of all observed malicious attacks involved credential harvesters, highlighting that victim identity is what threat actors usually seek. While commodity phishing attacks are blocked by existing security controls, modern attacks and payloads don’t have a set pattern that can reliably be matched with a block or quarantine rule. Additionally, with the growth of multi-channel phishing attacks, an effective email security solution needs the ability to detect blended campaigns spanning email and Web delivery, as well as deferred campaigns that are benign at delivery time, but weaponized at click time.

When enough “fuzzy” signals exist, isolating the destination to ensure end users are secure is the most effective solution. With the integration of Cloudflare Browser Isolation into Cloudflare Area 1 email security, these attacks can now be easily detected and neutralized.

Human error is human

Why do humans still click on malicious links? It’s not because they haven’t attended enough training sessions or are not conscious about security. It’s because they have 50 unread emails in their inbox, have another Zoom meeting to get to, or are balancing a four-year-old on their shoulders. They are trying their best. Anyone, including security researchers, can fall for socially engineered attacks if the adversary is well-prepared.

If we accept that human error is here to stay, developing security workflows introduces new questions and goals:

  • How can we reduce, rather than eliminate, the likelihood of human error?
  • How can we reduce the impact of human error when, not if, it happens?
  • How can security be embedded into an employee’s existing daily workflows?

It’s these questions that we had in mind when we reached the conclusion that email needs to be a fundamental part of any Zero Trust platform. Humans make mistakes in email just as regularly — in fact, sometimes more so — as they make mistakes surfing the Web.

To block, or not to block?

For IT teams, that is the question they wrestle with daily to balance risk mitigation with user productivity. The SOC team wants IT to block everything risky or unknown, whereas the business unit wants IT to allow everything not explicitly bad. If IT decides to block risky or unknown links, and it results in a false positive, they waste time manually adding URLs to allow lists — and perhaps the attacker later pivots those URLs to malicious content anyway. If IT decides to allow risky or unknown sites, best case they waste time reimaging infected devices and resetting login credentials — but all too commonly, they triage the damage from a data breach or ransomware lockdown. The operational simplicity of enabling RBI with email — also known as email link isolation — saves the IT, SOC, and business unit teams significant time.

How it works

For a Cloudflare Area 1 customer, the first step is to enable RBI within the portal.


With email link isolation in place, here’s the short-lived life of an email with suspicious links:

Step 1: Cloudflare Area 1 inspects the email and determines that certain links in the message are suspicious or on the margin.

Step 2: Suspicious URLs and hyperlinks in the email get rewritten to a custom Cloudflare Area 1 prefix URL.

Step 3: The email is delivered to the intended inboxes.

Step 4: If a user clicks the link in the email, Cloudflare redirects to a remote browser via <authdomain>.cloudflareaccess.com/browser/{{url}}.

Step 5: The remote browser loads the website on a server on the Cloudflare global network and serves draw commands to the clientless browser on the user’s device.
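The link-rewriting step (step 2 above) can be sketched as a simple transform over the message body. The prefix format and the `is_suspicious()` heuristic below are illustrative assumptions, not Cloudflare’s actual implementation:

```python
# A minimal sketch of rewriting suspicious hrefs to an isolation prefix URL.
# The prefix, auth domain, and is_suspicious() heuristic are hypothetical.
import re
from urllib.parse import quote

ISOLATION_PREFIX = "https://example.cloudflareaccess.com/browser/"  # hypothetical auth domain

def is_suspicious(url: str) -> bool:
    # Placeholder verdict: treat anything outside a known-good domain as suspicious.
    return "trusted.example.com" not in url

def rewrite_links(html_body: str) -> str:
    """Rewrite suspicious hrefs so clicks route through the remote browser."""
    def rewrite(match: re.Match) -> str:
        url = match.group(1)
        if not is_suspicious(url):
            return match.group(0)  # leave known-good links untouched
        encoded = quote(url, safe="")  # percent-encode the whole destination URL
        return f'href="{ISOLATION_PREFIX}{encoded}"'
    return re.sub(r'href="([^"]+)"', rewrite, html_body)

body = '<a href="https://newly-registered.example/login">Reset password</a>'
print(rewrite_links(body))
```

The key property is that the original destination survives inside the rewritten URL, so the verdict can be re-evaluated at click time rather than only at delivery time.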

By executing the browser code and controlling user interactions on a remote server rather than a user device, any and all malware and phishing attempts are isolated, and won’t infect devices and compromise user identities. This improves both user and endpoint security when there are unknown risks and unmanaged devices, and allows users to access websites without having to connect to a VPN or having strict firewall policies.

Cloudflare’s RBI is built on a unique patented technique called Network Vector Rendering (NVR), which utilizes headless Chromium-based browsers in the cloud, transparently intercepts draw layer output, transmits the draw commands efficiently and securely over the web, and redraws them in the windows of local HTML5 browsers. Unlike legacy Browser Isolation technologies that relied on pixel pushing or DOM reconstruction, NVR is optimized for scalability, security and end user transparency, while ensuring the broadest compatibility with websites.
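To get a feel for why shipping draw commands beats pushing pixels, consider the rough sizes involved. The command format below is purely illustrative (not the NVR wire format); the comparison against an uncompressed framebuffer is the point:

```python
# Illustrative sketch (not Cloudflare's NVR wire format): why a frame described
# as vector draw commands can be far smaller than the same frame as raw pixels.
import json

WIDTH, HEIGHT = 1920, 1080

# A simple page frame described as a handful of draw commands, roughly the kind
# of output a headless browser's draw layer might emit.
draw_commands = [
    {"op": "fill_rect", "x": 0, "y": 0, "w": WIDTH, "h": HEIGHT, "color": "#ffffff"},
    {"op": "draw_text", "x": 40, "y": 60, "text": "Hello", "font": "16px sans-serif"},
    {"op": "draw_path", "points": [[0, 100], [WIDTH, 100]], "stroke": "#cccccc"},
]

vector_bytes = len(json.dumps(draw_commands).encode())
pixel_bytes = WIDTH * HEIGHT * 4  # uncompressed RGBA framebuffer for one frame

print(f"draw commands: {vector_bytes} bytes, raw pixels: {pixel_bytes} bytes")
```

Real pixel streams are of course compressed, but the vector representation also carries semantic structure (text stays text), which is what keeps remote sessions responsive and compatible with local rendering.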


Let’s look at a specific example of a deferred phishing attack, how it slips past traditional defenses, and how email link isolation addresses the threat.

As organizations look to adopt new security principles and network architectures like Zero Trust, adversaries continually come up with techniques to bypass these controls by exploiting the most used and most vulnerable application: email. Email is a prime target for compromise because of its ubiquity and because its defenses can be bypassed fairly easily by a motivated attacker.

Let’s take an example of a “deferred phishing attack”, without email link isolation.


Attacker preparation: weeks before launch
The attacker sets up infrastructure for the phishing attempt to come. This may include:

  • Registering a domain
  • Obtaining an SSL/TLS certificate for it
  • Setting up proper email authentication (SPF, DKIM, DMARC)
  • Creating a benign web page

At this point, there is no evidence of an attack that can be picked up by secure email gateways, authentication-based solutions, or threat intelligence that relies solely on reputation-based signals and other deterministic detection techniques.

Attack “launch”: Sunday afternoon
The attacker sends an authentic-looking email from the newly-created domain. This email includes a link to the (still benign) webpage. There’s nothing in the email to block or flag it as suspicious. The email gets delivered to intended inboxes.

Attack launch: Sunday evening
Once the attacker is sure that all emails have reached their destination, they pivot the link to a malicious destination by changing the attacker-controlled webpage, perhaps by creating a fake login page to harvest credentials.

Attack landing: Monday morning
As employees scan their inboxes to begin their week, they see the email. Maybe not all of them click the link, but some of them do. Maybe not all of those that clicked enter their credentials, but a handful do. Without email link isolation, the attack is successful.

The consequences of the attack have also just begun – once user login credentials are obtained, attackers can compromise legitimate accounts, distribute malware to your organization’s network, steal confidential information, and cause much more downstream damage.

The integration between Cloudflare Area 1 and Cloudflare Browser Isolation provides a critical layer of post-delivery protection that can foil attacks like the deferred phishing example described above.

If the attacker prepares for and executes the attack as described in the previous section, our email link isolation would analyze the email link at the time of click and perform a high-level assessment of whether the user should be able to navigate to it.

  • Safe link: the user is transparently redirected to the site.
  • Malicious link: the user is blocked from navigating to it.
  • Suspicious link: the user is strongly discouraged from navigating and is presented with a splash warning page encouraging them to view the link in an isolated browser.
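The click-time dispatch above can be sketched as a small handler. The verdict names mirror the list; the `classify()` stub stands in for Cloudflare Area 1’s real-time assessment, and the path format is an assumption:

```python
# A sketch of click-time verdict dispatch for a rewritten link. classify() is
# a placeholder for the real-time assessment; the /browser/ path format is a
# hypothetical example.
from urllib.parse import unquote

def classify(url: str) -> str:
    # Placeholder verdicts keyed on the destination string; the real service
    # evaluates the live page at the time of click.
    if "malware" in url:
        return "malicious"
    if "unknown" in url:
        return "suspicious"
    return "safe"

def handle_click(rewritten_path: str) -> str:
    url = unquote(rewritten_path.removeprefix("/browser/"))
    verdict = classify(url)
    if verdict == "safe":
        return f"302 redirect -> {url}"           # transparent pass-through
    if verdict == "malicious":
        return "403 blocked"                      # navigation denied
    return f"200 splash page -> isolate {url}"    # warn, then remote browser

print(handle_click("/browser/https%3A%2F%2Funknown.example%2Flogin"))
```

Because the decision happens at click time, a link that was benign at delivery but weaponized later lands in the `suspicious` or `malicious` branch instead of sailing through.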


While a splash warning page was the mitigation employed in the above example, email link isolation will also offer security administrators other customizable mitigation options, including putting the webpage in read-only mode, restricting the download and upload of files, and disabling keyboard input altogether, all from their Cloudflare Gateway consoles.

Email link isolation also fits into users’ existing workflows without impacting productivity or sapping their time with IT tickets. Because Cloudflare Browser Isolation is built and deployed on the Cloudflare network, with global locations in 270 cities, web browsing sessions are served as close to users as possible, minimizing latency. Additionally, Cloudflare Browser Isolation sends the final output of each webpage to a user instead of scrubbing pages or streaming pixels, further reducing latency without breaking browser-based applications such as SaaS apps.

How do I get started?

Existing Cloudflare Area 1 and Cloudflare Gateway customers are eligible for the beta release of email link isolation. To learn more and to express interest, sign up for our upcoming beta.

If you’d like to see what threats Cloudflare Area 1 detects on your live email traffic, request a free phishing risk assessment here. It takes five minutes to get started and does not impact mail flow.

CVE-2022-1096: How Cloudflare Zero Trust provides protection from zero day browser vulnerabilities

Post Syndicated from Tim Obezuk original https://blog.cloudflare.com/cve-2022-1096-zero-trust-protection-from-zero-day-browser-vulnerabilities/


On Friday, March 25, 2022, Google published an emergency security update for all Chromium-based web browsers to patch a high severity vulnerability (CVE-2022-1096). At the time of writing, the specifics of the vulnerability are restricted until the majority of users have patched their local browsers.

It is important that everyone takes a moment to update their local web browser. It’s one quick and easy action everyone can take to contribute to the cybersecurity posture of their team.

Even if everyone updated their browser straight away, this remains a reactive measure to a threat that existed before the update was available. Let’s explore how Cloudflare takes a proactive approach by mitigating the impact of zero day browser threats with our Zero Trust and remote browser isolation services. Cloudflare’s remote browser isolation service is built from the ground up to protect against zero day threats, and all remote browsers on our global network have already been patched.

How Cloudflare Zero Trust protects against browser zero day threats

Cloudflare Zero Trust applies a layered defense strategy to protect users from zero day threats while browsing the Internet:

  1. Cloudflare’s roaming client steers Internet traffic over an encrypted tunnel to a nearby Cloudflare data center for inspection and filtering.
  2. Cloudflare’s secure web gateway inspects and filters traffic based on our network intelligence, antivirus scanning and threat feeds. Requests to known malicious services are blocked and high risk or unknown traffic is automatically served by a remote browser.
  3. Cloudflare’s browser isolation service executes all website code in a remote browser to protect unpatched devices from threats inside the unknown website.

Protection from the unknown

Zero day threats often exist undetected in the wild and actively target users through risky links in emails or other external communication points such as customer support tickets. This risk cannot be eliminated, but it can be reduced by using remote browser isolation to minimize the attack surface. Cloudflare’s browser isolation service is built from the ground up to protect against zero day threats:

  • Prevent compromised web pages from affecting the endpoint device by executing all web code in a remote browser that is physically isolated from the endpoint device. The endpoint device only receives a thin HTML5 remoting shell from our network and vector draw commands from the remote browser.
  • Mitigate the impact of compromise by automatically destroying and reconstructing remote browsers back to a known clean state at the end of their browser session.
  • Protect adjacent remote browsers by encrypting all remote browser egress traffic, segmenting remote browsers with virtualization technologies and distributing browsers across physical hardware in our global network.
  • Aid Security Incident Response (SIRT) teams by logging all remote egress traffic in the integrated secure web gateway logs.

Patching remote browsers around the world

Even with all these security controls in place, patching browsers remains critical to eliminate the risk of compromise. The process of patching local versus remote browsers tells two different stories that can be the difference between falling victim to, and avoiding, a zero day vulnerability.

Patching your workforce’s local browsers requires politely asking users to interrupt their work to update their browser, or applying mobile device management techniques that disrupt their work by forcing an update. Neither of these options creates happy users or delivers rapid mitigation.

Patching remote browsers is a fundamentally different process. Since the remote browser itself runs on our network, users and administrators do not need to intervene; security patches are automatically deployed to remote browsers on Cloudflare’s network. Then, without the user restarting their local browser, any traffic to an isolated website is automatically served from a patched remote browser.

Finally, browser-based vulnerabilities such as CVE-2022-1096 are not uncommon. With over 300 in 2021 and over 40 already in 2022 (according to cvedetails.com), it is critical for administrators to have a plan to rapidly mitigate and patch browsers in their organization.

Get started with Cloudflare Browser Isolation

Cloudflare Browser Isolation is available to both self-serve and enterprise customers. Whether you’re a small startup or a massive enterprise, our network is ready to serve fast and secure remote browsing for your team, no matter where they are based.

To get started, visit our website and, if you’re interested in evaluating Browser Isolation, ask our team for a demo of our Clientless Web Isolation.