Tag Archives: Zero-Trust

Improved access controls: API access can now be selectively disabled

Post Syndicated from Joseph So original https://blog.cloudflare.com/improved-api-access-control/

Starting today, it is possible to selectively scope API access to your account to specific users.

We are making it easier for account owners to view and manage the access their users have on an account by allowing them to restrict API access to the account. Ensuring users have only the access they need, and maximizing visibility of that access, is critical, and today's change is another step in this direction.

When Cloudflare was first introduced, a single user had access to a single account. As larger enterprises have adopted us, the need for granular access and centralized control of an account has become progressively more important. Nowadays, an enterprise using Cloudflare could have tens or hundreds of users on an account, some of whom need to configure the account and some of whom do not. In addition, to centralize configuration, some enterprises need service accounts: credentials shared among several members of an organization.

While account owners have always been able to restrict access to an account by their users, they haven’t been able to view the keys and tokens created by their users. Restricting use of the API is the first step in a direction that will allow account owners a single control plane experience to manage their users’ access.

Steps to secure an account

The most effective way to reduce risk is to scope every user to the minimum access they require; the second is to monitor what they do with that access.

While a dashboard login has some degree of non-repudiation, especially when protected by multiple factors and an SSO configuration, an API key or token can be leaked, and no further authentication factor will block the use of that credential. Therefore, to reduce the attack surface, we can limit what the token can do.

A Cloudflare account owner can now access their members page, and turn API access on or off for specific users, as well as account wide.
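A quick way to observe the effect of this control is to check whether a credential still authenticates. Cloudflare's API exposes a token verification endpoint; the sketch below builds such a request in Python (the token value is a placeholder). A user whose API access has been disabled would see calls like this rejected rather than returning an active status.

```python
import urllib.request

VERIFY_URL = "https://api.cloudflare.com/client/v4/user/tokens/verify"

def build_verify_request(token: str) -> urllib.request.Request:
    # A user whose API access was disabled by the account owner would
    # see this call rejected instead of returning an "active" status.
    return urllib.request.Request(
        VERIFY_URL,
        headers={"Authorization": f"Bearer {token}"},
    )

req = build_verify_request("example-token")  # placeholder token
print(req.full_url)
```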

This feature is available for our enterprise users starting today.

Moving forward

On our journey to making the account management experience safer, and more granular, we will continue to increase the level of control account owners have over their accounts. Building these API restrictions is a first step on the way to allowing account-owned API tokens (which will limit the need to have personal tokens), as well as increasing general visibility of tokens among account members.

Announcing the Magic WAN Connector: the easiest on-ramp to your next generation network

Post Syndicated from Annika Garbers original https://blog.cloudflare.com/magic-wan-connector/

Cloudflare One enables organizations to modernize their corporate networks by connecting any traffic source or destination and layering Zero Trust security policies on top, saving cost and complexity for IT teams and delivering a better experience for users. Today, we’re excited to make it even easier for you to get connected with the Magic WAN Connector: a lightweight software package you can install in any physical or cloud network to automatically connect, steer, and shape any IP traffic.

You can install the Magic WAN Connector on physical or virtual hardware you already have, or purchase it pre-installed on a Cloudflare-certified device. It ensures the best possible connectivity to the closest Cloudflare network location, where we’ll apply security controls and send traffic on an optimized route to its destination. Embracing SASE has never been simpler.

Solving today’s problems and setting up for tomorrow

Over the past few years, we’ve had the opportunity to learn from IT teams about how their corporate networks have evolved and the challenges they’re facing today. Most organizations describe a starting point of private connectivity and “castle and moat” security controls: a corporate WAN composed of point-to-point and MPLS circuits and hardware appliances at the perimeter of physical networks. This architecture model worked well in a pre-cloud world, but as applications have shifted outside of the walls of the corporate data center and users can increasingly work from anywhere, the concept of the perimeter has crumbled.

In response to these shifts, traditional networking and security vendors have developed a wide array of point solutions to fill specific gaps: a virtual appliance to filter web traffic, a physical one to optimize bandwidth use across multiple circuits, a cloud-based tool to prevent data loss, and so on. IT teams now need to manage a broader-than-ever set of tools and contend with gaps in security, visibility, and control as a result.

Today’s fragmented corporate network

We view this current state, with IT teams contending with a patchwork of tools and a never-ending ticket queue, as a transitional period to a world where the Internet forms the foundation of the corporate network. Cloudflare One is enabling organizations of all sizes to make the transition to SASE: connecting any traffic source and destination to a secure, fast, reliable global network where all security functions are enforced and traffic is optimized on the way to its destination, whether that’s within a private network or on the public Internet.

Secure Access Service Edge architecture

Magic WAN Connector: the easiest way to connect your network to Cloudflare

The first step to adopting SASE is getting connected – establishing a secure path from your existing network to the closest location where Zero Trust security policies can be applied. Cloudflare offers a broad set of “on-ramps” to enable this connectivity, including client-based and clientless access options for roaming users, application-layer tunnels established by deploying a lightweight software daemon, network-layer connectivity with standard GRE or IPsec tunnels, and physical or virtual interconnection.

Today, to make this first step to SASE even easier, we’re introducing a new member to this family of on-ramps. The Magic WAN Connector can be deployed in any physical or cloud network to provide automatic connectivity to the closest Cloudflare network location, leveraging your existing last mile Internet connectivity and removing the requirement for IT teams to manually configure network gear to get connected.

Magic WAN Connector provides easy connectivity to Cloudflare’s network

End-to-end traffic management

Hundreds of customer conversations over the past few years have helped us define a slim set of functionality that customers need within their on-premise and cloud networks. They’ve described this as “light branch, heavy cloud” architecture – minimizing the footprint at corporate network locations and shifting the majority of functions that used to be deployed in on-premise hardware to a globally distributed network.

The Magic WAN Connector includes a critical feature set to make the best possible use of available last mile connectivity. This includes traffic routing, load balancing, and failover; application-aware traffic steering and shaping; and automatic configuration and orchestration. These capabilities connect you automatically to the closest Cloudflare location, where traffic is optimized and routed to its destination. This approach allows you to use Cloudflare’s network – presence in 275 cities and 100 countries across the globe, 11,000+ interconnects and a growing fiber backbone – as an extension of your own.

| Network function | Magic WAN Connector | Cloudflare Network |
| --- | --- | --- |
| Branch routing (traffic shaping, failover, QoS) | Application-aware routing and traffic steering between multiple last mile Internet circuits | Application-aware routing and traffic steering across the middle mile to get traffic to its destination |
| Centralized device management | Connector config controlled from unified Cloudflare dashboard | Cloudflare unified dashboard portal, observability, Zero Trust services |
| Zero-touch configuration | Automagic config; boots with smart defaults and sets up tunnels + routes | Automagic config; Magic WAN Connector pulls down updates from central control plane |
| VPN + Firewall | VPN termination + basic network segmentation included | Full-featured SASE platform including ZTNA, FWaaS, DDoS, WAAP, and Email Security |
| Application-aware path selection | Application-aware traffic shaping for last mile | Application-aware Enhanced Internet for middle mile |
| Application auto discovery | Works with Cloudflare network to perform application discovery and classification in real time | 1+1=3: Cloudflare Zero Trust application classification tools reused in this context |
| Application performance visibility | Acts as telemetry source for Cloudflare observability tools | Cloudflare One Analytics platform & Digital Experience Monitoring |
| Cloud deployment | Software can be deployed as a public cloud VM | All configuration controlled via unified Cloudflare dashboard |

Fully integrated security from day 0

The Magic WAN Connector, like all of Cloudflare’s products, was developed from the ground up to natively integrate with the rest of the Cloudflare One portfolio. Connecting your network to Cloudflare’s with the Magic WAN Connector means automatic access to a full suite of SASE security capabilities, including our Firewall-as-a-Service, Zero Trust Network Access, Secure Web Gateway, Data Loss Prevention, Browser Isolation, Cloud Access Security Broker, Email Security, and more.

Optionally pre-packaged to make deployment easy

Cloudflare’s goal is to make it as easy as possible to on-ramp to our network, so there are flexible deployment options available for the Magic WAN Connector. You can install the software on physical or virtual Linux appliances that you manage, or purchase it pre-installed and configured on a hardware appliance for the lowest-friction path to SASE connectivity. Plug the device into your existing network and you’ll be automatically connected to and secured by the Cloudflare network within minutes.

And open source to make it even easier

We’re excited to make access to these capabilities available to all kinds of organizations, including those who want to DIY more aspects of their network deployments. To do this, we’ll be open sourcing the Magic WAN Connector software, so customers can even more easily connect to Cloudflare’s network from existing hardware.

Part of a growing family of on-ramps

In addition to introducing the Magic WAN Connector today, we’re continuing to grow the options for how customers can connect to us using existing hardware. We are excited to expand our Network On-Ramp partnerships to include leading networking companies Cisco, SonicWall, and Sophos, joining previous partners Aruba, VMware, and Arista, to help you onboard traffic to Cloudflare smoothly.

Customers can connect to us from appliances offered by these vendors using either Anycast GRE or IPSec tunnels. Our partners have validated their solutions and tested that their networking hardware can connect to Cloudflare using these standards. To make setup easier for our mutual customers, detailed configuration instructions will be available soon at both the Cloudflare Developer Docs and partner websites.

If you are a networking solutions provider and are interested in becoming a Network On-Ramp partner, please reach out to us here.

Ready to start building the future of your corporate network?

We’re beyond excited to get the Magic WAN Connector into customer hands and help you jumpstart your transition to SASE. Learn more and sign up for early access here.

Announcing Custom DLP profiles

Post Syndicated from Adam Chalmers original https://blog.cloudflare.com/custom-dlp-profiles/

Introduction

Where does sensitive data live? Who has access to that data? How do I know if that data has been improperly shared or leaked? These questions keep many IT and security administrators up at night. The goal of data loss prevention (DLP) is to give administrators the desired visibility and control over their sensitive data.

We shipped the general availability of DLP in September 2022, offering Cloudflare One customers better protection of their sensitive data. With DLP, customers can identify sensitive data in their corporate traffic, evaluate the intended destination of the data, and then allow or block it accordingly — with details logged as permitted by your privacy and sovereignty requirements. We began by offering customers predefined detections for identifier numbers (e.g. Social Security #s) and financial information (e.g. credit card #s). Since then, nearly every customer has asked:

“When can I build my own detections?”

Most organizations care about credit card numbers, which use standard patterns that are easily detectable. But the data patterns of intellectual property or trade secrets vary widely between industries and companies, so customers need a way to detect the loss of their unique data. This can include internal project names, unreleased product names, or unannounced partner names.
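For context on why predefined detections like credit cards are comparatively easy: most card numbers follow a standard, checkable pattern, including the Luhn checksum. The sketch below is a minimal illustration of that checksum, not Cloudflare's actual detection logic.

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum,
    the standard sanity check behind credit card number detection."""
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 12:
        return False
    total = 0
    # Double every second digit from the right; subtract 9 if it exceeds 9
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d = d * 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111 1111 1111 1111"))  # a well-known test card number
```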

As of today, your organization can build custom detections to identify these types of sensitive data using Cloudflare One. That’s right: today you can build a custom DLP profile using the same regular expression approach used in policy building across our platform.

How to use it

Cloudflare’s DLP is embedded in our secure web gateway (SWG) product, Cloudflare Gateway, which routes your corporate traffic through Cloudflare for fast, safe Internet browsing. As your traffic passes through Cloudflare, you can inspect that HTTP traffic for sensitive data and apply DLP policies.

Building DLP custom profiles follows the same intuitive approach you’ve come to expect from Cloudflare.

First, once within the Zero Trust dashboard, navigate to the DLP Profiles tab under Gateway:

Here you will find any available DLP profiles, either predefined or custom:

Select Create profile to begin a new one. After providing a name and description, select Add detection entry to add a custom regular expression. A regular expression, or regex, is a sequence of characters that specifies a search pattern in text, and is a standard way for administrators to achieve the flexibility and granularity they need in policy building.

Cloudflare Gateway already supports regexes in HTTP policies using the Rust regex crate. For consistency, we used the same crate for custom DLP detections. For details on our regex support, see our documentation.

Regular expressions can be used to build custom PII detections of your choosing, such as email addresses, or to detect keywords for sensitive intellectual property.

Provide a name and a regex of your choosing. Every entry in a DLP profile is a new detection that you can scan for in your corporate traffic. Our documentation provides resources to help you create and test Rust regexes.

Below is an example of regex to detect a simple email address:
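The dashboard screenshot is not reproduced here, so as an illustrative sketch (not the exact expression from the screenshot), a basic email pattern looks like the following, tested with Python's re module. The Rust regex crate used by Gateway shares this basic syntax (it notably does not support backreferences or lookaround).

```python
import re

# Illustrative email pattern; not the exact expression from the
# dashboard screenshot. Works in both Python's re and the Rust
# regex crate's common syntax.
EMAIL = re.compile(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")

print(bool(EMAIL.search("contact alice@example.com for access")))  # True
```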

When you are done, you will see the entry in your profile. You can turn entries on and off in the Status field for easier testing.

The custom profile can then be applied to traffic using an HTTP policy, just like a predefined profile. Here both a predefined and custom profile are used in the same policy, blocking sensitive traffic to dlptest.com:

Our DLP roadmap

This is just the start of our DLP journey, and we aim to grow the product exponentially in the coming quarters. In Q4 we delivered:

  • Expanded Predefined DLP Profiles
  • Custom DLP Profiles
  • PDF scanning support
  • Upgraded file name logging

Over the next quarters, we will add a number of features, including:

  • Data at rest scanning with Cloudflare CASB
  • Minimum DLP match counts
  • Microsoft Sensitivity Label support
  • Exact Data Match (EDM)
  • Context analysis
  • Optical Character Recognition (OCR)
  • Even more predefined DLP detections
  • DLP analytics
  • Many more!

Each of these features will offer you new data visibility and control solutions, and we are excited to bring these features to customers very soon.

How do I get started?

DLP is part of Cloudflare One, our Zero Trust network-as-a-service platform that connects users to enterprise resources. Our GA blog announcement provides more detail about using Cloudflare One to onboard traffic to DLP.

To get access to DLP via Cloudflare One, reach out for a consultation, or contact your account manager.

Network detection and settings profiles for the Cloudflare One agent

Post Syndicated from Kyle Krum original https://blog.cloudflare.com/location-aware-warp/

Teams can connect users, devices, and entire networks to Cloudflare One through several flexible on-ramps. Those on-ramps include traditional connectivity options like GRE or IPsec tunnels, our Cloudflare Tunnel technology, and our Cloudflare One device agent.

Each of these on-ramps sends nearly all traffic to Cloudflare’s network, where we can filter security threats with products like our Secure Web Gateway and Data Loss Prevention service. In other cases, the destination is an internal resource deployed in Cloudflare’s Zero Trust private network.

However, sometimes users want traffic to stay local. If a user is sitting within a few meters of their printer, they might prefer to connect through their local network instead of adding a hop through Cloudflare. They could configure Cloudflare to always ignore traffic bound for the printer, keeping it local, but when they leave the office they still need to use Cloudflare’s network to reach that printer remotely.

Solving this use case and others like it previously required manual changes from an administrator every time a user moved. An administrator would need to tell Cloudflare’s agent to include traffic sometimes and, in other situations, ignore it. This does not scale.

Starting today, any team using Cloudflare One has the flexibility to decide what traffic is sent to Cloudflare and what traffic stays local depending on the network of the user. End users do not need to change any settings when they enter or exit a managed network. Cloudflare One’s device agent will automatically detect and make the change for them.

Not everyone needs the same controls

Not every user in your enterprise needs the same network configuration. Sometimes you need to make exceptions for teams, certain staff members, or specialty hardware and software based on business needs. Those exceptions become a manual mess when different locations and networks also require different settings.

We’ve heard several examples from customers who run into this type of headache. Each case below shares a common theme: rigid network configuration breaks when it meets real-world usage.

In some cases, a user will work physically close to a server or another device that their device needs to reach. We talk to customers in manufacturing or lab environments who prefer to send all Internet-bound traffic to Cloudflare but want to continue to operate a private network inside their facility.

Today’s announcement allows teams to adapt to this type of model. When users operate inside the physical location in the trusted network, they can connect directly. When they leave, they can use Cloudflare’s network to reach back into the trusted network after they meet the conditions of the Zero Trust rules configured by an administrator.

In other situations, customers are in the process of phasing out legacy appliances in favor of Cloudflare One. However, the migration to a Zero Trust model sometimes needs to be stepwise and deliberate. In these cases, customers maintain some existing on-premise infrastructure while they deploy Cloudflare’s SASE solution.

As part of this release, teams can configure Cloudflare’s device agent to detect that a user sits inside a known location where those appliances still operate. The agent will automatically stop directing traffic to Cloudflare and instead send it to your existing appliances while you deprecate them over time.

Configuration Profiles and Managed Networks

Today’s release introduces the ability to create a profile, a defined set of configuration options. You can create rules that decide when and where profiles apply, changing settings without manual intervention.

For our network-aware work, administrators can define a profile that decides what traffic is sent to Cloudflare and what stays local. Next, that profile can apply when users are in specific networks and not when they are in other locations.

Beyond network detection, profiles can apply based on user group membership. Not every user in your workforce needs the same on-ramp configuration. Some developers might need certain traffic excluded due to local development work. As part of this launch, you can configure profiles to apply based on who the user is in addition to where the user sits.

Defining a secure way to detect a network you manage

Cloudflare needs to be able to decide what network a device is using in a way that can’t easily be spoofed by someone looking to skirt policy. To solve that challenge, today’s release introduces the ability to define a known TLS endpoint which Cloudflare’s agent can reach. In just a few minutes, an administrator can create a certificate-validated check to indicate a device is operating within a managed network.

First, an administrator can create a TLS certificate that Cloudflare will use and match based on the SHA-256 hash of the certificate. You can leverage existing infrastructure or create a new TLS endpoint via the following example:

1. Create a local certificate you can use

openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes -keyout example.key -out example.pem -subj "/CN=example.com" -addext "subjectAltName=DNS:example.com"

2. Extract the SHA-256 fingerprint of that certificate

openssl x509 -noout -fingerprint -sha256 -inform pem -in example.pem | tr -d :

Which will output something like this:

SHA256 Fingerprint=DD4F4806C57A5BBAF1AA5B080F0541DA75DB468D0A1FE731310149500CCD8662
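If you prefer to verify the fingerprint programmatically, the same value can be computed with only the Python standard library. This is a sketch equivalent to the openssl command above: the fingerprint is simply the SHA-256 hash of the certificate's DER encoding.

```python
import hashlib
import ssl

def cert_sha256_fingerprint(pem: str) -> str:
    """SHA-256 fingerprint of a PEM certificate, matching the
    colon-stripped output of the openssl command above."""
    der = ssl.PEM_cert_to_DER_cert(pem)  # strip headers, base64-decode
    return hashlib.sha256(der).hexdigest().upper()

# Usage (assuming the example.pem from step 1 exists):
#   print("SHA256 Fingerprint=" + cert_sha256_fingerprint(open("example.pem").read()))
```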

Next, the Cloudflare agent running on the device needs to be able to reach that certificate to validate that it is connected to a network you manage. We recommend running a simple HTTP server inside your network which the device can reach to validate the certificate.

3. Create a Python 3 script, saved as myserver.py, that runs a simple HTTPS server.

import ssl, http.server

# Serve the current directory over HTTPS on port 4443 using the
# certificate and key generated in step 1
server = http.server.HTTPServer(('0.0.0.0', 4443), http.server.SimpleHTTPRequestHandler)
sslcontext = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
sslcontext.load_cert_chain(certfile='./example.pem', keyfile='./example.key')
server.socket = sslcontext.wrap_socket(server.socket, server_side=True)
server.serve_forever()

Run the server

python3 myserver.py

Configure the network location in the Zero Trust dashboard

Once you’ve created the example TLS endpoint above, provide the fingerprint to Cloudflare to define a managed network.

  1. Log in to your Zero Trust dashboard and navigate to Settings → WARP Client.
  2. Scroll down to Network Locations, click Add new, and complete the form. Use the fingerprint generated in the previous step as the TLS Cert SHA-256, along with the IP address of the device running the Python script.

Configure a Device Profile

Once the network is defined, you can create profiles that apply based on whether the agent is operating in this network. To do so, follow the steps below.

  1. Log in to your Zero Trust dashboard and navigate to Settings → WARP Client.
  2. Scroll down to Device Settings and create a new profile that includes your newly created managed network as a location.

Reconnect your Agent

Each time the device agent detects a network change event from the operating system (e.g. waking the device, changing Wi-Fi networks), it will also attempt to reach that endpoint inside your network to prove that it is operating within a network you manage.

If an endpoint that matches the SHA-256 fingerprint you’ve defined is detected, the device will get the settings profile as configured above. You can quickly validate that the device agent received the required settings by using warp-cli settings or warp-cli get-alternate-network from your command line / terminal.

What’s next?

Managed network detection and settings profiles are both new and available for you to use today. While settings profiles will work with any modern version of the agent from the last year, network detection requires at least version 2022.12.

The WARP device client currently runs on all major operating systems and is easy to deploy with the device management tools your organization already uses. You can find the download links to all versions of our agent by visiting Settings → Downloads.

Starting a Zero Trust journey can be daunting. We’re spending this week, CIO Week, to share features like this to make it less of a hassle to begin. If you want to talk to us to learn more about how to take that first step, please reach out.

New ways to troubleshoot Cloudflare Access ‘blocked’ messages

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/403-logs-cloudflare-access/

Cloudflare Access is the industry’s easiest Zero Trust access control solution to deploy and maintain. Users can connect via Access to reach the resources and applications that power your team, all while Cloudflare’s network enforces least privilege rules and accelerates their connectivity.

Enforcing least privilege rules can lead to accidental blocks for legitimate users. Over the past year, we have focused on adding tools that make it easier for security administrators to troubleshoot why legitimate users are denied access. These block reasons were initially limited to users denied access because of their identity (e.g. wrong identity provider group, email address not in the Access policy, etc.).

Zero Trust access control extends beyond identity and device. Cloudflare Access allows for rules that enforce how a user connects. These rules can include their location, IP address, the presence of our Secure Web Gateway and other controls.
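Conceptually, a non-identity rule is just a match on connection metadata. As an illustrative sketch (not Cloudflare's actual policy engine), an IP-based rule can be expressed as a membership test against an allowed network range:

```python
import ipaddress

# Hypothetical rule: only allow connections from a corporate egress range.
# 203.0.113.0/24 is a reserved documentation range, used here for illustration.
ALLOWED_NETWORK = ipaddress.ip_network("203.0.113.0/24")

def connection_allowed(source_ip: str) -> bool:
    """Evaluate a simple non-identity rule on connection context."""
    return ipaddress.ip_address(source_ip) in ALLOWED_NETWORK

print(connection_allowed("203.0.113.10"))   # True
print(connection_allowed("198.51.100.7"))   # False
```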

Starting today, you can investigate those allow or block decisions based on how a connection was made with the same ease as troubleshooting user identity. We’re excited to make the migration to a Zero Trust model as easy as possible and to ensure its ongoing maintenance takes significantly less effort than a traditional private network.

Why was I blocked?

All Zero Trust deployments start and end with identity. In a Zero Trust model, you want your resources (and the network protecting them) to have zero trust by default of any incoming connection or request. Instead, every attempt should have to prove to the network that they should be allowed to connect.

Organizations provide users with a mechanism of proof by integrating their identity provider (IdP) like Azure Active Directory or Okta. With Cloudflare, teams can integrate multiple providers simultaneously to help users connect during activities like mergers or to allow contractors to reach specific resources. Users authenticate with their provider and Cloudflare Access uses that to determine if we should trust a given request or connection.

After integrating identity, most teams start to layer on new controls like device posture. In some cases, the resources are so sensitive that you want to ensure only approved users connecting from managed, healthy devices can reach them.

While that model significantly improves security, it can also create strain for IT teams managing remote or hybrid workforces. Troubleshooting “why” a user cannot reach a resource becomes a guessing game over chat. Earlier this year, we launched a new tool to tell you exactly why a user’s identity or device posture did not meet the rules that your administrators created.

What about how they connected?

As organizations advance in their Zero Trust journey, they add rules that go beyond the identity of the user or the posture of the device. For example, some teams might have regulatory restrictions that prevent users from accessing sensitive data from certain countries. Other enterprises need to understand the network context before granting access.

These adaptive controls enforce decisions around how a user connects. The user (and their device) might otherwise be allowed, but their current context like location or network prohibits them from doing so. These checks can extend to automated services, too, like a trusted chatbot that uses a service token to connect to your internal ticketing system.

While user and device posture checks require at least one step of authentication, these contextual rules can consist of policies, like an IP address check, that make it simple for a bad actor to retry over and over again. While those attempts will still be denied, that noise can flood your logs while you attempt to investigate what should be a valid login attempt.

With today’s release, your team can now have the best of both worlds.

However, other checks are not based on a user’s identity; these include looking at a device’s properties, network context, location, the presence of a certificate, and more. Requests that fail these “non-identity” checks are blocked immediately, in order to prevent a malicious user from discovering which identity providers a business uses.

Additionally, these blocks were not logged, to avoid overloading an account’s Access request logs: a malicious user attempting hundreds of requests, or a misconfigured API making thousands, should not cloud a security admin’s ability to analyze legitimate user Access requests. However, we heard from users that in some situations, especially during initial setup, it is helpful to see individual blocked requests even for non-identity checks.

We have released a GraphQL API that allows Access administrators to look up a specific blocked request by RayID, User or Application. The API response will return a full output of the properties of the associated request which makes it much easier to diagnose why a specific request was blocked.
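As a sketch of what such a lookup involves, the snippet below builds a GraphQL request payload for the Cloudflare Analytics endpoint. The endpoint URL is real, but the dataset and field names in the query body are illustrative placeholders, not the exact Access schema; consult the developer documentation for the real query shape.

```python
import json

# Real endpoint; the query body below is a hypothetical sketch.
ENDPOINT = "https://api.cloudflare.com/client/v4/graphql"

def build_payload(account_tag: str, ray_id: str) -> str:
    # Dataset and field names here (accessBlockedRequests, blockReason)
    # are placeholders for illustration only.
    query = (
        "query ($accountTag: string, $rayId: string) { viewer { "
        "accounts(filter: {accountTag: $accountTag}) { "
        "accessBlockedRequests(filter: {rayId: $rayId}) "
        "{ rayId appDomain blockReason } } } }"
    )
    return json.dumps({
        "query": query,
        "variables": {"accountTag": account_tag, "rayId": ray_id},
    })

payload = build_payload("your-account-tag", "example-ray-id")
print(json.loads(payload)["variables"]["rayId"])
```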


In addition to the GraphQL API, we also improved the user-facing block page to include additional detail about a user’s session. This will make it faster for end users and administrators to diagnose why a legitimate user was denied access.


How does it work?

Collecting blocked request logs for thousands of Access customers presented an interesting scale challenge. A single application in a single customer account could generate millions of blocked requests in a day; multiply that across all protected applications in all Access customer accounts, and the number of logs grows large quickly.

We were able to leverage our existing analytics pipeline, built to handle the scale of our global network, which is far beyond the scale of Access. The pipeline is configured to intelligently begin sampling data if an individual account generates too many requests. The majority of customers will have all non-identity block logs captured, while accounts generating large traffic volumes will retain a significant portion, enough to diagnose issues.
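The idea behind that kind of adaptive sampling can be sketched in a few lines. This is a simplified model, not Cloudflare’s actual pipeline: below a per-account threshold every event is kept, and above it only every Nth event is retained.

```python
def adaptive_sample(events, threshold=1000, keep_every=10):
    """Keep all events up to `threshold`; past that, keep every Nth.

    A simplified model of rate-based sampling: quiet accounts retain
    every blocked-request log, while very noisy accounts retain a
    deterministic fraction instead of overwhelming the pipeline.
    """
    kept = []
    for i, event in enumerate(events):
        if i < threshold or (i - threshold) % keep_every == 0:
            kept.append(event)
    return kept

quiet = adaptive_sample(range(500))      # below threshold: all 500 kept
noisy = adaptive_sample(range(100_000))  # first 1,000 kept, then 1 in 10
```

A production pipeline would sample on rate over a time window rather than a simple running count, but the trade-off is the same: full fidelity for most accounts, a representative fraction for the loudest ones.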

How can I get started?

We have built an example guide for using the GraphQL API to diagnose Access block reasons. These logs can be checked manually with a GraphQL API client or periodically ingested into a log storage database.

We know that achieving a Zero Trust Architecture is a journey and a significant part of that is troubleshooting and initial configuration. We are committed to making Cloudflare Zero Trust the easiest Zero Trust solution to troubleshoot and configure at scale. Keep an eye out for additional announcements in the coming months that make Cloudflare Zero Trust even easier to troubleshoot.

If you don’t already have Cloudflare Zero Trust set up, getting started is easy – see the platform yourself with 50 free seats by signing up here.

Or if you would like to talk with a Cloudflare representative about your overall Zero Trust strategy, reach out to us here for a consultation.

For those who already know and love Cloudflare Zero Trust, this feature is enabled for all accounts across all pricing tiers.

Why do CIOs choose Cloudflare One?

Post Syndicated from Sam Rhea original https://blog.cloudflare.com/why-cios-select-cloudflare-one/

Why do CIOs choose Cloudflare One?


Cloudflare’s first customers sought us out as the “Web Application Firewall vendor” or their DDoS-mitigating Content Delivery Network. We earned their trust by solving their problems in those categories and dozens of others. Today, over 100,000 customers rely on Cloudflare to secure and deliver their Internet properties.

However, our conversations with CIOs evolved over the last few years. The discussions stopped centering around a specific product. CIOs, and CSOs too, approached us with the challenge of managing connectivity and security for their entire enterprise. Whether they described their goals as Zero Trust or Secure Access Service Edge (SASE), their existing appliances and point solutions could no longer keep up. So we built Cloudflare One to help them.

Today, over 10,000 organizations trust Cloudflare One to connect and secure their users, devices, applications, and data. As part of CIO Week, we spoke with the leaders of some of our largest customers to better understand why they selected Cloudflare.

The feedback centered around six themes:

  1. Cloudflare One delivers more complete security.
  2. Cloudflare One makes your team faster.
  3. Cloudflare One is easier to manage.
  4. Cloudflare One products work better together.
  5. Cloudflare One is the most cost-efficient comprehensive SASE offering.
  6. Cloudflare can be your single security vendor.

If you are new to Cloudflare, or more familiar with our Internet property products, we’re excited to share how other customers approached this journey and why they partnered with Cloudflare. Today’s post breaks down their feedback in serious detail. If you’d prefer to ask us directly, skip ahead to the bottom, and we’d be glad to find time to chat.

Cloudflare One delivers more complete security

The first SASE conversations we had with customers started when they asked us how we keep Cloudflare safe. Their Internet properties relied on us for security and availability – our own policies mattered to their decisions to trust us.

That’s fair. We are a popular target for attack. However, we could not find anything on the market that could keep us safe without slowing us down. Instead, we decided to use our own network to connect employees to internal resources and secure how those same team members connected to the rest of the Internet.

After learning what we built to replace our own private network, our customers started to ask if they could use it too. CIOs were on the same Zero Trust journey with us. They trusted our commitment to delivering the most comprehensive security on the market for their public-facing resources and started partnering with us to do the same thing for their entire enterprise.

We kept investing in Cloudflare One over the last several years based on feedback from our own internal teams and those CIOs. Our first priority was to replace our internal network with a model that applies Zero Trust controls by default. We created controls that could adapt to the demands of security teams without the need to modify applications. We added rules to force hard keys on certain applications, restrict access to specific countries, or require users to ask for approval from an administrator. The flexibility meant that every request, and every connection, could be scrutinized in a way that matched the sensitivity of internal tools.

We then turned that skepticism in the other direction. Customers on this journey with us asked “how could we have Zero Trust in the rest of the Internet?” To solve that, we turned Cloudflare’s network in the other direction. We built our DNS filtering product by combining the world’s fastest DNS resolver with our unique view into threat patterns on the Internet. We layered on a comprehensive Secure Web Gateway and network firewall. We sent potentially risky sites to Cloudflare’s isolated browser, a unique solution that pushes the industry forward in terms of usability.

More recently, we started to create tools that help control the data sitting in SaaS applications and to prevent sensitive data from leaving the enterprise. We’ve been delighted to watch customers adopt every stage in this progression with us, but we kept comparing notes with other CIOs and CSOs about the risk of something that most vendors do not consider part of the SASE stack: email.

We have also spent countless hours monitoring email-based phishing attacks aimed at Cloudflare. To solve that challenge, we deployed Area 1 Email Security. The efficacy of Area 1 stunned our team, to the point that we acquired the company so we could offer the same security to our customers as part of Cloudflare One.

When CIOs describe the security challenges they need to solve, we can recommend a complete solution built on our experience addressing those same concerns. We cannot afford shortcuts in how we secure Cloudflare and know they cannot either in how they keep their enterprises safe.

Zero Trust security at a social media company

Like Cloudflare, social media services are a popular target for attack. When the security team at one of the world’s most prominent social media platforms began a project to overhaul their access controls, they ran a comprehensive evaluation of vendors who could keep their platform safe from phishing attacks and lateral movement. They selected Cloudflare One due to the granular access control our network provides and the layers of security policies that can be evaluated on any request or connection without slowing down end users.

Cloudflare One makes your team faster

Many of our customers start with our Application Services products, like our cache and smart routing, because they have a need for speed. The performance of their Internet properties directly impacts revenue. These customers hunt down opportunities to use Cloudflare to shave off milliseconds.

The CIOs who approach us to solve their SASE problems tend to rank performance lower than security and maintainability. In early conversations they describe their performance goals as “good enough that my users do not complain.”

Those complaints drive IT help desk tickets, but CIOs are used to sacrificing speed for security. We don’t believe they should have to compromise. CIOs select Cloudflare One because the performance of our network improves the experience of their end users and reduces overhead for their IT administrators.

We accelerate your users from the first moment they connect. When your team members visit a destination on the Internet, their experience starts with a DNS query to find the address of the website. Cloudflare runs the world’s fastest DNS resolver, 1.1.1.1, and the DNS filtering features of our SASE offering use the same technology.

Next, your users’ devices open a connection and send an HTTP request to their destination. The Cloudflare agent on their device does so using BoringTun, our Rust-based, open-source WireGuard implementation. WireGuard allows us to provide a highly performant on-ramp to the Internet through our network without compromising battery life or security. The same technology supports the millions of users who choose our WARP consumer offering. We take their feedback and constantly optimize WARP to improve how our enterprise users connect.

Finally, your users rely on our network to connect them to their destination and return the responses. Out of the 3,000 top networks in the world, measured by IPv4 addresses advertised, we rank the fastest in 1,310. Once connected, we apply our smart routing technology to route users through our network to find the fastest path to and from their destination.

We develop new technologies to improve the speed of Cloudflare One, but we cannot change the speed of light. Instead, we make the distance shorter by bringing websites closer to your users. Cloudflare is the reverse proxy for more than 20% of the HTTP Internet. We serve those websites from the same data centers where your employees connect to our Secure Web Gateway. In many cases, we can deliver content from a server centimeters away from where we apply Cloudflare One’s filtering, shaving off milliseconds and reducing the need for more hops.

Faster DNS filtering for the United States Federal Government

The Cybersecurity and Infrastructure Security Agency (CISA) works within the United States Department of Homeland Security as the “nation’s risk advisor.”1 Last year they launched a program to find a protective DNS resolver for the civilian government. These agencies and departments operate around the country, in large cities and rural areas, and they need a solution that would deliver fast DNS resolutions close to where those users sit. After a thorough evaluation, they selected Cloudflare, in partnership with Accenture Federal Services, as the country’s protective DNS resolver.

Performance at a Fortune 500 Energy Company

An American energy company attempted to deploy Zscaler, but became frustrated after spending eight months attempting to integrate and maintain systems that slowed down their users. This organization already observed Cloudflare’s ability to accelerate their traffic with our network-layer DDoS protection product and ran a pilot with Cloudflare One. Following an exhaustive test, the team observed significant performance improvements, particularly with Cloudflare’s isolated browser product, and decided to rip out Zscaler and consolidate around Cloudflare.

Cloudflare One products are easier to manage

The tools that a SASE solution like Cloudflare One replaces are cumbersome to manage. Hardware appliances or virtual equivalents require upfront deployment work and ongoing investment to maintain and upgrade them. Migrating to other cloud-based SASE vendors can reduce pain for some IT teams, but that is a low bar.

CIOs tell us that the ability to manage the solution is nearly as important as the security outcomes. If their selected vendor is difficult to deploy, the migration drags on and discourages adoption of more advanced features. If the solution is difficult to use or manage, team members find ways to avoid using it or IT administrators waste time.

We built Cloudflare One to make the most advanced SASE technologies available to teams of any size, including those that lack full IT departments. We invested in building a system that could be configured and deployed without operational overhead. Over 10,000 teams rely on Cloudflare One as a result. That same commitment to ease-of-use extends to the enterprise IT and Security teams who manage Cloudflare One deployments for some of the world’s largest organizations.

We also provide features tailored to the feedback we hear from CIOs and their teams about the unique challenges of managing larger deployments at global scale. In some cases, their teams need to update hundreds of policies, or their global departments rely on dozens of administrators who need to coordinate changes. We provide API support for managing every Cloudflare One feature, and we also maintain a Terraform provider for teams that want peer-reviewed, configuration-as-code management.
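As an illustration of that API-driven workflow, the sketch below prepares (but does not send) a request creating an Access allow policy. The endpoint path and policy fields reflect the Cloudflare Access API as commonly documented, but treat them as assumptions and verify them against the current API reference; the account, application, and domain values are placeholders.

```python
import json
import urllib.request

API_BASE = "https://api.cloudflare.com/client/v4"

def build_policy_request(account_id, app_id, token):
    """Prepare a POST creating an Access allow policy for one application.

    Endpoint path and policy fields are assumptions based on the public
    Cloudflare Access API; check the current API reference before use.
    """
    policy = {
        "name": "Allow corporate email domain",
        "decision": "allow",
        "include": [{"email_domain": {"domain": "example.com"}}],
    }
    return urllib.request.Request(
        f"{API_BASE}/accounts/{account_id}/access/apps/{app_id}/policies",
        data=json.dumps(policy).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_policy_request("acct123", "app456", "API_TOKEN")
```

The same call expressed as a Terraform resource is what makes peer-reviewed, configuration-as-code management possible: policy changes land as reviewable diffs instead of ad hoc dashboard edits.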

Ease-of-use at a Fortune 500 telecommunications provider

We make our free and pay-as-you-go plans available to anyone with a credit card in order to make these technologies accessible to teams of any size. Sometimes, the largest teams in the world start with those plans too. A European Fortune 500 telecommunications company began adopting our Zero Trust platform on a monthly subscription when their Developer Operations (DevOps) team lost patience with their existing VPN. Developers across the organization complained about how their legacy private network slowed down access to the tools they needed to do their jobs.

Their DevOps administrators adopted Cloudflare One after being able to set it up in a matter of minutes without talking to a sales rep at Cloudflare. Their company now relies on Cloudflare One to secure their internal resources and their path to the Internet for over 100,000 employees.

Cloudflare One products work better together

CIOs who start their SASE evaluation are often attempting to replace a collection of point solutions. The work to glue those products together demands more time from IT departments, and the gaps between those tools present security blind spots.

However, many SASE vendors offer a platform that just cobbles together point solutions. There might be one invoice, but the same pain points remain around interoperability and security challenges. We talk to CIOs and CSOs who expand their vendor search radius after realizing that the cloud-based alternative from their existing hardware provider still includes those challenges.

When CIOs select Cloudflare One, they pick a single, comprehensive SASE solution. We don’t believe that any feature, or product, should be an island. The sum should be greater than the parts. Every capability that we build in Cloudflare One adds more value to what is already available without adding more maintenance overhead.

When an organization secures their applications behind our Zero Trust access control, they can enable Cloudflare’s Web Application Firewall (WAF) to run in-line with a single button. Users who click on an unknown link open that website in our isolated browser without any additional steps. Launching soon, the same Data Loss Prevention (DLP) rules that administrators build for data-in-transit filters will apply to data sitting at rest with our API-driven Cloud Access Security Broker (CASB).

Product integration at national residential services provider

Just a few months ago, a US-based national provider of residential services, like plumbing and climate control repair, selected Cloudflare One because they could consolidate their disparate stack of existing cloud-based security vendors into a single solution. After evaluating other vendors who stitch together point solutions under a single brand name, they found more value in deploying Cloudflare’s Zero Trust network access solution together with our outbound filtering products for thousands of employees.

Cloudflare One is the most cost-efficient comprehensive SASE offering

Some CIOs approach Cloudflare to replace their collection of hardware appliances that perform, or attempt to perform, Zero Trust functions. The decision to migrate to a cloud-based solution can deliver immediate cost savings by eliminating the cost of licensing and maintaining that hardware, or by avoiding new capital expenditure on the latest generation of hardware in an attempt to better support SSE goals.

We’re happy to help you throw out those band-aid boxes. We’ve spent the last decade helping over 100,000 organizations get rid of their hardware in favor of a faster, safer, and more cost-efficient solution. However, in the last year we have seen CIOs approach us with a newer form of this problem: renewals. CIOs who first adopted a cloud-based SSE solution two or three years ago now describe extortionate price increases from their existing vendors.

Unlike Cloudflare, many of these vendors rely on dedicated appliances that struggle to scale with increased traffic. To meet that demand, they purchased more appliances and now need to find a way to bake that cost into the price they charge existing and new customers. Other vendors rely on public cloud providers to run their services. As those providers increase their costs, these vendors pass them on to their customers at a rate that scales with usage.

Cloudflare’s network provides a different model that allows Cloudflare One to deliver a comprehensive SASE offering more cost-efficiently than anything else on the market. Rather than deploying dedicated appliances, Cloudflare deploys commodity hardware on top of which any Cloudflare service can run, allowing us to scale up and down for any use case, from our Bot Management features to our Workers, including our SASE products. We also purchase server hardware from multiple vendors in the exact same configuration, providing supply chain flexibility and reducing the risk that any one component from a specific vendor drives up our hardware costs.

We obsess over the efficiency of the computing costs of that hardware because we have no choice – over 20% of the world’s HTTP Internet relies on it today. Since every service can run on every server, including Cloudflare One, that investment in computing efficiency also benefits Cloudflare One. We also avoid the need to buy more hardware specifically for Cloudflare One capacity. We built our network to scale with the demands of some of the world’s largest Internet properties. That model allows us to absorb the traffic spikes of any enterprise SASE deployment without noticing.

However, Cloudflare One, like all of our network-driven products, has another cost component: transit. We need to reliably deliver your employees’ traffic to its destination. While that destination is increasingly on our network already if it uses our reverse proxy, sometimes employees need to reach other websites.

Thankfully, we’ve spent the last decade reducing or eliminating the cost of transit. In many cases, our reverse proxy motivates exchanges and ISPs to waive transit fees for us. It is in their best interest to provide their users with the fastest, most reliable path to the ever-increasing number of websites that use our network. When we turn our network in the other direction for our SASE customers, we still benefit from the same savings.

Cost-savings at an African infrastructure company

Earlier this year, an infrastructure company based in South Africa came to Cloudflare with this exact problem. Their existing cloud-based Secure Web Gateway vendor, Zscaler, insisted on a significant price increase for the same services and threatened to turn off the system if the customer did not agree. This company already trusted our network for their Internet properties, and decided to rip out their existing SASE vendor in favor of Cloudflare One’s more cost-efficient model without losing any functionality.

Cloudflare can be your single security and connectivity vendor

We hear from more and more CIOs who want to reduce the number of invoices they pay and vendors they manage. Hundreds of enterprises who have adopted our SASE platform started as customers of our Application Services and Application Security products.

We’ve seen this take two forms. In one form, CIOs describe the challenge of stitching together multiple security point solutions into a single SASE deployment. They choose our network for the reasons described above; the CIO’s team benefits from features that work better together, and they avoid the need to maintain multiple systems.

In the second form, the migration to more cloud-based services across use cases ranging from SASE to public cloud infrastructure led to vendor bloat. We hear from customers who struggle to inventory which vendors their team has purchased and which of those services they even use.

That proliferation of vendors introduces more cost in terms of dollars and time. In financial terms, each vendor’s contract model might introduce new fees, like fixed platform costs, that would be redundant when paying for a single vendor. In management terms, every new vendor adds one more account manager to go find during issues or one more vendor to involve when debugging an issue that could impact multiple systems.

Bundling Cloudflare One with our Application Services and Application Security products allows your organization to rely on a single vendor for every connection you need to secure and accelerate. Your teams can rely on a single control plane for everything from customizing your website’s cache rules to reviewing potential gaps in your Zero Trust deployment. CIOs have one point of contact, a Cloudflare Customer Success Manager, they can reach out to if they need to escalate a request across what used to require dozens of potential vendors.

Vendor consolidation at a 10,000-person research publication company

A large American data analytics company chose Cloudflare One as part of that same journey. They first sought Cloudflare to help load-balance their applications and protect their sites from DDoS attacks. After becoming familiar with our platform, and learning how performance features they used for their public-facing applications could be delivered to their internal resources, they selected Cloudflare One over Zscaler and Cisco.

What’s next?

Not every CIO shares the same motivations. One of the reasons above might be more important to you based on your business, your industry, or your stage in a Zero Trust adoption journey.

That’s fine by us! We’d love to learn more about what drives your search and how we can help. We have a team dedicated to listening to organizations who are evaluating SASE options and helping them understand and experiment with Cloudflare One. If you’d like to get started, let us know here, and we’ll reach out.

Do you prefer to avoid talking to someone just yet? Nearly every feature in Cloudflare One is available at no cost for up to 50 users. Many of our largest enterprise customers start by exploring the products themselves on our free plan, and we invite you to do so by following the link here.

1. https://www.cisa.gov/about-cisa

Cloudflare protection for all your cardinal directions

Post Syndicated from Annika Garbers original https://blog.cloudflare.com/cardinal-directions-and-network-traffic/

Cloudflare protection for all your cardinal directions


As the Internet becomes the new corporate network, traditional definitions within corporate networking are becoming blurry. Concepts of the corporate WAN, “north/south” and “east/west” traffic, and private versus public application access dissolve and shift their meaning as applications shift outside corporate data center walls and users can access them from anywhere. And security requirements for all of this traffic have become more stringent as new attack vectors continue to emerge.

The good news: Cloudflare’s got you covered! In this post, we’ll recap how definitions of corporate network traffic have shifted and how Cloudflare One provides protection for all traffic flows, regardless of source or destination.

North, south, east, and west traffic

In the traditional perimeter security model, IT and network teams defined a “trusted” private network made up of the LANs at corporate locations, and the WAN connecting them. Network architects described traffic flowing between the trusted network and another, untrusted one as “north/south,” because those traffic flows are typically depicted spatially on network diagrams like the one below.

Connected north/south networks could be private, such as one belonging to a partner company, or public like the Internet. Security teams made sure all north/south traffic flowed through one or a few central locations where they could enforce controls across all the “untrusted” traffic, making sure no malicious actors could get in, and no sensitive data could get out.

Network diagram depicting traditional corporate network architecture

Traffic on a single LAN, such as requests from a desktop computer to a printer in an office, was referred to as “east/west” and generally was not subject to the same level of security control. The “east/west” definition also sometimes expanded to include traffic between LANs in a small geographic area, such as multiple buildings on a large office campus. As organizations became more distributed and the need to share information between geographically dispersed locations grew, “east/west” also often included WAN traffic transferred over trusted private connections like MPLS links.

As applications moved to the Internet and the cloud and users moved out of the office, clean definitions of north/south/east/west traffic started to dissolve. Traffic and data traditionally categorized as “private” and guarded within the boundaries of the corporate perimeter is now commonly transferred over the Internet, and organizations are shifting to cloud-first security models such as SASE which redefine where security controls are enforced across that traffic.

How Cloudflare keeps you protected

Cloudflare’s services can be used to secure and accelerate all of your traffic flows, regardless of whether your network architecture is fully cloud-based and Internet-native or more traditional and physically defined.

For “north/south” traffic from external users accessing your public applications, Cloudflare provides protection at all layers of the OSI stack and for a wide range of threats. Our application security portfolio, including DDoS protection, Web Application Firewall, API security, Bot Management, and more, includes all the tools you need to keep public-facing apps safe from malicious actors outside your network; our network services extend similar benefits to all your IP traffic. Cloudflare One has you covered for the growing amount of north/south traffic from internal users – Zero Trust Network Access provides access to corporate resources on the Internet without sacrificing security, and Secure Web Gateway filters outgoing traffic to keep your data safe from malware, ransomware, phishing, command and control, and other threats.

Cloudflare protection for all your traffic flows

As customers adopt SASE and multicloud architectures, the amount of east/west traffic within a single location continues to decrease. Cloudflare One enables customers to use Cloudflare’s network as an extension of theirs for east/west traffic between locations with a variety of secure on-ramp options including a device client, application and network-layer tunnels, and direct connections, and apply Zero Trust policies to all traffic regardless of where it’s headed. Some customers choose to use Cloudflare One for filtering local traffic as well, which involves a quick hop out to the closest Cloudflare location – less than 50ms from 95% of the world’s Internet-connected population – and enables security and IT teams to enforce consistent security policy across all traffic from a single control plane.

Because Cloudflare’s services are all delivered on every server in all locations across our network, customers can connect to us to get access to a full “service mesh” for any traffic. As we develop new capabilities, they can apply across any traffic flow regardless of source or destination. Watch out for some new product announcements coming later this week that enhance these integrations even further.

Get started today

As the Internet becomes the new corporate network, Cloudflare’s mission to help build a better Internet enables us to help you protect anything connected to it. Stay tuned for the rest of CIO Week for new capabilities to make all of your north, south, east, and west traffic faster, more secure, and more reliable, including updates on even more flexible application-layer capabilities for your private network traffic.

Announcing the Authorized Partner Service Delivery Track for Cloudflare One

Post Syndicated from Matthew Harrell original https://blog.cloudflare.com/cloudflare-one-authorized-services-delivery-partner-track/

Announcing the Authorized Partner Service Delivery Track for Cloudflare One

This post is also available in 简体中文, 日本語, Deutsch, Français, Español.


In this Sunday’s Welcome to CIO Week blog, we talked about the value for CIOs in finding partners for long-term digital transformation initiatives. As the adage goes, “If you want to go fast, go alone; if you want to go far, go together.”

As Cloudflare has expanded into new customer segments and emerging market categories like SASE and Zero Trust, we too have increasingly focused on expanding our relationship with go-to-market partners (e.g. service providers, implementation / consulting firms, system integrators, and more). Because security and network transformation can feel inherently daunting, customers often need strategic advice and practical support when implementing Cloudflare One – our SASE platform of Zero Trust security and networking services. These partners play a pivotal role in easing customer adoption by helping them assess, implement, and manage our services.

This blog is primarily intended for prospective and current Cloudflare go-to-market channel partners and highlights how we have grown our partnership program over the past year and will continue to, going forward.

Cloudflare One: fastest growing portfolio among Cloudflare partners

Over the past year, adoption of Cloudflare One services has been the fastest area of growth among our customer base. Investments we have made to our channel ecosystem have helped us capitalize on increased customer demand for SASE platforms, including Zero Trust security and cloud-delivered networking.

In the last year alone, we’ve seen a 3x increase in Cloudflare One partner bookings. At the same time, the number of transacting partners has increased 70% YoY.

Partners repeatedly cite the simplicity of our platform to deploy and manage, our pace of innovation to give them confidence in our roadmap, and our global network to ensure scale, speed, and resilience as key differentiators that are fueling strong customer demand for Cloudflare One services.

Migrating from legacy, on-premises appliances to a cloud-delivered SASE architecture is a journey. For most customers, partners help break that journey into two broadly defined categories: network layer transformation and Zero Trust security modernization.

Transforming the network layer

Multi-cloud and hybrid cloud architecture are increasingly the norm. As enterprises embrace this approach, their networking infrastructure will likewise need to adapt to be able to easily connect to a variety of cloud environments.

Organizations that have traditionally relied on SD-WAN and MPLS based technologies will turn to cloud-based network-as-a-service (NaaS) offerings like Cloudflare’s Magic WAN (part of our Cloudflare One platform) to increase flexibility and reduce costs. This will also drive revenue opportunities for a new generation of cloud networking experts and advisors who have the skills to help organizations migrate from traditional on-premise hardware to a NaaS architecture.

For some organizations, transforming the network may in fact be a more attractive, initial entry point than beginning a Zero Trust security migration, as NaaS allows organizations to maintain their existing security tools while still providing a strategic path towards a full perimeter-less architecture with cloud-delivered protection in the future.

Implementing a Zero Trust architecture

For many organizations today, modernizing security for employees, devices, data, and offices with Zero Trust best practices is an equally critical priority. Trends towards hybrid and remote working have put additional pressure on IT and security teams to re-imagine how they secure access to corporate resources and move away from traditional ‘castle-and-moat’ architectures. Zero Trust promises enhanced visibility, more granular controls, and identity-aware protection across all traffic, regardless of origin or destination.

While the benefits of moving to a Zero Trust architecture are undeniable, implementing a full Zero Trust architecture is a journey that often requires the help of third parties. According to a recent report by iVanti, 73% of companies plan to move to a cloud-based architecture over the next 18 months, yet 46% of those companies’ IT security teams lack confidence in their ability to apply a Zero Trust model on their own, which is why 34% reportedly rely on third-party security providers to help them implement Zero Trust.1 This is where partners can help.

Announcing the Authorized Services Delivery Partner Track for Cloudflare One

Cloudflare is hyper-focused on building the most compelling and easy-to-use SASE platform on the market to help accelerate how organizations can transform their network and security architectures. The scale and resiliency of our global network – which spans 275+ cities in 100+ countries and has 172+ Tbps of network capacity – ensures that we can deliver our protections reliably and with high speed, regardless of where customers are around the world.

Just as our physical network of data centers continues to expand, so too does our strategic network of channel partners, who we rely on to deliver professional and managed services that customers may require as part of their Cloudflare One deployment. Cloudflare is actively working with partners worldwide to build advisory, migration, and managed services with the goal of wrapping partner services expertise around Cloudflare One engagements to ensure 100% customer adoption and satisfaction.

To help partners develop their Cloudflare One services expertise and distinguish themselves in the marketplace, today we are excited to announce the limited availability of a new specialization track for Authorized Services Delivery Partners (ASDP). This track is designed to authorize partners that meet Cloudflare’s high standards for professional services delivery around Cloudflare One.

To become an Authorized Partner, partners will need to go through a rigorous technical validation process and will be assessed on the merits of the security, performance, and reliability of their services delivery capabilities. Partners that achieve the Authorized Service Partner designation will receive a variety of benefits, such as:

  • Engagement in Cloudflare One sourced opportunities requiring services
  • Access to named Cloudflare One partner service delivery managers who can assist partners in building their services practices
  • Access to special partner incentive funds designed to ensure that authorized partner services are actively used in Cloudflare One customer engagements

To support this new partner track, we are also announcing advanced enablement and training paths that will be available in both instructor-led training and online formats via our partner portal, as well as advanced lab environments designed to help partners learn how to implement and support Cloudflare One deployments. Partners that successfully complete the ASDP requirements will also be given opportunities to shadow customer deployments to further their capabilities and expertise.

For current and prospective Cloudflare partners interested in this track, we are launching a new Cloudflare Authorized Service Delivery Partner Validation checklist, which includes details on the application process.

If you are an existing Cloudflare partner, you can also reach out to your named Channel Account Manager for additional information.

1. iVanti 2021 Zero Trust Progress Report

Weave your own global, private, virtual Zero Trust network on Cloudflare with WARP-to-WARP

Post Syndicated from Abe Carryl original https://blog.cloudflare.com/warp-to-warp/


Millions of users rely on Cloudflare WARP to connect to the Internet through Cloudflare’s network. Individuals download the mobile or desktop application and rely on the WireGuard-based tunnel to make their browser faster and more private. Thousands of enterprises trust Cloudflare WARP to connect employees to our Secure Web Gateway and other Zero Trust services as they navigate the Internet.

We’ve heard from both groups of users that they also want to connect to other devices running WARP. Teams can build a private network on Cloudflare’s network today by connecting WARP on one side to a Cloudflare Tunnel, GRE tunnel, or IPsec tunnel on the other end. However, what if both devices already run WARP?

Starting today, we’re excited to make it even easier to build a network on Cloudflare with the launch of WARP-to-WARP connectivity. With a single click, any device running WARP in your organization can reach any other device running WARP. Developers can connect to a teammate’s machine to test a web server. Administrators can reach employee devices to troubleshoot issues. The feature works with our existing private network on-ramps, like the tunnel options listed above. All with Zero Trust rules built in.

To get started, sign up to receive early access to our closed beta. If you’re interested in learning more about how it works and what else we will be launching in the future, keep scrolling.

The bridge to Zero Trust

We understand that adopting a Zero Trust architecture can feel overwhelming at times. With Cloudflare One, our mission is to make Zero Trust prescriptive and approachable regardless of where you are on your journey today. To help users navigate the uncertainty, we created resources like our vendor-agnostic Zero Trust Roadmap, which lays out a battle-tested path to Zero Trust. Within our own products and services, we’ve launched a number of features to bridge the gap between the networks you manage today and the network you hope to build for your organization in the future.

Ultimately, our goal is to enable you to overlay your network on Cloudflare however you want, whether that be with existing hardware in the field, a carrier you already partner with, existing technology standards like IPsec tunnels, or more Zero Trust approaches like WARP or Tunnel. It shouldn’t matter which method you choose to start with; the point is that you have the flexibility to get started no matter where you are in this journey. We call these connectivity options on-ramps and off-ramps.

A recap of WARP to Tunnel

The model laid out above allows users to start by defining their specific needs and then customize their deployment by choosing from a set of fully composable on-ramps and off-ramps to connect their users and devices to Cloudflare. This means that customers are able to leverage any of these solutions together to route traffic seamlessly between devices, offices, data centers, cloud environments, and self-hosted or SaaS applications.

One example of a deployment we’ve seen thousands of customers be successful with is what we call WARP-to-Tunnel. In this deployment, the on-ramp Cloudflare WARP ensures end-user traffic reaches Cloudflare’s global network in a secure and performant manner. The off-ramp Cloudflare Tunnel then ensures that, after your Zero Trust rules have been enforced, we have secure, redundant, and reliable paths to land user traffic back in your distributed, private network.


This is a great example of a deployment that is ideal for users that need to support public-to-private traffic flows (i.e. North-South).

But what happens when you need to support private-to-private traffic flows (i.e. East-West) within this deployment?

With WARP-to-WARP, connecting just got easier

Starting today, devices on-ramping to Cloudflare with WARP will also be able to off-ramp to each other. With this announcement, we’re adding yet another tool to leverage in new or existing deployments, one that provides users with a stronger network fabric to connect users, devices, and autonomous systems.


This means any of your Zero Trust-enrolled devices will be able to securely connect to any other device on your Cloudflare-defined network, regardless of physical location or network configuration. This unlocks the ability for you to address any device running WARP in the exact same way you are able to send traffic to services behind a Cloudflare Tunnel today. Naturally, all of this traffic flows through our in-line Zero Trust services, regardless of how it gets to Cloudflare, and this new connectivity announced today is no exception.

To power all of this, we now track which part of Cloudflare’s global network each WARP device is connected to, the same way we do for Cloudflare Tunnel. Traffic meant for a specific WARP device is relayed across our network using Argo Smart Routing and piped through the transport that routes IP packets to the appropriate WARP device. Since this traffic goes through our Zero Trust Secure Web Gateway — allowing various types of filtering — we upgrade and downgrade traffic from purely routed IP packets to fully proxied TLS connections (as well as other protocols). In the case of using SSH to remotely access a colleague’s WARP device, this means that your traffic is eligible for SSH command auditing as well.

Get started today with these use cases

If you already deployed Cloudflare WARP to your organization, then your IT department will be excited to learn they can use this new connectivity to reach out to any device running Cloudflare WARP. Connecting via SSH, RDP, SMB, or any other service running on the device is now simpler than ever. All of this provides Zero Trust access for the IT team members, with their actions being secured in-line, audited, and pushed to your organization’s logs.

Or maybe you have just finished designing a new feature of an existing product and want to let your team members check it out at their convenience. Sending them a link with your private IP — assigned by Cloudflare — will do the job. Their devices will see your machine as if it were on the same physical network, despite being on the other side of the world.

The usefulness doesn’t end with humans on both sides of the interaction: the weekend has arrived, and you have finally set out to move your local NAS to a host provider where you run a virtual machine. By running Cloudflare WARP on it, similarly to your laptop, you can now access your photos using the virtual machine’s private IP. This was already possible with WARP-to-Tunnel, but with WARP-to-WARP, you also get connectivity in the reverse direction: you can have the virtual machine periodically rsync/scp files from your laptop as well. This means you can make any server initiate traffic towards the rest of your Zero Trust organization with this new type of connectivity.
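To make these use cases concrete, here is a minimal sketch of what reaching a WARP peer looks like once both devices are enrolled. The private IP, username, and paths below are invented placeholders, not values assigned by any real deployment:

```shell
# SSH into a colleague's WARP-enrolled laptop by its Cloudflare-assigned
# private IP (100.96.0.23 and "anna" are made-up examples).
ssh anna@100.96.0.23

# Or, from a WARP-enrolled server, pull files from a laptop in the
# reverse direction, e.g. on a schedule via cron.
rsync -avz anna@100.96.0.23:~/photos/ /mnt/nas/photos/
```

Both commands are ordinary networking: the devices address each other directly, while the traffic transparently traverses Cloudflare's Zero Trust services.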

What’s next?

This feature will be available on all plans at no additional cost. To get started with this new feature, add your name to the closed beta, and we’ll notify you once you’ve been enrolled. Then, you’ll simply ensure that at least two devices are enrolled in Cloudflare Zero Trust and have the latest version of Cloudflare WARP installed.

This new feature builds upon the existing benefits of Cloudflare Zero Trust, which include enhanced connectivity, improved performance, and streamlined access controls. With the ability to connect to any other device in their deployment, Zero Trust users will be able to take advantage of even more robust security and connectivity options.

To get started in minutes, create a Zero Trust account, download the WARP agent, enroll these devices into your Zero Trust organization, and start creating Zero Trust policies to establish fast, secure connectivity between these devices. That’s it.

Bring your own certificates to Cloudflare Gateway

Post Syndicated from Ankur Aggarwal original https://blog.cloudflare.com/bring-your-certificates-cloudflare-gateway/


Today, we’re announcing support for customer-provided certificates to give flexibility and ease-of-deployment options when using Cloudflare’s Zero Trust platform. Using custom certificates, IT and Security administrators can now “bring their own” certificates instead of being required to use a Cloudflare-provided certificate to apply HTTP, DNS, CASB, DLP, RBI, and other filtering policies.

The new custom certificate approach will exist alongside the method Cloudflare Zero Trust administrators are already used to: installing Cloudflare’s own certificate to enable traffic inspection and forward proxy controls. Both approaches have advantages, but providing them both enables organizations to find the path to security modernization that makes the most sense for them.

Custom user side certificates

When deploying new security services, organizations may prefer to use their own custom certificates for a few common reasons. Some value the privacy of controlling which certificates are deployed. Others have already deployed custom certificates to their device fleet because they may bind user attributes to these certificates or use them for internal-only domains.

So it can be easier and faster to apply additional security controls around what administrators have already deployed, versus installing additional certificates.

To get started using your own certificate, first upload your root certificate to Cloudflare via the API:

curl -X POST "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/mtls_certificates" \
    -H "X-Auth-Email: <EMAIL>" \
    -H "X-Auth-Key: <API_KEY>" \
    -H "Content-Type: application/json" \
    --data '{
        "name":"example_ca_cert",
        "certificates":"<ROOT_CERTIFICATE>",
        "private_key":"<PRIVATE_KEY>",
        "ca":true
        }'

The root certificate will be stored across all of Cloudflare’s secure servers, designed to protect against unauthorized access. Once uploaded, each certificate will receive an identifier in the form of a UUID (e.g. 2458ce5a-0c35-4c7f-82c7-8e9487d3ff60). This UUID can then be used with your Zero Trust account ID to associate the certificate and enable it for your account:

curl -X PUT "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/gateway/configuration" \
    -H "X-Auth-Email: <EMAIL>" \
    -H "X-Auth-Key: <API_KEY>" \
    -H "Content-Type: application/json" \
    --data '{
        "settings":
        {
            "antivirus": {...},
            "block_page": {...},
            "custom_certificate":
            {
                "enabled": true,
                "id": "2458ce5a-0c35-4c7f-82c7-8e9487d3ff60"
            },
            "tls_decrypt": {...},
            "activity_log": {...},
            "browser_isolation": {...},
            "fips": {...}
        }
    }'

From there, it takes approximately one minute for the change to propagate, after which all new HTTPS connections for your organization’s users will be secured using your custom certificate. For even more details, check out our developer documentation.

An additional benefit of this fast propagation time is zero maintenance downtime. Whether you’re transitioning from the Cloudflare-provided certificate or from another custom certificate, all new HTTPS connections will use the new certificate without impacting any current connections.
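One quick way to confirm the switch is to inspect the issuer of the certificate presented on a WARP-enrolled device. This is a sketch, not an official procedure; `example.com` stands in for any site your users browse through Gateway with TLS inspection enabled:

```shell
# On a device enrolled in your Zero Trust organization, show which
# root CA signed the certificate Gateway presents for an inspected site.
# Before the switch this shows Cloudflare's CA; afterwards, your own.
openssl s_client -connect example.com:443 -servername example.com \
    </dev/null 2>/dev/null | openssl x509 -noout -issuer
```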

Or, install Cloudflare’s own certificates

In addition to the above API-based method for custom certificates, Cloudflare also makes it easy for organizations to install Cloudflare’s own root certificate on devices to support HTTP filtering policies. Many organizations prefer offloading certificate management to Cloudflare to reduce administrative overhead. Plus, root certificate installation can be easily automated during managed deployments of Cloudflare’s device client, which is critical for forward-proxying traffic.

Installing Cloudflare’s root certificate on devices takes only a few steps, and administrators can choose which file type they want to use, either a .pem or a .crt file, depending on their use cases. Take a look at our developer documentation for further details on the process across operating systems and applications.
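On a Debian/Ubuntu-style Linux device, for example, the installation might look like the following. The certificate filename is an assumption, and other operating systems use their own trust stores; see the developer documentation for the authoritative per-platform steps:

```shell
# Add the downloaded Cloudflare root certificate to the system trust
# store (Debian/Ubuntu layout; update-ca-certificates expects a .crt
# extension, so we rename the .pem on copy).
sudo cp Cloudflare_CA.pem /usr/local/share/ca-certificates/Cloudflare_CA.crt
sudo update-ca-certificates
```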

What’s next?

Whether an organization uses a custom certificate or the Cloudflare-maintained certificate, the goal is the same: to apply traffic inspection that helps protect against malicious activity and provides robust data protection controls to keep users safe. Cloudflare’s priority is equipping those organizations with the flexibility to achieve their risk reduction goal as swiftly as possible.

In the coming quarters, we will be focused on delivering a new UI to upload and manage user-side certificates, as well as refreshing the HTTP policy builder to let admins determine what happens when users access origins not signed with a public certificate.

If you want to know where SWG, RBI, DLP, and other threat and data protection services can fit into your overall security modernization initiatives, explore Cloudflare’s prescriptive roadmap to Zero Trust.
If you and your enterprise are ready to get started protecting your users, devices, and data with HTTP inspection, then reach out to Cloudflare to learn more.

Cloudflare is faster than Zscaler

Post Syndicated from David Tuber original https://blog.cloudflare.com/network-performance-update-cio-edition/


Every Innovation Week, Cloudflare looks at our network’s performance versus our competitors. In past weeks, we’ve focused on how much faster we are compared to reverse proxies like Akamai, or platforms that sell edge compute comparable to our Supercloud, like Fastly and AWS. For CIO Week, we want to show you how our network stacks up against competitors that offer forward proxy services. These products are part of our Zero Trust platform, which helps secure applications and Internet experiences out to the public Internet, as opposed to our reverse proxy, which protects your websites from outside users.

We’ve run a series of tests comparing our Zero Trust services with Zscaler. We’ve compared our Zero Trust application protection product, Cloudflare Access, against Zscaler Private Access (ZPA). We’ve compared our Secure Web Gateway, Cloudflare Gateway, against Zscaler Internet Access (ZIA), and finally our Remote Browser Isolation product, Cloudflare Browser Isolation, against Zscaler Cloud Browser Isolation. We’ve found that Cloudflare Gateway is 58% faster than ZIA in our tests, Cloudflare Access is 38% faster than ZPA worldwide, and Cloudflare Browser Isolation is 45% faster than Zscaler Cloud Browser Isolation worldwide. For each of these tests, we used 95th percentile Time to First Byte and Response measurements: the time it takes for a user to make a request and receive the start of the response (Time to First Byte) or the complete response (Response). These tests were designed to measure performance from an end-user perspective.

In this blog we’re going to talk about why performance matters for each of these products, do a deep dive on what we’re measuring to show that we’re faster, and we’ll talk about how we measured performance for each product.

Why does performance matter?

Performance matters because it impacts your employees’ experience and their ability to get their job done. Whether it’s accessing services through access control products, connecting out to the public Internet through a Secure Web Gateway, or securing risky external sites through Remote Browser Isolation, all of these experiences need to be frictionless.

Say Anna at Acme Corporation is connecting from Sydney out to Microsoft 365 or Teams to get some work done. If Acme’s Secure Web Gateway is located far away from Anna in Singapore, then Anna’s traffic may go out of Sydney to Singapore, and then back into Sydney to reach her email. If Acme Corporation is like many companies that require Anna to use Microsoft Outlook in online mode, her performance may be painfully slow as she waits for her emails to send and receive. Microsoft 365 recommends keeping latency as low as possible and bandwidth as high as possible. That extra hop Anna has to take through her gateway could decrease throughput and increase her latency, giving Anna a bad experience.

In another example, if Anna is connecting to a hosted, protected application like Jira to complete some tickets, she doesn’t want to be waiting constantly for pages to load or to authenticate her requests. In an access-controlled application, the first thing you do when you connect is log in. If that login takes a long time, you may get distracted by a random message from a coworker, or may not want to tackle the work at all. And even once you are authenticated, you still want your normal application experience to be snappy and smooth: users should never notice Zero Trust when it’s at its best.

If these products or experiences are slow, then something worse might happen than your users complaining: they may find ways to turn off the products or bypass them, which puts your company at risk. A Zero Trust product suite is completely ineffective if no one is using it because it’s slow. Ensuring Zero Trust is fast is critical to the effectiveness of a Zero Trust solution: employees won’t want to turn it off and put themselves at risk if they barely know it’s there at all.

Services like Zscaler may outperform many older, antiquated solutions, but their network still fails to measure up to a highly performant, optimized network like Cloudflare’s. We’ve tested all of our Zero Trust products against Zscaler’s equivalents, and we’re going to show you that we’re faster. So let’s dig into the data and show you how and why we’re faster in three critical Zero Trust scenarios, starting with Secure Web Gateway: comparing Cloudflare Gateway to Zscaler Internet Access (ZIA).

Cloudflare Gateway: a performant secure web gateway at your doorstep

A secure web gateway needs to be fast because it acts as a funnel for all of an organization’s Internet-bound traffic. If a secure web gateway is slow, then any traffic from users out to the Internet will be slow. If traffic out to the Internet is slow, then users may be prompted to turn off the Gateway, putting the organization at risk of attack.

But in addition to being close to users, a performant web gateway needs to also be well-peered with the rest of the Internet to avoid slow paths out to websites users want to access. Remember that traffic through a secure web gateway follows a forward proxy path: users connect to the proxy, and the proxy connects to the websites users are trying to access. Therefore, it behooves the proxy to be well-connected to ensure that the user traffic can get where it needs to go as fast as possible.

When comparing secure web gateway products, we pitted the Cloudflare Gateway and WARP client against Zscaler Internet Access (ZIA), which performs the same functions. Fortunately for Cloudflare users, Cloudflare’s network is not only embedded deep in last-mile networks close to users, but is also one of the most well-peered networks in the world. That peering helps make us 55% faster than ZIA in Gateway user scenarios. Below are the 95th percentile response times for Cloudflare, Zscaler, and a control set that didn’t use a gateway at all:


Secure Web Gateway – Response Time, 95th percentile (ms)
Control      142.22
Cloudflare   163.77
Zscaler      365.77

This data shows that not only is Cloudflare much faster than Zscaler for Gateway scenarios, but that Cloudflare is more comparable to not using a secure web gateway at all than to Zscaler.

To best measure the end-user Gateway experience, we are looking at 95th percentile response time from the end-user: we’re measuring how long it takes for a user to go through the proxy, have the proxy make a request to a website on the Internet, and finally return the response. This measurement is important because it’s an accurate representation of what users see.

When we measured against Zscaler, we had our end-user client try to access five different websites that users would connect to on a regular basis: a website hosted in Azure, a Cloudflare-protected Worker, Google, Slack, and Zoom. In each of those instances, Cloudflare outperformed Zscaler, and in the case of the Cloudflare-protected Worker, Gateway even outperformed the control for 95th percentile response time.


No matter where you go on the Internet, Cloudflare’s Gateway outperforms Zscaler Internet Access (ZIA) when you look at end-to-end response times. But why are we so much faster than Zscaler? The answer has to do with something that Zscaler calls proxy latency.

Proxy latency is the amount of time a user request spends on a Zscaler machine before being sent to its destination and back to the user. This number completely excludes the time it takes a user to reach Zscaler, and the time it takes Zscaler to reach the destination and restricts measurement to the milliseconds Zscaler spends processing requests.

Zscaler’s latency SLA says that 95% of your requests will spend less than 100 ms on a Zscaler device. In other words, Zscaler promises that the latency it can measure on its own edge, not the end-to-end latency that actually matters, will be 100 ms or less for 95% of user requests. You can even see those metrics in Zscaler’s Digital Experience to measure for yourself. If we take this proxy latency from Zscaler logs and compare it to the Cloudflare equivalent, we can see how we stack up against Zscaler’s SLA metrics. While we don’t yet have those metrics exposed to customers, we were able to enable tracing on Cloudflare to measure the Cloudflare proxy latency.

The results show that at the 95th percentile, Zscaler was exceeding its SLA, while the Cloudflare proxy latency was 7 ms. Furthermore, at the percentile where our proxy latency reached 100 ms (meeting the Zscaler SLA), their proxy latencies were over 10x ours. Zscaler’s proxy latency accounts for the difference in performance we saw at the 95th percentile, being anywhere between 140-240 ms slower than Cloudflare for each of the sites. Here are the Zscaler proxy latency values at different percentiles for all sites tested, and then broken down by each site:

Zscaler Internet Access (ZIA) proxy latency (ms)
              P90      P95      P99      P99.9      P99.957
Global        06.0     142.0    625.0    1,071.7    1,383.7
Azure Site    97.0     181.0    458.5    1,032.7    1,291.3
Zoom          206.0    254.2    659.8    1,297.8    1,455.4
Slack         118.8    186.2    454.5    1,358.1    1,625.8
Workers Site  97.8     184.1    468.3    1,246.2    1,288.6
Google        13.7     100.8    392.6    848.9      1,115.0

At the 95th percentile, not only were their proxy latencies out of SLA; those values show the difference between Zscaler and Cloudflare. Taking Zoom as an example, if Zscaler didn’t have the proxy latency, it would be on par with Cloudflare and the control. Cloudflare’s equivalent of proxy latency is so small that using us is just like using the public Internet:

Cloudflare Gateway proxy latency (ms)
              P90    P95    P99    P99.9    P99.957
Global        5.6    7.2    15.6   32.2     101.9
Tubes Test    6.2    7.7    12.3   18.1     19.2
Zoom          5.1    6.2    9.6    25.5     31.1
Slack         5.3    6.5    10.5   12.5     12.8
Workers       5.1    6.1    9.4    17.3     20.5
Google        5.3    7.4    12.0   26.9     30.2

The 99.957 percentile may seem strange to include, but it marks the percentile at which Cloudflare’s proxy latencies finally exceed 100 ms. Cloudflare’s 99.957th percentile proxy latency is faster than Zscaler’s 90th percentile proxy latency. Even on the metric Zscaler holds itself accountable for, despite proxy latency not being the metric customers actually care about, Cloudflare is faster.
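For readers who want to reproduce this kind of breakdown from their own logs, a nearest-rank percentile can be computed with standard Unix tools. The latency samples below are invented for illustration; real values would come from per-request proxy logs:

```shell
# Made-up proxy-latency samples in milliseconds, one per line.
printf '%s\n' 5.6 7.2 15.6 32.2 101.9 142.0 6.1 9.4 7.9 5.1 > latencies.txt

# Nearest-rank p95: sort the samples and pick the ceil(0.95 * N)-th one.
p95=$(sort -n latencies.txt | awk '{a[NR]=$1} END {i=int(NR*0.95+0.999999); print a[i]}')
echo "p95: ${p95} ms"
```

The same awk one-liner works for any percentile by swapping the 0.95 multiplier.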

Getting this view of the data was not easy. Existing testing frameworks like Catchpoint are unsuitable for this task because performance testing requires running the ZIA client or the WARP client on the testing endpoint. We also needed to make sure that the Cloudflare test and the Zscaler test were running on similar machines in the same place, to measure performance as accurately as possible. This allows us to measure the end-to-end responses coming from the same location where both test environments are running:


In our setup, we put three VMs in the cloud side by side: one running Cloudflare WARP connecting to our Gateway, one running ZIA, and one running no proxy at all as a control. These VMs made requests every three minutes to the five endpoints mentioned above and logged the HTTP browser timings for how long each request took. Based on these timings, we are able to get a meaningful, user-facing view of performance.
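As a self-contained sketch of that measurement loop (not the actual harness), curl's write-out timings capture both metrics directly: `time_starttransfer` corresponds to Time to First Byte and `time_total` to the full Response time. Here we point it at a throwaway local server so the example runs anywhere; the real tests targeted the five endpoints above:

```shell
# Stand-in test endpoint: a throwaway local web server.
python3 -m http.server 8123 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1

# Time to First Byte (time_starttransfer) and full Response (time_total).
timings=$(curl -so /dev/null -w "%{time_starttransfer} %{time_total}" http://127.0.0.1:8123/)
echo "TTFB/Response (s): ${timings}"

kill $srv
```

Running the same command against the same endpoint from a proxied VM and a control VM gives directly comparable end-to-end numbers.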

A quick summary so far: Cloudflare is faster than Zscaler when protecting users from the public Internet through a secure web gateway, from an end-user perspective. Cloudflare is even faster than Zscaler according to Zscaler’s own narrow definition of what performance through a secure web gateway means. But let’s take a look at scenarios where you need access to specific applications through Zero Trust access.

Cloudflare Access: the fastest Zero Trust proxy

Access control needs to be seamless and transparent to the user: the best compliment for a Zero Trust solution is employees barely notice it’s there. Services like Cloudflare Access and Zscaler Private Access (ZPA) allow users to cache authentication information on the provider network, ensuring applications can be accessed securely and quickly to give users that seamless experience they want. So having a network that minimizes the number of logins required while also reducing the latency of your application requests will help keep your Internet experience snappy and reactive.

Cloudflare Access does all that 38% faster than Zscaler Private Access (ZPA), ensuring that no matter where you are in the world, you’ll get a fast, secure application experience:

Cloudflare is faster than Zscaler

ZT Access – Time to First Byte (Global)
95th Percentile (ms)
Cloudflare 849
Zscaler 1,361

When we drill into the data, we see that Cloudflare is consistently faster everywhere around the world. For example, take Tokyo, where Cloudflare's 95th percentile time to first byte is 22% faster than Zscaler's:

Cloudflare is faster than Zscaler

When we evaluate Cloudflare against Zscaler for application access scenarios, we are looking at two distinct scenarios that need to be measured individually. The first scenario is when a user logs into their application and has to authenticate. In this case, the Zero Trust Access service will direct the user to a login page, the user will authenticate, and then be redirected to their application.

This is called a new session, because no authentication information is cached or exists on the Access network. The second scenario is called an existing session, when a user has already been authenticated and that authentication information can be cached. This scenario is usually much faster, because it doesn’t require an extra call to an identity provider to complete.

We like to measure these scenarios separately because, if we combined new and existing sessions, the 95th percentile values would almost always reflect new sessions. But across both scenarios, Cloudflare is consistently faster in every region. Here's how the data looks for a location where Zscaler is likely to have good peering: users in Chicago, IL connecting to an application hosted in US-Central.
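To see why mixing the two scenarios skews the tail, consider a nearest-rank 95th percentile (our assumption here; the report's exact percentile method may differ) computed over a combined pool of timings. Because new sessions are much slower, the combined p95 is drawn almost entirely from them. The sample values below are illustrative, not measured data:

```python
import math

def p95(samples_ms):
    """Nearest-rank 95th percentile of a list of latency samples (ms)."""
    ordered = sorted(samples_ms)
    rank = max(0, math.ceil(0.95 * len(ordered)) - 1)  # 1-based rank -> 0-based index
    return ordered[rank]

new_sessions = [900, 1000, 1100, 1200, 1300]       # extra identity-provider round trip
existing_sessions = [250, 280, 300, 320, 350]      # cached authentication

print(p95(new_sessions))                       # 1300
print(p95(existing_sessions))                  # 350
print(p95(new_sessions + existing_sessions))   # 1300 -- dominated by new sessions
```

Reporting the two session types side by side, as the tables here do, keeps the faster cached path visible instead of burying it under the authentication tail.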

Cloudflare is faster than Zscaler

ZT Access – 95th Percentile Time to First Byte
(Chicago)
New Sessions (ms) Existing Sessions (ms)
Cloudflare 1,032 293
Zscaler 1,373 338

Cloudflare is faster overall there as well. Here’s a histogram of 95th percentile response times for new connections overall:

Cloudflare is faster than Zscaler

You’ll see that Cloudflare’s network really gives a performance boost on login, finding optimal paths back to authentication providers to retrieve login details. In this test, Cloudflare never takes more than 2.5 seconds to return a login response, while half of Zscaler’s 95th percentile responses are almost double that, at around four seconds. This suggests that Zscaler’s network isn’t as well-peered, which adds latency early on, though it could also mean Zscaler does better once the connection is established and everything is cached. But on an existing connection, Cloudflare still comes out ahead:

Cloudflare is faster than Zscaler

Zscaler and Cloudflare match up more evenly in the lower latency buckets, but Cloudflare’s response times are much more consistent, while half of Zscaler’s responses take almost a second to load. This further highlights how well-connected we are: because we’re in more places, we provide a better application experience, and we have fewer edge cases with high latency and poor application performance.
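The histograms above are easy to reproduce from raw timings. A minimal sketch of that bucketing (the 500 ms bucket width is our choice for illustration, not necessarily what the charts use):

```python
from collections import Counter

def latency_histogram(samples_ms, bucket_ms=500):
    """Count samples per fixed-width latency bucket, keyed by bucket lower bound (ms)."""
    counts = Counter((int(s) // bucket_ms) * bucket_ms for s in samples_ms)
    return dict(sorted(counts.items()))

print(latency_histogram([120, 480, 750, 900, 1900, 2400]))
# {0: 2, 500: 2, 1500: 1, 2000: 1}
```

Plotting one such dictionary per provider is what makes tail behavior, like a cluster of near-one-second responses, stand out at a glance.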

We like to separate these new and existing sessions because it’s important to compare similar request paths. For example, if we compared a request via Zscaler on an existing session against a request via Cloudflare on a new session, Cloudflare could appear much slower than Zscaler simply because of the need to authenticate. So when we contracted a third party to design these tests, we made sure that they took that into account.

For these tests, Cloudflare contracted Miercom, a third party, to perform a set of tests intended to replicate an end-user connecting to a resource protected by Cloudflare or Zscaler. Miercom set up application instances in 12 locations around the world and devised a test that logs into the application through each Zero Trust provider to access certain content. The test methodology is summarized below; you can read the full report from Miercom detailing it here:

  • User connects to the application from a browser mimicked by a Catchpoint instance – new session
  • User authenticates against their identity provider
  • User accesses resource
  • User refreshes the browser page and tries to access the same resource but with credentials already present – existing session

This allows us to look at Cloudflare versus Zscaler for application performance for both new and existing sessions, and we’ve shown that we’re faster. We’re faster in secure web gateway scenarios too.

But what if you want to access resources on the public Internet and you don’t have a ZT client on your device? To do that, you’ll need remote browser isolation.

Cloudflare Browser Isolation: your friendly neighborhood web browser

Remote browser isolation products depend heavily on the public Internet: if your connection to your browser isolation product isn’t good, your browsing experience will feel strange and slow. Performance is what makes remote browser isolation feel smooth and seamless: when everything is as fast as it should be, users shouldn’t even notice that they’re using browser isolation. For this test, we’re pitting Cloudflare Browser Isolation against Zscaler Cloud Browser Isolation.

Cloudflare once again is faster than Zscaler for remote browser isolation performance. Comparing 95th percentile time to first byte, Cloudflare is 45% faster than Zscaler across all regions:

Cloudflare is faster than Zscaler

ZT RBI – Time to First Byte (Global)
95th Percentile (ms)
Cloudflare 2,072
Zscaler 3,781

When you compare total response time, the time it takes a browser isolation product to deliver a full response back to the user, Cloudflare is still 39% faster than Zscaler:

Cloudflare is faster than Zscaler

ZT RBI – Total Response Time (Global)
95th Percentile (ms)
Cloudflare 2,394
Zscaler 3,932

Cloudflare’s network really shines here to help deliver the best user experience to our customers. Because Cloudflare’s network is incredibly well-peered close to end-user devices, we are able to drive down our time to first byte and response times, helping improve the end-user experience.

To measure this, we went back to Miercom, who had Catchpoint nodes connect to Cloudflare Browser Isolation and Zscaler Cloud Browser Isolation from the same 14 locations around the world, with devices simulating clients trying to reach applications through each browser isolation product in each locale. For more on the test methodology, refer to the same Miercom report, linked here.

Next-generation performance in a Zero Trust world

In a non-Zero Trust world, you and your IT teams were the network operator — which gave you the ability to control performance. While this control was comforting, it was also a huge burden on your IT teams who had to manage middle mile connections between offices and resources. But in a Zero Trust world, your network is now… well, it’s the public Internet. This means less work for your teams — but a lot more responsibility on your Zero Trust provider, which has to manage performance for every single one of your users. The better your Zero Trust provider is at improving end-to-end performance, the better an experience your users will have and the less risk you expose yourself to. For real-time applications like authentication and secure web gateways, having a snappy user experience is critical.

A Zero Trust provider needs to not only secure your users on the public Internet, but it also needs to optimize the public Internet to make sure that your users continuously stay protected. Moving to Zero Trust doesn’t just reduce the need for corporate networks, it also allows user traffic to flow to resources more naturally. However, given your Zero Trust provider is going to be the gatekeeper for all your users and all your applications, performance is a critical aspect to evaluate to reduce friction for your users and reduce the likelihood that users will complain, be less productive, or turn the solutions off. Cloudflare is constantly improving our network to ensure that users always have the best experience, and this comes not just from routing fixes, but also through expanding peering arrangements and adding new locations. It’s this tireless effort that makes us the fastest Zero Trust provider.

Check out our compare page for more detail on how Cloudflare’s network architecture stacks up against Zscaler.

Introducing Digital Experience Monitoring

Post Syndicated from Abe Carryl original https://blog.cloudflare.com/introducing-digital-experience-monitoring/

Introducing Digital Experience Monitoring

This post is also available in 简体中文, 日本語, Français and Español.

Introducing Digital Experience Monitoring

Today, organizations of all shapes and sizes lack visibility and insight into the digital experiences of their end-users. This often leaves IT and network administrators feeling vulnerable to issues beyond their control which hinder productivity across their organization. When issues inevitably arise, teams are left with a finger-pointing exercise. They’re unsure if the root cause lies within the first, middle or last mile and are forced to file a ticket for the respective owners of each. Ideally, each team sprints into investigation to find the needle in the haystack. However, once each side has exhausted all resources, they once again finger point upstream. To help solve this problem, we’re building a new product, Digital Experience Monitoring, which will enable administrators to pinpoint and resolve issues impacting end-user connectivity and performance.

To get started, sign up to receive early access. If you’re interested in learning more about how it works and what else we will be launching in the near future, keep scrolling.

Our vision

Over the last year, we’ve received an overwhelming amount of feedback that users want to see the intelligence that Cloudflare possesses from our unique perspective, helping power the Internet embedded within our Zero Trust platform. Today, we’re excited to announce just that. Throughout the coming weeks, we will be releasing a number of features for our Digital Experience Monitoring product which will provide you with unparalleled visibility into the performance and connectivity of your users, applications, and networks.

With data centers in more than 275 cities across the globe, Cloudflare handles an average of 39 million HTTP requests and 22 million DNS requests every second. And with more than one billion unique IP addresses connecting to our network we have one of the most representative views of Internet traffic on the planet. This unique point of view on the Internet will be able to provide you deep insight into the digital experience of your users. You can think of Digital Experience Monitoring as the air traffic control tower of your Zero Trust deployment providing you with the data-driven insights you need to help each user arrive at their destination as quickly and smoothly as possible.

What is Digital Experience Monitoring?

When we began to research Digital Experience Monitoring, we started with you: the user. Users want a single dashboard to monitor user, application, and network availability and performance. Ultimately, this dashboard needs to help users cohesively understand the minute-by-minute experiences of their end-users so that they can quickly and easily resolve issues impacting productivity. Simply put, users want hop-by-hop visibility into the network traffic paths of each and every user in their organization.

From our conversations with our users, we understand that providing this level of insight has become even more critical and challenging in an increasingly work-from-anywhere world.

With this product, we want to empower you to answer the hard questions: the kind of tickets we all wish would never appear in the queue, like “Why can’t the CEO reach SharePoint while traveling abroad?”. Could it have been poor Wi-Fi signal strength in the hotel? High CPU on the device? Or something else entirely?

Without the proper tools, it’s nearly impossible to answer these questions. Regardless, it’s all but certain that this investigation will be a time-consuming endeavor whether it has a happy ending or not. Traditionally, the investigation will go something like this. IT professionals will start their investigation by looking into the first-mile which may include profiling the health of the endpoint (i.e. CPU or RAM utilization), Wi-Fi signal strength, or local network congestion. With any luck at all, the issue is identified, and the pain stops here.

Unfortunately, teams rarely have the tools required to prove these theories out so, frustrated, they move on to everything in between the user and the application. Here we might be looking for an outage or a similar issue with a local Internet Service Provider (ISP). Again, even if we do have reason to believe that this is the issue it can be difficult to prove this beyond a reasonable doubt.

Reluctantly, we move on to the last mile. Here we’ll be looking to validate that the application in question is available and, if so, how quickly we can establish a meaningful connection (Time to First Byte, First Contentful Paint, packet loss) to it. More often than not, the lead investigator is left with more questions than answers after attempting to account for the hop-by-hop degradation. Then, by the time the ticket can be closed, the CEO has boarded a flight back home and the issue is no longer relevant.

With Digital Experience Monitoring, we’ve set out to build the tools you need to quickly find the needle in the haystack and resolve issues related to performance and connectivity. However, we also understand that availability and performance are just shorthand measures for gauging the complete experience of our customers. Of course, there is much more to a good user experience than just insights and analytics. We will continue to pay close attention to other key metrics around the volume of support tickets, contact rate, and time to resolution as other significant indicators of a healthy deployment. Internally, when shared with Cloudflare, this telemetry data will help enable our support teams to quickly validate and report issues to continuously improve the overall Zero Trust experience.

“As CIO, I am focused on outfitting Cintas with technology and systems that help us deliver on our promises for the 1 million plus businesses we serve across North America. As we leverage more cloud based technology to create differentiated experiences for our customers, Cloudflare is an integral part of delivering on that promise.”
Matthew Hough, CIO, Cintas

A look ahead

In the coming weeks, we’ll be launching three new features. Here is a look ahead at what you can expect when you sign up for early access.

Zero Trust Fleet Status

One of the common challenges of deploying software is understanding how it is performing in the wild. For Zero Trust, this might mean trying to answer how many of your end-users are running our device agent, Cloudflare WARP, for instance. Then, of those users, you may want to see how many users have enabled, paused, or disabled the agent during the early phases of a deployment. Shortly after finding these answers, you may want to see if there is any correlation between the users who pause their WARP agent and the data center through which they are connected to Cloudflare. These are the kinds of answers you will be able to find with Zero Trust Fleet Status. These insights will be available at both an organizational and per-user level.

Introducing Digital Experience Monitoring

Synthetic Application Monitoring

Oftentimes, the issues being reported to IT professionals will fall outside their control. For instance, an outage for a popular SaaS application can derail an otherwise perfectly productive day. But these issues become much easier to address if you know about them before your users begin to report them. This foresight allows you to proactively communicate issues to the organization and get ahead of the flood of IT tickets destined for your inbox. With Synthetic Application Monitoring, we’ll be providing Zero Trust administrators the ability to run synthetic application tests against public-facing endpoints.

Introducing Digital Experience Monitoring

With this tool, users can initiate periodic traceroute and HTTP GET requests destined for a given public IP or hostname. In the dashboard, we’ll then surface global and user-level analytics enabling administrators to easily identify trends across their organization. Users will also have the ability to filter results down to identify individual users or devices who are most impacted by these outages.
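As a sketch of the kind of aggregation that drill-down implies, the helper below ranks users by synthetic-check failure rate. The result schema (`user`, `status`) is invented for illustration and is not the product's actual data model:

```python
from collections import defaultdict

def most_impacted(results, top_n=3):
    """Rank users by failure rate across synthetic checks.

    Each result is a dict like {"user": ..., "status": int or None};
    a status of None (no response) or >= 500 counts as a failure.
    """
    totals = defaultdict(int)
    failures = defaultdict(int)
    for r in results:
        totals[r["user"]] += 1
        if r["status"] is None or r["status"] >= 500:
            failures[r["user"]] += 1
    ranked = sorted(totals, key=lambda u: failures[u] / totals[u], reverse=True)
    return [(u, failures[u], totals[u]) for u in ranked[:top_n]]

checks = [
    {"user": "alice", "status": 200},
    {"user": "alice", "status": 503},
    {"user": "bob", "status": 200},
    {"user": "bob", "status": 200},
    {"user": "carol", "status": None},   # timed out: no response at all
]
print(most_impacted(checks))  # carol fails 1/1, alice 1/2, bob 0/2
```

The same per-user counts, broken out by endpoint or by time window, are what surface the organization-wide trends described above.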

Introducing Digital Experience Monitoring

Network Path Visualization

Once an issue with a given user or device is identified through the Synthetic Application Monitoring reports highlighted above, administrators will be able to view hop-by-hop telemetry data outlining the critical path to public facing endpoints. Administrators will have the ability to view this data represented graphically and export any data which may be relevant outside the context of Zero Trust.

Introducing Digital Experience Monitoring

What’s next

According to Gartner®, “by 2026 at least 60% of I&O leaders will use Digital Experience Monitoring (DEM) to measure application, services and endpoint performance from the user’s viewpoint, up from less than 20% in 2021.” The items at the top of our roadmap are just the beginning of Cloudflare’s approach to bringing our intelligence into your Zero Trust deployments.

Perhaps what we’re most excited about with this product is that users on all Zero Trust plans will be able to get started at no additional cost and then upgrade their plans for more advanced features and usage moving forward. Join our waitlist to be notified when these initial capabilities are available and receive early access.

Gartner Market Guide for Digital Experience Monitoring, 03/28/2022, Mrudula Bangera, Padraig Byrne, Gregg Siegfried.
GARTNER is the registered trademark and service mark of Gartner Inc., and/or its affiliates in the U.S. and/or internationally and has been used herein with permission. All rights reserved.

Welcome to CIO Week 2023

Post Syndicated from Corey Mahan original https://blog.cloudflare.com/welcome-to-cio-week-2023/

Welcome to CIO Week 2023

Welcome to CIO Week 2023

When you are the Chief Information Officer (CIO), your systems need to just work. A quiet day when users go about their job without interruption is a celebration. When they do notice, something has probably fallen apart.

We understand. CIOs own some of an organization’s most mission-critical challenges. Your security counterparts expect safety to be robust while your users want it to be unintrusive. Your sales team continues to open offices in new locations while those new hires need rapid connectivity to your applications. You own a budget that never seems to grow fast enough to match price increases from point solution vendors. On top of that, CIOs must support their organizations’ shifts to new remote and hybrid work models, which means modernizing applications and infrastructure faster than ever before.

Today marks the start of CIO Week, our celebration of the work that you and your teams accomplish every day. We’ve assembled this week to showcase features, stories, and tools that you can use to continue to deliver on your mission while also improving the experience of your users and administrators. We’ve even included announcements to help on the budget front.

We’re doing this because we’ve been in the same places. Our own security team could not compromise on tools to safeguard Cloudflare while we grew beyond the walls of a couple of locations. We hired new staff members around the globe to manage one of the world’s largest networks, and they needed access to be fast. We were also predominantly a work-from-office organization. Today, we’re hiring for in-office, remote and hybrid opportunities all over the world.

We believe CIOs are shaping the future of the modern organization. From securely connecting employees and third-parties to critical applications, to safeguarding sensitive company data from phishing and other malicious threats, CIOs are effectively tasked with protecting an organization’s crown jewels. This week we’ll demonstrate how Cloudflare is helping CIOs to accelerate digital transformation and maximize employee collaboration and productivity – all while strengthening security. Welcome to CIO Week.

All eyes on digital transformation

CIOs own, sponsor, or support an organization’s digital transformation strategy that touches all parts of a business. These cross-functional efforts can include moving applications and data to the cloud, building new competencies in areas like data analytics or automation, and developing new digital products and services to drive growth.

While these initiatives are largely driven by the motivation to go faster, CIOs recognize that speed cannot come at the expense of safety. Balancing both goals, however, can quickly become complicated. Layering on new technologies can add overhead and increase total cost of ownership. Administrators can struggle if products require different management interfaces and control planes or work differently in different locations. Plus, poor integrations and interoperability can mean precious time is wasted just getting services to work together.

We think about hidden challenges like these often when building new products at Cloudflare. As Cloudflare’s CIO, who you’ll hear from shortly, likes to phrase it, we’re helping CIOs by “bringing the glue”. That is, when building anything new, we ask ourselves to focus on delivering benefits that could not be obtained using individual products in silos. Throughout this innovation week, you’ll see announcements highlighting how organizations can realize more value when services work natively together.

Designing our security products to be composable and easy to use helps our customers speed up their digital strategy.  But we think about speed in other ways too. First, we optimize our services to enforce protections for any request, from anywhere around the globe, so that security doesn’t get in the way of end users. (In fact, we’re so proud of this that we even dedicated an entire innovation week to delivering speedy user experiences across the Internet). Second, we pride ourselves on being speedy in innovation, delivering new capabilities and services at such high velocity that we not only solve the problems you’re facing today, but also help you proactively plan for fixing your problems of tomorrow.

SASE, Zero Trust and the CIO

For many organizations, an increasingly critical goal of digital transformation is revamping networking and security. As applications, users, and data have shifted outside the walls of the corporate perimeter, the traditional tools of the castle-and-moat model no longer make sense.

Instead, modernized architectures like SASE (or Secure Access Service Edge) are gaining traction, advocating to unify all networking and security controls to a single control plane in the cloud. On that journey, we’re seeing organizations turning to Zero Trust for best practices and principles to enable the broader visibility and granular controls needed to steer the modern workforce.

While concepts like SASE and Zero Trust still need the occasional explainer, the benefits are real, and CIOs are turning to our SASE platform – Cloudflare One – to start realizing those business benefits. When customers start their SASE and Zero Trust journeys with Cloudflare, they are connecting their employees to our global network to inspect and apply controls to as much traffic and data as they want. Whether your traffic is traversing from on-premise to the cloud, from one cloud to another, or something in between, Cloudflare has a way to secure and accelerate traffic.

This week, we will be announcing even more capabilities and products that make the single-vendor SASE dream a reality.

If you want to go far, let’s go together

Before taking on any long-term digital transformation challenge, it’s vital to make sure you’re surrounded by the right people and partners to go the distance.

With our broad mission to help build a better Internet, it means that we must do the same at Cloudflare. We partner with fellow industry leaders to help CIOs with efforts like the Critical Infrastructure Defense Project to quickly improve the cyber readiness of vulnerable infrastructure or our partnership with Yubico to provide security keys at “Good for the Internet” pricing (for as low as $10 per key!).

This collaborative ethos extends far beyond just these types of focused initiatives. Over recent years, Cloudflare has invested in our ecosystem of alliances, channel partners (including system integrators and advisory / consulting firms), and technology partners to make sure customers have options to pursue digital transformation in the way that makes the most sense for them. In particular, we have seen more customers and partners collaborating on long term SASE and Zero Trust use cases with our Cloudflare One platform.

Over the course of this week, we’ll share more about strategic partnerships, including opportunities to enable a Zero Trust strategy using Cloudflare One platform services and deeper integrations with key partners like Microsoft.

The expertise of partners combined with Cloudflare’s network scale and simplicity helps CIOs modernize security at their own pace.

Cloudflare is the neutral supercloud control plane

When CIOs think about a multi-cloud strategy it tends to center around applications. Multi-cloud strategies devise careful plans for migrating applications, ensuring that efficiency, scale and speed of delivery goals are met in the cloud.

But often overlooked are the highways of connectivity that are essential for a speedy connection from one cloud to another or from an on-premise data center to another network in a cloud provider. While speeding up applications is the focus, having a global endpoint and identity-neutral network fabric for consistency and composability is equally important.

This week, we’ll highlight how Cloudflare is able to connect you to/from anything. Whether a request is coming to or from other cloud providers, IoT devices, or in challenging regions or areas, Cloudflare provides a global control plane to help your business stay secure and keep things moving fast.

We believe that Cloudflare is the neutral supercloud control plane. Over the course of this week, we’ll show you how our platform is built to work seamlessly with multiple cloud providers, allowing organizations to easily and securely manage their cloud infrastructure.

A warm welcome from Cloudflare’s CIO

New project kickoff, budget planning update, security compliance report, hiring review board, hybrid tooling workshop and the list goes on.

All this and it’s only Monday morning. Sound familiar?

My job as Cloudflare’s CIO shares most of the challenges that any other CIO faces in these uncertain times. Today, business technology leaders have to manage short-term budget pressure while keeping strategic areas properly funded so as not to mortgage the company’s future. On the other hand, one of the perks of being Cloudflare’s CIO is being a direct participant in the incredible rate of innovation we hold ourselves to at Cloudflare and, in return, the benefit we can deliver to our customers.

I can’t wait for us to share all the exciting announcements and new product features this week. Why? Well, my team has been using a lot of them from even the early versions.

One of the awesome things about getting to be CIO here is being Customer Zero for most of Cloudflare’s products, getting to try everything first, and play Product Manager from time to time… Before we ask you to trust us with your networks, security, or data, we’ve put ourselves through the test first. Securing Cloudflare using Cloudflare, or “Dog Fooding” as we call it internally, is something ingrained in our culture.

But don’t just take it from me, during the week you’ll hear from other fellow CIOs who view Cloudflare as a trusted partner. My hope is at the end of the week, you’ll consider having Cloudflare as a trusted partner too.

Welcome to CIO Week!

Democratizing access to Zero Trust with Project Galileo

Post Syndicated from Jocelyn Woolbright original https://blog.cloudflare.com/democratizing-access-to-zero-trust-with-project-galileo/

Democratizing access to Zero Trust with Project Galileo

Democratizing access to Zero Trust with Project Galileo

Project Galileo was started in 2014 to protect free expression from cyber attacks. Many of the organizations in the world that champion new ideas are underfunded and lack the resources to properly secure themselves. This means they are exposed to Internet attacks aimed at thwarting and suppressing legitimate free speech.

In the last eight years, we have worked with 50 partners across civil society to onboard more than 2,000 organizations in 111 countries to provide our powerful cyber security products to those who work in sensitive yet critical areas of human rights and democracy building.

New security needs for a new threat environment

As Cloudflare has grown as a company, we have adapted and evolved Project Galileo especially amid global events such as COVID-19, social justice movements after the death of George Floyd, the war in Ukraine, and emerging threats to these groups intended to silence them. Early in the pandemic, as organizations had to quickly implement work-from-home solutions, new risks stemmed from this shift.

In our conversations with partners and participants, we noticed a theme: a digital divide in cyber security. The “one size fits all” model of products on the market means that only large enterprises with dedicated security teams and extensive budgets can keep their internal resources and data secure. For Project Galileo, we work with a range of organizations that vary in size, internal capacity, and technical expertise. Because many of these groups rely on their online presence to collect donations, organize volunteers, and promote their missions, one-size-fits-all security products do not match their needs or expertise.

Announcing new Zero Trust tools for Project Galileo participants

With this in mind, we have extended our Zero Trust products to all domains under Project Galileo, because we want organizations to have access to enterprise-level cyber security products no matter their size or budget. Zero Trust security means that no one is trusted by default from inside or outside the network, and verification is required from everyone trying to gain access to resources on the network. This allows organizations of any size to solve common security problems such as data loss, malware, and phishing, so these organizations can focus on their unique missions.

For Impact Week, we are excited to share how Project Galileo participants and partners use Cloudflare’s Zero Trust products to keep their operations running smoothly.

CyberPeace Institute

Democratizing access to Zero Trust with Project Galileo

We started partnering with the CyberPeace Institute for Project Galileo in 2022. As part of our partnership, we have worked to provide our cyber security services to at-risk organizations around the world.

Established in 2019, the CyberPeace Institute is an independent and neutral nongovernmental organization, headquartered in Switzerland, whose mission is to ensure the rights of people to security, dignity and equity in cyberspace. The Institute works in close collaboration with relevant partners to reduce the harms from cyberattacks on people’s lives worldwide. By analyzing cyberattacks, the Institute exposes their societal impact, how international laws and norms are being violated, and advances responsible behavior to enforce cyberpeace.
Since our partnership, we’ve been working to onboard their organization to Cloudflare Zero Trust, to secure critical applications and protect employees from online threats.

“The CyberPeace Institute works with humanitarian non-governmental organizations (NGOs) to protect their operations and build their cyber capabilities, data and resources in an increasingly complex digital environment. Both the Institute and Cloudflare share a core motivation to ensure the rights of people to security, dignity and equity in cyberspace. This alignment gives us confidence that Cloudflare is the right strategic partner as we evolve with our mission. We are grateful for the support of Project Galileo,” stated Stéphane Duguin, Chief Executive Officer, CyberPeace Institute.

The Information Technology Disaster Resource Center

The Information Technology Disaster Resource Center (ITDRC) is a nonprofit composed of thousands of service-oriented technical professionals and private sector partners that assist in disaster response operations in the United States. These teams train and work in collaboration with NGOs and first responders to deliver emergency communications and technical solutions to aid communities in crisis. ITDRC provides connectivity, Wi-Fi hotspots, cell phone charging stations, and Internet-enabled computers for shelters, fire camps, and community recovery. A key part of their mission is to leverage technology to connect survivors and responders amid crises.

ITDRC started using Cloudflare in 2020 when they were accepted to Project Galileo. Since then, they have implemented many Zero Trust products to secure their volunteers and employees.

Chris Hillis, Co-founder at ITDRC says, “Cloudflare Zero Trust is essential to securing our employees, volunteers, and disaster survivors on site and in the field. Cloudflare delivers secure, reliable, and fast connectivity to the Internet and critical applications that our teams need to respond to disasters effectively. Setting up policies has been simple for our administrators, and our team benefits from a safer, faster experience, whether accessing internally hosted applications, or the broader Internet. With Cloudflare Access, we are able to ensure that team members receive a consistent user experience accessing internal applications based on their role, all while utilizing our existing identity provider and securing our infrastructure. Utilizing Cloudflare Gateway adds an additional layer of security to our networks and devices, helping to protect our users from external threats, and themselves.”

Meedan

Meedan is a global technology not-for-profit that builds software and programmatic initiatives to strengthen journalism, digital literacy, and accessibility of information online and off. They develop open-source tools for creating and sharing context on digital media through crowdsourcing, annotation, verification, archival, and translation. Their projects span issues including election monitoring, pandemic response, and human rights documentation.

Aaron Huslage, Director of Systems and Security at Meedan says, “Meedan and Cloudflare both share a vision of a more equitable, safer Internet. We were proud to be a founding member of Project Galileo in 2014 and support the work that program has done to protect Human Rights Defenders around the world. Closer to home Cloudflare helps our employees be more secure and productive when creating and distributing our open source software.”

Organization of American States

The Organization of American States is the world’s oldest regional organization, dating back to the First International Conference of American States, held in Washington, D.C., from October 1889 to April 1890. Its 35 members focus on four main pillars — democracy, human rights, security, and development. It serves as a home for multilateral dialogue on topics such as the rights of indigenous peoples, territorial disputes, and regional goals for education.

“The partnership with Cloudflare will help the Organization of American States (OAS) democratize best-in-class security to modernize and strengthen our internal cybersecurity posture with a Zero Trust approach, delivered in the cloud, without sacrificing our workforce performance,” said Andrew Vanjani, OAS Chief Information Officer.

How do I get started?

First, we want to thank all of our civil society partners that we work alongside to offer Cloudflare protection and work with us to extend even more products to organizations around the world. If you are an organization looking for protection under Project Galileo, please visit our website: cloudflare.com/galileo.

Zero trust with Kafka

Post Syndicated from Grab Tech original https://engineering.grab.com/zero-trust-with-kafka

Introduction

Grab’s real-time data platform team, also known as Coban, has been operating large-scale Kafka clusters for all Grab verticals, with a strong focus on ensuring best-in-class performance and 99.99% availability.

Security has always been one of Grab’s top priorities and as fraudsters continue to evolve, there is an increased need to continue strengthening the security of our data streaming platform. One of the ways of doing this is to move from a pure network-based access control to state-of-the-art security and zero trust by default, such as:

  • Authentication: The identity of any remote systems – clients and servers – is established and ascertained first, prior to any further communications.
  • Authorisation: Access to Kafka is granted based on the principle of least privilege; no access is given by default. Kafka clients are associated with the whitelisted Kafka topics and permissions – consume or produce – they strictly need. Also, granted access is auditable.
  • Confidentiality: All in-transit traffic is encrypted.

Solution

We decided to use mutual Transport Layer Security (mTLS) for authentication and encryption. mTLS enables clients to authenticate servers, and servers to reciprocally authenticate clients.

Kafka supports other authentication mechanisms, like OAuth, or Salted Challenge Response Authentication Mechanism (SCRAM), but we chose mTLS because it is able to verify the peer’s identity offline. This verification ability means that systems do not need an active connection to an authentication server to ascertain the identity of a peer. This enables operating in disparate network environments, where all parties do not necessarily have access to such a central authority.
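To make the mutual part concrete, here is a minimal sketch using Python's standard `ssl` module (illustrative only; Coban's actual client SDK is in Golang). Both peers trust the same Root CA as their local trust anchor and present their own certificate, and verification happens offline against that CA:

```python
import ssl

def mtls_context(side, ca_path=None, cert_path=None, key_path=None):
    # Both peers trust the same Root CA and present their own certificate;
    # the peer's identity is verified locally against that CA, with no
    # round trip to a central authentication server.
    proto = ssl.PROTOCOL_TLS_SERVER if side == "server" else ssl.PROTOCOL_TLS_CLIENT
    ctx = ssl.SSLContext(proto)
    ctx.verify_mode = ssl.CERT_REQUIRED  # a server context now *requires* a client cert
    if ca_path:
        ctx.load_verify_locations(cafile=ca_path)  # offline trust anchor: the Root CA
    if cert_path:
        ctx.load_cert_chain(cert_path, key_path)   # this peer's own identity
    return ctx
```

In a real deployment the paths would point at the Vault-issued certificates described below; they are left as parameters here so the sketch stays self-contained.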

We opted for Hashicorp Vault and its PKI engine to dynamically generate clients and servers’ certificates. This enables us to enforce the usage of short-lived certificates for clients, which is a way to mitigate the potential impact of a client certificate being compromised or maliciously shared. We said zero trust, right?

For authorisation, we chose Policy-Based Access Control (PBAC), a more scalable solution than Role-Based Access Control (RBAC), and the Open Policy Agent (OPA) as our policy engine, for its wide community support.

To integrate mTLS and the OPA with Kafka, we leveraged Strimzi, the Kafka on Kubernetes operator. In a previous article, we have alluded to Strimzi and hinted at how it would help with scalability and cloud agnosticism. Built-in security is undoubtedly an additional driver of our adoption of Strimzi.

Server authentication

Figure 1 – Server authentication process for internal cluster communications

We first set up a single Root Certificate Authority (CA) for each environment (staging, production, etc.). This Root CA, in blue on the diagram, is securely managed by the Hashicorp Vault cluster. Note that the colors of the certificates, keys, signing arrows, and signatures are consistent across all the diagrams in this article.

To secure the cluster’s internal communications, like the communications between the Kafka broker and Zookeeper pods, Strimzi sets up a Cluster CA, which is signed by the Root CA (step 1). The Cluster CA is then used to sign the individual Kafka broker and Zookeeper certificates (step 2). Lastly, the Root CA’s public certificate is imported into the truststores of both the Kafka broker and Zookeeper (step 3), so that all pods can mutually verify their certificates when authenticating with one another.

Strimzi’s embedded Cluster CA dynamically generates valid individual certificates when spinning up new Kafka and Zookeeper pods. The signing operation (step 2) is handled automatically by Strimzi.

For client access to Kafka brokers, Strimzi creates a different set of intermediate CA and server certificates, as shown in the next diagram.

Figure 2 – Server authentication process for client access to Kafka brokers

The same Root CA from Figure 1 now signs a different intermediate CA, which the Strimzi community calls the Client CA (step 1). This naming is misleading since it does not actually sign any client certificates, but only the server certificates (step 2) that are set up on the external listener of the Kafka brokers. These server certificates are for the Kafka clients to authenticate the servers. This time, the Root CA’s public certificate will be imported into the Kafka Client truststore (step 3).

Client authentication

Figure 3 – Client authentication process

For client authentication, the Kafka client first needs to authenticate to Hashicorp Vault and request an ephemeral certificate from the Vault PKI engine (step 1). Vault then issues a certificate and signs it using its Root CA (step 2). With this certificate, the client can now authenticate to Kafka brokers, who will use the Root CA’s public certificate already in their truststore, as previously described (step 3).
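From the client's side, steps 1 and 2 can be sketched against Vault's HTTP API, where the PKI engine issues certificates via `POST /v1/<mount>/issue/<role>`. This is an illustrative Python sketch (the real Coban SDK is in Golang), and the mount path, role name, and common name below are hypothetical:

```python
import json
import urllib.request

def issue_cert(vault_addr, token, role, common_name, ttl="1h",
               urlopen=urllib.request.urlopen):
    # Ask the Vault PKI engine for a short-lived (ephemeral) certificate.
    # The Vault token comes from a prior auth step (e.g. AWS or Kubernetes auth).
    req = urllib.request.Request(
        f"{vault_addr}/v1/pki/issue/{role}",
        data=json.dumps({"common_name": common_name, "ttl": ttl}).encode(),
        headers={"X-Vault-Token": token},
        method="POST",
    )
    with urlopen(req) as resp:
        data = json.load(resp)["data"]
    # Keep these in memory only; never write them to disk.
    return data["certificate"], data["private_key"]
```

The injectable `urlopen` parameter is just a convenience for testing the sketch without a live Vault.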

CA tree

Putting together the three different authentication processes we have just covered, the CA tree now looks like this. Note that this is a simplified view for a single environment, a single cluster, and two clients only.

Figure 4 – Complete certificate authority tree

As mentioned earlier, each environment (staging, production, etc.) has its own Root CA. Within an environment, each Strimzi cluster has its own pair of intermediate CAs: the Cluster CA and the Client CA. At the leaf level, the Zookeeper and Kafka broker pods each have their own individual certificates.

On the right side of the diagram, each Kafka client can get an ephemeral certificate from Hashicorp Vault whenever they need to connect to Kafka. Each team or application has a dedicated Vault PKI role in Hashicorp Vault, restricting what can be requested for its certificate (e.g., Subject, TTL, etc.).
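The trust relationships in Figure 4 can be modelled as a toy chain walk: each certificate names its issuer, and a verifier climbs the chain until it reaches a self-signed certificate, which must be a root already in its truststore. This is illustrative only; real X.509 validation also checks signatures, expiry, and key usage.

```python
def chains_to_trusted_root(cert, certs_by_subject, trusted_roots):
    # Walk issuer links upward until we hit a self-signed certificate,
    # then check that it is one of the roots in our truststore.
    while cert["issuer"] != cert["subject"]:
        cert = certs_by_subject[cert["issuer"]]
    return cert["subject"] in trusted_roots

# Simplified version of Figure 4: one environment, one cluster, one broker.
certs = {
    "root-ca-staging": {"subject": "root-ca-staging", "issuer": "root-ca-staging"},
    "cluster-ca":      {"subject": "cluster-ca",      "issuer": "root-ca-staging"},
    "kafka-broker-0":  {"subject": "kafka-broker-0",  "issuer": "cluster-ca"},
}
assert chains_to_trusted_root(certs["kafka-broker-0"], certs, {"root-ca-staging"})
```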

Strimzi deployment

We heavily use Terraform to manage and provision our Kafka and Kafka-related components. This enables us to quickly and reliably spin up new clusters and perform cluster scaling operations.

Under the hood, Strimzi Kafka deployment is a Kubernetes deployment. To increase the performance and the reliability of the Kafka cluster, we create dedicated Kubernetes nodes for each Strimzi Kafka broker and each Zookeeper pod, using Kubernetes taints and tolerations. This ensures that all resources of a single node are dedicated solely to either a single Kafka broker or a single Zookeeper pod.

We also decided to go with a single Kafka cluster per Kubernetes cluster to make management easier.

Client setup

Coban provides backend microservice teams from all Grab verticals with a popular Kafka SDK in Golang, to standardise how teams utilise Coban Kafka clusters. Adding mTLS support mostly boils down to upgrading our SDK.

Our enhanced SDK provides a default mTLS configuration that works out of the box for most teams, while still allowing customisation, e.g., for teams that have their own Hashicorp Vault Infrastructure for compliance reasons. Similarly, clients can choose among various Vault auth methods such as AWS or Kubernetes to authenticate to Hashicorp Vault, or even implement their own logic for getting a valid client certificate.

To mitigate the potential risk of a user maliciously sharing their application’s certificate with other applications or users, we limit the maximum Time-To-Live (TTL) for any given certificate. This also removes the overhead of maintaining a Certificate Revocation List (CRL). Additionally, our SDK stores the certificate and its associated private key in memory only, never on disk, hence reducing the attack surface.

In our case, Hashicorp Vault is a dependency. To prevent it from reducing the overall availability of our data streaming platform, we have added two features to our SDK – a configurable retry mechanism and automatic renewal of clients’ short-lived certificates when two thirds of their TTL is reached. The upgraded SDK also produces new metrics around this certificate renewal process, enabling better monitoring and alerting.
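Those two SDK features can be sketched as follows (the real SDK is in Golang; the names here are invented for illustration):

```python
import random
import time

def renewal_deadline(issued_at, ttl_seconds, fraction=2 / 3):
    # Renew once two thirds of the TTL has elapsed, leaving the final
    # third as a buffer in case Vault is temporarily unreachable.
    return issued_at + ttl_seconds * fraction

def fetch_with_retries(fetch_cert, attempts=5, base_delay=0.5):
    # Configurable retry with exponential backoff and jitter, so a brief
    # Vault outage does not cascade into a data-platform outage.
    for attempt in range(attempts):
        try:
            return fetch_cert()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt * random.uniform(0.5, 1.5))
```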

Authorisation

Figure 5 – Authorisation process before a client can access a Kafka record

For authorisation, we set up the Open Policy Agent (OPA) as a standalone deployment in the Kubernetes cluster, and configured Strimzi to integrate the Kafka brokers with that OPA.

OPA policies – written in the Rego language – describe the authorisation logic. They are created in a GitLab repository along with the authorisation rules, called data sources (step 1). Whenever there is a change, a GitLab CI pipeline automatically creates a bundle of the policies and data sources, and pushes it to an S3 bucket (step 2). From there, it is fetched by the OPA (step 3).

When a client – identified by its TLS certificate’s Subject – attempts to consume or produce a Kafka record (step 4), the Kafka broker pod first issues an authorisation request to the OPA (step 5) before processing the client’s request. The outcome of the authorisation request is then cached by the Kafka broker pod to improve performance.
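The broker-side caching can be sketched as a memoised decision function (illustrative; the actual authoriser is the opa-kafka-plugin running inside the broker):

```python
def make_authorizer(query_opa):
    # Memoise OPA decisions per (principal, topic, operation), as the broker
    # does, so the hot produce/consume path skips the network round trip.
    # Negative outcomes are cached too.
    cache = {}
    def allowed(principal, topic, operation):
        key = (principal, topic, operation)
        if key not in cache:
            cache[key] = query_opa(principal, topic, operation)
        return cache[key]
    return allowed
```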

As the core component of the authorisation process, the OPA is deployed with the same high availability as the Kafka cluster itself, i.e. spread across the same number of Availability Zones. We also decided to go with one dedicated OPA per Kafka cluster instead of a single global OPA shared between multiple clusters, to reduce the blast radius of any OPA incident.

For monitoring and alerting around authorisation, we contributed to the open-source opa-kafka-plugin project to enable the OPA authoriser to expose metrics. Our contribution allows us to monitor various aspects of the OPA, such as the number of authorised and unauthorised requests, as well as cache hit and miss rates. We also set up alerts for suspicious activity, such as unauthorised requests.

Finally, as a platform team, we need to make authorisation a scalable, self-service process. Thus, we rely on the Git repository’s permissions to let Kafka topics’ owners approve the data source changes pertaining to their topics.

Teams who need their applications to access a Kafka topic would write and submit a JSON data source as simple as this:

{
  "example_topic": {
    "read": [
      "clientA.grab",
      "clientB.grab"
    ],
    "write": [
      "clientB.grab"
    ]
  }
}

GitLab CI unit tests and business logic checks are set up in the Git repository to ensure that the submitted changes are valid. After that, the change would be submitted to the topic’s owner for review and approval.
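A sketch of the kind of shape check such a CI job might run against a submitted data source (hypothetical; the post does not detail the actual pipeline checks):

```python
import json

def validate_data_source(raw):
    # Every topic maps "read"/"write" to lists of client principals
    # (the Subject of each client's TLS certificate).
    data = json.loads(raw)
    for topic, acl in data.items():
        unknown = set(acl) - {"read", "write"}
        if unknown:
            raise ValueError(f"{topic}: unknown operations {sorted(unknown)}")
        for op, clients in acl.items():
            if not all(isinstance(c, str) and c for c in clients):
                raise ValueError(f"{topic}/{op}: principals must be non-empty strings")
    return data

doc = '{"example_topic": {"read": ["clientA.grab", "clientB.grab"], "write": ["clientB.grab"]}}'
acls = validate_data_source(doc)
assert "clientA.grab" in acls["example_topic"]["read"]
```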

What’s next?

The performance impact of this security design is significant compared to unauthenticated, unauthorised, plaintext Kafka. We observed a drop in throughput, mostly due to the low performance of encryption and decryption in Java, and are currently benchmarking different encryption ciphers to mitigate this.

Also, on authorisation, our current PBAC design is pretty static, with a list of applications granted access for each topic. In the future, we plan to move to Attribute-Based Access Control (ABAC), creating dynamic policies based on teams and topics’ metadata. For example, teams could be granted read and write access to all of their own topics by default. Leveraging a versatile component such as the OPA as our authorisation controller enables this evolution.

Join us

Grab is the leading superapp platform in Southeast Asia, providing everyday services that matter to consumers. More than just a ride-hailing and food delivery app, Grab offers a wide range of on-demand services in the region, including mobility, food, package and grocery delivery services, mobile payments, and financial services across 428 cities in eight countries.

Powered by technology and driven by heart, our mission is to drive Southeast Asia forward by creating economic empowerment for everyone. If this mission speaks to you, join our team today!

Gateway + CASB: alphabetti spaghetti that spells better SaaS security

Post Syndicated from Alex Dunbrack original https://blog.cloudflare.com/gateway-casb-in-action/

This post is also available in 简体中文 and Español.

Back in June 2022, we announced an upcoming feature that would allow Cloudflare Zero Trust users to easily create prefilled HTTP policies in Cloudflare Gateway (Cloudflare’s Secure Web Gateway solution) from issues identified by CASB, a new Cloudflare product that connects, scans, and monitors your SaaS apps – like Google Workspace and Microsoft 365 – for security issues.

With Cloudflare’s 12th Birthday Week nearing its end, we wanted to highlight, in true Cloudflare fashion, this new feature in action.

What is CASB? What is Gateway?

To quickly recap, Cloudflare’s API-driven CASB offers IT and security teams a fast, yet effective way to connect, scan, and monitor their SaaS apps for security issues, like file exposures, misconfigurations, and Shadow IT. In just a few clicks, users can see an exhaustive list of security issues that may be affecting the security of their SaaS apps, including Google Workspace, Microsoft 365, Slack, and GitHub.

Cloudflare Gateway, our Secure Web Gateway (SWG) offering, allows teams to monitor and control the outbound connections originating from endpoint devices. For example, don’t want your employees to access gambling and social media websites on company devices? Just block access to them in our easy-to-use Zero Trust dashboard.

The problems at hand

As we highlighted in our first post, Shadow IT – or unapproved third-party applications being used by employees – continues to be one of the biggest pain points for IT administrators in the cloud era. When employees grant access to external services without the consent of their IT or security department, they risk granting bad actors access to some of the company’s most sensitive data stored in these SaaS applications.

Another major issue affecting the security of data stored in the cloud is file exposure in the form of oversharing. When an employee shares a highly sensitive Google Doc to someone via a public link, would your IT or security team know about it? And even if they do, do they have a way to minimize the risk and block access to it?

With these two products now being used by customers around the world, we’re excited to show that visibility and basic awareness of SaaS security issues don’t have to be the end of the story. What are admins supposed to do next?

Gateway + CASB: blocking identified threats in three (yes, three) clicks

Now, when CASB discovers a problem (which we call a Finding), it’s possible to create a corresponding Gateway policy in as few as three clicks.

This means users can now automatically generate fine-grained Gateway policies to prevent specific inappropriate behavior from continuing, while still allowing for expected access and usage that meets company policy.

Example 1: Block employees from uploading to their personal Google Drive

A common use case we heard about during CASB’s beta program was the tendency for employees to upload corporate data – documents, spreadsheets, files, folders, etc. – to their personal Google Drive (or similar) accounts, presenting the risk of intellectual property making its way out of a secure corporate environment. With Gateway and CASB working together, IT administrators can now directly block upload activity to anywhere other than their corporate Google Drive or Microsoft OneDrive tenant.

Example 2: Restrict repeat oversharers from uploading and downloading files

A great existing use case of Cloudflare CASB has been the ability to identify employees that are habitual oversharers of files in their corporate Google or Microsoft tenants – sharing files to anyone that has the link, sharing files with emails outside their company, etc.

Now when these employees are identified, CASB admins can create Gateway policies to block specific users from further upload and download activity until the behavior has been addressed.

Example 3: Prevent file uploads to unapproved, Shadow IT applications

To address the concern of Shadow IT, CASB-originating Gateway policies can be customized, including being able to restrict upload and download events to only the SaaS applications your organization uses. Let’s say your company uses Box as its file storage solution; in just a few clicks, you can use an identified CASB Finding to create a Gateway policy that blocks activity to any file sharing application other than Box. This gives IT and security admins the peace of mind that their files will only end up in the approved cloud application they use.

Get started today with Cloudflare Zero Trust

Ultimately, the power of Cloudflare Zero Trust comes from its existence as a single, unified platform that draws strength from its combination of products and features. As we continue our work towards bringing these new and exciting offerings to market, we believe that it’s just as important to highlight their synergies and associated use cases, this time from Cloudflare Gateway and CASB.

For those not already using Cloudflare Zero Trust, don’t hesitate to get started today – see the platform yourself with 50 free seats by signing up here.

For those who already know and love Cloudflare Zero Trust, reach out to your Cloudflare sales contact to get started with CASB and Gateway. We can’t wait to hear what interesting and exciting use cases you discover from this new cross-product functionality.

How Cloudflare implemented hardware keys with FIDO2 and Zero Trust to prevent phishing

Post Syndicated from Evan Johnson original https://blog.cloudflare.com/how-cloudflare-implemented-fido2-and-zero-trust/

Cloudflare’s security architecture a few years ago was a classic “castle and moat” VPN architecture. Our employees would use our corporate VPN to connect to all the internal applications and servers to do their jobs. We enforced two-factor authentication with time-based one-time passcodes (TOTP), using an authenticator app like Google Authenticator or Authy, when logging into the VPN, but only a few internal applications had a second layer of auth. That architecture has a strong-looking exterior, but the security model is weak. We recently detailed the mechanics of a phishing attack we prevented, which walks through how attackers can phish applications that are “secured” with second-factor authentication methods like TOTP. Happily, we had long since done away with TOTP, replacing it with hardware security keys and Cloudflare Access. This blog details how we did that.

The solution to the phishing problem is a multi-factor authentication (MFA) protocol called FIDO2/WebAuthn. Today, all Cloudflare employees log in with FIDO2 as their secure multi-factor and authenticate to our systems using our own Zero Trust products. Our newer architecture is phish-proof and allows us to more easily enforce least-privilege access control.

A little about the terminology of security keys and what we use

In 2018, we knew we wanted to migrate to phishing-resistant MFA. We had seen evilginx2 and the growing maturity of techniques for phishing push-based mobile authenticators and TOTP. The only phishing-resistant MFA that withstood social engineering and credential-stealing attacks was security keys implementing the FIDO standards. FIDO-based MFA introduces new terminology, such as FIDO2, WebAuthn, hard(ware) keys, security keys, and specifically the YubiKey (a well-known brand of hardware key made by Yubico), which we will reference throughout this post.

WebAuthn refers to the web authentication standard, and we wrote in depth about how that protocol works when we released support for security keys in the Cloudflare dashboard.

CTAP1 (U2F) and CTAP2 refer to the Client to Authenticator Protocol, which details how software or hardware devices interact with the platform performing the WebAuthn protocol.

FIDO2 is the collection of these two protocols being used for authentication. The distinctions aren’t important, but the nomenclature can be confusing.

The most important thing to know is that all of these protocols and standards were developed to create open authentication protocols that are phishing-resistant and can be implemented with a hardware device. In software, they are implemented with Face ID, Touch ID, Windows Hello, or similar. In hardware, a YubiKey or other separate physical device is used for authentication over USB, Lightning, or NFC.

FIDO2 is phishing-resistant because it implements a challenge/response that is cryptographically secure, and the challenge protocol incorporates the specific website or domain the user is authenticating to. When logging in, the security key will produce a different response on example.net than when the user is legitimately trying to log in on example.com.
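A toy model makes the domain binding concrete. Here an HMAC stands in for WebAuthn's public-key signature (a deliberate simplification); the point is that the origin is part of the signed message, so a response minted for a lookalike domain never verifies on the real one:

```python
import hashlib
import hmac
import os

def toy_assertion(device_key, challenge, origin):
    # The origin is mixed into the signed message, so a response produced
    # for a phishing domain is useless on the legitimate one.
    msg = hashlib.sha256(origin.encode()).digest() + challenge
    return hmac.new(device_key, msg, hashlib.sha256).hexdigest()

key, challenge = os.urandom(32), os.urandom(16)
assert toy_assertion(key, challenge, "https://example.com") != \
       toy_assertion(key, challenge, "https://example.net")
```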

At Cloudflare, we’ve issued multiple types of security keys to our employees over the years, but we currently issue two different FIPS-validated security keys to all employees. The first key is a YubiKey 5 Nano or YubiKey 5C Nano that is intended to stay in a USB slot on our employee laptops at all times. The second is the YubiKey 5 NFC or YubiKey 5C NFC that works on desktops and on mobile either through NFC or USB-C.

In late 2018 we distributed security keys at a whole company event. We asked all employees to enroll their keys, authenticate with them, and ask questions about the devices during a short workshop. The program was a huge success, but there were still rough edges and applications that didn’t work with WebAuthn. We weren’t ready for full enforcement of security keys and needed some middle-ground solution while we worked through the issues.

The beginning: selective security key enforcement with Cloudflare Zero Trust

We have thousands of applications and servers we are responsible for maintaining, which were protected by our VPN. We started migrating all of these applications to our Zero Trust access proxy at the same time that we issued our employees their set of security keys.

Cloudflare Access allowed our employees to securely access sites that were once protected by the VPN. Each internal service would validate a signed credential to authenticate a user and ensure the user had signed in with our identity provider. Cloudflare Access was necessary for our rollout of security keys because it gave us a tool to selectively enforce the first few internal applications that would require authenticating with a security key.

We used Terraform when onboarding our applications to our Zero Trust products and this is the Cloudflare Access policy where we first enforced security keys. We set up Cloudflare Access to use OAuth2 when integrating with our identity provider and the identity provider informs Access about which type of second factor was used as part of the OAuth flow.

In our case, swk is a proof of possession of a security key. If someone logged in and didn’t use their security key they would be shown a helpful error message instructing them to log in again and press on their security key when prompted.
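Conceptually, the enforcement reduces to a claim check like this (illustrative; we assume the identity provider reports authentication methods in an OIDC amr-style claim, and the exact claim name varies by provider):

```python
def used_security_key(claims):
    # 'swk' = proof of possession of a security key, as reported by the
    # identity provider during the OAuth flow.
    return "swk" in claims.get("amr", [])

assert used_security_key({"amr": ["pwd", "swk"]})
assert not used_security_key({"amr": ["pwd", "otp"]})  # TOTP login: denied
```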

Selective enforcement instantly changed the trajectory of our security key rollout. We began enforcement on a single service on July 29, 2020, and authentication with security keys massively increased over the following two months. This step was critical to give our employees an opportunity to familiarize themselves with the new technology. A window of selective enforcement should be at least a month to account for people on vacation, but in hindsight it doesn’t need to be much longer than that.

What other security benefits did we get from moving our applications off our VPN and onto our Zero Trust products? For legacy applications, or applications that don’t implement SAML, this migration was necessary to enforce role-based access control and the principle of least privilege. A VPN authenticates your network traffic, but your applications have no idea who that traffic belongs to. Our applications struggled to enforce multiple levels of permissions, and each had to re-invent its own auth scheme.

When we onboarded to Cloudflare Access we created groups to enforce RBAC and tell our applications what permission level each person should have.

Here’s a site where only members of the ACL-CFA-CFDATA-argo-config-admin-svc group have access. It enforces that the employee used their security key when logging in, and no complicated OAuth or SAML integration was needed for this. We have over 600 internal sites using this same pattern and all of them enforce security keys.

The end of optional: the day Cloudflare dropped TOTP completely

In February 2021, our employees started to report social engineering attempts to our security team. They were receiving phone calls from someone claiming to be in our IT department, and we were alarmed. We decided to begin requiring security keys to be used for all authentication to prevent any employees from being victims of the social engineering attack.

After disabling all other forms of MFA (SMS, TOTP, etc.) except WebAuthn, we were officially FIDO2-only. “Soft token” (TOTP) isn’t perfectly at zero on this graph, though. That’s because those who lose their security keys or get locked out of their accounts go through a secure offline recovery process, where logging in is facilitated through an alternate method. Best practice is to distribute multiple security keys to each employee, so they have a backup in case this situation arises.

Now that all employees are using their YubiKeys for phishing-resistant MFA, are we finished? Well, what about SSH and non-HTTP protocols? We wanted a single, unified approach to identity and access management, so bringing security keys to arbitrary other protocols was our next consideration.

Using security keys with SSH

To bring security keys to SSH connections, we deployed Cloudflare Tunnel to all of our production infrastructure. Cloudflare Tunnel integrates seamlessly with Cloudflare Access regardless of the protocol transiting the tunnel, and running a tunnel requires the tunnel client, cloudflared. This meant we could deploy the cloudflared binary to all of our infrastructure, create a tunnel to each machine, and create Cloudflare Access policies requiring security keys; SSH connections would then require security keys through Cloudflare Access.

In practice these steps are less intimidating than they sound, and the Zero Trust developer docs have a fantastic tutorial on how to do this. Each of our servers has a configuration file required to start the tunnel. Systemd invokes cloudflared, which uses this (or a similar) configuration file when starting the tunnel:

tunnel: 37b50fe2-a52a-5611-a9b1-ear382bd12a6
credentials-file: /root/.cloudflared/37b50fe2-a52a-5611-a9b1-ear382bd12a6.json

ingress:
  - hostname: <identifier>.ssh.cloudflare.com
    service: ssh://localhost:22
  - service: http_status:404
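For completeness, a systemd unit that invokes cloudflared with a configuration file like the one above might look roughly like this. The unit name and paths are assumptions for illustration; cloudflared can also install a service for you with `cloudflared service install`:

```
# /etc/systemd/system/cloudflared.service — illustrative only
[Unit]
Description=Cloudflare Tunnel
After=network-online.target

[Service]
ExecStart=/usr/local/bin/cloudflared --config /etc/cloudflared/config.yml tunnel run
Restart=on-failure

[Install]
WantedBy=multi-user.target
```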

When an operator needs to SSH into our infrastructure they use the ProxyCommand SSH directive to invoke cloudflared, authenticate using Cloudflare Access, and then forward the SSH connection through Cloudflare. Our employees’ SSH configurations have an entry that looks kind of like this, and can be generated with a helper command in cloudflared:

Host *.ssh.cloudflare.com
    ProxyCommand /usr/local/bin/cloudflared access ssh --hostname %h

It’s worth noting that OpenSSH has supported FIDO2 since version 8.2, but we’ve found there are benefits to having a unified approach to access control where all access control lists are maintained in a single place.

What we’ve learned and how our experience can help you

There’s no question after the past few months that the future of authentication is FIDO2 and WebAuthn. In total this took us a few years, and we hope these learnings can prove helpful to other organizations who are looking to modernize with FIDO-based authentication.

If you’re interested in rolling out security keys at your organization, or you’re interested in Cloudflare’s Zero Trust products, reach out to [email protected]. Although we’re happy that our preventative efforts helped us resist the latest round of phishing and social engineering attacks, our security team is still growing to help prevent whatever comes next.

Click Here! (safely): Automagical Browser Isolation for potentially unsafe links in email

Post Syndicated from Joao Sousa Botto original https://blog.cloudflare.com/safe-email-links/

We’re often told not to click on ‘odd’ links in email, but what choice do we really have? With the volume of email and the myriad SaaS products that companies use, it’s almost impossible for employees to distinguish a good link before clicking on it. And that’s before attackers go about making links harder to inspect and hiding their URLs behind tempting “Confirm” and “Unsubscribe” buttons.

We need to let end users click on links, with a safety net for when they unwittingly click on something malicious (let’s be honest, it’s bound to happen eventually). That safety net is Cloudflare’s Email Link Isolation.

With Email Link Isolation, when a user clicks on a suspicious link — one that email security hasn’t identified as ‘bad’, but is still not 100% sure it’s ‘good’ — they won’t immediately be taken to that website. Instead, the user first sees an interstitial page recommending extra caution with the website they’ll visit, especially if asked for passwords or personal details.

From there, the user may choose not to visit the webpage, or to proceed and open it in a remote isolated browser that runs on Cloudflare’s global network rather than on the user’s local machine. This helps protect both the user and the company.

The user experience in our isolated browser is virtually indistinguishable from using one’s local browser (we’ll talk about why below), but untrusted and potentially malicious payloads will execute away from the user’s computer and your corporate network.

In summary, this solution:

  • Keeps users alert to prevent credential theft and account takeover
  • Automatically blocks dangerous downloads
  • Prevents malicious scripts from executing on the user’s device
  • Protects against zero-day exploits on the browser

How can I try it

Area 1 is Cloudflare’s email security solution. It protects organizations from the full range of email attack types (URLs, payloads, BEC), vectors (email, web, network), and attack channels (external, internal, trusted partners) by enforcing multiple layers of protection before, during, and after the email hits the inbox. Today it adds Email Link Isolation to the protections it offers.

If you are a Cloudflare Area 1 customer you can request access to the Email Link Isolation beta today. We have had Email Link Isolation deployed to all Cloudflare employees for the last four weeks and are ready to start onboarding customers.

During the beta it will be available for free on all plans. After the beta it will still be included at no extra cost with our PhishGuard plan.

Under the hood

To create Email Link Isolation we used a few ingredients that are quite special to Cloudflare. It may seem complicated and, in a sense, the protection is complex, but we designed it so that the user experience is fast and safe, with clear options on how to proceed.

1. Find potentially unsafe domains

First, we created a constantly updating list of domains that Cloudflare’s DNS resolver recently saw for the first time, or that are otherwise potentially unsafe (leveraging classifiers from Cloudflare Gateway and other products). These are domains that would be too disruptive for the organization to block outright, but that should still be navigated with extra caution.

For example, people acquire domains and create new businesses every day. There’s nothing wrong with that – quite the opposite. However, attackers often set up or acquire websites serving legitimate content and then, days or weeks later, send a link to their intended targets. The emails flow through as benign, and the attacker weaponizes the website once the emails are already sitting in people’s inboxes. Blocking all emails with links to new websites would surely cause users to miss important communications; delivering the emails while making the links safe to click is a much better approach.

There is also hosting infrastructure from large cloud providers, such as Microsoft or Google, that prevents crawling and scanning. These services are used in our day-to-day business, but attackers may deploy malicious content there. You wouldn’t want to block all emails with links to Microsoft SharePoint, for example, but it’s certainly safer to use Email Link Isolation on links that point outside your organization.

Attackers are constantly experimenting with new ways of looking legitimate to their targets, and that’s why relying on the early signals that Cloudflare sees makes such a big difference.
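As a toy model of the first-seen signal described above, imagine recording when each domain is first observed and treating anything unknown or too young as suspicious. The 30-day window and the in-memory map are illustrative assumptions; the production classifiers weigh many more signals:

```python
import time

SUSPICIOUS_AGE_SECONDS = 30 * 24 * 3600  # hypothetical 30-day window

first_seen = {}  # domain -> unix timestamp of first observation

def observe(domain, now=None):
    """Record the first time the resolver sees a domain."""
    now = time.time() if now is None else now
    first_seen.setdefault(domain, now)

def is_suspicious(domain, now=None):
    """Unknown or recently first-seen domains warrant extra caution."""
    now = time.time() if now is None else now
    seen = first_seen.get(domain)
    return seen is None or (now - seen) < SUSPICIOUS_AGE_SECONDS
```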

2. Rewrite links to suspicious domains

The second ingredient we want to highlight is that, as Cloudflare Area 1 processes and inspects emails for security concerns, it also checks the domain of every link against the suspicious list. If an email contains a link to a suspicious domain, Cloudflare Area 1 automatically rewrites it so that the interstitial page is shown and the link opens with Cloudflare Browser Isolation by default.
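Conceptually, the rewrite swaps the original URL for an interstitial URL that carries the destination along as a parameter. A rough Python sketch; the interstitial hostname, parameter name, and domain list are invented for illustration:

```python
from urllib.parse import quote, urlparse

SUSPICIOUS_DOMAINS = {"newly-registered.example"}      # fed by the classifiers
INTERSTITIAL = "https://isolation.example/visit?url="  # hypothetical endpoint

def rewrite_link(url):
    """Wrap links to suspicious domains so they open via the interstitial page."""
    host = urlparse(url).hostname or ""
    if host in SUSPICIOUS_DOMAINS:
        return INTERSTITIAL + quote(url, safe="")
    return url
```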

Note: Rewriting email links is only possible when emails are processed inline, which is one of the options for deploying Area 1. One of the big disadvantages of any email security solution deployed as API-only is that closing this last mile gap through link rewriting isn’t a possibility.

3. Opens remotely but feels local

When a user clicks on one of these rewritten links, instead of the browser directly accessing a potential threat, our systems first check the link’s current classification (benign, suspicious, or malicious). If it’s malicious, the user is blocked from continuing to the website and sees an interstitial page explaining why. No further action is required.

If the link is suspicious, the user is offered the option to open it in an isolated browser. What happens next? The link is opened with Cloudflare Browser Isolation in a nearby Cloudflare data center (globally within 50 milliseconds of 95% of the Internet-connected population). To ensure website compatibility and security, the target website is executed entirely in a sandboxed Chromium-based browser. Finally, the website is instantly streamed back to the user as vector instructions consumed by a lightweight HTML5-compatible remoting client in the user’s preferred web browser. These safety precautions happen with no perceivable latency to the end user.
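The click-time decision described above boils down to a three-way branch on the domain's current verdict. A small sketch, with verdict and action names chosen for illustration:

```python
def handle_click(url, classify):
    """Decide what happens when a rewritten link is clicked.

    `classify` is a callable returning the domain's *current* verdict,
    so a domain reclassified after the email was delivered is handled
    with the latest information.
    """
    verdict = classify(url)
    if verdict == "malicious":
        return "block"    # interstitial explains why; no way through
    if verdict == "suspicious":
        return "isolate"  # open in a remote isolated browser
    return "open"         # confirmed benign: user's local browser
```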

Cloudflare Browser Isolation is an extremely secure remote browsing experience that feels just like local browsing. Delivering this is only possible by serving isolated browsers on a low-latency global network with our unique vector-based streaming technology. This architecture is different from legacy remote browser isolation solutions, which rely on fragile and insecure DOM-scrubbing, or on bandwidth-intensive, high-latency pixel-pushing techniques hosted in a handful of distant data centers.

4. Reassess (always learning)

Last but not least, another ingredient that makes Email Link Isolation particularly effective is that behind the scenes our services are constantly reevaluating domains and updating their reputation in Cloudflare’s systems.

When a domain on our suspicious list is confirmed to be benign, all links to it can automatically start opening with the user’s local browser instead of with Cloudflare Browser Isolation.

Similarly, if a domain on the suspicious list is identified as malicious, all links to that domain can be immediately blocked from opening. Our services are constantly learning and acting accordingly.

It’s been four weeks since we deployed Email Link Isolation to all our 3,000+ Cloudflare employees. Here’s what we saw:

  • 100,000 link rewrites per week on spam and malicious emails. Such emails are already blocked server-side by Area 1, so users never see them. It’s still safer to rewrite these links, as the emails may be released from quarantine on user request.
  • 2,500 link rewrites per week on bulk emails. These are mostly graymail: commercial/bulk communications the user opted into, which may end up in the user’s spam folder.
  • 1,000 link rewrites per week on emails that don’t fit any of the categories above — the ones that normally reach users’ inboxes. These are almost certainly benign, but there’s still enough doubt to warrant a link rewrite.
  • 25 clicks on rewritten links per week (up to six per day).

As a testament to the efficacy of Cloudflare Area 1, 25 suspicious link clicks per week across a universe of more than 3,000 employees is a very low number. And thanks to Email Link Isolation, even those clicks were protected against exploits.

Better together with Cloudflare Zero Trust

In future iterations, administrators will be able to connect Cloudflare Area 1 to their Cloudflare Zero Trust account and apply isolation policies, DLP (Data Loss Prevention) controls, and in-line CASB (cloud access security broker) policies to email link isolated traffic.

We are starting our beta today. If you’re interested in trying Email Link Isolation and starting to feel safer with your email experience, sign up here.

The (hardware) key to making phishing defense seamless with Cloudflare Zero Trust and Yubico

Post Syndicated from David Harnett original https://blog.cloudflare.com/making-phishing-defense-seamless-cloudflare-yubico/

This post is also available in 简体中文, Français, 日本語 and Español.

Hardware keys provide the best authentication security and are phish-proof. But customers ask us how to implement them and which security keys they should buy. Today we’re introducing an exclusive program for Cloudflare customers that makes hardware keys more accessible and economical than ever. This program is made possible through a new collaboration with Yubico, the industry’s leading hardware security key vendor, and provides Cloudflare customers with exclusive “Good for the Internet” pricing.

Yubico Security Keys are available today for any Cloudflare customer, and they easily integrate with Cloudflare’s Zero Trust service. That service is open to organizations of any size, from a family protecting a home network to the largest employers on the planet. Any Cloudflare customer can sign in to the Cloudflare dashboard today and order hardware security keys for as low as $10 per key.

In July 2022, thanks to its use of Cloudflare Zero Trust paired with hardware security keys, Cloudflare prevented a breach from an SMS phishing attack that targeted more than 130 companies. Those keys were YubiKeys, and this new collaboration with Yubico, the maker of YubiKeys, removes barriers for organizations of any size in deploying hardware keys.

Why hardware security keys?

Organizations need to ensure that only the right users are connecting to their sensitive resources – whether those destinations are self-hosted web applications, SaaS tools, or services that rely on arbitrary TCP connections and UDP streams. Users traditionally proved their identity with a username and password, but phishing attacks can deceive users into giving up both of those pieces of information.

In response, teams began deploying multi-factor authentication (MFA) tools to add an additional layer of security. Users needed to input their username, password, and some additional value. For example, a user might have an application on their device that generates random numbers, or they might enroll their phone number to receive a code via text message. While these MFA options do improve security, they are still vulnerable to phishing: phishing websites evolved to prompt users for their MFA codes, and attackers can steal a user’s phone number in a SIM-swap attack.

Hardware security keys provide organizations with an MFA option that cannot be phished. These keys use the WebAuthn standard to present a certificate to the authentication service to validate the key in a cryptographically secured exchange, something a phishing website cannot obtain and later spoof.
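The phishing resistance comes from origin binding: the browser embeds the origin it actually connected to in the signed client data, so an assertion harvested on a look-alike site cannot be replayed against the real one. A simplified sketch of just that check on the relying-party side (a full WebAuthn verification also validates the signature, the challenge freshness, and the authenticator data):

```python
import base64
import json

def check_client_data(client_data_json_b64, expected_origin, expected_challenge):
    """Verify the origin and challenge embedded in WebAuthn clientDataJSON.

    Only the origin-binding step of a full WebAuthn verification is shown;
    the signature check over the authenticator data is omitted for brevity.
    """
    pad = "=" * (-len(client_data_json_b64) % 4)  # restore base64url padding
    raw = base64.urlsafe_b64decode(client_data_json_b64 + pad)
    data = json.loads(raw)
    return (data.get("type") == "webauthn.get"
            and data.get("origin") == expected_origin
            and data.get("challenge") == expected_challenge)
```

A phishing page at a different origin produces client data that fails this comparison, which is exactly what makes the credential unphishable.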

Users enroll one or more keys with their identity provider and, in addition to presenting their username and password, the provider prompts for an MFA option that can include the hardware key. Every member of the team enjoys less friction by tapping on the key when they log in instead of fumbling for a code in an app. Meanwhile, security teams sleep better at night knowing their services are protected from phishing attacks.

Extending hardware security keys with Cloudflare’s Zero Trust products

While most identity providers now allow users to enroll hardware keys as an MFA option, administrators still do not have control to require that hardware keys be used. Individual users can fall back to a less secure option, like an app-based code, if they fail to present the security key itself.

We ran into this when we first deployed security keys at Cloudflare. If users could fallback to a less secure and more easily phished option like an app-based code, then so could attackers. Along with more than 10,000 organizations, we use Cloudflare’s Zero Trust products internally to, in part, secure how users connect to the resources and tools they need.

When any user needs to reach an internal application or service, Cloudflare’s network evaluates every request or connection for several signals like identity, device posture, and country. Administrators can build granular rules that only apply to certain destinations, as well. An internal administrator tool with the ability to read customer data could require a healthy corporate device, connecting from a certain country, and belonging to a user in a particular identity provider group. Meanwhile, a new marketing splash page being shared for feedback could just require identity. If we could learn from the user’s authentication whether a security key was used, as opposed to a different, less secure MFA option, then we could enforce that signal as well.

Several years ago, identity providers, hardware vendors, and security companies partnered to develop a new standard, the Authentication Method Reference (AMR), to share exactly that type of data. With AMR, identity providers can share several details about the login attempt, including the type of MFA option in use. Shortly after that announcement, we introduced the ability to build rules in Cloudflare’s Zero Trust platform to look for and enforce that signal. Now, teams of any size can build resource-based rules that can ensure that team members always use their hardware key.
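A rule that enforces this signal is essentially a membership test on the AMR claim. A sketch, assuming AMR values along the lines of RFC 8176 ("hwk" for a proof-of-possession hardware key); the exact values your identity provider emits may differ:

```python
# AMR values taken to indicate a phishing-resistant, hardware-backed login.
# "hwk" comes from RFC 8176; which values an IdP actually emits varies.
HARDWARE_KEY_METHODS = {"hwk", "webauthn"}

def hardware_key_used(claims):
    """Allow only logins whose AMR claim shows a hardware security key."""
    amr = set(claims.get("amr", []))
    return bool(amr & HARDWARE_KEY_METHODS)
```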

What are the obstacles to deploying hardware security keys?

The security of requiring something that you physically control is also the same reason that deploying hardware keys adds a layer of complexity – you need to find a way to put that physical key in the hands of your users, at scale, and make it possible for every member of your team to enroll them.

In every case, that deployment starts with purchasing hardware security keys. Compared to app-based codes, which can be free, security keys have a real cost. For some organizations, that cost is a deterrent, and they stay less secure due to that hurdle, but it is important to note that not all MFA is created equal.

For other teams, especially the organizations that are now partially or fully remote, providing those keys to end users who will never step foot in a physical office can be a challenge for IT departments. When we first deployed hardware keys at Cloudflare, we did it at our company-wide retreat. Many organizations no longer have that opportunity to physically hand out keys in a single venue or even in global offices.

Collaborating with Yubico

Birthday Week at Cloudflare has always been about removing the barriers and hurdles that keep users and teams from being more secure or faster on the Internet. As part of that goal, we’ve partnered with Yubico to continue to remove the friction in adopting a hardware key security model.

  • The offer is open to any Cloudflare customer. Cloudflare customers can claim this offer for Yubico Security Keys directly in the Cloudflare dashboard.
  • Yubico is providing Security Keys at “Good for the Internet” pricing – as low as $10 per key. Yubico will ship the keys to customers directly. The specific security keys and prices for this offer are: the Yubico Security Key NFC at $10 and the Yubico Security Key C NFC at $11.60. Customers can purchase up to 10 keys. For larger organizations, there is a second offer: 50% off the first year of a 3+ year YubiEnterprise Subscription, with no limit on the number of security keys.
  • Both Cloudflare and Yubico developer docs and support organizations will guide customers in setting up keys and integrating them with their Identity Providers and with Cloudflare’s Zero Trust service.

How to get started

You can request your own hardware keys by navigating to the dashboard and following the banner notification flow. Yubico will then email you directly, using the administrator email you have provided in your Cloudflare account. For larger organizations looking to deploy YubiKeys at scale, you can explore Yubico’s YubiEnterprise Subscription and receive a 50% discount off the first year of a 3+ year subscription.

Already have hardware security keys? If you have physical hardware keys you can begin building rules in Cloudflare Access to enforce their usage by enrolling them into an identity provider that supports AMR, like Okta or Azure AD.

Finally, if you are interested in our own journey deploying YubiKeys alongside our Zero Trust products, check out this blog post from our Director of Security, Evan Johnson, that recaps Cloudflare’s experience and what we recommend from the lessons we learned.

Bringing Zero Trust to mobile network operators

Post Syndicated from Mike Conlow original https://blog.cloudflare.com/zero-trust-for-mobile-operators/

At Cloudflare, we’re excited about the fast-approaching 5G future. Increasingly, we’ll have access to high-throughput, low-latency wireless networks wherever we are. It will make the Internet feel instantaneous, and we’ll find new uses for this connectivity, such as sensors that will help us be more productive and energy-efficient. However, this type of connectivity doesn’t have to come at the expense of security, a concern raised in this recent Wired article. Today we’re announcing the creation of a new partnership program for mobile networks—Zero Trust for Mobile Operators—to jointly solve the biggest security and performance challenges.

SASE for Mobile Networks

Every network is different, and the key to managing the complicated security environment of an enterprise network is having lots of tools in the toolbox. Most of these functions fall under the industry buzzword SASE, which stands for Secure Access Service Edge. Cloudflare’s SASE product is Cloudflare One, a comprehensive platform for network operators. It includes:

  • Magic WAN, which offers secure Network-as-a-Service (NaaS) connectivity for your data centers, branch offices, and cloud VPCs, and integrates with your legacy MPLS networks.
  • Cloudflare Access, a Zero Trust Network Access (ZTNA) service requiring strict verification for every user and every device before authorizing them to access internal resources.
  • Gateway, our Secure Web Gateway, which operates between a corporate network and the Internet to enforce security policies and protect company data.
  • A Cloud Access Security Broker (CASB), which monitors the network and external cloud services for security threats.
  • Cloudflare Area 1, an email threat detection tool that scans email for phishing, malware, and other threats.

We’re excited to partner with mobile network operators for these services because our networks and services are tremendously complementary. Let’s first think about SD-WAN (Software-Defined Wide Area Network) connectivity, which is the foundation on which much of the SASE framework rests. As an example, imagine a developer working from home developing a solution with a Mobile Network Operator’s (MNO) Internet of Things APIs. Maybe they’re developing tracking software for the number of drinks left in a soda machine, or want to track the routes for delivery trucks.

The developer at home and their fleet of devices should be on the same wide area network, securely, and at reasonable cost. What Cloudflare provides is the programmable software layer that enables this secure connectivity. The developer and the developer’s employer still need to have connectivity to the Internet at home, and for the fleet of devices. The ability to make a secure connection to your fleet of devices doesn’t do any good without enterprise connectivity, and the enterprise connectivity is only more valuable with the secure connection running on top of it. They’re the perfect match.

Once the connectivity is established, we can layer on a Zero Trust platform to ensure every user can only access a resource to which they’ve been explicitly granted permission. Any time a user wants to access a protected resource – via SSH, a cloud service, etc. – they’re challenged to authenticate with their single sign-on credentials before being allowed access. The networks we use are growing and becoming more distributed. A Zero Trust architecture enables that growth while protecting against known risks.

Edge Computing

Given the potential of low-latency 5G networks, consumers and operators are both waiting for a “killer 5G app”. Maybe it will be autonomous vehicles and virtual reality, but our bet is on a quieter revolution: moving compute – the “work” that a server needs to do to respond to a request – from big regional data centers to small city-level data centers, embedding the compute capacity inside wireless networks, and eventually even to the base of cell towers.

Cloudflare’s edge compute platform is called Workers, and it does exactly this – execute code at the edge. It’s designed to be simple. When a developer is building an API to support their product or service, they don’t want to worry about regions and availability zones. With Workers, a developer writes code they want executed at the edge, deploys it, and within seconds it’s running at every Cloudflare data center globally.

Some workloads we already see, and expect to see more of, include:

  • IoT (Internet of Things) companies implementing complex device logic and security features directly at the edge, letting them add cutting-edge capabilities without adding cost or latency to their devices.
  • eCommerce platforms storing and caching customized assets close to their visitors for improved customer experience and great conversion rates.
  • Financial data platforms, including new Web3 players, providing near real-time information and transactions to their users.
  • A/B testing and experimentation run at the edge without adding latency or introducing dependencies on the client-side.
  • Fitness-type devices tracking a user’s movement and health statistics can offload compute-heavy workloads while maintaining great speed/latency.
  • Retail applications providing fast service and a customized experience for each customer without an expensive on-prem solution.

The Cloudflare Case Studies section has additional examples from NCR, Edgemesh, BlockFi, and others on how they’re using the Workers platform. While these examples are exciting, we’re most excited about providing the platform for new innovation.

You may have seen last week we announced Workers for Platforms is now in General Availability. Workers for Platforms is an umbrella-like structure that allows a parent organization to enable Workers for their own customers. As an MNO, your focus is on providing the means for devices to send communication to clients. For IoT use cases, sending data is the first step, but the exciting potential of this connectivity is the applications it enables. With Workers for Platforms, MNOs can expose an embedded product that allows customers to access compute power at the edge.

Network Infrastructure

The complementary networks between mobile networks and Cloudflare is another area of opportunity. When a user is interacting with the Internet, one of the most important factors for the speed of their connection is the physical distance from their handset to the content and services they’re trying to access. If the data request from a user in Denver needs to wind its way to one of the major Internet hubs in Dallas, San Jose, or Chicago (and then all the way back!), that is going to be slow. But if the MNO can link to the service locally in Denver, the connection will be much faster.

One of the exciting developments with new 5G networks is the ability of MNOs to do more “local breakout”. Many MNOs are moving towards cloud-native, distributed radio access networks (RANs), which provide more flexibility to move and multiply packet cores. These packet cores are the heart of a mobile network, and all of a subscriber’s data flows through one.

For Cloudflare – with a data center presence in 275+ cities globally – a user never has to wait long for our services. We can also take it a step further. In some cases, our services are embedded within the MNO or ISP’s own network. The traffic which connects a user to a device, authorizes the connection, and securely transmits data is all within the network boundary of the MNO – it never needs to touch the public Internet, incur added latency, or otherwise compromise the performance for your subscribers.

We’re excited to partner with mobile networks because our security services work best when our customers have excellent enterprise connectivity underneath. Likewise, we think mobile networks can offer more value to their customers with our security software added on top. If you’d like to talk about how to integrate Cloudflare One into your offerings, please email us at [email protected], and we’ll be in touch!