All posts by Ankur Aggarwal

Protocol detection with Cloudflare Gateway

Post Syndicated from Ankur Aggarwal original https://blog.cloudflare.com/gatway-protocol-detection


Cloudflare Gateway, our secure web gateway (SWG), now supports the detection, logging, and filtering of network protocols regardless of their source or destination port. Protocol detection makes it easier to set precise policies without having to rely on the well-known port and without the risk of over- or under-filtering activity that could disrupt your users’ work. For example, you can filter all SSH traffic on your network by simply choosing the protocol.

Today, protocol detection is available to any Enterprise user of Gateway and supports a growing list of protocols including HTTP, HTTPS, SSH, TLS, DCE/RPC, MQTT, and TPKT.

Why is this needed?

Even as many configuration planes move to RESTful APIs, and now GraphQL, there is still a need to manage devices via protocols like SSH. Whether it is the only management protocol available on a new third-party device, or one of the first ways we learned to connect to and manage a server, SSH is still extensively used.

With legacy SWG and firewall tools, blocking traffic by specifying only the well-known port number (for example, port 22 for SSH) can be both insecure and inconvenient. If you used SSH over any other port it would not be filtered properly, and if you tried using another protocol over a well-known port, such as port 22, it would be blocked. An argument could also be made to lock down destinations so they only allow incoming connections over certain ports, but companies often don’t control their destination devices.

With this port-based approach, there are risks of over-blocking legitimate traffic, which potentially prevents users from reaching the resources they need to stay productive and leads to a large volume of support tickets for your administrators. Alternatively, you could under-block and miss out on filtering your intended traffic, creating security risks for your organization.

How we built it

To build a performant protocol detection and filtering capability, we had to make sure it could be applied in the same place Gateway policies are applied. To meet this requirement we added a new TCP socket pre-read hook to OXY, our Rust-based policy framework, to buffer the first few bytes of the data stream. This buffer allows Gateway to compare the bytes against our protocol signature database and apply the correct next step. And since this is all built into OXY, if the policy is set to Block, the connection will be closed; if it’s set to Allow, the connection will be proxied or progressed to establish the TLS session.

How to set up Gateway protocol filtering

Cloudflare Gateway’s protocol detection simplifies this process by allowing you to specify the protocol within a Gateway Network policy. To get started, navigate to the Settings section of the Zero Trust dashboard and select the Network tile. Under the Firewall section you’ll see a toggle for protocol detection; once it’s enabled, you’ll be able to create network policies based on detected protocols.

Next, go to the Firewall Policies section of your Zero Trust Gateway dashboard and then click ‘+ Add a policy’. There you can create a policy such as the one below to block SSH for all users within the Sales department.
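If you prefer to manage Gateway through the API, a roughly equivalent network policy could be created with a call along the lines of the sketch below. The request follows the same header style as the other API examples in these posts, but the /gateway/rules endpoint shape, the selector names (net.detected_protocol, identity.groups.name), and the field values shown are assumptions for illustration; confirm them against the Gateway API reference before use.

curl -X POST "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/gateway/rules" \
    -H "X-Auth-Email: <EMAIL>" \
    -H "X-Auth-Key: <API_KEY>" \
    -H "Content-Type: application/json" \
    --data '{
        "name": "Block SSH for Sales",
        "action": "block",
        "enabled": true,
        "filters": ["l4"],
        "traffic": "net.detected_protocol == \"ssh\"",
        "identity": "any(identity.groups.name[*] in {\"Sales\"})"
        }'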

This will prevent members of the sales team from initiating an outgoing or incoming SSH session.

Get started

Customers with a Cloudflare One Enterprise account will find this functionality in their Gateway dashboard today. We plan to make it available to Pay-as-you-go and Free accounts soon, and to expand the list of supported protocols.

If you’re interested in using protocol detection or ready to explore more broadly how Cloudflare can help you modernize your security, request a workshop or contact your account manager.

Manage and control the use of dedicated egress IPs with Cloudflare Zero Trust

Post Syndicated from Ankur Aggarwal original https://blog.cloudflare.com/gateway-egress-policies/


Before identity-driven Zero Trust rules, some SaaS applications on the public Internet relied on the IP address of a connecting user as a security model. Users would connect from known office locations, with fixed IP address ranges, and the SaaS application would check their address in addition to their login credentials.

Many systems still offer that second factor method. Customers of Cloudflare One can use a dedicated egress IP for this purpose as part of their journey to a Zero Trust model. Unlike other solutions, customers using this option do not need to deploy any infrastructure of their own. However, not all traffic needs to use those dedicated egress IPs.

Today, we are announcing policies that give administrators control over when Cloudflare uses their dedicated egress IPs. Specifically, administrators can use a rule builder in the Cloudflare dashboard to determine which egress IP is used and when, based on attributes like identity, application, IP address, and geolocation. This capability is available to any enterprise-contracted customer that adds on dedicated egress IPs to their Zero Trust subscription.

Why did we build this?

In today’s hybrid work environment, organizations aspire to more consistent security and IT experiences for managing their employees’ traffic egressing from offices, data centers, and roaming users. To deliver a more streamlined experience, many organizations are adopting modern, cloud-delivered proxy services like secure web gateways (SWGs) and deprecating their complex mix of on-premise appliances.

One traditional convenience of these legacy tools has been the ability to create allowlist policies based on static source IPs. When users were primarily in one place, verifying traffic based on egress location was easy and reliable enough. Many organizations want or are required to maintain this method of traffic validation even as their users have moved beyond being in one place.

So far, Cloudflare has supported these organizations by providing dedicated egress IPs as an add-on to our proxy Zero Trust services. Unlike the default egress IPs, these dedicated egress IPs are not shared with any other Gateway accounts and are only used to egress proxied traffic for the designated account.

As discussed in a previous blog post, customers are already using Cloudflare’s dedicated egress IPs to deprecate their VPN use, either to identify their users’ proxied traffic or to add the IPs to allowlists with third-party providers. These organizations benefit from the simplicity of still using fixed, known IPs, and their traffic avoids the bottlenecks and backhauling of traditional on-premise appliances.

When to use egress policies

The Gateway Egress policy builder empowers administrators with enhanced flexibility and specificity to egress traffic based on the user’s identity, device posture, source/destination IP address, and more.

A common use case is egressing traffic from specific geolocations to provide geo-specific experiences (e.g. language format, regional page differences) for select user groups. For example, Cloudflare is currently working with the marketing department of a global media conglomerate. Their designers and web experts based in India often need to verify the layout of advertisements and websites that are running in different countries.

However, those websites restrict or change access based on the geolocation of the user’s source IP address. This previously required the team to use an additional VPN service for just this purpose. With egress policies, administrators can create a rule that matches the destination IP address or destination country geolocation along with the marketing user group, and egress that traffic from a dedicated egress IP geolocated to the country where the domain needs to be verified. This allows their security team to rest easy: they no longer have to maintain another VPN service just for marketing as a hole in their perimeter defense, and they can enforce all of their other filtering capabilities on this traffic.

Another example use case is allowlisting access to applications or services maintained by a third party. While security administrators can control how their teams access their resources and even apply filtering to their traffic, they often can’t change the security controls enforced by third parties. For example, a large credit processor we worked with used a third-party service to verify the riskiness of transactions routed through its Zero Trust network. This third party required them to allowlist their source IPs.

To meet this goal, the customer could have just used dedicated egress IPs and called it a day, but that would route all of their traffic through the data centers hosting those dedicated egress IPs. If a user wanted to browse any other sites, they would receive a subpar experience, since their traffic may not take the most efficient path to the upstream. With egress policies, this customer can apply the dedicated egress IP only to the traffic bound for this third-party provider and let all other user traffic egress via the default Gateway egress IPs.

Building egress policies

To demonstrate how easy it is for an administrator to configure a policy, let’s walk through the last scenario. My organization uses a third-party service, and in addition to a username/password login, it requires us to use a static source IP or network range to access its domain.

To set this up, I just have to navigate to Egress Policies under Gateway on the Zero Trust dashboard. Once there, I can hit ‘Create egress policy’.


For my organization, most of the users accessing this third-party service are located in Portugal, so I’ll use my dedicated egress IPs assigned to Montijo, Portugal. The users will access example.com hosted on 203.0.113.10, so I’ll use the destination IP selector to match all traffic to this site.
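For illustration, a sketch of what this rule might look like through the API is below. The /gateway/rules endpoint shape, the net.dst.ip selector, and the rule_settings.egress fields are assumptions, and the dedicated IPs are placeholders, so check the developer docs for the authoritative schema.

curl -X POST "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/gateway/rules" \
    -H "X-Auth-Email: <EMAIL>" \
    -H "X-Auth-Key: <API_KEY>" \
    -H "Content-Type: application/json" \
    --data '{
        "name": "Use Montijo egress IPs for example.com",
        "action": "egress",
        "enabled": true,
        "filters": ["egress"],
        "traffic": "net.dst.ip == 203.0.113.10",
        "rule_settings": {
            "egress": {
                "ipv4": "<PRIMARY_DEDICATED_IPV4>",
                "ipv6": "<DEDICATED_IPV6_RANGE>"
            }
        }
        }'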


Once that policy is created, I’ll add one more as a catch-all for my organization to make sure no dedicated egress IPs are used for destinations not associated with this third-party service. This catch-all is key because it ensures my users receive the most performant network experience while still maintaining their privacy by egressing via our shared Enterprise pool of IPs.


Taking a look at the egress policy list, we can see both policies are enabled. Now when my users access example.com, they will egress using either the primary or secondary dedicated IPv4 address or the IPv6 range; for all other traffic, the default Cloudflare egress IPs will be used.


Next steps

We recognize that as organizations migrate away from on-premise appliances, they want continued simplicity and control as they proxy more traffic through their cloud security stack. With Gateway egress policies administrators will now be able to control traffic flows for their increasingly distributed workforces.

If you are interested in building policies around Cloudflare’s dedicated egress IPs, you can add them onto a Cloudflare Zero Trust Enterprise plan or contact your account manager.

Cloudflare Zero Trust for managed service providers

Post Syndicated from Ankur Aggarwal original https://blog.cloudflare.com/gateway-managed-service-provider/


As part of CIO week, we are announcing a new integration between our DNS Filtering solution and our Partner Tenant platform that supports parent-child policy requirements for our partner ecosystem and our direct customers. Our Tenant platform, launched in 2019, has allowed Cloudflare partners to easily integrate Cloudflare solutions across millions of customer accounts. Cloudflare Gateway, introduced in 2020, has grown from protecting personal networks to Fortune 500 enterprises in just a few short years. With the integration between these two solutions, we can now help Managed Service Providers (MSPs) support large, multi-tenant deployments with parent-child policy configurations and account-level policy overrides that seamlessly protect global employees from threats online.

Why work with Managed Service Providers?

Managed Service Providers (MSPs) are a critical part of the toolkit of many CIOs. In the age of disruptive technology, hybrid work, and shifting business models, outsourcing IT and security operations can be a fundamental decision that drives strategic goals and ensures business success across organizations of all sizes. An MSP is a third-party company that remotely manages a customer’s information technology (IT) infrastructure and end-user systems. MSPs promise deep technical knowledge, threat insights, and tenured expertise across a variety of security solutions to protect from ransomware, malware, and other online threats. The decision to partner with an MSP can allow internal teams to focus on more strategic initiatives while providing access to easily deployable, competitively priced IT and security solutions. Cloudflare has been making it easier for our customers to work with MSPs to deploy and manage a complete Zero Trust transformation.

One decision criterion for selecting an appropriate MSP is the provider’s ability to keep its partner’s best technology, security, and cost interests in mind. An MSP should be leveraging innovative, lower-cost security solutions whenever possible to drive the best value to your organization. Out-of-date technology can quickly incur higher implementation and maintenance costs than more modern, purpose-built solutions, given the broader attack surface brought about by hybrid work. In a developing space like Zero Trust, an effective MSP should be able to support vendors that can be deployed globally, managed at scale, and effectively enforce global corporate policy across business units. Cloudflare has worked with many MSPs, some of which we will highlight today, that implement and manage Zero Trust security policies cost-effectively at scale.

The MSPs we are highlighting have started to deploy Cloudflare Gateway DNS Filtering to complement their portfolio as part of a Zero Trust access control strategy. DNS filtering provides quick time-to-value for organizations seeking protection from ransomware, malware, phishing, and other Internet threats. DNS filtering is the process of using the Domain Name System to block malicious websites and prevent users from reaching harmful or inappropriate content on the Internet. This ensures that company data remains secure and allows companies to have control over what their employees can access on company-managed networks and devices.

Filtering policies are often set by the organization in consultation with the service provider. In some cases, these policies also need to be managed independently at the account or business unit level by either the MSP or the customer. This means a parent-child relationship is very commonly required to balance the deployment of corporate-level rules with customization across devices, office locations, or business units. This structure is vital for MSPs that are deploying access policies across millions of devices and accounts.


Better together: Zero Trust ❤️ Tenant Platform

To make it easier for MSPs to manage millions of accounts with appropriate access controls and policy management, we integrated Cloudflare Gateway with our existing Tenant platform with a new feature that provides parent-child configurations. This allows MSP partners to create and manage accounts, set global corporate security policies, and allow appropriate management or overrides at the individual business unit or team level.


The Tenant platform gives MSPs the ability to create millions of end customer accounts at their discretion to support their specific onboarding and configurations. It also ensures proper separation of ownership between customers and allows end customers to access the Cloudflare dashboard directly, if required.

Each account created is a separate container of subscribed resources (zero trust policies, zones, workers, etc.) for each of the MSPs end customers. Customer administrators can be invited to each account as necessary for self-service management, while the MSP retains control of the capabilities enabled for each account.
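As a rough sketch, creating one of these end customer accounts is a single call to the accounts API using the MSP’s tenant credentials; the body fields shown (such as "type") are assumptions meant to convey the shape of the request, so refer to the Tenant platform documentation for the authoritative schema.

curl -X POST "https://api.cloudflare.com/client/v4/accounts" \
    -H "X-Auth-Email: <TENANT_ADMIN_EMAIL>" \
    -H "X-Auth-Key: <TENANT_API_KEY>" \
    -H "Content-Type: application/json" \
    --data '{
        "name": "<END_CUSTOMER_NAME>",
        "type": "standard"
        }'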

With MSPs now able to set up and manage accounts at scale, we’ll explore how the integration with Cloudflare Gateway lets them manage scaled DNS filtering policies for these accounts.

Tiered Zero Trust accounts

With individual accounts for each MSP end customer in place, MSPs can either fully manage the deployment or provide a self-service portal backed by Cloudflare configuration APIs. Supporting a configuration portal also means the MSP never wants its end users to block access to the portal’s domain, so it can add a hidden policy to each end customer account at onboarding, which is a simple one-time API call. Issues arise, however, whenever the MSP needs to push an update to that policy: it has to be updated once for each and every end customer, and for some MSPs that can mean over one million API calls.

To help turn this into a single API call, we introduced the concept of a top-level account, also known as a parent account. The parent account allows MSPs to set global policies which are applied to all DNS queries before the policies of each MSP end customer (the child account policies). This structure helps ensure MSPs can set their own global policies for all of their child accounts, while each child account can further filter its DNS queries to meet its needs without impacting any other child account.
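For example, the hidden policy that keeps the MSP’s configuration portal reachable could be created once on the parent account with a call like the sketch below, rather than once per child account. The dns.domains selector, the precedence value, and the placeholder domain are illustrative assumptions; the important part is that the rule lives in the parent account and is evaluated before every child account’s own policies.

curl -X POST "https://api.cloudflare.com/client/v4/accounts/<PARENT_ACCOUNT_ID>/gateway/rules" \
    -H "X-Auth-Email: <EMAIL>" \
    -H "X-Auth-Key: <API_KEY>" \
    -H "Content-Type: application/json" \
    --data '{
        "name": "Always allow the configuration portal",
        "action": "allow",
        "enabled": true,
        "filters": ["dns"],
        "precedence": 1,
        "traffic": "any(dns.domains[*] == \"<PORTAL_DOMAIN>\")"
        }'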


This extends beyond policies as well: each child account can create its own custom block page, upload its own certificates to display those block pages, and set up its own DNS endpoints (IPv4, IPv6, DoH, and DoT) via Gateway locations. Also, because these are exactly the same as non-MSP Gateway accounts, the default limits on the number of policies, locations, or lists per parent or child account are no lower.

Managed Service Provider integrations

To help bring this to life, below are real-world examples of how Cloudflare customers are using this new managed service provider feature to help protect their organizations.

US federal government

The US federal government requires many of the same services to support a protective DNS service for their 100+ civilian agencies, and they often outsource many of their IT and security operations to service providers like Accenture Federal Services (AFS).

In 2022, Cloudflare and AFS were selected by the Cybersecurity and Infrastructure Security Agency (CISA), with the Department of Homeland Security (DHS), to develop a joint solution to help the federal government defend itself against cyberattacks. The solution consists of Cloudflare’s protective DNS resolver, which filters DNS queries from offices and locations of the federal government and streams events directly to Accenture’s platform to provide unified administration and log storage.

Accenture Federal Services is providing a central interface to each department that allows them to adjust their DNS filtering policies. This interface works with Cloudflare’s Tenant platform and Gateway client APIs to provide a seamless customer experience for government employees managing their security policies using our new parent-child configurations. CISA, as the parent account, can set its own global policies, while allowing agencies (the child accounts) to bypass select global policies and set their own default block pages.

In conjunction with our parent-child structure, we made a few improvements to our DNS location matching and filtering defaults. Currently, all Gateway accounts can purchase dedicated IPv4 resolver IP addresses, which are great for situations where a customer doesn’t have a static source IP address or wants its own IPv4 address to host the solution.

CISA wanted not only a dedicated IPv4 address but also the ability to assign that same address from its parent account to its child accounts. This gives all agencies the same default IPv4 addresses, easing the burden of onboarding. CISA also wanted the ability to fail closed, meaning that if a DNS query does not match any location (each of which must have a source IPv4 address or network configured), it is dropped. This ensures that only configured IPv4 networks have access to the protective services. Lastly, we didn’t need to address this for the IPv6, DoH, and DoT DNS endpoints, as those are unique to each DNS location created.
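As a sketch of how an agency’s allowed source network might be registered, a DNS location for a child account could be created with a call like the one below; the networks field shape is an assumption based on the Gateway locations API, and the name and CIDR are placeholders. Combined with the fail-closed behavior described above, IPv4 queries arriving from networks not configured on any location are simply dropped.

curl -X POST "https://api.cloudflare.com/client/v4/accounts/<CHILD_ACCOUNT_ID>/gateway/locations" \
    -H "X-Auth-Email: <EMAIL>" \
    -H "X-Auth-Key: <API_KEY>" \
    -H "Content-Type: application/json" \
    --data '{
        "name": "<AGENCY_LOCATION_NAME>",
        "client_default": false,
        "networks": [
            { "network": "<AGENCY_SOURCE_CIDR>" }
        ]
        }'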

Malwarebytes

Malwarebytes, a global leader in real-time cyber protection, recently integrated with Cloudflare to provide a DNS filtering module within their Nebula platform. The Nebula platform is a cloud-hosted security operations solution that manages control of any malware or ransomware incident—from alert to fix. This new module allows Malwarebytes customers to filter on content categories and add policy rules for groups of devices. A key need was the ability to easily integrate with their current device client, provide individual account management, and provide room for future expansion across additional Zero Trust services like Cloudflare Browser Isolation.

Cloudflare was able to provide a comprehensive solution that was easily integrated into the Malwarebytes platform. This included using DNS over HTTPS (DoH) to segment users across unique locations and adding a unique token per device to properly track the device ID and apply the correct DNS policies. Lastly, the integration was completed using the Cloudflare Tenant API, which fit seamlessly into their current workflow and platform. This combination of our Zero Trust services and Tenant platform let Malwarebytes quickly go to market for new segments within their business.

“It’s challenging for organizations today to manage access to malicious sites and keep their end users safe and productive. Malwarebytes’ DNS Filtering module extends our cloud-based security platform to web protection. After evaluating other Zero Trust providers it was clear to us that Cloudflare could offer the comprehensive solution IT and security teams need while providing lightning fast performance at the same time. Now, IT and security teams can block whole categories of sites, take advantage of an extensive database of pre-defined scores on known, suspicious web domains, protect core web-based applications and manage specific site restrictions, removing the headache from overseeing site access.” – Mark Strassman, Chief Product Officer, Malwarebytes

Large global ISP

We’ve recently been working with a large global ISP to support DNS filtering as part of a larger security solution offered to families, covering over one million accounts in just the first year. The ISP leverages our Tenant and Gateway APIs to integrate seamlessly into its current platform and user experience with minimal engineering effort. We look forward to sharing more detail about this implementation in the coming months.

What’s next

As the previous stories highlight, MSPs play a key role in securing today’s diverse ecosystem of organizations, of all sizes and maturities. Companies of all sizes find themselves squaring off against the same complex threat landscape and are challenged to maintain a proper security posture and manage risk with constrained resources and limited security tooling. MSPs provide the additional resources, expertise and advanced security tooling that can help reduce the risk profile for these companies. Cloudflare is committed to making it easier for MSPs to be effective in delivering Zero Trust solutions to their customers.

Given the importance of MSPs for our customers and the continued growth of our partner network, we plan to launch quite a few features in 2023 and beyond that better support our MSP partners. First, a key item on our roadmap is the development of a refreshed tenant management dashboard for improved account and user management. Second, we want to extend our multi-tenant configurations across our entire Zero Trust solution set to make it easier for MSPs to implement secure hybrid work solutions at scale.

Lastly, to better support hierarchical access, we plan to expand the user roles and access model currently available to MSP partners to allow their teams to more easily support and manage their various accounts. Cloudflare has always prided itself on its ease of use, and our goal is to make Cloudflare the Zero Trust platform of choice for service and security providers globally.

Throughout CIO week, we’ve touched on how our partners are helping modernize the security posture for their customers to align with a world transformed by hybrid work and hybrid multi-cloud infrastructures. Ultimately, the power of Cloudflare Zero Trust comes from its existence as a composable, unified platform that draws strength from its combination of products, features, and our partner network.

  • If you’d like to learn more about becoming an MSP partner, you can read more here: https://www.cloudflare.com/partners/services
  • If you’d like to learn more about improving your security with DNS Filtering and Zero Trust, or would like to get started today, test the platform yourself with 50 free seats by signing up here.

Bring your own certificates to Cloudflare Gateway

Post Syndicated from Ankur Aggarwal original https://blog.cloudflare.com/bring-your-certificates-cloudflare-gateway/


Today, we’re announcing support for customer-provided certificates to give flexibility and ease of deployment options when using Cloudflare’s Zero Trust platform. Using custom certificates, IT and Security administrators can now bring their own certificates instead of being required to use a Cloudflare-provided certificate to apply HTTP, DNS, CASB, DLP, RBI, and other filtering policies.

The new custom certificate approach will exist alongside the method Cloudflare Zero Trust administrators are already used to: installing Cloudflare’s own certificate to enable traffic inspection and forward proxy controls. Both approaches have advantages, but providing them both enables organizations to find the path to security modernization that makes the most sense for them.

Custom user side certificates

When deploying new security services, organizations may prefer to use their own custom certificates for a few common reasons. Some value the privacy of controlling which certificates are deployed. Others have already deployed custom certificates to their device fleet because they may bind user attributes to these certificates or use them for internal-only domains.

So it can be easier and faster to apply additional security controls around what administrators have already deployed, rather than installing additional certificates.

To get started using your own certificate, first upload your root certificate to Cloudflare via the API.

curl -X POST "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/mtls_certificates"\
    -H "X-Auth-Email: <EMAIL>" \
    -H "X-Auth-Key: <API_KEY>" \
    -H "Content-Type: application/json" \
    --data '{
        "name":"example_ca_cert",
        "certificates":"<ROOT_CERTIFICATE>",
        "private_key":"<PRIVATE_KEY>",
        "ca":true
        }'

The root certificate will be stored across all of Cloudflare’s secure servers, designed to protect against unauthorized access. Once uploaded, each certificate receives an identifier in the form of a UUID (e.g. 2458ce5a-0c35-4c7f-82c7-8e9487d3ff60). This UUID can then be used with your Zero Trust account ID to associate the certificate with your account and enable it.

curl -X PUT "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/gateway/configuration"\
    -H "X-Auth-Email: <EMAIL>" \
    -H "X-Auth-Key: <API_KEY>" \
    -H "Content-Type: application/json" \
    --data '{
        "settings":
        {
            "antivirus": {...},
            "block_page": {...},
            "custom_certificate":
            {
                "enabled": true,
                "id": "2458ce5a-0c35-4c7f-82c7-8e9487d3ff60"
            },
            "tls_decrypt": {...},
            "activity_log": {...},
            "browser_isolation": {...},
            "fips": {...}
        }
    }'

From there it takes approximately one minute, and then all new HTTPS connections from your organization’s users will be secured using your custom certificate. For even more details, check out our developer documentation.

An additional benefit of this fast propagation time is zero maintenance downtime. Whether you’re transitioning from the Cloudflare-provided certificate or from another custom certificate, all new HTTPS connections will use the new certificate without impacting any current connections.

Or, install Cloudflare’s own certificates

In addition to the above API-based method for custom certificates, Cloudflare also makes it easy for organizations to install Cloudflare’s own root certificate on devices to support HTTP filtering policies. Many organizations prefer offloading certificate management to Cloudflare to reduce administrative overhead. Plus, root certificate installation can be easily automated during managed deployments of Cloudflare’s device client, which is critical to forward proxy traffic.

Installing Cloudflare’s root certificate on devices takes only a few steps, and administrators can choose which file type they want to use–either a .pem or .crt file–depending on their use cases. Take a look at our developer documentation for further details on the process across operating systems and applications.
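As one example of how short the process can be, on a Debian or Ubuntu machine the downloaded certificate (saved here as Cloudflare_CA.pem, a placeholder filename) can be added to the system trust store with two commands; other operating systems and individual applications have their own steps, which the developer documentation covers.

# Debian/Ubuntu example; the certificate filename is a placeholder
sudo cp Cloudflare_CA.pem /usr/local/share/ca-certificates/Cloudflare_CA.crt
sudo update-ca-certificates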

What’s next?

Whether an organization uses a custom certificate or the Cloudflare-maintained certificate, the goal is the same: to apply traffic inspection to help protect against malicious activity and provide robust data protection controls to keep users safe. Cloudflare’s priority is equipping those organizations with the flexibility to achieve their risk reduction goals as swiftly as possible.

In the coming quarters we will be focused on delivering a new UI to upload and manage user-side certificates, as well as refreshing the HTTP policy builder to let admins determine what happens when accessing origins not signed with a public certificate.

If you want to know where SWG, RBI, DLP, and other threat and data protection services can fit into your overall security modernization initiatives, explore Cloudflare’s prescriptive roadmap to Zero Trust.
If you and your enterprise are ready to get started protecting your users, devices, and data with HTTP inspection, then reach out to Cloudflare to learn more.

HTTP/3 inspection on Cloudflare Gateway

Post Syndicated from Ankur Aggarwal original https://blog.cloudflare.com/cloudflare-gateway-http3-inspection/


Today, we’re excited to announce upcoming support for HTTP/3 inspection through Cloudflare Gateway, our comprehensive secure web gateway. HTTP/3 currently powers 25% of the Internet and delivers a faster browsing experience, without compromising security. Until now, administrators seeking to filter and inspect HTTP/3-enabled websites or APIs needed to either compromise on performance by falling back to HTTP/2 or lose visibility by bypassing inspection. With HTTP/3 support in Cloudflare Gateway, you can have full visibility on all traffic and provide the fastest browsing experience for your users.

Why is the web moving to HTTP/3?

HTTP is one of the oldest technologies that powers the Internet. All the way back in 1996, security and performance were afterthoughts and encryption was left to the transport layer to manage. This model doesn’t scale to the performance needs of the modern Internet and has led to HTTP being upgraded to HTTP/2 and now HTTP/3.

HTTP/3 accelerates browsing activity by using QUIC, a modern transport protocol that is always encrypted by default. This delivers faster performance by reducing round-trips between the user and the web server and is more performant for users with unreliable connections. For further information about HTTP/3’s performance advantages take a look at our previous blog here.

HTTP/3 development and adoption

Cloudflare’s mission is to help build a better Internet. We see HTTP/3 as an important building block to make the Internet faster and more secure. We worked closely with the IETF to iterate on the HTTP/3 and QUIC standards documents. These efforts, combined with progress made by popular browsers like Chrome and Firefox to enable QUIC by default, have translated into HTTP/3 now being used by over 25% of all websites.

We’ve advocated for HTTP/3 extensively over the past few years. We first introduced support for the underlying transport layer QUIC in September 2018 and then from there worked to introduce HTTP/3 support for our reverse proxy services the following year in September of 2019. Since then our efforts haven’t slowed down and today we support the latest revision of HTTP/3, using the final “h3” identifier matching RFC 9114.

HTTP/3 inspection hurdles

But while there are many advantages to HTTP/3, its introduction has created deployment complexity and security tradeoffs for administrators seeking to filter and inspect HTTP traffic on their networks. HTTP/3 offers familiar HTTP request and response semantics, but the use of QUIC changes how it looks and behaves “on the wire”. Since QUIC runs atop UDP, it  is architecturally distinct from legacy TCP-based protocols and has poor support from legacy secure web gateways. The combination of these two factors has made it challenging for administrators to keep up with the evolving technological landscape while maintaining the users’ performance expectations and ensuring visibility and control over Internet traffic.

Without proper secure web gateway support for HTTP/3, administrators have needed to choose whether to compromise on security and/or performance for their users. Security tradeoffs include not inspecting UDP traffic, or, even worse, forgoing critical security capabilities such as inline anti-virus scanning, data loss prevention, browser isolation, and traffic logging. Naturally, for any security-conscious organization, discarding security and visibility is not an acceptable approach, and this has led administrators to proactively disable HTTP/3 on their end user devices. This introduces deployment complexity and sacrifices performance, as it requires disabling QUIC support within users’ web browsers.

How to enable HTTP/3 Inspection

Once support for HTTP/3 inspection is available for select browsers later this year, you’ll be able to enable it through the dashboard. After logging into the Zero Trust dashboard, toggle on proxying, check the box for UDP traffic, and enable TLS decryption under Settings > Network > Firewall. Once these settings are enabled, AV scanning, remote browser isolation, DLP, and HTTP filtering can be applied via HTTP policies to all of your organization’s proxied HTTP traffic.
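The same settings can be driven through the Gateway configuration API shown earlier in these posts. The sketch below includes only the relevant settings and abridges the rest with placeholders, as in the earlier example; the exact field names under proxy and tls_decrypt are assumptions, so verify them in the API reference.

curl -X PUT "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/gateway/configuration" \
    -H "X-Auth-Email: <EMAIL>" \
    -H "X-Auth-Key: <API_KEY>" \
    -H "Content-Type: application/json" \
    --data '{
        "settings":
        {
            "proxy": {"tcp": true, "udp": true},
            "tls_decrypt": {"enabled": true},
            "antivirus": {...},
            "block_page": {...}
        }
    }'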


What’s next

Administrators will no longer need to make security tradeoffs based on the evolving technological landscape and can focus on protecting their organization and teams. We’ll reach out to all Cloudflare One customers once HTTP/3 inspection is available and are excited to simplify secure web gateway deployments for administrators.

HTTP/3 traffic inspection will be available to all administrators of all plan types; if you have not signed up already click here to get started.

Cloudflare Gateway dedicated egress and egress policies

Post Syndicated from Ankur Aggarwal original https://blog.cloudflare.com/gateway-dedicated-egress-policies/


Today, we are highlighting how Cloudflare enables administrators to create security policies while using dedicated source IPs. With on-premise appliances like legacy VPNs, firewalls, and secure web gateways (SWGs), it has been convenient for organizations to rely on allowlist policies based on static source IPs. But these hardware appliances are hard to manage/scale, come with inherent vulnerabilities, and struggle to support globally distributed traffic from remote workers.

Throughout this week, we’ve written about how to transition away from these legacy tools towards Internet-native Zero Trust security offered by services like Cloudflare Gateway, our SWG. As a critical service natively integrated with the rest of our broader Zero Trust platform, Cloudflare Gateway also enables traffic filtering and routing for recursive DNS, Zero Trust network access, remote browser isolation, and inline CASB, among other functions.

Nevertheless, we recognize that administrators want to maintain the convenience of source IPs as organizations transition to cloud-based proxy services. In this blog, we describe our approach to offering dedicated IPs for egressing traffic and share some upcoming functionality to empower administrators with even greater control.

Cloudflare’s dedicated egress IPs

Source IPs are still a popular method of verifying that traffic originates from a known organization/user when accessing applications and third party destinations on the Internet. When organizations use Cloudflare as a secure web gateway, user traffic is proxied through our global network, where we apply filtering and routing policies at the closest data center to the user. This is especially powerful for globally distributed workforces or roaming users. Administrators do not have to make updates to static IP lists as users travel, and no single location becomes a bottleneck for user traffic.

Today the source IP for proxied traffic is one of two options:

  • Device client (WARP) Proxy IP – Cloudflare forward proxies traffic from the user using an IP from the default IP range shared across all Zero Trust accounts
  • Dedicated egress IP – Cloudflare provides customers with a dedicated IP (IPv4 and IPv6) or range of IPs geolocated to one or more Cloudflare network locations

The WARP Proxy IP range is the default egress method for all Cloudflare Zero Trust customers. It is a great way to preserve the privacy of your organization as user traffic is sent to the nearest Cloudflare network location which ensures the most performant Internet experience. But setting source IP security policies based on this default IP range does not provide the granularity that admins often require to filter their user traffic.

Dedicated egress IPs are useful in situations where administrators want to allowlist traffic based on a persistent identifier. As their name suggests, these dedicated egress IPs are exclusively available to the assigned customer—and not used by any other customers routing traffic through Cloudflare’s network.

Additionally, leasing these dedicated egress IPs from Cloudflare helps avoid the privacy concerns that arise when carving them out of an organization’s own IP ranges, and it alleviates the need to protect from DDoS attacks any IP ranges assigned to your on-premise VPN appliance.

Dedicated egress IPs are available as an add-on for any Cloudflare Zero Trust enterprise-contracted customer. Contract customers can select the specific Cloudflare data centers used for their dedicated egress, and all subscribing customers receive at least two IPs to start, so user traffic is always routed to the closest dedicated egress data center for performance and resiliency. Finally, organizations can egress their traffic through Cloudflare’s dedicated IPs via their preferred on-ramps. These include Cloudflare’s device client (WARP), proxy endpoints, GRE and IPsec on-ramps, or any of our 1600+ peering network locations, including major ISPs, cloud providers, and enterprises.

Customer use cases today

Cloudflare customers around the world are taking advantage of Gateway dedicated egress IPs to streamline application access. Below are the three most common use cases we’ve seen deployed by customers of varying sizes and across industries:

  • Allowlisting access to apps from third parties: Users often need to access tools controlled by suppliers, partners, and other third party organizations. Many of those external organizations still rely on source IP to authenticate traffic. Dedicated egress IPs make it easy for those third parties to fit within these existing constraints.
  • Allowlisting access to SaaS apps: Source IPs are still commonly used as a defense-in-depth layer for how users access SaaS apps, alongside other more advanced measures like multi-factor authentication and identity provider checks.
  • Deprecating VPN usage: Often, hosted VPNs will be allocated IPs within the customer’s advertised IP range. The security flaws, performance limitations, and administrative complexities of VPNs are well-documented in our recent Cloudflare blog. To ease customer migration, organizations will often choose to maintain any IP allowlist processes already in place.

Through this, administrators are able to maintain the convenience of building policies with fixed, known IPs, while accelerating performance for end users by routing through Cloudflare’s global network.

Cloudflare Zero Trust egress policies

Today, we are excited to announce an upcoming way to build more granular policies using Cloudflare’s dedicated egress IPs. With a forthcoming egress IP policy builder in the Cloudflare Zero Trust dashboard, administrators can specify which IP is used for egress traffic based on identity, application, network and geolocation attributes.

Administrators often want to route only certain traffic through dedicated egress IPs—whether for certain applications, Internet destinations, or user groups. Soon, administrators will be able to set their preferred egress method based on a wide variety of selectors such as application, content category, domain, user group, destination IP, and more. This flexibility helps organizations take a layered approach to security, while also maintaining high performance (often via dedicated IPs) to the most critical destinations.

Furthermore, administrators will be able to use the egress IP policy builder to geolocate traffic to any country or region where Cloudflare has a presence. This geolocation capability is particularly useful for globally distributed teams which require geo-specific experiences.

For example, a large media conglomerate has marketing teams that would verify the layouts of digital advertisements running across multiple regions. Prior to partnering with Cloudflare, these teams had clunky, manual processes to verify their ads were displaying as expected in local markets: either they had to ask colleagues in those local markets to check, or they had to spin up a VPN service to proxy traffic to the region. With an egress policy these teams would simply be able to match a custom test domain for each region and egress using their dedicated IP deployed there.

What’s Next

You can take advantage of Cloudflare’s dedicated egress IPs by adding them onto a Cloudflare Zero Trust Enterprise plan or contacting your account team. If you would like to be contacted when we release the Gateway egress policy builder, join the waitlist here.

Introducing SSH command logging

Post Syndicated from Ankur Aggarwal original https://blog.cloudflare.com/ssh-command-logging/


SSH (Secure Shell Protocol) is an important protocol for managing remote machines. It provides a way for infrastructure teams to remotely and securely manage their fleet of machines. SSH was a step-up in security from other protocols like telnet. It ensures encrypted traffic and enforces per user controls over access to a particular machine. However, it can still introduce a significant security risk. SSH, especially root access, is destructive in the wrong hands (think rm -r *) and can be difficult to track. Logging and securing user actions via SSH typically requires custom development or restrictive software deployments. We’re excited to announce SSH command logging as part of Cloudflare Zero Trust.

Securing SSH access

Security teams put significant effort into securing SSH across their organization because of the negative impact it can have in the wrong hands. Traditional SSH security consists of strong authentication, like certificate based authentication, and tight controls on who has “root” access. Additionally, VPNs and IP allow lists are used to further protect a machine from being publicly accessible to the Internet. The security challenges that remain are visibility and potential for lateral movement.

SSH commands to a remote machine are end-to-end encrypted, which means that it is impossible to see what is being run by a particular user on a specific machine. Typically, logs can only be captured on the machine itself in log files, and a malicious user can delete log files as easily as any other command they’re running to cover their tracks. Solutions exist to send these logs to an external logging service, but this requires installing additional software on every machine that can be accessed using SSH. ProxyJump, a common way to deploy a JumpHost model, further compounds this problem because a user can easily traverse a local network of machines once they gain access to a machine in the network.

Introducing SSH command logging

We built SSH command logging into Cloudflare Zero Trust to provide SSH visibility at a network layer instead of relying on software on individual machines. Our first customer for this capability is the Cloudflare security team. SSH command logging provides a full replay of all commands run during an SSH session, including across multiple jump-hosts or bastions. This allows for a clear picture of what happened in the event of an accident, suspected breach, or attack.

SSH command logging was built as an extension of Cloudflare’s Secure Web Gateway. The Secure Web Gateway already performs secure TLS inspection of all traffic from a user device. Now, it also supports SSH inspection by bootstrapping a proxy server upon new connections. Administrators are able to configure network policies to allow SSH access and audit the specific commands run.
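As a sketch of what such a network policy might look like through the API, an Audit SSH rule could resemble the call below. The audit_ssh action name, the net.dst.ip and net.dst.port selectors, and the rule_settings shape are assumptions meant only to convey the idea, and the CIDR is a placeholder; consult the product documentation for the supported fields.

curl -X POST "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/gateway/rules" \
    -H "X-Auth-Email: <EMAIL>" \
    -H "X-Auth-Key: <API_KEY>" \
    -H "Content-Type: application/json" \
    --data '{
        "name": "Audit SSH to production hosts",
        "action": "audit_ssh",
        "enabled": true,
        "filters": ["l4"],
        "traffic": "net.dst.ip in {<PRODUCTION_CIDR>} and net.dst.port == 22",
        "rule_settings": {
            "audit_ssh": { "command_logging": true }
        }
        }'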


The target machine is then exposed for SSH access, and all SSH commands are proxied via Cloudflare’s global edge network. All commands are automatically captured across all hosts, without complex logging software installed on the machines themselves. TTY traffic can also be recorded for later full session replay.

As an added measure of security, all logs captured by Cloudflare are immediately encrypted via a public key provided by each customer, to ensure that only authorized security users can inspect SSH commands. Furthermore, we are launching this feature with an opt-in FIPS 140-2 mode to support FedRAMP compliant users.

All user authentication is performed via Cloudflare Short-Lived Certificates. Once the client certificate is loaded onto a user machine, their SSH setup is complete and secure. This eliminates the need for tedious and tricky SSH key-pair management: there are no more keys to manage on end user devices; all they need is the Cloudflare root CA, and they can access any machine they are entitled to, including via ProxyJump.
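For a single host, the client-side setup typically comes down to a short stanza in the user’s SSH configuration that asks cloudflared to mint a short-lived certificate and proxy the connection. The hostname, paths, and generated file names below are assumptions that may differ by cloudflared version, so treat this as a sketch and follow the Zero Trust documentation for your deployment.

# ~/.ssh/config (sketch; vm.example.com is a placeholder hostname)
Match host vm.example.com exec "cloudflared access ssh-gen --hostname %h"
    ProxyCommand cloudflared access ssh --hostname %h
    IdentityFile ~/.cloudflared/%h-cf_key
    CertificateFile ~/.cloudflared/%h-cf_key-cert.pub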

What’s next

This is only the beginning for SSH security in Cloudflare Zero Trust. In the future, we will integrate with popular SIEM tools and provide alerting for specific commands and risky behavior.

SSH command logging is in closed beta, and we will be opening this feature up to users in the coming weeks. Stay tuned for more updates when we announce general availability!

Replace your hardware firewalls with Cloudflare One

Post Syndicated from Ankur Aggarwal original https://blog.cloudflare.com/replace-your-hardware-firewalls-with-cloudflare-one/


Today, we’re excited to announce new capabilities to help customers make the switch from hardware firewall appliances to a true cloud-native firewall built for next-generation networks. Cloudflare One provides a secure, performant, and Zero Trust-enabled platform for administrators to apply consistent security policies across all of their users and resources. Best of all, it’s built on top of our global network, so you never need to worry about scaling, deploying, or maintaining your edge security hardware.

As part of this announcement, Cloudflare launched the Oahu program today to help customers leave legacy hardware behind; in this post we’ll break down the new capabilities that solve the problems of previous firewall generations and save IT teams time and money.

How did we get here?

In order to understand where we are today, it’ll be helpful to start with a brief history of IP firewalls.

Stateless packet filtering for private networks

The first generation of network firewalls were designed mostly to meet the security requirements of private networks, which started with the castle and moat architecture we defined as Generation 1 in our post yesterday. Firewall administrators could build policies around signals available at layers 3 and 4 of the OSI model (primarily IPs and ports), which was perfect for (e.g.) enabling a group of employees on one floor of an office building to access servers on another via a LAN.

This packet filtering capability was sufficient until networks got more complicated, including by connecting to the Internet. IT teams began needing to protect their corporate network from bad actors on the outside, which required more sophisticated policies.

Better protection with stateful & deep packet inspection

Firewall hardware evolved to include stateful packet inspection and the beginnings of deep packet inspection, extending basic firewall concepts by tracking the state of connections passing through them. This enabled administrators to (e.g.) block all incoming packets not tied to an already present outgoing connection.

These new capabilities provided more sophisticated protection from attackers. But the advancement came at a cost: supporting this higher level of security required more compute and memory resources. These requirements meant that security and network teams had to get better at planning the capacity they’d need for each new appliance and make tradeoffs between cost and redundancy for their network.

In addition to cost tradeoffs, these new firewalls only provided some insight into how the network was used. You could tell users were accessing 198.51.100.10 on port 80, but investigating further what those users were accessing would require a reverse lookup of the IP address. That alone would only land you at the front page of the provider, with no insight into what was accessed, the reputation of the domain/host, or any other information to help answer “Is this a security event I need to investigate further?”. Determining the source would be difficult as well; it would require correlating a private IP address handed out via DHCP with a device and then subsequently a user (if you remembered to set long lease times and never shared devices).

Application awareness with next generation firewalls

To address these challenges, the industry introduced the Next Generation Firewall (NGFW). These were the long-reigning corporate edge security device, and in some cases they still are the industry standard. They adopted all the capabilities of previous generations while adding application awareness to help administrators gain more control over what passed through their security perimeter. NGFWs introduced the concept of vendor-provided or externally-provided application intelligence: the ability to identify individual applications from traffic characteristics, which could then be fed into policies defining what users could and couldn’t do with a given application.

As more applications moved to the cloud, NGFW vendors started to provide virtualized versions of their appliances. These allowed administrators to no longer worry about lead times for the next hardware version and allowed greater flexibility when deploying to multiple locations.

Over the years, as the threat landscape continued to evolve and networks became more complex, NGFWs started to build in additional security capabilities, some of which helped consolidate multiple appliances. Depending on the vendor, these included VPN Gateways, IDS/IPS, Web Application Firewalls, and even things like Bot Management and DDoS protection. But even with these features, NGFWs had their drawbacks — administrators still needed to spend time designing and configuring redundant (at least primary/secondary) appliances, as well as choosing which locations had firewalls and incurring performance penalties from backhauling traffic there from other locations. And even still, careful IP address management was required when creating policies to apply pseudo identity.

Adding user-level controls to move toward Zero Trust

As firewall vendors added more sophisticated controls, in parallel, a paradigm shift for network architecture was introduced to address the security concerns introduced as applications and users left the organization’s “castle” for the Internet. Zero Trust security means that no one is trusted by default from inside or outside the network, and verification is required from everyone trying to gain access to resources on the network. Firewalls started incorporating Zero Trust principles by integrating with identity providers (IdPs) and allowing users to build policies around user groups — “only Finance and HR can access payroll systems” — enabling finer-grained control and reducing the need to rely on IP addresses to approximate identity.

These policies have helped organizations lock down their networks and get closer to Zero Trust, but CIOs are still left with problems: what happens when they need to integrate another organization’s identity provider? How do they safely grant access to corporate resources for contractors? And these new controls don’t address the fundamental problems with managing hardware, which still exist and are getting more complex as companies go through business changes like adding and removing locations or embracing hybrid forms of work. CIOs need a solution that works for the future of corporate networks, instead of trying to duct tape together solutions that address only some aspects of what they need.

The cloud-native firewall for next-generation networks

Cloudflare is helping customers build the future of their corporate networks by unifying network connectivity and Zero Trust security. Customers who adopt the Cloudflare One platform can deprecate their hardware firewalls in favor of a cloud-native approach, making IT teams’ lives easier by solving the problems of previous generations.

Connect any source or destination with flexible on-ramps

Rather than managing different devices for different use cases, all traffic across your network — from data centers, offices, cloud properties, and user devices — should be able to flow through a single global firewall. Cloudflare One enables you to connect to the Cloudflare network with a variety of flexible on-ramp methods including network-layer (GRE or IPsec tunnels) or application-layer tunnels, direct connections, BYOIP, and a device client. Connectivity to Cloudflare means access to our entire global network, which eliminates many of the challenges with physical or virtualized hardware:

  • No more capacity planning: The capacity of your firewall is the capacity of Cloudflare’s global network (currently >100Tbps and growing).
  • No more location planning: Cloudflare’s Anycast network architecture enables traffic to connect automatically to the closest location to its source. No more picking regions or worrying about where your primary/backup appliances are — redundancy and failover are built in by default.
  • No maintenance downtimes: Improvements to Cloudflare’s firewall capabilities, like all of our products, are deployed continuously across our global edge.
  • DDoS protection built in: No need to worry about DoS attacks overwhelming your firewalls; Cloudflare’s network automatically blocks attacks close to their source and sends only the clean traffic on to its destination.

Configure comprehensive policies, from packet filtering to Zero Trust

Cloudflare One policies can be used to secure and route your organization’s traffic across all of these on-ramps. These policies can be crafted using all the same attributes available through a traditional NGFW, while also expanding to include Zero Trust attributes, such as identity from one or more IdPs or device posture from endpoint security providers.

When looking strictly at layers 3 through 5 of the OSI model, policies can be based on IP, port, protocol, and other attributes in both a stateless and stateful manner. Combined with identity attributes and the Cloudflare device client, these same attributes can also be used to build your private network on Cloudflare.
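
To make that concrete, here is a minimal sketch of creating a layer 4 network policy through the Zero Trust Gateway rules API. The endpoint path, the field names (`filters`, `traffic`, `identity`), and the expression syntax reflect my reading of the public API reference and should be treated as assumptions to verify against the current docs; the group name is a placeholder.

```python
import os
import requests

ACCOUNT_ID = os.environ["CF_ACCOUNT_ID"]  # your Cloudflare account ID
API_TOKEN = os.environ["CF_API_TOKEN"]    # token with Zero Trust edit permissions

# Block outbound connections to TCP port 3389 (RDP) for everyone except a
# hypothetical "IT Admins" identity group. Expression syntax is illustrative.
rule = {
    "name": "Block RDP for non-admins",
    "action": "block",
    "enabled": True,
    "filters": ["l4"],  # network (layer 4) policy
    "traffic": "net.dst.port == 3389",
    "identity": 'not(any(identity.groups.name[*] == "IT Admins"))',
}

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/gateway/rules",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=rule,
    timeout=10,
)
resp.raise_for_status()
print("Created network policy:", resp.json()["result"]["id"])
```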

Additionally, to help relieve the burden of managing IP allow/block lists, Cloudflare provides a set of managed lists that can be applied to both stateless and stateful policies. And on the more sophisticated end, you can also perform deep packet inspection and write programmable packet filters to enforce a positive security model and thwart the largest of attacks.

Cloudflare’s intelligence helps power the application and content categories for our Layer 7 policies, which can be used to protect your users from security threats, prevent data exfiltration, and audit usage of company resources. This starts with our protected DNS resolver, built on top of our performance-leading consumer 1.1.1.1 service. Protected DNS allows administrators to stop their users from resolving or navigating to known or potential security risks. Once a domain is resolved, administrators can apply HTTP policies to intercept, inspect, and filter a user’s traffic. And if those web applications are self-hosted or SaaS-enabled, you can even protect them with a Cloudflare Access policy, which acts as a web-based identity proxy.
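
As a companion to the network policy sketch above, the example below shows roughly what a DNS policy looks like when created through the same Gateway rules endpoint. The `dns.domains` selector and the blocked domain are illustrative assumptions, not values taken from this post.

```python
import os
import requests

ACCOUNT_ID = os.environ["CF_ACCOUNT_ID"]
API_TOKEN = os.environ["CF_API_TOKEN"]

# DNS policy: block resolution of a (hypothetical) known-bad domain and its
# subdomains for every user in the organization.
rule = {
    "name": "Block known-bad domain",
    "action": "block",
    "enabled": True,
    "filters": ["dns"],
    "traffic": 'any(dns.domains[*] == "bad.example.com")',
}

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/gateway/rules",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=rule,
    timeout=10,
)
resp.raise_for_status()
print("Created DNS policy:", resp.json()["result"]["id"])
```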

Last but not least, to help prevent data exfiltration, administrators can lock down access to external HTTP applications by utilizing remote browser isolation. And coming soon, administrators will be able to log and filter which commands a user can execute over an SSH session.

Simplify policy management: one click to propagate rules everywhere

Traditional firewalls required deploying policies on each device or configuring and maintaining an orchestration tool to help with this process. In contrast, Cloudflare allows you to manage policies across our entire network from a simple dashboard or API, or use Terraform to deploy infrastructure as code. Changes propagate across the edge in seconds thanks to our Quicksilver technology. Users can get visibility into traffic flowing through the firewall with logs, which are now configurable.

Consolidating multiple firewall use cases in one platform

Firewalls need to cover a myriad of traffic flows to satisfy different security needs, including blocking bad inbound traffic, filtering outbound connections to ensure employees and applications are only accessing safe resources, and inspecting internal (“East/West”) traffic flows to enforce Zero Trust. Different hardware often covers one or multiple use cases at different locations; we think it makes sense to consolidate these as much as possible to improve ease of use and establish a single source of truth for firewall policies. Let’s walk through some use cases that were traditionally satisfied with hardware firewalls and explain how IT teams can satisfy them with Cloudflare One.

Protecting a branch office

Traditionally, IT teams needed to provision at least one hardware firewall per office location (multiple for redundancy). This involved forecasting the amount of traffic at a given branch and ordering, installing, and maintaining the appliance(s). Now, customers can connect branch office traffic to Cloudflare from whatever hardware they already have — any standard router that supports GRE or IPsec will work — and control filtering policies across all of that traffic from Cloudflare’s dashboard.

Step 1: Establish a GRE or IPsec tunnel
The majority of mainstream hardware providers support GRE and/or IPsec as tunneling methods. Cloudflare will provide an Anycast IP address to use as the tunnel endpoint on our side, and you configure a standard GRE or IPsec tunnel with no additional steps — the Anycast IP provides automatic connectivity to every Cloudflare data center.
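
If you prefer to register the Cloudflare side of the tunnel programmatically rather than in the dashboard, a rough sketch is below. The Magic GRE tunnel endpoint path and field names (`customer_gre_endpoint`, `cloudflare_gre_endpoint`, `interface_address`) are assumptions based on the public API documentation, and all IP addresses are placeholders for your router’s WAN address and the Anycast endpoint Cloudflare assigns to your account.

```python
import os
import requests

ACCOUNT_ID = os.environ["CF_ACCOUNT_ID"]
API_TOKEN = os.environ["CF_API_TOKEN"]

# Register a GRE tunnel from a branch router (placeholder WAN address) to the
# Anycast endpoint Cloudflare assigned to this account.
tunnel = {
    "gre_tunnels": [
        {
            "name": "branch_office_primary",
            "description": "Branch office primary GRE tunnel",
            "customer_gre_endpoint": "203.0.113.10",    # your router's public IP
            "cloudflare_gre_endpoint": "198.51.100.1",  # Anycast IP from Cloudflare
            "interface_address": "10.100.0.0/31",       # point-to-point addressing inside the tunnel
            "ttl": 64,
            "mtu": 1476,
        }
    ]
}

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/magic/gre_tunnels",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=tunnel,
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["result"])
```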

Step 2: Configure network-layer firewall rules
All IP traffic can be filtered through Magic Firewall, which includes the ability to construct policies based on any packet characteristic — e.g., source or destination IP, port, protocol, country, or bit field match. Magic Firewall also integrates with IP Lists and includes advanced capabilities like programmable packet filtering.
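
To illustrate the “any packet characteristic” point, here is a hedged sketch of creating Magic Firewall rules through the Rulesets API. The `magic_transit` phase, the field names, and the Wireshark-style expression grammar follow my understanding of the public docs and should be confirmed before use; the blocked port and source prefix are arbitrary examples.

```python
import os
import requests

ACCOUNT_ID = os.environ["CF_ACCOUNT_ID"]
API_TOKEN = os.environ["CF_API_TOKEN"]

# A root ruleset for the magic_transit phase with a single rule that drops
# inbound telnet and anything sourced from an example "bad" prefix.
ruleset = {
    "name": "Branch office packet filters",
    "description": "Baseline Magic Firewall rules",
    "kind": "root",
    "phase": "magic_transit",
    "rules": [
        {
            "action": "block",
            "description": "Drop telnet and a known-bad source prefix",
            "expression": "tcp.dstport == 23 or ip.src in {192.0.2.0/24}",
            "enabled": True,
        }
    ],
}

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/rulesets",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=ruleset,
    timeout=10,
)
resp.raise_for_status()
print("Ruleset ID:", resp.json()["result"]["id"])
```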

Step 3: Upgrade traffic for application-level firewall rules
After it flows through Magic Firewall, TCP and UDP traffic can be “upgraded” for fine-grained filtering through Cloudflare Gateway. This upgrade unlocks a full suite of filtering capabilities including application and content awareness, identity enforcement, SSH/HTTP proxying, and DLP.

Protecting a high-traffic data center or VPC

Firewalls used for processing data at a high-traffic headquarters or data center location can be some of the largest capital expenditures in an IT team’s budget. Traditionally, data centers have been protected by beefy appliances that can handle high volumes gracefully, which comes at an increased cost. With Cloudflare’s architecture, because every server across our network can share the responsibility of processing customer traffic, no one device creates a bottleneck or requires expensive specialized components. Customers can on-ramp traffic to Cloudflare with BYOIP, a standard tunnel mechanism, or Cloudflare Network Interconnect, and process up to terabits per second of traffic through firewall rules smoothly.

Protecting a roaming or hybrid workforce

In order to connect to corporate resources or get secure access to the Internet, users in traditional network architectures establish a VPN connection from their devices to a central location where firewalls are located. There, their traffic is processed before it’s allowed to its final destination. This architecture introduces performance penalties, and while modern firewalls can enable user-level controls, they don’t necessarily achieve Zero Trust. Cloudflare enables customers to use a device client as an on-ramp to Zero Trust policies; watch out for more updates later this week on how to smoothly deploy the client at scale.

What’s next

We can’t wait to keep evolving this platform to serve new use cases. We’ve heard from customers who are interested in expanding NAT Gateway functionality through Cloudflare One, who want richer analytics, reporting, and user experience monitoring across all our firewall capabilities, and who are excited to adopt a full suite of DLP features across all of their traffic flowing through Cloudflare’s network. Updates on these areas and more are coming soon (stay tuned).

Cloudflare’s new firewall capabilities are available for enterprise customers today. Learn more here and check out the Oahu Program to learn how you can migrate from hardware firewalls to Zero Trust — or talk to your account team to get started today.

PII and Selective Logging controls for Cloudflare’s Zero Trust platform

Post Syndicated from Ankur Aggarwal original https://blog.cloudflare.com/pii-and-selective-logging-controls-for-cloudflares-zero-trust-platform/

At Cloudflare, we believe that you shouldn’t have to compromise privacy for security. Last year, we launched Cloudflare Gateway, a comprehensive secure web gateway with built-in Zero Trust browsing controls for your organization. Today, we’re excited to share the latest set of privacy features available to administrators to log and audit events based on your team’s needs.

Protecting your organization

Cloudflare Gateway helps organizations replace legacy firewalls while also implementing Zero Trust controls for their users. Gateway meets you wherever your users are and allows them to connect to the Internet or even your private network running on Cloudflare. This extends your security perimeter without having to purchase or maintain any additional boxes.

Organizations also benefit from improvements to user performance beyond just removing the backhaul of traffic to an office or data center. Cloudflare’s network delivers security filters closer to the user in over 250 cities around the world. Customers start their connection by using the world’s fastest DNS resolver. Once connected, Cloudflare intelligently routes their traffic through our network, applying layer 4 network filters and layer 7 HTTP filters along the way.

To get started, administrators deploy Cloudflare’s client (WARP) on user devices, whether those devices are macOS, Windows, iOS, Android, ChromeOS or Linux. The client then sends all outbound layer 4 traffic to Cloudflare, along with the identity of the user on the device.

With proxy and TLS decryption turned on, Cloudflare will log all traffic sent through Gateway and surface this in Cloudflare’s dashboard in the form of raw logs and aggregate analytics. However, in some instances, administrators may not want to retain logs or allow access to all members of their security team.

The reasons may vary, but the end result is the same: administrators need the ability to control how their users’ data is collected and who can audit those records.

Legacy solutions typically give administrators an all-or-nothing blunt instrument: organizations could either enable or disable all logging. Without any logging, those services did not capture any personally identifiable information (PII). By avoiding PII, administrators did not have to worry about access controls or permissions, but they lost all the visibility needed to investigate security events.

That lack of visibility adds even more complications when teams need to address tickets from their users to answer questions like “why was I blocked?”, “why did that request fail?”, or “shouldn’t that have been blocked?”. Without logs related to any of these events, your team can’t help end users diagnose these types of issues.

Protecting your data

Starting today, your team has more options to decide what information Cloudflare Gateway logs and who in your organization can review it. We are releasing role-based dashboard access for the logging and analytics pages, as well as selective logging of events. With role-based access, those with access to your account will have PII redacted from their dashboard view by default.

We’re excited to help organizations build least-privilege controls into how they manage the deployment of Cloudflare Gateway. Security team members can continue to manage policies or investigate aggregate attacks. However, some events call for further investigation. With today’s release, your team can delegate the ability to review and search using PII to specific team members.

We also know that some customers want to reduce the logs stored altogether, and we’re excited to help solve that too. Administrators can now select what level of logging they want Cloudflare to store on their behalf. They can control this for each component (DNS, Network, or HTTP) and can even choose to log only block events.

That setting does not mean you lose all logs; it just means Cloudflare never stores them. Selective logging, combined with our previously released Logpush service, lets you stop storing logs on Cloudflare altogether and instead push them to the destination of your choice, in the location of your choice.
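
A minimal sketch of that combination, assuming a `gateway_http` Logpush dataset and an S3 destination string in the documented format (both assumptions to verify; the bucket is a placeholder, and S3 destinations also require an ownership-validation step that is omitted here):

```python
import os
import requests

ACCOUNT_ID = os.environ["CF_ACCOUNT_ID"]
API_TOKEN = os.environ["CF_API_TOKEN"]

# Push Gateway HTTP logs to storage you control instead of retaining them
# on Cloudflare. Destination and dataset name are illustrative.
job = {
    "name": "gateway-http-to-s3",
    "dataset": "gateway_http",
    "destination_conf": "s3://example-zt-logs/gateway?region=us-east-1",
    "enabled": True,
}

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/logpush/jobs",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=job,
    timeout=10,
)
resp.raise_for_status()
print("Logpush job:", resp.json()["result"]["id"])
```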

How to Get Started

To get started, any Cloudflare Gateway customer can visit the Cloudflare for Teams dashboard and navigate to Settings > Network. The first option on this page lets you specify your preference for activity logging. By default, Gateway will log all events, including DNS queries, HTTP requests, and Network sessions. On the network settings page, you can then refine which types of events you wish to log. For each component of Gateway you will find three options (a hedged API sketch follows the list below):

  1. Capture all
  2. Capture only blocked
  3. Don’t capture
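
For teams that manage these preferences as code, here is the sketch referenced above showing roughly what the equivalent API call could look like. The `/gateway/logging` path and the `redact_pii` and `settings_by_rule_type` field names are assumptions about the account configuration API; verify them against the current reference before relying on this.

```python
import os
import requests

ACCOUNT_ID = os.environ["CF_ACCOUNT_ID"]
API_TOKEN = os.environ["CF_API_TOKEN"]

# Keep only blocked DNS and network events, don't capture HTTP events at all,
# and redact PII from whatever is stored.
settings = {
    "redact_pii": True,
    "settings_by_rule_type": {
        "dns": {"log_all": False, "log_blocks": True},
        "l4": {"log_all": False, "log_blocks": True},
        "http": {"log_all": False, "log_blocks": False},
    },
}

resp = requests.put(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/gateway/logging",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=settings,
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["result"])
```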

Additionally, you’ll find an option to redact all PII from logs by default. This will redact any information that could potentially identify a user, including user name, user email, user ID, device ID, source IP, URL, referrer, and user agent.

We’ve also included new roles within the Cloudflare dashboard, which provide better granularity when partitioning Administrator access to Access or Gateway components. These new roles will go live in January 2022 and can be modified on enterprise accounts by visiting Account Home → Members.

If you’re not yet ready to create an account, but would like to explore our Zero Trust services, check out our interactive demo where you can take a self-guided tour of the platform with narrated walkthroughs of key use cases, including setting up DNS and HTTP filtering with Cloudflare Gateway.

What’s Next

Moving forward, we’re excited to continue adding more and more privacy features that will give you and your team more granular control over your environment. The features announced today are available to users on any plan; your team can follow this link to get started today.