Tag Archives: Cloudflare Access

Eliminate VPN vulnerabilities with Cloudflare One

Post Syndicated from Dan Hall original https://blog.cloudflare.com/eliminate-vpn-vulnerabilities-with-cloudflare-one


On January 19, 2024, the Cybersecurity & Infrastructure Security Agency (CISA) issued Emergency Directive 24-01: Mitigate Ivanti Connect Secure and Ivanti Policy Secure Vulnerabilities. CISA has the authority to issue emergency directives in response to a known or reasonably suspected information security threat, vulnerability, or incident. U.S. Federal agencies are required to comply with these directives.

Federal agencies were directed to apply a mitigation against two recently discovered vulnerabilities; the mitigation was to be applied within three days. Further monitoring by CISA revealed that threat actors were continuing to exploit the vulnerabilities and had developed some workarounds to earlier mitigations and detection methods. On January 31, CISA issued Supplemental Direction V1 to the Emergency Directive instructing agencies to immediately disconnect all instances of Ivanti Connect Secure and Ivanti Policy Secure products from agency networks and perform several actions before bringing the products back into service.

This blog post will explore the threat actor’s tactics, discuss the high-value nature of the targeted products, and show how Cloudflare’s Secure Access Service Edge (SASE) platform protects against such threats.

As a side note, and as a demonstration of the value of layered protections, Cloudflare’s WAF proactively detected the Ivanti zero-day vulnerabilities and deployed emergency rules to protect Cloudflare customers.

Threat Actor Tactics

Forensic investigations (see the Volexity blog for an excellent write-up) indicate that the attacks began as early as December 2023. Piecing together the evidence shows that the threat actors chained two previously unknown vulnerabilities together to gain access to the Connect Secure and Policy Secure appliances and achieve unauthenticated remote code execution (RCE).

CVE-2023-46805 is an authentication bypass vulnerability in the products’ web components that allows a remote attacker to bypass control checks and gain access to restricted resources. CVE-2024-21887 is a command injection vulnerability in the products’ web components that allows an authenticated administrator to send specially crafted requests and execute arbitrary commands on the appliance. The remote attacker was able to bypass authentication and be seen as an “authenticated” administrator, and then take advantage of the ability to execute arbitrary commands on the appliance.

By exploiting these vulnerabilities, the threat actor had near total control of the appliance. Among other things, the attacker was able to:

  • Harvest credentials from users logging into the VPN service
  • Use these credentials to log into protected systems in search of even more credentials
  • Modify files to enable remote code execution
  • Deploy web shells to a number of web servers
  • Reverse tunnel from the appliance back to their command-and-control server (C2)
  • Avoid detection by disabling logging and clearing existing logs

Little Appliance, Big Risk

This is a serious incident that is exposing customers to significant risk. CISA is justified in issuing their directive, and Ivanti is working hard to mitigate the threat and develop patches for the software on their appliances. But it also serves as another indictment of the legacy “castle-and-moat” security paradigm. In that paradigm, remote users were outside the castle while protected applications and resources remained inside. The moat, consisting of a layer of security appliances, separated the two. The moat, in this case the Ivanti appliance, was responsible for authenticating and authorizing users, and then connecting them to protected applications and resources. Attackers and other bad actors were blocked at the moat.

This incident shows us what happens when a bad actor is able to take control of the moat itself, and the challenges customers face to recover control. Two typical characteristics of vendor-supplied appliances and the legacy security strategy highlight the risks:

  • Administrators have access to the internals of the appliance
  • Authenticated users indiscriminately have access to a wide range of applications and resources on the corporate network, increasing the risk of bad actor lateral movement

A better way: Cloudflare’s SASE platform

Cloudflare One is Cloudflare’s SSE and single-vendor SASE platform. While Cloudflare One spans broadly across security and networking services (and you can read about the latest additions here), I want to focus on the two points noted above.

First, Cloudflare One employs the principles of Zero Trust, including the principle of least privilege. As such, users that authenticate successfully only have access to the resources and applications necessary for their role. This principle also helps in the event of a compromised user account as the bad actor does not have indiscriminate network-level access. Rather, least privilege limits the range of lateral movement that a bad actor has, effectively reducing the blast radius.
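
To make the contrast concrete, here is a minimal, hypothetical sketch of the difference between network-level trust and per-application, least-privilege evaluation. The application names and group mappings are invented for illustration; this is not Cloudflare's implementation, it simply shows why a compromised account exposes far less under a least-privilege model.

```python
# Hypothetical illustration of least-privilege access evaluation.
# Names and group mappings are invented; this is not Cloudflare's implementation.

# Castle-and-moat: once authenticated to the VPN, the user can reach everything.
def vpn_can_reach(user_authenticated: bool, app: str) -> bool:
    return user_authenticated  # every app on the network is reachable

# Zero Trust / least privilege: each application has its own allow rule,
# so a compromised account only exposes the apps that account truly needs.
APP_ALLOWED_GROUPS = {
    "payroll.internal.example.com": {"finance"},
    "wiki.internal.example.com": {"finance", "support", "engineering"},
    "prod-admin.internal.example.com": {"sre"},
}

def zero_trust_can_reach(user_groups: set[str], app: str) -> bool:
    allowed = APP_ALLOWED_GROUPS.get(app, set())
    return bool(user_groups & allowed)

if __name__ == "__main__":
    support_agent = {"support"}  # a single compromised support account
    for app in APP_ALLOWED_GROUPS:
        print(app,
              "| VPN:", vpn_can_reach(True, app),
              "| Zero Trust:", zero_trust_can_reach(support_agent, app))
```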

Second, while customer administrators need to have access to configure their services and policies, Cloudflare One does not provide any external access to the system internals of Cloudflare’s platform. Without that access, a bad actor cannot launch the types of attacks that were executed against the internals of the Ivanti appliance.

It’s time to eliminate the legacy VPN

If your organization is impacted by the CISA directive, or you are just ready to modernize and want to augment or replace your current VPN solution, Cloudflare is here to help. Cloudflare’s Zero Trust Network Access (ZTNA) service, part of the Cloudflare One platform, is the fastest and safest way to connect any user to any application.

Contact us to get immediate onboarding help or to schedule an architecture workshop to help you augment or replace your Ivanti (or any) VPN solution.
Not quite ready for a live conversation? Read our learning path article on how to replace your VPN with Cloudflare or our SASE reference architecture for a view of how all of our SASE services and on-ramps work together.

Zero Trust WARP: tunneling with a MASQUE

Post Syndicated from Dan Hall original https://blog.cloudflare.com/zero-trust-warp-with-a-masque


Slipping on the MASQUE

In June 2023, we told you that we were building a new protocol, MASQUE, into WARP. MASQUE is a fascinating protocol that extends the capabilities of HTTP/3 and leverages the unique properties of the QUIC transport protocol to efficiently proxy IP and UDP traffic without sacrificing performance or privacy.

At the same time, we’ve seen a rising demand from Zero Trust customers for features and solutions that only MASQUE can deliver. All customers want WARP traffic to look like HTTPS to avoid detection and blocking by firewalls, while a significant number of customers also require FIPS-compliant encryption. We have something good here, and it’s been proven elsewhere (more on that below), so we are building MASQUE into Zero Trust WARP and will be making it available to all of our Zero Trust customers — at WARP speed!

This blog post highlights some of the key benefits our Cloudflare One customers will realize with MASQUE.

Before the MASQUE

Cloudflare is on a mission to help build a better Internet. And it is a journey we’ve been on with our device client and WARP for almost five years. The precursor to WARP was the 2018 launch of 1.1.1.1, the Internet’s fastest, privacy-first consumer DNS service. WARP was introduced in 2019 with the announcement of the 1.1.1.1 service with WARP, a high performance and secure consumer DNS and VPN solution. Then in 2020, we introduced Cloudflare’s Zero Trust platform and the Zero Trust version of WARP to help any IT organization secure their environment, featuring a suite of tools we first built to protect our own IT systems. Zero Trust WARP with MASQUE is the next step in our journey.

The current state of WireGuard

WireGuard was the perfect choice for the 1.1.1.1 with WARP service in 2019. WireGuard is fast, simple, and secure. It was exactly what we needed at the time to guarantee our users’ privacy, and it has met all of our expectations. If we went back in time to do it all over again, we would make the same choice.

But the other side of the simplicity coin is a certain rigidity. We find ourselves wanting to extend WireGuard to deliver more capabilities to our Zero Trust customers, but WireGuard is not easily extended. Capabilities such as better session management, advanced congestion control, or simply the ability to use FIPS-compliant cipher suites are not options within WireGuard; these capabilities would have to be added on as proprietary extensions, if it was even possible to do so.

Plus, while WireGuard is popular in VPN solutions, it is not standards-based, and therefore not treated like a first class citizen in the world of the Internet, where non-standard traffic can be blocked, sometimes intentionally, sometimes not. WireGuard uses a non-standard port, port 51820, by default. Zero Trust WARP changes this to use port 2408 for the WireGuard tunnel, but it’s still a non-standard port. For our customers who control their own firewalls, this is not an issue; they simply allow that traffic. But many public Wi-Fi networks, and many of the approximately 7,000 ISPs in the world, don’t know anything about WireGuard and block these ports. We’ve also faced situations where the ISP does know what WireGuard is and blocks it intentionally.

This can play havoc with roaming Zero Trust WARP users at their local coffee shop, in hotels, on planes, or anywhere else with captive portals or public Wi-Fi access, and even sometimes with their local ISP. The user is expecting reliable access with Zero Trust WARP, and is frustrated when their device is blocked from connecting to Cloudflare’s global network.

Now we have another proven technology — MASQUE — which uses and extends HTTP/3 and QUIC. Let’s do a quick review of these to better understand why Cloudflare believes MASQUE is the future.

Unpacking the acronyms

HTTP/3 and QUIC are among the most recent advancements in the evolution of the Internet, enabling faster, more reliable, and more secure connections to endpoints like websites and APIs. Cloudflare worked closely with industry peers through the Internet Engineering Task Force on the development of RFC 9000 for QUIC and RFC 9114 for HTTP/3. The technical background on the basic benefits of HTTP/3 and QUIC are reviewed in our 2019 blog post where we announced QUIC and HTTP/3 availability on Cloudflare’s global network.

Most relevant for Zero Trust WARP, QUIC delivers better performance on low-latency or high packet loss networks thanks to packet coalescing and multiplexing. QUIC packets in separate contexts during the handshake can be coalesced into the same UDP datagram, thus reducing the number of receive and system interrupts. With multiplexing, QUIC can carry multiple HTTP sessions within the same UDP connection. Zero Trust WARP also benefits from QUIC’s high level of privacy, with TLS 1.3 designed into the protocol.

MASQUE unlocks QUIC’s potential for proxying by providing the application layer building blocks to support efficient tunneling of TCP and UDP traffic. In Zero Trust WARP, MASQUE will be used to establish a tunnel over HTTP/3, delivering the same capability as WireGuard tunneling does today. In the future, we’ll be in position to add more value using MASQUE, leveraging Cloudflare’s ongoing participation in the MASQUE Working Group. This blog post is a good read for those interested in digging deeper into MASQUE.
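
For readers curious what that tunneling looks like on the wire, the sketch below shows roughly how a MASQUE proxying request is expressed as an extended CONNECT request over HTTP/3, following the connect-udp scheme from RFC 9298. The proxy hostname, target address, and port are placeholders, and a real client such as WARP would use an HTTP/3 stack and the capsule protocol rather than building headers by hand.

```python
# Rough shape of a MASQUE "connect-udp" request (RFC 9298) carried over HTTP/3.
# Illustrative only: a real client uses an HTTP/3/QUIC stack and the capsule
# protocol to move datagrams, rather than constructing pseudo-headers by hand.

target_host = "192.0.2.10"   # placeholder target behind the proxy
target_port = 4500           # placeholder UDP port

connect_udp_request = {
    ":method": "CONNECT",
    ":protocol": "connect-udp",          # extended CONNECT (RFC 9220 for HTTP/3)
    ":scheme": "https",
    ":authority": "masque-proxy.example.com",  # hypothetical proxy hostname
    ":path": f"/.well-known/masque/udp/{target_host}/{target_port}/",
    "capsule-protocol": "?1",            # subsequent datagrams flow as capsules
}

for name, value in connect_udp_request.items():
    print(f"{name}: {value}")
```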

OK, so Cloudflare is going to use MASQUE for WARP. What does that mean to you, the Zero Trust customer?

Proven reliability at scale

Cloudflare’s network today spans more than 310 cities in over 120 countries, and interconnects with over 13,000 networks globally. HTTP/3 and QUIC were introduced to the Cloudflare network in 2019, the HTTP/3 standard was finalized in 2022, and HTTP/3 represented about 30% of all HTTP traffic on our network in 2023.

We are also using MASQUE for iCloud Private Relay and other Privacy Proxy partners. The services that power these partnerships, from our Rust-based proxy framework to our open source QUIC implementation, are already deployed globally in our network and have proven to be fast, resilient, and reliable.

Cloudflare is already operating MASQUE, HTTP/3, and QUIC reliably at scale. So we want you, our Zero Trust WARP users and Cloudflare One customers, to benefit from that same reliability and scale.

Connect from anywhere

Employees need to be able to connect from anywhere that has an Internet connection. But that can be a challenge as many security engineers will configure firewalls and other networking devices to block all ports by default, and only open the most well-known and common ports. As we pointed out earlier, this can be frustrating for the roaming Zero Trust WARP user.

We want to fix that for our users, and remove that frustration. HTTP/3 and QUIC deliver the perfect solution. QUIC is carried on top of UDP (protocol number 17), while HTTP/3 uses port 443 for encrypted traffic. Both of these are well known, widely used, and are very unlikely to be blocked.

We want our Zero Trust WARP users to reliably connect wherever they might be.

Compliant cipher suites

MASQUE leverages TLS 1.3 with QUIC, which provides a number of cipher suite choices. WireGuard also uses standard cipher suites. But some standards are more, let’s say, standard than others.

NIST, the National Institute of Standards and Technology and part of the US Department of Commerce, does a tremendous amount of work across the technology landscape. Of interest to us is the NIST research into network security that results in FIPS 140-2 and similar publications. NIST studies individual cipher suites and publishes lists of those they recommend for use, recommendations that become requirements for US Government entities. Many other customers, both government and commercial, use these same recommendations as requirements.

Our first MASQUE implementation for Zero Trust WARP will use TLS 1.3 and FIPS compliant cipher suites.

How can I get Zero Trust WARP with MASQUE?

Cloudflare engineers are hard at work implementing MASQUE for the mobile apps, the desktop clients, and the Cloudflare network. Progress has been good, and we will open this up for beta testing early in the second quarter of 2024 for Cloudflare One customers. Your account team will be reaching out with participation details.

Continuing the journey with Zero Trust WARP

Cloudflare launched WARP five years ago, and we’ve come a long way since. This introduction of MASQUE to Zero Trust WARP is a big step, one that will immediately deliver the benefits noted above. But there will be more — we believe MASQUE opens up new opportunities to leverage the capabilities of QUIC and HTTP/3 to build innovative Zero Trust solutions. And we’re also continuing to work on other new capabilities for our Zero Trust customers.
Cloudflare is committed to continuing our mission to help build a better Internet, one that is more private and secure, scalable, reliable, and fast. And if you would like to join us in this exciting journey, check out our open positions.

Introducing behavior-based user risk scoring in Cloudflare One

Post Syndicated from Noelle Kagan original https://blog.cloudflare.com/cf1-user-risk-score


Cloudflare One, our secure access service edge (SASE) platform, is introducing new capabilities to detect risk based on user behavior so that you can improve security posture across your organization.

Traditionally, security and IT teams spend a lot of time, labor, and money analyzing log data to track how risk is changing within their business and to stay on top of threats. Sifting through such large volumes of data – the majority of which may well be benign user activity – can feel like finding a needle in a haystack.

Cloudflare’s approach simplifies this process with user risk scoring. Using AI and machine learning techniques, we analyze the real-time telemetry of user activities and behaviors that pass through our network to identify abnormal behavior and potential indicators of compromise that could endanger your organization. Your security teams can then lock down suspicious activity and adapt your security posture in the face of changing risk factors and sophisticated threats.

User risk scoring

The concept of trust in cybersecurity has evolved dramatically. The old model of “trust but verify” has given way to a Zero Trust approach, where trust is never assumed and verification is continuous, as each network request is scrutinized. This form of continuous evaluation enables administrators to grant access based not just on the contents of a request and its metadata, but on its context — such as whether the user typically logs in at that time or location.

Previously, this kind of contextual risk assessment was time-consuming and required expertise to parse through log data. Now, we’re excited to introduce Zero Trust user risk scoring, which does this automatically, allowing administrators to specify behavioral rules, like monitoring for anomalous “impossible travel” logins or custom Data Loss Prevention (DLP) triggers, and use these to generate dynamic user risk scores.

Zero Trust user risk scoring detects user activity and behaviors that could introduce risk to your organizations, systems, and data and assigns a score of Low, Medium, or High to the user involved. This approach is sometimes referred to as user and entity behavior analytics (UEBA) and enables teams to detect and remediate possible account compromise, company policy violations, and other risky activity.

How risk scoring works and detecting user risk

User risk scoring is built to examine behaviors. Behaviors are actions taken or completed by a user and observed by Cloudflare One, our SASE platform that helps organizations implement Zero Trust.

Once tracking for a particular behavior is enabled, the Zero Trust risk scoring engine immediately starts to review existing logs generated within your Zero Trust account. Then, after a user in your account performs a behavior that matches one of the enabled risk behaviors based on observed log data, Cloudflare assigns a risk score — Low, Medium, or High — to the user who performed the behavior.

Behaviors are built using log data from within your Cloudflare account. No additional user data is collected, tracked, or stored beyond what is already available in the existing Zero Trust logs (which adhere to the log retention timeframes).

A popular priority amongst security and insider threat teams is detecting when a user performs so-called “impossible travel”. Impossible travel, available as a predefined risk behavior today, is when a user completes a login from two different locations that the user could not have traveled to in that period of time. For example, if Alice is in Seattle and logs into her organization’s finance application that is protected by Cloudflare Access and only a few minutes later is seen logging into her organization’s business suite from Sydney, Australia, impossible travel would be triggered and Alice would be assigned a risk level of High.
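
As a rough illustration of the idea, and not Cloudflare’s actual detection logic, an impossible travel check boils down to comparing the great-circle distance between two login locations with the time elapsed between them:

```python
# Simplified impossible-travel check: if the implied speed between two logins
# exceeds what a commercial flight could plausibly achieve, flag the user.
# Conceptual sketch only, not Cloudflare's implementation.
from math import radians, sin, cos, asin, sqrt

MAX_PLAUSIBLE_KMH = 900  # roughly airliner cruising speed

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def is_impossible_travel(login_a, login_b):
    """Each login is (latitude, longitude, unix_timestamp_seconds)."""
    distance_km = haversine_km(login_a[0], login_a[1], login_b[0], login_b[1])
    hours = abs(login_b[2] - login_a[2]) / 3600 or 1e-9  # avoid division by zero
    return distance_km / hours > MAX_PLAUSIBLE_KMH

# A Seattle login followed ten minutes later by a Sydney login gets flagged.
seattle = (47.61, -122.33, 1_700_000_000)
sydney = (-33.87, 151.21, 1_700_000_600)
print(is_impossible_travel(seattle, sydney))  # True
```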

Users observed performing multiple risk behaviors are assigned the highest-level risk they’ve triggered. This real-time risk assessment empowers your security teams to act swiftly and decisively.

Zero Trust user risk scoring detecting impossible travel and flagging a user as high risk

Enabling predefined risk behaviors

Behaviors can be enabled and disabled at any time, but are disabled by default. Therefore, users will not be assigned risk scores until you have decided what is considered a risk to your organization and how urgent that risk is.

To start detecting a given risk behavior, an administrator must first ensure the behavior requirements are met (for instance, to detect whether a user has triggered a high number of DLP policies, you’ll need to first set up a DLP profile). From there, simply enable the behavior in the Zero Trust dashboard.

After a behavior has been enabled, Cloudflare will start analyzing behaviors to flag users with the corresponding risk when detected. The risk level of any behavior can be changed by an administrator. You have the freedom to enable behaviors that are relevant to your security posture as well as adjust the default risk score (Low, Medium, or High) from an out-of-the-box assignment.

And for security administrators who have investigated a user and need to clear a user’s risk score, simply go to Risk score > User risk scoring, choose the appropriate user, and select ‘Reset user risk’ followed by ‘Confirm.’ Once a user’s risk score is reset, they disappear from the risk table — until or unless they trigger another risk behavior.

Zero Trust user risk scoring behaviors can be enabled in seconds

How do I get started?

User risk scoring and DLP are part of Cloudflare One, which converges Zero Trust security and network connectivity services on one unified platform and global control plane.

To get access via Cloudflare One, reach out for a consultation, or contact your account manager.

Wildcard and multi-hostname support in Cloudflare Access

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/access-wildcard-and-multi-hostname/

We are thrilled to announce the full support of wildcard and multi-hostname application definitions in Cloudflare Access. Until now, Access had limitations that restricted it to a single hostname or a limited set of wildcards. Before diving into these new features let’s review Cloudflare Access and its previous limitations around application definition.

Access and hostnames

Cloudflare Access is the gateway to applications, enforcing security policies based on identity, location, network, and device health. Previously, Access applications were defined as a single hostname. A hostname is a unique identifier assigned to a device connected to the internet, commonly used to identify a website, application, or server. For instance, “www.example.com” is a hostname.

Upon successful completion of the security checks, a user is granted access to the protected hostname via a cookie in their browser, in the form of a JSON Web Token (JWT). This cookie’s session lasts for a specific period of time defined by the administrators and any request made to the hostname must have this cookie present.

However, a single hostname application definition was not sufficient in certain situations, particularly for organizations with Single Page Applications and/or hundreds of identical hostnames.

Many Single Page Applications have two separate hostnames – one for the front-end user experience and the other for receiving API requests (e.g., app.example.com and api.example.com). This created a problem for Access customers because the front-end service could no longer communicate with the API as they did not share a session, leading to Access blocking the requests. Developers had to use different custom approaches to issue or share the Access JWT between different hostnames.

In many instances, organizations also deploy applications using a consistent naming convention, such as example.service123.example.com, especially for automatically provisioned applications. These applications often have the same set of security requirements. Previously, an Access administrator had to create a unique Access application per unique hostname, even if the services were functionally identical. This resulted in hundreds or thousands of Access applications needing to be created.

We aimed to make things easier for security teams, as easier configuration means a more coherent security architecture and, ultimately, more secure applications.

We introduced two significant changes to Cloudflare Access: Multi-Hostname Applications and Wildcard Support.

Multi-Hostname Applications

Multi-Hostname Applications allow teams to protect multiple subdomains with a single Access app, simplifying the process and reducing the need for multiple apps.

Access also takes care of JWT cookie issuance across all hostnames associated with a given application. This means that a front-end and API service on two different hostnames can communicate securely without any additional software changes.

Wildcards

A wildcard is a special character, in this case *, that defines an application pattern to match instead of requiring each unique application to be defined explicitly. Access applications can now be defined using a wildcard anywhere in the subdomain or path of a hostname. This allows an administrator to protect hundreds of applications with a single application policy.

In a scenario where an application requires additional security controls, Access is configured such that the most specific hostname definition wins (e.g., test.example.com will take precedence over *.example.com).
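
Conceptually, the precedence rule works like this: among all application definitions whose pattern matches the request’s hostname, the most specific one (fewest wildcards, longest literal match) wins. The sketch below illustrates that behavior with invented patterns; it is not Access’s internal matching code.

```python
# Conceptual illustration of "most specific hostname wins" when several
# Access application definitions match; not Access's internal implementation.
from fnmatch import fnmatch

app_patterns = {
    "*.example.com": "baseline policy",
    "test.example.com": "stricter policy",
}

def pick_application(hostname: str):
    matches = [p for p in app_patterns if fnmatch(hostname, p)]
    if not matches:
        return None
    # Fewer '*' characters and a longer literal pattern mean "more specific".
    best = min(matches, key=lambda p: (p.count("*"), -len(p)))
    return best, app_patterns[best]

print(pick_application("test.example.com"))  # ('test.example.com', 'stricter policy')
print(pick_application("dev.example.com"))   # ('*.example.com', 'baseline policy')
```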

Give it a try!

Wildcard Applications are now available in open beta on the Cloudflare One Dashboard. Multi-hostname support will enter an open beta in the coming weeks. For more information, please see our product documentation about Multi-hostname applications and wildcards.

Introducing custom pages for Cloudflare Access

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/access-custom-pages/

Over 10,000 organizations rely on Cloudflare Access to connect their employees, partners, and contractors to the applications they need. From small teams on our free plan to some of the world’s largest enterprises, Cloudflare Access is the Zero Trust front door to how they work together. As more users start their day with Cloudflare Access, we’re excited to announce new options to customize how those users experience our industry-leading Zero Trust solution: customizable Cloudflare Access pages, including the login page, block pages, and the application launcher.

Where does Cloudflare Access fit in a user’s workflow today?

Most teams we work with start their Zero Trust journey by replacing their existing virtual private network (VPN) with Cloudflare Access. The reasons vary. For some teams, their existing VPN allows too much trust by default and Access allows them to quickly build segmentation based on identity, device posture, and other factors. Other organizations deploy Cloudflare Access because they are exhausted from trying to maintain their VPN and dealing with end user complaints.

When those administrators begin setting up Cloudflare Access, they connect the resources they need to protect to Cloudflare’s network. They can deploy a Cloudflare Tunnel to create a secure, outbound-only, connection to Cloudflare, rely on our existing DNS infrastructure, or even force SaaS application logins through our network. Administrators can then layer on granular Zero Trust rules to determine who can reach a given resource.

To the end user, Cloudflare Access is just a security guard checking for identity, device posture, or other signals at every door. In most cases they should never need to think about us. Instead, they just enjoy a much faster experience with less hassle. When they attempt to reach an application or service, we check each and every request and connection for proof that they should be allowed.

When they do notice Cloudflare Access, they interact with screens that help them make a decision about what they need. In these cases we don’t just want to be a silent security guard – we want to be a helpful tour guide.

Cloudflare Access supports the ability for administrators to configure multiple identity providers simultaneously. Customers love this capability when they work with contractors or acquired teams. We can also configure this only for certain applications. When users arrive, though, we need to know which direction to send them for their initial authentication. We present this selection screen, along with guiding text provided by the administrator, to the user.

When teams move their applications behind Cloudflare Access, we become the front door to how they work. We use that position to present the user with all of the applications they can reach in a portal that allows them to click on any tile to launch the application.

In some cases, the user lacks sufficient permissions to reach the destination. Even though they are being blocked we still want to reduce confusion. Instead of just presenting a generic browser error or dropping a connection, we display a block page.

Why do these need to change?

More and more large enterprises are starting to adopt a Zero Trust VPN replacement and they’re selecting Cloudflare to do so. Unlike small teams that can send a short Slack message about an upcoming change to their employee workflow, some of the CIOs and CSOs that deploy Access need to anticipate questions and curiosity from tens of thousands of employees and contractors.

Those users do not know what Cloudflare is and we don’t need them to. Instead, we just want to securely connect them to the tools they need. To solve that, we need to give IT administrators more space to communicate and we need to get our branding out of the way.

What will I be able to customize?

Following the release of Access page customization, administrators will be able to customize the login screen, access-denied errors, and the Access application launcher.

What’s next?

We are building page customization in Cloudflare Access following the existing template our reverse proxy customers can use to modify pages presented to end users. We’re excited to bring that standard experience to these workflows as well.

Even though we’re building on that pattern, we still want your feedback. Ahead of a closed beta we are looking for customers who want to provide input as we fine tune this new configuration option. Interested in helping shape this work? Let us know here.

Using Cloudflare Access with CNI

Post Syndicated from David Tuber original https://blog.cloudflare.com/access-aegis-cni/

We are thrilled to introduce an innovative new approach to secure hosted applications via Cloudflare Access without the need for any installed software or custom code on your application server. But before we dive into how this is possible, let’s review why Access previously required installed software or custom code on your application server.

Protecting an application with Access

Traditionally, companies used a Virtual Private Network (VPN) to access a hosted application, where all they had to do was configure an IP allowlist rule for the VPN. However, this is a major security threat because anyone on the VPN can access the application, including unauthorized users or attackers.

We built Cloudflare Access to replace VPNs and provide the option to enforce Zero Trust policies in hosted applications. Access allows you to verify a user’s identity before they even reach the application. By acting as a proxy in front of your application’s hostname (e.g. app.example.com), Cloudflare enables strong verification techniques such as identity, device posture, hard key MFA, and more. All without having to add SSO or authentication logic directly into your applications.

However, since Access enforces at a hostname level, there is still a potential for bypass – the origin server IP address. This means that if someone knows your origin server IP address, they can bypass Access and directly interact with the target application. Seems scary, right? Luckily, there are proven solutions to prevent an origin IP attack.

Traditionally, organizations use two approaches to prevent an Origin IP bypass: Cloudflare Tunnel and JSON Web Token (JWT) Validation.

Cloudflare Tunnel

Cloudflare Tunnel creates a secure, outbound-only tunnel from your origin server to Cloudflare, with no origin IP address. This means that the only inbound traffic to your origin is coming from Cloudflare. However, it does require a daemon to be installed in your origin server’s network.

JWT Validation

JWT validation, on the other hand, prevents requests coming from unauthenticated sources by issuing a JWT when a user successfully authenticates. Application software can then be modified to check any inbound HTTP request for the Access JWT. The Access JWT uses signature-based verification to ensure that it cannot be easily spoofed by malicious users. However, modifying the logic of legacy hosted applications can be cumbersome or even impossible, making JWT validation a limited option for some.
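
For teams that can modify their application, a minimal JWT check typically fetches the signing keys from the team’s Access certs endpoint and verifies the token presented in the Cf-Access-Jwt-Assertion header (or the CF_Authorization cookie). The sketch below uses the PyJWT library; the team domain and application audience (AUD) tag are placeholders you would replace with your own values.

```python
# Minimal sketch of validating a Cloudflare Access JWT inside an application.
# Requires the PyJWT package (pip install "pyjwt[crypto]"); the team domain and
# audience (AUD) tag below are placeholders for your own Zero Trust values.
import jwt
from jwt import PyJWKClient

TEAM_DOMAIN = "https://your-team.cloudflareaccess.com"   # placeholder
POLICY_AUD = "your-application-audience-tag"              # placeholder
CERTS_URL = f"{TEAM_DOMAIN}/cdn-cgi/access/certs"

jwks_client = PyJWKClient(CERTS_URL)  # keys are fetched lazily on first use

def verify_access_jwt(token: str) -> dict:
    """Return the decoded claims if the Access JWT is valid, otherwise raise."""
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=POLICY_AUD,
        options={"require": ["exp", "iat"]},
    )

# In a web framework, read the token from the Cf-Access-Jwt-Assertion header
# (or the CF_Authorization cookie) and reject the request if verification fails.
```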

Protecting an application without installed or custom software

And now, the exciting news – our new approach to protect Access applications from bypass without any installed software or code modifications! We achieve this using Cloud Network Interconnect (CNI) and a new Cloudflare product called Aegis.

In this blog, we’ll explore the benefits of using Access, CNI, and Aegis together to protect and optimize your applications. This offers a better way to securely connect your on-premise or cloud infrastructure to the Cloudflare network, as well as manage access to your applications and resources. All without having to install additional software.

Cloudflare Access

Cloudflare Access is a cloud-based identity and access management solution that allows users to secure access to their applications and resources. With Access, users can easily set up single sign-on (SSO) and multi-factor authentication (MFA) to protect against unauthorized access.

Many companies use Access today to protect their applications. However, since Access is based on an application’s hostname, there is still a possibility that security controls are bypassed by going straight to an application’s IP address. The solution to this has been to use Cloudflare Tunnel or JWT validation to ensure that any request to the application server is legitimate and coming directly from Cloudflare.

Both Cloudflare Tunnel and JWT validation require additional software (e.g. cloudflared) or code customization in the application itself. This takes time and requires ongoing monitoring and maintenance.

Cloudflare Network Interconnect

Cloudflare Network Interconnect (CNI) enables users to securely connect their on-premises or cloud infrastructure to the Cloudflare network. Until recently, direct network connections were a cumbersome and manual process. Cloud CNI allows users to manage their own direct connections between their infrastructure and Cloudflare.

Cloudflare peers with over 11,500 networks directly and is located in over 285 cities, which means there are many opportunities for direct connections with a company’s own private network. This can massively reduce the latency of requests between an application server and Cloudflare, leading to a better application user experience.

Aegis

Cloudflare Aegis allows a customer to define a reliable IP address for traffic from Cloudflare to their own infrastructure. With Aegis, you are assured that traffic from the assigned IP address comes only from Cloudflare, and only for traffic associated with your specific account. This means that a company can configure their origin applications to verify that all inbound requests are coming from the known IP. You can read more about Aegis here.

Access + CNI and Aegis

With CNI and Aegis, the only configuration required is an allowlist rule based on the inbound IP address. Cloudflare takes care of the rest and ensures that all requests are verified by Access (and other security products like DDoS protection and the Web Application Firewall). All without requiring software or application code modification!

This is a different approach from traditional IP allowlists for VPNs because you can still enforce Zero Trust policies on the inbound request. Plus, Cloudflare has logic in place to ensure that the Aegis IP address can only be used by Cloudflare services.
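
At the origin, that allowlist can be as simple as rejecting any connection whose source address is not your dedicated Aegis IP. The sketch below shows the idea as a small WSGI middleware; the address used is a documentation placeholder, and in practice this check usually lives in a network firewall or load balancer rather than in application code.

```python
# Sketch of an origin-side allowlist: only accept requests whose source IP is
# the dedicated Aegis address assigned to your account. The address below is a
# documentation placeholder (RFC 5737), not a real Aegis IP.
from ipaddress import ip_address, ip_network

AEGIS_ALLOWLIST = [ip_network("203.0.113.7/32")]  # placeholder Aegis IP

class AegisAllowlistMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        source = ip_address(environ.get("REMOTE_ADDR", "0.0.0.0"))
        if not any(source in net for net in AEGIS_ALLOWLIST):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Connections are only accepted from Cloudflare Aegis.\n"]
        return self.app(environ, start_response)
```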

Hosting your own infrastructure and applications can be a powerful way to have complete control and customization over your online presence. However, one of the challenges of hosting your own infrastructure is providing secure access to your applications and resources.

Traditionally, users have relied on virtual private networks (VPNs) or private circuits to provide secure access to their applications. While these solutions can be effective, they can also be complex to set up and maintain, and may not offer the same level of security and performance as newer solutions.

How it works

An application can be secured behind Access if its hostname is configured in Cloudflare. That hostname can be pointed to a Cloudflare Tunnel, a load balancer, or a direct IP address. The application can then be configured to enforce specific security policies like identity provider group, hard key MFA, device posture, and more.

However, the network path that the application takes can be different, and Cloudflare Network Interconnect allows for a completely private path from Cloudflare to your application. For example, Cloudflare Tunnel implicitly assumes that the network path between Cloudflare and your application uses the public Internet. Cloudflare Tunnel encrypts your traffic over the public Internet and ensures that your connection to Cloudflare is secure. But the public Internet is still a concern for a lot of people, who don’t want to expose their service to the public Internet at all.

What if you implicitly knew that your connection was secure because nobody else was using it? That’s what Cloudflare Network Interconnect allows you to guarantee: private, performant connectivity back to Cloudflare.

By configuring Access and CNI together, you get protected application access over a private link. Cloudflare Aegis provides a dedicated IP that allows you to apply network-level firewall policies to ensure that your solution is completely airgapped: nothing can reach your application except Access-protected requests coming from your own dedicated IP address.

Even if somebody could access your application over the CNI, they would get blocked by your firewall because they didn’t go through Access. This provides security at Layer 7 and Layer 3: at the application and the network.

Getting started

Access, Cloud CNI, and Aegis are generally available to all Enterprise customers. If you would like to learn more about protecting and accelerating your private applications, please reach out to your account team for more information and to learn how to enable these features for your account.

Announcing SCIM support for Cloudflare Access & Gateway

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/access-and-gateway-with-scim/

Today, we’re excited to announce that Cloudflare Access and Gateway now support the System for Cross-domain Identity Management (SCIM) protocol. Before we dive into what this means, let’s take a step back and review what SCIM, Access, and Gateway are.

SCIM is a protocol that enables organizations to manage user identities and access to resources across multiple systems and domains. It is often used to automate the process of creating, updating, and deleting user accounts and permissions, and to keep these accounts and permissions in sync across different systems.

For example, most organizations have an identity provider, such as Okta or Azure Active Directory, that stores information about its employees, such as names, addresses, and job titles. The organization also likely uses cloud-based applications for collaboration. In order to access the cloud-based application, employees need to create an account and log in with a username and password. Instead of manually creating and managing these accounts, the organization can use SCIM to automate the process. Both the on-premises system and the cloud-based application are configured to support SCIM.

When a new employee is added to, or removed from, the identity provider, SCIM automatically creates an account for that employee in the cloud-based application, using the information from the on-premises system. If an employee’s information is updated in the identity provider, such as a change in job title, SCIM automatically updates the corresponding information in the cloud-based application. If an employee leaves the organization, their account can be deleted from both systems using SCIM.

SCIM helps organizations efficiently manage user identities and access across multiple systems, reducing the need for manual intervention and ensuring that user information is accurate and up to date.
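
Concretely, SCIM provisioning is a set of REST calls against standardized endpoints such as /Users and /Groups (RFC 7643 and RFC 7644), with JSON bodies that follow the core SCIM schemas. The sketch below shows roughly what an identity provider sends when creating and later deactivating a user; the base URL and bearer token are placeholders, and in practice the IdP generates these requests for you once SCIM is configured.

```python
# Rough shape of SCIM 2.0 provisioning calls (RFC 7643/7644) as an identity
# provider would issue them. The base URL and token are placeholders; real IdPs
# such as Okta or Azure AD generate these requests once SCIM is configured.
import json
import urllib.request

SCIM_BASE = "https://scim.example.com/v2"   # placeholder SCIM endpoint
TOKEN = "placeholder-bearer-token"

def scim_request(method: str, path: str, body: dict | None = None):
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(
        SCIM_BASE + path,
        data=data,
        method=method,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/scim+json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read() or b"{}")

# Create a user when they are added to the identity provider.
new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "alice@example.com",
    "name": {"givenName": "Alice", "familyName": "Example"},
    "active": True,
}
# scim_request("POST", "/Users", new_user)

# Deactivate the same user when they leave: a PATCH flipping "active" to false.
deactivate = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [{"op": "replace", "value": {"active": False}}],
}
# scim_request("PATCH", "/Users/{user-id}", deactivate)
```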

Cloudflare Access provides secure access to your internal applications and resources. It integrates with your existing identity provider to enforce strong authentication for users and ensure that only authorized users have access to your organization’s resources. After a user successfully authenticates via the identity provider, Access initiates a session for that user. Once the session has expired, Access will redirect the user back to the identity provider.

Similarly, Cloudflare Gateway is a comprehensive secure web gateway (SWG) which leverages the same identity provider configurations as Access to allow administrators to build DNS, Network, and HTTP inspection policies based on identity. Once a user logs in through the WARP client via the identity provider, their identity is logged and evaluated against any policies created by their organization’s administrator.

Challenges before SCIM

Before SCIM, if a user needed to be deprovisioned (e.g. because they left the business, due to a security breach, or for other reasons), an administrator needed to remove access for the user in both the identity provider and Access. This was because a user’s Cloudflare Zero Trust session would stay active until they attempted to log in via the identity provider again. This was time-consuming and error-prone, and it left room for security vulnerabilities if a user’s access was not removed in a timely manner.

Another challenge with Cloudflare Access and Gateway was that identity provider groups had to be manually entered. This meant that if an identity provider group changed, an administrator had to manually update the value within the Cloudflare Zero Trust dashboard to reflect those changes. This was tedious and time-consuming, and led to inconsistencies if the updates were not made promptly. Additionally, it required additional resources and expertise to manage this process effectively.

SCIM for Access & Gateway

Now, with the integration of SCIM, Access and Gateway can automatically deprovision users after they are deactivated in an identity provider and synchronize identity provider groups. This ensures that only active users, in the right group, have access to your organization’s resources, improving the security of your network.

User deprovisioning via SCIM listens for any user deactivation events in the identity provider and then revokes all active sessions for that user. This immediately cuts off their access to any application protected by Access and their session via WARP for Gateway.

Additionally, the integration of SCIM allows for the synchronization of identity provider group information in Access and Gateway policies. This means that all identity provider groups will automatically be available in both the Access and Gateway policy builders. There is also an option to automatically force a user to reauthenticate if their group membership changes.

For example, if you wanted to create an Access policy that only applied to users with emails associated with example.com who are not part of the risky user group, you would be able to build the policy shown below by simply selecting the risky user group from a drop-down:

Similarly, if you wanted to create a Gateway policy to block example.com and all of its subdomains for these same users you could create the policy below:

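Expressed as configuration rather than dashboard screenshots, the two policies look roughly like the sketch below. The field names loosely follow the shapes used by the Access and Gateway APIs but should be treated as illustrative; the identity provider ID, group name, and the Gateway traffic expression are placeholders.

```python
# Illustrative shape of the Access and Gateway policies described above.
# Field names roughly follow the public APIs but are shown only as a sketch;
# the identity provider ID and group name are placeholders.
access_policy = {
    "name": "example.com users outside the risky group",
    "decision": "allow",
    "include": [
        {"email_domain": {"domain": "example.com"}},
    ],
    "exclude": [
        # Group synchronized from the identity provider via SCIM.
        {"okta": {"name": "risky user group",
                  "identity_provider_id": "placeholder-idp-id"}},
    ],
}

# The matching Gateway DNS policy blocks example.com and all of its subdomains
# for the same users (wirefilter-style expression, shown as a sketch).
gateway_dns_policy = {
    "name": "Block example.com for the same users",
    "action": "block",
    "filters": ["dns"],
    "traffic": 'any(dns.domains[*] == "example.com")',
}
```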

What’s next

Today, SCIM support is available for Azure Active Directory and Okta for self-hosted Access applications. In the future, we plan to extend support to more identity providers and to Access for SaaS.

Try it now

SCIM is available for all Zero Trust customers today and can be used to improve operations and overall security. Try out SCIM for Access and Gateway yourself today.

One-click data security for your internal and SaaS applications

Post Syndicated from Tim Obezuk original https://blog.cloudflare.com/one-click-zerotrust-isolation/

Most of the CIOs we talk to want to replace dozens of point solutions as they start their own Zero Trust journey. Cloudflare One, our comprehensive Secure Access Service Edge (SASE) platform, can help teams of any size rip out all the legacy appliances and services that tried to keep their data, devices, and applications safe without compromising speed.

We also built those products to work better together. Today, we’re bringing Cloudflare’s best-in-class browser isolation technology to our industry-leading Zero Trust access control product. Your team can now control the data in any application, and what a user can do in the application, with a single click in the Cloudflare dashboard. We’re excited to help you replace your private networks, virtual desktops, and data control boxes with a single, faster solution.

Zero Trust access control is just the first step

Most organizations begin their Zero Trust migration by replacing a virtual private network (VPN). VPN deployments trust too many users by default. In most configurations, any user on a private network can reach any resource on that same network.

The consequences vary. On one end of the spectrum, employees in marketing can accidentally stumble upon payroll amounts for the entire organization. At the other end, attackers who compromise the credentials of a support agent can move through a network to reach trade secrets or customer production data.

Zero Trust access control replaces this model by inverting the security posture. A Zero Trust network trusts no one by default. Every user, and each request or connection, must prove they can reach a specific resource. Administrators can build granular rules and monitor comprehensive logs to prevent incidental or malicious access incidents.

Over 10,000 teams have adopted Cloudflare One to replace their own private network with a Zero Trust model. We offer those teams rules that go beyond just identity. Security teams can enforce hard key authentication for specific applications as a second factor. Sensitive production systems can require users to provide the reason they need temporary access while they request permission from a senior manager. We integrate with just about every device posture provider, or you can build your own, to ensure that only corporate devices connect to your systems.

The teams who deploy this solution improve the security of their enterprise overnight while also making their applications faster and more usable for employees in any region. However, once users pass all of those checks we still rely on the application to decide what they can and cannot do.

In some cases, that means Zero Trust access control is not sufficient. An employee planning to leave tomorrow could download customer contact info. A contractor connecting from an unmanaged device can screenshot schematics. As enterprises evolve on their SASE migration, they need to extend Zero Trust control to application usage and data.

Isolate sessions without any client software

Cloudflare’s browser isolation technology gives teams the ability to control usage and data without making the user experience miserable. Legacy approaches to browser isolation relied on one of two methods to secure a user on the public Internet:

  • Document Object Model (DOM) manipulation – unpack the webpage, inspect it, hope you caught the vulnerability, attempt to repack the webpage, deliver it. This model leads to thousands of broken webpages and total misses on zero days and other threats.
  • Pixel pushing – stream a browser running far away to the user, like a video. This model leads to user complaints due to performance and a long tail of input incompatibilities.

Cloudflare’s approach is different. We run headless versions of Chromium, the open source project behind Google Chrome, Microsoft Edge, and other browsers, in our data centers around the world. We send the final rendering of the webpage, the draw commands, to a user’s local device.

The user thinks it is just the Internet. Highlighting, right-clicking, videos – they all just work. Users do not need a special browser client. Cloudflare’s technology just works in any browser on mobile or desktop. Security teams, meanwhile, can guarantee that code never executes on devices in the field, stopping zero-day attacks.

We added browser isolation to Cloudflare One to protect against attacks that leap out of a browser from the public Internet. However, controlling the browser also gives us the ability to pass that control along to security and IT departments, so they can focus on another type of risk – data misuse.

As part of this launch, when administrators secure an application with Cloudflare’s Zero Trust access control product, they can click an additional button that will force sessions into our isolated browser.

When the user authenticates, Cloudflare Access checks all the Zero Trust rules configured for a given application. When this isolation feature is enabled, Cloudflare will silently open the session in our isolated browser. The user does not need any special software or to be trained on any unique steps. They just navigate to the application and start doing their work. Behind the scenes, the session runs entirely in Cloudflare’s network.

Control usage and data in sessions

By running the session in Cloudflare’s isolated browser, administrators can begin to build rules that replace some goals of legacy virtual desktop solutions. Some enterprises deploy virtual desktop instances (VDIs) to sandbox application usage. Those VDI platforms extended applications to employees and contractors without allowing the application to run on the physical device.

Employees and contractors tend to hate this method. The client software required is clunky and not available on every operating system. The speed slows them down. Administrators also need to invest time in maintaining the desktops and the virtualization software that power them.

We’re excited to help you replace that point solution, too. Once an application is isolated in Cloudflare’s network, you can toggle additional rules that control how users interact with the resource. For example, you can disable potential data loss vectors like file downloads, printing, or copy-pasting. Add watermarks, both visible and invisible, to audit screenshot leaks.

You can extend this control beyond just data loss. Some teams have sensitive applications where you need users to connect without inputting any data, but they do not have the developer time to build a “Read Only” mode. With Cloudflare One, those teams can toggle “Disable keyboard” and allow users to reach the service while blocking any input.

The isolated solution also integrates with Cloudflare One’s Data Loss Prevention (DLP) suite. With a few additional settings, you can bring comprehensive data control to your applications without any additional engineering work or point solution deployment. If a user strays too far in an application and attempts to download something that contains personal information like social security or credit card numbers, Cloudflare’s network will stop that download while still allowing otherwise approved files.
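
Under the hood, this class of control comes down to pattern matching on content in transit. As a simplified, conceptual sketch, and not Cloudflare’s DLP engine, detecting a credit card number typically combines a format pattern with a Luhn checksum to reduce false positives:

```python
# Conceptual sketch of the kind of check a DLP profile performs on content in
# transit: match a pattern for card-like numbers, then confirm with the Luhn
# checksum to cut down false positives. Not Cloudflare's DLP implementation.
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def luhn_valid(digits: str) -> bool:
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def contains_sensitive_data(text: str) -> bool:
    if SSN_PATTERN.search(text):
        return True
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"\D", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            return True
    return False

print(contains_sensitive_data("card: 4111 1111 1111 1111"))  # True (test number)
print(contains_sensitive_data("order id 1234-5678"))          # False
```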

Extend that control to SaaS applications

Most of the customers we hear from need to bring this level of data and usage control to their self-hosted applications. Many of the SaaS tools they rely on have more advanced role-based rules. However, that is not always the case and, even if the rules exist, they are not as comprehensive as needed and require an administrator to manage a dozen different application settings.

To avoid that hassle you can bring Cloudflare One’s one-click isolation feature to your SaaS applications, too. Cloudflare’s access control solution can be configured as an identity proxy that forces all logins to any SaaS application that supports SSO through Cloudflare’s network, where additional rules, including isolation, can be applied.

What’s next?

Today’s announcement brings together two of our customers’ favorite solutions – our Cloudflare Access solution and our browser isolation technology. Both products are available to use today. You can start building rules that force isolation or control data usage by following the guides linked here.

Willing to wait for the easy button? Join the beta today for the one-click version that we are rolling out to customer accounts.

New ways to troubleshoot Cloudflare Access ‘blocked’ messages

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/403-logs-cloudflare-access/

Cloudflare Access is the industry’s easiest Zero Trust access control solution to deploy and maintain. Users can connect via Access to reach the resources and applications that power your team, all while Cloudflare’s network enforces least privilege rules and accelerates their connectivity.

Enforcing least privilege rules can lead to accidental blocks for legitimate users. Over the past year, we have focused on adding tools to make it easier for security administrators to troubleshoot why legitimate users are denied access. These block reasons were initially limited to users denied access due to information about their identity (e.g. wrong identity provider group, email address not in the Access policy, etc.).

Zero Trust access control extends beyond identity and device. Cloudflare Access allows for rules that enforce how a user connects. These rules can include their location, IP address, the presence of our Secure Web Gateway and other controls.

Starting today, you can investigate those allow or block decisions based on how a connection was made with the same level of ease that you can troubleshoot user identity. We’re excited to help more teams make the migration to a Zero Trust model as easy as possible, and to ensure that ongoing maintenance is significantly reduced compared to their previous private network.

Why was I blocked?

All Zero Trust deployments start and end with identity. In a Zero Trust model, you want your resources (and the network protecting them) to have zero trust by default of any incoming connection or request. Instead, every attempt should have to prove to the network that they should be allowed to connect.

Organizations provide users with a mechanism of proof by integrating their identity provider (IdP) like Azure Active Directory or Okta. With Cloudflare, teams can integrate multiple providers simultaneously to help users connect during activities like mergers or to allow contractors to reach specific resources. Users authenticate with their provider and Cloudflare Access uses that to determine if we should trust a given request or connection.

After integrating identity, most teams start to layer on new controls like device posture. In some cases, and for some teams in every case, the resources are sensitive enough that you want to ensure only approved users connecting from managed, healthy devices can reach them.

While that model significantly improves security, it can also create strain for IT teams managing remote or hybrid workforces. Troubleshooting “why” a user cannot reach a resource becomes a guessing game over chat. Earlier this year, we launched a new tool to tell you exactly why a user’s identity or device posture did not meet the rules that your administrators created.

What about how they connected?

As organizations advance in their Zero Trust journey, they add rules that go beyond the identity of the user or the posture of the device. For example, some teams might have regulatory restrictions that prevent users from accessing sensitive data from certain countries. Other enterprises need to understand the network context before granting access.

These adaptive controls enforce decisions around how a user connects. The user (and their device) might otherwise be allowed, but their current context like location or network prohibits them from doing so. These checks can extend to automated services, too, like a trusted chatbot that uses a service token to connect to your internal ticketing system.

While user and device posture checks require at least one step of authentication, these contextual rules can include checks, such as an IP address check, that a bad actor can retry over and over again without ever authenticating. The attacker will still be denied, but the resulting noise can flood your logs while you attempt to investigate what should be a valid login attempt.

With today’s release, your team can now have the best of both worlds.

However, other checks are not based on a user's identity; these include a device's properties, network context, location, the presence of a certificate, and more. Requests that fail these "non-identity" checks are blocked immediately in order to prevent a malicious user from discovering which identity providers a business uses.

Additionally, these blocks were not logged in order to avoid overloading the Access request logs of an individual account. A malicious user attempting hundreds of requests or a misconfigured API making thousands of requests should not cloud a security admin’s ability to analyze legitimate user Access requests.

These logs would immediately become overloaded if every blocked request were logged. However, we heard from users that in some situations, especially during initial setup, it is helpful to see individual block requests even for non-identity checks.

We have released a GraphQL API that allows Access administrators to look up a specific blocked request by Ray ID, user, or application. The API response returns the full set of properties for the associated request, which makes it much easier to diagnose why a specific request was blocked.
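
As a rough sketch of what such a lookup could look like, the snippet below posts a query to Cloudflare's GraphQL endpoint and filters on a Ray ID. The dataset and field names used here (accessLoginRequestsAdaptive, rayName, and the selected fields) are illustrative assumptions rather than the exact schema; the example guide referenced later in this post documents the real query shape.

// Illustrative only: look up a blocked Access request by Ray ID via the GraphQL API.
// Dataset and field names are assumptions; confirm them against the example guide.
async function lookupBlockedRequest(accountTag, rayId, apiToken) {
  const query = `
    query ($accountTag: string!, $rayId: string!) {
      viewer {
        accounts(filter: { accountTag: $accountTag }) {
          accessLoginRequestsAdaptive(limit: 1, filter: { rayName: $rayId }) {
            allowed
            appDomain
            userEmail
            ipAddress
            country
          }
        }
      }
    }`;

  const response = await fetch('https://api.cloudflare.com/client/v4/graphql', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ query, variables: { accountTag, rayId } }),
  });
  return response.json();
}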

In addition to the GraphQL API, we also improved the user-facing block page to include additional detail about a user's session. This makes it faster for end users and administrators to diagnose why a legitimate user was not allowed access.

How does it work?

Collecting blocked request logs for thousands of Access customers presented an interesting scale challenge. A single application in a single customer account could generate millions of blocked requests in a day; multiply that across all protected applications and all Access customers, and the number of logs gets large quickly.

We were able to leverage our existing analytics pipeline, which was built to handle the scale of our global network, far beyond the scale of Access. The pipeline intelligently begins sampling data if an individual account generates too many requests. The majority of customers will have every non-identity block logged, while accounts generating large traffic volumes will retain a significant portion of their logs, enough to diagnose issues.

How can I get started?

We have built an example guide to use the GraphQL API to diagnose Access block reasons. These logs can be checked manually using a GraphQL API client or periodically ingested into a log storage database.

We know that achieving a Zero Trust Architecture is a journey and a significant part of that is troubleshooting and initial configuration. We are committed to making Cloudflare Zero Trust the easiest Zero Trust solution to troubleshoot and configure at scale. Keep an eye out for additional announcements in the coming months that make Cloudflare Zero Trust even easier to troubleshoot.

If you don’t already have Cloudflare Zero Trust set up, getting started is easy – see the platform yourself with 50 free seats by signing up here.

Or if you would like to talk with a Cloudflare representative about your overall Zero Trust strategy, reach out to us here for a consultation.

For those who already know and love Cloudflare Zero Trust, this feature is enabled for all accounts across all pricing tiers.

Improved Access Control: Domain Scoped Roles are now generally available

Post Syndicated from Garrett Galow original https://blog.cloudflare.com/domain-scoped-roles-ga/

Starting today, with Domain Scoped Roles now generally available, you can scope your users' access to specific domains!

We are making it easier for account owners to manage their team's access to Cloudflare by allowing user access to be scoped to individual domains. Ensuring users have only the access they need, and no more, is critical, and Domain Scoped Roles are a major step in this direction. Additionally, with Domain Groups, account owners can grant users access to a group of domains instead of granting access domain by domain. Domains can be added to or removed from these groups to automatically update the access of everyone who has been granted access to the group. This reduces toil in managing user access.

One of the most common uses we have seen for Domain Scoped Roles is to limit access to production domains to a small set of team members, while still allowing development and pre-production domains to be open to the rest of the team. That way, someone can’t make changes to a production domain unless they are given access.

We are doing a rollout of this functionality across all Enterprise Cloudflare accounts, and you will receive an email when this functionality is enabled for your account.

Any existing access on accounts today will remain the same, with the ability to further scope down access where it makes sense. All of our account-wide roles are still available to assign to users.

How to use Domain Scoped Roles

Once you have Domain Scoped Roles, here is how to start using it:

Log in to dash.cloudflare.com, select your account, and navigate to the members page.

From this page, you can manage your members' permissions. In this case, we will invite a new user; however, you can also modify an existing user's permissions.

After clicking “Invite”, you determine which users to invite; multiple users can be invited at the same time. After selecting users, you select the appropriate scope. Within the scope selection list, three options are available: all domains, a specific domain, and a domain group. Selecting all domains continues to grant account-wide access, and all of our legacy roles are available at this level of scoping. A specific domain or a domain group provides access to our new domain scoped roles. Finally, with a user and a scope selected, a role (or multiple roles) can be selected to grant the appropriate permissions.

Before sending the invite, you will be able to confirm the users, scope, and roles.

Domain Groups

In addition to manually creating inclusion or exclusion lists per user, account owners can also create Domain Groups to grant one or more users access to a group of domains. Domain Groups can be created from the member invite flow or directly from Account Configurations → Lists. When creating a domain group, the user selects the domains to include and, from that point on, the group can be used when inviting a user to the account.

What’s next

We are doing a rollout of this functionality across all Enterprise Cloudflare accounts, and you will receive an email when this functionality is enabled for your account.

Any existing access on accounts today will remain the same, with the ability to further scope access where you decide. All of our account-wide roles are still available to assign to users.

If you are an enterprise customer and interested in getting Domain Scoped Roles sooner, please contact your CSM to get enabled! Otherwise, you will receive an email when your account has this feature enabled.

This announcement represents a step forward in our migration to a new authorization system built for Cloudflare’s scale. This will allow us to expand these capabilities to more products in the future and to create an authorization system that puts customers more in control of their team’s access across all of Cloudflare’s services.

How Cloudflare Security does Zero Trust

Post Syndicated from Noelle Gotthardt original https://blog.cloudflare.com/how-cloudflare-security-does-zero-trust/

Throughout Cloudflare One Week, we provided playbooks on how to replace your legacy appliances with Zero Trust services. Using our own products is part of our team's culture, and we want to share our experience implementing Zero Trust ourselves.

Our journey was similar to that of many of our customers. Not only did we want better security solutions, but the tools we were using made our work more difficult than it needed to be. This started as a search for an alternative to remotely connecting over a clunky VPN, but soon we were deploying Zero Trust solutions to protect our employees' web browsing and email. Next, we are looking forward to upgrading our SaaS security with our new CASB product.

We know that getting started with Zero Trust can seem daunting, so we hope that you can learn from our own journey and see how it benefited us.

Replacing a VPN: launching Cloudflare Access

Back in 2015, all of Cloudflare’s internally-hosted applications were reached via a hardware-based VPN. On-call engineers would fire up a client on their laptop, connect to the VPN, and log on to Grafana. This process was frustrating and slow.

Many of the products we build are a direct result of the challenges our own team is facing, and Access is a perfect example. Launching as an internal project in 2015, Access enabled employees to access internal applications through our identity provider. We started with just one application behind Access with the goal of improving incident response times. Engineers who received a notification on their phones could tap a link and, after authenticating via their browser, would immediately have the access they needed. As soon as people started working with the new authentication flow, they wanted it everywhere. Eventually our security team mandated that we move our apps behind Access, but for a long time it was totally organic: teams were eager to use it.

With authentication occurring at our network edge, we were able to support a globally-distributed workforce without the latency of a VPN, and we were able to do so securely. Moreover, our team is committed to protecting our internal applications with the most secure and usable authentication mechanisms, and two-factor authentication is one of the most important security controls that can be implemented. With Cloudflare Access, we're able to rely on the strong two-factor authentication mechanisms of our identity provider.

Not all second factors of authentication deliver the same level of security. Some methods are still vulnerable to man-in-the-middle (MITM) attacks. These attacks often feature bad actors stealing one-time passwords, commonly through phishing, to gain access to private resources. To eliminate that possibility, we implemented FIDO2-supported security keys. FIDO2 is an authenticator protocol designed to prevent phishing, and we saw it as an improvement over our reliance on soft tokens at the time.

While the implementation of FIDO2 can present compatibility challenges, we were enthusiastic to improve our security posture. Cloudflare Access enabled us to limit access to our systems to FIDO2 only. Cloudflare employees are now required to use their hardware keys to reach our applications. The onboarding of Access was not only a huge win for ease of use; the enforcement of security keys was also a massive improvement to our security posture.

Mitigate threats & prevent data exfiltration: Gateway and Remote Browser Isolation

Deploying secure DNS in our offices

A few years later, in 2020, many customers’ security teams were struggling to extend the controls they had enabled in the office to their remote workers. In response, we launched Cloudflare Gateway, offering customers protection from malware, ransomware, phishing, command & control, shadow IT, and other Internet risks over all ports and protocols. Gateway directs and filters traffic according to the policies implemented by the customer.

Our security team started with Gateway to implement DNS filtering in all of our offices. Since Gateway was built on top of the same network as 1.1.1.1, the world’s fastest DNS resolver, any current or future Cloudflare office will have DNS filtering without incurring additional latency. Each office connects to the nearest data center and is protected.

Deploying secure DNS for our remote users

Cloudflare’s WARP client was also built on top of our 1.1.1.1 DNS resolver. It extends the security and performance offered in offices to remote corporate devices. With the WARP client deployed, corporate devices connect to the nearest Cloudflare data center and are routed to Cloudflare Gateway. By sitting between the corporate device and the Internet, the entire connection from the device is secure, while also offering improved speed and privacy.

We sought to extend secure DNS filtering to our remote workforce and deployed the Cloudflare WARP client to our fleet of endpoint devices. The deployment enabled our security teams to better preserve our privacy by encrypting DNS traffic with DNS over HTTPS (DoH). Meanwhile, Cloudflare Gateway categorizes domains based on Radar, our own threat intelligence platform, enabling us to block high-risk and suspicious domains for users everywhere around the world.

Adding on HTTPS filtering and Browser Isolation

DNS filtering is a valuable security tool, but it is limited to blocking entire domains. Our team wanted a more precise instrument to block only malicious URLs, not the full domain. Since Cloudflare One is an integrated platform, most of the deployment was already complete. All we needed was to add the Cloudflare Root CA to our endpoints and then enable HTTP filtering in the Zero Trust dashboard. With those few simple steps, we were able to implement more granular blocking controls.

In addition to precision blocking, HTTP filtering enables us to implement tenant control. With tenant control, Gateway HTTP policies regulate access to corporate SaaS applications. Policies are implemented using custom HTTP headers. If the custom request header is present and the request is headed to an organizational account, access is granted. If the request header is present and the request goes to a non-organizational account, such as a personal account, the request can be blocked or opened in an isolated browser.

After protecting our users’ traffic at the DNS and HTTP layers, we implemented Browser Isolation. When Browser Isolation is implemented, all browser code executes in the cloud on Cloudflare’s network. This isolates our endpoints from malicious attacks and common data exfiltration techniques. Some remote browser isolation products introduce latency and frustrate users. Cloudflare’s Browser Isolation uses the power of our network to offer a seamless experience for our employees. It quickly improved our security posture without compromising user experience.

Preventing phishing attacks: Onboarding Area 1 email security

Also in early 2020, we saw an uptick in employee-reported phishing attempts. Our cloud-based email provider had strong spam filtering, but it fell short at blocking malicious threats and other advanced attacks. As we experienced increasing phishing attack volume and frequency, we felt it was time to explore more thorough email protection options.

The team looked for four main things in a vendor: the ability to scan email attachments, the ability to analyze suspected malicious links, business email compromise protection, and strong APIs into cloud-native email providers. After testing many vendors, Area 1 became the clear choice to protect our employees. We implemented Area 1’s solution in early 2020, and the results have been fantastic.

Given the overwhelmingly positive response to the product and the desire to build out our Zero Trust portfolio, Cloudflare acquired Area 1 Email Security in April 2022. We are excited to offer the same protections we use to our customers.

What’s next: Getting started with Cloudflare’s CASB

Cloudflare acquired Vectrix in February 2022. Vectrix's CASB offers functionality we are excited to add to Cloudflare One. SaaS security is an increasing concern for many security teams. SaaS tools are storing more and more sensitive corporate data, so misconfigurations and external access can be a significant threat. However, securing these platforms can present a significant resource challenge: manual reviews for misconfigurations or externally shared files are time-consuming, yet necessary, processes for many customers. CASB reduces the burden on teams by scanning SaaS instances against security standards and identifying vulnerabilities with just a few clicks.

We want to ensure we maintain the best practices for SaaS security, and like many of our customers, we have many SaaS applications to secure. We are always seeking opportunities to make our processes more efficient, so we are excited to onboard one of our newest Zero Trust products.

Always striving for improvement

Cloudflare takes pride in deploying and testing our own products. Our security team works directly with Product to “dog food” our own products first. It’s our mission to help build a better Internet — and that means providing valuable feedback from our internal teams. As the number one consumer of Cloudflare’s products, the Security team is not only helping keep the company safer, but also helping build better products for our customers.

We hope you have enjoyed Cloudflare One Week. We really enjoyed sharing our stories with you. To check out our recap of the week, please visit our Cloudflare TV segment.

Verify Apple devices with no installed software

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/private-attestation-token-device-posture/

One of the foundations of Zero Trust is determining if a user’s device is “healthy” — that it has its operating system up-to-date with the latest security patches, that it’s not jailbroken, that it doesn’t have malware installed, and so on. Traditionally, determining this has required installing software directly onto a user’s device.

Earlier this month, Cloudflare participated in the announcement of an open source standard called a Private Attestation Token. Device manufacturers who support the standard can now supply a Private Attestation Token with any request made by one of their devices. On the IT administration side, Private Attestation Tokens mean that security teams can verify a user’s device before they access a sensitive application — without the need to install any software or collect a user’s device data.

At WWDC 2022, Apple announced Private Attestation Tokens. Today, we’re announcing that Cloudflare Access will support verifying a Private Attestation token. This means that security teams that rely on Cloudflare Access can verify a user’s Apple device before they access a sensitive application — no additional software required.

Determining a “healthy” device

There are many solutions on the market that help security teams determine if a device is “healthy” and corporately managed. What the majority of these solutions have in common is that they require software to be installed directly on the user’s machine. This comes with challenges associated with client software including compatibility issues, version management, and end user support. Many companies have dedicated Mobile Device Management (MDM) tools to manage the software installed on employee machines.

MDM is a proven model, but it is also a challenge to manage — taking a dedicated team in many cases. What’s more, installing client or MDM software is not always possible for contractors, vendors or employees using personal machines. Security teams have to resort to VDI or VPN solutions for external users to securely access corporate applications.

How Private Attestation Tokens verify a device

Private Attestation Tokens leverage the Privacy Pass Protocol, which Cloudflare authored with major device manufacturers, to attest to a device’s health and integrity.

In order for Private Attestation Tokens to work, four parties agree to work in concert with a common framework to generate and exchange anonymous, unforgeable tokens. Without all four parties in the process, PATs won’t work.

  1. An Origin. A website, application, or API that receives requests from a client. When a website receives a request to its origin, the origin must know to look for and request a token from the client making the request. For Cloudflare customers, Cloudflare acts as the origin (on behalf of customers) and handles the requesting and processing of tokens.
  2. A Client. Whatever tool the visitor is using to attempt to access the Origin. This will usually be a web browser or mobile application. In our example, let’s say the client is a mobile Safari Browser.
  3. An Attester. The Attester is who the client asks to prove something (i.e that a mobile device has a valid IMEI) before a token can be issued. In our example below, the Attester is Apple, the device vendor.
  4. An Issuer. The issuer is the only one in the process that actually generates, or issues, a token. The Attester makes an API call to whatever Issuer the Origin has chosen to trust,  instructing the Issuer to produce a token. In our case, Cloudflare will also be the Issuer.

We are then able to rely on the attestation from the device manufacturer as a form of validation that a device is in a “healthy” enough state to be allowed access to a sensitive application.

Checking device health without client software

Private Attestation Tokens do not require any additional software to be installed on the user’s device. This is because the “attestation” of device health and validity is attested directly by the device operating system’s manufacturer — in this case, Apple.

This means that a security team can use Cloudflare Access and Private Attestation Tokens to verify if a user is accessing from a “healthy” Apple device before allowing access to a sensitive corporate application. Some checks as part of the attestation include:

  • Is the device on the latest OS version?
  • Is the device jailbroken?
  • Is the window attempting to log in currently in focus?
  • And much more.

We are working with other device manufacturers to expand device support and broaden what is verified as part of the device attestation process. The attributes that are attested will continue to expand over time, which means the device verification in Access will only get stronger.

In the next few months, we will move Private Attestation Token support in Cloudflare Access to a closed beta. The first version will work for iOS devices, and support will expand from there. The only change required will be an updated Access policy; no software will need to be installed. If you would like to be part of the beta program, sign up here today!

How to augment or replace your VPN with Cloudflare

Post Syndicated from Michael Keane original https://blog.cloudflare.com/how-to-augment-or-replace-your-vpn/

“Never trust, always verify.”

Almost everyone we speak to these days understands and agrees with this fundamental principle of Zero Trust. So what’s stopping folks? The biggest gripe we hear: they simply aren’t sure where to start. Security tools and network infrastructure have often been in place for years, and a murky implementation journey involving applications that people rely on to do their work every day can feel intimidating.

While there’s no universal answer, several of our customers have agreed that offloading key applications from their traditional VPN to a cloud-native Zero Trust Network Access (ZTNA) solution like Cloudflare Access is a great place to start—providing an approachable, meaningful upgrade for their business.

In fact, Gartner predicted that “by 2025, at least 70% of new remote access deployments will be served predominantly by ZTNA as opposed to VPN services, up from less than 10% at the end of 2021.”1 By prioritizing a ZTNA project, IT and Security executives can better shield their business from attacks like ransomware while simultaneously improving their employees’ daily workflows. The trade-off between security and user experience is an outmoded view of the world; organizations can truly improve both if they go down the ZTNA route.

You can get started here with Cloudflare Access for free, and in this guide we’ll show you why, and how.

Why nobody likes their VPN

The network-level access and default trust granted by VPNs create avoidable security gaps by inviting the possibility of lateral movement within your network. Attackers may enter your network through a less-sensitive entry point after stealing credentials, and then traverse to find more business-critical information to exploit. In the face of rising attacks, the threat here is too real—and the path to mitigate is too within reach—to ignore.

Meanwhile, VPN performance feels stuck in the 90s… and not in a fun, nostalgic way. Employees suffer through slow and unreliable connections that simply weren’t built for today’s scale of remote access. In the age of the “Great Reshuffle” and the current recruiting landscape, providing subpar experiences for teams based on legacy tech doesn’t have a great ROI. And when IT/security practitioners have plenty of other job opportunities readily available, they may not want to put up with manual, avoidable tasks born from an outdated technology stack. From both security and usability angles, moving toward VPN replacement is well worth the pursuit.

Make least-privilege access the default

Instead of authenticating a user and providing access to everything on your corporate network, a ZTNA implementation or “software-defined perimeter” authorizes access per resource, effectively eliminating the potential for lateral movement. Each access attempt is evaluated against Zero Trust rules based on identity, device posture, geolocation, and other contextual information. Users are continuously re-evaluated as context changes, and all events are logged to help improve visibility across all types of applications.

As co-founder of Udaan, Amod Malviya, noted, “VPNs are frustrating and lead to countless wasted cycles for employees and the IT staff supporting them. Furthermore, conventional VPNs can lull people into a false sense of security. With Cloudflare Access, we have a far more reliable, intuitive, secure solution that operates on a per user, per access basis. I think of it as Authentication 2.0 — even 3.0″.

Better security and user experience haven’t always co-existed, but the fundamental architecture of ZTNA really does improve both compared to legacy VPNs. Whether your users are accessing Office 365 or your custom, on-prem HR app, every login experience is treated the same. With Zero Trust rules being checked behind the scenes, suddenly every app feels like a SaaS app to your end users. Like our friends at OneTrust said when they implemented ZTNA, “employees can connect to the tools they need, so simply teams don’t even know Cloudflare is powering the backend. It just works.”

Assembling a ZTNA project plan

VPNs are so entrenched in an organization’s infrastructure that fully replacing one may take a considerable amount of time, depending on the total number of users and applications served. However, there still is significant business value in making incremental progress. You can migrate away from your VPN at your own pace and let ZTNA and your VPN co-exist for some time, but it is important to at least get started.

Consider which one or two applications behind your VPN would be most valuable for a ZTNA pilot, like one with known complaints or numerous IT support tickets associated with it. Otherwise, consider internal apps that are heavily used or are visited by particularly critical or high-risk users. If you have any upcoming hardware upgrades or license renewals planned for your VPN(s), apps behind the accompanying infrastructure may also be a sensible fit for a modernization pilot.

As you start to plan your project, it’s important to involve the right stakeholders. For your ZTNA pilot, your core team should at minimum involve an identity admin and/or admin who manages internal apps used by employees, plus a network admin who understands your organization’s traffic flow as it relates to your VPN. These perspectives will help to holistically consider the implications of your project rollout, especially if the scope feels dynamic.

Executing a transition plan for a pilot app

Step 1: Connect your internal app to Cloudflare’s network
The Zero Trust dashboard guides you through a few simple steps to set up our app connector, no virtual machines required. Within minutes, you can create a tunnel for your application traffic and route it based on public hostnames or your private network routes. The dashboard will provide a string of commands to copy and paste into your command line to facilitate initial routing configurations. From there, Cloudflare will manage your configuration automatically.
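
As a rough sketch of what those commands typically look like when using the cloudflared connector from the command line (the tunnel name and hostname below are placeholders, and the dashboard may generate a slightly different sequence for your account):

# Authenticate cloudflared and create a named tunnel (names are placeholders)
cloudflared tunnel login
cloudflared tunnel create pilot-app

# Map a hostname to the tunnel, then run the connector; the internal service it
# points at is defined in the dashboard or in a local configuration file
cloudflared tunnel route dns pilot-app pilot-app.example.com
cloudflared tunnel run pilot-app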

A pilot web app may be the most straightforward place to start here, but you can also extend to SSH, VNC, RDP, or internal IPs and hostnames through the same workflow. With your tunnel up and running, you’ve created the means through which your users will securely access your resources and have essentially eliminated the potential for lateral movement within your network. Your application is not visible to the public Internet, significantly reducing your attack surface.

Step 2: Integrate identity and endpoint protection
Cloudflare Access acts as an aggregation layer for your existing security tools. With support for over a dozen identity providers (IdPs) like Okta, Microsoft Azure AD, Ping Identity, or OneLogin, you can link multiple simultaneous IdPs or separate tenants from one IdP. This can be particularly useful for companies undergoing mergers or acquisitions or perhaps going through compliance updates, e.g. incorporating a separate FedRAMP tenant.

In a ZTNA implementation, this linkage lets both tools play to their strengths. The IdP houses user stores and performs the identity authentication check, while Cloudflare Access controls the broader Zero Trust rules that ultimately decide access permissions to a broad range of resources.

Similarly, admins can integrate common endpoint protection providers like Crowdstrike, SentinelOne, Tanium or VMware Carbon Black to incorporate device posture into Zero Trust rulesets. Access decisions can incorporate device posture risk scores for tighter granularity.

You might find shortcut approaches to this step if you plan on using simpler authentication like one-time pins or social identity providers with external users like partners or contractors. As you mature your ZTNA rollout, you can incorporate additional IdPs or endpoint protection providers at any time without altering your fundamental setup. Each integration only adds to your source list of contextual signals at your disposal.

Step 3: Configure Zero Trust rules
Depending on your assurance levels for each app, you can customize your Zero Trust policies to appropriately restrict access to authorized users using contextual signals. For example, a low-risk app may simply require email addresses ending in “@company.com” and a successful SMS or email multifactor authentication (MFA) prompt. Higher risk apps could require hard token MFA specifically, plus a device posture check or other custom validation check using external APIs.

MFA in particular can be difficult to implement with legacy on-prem apps natively using traditional single sign-on tools. Using Cloudflare Access as a reverse proxy helps provide an aggregation layer to simplify rollout of MFA to all your resources, no matter where they live.

Step 4: Test clientless access right away
After connecting an app to Cloudflare and configuring your desired level of authorization rules, end users in most cases can test web, SSH, or VNC access without using a device client. With no downloads or mobile device management (MDM) rollouts required, this can help accelerate ZTNA adoption for key apps and be particularly useful for enabling third-party access.

Note that a device client can still be used to unlock other use cases like protecting SMB or thick client applications, verifying device posture, or enabling private routing. Cloudflare Access can handle any arbitrary L4-7 TCP or UDP traffic, and through bridges to WAN-as-a-service it can offload VPN use cases like ICMP or server-to-client initiated protocol traffic like VoIP as well.

At this stage for the pilot app, you are up and running with ZTNA! Top priority apps can be offloaded from your VPN one at a time at any pace that feels comfortable to help modernize your access security. Still, augmenting and fully replacing a VPN are two very different things.

Moving toward full VPN replacement

While a few top resource candidates for VPN offloading might be clear for your company, the total scope could be overwhelming, with potentially thousands of internal IPs and domains to consider. You can configure the local domain fallback entries within Cloudflare Access to point to your internal DNS resolver for selected internal hostnames. This can help you more efficiently disseminate access to resources made available over your Intranet.

It can also be difficult for admins to granularly understand the full reach of their current VPN usage. Potential visibility issues aside, the full scope of applications and users may be in dynamic flux especially at large organizations. You can use the private network discovery report within Cloudflare Access to passively vet the state of traffic on your network over time. For discovered apps requiring more protection, Access workflows help you tighten Zero Trust rules as needed.

Both of these capabilities can help reduce anxiety around fully retiring a VPN. By starting to build your private network on top of Cloudflare’s network, you’re bringing your organization closer to achieving Zero Trust security.

The business impact our customers are seeing

Offloading applications from your VPN and moving toward ZTNA can have measurable benefits for your business even in the short term. Many of our customers speak to improvements in their IT team’s efficiency, onboarding new employees faster and spending less time on access-related help tickets. For example, after implementing Cloudflare Access, eTeacher Group reduced its employee onboarding time by 60%, helping all teams get up to speed faster.

Even if you plan to co-exist with your VPN alongside a slower modernization cadence, you can still track IT tickets for the specific apps you’ve transitioned to ZTNA to help quantify the impact. Are overall ticket numbers down? Did time to resolve decrease? Over time, you can also partner with HR for qualitative feedback through employee engagement surveys. Are employees feeling empowered with their current toolset? Do they feel their productivity has improved or complaints have been addressed?

Of course, improvements to security posture also help mitigate the risk of expensive data breaches and their lingering, damaging effects to brand reputation. Pinpointing narrow cause-and-effect relationships for the cost benefits of each small improvement may feel more art than science here, with too many variables to count. Still, reducing reliance on your VPN is a great step toward reducing your attack surface and contributes to your macro return on investment, however long your full Zero Trust journey may last.

Start the clock toward replacing your VPN

Our obsession with product simplicity has helped many of our customers sunset their VPNs already, and we can’t wait to do more.

You can get started here with Cloudflare Access for free to begin augmenting your VPN. Follow the steps outlined above with your prioritized ZTNA test cases, and for a sense of broader timing you can create your own Zero Trust roadmap as well to figure out what project should come next.

For a full summary of Cloudflare One Week and what’s new, tune in to our recap webinar.

___

1Nat Smith, Mark Wah, Christian Canales. (2022, April 08). Emerging Technologies: Adoption Growth Insights for Zero Trust Network Access. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

Infinitely extensible Access policies

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/access-external-validation-rules/

Zero Trust application security means that every request to an application is denied unless it passes a specific set of defined security policies. Most Zero Trust solutions allow the use of a user’s identity, device, and location as variables to define these security policies.

We heard from customers that they wanted more control and more customizability in defining their Zero Trust policies.

Starting today, we’re excited that Access policies can consider anything before allowing a user access to an application. And by anything, we really do mean absolutely anything. You can now build infinitely customizable policies through the External Evaluation rule option, which allows you to call any API during the evaluation of an Access policy.

Why we built external evaluation rules

Over the past few years, we added the ability to check location and device posture information in Access. However, there are always additional signals that could be considered, depending on the application and the specific requirements of an organization. We set out to give customers the ability to check whatever signal they require, even when there is no direct support for it in Access policies.

The Cloudflare security team, as an example, needed the ability to verify a user’s mTLS certificate against a registry to ensure applications can only be accessed by the right user from a corporate device. Originally, they considered using a Worker to check the user’s certificate after Access evaluated the request. However, this was going to take custom software development and maintenance over time. With External Evaluation rules, an API call can be made to verify whether a user is presenting the correct certificate for their device. The API call is made to a Worker that stores the mapping of mTLS certificates and user devices. The Worker executes the custom logic and then returns a true or false to Access.
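
As a hedged sketch of that pattern (not the security team's actual code), the Worker backing an External Evaluation rule might keep the certificate-to-device mapping in Workers KV and compare it against the claims Access forwards. The claim and binding names below are assumptions, and a real implementation must also verify the signed request from Access and sign its response, which the open-source example repository mentioned later in this post handles.

// Hedged sketch only: a production Worker must verify and sign JWTs as shown in
// Cloudflare's example repository. CERT_REGISTRY is an assumed KV namespace that
// maps mTLS certificate serial numbers to the email of the device's owner.
export default {
  async fetch(request, env) {
    const claims = await request.json(); // identity, device and network details from Access
    const certSerial = claims?.identity?.mtls_cert_serial; // assumed claim name

    // Look up which user the presented certificate is registered to
    const expectedOwner = certSerial ? await env.CERT_REGISTRY.get(certSerial) : null;

    // Pass only when the certificate belongs to the authenticated user
    const success = expectedOwner !== null && expectedOwner === claims?.identity?.email;
    return Response.json({ success });
  },
};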

How it works

Cloudflare Access is a reverse proxy in front of any web application. If a user has not yet authenticated, they will be presented with a login screen to authenticate. The user must meet the criteria defined in your Access policy. A typical policy would look something like:

  • The user’s email ends in @example.com
  • The user authenticated with a hardware based token
  • The user logged in from the United States

If the user passes the policy, they are granted a cookie that will give them access to the application until their session expires.

To evaluate the user on other custom criteria, you can add an external evaluation rule to the Access policy. The external evaluation rule requires two values: an API endpoint to call and a key used to verify that responses are coming from a trusted source.

After the user authenticates with your identity provider, all information about the user, device and location is passed to your external API. The API returns a pass or fail response to Access which will then either allow or deny access to the user.

Example logic for the API would look like this:

/**
 * Where your business logic should go
 * @param {*} claims
 * @returns boolean
 */
async function externalEvaluation(claims) {
  return claims.identity.email === 'user@example.com'
}

Here, the claims object contains all the information about the user, device, and network making the request. The externalEvaluation function can be extended to perform any desired business logic. We have made an open-source repository available with example code for consuming the Access claims and verifying the signing keys from Access.

This is really powerful! Any Access policy can now be infinitely extended to consider any information before allowing a user access. Potential examples include:

  • Integrating with endpoint protection tools we don’t yet integrate with by building a middleware that checks the endpoint protection tool’s API.
  • Checking IP addresses against external threat feeds
  • Calling industry-specific user registries
  • And much more!

We’re just getting started with extending Access policies. In the future we’ll make it easier to programmatically decide how a user should be treated before accessing an application, not just allow or deny access.

This feature is available in the Cloudflare Zero Trust dashboard today. Follow this guide to get started!

Clientless Web Isolation is now generally available

Post Syndicated from Tim Obezuk original https://blog.cloudflare.com/clientless-web-isolation-general-availability/

Today, we’re excited to announce that Clientless Web Isolation is generally available: a new on-ramp for Browser Isolation that natively integrates Zero Trust Network Access (ZTNA) with the zero-day, phishing, and data-loss protection benefits of remote browsing for users on any device browsing any website, internal app, or SaaS application. All without needing to install any software or configure any certificates on the endpoint device.

Cloudflare’s clientless web isolation simplifies connections to remote browsers through a hyperlink (e.g.: https://<your-auth-domain>.cloudflareaccess.com/browser). We explored use cases in detail in our beta announcement post, but here’s a quick refresher on the use cases that clientless isolated browsing enables:

Share secure browsing across the entire team on any device

Simply navigating to Clientless Web Isolation will land your user, such as an analyst or researcher, in a remote browser, ready to securely conduct their research or investigation without exposing their public IP or device to potentially malicious code on the target website.

Suspicious hyperlinks and PDF documents from sensitive applications can be opened in a remote browser by rewriting the link with the clientless endpoint. For example:

https://<authdomain>.cloudflareaccess.com/browser/https://www.example.com/suspiciouslink

This is powerful when integrated into a security incident monitoring tool, help desk or any tool where users are clicking unknown or untrusted hyperlinks.
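
For example, a ticketing or alerting tool could rewrite outbound links before presenting them to analysts. A minimal sketch, assuming your own Cloudflare Access team domain in place of the placeholder below:

// Minimal sketch: prefix untrusted links with the clientless isolation endpoint.
// 'your-team-name' is a placeholder for your own Cloudflare Access auth domain.
const ISOLATION_PREFIX = 'https://your-team-name.cloudflareaccess.com/browser/';

function isolateLink(untrustedUrl) {
  // The returned link opens the target inside a remote, isolated browser
  return ISOLATION_PREFIX + untrustedUrl;
}

console.log(isolateLink('https://www.example.com/suspiciouslink'));
// => https://your-team-name.cloudflareaccess.com/browser/https://www.example.com/suspiciouslink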

Integrate Browser Isolation with a third-party secure web gateway

Browser Isolation can be integrated with a legacy secure web gateway through the use of a redirecting custom block page. Integrating Browser Isolation with your existing secure web gateway enables safe browsing without the support burden of micromanaging block lists.

See our developer documentation for example block pages.

Securely access sensitive data on BYOD endpoints

In an ideal world, users would always access sensitive data from corporate devices. Unfortunately, that is not always possible or feasible: contractors, by definition, rely on non-corporate devices, employees may not be able to take their device home, a device may be unavailable due to a disaster, or people may travel to high-risk areas without their managed machine.

Historically IT departments have worked around this by adopting legacy Virtual Desktop Infrastructure (VDI). This made sense a decade ago when most business applications were desktop applications. Today this architecture makes little sense when most business applications live in the browser. VDI is a tremendously expensive method to deliver BYOD support and still requires complex network administration to connect with DNS filtering and Secure Web Gateways.

All traffic from Browser Isolation to the Internet or an Access protected application is secured and inspected by the Secure Web Gateway out of the box. It only takes a few clicks to require Gateway device posture checks for users connecting over Clientless Web Isolation.

Get started

Clientless web isolation is available as a capability for all Cloudflare Zero Trust subscribers who have added Browser Isolation to their plan. If you are interested in learning more about use cases see the beta announcement post and our developer documentation.

Guest Blog: k8s tunnels with Kudelski Security

Post Syndicated from Guest Author original https://blog.cloudflare.com/guest-blog-zero-trust-access-kubernetes/

Today, we’re excited to publish a blog post written by our friends at Kudelski Security, a managed security services provider. A few weeks back, Romain Aviolat, Principal Cloud and Security Engineer at Kudelski Security, approached our Zero Trust team with a unique solution to a difficult problem, powered by Cloudflare’s identity-aware proxy, which we call Cloudflare Tunnel, to ensure secure application access in remote working environments.

We enjoyed learning about their solution so much that we wanted to amplify their story. In particular, we appreciated how Kudelski Security’s engineers took full advantage of the flexibility and scalability of our technology to automate workflows for their end users. If you’re interested in learning more about Kudelski Security, check out their work below or their research blog.

Zero Trust Access to Kubernetes

Over the past few years, Kudelski Security’s engineering team has prioritized migrating our infrastructure to multi-cloud environments. Our internal cloud migration mirrors what our end clients are pursuing and has equipped us with expertise and tooling to enhance our services for them. Moreover, this transition has provided us an opportunity to reimagine our own security approach and embrace the best practices of Zero Trust.

So far, one of the most challenging facets of our Zero Trust adoption has been securing access to our different Kubernetes (K8s) control-plane (APIs) across multiple cloud environments. Initially, our infrastructure team struggled to gain visibility and apply consistent, identity-based controls to the different APIs associated with different K8s clusters. Additionally, when interacting with these APIs, our developers were often left blind as to which clusters they needed to access and how to do so.

To address these frictions, we designed an in-house solution leveraging Cloudflare to automate how developers could securely authenticate to K8s clusters sitting across public cloud and on-premise environments. Specifically, for a given developer, we can now surface all the K8s services they have access to in a given cloud environment, authenticate an access request using Cloudflare’s Zero Trust rules, and establish a connection to that cluster via Cloudflare’s Identity-aware proxy, Cloudflare Tunnel.

Most importantly, this automation tool has enabled Kudelski Security as an organization to enhance our security posture and improve our developer experience at the same time. We estimate that this tool saves a new developer at least two hours of time otherwise spent reading documentation, submitting IT service tickets, and manually deploying and configuring the different tools needed to access different K8s clusters.

In this blog, we detail the specific pain points we addressed, how we designed our automation tool, and how Cloudflare helped us progress on our Zero Trust journey in a work-from-home friendly way.

Challenges securing multi-cloud environments

As Kudelski Security has expanded our client services and internal development teams, we have inherently expanded our footprint of applications within multiple K8s clusters and multiple cloud providers. For our infrastructure engineers and developers, the K8s cluster API is a crucial entry point for troubleshooting. We work in GitOps and all our application deployments are automated, but we still frequently need to connect to a cluster to pull logs or debug an issue.

However, maintaining this diversity creates complexity and pressure for infrastructure administrators. For end users, sprawling infrastructure can translate to different credentials, different access tools for each cluster, and different configuration files to keep track of.

Such a complex access experience can make real-time troubleshooting particularly painful. For example, on-call engineers trying to make sense of an unfamiliar K8s environment may dig through dense documentation or be forced to wake up other colleagues to ask a simple question. All this is error-prone and a waste of precious time.

Common, traditional approaches to securing access to K8s APIs presented challenges we knew we wanted to avoid. For example, we felt that exposing the API to the public Internet would inherently increase our attack surface; that's a risk we couldn't afford. Moreover, we did not want to provide broad-based access to our clusters' APIs via our internal networks and accept the risks of lateral movement. As Kudelski continues to grow, the operational costs and complexity of deploying VPNs across our workforce and different cloud environments would lead to scaling challenges as well.

Instead, we wanted an approach that would allow us to maintain small, micro-segmented environments, small failure domains, and no more than one way to give access to a service.

Leveraging Cloudflare’s Identity-aware Proxy for Zero Trust access

To do this, Kudelski Security’s engineering team opted for a more modern approach: creating connections between users and each of our K8s clusters via an Identity-aware proxy (IAP). IAPs are flexible to deploy and add an additional layer of security in front of our applications by verifying the identity of a user when an access request is made. Further, they support our Zero Trust approach by creating connections from users to individual applications — not entire networks.

Each cluster has its own IAP and its own sets of policies, which check for identity (via our corporate SSO) and other contextual factors like the device posture of a developer’s laptop. The IAP doesn’t replace the K8s cluster authentication mechanism, it adds a new one on top of it, and thanks to identity federation and SSO this process is completely transparent for our end users.

In our setup, Kudelski Security is using Cloudflare’s IAPs as a component of Cloudflare Access — a ZTNA solution and one of several security services unified by Cloudflare’s Zero Trust platform.

For many web-based apps, IAPs help create a frictionless experience for end users requesting access via a browser. Users authenticate via their corporate SSO or identity provider before reaching the secured app, while the IAP works in the background.

That user flow looks different for CLI-based applications because we cannot redirect CLI network flows the way we do in a browser. In our case, our engineers want to use their favorite CLI-based K8s clients, like kubectl or k9s. This means our Cloudflare IAP needs to act as a SOCKS5 proxy between the CLI client and each K8s cluster.

To create this IAP connection, Cloudflare provides a lightweight server-side daemon called cloudflared that connects infrastructure with applications. This encrypted connection runs on Cloudflare’s global network where Zero Trust policies are applied with single-pass inspection.

Without any automation, however, Kudelski Security’s infrastructure team would need to distribute the daemon on end user devices, provide guidance on how to set up those encrypted connections, and take other manual, hands-on configuration steps and maintain them over time. Plus, developers would still lack a single pane of visibility across the different K8s clusters that they would need to access in their regular work.

Our automated solution: k8s-tunnels!

To solve these challenges, our infrastructure engineering team developed an internal tool — called ‘k8s-tunnels’ — that embeds the complex configuration steps and makes life easier for our developers. Moreover, the tool automatically discovers all the K8s clusters that a given user has access to based on the Zero Trust policies created. To enable this functionality, we embedded the SDKs of some major public cloud providers that Kudelski Security uses. The tool also embeds the cloudflared daemon, meaning that we only need to distribute a single tool to our users.

Altogether, a developer who launches the tool goes through the following workflow (we assume that the user already has valid credentials; otherwise the tool would open a browser on our IdP to obtain them):

1. The user selects one or more clusters to connect to

2. k8s-tunnel will automatically open the connection with Cloudflare and expose a local SOCKS5 proxy on the developer machine

3. k8s-tunnel amends the user's local Kubernetes client configuration by pushing the necessary information to route requests through the local SOCKS5 proxy (see the sketch after this list)

4. k8s-tunnel switches the Kubernetes client context to the current connection

5. The user can now use their favorite CLI client to access the K8s cluster
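
What the tool pushes into the client configuration is, in effect, the standard Kubernetes SOCKS5 proxy setting. A hand-written equivalent might look like the kubeconfig fragment below; the cluster name, API server address, and proxy port are placeholders, not values taken from Kudelski Security's environment.

# Illustrative kubeconfig fragment: send API traffic through the local SOCKS5 proxy
# that k8s-tunnels exposes (the port shown here is a placeholder).
apiVersion: v1
kind: Config
clusters:
- name: example-cluster
  cluster:
    server: https://k8s-api.internal.example.com:6443
    proxy-url: socks5://localhost:1080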

The whole process is really straightforward and is being used on a daily basis by our engineering team. And, of course, all this magic is made possible through the auto-discovery mechanism we’ve built into k8s-tunnels. Whenever new engineers join our team, we simply ask them to launch the auto-discovery process and get started.

Here is an example of the auto-discovery process in action.

  1. k8s-tunnels will connect to our different cloud providers' APIs and list the K8s clusters the user has access to
  2. k8s-tunnels will maintain a local config file of those clusters on the user's machine so this process does not need to be run more than once

Automation enhancements

For on-premises deployments, it was a bit trickier, as we didn't have a simple way to store K8s cluster metadata the way we do with resource tags on public cloud providers. We decided to use Vault as a key-value store to mimic public-cloud resource tags for on-prem. This way we can achieve auto-discovery of on-prem clusters following the same process as with a public-cloud provider.
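
As an illustration, reading those "tags" back is a single call against Vault's KV version 2 HTTP API. The secret path and field layout below are assumptions for the sketch, not our production values:

// listOnPremClusters.ts (hypothetical): read per-cluster metadata from a Vault KV v2 path
async function listOnPremClusters(vaultAddr: string, vaultToken: string) {
  const res = await fetch(`${vaultAddr}/v1/secret/data/k8s-tunnels/clusters`, {
    headers: { "X-Vault-Token": vaultToken },
  });
  // KV v2 wraps the payload twice: { data: { data: { ... }, metadata: { ... } } }
  const body: any = await res.json();
  // e.g. { "prod-cluster": { "hostname": "k8s-prod.example.com" }, ... }
  return body.data.data;
}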

You may have noticed in the previous CLI screenshot that the user can select multiple clusters at the same time! We quickly realized that our developers often needed to access multiple environments at once, for example to compare a workload running in production and in staging. So instead of opening and closing tunnels every time they needed to switch clusters, we designed the tool so that they can simply open multiple tunnels in parallel within a single k8s-tunnels instance and just switch the destination K8s cluster on their laptop.

Last but not least, we’ve also added the support for favorites and notifications on new releases, leveraging Cloudflare Workers, but that’s for another blog post.

What’s Next

In designing this tool, we’ve identified a couple of issues inside Kubernetes client libraries when used in conjunction with SOCKS5 proxies, and we’re working with the Kubernetes community to fix those issues, so everybody should benefit from those patches in the near future.

With this blog post, we wanted to highlight how it is possible to apply Zero Trust security to complex workloads running in multi-cloud environments, while simultaneously improving the end-user experience.

Although today our ‘k8s-tunnels’ code is too specific to Kudelski Security, our goal is to share what we’ve created back with the Kubernetes community, so that other organizations and Cloudflare customers can benefit from it.

Introducing Clientless Web Isolation

Post Syndicated from Tim Obezuk original https://blog.cloudflare.com/introducing-clientless-web-isolation-beta/


Today, we’re excited to announce the beta for Cloudflare’s clientless web isolation. A new on-ramp for Browser Isolation that natively integrates Zero Trust Network Access (ZTNA) with the zero-day, phishing and data-loss protection benefits of remote browsing for users on any device browsing any website, internal app or SaaS application. All without needing to install any software or configure any certificates on the endpoint device.

Secure access for managed and unmanaged devices

In early 2021, Cloudflare announced the general availability of Browser Isolation, a fast and secure remote browser that natively integrates with Cloudflare’s Zero Trust platform. This platform — also known as Cloudflare for Teams — combines secure Internet access with our Secure Web Gateway solution (Gateway) and secure application access with a ZTNA solution (Access).

Typically, admins deploy Browser Isolation by rolling out Cloudflare's device client on endpoints, so that Cloudflare can serve as a secure DNS and HTTPS Internet proxy. This model protects users and sensitive applications when the administrator manages their team's devices, and the experience feels as frictionless as a local browser: end users are hardly aware that they are actually browsing on a secure machine running in a Cloudflare data center near them.

However, managing endpoint clients can add configuration overhead for users on unmanaged devices, or for contractors on devices managed by third-party organizations.

Cloudflare’s clientless web isolation streamlines connections to remote browsers through a hyperlink (e.g.: https://<your-auth-domain>.cloudflareaccess.com/browser). Once users are authenticated through any of Cloudflare Access’s supported identity providers, the user’s browser uses HTML5 to establish a low-latency connection to a remote browser hosted in a nearby Cloudflare data center without installing any software. There are no servers to manage and scale, or regions to configure.

The simple act of clicking a link in an email or on a website causes your browser to download and execute payloads of active web content, which can exploit unknown zero-day vulnerabilities and compromise an endpoint.

Cloudflare’s clientless web isolation can be initiated through a prefixed URL (e.g., https://<your-auth-domain>.cloudflareaccess.com/browser/https://www.example.com). Simply configuring your custom block page, email gateway, or ticketing tool to prefix high-risk links with Browser Isolation will automatically send high risk clicks to a remote browser, protecting the endpoint from any malicious code that may be present on the target link.


Here at Cloudflare, we use Cloudflare's products to protect Cloudflare, and in fact we use this clientless web isolation approach for our own security investigation activities. By prefixing high-risk links with our auth domain, our security team is able to safely investigate potentially malicious websites and phishing sites.

No risky code ever reaches an employee device, and at the end of their investigation, the remote browser is terminated and reset to a known clean state for their next investigation.

Integrated Zero Trust access and remote browsing

The time when corporate data was accessed only from managed devices inside controlled networks has long since passed. Enterprises relying on strict device posture controls to verify that application access occurs only from managed devices have had few tools to support contractor or BYOD workforces. Historically, administrators have worked around the issue by deploying costly, resource-intensive Virtual Desktop Infrastructure (VDI) environments.

Moreover, when it comes to securing application access, Cloudflare Access excels in applying least-privilege, default-deny policies to web-based applications, without needing to install any client software on user devices.

Cloudflare’s clientless web isolation augments ZTNA use cases, allowing applications protected by Access and Gateway to leverage Browser Isolation’s data protection controls such as local printing control, clipboard and file upload / download restrictions to prevent sensitive data from transferring onto unmanaged devices.

Isolated links can be added to the Access app launcher as bookmarks, allowing your team and contractors to access any site with one click.


Finally, just because a remote browser reduces the impact of a compromise doesn't mean it should have unmanaged access to the Internet. All traffic from the remote browser to the target website is secured, inspected, and logged by Cloudflare's SWG solution (Gateway), ensuring that known threats are filtered through HTTP policies and anti-virus scanning.

Join the clientless web isolation beta

Clientless web isolation will be available as a capability to Cloudflare for Teams subscribers who have added Browser Isolation to their plan. We’ll be opening Cloudflare’s clientless web isolation for beta access soon. If you’re interested in participating, sign up here to be the first to hear from us.

We’re excited about the secure browsing and application access use cases for our clientless web isolation model. Now, teams of any size, can deliver seamless Zero Trust connectivity to unmanaged devices anywhere in the world.

Build your next video application on Cloudflare

Post Syndicated from Jonathan Kuperman original https://blog.cloudflare.com/build-video-applications-cloudflare/


Historically, building video applications has been very difficult. There’s a lot of complicated tech behind recording, encoding, and playing videos. Luckily, Cloudflare Stream abstracts all the difficult parts away, so you can build custom video and streaming applications easily. Let’s look at how we can combine Cloudflare Stream, Access, Pages, and Workers to create a high-performance video application with very little code.

Today, we’re going to build a video application inspired by Cloudflare TV. We’ll have user authentication and the ability for administrators to upload recorded videos or livestream new content. Think about being able to build your own YouTube or Twitch using Cloudflare services!

Fetching a list of videos

On the main page of our application, we want to display a list of all videos. The videos are uploaded and stored with Cloudflare Stream, but more on that later! This code could be changed to display only the “trending” videos or a selection of videos chosen for each user. For now, we’ll use the search API and pass in an empty string to return all videos.

import { getSignedStreamId } from "../../src/cfStream"

export async function onRequestGet(context) {
    const {
        request,
        env,
        params,
    } = context

    const { id } = params

    if (id) {
        const res = await fetch(`https://api.cloudflare.com/client/v4/accounts/${env.CF_ACCOUNT_ID}/stream/${id}`, {
            method: "GET",
            headers: {
                "Authorization": `Bearer ${env.CF_API_TOKEN_STREAM}`
            }
        })

        const video = (await res.json()).result

        if (video.meta.visibility !== "public") {
            return new Response(null, {status: 401})
        }

        const signedId = await getSignedStreamId(id, env.CF_STREAM_SIGNING_KEY)

        return new Response(JSON.stringify({
            signedId: `${signedId}`
        }), {
            headers: {
                "content-type": "application/json"
            }
        })
    } else {
        const url = new URL(request.url)
        const res = await (await fetch(`https://api.cloudflare.com/client/v4/accounts/${env.CF_ACCOUNT_ID}/stream?search=${url.searchParams.get("search") || ""}`, {
            headers: {
                "Authorization": `Bearer ${env.CF_API_TOKEN_STREAM}`
            }
        })).json()

        const filteredVideos = res.result.filter(x => x.meta.visibility === "public") 
        const videos = await Promise.all(filteredVideos.map(async x => {
            const signedId = await getSignedStreamId(x.uid, env.CF_STREAM_SIGNING_KEY)
            return {
                uid: x.uid,
                status: x.status,
                thumbnail: `https://videodelivery.net/${signedId}/thumbnails/thumbnail.jpg`,
                meta: {
                    name: x.meta.name
                },
                created: x.created,
                modified: x.modified,
                duration: x.duration,
            }
        }))
        return new Response(JSON.stringify(videos), {headers: {"content-type": "application/json"}})
    }
}

We’ll go through each video, filter out any private videos, and pull out the metadata we need, such as the thumbnail URL, ID, and created date.

Playing the videos

To allow users to play videos from your application, they need to be public, or you’ll have to sign each request. Marking your videos as public makes this process easier. However, there are many reasons you might want to control access to your videos. If you want users to log in before they play them or the ability to limit access in any way, mark them as private and use signed URLs to control access. You can find more information about securing your videos here.

If you are testing your application locally or expect to have fewer than 10,000 requests per day, you can call the /token endpoint to generate a signed token. If you expect more than 10,000 requests per day, sign your own tokens as we do here using JSON Web Tokens.
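
For the low-volume case, a hedged sketch of calling the /token endpoint could look like the following; the environment variable names follow the other snippets in this post, and the endpoint is the Stream signed-token API:

// getSignedToken.js (illustrative): ask Stream to mint a short-lived signed token for a private video
export async function getSignedToken(videoId, env) {
    const res = await fetch(`https://api.cloudflare.com/client/v4/accounts/${env.CF_ACCOUNT_ID}/stream/${videoId}/token`, {
        method: "POST",
        headers: {
            "Authorization": `Bearer ${env.CF_API_TOKEN_STREAM}`
        }
    })

    const { result } = await res.json()

    // result.token is used in place of the video UID in playback and thumbnail URLs.
    return result.token
}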

Allowing users to upload videos

The next step is to build out an admin page where users can upload their videos. You can find documentation on allowing user uploads here.

This process is made easy with the Cloudflare Stream API. You use your API token and account ID to generate a unique, one-time upload URL. Just make sure your token has the Stream:Edit permission. We hook into all POST requests from our application and return the generated upload URL.
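
A hedged sketch of that POST handler is shown below. It uses the Stream direct upload endpoint, and the file path mirrors the /api/admin/videos route that the admin page calls; the exact handler in the demo repository may differ:

// ./functions/api/admin/videos.js (illustrative): generate a one-time upload URL
export async function onRequestPost(context) {
    const { env } = context

    const res = await fetch(`https://api.cloudflare.com/client/v4/accounts/${env.CF_ACCOUNT_ID}/stream/direct_upload`, {
        method: "POST",
        headers: {
            "Authorization": `Bearer ${env.CF_API_TOKEN_STREAM}`
        },
        body: JSON.stringify({ maxDurationSeconds: 3600 }) // cap uploads at one hour
    })

    const { result } = await res.json()

    // result.uploadURL is what the admin page posts its FormData to.
    return new Response(JSON.stringify(result), {
        headers: { "content-type": "application/json" }
    })
}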


The admin page contains a form allowing users to drag and drop or upload videos from their computers. When a logged-in user hits submit on the upload form, the application generates a unique URL and then posts the FormData to it. This code would work well for building a video sharing site or with any application that allows user-generated content.

async function getOneTimeUploadUrl() {
    const res = await fetch('/api/admin/videos', {method: 'POST', headers: {'accept': 'application/json'}})
    const upload = await res.json()
    return upload.uploadURL
}

async function uploadVideo() {
    const videoInput = document.getElementById("video");

    const oneTimeUploadUrl = await getOneTimeUploadUrl();
    const video = videoInput.files[0];
    const formData = new FormData();
    formData.append("file", video);

    const uploadResult = await fetch(oneTimeUploadUrl, {
        method: "POST",
        body: formData,
    })
}

Adding real time video with Stream Live

You can add a livestreaming section to your application as well, using Stream Live in conjunction with the techniques we’ve already covered.  You could allow logged-in users to start a broadcast and then allow other logged-in users, or even the public, to watch it in real-time! The streams will automatically save to your account, so they can be viewed immediately after the broadcast finishes in the main section of your application.
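
As a sketch of the server side, provisioning a live input for a broadcaster is a single call to the Stream Live API; the environment variable names follow the rest of this post, and the exact fields you send may differ:

// createLiveInput.js (illustrative): provision a Stream Live input for a broadcaster
export async function createLiveInput(env) {
    const res = await fetch(`https://api.cloudflare.com/client/v4/accounts/${env.CF_ACCOUNT_ID}/stream/live_inputs`, {
        method: "POST",
        headers: {
            "Authorization": `Bearer ${env.CF_API_TOKEN_STREAM}`
        },
        body: JSON.stringify({
            meta: { name: "My broadcast" },
            recording: { mode: "automatic" } // recordings are saved, so they appear alongside the other videos
        })
    })

    const { result } = await res.json()

    // result.rtmps.url and result.rtmps.streamKey are handed to the broadcaster's encoder (e.g. OBS).
    return result
}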

Securing our app with middleware

We put all authenticated pages behind this middleware function. It checks the request headers to make sure the user is sending a valid authenticated user email.

export const cfTeamsAccessAuthMiddleware = async ({request, data, env, next}) => {
    try {
        const userEmail = request.headers.get("cf-access-authenticated-user-email")

        if (!userEmail) {
            throw new Error("User not found, make sure application is behind Cloudflare Access")
        }
  
        // Pass user info to next handlers
        data.user = {
            email: userEmail
        }
  
        return next()
    } catch (e) {
        return new Response(e.toString(), {status: 401})
    }
}

export const onRequest = [
    cfTeamsAccessAuthMiddleware
]

Putting it all together with Pages

We have Cloudflare Access controlling our log-in flow. We use the Stream APIs to manage uploading, displaying, and watching videos. We use Workers for managing fetch requests and handling API calls. Now it’s time to tie it all together using Cloudflare Pages!

Pages provides an easy way to deploy and host static websites. But now, Pages seamlessly integrates with the Workers platform. With this new integration, we can deploy this entire application with a single, readable repository.

Controlling access

Some applications are meant to be public; others contain sensitive data and should be restricted to specific users. The main page is public for this application, and we’ve used Cloudflare Access to limit the admin page to employees. You could just as easily use Access to protect the entire application if you’re building an internal learning service, or even if you want to beta launch a new site!

When a user clicks the admin link on our demo site, they will be prompted for an email address. If they enter a valid Cloudflare email, the application will send them an access code. Otherwise, they won’t be able to access that page.

Check out the source code and get started building your own video application today!

Building a full stack application with Cloudflare Pages

Post Syndicated from Greg Brimble original https://blog.cloudflare.com/building-full-stack-with-pages/


We were so excited to announce support for full stack applications in Cloudflare Pages that we knew we had to show it off in a big way. We’ve built a sample image-sharing platform to demonstrate how you can add serverless functions right from within Pages with help from Cloudflare Workers. With just one new file in your project, you can add dynamic rendering, interact with other APIs, and persist data with KV and Durable Objects. The possibilities for full-stack applications, in combination with Pages’ quick development cycles and unlimited preview environments, give you the power to create almost any application.

Today, we’re walking through our example image-sharing platform. We want to be able to share pictures with friends while still also keeping some images private. We’ll build a JSON API with Functions (storing data on KV and Durable Objects), integrate with Cloudflare Images and Cloudflare Access, and use React for our front end.

If you’re wanting to dive right into the good stuff, our demo instance is published here, and the code is on GitHub, but stick around for a more gentle approach.


Building serverless functions with Cloudflare Pages

File-based routing

If you’re not already familiar, Cloudflare Pages connects with your git provider (GitHub and GitLab), and automates the deployment of your static site to Cloudflare’s network. Functions lets you enhance these apps by sprinkling in dynamic data. If you haven’t already, you can sign up here.

In our project, let’s create a new function:

// ./functions/time.js


export const onRequest = () => {
  return new Response(new Date().toISOString())
}

git commit-ing and pushing this file should trigger a build and deployment of your first Pages function. Any requests for /time will be served by this function, and all other requests will fall back to the static assets of your project. Placing Functions files in directories works as you’d expect: ./functions/api/time.js would be available at /api/time and ./functions/some_directory/index.js would be available at /some_directory.

We also support TypeScript (./functions/time.ts would work just the same), as well as parameterized files:

  • ./functions/todos/[id].js with single square brackets will match all requests like /todos/123;
  • and ./functions/todos/[[path]].js with double square brackets will match requests for any number of path segments (e.g. /todos/123/subtasks).

We declare a PagesFunction type in the @cloudflare/workers-types library which you can use to type-check your Functions.
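
For example, a typed, parameterized Function might look like this (a hypothetical todos route used only for illustration, not part of the image-sharing app):

// ./functions/todos/[id].ts (hypothetical)
export const onRequestGet: PagesFunction = async ({ params }) => {
  // For a request to /todos/123, params.id is "123".
  return new Response(`Todo ${params.id}`);
};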

Dynamic data

So, returning to our image-sharing app, let’s assume we already have some images uploaded, and we want to display them on the homepage. We’ll need an endpoint which will return a list of these images, which the front-end can call:

// ./functions/api/images.ts

export const jsonResponse = (value: any, init: ResponseInit = {}) =>
  new Response(JSON.stringify(value), {
    headers: { "Content-Type": "application/json", ...init.headers },
    ...init,
  });

const generatePreviewURL = ({
  previewURLBase,
  imagesKey,
  isPrivate,
}: {
  previewURLBase: string;
  imagesKey: string;
  isPrivate: boolean;
}) => {
  // If isPrivate, generates a signed URL for the 'preview' variant
  // Else, returns the 'blurred' variant URL which never requires signed URLs
  // https://developers.cloudflare.com/images/cloudflare-images/serve-images/serve-private-images-using-signed-url-tokens

  return "SIGNED_URL";
};

export const onRequestGet: PagesFunction<{
  IMAGES: KVNamespace;
}> = async ({ env }) => {
  const { imagesKey } = (await env.IMAGES.get("setup", "json")) as Setup;

  const kvImagesList = await env.IMAGES.list<ImageMetadata>({
    prefix: `image:uploaded:`,
  });

  const images = kvImagesList.keys
    .map((kvImage) => {
      try {
        const { id, previewURLBase, name, alt, uploaded, isPrivate } =
          kvImage.metadata as ImageMetadata;

        const previewURL = generatePreviewURL({
          previewURLBase,
          imagesKey,
          isPrivate,
        });

        return {
          id,
          previewURL,
          name,
          alt,
          uploaded,
          isPrivate,
        };
      } catch {
        return undefined;
      }
    })
    .filter((image) => image !== undefined);

  return jsonResponse({ images });
};

Eagle-eyed readers will notice we’re exporting onRequestGet, which lets us respond only to GET requests.

We’re also using a KV namespace (accessed with env.IMAGES) to store information about images that have been uploaded. To create a binding in your Pages project, navigate to the “Settings” tab.


Interfacing with other APIs

Cloudflare Images is an inexpensive, high-performance, and featureful service for hosting and transforming images. You can create multiple variants to render your images in different ways and control access with signed URLs. We’ll add a function to interface with this service’s API and upload incoming files to Cloudflare Images:

// ./functions/api/admin/upload.ts

export const onRequestPost: PagesFunction<{
  IMAGES: KVNamespace;
}> = async ({ request, env }) => {
  const { apiToken, accountId } = (await env.IMAGES.get(
    "setup",
    "json"
  )) as Setup;

  // Prepare the Cloudflare Images API request body
  const formData = await request.formData();
  formData.set("requireSignedURLs", "true");
  const alt = formData.get("alt") as string;
  formData.delete("alt");
  const isPrivate = formData.get("isPrivate") === "on";
  formData.delete("isPrivate");

  // Upload the image to Cloudflare Images
  const response = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${accountId}/images/v1`,
    {
      method: "POST",
      body: formData,
      headers: {
        Authorization: `Bearer ${apiToken}`,
      },
    }
  );

  // Store the image metadata in KV
  const {
    result: {
      id,
      filename: name,
      uploaded,
      variants: [url],
    },
  } = await response.json<{
    result: {
      id: string;
      filename: string;
      uploaded: string;
      requireSignedURLs: boolean;
      variants: string[];
    };
  }>();

  const metadata: ImageMetadata = {
    id,
    previewURLBase: url.split("/").slice(0, -1).join("/"),
    name,
    alt,
    uploaded,
    isPrivate,
  };

  await env.IMAGES.put(
    `image:uploaded:${uploaded}`,
    "Values stored in metadata.",
    { metadata }
  );
  await env.IMAGES.put(`image:${id}`, JSON.stringify(metadata));

  return jsonResponse(true);
};

Persisting data

We’re already using KV to store information that is read often but rarely written to. What about features that require a bit more synchronicity?

Let’s add a download counter to each of our images. We can create a highres variant in Cloudflare Images, and increment the counter every time a user requests a link. This requires a bit more setup, but unlocking the power of Durable Objects in your projects is absolutely worth it.

We’ll need to create and publish the Durable Object class capable of maintaining this download count:

// ./durable_objects/downloadCounter.js

export class DownloadCounter {
  constructor(state) {
    this.state = state;
    // `blockConcurrencyWhile()` ensures no requests are delivered until initialization completes.
    this.state.blockConcurrencyWhile(async () => {
      let stored = await this.state.storage.get("value");
      this.value = stored || 0;
    });
  }

  async fetch(request) {
    const url = new URL(request.url);
    let currentValue = this.value;

    if (url.pathname === "/increment") {
      currentValue = ++this.value;
      await this.state.storage.put("value", currentValue);
    }

    return jsonResponse(currentValue);
  }
}
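
To round out the sketch, a Function can reach this object through a Durable Object binding and increment the count whenever a download link is requested. The binding name and route below are assumptions:

// ./functions/api/images/[id]/download.ts (hypothetical caller for the counter above)
export const onRequestGet: PagesFunction<{
  DOWNLOAD_COUNTER: DurableObjectNamespace;
}> = async ({ env, params }) => {
  // One counter instance per image, addressed deterministically by image id.
  const id = env.DOWNLOAD_COUNTER.idFromName(String(params.id));
  const stub = env.DOWNLOAD_COUNTER.get(id);

  // Increment the count and return the new value to the client.
  return stub.fetch("https://counter/increment");
};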

Middleware

If you need to execute some code (such as authentication or logging) before you run your function, Pages offers easy-to-use middleware which can be applied at any level in your file-based routing. By creating a _middleware.ts file in a directory, Pages knows to run that file first and then execute your function when next() is called.

In our application, we want to prevent unauthorized users from uploading images (/api/admin/upload) or deleting images (/api/admin/delete). Cloudflare Access lets us apply role-based access control to all or part of our application, and you only need a single file to integrate it into our serverless functions. We create ./functions/api/admin/_middleware.ts which will apply to all incoming requests at /api/admin/*:

// ./functions/api/admin/_middleware.ts

const validateJWT = async (jwtAssertion: string | null, aud: string) => {
  // If the JWT is valid, return the JWT payload
  // Else, return false
  // https://developers.cloudflare.com/cloudflare-one/identity/users/validating-json

  return jwtPayload;
};

const cloudflareAccessMiddleware: PagesFunction<{ IMAGES: KVNamespace }> =
  async ({ request, env, next, data }) => {
    const { aud } = (await env.IMAGES.get("setup", "json")) as Setup;

    const jwtPayload = await validateJWT(
      request.headers.get("CF-Access-JWT-Assertion"),
      aud
    );

    if (jwtPayload === false)
      return new Response("Access denied.", { status: 403 });

    // We could also use the data object to pass information between middlewares
    data.user = jwtPayload.email;

    return await next();
  };

export const onRequest = [cloudflareAccessMiddleware];

Middleware is a powerful tool at your disposal, allowing you to easily protect parts of your application with Cloudflare Access, or quickly integrate with observability and error logging platforms such as Honeycomb and Sentry.

Integrating as Jamstack

The “Jam” of “Jamstack” stands for JavaScript, API and Markup. Cloudflare Pages previously provided the ‘J’ and ‘M’, and with Workers in the middle, you can truly go full-stack Jamstack.

We’ve built the front end of this image sharing platform with Create React App as an approachable example, but Cloudflare Pages natively integrates with an ever-growing number of frameworks (currently 23), and you can always configure your own entirely custom build command.

Your front end simply needs to make a call to the Functions we’ve already configured, and render out that data. We’re using SWR to simplify things, but you could do this with entirely vanilla JavaScript fetch-es, if that’s your preference.

// ./src/components/ImageGrid.tsx

export const ImageGrid = () => {
  const { data, error } = useSWR<{ images: Image[] }>("/api/images");

  if (error || data === undefined) {
    return <div>An unexpected error has occurred when fetching the list of images. Please try again.</div>;
  }


  return (
    <div>
      {data.images.map((image) => (
        <ImageCard image={image} key={image.id} />
      ))}
    </div>
  );

}

Local development

No matter how fast it is, iterating on a project like this can be painful if you have to push up every change in order to test how it works. We’ve released a first-class integration with wrangler for local development of Pages projects, including full support for Functions, Workers, secrets, environment variables and KV. Durable Objects support is coming soon.

Install from npm:

npm install wrangler@beta

and either serve a folder of static assets, or proxy your existing tooling:

# Serve a directory
npx wrangler pages dev ./public

# or integrate with your other tools
npx wrangler pages dev -- npx react-scripts start

Go forth, and build!

If you like puppies, we’ve deployed our image-sharing application here, and if you like code, that’s over on GitHub. Feel free to fork and deploy it yourself! There’s a five-minute setup wizard, and you’ll need Cloudflare Images, Access, Workers, and Durable Objects.

We are so excited about the future of the Pages platform, and we want to hear what you’re building! Show off your full-stack applications in the #what-i-built channel, or get assistance in the #pages-help channel on our Discord server.


Helping Apache Servers stay safe from zero-day path traversal attacks (CVE-2021-41773)

Post Syndicated from Michael Tremante original https://blog.cloudflare.com/helping-apache-servers-stay-safe-from-zero-day-path-traversal-attacks/


On September 29, 2021, the Apache Security team was alerted to a path traversal vulnerability being actively exploited (zero-day) against Apache HTTP Server version 2.4.49. The vulnerability, in some instances, can allow an attacker to fully compromise the web server via remote code execution (RCE) or at the very least access sensitive files. CVE number 2021-41773 has been assigned to this issue. Both Linux and Windows based servers are vulnerable.

An initial patch was made available on October 4 with an update to 2.4.50; however, this was found to be insufficient, resulting in an additional patch that bumped the version number to 2.4.51 on October 7 (CVE-2021-42013).

Customers using Apache HTTP Server versions 2.4.49 and 2.4.50 should immediately update to version 2.4.51 to mitigate the vulnerability. Details on how to update can be found on the official Apache HTTP Server project site.

Any Cloudflare customer with the “normalize URLs to origin” setting turned on has always been protected against this vulnerability.

Additionally, customers who have access to the Cloudflare Web Application Firewall (WAF) receive additional protection by turning on the rule with the ID that matches their WAF version:

  • 1c3d3022129c48e9bb52e953fe8ceb2f (for our new WAF)
  • 100045A (for our legacy WAF)

The rule can also be identified by the following description:

Rule message: Anomaly:URL:Query String - Multiple Slashes, Relative Paths, CR, LF or NULL.

Given the nature of the vulnerability, attackers would normally try to access sensitive files (for example /etc/passwd), and as such, many other Cloudflare Managed Rule signatures are also effective at stopping exploit attempts depending on the file being accessed.

How the vulnerability works

The vulnerability leverages missing path normalization logic. If the Apache server is not configured with a require all denied directive for files outside the document root, attackers can craft special URLs to read any file on the file system accessible by the Apache process. Additionally, this flaw could also leak the source of interpreted files like CGI scripts and, in some cases, also allow the attacker to take over the web server by executing shell scripts.

For example, the following path:

$hostname/cgi-bin/../../../etc/passwd

would allow the attacker to climb the directory tree (../ indicates parent directory) outside of the web server document root and then subsequently access /etc/passwd.

Well-implemented path normalization logic would correctly collapse the path into the shorter $hostname/etc/passwd by normalizing all ../ character sequences, nullifying the attempt to climb up the directory tree.

Correct normalization is not easy, as it also needs to take character encoding into consideration, such as percent-encoded characters used in URLs. For example, the following path is equivalent to the first one provided:

$hostname/cgi-bin/.%2e/%2e%2e/%2e%2e/etc/passwd

as the characters %2e represent the percent encoded version of dot “.”. Not taking this properly into account was the cause of the vulnerability.
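
To illustrate, here is a hedged sketch of the decode-then-normalize step that the vulnerable versions effectively skipped for these sequences. It performs a single decode pass; a production server must also account for double encoding and other edge cases:

// normalizePath.ts (illustrative): percent-decode before collapsing ".." segments
function normalizePath(rawPath: string): string {
  const decoded = decodeURIComponent(rawPath); // ".%2e" and "%2e%2e" both become ".."
  const segments: string[] = [];
  for (const part of decoded.split("/")) {
    if (part === "" || part === ".") continue;
    if (part === "..") segments.pop(); // never climb above the document root
    else segments.push(part);
  }
  return "/" + segments.join("/");
}

// Both encodings collapse to the same path, neutralizing the traversal attempt:
// normalizePath("/cgi-bin/../../../etc/passwd")            returns "/etc/passwd"
// normalizePath("/cgi-bin/.%2e/%2e%2e/%2e%2e/etc/passwd")  returns "/etc/passwd"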

The PoC for this vulnerability is straightforward and simply relies on attempting to access sensitive files on vulnerable Apache web servers.

Exploit Attempts

Cloudflare has seen a sharp increase in attempts to exploit and find vulnerable servers since October 5.


Most exploit attempts observed have been probing for static file paths — indicating heavy scanning activity before attackers (or researchers) may have attempted more sophisticated techniques that could lead to remote code execution. The most commonly attempted file paths are reported below:

/cgi-bin/.%2e/.git/config
/cgi-bin/.%2e/app/etc/local.xml
/cgi-bin/.%2e/app/etc/env.php
/cgi-bin/.%2e/%2e%2e/%2e%2e/etc/passwd

Conclusion

Keeping web environments safe is not an easy task. Attackers will normally gain access and try to exploit vulnerabilities even before PoCs become widely available — we reported such a case not too long ago with Atlassian’s Confluence OGNL vulnerability.

It is vital to employ all security measures available. Cloudflare features such as URL normalization and the WAF are easy to implement and can buy time to deploy any relevant patches offered by the affected software vendors.