Tag Archives: Cloudflare Access

Integrating Cloudflare Gateway and Access

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/integrating-cloudflare-gateway-and-access/

We’re excited to announce that you can now set up your Access policies to require that all user traffic to your application is filtered by Cloudflare Gateway. This ensures that all of the traffic to your self-hosted and SaaS applications is secured and centrally logged. You can also use this integration to build rules that determine which users can connect to certain parts of your SaaS applications, even if the application does not support those rules on its own.

Stop threats from returning to your applications and data

We built Cloudflare Access as an internal project to replace our own VPN. Unlike a traditional private network, Access follows a Zero Trust model. Cloudflare’s edge checks every request to protected resources for identity and other signals like device posture (i.e., information about a user’s machine, such as its operating system version and whether antivirus is running).

By deploying Cloudflare Access, our security and IT teams could build granular rules for each application and log every request and event. Cloudflare’s network accelerated how users connected. We launched Access as a product for our customers in 2018 to share those improvements with teams of any size.

Over the last two years, we added new types of rules that check for hardware security keys, location, and other signals. However, we were still left with some challenges:

  • What happened to devices before they connected to applications behind Access? Were they bringing something malicious with them?
  • Could we make sure these devices were not leaking data elsewhere when they reached data behind Access?
  • Had the credentials used for a Cloudflare Access login been phished elsewhere?

We built Cloudflare Gateway to solve those problems. Cloudflare Gateway sends all traffic from a device to Cloudflare’s network, where it can be filtered for threats, file upload/download, and content categories.

Administrators deploy a lightweight agent on user devices that proxies all Internet-bound traffic through Cloudflare’s network. As that traffic arrives in one of our data centers in 200 cities around the world, Cloudflare’s edge inspects the traffic. Gateway can then take actions like prevent users from connecting to destinations that contain malware or block the upload of files to unapproved locations.

With today’s launch, you can now build Access rules that restrict connections to devices that are running Cloudflare Gateway. You can configure Cloudflare Gateway to run in always-on mode and ensure that the devices connecting to your applications are secured as they navigate the rest of the Internet.

Log every connection to every application

In addition to filtering, Cloudflare Gateway also logs every request and connection made from a device. With Gateway running, your organization can audit how employees use SaaS applications like Salesforce, Office 365, and Workday.

However, we’ve talked to several customers who share a concern over log integrity — “what stops a user from bypassing Gateway’s logging by connecting to a SaaS application from a different device?” Users could type in their password and use their second factor authentication token on a different device — that way, the organization would lose visibility into that corporate traffic.

Today’s release gives your team the ability to ensure every connection to your SaaS applications uses Cloudflare Gateway. Your team can integrate Cloudflare Access, and its ruleset, into the login flow of your SaaS applications. Cloudflare Access checks for additional factors when your users log in with your SSO provider. By adding a rule to require Cloudflare Gateway be used, you can prevent users from ever logging into a SaaS application without connecting through Gateway.

Build data control rules in SaaS applications

One other challenge we had internally at Cloudflare is that we lacked the ability to add user-based controls in some of the SaaS applications we use. For example, a team member connecting to a data visualization application had access to dashboards created by other teams that they should not have been able to view.

We can use Cloudflare Gateway to solve that problem. Gateway provides the ability to restrict certain URLs to groups of users; this allows us to add rules that only let specific team members reach records that live at known URLs.

However, if someone is not using Gateway, we lose that level of policy control. The integration with Cloudflare Access ensures that those rules are always enforced. If users are not running Gateway, they cannot log in to the application.

What’s next?

You can begin using this feature in your Cloudflare for Teams account today with the Teams Standard or Teams Enterprise plan. Documentation is available here to help you get started.

Want to try out Cloudflare for Teams? You can sign up for Teams today on our free plan and test Gateway’s DNS filtering and Access for up to 50 users at no cost.

Announcing Workplace Records for Cloudflare for Teams

Post Syndicated from Matthew Prince original https://blog.cloudflare.com/work-jurisdiction-records-for-teams/

We wanted to close out Privacy & Compliance Week by talking about something universal and certain: taxes. Businesses worldwide pay employment taxes based on where their employees do work. For most businesses and in normal times, where employees do work has been relatively easy to determine: it’s where they come into the office. But 2020 has made everything more complicated, even taxes.

As businesses worldwide have shifted to remote work, employees have been working from “home” — wherever that may be. Some employees have taken this opportunity to venture further from where they usually are, sometimes crossing state and national borders.

In a lot of ways, it’s gone better than expected. We’re proud of helping provide technology solutions like Cloudflare for Teams that allow employees to work from anywhere and ensure they still have a fast, secure connection to their corporate resources. But increasingly we’ve been hearing from the heads of the finance, legal, and HR departments of our customers with a concern: “If I don’t know where my employees are, I have no idea where I need to pay taxes.”

Today we’re announcing the beta of a new feature for Cloudflare for Teams to help solve this problem: Workplace Records. Cloudflare for Teams uses Access and Gateway logs to provide the state and country from which employees are working. Workplace Records can be used to help finance, legal, and HR departments determine where payroll taxes are due and provide a record to defend those decisions.

Every location became a potential workplace

Before 2020, employees who frequently traveled could manage tax jurisdiction reporting by gathering plane tickets or keeping manual logs of where they spent time. It was tedious, for employees and our payroll team, but manageable.

The COVID pandemic transformed that chore into a significant challenge for our finance, legal, and HR teams. Our entire organization was suddenly forced to work remotely. If we couldn’t get comfortable that we knew where people were working, we worried we may be forced to impose somewhat draconian rules requiring employees to check-in. That didn’t seem very Cloudflare-y.

The challenge impacts individual team members as well. Reporting mistakes can lead to tax penalties for employees or amendments during filing season. Our legal team started to field questions from employees stuck in new regions because of travel restrictions. Our payroll team prepared for a backlog of amendments.

Logging jurisdiction without manual reporting

When team members open their corporate laptops and start a workday, they log in to Cloudflare Access — our Zero Trust tool that protects applications and data. Cloudflare Access checks their identity and other signals like multi-factor methods to determine if they can proceed. Importantly, the process also logs their region so we can enforce country-specific rules.

Our finance, legal, and HR teams worked with our engineering teams to use that model to create Workplace Records. We now have the confidence to know we can meet our payroll tax obligations without imposing onerous limitations on team members. We’re able to prepare and adjust, in real time, while confidentially supporting our employees as they work remotely from wherever is most comfortable and productive for them.

Respecting team member privacy

Workplace Records only provides resolution within a taxable jurisdiction, not a specific address. The goal is to give only the information that finance, legal, and HR departments need to ensure they can meet their compliance obligations.

The system also generates these reports by capturing team member logins to work applications on corporate devices. We use the location of that login to determine “this was a workday from Texas”. If a corporate laptop is closed or stored away for the weekend, we aren’t capturing location logs. We’d rather team members enjoy time off without connecting.

Two clicks to enforce regional compliance

Workplace Records can also help ensure company policy compliance for a company’s teams. For instance, companies may have policies about engineering teams only creating intellectual property in countries in which transfer agreements are in place. Workplace Records can help ensure that engineering work isn’t being done in countries that may put the intellectual property at risk.

Administrators can build rules in Cloudflare Access to require that team members connect to internal or SaaS applications only from countries where they operate. Cloudflare’s network will check every request both for identity and the region from which they’re connecting.

We also heard from our own accounting teams that some regions enforce strict tax penalties when employees work without an incorporated office or entity. In the same way that you can require users to work only from certain countries, you can also block users from connecting to your applications from specific regions.

No deciphering required

When we started planning Workplace Records, our payroll team asked us not to send raw data that would create more work for them to triage and sort.

Available today, you can view the country of each login to internal systems on a per-user basis. You can export this data to an external SIEM and you can build rules that control access to systems by country.

Launching today in beta is a new UI that summarizes the working days spent in specific regions for each user. Workplace Records will add a company-wide report early in Q1. The service is available as a report for free to all Cloudflare for Teams customers.

Going forward, we plan to work with Human Capital Management (HCM), Human Resource Information Systems (HRIS), Human Resource Management Systems (HRMS), and Payroll providers to automatically integrate Workplace Records.

What’s next?

At Cloudflare, we know even after the pandemic we are going to be more tolerant of remote work than before. The more that we can allow our team to work remotely and ensure we are meeting our regulatory, compliance, and tax obligations, the more flexibility we will be able to provide.

Cloudflare for Teams with Workplace Records is helping solve a challenge for our finance, legal, and HR teams. Now with the launch of the beta, we hope we can help enable a more flexible and compliant work environment for all our Cloudflare for Teams customers.

This feature will be available to all Cloudflare for Teams subscribers early next week. You can start using Cloudflare for Teams today at no cost for up to 50 users, including the Workplace Records feature.

Many services, one cloudflared

Post Syndicated from Adam Chalmers original https://blog.cloudflare.com/many-services-one-cloudflared/

Route many different local services through many different URLs, with only one cloudflared

I work on the Argo Tunnel team, and we make a program called cloudflared, which lets you securely expose your web service to the Internet while ensuring that all its traffic goes through Cloudflare.

Say you have some local service (a website, an API, a TCP server, etc.), and you want to securely expose it to the Internet using Argo Tunnel. First, you run cloudflared, which establishes some long-lived TCP connections to the Cloudflare edge. Then, when Cloudflare receives a request for your chosen hostname, it proxies the request through those connections to cloudflared, which in turn proxies the request to your local service. This means anyone accessing your service has to go through Cloudflare, and Cloudflare can do caching, rewrite parts of the page, block attackers, or build Zero Trust rules to control who can reach your application (e.g. users with a @corp.com email). Previously, companies had to use VPNs or firewalls to achieve this, but Argo Tunnel aims to be more flexible, more secure, and more scalable than the alternatives.

Some of our larger customers have deployed hundreds of services with Argo Tunnel, but they’re consistently experiencing a pain point with these larger deployments. Each instance of cloudflared can only proxy a single service. This means if you want to put, say, 100 services on the internet, you’ll need 100 instances of cloudflared running on your server. This is inefficient (because you’re using 100x as many system resources) and, even worse, it’s a pain to manage 100 long-lived services!

Today, we’re thrilled to announce our most-requested feature: you can now expose unlimited services using one cloudflared. Any customer can start using this today, at no extra cost, using the Named Tunnels we released a few months ago.

Named Tunnels

Earlier this year, we announced Named Tunnels—tunnels with immutable IDs that you can run and stop as you please. You can route traffic into the tunnel by adding a DNS or Cloudflare Load Balancer record, and you can route traffic from the tunnel into your local services by running cloudflared.

You can create a tunnel by running $ cloudflared tunnel create my_tunnel_name. Once you’ve got a tunnel, you can use DNS records or Cloudflare Load Balancers to route traffic into the tunnel. Once traffic is routed into the tunnel, you can use our new ingress rules to map traffic to local services.
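
As a rough sketch of that flow (the tunnel name and hostname below are placeholders), the commands look like this:

$ cloudflared tunnel create my_tunnel_name
$ cloudflared tunnel route dns my_tunnel_name app.example.com
$ cloudflared tunnel run my_tunnel_name

The create step only needs to happen once. The route dns command points a hostname at the tunnel, and run starts proxying traffic using the ingress rules in your configuration file.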

Map traffic with ingress rules

An ingress rule basically says “send traffic for this Internet URL to this local service.” When you invoke cloudflared, it reads these ingress rules from the configuration file. You write ingress rules under the ingress key of your config file, like this:

$ cat ~/cloudflared_config.yaml

tunnel: my_tunnel_name
credentials-file: .cloudflared/e0000000-e650-4190-0000-19c97abb503b.json
ingress:
 # Rules map traffic from a hostname to a local service:
 - hostname: example.com
   service: https://localhost:8000
 # Rules can match the request's path to a regular expression:
 - hostname: static.example.com
   path: /images/*\.(jpg|png|gif)
   service: https://machine1.local:3000
 # Rules can match the request's hostname to a wildcard character:
 - hostname: "*.ssh.foo.com"
   service: ssh://localhost:2222
 # You can map traffic to the built-in “Hello World” test server:
 - hostname: foo.com
   service: hello_world
 # This “catch-all” rule doesn’t have a hostname/path, so it matches everything
 - service: http_status:404

This example maps traffic to three different local services. But cloudflared can map traffic to more than just addresses: it can respond with a given HTTP status (as in the last rule) or with the built-in Hello World test server (as in the second-last rule). See the docs for a full list of supported services.

You can match traffic using the hostname, a path regex, or both. If you don’t use any filters, the ingress rule will match everything (so if you have DNS records from different zones routing into the tunnel, the rule will match all their URLs). Traffic is matched to rules from top to bottom, so in this example, the last rule will match anything that wasn’t matched by an earlier rule. We actually require the last rule to match everything; otherwise, cloudflared could receive a request and not know what to respond with.

Testing your rules

To make sure all your rules are valid, you can run cloudflared tunnel ingress validate. For example, this configuration is missing a catch-all rule, so validation fails:

$ cat ~/cloudflared_config_invalid.yaml

ingress:
 - hostname: example.com
   service: https://localhost:8000

$ cloudflared tunnel ingress validate
Validating rules from /usr/local/etc/cloudflared/config.yml
Validation failed: The last ingress rule must match all URLs (i.e. it should not have a hostname or path filter)

This will check that all your ingress rules use valid regexes and map to valid services, and it’ll ensure that your last rule (and only your last rule) matches all traffic. To make sure your ingress rules do what you expect them to do, you can run

$ cloudflared tunnel ingress rule https://static.example.com/images/dog.gif
Using rules from ~/cloudflared_config.yaml
Matched rule #2
        Hostname: static.example.com
        path: /images/*\.(jpg|png|gif)

This will check which rule matches the given URL, almost like a dry run for the ingress rules (no tunnels are run and no requests are actually sent). It’s helpful for making sure you’re routing the right URLs to the right services!

Per-rule configuration

Whenever cloudflared gets a request from the internet, it proxies that request to the matching local service on your origin. Different services might need different configurations for this request; for example, you might want to tweak the timeout or HTTP headers for a certain origin. You can set a default configuration for all your local services, and then override it for specific ones, e.g.

# Set configuration for all services (root-level default)
originRequest:
  connectTimeout: 30s
ingress:
  # This service inherits all the default (root-level) configuration
  - hostname: example.com
    service: https://localhost:8000
  # This service overrides the default configuration
  - service: https://localhost:8001
    originRequest:
      connectTimeout: 10s
      disableChunkedEncoding: true

For a full list of configuration options, check out the docs.

What’s next?

We really hope this makes Argo Tunnel an even easier way to deploy services onto the Internet. If you have any questions, file an issue on our GitHub. Happy developing!

Introducing WARP for Desktop and Cloudflare for Teams

Post Syndicated from Kyle Krum original https://blog.cloudflare.com/warp-for-desktop/

Cloudflare launched ten years ago to keep web-facing properties safe from attack and fast for visitors. Cloudflare customers owned Internet properties that they placed on our network. Visitors to those sites and applications enjoyed a faster experience, but that speed was not consistent for accessing Internet properties outside the Cloudflare network.

Over the last few years, we began building products that could help deliver a faster and safer Internet to everyone, not just visitors to sites on our network. We started with the first step to visiting any website, a DNS query, and released the world’s fastest public DNS resolver, 1.1.1.1. Any Internet user could improve the speed to connect to any website simply by changing their resolver.

While making the Internet faster for users, we also focused on making it more private. We built 1.1.1.1 to accelerate the last mile of connections, from user to our edge or other destinations on the Internet. Unlike other providers, we did not build it to sell ads.

Last year we went one step further to make the entire connection from a device both faster and safer when we launched Cloudflare WARP. With the push of a button, users could connect their mobile device to the entire Internet using a WireGuard tunnel through a Cloudflare data center near to them. Traffic to sites behind Cloudflare became even faster and a user’s experience with the rest of the Internet became more secure and private.

We brought that experience to desktops in beta earlier this year, and are excited to announce the general availability of Cloudflare WARP for desktop users today. The entire Internet can now be more secure and private regardless of how you connect.

Bringing the power of WARP to security teams everywhere

WARP made the Internet faster and more private for individual users everywhere. But as businesses embraced remote work models at scale, security teams struggled to extend the security controls they had enabled in the office to their remote workers. Today, we’re bringing everything our users have come to expect from WARP to security teams. The release also enables new functionality in our Cloudflare Gateway product.

Customers can use the Cloudflare WARP application to connect corporate desktops to Cloudflare Gateway for advanced web filtering. The Gateway features rely on the same performance and security benefits of the underlying WARP technology, now with security filtering available to the connection.

The result is a simple way for enterprises to protect their users wherever they are without requiring the backhaul of network traffic to a centralized security boundary. Instead, organizations can configure the WARP client application to securely and privately send remote users’ traffic through a Cloudflare data center near them. Gateway administrators apply policies to outbound Internet traffic proxied through the client, allowing organizations to protect users from threats on the Internet, and stop corporate data from leaving their organization.

Privacy, Security and Speed for Everyone

WARP was built on the philosophy that even people who don’t know what “VPN” stands for should still be able to easily get the protection a VPN offers. For those of us unfortunately very familiar with traditional corporate VPNs, something better was needed. Enter our own WireGuard implementation called BoringTun.

The WARP application uses BoringTun to encrypt all the traffic from your device and send it directly to Cloudflare’s edge, ensuring that no one in between is snooping on what you’re doing. If the site you are visiting is already a Cloudflare customer, the content is immediately sent down to your device. With WARP+ we use Argo Smart Routing to devise the shortest path through our global network of data centers to reach whomever you are talking to.

Combined with the power of 1.1.1.1 (the world’s fastest public DNS resolver), WARP keeps your traffic secure, private and fast. Since nearly everything you do on the Internet starts with a DNS request, choosing the fastest DNS server across all your devices will accelerate almost everything you do online. Speed isn’t everything though, and while the connection between your application and a website may be encrypted, DNS lookups for that website were not. This allowed anyone, even your Internet Service Provider, to potentially snoop on (and sell) where you are going on the Internet.

Cloudflare will never snoop or sell your personal data. And if you use DNS-over-HTTPS or DNS-over-TLS to our 1.1.1.1 resolver, your DNS request will be sent over a secure channel. This means that if you use the 1.1.1.1 resolver, then, in addition to our privacy guarantees, an eavesdropper can’t see your DNS requests. Don’t take our word for it, though: earlier this year we published the results of a third-party privacy examination, something we’ll keep doing and wish others would do as well.

For Gateway customers, we are committed to privacy and trust and will never sell your personal data to third parties. While your administrator will have the ability to audit your organization’s traffic, create rules around how long data is retained, or create specific policies about where they can go, Cloudflare will never sell your personal data or use your personal data to retarget you with advertisements. Privacy and control of your organization’s data is in your hands.

Now integrated with Cloudflare Gateway

Traditionally, companies have used VPN solutions to gate access to corporate resources and keep devices secure with their filtering rules. These connections quickly became a point of failure (and intrusion vector) as organizations needed to manage and scale up VPN servers as traffic through their on-premises servers grew. End users didn’t like it either. VPN servers were usually overwhelmed at peak times, the clients were bulky, and they were rarely built with performance in mind. And once a bad actor got in, they had access to everything.

Traditional VPN architecture

In January 2020, we launched Cloudflare for Teams as a replacement to this model. Cloudflare for Teams is built around two core products. Cloudflare Access is a Zero Trust solution allowing organizations to connect internal (and now, SaaS) applications to Cloudflare’s edge and build security rules to enforce safe access to them. No longer were VPNs a single entry point to your organization; users could work from anywhere and still get access. Cloudflare Gateway’s first features focused on protecting users from threats on the Internet with a DNS resolver and policy engine built for enterprises.

The strength and power of WARP clients, used today by millions of users around the world, will enable incredible new use cases for security teams:

  • Encrypt all user traffic – Regardless of your users’ location, all traffic from their device is encrypted with WARP and sent privately to the nearest WARP endpoint. This keeps your users and your organization protected from whoever may be snooping. If you still use a traditional VPN on top of Access to encrypt user traffic, that is no longer needed.
  • WARP+ – Cloudflare offers a premium WARP+ service for customers who want additional speed benefits. That now comes packaged into Teams deployments. Any Teams customer who deploys the Teams client applications will automatically receive the premium speed benefits of WARP+.
  • Gateway for remote workers – Until today, Gateway required that you keep track of all your users’ IP addresses and build policies per location. This made it difficult to enforce policy or provide malware protection when a user took their device to a new location. With the client installed, these policies can be enforced anywhere.
  • L7 Firewall and user based policies – Today’s announcement of Cloudflare Gateway SWG and Secure DNS allows your organization to enforce device authentication to your Teams account, enabling you to build user-specific policies and force all traffic through the firewall.
  • Device and User auditing – Along with user and device policies, administrators will also be able to audit specific user and device traffic. Used in conjunction with Logpush, this will allow your organization to do detailed tracing in the event of a breach or audit.

Enroll your organization to use the WARP client with Cloudflare for Teams

We know how hard it can be to deploy another piece of software in your organization, so we’ve worked hard to make deployment easy. To get started, just navigate to our sign-up page and create an account. If you already have an active account, you can bypass this step and head straight to the Cloudflare for Teams dashboard where you’ll be dropped directly into our onboarding flow. After you have signed up and configured your team, set up a Gateway policy and then choose one of the three ways below to install the client and enforce that policy:

Self Install
If you are a small organization without an IT department, asking your users to download the client themselves and type in the required settings is the fastest way to get going.

Manually join an organization

Scripted Install
Our desktop installers support the ability to quickly script the installation. In the case of Windows, this is as easy as this command line:

Cloudflare_WARP_Release-x64.msi /quiet ORGANIZATION="<insert your org>" SERVICE_MODE="warp" ENABLE="true" GATEWAY_UNIQUE_ID="<insert your gateway DoH domain>" SUPPORT_URL="<mailto or http of your support person>"

Managed Device
Organizations with MDM tools like Intune or JAMF can deploy WARP to their entire fleet of devices from a single operation. Just as you preconfigure all other device settings, WARP can be set up so that all end users need to do is log in with your team’s identity provider by clicking on the Cloudflare WARP client after it has been deployed.

Microsoft Intune Configuration

For a complete list of the installation options, required fields, and step-by-step instructions for all platforms, see the WARP Client documentation.

What’s coming next

There is still more we want to build for both our consumer users of WARP and our Cloudflare for Teams customers. Here’s a sneak peek at some of the ones we are most excited about (and allowed to share):

  • New partner integrations with CrowdStrike and VMware Carbon Black (Tanium available today) will allow you to build even more comprehensive Cloudflare Access policies that check for device health before allowing users to connect to applications
  • Split Tunnel support will allow you or your organization to specify applications, sites or IP addresses that should be excluded from WARP. This will allow content like games, streaming services, or any application you choose to work outside the connection.
  • BYOD device support, especially for mobile clients. Enterprise users who are not on the clock should be able to easily toggle off “office mode,” so corporate policies don’t limit personal use of their own devices.
  • We are still missing one major operating system from our client portfolio and Linux support is coming.

Download now

We are excited to finally share these applications with our customers. We’d especially like to thank our Cloudflare MVPs, the 100,000+ beta users on desktop, and the millions of existing users on mobile who have helped grow WARP into what it is today.

You can download the applications right now from https://one.one.one.one

Argo Tunnels that live forever

Post Syndicated from Sam Rhea original https://blog.cloudflare.com/argo-tunnels-that-live-forever/

Cloudflare secures your origin servers by proxying requests to your DNS records through our anycast network and to the external IP of your origin. However, external IP addresses can provide attackers with a path around Cloudflare security if they discover those destinations.

We launched Argo Tunnel as a secure way to connect your origin to Cloudflare without a publicly routable IP address. With Tunnel, you don’t send traffic to an external IP. Instead, a lightweight daemon runs in your infrastructure and creates outbound-only connections to Cloudflare’s edge. With Argo Tunnel, you can quickly deploy infrastructure in a Zero Trust model by ensuring all requests to your resources pass through Cloudflare’s security filters.

Originally, your Argo Tunnel connection corresponded to a DNS record in your account. Requests to that hostname hit Cloudflare’s network first and our edge sends those requests over the Argo Tunnel to your origin. Since these connections are outbound-only, you no longer need to poke holes in your infrastructure’s firewall. Your origins can serve traffic through Cloudflare without being vulnerable to attacks that bypass Cloudflare.

However, fitting an outbound-only connection into a reverse proxy creates some ergonomic and stability hurdles. The original Argo Tunnel architecture attempted to both manage DNS records and create connections. When connections became disrupted, Argo Tunnel would recreate the entire deployment. Additionally, Argo Tunnel connections could not be treated like regular origin servers in Cloudflare’s control plane and had to be managed directly from the server-side software.

Today, we’re introducing a new architecture that treats Argo Tunnel connections like a true origin server without the risk of exposure to the rest of the Internet. Now, when you create a Tunnel connection, you can point DNS records for any hostname in your account, or load balancer pools, to that connection from the Cloudflare dashboard. You can also run Argo Tunnel connections without the need for leaving certificates and service tokens on your servers.

Keeping persistent objects persistent

Argo Tunnel has objects that tend to stay persistent (DNS records) and objects that deliberately change and recreate (connections from `cloudflared` to Cloudflare). Argo Tunnel previously conflated the two categories, which led to some issues.

The edge vs. the control plane

Cloudflare as a whole consists of two components: the edge network and the control plane that manages the configuration of that network.

The data centers in 200 cities around the world that proxy traffic to your origin make up the edge network. These data centers are highly available and, thanks to Anycast IP routing, can gracefully handle traffic if one or more data centers go offline.

When you make a change to something in Cloudflare (whether via the UI in Cloudflare’s dashboard, or the API) our control plane receives it, authenticates it, and then pushes it to our edge.

If the control plane goes down, the edge should not be degraded – traffic will continue to be served using the most recent configuration. At launch, Argo Tunnel muddled the two in some places, which meant that control plane issues could become edge issues for Tunnel users.

Starting every Tunnel from scratch

Regardless of whether a Tunnel is connecting for the first time or the 100th, the operation repeated a series of high-level steps in the original architecture:

  1. cloudflared connects to an Argo Tunnel service running in Cloudflare’s control plane. That service registers your Tunnel and its connections.
  2. cloudflared creates a public DNS record for your hostname which points to a randomly generated CNAME record for load balanced Tunnels or an IPv6 address for traditional Tunnels. The ephemeral CNAME record represents your Tunnel.
  3. The control plane then tells Cloudflare’s edge about these DNS entries and where the CNAME or IP address should send traffic. Traffic can now be routed to cloudflared.
  4. If the Tunnel disconnects, for any reason, the Argo Tunnel service unregisters the Tunnel and deletes the DNS record.

The last step is an issue. In most cases, you create an Argo Tunnel for a service meant to run indefinitely. The DNS record should stay persistent – it represents an app that you manage and should not change. However, a simple restart or disconnection meant that cloudflared had to follow every step and start itself from scratch. If any of those upstream services were degraded, the Tunnel would fail to reconnect.

This model also introduced other shortcomings. You could not gracefully change the DNS record of a Tunnel; instead, you had to stop cloudflared and rerun the service. Visibility was limited. Load balancing introduced complications with how origins were counted.

Phase 1: Improving stability

The team started by reducing the impact of those dependencies. Over the last year, Argo Tunnel has quietly replaced single points of failure with distributed systems that are more fault tolerant.

Tunnels now live longer. Argo Tunnel has migrated to Cloudflare’s Unimog platform, which has increased the average life of a connection from minutes to days. When connections live longer, they restart less, and are then subject to fewer upstream hiccups.

Additionally, some Tunnels no longer need to follow the entire creation flow. If your Tunnel reconnects, we opportunistically try to reestablish it with the records already at our edge.

These changes have dramatically improved the stability of Argo Tunnel as a platform, but still left a couple of core problems: Tunnel reconnections were treated like new connections and managing those connections added friction.

Phase 2: Named Tunnels that outlive connections

Starting today, Argo Tunnel’s architecture distinguishes the persistent objects (DNS records, cloudflared) from the ephemeral objects (the connections). To do that, this release introduces the concept of a permanent name that you assign to a Tunnel.

In the old model, cloudflared created both the DNS record entries and established the connections from the server to Cloudflare’s network. DNS records became tied to those connections and could not be changed. Even worse, each time cloudflared restarted, we treated it like a new Tunnel and had to propagate this information into DNS and Load Balancer systems. If those had delays, the restart could become an outage.

Today’s release separates DNS creation from connection creation to make tunnels more stable and simpler to manage. In this model, you can use `cloudflared` to create an Argo Tunnel that has a persistent, stable name that can be entirely unrelated to the hostname.

Once created, you can point DNS records in your account to a stable subdomain that relies on a UUID tied to that persistent name. Since the name and UUID do not change, your DNS record never needs to be cleaned up or recreated when Argo Tunnel restarts. In the event of a restart, the enrolled instance of cloudflared connects back to that UUID address.

You can also treat named Argo Tunnels like origin servers in this architecture – except these origins can only be connected to via a DNS record in your account. You can delete a DNS record and create a new one that points to the UUID address and traffic will be served – all without touching cloudflared.

How it works

You can begin using this new architecture today with the following steps. First, you’ll need to upgrade to the latest version of cloudflared.

1. Log in to Cloudflare from `cloudflared`

Run cloudflared tunnel login and authenticate to your Cloudflare account. This step will generate a cert.pem file. That certificate contains a token that gives your instance of cloudflared the ability to create Named Tunnels in your account, as well as the ability to eventually point DNS records to them.

2. Create your Tunnel

You can now create a Tunnel that has a persistent name. Run cloudflared tunnel create <name> to do so. The name does not have to be a hostname. For example, you can assign a name that represents the application, the particular server, or the cloud environment where it runs.

cloudflared will create a Tunnel with the name that you give it and a UUID. This name will be associated with your account. Only DNS records in your account will proxy traffic to the connection. Additionally, the name will not be removed unless you actively delete it. The connections can stop and restart and will use the same name and UUID.

Creating a named Tunnel also generates a credentials file that is distinct from the cert.pem issued during the login. You only need the credentials file to run the Tunnel. If you do not want to create additional named Tunnels or DNS records from cloudflared, you can delete the cert.pem file to avoid leaving API tokens and certificates in your environment.

3. Configure Tunnel details

Configure your instance of cloudflared, including the URL that cloudflared will proxy traffic to in the configuration file. Alternatively, you can run the Tunnel in an ad hoc mode from the command line using the steps below.
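
As a minimal sketch (the tunnel name, credentials-file path, hostname, and port below are placeholders, and your file locations may differ), a configuration file for a named Tunnel might look like this:

tunnel: my_tunnel_name
credentials-file: /home/user/.cloudflared/<Tunnel-UUID>.json
ingress:
  - hostname: app.example.com
    service: http://localhost:8000
  - service: http_status:404

The final catch-all rule is required so that cloudflared knows how to respond to requests that do not match any hostname.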

4. Run your Tunnel

You can begin running the Tunnel with the command, cloudflared tunnel run <name> or cloudflared tunnel run <UUID> and it will start proxying traffic. If you are running the Tunnel without the cert.pem file and only the credentials file, you must use cloudflared tunnel run <UUID>.

5. Send traffic to your Tunnel

You can now decide how to send traffic to this persistent Tunnel. If you want to create a long-lived DNS record in the Cloudflare dashboard, you can point it to the Tunnel UUID subdomain in the format UUID.cfargotunnel.com. You can do the same in the Cloudflare Load Balancer panel to add this object to a load balanced pool where it will be treated as just one additional origin.
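
For example, a DNS record for a hostname you own (app.example.com here is a placeholder) would point at the Tunnel like this:

app.example.com  CNAME  <Tunnel-UUID>.cfargotunnel.com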

Alternatively, you can continue to create DNS records from cloudflared. Run the following command, cloudflared tunnel route dns <name> <hostname> or cloudflared tunnel route dns <UUID> <hostname> to associate the DNS record with the Tunnel address. You will only be able to create a DNS record from cloudflared for the zone name you selected when authenticating. Unlike the previous architecture, this DNS record will not be deleted if the Tunnel disconnects.

When this instance of cloudflared restarts, the name, UUID, and DNS record will not need to be recreated. The connection will reestablish and begin serving traffic.

[Optional] Check what Tunnels exist

You can also use this architecture to see your active Tunnels. Run cloudflared tunnel list to view the Tunnels created and their connection status. You can delete Tunnels, as well, by running cloudflared tunnel delete <name> or cloudflared tunnel delete <UUID>. To delete Tunnels, you do need the cert.pem file.

Credential and cert management

Once you have created a named Tunnel, you no longer need the cert.pem file to run that Tunnel and connect it to Cloudflare’s network. If you’re running the tunnel on a remote server or in a container, you can copy the credentials file without sharing cert.pem outside your computer.

Similarly, if you want to let another person on your team run the Tunnel, you can send them the credentials file without sharing the cert.pem file as well. The cert.pem file is still required to create additional Tunnels, list existing tunnels, manage DNS records, or delete Tunnels.
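
As a rough sketch (the hostnames and paths are placeholders, and the default credentials location may vary by install), handing a Tunnel to another machine only requires the credentials file and cloudflared:

$ scp ~/.cloudflared/<Tunnel-UUID>.json user@remote-host:~/.cloudflared/
$ ssh user@remote-host
$ cloudflared tunnel run <Tunnel-UUID>

Because cert.pem never leaves your machine, the remote host can run the Tunnel but cannot create, list, or delete Tunnels or manage DNS records.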

The credentials file contains a secret scoped to the specific Tunnel UUID which establishes a connection from cloudflared to Cloudflare’s network. cloudflared operates like a client and establishes a TLS connection from your infrastructure to Cloudflare’s edge.

What’s next?

The new Argo Tunnel architecture is available today. You’ll need cloudflared version 2020.9.3 or later to begin using these features. The latest version of cloudflared is backwards compatible with the legacy model of Argo Tunnel. Additional documentation is available here.

Zero Trust For Everyone

Post Syndicated from Abe Carryl original https://blog.cloudflare.com/teams-plans/

We launched Cloudflare for Teams to make Zero Trust security accessible for all organizations, regardless of size, scale, or resources. Starting today, we are excited to take another step on this journey by announcing our new Teams plans, and more specifically, our Cloudflare for Teams Free plan, which protects up to 50 users at no cost. To get started, sign up today.

If you’re interested in how and why we’re doing this, keep scrolling.

Our Approach to Zero Trust

Cloudflare Access is one-half of Cloudflare for Teams – a Zero Trust solution that secures inbound connections to your protected applications. Cloudflare Access works like a bouncer, checking identity at the door to all of your applications.

The other half of Cloudflare for Teams is Cloudflare Gateway which, as our clever name implies, is a Secure Web Gateway protecting all of your users’ outbound connections to the Internet. To continue with this analogy, Cloudflare Gateway is your organization’s bodyguard, securing your users as they navigate the Internet.

Together, these two solutions provide a powerful, single dashboard to protect your users, networks, and applications from malicious actors.

A Mission-Driven Solution

At Cloudflare, our mission is to help build a better Internet. That means a better Internet for everyone, regardless of size, scale, or resources. With Cloudflare for Teams, our part in this mission is to keep your team members secure from unknown threats and your applications safe from attack, so that your team can focus on your business.

Earlier this year, shortly after we launched Cloudflare for Teams, organizations suddenly had to change the way they worked. Users left offices, and the security provided by those offices, to work from home. This accelerated the pace of IT transformation from years to days, or even hours.

To alleviate that burden, we provided Cloudflare for Teams for everyone at no cost, and with no restrictions until September 1, 2020. We also offered free one-on-one onboarding to make adoption seamless, and used those sessions to improve the product for our current users as well.

Moving forward, users will continue to work from home, and applications will continue to move away from managed data centers. While our initial free program is no longer available, our team wanted to find a new way to continue helping organizations of any size adjust to this new security model that seems to be here to stay.

The New Free Plan

Today, we are launching the Cloudflare for Teams Free plan, which brings the features of enterprise Zero Trust products and Secure Web Gateways to small teams as well.

Cloudflare for Teams Free offers robust Zero Trust security features for both internal and SaaS applications, and supports integration with a myriad of social and enterprise identity providers like Azure AD or GitHub. Our Free plan also includes DNS content and security filtering for multiple network locations, complete with 24-hour log retention. By offering Cloudflare for Teams Free, our goal is to empower you to take your first step on a journey to Zero Trust with us.

What You Can Do with Teams Free

With up to 50 seats of Access and Gateway, we’ve seen that the possibilities are endless. In fact, here are some of our favorite ways users are already getting the most out of Cloudflare for Teams Free today.

  • Collaborate on your startup. Build your product without worrying about security. Use Access to protect your development environment.
  • Secure your home Wi-Fi network. Point your home Wi-Fi router’s traffic to Gateway, and set up simple filtering rules to block malware and phishing attacks.
  • Protect the backend of your personal website. Lock down your WordPress admin panel pages, and invite collaborators to work on your blog by using Access’ one-time-pin feature.
  • Safeguard a guest Wi-Fi network. Shield a retail location with Gateway by enforcing your Acceptable Use Policy on your network.

Standalone and Standard

In addition to our new Cloudflare for Teams Free plan, we’re also making it easier to continue your Zero Trust journey by offering enhanced features in our standalone Cloudflare Access or Cloudflare Gateway plans.

With standalone Access, you can easily scale up or down with as many users as you need at any time for $3 per user.

Similarly, with Gateway standalone, you can safely and securely deploy DNS or HTTP security controls from 1 up to 20 different locations for $5 per user without compromising on reliability or performance.

Last but not least, we’re excited to finally give users a way to bundle with Teams Standard, which brings together everything from Access and Gateway under one simple plan at $7 per user.

Getting Started

To get started, just navigate to our sign-up page and create an account. If you already have an active account, you can head straight to the Cloudflare for Teams dashboard, where you’ll be dropped directly into our self-guided onboarding flow. From here, you’re just three steps away from deploying Access or Gateway but, in our opinion, you can’t go wrong kicking off with either.

Cloudflare Access: now for SaaS apps, too

Post Syndicated from Sam Rhea original https://blog.cloudflare.com/cloudflare-access-for-saas/

We built Cloudflare Access™ as a tool to solve a problem we had inside of Cloudflare. We rely on a set of applications to manage and monitor our network. Some of these are popular products that we self-host, like the Atlassian suite, and others are tools we built ourselves. We deployed those applications on a private network. To reach them, you had to either connect through a secure WiFi network in a Cloudflare office, or use a VPN.

That VPN added friction to how we work. We had to dedicate part of Cloudflare’s onboarding just to teaching users how to connect. If someone received a PagerDuty alert, they had to rush to their laptop and sit and wait while the VPN connected. Team members struggled to work while mobile. New offices had to backhaul their traffic. In 2017 and early 2018, our IT team triaged hundreds of help desk tickets about VPN connection problems.

While our IT team wrestled with usability issues, our Security team decided that poking holes in our private network was too much of a risk to maintain. Once on the VPN, users almost always had too much access. We had limited visibility into what happened on the private network. We tried to segment the network, but that was error-prone.

Around that time, Google published its BeyondCorp paper that outlined a model of what has become known as Zero Trust Security. Instead of trusting any user on a private network, a Zero Trust perimeter evaluates every request and connection for user identity and other variables.

We decided to create our own implementation by building on top of Cloudflare. Despite BeyondCorp being a new concept, we had experience in this field. For nearly a decade, Cloudflare’s global network had been operating like a Zero Trust perimeter for applications on the Internet – we just didn’t call it that. For example, products like our WAF evaluated requests to public-facing applications. We could add identity as a new layer and use the same network to protect applications teams used internally.

We began moving our self-hosted applications to this new project. Users logged in with our SSO provider from any network or location, and the experience felt like any other SaaS app. Our Security team gained the control and visibility they needed, and our IT team became more productive. Specifically, our IT teams have seen ~80% reduction in the time they spent servicing VPN-related tickets, which unlocked over $100K worth of help desk efficiency annually. Later in 2018, we launched this as a product that our customers could use as well.

By shifting security to Cloudflare’s network, we could also make the perimeter smarter. We could require that users log in with a hard key, something that our identity provider couldn’t support. We could restrict connections to applications from specific countries. We added device posture integrations. Cloudflare Access became an aggregator of identity signals in this Zero Trust model.

As a result, our internal tools suddenly became more secure than the SaaS apps we used. We could only add rules to the applications we could place on Cloudflare’s reverse proxy. When users connected to popular SaaS tools, they did not pass through Cloudflare’s network. We lacked a consistent level of visibility and security across all of our applications. So did our customers.

Starting today, our team and yours can fix that. We’re excited to announce that you can now bring the Zero Trust security features of Cloudflare Access to your SaaS applications. You can protect any SaaS application that can integrate with a SAML identity provider with Cloudflare Access.

Even though that SaaS application is not deployed on Cloudflare, we can still add security rules to every login. You can begin using this feature today and, in the next couple of months, you’ll be able to ensure that all traffic to these SaaS applications connects through Cloudflare Gateway.

Standardizing and aggregating identity in Cloudflare’s network

Support for SaaS applications in Cloudflare Access starts with standardizing identity. Cloudflare Access aggregates different sources of identity: username, password, location, and device. Administrators build rules to determine what requirements a user must meet to reach an application. When users attempt to connect, Cloudflare enforces every rule in that checklist before the user ever reaches the app.

The primary rule in that checklist is user identity. Cloudflare Access is not an identity provider; instead, we source identity from SSO services like Okta, Ping Identity, OneLogin, or public apps like GitHub. When a user attempts to access a resource, we prompt them to login with the provider configured. If successful, the provider shares the user’s identity and other metadata with Cloudflare Access.

A username is just one part of a Zero Trust decision. We consider additional rules, like country restrictions or device posture via partners like Tanium or, soon, additional partners CrowdStrike and VMware Carbon Black. If the user meets all of those criteria, Cloudflare Access summarizes those variables into a standard proof of identity that our network trusts: a JSON Web Token (JWT).

A JWT is a secure, information-dense way to share information. Most importantly, JWTs follow a standard, so that different systems can trust one another. When users login to Cloudflare Access, we generate and sign a JWT that contains the decision and information about the user. We store that information in the user’s browser and treat that as proof of identity for the duration of their session.

Every JWT must consist of three Base64-URL strings: the header, the payload, and the signature.

  • The header identifies the cryptographic algorithm used to sign the token.
  • The payload consists of name-value pairs for at least one and typically multiple claims, encoded in JSON. For example, the payload can contain the identity of a user.
  • The signature allows the receiving party to confirm that the payload is authentic.

We store the identity data inside of the payload and include the following details:

  • User identity: typically the email address of the user retrieved from your identity provider.
  • Authentication domain: the domain that signs the token. For Access, we use “example.cloudflareaccess.com” where “example” is a subdomain you can configure.
  • amr: If available, the multifactor authentication method the login used, like a hard key or a TOTP code.
  • Country: The country where the user is connecting from.
  • Audience: The domain of the application you are attempting to reach.
  • Expiration: the time at which the token is no longer valid for use.
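
As a purely illustrative example (the claim names and values below are placeholders, not necessarily the exact fields Access emits), a decoded payload carrying those details might look something like this:

{
  "email": "jane@example.com",
  "iss": "https://example.cloudflareaccess.com",
  "amr": ["mfa"],
  "country": "US",
  "aud": "app.example.com",
  "exp": 1609459200
}

Cloudflare Access signs this payload, and the receiving application (or plugin) verifies the signature before trusting the identity inside.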

Some applications support JWTs natively for SSO. We can send the token to the application and the user can login. In other cases, we’ve released plugins for popular providers like Atlassian and Sentry. However, most applications lack JWT support and rely on a different standard: SAML.

Converting JWT to SAML with Cloudflare Workers

You can deploy Cloudflare’s reverse proxy to protect the applications you host, which puts Cloudflare Access in a position to add identity checks when those requests hit our edge. However, the SaaS applications you use are hosted and managed by the vendors themselves as part of the value they offer. In the same way that I cannot decide who can walk into the front door of the bakery downstairs, you can’t build rules about what requests should and shouldn’t be allowed.

When those applications support integration with your SSO provider, you do have control over the login flow. Many applications rely on a popular standard, SAML, to securely exchange identity data and user attributes between two systems. The SaaS application does not need to know the details of the identity provider’s rules.

Cloudflare Access uses that relationship to force SaaS logins through Cloudflare’s network. The application itself thinks of Cloudflare Access as the SAML identity provider. When users attempt to login, the application sends the user to login with Cloudflare Access.

That said, Cloudflare Access is not an identity provider – it’s an identity aggregator. When the user reaches Access, we will redirect them to the identity provider in the same way that we do today when users request a site that uses Cloudflare’s reverse proxy. By adding that hop through Access, though, we can layer the additional contextual rules and log the event.

Cloudflare Access: now for SaaS apps, too

We still generate a JWT for every login, providing a standard proof of identity. Integrating with SaaS applications required us to convert that JWT into a SAML assertion that we can send to the SaaS application. Cloudflare Access runs in every one of Cloudflare’s data centers around the world to improve availability and avoid slowing down users. We did not want to lose those advantages for this flow. To solve that, we turned to Cloudflare Workers.

The core login flow of Cloudflare Access already runs on Cloudflare Workers. We built support for SaaS applications by using Workers to take the JWT and convert its content into SAML assertions that are sent to the SaaS application. The application thinks that Cloudflare Access is the identity provider, even though we’re just aggregating identity signals from your SSO provider and other sources into the JWT, and sending that summary to the app via SAML.
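
The conversion itself runs in Cloudflare Workers at our edge; the Go sketch below is only a conceptual illustration of the mapping step, taking identity fields from an already-verified token and rendering them as SAML-style XML attributes. The struct shapes, attribute names, and claim names are simplified assumptions and fall well short of the full SAML 2.0 assertion schema.

package main

import (
    "encoding/xml"
    "fmt"
    "time"
)

// Attribute and AttributeStatement are heavily simplified, illustrative shapes.
// A real SAML 2.0 assertion also carries an issuer, subject, conditions, and a
// signature, all defined by the SAML schema.
type Attribute struct {
    XMLName xml.Name `xml:"Attribute"`
    Name    string   `xml:"Name,attr"`
    Value   string   `xml:"AttributeValue"`
}

type AttributeStatement struct {
    XMLName    xml.Name    `xml:"AttributeStatement"`
    Attributes []Attribute `xml:"Attribute"`
}

// claims mirrors the identity fields described earlier; the field and attribute
// names are stand-ins, not the exact names used in production.
type claims struct {
    Email   string
    Country string
    AMR     string
    Expires time.Time
}

func toAttributeStatement(c claims) AttributeStatement {
    return AttributeStatement{
        Attributes: []Attribute{
            {Name: "email", Value: c.Email},
            {Name: "country", Value: c.Country},
            {Name: "amr", Value: c.AMR},
            {Name: "expires", Value: c.Expires.UTC().Format(time.RFC3339)},
        },
    }
}

func main() {
    stmt := toAttributeStatement(claims{
        Email:   "user@example.com",
        Country: "PT",
        AMR:     "hwk",
        Expires: time.Now().Add(24 * time.Hour),
    })
    out, err := xml.MarshalIndent(stmt, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}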

Integrate with Gateway for comprehensive logging (coming soon)

Cloudflare Gateway keeps your users and data safe from threats on the Internet by filtering Internet-bound connections that leave laptops and offices. Gateway gives administrators the ability to block, allow, or log every connection and request to SaaS applications.

However, users are connecting from personal devices and home WiFi networks, potentially bypassing Internet security filtering available on corporate networks. If users have their password and MFA token, they can bypass security requirements and reach into SaaS applications from their own, unprotected devices at home.

To ensure traffic to your SaaS apps only connects over Gateway-protected devices, Cloudflare Access will add a new rule type that requires Gateway when users login to your SaaS applications. Once enabled, users will only be able to connect to your SaaS applications when they use Cloudflare Gateway. Gateway will log those connections and provide visibility into every action within SaaS apps and the Internet.

Every identity provider is now capable of SAML SSO

Identity providers come in two flavors and you probably use both every day. One type is purpose-built to be an identity provider, and the other accidentally became one. With this release, Cloudflare Access can convert either into a SAML-compliant SSO option.

Corporate identity providers, like Okta or Azure AD, manage your business identity. Your IT department creates and maintains the account. They can integrate it with SaaS Applications for SSO.

The second type of login option consists of SaaS providers that began as consumer applications and evolved into public identity providers. LinkedIn, GitHub, and Google required users to create accounts in their applications for networking, coding, or email.

Over the last decade, other applications began to trust those public identity provider logins. You could use your Google account to log into a news reader and your GitHub account to authenticate to DigitalOcean. Services like Google and Facebook became SSO options for everyone. However, most corporate applications only supported SSO integration over SAML, a standard that public identity providers do not offer. To rely on SSO as a team, you still needed a corporate identity provider.

Cloudflare Access converts a user login from any identity provider into a JWT. With this release, we also generate a standard SAML assertion. Your team can now use the SAML SSO features of a corporate identity provider with public providers like LinkedIn or GitHub.

Multi-SSO meets SaaS applications

We describe Cloudflare Access as a Multi-SSO service because you can integrate multiple identity providers, and their SSO flows, into Cloudflare’s Zero Trust network. That same capability now extends to integrating multiple identity providers with a single SaaS application.

Most SaaS applications will only integrate with a single identity provider, limiting your team to a single option. We know that our customers work with partners, contractors, or acquisitions, which can make it difficult to standardize around a single identity option for SaaS logins.

Cloudflare Access can connect to multiple identity providers simultaneously, including multiple instances of the same provider. When users are prompted to login, they can choose the option that their particular team uses.

Cloudflare Access: now for SaaS apps, too

We’ve taken that ability and extended it into the Access for SaaS feature. Access generates a consistent identity from any provider, which we can now extend for SSO purposes to a SaaS application. Even if the application only supports a single identity provider, you can still integrate Cloudflare Access and merge identities across multiple sources. Now, team members who use your Okta instance and contractors who use LinkedIn can both SSO into your Atlassian suite.

All of your apps in one place

Cloudflare Access released the Access App Launch as a single destination for all of your internal applications. Your team members visit a URL that is unique to your organization and the App Launch displays all of the applications they can reach. The feature requires no additional administrative configuration; Cloudflare Access reads the user’s JWT and returns only the applications they are allowed to reach.

Cloudflare Access: now for SaaS apps, too

That experience now extends to all applications in your organization. When you integrate SaaS applications with Cloudflare Access, your users will be able to discover them in the App Launch. Like the flow for internal applications, this requires no additional configuration.

How to get started

To get started, you’ll need a Cloudflare Access account and a SaaS application that supports SAML SSO. Navigate to the Cloudflare for Teams dashboard and choose the “SaaS” application option to start integrating your applications. Cloudflare Access will walk through the steps to configure the application to trust Cloudflare Access as the SSO option.

Cloudflare Access: now for SaaS apps, too

Do you have an application that needs additional configuration? Please let us know.

Protect SaaS applications with Cloudflare for Teams today

Cloudflare Access for SaaS is available to all Cloudflare for Teams customers, including organizations on the free plan. Sign up for a Cloudflare for Teams account and follow the steps in the documentation to get started.

We will begin expanding the Gateway beta program to integrate Gateway’s logging and web filtering with the Access for SaaS feature before the end of the year.

What is Cloudflare One?

Post Syndicated from Rustam Lalkaka original https://blog.cloudflare.com/cloudflare-one/

What is Cloudflare One?

Running a secure enterprise network is really difficult. Employees spread all over the world work from home. Applications are run from data centers, hosted in public cloud, and delivered as services. Persistent and motivated attackers exploit any vulnerability.

Enterprises used to build networks that resembled a castle-and-moat. The walls and moat kept attackers out and data in. Team members entered over a drawbridge and tended to stay inside the walls. Trust folks on the inside of the castle to do the right thing, and deploy whatever you need in the relative tranquility of your secure network perimeter.

The Internet, SaaS, and “the cloud” threw a wrench in that plan. Today, more of the workloads in a modern enterprise run outside the castle than inside. So why are enterprises still spending money building more complicated and more ineffective moats?

Today, we’re excited to share Cloudflare One™, our vision to tackle the intractable job of corporate security and networking.

What is Cloudflare One?

Cloudflare One combines networking products that enable employees to do their best work, no matter where they are, with consistent security controls deployed globally.

Starting today, you can begin replacing traffic backhauls to security appliances with Cloudflare WARP and Gateway to filter outbound Internet traffic. For your office networks, we plan to bring next-generation firewall capabilities to Magic Transit with Magic Firewall to let you get rid of your top-of-shelf firewall appliances.

With multiple on-ramps to the Internet through Cloudflare, and the elimination of backhauled traffic, we plan to make it simple and cost-effective to manage that routing compared to MPLS and SD-WAN models. Cloudflare Magic WAN will provide a control plane for how your traffic routes through our network.

You can use Cloudflare One today to replace the other function of your VPN: putting users on a private network for access control. Cloudflare Access delivers Zero Trust controls that can replace private network security models. Later this week, we’ll announce how you can extend Access to any application – including SaaS applications. We’ll also preview our browser isolation technology to keep the endpoints that connect to those applications safe from malware.

Finally, the products in Cloudflare One focus on giving your team the logs and tools to both understand and then remediate issues. As part of our Gateway filtering launch this week we’re including logs that provide visibility into the traffic leaving your organization. We’ll be sharing how those logs get smarter later this week with a new Intrusion Detection System that detects and stops intrusion attempts.

What is Cloudflare One?

Many of those components are available today, some new features are arriving this week, and other pieces will be launching soon. All together, we’re excited to share this vision and for the future of the corporate network.

Problems in enterprise networking and security

The demands placed on a corporate network have changed dramatically. IT has gone from a back-office function to mission critical. In parallel with networks becoming more integral, users spread out from offices to work from home. Applications left the datacenter and are now being run out of multiple clouds or are being delivered by vendors directly over the Internet.

Direct network paths became hairpin turns

Employees sitting inside of an office could connect over a private network to applications running in a datacenter nearby. When team members left the office, they could use a VPN to sneak back onto the network from outside the walls. Branch offices hopped on that same network over expensive MPLS links.

When applications left the data center and users left their offices, organizations responded by trying to force that scattered world into the same castle-and-moat model. Companies purchased more VPN licenses and replaced MPLS links with difficult SD-WAN deployments. Networks became more complex in an attempt to mimic an older model of networking when in reality the Internet had become the new corporate network.

Defense-in-depth splintered

Attackers looking to compromise corporate networks have a multitude of tools at their disposal, and may execute surgical malware strikes, throw a volumetric kitchen sink at your network, or any number of things in between. Traditionally, defense against each class of attack was provided by a separate, specialized piece of hardware running in a datacenter.

Security controls used to be relatively easy when every user and every application sat in the same place. When employees left offices and workloads left data centers, the same security controls struggled to follow. Companies deployed a patchwork of point solutions, attempting to rebuild their topside firewall appliances across hybrid and dynamic environments.

High visibility required high effort

The move to a patchwork model sacrificed more than just defense-in-depth — companies lost visibility into what was happening in their networks and applications. To get it back, they purchased expensive data ingestion, analysis, storage, and analytics tools.

We hear from customers that, with so many point solutions in play, capturing and standardizing logs has become one of their biggest hurdles. Increasing regulatory and compliance pressures place more emphasis on data retention and analysis. Splintered security solutions become a data management nightmare.

Fixing issues relied on best guesses

Without visibility into this new networking model, security teams had to guess at what could go wrong. Organizations who wanted to adopt an “assume breach” model struggled to determine what kind of breach could even occur, so they threw every possible solution at the problem.

We talk to enterprises who purchase new scanning and filtering services, delivered in virtual appliances, for problems they are unsure they have. These teams attempt to remediate every possible event manually, because they lack visibility, rather than targeting specific events and adapting the security model.

How does Cloudflare One fit?

Over the last several years, we’ve been assembling the components of Cloudflare One. We launched individual products to target some of these problems one-at-a-time. We’re excited to share our vision for how they all fit together in Cloudflare One.

Flexible data planes

Cloudflare launched as a reverse proxy. Customers put their Internet-facing properties on our network and their audience connected to those specific destinations through our network. Cloudflare One represents years of launches that allow our network to process any type of traffic flowing in either the “reverse” or “forward” direction.

In 2019, we launched Cloudflare WARP — a mobile application that kept Internet-bound traffic private with an encrypted connection to our network while also making it faster and more reliable. We’re now packaging that same technology into an enterprise version launching this week to connect roaming employees to Cloudflare Gateway.

Your data centers and offices should have the same advantage. We launched Magic Transit last year to secure your networks from IP-layer attacks. Our initial focus with Magic Transit has been delivering best-in-class DDoS mitigation to on-prem networks. DDoS attacks are a persistent thorn in network operators’ sides, and Magic Transit effectively defuses their sting without forcing performance compromises. That rock-solid DDoS mitigation is the perfect platform on which to build higher level security functions that apply to the same traffic already flowing across our network.

Earlier this year, we expanded that model when we launched Cloudflare Network Interconnect (CNI) to allow our customers to interconnect branch offices and data centers directly with Cloudflare. As part of Cloudflare One, we’ll apply outbound filtering to that same connection.

Cloudflare One should not just help your team move to the Internet as a corporate network, it should be faster than the Internet. Our network is carrier-agnostic, exceptionally well-connected and peered, and delivers the same set of services globally. In each of these on-ramps, we’re adding smarter routing based on our Argo Smart Routing technology, which has been shown to reduce latency by 30% or more in the real-world. Security + Performance, because they’re better together.

A single, unified control plane

When users connect to the Internet from branch offices and devices, they skip the firewall appliances that used to live in headquarters altogether. To keep pace, enterprises need a way to secure traffic that no longer lives entirely within their own network. Cloudflare One applies standard security controls to all traffic – regardless of how that connection starts or where in the network stack it lives.

Cloudflare Access starts by introducing identity into Cloudflare’s network. Teams apply filters based on identity and context to both inbound and outbound connections. Every login, request, and response proxies through Cloudflare’s network regardless of the location of the server or user. The scale and distribution of our network let us filter and log enterprise traffic without compromising performance.

Cloudflare Gateway keeps connections to the rest of the Internet safe. Gateway inspects traffic leaving devices and networks for threats and data loss events that hide inside of connections at the application layer. Launching soon, Gateway will bring that same level of control lower in the stack to the transport layer.

You should have the same level of control over how your networks send traffic. We’re excited to announce Magic Firewall, a next-generation firewall for all traffic leaving your offices and data centers. With Gateway and Magic Firewall, you can build a rule once and run it everywhere, or tailor rules to specific use cases in a single control plane.

We know some attacks can’t be filtered because they launch before filters can be built to stop them. Cloudflare Browser, our isolated browser technology, gives your team a bulletproof pane of glass against threats that can evade known filters. Later this week, we’ll invite customers to sign up to join the beta to browse the Internet on Cloudflare’s edge without the risk of code leaping out of the browser to infect an endpoint.

Finally, the PKI infrastructure that secures your network should be modern and simpler to manage. We heard from customers who described certificate management as one of the core problems of moving to a better model of security. Cloudflare works with, not against, modern encryption standards like TLS 1.3. Cloudflare made it easy to add encryption to your sites on the Internet with one click. We’re going to bring that ease-of-management to the network functions you run on Cloudflare One.

One place to get your logs, one location for all of your security analysis

Cloudflare’s network serves 18 million HTTP requests per second on average. We’ve built logging pipelines that make it possible for some of the largest Internet properties in the world to capture and analyze their logs at scale. Cloudflare One builds on that same capability.

Cloudflare Access and Gateway capture every request, inbound or outbound, without any server-side code changes or advanced client-side configuration. Your team can export those logs to the SIEM provider of your choice with our Cloudflare Logpush service – the same pipeline that exports HTTP request events at scale for public sites. Magic Transit expands that logging capability to entire networks and offices to ensure you never lose visibility from any location.

We’re going beyond just logging events. Available today for your websites, Cloudflare Web Analytics converts logs into insights. We plan to keep expanding that visibility into how your network operates, as well. Just as Cloudflare has replaced the “band-aid boxes” that performed disparate network functions and unified them into a cohesive, adaptable edge, we intend to do the same for the fragmented, hard to use, and expensive security analytics ecosystem. More to come on this soon.

Smarter, faster remediation

Data and analytics should surface events that a team can remediate. Log systems that lead to one-click fixes can be powerful tools, but we want to make that remediation automatic.

Launching into a closed preview later this week, Cloudflare Intrusion Detection System (IDS) will proactively scan your network for anomalous events and recommend actions or, better yet, take actions for you to remediate problems. We plan to bring that same proactive scanning and remediation approach to Cloudflare Access and Cloudflare Gateway.

Run your network on our globally scaled network

Over 25 million Internet properties rely on Cloudflare’s network to reach their audiences. More than 10% of all websites connect through our reverse proxy, including 16% of the Fortune 1000. Cloudflare accelerates traffic for huge chunks of the Internet by delivering services from datacenters around the world.

We deliver Cloudflare One from those same data centers. And critically, every datacenter we operate delivers the same set of services, whether that is Cloudflare Access, WARP, Magic Transit, or our WAF. As an example, when your employees connect through Cloudflare WARP to one of our data centers, there is a real chance they never have to leave our network or that data center to reach the site or data they need. As a result, their entire Internet experience becomes extraordinarily fast, no matter where they are in the world.

We expect that performance bonus to become even more meaningful as browsing moves to Cloudflare’s edge with Cloudflare Browser. The isolated browsers running in Cloudflare’s data centers can request content that sits just centimeters away. Even further, as more web properties rely on Cloudflare Workers to power their applications, entire workflows can stay inside of a data center within 100 ms of your employees.

What’s next?

While many of these features are available today, we’re going to be launching several new features over the next several days as part of Cloudflare’s Zero Trust week. Stay tuned for announcements each day this week that add new pieces to the Cloudflare One featureset.

What is Cloudflare One?

What Happens When The Whole World Goes Remote? Not To Worry, We Were Built For This

Post Syndicated from Dina Aluzri original https://blog.cloudflare.com/what-happens-when-the-whole-world-goes-remote-not-to-worry-we-were-built-for-this/

What Happens When The Whole World Goes Remote? Not To Worry, We Were Built For This

In March, governments all over the world issued stay-at-home orders, causing a mass migration to teleworking. Alongside many of our partners, Cloudflare launched free products and services supported by onboarding sessions to help our clients secure and accelerate their remote work environments. Over the past few months, a dedicated team of specialists met with hundreds of organizations – from tiny startups, to massive corporations – to help them extend better security and performance to a suddenly-remote workforce.

Most companies we heard from had a VPN in place, but it wasn’t set up to accommodate a full-on remote work environment. When employees began working from home, they found that the VPN was getting overloaded with requests, causing performance lags.

While many organizations had bought more VPN licenses to allow employees to connect to their tools, they found that just having licenses wasn’t enough: they needed to reduce the amount of traffic flowing through their VPN by taking select applications off of the private network.

We Were Built For This

My name is Dina and I am a Customer Success Manager (CSM) in our San Francisco office. I am responsible for ensuring the success of Cloudflare’s Enterprise customers and managing all of their post-sales experience. As CSMs, we bring strong product knowledge, best practices, and a high degree of empathy to ensure our customers’ satisfaction with Cloudflare’s services. We do this by delivering the value of our products and services to our customers’ businesses through regular check-ins and quarterly reviews.

One customer I work with, a service that connects physicians to patients over the Internet, was forced to respond to a requirement for the entire company to work from home within 48 hours. This company could not move their entire workforce to a VPN without overwhelming their appliances and IT Help Desk. Any interruption to business continuity could threaten the provider’s ability to deliver services to customers during a time of peak demand, and especially during a time when doctors’ office environments felt unsafe.

Cloudflare Access had been a product of interest for my customer for a while now, and the adoption of the product couldn’t have come at a better time. As an enterprise customer, they enlisted the help of our subject matter experts to troubleshoot the sudden WFH situation. With the help of the Cloudflare CSM, solution engineers, and product managers, the organization was able to migrate internally-hosted applications to Cloudflare Access in less than 15 minutes.

Hundreds of team members began using Cloudflare Access within the next 24 hours. We were able to operationalize a critical call center app without a VPN at the click of a button. In the words of the customer, “…we should have done this a long time ago. It’s beautiful, this is perfect.” Virtually overnight, over 700 users were immediately signed on, and more are being added daily as the company grows and working from home remains a necessity. It could even become a permanent way of working for many organizations around the world.

It’s More Than A Solution

At Cloudflare, our customers are our top priority. Our constant innovation and transparency maintain our customers’ trust and support. This particular telemedicine company has been a Cloudflare customer for 5 years, not only because of the technology, but because of those exact priorities we uphold.

The customer’s journey follows a pattern I see often in my role: this customer initially used Cloudflare for Infrastructure products to protect external sites – their website and customer-facing applications. Because they leverage our solutions in this way, the domain of their call center app was already IP whitelisted on Cloudflare via Zone lockdown, one of our Enterprise features. With their deep knowledge and experience with Cloudflare, they could easily apply the same benefits they were getting from WAF and CDN to their internal employees.

Throughout their tenure with Cloudflare, this customer has constantly interacted with us through events (when they were in-person 😔), webinars, email communication, feature request reviews, and frequent catch-ups over the phone. These conversations provided regular opportunities for the customer to expand their knowledge of Cloudflare, build trust, and grow their usage of Cloudflare’s services. The customer first learned about our Access technology at Cloudflare Connect in NYC last year.

The event proved to be the perfect forum for them to interact with the Cloudflare community, discover new technology, and discuss and brainstorm with onsite technical experts. The customer had been using Cloudflare for our core security and performance services for a long time. They had been receiving value from our other services, and when COVID-19 hit, Cloudflare was there as a trusted advisor and partner to easily tackle this inevitable situation. Everything they’ve learned through their constant interaction with the Cloudflare team, and at Cloudflare events, finally came to fruition.

I tell this customer’s story to raise awareness and encourage our customers – and really anyone interested – to stay up to date with the constant innovation here at Cloudflare. We continue to host and facilitate events and webinars. This allows you, as our customer, to learn and derive value from our technology, so your business is further protected and optimized. After a webinar and one meeting with us, you can transition your whole work environment to a virtual one.

Listen in on webinars, attend events, and read up on our blog, developer docs, and support pages, so you can easily call us, and with a click of a button turn on the next feature that will enhance your site and work experience.

Interested in Learning More?

Join Cloudflare’s Product Managers each month to hear the latest highlights, including this month’s feature on Cloudflare for Teams and Cloudflare Workers.  

Need help with your team working remotely? Cloudflare for Teams is free to try for up to 50 users. Learn more.

Two clicks to add region-based Zero Trust compliance

Post Syndicated from Sam Rhea original https://blog.cloudflare.com/two-clicks-to-enable-regional-zero-trust-compliance/

Two clicks to add region-based Zero Trust compliance

Your team members are probably not just working from home – they may be working from different regions or countries. The flexibility of remote work gives employees a chance to work from the towns where they grew up or countries they always wanted to visit. However, that distribution also presents compliance challenges.

Depending on your industry, keeping data inside of certain regions can be a compliance or regulatory requirement. You might require employees to connect from certain countries or exclude entire countries altogether from your corporate systems.

When we worked in physical offices, keeping data inside of a country was easy. All of your users connecting to an application from that office were, of course, in that country. Remote work changed that and teams had to scramble to find a way to keep people productive from anywhere, which often led to sacrifices in terms of compliance. Starting today, you can make geography-based compliance easy again in Cloudflare Access with just two clicks.

You can now build rules that require employees to connect from certain countries. You can also add rules that block team members from connecting from other countries. This feature works with any identity provider configured and requires no other changes for your users or administrators.

What is Cloudflare Access?

Cloudflare Access secures applications by applying Zero Trust enforcement to every request. Rather than trusting anyone on a private network, Access checks for identity any time someone attempts to reach an application. With Cloudflare’s global network, that check takes place in data centers in over 200 cities around the world to avoid compromising performance.

Behind the scenes, administrators build rules to decide who should be able to reach the tools protected by Access. In turn, when users need to connect to those tools, they are prompted to authenticate with one of the identity provider options. Cloudflare Access checks their login against the list of allowed users and, if permitted, allows the request to proceed.

Two clicks to add region-based Zero Trust compliance

Cloudflare Access can check more than just their username. As a Zero Trust platform, Access aggregates multiple sources of signal about a user and surfaces those to the administrator. Some signals include if the user authenticated with a mutual TLS client certificate or hard key. However, some organizations also have compliance requirements that center around region, in addition to multifactor authentication.

Allow some countries, exclude others

You can build Cloudflare Access rules to be as simple as only allowing team members with @team.com email addresses. However, usernames and passwords alone are not always sufficient. Depending on where you operate, or where you need to operate, you can use Cloudflare Access to layer country-specific rules on top of your identity provider workflows.

With this release, you can now add rules that require users to connect from certain countries or restrict logins from other countries. For example, you can require that users only connect from Portugal.

Two clicks to add region-based Zero Trust compliance

You can also exclude countries altogether. Cloudflare does not have an office in Costa Rica, a place I know many of us would love to visit. If a member of the team was on a beach vacation there and I wanted to make sure they really unplugged from work, we could add a rule to block logins to our applications from Costa Rica.

Two clicks to add region-based Zero Trust compliance

Some applications might not need country-specific requirements. Cloudflare Access rules can be configured on an application-by-application basis. You can add rules about country connections to specific applications that contain sensitive information, while limiting others to just identity.

Audit logins by country and user

Cloudflare Access captures every request a user makes to an internal application, without the need for any code changes. Your organization can export these logs to a third-party storage or SIEM solution to audit the country of origin for each user request. With that data, your compliance and security teams can quickly audit where your corporate devices are operating without the need to deploy additional client-side software.
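
As a rough sketch of what that audit could look like once logs land in your storage or SIEM, the snippet below tallies logins per country from a JSON-lines export read on standard input. The JSON field names are assumptions for illustration; the fields in a real export depend on how the logging integration is configured.

package main

import (
    "bufio"
    "encoding/json"
    "fmt"
    "os"
)

// logLine holds the fields we care about from each exported record. The JSON
// field names here are assumptions, not the exact schema of a real export.
type logLine struct {
    UserEmail string `json:"user_email"`
    Country   string `json:"country"`
}

func main() {
    counts := map[string]int{}
    scanner := bufio.NewScanner(os.Stdin)
    for scanner.Scan() {
        var entry logLine
        if err := json.Unmarshal(scanner.Bytes(), &entry); err != nil {
            continue // skip malformed lines
        }
        counts[entry.Country]++
    }
    if err := scanner.Err(); err != nil {
        panic(err)
    }
    for country, n := range counts {
        fmt.Printf("%s\t%d logins\n", country, n)
    }
}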

Layer with other Zero Trust rules

Zero trust security starts with a username. Administrators build rules to determine which users can reach specific applications. Cloudflare Access integrates with your team’s identity provider, or even multiple identity providers, to make those username-based decisions at the edge of our network.

However, identity consists of more than just a username. Cloudflare Access can aggregate multiple sources of signal in Cloudflare’s network. Access can use that information to make a decision about identity in our network – long before that request ever reaches your infrastructure.

You can combine user rules with mutual TLS requirements, or device posture checks, and even force logins to always use a hard key. All of these zero-trust rules run inline with Cloudflare’s existing security features, like our WAF and DDoS mitigation, to add layers of security to every request. The Cloudflare network gives your team a zero-trust platform to apply all of the data we can gather about a request to determine whether or not it should be allowed.

The country rules we’re announcing today become another layer in that zero trust model. Like other sources of signal, you can combine these rules to build a comprehensive policy tailored to your organization’s compliance or security needs. For example, you can build a rule that only allows users to login to your application when they connect from Germany and use a physical hard key.

Two clicks to add region-based Zero Trust compliance

How to get started

To get started, navigate to an application you have added to Cloudflare Access or create a new one. Cloudflare Access policies consist of actions that can allow, block, or bypass requests based on the criteria defined. Access follows policies in order of precedence from top to bottom in the UI.

Inside of a policy, you can define the criteria with three types of operators; a short sketch of how they compose follows this list:

  • Include: Include rules function like OR operators. Users must meet at least one criterion in an Include rule. For example, an Include rule can be constructed to allow anyone with an @cloudflare.com email address, or the specific address [email protected], to connect.
  • Require: Require rules function like AND operators. Users must meet all Require rule criteria.
  • Exclude: Exclude rules function like NOT operators. Users must not meet the criterion of an Exclude rule.
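
Here is a minimal sketch of how those three operators compose, assuming simplified login fields. It illustrates the OR/AND/NOT semantics described above, not the evaluation engine Access actually runs.

package main

import (
    "fmt"
    "strings"
)

// A rule reports whether a single criterion matches a login, for example
// "email ends in @example.com" or "country is PT". The login fields are
// simplified stand-ins for the signals described above.
type rule func(login map[string]string) bool

// include behaves like OR: at least one rule must match.
func include(login map[string]string, rules ...rule) bool {
    for _, r := range rules {
        if r(login) {
            return true
        }
    }
    return false
}

// require behaves like AND: every rule must match.
func require(login map[string]string, rules ...rule) bool {
    for _, r := range rules {
        if !r(login) {
            return false
        }
    }
    return true
}

// exclude behaves like NOT: no rule may match.
func exclude(login map[string]string, rules ...rule) bool {
    for _, r := range rules {
        if r(login) {
            return false
        }
    }
    return true
}

func main() {
    login := map[string]string{"email": "user@example.com", "country": "PT"}

    companyEmail := func(l map[string]string) bool { return strings.HasSuffix(l["email"], "@example.com") }
    fromPortugal := func(l map[string]string) bool { return l["country"] == "PT" }
    fromBlocked := func(l map[string]string) bool { return l["country"] == "XX" }

    allowed := include(login, companyEmail) &&
        require(login, fromPortugal) &&
        exclude(login, fromBlocked)
    fmt.Println("allowed:", allowed) // allowed: true
}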

To require that users connect from a particular country, create an Allow policy that includes your users’ emails or identity provider group. Within that Allow policy, add a Require rule and choose the country that will be required. If you want to create a rule that requires multiple countries, you can add them into an Access Group.

Two clicks to add region-based Zero Trust compliance

You can then add that group into the Require rule.

Two clicks to add region-based Zero Trust compliance

What’s next?

Cloudflare Access, part of Cloudflare for Teams, is available today. The country requirement rule is available in all plans. You can follow the documentation here to add the additional rule.

How Argo Tunnel engineering uses Argo Tunnel

Post Syndicated from Chung-Ting Huang original https://blog.cloudflare.com/how-argo-tunnel-engineering-uses-argo-tunnel/

How Argo Tunnel engineering uses Argo Tunnel

Whether you are managing a fleet of machines or sharing a private site from your localhost, Argo Tunnel is here to help. On the Argo Tunnel team we help make origins accessible from the Internet in a secure and seamless manner. We also care deeply about productivity and developer experience for the team, so naturally we want to make sure we have a development environment that is reliable, easy to set up and fast to iterate on.

A brief history of our development environment (dev-stack)

Docker compose

When our development team was still small, we used a docker-compose file to orchestrate the services needed to develop Argo Tunnel. There was no native support for hot reload, so every time an engineer made a change, they had to restart their dev-stack.

We could hack around it to hot reload with docker-compose, but when that failed, we had to waste time debugging the internals of Docker. As the team grew, we realized we needed to invest in improving our dev stack.

At the same time Cloudflare was in the process of migrating from Marathon to kubernetes (k8s). We set out to find a tool that could detect changes in source code and automatically upgrade pods with new images.

Skaffold + Minikube

Initially, Skaffold seemed to match the criteria. It watches for changes in source code, builds new images, and deploys applications onto any k8s cluster. Following Skaffold’s tutorial, we picked minikube as the local k8s, but together they didn’t meet our expectations. Port forwarding wasn’t stable; we got frequent connection refusals and timeouts.

In addition, iteration time didn’t improve, because spinning up minikube takes a long time, and it doesn’t use the host’s Docker registry, so it can’t take advantage of image caching. At this point we considered reverting to docker-compose, but the k8s ecosystem is booming, so we did some more research.

Tilt + Docker for mac k8s

Eventually we found a great blog post from Tilt comparing different options for local k8s, and it seemed to be solving the exact problem we were having. Tilt is a tool that makes local development on k8s easier. It detects changes in local sources and updates your deployment accordingly.

In addition, it supports live updates without having to rebuild containers, a process that used to take around 20 minutes. With live updates, we can copy the newest source into the container, run cargo build within the container, and restart the service without building a new image. Following Tilt’s blog post, we switched to Docker for Mac’s built-in k8s. Combining Tilt and Docker for Mac k8s, we finally have a development environment that meets our needs.

Rust services that could take 20 minutes to rebuild now take less than a minute.

Collaborating with a distributed team

We reached a much happier state with our dev-stack, but one problem remained: we needed a way to share it. As our teams became distributed with people in Austin, Lisbon and Seattle, we needed better ways to help each other.

One day, I was helping our newest member understand an error observed in cloudflared, Argo Tunnel’s command line interface (CLI) client. I knew the error could either originate from the backend service or a mock API gateway service, but I couldn’t tell for sure without looking at logs.

To get them, I had to ask our new teammate to manually send me the logs of the two services. By the time I discovered the source of the error, reviewed the deployment manifest, and determined the error was caused by a secret set as an empty string, two full hours had elapsed!

I could have solved this in minutes if I had remote access to her development environment. That’s exactly what Argo Tunnel can do! Argo Tunnel provides remote access to development environments by creating secure, outbound-only connections from a resource to Cloudflare’s edge network, exposing it to the Internet without opening inbound ports. That model helps protect servers and resources from attacks that target exposed IP addresses.

I can use Argo Tunnel to expose a remote dev environment, but the information stored is sensitive. Once exposed, we needed a way to prevent users from reaching it unless they are an authenticated member of my team. Cloudflare Access solves that challenge. Access sits in front of the hostname powered by Argo Tunnel and checks for identity on every request. I can combine both services to share the dev-stack details with the rest of the team in a secure deployment.

The built-in k8s dashboard gives a great overview of the dev-stack, with the list of pods, deployments, services, config maps, secrets, etc. It also allows us to inspect pod logs and exec into a container. By default, it is secured by a token that changes every time the service restarts. To avoid the hassle of distributing the service token to everyone on the team, we wrote a simple reverse proxy that injects the service token in the authorization header before forwarding requests to the dashboard service.

Then we run Argo Tunnel as a sidecar to this reverse proxy, so it is accessible from the Internet. Finally, to make sure no random person can see our dashboard, we put an Access policy that only allows team members to access the hostname.

The request flow is eyeball -> Access -> Argo Tunnel -> reverse proxy -> dashboard service

How Argo Tunnel engineering uses Argo Tunnel

Working example

Your team can use the same model to develop remotely. Here’s how to get started.

  1. Start a local k8s cluster. https://docs.tilt.dev/choosing_clusters.html offers great advice on choosing a local cluster based on your OS and experience with k8s
How Argo Tunnel engineering uses Argo Tunnel

2. Enable dashboard service:

How Argo Tunnel engineering uses Argo Tunnel

3. Create a reverse proxy that will inject the service token of the kubernetes-dashboard service account into the Authorization header before forwarding requests to the kubernetes dashboard service:

package main
 
import (
   "crypto/tls"
   "fmt"
   "net/http"
   "net/http/httputil"
   "net/url"
   "os"
)
 
func main() {
   config, err := loadConfigFromEnv()
   if err != nil {
       panic(err)
   }
   reverseProxy := httputil.NewSingleHostReverseProxy(config.proxyURL)
   // The default Director builds the request URL. We want our custom Director to add Authorization, in
   // addition to building the URL
   singleHostDirector := reverseProxy.Director
   reverseProxy.Director = func(r *http.Request) {
       singleHostDirector(r)
       r.Header.Add("Authorization", fmt.Sprintf("Bearer %s", config.token))
       fmt.Println("request header", r.Header)
       fmt.Println("request host", r.Host)
       fmt.Println("request URL", r.URL)
   }
   reverseProxy.Transport = &http.Transport{
       TLSClientConfig: &tls.Config{
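            // The dashboard typically serves a self-signed certificate inside the
            // cluster, so certificate verification is skipped for this internal hop only.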
           InsecureSkipVerify: true,
       },
   }
   server := http.Server{
       Addr:    config.listenAddr,
       Handler: reverseProxy,
   }
    if err := server.ListenAndServe(); err != nil {
        panic(err)
    }
}
 
type config struct {
   listenAddr string
   proxyURL   *url.URL
   token      string
}
 
func loadConfigFromEnv() (*config, error) {
   listenAddr, err := requireEnv("LISTEN_ADDRESS")
   if err != nil {
       return nil, err
   }
   proxyURLStr, err := requireEnv("DASHBOARD_PROXY_URL")
   if err != nil {
       return nil, err
   }
   proxyURL, err := url.Parse(proxyURLStr)
   if err != nil {
       return nil, err
   }
   token, err := requireEnv("DASHBOARD_TOKEN")
   if err != nil {
       return nil, err
   }
   return &config{
       listenAddr: listenAddr,
       proxyURL:   proxyURL,
       token:      token,
   }, nil
}
 
func requireEnv(key string) (string, error) {
   result := os.Getenv(key)
   if result == "" {
       return "", fmt.Errorf("%v not provided", key)
   }
   return result, nil
}

4. Create an Argo Tunnel sidecar to expose this reverse proxy

apiVersion: apps/v1
kind: Deployment
metadata:
 name: dashboard-auth-proxy
 namespace: kubernetes-dashboard
 labels:
   app: dashboard-auth-proxy
spec:
 replicas: 1
 selector:
   matchLabels:
     app: dashboard-auth-proxy
 template:
   metadata:
     labels:
       app: dashboard-auth-proxy
   spec:
     containers:
       - name: dashboard-tunnel
         # Image from https://hub.docker.com/r/cloudflare/cloudflared
         image: cloudflare/cloudflared:2020.8.0
         command: ["cloudflared", "tunnel"]
         ports:
           - containerPort: 5000
         env:
           - name: TUNNEL_URL
             value: "http://localhost:8000"
           - name: NO_AUTOUPDATE
             value: "true"
           - name: TUNNEL_METRICS
             value: "localhost:5000"
       # dashboard-proxy is a proxy that injects the dashboard token into Authorization header before forwarding
       # the request to dashboard_proxy service
       - name: dashboard-auth-proxy
         image: dashboard-auth-proxy
         ports:
           - containerPort: 8000
         env:
           - name: LISTEN_ADDRESS
             value: localhost:8000
           - name: DASHBOARD_PROXY_URL
             value: https://kubernetes-dashboard
           - name: DASHBOARD_TOKEN
             valueFrom:
               secretKeyRef:
                 name: ${TOKEN_NAME}
                 key: token

5. Find out the URL to access your dashboard from Tilt’s UI

How Argo Tunnel engineering uses Argo Tunnel

6. Share the URL with your collaborators so they can access your dashboard anywhere they are through the tunnel!

How Argo Tunnel engineering uses Argo Tunnel

You can find the source code for the example in https://github.com/cloudflare/argo-tunnel-examples/tree/master/sharing-k8s-dashboard

If this sounds like a team you want to be on, we are hiring!

Require hard key auth with Cloudflare Access

Post Syndicated from Sam Rhea original https://blog.cloudflare.com/require-hard-key-auth-with-cloudflare-access/

Require hard key auth with Cloudflare Access

Last month, attackers compromised a Twitter team member’s access to an internal administrative panel in order to take over high-profile accounts. Full details of the breach are still pending, but Twitter has shared that the attackers stole credentials through a coordinated spear phishing attack.

The attackers convinced a team member to share login permissions, giving the attackers the ability to access the Twitter control plane. Once authenticated, they sent password reset flows to email accounts they controlled in order to hijack the Twitter accounts.

Administrative panels like Twitter’s are a rich target for phishing attacks because they give attackers a backdoor to privileged systems. Customer-facing teams at SaaS companies rely on these administrative panels to update end-user data and troubleshoot user account issues. If an attacker can compromise a single team member’s account they can potentially impact thousands of end users.

We have our own administrative panel at Cloudflare and we’ve deployed a number of safeguards over the last several years to keep it secure from phishing attacks. However, we had no way to enforce the security feature we think would most insulate us from phishing attacks: physical hard keys.

With hard keys, users can only login when they use a physical device – one that does not produce codes that could be shared over chat. Google notably eliminated all employee phishing cases by rolling out their own hard keys. We issued them to every team member who had to login to our identity provider to reach the admin panel, but our identity provider allowed users to fall back to a less secure option for MFA.

To solve that problem, we moved the hard key requirement into Cloudflare’s network. Using Cloudflare Access, we can now restrict the ability to reach our admin panel only to team members who authenticate with a hard key. Starting today, we’re making that feature available to all teams.

Securing our own internal tools

Our admin panel gives our team members the ability to turn on features, manage settings, and investigate issues for our customers. The tool itself maintains its own set of application-level permissions that control the actions that administrators can take. Some customer accounts are scoped to specific team members; other accounts cannot be modified without the customer’s explicit approval.

We layer other safeguards on top of those permissions. Users who are inactive for a number of days need to be manually re-enabled. We regularly audit who can use the tool and trim back the list.

We used to lean on a VPN to keep the front door to the admin panel secure. Team members would connect to our VPN using a client on their device. Once on the VPN, they could reach the login page of the tool.

The VPN was painful for end users and a source of concern for our security team. Two issues, in particular, motivated us to find a better solution.

  • Segmentation was difficult. Not every user in the company needs to reach the admin panel, but any user on the VPN could connect to the login page. We wanted to limit that access to users in specific permission groups as part of a defense-in-depth strategy.
  • We could not add additional signals. Users could connect to the private network with just a password and MFA code. We could not limit VPN connections to corporate laptops, healthy devices, or other more granular restrictions.

About two years ago, we migrated that admin panel’s security perimeter to Cloudflare Access. Access gave us a zero-trust alternative to our VPN. Instead of being able to reach the admin panel because you are on the private network, Access continuously checks every request to the tool for identity against a list of allowed users.

We could use Access to limit who could reach certain tools, and it became a foundation for adding other sources of signal down the road. Access also gave us a new level of visibility. Since all requests pass through Cloudflare’s edge, Access provides our team with visibility into every request to every path. Without making any server-side code changes, Access can log every request and attribute it to an authenticated user. We can export those to our SIEM and create a comprehensive audit trail.

Hard keys without weak fallbacks

We integrated Cloudflare Access with our identity provider, which supports multifactor authentication (MFA). To login through Cloudflare Access, users would need to authenticate with their password and a MFA option. The ability to choose an option meant a less secure method could be selected.

Those fallback options were subject to a higher risk of phishing attack. SMS-based codes can be vulnerable to SIM swapping attacks. App-based time-based one-time-pin (TOTP) codes can linger on forgotten devices and, more dangerously, be transmitted as part of an attack.

Hard keys stand out because they rely on control of a physical item. With hard keys, users login with their password and then have to tap an actual key (typically in the form of a USB device). That tap presents a certificate, one that lives only on the key, to the service configured to trust it. A user with the hard key could not inadvertently share that code over the phone or in a chat with an attacker.

We distribute hard keys to all team members at Cloudflare. However, we could not require team members to use them on an app-by-app basis. If I don’t have my hard key around, I always have the option to fall back to a TOTP code. We needed a filtering engine that could combine multiple sources of identity signal, which Cloudflare Access provided.

How Cloudflare Access solved our own problem

If you remember going to bars or clubs before the pandemic, Cloudflare Access might seem familiar. You had to show your identity card at the door to a bouncer. The bouncer (and the establishment) did not issue that card – your government did. However, they know what it should look like and how to use its information.

Once checked, the bouncer stamped the back of your hand. You could put your ID back in your wallet and the stamp became proof that the bartenders inside knew to trust.

Cloudflare Access works like that bouncer. When users connect to a resource secured by Cloudflare Access, we check for their ID by sending them to login to their identity provider like Okta or Azure AD. Users authenticate and their identity provider sends Cloudflare Access details like who they are and, for certain providers, what MFA method they used.

Like that stamp, Cloudflare Access uses the external identity information to create a distinct badge that we trust. Access generates a JSON Web Token (JWT) for the user and stores it in their browser. That token becomes the user’s badge for the rest of their session. Cloudflare Access looks for that JWT on every request the user makes to the application and enforces rules that an administrator configures about who can proceed.

Cloudflare Access can store more than just the user’s identity in the JWT. If the identity provider captures the MFA method used by a team member, Access can read that value and store it as an additional field in the JWT. RFC 8176, Authentication Method Reference Values, standardizes these values and how they are shared between systems.

We can use that standard to introduce an MFA option into the JWT created by Cloudflare Access. Access could then add an additional check that evaluated both the user’s identity and the MFA method they used to login.
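
As a sketch of that additional check, assume the amr claim arrives as a list of RFC 8176 method strings; the exact values an identity provider sends vary, so the check below is illustrative rather than the rule engine Access runs.

package main

import "fmt"

// hasHardKey reports whether any authentication method reference on a login
// indicates a hardware-backed key. "hwk" is the RFC 8176 value for
// proof-of-possession of a hardware-secured key; the values actually sent
// vary by identity provider.
func hasHardKey(amr []string) bool {
    for _, method := range amr {
        if method == "hwk" {
            return true
        }
    }
    return false
}

func main() {
    fmt.Println(hasHardKey([]string{"pwd", "hwk"})) // true
    fmt.Println(hasHardKey([]string{"pwd", "otp"})) // false
}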

The policy flexibility gave us what we needed to work with our security team to solve this problem. By adding a new rule that layers on top of the identity rule, we could immediately require that every team member logging in to our admin panel do so with their hard key.

Require hard key auth with Cloudflare Access

In our case, that includes any hard keys supported by WebAuthn and FIDO2, or keys tied to a physical device like Apple Touch ID and Windows Hello. Access would reject the attempt if they (or an attacker) used the fallback TOTP code – even if the identity provider allowed the login.

How to get started

You can begin using the same feature our own security team needed today at no additional cost. You’ll need an identity provider that supports Authentication Method Reference, or amr, settings. Today, that consists of Okta and Azure Active Directory. We expect others will add support and we will update our documentation as they do.

To get started, navigate to an application you have in Cloudflare Access or create a new one. In the rules that determine who is allowed to reach the application, add a rule in the “Require” section. Select “MFA” from the dropdown and then choose which options you want to require.

Require hard key auth with Cloudflare Access

What’s next?

Cloudflare Access, part of Cloudflare for Teams, is available today. You can follow the documentation here to add the additional rule. All accounts can use 50 seats of Cloudflare Access for free, including the hard key requirement feature.

Protecting Remote Desktops at Scale with Cloudflare Access

Post Syndicated from Mike Borkenstein original https://blog.cloudflare.com/protecting-remote-desktops-at-scale-with-cloudflare-access/

Protecting Remote Desktops at Scale with Cloudflare Access

Early last year, before any of us knew that so many people would be working remotely in 2020, we announced that Cloudflare Access, Cloudflare’s Zero Trust authentication solution, would begin protecting the Remote Desktop Protocol (RDP). To protect RDP, customers would deploy Argo Tunnel to create an encrypted connection between their RDP server and our edge – effectively locking down RDP resources from the public Internet. Once locked down with Tunnel, customers could use Cloudflare Access to create identity-driven rules enforcing who could login to their resources.

Setting Tunnel up initially required installing the Cloudflare daemon, cloudflared, on each RDP server. However, as the adoption of remote work increased we learned that installing and provisioning a new daemon on every server in a network was a tall order for customers managing large fleets of servers.

What should have been a simple, elegant VPN replacement became a deployment headache. As organizations helped tens of thousands of users switch to remote work, no one had the bandwidth to deploy tens of thousands of daemons.

Message received: today we are announcing Argo Tunnel RDP Bastion mode, a simpler way to protect RDP connections at scale. 🎉 By functioning as a jump-host, cloudflared can reside on a single node in your network and proxy requests to any internal server, eliminating deployment headaches.

Previously, if a user wanted to RDP to a resource not yet protected with a dedicated cloudflared tunnel, they would have to reach out to a member of their infrastructure team and request that it be provisioned manually. For larger enterprises managing thousands of network assets, this could pose a significant burden, involving new configuration management manifests and implementing tunnel health monitoring.

Argo Tunnel RDP Bastion mode enables teams to reach any machine through a single cloudflared instance – a single tunnel, gated by Cloudflare Access, to reach hundreds of remote desktops.

Why does RDP matter?

RDP is one of the most popular protocols used by employees to access their office computers from remote devices. It is installed by default on Windows, and is supported on *nix and MacOS operating systems. Many companies rely on RDP to allow their employees to work from home.

Use of the Remote Desktop Protocol has increased significantly as work from home has grown during the Coronavirus pandemic. Unfortunately, in the rush to make machines available to remote users, many organizations have misconfigured RDP, giving attackers a new opportunity to target remote desktops – and attacks against RDP have risen accordingly.

This increase is due primarily to two factors. The first factor is exposure. Many RDP servers are inadvertently exposed directly to the open Internet due to incomplete enforcement of firewall rules or unpatched vulnerabilities. Quickly exposing desktop fleets in a rush to help employees work from home might result in more security oversights.

Second, most RDP servers are not protected with corporate SSO tools. When users connect over RDP, they often enter a local password to login to the target machine. However, organizations don’t always manage these credentials properly. Instead, users set and save passwords on an ad-hoc basis outside of the single sign-on credentials used for other services. That oversight leads to outdated, reused, and ultimately weak passwords that are potentially securing Internet-exposed resources.

Where does Cloudflare Access fit?

Cloudflare Access adds stronger authentication to RDP sessions by first locking down access to the remote machine via Argo Tunnel, then enforcing identity-based policies to determine who can gain access. Whether your organization uses Okta, Azure AD, or another provider, your users will be prompted to authenticate with those credentials before starting any RDP sessions.

With RDP connections protected by Access, organizations can enforce the same password strength and rotation requirements for RDP connections as they do for other critical tools.

How does it work?

Protecting Remote Desktops at Scale with Cloudflare Access

On the origin side, an admin will configure a single cloudflared instance to run in bastion mode. That bastion will reach out to the two closest Cloudflare edge data centers and create a long-lived HTTP2 session. Once set up, cloudflared will wait for incoming connections from clients to specify which final origin to connect to. This is unlike conventional cloudflared tunnel behavior, which immediately creates a single outgoing connection to a pre-configured origin.
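
As a rough sketch, starting that bastion is a single command on the jump-host. The hostname below is a placeholder, and the exact flags should be confirmed against the cloudflared documentation for your version:

# Run on the single jump-host inside your network
cloudflared tunnel --hostname bastion.example.com --bastion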

On the RDP user side, a cloudflared instance running as a client will be configured with the final destination of the RDP session.  This isn’t the address of the cloudflared bastion but rather the internal hostname the user wants to connect to.

Next, the user’s primary RDP client (i.e. “Remote Desktop Connection” on Windows) will initiate a connection to the local cloudflared client. cloudflared will launch a browser window and navigate to the Access app’s login page, prompting the user to authenticate with an IdP.

Once authenticated, the cloudflared client will tunnel the RDP traffic over HTTPS requests to the Cloudflare edge, including the final RDP destination and Access JWT in the request headers. The edge will verify the Access JWT to ensure that the client is authorized to reach the origin and, if it is, will use a special PoP to PoP route called Argo Smart Routing to forward the connection to the bastion over the shortest path possible.

For each incoming connection, the bastion will initiate an outgoing RDP session to the final internal destination and proxy traffic back and forth to the client.
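
Putting the client-side pieces together, the flow described above might look roughly like this on a user’s machine (hostnames are placeholders; check the flag names against your cloudflared release):

# Listen on localhost and name the internal RDP target behind the bastion
cloudflared access rdp --hostname bastion.example.com --destination rdp://win-desktop.internal:3389 --url rdp://localhost:3389

# Then point the RDP client (e.g. "Remote Desktop Connection") at localhost:3389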

Protecting Remote Desktops at Scale with Cloudflare Access

What’s next?

While today we are proxying just RDP traffic in bastion mode, we will eventually be expanding this functionality to protocols like FTP, SSH, and generic TCP.

In the effort to make protecting internal resources easier than ever before, cloudflared can now also be conveniently found in the Cloudflare package repo, in tagged releases on the cloudflared GitHub repo, and in the cloudflared Docker Hub repo.

Introducing IP Lists

Post Syndicated from Patrick R. Donahue original https://blog.cloudflare.com/introducing-ip-lists/

Introducing IP Lists

Authentication on the web has been steadily moving to the application layer using services such as Cloudflare Access to establish and enforce software-controlled, zero trust perimeters. However, there are still several important use cases for restricting access at the network-level by source IP address, autonomous system number (ASN), or country. For example, some businesses are prohibited from doing business with customers in certain countries, while others maintain a blocklist of problematic IPs that have previously attacked them.

Introducing IP Lists

Enforcing these network restrictions at centralized chokepoints using appliances—hardware or virtualized—adds unacceptable latency and complexity, but doing so performantly for individual IPs at the Cloudflare edge is easy. Today we’re making it just as easy to manage tens of thousands of IPs across all of your zones by grouping them in data structures known as IP Lists. Lists can be stored with metadata at the Cloudflare edge, replicated within seconds to our data centers in 200+ cities, and used as part of our powerful, expressive Firewall Rules engine to take action on incoming requests.

Introducing IP Lists
Creating and using an IP List

Previously, these sorts of network-based security controls were configured using IP Access or Zone Lockdown rules. Both tools have a number of shortcomings that we’ve eliminated with the introduction of IP Lists, including:

IP prefix boundaries

Our legacy IP Access rules allow the use of a limited number of IP prefix lengths: /16 and /24 for IPv4; and /32, /48, and /64 for IPv6. These restrictions typically result in users creating far more rules than needed, e.g., if you want to block a /20 IPv4 network you must create 16 separate /24 entries.

With IP Lists we’ve removed this restriction entirely, allowing users to create Lists with any prefix length: /2 through /32 for IPv4 and /4 through /64 for IPv6. Lists can contain both IPv4 and IPv6 networks as well as individual IP addresses.

Order of evaluation

Perhaps the most limiting factor in the use of IP Access rules today is that they are evaluated before Firewall Rules. You can elect to Block or Challenge the request based on the source IP address, country or ASN, or you can allow the request to bypass all subsequent L7 mitigations: DDoS, Firewall Rules, Zone Lockdown, User Agent, Browser Integrity Check, Hotlink Protection, IP Reputation (including “Under Attack” Mode), Rate Limiting, and Managed Rules.

IP Lists introduce much more flexibility.

For example, with IP Lists, you can combine a check of a source IP address with a Bot Management score, contents of an HTTP request header, or any other filter criteria that the Firewall Rules engine supports to implement more complex logic.

Below is a rule that blocks requests to /login with a bot score below 30, unless it is coming from the probe servers of Pingdom, an external monitoring solution.
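
In the Firewall Rules expression language, that logic might look like the sketch below; the List name pingdom_probes is just an example — use whatever name you gave your IP List:

Expression: (http.request.uri.path eq "/login" and cf.bot_management.score lt 30 and not ip.src in $pingdom_probes)
Action: Block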

Introducing IP Lists

Shared use across zones

Zone Lockdown rules are defined exclusively at the zone level and cannot be reused across zones, so if you wanted to allow only a specific set of IPs to reach a hundred different zones, you’d have to recreate the rules and IP entries in each zone.

IP Lists are stored at the account level, not zone level, so the same list can be referenced—and updated—in Firewall Rules across multiple zones. We’re also hard at work on letting you create account-wide Firewall Rules, which will streamline your security configuration even further.

Organization, labeling, and bulk uploading

IP Access and Zone Lockdown rules must be created one at a time whereas IP Lists can be uploaded in bulk through the UI using a CSV file (or pasting multiple lines, as shown below), or via the API. Individual items are timestamped, and can be given descriptions along with the group itself.
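
For example, a bulk upload through the API is a single request. The call below is a sketch that assumes you have already created a List and have its ID; verify the endpoint and payload shape against the current Lists API documentation:

curl -X POST "https://api.cloudflare.com/client/v4/accounts/<account_id>/rules/lists/<list_id>/items" \
  -H "X-Auth-Email: you@example.com" \
  -H "X-Auth-Key: <api_key>" \
  -H "Content-Type: application/json" \
  --data '[{"ip": "192.0.2.0/24", "comment": "Example monitoring range"}, {"ip": "198.51.100.7"}]'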

In the clip below, the contents of Pingdom’s IPv4 list are copied to the clipboard and then pasted into the Lists UI. Multiple rows will automatically be created as shown:

Introducing IP Lists

Actions available for use in rules

Because IP Lists are used within Firewall Rules, users can take advantage of all the actions available today, as well as those that we add in the future. In the coming months we plan to migrate all of the capabilities under Firewall → Tools into Firewall Rules, including Rate Limiting, which will require the addition of the Custom Response action. This action, which allows users to specify the specific status code, content type, and payload that gets to the eyeball, will then be usable with IP Lists.

Planned Enhancements

We wanted to get IP Lists in your hands as soon as possible, but we’re still working on adding additional capabilities. If you have thoughts on our ideas below, or have other suggestions, please comment at the end of this blog post—we’d love to hear what would make Lists more useful to you!

Multiple Lists and increased quotas for paid plans

As of today every account can create one (1) IP List with a total of 1,000 entries. In the near future we plan to increase both the number of Lists that can be created as well as the total count of entries.

If you have a specific use case for multiple (or larger) Lists today, please contact your Customer Success Manager or file a support ticket.

Additional types of custom Lists

Lists are assigned a type during creation and the first type available is the IP List. We plan to add Country and ASN Lists, and are monitoring feedback to see what other types may be useful.

Expiring List entries

We’ve heard a few requests from beta testers who are interested in expiring individual List entries after some specified period of time, e.g., 24 hours from addition to the List. If this is something of interest to you, please let us know along with your use cases.

Managed Lists

In addition to Lists that you create and manage yourself, we plan to curate Lists that you can subscribe to and use in your rules. Our initial ideas revolve around surfacing intelligence gleaned from the 27M properties reverse proxying traffic through the Cloudflare edge, e.g., equipping you with lists of IPs that are known open proxies so requests from these can be treated differently.

In addition to intelligent lists, we’re planning on creating other managed lists for your convenience—but need your help in identifying which those are. Are there lists of IPs you find yourself manually inputting? We’d like to hear about those as candidates for Cloudflare Managed lists. Some examples from beta testers include third-party performance monitoring tools that should never have security enforcements applied to them.

Are you paying for a third-party List today that you’d like to subscribe to and have automatically updated within Cloudflare? Let us know in the comments below.

Get started today and let us know what you think

IP Lists are now available in all Cloudflare accounts. We’re excited to let you start using them, and look forward to your feedback.

Tanium’s endpoint security meets Cloudflare for Teams

Post Syndicated from Sam Rhea original https://blog.cloudflare.com/tanium-cloudflare-teams/

Tanium’s endpoint security meets Cloudflare for Teams

When Cloudflare first launched in 2010, network security still relied heavily on physical security. To connect to a private network, most users simply needed to be inside the walls of the office. Once on that network, users could connect to corporate applications and infrastructure.

When users left the office, a Virtual Private Network (VPN) became a bandaid to let users connect back into that office network. Administrators poked holes in their firewall that allowed traffic to route back through headquarters. The backhaul degraded user experience and organizations had no visibility into patterns and events that occurred once users were on the network.

Cloudflare Access launched two years ago to replace that model with an identity-based solution built on Cloudflare’s global network. Instead of a private network, teams secure applications with Cloudflare’s network. Cloudflare checks every request to those applications for identity, rather than IP ranges, and accelerates those connections using the same network that powers some of the world’s largest web properties.

In this zero-trust model, Cloudflare Access checks identity on every request – not just the initial login to a VPN client. Administrators build rules that Cloudflare’s network continuously enforces. Each request is evaluated for permission and logged for audit purposes. However, users can take their passwords and 2FA keys to unapproved devices. Logins from unmanaged devices, like a personal iPad, can violate an organization’s compliance audit. Users can also connect from corporate devices that are infected with malware, posing a risk that it could spread further.

Instead of relying on the walls of an office building, modern security depends on organizations controlling which devices can, and cannot, connect to corporate resources. The identity of the device can be evaluated alongside the identity of the user to keep data and applications safer.

Starting today, Cloudflare for Teams customers can add that layer of device security into their deployment with Tanium’s endpoint management platform. Cloudflare and Tanium are partnering to make zero-trust security seamless, combining Cloudflare’s network with Tanium’s on-device security.

Cloudflare Access

Cloudflare Access secures applications by applying zero-trust enforcement to every request. Rather than trusting any users on a private network who logged into a VPN client, Access checks for identity any time someone attempts to reach the application. With Cloudflare’s global network, that check takes place in a data center in over 200 cities around the world to avoid compromising performance.

Tanium’s endpoint security meets Cloudflare for Teams

Behind the scenes, administrators build rules to decide who should be able to reach the tools protected by Access. In turn, when users need to connect to those tools, they are prompted to authenticate with one of the identity provider options. Cloudflare Access checks their login against the list of allowed users and, if permitted, allows the request to proceed.

Prior to this announcement, the rules that administrators could create relied entirely on a user login. Today, Cloudflare and Tanium customers can ensure any connection to their corporate resources is protected with two layers of assurance: the user’s corporate credentials and the health of their managed device.

Adding Tanium’s endpoint security

Tanium delivers a unified platform built on agents running on corporate devices that constantly evaluate and monitor the health of the endpoint. The solution reduces IT and Security complexity by providing comprehensive visibility and control over all endpoints in a single platform. 50% of the Fortune 100 and 4 of 5 U.S. military branches rely on Tanium to manage and secure devices, wherever they operate.

Tanium deployments use a single agent to replace several legacy approaches to endpoint management and security. For IT teams, the agent provides inventory management, device configuration, and performance monitoring to reduce the burden of managing fleets of endpoints. Security teams can use that same agent for detection and response, patch updates, and data risk and privacy enforcement.

Like Cloudflare’s products for network performance and security, Tanium replaces traditional endpoint solutions with a single platform to keep devices safe. Starting today, organizations can connect both platforms for end-to-end network and endpoint security.

How it works

Integrating Tanium and Cloudflare for Teams takes 10 minutes. Once configured, administrators can build rules that require users connecting to applications to both login with their SSO and use a device managed by Tanium.

Tanium’s endpoint security meets Cloudflare for Teams

In the new Cloudflare for Teams UI, administrators can add Tanium as an authentication mechanism. The UI will prompt them to add their Tanium public certificate and the endpoint used to validate the connecting device. With that information, Cloudflare Access can query the device’s health when evaluating a connection without the risk that the device could be impersonated.

Tanium’s endpoint security meets Cloudflare for Teams

Administrators can then copy their Cloudflare for Teams public certificate and add it into their Tanium deployment. With that certificate, Tanium administrators can ensure that the only service that can query for data from the endpoint is their unique Cloudflare for Teams account.

Finally, administrators can add new rules into their Cloudflare Access policies that evaluate device posture. When users connect to resources secured by Access, Cloudflare’s network will check that the user authenticates with their identity provider and is connecting from a healthy, Tanium-monitored, device.

Tanium’s endpoint security meets Cloudflare for Teams

Cloudflare’s network and Tanium’s distribution make that check seamless for the end user. Cloudflare Access runs in all of Cloudflare’s data centers in 200 cities around the world, putting enforcement decisions within 100ms of 99% of the world’s Internet-connected population. By integrating directly with the Tanium agent, the evaluation can also occur without a connection back to the Tanium administrative layer.

What’s next?

With this integration, organizations can get defense in depth for corporate apps with Tanium and Access working together to secure user connections. All Cloudflare for Teams customers who have a Tanium deployment can begin integrating device posture into their Access policies today at no additional cost.

If you’re interested in taking advantage of this integration, we’re standing by to help you set it up. Fill out the form here and a member of our team will get in touch to help answer any questions.

If you already use Tanium or Cloudflare Access and want to try it out yourself, documentation from Cloudflare for Teams and Tanium is available to get started today.

Releasing Cloudflare Access’ most requested feature

Post Syndicated from Sam Rhea original https://blog.cloudflare.com/releasing-cloudflare-access-most-requested-feature/

Releasing Cloudflare Access’ most requested feature

Cloudflare Access, part of Cloudflare for Teams, replaces legacy corporate VPNs with Cloudflare’s global network. Instead of starting a VPN client to backhaul traffic through an office, users visit the hostname of an internal application and login with your team’s SSO provider. While the applications feel like SaaS apps for end users, your security and IT departments can configure granular controls and audit logging in a single place.

Since Access launched two years ago, customers have been able to integrate multiple SSO providers at the same time. This MultiSSO option makes it seamless for teams to have employees login with Okta or Azure AD while partners and contractors use LinkedIn or GitHub.

The integrations always applied globally. Users would see all SSO options when connecting to any application protected by Cloudflare Access. As more organizations use Cloudflare Access to connect distributed and mixed workforces to resources, listing every provider on every app no longer scales.

For example, your team might have an internal GitLab instance that only employees need to access using your corporate G Suite login. Meanwhile, the marketing department needs to share QA versions of new sites with an external agency who authenticates with LinkedIn. Asking both sets of users to pick an SSO provider on both applications adds a redundant step and can lead to additional questions or IT tickets.

The ability to only show users the relevant identity provider became the most requested feature in Cloudflare Access in the last few months. Starting today, you can use the new Cloudflare for Teams UI to configure identity options on individual applications.

Cloudflare Access

Cloudflare Access secures applications by applying zero-trust enforcement to every request. Rather than trusting anyone on a private network, Access checks for identity any time someone attempts to reach the application. With Cloudflare’s global network, that check takes place in a data center in over 200 cities around the world to avoid compromising performance.

Releasing Cloudflare Access’ most requested feature

Behind the scenes, administrators build rules to decide who should be able to reach the tools protected by Access. In turn, when users need to connect to those tools, they are prompted to authenticate with one of the identity provider options. Cloudflare Access checks their login against the list of allowed users and, if permitted, allows the request to proceed.

The challenge of agreeing on identity

Most zero-trust options, like the VPN appliances they replace, rely on one source of identity. If your team has an application that you need to share with partners or contractors, you need to collectively agree on a single standard.

Some teams opt to solve that challenge by onboarding external users to their own identity provider. When contractors join a project, the IT department receives help desk tickets to create new user accounts in the organization directory. Contractors receive instructions on how to sign up, spend time creating passwords and learning the new tool, and then use those credentials to login.

This option gives an organization control of identity, but adds overhead in terms of time and cost. The project owner also needs to pay for new SSO seat licenses, even if those seats are temporary. The IT department must spend time onboarding, helping, and then offboarding those user accounts. And the users themselves need to learn a new system and manage yet another password – this one with permission to your internal resources.

Releasing Cloudflare Access’ most requested feature

Alternatively, other groups decide to “federate” identity. In this flow, an organization will connect their own directory service to their partner’s equivalent service. External users login with their own credentials, but administrators do the work to merge the two services to trust one another.

Releasing Cloudflare Access’ most requested feature

While this method avoids introducing new passwords, both organizations need to agree to dedicate time to integrate their identity providers – assuming that those providers can integrate. Businesses then need to configure this setup with each contractor or partner group. This model also requires that external users be part of a larger organization, making it unavailable to single users or freelancers.

Cloudflare Access avoids forcing the decision on a single source of identity by supporting multiple. When users connect, they are presented with those options. Users choose their specific provider and Access checks that individual’s login against the list of allowed users.

Releasing Cloudflare Access’ most requested feature

Configuring per-app options

Not all of those options apply to every application that an organization secures. To segment those applications, and reduce user confusion, you can now scope specific apps to different providers.

To get started, select the application that you want to segment with a particular provider in the Cloudflare for Teams UI. Click the tab titled “Authentication”.

Releasing Cloudflare Access’ most requested feature

The tab will list all providers integrated with your account. By default, Access will continue to enable all options for end users. You can toggle any provider on or off in this view and save. The next time your users visit this application, they will only see the options enabled.

If you disable all but one option, Access will skip the login page entirely and redirect the user directly to the provider – saving them an unnecessary click.

What’s next?

You can start configuring individual identity providers with specific applications in the new Cloudflare for Teams dashboard. Additional documentation is also available.

The new Teams UI makes this feature possible, but the login page that your end users see still has the legacy design from the older Access dashboard that launched two years ago. Cloudflare for Teams will be releasing a style update to that page in the next month to bring it in line with this new UI.

Setting up Cloudflare for Teams as a Start-Up Business

Post Syndicated from David Harnett original https://blog.cloudflare.com/setting-up-cloudflare-for-teams-as-a-start-up-business/

Setting up Cloudflare for Teams as a Start-Up Business

Earlier this year, Cloudflare acquired S2 Systems. We were a start-up in Kirkland, Washington and now we are home to Cloudflare’s Seattle-area office.

Our team developed a new approach to remote browser isolation (RBI), a technology that runs your web browser in a cloud data center, stopping threats on the Internet from executing any code on your machine. The closer we can bring that data center to the user, the faster we can make that experience. Since the acquisition, we have been focused on running our RBI platform in every one of Cloudflare’s data centers in 200 cities around the world.

The RBI solution will join a product suite that we call Cloudflare for Teams, which consists of two products: Access and Gateway.

Those two products solve a number of problems that companies have with securing users, devices, and data. As a start-up, we struggled with a few of these challenges in really painful ways:

  • How do we let prospects securely trial our RBI platform?
  • How do we keep our small office secure without an IT staff?
  • How can we connect to the powerful, but physically clunky and heavy development machines, when we are not in that office?

Dogfooding our own products has long been part of Cloudflare’s identity, and our team has had a chance to do the same from a new perspective.

Managing access to our RBI service for early adopter customers and partners

As we built the first version of our product, we worked closely with early adopters to test the product and gather feedback. However, we were not ready to share the product with the entire world yet, so we needed a way to lock down who could reach the prototype and beta versions.

It took us the best part of six months to build, test and modify (multiple times) the system for managing access to the product.

We chose a complicated solution that took almost as much time to build as the features within the product themselves. We deployed a load balancer that also served as a reverse proxy in front of the RBI host and acted as a bouncer for unauthenticated requests. That sat behind an ASP.NET Core server. Furthest to the right sat the most difficult component: identity.

Setting up Cloudflare for Teams as a Start-Up Business

We had to manually add identity providers every time a new customer wanted to test out the service. Our CTO frequently burned hours each day adding customers manually, configuring groups, and trying to balance policies that kept different tenants secure.

From six months to 30 minutes

As we learned more about Cloudflare during the due diligence period, we started to hear more about Cloudflare Access. Like the RBI solution, Access applied Cloudflare’s network to a new type of problem: how do teams keep their users and resources secure without also slowing them down?

When members of the Cloudflare team visited our office in Kirkland, none of them needed a VPN to connect. Their self-managed applications just worked, like any other SaaS app.

We then had a chance to try Access ourselves. After the deal closed, we collaborated with the Cloudflare team on an announcement. This started just hours after the acquisition completed, so we did not have a chance to onboard to Cloudflare’s corporate SSO yet. Instead, the team secured new marketing pages and forms behind Cloudflare Access which prompted us to login with our S2 emails. Again, it just worked.

We immediately began rethinking every hour we had spent building our own authentication platform. The next day, we set up a Cloudflare Access account. We secured our trial platform by building a couple of rules in the Access UI to decide who should be able to reach it.

We sent a note out to the team to try it out. They logged in with our SSO credentials and Cloudflare connected them to the application. No client needed on their side, no multi-level authentication platform on ours.

We shut down all of our demo authentication servers. Now, when we have customers who want to trial the RBI technology, we can add their account to the rules in a couple of minutes. They visit a single hostname, login, and can start connecting to a faster, safer browser.

Protecting our people and devices from Internet threats

When we signed a sublease for our first office location, we found the business card of the building’s Comcast representative taped to the door. We called them and after a week the Comcast Business technicians had a simple network running for us.

We wanted to implement a real network security model for our small office. We tried deploying multiple firewalls, with access controls, and added some tools to secure outbound traffic.

We spent way too much time on it. Every configuration change involved the staff trying to troubleshoot problems. The system wound up blocking things that should not be blocked, and missing things that should be blocked. It reached the point where we just turned off most of it.

Another product in the Cloudflare for Teams platform, Cloudflare Gateway, solved this challenge for us. Rather than 30 minutes, this upgrade took about 10.

Cloudflare Gateway secures users from threats on the Internet by stopping traffic from devices or office networks from reaching malicious destinations. The first feature in the product, DNS-based security, adds threat-blocking into the world’s fastest DNS resolver, Cloudflare’s 1.1.1.1 product.

Setting up Cloudflare for Teams as a Start-Up Business

We created a policy to block security threats, changed our router’s DNS settings, and never had to worry about it again. As needed, we could log back into the UI and review reports that told us about the malicious traffic that Gateway caught.

As I’m writing this post, none of us are working in that office. We’re staying home, but we still can use Gateway’s security model. Gateway now integrates with the 1.1.1.1 app for mobile devices; in a couple of clicks, we can protect iOS and Android phones and tablets with the same level of security. Soon, we’ll be releasing desktop versions to make that easy on every device.

Connecting to dev machines while working from home

Back at the office, we still have a small fleet of high-powered Linux machines. These desktops have 16 cores, 32 threads, and 32GB of DDR memory. We use them to build and test Chromium, but dragging these boxes to each developer’s house would have been a huge hassle.

We still had a physical VPN appliance that we had purchased during our start-up days. We had hired vendors to install it onsite and configure some elaborate syncing with our identity providers. The only thing more difficult than setting it up was using it. With everyone suddenly working from home, I don’t think we would have been able to make it work.

So we returned to Cloudflare Access instead. Working with guidance from Cloudflare’s IT and Security teams, we added a new hostname in the Cloudflare account for the Seattle area office. We then installed the Cloudflare daemon, cloudflared, on the machines in the offices. Those daemons created outbound-only tunnels from the machines to the Cloudflare network, available at a dedicated subdomain for each developer.
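
Each of those tunnels is a single command (or a small service wrapping that command) on the office machine. A minimal sketch, assuming a hostname reserved for one developer’s desktop:

# Run on the Linux dev machine in the office; creates an outbound-only tunnel for SSH
cloudflared tunnel --hostname dev01.example.com --url ssh://localhost:22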

On the other side of that connection, each engineer on our team installed cloudflared on their machines at home. They need to make one change to their SSH config file, adding two lines that include a ProxyCommand. The setup requires no other modifications, no special SSH clients or commands. Even the developers who rely on tools like Visual Studio Code’s Remote SSH extension could keep their workflow exactly the same.
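
For reference, the two lines in question form a Host block with a ProxyCommand, roughly like this (the hostname and cloudflared path are examples — adjust them for your environment):

Host dev01.example.com
  ProxyCommand /usr/local/bin/cloudflared access ssh --hostname %h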

The only difference is that, instead of a VPN, when developers start a new SSH session, Access prompts them to login with Cloudflare’s SSO. They do so and are connected to their machine through Cloudflare’s network and smart routing technology.

What’s next?

As a start-up, every hour we spent trying to cobble tools together was an hour we lost building our product, but we needed to provide secure access to that product, so we made the time investment. The only other option would have been to purchase products that were way outside the price range of a small start-up where the only office perk was bulk Costco trail mix.

Cloudflare for Teams immediately solved the challenges we had, in a fairly comprehensive way. We now can seamlessly grant prospects permissions to try the product, our office network is safer, and our developers can stay productive at home.

It could be easy to think “I wish we had done this sooner,” and to some extent, I do. However, seeing the before-and-after of our systems has made us more excited about what we’re doing as we bring the remote browser technology into Cloudflare’s network.

The RBI platform is going to benefit from the same advantages of that network that make features in Access and Gateway feel like magic. We’re going to apply everything that Cloudflare has learned securing and improving connections and use it to solve a new customer problem.

Interested in skipping the hard parts about our story and getting started with Cloudflare for Teams? You can use all of the features covered in this blog post today, at no cost through September.

A single dashboard for Cloudflare for Teams

Post Syndicated from Sam Rhea original https://blog.cloudflare.com/a-single-dashboard-for-cloudflare-for-teams/

A single dashboard for Cloudflare for Teams

Starting today, Cloudflare Access can now be used in the Cloudflare for Teams dashboard. You can manage security policies for your people and devices in the same place that you build zero-trust rules to protect your applications and resources. Everything is now in one place in a single dashboard.

We are excited to launch a new UI that can be used across the entire Teams platform, but we didn’t build this dashboard just for the sake of a new look-and-feel. While migrating the Access dashboard, we focused on solving one of the largest sources of user confusion in the product.

This post breaks down why the original UI caused some headaches, how we think about objects in Cloudflare for Teams, and how we set out to fix the way we display that to our users.

Cloudflare Access

Cloudflare Access is one-half of Cloudflare for Teams, a security platform that runs on Cloudflare’s network. Teams protects users, devices and data without compromising experience or performance. We built Cloudflare Access to solve our own headaches with private networks as we grew from a team concentrated in a single office to a globally distributed organization.

A single dashboard for Cloudflare for Teams

Cloudflare Access replaces corporate VPNs with Cloudflare’s network in a zero-trust model. Instead of placing internal tools on a private network, teams deploy them in any environment, including hybrid or multi-cloud models, and secure them consistently with Cloudflare’s network.

When users connect to those tools, they are prompted to login with their team’s identity provider. Cloudflare Access checks their login against the list of allowed users and, if permitted, allows the request to proceed.

Deploying Access does not require exposing new holes in corporate firewalls. Teams connect their resources through a secure outbound connection, Argo Tunnel, which runs in your infrastructure to connect the applications and machines to Cloudflare. That tunnel makes outbound-only calls to the Cloudflare network and organizations can replace complex firewall rules with just one: disable all inbound connections.

Sites vs. Accounts

When you use Cloudflare, you use the platform at two levels: account and site. You have one Cloudflare account, though you can be a member of multiple accounts. That one account captures details like your billing profile and notification settings.

Your account contains sites, the hostnames or zones that you add to Cloudflare. You configure features that apply to a site, like web application firewall (WAF) and caching rules.

When we launched Access nearly two years ago, you could use the product to add an identity check to a site you added to Cloudflare, either at the hostname, subdomain, or path. To do that, users select the site in their Cloudflare dashboard, toggle to the Access tab, and build a rule specific to that site.

A single dashboard for Cloudflare for Teams

To add rules to a different site, a user steps back up a level. They need to select the new site from the dropdown and load the Access tab for that site. However, two components in the UI remained the same and shared configuration:

  • SSO integration
  • Logs

The SSO integration is where Access pulls information about identity. Users integrate their Okta, Azure AD, G Suite, or other identity provider accounts in this card. We made a decision that the integration should apply across your entire account; you should not need to reconfigure your SSO connection on every site where you want to add an Access rule.

However, we displayed that information in the site-specific page. Cloudflare has account-level concepts, like billing or account users, but we wanted to keep everything related to Access in a single page so we made this compromise. Logs followed a similar pattern.

This decision caused confusion. For example, we added a log table to the bottom of the tab when users viewed “site{.}com”. However, that table actually presented logs from “site{.}com” as well as every other hostname in the account.

As more features were added, this exception grew out of control. At this point, the majority of features you see when you open the Access tab for one of your sites are account-level features stuffed into the site view. The page below is the Access tab for a site in my account, widgetcorp{.}tech. Highlighted in green are the boxes that apply to the site I have selected. Highlighted in red are the boxes that apply to my Access account.

A single dashboard for Cloudflare for Teams

This user experience is unnecessarily complex. Even worse, confusion in security products can lead to real incidents. Any time a user has to ask “am I building something for my account or for this site?”, there is room for error. We needed to fix both the complexity and the confusion.

Starting with a new design

A few months ago, Cloudflare launched Cloudflare for Teams, which consists of two complementary products: Access and a new solution, Cloudflare Gateway. If Access is a bouncer standing in front of the door, checking identity, Gateway is a bodyguard, keeping your team safe as you navigate the Internet.

Gateway has no concept of sites, at least not sites that you host yourself. Rather than securing your Internet properties, like Cloudflare’s infrastructure products that rely on the reverse proxy, Gateway secures your team from the Internet, and the threats on it. For the first time, you could use a Cloudflare product without a site on Cloudflare.

Gateway introduced other new concepts which have no relation to a domain name in the traditional Cloudflare sense. You can add your office network and your home WiFi to your Gateway account. You can build rules to block any sites on the Internet. You can now use Gateway on mobile devices and soon desktops as well.

To capture that model, we started on a new UI from scratch, and earlier this year we launched a new dashboard for Gateway, dash.teams.cloudflare.com.

A single dashboard for Cloudflare for Teams

Account settings now have a home of their own

The products in Cloudflare for Teams should live in one place; you shouldn’t need to hop back and forth between different dashboards to manage them. Bringing Access into the Teams dashboard puts everything under one roof.

That also gave us an opportunity to solve the confusion in the current Access UI. Since the Teams dashboard is not constrained by the site-specific model, we could break out the dashboard into components that made sense for how people use the Access product.

A single dashboard for Cloudflare for Teams

The new dashboard untangles the tools in Access that apply to your entire account (the methods that you use to secure your resources) from the features that apply to a single site (the rules you build to protect a resource).

One dashboard for your team

Merging Access into the Cloudflare for Teams dashboard, and solving the problems of the original UI, is just the beginning. We’ll be using that foundation to release new features in both Access and Gateway, including more that apply across both products.

You will also be able to continue to extend some of the configuration made in Access to Gateway. For example, an integration with a provider like Okta to build zero-trust policies in Access can eventually be reused for adding group-based policies into Gateway. You’ll see the beginning of that in the new UI, as well, with categories like “My Teams” and “Logs” that apply or will apply to both products. As we continue, we’re going to try to avoid making the same mistake of conflating account, site, and now product objects.

A single dashboard for Cloudflare for Teams

What’s next?

The new Access UI is available to all customers today in the Cloudflare for Teams dashboard. You can get started by visiting this link and signing in with your Cloudflare account.

To use the Access UI, you will first need to enable Cloudflare Access and add a site to Cloudflare in the existing dashboard. Instructions are available here. You can also watch a guided tour of the new site.

No new features have been added, though we’re busy working on them. This release focused entirely on improving how users approach the product based on the feedback we have received over 22 months. We’re still listening to new feedback. Run into an issue or notice an area of improvement? Please tell us.

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Post Syndicated from Jason Farber original https://blog.cloudflare.com/deploying-gateway-using-a-raspberry-pi-dns-over-https-and-pi-hole/

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Like many who are able, I am working remotely and in this post, I describe some of the ways to deploy Cloudflare Gateway directly from your home. Gateway’s DNS filtering protects networks from malware, phishing, ransomware and other security threats. It’s not only for corporate environments – it can be deployed on your browser or laptop to protect your computer or your home WiFi. Below you will learn how to deploy Gateway, including, but not limited to, DNS over HTTPS (DoH) using a Raspberry Pi, Pi-hole and DNSCrypt.

We recently launched Cloudflare Gateway and shortly thereafter, offered it for free until at least September to any company in need. Cloudflare leadership asked the global Solutions Engineering (SE) team, amongst others, to assist with the incoming onboarding calls. As an SE at Cloudflare, our role is to learn new products, such as Gateway, to educate, and to ensure the success of our prospects and customers. We talk to our customers daily, understand the challenges they face and consult on best practices. We were ready to help!

One way we stay on top of all the services that Cloudflare provides is by using them ourselves. In this blog, I’ll talk about my experience setting up Cloudflare Gateway.

Gateway sits between your users, device or network and the public Internet. Once you setup Cloudflare Gateway, the service will inspect and manage all Internet-bound DNS queries. In simple terms, you point your recursive DNS to Cloudflare and we enforce policies you create, such as activating SafeSearch, an automated filter for adult and offensive content that’s built into popular search engines like Google, Bing, DuckDuckGo, Yandex and others.

There are various methods and locations DNS filtering can be enabled, whether it’s on your entire laptop, each of your individual browsers and devices or your entire home network. With DNS filtering front of mind, including DoH, I explored each model. The model you choose ultimately depends on your objective.

But first, let’s review what DNS and DNS over HTTPS are.

DNS is the protocol used to resolve hostnames (like www.cloudflare.com) into IP addresses so computers can talk to each other. DNS is an unencrypted clear text protocol, meaning that any eavesdropper or machine between the client and DNS server can see the contents of the DNS request. DNS over HTTPS adds security to DNS by encrypting DNS queries with HTTPS (the protocol we use to encrypt the web).
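
You can see the difference for yourself, because a DoH query is just an HTTPS request. For example, Cloudflare’s public resolver answers JSON-formatted DoH queries like the one below; an on-path observer only sees an encrypted connection to the resolver, not the name being looked up:

curl -s -H 'accept: application/dns-json' 'https://cloudflare-dns.com/dns-query?name=www.cloudflare.com&type=A'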

Let’s get started

Navigate to https://dash.teams.cloudflare.com. If you don’t already have an account, the sign up process only takes a few minutes.

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Configuring a Gateway location, shown below, is the first step.

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Conceptually, this works much like HTTPS traffic: when our edge receives an HTTPS request, we match the incoming SNI header to the correct domain’s configuration (or, for plain text HTTP, the Host header). When our edge receives a DNS query, we need a similar mapping to identify the correct configuration. We attempt to match configurations in this order:

  1. DNS over HTTPS check and lookup based on unique hostname
  2. IPv4 check and lookup based on source IPv4 address
  3. Lookup based on IPv6 destination address

Let’s discuss each option.

DNS over HTTPS

The first attempt to match DNS requests to a location is pointing your traffic to a unique DNS over HTTPS hostname. After you configure your first location, you are given a unique destination IPv6 address and a unique DoH endpoint as shown below. Take note of the hostname as you will need it shortly. I’ll first discuss deploying Gateway in a browser and then to your entire network.

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

DNS over HTTPS is my favorite method for deploying Gateway and securing DNS queries at the same time. Enabling DoH prevents anyone but the DNS server of your choosing from seeing your DNS queries.

Enabling DNS over HTTPS in browsers

If you enable it in a browser, only queries issued by that browser are affected. It’s available in most browsers, and there are quite a few tutorials online to show you how to turn it on.

Browser | Supports DoH | Supports Custom Alternative Providers | Supports Custom Servers
Chrome | Yes | Yes | No
Safari | No | No | No
Edge | Yes** | Yes** | No**
Firefox | Yes | Yes | Yes
Opera | Yes* | Yes* | No*
Brave | Yes* | Yes* | No*
Vivaldi | Yes* | Yes* | No*

* Chromium based browser. Same support as Chrome
** Most recent version of Edge is built on Chromium

Chromium based browsers

Using Chrome as an example for all of the Chromium-based browsers, enabling DNS over HTTPS is straightforward, but as you can see in the table above, there is one issue: Chrome does not currently support custom servers. So while it is great that users can protect their DNS queries, they cannot choose the provider – including Gateway.

Here is how to enable DoH in Chromium based browsers:

Navigate to chrome://flags, search for the “Secure DNS lookups” flag (chrome://flags/#dns-over-https), and set it to Enabled.

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Firefox

Firefox is the exception to the rule because they support both DNS over HTTPS and the ability to define a custom server. Mozilla provides detailed instructions about how to get started.

Once enabled, navigate to Preferences -> General -> Network Settings and select ‘Settings’. Scroll to the section ‘Enable DNS over HTTPS’, select ‘Custom’ and input your Gateway DoH address, as shown below:

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Optionally, you can enable Encrypted SNI (ESNI), which is an IETF draft for encrypting the SNI headers, by toggling the ‘network.security.esni.enabled’ preference in about:config to ‘true’. Read more about how Cloudflare is one of the few providers that supports ESNI by default.

Congratulations, you’ve configured Gateway using DNS over HTTPS! Keep in mind that only queries issued from the configured browser will be secured. Any other device connected to your network such as your mobile devices, gaming platforms, or smart TVs will still use your network’s default DNS server, likely assigned by your ISP.

Configuring Gateway for your entire home or business network

Deploying Gateway at the router level allows you to secure every device on your network without needing to configure each one individually.

Requirements include:

  • Access to your router’s administrative portal
  • A router that supports DHCP forwarding
  • Raspberry Pi with WiFi or Ethernet connectivity

There aren’t any consumer routers on the market that natively support DoH with custom servers, and likely few that natively support DoH at all. A newer router I purchased, the Netgear R7800, supports neither, but it is one of the most popular routers for flashing DD-WRT or OpenWrt, both of which support DoH. Unfortunately, neither of these popular firmware projects supports custom servers.

While it’s rare to find a router that supports DoH out of the box, supports DoH with custom servers, or can be flashed, it’s common for a router to support DHCP forwarding (DD-WRT and OpenWrt both do). So, I installed Pi-hole on my Raspberry Pi and used it as my home network’s DNS and DHCP server.

Getting started with Pi-hole and dnscrypt-proxy

If your Raspberry Pi is new and hasn’t been configured yet, follow their guide to get started. (Note: by default, ssh is disabled, so you will need a keyboard and/or mouse to access your box in your terminal.)

Once your Raspberry Pi has been initialized, assign it a static IP address on the same network as your router. In my case, the router’s LAN address is 192.168.1.2.

Using vim:
sudo vi /etc/dhcpcd.conf

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole
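
In text form, the static configuration looks roughly like this, using the example addresses from this post — adjust the interface name and addresses for your own network:

interface eth0
static ip_address=192.168.1.6/24
static routers=192.168.1.2
# any public resolver works here for the Pi itself; clients will use Pi-hole later
static domain_name_servers=1.1.1.1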

Restart the service.
sudo /etc/init.d/dhcpcd restart

Check that your static IP is configured correctly.
ip addr show dev eth0

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Now that your Raspberry Pi is configured, we need to install Pi-hole: https://github.com/pi-hole/pi-hole/#one-step-automated-install

I chose to use dnscrypt-proxy as the local service that Pi-hole will use to forward all DNS queries. You can find the latest version here.

To install dnscrypt-proxy on your pi-hole, follow these steps:

wget https://github.com/DNSCrypt/dnscrypt-proxy/releases/download/2.0.39/dnscrypt-proxy-linux_arm-2.0.39.tar.gz
tar -xf dnscrypt-proxy-linux_arm-2.0.39.tar.gz
mv linux-arm dnscrypt-proxy
cd dnscrypt-proxy
cp example-dnscrypt-proxy.toml dnscrypt-proxy.toml

The next step is to build a DoH stamp. A stamp is simply an encoded string that contains your DoH server address and other options.

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

As a reminder, you can find Gateway’s unique DoH address in your location configuration.

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

At the very bottom of your dnscrypt-proxy.toml configuration file, uncomment both lines beneath [static].

  • Change  [static.'myserver'] to [static.'gateway']
  • Replace the default stamp with the one generated above

The static section should look similar to this:

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole
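
In text form, the uncommented block looks roughly like this — the stamp value below is a placeholder and must be replaced with the stamp you generated for your Gateway DoH hostname:

[static]
  [static.'gateway']
  stamp = 'sdns://AgcAAAAAAAAA...'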

Also in dnscrypt-proxy.toml configuration file, update the following settings:
server_names = ['gateway']
listen_addresses = ['127.0.0.1:5054']
fallback_resolvers = ['1.1.1.1:53', '1.0.0.1:53']
cache = false

Now we need to install dnscrypt-proxy as a service and configure Pi-hole to point to the listen_addresses defined above.

Install dnscrypt-proxy as a service:
sudo ./dnscrypt-proxy -service install

Start the service:
sudo ./dnscrypt-proxy -service start

You can validate the status of the service by running either of the following:
sudo service dnscrypt-proxy status
netstat -an | grep 5054

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole
Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Also, confirm the upstream is working by querying localhost on port 5054:
dig www.cloudflare.com -p 5054 @127.0.0.1

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

You will see a matching request in the Gateway query log (note the timestamps match):

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Configuring DNS and DHCP in the Pi-hole administrative console

Open your browser and navigate to the Pi-hole’s administrative console. In my case, it’s http://192.168.1.6/admin

Go to Settings -> DNS to modify the upstream DNS provider, which we’ve just configured to be dnscrypt-proxy.

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Change the upstream server to 127.0.0.1#5054 and hit save. If you want to deploy redundancy, add in a secondary address in Custom 2, such as 1.1.1.1 or Custom 3, such as your IPv6 destination address.

Almost done!

In Settings->DHCP, enable the DHCP server:

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Hit save.

At this point, your Pi-hole server is running in isolation and we need to deploy it to your network. The simplest way to ensure your Pi-hole is being used exclusively by every device is to use your Pi-hole as both a DNS server and a DHCP server. I’ve found that routers behave oddly if you outsource DNS but not DHCP, so I outsource both.

After I enabled the DHCP server on the Pi-hole, I set the router’s configuration to DHCP forwarding and defined the Pi-hole static address:

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

After applying the router’s configuration, I confirmed it was working properly by forgetting the network in my network settings and re-joining. This results in a new IPv4 address (from our new DHCP server) and, if all goes well, a new DNS server (the IP of the Pi-hole).

Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole
Deploying Gateway using a Raspberry Pi, DNS over HTTPS and Pi-hole

Done!

Now that our entire network is using Gateway, we can configure Gateway Policies to secure our DNS queries!

IPv4 check and lookup based on source IPv4 address

For this method to work properly, Gateway requires that your network has a static IPv4 address. If your IP address does not change, then this is the quickest solution (though it still does not prevent third parties from seeing what domains you are visiting). However, if you are configuring Gateway in your home, like I am, and you don’t explicitly pay for a static address, then most likely you have a dynamic IP address. These addresses can change whenever your router restarts, intentionally or not.

Lookup based on IPv6 destination address

Another option for matching requests in Gateway is to configure your DNS server to point to a unique IPv6 address provided to you by Cloudflare. Any DNS query pointed to this address will be matched properly on our edge.

This might be a good option if you want to use Cloudflare Gateway on your entire laptop by setting your local DNS resolution to this address. However, if your home router or ISP does not support IPv6, DNS resolution won’t work.

Conclusion

In this blog post, we’ve discussed the various ways Gateway can be deployed and how DNS over HTTPS is one of the next big Internet privacy improvements. Gateway can be deployed on a per-device basis, on your router, or even with a Raspberry Pi.

Project Crossbow: Lessons from Refactoring a Large-Scale Internal Tool

Post Syndicated from Junade Ali original https://blog.cloudflare.com/project-crossbow-lessons-from-refactoring-a-large-scale-internal-tool/


Cloudflare’s global network currently spans 200 cities in more than 90 countries. Engineers working in product, technical support and operations often need to be able to debug network issues from particular locations or individual servers.

Crossbow is the internal tool for doing just this, allowing Cloudflare’s Technical Support Engineers to perform diagnostic activities ranging from running commands (like traceroutes, cURL requests and DNS queries) to debugging product features and performance using bespoke tools.

In September last year, an Engineering Manager at Cloudflare asked to transition Crossbow from a Product Engineering team to the Support Operations team. The tool had been a secondary focus and had passed through multiple engineering teams without any of them building up deep subject matter knowledge.

The Support Operations team at Cloudflare is closely aligned with Cloudflare’s Technical Support Engineers, developing diagnostic tooling and Natural Language Processing technology to drive efficiency. Based on this alignment, it was decided that Support Operations was the best team to own this tool.

Learning from Sisyphus

Whilst we were seeking advice on the transition process, an SRE Engineering Manager at Cloudflare suggested reading "A Case Study in Community-Driven Software Adoption". This proved a truly invaluable read for anyone thinking of doing internal tool development or contributing to such tooling. The book describes why multiple tools are often created for the same purpose by different autonomous teams and how this issue can be overcome. It also describes the challenges and approaches to gaining adoption of tooling, especially where this requires some behaviour change from the engineers who use such tools.

That said, there are some things we learnt along the way while taking over Crossbow and refactoring and revamping a large-scale internal tool. This blog post seeks to be an addendum to that guidance and to provide some further practical advice.

In this blog post we won’t dwell too much on the work of the Cloudflare Support Operations team, but this can be found in the SRECon talk: “Support Operations Engineering: Scaling Developer Products to the Millions”. The software development methodology used in Cloudflare’s Support Operations Group closely resembles Extreme Programming.

Cutting The Fat

There were two ways of using Crossbow: a CLI (command line interface) and a UI embedded in the internal tool used by Cloudflare’s Technical Support Engineers. Maintaining both interfaces clearly had significant overhead for improvement efforts, and we took the decision to deprecate one of them. This allowed us to focus our efforts on one platform to achieve large-scale improvements across technology, usability and functionality.

We set up a poll to allow engineering, operations, solutions engineering and technical support teams to provide feedback on how they used the tooling. Polling was not only critical for learning how different teams used the tool, but also ensured that, prior to deprecation, people knew their views had been taken on board. We polled not only on the option people preferred, but also on which options they felt were necessary to them and the reasons why.

We found that the reasons for favouring the web UI primarily revolved around the absence of documentation and training. By contrast, those who used the CLI found it far more critical to their workflow: Product Engineering teams do not routinely have access to the support UI but some found it necessary to use Crossbow for their jobs, and users wanted to be able to automate commands with shell scripts.

Technically, the UI was written in JavaScript, with an API Gateway service that converted HTTP requests to gRPC, alongside some configuration to allow it to work in the support UI. The CLI interfaced directly with the gRPC API, so it was a simpler system. Given that the Cloudflare Support Operations team primarily works on Systems Engineering projects and had limited UI resources, the decision to deprecate the UI was also in our own interest.

We rolled out a new internal Crossbow user group, trained up teams, created new documentation, provided advance notification of the deprecation and archived the source code of these services. We also dramatically improved the CLI experience through simple improvements to the help information and easier command usage.

Rearchitecting Pub/Sub with Cloudflare Access

One of the primary challenges we encountered was that the system architecture for Crossbow was designed many years ago. A gRPC API ran commands at Cloudflare’s edge network using a configuration management tool that the SRE team wanted to deprecate (Crossbow was its last remaining user).

During a visit to the Singapore office, the local Edge SRE Engineering Manager wanted his team to understand Crossbow and how to contribute to it. In that meeting, we provided an overview of the current architecture, and the team there was forthcoming with potential refactoring ideas to maintain global network stability and move away from the old pipeline. This provided invaluable insight into the common issues across technical approaches and the cases where the tool would fail, requiring Technical Support Engineers to consult the SRE team.

We decided to adopt a simpler pub/sub pipeline: instead, the edge network would expose a gRPC daemon that listens for new jobs, executes them, and then makes a callback to the API service with the results (which are relayed on to the client).

For authentication between the API service and the client or the API service and the network edge, we implemented a JWT authentication scheme. For a CLI user, the authentication was done by querying an HTTP endpoint behind Cloudflare Access using cloudflared, which provided a JWT the client could use for authentication with gRPC. In practice, this looks something like this:

  1. CLI makes request to authentication server using cloudflared
  2. Authentication server responds with signed JWT token
  3. CLI makes gRPC request with JWT authentication token to API service
  4. API service validates token using a public key

The gRPC API endpoint was placed on Cloudflare Spectrum; as users were authenticated using Cloudflare Access, we could remove the requirement for users to be on the company VPN to use the tool. The new authentication pipeline, combined with a single user interface, also allowed us to improve the collection of metrics and usage logs of the tool.

Risk Management

Risk is inherent in the activities undertaken by engineering professionals, meaning that members of the profession have a significant role to play in managing and limiting it.

As with all engineering projects, it was critical to manage risk. However, the risk to manage differs between engineering projects. Availability wasn’t the largest factor, given that Technical Support Engineers could escalate issues to the SRE team if the tool wasn’t available. The main risks were the security of the Cloudflare network and ensuring Crossbow did not affect the availability of any other services. To this end we took methodical steps to improve isolation and engaged the InfoSec team early to assist with specification and code reviews of the new pipeline. Where a risk to availability existed, we made sure it was properly communicated to the support team and the internal Crossbow user group, along with the risk/reward trade-off involved.

Feedback, Build, Refactor, Measure

The Support Operations team at Cloudflare works using a methodology based on Extreme Programming. A key tenet of Extreme Programming is Test Driven Development, often described as a “red-green-refactor” pattern. First the engineer enshrines the requirements in tests, then they make those tests pass, and then they refactor to improve code quality before shipping the software.

As we took on this project, the Cloudflare Support and SRE teams were working on Project Baton – an effort to allow Technical Support Engineers to handle more customer escalations without handover to the SRE teams.

As part of this effort, they had already created an invaluable resource in the form of a feature wish list for Crossbow. We associated JIRAs with all of these items and prioritised the work to deliver those feature requests using a Test Driven Development workflow and the introduction of Continuous Integration. Critically, we measured the improvements once they were deployed. Adding simple functionality like support for MTR (a Linux network diagnostic tool) and exposing support for different cURL flags provided measurable increases in usage.

We were also able to embed Crossbow support for other tools available at the network edge created by other teams, allowing them to maintain such tools and expose features to Crossbow users. Through the creation of an improved development environment and documentation, we were able to drive Product Engineering teams to contribute functionality that was in the mutual interest of them and the customer support team.

Finally, we owned a number of tools used by Technical Support Engineers to discover which Cloudflare configuration was applied to a given URL and to perform distributed performance testing; we deprecated these tools and rolled them into Crossbow. Another tool owned by the Cloudflare Workers team, called Edge Worker Debug, was also rolled into Crossbow, and that team deprecated their original tool.

Results

From implementing user analytics on the tool on 16 December 2019 to the week ending 22 January 2020, we saw a usage increase of 4.5x. This growth primarily happened within a four-week period; by adding the most wanted functionality, we were able to achieve a critical saturation of usage amongst Technical Support Engineers.


Beyond this point, it became critical to use the number of checks being run as a metric to evaluate how useful the tool was. For example, the week starting 27 January saw no meaningful increase in unique users (a 14% increase over the previous week, within the normal fluctuation of stable usage). However, over the same timeframe, we saw a 2.6x increase in the number of tests being run, coinciding with the introduction of a number of new high-usage features.


Conclusion

Through removing low-value, high-maintenance functionality and merciless refactoring, we were able to dramatically improve the quality of Crossbow and, in turn, the velocity of delivery. We dramatically improved usage by building the means to measure usage, running feedback loops with users to gather feature requests, and practising test-driven development. Consolidation of tooling reduced the overhead of developing support tooling across the business, providing a common framework for developing and exposing functionality for Technical Support Engineers.

There are two key counterintuitive learnings from this project. The first is that cutting functionality can drive usage, provided this is done intelligently. In our case, the web UI contained no functionality that wasn’t also in the CLI, yet it caused substantial engineering overhead to maintain. By deprecating it, we were able to reduce technical debt and thereby improve the velocity of delivering more important functionality. This effort requires effective communication of the decision-making process and involvement from those who are impacted by the decision.

Secondly, tool development efforts are often guided by user feedback but lack a means of objectively measuring the resulting improvements. When logging is added, it is often done purely for security and audit purposes. Whilst feedback loops with users are invaluable, it is critical to have an objective measure of how successful a feature is and how it is used. Effective measurement drives the decision-making process for future tooling and therefore, in the long run, the usage data can be more important than the original feature itself.

If you’re interested in debugging interesting technical problems on a network with these tools, we’re hiring for Support Engineers (including Security Operations, Technical Support and Support Operations Engineering) in San Francisco, Austin, Champaign, London, Lisbon, Munich and Singapore.