Tag Archives: Cloudflare Tunnel

Introducing Private Network Discovery

Post Syndicated from Abe Carryl original https://blog.cloudflare.com/introducing-network-discovery/

With Cloudflare One, building your private network on Cloudflare is easy. What is not so easy is maintaining the security of your private network over time. Resources are constantly being spun up and down, and users are added and removed daily, which makes the network painful to manage over time.

That’s why today we’re opening a closed beta for our new Zero Trust network discovery tool. With Private Network Discovery, our Zero Trust platform will now start passively cataloging both the resources being accessed and the users who are accessing them without any additional configuration required. No third party tools, commands, or clicks necessary.

To get started, sign up for early access to the closed beta and gain instant visibility into your network today. If you’re interested in learning more about how it works and what else we will be launching for general availability, keep scrolling.

One of the most laborious aspects of migrating to Zero Trust is replicating the security policies which are active within your network today. Even if you do have a point-in-time understanding of your environment, networks are constantly evolving, with new resources being spun up dynamically for various operations. This results in a constant cycle of discovering and securing applications, which creates an endless backlog of due diligence for security teams.

That’s why we built Private Network Discovery. With Private Network Discovery, organizations can easily gain complete visibility into the users and applications that live on their network without any additional effort on their part. Simply connect your private network to Cloudflare, and we will surface the unique origins we discover on your network so you can seamlessly translate them into Cloudflare Access applications.

Building your private network on Cloudflare

Building out a private network has two primary components: the infrastructure side, and the client side.

The infrastructure side of the equation is powered by Cloudflare Tunnel, which simply connects your infrastructure (whether that be a single application, many applications, or an entire network segment) to Cloudflare. This is made possible by running a simple command-line daemon in your environment to establish multiple secure, outbound-only links to Cloudflare. Simply put, Tunnel is what connects your network to Cloudflare.

On the other side of this equation, you need your end users to be able to easily connect to Cloudflare and, more importantly, your network. This connection is handled by our robust device agent, Cloudflare WARP. This agent can be rolled out to your entire organization in just a few minutes using your in-house MDM tooling, and it establishes a secure connection from your users’ devices to the Cloudflare network.

Now that we have your infrastructure and your users connected to Cloudflare, it becomes easy to tag your applications and layer on Zero Trust security controls to verify both identity and device-centric rules for each and every request on your network.

How it works

As we mentioned earlier, we built this feature to help your team gain visibility into your network by passively cataloging unique traffic destined for an RFC 1918 or RFC 4193 address space. By design, this tool operates in an observability mode whereby all applications are surfaced, but are tagged with a base state of “Unreviewed.”

The Network Discovery tool surfaces all origins within your network, defined as any unique IP address, port, or protocol. You can review the details of any given origin and then create a Cloudflare Access application to control access to that origin. It’s also worth noting that Access applications may be composed of more than one origin.

Let’s take, for example, a privately hosted video conferencing service, Jitsi. I’m using this example as our team actually uses this service internally to test our new features before pushing them into production. In this scenario, we know that our self-hosted instance of Jitsi lives at 10.0.0.1:443. However, as this is a video conferencing application, it communicates on both tcp:10.0.0.1:443 and udp:10.0.0.1:10000. Here we would select one origin and assign it an application name.

As a note, during the closed beta you will not be able to view this application in the Cloudflare Access application table. For now, these application names will only be reflected in the discovered origins table of the Private Network Discovery report. You will see them reflected in the Application name column exclusively. However, when this feature goes into general availability you’ll find all the applications you have created under Zero Trust > Access > Applications as well.

After you have assigned an application name and added your first origin, tcp:10.0.0.1:443, you can then follow the same pattern to add the other origin, udp:10.0.0.1:10000, as well. This allows you to create logical groupings of origins to create a more accurate representation of the resources on your network.

When you create an application, the Network Discovery tool automatically updates the status of these individual origins from “Unreviewed” to “In-Review.” This allows your team to easily track each origin’s status. From there, you can drill further down to review the number of unique users accessing a particular origin as well as the total number of requests each user has made. This helps equip your team with the information it needs to create identity- and device-driven Zero Trust policies. Once your team is comfortable with a given application’s usage, you can then manually update the status of that application to either “Approved” or “Unapproved”.

What’s next

Our closed beta launch is just the beginning. While the closed beta release supports creating friendly names for your private network applications, those names do not currently appear in the Cloudflare Zero Trust policy builder.

As we move towards general availability, our top priority will be making it easier to secure your private network based on what is surfaced by the Private Network Discovery tool. With the general availability launch, you will be able to create Access applications directly from your Private Network Discovery report, reference your private network applications in Cloudflare Access, and create Zero Trust security policies for those applications, all in a single workflow.

As you can see, we have exciting plans for this tool and will continue investing in Private Network Discovery in the future. If you’re interested in gaining access to the closed beta, sign-up here and be among the first users to try it out!

Ridiculously easy to use Tunnels

Post Syndicated from Abe Carryl original https://blog.cloudflare.com/ridiculously-easy-to-use-tunnels/

A little over a decade ago, Cloudflare launched at TechCrunch Disrupt. At the time, we talked about three core principles that differentiated Cloudflare from traditional security vendors: be more secure, more performant, and ridiculously easy to use. Ease of use is at the heart of every decision we make, and this is no different for Cloudflare Tunnel.

That’s why we’re thrilled to announce today that creating tunnels, which previously required up to 14 commands in the terminal, can now be accomplished in just three simple steps directly from the Zero Trust dashboard.

If you’ve heard enough, jump over to sign-up/teams to unplug your VPN and start building your private network with Cloudflare. If you’re interested in learning more about our motivations for this release and what we’re building next, keep scrolling.

Our connector

Cloudflare Tunnel is the easiest way to connect your infrastructure to Cloudflare, whether that be a local HTTP server, web services served by a Kubernetes cluster, or a private network segment. This connectivity is made possible through our lightweight, open-source connector, cloudflared. Our connector offers high-availability by design, creating four long-lived connections to two distinct data centers within Cloudflare’s network. This means that whether an individual connection, server, or data center goes down, your network remains up. Users can also maintain redundancy within their own environment by deploying multiple instances of the connector in the event a single host goes down for one reason or another.

Historically, the best way to deploy our connector has been through the cloudflared CLI. Today, we’re thrilled to announce that we have launched a new solution to remotely create, deploy, and manage tunnels and their configuration directly from the Zero Trust dashboard. This new solution allows our customers to provide their workforce with Zero Trust network access in 15 minutes or less.

CLI? GUI? Why not both

Command line interfaces are exceptional at what they do. They allow users to pass commands at their console or terminal and interact directly with the operating system. This grants users precise control over their interactions with a given program or service when that level of precision is required.

However, they also have a higher learning curve and can be less intuitive for new users. This means users need to carefully research the tools they wish to use prior to trying them out. Many users don’t have the luxury to perform this level of research, only to test a program and find it’s not a great fit for their problem.

Conversely, GUIs, like our Zero Trust dashboard, have the flexibility to teach by doing. Little to no program knowledge is required to get started. Users can be intuitively led to their desired results and only need to research how and why they completed certain steps after they know this solution fits their problem.

When we first released Cloudflare Tunnel, it had fewer than ten distinct commands to get started. We now have far more than that, as well as a myriad of new use cases to invoke them. This has turned what used to be an easy-to-navigate CLI library into something more cumbersome for users just discovering our product.

Simple typos led to immense frustration for some users. Imagine, for example, that a user needs to advertise IP routes for their private network tunnel. It can be burdensome to remember cloudflared tunnel route ip add <IP/CIDR>. Through the Zero Trust dashboard, you can forget all about the semantics of the CLI library. All you need to know is the name of your tunnel and the network range you wish to connect through Cloudflare. Simply enter my-private-net (or whatever you want to call it), copy the installation script, and input your network range. It’s that simple. If you accidentally type an invalid IP or CIDR block, the dashboard will provide an actionable, human-readable error and get you back on track.

Whether you prefer the CLI or GUI, they ultimately achieve the same outcome through different means. Each has merit and ideally users get the best of both worlds in one solution.

Eliminating points of friction

Tunnels have typically required a locally managed configuration file to route requests to their appropriate destinations. This configuration file was never created by default, but was required for almost every use case. This meant that users needed to use the command line to create and populate their configuration file using examples from developer documentation. As functionality has been added into cloudflared, configuration files have become unwieldy to manage. Understanding the parameters and values to include as well as where to include them has become a burden for users. These issues were often difficult to catch with the naked eye and painful to troubleshoot for users.
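
For context, a locally managed configuration file for a typical HTTP use case looked something like the sketch below (the hostname, port, and paths here are placeholders rather than values from this post):

tunnel: <Tunnel-UUID>
credentials-file: /root/.cloudflared/<Tunnel-UUID>.json
ingress:
  - hostname: app.example.com
    service: http://localhost:8000
  # catch-all rule, required as the final entry
  - service: http_status:404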

We also wanted to improve the concept of tunnel permissions with our latest release. Previously, users were required to manage two distinct tokens: the cert.pem file and the Tunnel_UUID.json file. In short, cert.pem, issued during the cloudflared tunnel login command, granted the ability to create, delete, and list tunnels for their Cloudflare account through the CLI. Tunnel_UUID.json, issued during the cloudflared tunnel create <NAME> command, granted the ability to run a specified tunnel. However, since tunnels can now be created directly from your Cloudflare account in the Zero Trust dashboard, there is no longer a requirement to authenticate your origin prior to creating a tunnel. This action is already performed during the initial Cloudflare login event.
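
To make the two-token model concrete, here is roughly where each file came from in the CLI flow (the tunnel name is illustrative):

$ cloudflared tunnel login              # writes cert.pem: account-level permission to create, delete, and list tunnels
$ cloudflared tunnel create my-tunnel   # writes <Tunnel_UUID>.json: permission to run that specific tunnel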

With today’s release, users no longer need to manage configuration files or tokens locally. Instead, Cloudflare will manage this for you based on the inputs you provide in the Zero Trust dashboard. If users typo a hostname or service, they’ll know well before attempting to run their tunnel, saving time and hassle. We’ll also manage your tokens for you, and if you need to refresh your tokens at some point in the future, we’ll rotate the token on your behalf as well.

Client or clientless Zero Trust

We commonly refer to Cloudflare Tunnel as an “on-ramp” to our Zero Trust platform. Once connected, you can seamlessly pair it with WARP, Gateway, or Access to protect your resources with Zero Trust security policies, so that each request is validated against your organization’s device and identity based rules.

Clientless Zero Trust

Users can achieve a clientless Zero Trust deployment by pairing Cloudflare Tunnel with Access. In this model, users will follow the flow laid out in the Zero Trust dashboard. First, users name their tunnel. Next, users will be provided a single installation script tailored to the origin’s operating system and system architecture. Finally, they’ll create either public hostnames or private network routes for their tunnel. As outlined earlier, this step eliminates the need for a configuration file. Public hostname values will now replace ingress rules for remotely managed tunnels. Simply add the public hostname through which you’d like to access your private resource. Then, map the hostname value to a service behind your origin server. Finally, create a Cloudflare Access policy to ensure only those users who meet your requirements are able to access this resource.

Client-based Zero Trust

Alternatively, users can pair Cloudflare Tunnel with WARP and Gateway if they prefer a client-based approach to Zero Trust. Here, they’ll follow the same flow outlined above but instead of creating a public hostname, they’ll add a private network. This step replaces the cloudflared tunnel route ip add <IP/CIDR> step from the CLI library. Then, users can navigate to the Cloudflare Gateway section of the Zero Trust dashboard and create two rules to test private network connectivity and get started.

  1. Name: Allow <current user> for <IP/CIDR>
    Policy: Destination IP in <IP/CIDR> AND User Email is <Current User Email>
    Action: Allow
  2. Name: Default deny for <IP/CIDR>
    Policy: Destination IP in <IP/CIDR>
    Action: Block

It’s important to note, with either approach, most use cases will only require a single tunnel. A tunnel can advertise both public hostnames and private networks without conflicts. This helps make orchestration simple. In fact, we suggest starting with the least number of tunnels possible and using replicas to handle redundancy rather than additional tunnels. This, of course, is dependent on each user’s environment, but generally it’s smart to start with a single tunnel and create more only when there is a need to keep networks or services logically separated.
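
As a sketch of what this looks like in practice, a replica is simply another cloudflared instance running the same tunnel, for example on a second host that has access to the same tunnel credentials (the tunnel name is illustrative):

# on host A
$ cloudflared tunnel run my-private-net

# on host B, using the same tunnel credentials
$ cloudflared tunnel run my-private-net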

What’s next

Since we launched Cloudflare Tunnel, hundreds of thousands of tunnels have been created. That’s many tunnels that need to be migrated over to our new orchestration method. We want to make this process frictionless. That’s why we’re currently building out tooling to seamlessly migrate locally managed configurations to Cloudflare managed configurations. This will be available in a few weeks.

At launch, we also will not support global configuration options listed in our developer documentation. These parameters require case-by-case support, and we’ll be adding these commands incrementally over time. Most notably, this means the best way to adjust your cloudflared logging levels will still be by modifying the Cloudflare Tunnel service start command and appending the --loglevel flag into your service run command. This will become a priority after releasing the migration wizard.
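
For example, a run command adjusted this way might look roughly like the following (the tunnel name is illustrative, and exact flag placement can vary between cloudflared versions):

$ cloudflared tunnel --loglevel debug run my-tunnel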

As you can see, we have exciting plans for the future of remote tunnel management and will continue investing in this as we move forward. Check it out today and deploy your first Cloudflare Tunnel from the dashboard in three simple steps.

Guest Blog: k8s tunnels with Kudelski Security

Post Syndicated from Guest Author original https://blog.cloudflare.com/guest-blog-zero-trust-access-kubernetes/

Today, we’re excited to publish a blog post written by our friends at Kudelski Security, a managed security services provider. A few weeks back, Romain Aviolat, the Principal Cloud and Security Engineer at Kudelski Security, approached our Zero Trust team with a unique solution to a difficult problem, powered by Cloudflare’s Identity-aware Proxy, which we call Cloudflare Tunnel, to ensure secure application access in remote working environments.

We enjoyed learning about their solution so much that we wanted to amplify their story. In particular, we appreciated how Kudelski Security’s engineers took full advantage of the flexibility and scalability of our technology to automate workflows for their end users. If you’re interested in learning more about Kudelski Security, check out their work below or their research blog.

Zero Trust Access to Kubernetes

Over the past few years, Kudelski Security’s engineering team has prioritized migrating our infrastructure to multi-cloud environments. Our internal cloud migration mirrors what our end clients are pursuing and has equipped us with expertise and tooling to enhance our services for them. Moreover, this transition has provided us an opportunity to reimagine our own security approach and embrace the best practices of Zero Trust.

So far, one of the most challenging facets of our Zero Trust adoption has been securing access to our different Kubernetes (K8s) control planes (APIs) across multiple cloud environments. Initially, our infrastructure team struggled to gain visibility and apply consistent, identity-based controls to the different APIs associated with different K8s clusters. Additionally, when interacting with these APIs, our developers were often left blind as to which clusters they needed to access and how to do so.

To address these frictions, we designed an in-house solution leveraging Cloudflare to automate how developers could securely authenticate to K8s clusters sitting across public cloud and on-premise environments. Specifically, for a given developer, we can now surface all the K8s services they have access to in a given cloud environment, authenticate an access request using Cloudflare’s Zero Trust rules, and establish a connection to that cluster via Cloudflare’s Identity-aware proxy, Cloudflare Tunnel.

Most importantly, this automation tool has enabled Kudelski Security as an organization to enhance our security posture and improve our developer experience at the same time. We estimate that this tool saves a new developer at least two hours of time otherwise spent reading documentation, submitting IT service tickets, and manually deploying and configuring the different tools needed to access different K8s clusters.

In this blog, we detail the specific pain points we addressed, how we designed our automation tool, and how Cloudflare helped us progress on our Zero Trust journey in a work-from-home friendly way.

Challenges securing multi-cloud environments

As Kudelski Security has expanded our client services and internal development teams, we have inherently expanded our footprint of applications within multiple K8s clusters and multiple cloud providers. For our infrastructure engineers and developers, the K8s cluster API is a crucial entry point for troubleshooting. We work in GitOps and all our application deployments are automated, but we still frequently need to connect to a cluster to pull logs or debug an issue.

However, maintaining this diversity creates complexity and pressure for infrastructure administrators. For end users, sprawling infrastructure can translate to different credentials, different access tools for each cluster, and different configuration files to keep track of.

Such a complex access experience can make real-time troubleshooting particularly painful. For example, on-call engineers trying to make sense of an unfamiliar K8s environment may dig through dense documentation or be forced to wake up other colleagues to ask a simple question. All this is error-prone and a waste of precious time.

Common, traditional approaches to securing access to K8s APIs presented challenges we knew we wanted to avoid. For example, we felt that exposing the API to the public Internet would inherently increase our attack surface, and that’s a risk we couldn’t afford. Moreover, we did not want to provide broad-based access to our clusters’ APIs via our internal networks and condone the risks of lateral movement. As Kudelski continues to grow, the operational costs and complexity of deploying VPNs across our workforce and different cloud environments would lead to scaling challenges as well.

Instead, we wanted an approach that would allow us to maintain small, micro-segmented environments, small failure domains, and no more than one way to give access to a service.

Leveraging Cloudflare’s Identity-aware Proxy for Zero Trust access

To do this, Kudelski Security’s engineering team opted for a more modern approach: creating connections between users and each of our K8s clusters via an Identity-aware proxy (IAP). IAPs are flexible to deploy and add an additional layer of security in front of our applications by verifying the identity of a user when an access request is made. Further, they support our Zero Trust approach by creating connections from users to individual applications — not entire networks.

Each cluster has its own IAP and its own set of policies, which check for identity (via our corporate SSO) and other contextual factors like the device posture of a developer’s laptop. The IAP doesn’t replace the K8s cluster’s authentication mechanism; it adds a new one on top of it, and thanks to identity federation and SSO, this process is completely transparent to our end users.

In our setup, Kudelski Security is using Cloudflare’s IAPs as a component of Cloudflare Access — a ZTNA solution and one of several security services unified by Cloudflare’s Zero Trust platform.

For many web-based apps, IAPs help create a frictionless experience for end users requesting access via a browser. Users authenticate via their corporate SSO or identity provider before reaching the secured app, while the IAP works in the background.

That user flow looks different for CLI-based applications because we cannot redirect CLI network flows like we do in a browser. In our case, our engineers want to use their favorite K8s clients which are CLI-based like kubectl or k9s. This means our Cloudflare IAP needs to act as a SOCKS5 proxy between the CLI client and each K8s cluster.

To create this IAP connection, Cloudflare provides a lightweight server-side daemon called cloudflared that connects infrastructure with applications. This encrypted connection runs on Cloudflare’s global network where Zero Trust policies are applied with single-pass inspection.

Without any automation, however, Kudelski Security’s infrastructure team would need to distribute the daemon on end user devices, provide guidance on how to set up those encrypted connections, and take other manual, hands-on configuration steps and maintain them over time. Plus, developers would still lack a single pane of visibility across the different K8s clusters that they would need to access in their regular work.
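
To give a sense of what that manual flow involves, here is a rough sketch based on the pattern Cloudflare documents for kubectl over Access; the hostname, ports, and IP below are placeholders:

# next to the K8s API server: expose it through a tunnel as a SOCKS5 service
$ cloudflared tunnel --hostname k8s-api.example.com --url tcp://<K8S-API-IP>:443 --socks5=true

# on the developer's machine: open a local SOCKS5 proxy through Access
$ cloudflared access tcp --hostname k8s-api.example.com --url 127.0.0.1:1234

# point the CLI client at the local proxy
$ env HTTPS_PROXY=socks5://127.0.0.1:1234 kubectl get pods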

Our automated solution: k8s-tunnels!

To solve these challenges, our infrastructure engineering team developed an internal tool — called ‘k8s-tunnels’ — that embeds complex configuration steps which make life easier for our developers. Moreover, this tool automatically discovers all the K8s clusters that a given user has access to based on the Zero Trust policies created. To enable this functionality, we embedded the SDKs of some major public cloud providers that Kudelski Security uses. The tool also embeds the cloudflared daemon, meaning that we only need to distribute a single tool to our users.

Altogether, a developer who launches the tool goes through the following workflow (we assume the user already has valid credentials; otherwise, the tool opens a browser to our IdP to obtain them):

1. The user selects one or more clusters to connect to

2. k8s-tunnels automatically opens the connection with Cloudflare and exposes a local SOCKS5 proxy on the developer’s machine

3. k8s-tunnels amends the user’s local Kubernetes client configuration, adding the information needed to go through the local SOCKS5 proxy

4. k8s-tunnels switches the Kubernetes client context to the current connection

5. The user can now use their favorite CLI client to access the K8s cluster

The whole process is really straightforward and is being used on a daily basis by our engineering team. And, of course, all this magic is made possible through the auto-discovery mechanism we’ve built into k8s-tunnels. Whenever new engineers join our team, we simply ask them to launch the auto-discovery process and get started.

Here is an example of the auto-discovery process in action.

  1. k8s-tunnels connects to our different cloud providers’ APIs and lists the K8s clusters the user has access to
  2. k8s-tunnels maintains a local config file of those clusters on the user’s machine so this process does not need to be run more than once

Automation enhancements

For on-premises deployments, it was a bit trickier, as we didn’t have a simple way to store the K8s clusters’ metadata like we do with resource tags on public cloud providers. We decided to use Vault as a key-value store to mimic public cloud resource tags for on-prem. This way we can achieve auto-discovery of on-prem clusters following the same process as with a public cloud provider.

You may have noticed in the previous CLI screenshot that the user can select multiple clusters at the same time! We quickly realized that our developers often needed to access multiple environments at the same time to compare a workload running in production and in staging. So instead of opening and closing tunnels every time they needed to switch clusters, we designed our tool so that they can simply open multiple tunnels in parallel within a single k8s-tunnels instance and switch the destination K8s cluster on their laptop.

Last but not least, we’ve also added the support for favorites and notifications on new releases, leveraging Cloudflare Workers, but that’s for another blog post.

What’s Next

In designing this tool, we’ve identified a couple of issues inside Kubernetes client libraries when used in conjunction with SOCKS5 proxies, and we’re working with the Kubernetes community to fix those issues, so everybody should benefit from those patches in the near future.

With this blog post, we wanted to highlight how it is possible to apply Zero Trust security for complex workloads running on multi-cloud environments, while simultaneously improving the end user experience.

Although today our ‘k8s-tunnels’ code is too specific to Kudelski Security, our goal is to share what we’ve created back with the Kubernetes community, so that other organizations and Cloudflare customers can benefit from it.

Extending Cloudflare’s Zero Trust platform to support UDP and Internal DNS

Post Syndicated from Abe Carryl original https://blog.cloudflare.com/extending-cloudflares-zero-trust-platform-to-support-udp-and-internal-dns/

At the end of 2020, Cloudflare empowered organizations to start building a private network on top of our network. Using Cloudflare Tunnel on the server side, and Cloudflare WARP on the client side, the need for a legacy VPN was eliminated. Fast-forward to today, and thousands of organizations have gone on this journey with us — unplugging their legacy VPN concentrators, internal firewalls, and load balancers. They’ve eliminated the need to maintain all this legacy hardware; they’ve dramatically improved speeds for end users; and they’re able to maintain Zero Trust rules organization-wide.

We started with TCP, which is powerful because it enables an important range of use cases. However, to truly replace a VPN, you need to be able to cover UDP, too. Starting today, we’re excited to provide early access to UDP on Cloudflare’s Zero Trust platform. And even better: as a result of supporting UDP, we can offer Internal DNS — so there’s no need to migrate thousands of private hostnames by hand to override DNS rules. You can get started with Cloudflare for Teams for free today by signing up here; and if you’d like to join the waitlist to gain early access to UDP and Internal DNS, please visit here.

The topology of a private network on Cloudflare

Building out a private network has two primary components: the infrastructure side, and the client side.

The infrastructure side of the equation is powered by Cloudflare Tunnel, which simply connects your infrastructure (whether that be a singular application, many applications, or an entire network segment) to Cloudflare. This is made possible by running a simple command-line daemon in your environment to establish multiple secure, outbound-only, load-balanced links to Cloudflare. Simply put, Tunnel is what connects your network to Cloudflare.

On the other side of this equation, we need your end users to be able to easily connect to Cloudflare and, more importantly, your network. This connection is handled by our robust device client, Cloudflare WARP. This client can be rolled out to your entire organization in just a few minutes using your in-house MDM tooling, and it establishes a secure, WireGuard-based connection from your users’ devices to the Cloudflare network.

Now that we have your infrastructure and your users connected to Cloudflare, it becomes easy to tag your applications and layer on Zero Trust security controls to verify both identity and device-centric rules for each and every request on your network.

Up until now though, only TCP was supported.

Extending Cloudflare Zero Trust to support UDP

Over the past year, with more and more users adopting Cloudflare’s Zero Trust platform, we have gathered data surrounding all the use cases that are keeping VPNs plugged in. Of those, the most common need has been blanket support for UDP-based traffic. Modern protocols like QUIC take advantage of UDP’s lightweight architecture — and at Cloudflare, we believe it is part of our mission to advance these new standards to help build a better Internet.

Today, we’re excited to open an official waitlist for those who would like early access to Cloudflare for Teams with UDP support.

What is UDP and why does it matter?

UDP is a vital component of the Internet. Without it, many applications would be rendered woefully inadequate for modern use. Applications which depend on near real time communication such as video streaming or VoIP services are prime examples of why we need UDP and the role it fills for the Internet. At their core, however, TCP and UDP achieve the same results — just through vastly different means. Each has their own unique benefits and drawbacks, which are always felt downstream by the applications that utilize them.

Here’s a quick metaphor for how the two work: imagine asking somebody a question. TCP should look pretty familiar: you would typically say hi, wait for them to say hi back, ask how they are, wait for their response, and then ask them what you want.

UDP, on the other hand, is the equivalent of just walking up to someone and asking what you want without checking to make sure that they’re listening. With this approach, some of your question may be missed, but that’s fine as long as you get an answer.

Like the conversation above, with UDP many applications actually don’t care if some data gets lost; video streaming or game servers are good examples here. If you were to lose a packet in transit while streaming, you wouldn’t want the entire stream to be interrupted until this packet is received — you’d rather just drop the packet and move on. Another reason application developers may utilize UDP is because they’d prefer to develop their own controls around connection, transmission, and quality control rather than use TCP’s standardized ones.

For Cloudflare, end-to-end support for UDP-based traffic will unlock a number of new use cases. Here are a few we think you’ll agree are pretty exciting.

Internal DNS Resolvers

Most corporate networks require an internal DNS resolver to disseminate access to resources made available over their Intranet. Your Intranet needs an internal DNS resolver for many of the same reasons the Internet needs public DNS resolvers. In short, humans are good at many things, but remembering long strings of numbers (in this case IP addresses) is not one of them. Both public and internal DNS resolvers were designed to solve this problem (and much more) for us.

In the corporate world, it would be needlessly painful to ask internal users to navigate to, say, 192.168.0.1 simply to reach SharePoint or OneDrive. Instead, it’s much easier to create DNS entries for each resource and let your internal resolver handle all the mapping for your users, since names are something humans are actually quite good at remembering.

Under the hood, DNS queries generally consist of a single UDP request from the client. The server can then return a single reply to the client. Since DNS requests are not very large, they can often be sent and received in a single packet. This makes support for UDP across our Zero Trust platform a key enabler to pulling the plug on your VPN.
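
As a simple illustration (the resolver address and hostname below are hypothetical), a lookup against an internal resolver is a single UDP request followed by a single UDP reply:

$ dig +short @10.0.0.2 sharepoint.corp.internal
192.168.0.1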

Thick Client Applications

Another common use case for UDP is thick client applications. One benefit of UDP we have discussed so far is that it is a lean protocol. It’s lean because the three-way handshake of TCP and other measures for reliability have been stripped out by design. In many cases, application developers still want these reliability controls, but are intimately familiar with their applications and know these controls could be better handled by tailoring them to their application. These thick client applications often perform critical business functions and must be supported end-to-end to migrate. As an example, legacy versions of Outlook may be implemented through thick clients where most of the operations are performed by the local machine, and only the sync interactions with Exchange servers occur over UDP.

Again, UDP support on our Zero Trust platform now means these types of applications are no reason to remain on your legacy VPN.

And more…

A huge portion of the world’s Internet traffic is transported over UDP. Often, people equate time-sensitive applications with UDP, where occasionally dropping packets would be better than waiting — but there are a number of other use cases, and we’re excited to be able to provide sweeping support.

How can I get started today?

You can already get started building your private network on Cloudflare with our tutorials and guides in our developer documentation. Below is the critical path. And if you’re already a customer, and you’re interested in joining the waitlist for UDP and Internal DNS access, please skip ahead to the end of this post!

Connecting your network to Cloudflare

First, you need to install cloudflared on your network and authenticate it with the command below:

cloudflared tunnel login

Next, you’ll create a tunnel with a user-friendly name to identify your network or environment.

cloudflared tunnel create acme-network

Finally, you’ll want to configure your tunnel with the IP/CIDR range of your private network. By doing this, you’re making the Cloudflare WARP agent aware that any requests to this IP range need to be routed to our new tunnel.

cloudflared tunnel route ip add 192.168.0.1/32

Then, all you need to do is run your tunnel!
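
For completeness, running the tunnel created above would look something like this (depending on your cloudflared version and configuration, private network routing may also need warp-routing enabled in the tunnel’s configuration file):

$ cloudflared tunnel run acme-network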

Connecting your users to your network

To connect your first user, start by downloading the Cloudflare WARP agent on the device they’ll be connecting from, then follow the steps in our installer.

Next, you’ll visit the Teams Dashboard and define who is allowed to access our network by creating an enrollment policy. This policy can be created under Settings > Devices > Device Enrollment. In the example below, you can see that we’re requiring users to be located in Canada and have an email address ending in @cloudflare.com.

Once you’ve created this policy, you can enroll your first device by clicking the WARP desktop icon on your machine and navigating to preferences > Account > Login with Teams.

Last, we’ll remove the IP range we added to our Tunnel from the Exclude list in Settings > Network > Split Tunnels. This will ensure this traffic is, in fact, routed to Cloudflare and then sent to our private network Tunnel as intended.

In addition to the tutorial above, we also have in-product guides in the Teams Dashboard which go into more detail about each step and provide validation along the way.

To create your first Tunnel, navigate to Access > Tunnels.

To enroll your first device into WARP, navigate to My Team > Devices.

What’s Next

We’re incredibly excited to release our waitlist today and even more excited to launch this feature in the coming weeks. We’re just getting started with private network Tunnels and plan to continue adding more support for Zero Trust access rules for each request to each internal DNS hostname after launch. We’re also working on a number of efforts to measure performance and to ensure we remain the fastest Zero Trust platform — making using us a delight for your users, compared to the pain of using a legacy VPN.

Cloudflare Tunnel for Content Teams

Post Syndicated from Alice Bracchi original https://blog.cloudflare.com/cloudflare-tunnel-for-content-teams/

A big part of the job of a technical writer is getting feedback on the content you produce. Writing and maintaining product documentation is a deeply collaborative and cyclical effort — through constant conversation with product managers and engineers, technical writers ensure the content is clear and serves the user in the most effective way. Collaboration with other technical writers is also important to keep the documentation consistent with Cloudflare’s content strategy.

So whether we’re documenting a new feature or overhauling a big portion of existing documentation, sharing our writing with stakeholders before it’s published is quite literally half the work.

In my experience as a technical writer, the feedback I’ve received has been exponentially more impactful when stakeholders could see my changes in context. This is especially true for bigger and more strategic changes. Imagine I’m changing the structure of an entire section of a product’s documentation, or shuffling the order of pages in the navigation bar. It’s hard to guess the impact of those changes just by looking at the markdown files.

We writers check those changes in context by building a development server on our local machines. But sharing what we see locally with our stakeholders has always been a pain point for us. We’ve sent screenshots (hardly a good idea). We’ve recorded our screens. We’ve asked stakeholders to check out our branches locally and build a development server on their own. Lately, we’ve added a GitHub action to our open-source cloudflare-docs repo that allows us to generate a preview link for all pull requests with a certain label. However, that requires us to open a pull request with our changes, and that is not ideal if we’re documenting a feature that’s yet to be announced, or if our work is still in its early stages.

So the question has always been: could there be a way for someone else to see what we see, as easily as we see it?

Enter Cloudflare Tunnel

I was working on a complete refresh of Cloudflare Tunnel’s documentation when I realized the product could very well answer that question for us as a technical writing team.

If you’re not familiar with the product, Cloudflare Tunnel provides a secure way to connect your local resources to the Cloudflare network without poking holes in your firewall. By running cloudflared in your environment, you can create outbound-only connections to Cloudflare’s edge, and ensure all traffic to your origins goes through Cloudflare and is protected from outside interference.

For our team, Cloudflare Tunnel could offer a way for our stakeholders to interact with what’s on our local environments in real-time, just like a customer would if the changes were published. To do that, we could expose our local environment to the edge through a tunnel, assign a DNS record to that tunnel, and then share that URL with our stakeholders.

So if each member in the technical writing team had their own tunnel that they could spin up every time they needed to get feedback, that would pretty much solve our long-standing problem.

Setting up the tunnel

To test out that this would work, I went ahead and tried it for myself.

First, I made sure to create a local branch of the cloudflare-docs repo, make local changes, and run a development server locally on port 8000.

Since I already had cloudflared installed on my machine, the next thing I needed to do was log into my team’s Cloudflare account, pick the zone I wanted to create tunnels for (I picked developers.cloudflare.com), and authorize Cloudflare Tunnel for that zone.

$ cloudflared login

Next, it was time to create the Named Tunnel.

$ cloudflared tunnel create alice
Tunnel credentials written to /Users/alicebracchi/.cloudflared/0e025819-6f12-4f49-8183-c678273feef4.json. cloudflared chose this file based on where your origin certificate was found. Keep this file secret. To revoke these credentials, delete the tunnel.

Created tunnel alice with id 0e025819-6f12-4f49-8183-c678273feef4

Alright, tunnel created. Next, I needed to assign a DNS record to it. I wanted it to be something readable and easily shareable with stakeholders (like abracchi.developers.cloudflare.com), so I ran the following command and specified the tunnel name first and then the desired subdomain:

$ cloudflared tunnel route dns alice abracchi

Next, I needed a way to tell the tunnel to serve traffic to my localhost:8000 port. For that, I created a configuration file in my default cloudflared directory and specified the following fields:

url: http://localhost:8000
tunnel: 0e025819-6f12-4f49-8183-c678273feef4
credentials-file: /Users/alicebracchi/.cloudflared/0e025819-6f12-4f49-8183-c678273feef4.json

Time to run the tunnel. The following command established connections between my origin and the Cloudflare edge, telling the tunnel to serve traffic to my origin according to the parameters I’d specified in the config file:

$ cloudflared tunnel --config /Users/alicebracchi/.cloudflared/config.yml run alice
2021-10-18T09:39:54Z INF Starting tunnel tunnelID=0e025819-6f12-4f49-8183-c678273feef4
2021-10-18T09:39:54Z INF Version 2021.9.2
2021-10-18T09:39:54Z INF GOOS: darwin, GOVersion: go1.16.5, GoArch: amd64
2021-10-18T09:39:54Z INF Settings: map[cred-file:/Users/alicebracchi/.cloudflared/0e025819-6f12-4f49-8183-c678273feef4.json credentials-file:/Users/alicebracchi/.cloudflared/0e025819-6f12-4f49-8183-c678273feef4.json url:http://localhost:8000]
2021-10-18T09:39:54Z INF Generated Connector ID: 90a7e3a9-9d59-4d26-9b87-4b94ebf4d2a0
2021-10-18T09:39:54Z INF cloudflared will not automatically update when run from the shell. To enable auto-updates, run cloudflared as a service: https://developers.cloudflare.com/argo-tunnel/reference/service/
2021-10-18T09:39:54Z INF Initial protocol http2
2021-10-18T09:39:54Z INF Starting metrics server on 127.0.0.1:64193/metrics
2021-10-18T09:39:55Z INF Connection 13bf4c0c-b35b-4f9a-b6fa-f0a3dd001951 registered connIndex=0 location=MAD
2021-10-18T09:39:56Z INF Connection 38510c22-5256-45f2-abf8-72f1207ca242 registered connIndex=1 location=LIS
2021-10-18T09:39:57Z INF Connection 9ab0ea06-b1cf-483c-bd48-64a067a87c39 registered connIndex=2 location=MAD
2021-10-18T09:39:58Z INF Connection df079efe-8246-4e93-85f5-10caf8b7c354 registered connIndex=3 location=LIS

And sure enough, at abracchi.developers.cloudflare.com, my teammates could see what I was seeing on localhost:8000.

Securing the tunnel

After creating the tunnel, I needed to make sure only people within Cloudflare could access that tunnel. As it was, anyone with access to abracchi.developers.cloudflare.com could see what was in my local environment. To fix this, I set up an Access self-hosted application by navigating to Access > Applications on the Teams Dashboard. For this application, I then created a policy that restricts access to the tunnel to a user group that includes only Cloudflare employees and requires authentication via Google or One-time PIN (OTP).

This makes applications like my tunnel easily shareable between colleagues, but also safe from potential vulnerabilities.

Et voilà!

Back to the Tunnels page, this is what the content team’s Cloudflare Tunnel setup looks like after each writer completed the process I’ve outlined above. Every writer has their personal tunnel set up and their local environment exposed to the Cloudflare Edge:

What’s next

The team is now seamlessly sharing visual content with their stakeholders, but there’s still room for improvement. Cloudflare Tunnel is just the first step towards making the feedback loop easier for everyone involved. We’re currently exploring ways we can capture integrated feedback directly at the URL that’s shared with the stakeholders, to avoid back-and-forth on separate channels.

We’re also looking into bringing in Cloudflare Pages to make the entire deployment process faster. Stay tuned for future updates, and in the meantime, check out our developer docs.

Getting Cloudflare Tunnels to connect to the Cloudflare Network with QUIC

Post Syndicated from Sudarsan Reddy original https://blog.cloudflare.com/getting-cloudflare-tunnels-to-connect-to-the-cloudflare-network-with-quic/

I work on Cloudflare Tunnel, which lets customers quickly connect their private services and networks through the Cloudflare network without having to expose their public IPs or ports through their firewall. Tunnel is managed for users by cloudflared, a tool that runs on the same network as the private services. It proxies traffic for these services via Cloudflare, and users can then access these services securely through the Cloudflare network.

Recently, I was trying to get Cloudflare Tunnel to connect to the Cloudflare network using a UDP protocol, QUIC. While doing this, I ran into an interesting connectivity problem unique to UDP. In this post I will talk about how I went about debugging this connectivity issue beyond the land of firewalls, and how some interesting differences between UDP and TCP came into play when sending network packets.

How does Cloudflare Tunnel work?

cloudflared works by opening several connections to different servers on the Cloudflare edge. Currently, these are long-lived TCP-based connections proxied over HTTP/2 frames. When Cloudflare receives a request to a hostname, it is proxied through these connections to the local service behind cloudflared.

While our HTTP/2 protocol mode works great, we’d like to improve a few things. First, TCP traffic sent over HTTP/2 is susceptible to Head of Line (HoL) blocking — this affects both HTTP traffic and traffic from WARP routing. Additionally, it is currently not possible to initiate communication from cloudflared’s HTTP/2 server in an efficient way. With the current Go implementation of HTTP/2, we could use Server-Sent Events, but this is not very useful in the scheme of proxying L4 traffic.

The upgrade to QUIC solves possible HoL blocking issues and opens up avenues that allow us to initiate communication from cloudflared to a different cloudflared in the future.

Naturally, QUIC required a UDP-based listener on our edge servers which cloudflared could connect to. We already connect to a TCP-based listener for the existing protocols, so this should be nice and easy, right?

Failed to dial to the edge

Things weren’t as straightforward as they first looked. I added a QUIC listener on the edge, and the ability for cloudflared to connect to this new UDP-based listener. I tried to run my brand new QUIC tunnel and this happened.

$  cloudflared tunnel run --protocol quic my-tunnel
2021-09-17T18:44:11Z ERR Failed to create new quic connection, err: failed to dial to edge: timeout: no recent network activity

cloudflared wasn’t even establishing a connection to the edge. I started looking at the obvious places first. Did I add a firewall rule allowing traffic to this port? Check. Did I have iptables rules ACCEPTing or DROPping appropriate traffic for this port? Check. They seemed to be in order. So what else could I do?
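
For reference, those checks amount to something along these lines on the server (the port is the one that appears in the capture below; the rules themselves are illustrative):

# is there an iptables rule covering the UDP listener's port?
$ sudo iptables -L INPUT -n | grep 7844

# if not, accept UDP traffic to that port
$ sudo iptables -A INPUT -p udp --dport 7844 -j ACCEPT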

tcpdump all the packets

I started by capturing UDP traffic on the machine my server was running on to see what could be happening.

$  sudo tcpdump -n -i eth0 port 7844 and udp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
14:44:27.742629 IP 173.13.13.10.50152 > 198.41.200.200.7844: UDP, length 1252
14:44:27.743298 IP 203.0.113.0.7844 > 173.13.13.10.50152: UDP, length 37

Looking at this tcpdump helped me understand why I had no connectivity! Not only was this port getting UDP traffic, but I was also seeing traffic flow out. But there seemed to be something strange afoot. Incoming packets were being sent to 198.41.200.200:7844, while responses were being sent back from 203.0.113.0:7844 instead (this is an example IP used for illustration purposes).

Why is this a problem? If a host (in this case, the server) chooses an address from a network unable to communicate with a public Internet host, it is likely that the return half of the communication will never arrive. But wait a minute. Why is some other IP getting prioritized over a source address my packets were already being sent to? Let’s take a deeper look at some IP addresses. (Note that I’ve deliberately oversimplified and scrambled results to minimally illustrate the problem)

$  ip addr list
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1600 qdisc noqueue state UP group default qlen 1000
inet 203.0.113.0/32 scope global eth0
inet 198.41.200.200/32 scope global eth0 

$ ip route show
default via 203.0.113.0 dev eth0

So this was clearly why the server was working fine on my machine but not on the Cloudflare edge servers. It looks like I have multiple IPs on the interface my service is bound to. The IP that is the default route is being sent back as the source address of the packet.
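
A quick way to confirm which source address the kernel will pick for a given destination is ip route get; for the setup above, the src field in its output would report the default-route address rather than the address the packets actually arrived on:

$ ip route get 173.13.13.10
# the "src" field in the output is the source address the kernel would choose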

Why does this work for TCP but not UDP?

Connection-oriented protocols, like TCP, initiate a connection (connect()) with a three-way handshake. The kernel therefore maintains a state about ongoing connections and uses this to determine the source IP address at the time of a response.

Because UDP (unless SOCK_SEQPACKET is involved) is connectionless, the kernel cannot maintain state like TCP does. The recvfrom system call is invoked on the server side and tells us who the data comes from. Unfortunately, recvfrom does not tell us which IP this data was addressed to. Therefore, when the UDP server invokes the sendto system call to respond to the client, we can only tell it which address to send the data to. The responsibility of determining the source IP address then falls to the kernel. The kernel has certain heuristics that it uses to determine the source address. These heuristics may or may not work, and in the ip routes example above, they did not. The kernel naturally (and wrongly) picks the address of the default route to respond with.

Telling the kernel what to do

I had to rely on my application to set the source address explicitly and therefore not rely on kernel heuristics.

Linux has some generic I/O system calls, namely recvmsg and sendmsg. Their function signatures allow us to read and write additional out-of-band control data, which is where the source address can be passed. This control information is passed via the msghdr struct’s msg_control field.

ssize_t sendmsg(int socket, const struct msghdr *message, int flags);
ssize_t recvmsg(int socket, struct msghdr *message, int flags);

struct msghdr {
    void            *msg_name;       /* Socket name */
    int              msg_namelen;    /* Length of name */
    struct iovec    *msg_iov;        /* Data blocks */
    __kernel_size_t  msg_iovlen;     /* Number of blocks */
    void            *msg_control;    /* Per protocol magic (eg BSD file descriptor passing) */
    __kernel_size_t  msg_controllen; /* Length of cmsg list */
    unsigned int     msg_flags;
};

We can now copy the control information we’ve gotten from recvmsg back when calling sendmsg, providing the kernel with information about the source address. The library I used (https://github.com/lucas-clemente/quic-go) had a recent update that did exactly this! I pulled the changes into my service and gave it a spin.

But alas. It did not work! A quick tcpdump showed that the same source address was being sent back. It seemed clear from reading the source code that the recvmsg and sendmsg were being called with the right values. It did not make sense.

So I had to see for myself if these system calls were being made.

strace all the system calls

strace is an extremely useful tool that tracks all system calls and signals sent/received by a process. Here’s what it had to say. I’ve removed all the information not relevant to this specific issue.
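
The trace below came from attaching to the running process with an invocation along these lines (the exact flags are illustrative, not taken from the original session):

$ sudo strace -f -tt -e trace=recvmsg,sendmsg -p <pid-of-the-server>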

17:39:09.130346 recvmsg(3, {msg_name={sa_family=AF_INET6,
sin6_port=htons(35224), inet_pton(AF_INET6, "::ffff:171.54.148.10", 
&sin6_addr), sin6_flowinfo=htonl(0), sin6_scope_id=0}, msg_namelen=112->28, msg_iov=
[{iov_base="_\5S\30\273]\[email protected]\34\24\322\243{2\361\312|\325\n\1\314\316`\3
03\250\301X\20", iov_len=1452}], msg_iovlen=1, msg_control=[{cmsg_len=36, 
cmsg_level=SOL_IPV6, cmsg_type=0x32}, {cmsg_len=28, cmsg_level=SOL_IP, 
cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=if_nametoindex("eth0"),
ipi_spec_dst=inet_addr("198.41.200.200"),ipi_addr=inet_addr("198.41.200.200")}},
{cmsg_len=17, cmsg_level=SOL_IP, 
cmsg_type=IP_TOS, cmsg_data=[0]}], msg_controllen=96, msg_flags=0}, 0) = 28 <0.000007>
17:39:09.165160 sendmsg(3, {msg_name={sa_family=AF_INET6, 
sin6_port=htons(35224), inet_pton(AF_INET6, "::ffff:171.54.148.10", 
&sin6_addr), sin6_flowinfo=htonl(0), sin6_scope_id=0}, msg_namelen=28, 
msg_iov=[{iov_base="Oe4\37:3\344 &\243W\10~c\\\316\2640\255*\231 
OY\326b\26\300\264&\33\""..., iov_len=1302}], msg_iovlen=1, msg_control=
[{cmsg_len=28, cmsg_level=SOL_TCP, cmsg_type=0x8}], msg_controllen=28, 
msg_flags=0}, 0) = 1302 <0.000054>

Let’s start with recvmsg. We can clearly see that the ipi_addr for the source is being passed correctly: ipi_addr=inet_addr("198.41.200.200"). This part works as expected. Looking at sendmsg almost instantly tells us where the problem is. The field we want, ipi_spec_dst, is not being set as we make this system call. So the kernel continues to make wrong guesses as to what the source address may be.

This turned out to be a bug where the library was using IPROTO_TCP instead of IPPROTO_IPV4 as the control message level while making the sendmsg call. Was that it? Seemed a little anticlimactic. I submitted a slightly more typesafe fix and sure enough, straces now showed me what I was expecting to see.

18:22:08.334755 sendmsg(3, {msg_name={sa_family=AF_INET6, 
sin6_port=htons(37783), inet_pton(AF_INET6, "::ffff:171.54.148.10", 
&sin6_addr), sin6_flowinfo=htonl(0), sin6_scope_id=0}, msg_namelen=28, 
msg_iov=
[{iov_base="Ki\20NU\242\211Y\254\337\3107\224\201\233\242\2647\245}6jlE\2
70\227\3023_\353n\364"..., iov_len=33}], msg_iovlen=1, msg_control=
[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data=
{ipi_ifindex=if_nametoindex("eth0"), 
ipi_spec_dst=inet_addr("198.41.200.200"),ipi_addr=inet_addr("0.0.0.0")}}
], msg_controllen=32, msg_flags=0}, 0) =
33 <0.000049>

cloudflared is now able to connect with UDP (QUIC) to the Cloudflare network from anywhere in the world!

$  cloudflared tunnel --protocol quic run sudarsans-tunnel
2021-09-21T11:37:30Z INF Starting tunnel tunnelID=a72e9cb7-90dc-499b-b9a0-04ee70f4ed78
2021-09-21T11:37:30Z INF Version 2021.9.1
2021-09-21T11:37:30Z INF GOOS: darwin, GOVersion: go1.16.5, GoArch: amd64
2021-09-21T11:37:30Z INF Settings: map[p:quic protocol:quic]
2021-09-21T11:37:30Z INF Initial protocol quic
2021-09-21T11:37:32Z INF Connection 3ade6501-4706-433e-a960-c793bc2eecd4 registered connIndex=0 location=AMS

While the programming bug behind this issue was a trivial one, the journey of systematically tracking it down, and learning along the way how Linux handles UDP internally, turned out to be very rewarding. It also reaffirmed my belief that tcpdump and strace are invaluable tools in anybody's arsenal when debugging network problems.

What’s next?

You can give this a try with the latest cloudflared release at https://github.com/cloudflare/cloudflared/releases/latest. Just remember to set the protocol flag to quic. We plan to leverage this new mode to roll out some exciting new features for Cloudflare Tunnel. So upgrade away and keep watching this space for more information on how you can take advantage of this.

Tunnel: Cloudflare’s Newest Homeowner

Post Syndicated from Abe Carryl original https://blog.cloudflare.com/observe-and-manage-cloudflare-tunnel/

Tunnel: Cloudflare’s Newest Homeowner

Cloudflare Tunnel connects your infrastructure to Cloudflare. Your team runs a lightweight connector in your environment, cloudflared, and services can reach Cloudflare and your audience through an outbound-only connection without the need for opening up holes in your firewall.

Tunnel: Cloudflare’s Newest Homeowner

Whether the services are internal apps protected with Zero Trust policies, websites running in Kubernetes clusters in a public cloud environment, or a hobbyist project on a Raspberry Pi — Cloudflare Tunnel provides a stable, secure, and highly performant way to serve traffic.

Starting today, with our new UI in the Cloudflare for Teams Dashboard, users who deploy and manage Cloudflare Tunnel at scale now have easier visibility into their tunnels’ status, routes, uptime, connectors, cloudflared version, and much more. On the Teams Dashboard you will also find an interactive guide that walks you through setting up your first tunnel.  

Getting Started with Tunnel

Tunnel: Cloudflare’s Newest Homeowner

We wanted to start by making the tunnel onboarding process more transparent for users. We understand that not all users are intimately familiar with the command line, nor are they always deploying Tunnel in the environment or OS they're most comfortable with. To alleviate that burden, we designed a comprehensive onboarding guide with pathways for macOS, Windows, and Linux for our two primary onboarding flows:

  1. Connecting an origin to Cloudflare
  2. Connecting a private network via WARP to Tunnel

Our new onboarding guide walks through each command required to create, route, and run your tunnel successfully while also highlighting relevant validation commands to serve as guardrails along the way. Once completed, you’ll be able to view and manage your newly established tunnels.
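For anyone who prefers to follow along in a terminal, the flow the guide walks through looks roughly like this (the tunnel name and hostname below are placeholders):

$ cloudflared tunnel login
$ cloudflared tunnel create my-tunnel
$ cloudflared tunnel route dns my-tunnel app.example.com
$ cloudflared tunnel run my-tunnel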

Managing your tunnels

Tunnel: Cloudflare’s Newest Homeowner

When thinking about the new user interface for Tunnel, we wanted to concentrate our efforts on how users gain visibility into their tunnels today. It was important that we provide the same level of observability, but through the lens of a visual, interactive dashboard. Specifically, we strove to build a familiar experience like the one a user may see if they were to run cloudflared tunnel list to show all of their tunnels, or cloudflared tunnel info if they wanted to better understand the connection status of a specific tunnel.
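For reference, those CLI equivalents look like this (the tunnel name is a placeholder):

$ cloudflared tunnel list
$ cloudflared tunnel info my-tunnel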

Tunnel: Cloudflare’s Newest Homeowner

In the interface, you can quickly search by name or filter by name, status, uptime, or creation date. This allows users to easily identify and manage the tunnels they need, when they need them. We also included other key metrics such as Status and Uptime.

A tunnel’s status depends on the health of its connections:

  • Active: This means your tunnel is running and has a healthy connection to the Cloudflare network.
  • Inactive: This means your tunnel is not running and is not connected to Cloudflare.
  • Degraded: This means one or more of your four long-lived TCP connections to Cloudflare have been disconnected, but traffic is still being served to your origin.

A tunnel’s uptime is also calculated by the health of its connections. We perform this calculation by determining the UTC timestamp of when the first (of four) long-lived TCP connections is established with the Cloudflare Edge. In the event this single connection is terminated, we will continue tracking uptime as long as one of the other three connections continues to serve traffic. If no connections are active, Uptime will reset to zero.

Tunnel Routes and Connectors

Last year, shortly after the announcement of Named Tunnels, we released a new feature that allowed users to utilize the same Named Tunnel to serve traffic to many different services through the use of Ingress Rules. In the new UI, if you’re running your tunnels in this manner, you’ll be able to see these various services reflected by hovering over the route’s value in the dashboard. Today, this includes routes for DNS records, Load Balancers, and Private IP ranges.

Even more recently, we announced highly available and highly scalable instances of cloudflared, known more commonly as “cloudflared replicas.” To view your cloudflared replicas, select and expand a tunnel. Then you will identify how many cloudflared replicas you’re running for a given tunnel, as well as the corresponding connection status, data center, IP address, and version. And ultimately, when you’re ready to delete a tunnel, you can do so directly from the dashboard as well.

What’s next

Moving forward, we’re excited to begin incorporating more Cloudflare Tunnel analytics into our dashboard. We also want to continue making Cloudflare Tunnel the easiest way to connect to Cloudflare. In order to do that, we will focus on improving our onboarding experience for new users and look forward to bringing more of that functionality into the Teams Dashboard. If you have things you’re interested in having more visibility around in the future, let us know below!

Quick Tunnels: Anytime, Anywhere

Post Syndicated from Rishabh Bector original https://blog.cloudflare.com/quick-tunnels-anytime-anywhere/

Quick Tunnels: Anytime, Anywhere

Quick Tunnels: Anytime, Anywhere

My name is Rishabh Bector, and this summer, I worked as a software engineering intern on the Cloudflare Tunnel team. One of the things I built was quick Tunnels and before departing for the summer, I wanted to write a blog post on how I developed this feature.

Over the years, our engineering team has worked hard to continually improve the underlying architecture through which we serve our Tunnels. However, the core use case has stayed largely the same. Users can implement Tunnel to establish an encrypted connection between their origin server and Cloudflare’s edge.

This connection is initiated by installing a lightweight daemon on your origin, to serve your traffic to the Internet without the need to poke holes in your firewall or create intricate access control lists. Though we’ve always centered around the idea of being a connector to Cloudflare, we’ve also made many enhancements behind the scenes to the way in which our connector operates.

Typically, users run into a few speed bumps before being able to use Cloudflare Tunnel. Before they can create or route a tunnel, users need to authenticate their unique token against a zone on their account. This means in order to simply spin up a Tunnel testing environment, users need to first create an account, add a website, change their nameservers, and wait for DNS propagation.

Starting today, we’re excited to fix that. Cloudflare Tunnel now supports a free version that includes all the latest features and does not require any onboarding to Cloudflare. With today’s change, you can begin experimenting with Tunnel in five minutes or less.

Introducing Quick Tunnels

When administrators start using Cloudflare Tunnel, they need to perform four specific steps:

  1. Create the Tunnel
  2. Configure the Tunnel and what services it will represent
  3. Route traffic to the Tunnel
  4. And finally… run the Tunnel!

These steps give you control over how your services connect to Cloudflare, but they are also a chore. Today's change, which we are calling quick Tunnels, not only removes those onboarding requirements, it also condenses the four steps above into a single one.

If you have a service running locally that you want to share with teammates or an audience, you can use this single command to connect your service to Cloudflare’s edge. First, you need to install the Cloudflare connector, a lightweight daemon called cloudflared. Once installed, you can run the command below.

cloudflared tunnel

Quick Tunnels: Anytime, Anywhere

When run, cloudflared will generate a URL that consists of a random subdomain of the website trycloudflare.com and point traffic to localhost port 8080. If you have a web service running at that address, users who visit the subdomain generated will be able to visit your web service through Cloudflare’s network.

Configuring Quick Tunnels

We built this feature with the single command in mind, but if you have services that are running at different default locations, you can optionally configure your quick Tunnel to support that.

One example is if you’re building a multiplayer game that you want to share with friends. If that game is available locally on your origin, or even your laptop, at localhost:3000, you can run the command below.

cloudflared tunnel --url localhost:3000

You can do this with IP addresses or URLs, as well. Anything that cloudflared can reach can be made available through this service.
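For instance, if the game were reachable at a LAN address rather than on the same machine, something like this would work just as well (the address is purely illustrative):

cloudflared tunnel --url http://192.168.0.42:3000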

How does it work?

Cloudflare quick Tunnels is powered by Cloudflare Workers, giving us a serverless compute deployment that puts Tunnel management in a Cloudflare data center closer to you instead of a centralized location.

When you run the command cloudflared tunnel, your instance of cloudflared initiates an outbound-only connection to Cloudflare. Since that connection was initiated without any account details, we treat it as a quick Tunnel.

A Cloudflare Worker, which we call the quick Tunnel Worker, receives a request that a new quick Tunnel should be created. The Worker generates the random subdomain and returns that to the instance of cloudflared. That instance of cloudflared can now establish a connection for that subdomain.

Meanwhile, a complementary service running on Cloudflare’s edge receives that subdomain and the identification number of the instance of cloudflared. That service uses that information to create a DNS record in Cloudflare’s authoritative DNS which maps the randomly-generated hostname to the specific Tunnel you created.

The deployment also relies on the Workers Cron Trigger feature to perform clean up operations. On a regular interval, the Worker looks for quick Tunnels which have been disconnected for more than five minutes. Our Worker classifies these Tunnels as abandoned and proceeds to delete them and their associated DNS records.

What about Zero Trust policies?

By default, all the quick Tunnels that you create are available on the public Internet at the randomly generated URL. While this might be fine for some projects and tests, other use cases require more security.

Quick Tunnels: Anytime, Anywhere

If you need to add additional Zero Trust rules to control who can reach your services, you can use Cloudflare Access alongside Cloudflare Tunnel. That use case does require creating a Cloudflare account and adding a zone to Cloudflare, but we’re working on ideas to make that easier too.

Where should I notice improvements?

We first launched a version of Cloudflare Tunnel that did not require accounts over two years ago. While we've been thrilled that customers have used this for their projects, Cloudflare Tunnel has evolved significantly since then. Specifically, Cloudflare Tunnel now relies on a new architecture that is more redundant and stable than the one used by that older launch. While all Tunnels that migrated to this new architecture, which we call Named Tunnels, enjoyed those benefits, users of the account-free option were left behind.

Today’s announcement brings that stability to quick Tunnels. Tunnels are now designed to be long-lived, persistent objects. Unless you delete them, Tunnels can live for months, an improvement over the average lifespan measured in hours before connectivity issues disrupted a Tunnel in the older architecture.

These quick Tunnels run on that same, resilient architecture, not only expediting time-to-value, but also improving overall tunnel quality of life.

What’s next?

Today’s quick Tunnels add a powerful feature to Cloudflare Tunnels: the ability to create a reliable, resilient tunnel in a single command, without the hassle of creating an account first. We’re excited to help your team build and connect services to Cloudflare’s network and on to your audience or teammates. If you have additional questions, please share them in this community post here.

Building a Pet Cam using a Raspberry Pi, Cloudflare Tunnels and Teams

Post Syndicated from Mohak Kataria original https://blog.cloudflare.com/building-a-pet-cam-using-a-raspberry-pi-cloudflare-tunnels-and-teams/

Building a Pet Cam using a Raspberry Pi, Cloudflare Tunnels and Teams

Building a Pet Cam using a Raspberry Pi, Cloudflare Tunnels and Teams

I adopted Ziggy in late 2020. It took me quite a while to get used to his routine and mix it with mine. He consistently jumped on the kitchen counter in search of food, albeit only when no one was around. And I only found out when he tossed the ceramic butter box. It shattered and made a loud bang in the late hours of the night. Thankfully, no one was asleep yet.

This got me thinking that I should keep an eye on his mischievous behaviour, even when I’m not physically at home. I briefly considered buying a pet cam, but I remembered I had bought a Raspberry Pi a few months before. It was hardly being used, and it had a case (like this) allowing a camera module to be added. I hadn’t found a use for the camera module — until now.

This was a perfect weekend project: I would set up my own pet cam, connect it to the Internet, and make it available for me to check from anywhere in the world. I also wanted to ensure that only I could access it, and that it had an easy way to log in, ideally using my Google account. The solution? Cloudflare Tunnel and Teams. Cloudflare would help me expose a service running in an internal network using Tunnel, while providing a security solution on top of it to keep it secure. Teams, on the other hand, would help me by adding access control in the form of Google authentication.

Building a Pet Cam using a Raspberry Pi, Cloudflare Tunnels and Teams

So all that was left to do was configure my Raspberry Pi to run the camera as a web service. That weekend, I started researching and made a list of things I needed:

  • A Raspberry Pi with a compatible camera module. I used a Raspberry Pi 4 model B with camera module v2.
  • Linux knowledge.
  • A domain name I could make changes to.
  • Understanding of how DNS works.
  • A Cloudflare account with Cloudflare for Teams+Tunnel access.
  • Internet connection.

In this blog post, I’ll walk you through the process I followed to set everything up for the pet cam. To keep things simple and succinct, I will not cover how to set up your Raspberry Pi, but you should make sure it has Internet access and that you can run shell commands on it, either via SSH or using a VNC connection.

Setup

The first thing we need to do is connect the camera module to the Raspberry Pi. For more detailed instructions, follow the official guide, steps 1 to 3.

Building a Pet Cam using a Raspberry Pi, Cloudflare Tunnels and Teams

After setting up the camera and testing that it works, we need to set it up as a camera with a web server. This is so we can access it at a URL such as https://192.168.0.2:8080 within the local network, to which the Raspberry Pi is also connected. To do that, we will use Motion, a program for setting up the camera module v2 as a web server.

To install Motion, input these commands:

$ sudo apt-get update && sudo apt-get upgrade
$ sudo apt install autoconf automake build-essential pkgconf libtool git libzip-dev libjpeg-dev gettext libmicrohttpd-dev libavformat-dev libavcodec-dev libavutil-dev libswscale-dev libavdevice-dev default-libmysqlclient-dev libpq-dev libsqlite3-dev libwebp-dev
$ sudo wget https://github.com/Motion-Project/motion/releases/download/release-4.3.1/pi_buster_motion_4.3.1-1_armhf.deb
$ sudo dpkg -i pi_buster_motion_4.3.1-1_armhf.deb

The commands above update the local package index, upgrade installed packages, and then download and install version 4.3.1 of Motion from the project's GitHub releases.

Next, we need to configure Motion using:

$ sudo vim /etc/motion/motion.conf
# Find the following lines and update them to following:
# daemon on
# stream_localhost off
# save and exit

After that, we need to set Motion up as a daemon, so it runs whenever the system is restarted:

$ sudo vim /etc/default/motion
# and change the following line 
# start_motion_daemon=yes
# save and exit and run the next command
$ sudo service motion start

Great. Now that we have Motion set up, we can see the live feed from our camera in a browser on the Raspberry Pi at the default URL: http://localhost:8081 (the port can be changed in the config edit step above). Alternatively, we can open it on another machine within the same network by replacing localhost with the IP address of the Raspberry Pi.

For now, the camera web server is available only within our local network. However, I wanted to keep an eye on Ziggy no matter where I am, as long as I have Internet access and a browser. This is perfect for Cloudflare Tunnel. An alternative would be to open a port in the firewall on the router in my home network, but I hate the idea of having to mess with the router configuration. I am not really an expert at that, and if I leave a backdoor open into my internal network, things can get scary quickly!

The Cloudflare Tunnel documentation takes us through its installation. The only issue is that the architecture of the Raspberry Pi is based on armv7l (32-bit) and there is no package for it in the remote repositories. We could build cloudflared from source if we wanted as it’s an open source project, but an easier route is to wget it.

$ wget https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-arm.tgz
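# extract the archive and put the binary somewhere on the PATH
# (the archive is assumed to contain a single cloudflared binary)
$ tar -xvzf cloudflared-stable-linux-arm.tgz
$ sudo mv cloudflared /usr/local/bin/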
# a quick check of version shall confirm if it installed correctly
$ cloudflared -v 
cloudflared version 2021.5.10 (built 2021-05-26-1355 UTC)

Let’s set up our Tunnel now:

$ cloudflared tunnel create camera
Tunnel credentials written to /home/pi/.cloudflared/5f8182ba-906c-4910-98c3-7d042bda0594.json. cloudflared chose this file based on where your origin certificate was found. Keep this file secret. To revoke these credentials, delete the tunnel.

Created tunnel camera with id 5f8182ba-906c-4910-98c3-7d042bda0594

Now we need to configure the Tunnel to forward the traffic to the Motion webcam server:

$ vim /home/pi/.cloudflared/config.yaml 
# And add the following.
tunnel: 5f8182ba-906c-4910-98c3-7d042bda0594
credentials-file: /home/pi/.cloudflared/5f8182ba-906c-4910-98c3-7d042bda0594.json 

ingress:
  - hostname: camera.imohak.com
    service: http://0.0.0.0:9095
  - service: http_status:404

The tunnel UUID should be the one created with the command above, as should the path of the credentials file. The ingress rules should reference the domain we want to use; in my case I have set up camera.imohak.com as my hostname and a 404 as the fallback rule.
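Before running the tunnel, the ingress rules can be sanity-checked. cloudflared ships with subcommands for this; the invocations below reflect the CLI as I understand it at the time of writing:

# check the config file's ingress rules for problems
$ cloudflared tunnel ingress validate
# see which ingress rule a given request would match
$ cloudflared tunnel ingress rule https://camera.imohak.com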

Next, we need to route the DNS to this Tunnel:

$ cloudflared tunnel route dns 5f8182ba-906c-4910-98c3-7d042bda0594 camera.imohak.com

This adds a DNS CNAME record, which can be verified from the Cloudflare dashboard as shown here:

Building a Pet Cam using a Raspberry Pi, Cloudflare Tunnels and Teams

Let’s test the Tunnel!

$ cloudflared tunnel run camera
2021-06-15T21:44:41Z INF Starting tunnel tunnelID=5f8182ba-906c-4910-98c3-7d042bda0594
2021-06-15T21:44:41Z INF Version 2021.5.10
2021-06-15T21:44:41Z INF GOOS: linux, GOVersion: go1.16.3, GoArch: arm
2021-06-15T21:44:41Z INF Settings: map[cred-file:/home/pi/.cloudflared/5f8182ba-906c-4910-98c3-7d042bda0594.json credentials-file:/home/pi/.cloudflared/5f8182ba-906c-4910-98c3-7d042bda0594.json]
2021-06-15T21:44:41Z INF cloudflared will not automatically update when run from the shell. To enable auto-updates, run cloudflared as a service: https://developers.cloudflare.com/argo-tunnel/reference/service/
2021-06-15T21:44:41Z INF Generated Connector ID: 7e38566e-0d33-426d-b64d-326d0592486a
2021-06-15T21:44:41Z INF Initial protocol http2
2021-06-15T21:44:41Z INF Starting metrics server on 127.0.0.1:43327/metrics
2021-06-15T21:44:42Z INF Connection 6e7e0168-22a4-4804-968d-0674e4c3b4b1 registered connIndex=0 location=DUB
2021-06-15T21:44:43Z INF Connection fc83017d-46f9-4cee-8fc6-e4ee75c973f5 registered connIndex=1 location=LHR
2021-06-15T21:44:44Z INF Connection 62d28eee-3a1e-46ef-a4ba-050ae6e80aba registered connIndex=2 location=DUB
2021-06-15T21:44:44Z INF Connection 564164b1-7d8b-4c83-a920-79b279659491 registered connIndex=3 location=LHR

Next, we go to the browser and open the URL camera.imohak.com.

Voilà. Or, not quite yet.

Locking it Down

We still haven’t put any requirement for authentication on top of the server. Right now, anyone who knows about the domain can just open it and look at what is happening inside my house. Frightening, right? Thankfully we have two options now:

Building a Pet Cam using a Raspberry Pi, Cloudflare Tunnels and Teams
  1. Use Motion's inbuilt authentication mechanisms. We won't choose this option: it's yet another username and password to remember (and forget), and who knows whether a vulnerability will one day be found in the way Motion authenticates, leaking my credentials? What we really want is SSO using Google, which is quick and easy to use and gives us a secure login backed by Google credentials.
  2. Use Cloudflare Access. Access gives us the ability to create policies based on IP addresses and email addresses, and it lets us integrate different types of authentication methods, such as OTP or Google. In our case, we require authentication through Google.

To take advantage of this Cloudflare Access functionality, the first step is to set up Cloudflare for Teams. Visit https://dash.teams.cloudflare.com/, follow the setup guide, and choose a team name (imohak in my case).

Building a Pet Cam using a Raspberry Pi, Cloudflare Tunnels and Teams
Building a Pet Cam using a Raspberry Pi, Cloudflare Tunnels and Teams

After this, we have two things left to do: add a login method and add an application. Let’s cover how we add a login method first. Navigate to Configuration > Authentication and click on +Add, under the Login tab. The Dashboard will show us a list of identity providers to choose from. Select Google — follow this guide for a walkthrough of how to set up a Google Cloud application, get a ClientID and Client Secret, and use them to configure the identity provider in Teams.

After adding a login method and testing it, we should see a confirmation page like this:

Building a Pet Cam using a Raspberry Pi, Cloudflare Tunnels and Teams
Building a Pet Cam using a Raspberry Pi, Cloudflare Tunnels and Teams

The last thing we need to do is to add the pet-cam subdomain as an application protected behind Teams. This enables us to enforce the Google authentication requirement we have configured before. To do that, navigate to Access > Applications, click on Add an application, and select Self-hosted.

On the next page, we specify a name, session duration and also the URL at which the application should be accessible. We add the subdomain camera.imohak.com and also name the app ‘camera’ to keep it simple.

Building a Pet Cam using a Raspberry Pi, Cloudflare Tunnels and Teams

Next, we select Google as an identity provider for this application. Given that we are not choosing multiple authentication methods, I can also enable Instant Auth — this means we don’t need to select Google when we open the URL.

Building a Pet Cam using a Raspberry Pi, Cloudflare Tunnels and Teams

Now we add policies to the application. Here, we add an email check so that, after Google authentication, only the specified email address is able to access the URL. If needed, we can choose to configure other, more complex rules. At this point, we click on Next and finish the setup.

Building a Pet Cam using a Raspberry Pi, Cloudflare Tunnels and Teams

The Result

The setup is now complete. Time to test everything! After opening the browser and entering my URL, voilà. Now, when I visit this URL, I see a Google authentication page and, after logging in, Ziggy eating his dinner.

Building a Pet Cam using a Raspberry Pi, Cloudflare Tunnels and Teams

Modernizing a familiar approach to REST APIs, with PostgreSQL and Cloudflare Workers

Post Syndicated from Kristian Freeman original https://blog.cloudflare.com/modernizing-a-familiar-approach-to-rest-apis-with-postgresql-and-cloudflare-workers/

Modernizing a familiar approach to REST APIs, with PostgreSQL and Cloudflare Workers

Postgres is a ubiquitous open-source database technology. It contains a vast number of features and offers rock-solid reliability. It’s also one of the most popular SQL database tools in the industry. As the industry builds “modern” developer experience tools—real-time and highly interactive—Postgres has also served as a great foundation. Projects like Hasura, which offers a real-time GraphQL engine, and Supabase, an open-source Firebase alternative, use Postgres under the hood. This makes Postgres a technology that every developer should know, and consider using in their applications.

For many developers, REST APIs serve as the primary way we interact with our data. Language-specific libraries like pg allow developers to connect with Postgres in their code, and directly interact with their databases. Yet in almost every case, developers reinvent the wheel, building the same connection logic on an app-by-app basis.

Many developers building applications with Cloudflare Workers, our serverless functions platform, have asked how they can use Postgres in Workers functions. Today, we’re releasing a new tutorial for Workers that shows how to connect to Postgres inside Workers functions. Built on PostgREST, you’ll write a REST API that communicates directly with your database, on the edge.

This means that you can entirely build applications on Cloudflare’s edge — using Workers as a performant and globally-distributed API, and Cloudflare Pages, our Jamstack deployment platform, as the host for your frontend user interface. With Workers, you can add new API endpoints and handle authentication in front of your database without needing to alter your Postgres configuration. With features like Workers KV and Durable Objects, Workers can provide globally-distributed caching in front of your Postgres database. Features like WebSockets can be used to build real-time interactions for your applications, without having to migrate from Postgres to a new database-as-a-service platform.

PostgREST is an open-source tool that generates a standards-compliant REST API for your Postgres databases. Many growing database-as-a-service startups like Retool and Supabase use PostgREST under the hood. PostgREST is fast and has great defaults, allowing you to access your Postgres data using standard REST conventions.

It’s great to be able to access your database directly from Workers, but do you really want to expose your database directly to the public Internet? Luckily, Cloudflare has a solution for this, and it works great with PostgREST: Cloudflare Tunnel. Cloudflare Tunnel is one of my personal favorite products at Cloudflare. It creates a secure tunnel between your local server and the Cloudflare network. We want to expose our PostgREST endpoint, without making our entire database available on the public internet. Cloudflare Tunnel allows us to do that securely.
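As a rough sketch of what that looks like in practice (not the exact steps from the tutorial, and assuming PostgREST is listening on its default port, 3000), even a quick Tunnel is enough to put a local PostgREST endpoint on Cloudflare's network:

$ cloudflared tunnel --url http://localhost:3000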

Modernizing a familiar approach to REST APIs, with PostgreSQL and Cloudflare Workers

By using PostgREST with Postgres, we can build REST API-based applications. In particular, it’s an excellent fit for Cloudflare Workers, our serverless function platform. Workers is a great place to build REST APIs. With the open-source JavaScript library postgrest-js, we can interact with a PostgREST endpoint from inside our Workers function, using simple JS-based primitives.

By the way — if you haven’t built a REST API with Workers yet, check out our free video course with Egghead: “Building a Serverless API with Cloudflare Workers”.

Scaling applications built on Postgres is an incredibly common problem that developers face. Often, this means duplicating your Postgres database and distributing reads between your primary database, and a fleet of “read replicas”. With PostgREST and Workers, we can begin to explore a different approach to solving the scaling problem. Workers’ unique architecture allows us to deploy hyper-performant functions in front of Postgres databases. With tools like Workers KV and Durable Objects, exposed in Workers as basic JavaScript APIs, we can build intelligent caches for our databases, without sacrificing performance or developer experience.

If you’d like to learn more about building REST APIs in Cloudflare Workers using PostgREST, check out our new tutorial! We’ve also provided two open-source libraries to help you get started. cloudflare/postgres-postgrest-cloudflared-example helps you set up a Cloudflare Tunnel-backed Postgres + PostgREST endpoint. postgrest-worker-example is an example of using postgrest-js inside of Cloudflare Workers, to build REST APIs with your Postgres databases.

Modernizing a familiar approach to REST APIs, with PostgreSQL and Cloudflare Workers

With postgrest-js, you can build dynamic queries and request data from your database using the JS primitives you know and love:

// Get all users with at least 100 followers
const { data: users, error } = await client
  .from('users')
  .select('*')
  .gte('followers', 100)

You can also join our Cloudflare Developers Discord community! Learn more about what you can build with Cloudflare Workers, and meet our wonderful community of developers from around the world. Get your invite link here.