
Welcome to CIO Week and the future of corporate networks

Post Syndicated from Annika Garbers original https://blog.cloudflare.com/welcome-to-cio-week/

The world of a CIO has changed — today’s corporate networks look nothing like those of even five or ten years ago — and these changes have created gaps in visibility and security, introduced high costs and operational burdens, and made networks fragile and brittle.

We’re optimistic that CIOs have a brighter future to look forward to. The Internet has evolved from a research project into integral infrastructure companies depend on, and we believe a better Internet is the path forward to solving the most challenging problems CIOs face today. Cloudflare is helping build an Internet that’s faster, more secure, more reliable, more private, and programmable, and by doing so, we’re enabling organizations to build their next-generation networks on ours.

This week, we’ll demonstrate how Cloudflare One, our Zero Trust Network-as-a-Service, is helping CIOs transform their corporate networks. We’ll also introduce new functionality that expands the scope of Cloudflare’s platform to address existing and emerging needs for CIOs. But before we jump into the week, we wanted to spend some time on our vision for the corporate network of the future. We hope this explanation will clarify language and acronyms used by vendors and analysts who have realized the opportunity in this space (what does Zero Trust Network-as-a-Service mean, anyway?) and set context for how our innovative approach is realizing this vision for real CIOs today.

Generation 1: Castle and moat

For years, corporate networks looked like this:

[Diagram: a Generation 1 castle-and-moat corporate network]

Companies built or rented space in data centers that were physically located within or close to major office locations. They hosted business applications — email servers, ERP systems, CRMs, etc. — on servers in these data centers. Employees in offices connected to these applications through the local area network (LAN) or over private wide area network (WAN) links from branch locations. A stack of security hardware (e.g., firewalls) in each data center enforced security for all traffic flowing in and out. Once on the corporate network, users could move laterally to other connected devices and hosted applications, but basic forms of network authentication and physical security controls like employee badge systems generally prevented untrusted users from getting access.

Network Architecture Scorecard: Generation 1

| Characteristic | Score | Description |
| --- | --- | --- |
| Security | ⭐⭐ | All traffic flows through perimeter security hardware. Network access is restricted with physical controls. Lateral movement is possible once on the network. |
| Performance | ⭐⭐⭐ | The majority of users and applications stay within the same building or regional network. |
| Reliability | ⭐⭐ | Dedicated data centers, private links, and security hardware present single points of failure. There are cost tradeoffs to purchasing redundant links and hardware. |
| Cost | ⭐⭐ | Private connectivity and hardware are high-cost capital expenditures, creating a high barrier to entry for small or new businesses. However, a limited number of links/boxes are required (a tradeoff with redundancy/reliability). Operational costs are low to medium after initial installation. |
| Visibility | ⭐⭐⭐ | All traffic is routed through a central location, so it’s possible to access NetFlow/packet captures and more for 100% of flows. |
| Agility | | Significant network changes have a long lead time. |
| Precision | | Controls are primarily exercised at the network layer (e.g., IP ACLs). Accomplishing “allow only HR to access employee payment data” looks like: IP in range X allowed to access IP in range Y (and requires an accompanying spreadsheet to track IP allocation). |
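The imprecision of network-layer controls is easy to see in a sketch. Below is a minimal, hypothetical illustration (the subnets and the `hr_subnet`/`payroll_subnet` names are invented, not from any real deployment) of what “allow only HR to access employee payment data” reduces to when the only vocabulary available is IP ranges:

```python
from ipaddress import ip_address, ip_network

# Hypothetical allocations -- in practice these live in a spreadsheet
# that must be kept in sync with the real network by hand.
hr_subnet = ip_network("10.1.20.0/24")       # "IPs we believe HR uses"
payroll_subnet = ip_network("10.9.5.0/28")   # "IPs where payroll data lives"

def allowed(src: str, dst: str) -> bool:
    # The firewall can only reason about addresses, not users or data:
    # any host in the HR range reaches any host in the payroll range.
    return ip_address(src) in hr_subnet and ip_address(dst) in payroll_subnet

print(allowed("10.1.20.7", "10.9.5.2"))   # HR desktop reaching the payroll server
print(allowed("10.3.4.7", "10.9.5.2"))    # host outside the HR range is blocked
print(allowed("10.1.20.99", "10.9.5.2"))  # but ANY device on the HR subnet passes
```

The rule has no idea who is actually at 10.1.20.99 — a contractor’s laptop plugged into the HR subnet gets the same access as an HR employee’s desktop, which is exactly the precision gap the scorecard calls out.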

Applications and users left the castle

So what changed? In short, the Internet. Faster than anyone expected, the Internet became critical to how people communicate and get work done. The Internet introduced a radical shift in how organizations thought about their computing resources: if any computer can talk to any other computer, why would companies need to keep servers in the same building as employees’ desktops? And even more radical, why would they need to buy and maintain their own servers at all? From these questions, the cloud was born, enabling companies to rent space on other servers and host their applications while minimizing operational overhead. An entire new industry of Software-as-a-Service emerged to simplify things even further, allowing companies to completely abstract away questions of capacity planning, server reliability, and other operational struggles.

This golden, Internet-enabled future — cloud and SaaS everything — sounds great! But CIOs quickly ran into problems. Established corporate networks with castle-and-moat architecture can’t just go down for months or years during a large-scale transition, so most organizations are in a hybrid state, one foot still firmly in the world of data centers, hardware, and MPLS. And traffic to applications still needs to stay secure, so even if it’s no longer headed to a server in a company-owned data center, many companies have continued to send it there (backhauled through private lines) to flow through a stack of firewall boxes and other hardware before it’s set free.

As more applications moved to the Internet, the volume of traffic leaving branches — and being backhauled through MPLS lines through data centers for security — continued to increase. Many CIOs faced an unpleasant surprise in their bandwidth charges the month after adopting Office 365: with traditional network architecture, more traffic to the Internet meant more traffic over expensive private links.

As if managing this first dramatic shift — which created complex hybrid architectures and brought unexpected cost increases — wasn’t enough, CIOs had another to handle in parallel. The Internet changed the game not just for applications, but also for users. Just as servers don’t need to be physically located at a company’s headquarters anymore, employees don’t need to be on the office LAN to access their tools. VPNs allow people working outside of offices to get access to applications hosted on the company network (whether physical or in the cloud).

These VPNs grant remote users access to the corporate network, but they’re slow, clunky to use, and can only support a limited number of people before performance degrades to the point of unusability. And from a security perspective, they’re terrifying — once a user is on the VPN, they can move laterally to discover and gain access to other resources on the corporate network. It’s much harder for CIOs and CISOs to control laptops with VPN access that could feasibly be brought anywhere — parks, public transportation, bars — than computers used by badged employees in the traditional castle-and-moat office environment.

In 2020, COVID-19 turned these emerging concerns about VPN cost, performance, and security into mission-critical, business-impacting challenges, and they’ll continue to be even as some employees return to offices.

Generation 2: Smörgåsbord of point solutions

Lots of vendors have emerged to tackle the challenges introduced by these major shifts, often focusing on one or a handful of use cases. Some providers offer virtualized versions of hardware appliances, delivered over different cloud platforms; others have cloud-native approaches that address a specific problem like application access or web filtering. But stitching together a patchwork of point solutions has caused even more headaches for CIOs, and most available products focus only on shoring up identity, endpoint, and application security without truly addressing network security.

Gaps in visibility

Compared to the castle and moat model, where traffic all flowed through a central stack of appliances, modern networks have extremely fragmented visibility. IT teams need to piece together information from multiple tools to understand what’s happening with their traffic. Often, a full picture is impossible to assemble, even with the support of tools including SIEM and SOAR applications that consolidate data from multiple sources. This makes troubleshooting issues challenging: IT support ticket queues are full of unsolved mysteries. How do you manage what you can’t see?

Gaps in security

This patchwork architecture — coupled with the visibility gaps it introduced — also creates security challenges. The concept of “Shadow IT” emerged to describe services that employees have adopted and are using without explicit IT permission or integration into the corporate network’s traffic flow and security policies. Exceptions to filtering policies for specific users and use cases have become unmanageable, and our customers have described a general “wild west” feeling about their networks as Internet use grew faster than anyone could have anticipated. And it’s not just gaps in filtering that scare CIOs — the proliferation of Shadow IT means company data can and does now exist in a huge number of unmanaged places across the Internet.

Poor user experience

Backhauling traffic through central locations to enforce security introduces latency for end users, amplified as they work in locations farther and farther away from their former offices. And the Internet, while it’s come a long way, is still fundamentally unpredictable and unreliable, leaving IT teams struggling to ensure availability and performance of apps for users with many factors (even down to shaky coffee shop Wi-Fi) out of their control.

High (and growing) cost

CIOs are still paying for MPLS links and hardware to enforce security across as much traffic as possible, but they’ve now taken on additional costs of point solutions to secure increasingly complex networks. And because of fragmented visibility and security gaps, coupled with performance challenges and rising expectations for a higher quality of user experience, the cost of providing IT support is growing.

Network fragility

All this complexity means that making changes can be really hard. On the legacy side of current hybrid architectures, provisioning MPLS lines and deploying new security hardware come with long lead times, only worsened by recent issues in the global hardware supply chain. And with the medley of point solutions introduced to manage various aspects of the network, a change to one tool can have unintended consequences for another. These effects compound, often making IT departments the bottleneck for business changes and limiting the flexibility of organizations to adapt to an only-accelerating rate of change.

Network Architecture Scorecard: Generation 2

| Characteristic | Score | Description |
| --- | --- | --- |
| Security | | Many traffic flows are routed outside of perimeter security hardware, Shadow IT is rampant, and the controls that do exist are enforced inconsistently across a hodgepodge of tools. |
| Performance | | Traffic backhauled through central locations introduces latency as users move further away; VPNs and a bevy of security tools introduce processing overhead and additional network hops. |
| Reliability | ⭐⭐ | The redundancy/cost tradeoff from Generation 1 is still present; partial cloud adoption grants some additional resiliency, but growing use of the unreliable Internet introduces new challenges. |
| Cost | | Costs from Generation 1 architecture are retained (few companies have successfully deprecated MPLS/security hardware so far), new costs of additional tools are added, and operational overhead is growing. |
| Visibility | | Traffic flows and visibility are fragmented; IT stitches a partial picture together across multiple tools. |
| Agility | ⭐⭐ | Some changes are easier to make for aspects of the business migrated to cloud; others have grown more painful as additional tools introduce complexity. |
| Precision | ⭐⭐ | Mix of controls exercised at the network layer and application layer. Accomplishing “allow only HR to access employee payment data” looks like: users in group X allowed to access IP in range Y (and an accompanying spreadsheet to track IP allocation). |

In summary — to reiterate where we started — modern CIOs have really hard jobs. But we believe there’s a better future ahead.

Generation 3: The Internet as the new corporate network

The next generation of corporate networks will be built on the Internet. This shift is already well underway, but CIOs need a platform that can help them get access to a better Internet — one that’s more secure, faster, more reliable, and preserves user privacy while navigating complex global data regulations.

Zero Trust security at Internet scale

CIOs are hesitant to give up expensive forms of private connectivity because they feel more secure than the public Internet. But a Zero Trust approach, delivered on the Internet, dramatically increases security versus the classic castle and moat model or a patchwork of appliances and point software solutions adopted to create “defense in depth.” Instead of trusting users once they’re on the corporate network and allowing lateral movement, Zero Trust dictates authenticating and authorizing every request into, out of, and between entities on your network, ensuring that visitors can only get to applications they’re explicitly allowed to access. And delivering this authentication and policy enforcement from an edge location close to the user enables radically better performance, rather than forcing traffic to backhaul through central data centers or traverse a huge stack of security tools.
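To make the contrast with network-level trust concrete, here is a toy sketch of the Zero Trust evaluation model — not Cloudflare’s implementation; the `Request` shape and policy fields are invented for illustration. Every request carries an authenticated identity and a device posture signal, and is checked against an explicit allow policy before it can reach the application:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str             # identity asserted by the identity provider
    groups: set           # group membership from the IdP
    device_trusted: bool  # posture signal from the endpoint agent
    app: str              # the application being requested

# Explicit allowlist: no access is implied by "being on the network".
policies = {
    "payroll": {"groups": {"hr"}, "require_trusted_device": True},
    "wiki": {"groups": {"hr", "engineering"}, "require_trusted_device": False},
}

def authorize(req: Request) -> bool:
    policy = policies.get(req.app)
    if policy is None:
        return False  # default deny: unlisted apps are unreachable
    if policy["require_trusted_device"] and not req.device_trusted:
        return False
    return bool(policy["groups"] & req.groups)

print(authorize(Request("amy", {"hr"}, True, "payroll")))            # allowed
print(authorize(Request("amy", {"hr"}, False, "payroll")))           # untrusted device
print(authorize(Request("bob", {"engineering"}, True, "payroll")))   # wrong group
```

The key property is that the check runs on every request, so there is no moment where a user is “inside” and free to move laterally.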

In order to enable this new model, CIOs need a platform that can:

Connect all the entities on their corporate network.

It has to be not just possible, but easy and reliable, to connect users, applications, offices, data centers, and cloud properties to each other as flexibly as possible. This means support for the hardware and connectivity methods customers have today, from enabling mobile clients to operate across OS versions to compatibility with standard tunneling protocols and network peering with global telecom providers.

Apply comprehensive security policies.

CIOs need a solution that integrates tightly with their existing identity and endpoint security providers and provides Zero Trust protection at all layers of the OSI stack across traffic within their network. This includes end-to-end encryption, microsegmentation, sophisticated and precise filtering and inspection for traffic between entities on their network (“East/West”) and to/from the Internet (“North/South”), and protection from other threats like DDoS and bot attacks.

Visualize and provide insight on traffic.

At a base level, CIOs need to understand the full picture of their traffic: who’s accessing what resources and what does performance (latency, jitter, packet loss) look like? But beyond providing the information necessary to answer basic questions about traffic flows and user access, next-generation visibility tools should help users understand trends and highlight potential problems proactively, and they should provide easy-to-use controls to respond to those potential problems. Imagine logging into one dashboard that provides a comprehensive view of your network’s attack surface, user activity, and performance/traffic health, receiving customized suggestions to tighten security and optimize performance, and being able to act on those suggestions with a single click.

Better quality of experience, everywhere in the world

More classic critiques of the public Internet: it’s slow, unreliable, and increasingly subject to complicated regulations that make operating on the Internet as the CIO of a globally distributed company exponentially more challenging. The platform CIOs need will make intelligent decisions to optimize performance and ensure reliability, while offering flexibility to make compliance easy.

Fast, in the ways that matter most.

Traditional methods of measuring network performance, like speed tests, don’t tell the full story of actual user experience. Next-generation platforms will measure performance holistically and consider application-specific factors, along with using real-time data on Internet health, to optimize traffic end-to-end.

Reliable, despite factors out of your control.

Scheduled downtime is a luxury of the past: today’s CIOs need to operate 24×7 networks with as close as possible to 100% uptime and reachability from everywhere in the world. They need a provider that’s resilient in its own services, but also has the capacity to handle massive attacks with grace and flexibility to route around issues with intermediary providers. Network teams should also not need to take action for their provider’s planned or unplanned data center outages, such as needing to manually configure new data center connections. And they should be able to onboard new locations at any time without waiting for vendors to provision additional capacity close to their network.

Localized and compliant with data privacy regulations.

Data sovereignty laws are rapidly evolving. CIOs need to bet on a platform that will give them the flexibility to adapt as new protections are rolled out across the globe, with one interface to manage their data (not fractured solutions in different regions).

A paradigm shift that’s possible starting today

These changes sound radical and exciting. But they’re also intimidating — wouldn’t a shift this large be impossible to execute, or at least take an unmanageably long time, in complex modern networks? Our customers have proven this doesn’t have to be the case.

Meaningful change starting with just one flow

Generation 3 platforms should prioritize ease of use. It should be possible for companies to start their Zero Trust journey with just one traffic flow and build momentum from there. There are many potential angles to start with, but we think one of the easiest is configuring clientless Zero Trust access for one application. Anyone, from the smallest to the largest organizations, should be able to pick an app and prove the value of this approach within minutes.

A bridge between the old & new world

Shifting from network-level access controls (IP ACLs, VPNs, etc.) to application and user-level controls to enforce Zero Trust across your entire network will take time. CIOs should pick a platform that makes it easy to migrate infrastructure over time by allowing:

  • Upgrading from IP-level to application-level architecture over time: Start by connecting with a GRE or IPsec tunnel, then use automatic service discovery to identify high-priority applications to target for finer-grained connection.
  • Upgrading from more open to more restrictive policies over time: Start with security rules that mirror your legacy architecture, then leverage analytics and logs to implement more restrictive policies once you can see who’s accessing what.
  • Making changes quickly and easily: Design your next-generation network using a modern SaaS interface.
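The “open to restrictive” step can be sketched with a toy example (the log format, users, and app names here are hypothetical): while a permissive rule that mirrors the legacy IP range is in place, mine the access logs to see which identities actually use each application, then emit a tighter user-level policy from what you observed:

```python
from collections import defaultdict

# Hypothetical access logs collected while the permissive,
# legacy-mirroring rule ("allow the whole 10.1.0.0/16 range") is in place.
logs = [
    {"user": "amy@example.com", "group": "hr", "app": "payroll"},
    {"user": "dan@example.com", "group": "hr", "app": "payroll"},
    {"user": "eve@example.com", "group": "sales", "app": "crm"},
]

def derive_policies(logs):
    # Tighten: only the groups actually observed using each app keep access.
    observed = defaultdict(set)
    for entry in logs:
        observed[entry["app"]].add(entry["group"])
    return {app: {"allow_groups": sorted(groups)} for app, groups in observed.items()}

print(derive_policies(logs))
# {'payroll': {'allow_groups': ['hr']}, 'crm': {'allow_groups': ['sales']}}
```

Real platforms automate this discovery, but the shape of the workflow is the same: observe under a broad rule, then restrict to what the data shows.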

Network Architecture Scorecard: Generation 3

| Characteristic | Score | Description |
| --- | --- | --- |
| Security | ⭐⭐⭐ | Granular security controls are exercised on every traffic flow; attacks are blocked close to their source; technologies like Browser Isolation keep malicious code entirely off of user devices. |
| Performance | ⭐⭐⭐ | Security controls are enforced at the location closest to each user; intelligent routing decisions ensure optimal performance for all types of traffic. |
| Reliability | ⭐⭐⭐ | The platform leverages redundant infrastructure to ensure 100% availability; no one device is responsible for holding policy and no one link is responsible for carrying all critical traffic. |
| Cost | ⭐⭐ | Total cost of ownership is reduced by consolidating functions. |
| Visibility | ⭐⭐⭐ | Data from across the edge is aggregated, processed, and presented along with insights and controls to act on it. |
| Agility | ⭐⭐⭐ | Making changes to network configuration or policy is as simple as pushing buttons in a dashboard; changes propagate globally within seconds. |
| Precision | ⭐⭐⭐ | Controls are exercised at the user and application layer. Accomplishing “allow only HR to access employee payment data” looks like: users in HR on trusted devices allowed to access employee payment data. |

Cloudflare One is the first built-from-scratch, unified platform for next-generation networks

In order to achieve the ambitious vision we’ve laid out, CIOs need a platform that can combine Zero Trust and network services operating on a world-class global network. We believe Cloudflare One is the first platform to enable CIOs to fully realize this vision.

We built Cloudflare One, our combined Zero Trust network-as-a-service platform, on our global network in software on commodity hardware. We initially started on this journey to serve the needs of our own IT and security teams and extended capabilities to our customers over time as we realized their potential to help other companies transform their networks. Every Cloudflare service runs on every server in over 250 cities with over 100 Tbps of capacity, providing unprecedented scale and performance. Our security services themselves are also faster — our DNS filtering runs on the world’s fastest public DNS resolver and identity checks run on Cloudflare Workers, the fastest serverless platform.

We leverage insights from over 28 million requests per second and 10,000+ interconnects to make smarter security and performance decisions for all of our customers. We provide both network connectivity and security services in a single platform with single-pass inspection and single-pane management to fill visibility gaps and deliver exponentially more value than the sum of point solutions could alone. We’re giving CIOs access to our globally distributed, blazing-fast, intelligent network to use as an extension of theirs.

This week, we’ll recap and expand on Cloudflare One, with examples from real customers who are building their next-generation networks on Cloudflare. We’ll dive more deeply into the capabilities that are available today and how they’re solving the problems introduced in Generation 2, as well as introduce some new product areas that will make CIOs’ lives easier by eliminating the cost and complexity of legacy hardware, hardening security across their networks and from multiple angles, and making all traffic routed across our already fast network even faster.

We’re so excited to share how we’re making our dreams for the future of corporate networks reality — we hope CIOs (and everyone!) reading this are excited to hear about it.

Cloudflare Tunnel for Content Teams

Post Syndicated from Alice Bracchi original https://blog.cloudflare.com/cloudflare-tunnel-for-content-teams/

A big part of the job of a technical writer is getting feedback on the content you produce. Writing and maintaining product documentation is a deeply collaborative and cyclical effort — through constant conversation with product managers and engineers, technical writers ensure the content is clear and serves the user in the most effective way. Collaboration with other technical writers is also important to keep the documentation consistent with Cloudflare’s content strategy.

So whether we’re documenting a new feature or overhauling a big portion of existing documentation, sharing our writing with stakeholders before it’s published is quite literally half the work.

In my experience as a technical writer, the feedback I’ve received has been exponentially more impactful when stakeholders could see my changes in context. This is especially true for bigger and more strategic changes. Imagine I’m changing the structure of an entire section of a product’s documentation, or shuffling the order of pages in the navigation bar. It’s hard to guess the impact of those changes just by looking at the markdown files.

We writers check those changes in context by building a development server on our local machines. But sharing what we see locally with our stakeholders has always been a pain point for us. We’ve sent screenshots (hardly a good idea). We’ve recorded our screens. We’ve asked stakeholders to check out our branches locally and build a development server on their own. Lately, we’ve added a GitHub action to our open-source cloudflare-docs repo that allows us to generate a preview link for all pull requests with a certain label. However, that requires us to open a pull request with our changes, and that is not ideal if we’re documenting a feature that’s yet to be announced, or if our work is still in its early stages.

So the question has always been: could there be a way for someone else to see what we see, as easily as we see it?

Enter Cloudflare Tunnel

I was working on a complete refresh of Cloudflare Tunnel’s documentation when I realized the product could very well answer that question for us as a technical writing team.

If you’re not familiar with the product, Cloudflare Tunnel provides a secure way to connect your local resources to the Cloudflare network without poking holes in your firewall. By running cloudflared in your environment, you can create outbound-only connections to Cloudflare’s edge, and ensure all traffic to your origins goes through Cloudflare and is protected from outside interference.

For our team, Cloudflare Tunnel could offer a way for our stakeholders to interact with what’s on our local environments in real-time, just like a customer would if the changes were published. To do that, we could expose our local environment to the edge through a tunnel, assign a DNS record to that tunnel, and then share that URL with our stakeholders.

So if each member in the technical writing team had their own tunnel that they could spin up every time they needed to get feedback, that would pretty much solve our long-standing problem.

Setting up the tunnel

To test that this would work, I went ahead and tried it for myself.

First, I made sure to create a local branch of the cloudflare-docs repo, make local changes, and run a development server locally on port 8000.

Since I already had cloudflared installed on my machine, the next thing I needed to do was log into my team’s Cloudflare account, pick the zone I wanted to create tunnels for (I picked developers.cloudflare.com), and authorize Cloudflare Tunnel for that zone.

$ cloudflared login

Next, it was time to create the Named Tunnel.

$ cloudflared tunnel create alice
Tunnel credentials written to /Users/alicebracchi/.cloudflared/0e025819-6f12-4f49-8183-c678273feef4.json. cloudflared chose this file based on where your origin certificate was found. Keep this file secret. To revoke these credentials, delete the tunnel.

Created tunnel alice with id 0e025819-6f12-4f49-8183-c678273feef4

Alright, tunnel created. Next, I needed to assign a DNS record to it. I wanted it to be something readable and easily shareable with stakeholders (like abracchi.developers.cloudflare.com), so I ran the following command and specified the tunnel name first and then the desired subdomain:

$ cloudflared tunnel route dns alice abracchi

Next, I needed a way to tell the tunnel to serve traffic to my localhost:8000 port. For that, I created a configuration file in my default cloudflared directory and specified the following fields:

url: http://localhost:8000
tunnel: 0e025819-6f12-4f49-8183-c678273feef4
credentials-file: /Users/alicebracchi/.cloudflared/0e025819-6f12-4f49-8183-c678273feef4.json

Time to run the tunnel. The following command established connections between my origin and the Cloudflare edge, telling the tunnel to serve traffic to my origin according to the parameters I’d specified in the config file:

$ cloudflared tunnel --config /Users/alicebracchi/.cloudflared/config.yml run alice
2021-10-18T09:39:54Z INF Starting tunnel tunnelID=0e025819-6f12-4f49-8183-c678273feef4
2021-10-18T09:39:54Z INF Version 2021.9.2
2021-10-18T09:39:54Z INF GOOS: darwin, GOVersion: go1.16.5, GoArch: amd64
2021-10-18T09:39:54Z INF Settings: map[cred-file:/Users/alicebracchi/.cloudflared/0e025819-6f12-4f49-8183-c678273feef4.json credentials-file:/Users/alicebracchi/.cloudflared/0e025819-6f12-4f49-8183-c678273feef4.json url:http://localhost:8000]
2021-10-18T09:39:54Z INF Generated Connector ID: 90a7e3a9-9d59-4d26-9b87-4b94ebf4d2a0
2021-10-18T09:39:54Z INF cloudflared will not automatically update when run from the shell. To enable auto-updates, run cloudflared as a service: https://developers.cloudflare.com/argo-tunnel/reference/service/
2021-10-18T09:39:54Z INF Initial protocol http2
2021-10-18T09:39:54Z INF Starting metrics server on 127.0.0.1:64193/metrics
2021-10-18T09:39:55Z INF Connection 13bf4c0c-b35b-4f9a-b6fa-f0a3dd001951 registered connIndex=0 location=MAD
2021-10-18T09:39:56Z INF Connection 38510c22-5256-45f2-abf8-72f1207ca242 registered connIndex=1 location=LIS
2021-10-18T09:39:57Z INF Connection 9ab0ea06-b1cf-483c-bd48-64a067a87c39 registered connIndex=2 location=MAD
2021-10-18T09:39:58Z INF Connection df079efe-8246-4e93-85f5-10caf8b7c354 registered connIndex=3 location=LIS

And sure enough, at abracchi.developers.cloudflare.com, my teammates could see what I was seeing on localhost:8000.

Securing the tunnel

After creating the tunnel, I needed to make sure only people within Cloudflare could access that tunnel. As it was, anyone with access to abracchi.developers.cloudflare.com could see what was in my local environment. To fix this, I set up an Access self-hosted application by navigating to Access > Applications on the Teams Dashboard. For this application, I then created a policy that restricts access to the tunnel to a user group that includes only Cloudflare employees and requires authentication via Google or One-time PIN (OTP).
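The resulting rule can be thought of as data plus a simple evaluation: allow members of the employee group who authenticated with one of the approved methods, and deny everyone else. The sketch below is a hypothetical rendering for illustration only — the field names are invented and this is not the Access API schema or how Access evaluates policies internally:

```python
# Hypothetical rendering of the Access policy configured in the dashboard.
# Field names are invented for illustration; this is not the Access API schema.
policy = {
    "application": "abracchi.developers.cloudflare.com",
    "decision": "allow",
    "include": {"groups": {"cloudflare-employees"}},
    "require": {"login_methods": {"google", "otp"}},
}

def evaluate(policy, user_groups, login_method):
    # A request is allowed only if the user is in an included group
    # AND authenticated with one of the required login methods.
    in_group = bool(policy["include"]["groups"] & set(user_groups))
    method_ok = login_method in policy["require"]["login_methods"]
    return policy["decision"] == "allow" and in_group and method_ok

print(evaluate(policy, ["cloudflare-employees"], "google"))  # colleague: allowed
print(evaluate(policy, ["external-guests"], "google"))       # outsider: denied
```

The tunnel URL stays public, but the content behind it is only reachable after this check passes.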

This makes applications like my tunnel easily shareable between colleagues, but also safe from potential vulnerabilities.

Et voilà!

Back on the Tunnels page, this is what the content team’s Cloudflare Tunnel setup looks like after each writer completed the process outlined above. Every writer has their personal tunnel set up and their local environment exposed to the Cloudflare edge:

[Screenshot: the content team’s tunnels listed on the Tunnels page]

What’s next

The team is now seamlessly sharing visual content with their stakeholders, but there’s still room for improvement. Cloudflare Tunnel is just the first step towards making the feedback loop easier for everyone involved. We’re currently exploring ways we can capture integrated feedback directly at the URL that’s shared with the stakeholders, to avoid back-and-forth on separate channels.

We’re also looking into bringing in Cloudflare Pages to make the entire deployment process faster. Stay tuned for future updates, and in the meantime, check out our developer docs.

Getting Cloudflare Tunnels to connect to the Cloudflare Network with QUIC

Post Syndicated from Sudarsan Reddy original https://blog.cloudflare.com/getting-cloudflare-tunnels-to-connect-to-the-cloudflare-network-with-quic/

I work on Cloudflare Tunnel, which lets customers quickly connect their private services and networks through the Cloudflare network without having to expose their public IPs or ports through their firewall. Tunnel is managed for users by cloudflared, a tool that runs on the same network as the private services. It proxies traffic for these services via Cloudflare, and users can then access these services securely through the Cloudflare network.

Recently, I was trying to get Cloudflare Tunnel to connect to the Cloudflare network using a UDP protocol, QUIC. While doing this, I ran into an interesting connectivity problem unique to UDP. In this post I will talk about how I went about debugging this connectivity issue beyond the land of firewalls, and how some interesting differences between UDP and TCP came into play when sending network packets.

How does Cloudflare Tunnel work?

Getting Cloudflare Tunnels to connect to the Cloudflare Network with QUIC

cloudflared works by opening several connections to different servers on the Cloudflare edge. Currently, these are long-lived TCP-based connections proxied over HTTP/2 frames. When Cloudflare receives a request to a hostname, it is proxied through these connections to the local service behind cloudflared.

While our HTTP/2 protocol mode works great, we’d like to improve a few things. First, TCP traffic sent over HTTP/2 is susceptible to Head of Line (HoL) blocking — this affects both HTTP traffic and traffic from WARP routing. Additionally, it is currently not possible to initiate communication from cloudflared’s HTTP/2 server in an efficient way. With the current Go implementation of HTTP/2, we could use Server-Sent Events, but this is not very useful in the scheme of proxying L4 traffic.

The upgrade to QUIC solves possible HoL blocking issues and opens up avenues that allow us to initiate communication from cloudflared to a different cloudflared in the future.

Naturally, QUIC required a UDP-based listener on our edge servers which cloudflared could connect to. We already connect to a TCP-based listener for the existing protocols, so this should be nice and easy, right?

Failed to dial to the edge

Things weren’t as straightforward as they first looked. I added a QUIC listener on the edge, and the ability for cloudflared to connect to this new UDP-based listener. I tried to run my brand new QUIC tunnel and this happened.

$  cloudflared tunnel run --protocol quic my-tunnel
2021-09-17T18:44:11Z ERR Failed to create new quic connection, err: failed to dial to edge: timeout: no recent network activity

cloudflared wasn’t even establishing a connection to the edge. I started looking at the obvious places first. Did I add a firewall rule allowing traffic to this port? Check. Did I have iptables rules ACCEPTing or DROPping appropriate traffic for this port? Check. They seemed to be in order. So what else could I do?

tcpdump all the packets

I started by capturing UDP traffic on the machine my server was running on to see what could be happening.

$  sudo tcpdump -n -i eth0 port 7844 and udp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
14:44:27.742629 IP 173.13.13.10.50152 > 198.41.200.200.7844: UDP, length 1252
14:44:27.743298 IP 203.0.113.0.7844 > 173.13.13.10.50152: UDP, length 37

Looking at this tcpdump helped me understand why I had no connectivity! Not only was this port getting UDP traffic, but I was also seeing traffic flow out. There seemed to be something strange afoot, though: incoming packets were being sent to 198.41.200.200:7844, while responses were coming back from 203.0.113.0:7844 (an example IP used here for illustration) instead.

Why is this a problem? If a host (in this case, the server) chooses an address from a network unable to communicate with a public Internet host, it is likely that the return half of the communication will never arrive. But wait a minute. Why is some other IP getting prioritized over a source address my packets were already being sent to? Let’s take a deeper look at some IP addresses. (Note that I’ve deliberately oversimplified and scrambled results to minimally illustrate the problem)

$  ip addr list
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1600 qdisc noqueue state UP group default qlen 1000
inet 203.0.113.0/32 scope global eth0
inet 198.41.200.200/32 scope global eth0 

$ ip route show
default via 203.0.113.0 dev eth0

So this was clearly why the server was working fine on my machine but not on the Cloudflare edge servers. It looks like I have multiple IPs on the interface my service is bound to. The IP that is the default route is being sent back as the source address of the packet.
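A quick way to inspect which source address the kernel would pick for a given destination is to connect() a UDP socket, which sends no packets but runs the kernel's route lookup, and then read back the local address. A small Python sketch of that trick (my own illustration, not part of cloudflared):

```python
import socket

def kernel_source_ip(dest_ip: str, dest_port: int = 7844) -> str:
    """Ask the kernel which source IP it would pick for this destination.

    connect() on a UDP socket sends no packets; it only performs the
    kernel's route lookup and source-address selection, which
    getsockname() then reveals.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((dest_ip, dest_port))
        return s.getsockname()[0]
    finally:
        s.close()
```

On a host like the one above, with multiple addresses on one interface, this returns the address on the default route, which is exactly the (wrong, for our purposes) choice the kernel was making.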

Why does this work for TCP but not UDP?

Connection-oriented protocols, like TCP, initiate a connection (connect()) with a three-way handshake. The kernel therefore maintains a state about ongoing connections and uses this to determine the source IP address at the time of a response.

Because UDP (unless SOCK_SEQPACKET is involved) is connectionless, the kernel cannot maintain state the way TCP does. The recvfrom system call, invoked on the server side, tells us who the data came from. Unfortunately, recvfrom does not tell us which of our IPs the data was addressed to. Therefore, when the UDP server invokes the sendto system call to respond to the client, we can only tell it which address to send the data to. The responsibility of choosing the source IP address then falls to the kernel, which applies certain heuristics to pick one. These heuristics may or may not work, and in the ip route example above, they did not: the kernel naturally (and wrongly) picks the address of the default route to respond with.

Telling the kernel what to do

I had to rely on my application to set the source address explicitly and therefore not rely on kernel heuristics.

Linux has a pair of generic I/O system calls, namely recvmsg and sendmsg. Their signatures let us read and write additional out-of-band control information, through which the source address can be passed. This control information travels in the msghdr struct’s msg_control field.

ssize_t sendmsg(int socket, const struct msghdr *message, int flags);
ssize_t recvmsg(int socket, struct msghdr *message, int flags);
 
struct msghdr {
     void    *   msg_name;   /* Socket name          */
     int     msg_namelen;    /* Length of name       */
     struct iovec *  msg_iov;    /* Data blocks          */
     __kernel_size_t msg_iovlen; /* Number of blocks     */
     void    *   msg_control;    /* Per protocol magic (eg BSD file descriptor passing) */
    __kernel_size_t msg_controllen; /* Length of cmsg list */
     unsigned int    msg_flags;
};
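Python exposes the same pair of system calls via socket.recvmsg and socket.sendmsg. Here is a minimal, Linux-only sketch (my own illustration, not cloudflared’s code) of reading the packet’s destination address from the IP_PKTINFO control message and pinning the reply’s source address to it:

```python
import socket
import struct

IP_PKTINFO = getattr(socket, "IP_PKTINFO", 8)  # value 8 on Linux

def recv_with_dst(sock, bufsize=2048):
    """Receive a datagram plus the local IP it was actually addressed to."""
    data, ancdata, _flags, src = sock.recvmsg(bufsize, socket.CMSG_SPACE(12))
    dst = None
    for level, ctype, cdata in ancdata:
        if level == socket.IPPROTO_IP and ctype == IP_PKTINFO:
            # struct in_pktinfo { int ipi_ifindex; in_addr ipi_spec_dst; in_addr ipi_addr; }
            _ifindex, _spec_dst, addr = struct.unpack("=I4s4s", cdata[:12])
            dst = socket.inet_ntoa(addr)
    return data, src, dst

def reply_from(sock, payload, src, dst):
    """Echo back, telling the kernel (via ipi_spec_dst) which source IP to use."""
    pktinfo = struct.pack("=I4s4s", 0, socket.inet_aton(dst), b"\x00" * 4)
    # The control message level for IPv4 pktinfo is IPPROTO_IP.
    sock.sendmsg([payload], [(socket.IPPROTO_IP, IP_PKTINFO, pktinfo)], 0, src)
```

Note that the receiving socket must opt in with sock.setsockopt(socket.IPPROTO_IP, IP_PKTINFO, 1) before the kernel will attach the control message to incoming datagrams.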

We can now copy the control information we got from recvmsg back when calling sendmsg, providing the kernel with the source address to use. The library I used (https://github.com/lucas-clemente/quic-go) had a recent update that did exactly this! I pulled the changes into my service and gave it a spin.

But alas. It did not work! A quick tcpdump showed that the same source address was being sent back. It seemed clear from reading the source code that the recvmsg and sendmsg were being called with the right values. It did not make sense.

So I had to see for myself if these system calls were being made.

strace all the system calls

strace is an extremely useful tool that tracks all system calls and signals sent/received by a process. Here’s what it had to say. I’ve removed all the information not relevant to this specific issue.

17:39:09.130346 recvmsg(3, {msg_name={sa_family=AF_INET6,
sin6_port=htons(35224), inet_pton(AF_INET6, "::ffff:171.54.148.10", 
&sin6_addr), sin6_flowinfo=htonl(0), sin6_scope_id=0}, msg_namelen=112->28, msg_iov=
[{iov_base="_\5S\30\273]\275@\34\24\322\243{2\361\312|\325\n\1\314\316`\3
03\250\301X\20", iov_len=1452}], msg_iovlen=1, msg_control=[{cmsg_len=36, 
cmsg_level=SOL_IPV6, cmsg_type=0x32}, {cmsg_len=28, cmsg_level=SOL_IP, 
cmsg_type=IP_PKTINFO, cmsg_data={ipi_ifindex=if_nametoindex("eth0"),
ipi_spec_dst=inet_addr("198.41.200.200"),ipi_addr=inet_addr("198.41.200.200")}},
{cmsg_len=17, cmsg_level=SOL_IP, 
cmsg_type=IP_TOS, cmsg_data=[0]}], msg_controllen=96, msg_flags=0}, 0) = 28 <0.000007>
17:39:09.165160 sendmsg(3, {msg_name={sa_family=AF_INET6, 
sin6_port=htons(35224), inet_pton(AF_INET6, "::ffff:171.54.148.10", 
&sin6_addr), sin6_flowinfo=htonl(0), sin6_scope_id=0}, msg_namelen=28, 
msg_iov=[{iov_base="Oe4\37:3\344 &\243W\10~c\\\316\2640\255*\231 
OY\326b\26\300\264&\33\""..., iov_len=1302}], msg_iovlen=1, msg_control=
[{cmsg_len=28, cmsg_level=SOL_TCP, cmsg_type=0x8}], msg_controllen=28, 
msg_flags=0}, 0) = 1302 <0.000054>

Let’s start with recvmsg. We can clearly see that the address we need for the source, ipi_addr, is being delivered correctly: ipi_addr=inet_addr("198.41.200.200"). This part works as expected. Looking at sendmsg almost instantly tells us where the problem is. The field we want, ipi_spec_dst, is not being set as we make this system call, so the kernel continues to make wrong guesses as to what the source address should be.

This turned out to be a bug where the library was using IPPROTO_TCP instead of IPPROTO_IP as the control message level while making the sendmsg call. Was that it? It seemed a little anticlimactic. I submitted a slightly more typesafe fix and, sure enough, straces now showed me what I was expecting to see.

18:22:08.334755 sendmsg(3, {msg_name={sa_family=AF_INET6, 
sin6_port=htons(37783), inet_pton(AF_INET6, "::ffff:171.54.148.10", 
&sin6_addr), sin6_flowinfo=htonl(0), sin6_scope_id=0}, msg_namelen=28, 
msg_iov=
[{iov_base="Ki\20NU\242\211Y\254\337\3107\224\201\233\242\2647\245}6jlE\2
70\227\3023_\353n\364"..., iov_len=33}], msg_iovlen=1, msg_control=
[{cmsg_len=28, cmsg_level=SOL_IP, cmsg_type=IP_PKTINFO, cmsg_data=
{ipi_ifindex=if_nametoindex("eth0"), 
ipi_spec_dst=inet_addr("198.41.200.200"),ipi_addr=inet_addr("0.0.0.0")}}
], msg_controllen=32, msg_flags=0}, 0) =
33 <0.000049>

cloudflared is now able to connect with UDP (QUIC) to the Cloudflare network from anywhere in the world!

$  cloudflared tunnel --protocol quic run sudarsans-tunnel
2021-09-21T11:37:30Z INF Starting tunnel tunnelID=a72e9cb7-90dc-499b-b9a0-04ee70f4ed78
2021-09-21T11:37:30Z INF Version 2021.9.1
2021-09-21T11:37:30Z INF GOOS: darwin, GOVersion: go1.16.5, GoArch: amd64
2021-09-21T11:37:30Z INF Settings: map[p:quic protocol:quic]
2021-09-21T11:37:30Z INF Initial protocol quic
2021-09-21T11:37:32Z INF Connection 3ade6501-4706-433e-a960-c793bc2eecd4 registered connIndex=0 location=AMS

While the programmatic bug causing this issue was a trivial one, the journey into systematically discovering the issue and understanding how Linux internals worked for UDP along the way turned out to be very rewarding for me. It also reiterated my belief that tcpdump and strace are indeed invaluable tools in anybody’s arsenal when debugging network problems.

What’s next?

You can give this a try with the latest cloudflared release at https://github.com/cloudflare/cloudflared/releases/latest. Just remember to set the protocol flag to quic. We plan to leverage this new mode to roll out some exciting new features for Cloudflare Tunnel. So upgrade away and keep watching this space for more information on how you can take advantage of this.

Zero Trust — Not a Buzzword

Post Syndicated from Fernando Serto original https://blog.cloudflare.com/zero-trust-not-a-buzzword/

Zero Trust — Not a Buzzword

Zero Trust — Not a Buzzword

Over the last few years, Zero Trust, a term coined by Forrester, has picked up a lot of steam. Zero Trust, at its core, is a network architecture and security framework that removes the distinction between external and internal access environments and never implicitly trusts users or roles.

In the Zero Trust model, the network only delivers applications and data to authenticated and authorised users and devices, and gives organisations visibility into what is being accessed so they can apply controls based on behavioural analysis. The model gained popularity as the media reported on several high-profile breaches caused by misuse, abuse, or exploitation of VPN systems; by compromise of end-users’ devices with access to other systems within the network; or by third parties, either exploiting their access or compromising software repositories in order to deploy malicious code. That foothold would later be used to reach internal systems, or to deploy malware and potentially ransomware well within the network perimeter.

When we first started talking to CISOs about Zero Trust, it felt like it was just a buzzword, and CISOs were bombarded with messaging from different cybersecurity vendors offering them Zero Trust solutions. Recently, another term, SASE (Secure Access Services Edge), a framework released by Gartner, also came up and added even more confusion to the mix.

Then came COVID-19 in 2020, and with it the reality of lockdowns and remote work. And while some organizations took that as an opportunity to accelerate projects around modernising their access infrastructure, others, due to procurement processes, or earlier technology decisions, ended up having to take a more tactical approach, ramping up existing remote access infrastructure by adding more licenses or capacity without having an opportunity to rethink their approach, nor having an opportunity to take into account the impact of their employees’ experience while working remotely full time in the early days of the pandemic.

So we thought it might be a good time to check on organizations in Asia Pacific, and look at the following:

  • The pandemic’s impact on businesses
  • Current IT security approaches and challenges
  • Awareness, adoption and implementation of Zero Trust
  • Key drivers and challenges in adopting Zero Trust

In August 2021, we commissioned a research company called The Leading Edge to conduct a survey that touches on these topics. The survey was conducted across five countries — Australia, India, Japan, Malaysia, and Singapore, and 1,006 IT and cybersecurity decision-makers and influencers from companies with more than 500 employees participated.

Among the findings: 54% of organisations said they saw an increase in security incidents in 2021 compared to the previous year, and 83% of the respondents who experienced security incidents said they had to make significant changes to their IT security procedures as a result.

Zero Trust — Not a Buzzword
Increase in security incidents when compared to 2020. ▲▼ Significantly higher/lower than total sample

And while the overall APAC stats are already quite interesting, I thought it would be even more fascinating to look at the unique characteristics of each of the five countries, so let’s have a look:

Australia

Australian organisations reported the highest impact of COVID-19 on their IT security approach, with 87% of the 203 respondents surveyed saying the pandemic had a moderate to significant impact on their IT security posture. The two biggest cities in Australia (Sydney and Melbourne) were each in lockdown for over 100 days in the second half of 2021 alone. With such extensive lockdowns, it’s no surprise that 48% of respondents reported challenges with maximising remote workers’ productivity without exposing them or their devices to new risks.

With 94% of organisations in Australia having reported they will be implementing a combination of return to office and work from home, building an effective and uniform security approach can be quite challenging. If you combine that with the fact that 62% saw an increase in security incidents over the last year, we can safely assume IT and cybersecurity decision-makers and influencers in Australia have been working on improving their security posture over the last year, even though 40% of respondents indicated they struggled to secure the right level of funding for such projects.

Australia seems to be well advanced on the journey to implementing Zero Trust compared to the other four countries included in the report, with 45% of the organisations that have adopted Zero Trust having started their journey one to four years ago. Australian organisations have always been known for fast cloud adoption; even in the early 2010s, Australians were already consuming IaaS quite heavily.

India

When compared to the other countries in the report, India has a very challenging environment for working from home: Internet connectivity is inconsistent, even though speeds have improved significantly, and problems like power outages regularly occur in certain areas outside of city centres. Surprisingly, the biggest challenge reported by Indian organisations was that they could benefit from newer security functionality, which goes to show that legacy security approaches are still widely present in India. Likewise, 37% of the respondents reported that their access technologies are too complex, which reinforces the point that newer security functionality would benefit these organisations.

When asked about their concerns around the shift in how their users will access applications, one of the biggest concerns raised by 59% of the respondents was around applications being protected by VPN or IP address controls alone. This shows Zero Trust would fit really well with their IT strategy moving forward, as controls can now be applied to users and their devices.

Another interesting point, and one where Zero Trust can be leveraged: 65% of respondents said that internal IT and security staff shortages and cuts are a huge challenge. Most security technologies require special skills to build, maintain, and operate, and this is where simplifying access with the right Zero Trust approach could really improve the productivity of those teams.

Japan

When we look at the results of the survey across all five countries, it’s clear that Japan didn’t face quite the same challenges as the other countries when the pandemic started. Businesses continued to operate normally for most of 2020 and 2021, which would explain why the impact wasn’t in line with the other countries. Having said that, 51% of the respondents surveyed in Japan still reported a moderate to significant impact on their IT security approach, which is notable, even if lower than in the other countries.

Japanese organisations were not spared from rising security incidents either: even though the impact of the pandemic wasn’t as severe as in other countries, 45% of respondents still reported an increase in security incidents, and 63% had to make changes to their IT security procedures as a direct result of incidents.

Malaysia

Malaysia rated second highest (at 80%) in our report on the impact the pandemic has had on organisations’ IT security approach, and rated highest on both employees using their home networks and using personal devices for work, at 94% and 92% respectively. From a security perspective, that significantly affects an organisation’s security posture and substantially increases its attack surface.

From a risk perspective, Malaysian organisations rated lack of management over employees’ devices pretty highly, with 65% of them expressing concerns over it. Other areas worth calling out were applications and data being exposed to the public Internet, and lack of visibility into staff activity inside applications.

With 57% of the respondents calling out an increase in security incidents when compared to the previous year, 89% of the respondents said they had to make significant changes to their IT security procedures due to either security incidents or attack attempts against their environments.

Singapore

In Singapore, 79% of IT and cybersecurity decision-makers and influencers reported that the pandemic has impacted their IT security approach, and two in five organisations said they could benefit from more modern security functionality as a direct result of the impact caused by the pandemic. 52% of the organisations also reported an increase in security incidents compared to 2020, with almost half having seen an increase in phishing attempts.

Singaporean organisations were also not immune to a significant increase in IT security spend as a direct result of the pandemic, with 62% of them having reported more investment in security. Some of the challenges these organisations were facing were related to applications being directly exposed to the public Internet, limited oversight on third party access and applications being protected by username and password only.

While Singapore is known for high-speed home Internet, it was quite a surprise for me to see that 40% of organisations surveyed reported issues with latency or slow connectivity into applications via VPN. This goes to show that concentrating traffic into a single location can hurt application performance even across relatively small geographies, and even where bandwidth is not a problem, as is the case in Singapore.

The work in IT security never stops

While there were distinct differences in each country around IT security posture and Zero Trust adoption, across Asia Pacific, the similarities are what stand out the most:

  • Cyberattacks continue to rise
  • Flexible work is here to stay
  • Skilled in-house IT security workers are a scarce resource
  • Need to educate stakeholders around Zero Trust

These challenges are not easy to tackle. Add to them the required focus on improving employee experience, reducing operational complexity, gaining better visibility into third-party activity, and applying tighter controls in response to the increase in security incidents, and you’ve got a heck of a huge responsibility for IT.

And this is where Cloudflare comes in. Not only have we been helping our own employees work securely throughout the pandemic, we have also been helping organisations all over the globe streamline their IT security operations, whether users are accessing applications through Cloudflare Access or securing their activity on the Internet through our Secure Web Gateway services, which even include controls around SaaS applications and browser isolation, all with the best possible user experience.

So come talk to us!

Tunnel: Cloudflare’s Newest Homeowner

Post Syndicated from Abe Carryl original https://blog.cloudflare.com/observe-and-manage-cloudflare-tunnel/

Tunnel: Cloudflare’s Newest Homeowner

Cloudflare Tunnel connects your infrastructure to Cloudflare. Your team runs a lightweight connector in your environment, cloudflared, and services can reach Cloudflare and your audience through an outbound-only connection without the need for opening up holes in your firewall.

Tunnel: Cloudflare’s Newest Homeowner

Whether the services are internal apps protected with Zero Trust policies, websites running in Kubernetes clusters in a public cloud environment, or a hobbyist project on a Raspberry Pi — Cloudflare Tunnel provides a stable, secure, and highly performant way to serve traffic.

Starting today, with our new UI in the Cloudflare for Teams Dashboard, users who deploy and manage Cloudflare Tunnel at scale now have easier visibility into their tunnels’ status, routes, uptime, connectors, cloudflared version, and much more. On the Teams Dashboard you will also find an interactive guide that walks you through setting up your first tunnel.  

Getting Started with Tunnel

Tunnel: Cloudflare’s Newest Homeowner

We wanted to start by making the tunnel onboarding process more transparent for users. We understand that not all users are intimately familiar with the command line, nor are they always deploying Tunnel in the environment or OS they’re most comfortable with. To ease that burden, we designed a comprehensive onboarding guide with pathways for macOS, Windows, and Linux for our two primary onboarding flows:

  1. Connecting an origin to Cloudflare
  2. Connecting a private network via WARP to Tunnel

Our new onboarding guide walks through each command required to create, route, and run your tunnel successfully while also highlighting relevant validation commands to serve as guardrails along the way. Once completed, you’ll be able to view and manage your newly established tunnels.

Managing your tunnels

Tunnel: Cloudflare’s Newest Homeowner

When thinking about the new user interface for Tunnel, we wanted to concentrate our efforts on how users gain visibility into their tunnels today. It was important that we provide the same level of observability, but through the lens of a visual, interactive dashboard. Specifically, we strove to build a familiar experience, like the one a user sees when running cloudflared tunnel list to show all of their tunnels, or cloudflared tunnel info to better understand the connection status of a specific tunnel.

Tunnel: Cloudflare’s Newest Homeowner

In the interface, you can quickly search by name or filter by status, uptime, or creation date. This makes it easy to identify and manage the tunnels you need, when you need them. We also included other key metrics, such as Status and Uptime.

A tunnel’s status depends on the health of its connections:

  • Active: This means your tunnel is running and has a healthy connection to the Cloudflare network.
  • Inactive: This means your tunnel is not running and is not connected to Cloudflare.
  • Degraded: This means one or more of your four long-lived TCP connections to Cloudflare have been disconnected, but traffic is still being served to your origin.

A tunnel’s uptime is also calculated by the health of its connections. We perform this calculation by determining the UTC timestamp of when the first (of four) long-lived TCP connections is established with the Cloudflare Edge. In the event this single connection is terminated, we will continue tracking uptime as long as one of the other three connections continues to serve traffic. If no connections are active, Uptime will reset to zero.
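As an illustration only (not Cloudflare’s actual implementation), the status and uptime rules described above could be modeled like this:

```python
class TunnelHealth:
    """Toy model of the tunnel status/uptime rules described above."""
    MAX_CONNECTIONS = 4  # cloudflared opens four long-lived connections

    def __init__(self):
        self.active = set()   # IDs of live connections to the edge
        self.started = None   # timestamp of the first established connection

    def connect(self, conn_id, now):
        if not self.active:
            self.started = now  # uptime starts with the first connection
        self.active.add(conn_id)

    def disconnect(self, conn_id):
        self.active.discard(conn_id)
        if not self.active:
            self.started = None  # no connections left: uptime resets

    def status(self):
        if not self.active:
            return "Inactive"
        if len(self.active) < self.MAX_CONNECTIONS:
            return "Degraded"  # some, but not all, connections are up
        return "Active"

    def uptime(self, now):
        return 0 if self.started is None else now - self.started
```

The key property is that uptime survives individual connection losses: it only resets once every one of the four connections has dropped.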

Tunnel Routes and Connectors

Last year, shortly after the announcement of Named Tunnels, we released a new feature that allowed users to utilize the same Named Tunnel to serve traffic to many different services through the use of Ingress Rules. In the new UI, if you’re running your tunnels in this manner, you’ll be able to see these various services reflected by hovering over the route’s value in the dashboard. Today, this includes routes for DNS records, Load Balancers, and Private IP ranges.

Even more recently, we announced highly available and highly scalable instances of cloudflared, known more commonly as “cloudflared replicas.” To view your cloudflared replicas, select and expand a tunnel. You’ll then see how many replicas you’re running for a given tunnel, along with each one’s connection status, data center, IP address, and version. And when you’re ready to delete a tunnel, you can do so directly from the dashboard as well.

What’s next

Moving forward, we’re excited to begin incorporating more Cloudflare Tunnel analytics into our dashboard. We also want to continue making Cloudflare Tunnel the easiest way to connect to Cloudflare. In order to do that, we will focus on improving our onboarding experience for new users and look forward to bringing more of that functionality into the Teams Dashboard. If you have things you’re interested in having more visibility around in the future, let us know below!

Announcing Access Temporary Authentication

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/announcing-access-temporary-authentication/

Announcing Access Temporary Authentication

Zero Trust rules by default block attempts to reach a resource. To obtain access, users need to prove they should be allowed to connect using signals like their identity, their device health, and other factors.

However, some workflows need a second opinion. Starting today, you can add new policies in Cloudflare Access that grant temporary access to specific users based on approvals for a set of predefined administrators. You can decide that some applications need second-party approval in addition to other Zero Trust signals. We’re excited to give your team another layer of Zero Trust control for any application — whether it’s a popular SaaS tool or you host it yourself.

Why temporary authentication?

Configuring appropriate user access is a challenge. Most companies start granting employee-specific application access based on username or email. This requires manual provisioning and deprovisioning when an employee joins or leaves.

When this becomes unwieldy, security teams generally use identity provider groups to set access levels by employee role. This allows better provisioning and deprovisioning, but again starts to get clunky when application access requirements do not conform to roles. If a specific support rep needs access, they need to be added to an existing group (for example, engineering), or a new group needs to be created (for example, specific_support_reps). Even if that new team member only needed temporary access, it is unlikely they were ever removed from the identity group they were added to. This leads to overprovisioned and unnecessary groups in your identity provider.

In most cases, there are two sets of application users — those that access every day to do their jobs and those that need specific access periodically. We wanted to make it possible to give these periodic users temporary access to applications. Additionally, some services are so sensitive that every user should only have temporary access, for example in the case of production database access.

Starting with Purpose Justification

Cloudflare Access starts solving this problem by allowing security administrators to collect a business reason for accessing a specific application. This provides an audit trail and a prompt reminding users that they should only connect to the resource with a good reason. However, the feature does not actively stop a user from accessing anything.

Announcing Access Temporary Authentication

Added control with Temporary Authentication

As part of this release, we have extended Purpose Justification with Temporary Access to introduce scoped permissions and second-approval requirements. Now a user’s Purpose Justification, along with their location and IP address, is sent to a preconfigured list of approvers, who can then approve or deny the access request, or grant access for a set amount of time.

This allows security teams to avoid over-provisioning sensitive applications without also creating bottlenecks on a few key individuals in their organization with access to sensitive tools. Better yet, all of these requests and approvals are logged for regulatory and investigative purposes.

Announcing Access Temporary Authentication

When the user’s session expires, they need to repeat the process if they need access again. If you have a group of users who should always be allowed to reach a resource, without second approval, you can define groups that are allowed to skip this step.
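The bypass behaviour amounts to a simple group check; a hypothetical sketch (the function and group names are my own illustration, not Access’s API):

```python
def needs_second_approval(user_groups, bypass_groups):
    """True when none of the user's identity groups may skip second approval."""
    return not (set(user_groups) & set(bypass_groups))
```

For example, a user in only a "support" group would be routed through the approval flow, while a member of a designated bypass group (say, "sre-oncall") would connect directly.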

Purpose Justification and Temporary Access were both built using Cloudflare Workers. This means both user access requests and administrator access reviews are rendered from the closest data center to the user. You could request access to an application from an approver across the world with virtually no latency.

Workers also allowed us to be very flexible when Temporary Authentication is required. As an example, the same user who normally has persistent access to an application can be required to request access when connecting from a personal device or when visiting a high-risk country.

How to get started

To get started with Temporary Authentication in Cloudflare Access, go to the Teams Dashboard and create an Access application. Within the Application’s Zero Trust policy, you can configure when you want to allow for temporary authentication with human approval. For more detailed information, you can refer to our developer docs.

Cloudflare for Offices

Post Syndicated from James Allworth original https://blog.cloudflare.com/cloudflare-for-offices/

Cloudflare for Offices


Cloudflare’s network is one of the biggest, most connected, and fastest in the world. It extends to more than 250 cities. In those cities, we’re often present in multiple data centers in order to connect to as many networks and bring our services as close to as many users as possible. We’re always asking ourselves: how can we get closer to even more of the world’s Internet users?

Today, we’re taking a big step toward that goal.

Introducing Cloudflare for Offices. We are creating strategic partnerships that will enable us to extend Cloudflare’s network into over 1,000 of the world’s busiest office buildings and multi-dwelling units. These buildings span the globe, and are where millions of people work every day; now, they’re going to be microseconds away from our global network. Our first deployments will include 30 Hudson Yards, 4 Times Square, and 520 Madison in New York; Willis Tower in Chicago; John Hancock Tower in Boston; and the Embarcadero Center and Salesforce Tower in San Francisco.

And we’re not done. We’ve built custom secure hardware and partnered with fiber providers to scale this model globally. It will bring a valuable new resource to the literal doorstep of building tenants.

Cloudflare has built a mutually beneficial relationship with the world’s ISPs by reducing their operational costs and improving customer performance. Similarly, we expect a mutually beneficial relationship as we roll out Cloudflare for Offices. Real estate operators and serviced offices upgraded with this amenity can increase the value and occupancy of their portfolios. IT teams can enforce a consistent security posture while enabling flexible work environments from any location their employees prefer. And employees in these smart spaces, experiencing faster Internet performance, can be more productive, seamlessly working as they choose, be it at the office, at home, or on the go.

Why offices?

There’s no disputing the fact that the nature of work has undergone a tremendous shift over the past 18 months. While we still don’t know exactly what the future of work will look like, here’s what we do know: it’s going to require more flexibility, all while maintaining the security and performance standards that are a prerequisite for operating on today’s Internet. Cloudflare has long believed in enabling flexibility while improving performance AND security (as opposed to trading one off for the other), all while driving value for organizations.

Cloudflare for Offices — by connecting directly with enterprises — enables us to now do that for commercial office space.

No More Band-Aid Boxes in the Basement

There are a variety of advantages to Cloudflare for Offices. First and foremost, it eliminates the need to rely on the costly, rigid hardware solutions and multiple, regional, third parties that are often required to provide secure and performant branch office connectivity. Businesses have maintained expensive and hardware-intensive office networks since the dawn of the modern Internet.

Never have they gotten less return on that investment than through the pandemic.

The hybrid future of work will only exacerbate the high costs and complexity of maintaining and securing this outdated infrastructure. MPLS links. WANs. Hardware firewalls. VPNs. All these remain mainstays of the modern office. In the same way that we look back on maintaining server rooms for compute and storage as complete anachronisms, so too will we soon look back on maintaining all these boxes in an office. We’ve spoken to customers who now have over half of their workforce remote, and who are considering giving up their office space or increasing their presence in shared workspaces. Some are being hamstrung because of a need for MPLS to make their network operate securely. But it’s not just customers. This is a problem that we ourselves have been facing. Setting up new offices, or securing and optimizing shared workspaces, is a huge lift, physically as well as technologically.

Cloudflare for Offices simplifies this: a direct connection to Cloudflare’s network puts all office traffic behind Cloudflare’s services. Now, creating an office is as simple as plugging a cable into our box, and all the security and performance features that an office typically needs are microseconds away. It also enables the creation of custom topologies on Cloudflare’s network, dramatically increasing the flexibility of your physical footprint.

“Throughout the pandemic, we’ve supported our over 12,000 employees to work safely and seamlessly from home or from our offices. Cloudflare solutions have been critical, and we’re excited to continue to partner on efficient and strong solutions.”
Mark Papermaster, CTO and Executive Vice President, Technology and Engineering, AMD

Zero (Trust) to 100 performance

COVID-19 hasn’t just driven a paradigm shift in where people work, however. It’s also driven a paradigm shift in how organizations think about IT security.

The old model — castle and moat — was designed during the desktop era, when most computing happened on premises. Everyone within the walls of the enterprise was considered trusted; if you were outside the office, you needed to “tunnel” in across the moat and into the castle. As more and more users entered the portable era — through laptops and smartphones — more tunnels were created.

The pandemic made it so that everyone was outside the moat, tunneling into an empty castle. Nobody was in the office anymore. The paradigm had been stretched into parody.

Google was one of the first organizations to think about how things could be done differently: it proposed a model called BeyondCorp, which treated an organization’s internal employees much as it treated external customers or suppliers. To put it simply: nobody is trusted, whether they’re in the office or not. If you want access to something, be prepared to prove you are who you say you are.

Fast-forward to 2021, and this model — otherwise known as Zero Trust — has become the gold standard of enterprise security, one that more and more organizations are implementing. Cloudflare’s Zero Trust solution — Cloudflare for Teams — has become increasingly popular not just for its advanced functionality and ease of use, but because, when coupled with our enterprise connectivity offerings, it allows you to run more and more of your traffic across Cloudflare’s network. We call this holistic solution Cloudflare One, and it provides your organization a virtual private network in the cloud, with all the associated security and visibility benefits.

Cloudflare for Offices

Cloudflare for Offices is the onramp for offices onto Cloudflare One. It’s a fast, private onramp for your office network traffic straight onto the Cloudflare network — with all the security and visibility benefits that running your traffic over our network provides.

We also realize that for many organizations, Zero Trust is a journey. Not every customer is ready to go from MPLS and built-out networks to trusting the public Internet overnight. Cloudflare for Offices is a great start in the journey — by building out your own networks on top of Cloudflare, you reduce your threat vectors while being able to keep your existing topologies. This gives you the privacy and security of Cloudflare One, but with the flexibility to build Zero Trust any way you choose.

But security and visibility are not the only benefits. One of the common complaints we hear from customers about competing solutions is that performance can be extremely variable. Cloudflare’s proximity to so many people around the world matters because when employees connect using a Zero Trust solution, at least a subset (and often all) of the traffic from an end-user device needs to go through the Zero Trust provider. Having Cloudflare equipment nearby means vastly better performance for the user’s device than connecting to a far-off data center. You’ve probably read about what happens when Cloudflare takes control of your last mile connectivity and your network to your data centers. You know that connecting to a Cloudflare data center in the same city improves performance; now imagine connecting to Cloudflare in your own office basement. Multiply that performance difference across every employee operating in a Zero Trust model, and it adds up to a lot of additional productivity.

Up until now, something like this has been extremely expensive, complicated, and oftentimes, slow.

“We see a lot of potential in the way Cloudflare is bringing its network directly to our office locations. It’s critical that we empower our employees to work productively and securely, and this makes it that much easier for us to do so no matter where our teams are working from in the future–and reducing our network costs along the way.”
Aaron Dearinger, Edge Architect, Garmin International

Cloudflare for Offices allows customers to choose their Network as a Service: let us manage your footprint and build out your network however you like.

Living on the Edge

But it’s not just Zero Trust that gets a boost. Workers, Cloudflare’s serverless platform, runs at the edge, in the data center nearest the user making the request. As you might have already read: it’s fast. With more and more business and application logic being moved to Workers, your end users stand to benefit.

But it does beg the question: just how fast are we talking?

Cloudflare for Offices
Photo by Denys Nevozhai on Unsplash

One example building we’re planning to enable is Salesforce Tower, in San Francisco. It’s 1,070 feet tall. A light signal running from the top of the building to the basement along a single-mode fiber cable would take no more than 6 µs (6 microseconds) to complete its journey. This puts customers fractions of a millisecond away from Cloudflare’s network.
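As a back-of-the-envelope check (our arithmetic, with an assumed velocity factor for fiber of roughly two-thirds the speed of light in a vacuum), the one-way propagation delay comes out well under that 6 µs bound:

```python
# Back-of-the-envelope propagation delay down Salesforce Tower.
# Assumes light in single-mode fiber travels at ~2/3 c; this is an
# approximation, not a measured figure.
C_VACUUM = 299_792_458          # meters per second
FIBER_FACTOR = 2 / 3            # approximate velocity factor of silica fiber
TOWER_HEIGHT_M = 1070 * 0.3048  # 1,070 feet in meters

delay_s = TOWER_HEIGHT_M / (C_VACUUM * FIBER_FACTOR)
print(f"one-way delay: {delay_s * 1e6:.2f} microseconds")
```

That works out to roughly 1.6 µs one way, comfortably inside the bound quoted above.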

The edge is becoming indistinguishable in performance from local compute.

Built for Purpose

We’ve written many times before about how Cloudflare designs our hardware. But deploying Cloudflare hardware outside of data centers — and into office basements — presented a new set of challenges. Cooling, energy efficiency, and resiliency were even more important in the design. Similarly, these are going to be deployed to offices all over the world; they needed to be cost-effective. Finally, and perhaps most importantly, there is also a security aspect to this: we could not assume the same level of access control inside a building as we could inside a data center.

Cloudflare for Offices

This is where the inherent advantages of designing and owning the hardware come to the fore. Because of it, we’re able to build exactly what we need for the environment: ranging from how resilient these devices need to be, to an appropriate level of security given where they’re going to be operating. In fact, we have been working on hardware security for the last five years in anticipation of the launch of Cloudflare for Offices. We’re starting with switching, and we plan to add compute and storage capabilities in short order. Stay tuned for more details.

Join the Revolution

If you’re an organization (tenant) in a large office building, an owner/operator of multi-tenant (or multi-dwelling) real estate, or a co-working space looking to bring Cloudflare to your doorstep — with all the flexibility, performance and security enhancements, and cost savings that would entail — then we’d love for you to get in touch with us.

The Zero Trust platform built for speed

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/the-zero-trust-platform-built-for-speed/

The Zero Trust platform built for speed


Cloudflare for Teams secures your company’s users, devices, and data — without slowing you down. Your team should not need to sacrifice performance in order to be secure. Unlike other vendors in the market, Cloudflare’s products not only avoid back hauling traffic and adding latency — they make your team faster.

We’ve accomplished this by building Cloudflare for Teams on Cloudflare. All the products in the Zero Trust platform build on the improvements and features we’re highlighting as part of Speed Week:

  1. Cloudflare for Teams replaces legacy private networks with Cloudflare’s network, a faster way to connect users to applications.
  2. Cloudflare’s Zero Trust decisions are enforced in Cloudflare Workers, the performant serverless platform that runs in every Cloudflare data center.
  3. The DNS filtering features in Cloudflare Gateway run on the same technology that powers 1.1.1.1, the world’s fastest recursive DNS resolver.
  4. Cloudflare’s Secure Web Gateway accelerates connections to the applications your team uses.
  5. The technology that powers Cloudflare Browser Isolation is fundamentally different compared to other approaches and the speed advantages demonstrate that.

We’re excited to share how each of these components work together to deliver a comprehensive Zero Trust platform that makes your team faster. All the tools we talk about below are available today, they’re easy to use (and get started with) — and they’re free for your first 50 users. If you want to sign up now, head over to the Teams Dashboard!

Shifting From an Old Model to a New, Much Faster One

Legacy access control slowed down teams

Most of our customers start their Zero Trust journey by replacing their legacy private network. Private networks, by default, trust users inside those networks. If a user is on the network, they are considered trusted and can reach other services unless explicitly blocked. Security teams hate that model. It creates a broad attack surface for internal and external bad actors. All they need to do is get network access.

Zero Trust solutions provide a more secure alternative where every request or connection is considered untrusted. Instead, users need to prove they should be able to reach the specific applications or services based on signals like identity, device posture, location and even multifactor method.

Cloudflare Access gives your team the ability to apply these rules, while also logging every event, as a full VPN replacement. Now, instead of sneaking onto the network, a malicious user would need valid user credentials, a hardware key, and a company laptop just to get started.

It also makes your applications much, much faster by avoiding the legacy VPN backhaul requirement.

Private networks attempt to mirror a physical location. Deployments start inside the walls of an office building, for example. If a user was in the building, they could connect. When they left the building, they needed a Virtual Private Network (VPN) client. The VPN client punched a hole back into the private network and allowed the user to reach the same set of resources. If those resources also sat outside the office, the VPN became a slow backhaul requirement.

Some businesses address this by creating different VPN instances for their major hubs across the country or globe. However, they still need to ensure a fast and secure connection between major hubs and applications. This is typically done with dedicated MPLS connections to improve application performance. MPLS lines are both expensive and take IT resources to maintain.

When teams replace their VPN with a Zero Trust solution, they can and often do reduce the latency added by backhauling traffic through a VPN appliance. However, we think that “slightly faster” is not good enough. Cloudflare Access delivers your applications and services to end users on Cloudflare’s network while verifying every request to ensure the user is properly authenticated.

Cloudflare’s Zero Trust approach speeds teams up

Organizations start by connecting their resources to Cloudflare’s network using Cloudflare Tunnel, a service that runs in your environment and creates outbound-only connections to Cloudflare’s edge. That service is powered by our Argo Smart Routing technology, which improves performance of web assets by 30% on average (Argo Smart Routing became even faster earlier this week).

The Zero Trust platform built for speed

On the other side, users connect to Cloudflare’s network by reaching a data center near them in over 250 cities around the world. 95% of the entire Internet-connected world is now within 50 ms of a Cloudflare presence, and 80% is within 20 ms (for reference, it takes 300–400 ms for a human to blink).

The Zero Trust platform built for speed

Finally, Cloudflare’s network finds the best route to get your users to your applications — regardless of where they are located, using Cloudflare’s global backbone. Our backbone consists of dedicated fiber optic lines and reserved portions of wavelength that connect Cloudflare data centers together. This is split approximately 55/45 between “metro” capacity, which redundantly connects data centers in which we have a presence, and “long-haul” capacity, which connects Cloudflare data centers in different cities. There are no individual VPN instances or MPLS lines, all a user needs to do is access their desired application and Cloudflare handles the logic to efficiently route their request.

The Zero Trust platform built for speed

When teams replace their private networks with Cloudflare, they accelerate the performance of the applications their employees need. However, the Zero Trust model also includes new security layers. Those safeguards should not slow you down, either — and on Cloudflare, they won’t.

Instant Zero Trust decisions built on the Internet’s most performant serverless platform, Workers

Cloudflare Access checks every request and connection against the rules that your administrators configure on a resource-by-resource basis. If users have not yet proved they should be able to reach a given resource, we begin evaluating their signals by taking steps like prompting them to authenticate with their identity/Single Sign-On provider or checking their device posture. If users meet all the criteria, we allow them to proceed.

Despite evaluating dozens of signals, we think this step should feel near-instantaneous to the user. To solve that problem, we built Cloudflare Access’ authentication layer entirely on Cloudflare Workers. Every application’s Access policies are stored and evaluated in every one of Cloudflare’s 250+ data centers. Instead of a user’s traffic being backhauled to an office and then to the application, traffic is routed from the data center closest to the user directly to the desired application.
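Conceptually, the per-resource evaluation works like the sketch below. This is not Cloudflare Access code; the signal names and policy fields are invented for illustration.

```python
# Illustrative sketch of per-resource Zero Trust policy evaluation.
# Signal names and policy fields are hypothetical, not the Access schema.
def evaluate(policy, signals):
    """Allow only if every signal required by the policy checks out."""
    if signals.get("identity") not in policy["allowed_identities"]:
        return "deny: authenticate with your identity provider"
    if policy.get("require_managed_device") and not signals.get("device_managed"):
        return "deny: device posture check failed"
    if signals.get("country") in policy.get("blocked_countries", set()):
        return "deny: location not permitted"
    return "allow"

policy = {
    "allowed_identities": {"alice@example.com", "bob@example.com"},
    "require_managed_device": True,
    "blocked_countries": {"XX"},  # placeholder country code
}
print(evaluate(policy, {"identity": "alice@example.com",
                        "device_managed": True, "country": "US"}))  # allow
```

Because the policy is just data, it can be replicated to every data center and evaluated wherever the request lands, which is what makes the in-path check so cheap.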

As Rita Kozlov wrote earlier this week, Cloudflare Workers is the Internet’s fastest serverless platform. Workers runs in every data center in Cloudflare’s network — meaning the authentication decision does not add more backhaul or get in the way of the network acceleration discussed above. In comparison to other serverless platforms, Cloudflare Workers is “210% faster than Lambda@Edge and 298% faster than Lambda.”

By building on Cloudflare Workers, we can authenticate user sessions to a given resource in less than three milliseconds on average. This also makes Access resilient — unlike a VPN that can go down and block user access, even if any Cloudflare data center goes offline, user requests are redirected to a nearby data center.

Filtering built on the same platform as the world’s fastest public DNS resolver

After securing internal resources, the next phase in a Zero Trust journey for many customers is to secure their users, devices, and data from external threats. Cloudflare Gateway helps organizations start by filtering DNS queries leaving devices and office networks.

When users navigate to a website or connect to a service, their device begins by making a DNS query to their DNS resolver. Most DNS resolvers respond with the IP of the hostname being requested. If the DNS resolver is aware of what hostnames on the Internet are dangerous, the resolver can instead keep the user safe by blocking the query.
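The core idea of DNS-level filtering can be shown in miniature. This is a toy sketch, not Gateway's implementation; the blocklist, answer table, and sinkhole address are made up.

```python
# Minimal sketch of DNS filtering: answer safe queries, block known threats.
# The blocklist, answer table, and sinkhole IP are illustrative only.
BLOCKLIST = {"malware.example", "phishing.example"}
ZONE = {"blog.cloudflare.com": "104.16.0.1"}  # toy answer table; IP is illustrative

def resolve(hostname):
    if hostname in BLOCKLIST:
        return "0.0.0.0"  # sinkhole the query instead of returning the real IP
    return ZONE.get(hostname, "NXDOMAIN")

print(resolve("malware.example"))      # blocked
print(resolve("blog.cloudflare.com"))  # resolved normally
```

The hard parts in practice are keeping the threat list comprehensive and answering fast, which is where running on the same infrastructure as a major public resolver pays off.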

Historically, organizations deployed DNS filtering solutions using appliances that sat inside their physical office. Similar to the private network challenges, users outside the office had to use a VPN to backhaul their traffic to the appliances in the office that provided DNS filtering and other security solutions.

That model has shifted to cloud-based solutions. However, those solutions are only as good as the speed of their DNS resolution and distribution of the data centers. Again, this is better for performance — but not yet good enough.

We wanted to bring DNS filtering closer to each user. When DNS queries are made from a device running Cloudflare Gateway, all requests are initially sent to a nearby Cloudflare data center. These DNS queries are then checked against a comprehensive list of known threats.

We’re able to do this faster than a traditional DNS filter because Cloudflare operates the world’s fastest public DNS resolver, 1.1.1.1. Cloudflare processes hundreds of billions of DNS queries per day and the users who choose 1.1.1.1 enjoy the fastest DNS resolution on the Internet and audited privacy guarantees.

Customers who secure their teams with Cloudflare Gateway benefit from the same improvements and optimizations that have kept 1.1.1.1 the fastest resolver on the Internet. When organizations begin filtering DNS with Cloudflare Gateway, they immediately improve the Internet experience for their employees compared to any other DNS resolver.

A Secure Web Gateway without performance penalties

In the kick-off post for Speed Week, we described how delivering a waitless Internet isn’t just about having ample bandwidth. The speed of light and round trips incurred by DNS, TLS and HTTP protocols can easily manifest into a degraded browsing experience.

To protect their teams from threats and data loss on the Internet, security teams inspect and filter traffic on a Virtual Private Network (VPN) and Secure Web Gateway (SWG). On an unfiltered Internet connection, your DNS, TLS and HTTP requests take a short trip from your browser to your local ISP which then sends the request to the target destination. With a filtered Internet connection, this traffic is instead sent from your local ISP to a centralized SWG hosted either on-premise or in a zero trust network — before eventually being dispatched to the end destination.

This centralization of Internet traffic introduces the tromboning effect, artificially degrading performance by forcing traffic to take longer paths to destinations even when the end destination is closer than the filtering service. This effect can be eliminated by performing filtering on a network that is interconnected directly with your ISP.
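A rough model makes the cost of tromboning concrete. The distances below are invented, and the ~200 km/ms figure is an approximation for light in fiber:

```python
# Illustrative math for the "tromboning" effect; all distances are made up.
# Direct path: user -> destination. Tromboned: user -> distant gateway -> destination.
def rtt_ms(*leg_distances_km, speed_km_per_ms=200):  # ~200 km/ms for light in fiber
    return 2 * sum(leg_distances_km) / speed_km_per_ms

direct = rtt_ms(100)            # destination 100 km away
tromboned = rtt_ms(2000, 2100)  # via a filtering gateway 2,000 km away
print(f"direct: {direct:.1f} ms, tromboned: {tromboned:.1f} ms")
```

Even before queuing or processing delay, the detour alone turns a ~1 ms round trip into tens of milliseconds; filtering at an interconnected point near the user removes that detour.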

To quantify this point we again leveraged Catchpoint to measure zero trust network round trip time from a range of international cities. Based on public documentation, we also measured publicly available endpoints for Cisco Umbrella, Zscaler, McAfee and Menlo Security.

The Zero Trust platform built for speed

There is a wide variance in results. Cloudflare, on average, responds in 10.63ms, followed by Cisco Umbrella (26.39ms), Zscaler (35.60ms), Menlo Security (37.64ms) and McAfee (59.72ms).

Cloudflare for Teams is built on the same network that powers the world’s fastest DNS resolver and WARP, delivering consumer-grade privacy and performance. Since our network is highly interconnected and located in over 250 cities around the world, we’re able to eliminate the tromboning effect by inspecting and filtering traffic in the same Internet exchange points that your Internet Service Provider uses to connect you to the Internet.

These tests are simple network latency tests and do not encapsulate latency’s impact end-to-end on DNS, TLS and HTTPS connections or the benefits of our global content delivery network serving cached content for the millions of websites accelerated by our network. Unlike content delivery networks which are publicly measured, zero trust networks are hidden behind enterprise contracts which hinder industry-wide transparency.

Latency sensitivity and Browser Isolation

The web browser has evolved into the workplace’s most ubiquitous application and, with that, one of the most common attack vectors for phishing, malware and data loss. This risk has led many security teams to incorporate a remote browser isolation solution into their security stack.

Users browsing remotely are especially sensitive to latency. Remote web pages typically load fast thanks to the remote browser’s low-latency, high-bandwidth connection to the website, but on a high-latency connection to the remote browser, user interactions such as scrolling, typing and mouse input stutter and buffer, leading to significant user frustration. A high-latency connection on a local browser is the opposite, with latency manifesting as slow page load times.

Segmenting these results per continent, we can see highly inconsistent latency on centralized zero trust networks and far more consistent results for Cloudflare’s decentralized zero trust network.

The Zero Trust platform built for speed
The Zero Trust platform built for speed
The Zero Trust platform built for speed

The thin green line shows Cloudflare consistently responding in under 11ms globally, with other vendors delivering unstable and inconsistent results. If you’ve had a bad experience with other Remote Browser Isolation tools in the past, it was likely because it wasn’t built on a network designed to support it.

Give it a try!

We believe that security shouldn’t result in sacrificing performance — and we’ve architected our Zero Trust platform to make it so. We also believe that Zero Trust security shouldn’t just be the domain of the big players with lots of resources — it should be available to everyone as part of our mission to help make the Internet a better place. We’ve made all the tools covered above free for your first 50 users. Get started today in the Teams Dashboard!

Magic makes your network faster

Post Syndicated from Annika Garbers original https://blog.cloudflare.com/magic-makes-your-network-faster/

Magic makes your network faster


We launched Magic Transit two years ago, followed more recently by its siblings Magic WAN and Magic Firewall, and have talked at length about how this suite of products helps security teams sleep better at night by protecting entire networks from malicious traffic. Today, as part of Speed Week, we’ll break down the other side of the Magic: how using Cloudflare can automatically make your entire network faster. Our scale and interconnectivity, use of data to make more intelligent routing decisions, and inherent architecture differences versus traditional networks all contribute to performance improvements across all IP traffic.

What is Magic?

Cloudflare’s “Magic” services help customers connect and secure their networks without the cost and complexity of maintaining legacy hardware. Magic Transit provides connectivity and DDoS protection for Internet-facing networks; Magic WAN enables customers to replace legacy WAN architectures by routing private traffic through Cloudflare; and Magic Firewall protects all connected traffic with a built-in firewall-as-a-service. All three share underlying architecture principles that form the basis of the performance improvements we’ll dive deeper into below.

Anycast everything

In contrast to traditional “point-to-point” architecture, Cloudflare uses Anycast GRE or IPsec (coming soon) tunnels to send and receive traffic for customer networks. This means that customers can set up a single tunnel to Cloudflare, but effectively get connected to every single Cloudflare location, dramatically simplifying the process to configure and maintain network connectivity.

Magic makes your network faster

Every service everywhere

In addition to being able to send and receive traffic from anywhere, Cloudflare’s edge network is also designed to run every service on every server in every location. This means that incoming traffic can be processed wherever it lands, which allows us to block DDoS attacks and other malicious traffic within seconds, apply firewall rules, and route traffic efficiently and without bouncing traffic around between different servers or even different locations before it’s dispatched to its destination.

Zero Trust + Magic: the next-gen network of the future

With Cloudflare One, customers can seamlessly combine Zero Trust and network connectivity to build a faster, more secure, more reliable experience for their entire corporate network. Everything we’ll talk about today applies even more to customers using the entire Cloudflare One platform – stacking these products together means the performance benefits multiply (check out our post on Zero Trust and speed from today for more on this).

More connectivity = faster traffic

So where does the Magic come in? This part isn’t intuitive, especially for customers using Magic Transit in front of their network for DDoS protection: how can adding a network hop subtract latency?

The answer lies in Cloudflare’s network architecture — our web of connectivity to the rest of the Internet. Cloudflare has invested heavily in building one of the world’s most interconnected networks (9800 interconnections and counting, including with major ISPs, cloud services, and enterprises). We’re also continuing to grow our own private backbone and giving customers the ability to directly connect with us. And our expansive connectivity to last mile providers means we’re just milliseconds away from the source of all your network traffic, regardless of where in the world your users or employees are.

This toolkit of varying connectivity options means traffic routed through the Cloudflare network is often meaningfully faster than paths across the public Internet alone, because more options available for BGP path selection mean increased ability to choose more performant routes. Imagine having only one possible path between your house and the grocery store versus ten or more – chances are, adding more options means better alternatives will be available. A cornucopia of connectivity methods also means more resiliency: if there’s an issue on one of the paths (like construction happening on what is usually the fastest street), we can easily route around it to avoid impact to your traffic.
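The intuition (more paths means a better best path, plus a fallback when one fails) can be sketched like this; the path names and latencies are made-up numbers, not real measurements:

```python
# Sketch of why more connectivity options help: pick the fastest healthy path.
# Path names and latencies are illustrative.
paths = {
    "transit-provider-a": {"latency_ms": 48, "healthy": True},
    "private-backbone":   {"latency_ms": 31, "healthy": True},
    "peering-exchange":   {"latency_ms": 35, "healthy": True},
}

def best_path(paths):
    healthy = {name: p for name, p in paths.items() if p["healthy"]}
    return min(healthy, key=lambda name: healthy[name]["latency_ms"])

print(best_path(paths))  # private-backbone wins at 31 ms
paths["private-backbone"]["healthy"] = False  # e.g., a fiber cut on the fastest path
print(best_path(paths))  # traffic reroutes to the next-best option
```

With only one path, the fiber cut means an outage; with several, it means a slightly slower route, which is the resiliency point made above.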

One common comparison customers are interested in is latency for inbound traffic. From the end user perspective, does routing through Cloudflare speed up or slow down traffic to networks protected by Magic Transit? Our response: let’s test it out and see! We’ve repeatedly compared Magic Transit vs standard Internet performance for customer networks across geographies and industries and consistently seen really exciting results. Here’s an example from one recent test where we used third-party probes to measure the ping time to the same customer network location (their data center in Qatar) before and after onboarding with Magic Transit:

Probe location                                     | RTT w/o Magic (ms) | RTT w/ Magic (ms) | Difference (ms) | Difference (% improvement)
Dubai                                              | 27                 | 23                | 4               | 13%
Marseille                                          | 202                | 188               | 13              | 7%
Global (averaged across 800+ distributed probes)   | 194                | 124               | 70              | 36%

All of these results were collected without the use of Argo Smart Routing for Packets, which we announced on Tuesday. Early data indicates that networks using Smart Routing will see even more substantial gains.

Modern architecture eliminates traffic trombones

In addition to the performance boost available for traffic routed across the Cloudflare network versus the public Internet, customers using Magic products benefit from a new architecture model that eliminates up to thousands of miles’ worth of unnecessary network path, and the latency that comes with it.

Traditionally, enterprises adopted a “hub and spoke” model for granting employees access to applications within and outside their network. All traffic from within a connected network location was routed through a central “hub” where a stack of network hardware (e.g. firewalls) was maintained. This model worked great in locations where the hub and spokes were geographically close, but started to strain as companies became more global and applications moved to the cloud.

Now, networks using hub and spoke architecture are often backhauling traffic thousands of miles, between continents and across oceans, just to apply security policies before packets are dispatched to their final destination, which is often physically closer to where they started! This creates a “trombone” effect, where precious seconds are wasted bouncing traffic back and forth across the globe, and performance problems are amplified by packet loss and instability along the way.
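To get a feel for the cost of that trombone, note that light in fiber travels at roughly 200 km per millisecond, so the extra round-trip latency of a backhaul detour can be estimated from distance alone. A back-of-the-envelope sketch (the detour distance below is illustrative, not a measurement from any customer network):

```python
SPEED_IN_FIBER_KM_PER_MS = 200  # roughly two-thirds the speed of light in a vacuum

def backhaul_penalty_ms(detour_km: float) -> float:
    """Extra round-trip latency added by hauling traffic detour_km out of its way."""
    return 2 * detour_km / SPEED_IN_FIBER_KM_PER_MS  # out to the hub and back

# Example: backhauling traffic through a hub ~5,600 km out of the direct path
# adds roughly 56 ms of avoidable round-trip latency.
print(f"~{backhaul_penalty_ms(5600):.0f} ms")
```

This is a lower bound: real BGP paths are rarely straight lines, and loss and jitter compound over distance.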

Network and security teams have tried to combat this issue by installing hardware at more locations to establish smaller, regional hubs, but this approach quickly becomes prohibitively expensive and hard to manage. The price of multiple hardware boxes and dedicated private links adds up quickly, both in the gear and connectivity themselves and in the effort required to maintain the additional infrastructure. Ultimately, this cost usually outweighs the benefit of the time regained with shorter network paths.

The “hub” is everywhere

There’s a better way — with the Anycast architecture of Magic products, all traffic is automatically routed to the closest Cloudflare location to its source. There, security policies are applied with single-pass inspection before traffic is routed to its destination. This model is conceptually similar to a hub and spoke, except that the hub is everywhere: 95% of the entire Internet-connected world is within 50 ms of a Cloudflare location (check out this week’s updates on our quickly-expanding network presence for the latest). This means instead of tromboning traffic between locations, it can stop briefly at a Cloudflare hop in-path before it goes on its way: dramatically faster architecture without compromising security.

To demonstrate how this architecture shift can make a meaningful difference, we created a lab to mirror the setup we’ve heard many customers describe as they’ve explained performance issues with their existing network. This example customer network is headquartered in South Carolina and has branch office locations on the west coast, in California and Oregon. Traditionally, traffic from each branch would be backhauled through the South Carolina “hub” before being sent on to its destination, either another branch or the public Internet.

In our alternative setup, we’ve connected each customer network location to Cloudflare with an Anycast GRE tunnel, simplifying configuration and removing the South Carolina trombone. We can also enforce network and application-layer filtering on all of this traffic, ensuring that the faster network path doesn’t compromise security.

Here’s a summary of results from performance tests on this example network demonstrating the difference between the traditional hub and spoke setup and the Magic “global hub” — we saw up to 70% improvement in these tests, demonstrating the dramatic impact this architecture shift can make.

| | LAX <> OR (ms) |
| --- | --- |
| ICMP round-trip for “Regular” (hub and spoke) WAN | 127 |
| ICMP round-trip for Magic WAN | 38 |
| Latency savings for Magic WAN vs “Regular” WAN | 70% |

This effect can be amplified for networks with globally distributed locations — imagine the benefits for customers who are used to delays from backhauling traffic between different regions across the world.

Getting smarter

Adding more connectivity options and removing traffic trombones provide a performance boost for all Magic traffic, but we’re not stopping there. In the same way we leverage insights from hundreds of billions of requests per day to block new types of malicious traffic, we’re also using our unique perspective on Internet traffic to make more intelligent decisions about routing customer traffic versus relying on BGP alone. Earlier this week, we announced updates to Argo Smart Routing including the brand-new Argo Smart Routing for Packets. Customers using Magic products can enable it to automatically boost performance for any IP traffic routed through Cloudflare (by 10% on average according to results so far, and potentially more depending on the individual customer’s network topology) — read more on this in the announcement blog.

What’s next?

The modern architecture, well-connected network, and intelligent optimizations we’ve talked about today are just the start. Our vision is for any customer using Magic to connect and protect their network to have the best performance possible for all of their traffic, automatically. We’ll keep investing in expanding our presence, interconnections, and backbone, as well as continuously improving Smart Routing — but we’re also already cooking up brand-new products in this space to deliver optimizations in new ways, including WAN Optimization and Quality of Service functions. Stay tuned for more Magic coming soon, and get in touch with your account team to learn more about how we can help make your network faster starting today.

Quick Tunnels: Anytime, Anywhere

Post Syndicated from Rishabh Bector original https://blog.cloudflare.com/quick-tunnels-anytime-anywhere/


My name is Rishabh Bector, and this summer, I worked as a software engineering intern on the Cloudflare Tunnel team. One of the things I built was quick Tunnels, and before departing for the summer, I wanted to write a blog post on how I developed this feature.

Over the years, our engineering team has worked hard to continually improve the underlying architecture through which we serve our Tunnels. However, the core use case has stayed largely the same. Users can implement Tunnel to establish an encrypted connection between their origin server and Cloudflare’s edge.

This connection is initiated by installing a lightweight daemon on your origin, to serve your traffic to the Internet without the need to poke holes in your firewall or create intricate access control lists. Though we’ve always centered around the idea of being a connector to Cloudflare, we’ve also made many enhancements behind the scenes to the way in which our connector operates.

Typically, users run into a few speed bumps before being able to use Cloudflare Tunnel. Before they can create or route a tunnel, users need to authenticate their unique token against a zone on their account. This means in order to simply spin up a Tunnel testing environment, users need to first create an account, add a website, change their nameservers, and wait for DNS propagation.

Starting today, we’re excited to fix that. Cloudflare Tunnel now supports a free version that includes all the latest features and does not require any onboarding to Cloudflare. With today’s change, you can begin experimenting with Tunnel in five minutes or less.

Introducing Quick Tunnels

When administrators start using Cloudflare Tunnel, they need to perform four specific steps:

  1. Create the Tunnel
  2. Configure the Tunnel and what services it will represent
  3. Route traffic to the Tunnel
  4. And finally… run the Tunnel!

These steps give you control over how your services connect to Cloudflare, but they are also a chore. Today’s change, which we are calling quick Tunnels, not only removes some onboarding requirements, it also condenses these steps into a single command.

If you have a service running locally that you want to share with teammates or an audience, you can use this single command to connect your service to Cloudflare’s edge. First, you need to install the Cloudflare connector, a lightweight daemon called cloudflared. Once installed, you can run the command below.

cloudflared tunnel

When run, cloudflared will generate a URL that consists of a random subdomain of the website trycloudflare.com and point traffic to localhost port 8080. If you have a web service running at that address, users who visit the subdomain generated will be able to visit your web service through Cloudflare’s network.
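Conceptually, the subdomain generation looks something like the sketch below. The actual scheme used by cloudflared and the quick Tunnel Worker isn't specified in this post; the label length and alphabet here are assumptions for illustration only.

```python
import secrets
import string

def random_quick_tunnel_hostname(label_length: int = 16) -> str:
    """Generate an illustrative random subdomain of trycloudflare.com."""
    alphabet = string.ascii_lowercase + string.digits
    label = "".join(secrets.choice(alphabet) for _ in range(label_length))
    return f"{label}.trycloudflare.com"

print(random_quick_tunnel_hostname())
```

Using a cryptographically strong source like `secrets` (rather than `random`) matters here, since the hostname is the only thing standing between the public Internet and your local service.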

Configuring Quick Tunnels

We built this feature with the single command in mind, but if you have services that are running at different default locations, you can optionally configure your quick Tunnel to support that.

One example is if you’re building a multiplayer game that you want to share with friends. If that game is available locally on your origin, or even your laptop, at localhost:3000, you can run the command below.

cloudflared tunnel --url localhost:3000

You can do this with IP addresses or URLs, as well. Anything that cloudflared can reach can be made available through this service.

How does it work?

Cloudflare quick Tunnels is powered by Cloudflare Workers, giving us a serverless compute deployment that puts Tunnel management in a Cloudflare data center closer to you instead of a centralized location.

When you run the command cloudflared tunnel, your instance of cloudflared initiates an outbound-only connection to Cloudflare. Since that connection was initiated without any account details, we treat it as a quick Tunnel.

A Cloudflare Worker, which we call the quick Tunnel Worker, receives a request that a new quick Tunnel should be created. The Worker generates the random subdomain and returns that to the instance of cloudflared. That instance of cloudflared can now establish a connection for that subdomain.

Meanwhile, a complementary service running on Cloudflare’s edge receives that subdomain and the identification number of the instance of cloudflared. That service uses that information to create a DNS record in Cloudflare’s authoritative DNS which maps the randomly-generated hostname to the specific Tunnel you created.

The deployment also relies on the Workers Cron Trigger feature to perform clean up operations. On a regular interval, the Worker looks for quick Tunnels which have been disconnected for more than five minutes. Our Worker classifies these Tunnels as abandoned and proceeds to delete them and their associated DNS records.
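That cleanup pass can be sketched as below. The tunnel record shape and the delete callbacks are hypothetical; the post doesn't describe the Worker's actual data model.

```python
from datetime import datetime, timedelta

ABANDONED_AFTER = timedelta(minutes=5)

def find_abandoned(tunnels: list, now: datetime) -> list:
    """Return quick Tunnels that have been disconnected for more than five minutes."""
    return [
        t for t in tunnels
        if t["disconnected_at"] is not None
        and now - t["disconnected_at"] > ABANDONED_AFTER
    ]

def cleanup(tunnels, now, delete_tunnel, delete_dns_record):
    """Delete abandoned Tunnels and their associated DNS records."""
    for t in find_abandoned(tunnels, now):
        delete_dns_record(t["hostname"])  # remove the random subdomain's record
        delete_tunnel(t["id"])
```

A still-connected Tunnel (`disconnected_at` of `None`) or one that dropped only moments ago is left alone, so brief reconnects don't lose their hostname.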

What about Zero Trust policies?

By default, all the quick Tunnels that you create are available on the public Internet at the randomly generated URL. While this might be fine for some projects and tests, other use cases require more security.

If you need to add additional Zero Trust rules to control who can reach your services, you can use Cloudflare Access alongside Cloudflare Tunnel. That use case does require creating a Cloudflare account and adding a zone to Cloudflare, but we’re working on ideas to make that easier too.

Where should I notice improvements?

We first launched a version of Cloudflare Tunnel that did not require accounts over two years ago. While we’ve been thrilled that customers have used this for their projects, Cloudflare Tunnel has evolved significantly since then. Specifically, Cloudflare Tunnel now relies on a new architecture that is more redundant and stable than the one used by that older launch. While all Tunnels that migrated to this new architecture, which we call Named Tunnels, enjoyed those benefits, users of the option that did not require an account were left behind.

Today’s announcement brings that stability to quick Tunnels. Tunnels are now designed to be long-lived, persistent objects. Unless you delete them, Tunnels can live for months, an improvement over the average lifespan measured in hours before connectivity issues disrupted a Tunnel in the older architecture.

These quick Tunnels run on this same, resilient architecture, not only expediting time-to-value but also improving the overall Tunnel quality of life.

What’s next?

Today’s quick Tunnels add a powerful feature to Cloudflare Tunnels: the ability to create a reliable, resilient tunnel in a single command, without the hassle of creating an account first. We’re excited to help your team build and connect services to Cloudflare’s network and on to your audience or teammates. If you have additional questions, please share them in this community post.

Zero Trust controls for your SaaS applications

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/access-saas-integrations/


Most teams start their Zero Trust journey by moving the applications that lived on their private networks into this model. Instead of a private network where any user on the network is assumed to be trusted, the applications that use Cloudflare Access now check every attempt against the rules you create. For your end users, this makes these applications just feel like regular SaaS apps, while your security teams have full control and logs.

However, we kept hearing from teams that wanted to use their Access control plane to apply consistent security controls to their SaaS apps, and consolidate logs from self-hosted and SaaS in one place.

We’re excited to give your team the tools to solve that challenge. With Access in front of your SaaS applications, you can build Zero Trust rules that determine who can reach your SaaS applications in the same place where your rules for self-hosted applications and network access live. To make that easier, we are launching guided integrations with the Amazon Web Services (AWS) management console, Zendesk, and Salesforce. In just a few minutes, your team can apply a Zero Trust layer over every resource you use and ensure your logs never miss a request.

How it works

Cloudflare Access secures applications that you host by becoming the authoritative DNS for the application itself. All DNS queries, and subsequent HTTP requests, hit Cloudflare’s network first. Once there, Cloudflare can apply the types of identity-aware and context-driven rules that make it possible to move to a Zero Trust model. Enforcing these rules in our network means your application doesn’t need to change. You can secure it on Cloudflare, integrate your single sign-on (SSO) provider and other systems like Crowdstrike and Tanium, and begin building rules.

SaaS applications pose a different type of challenge. You do not control where your SaaS applications are hosted — and that’s a big part of the value. You don’t need to worry about maintaining the hardware or software of the application.

However, that also means that your team cannot control how users reach those resources. In most cases, any user on the Internet can attempt to log in. Even if you incorporate SSO authentication or IP-based allowlisting, you might not have the ability to add location or device rules. You also have no way to centrally capture logs of user behavior on a per-request basis. Logging and permissions vary across SaaS applications — some are quite granular while others have non-existent controls and logging.

Cloudflare Access for SaaS solves that problem by injecting Zero Trust checks into the SSO flow for any application that supports SAML authentication. When users visit your SaaS application and attempt to log in, they are redirected through Cloudflare and then to your identity provider. They authenticate with your identity provider and are sent back to Cloudflare, where we layer on additional rules like device posture, multi-factor method, and country of login. If the user meets all the requirements, Cloudflare converts the user’s authentication with your identity provider into a SAML assertion that we send to the SaaS application.

We built support for SaaS applications by using Workers to take the JWT and convert its content into SAML assertions that are sent to the SaaS application. The application thinks that Cloudflare Access is the identity provider, even though we’re just aggregating identity signals from your SSO provider and other sources into the JWT, and sending that summary to the app via SAML. All of this leverages Cloudflare’s global network and ensures users do not see a performance penalty.
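In essence, the Worker reads the identity claims already aggregated into the JWT and re-emits them as SAML attributes. A heavily simplified sketch (real SAML assertions are signed XML documents with many more required elements, and the claim names here are illustrative):

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the payload segment of a JWT (signature verification omitted here)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def to_saml_attributes(claims: dict) -> str:
    """Render identity claims as SAML-style Attribute elements (illustrative only)."""
    return "\n".join(
        f'<saml:Attribute Name="{name}">'
        f"<saml:AttributeValue>{value}</saml:AttributeValue>"
        f"</saml:Attribute>"
        for name, value in claims.items()
    )
```

In production the assertion must also be signed and scoped to the SaaS application's audience; a Worker doing this for real would use a proper SAML library rather than string templating.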

Enforcing managed devices and Gateway for SaaS applications

COVID-19 made it commonplace for employees to work from anywhere and, more concerning, from any device. Many SaaS applications contain sensitive data that should only be accessed with a corporately managed device. A benefit of SaaS tools is that they’re readily available from any device; it’s up to security administrators to enforce which devices can be used to log in.

Once Access for SaaS has been configured as the SSO provider for SaaS applications, you can configure policies that verify a device. You can then lock a tool like Salesforce down to only users with a device that has a known serial number, a hardware authentication key plugged in, an up-to-date operating system, and much more.

Cloudflare Gateway keeps your users and data safe from threats on the Internet by filtering Internet-bound connections that leave laptops and offices. Gateway gives administrators the ability to block, allow, or log every connection and request to SaaS applications.

However, users are connecting from personal devices and home WiFi networks, potentially bypassing Internet security filtering available on corporate networks. If users have their password and MFA token, they can bypass security requirements and reach into SaaS applications from their own, unprotected devices at home.

To ensure traffic to your SaaS apps only connects over Gateway-protected devices, Cloudflare Access will add a new rule type that requires Gateway when users log in to your SaaS applications. Once enabled, users will only be able to connect to your SaaS applications when they use Cloudflare Gateway. Gateway will log those connections and provide visibility into every action within SaaS apps and the Internet.

Getting started and what’s next

It’s easy to get started with Access for SaaS applications. Visit the Cloudflare for Teams Dashboard and follow one of our published guides.

We will make it easier to protect SaaS applications and will soon be supporting configuration via metadata files. We will also continue to publish SaaS app specific integration guides. Are there specific applications you’ve been trying to integrate? Let us know in the community!

Capturing Purpose Justification in Cloudflare Access

Post Syndicated from Molly Cinnamon original https://blog.cloudflare.com/access-purpose-justification/

The digital world often takes its cues from the real world. For example, there’s a standard question every guard or agent asks when you cross a border—whether it’s a building, a neighborhood, or a country: “What’s the purpose of your visit?” It’s a logical question: sure, the guard knows some information—like who you are (thanks to your ID) and when you’ve arrived—but the context of “why” is equally important. It can set expectations around behavior during your visit, as well as what spaces you should or should not have access to.

Capturing Purpose Justification in Cloudflare Access
The purpose justification prompt appears upon login, asking users to specify their use case before hitting submit and proceeding.

Digital access follows suit. Recent data protection regulations, such as the GDPR, have formalized concepts of purpose limitation and data proportionality: people should only access data necessary for a specific stated reason. System owners know people need access to do their job, but especially for particularly sensitive applications, knowing why a login was needed is just as vital as knowing who, when, and how.

Starting today, Cloudflare for Teams administrators can prompt users to enter a justification for accessing an application prior to login. Administrators can add this prompt to any existing or new Access application with just two clicks, giving them the ability to:

  • Log and review employee justifications for accessing sensitive applications
  • Add additional layers of security to applications they deem sensitive
  • Customize modal text to communicate data use & sharing principles
  • Help meet regulatory requirements for data access control (such as GDPR)

Starting with Zero Trust access control

Cloudflare Access has been built with access management at its core: rather than trusting anyone on a private network, Access checks for identity, context and device posture every time someone attempts to reach an application or resource.

Behind the scenes, administrators build rules to decide who should be able to reach the tools protected by Access. When users need to connect to those tools, they are prompted to authenticate with one of the identity provider options. Cloudflare Access checks their login against the list of allowed users and, if permitted, allows the request to proceed.
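The per-request check can be pictured as a predicate over the login context; a minimal sketch (the field names are illustrative, not Access's actual policy schema):

```python
def allow_login(ctx: dict, allowed_users: set, allowed_countries: set) -> bool:
    """Zero Trust check: identity, context, and device posture on every attempt."""
    return (
        ctx["email"] in allowed_users            # identity from the IdP
        and ctx["country"] in allowed_countries  # context of the login
        and bool(ctx["device_managed"])          # device posture signal
    )
```

The key property is that the predicate is evaluated on every attempt, rather than once at the network perimeter.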

Some applications and workflows contain data so sensitive that the user should have to prove who they are and why they need to reach that service. In this next phase of Zero Trust security, access to data should be limited to specific business use cases or needs, rather than generic all-or-nothing access.

Deploying Zero Trust purpose justification

We created this functionality because we, too, wanted to make sure we had these provisions in place at Cloudflare. We have sensitive internal tools that help our team members serve our customers, and we’ve written before about how we use Cloudflare Access to lock down those tools in a Zero Trust manner.

However, we were not satisfied with just restricting access in the least privileged model. We are accountable to the trust our customers put in our services, and we feel it is important to always have an explicit business reason when connecting to some data sets or tools.

We built purpose justification capture in Cloudflare Access to solve that problem. When team members connect to certain resources, Access prompts them to justify why. Cloudflare’s network logs that rationale and allows the user to proceed.

Purpose justification capture in Access helps fulfill policy requirements, but even for enterprises who don’t need to comply with specific regulations, it also enables a thoughtful privacy and security framework for access controls. Prompting employees to justify their use case helps solve the data management challenge of balancing transparency with security — helping to ensure that sensitive data is used the right way.

Capturing Purpose Justification in Cloudflare Access
Purpose justification capture adds an additional layer of context for enterprise administrators.

Distinguishing Sensitive Domains

So how do you distinguish whether something is sensitive? There are two main questions to ask of an application. First, does it contain personally identifiable information or sensitive financials? Second, do all the employees who have access actually need it? The flexibility of Access policy configuration helps effectively distinguish sensitive domains for specific user groups.

Purpose justification in Cloudflare Access enables Teams administrators to configure the language of the prompt itself by domain. This is a helpful place to remind employees of the sensitivity of the data, such as, “This application contains PII. Please be mindful of company policies and provide a justification for access,” or “Please enter the case number corresponding to your need for access.” The language can proactively ensure that employees with access to an internal tool are using it as intended.

Additionally, Access identity management allows Teams customers to configure purpose capture for only specific, more sensitive employee groups. For example, some employees need daily access to an application and should be considered “trusted.” Other employees may still have access but should only rarely need to use the tool — security teams or data protection officers may view their access as higher risk. The policies enable flexible logical constructions that equate to actions such as “ask everyone but the following employees for a purpose.”
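A construction like “ask everyone but the following employees for a purpose” reduces to a small predicate; a sketch with a hypothetical trusted group:

```python
TRUSTED_DAILY_USERS = {"support-eng@example.com"}  # hypothetical trusted group

def requires_justification(user_email: str, app_is_sensitive: bool) -> bool:
    """Prompt everyone except trusted daily users when the app is sensitive."""
    return app_is_sensitive and user_email not in TRUSTED_DAILY_USERS
```

Keeping the trusted set small and the sensitivity flag per-application is what lets the friction land only where it pays for itself.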

This distinction between sensitive applications and “trusted” employees adds friction only where it benefits data protection, rather than imposing a loss of efficiency on all employees.

Capturing Purpose Justification in Cloudflare Access
Purpose justification is configurable as an Access policy, allowing for maximum flexibility in configuring and layering rules to protect sensitive applications.

Auditing justification records

As a Teams administrator, enterprise data protection officer, or security analyst, you can view purpose justification logs for a specific application to better understand how it has been accessed and used. Auditing the logs can reveal insights about security threats, the need for improved data classification training, or even potential application development to more appropriately address employees’ use cases.

The justifications are seamlessly integrated with other Access audit logs — they are viewable in the Teams dashboard as an additional column in the table of login events, and exportable to a SIEM for further data analysis.

Capturing Purpose Justification in Cloudflare Access
Teams administrators can review the purpose justifications submitted upon application login by their employees.

Getting started

You can start adding purpose justification prompts to your application access policies in Cloudflare Access today. The purpose justification feature is available in all plans, and with the Cloudflare for Teams free plan, you can use it for up to 50 users at no cost.

We’re excited to continue adding new features that give you more flexibility over purpose justification in Access. Have feedback for us? Let us know in this community post.

Introducing Shadow IT Discovery

Post Syndicated from Abe Carryl original https://blog.cloudflare.com/introducing-shadow-it-discovery/


Your team likely uses more SaaS applications than you realize. The time your administrators spend vetting and approving applications sanctioned for use can suddenly be wasted when users sign up for alternative services and store data in new places. Starting today, you can use Cloudflare for Teams to detect and block unapproved SaaS applications with just two clicks.

Increasing Shadow IT usage

SaaS applications save time and budget for IT departments. Instead of paying for servers to host tools — and having staff ready to monitor, upgrade, and troubleshoot those tools — organizations can sign up for a SaaS equivalent with just a credit card and never worry about hosting or maintenance again.

That same convenience causes a data control problem. Those SaaS applications sit outside any environment that you control; the same reason they are easy for your team is also a potential liability now that your sensitive data is kept by third parties. Most organizations keep this in check through careful audits of the SaaS applications being used. Depending on industry and regulatory impact, IT departments evaluate, approve, and catalog the applications they use.

However, users can intentionally or accidentally bypass those approvals. For example, if your organization relies on OneDrive but a user is more comfortable with Google Drive, that user might decide to store work files in Google Drive instead. IT has no visibility into this happening and the user might think it’s fine. That user begins sharing files with other users in your organization, who also sign up with Google Drive, and suddenly an unsanctioned application holds sensitive information. This is “Shadow IT” and these applications inherently obfuscate the controls put in place by your organization.

Detecting Shadow IT

Cloudflare Gateway routes all Internet-bound traffic through Cloudflare’s network to enforce granular controls that protect your users from security threats. Now, it also provides your team added assurance with a low-effort, high-visibility overview of the SaaS applications being used in your environment.

By simply turning on Gateway, all HTTP requests for your organization are aggregated in your Gateway Activity Log for audit and security purposes. Within the activity log, we surface pertinent information about the user, action, and request. These records include data about the application and application type. For a request to Google Drive, for example, the application type would be Collaboration and Online Meeting and the application would be Google Drive.

From there, Gateway analyzes the HTTP requests in your Activity Log and surfaces your Shadow IT by categorizing and sorting these seemingly miscellaneous applications into actionable insights, without any additional lift from your team.


With Shadow IT Discovery, Cloudflare for Teams first catalogs all applications used in your organization. The feature runs in an “observation” mode first – all applications are analyzed, but default to “unreviewed.”

Your team can then review the applications found and, with just a couple clicks, designate applications approved or unapproved — either for a single application or in bulk.

This allows administrators to easily track the top approved and unapproved applications their users are accessing to better profile their security posture. When drilling down into a more detailed view, administrators can take bulk actions to move multiple newly discovered applications at once. In this view, users can also filter on application type to easily identify redundancies in their organization.

Another feature we wanted to add was the ability to quickly highlight if an application being used by your organization has already been secured by Cloudflare Access. You can find this information in the column titled Secured. If an application is not Secured by Access, you can start that process today as well with Access for SaaS. (We added two new tutorials this week!)

When you mark an application unapproved, Cloudflare for Teams does not block it outright. We know some organizations need to label an application unapproved and check in with the users before they block access to it altogether. If your team is ready, you can then apply a Gateway rule to block access to it going forward.

Saving IT cost

While we’re excited to help IT teams stop worrying about unapproved apps, we also talked to teams who feared they were overspending for certain approved applications.

We want to help here too. Today’s launch counts the number of unique users who access any one application over different time intervals. IT teams can use this data to check usage against licenses and right size as needed.
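Counting unique users per application over a window is a simple aggregation over the activity log; a sketch with a hypothetical log-entry shape:

```python
from collections import defaultdict

def unique_users_per_app(log_entries: list) -> dict:
    """Count distinct users seen per application in a slice of the activity log."""
    users = defaultdict(set)
    for entry in log_entries:
        users[entry["application"]].add(entry["user"])
    return {app: len(u) for app, u in users.items()}
```

Comparing these counts against purchased seat licenses makes it easy to spot where to right-size.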

Without this feature, many administrators and our own internal IT department were losing sleep each night wondering if their users were circumventing their controls and putting them at risk of attack. Additionally, many administrators are financially impacted as they procure software licenses for their entire organization. With Shadow IT Discovery, we empower your team to anticipate popular applications and begin the assessment process earlier in the procurement lifecycle.

What’s next

We’re excited to announce Shadow IT Discovery and can’t wait to see what you’ll do with it. To get started, deploy HTTP filtering for your organization with the Cloudflare for Teams client. In the future, we’ll also be adding automation to block unapproved applications in Gateway, but we can’t wait to hear what else you’d like to see out of this feature.

How AWS can help your US federal agency meet the executive order on improving the nation’s cybersecurity

Post Syndicated from Michael Cotton original https://aws.amazon.com/blogs/security/how-aws-can-help-your-us-federal-agency-meet-the-executive-order-on-improving-the-nations-cybersecurity/

AWS can support your information security modernization program to meet the President’s Executive Order on Improving the Nation’s Cybersecurity (issued May 12th, 2021). When working with AWS, a US federal agency gains access to resources, expertise, technology, professional services, and our AWS Partner Network (APN), which can help the agency meet the security and compliance requirements of the executive order.

For federal agencies, the Executive Order on Improving the Nation’s Cybersecurity requires an update to agency plans to prioritize cloud adoption, identify the most sensitive data and update the protections for that data, encrypt data at rest and in transit, implement multi-factor authentication, and meet expanded logging requirements. It also introduces Zero Trust Architectures and, for the first time, requires an agency to develop plans implementing Zero Trust concepts.

This post focuses on how AWS can help you plan for and accelerate cloud adoption. In the rest of the series you’ll learn how AWS offers guidance for building architectures with a Zero Trust security model, multi-factor authentication, encryption for data at-rest and in-transit, and logging capabilities required to increase visibility for security and compliance purposes.

Prioritize the adoption and use of cloud technologies

AWS has developed multiple frameworks to help you plan your migration to AWS and establish a structured, programmatic approach to AWS adoption. We provide a variety of tools, including server, data, and database features, to rapidly migrate various types of applications from on-premises to AWS. The following lists include links and helpful information regarding the ways AWS can help accelerate your cloud adoption.

Planning tools

  • AWS Cloud Adoption Framework (AWS CAF) – We developed the AWS CAF to assist your organization in developing and implementing efficient and effective plans for cloud adoption. The guidance and best practices provided by the framework help you build a comprehensive approach to cloud computing across your organization, and throughout the IT lifecycle. Using the AWS CAF will help you realize measurable business benefits from cloud adoption faster, and with less risk.
  • Migration Evaluator – You can build a data-driven business case for your cloud adoption on AWS by using our Migration Evaluator (formerly TSO Logic) to gain access to insights and help accelerate decision-making for migration to AWS.
  • AWS Migration Acceleration Program – This program assists your organization with migrating to the cloud by providing you training, professional services, and service credits to streamline your migration, helping your agency more quickly decommission legacy hardware, software, and data centers.

AWS services and technologies for migration

  • AWS Application Migration Service (AWS MGN) – This service allows you to replicate entire servers to AWS using block-level replication, performs tests to verify the migration, and executes the cutover to AWS. This is the simplest and fastest method to migrate to AWS.
  • AWS CloudEndure Migration Factory Solution – This solution enables you to replicate entire servers to AWS using block-level replication and executes the cutover to AWS. This solution is designed to coordinate and automate manual processes for large-scale migrations involving a substantial number of servers.
  • AWS Server Migration Service – This is an agentless service that automates the migration of your on-premises VMware vSphere, Microsoft Hyper-V/SCVMM, and Azure virtual machines to AWS. It replicates existing servers as Amazon Machine Images (AMIs), enabling you to transition more quickly and easily to AWS.
  • AWS Database Migration Service – This service automates replication of your on-premises databases to AWS, making it much easier for you to migrate large and complex applications to AWS with minimal downtime.
  • AWS DataSync – This is an online data transfer service that simplifies, automates, and accelerates moving your data between on-premises storage systems and AWS.
  • VMware Cloud on AWS – This service simplifies and speeds up your migration to AWS by enabling your agency to use the same VMware Cloud Foundation technologies across your on-premises environments and in the AWS Cloud. VMware workloads running on AWS have access to more than 200 AWS services, making it easier to move and modernize applications without having to purchase new hardware, rewrite applications, or modify your operations.
  • AWS Snow Family – These services provide devices that can physically transport exabytes of data into and out of AWS. These devices are fully encrypted and integrate with AWS security, monitoring, storage management, and computing capabilities to help accelerate your migration of large data sets to AWS.

AWS Professional Services

  • AWS Professional Services – This is a global team of experts that can help you realize your desired business outcomes with the AWS Cloud, so you can more effectively reach your constituents and better achieve your core mission. Each offering delivers a set of activities, best practices, and documentation reflecting our experience supporting hundreds of customers in their journey to the AWS Cloud.

AWS Partners

  • AWS Government Competency Partners – This page identifies partners who have demonstrated their ability to help government customers accelerate their migration of applications and legacy infrastructure to AWS.

AWS has solutions and partners to assist in your planning and accelerating your migration to the cloud. We can help you develop integrated, cost-effective solutions to help secure your environment and implement the executive order requirements. In short, AWS is ready to help you meet the accelerated timeline goals set in this executive order.

Next steps

For further reading, see the blog post Zero Trust architectures: An AWS perspective, and to learn more about how AWS can help you meet the requirements of the executive order, see the other posts in this series.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Michael Cotton

Michael is a Senior Solutions Architect at AWS.

Helping Keep Governments Safe and Secure

Post Syndicated from Sam Rhea original https://blog.cloudflare.com/helping-keep-governments-safe-and-secure/

Today, we are excited to share that Cloudflare and Accenture Federal Services (AFS) have been selected by the Department of Homeland Security (DHS) to develop a joint solution to help the federal government defend itself against cyberattacks. The solution consists of Cloudflare’s protective DNS resolver which will filter DNS queries from offices and locations of the federal government and stream events directly to Accenture’s analysis platform.

Located within DHS, the Cybersecurity and Infrastructure Security Agency (CISA) operates as “the nation’s risk advisor.”1 CISA works with partners across the public and private sector to improve the security and reliability of critical infrastructure, a mission that spans the federal government, State, Local, Tribal, and Territorial partnerships, and the private sector to provide solutions to emerging and ever-changing threats.

Over the last few years, CISA has repeatedly flagged the cyber risk posed by malicious hostnames, phishing emails with malicious links, and untrustworthy upstream Domain Name System (DNS) resolvers.2 Attackers can compromise devices or accounts, and ultimately data, by tricking a user or system into sending a DNS query for a specific hostname. Once that query is resolved, those devices establish connections that can lead to malware downloads, phishing websites, or data exfiltration.

In May 2021, CISA and the National Security Agency (NSA) proposed that teams deploy protective DNS resolvers to prevent those attacks from becoming incidents. Unlike standard DNS resolvers, protective DNS resolvers check the hostname being resolved to determine if the destination is malicious. If that is the case, or even if the destination is just suspicious, the resolver can stop answering the DNS query and block the connection.

Earlier this year, CISA announced they are not only recommending a protective DNS resolver — they have launched a program to offer a solution to their partners. After a thorough review process, CISA has announced that they have selected Cloudflare and AFS to deliver a joint solution that can be used by departments and agencies of any size within the Federal Civilian Executive Branch.

Helping keep governments safer

Attacks against the critical infrastructure in the United States are continuing to increase. Cloudflare Radar, where we publish insights from our global network, consistently sees the U.S. as one of the most targeted countries for DDoS attacks. Attacks like phishing campaigns compromise credentials to sensitive systems. Ransomware bypasses traditional network perimeters and shuts down target systems.

The sophistication of those attacks also continues to increase. Last year’s SolarWinds Orion compromise represents a new type of supply chain attack where trusted software becomes the backdoor for data breaches. Cloudflare’s analysis of the SolarWinds incident observed compromise patterns that were active over eight months, during which the destinations used grew to nearly 5,000 unique subdomains.

The increase in volume and sophistication has driven a demand for the information and tools to defend against these types of threats at all levels of the US government. Last year, CISA advised over 6,000 state and local officials, as well as federal partners, on mechanisms to protect their critical infrastructure.

At Cloudflare, we have observed a similar pattern. In 2017, Cloudflare launched the Athenian Project to provide state, county, or municipal governments with security for websites that administer elections or report results. In 2020, 229 state and local governments, in 28 states, trusted Cloudflare to help defend their election websites. State and local government websites served by Cloudflare’s Athenian Project increased by 48% last year.

As these attacks continue to evolve, one thing many have in common is their use of a DNS query to a malicious hostname. From SolarWinds to last month’s spearphishing attack against the U.S. Agency for International Development, attackers continue to rely on one of the most basic technologies used when connecting to the Internet.

Delivering a protective DNS resolver

User activity on the Internet typically starts with a DNS query to a DNS resolver. When users visit a website in their browser, open a link in an email, or use a mobile application, their device first sends a DNS query to convert the domain name of the website or server into the Internet Protocol (IP) address of the host serving that site. Once their device has the IP address, they can establish a connection.

Figure 1. Complete DNS lookup and web page query

Attacks on the Internet can also start the same way. Devices that download malware begin making DNS queries to establish connections and leak information. Users that visit an imposter website input their credentials and become part of a phishing attack.

These attacks are successful because DNS resolvers, by default, trust all destinations. If a user sends a DNS query for any hostname, the resolver returns the IP address without determining if that destination is suspicious.

Some hostnames are known to security researchers, including hostnames used in previous attacks or ones that use typos of popular hostnames. Other attacks start from unknown or new threats. Detecting those requires monitoring DNS query behavior, detecting patterns to new hostnames, or blocking newly seen and registered domains altogether.
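
To make the "typos of popular hostnames" signal concrete, a toy string-similarity check might look like this (the domain list and threshold are illustrative; real resolvers rely on curated security research, not this heuristic alone):

```python
import difflib

POPULAR_DOMAINS = ["cloudflare.com", "google.com", "microsoft.com"]

def likely_typosquat(hostname: str, threshold: float = 0.85) -> bool:
    """Flag hostnames that closely resemble, but do not equal, a popular domain."""
    for domain in POPULAR_DOMAINS:
        similarity = difflib.SequenceMatcher(None, hostname, domain).ratio()
        if hostname != domain and similarity >= threshold:
            return True
    return False

print(likely_typosquat("cloudfiare.com"))  # True: one letter off
print(likely_typosquat("example.com"))     # False
```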

Protective DNS resolvers apply a Zero Trust model to DNS queries. Instead of trusting any destination, protective resolvers check the hostname of every query and IP address of every response against a list of known malicious destinations. If the hostname or IP address is in that list, the resolver will not return the result to the user and the connection will fail.
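
The core check a protective resolver performs can be sketched in a few lines (the blocklists below are placeholders; real deployments use curated, continuously updated threat-intelligence feeds):

```python
import ipaddress

# Illustrative blocklists only; not real threat data.
BLOCKED_HOSTNAMES = {"malware.example", "phishing.example"}
BLOCKED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]

def should_block(hostname: str, resolved_ip: str) -> bool:
    """Block if either the queried name or the answer IP is on a blocklist."""
    if hostname.lower().rstrip(".") in BLOCKED_HOSTNAMES:
        return True
    addr = ipaddress.ip_address(resolved_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

print(should_block("malware.example", "198.51.100.7"))  # True: name is listed
print(should_block("good.example", "203.0.113.9"))      # True: answer IP is listed
print(should_block("good.example", "198.51.100.7"))     # False
```

If `should_block` returns True, the resolver simply refuses to return the answer, and the connection never happens.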

Building a solution with Accenture Federal Services

The solution being delivered to CISA, Cloudflare Gateway, builds on Cloudflare’s network to deliver a protective DNS resolver that does not compromise performance. It starts by sending all DNS queries from enrolled devices and offices to Cloudflare’s network. While more of the HTTP Internet continues to be encrypted, the default protocol for sending DNS queries on most devices is still unencrypted. Cloudflare Gateway’s protective DNS resolver supports encrypted options like DNS over HTTPS (DoH) and DNS over TLS (DoT).

Next, blocking DNS queries to malicious hostnames starts with knowing which hostnames are potentially malicious. Cloudflare’s network provides our protective DNS resolver with unique visibility into threats on the Internet. Every day, Cloudflare’s network handles over 800 billion DNS queries, and our infrastructure responds to 25 million HTTP requests per second. We deploy that network in more than 200 cities in over 100 countries, giving our team the ability to see attack patterns across the globe.

We convert that data into the insights that power our security products. For example, we analyze the billions of DNS queries we handle to detect anomalous behavior that would indicate a hostname is being used to leak data through a DNS tunneling attack. For the CISA solution, Cloudflare’s datasets are further enriched by applying additional cybersecurity research along with Accenture’s Cyber Threat Intelligence (ACTI) feed to provide signals to detect new and changing threats on the Internet. This dataset is further analyzed by data scientists using advanced business intelligence tools powered by artificial intelligence and machine learning.
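
As a toy illustration of the kind of anomaly detection described above (both cutoffs are made up for illustration, not Cloudflare's actual detection logic), a long, high-entropy first label is one common signal of DNS tunneling:

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Bits of entropy per character in a DNS label."""
    counts = Counter(label)
    total = len(label)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_like_tunnel(hostname: str, threshold: float = 3.5) -> bool:
    # Tunneling tools often encode data as long, high-entropy subdomain labels.
    first_label = hostname.split(".")[0]
    return len(first_label) > 20 and shannon_entropy(first_label) > threshold

print(looks_like_tunnel("www.example.com"))                         # False
print(looks_like_tunnel("a9f3k2q8zx7w1mvb4tycroot64.example.com"))  # True
```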

Working towards a FedRAMP future

Our Public Sector team is focused on partnering with Federal, State and Local Governments to provide a safe and secure digital experience. We are excited to help CISA deliver an innovative, modern, and cost-efficient solution to the entire civilian federal government.

We will continue this path following our recent announcement that we are currently “In Process” in the Federal Risk and Authorization Management Program (FedRAMP) Marketplace. The government’s rigorous security assessment will allow other federal agencies to adopt Cloudflare’s Zero Trust Security solutions in the future.

What’s next?

We are looking forward to working with Accenture Federal Services to deliver this protective DNS resolver solution to CISA. This contract award demonstrates CISA’s belief in the importance of having protective DNS capabilities as part of a layered defense. We applaud CISA for taking this step and allowing us to partner with the US Government to deliver this solution.

Like CISA, we believe that teams large and small should have the tools they need to protect their critical systems. Your team can also get started using Cloudflare to secure your organization today. Cloudflare Gateway, part of Cloudflare for Teams, is available to organizations of any size.

1https://www.cisa.gov/about-cisa
2See, for example, https://www.cisa.gov/sites/default/files/publications/Addressing_DNS_Resolution_on_Federal_Networks_Memo.pdf; https://media.defense.gov/2021/Mar/03/2002593055/-1/-1/0/CSI_Selecting-Protective-DNS_UOO11765221.PDF

Browser VNC with Zero Trust Rules

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/browser-vnc-with-zero-trust-rules/

Starting today, we’re excited to share that you can now shift another traditional client-driven use case to a browser. Teams can now provide their users with a Virtual Network Computing (VNC) client fully rendered in the browser with built-in Zero Trust controls.

Like the SSH flow, this allows users to connect from any browser on any device, with no client software needed. The feature runs in every one of our data centers in over 200 cities around the world, bringing the experience closer to your end users. We also built the experience using Cloudflare Workers, to offer nearly instant start times. In the future we will support full auditability of user actions in their VNC and SSH sessions.

A quick refresher on VNC

VNC is a desktop sharing platform built on top of the Remote Frame Buffer (RFB) protocol that provides remote access to a server’s GUI. It is built to be platform-independent and gives administrators an easy way to make interfaces available to users who are less comfortable working with a command line on a remote machine, or who need to complete work better suited to a visual interface.

In my case, the most frequent reason I use VNC is to play games that have compatibility issues. Using a virtual machine to run a Windows Server was much cheaper than buying a new laptop.

In most business use cases, VNC isn’t used to play games, it’s driven by security or IT management requirements. VNC can be beneficial to create a “clean room” style environment for users to interact with secure information that cannot be moved to their personal machine.

How VNC is traditionally deployed

Typically, VNC deployments require software to be installed onto a user’s machine. This software allows a user to establish a VNC connection and render the VNC server’s GUI. This comes with challenges of operating system compatibility (remember how VNC was supposed to be platform independent?), security, and management overhead.

Managing software like a VNC viewer typically requires Mobile Device Management (MDM) software or users making individual changes to their machines. This is further complicated by contractors and external users requiring access via VNC.

Challenges with VNC deployments

VNC is often used to create an environment for a user to interact with sensitive data. However, without significant network configuration, it can be very difficult to monitor when a user connects to a VNC server and what they do during their session.

On top of the security concerns, software installed on a user’s machine, like a VNC viewer, is generally difficult to manage — think compatibility issues with operating systems, security updates, and many other problems.

Unlike SSH, where the majority of servers and clients use OpenSSH, there are numerous commercial and free VNC servers and clients of widely varying quality and cost.

We wanted to fix this!

It was time for Browser VNC

One major challenge of rendering a GUI is latency — if a user’s mouse or keystrokes are slow, the experience is almost unusable. Using Cloudflare Tunnel, we can deliver the VNC connection at our edge, meaning we’re less than 50 ms away from 99% of Internet users.

To do this we built a full VNC viewer implementation that runs in a web browser. Something like this would normally require running a server-side TCP → WebSocket proxy (e.g., websockify), since browsers do not natively support raw TCP connections today. Since we already have exactly that with cloudflared + Cloudflare Tunnel, we can connect to existing TCP tunnels and provide an entirely in-browser VNC experience. Because the server-side proxy operates at the TCP level, the VNC session remains end-to-end encrypted between the web client and the VNC server within your network.

Browser VNC with Zero Trust Rules

Once we establish a connection, we use noVNC to render any VNC server natively in the browser.
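
Under the hood, every RFB session begins with a 12-byte ProtocolVersion banner from the server (RFC 6143), which a viewer must parse before negotiating security. A minimal sketch of that first step:

```python
import re

def parse_rfb_version(banner: bytes):
    """Parse the 12-byte ProtocolVersion banner a VNC server sends first (RFC 6143)."""
    m = re.fullmatch(rb"RFB (\d{3})\.(\d{3})\n", banner)
    if not m:
        raise ValueError("not an RFB server banner")
    return int(m.group(1)), int(m.group(2))

# A server speaking RFB 3.8, the version modern viewers usually negotiate:
print(parse_rfb_version(b"RFB 003.008\n"))  # (3, 8)
```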

All of this is delivered using Cloudflare Workers. We were able to build this entire experience on our serverless platform to render the VNC experience at our edge.

The final step is to authenticate the traffic going to the Tunnel established with your VNC server. For this, we can use Cloudflare Access, as it allows us to verify a user’s identity and enforce additional security checks. Once a user is properly authenticated, they are presented with a cookie that is then checked on every request made to the VNC server.
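
Conceptually, checking that cookie on every request is a signed-token validation. The sketch below uses a shared-secret HMAC purely for brevity; Cloudflare Access actually issues signed JWTs that are validated against published public keys:

```python
import hashlib
import hmac

# A stand-in shared secret; Access uses asymmetric JWT signatures in practice.
SECRET = b"demo-secret"

def sign(user: str) -> str:
    """Issue a `value.signature` cookie after a successful identity check."""
    mac = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}.{mac}"

def verify(cookie: str) -> bool:
    """Re-check the signature on every request, as the proxy does with the cookie."""
    user, _, mac = cookie.rpartition(".")
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return bool(user) and hmac.compare_digest(mac, expected)

cookie = sign("alice@example.com")
print(verify(cookie))             # True
print(verify(cookie + "tamper"))  # False
```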

Browser VNC with Zero Trust Rules

And then a user can use their VNC terminal!

Browser VNC with Zero Trust Rules

Why Browser Based is the future

First and foremost, a browser-based experience is straightforward for users. All they need is an Internet connection and a URL to access their SSH and VNC instances. Previously, they needed client software like PuTTY or RealVNC.

Legacy applications, including VNC servers, serve as another attack vector for malicious users because they are difficult to monitor and keep patched with security updates. Running VNC in the browser means that we can push security updates instantly, while also taking advantage of the built-in security features of modern browsers (e.g., Chromium sandboxing).

Visibility is another major improvement. In future releases, we will support screen recording and network request logging to provide detailed information on exactly what was completed during a VNC session. We already provide clear logs on any time a user accesses their VNC or SSH server via the browser.

Browser VNC with Zero Trust Rules

We’re just getting started!

Browser VNC is available now in every Cloudflare for Teams plan. You can get started for up to 50 users at no cost here.

Soon we’ll be announcing our plans to support additional protocols only available in on-prem deployments. Let us know in the Community if there are particular protocols you would like us to consider!

If you have questions about getting started, feel free to post in the community. If you would like to get started today, follow our step-by-step tutorial.

Introducing Zero Trust Private Networking

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/private-networking/

Starting today, you can build identity-aware, Zero Trust network policies using Cloudflare for Teams. You can apply these rules to connections bound for the public Internet or for traffic inside a private network running on Cloudflare. These rules are enforced in Cloudflare’s network of data centers in over 200 cities around the world, giving your team comprehensive network filtering and logging, wherever your users work, without slowing them down.

Last week, my teammate Pete’s blog post described the release of network-based policies in Cloudflare for Teams. Your team can now keep users safe from threats by limiting the ports and IPs that devices in your fleet can reach. With that release, security teams can now replace even more security appliances with Cloudflare’s network.

We’re excited to help your team replace that hardware, but we also know that those legacy network firewalls were used to keep private data and applications safe in a castle-and-moat model. You can now use Cloudflare for Teams to upgrade to a Zero Trust networking model instead, with a private network running on Cloudflare and rules based on identity, not IP address.

To learn how, keep reading or watch the demo below.

Deprecating the castle-and-moat model

Private networks provided security by assuming that anyone who could physically connect should be trusted. If you could enter an office and connect to the network, the network assumed you belonged there and allowed you to reach any other destination on that network. When work happened inside the closed walls of offices, with security based on the physical door to the building, that model at least offered some basic protections.

That model fell apart when users left the offices. Even before the pandemic sent employees home, roaming users or branch offices relied on virtual private networks (VPNs) to punch holes back into the private network. Users had to authenticate to the VPN but, once connected, still had the freedom to reach almost any resource. With more holes in the firewall, and full lateral movement, this model became a risk to any security organization.

However, the alternative was painful or unavailable to most teams. Building network segmentation rules required complex configuration, and even with that level of investment, organizations were still trusting the source IP of the user rather than the user’s identity.

These types of IP-based rules served as band-aids while the rest of the use cases in an organization moved into the future. Resources like web applications migrated to models that used identity, multi-factor authentication, and continuous enforcement while networking security went unchanged.

But private networks can be great!

There are still great reasons to use private networks for applications and resources. It can be easier and faster to create and share something on a private network instead of waiting to create a public DNS and IP record.

Also, IPs are more easily discarded and reused across internal networks, and you do not need to give every team member permission to edit public DNS records. In some cases, regulatory and security requirements flat-out prohibit exposing tools publicly on the Internet.

Private networks should not disappear, but the usability and security compromises they require should stay in the past. Two months ago, we announced the ability to build a private network on Cloudflare. This feature allows your team to replace VPN appliances and clients with a network that has a point of presence in over 200 cities around the world.

Zero Trust rules are enforced on the Cloudflare edge

While that release helped us address the usability compromises of a traditional VPN, today’s announcement handles the security compromises. You can now build identity-based, Zero Trust policies inside that private network. This means that you can lock down specific CIDR ranges or IP addresses based on a user’s identity, group, device or network. You can also control and log every connection without additional hardware or services.

How it works

Cloudflare’s daemon, cloudflared, is used to create a secure TCP tunnel from your network to Cloudflare’s edge. This tunnel is private and can only be accessed by connections that you authorize. On their side, users can deploy Cloudflare WARP on their machines to forward their network traffic to Cloudflare’s edge — this allows them to hit specific private IP addresses. Since Cloudflare has 200+ data centers across the globe, all of this occurs without any traffic backhauls or performance penalties.

With today’s release, we now enforce in-line network firewall policies as well. All traffic arriving at Cloudflare’s edge will be evaluated by the Layer 4 firewall. So while you can choose to enable or disable the Layer 7 firewall or bypass HTTP inspection for a given domain, all TCP traffic arriving at Cloudflare will traverse the Layer 4 firewall. Network-level policies allow you to match traffic that arrives from (or is destined to) data centers, branch offices, and remote users based on the following traffic criteria:

  • Source IP address or CIDR in the header
  • Destination IP address or CIDR in the header
  • Source port or port range in the header
  • Destination port or port range in the header

With these criteria in place, you can enforce identity-aware policies down to a specific port across your entire network plane.
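
As a sketch, evaluating one flow against a rule built from those criteria might look like this (the addresses, port, and rule shape are illustrative, not the Gateway rule format):

```python
import ipaddress

# An illustrative rule: allow RDP (3389) to one private host from the corp range.
RULE = {
    "src": ipaddress.ip_network("10.1.0.0/16"),
    "dst": ipaddress.ip_network("10.2.0.5/32"),
    "dst_port": 3389,
}

def matches(rule, src_ip, dst_ip, dst_port):
    """Evaluate a flow's header fields against one network policy rule."""
    return (
        ipaddress.ip_address(src_ip) in rule["src"]
        and ipaddress.ip_address(dst_ip) in rule["dst"]
        and dst_port == rule["dst_port"]
    )

print(matches(RULE, "10.1.4.9", "10.2.0.5", 3389))   # True
print(matches(RULE, "192.0.2.1", "10.2.0.5", 3389))  # False: source outside range
```

In the real product, the identity attached to the flow (user, group, device) is evaluated alongside these network fields.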

Get started with Zero Trust networking

There are a few things you’ll want to have configured before building your Zero Trust private network policies (we cover these in detail in our previous private networking post):

  • Install cloudflared on your private network
  • Route your private IP addresses to Cloudflare’s edge
  • Deploy the WARP client to your users’ machines

Once the initial setup is complete, this is how you can configure your Zero Trust network policies on the Teams Dashboard:

1. Create a new network policy in Gateway.

Introducing Zero Trust Private Networking

2. Specify the IP and Port combination you want to allow access to. In this example, we are exposing an RDP port on a specific private IP address.

Introducing Zero Trust Private Networking

3. Add any desired identity policies to your network policy. In this example, we have limited access to users in a “Developers” group specified in the identity provider.

Introducing Zero Trust Private Networking

Once this policy is configured, only users in the specific identity group running the WARP client will be able to access applications on the specified IP and port combination.

And that’s it. Without any additional software or configuration, we have created an identity-aware network policy that works for all of your users, on any machine or network across the world, while maintaining Zero Trust. Existing infrastructure can be securely exposed in minutes, not hours or days.

What’s Next

We want to make this even easier to use and more secure. In the coming months, we are planning to add support for private DNS resolution, private IP conflict management, and granular session control for private network policies. Additionally, this flow currently only works for client-to-server (WARP to cloudflared) connections. Coming soon, we’ll introduce support for east-west connections that will allow teams to connect cloudflared instances and other Cloudflare One on-ramps to one another.

Getting started is easy — open your Teams Dashboard and follow our documentation.

Network-based policies in Cloudflare Gateway

Post Syndicated from Pete Zimmerman original https://blog.cloudflare.com/network-based-policies-in-cloudflare-gateway/

Over the past year, Cloudflare Gateway has grown from a DNS filtering solution to a Secure Web Gateway. That growth has allowed customers to protect their organizations with fine-grained identity-based HTTP policies and malware protection wherever their users are. But what about other Internet-bound, non-HTTP traffic that users generate every day — like SSH?

Today we’re excited to announce the ability for administrators to configure network-based policies in Cloudflare Gateway. Like DNS and HTTP policy enforcement, organizations can use network selectors like IP address and port to control access to any network origin.

Because Cloudflare for Teams integrates with your identity provider, it also gives you the ability to create identity-based network policies. This means you can now control access to non-HTTP resources on a per-user basis regardless of where they are or what device they’re accessing that resource from.

A major goal for Cloudflare One is to expand the number of on-ramps to Cloudflare — just send your traffic to our edge however you wish and we’ll make sure it gets to the destination as quickly and securely as possible. We released Magic WAN and Magic Firewall to let administrators replace MPLS connections, define routing decisions, and apply packet-based filtering rules on network traffic from entire sites. When coupled with Magic WAN, Gateway allows customers to define network-based rules that apply to traffic between whole sites and data centers, as well as to Internet-bound traffic.

Solving Zero Trust networking problems

Until today, administrators could only create policies that filtered traffic at the DNS and HTTP layers. However, we know that organizations need to control the network-level traffic leaving their endpoints. We kept hearing two categories of problems from our users and we’re excited that today’s announcement addresses both.

First, organizations want to replace their legacy network firewall appliances. Those appliances are complex to manage, expensive to maintain, and force users to backhaul traffic. Security teams deploy those appliances in part to control the ports and IPs devices can use to send traffic. That level of security helps prevent devices from sending traffic over non-standard ports or to known malicious IPs, but customers had to deal with the downsides of on-premise security boxes.

Second, moving to a Zero Trust model for named resources is not enough. Cloudflare Access provides your team with Zero Trust controls over specific applications, including non-HTTP applications, but we know that customers who are migrating to this model want to bring that level of Zero Trust control to all of their network traffic.

How it works

Cloudflare Gateway, part of Cloudflare One, helps organizations replace legacy firewalls and upgrade to Zero Trust networking by starting with the endpoint itself. Wherever your users do their work, they can connect to a private network running on Cloudflare or the public Internet without backhauling traffic.

First, administrators deploy the Cloudflare WARP agent on user devices, whether those devices run macOS, Windows, iOS, Android, or (soon) Linux. The WARP agent can operate in two modes:

  • DNS filtering: WARP becomes a DNS-over-HTTPS (DoH) client and sends all DNS queries to a nearby Cloudflare data center where Cloudflare Gateway can filter those queries for threats like websites that host malware or phishing campaigns.
  • Proxy mode: WARP creates a WireGuard tunnel from the device to Cloudflare’s edge and sends all network traffic through the tunnel. Cloudflare Gateway can then inspect HTTP traffic and apply policies like URL-based rules and virus scanning.
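
To make the first mode concrete, a DNS-over-HTTPS client encodes each DNS query into the base64url `dns` parameter defined by RFC 8484 before sending it to the resolver. Here's a minimal Python sketch of that encoding; this is illustrative only, not WARP's actual implementation (the public Cloudflare DoH endpoint shown is real, but the helper name is ours):

```python
import base64
import struct

def build_doh_query(hostname: str, qtype: int = 1) -> str:
    """Build the unpadded base64url 'dns' parameter for an RFC 8484 DoH GET."""
    # DNS header: ID=0 (RFC 8484 recommends 0 for cacheability),
    # flags=0x0100 (recursion desired), QDCOUNT=1, other counts 0.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS=IN
    # RFC 8484 requires unpadded base64url encoding.
    return base64.urlsafe_b64encode(header + question).rstrip(b"=").decode()

# Cloudflare's public DoH endpoint accepts this as a GET parameter:
url = f"https://cloudflare-dns.com/dns-query?dns={build_doh_query('example.com')}"
```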

Today’s announcement relies on the second mode. The WARP agent will send all TCP traffic leaving the device to Cloudflare, along with the identity of the user on the device and the organization in which the device is enrolled. The Cloudflare Gateway service will take the identity and then review the TCP traffic against four criteria:

  • Source IP or network
  • Source Port
  • Destination IP or network
  • Destination Port

Before allowing the packets to proceed to their destination, Cloudflare Gateway checks the organization’s rules to determine whether the traffic should be blocked. Rules can apply to all of an organization’s traffic or just to specific users and directory groups. If the traffic is allowed, Cloudflare Gateway still logs the identity and the criteria above.
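
In pseudocode, a single rule combining identity with those four selectors might look like the following sketch. The field names and structure are our own illustration, not Cloudflare's actual rule syntax:

```python
import ipaddress
from dataclasses import dataclass
from typing import Optional

@dataclass
class NetworkRule:
    action: str                    # "allow" or "block"
    users: Optional[set] = None    # None means the rule applies to everyone
    src_net: Optional[str] = None  # CIDR, e.g. "10.0.0.0/8"
    src_port: Optional[int] = None
    dst_net: Optional[str] = None
    dst_port: Optional[int] = None

    def matches(self, user, src_ip, src_port, dst_ip, dst_port) -> bool:
        # A selector left as None is a wildcard; all set selectors must match.
        if self.users is not None and user not in self.users:
            return False
        if self.src_net and ipaddress.ip_address(src_ip) not in ipaddress.ip_network(self.src_net):
            return False
        if self.src_port is not None and src_port != self.src_port:
            return False
        if self.dst_net and ipaddress.ip_address(dst_ip) not in ipaddress.ip_network(self.dst_net):
            return False
        if self.dst_port is not None and dst_port != self.dst_port:
            return False
        return True

# Example: block all outbound SSH, regardless of who sends it.
ssh_block = NetworkRule(action="block", dst_port=22)
```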

Cloudflare Gateway accomplishes this without slowing down your team. The Gateway service runs in every Cloudflare data center in over 200 cities around the world, giving your team members an on-ramp to the Internet that does not backhaul or hairpin traffic. We enforce rules using Cloudflare’s Rust-based Wirefilter execution engine, taking what we’ve learned from applying IP-based rules in our reverse proxy firewall at scale and giving your team the performance benefits.

Building a Zero Trust networking rule

SSH is a versatile protocol that allows users to connect to remote machines and even tunnel traffic from a local machine to a remote machine before reaching the intended destination. That’s great, but it also leaves organizations with a gaping hole in their security posture. As a first step, an administrator could configure a rule that blocks all outbound SSH traffic across the organization.

Network-based policies in Cloudflare Gateway

As soon as you save that policy, the phone rings, and it’s an engineer asking why they can’t use many of their development tools. Right: engineers use SSH a lot, so we should use the engineering IdP group to allow just our engineers to use SSH.

Network-based policies in Cloudflare Gateway

You take advantage of rule precedence and place that rule above the existing rule that affects all users to allow engineers to SSH outbound but not any other users in the organization.

Network-based policies in Cloudflare Gateway

It doesn’t matter which corporate device engineers are using or where they are located: they will be allowed to use SSH, and all other users will be blocked.
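
The precedence behavior described above can be modeled as first-match evaluation over an ordered rule list: the engineering allow rule sits above the blanket block rule, so it wins for engineers. This toy sketch uses an invented tuple layout, not Gateway's actual representation:

```python
# Rules are evaluated top-down; the first match wins.
# Each rule: (allowed_groups_or_None, destination_port, action)
POLICY = [
    ({"engineering"}, 22, "allow"),  # higher precedence: engineers may SSH out
    (None, 22, "block"),             # everyone else is blocked on port 22
]

def evaluate(user_groups: set, dst_port: int) -> str:
    for groups, port, action in POLICY:
        if port == dst_port and (groups is None or user_groups & groups):
            return action
    return "allow"  # traffic matched by no rule proceeds normally

evaluate({"engineering"}, 22)  # engineers: allowed
evaluate({"sales"}, 22)        # everyone else: blocked
```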

One more thing

Last month, we announced the ability for customers to create private networks on Cloudflare. Using Cloudflare Tunnel, organizations can connect environments they control using private IP space and route traffic between sites; better still, WARP users can connect to those private networks wherever they’re located. No need for centralized VPN concentrators and complicated configurations: connect your environment to Cloudflare and configure routing.

Network-based policies in Cloudflare Gateway

Today’s announcement gives administrators the ability to configure network access policies to control traffic within those private networks. What if the engineer above wasn’t trying to SSH to an Internet-accessible resource but to something an organization deliberately wants to keep within an internal private network (e.g., a development server)? Again, not everyone in the organization should have access to that either. Now administrators can configure identity-based rules that apply to private networks built on Cloudflare.

What’s next?

We’re laser-focused on our Cloudflare One goal to secure organizations regardless of how their traffic gets to Cloudflare. Applying network policies to both WARP users and routing between private networks is part of that vision.

We’re excited to release these building blocks for Zero Trust Network Access policies to protect an organization’s users and data. We can’t wait to dig deeper into helping organizations secure applications that use private hostnames and IPs, just as they can today with their publicly facing applications.

We’re just getting started. Follow this link so you can get started, too.

AWS Verified episode 5: A conversation with Eric Rosenbach of Harvard University’s Belfer Center

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-verified-episode-5-a-conversation-with-eric-rosenbach-of-harvard-universitys-belfer-center/

I am pleased to share the latest episode of AWS Verified, where we bring you conversations with global cybersecurity leaders about important issues, such as how to create a culture of security, cyber resiliency, Zero Trust, and other emerging security trends.

Recently, I got the opportunity to experience distance learning when I took the AWS Verified series back to school. I got a taste of life as a Harvard grad student, meeting (virtually) with Eric Rosenbach, Co-Director of the Belfer Center of Science and International Affairs at Harvard University’s John F. Kennedy School of Government. I call it, “Verified meets Veritas.” Harvard’s motto may never be the same again.

In this video, Eric shared with me the Belfer Center’s focus as the hub of the Harvard Kennedy School’s research, teaching, and training at the intersection of cutting-edge, interdisciplinary topics such as international security, environmental and resource issues, and science and technology policy. In recognition of the Belfer Center’s consistently stellar work and its six consecutive years ranked as the world’s #1 university-affiliated think tank, in 2021 it was named a center of excellence by the University of Pennsylvania’s Think Tanks and Civil Societies Program.

Eric’s deep connection to the students reflects the Belfer Center’s mission to prepare future generations of leaders to address critical areas in practical ways. Eric says, “I’m a graduate of the school, and now that I’ve been out in the real world as a policy practitioner, I love going into the classroom, teaching students about the way things work, both with cyber policy and with cybersecurity/cyber risk mitigation.”

In the interview, I talked with Eric about his varied professional background. Prior to the Belfer Center, he was Chief of Staff to US Secretary of Defense Ash Carter. Eric was also the Assistant Secretary of Defense for Homeland Defense and Global Security, where he was known around the US government as the Pentagon’s cyber czar. He has served as an officer in the US Army, written two books, been the Chief Security Officer for the European ISP Tiscali, and was a professional committee staff member in the US Senate.

I asked Eric to share his opinion on what the private sector and government can learn from each other. I’m excited to share Eric’s answer to this with you as well as his thoughts on other topics, because the work that Eric and his Belfer Center colleagues are doing is important for technology leaders.

Watch my interview with Eric Rosenbach, and visit the AWS Verified webpage for previous episodes, including interviews with security leaders from Netflix, Vodafone, Comcast, and Lockheed Martin. If you have an idea or a topic you’d like covered in this series, please leave a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

Integrating Cloudflare Gateway and Access

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/integrating-cloudflare-gateway-and-access/

Integrating Cloudflare Gateway and Access

We’re excited to announce that you can now set up your Access policies to require that all user traffic to your application is filtered by Cloudflare Gateway. This ensures that all of the traffic to your self-hosted and SaaS applications is secured and centrally logged. You can also use this integration to build rules that determine which users can connect to certain parts of your SaaS applications, even if the application does not support those rules on its own.

Stop threats from returning to your applications and data

We built Cloudflare Access as an internal project to replace our own VPN. Unlike a traditional private network, Access follows a Zero Trust model. Cloudflare’s edge checks every request to protected resources for identity and other signals like device posture (i.e., information about a user’s machine, such as the operating system version and whether antivirus is running).

By deploying Cloudflare Access, our security and IT teams could build granular rules for each application and log every request and event. Cloudflare’s network accelerated how users connected. We launched Access as a product for our customers in 2018 to share those improvements with teams of any size.

Integrating Cloudflare Gateway and Access

Over the last two years, we added new types of rules that check for hardware security keys, location, and other signals. However, we were still left with some challenges:

  • What happened to devices before they connected to applications behind Access? Were they bringing something malicious with them?
  • Could we make sure these devices were not leaking data elsewhere when they reached data behind Access?
  • Had the credentials used for a Cloudflare Access login been phished elsewhere?

Integrating Cloudflare Gateway and Access

We built Cloudflare Gateway to solve those problems. Cloudflare Gateway sends all traffic from a device to Cloudflare’s network, where it can be filtered for threats, file upload/download, and content categories.

Administrators deploy a lightweight agent on user devices that proxies all Internet-bound traffic through Cloudflare’s network. As that traffic arrives in one of our data centers in 200 cities around the world, Cloudflare’s edge inspects the traffic. Gateway can then take actions like prevent users from connecting to destinations that contain malware or block the upload of files to unapproved locations.

With today’s launch, you can now build Access rules that restrict connections to devices that are running Cloudflare Gateway. You can configure Cloudflare Gateway to run in always-on mode and ensure that the devices connecting to your applications are secured as they navigate the rest of the Internet.

Log every connection to every application

In addition to filtering, Cloudflare Gateway also logs every request and connection made from a device. With Gateway running, your organization can audit how employees use SaaS applications like Salesforce, Office 365, and Workday.

Integrating Cloudflare Gateway and Access

However, we’ve talked to several customers who share a concern over log integrity — “what stops a user from bypassing Gateway’s logging by connecting to a SaaS application from a different device?” Users could type in their password and use their second-factor authentication token on a different device — that way, the organization would lose visibility into that corporate traffic.

Today’s release gives your team the ability to ensure every connection to your SaaS applications uses Cloudflare Gateway. Your team can integrate Cloudflare Access, and its ruleset, into the login flow of your SaaS applications. Cloudflare Access checks for additional factors when your users log in with your SSO provider. By adding a rule to require Cloudflare Gateway be used, you can prevent users from ever logging into a SaaS application without connecting through Gateway.
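
Conceptually, the login flow now has to satisfy several signals at once: who the user is, whether the device is enrolled in Gateway, and any posture checks. A toy model of that decision follows; the signal names are invented for illustration and Cloudflare's real rule language differs:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LoginContext:
    sso_identity: Optional[str]  # identity asserted by the SSO provider
    via_gateway: bool            # did the connection arrive through Gateway?
    posture_ok: bool             # device posture checks (OS version, antivirus)

def access_allows_login(ctx: LoginContext) -> bool:
    # Every signal must pass: a valid password and second factor alone
    # are no longer sufficient if the device is not running Gateway.
    return bool(ctx.sso_identity) and ctx.via_gateway and ctx.posture_ok
```

This is why the "different device" bypass above fails: without Gateway running, the connection flunks the `via_gateway` check regardless of credentials.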

Build data control rules in SaaS applications

One other challenge we had internally at Cloudflare is that we lacked the ability to add user-based controls in some of the SaaS applications we use. For example, a team member connecting to a data visualization application had access to dashboards created by other teams that they shouldn’t have been able to see.

We can use Cloudflare Gateway to solve that problem. Gateway provides the ability to restrict certain URLs to groups of users; this allows us to add rules that only let specific team members reach records that live at known URLs.
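
A simple model of that kind of URL-to-group restriction looks like the sketch below. The URLs and group names are made up for illustration, and the prefix-matching approach is our assumption about one reasonable way to express such rules:

```python
# Hypothetical mapping of URL prefixes to the groups allowed to reach them.
URL_ACLS = {
    "https://dashboards.example.com/finance/": {"finance"},
    "https://dashboards.example.com/eng/": {"engineering"},
}

def url_allowed(url: str, user_groups: set) -> bool:
    for prefix, groups in URL_ACLS.items():
        if url.startswith(prefix):
            return bool(user_groups & groups)
    return True  # URLs not covered by an ACL stay open to all users
```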

Integrating Cloudflare Gateway and Access

However, if someone is not using Gateway, we lose that level of policy control. The integration with Cloudflare Access ensures that those rules are always enforced. If users are not running Gateway, they cannot log in to the application.

What’s next?

You can begin using this feature in your Cloudflare for Teams account today with the Teams Standard or Teams Enterprise plan. Documentation is available here to help you get started.

Want to try out Cloudflare for Teams? You can sign up for Teams today on our free plan and test Gateway’s DNS filtering and Access for up to 50 users at no cost.