All posts by Achiel van der Mandele

Argo Smart Routing for UDP: speeding up gaming, real-time communications and more

Post Syndicated from Achiel van der Mandele original http://blog.cloudflare.com/turbo-charge-gaming-and-streaming-with-argo-for-udp/

Today, Cloudflare is super excited to announce that we’re bringing traffic acceleration to customers’ UDP traffic. Now, you can improve the latency of UDP-based applications like video games, voice calls, and video meetings by up to 17%. Combining the power of Argo Smart Routing (our traffic acceleration product) with UDP gives you the ability to supercharge your UDP-based traffic.

When applications use TCP vs. UDP

Typically when people talk about the Internet, they think of websites they visit in their browsers, or apps that allow them to order food. This type of traffic is sent across the Internet via HTTP which is built on top of the Transmission Control Protocol (TCP). However, there’s a lot more to the Internet than just browsing websites and using apps. Gaming, live video, or tunneling traffic to different networks via a VPN are all common applications that don’t use HTTP or TCP. These popular applications leverage the User Datagram Protocol (or UDP for short). To understand why these applications use UDP instead of TCP, we’ll need to dig into how these different applications work.

When you load a web page, you generally want to see the entire web page; the website would be confusing if parts of it were missing. For this reason, HTTP uses TCP as a method of transferring website data. TCP ensures that if a packet ever gets lost as it crosses the Internet, that packet will be resent. Having a reliable protocol like TCP is generally a good idea when 100% of the information sent needs to be loaded. It’s worth noting that later HTTP versions like HTTP/3 actually moved away from TCP as a transport, but they still ensure packet delivery by handling retransmission in the QUIC protocol, which itself runs over UDP.

There are other applications that prioritize quickly sending real time data and are less concerned about perfectly delivering 100% of the data. Let’s explore Real-Time Communications (RTC) like video meetings as an example. If two people are streaming video live, all they care about is what is happening now. If a few packets are lost during the initial transmission, retransmission is usually too slow to render the lost packet data in the current video frame. TCP doesn’t really make sense in this scenario.

Instead, RTC protocols are built on top of UDP. TCP is like a formal back and forth conversation where every sentence matters. UDP is more like listening to your friend's stream of consciousness: you don’t care about every single bit as long as you get the gist of it. UDP transfers packet data with speed and efficiency without guaranteeing the delivery of those packets. This is perfect for applications like RTC where reducing latency is more important than occasionally losing a packet here or there. The same applies to gaming traffic; you generally want the most up-to-date information, and you don’t really care about retransmitting lost packets.

Gaming and RTC applications really care about latency. Latency is the length of time it takes a packet to be sent to a server plus the length of time to receive a response from the server (called round-trip time or RTT). In the case of video games, the higher the latency, the longer it will take for you to see other players move and the less time you’ll have to react to the game. With enough latency, games become unplayable: if the players on your screen are constantly blipping around it’s near impossible to interact with them. In RTC applications like video meetings, you’ll experience a delay between yourself and your counterpart. You may find yourselves accidentally talking over each other which isn’t a great experience.

Companies that host gaming or RTC infrastructure often try to reduce latency by spinning up servers that are geographically closer to their users. However, it’s common to have two users that are trying to have a video call between distant locations like Amsterdam and Los Angeles. No matter where you install your servers, that's still a long distance for that traffic to travel. The longer the path, the higher the chances are that you're going to run into congestion along the way. Congestion is just like a traffic jam on a highway, but for networks. Sometimes certain paths get overloaded with traffic. This causes delays and packets to get dropped. This is where Argo Smart Routing comes in.

Argo Smart Routing

Cloudflare customers that want the best cross-Internet application performance rely on Argo Smart Routing’s traffic acceleration to reduce latency. Argo Smart Routing is like the GPS of the Internet. It uses real time global network performance measurements to accelerate traffic, actively route around Internet congestion, and increase your traffic’s stability by reducing packet loss and jitter.

Argo Smart Routing was launched in May 2017, and its first iteration focused on reducing website traffic latency. Since then, we’ve improved Argo Smart Routing and also launched Argo Smart Routing for Spectrum TCP traffic, which reduces latency for any TCP-based protocol. Today, we’re excited to bring the same Argo Smart Routing technology to customers’ UDP traffic, reducing latency, packet loss, and jitter in gaming and live audio/video applications.

Argo Smart Routing accelerates Internet traffic by sending millions of synthetic probes from every Cloudflare data center to the origin of every Cloudflare customer. These probes measure the latency of all possible routes between Cloudflare’s data centers and a customer’s origin. We combine those with probes running between Cloudflare’s data centers to calculate possible routes. When an Internet user makes a request to an origin, Cloudflare consults our real-time global latency measurements, examines Internet congestion data, and calculates the optimal route for the customer’s traffic. To enable Argo Smart Routing for UDP traffic, Cloudflare extended the route computations typically used for HTTP and TCP traffic and applied them to UDP traffic.
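
As a minimal sketch of the idea (the data center names and latency numbers below are hypothetical, and this is not Cloudflare's production logic), smart routing boils down to picking the lowest-latency path among the direct route and routes through intermediate data centers:

    # Hypothetical sketch of smart route selection, not Cloudflare's production code.
    # Latencies (ms) come from synthetic probes between data centers and the origin.
    probe_ms = {
        ("tokyo", "origin"): 173.3,      # direct path
        ("tokyo", "seattle"): 55.0,      # edge -> intermediate
        ("seattle", "origin"): 88.3,     # intermediate -> origin
        ("tokyo", "hongkong"): 35.0,
        ("hongkong", "origin"): 150.0,
    }

    def best_route(edge, intermediates):
        """Pick the lowest-latency path: direct, or via one intermediate hop."""
        candidates = {(edge, "origin"): probe_ms[(edge, "origin")]}
        for mid in intermediates:
            candidates[(edge, mid, "origin")] = probe_ms[(edge, mid)] + probe_ms[(mid, "origin")]
        path = min(candidates, key=candidates.get)
        return path, candidates[path]

    print(best_route("tokyo", ["seattle", "hongkong"]))
    # -> (('tokyo', 'seattle', 'origin'), 143.3)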

We knew that Argo Smart Routing offered impressive benefits for HTTP traffic, reducing time to first byte by up to 30% on average for customers. But UDP can be treated differently by networks, so we were curious to see whether we would see a similar reduction in round-trip time for UDP. To validate, we ran a set of tests. We set up an origin in Iowa, USA and had a client connect to it from Tokyo, Japan. Compared to a regular Spectrum setup, we saw a 17.3% decrease in round-trip time on average. For the standard setup, Spectrum proxied packets to Iowa in 173.3 milliseconds on average. Turning on Argo Smart Routing reduced the average round-trip time to 143.3 milliseconds. The distance between those two cities is 6,074 miles (9,776 kilometers), meaning we've effectively moved the two closer to each other by over a thousand miles (1,609 km) just by turning on this feature.
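
As a quick sanity check, the quoted percentage and the "miles saved" figure fall straight out of those measurements:

    # Back-of-the-envelope check of the numbers quoted above.
    standard_rtt_ms = 173.3   # Spectrum without Argo, Tokyo <-> Iowa
    argo_rtt_ms = 143.3       # Spectrum with Argo Smart Routing

    reduction = (standard_rtt_ms - argo_rtt_ms) / standard_rtt_ms
    print(f"RTT reduction: {reduction:.1%}")                            # -> 17.3%

    distance_miles = 6074
    print(f"Effective miles saved: {distance_miles * reduction:,.0f}")  # -> 1,051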

We're incredibly excited about Argo Smart Routing for UDP and what our customers will use it for. If you're in gaming or real-time communications, or have a different use case that you think would benefit from speeding up UDP traffic, please contact your account team today. We are currently in closed beta but are excited to accept applications.

One-click ISO 27001 certified deployment of Regional Services in the EU

Post Syndicated from Achiel van der Mandele original https://blog.cloudflare.com/one-click-iso-27001-deployment/

Today, we’re very happy to announce the general availability of a new region for Regional Services that allows you to limit your traffic to only ISO 27001 certified data centers inside the EU. This helps customers that have very strict requirements surrounding which data centers are allowed to decrypt and service traffic. Enabling this feature is a one-click operation right on the Cloudflare dashboard.

Regional Services – a recap

In 2020, we saw an increase in prospects asking about data localization. Specifically, increased regulatory pressure limited them from using vendors that operate at global scale. We launched Regional Services, a new way for customers to use the Cloudflare network. With Regional Services, we put customers back in control over which data centers are used to service traffic, by limiting exactly which data centers are used to decrypt and service HTTPS traffic. For example, a customer may want to use only data centers inside the European Union to service traffic. Regional Services does this by leveraging our global network for DDoS protection while only decrypting traffic and applying Layer 7 products inside data centers located in the European Union.

We later followed up with the Data Localization Suite and additional regions: India, Singapore and Japan.

With Regional Services, customers get the best of both worlds: we empower them to use our global network for volumetric DDoS protection whilst limiting where traffic is serviced. We do that by accepting the raw TCP connection at the closest data center but forwarding it on to a data center in-region for decryption. That means that only machines of the customer’s choosing actually see the raw HTTP request, which could contain sensitive data such as a customer’s bank account or medical information.

A new region and a new UI

Traditionally we’ve seen requests for data localization largely center around countries or geographic areas. Many types of regulations require companies to make promises about working only with vendors that are capable of restricting where their traffic is serviced geographically. Organizations can have many reasons for being limited in their choices, but they generally fall into two buckets: compliance and contractual commitments.

More recently, we are seeing more and more companies asking about security requirements. A frequently asked question in IT security is: how do you ensure that something is safe? For instance, for a data center you might be wondering how physical access is managed, or how often security policies are reviewed and updated. This is where certifications come in. A common certification in IT is the ISO 27001 certification:

Per ISO.org:

“ISO/IEC 27001 is the world’s best-known standard for information security management systems (ISMS) and their requirements. Additional best practice in data protection and cyber resilience are covered by more than a dozen standards in the ISO/IEC 27000 family. Together, they enable organizations of all sectors and sizes to manage the security of assets such as financial information, intellectual property, employee data and information entrusted by third parties.”

In short, ISO 27001 is a certification that a data center can achieve, attesting that it maintains a set of security standards to keep the data center secure. With the new Regional Services region, HTTPS traffic will only be decrypted in data centers that hold the ISO 27001 certification. Products such as WAF, Bot Management and Workers will only be applied in those data centers.

The other update we’re excited to announce is a brand new User Interface for configuring the Data Localization Suite. The previous UI was limited in that customers had to preconfigure a region for an entire zone: you couldn’t mix and match regions. The new UI allows you to do just that: each individual hostname can be configured for a different region, directly on the DNS tab:

[Screenshot: selecting a region for an individual hostname on the DNS tab]

Configuring a region for a particular hostname is now just a single click away. Changes take effect within seconds, making this the easiest way to configure data localization yet. For customers using the Metadata Boundary, we’ve also launched a self-serve UI that allows you to configure where logs flow:

[Screenshot: the self-serve UI for configuring where Metadata Boundary logs flow]
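
For teams that would rather automate this than click through the dashboard, the same per-hostname configuration can in principle be scripted against the Cloudflare API. Below is a minimal sketch; note that the endpoint path and the "region_key" value are assumptions made for illustration, so check the API documentation for the exact names:

    # Hypothetical sketch: setting a per-hostname region via the Cloudflare API.
    # The endpoint path and "region_key" value are assumptions, not confirmed names.
    import os
    import requests

    zone_id = os.environ["CF_ZONE_ID"]
    token = os.environ["CF_API_TOKEN"]

    resp = requests.post(
        f"https://api.cloudflare.com/client/v4/zones/{zone_id}/addressing/regional_hostnames",
        headers={"Authorization": f"Bearer {token}"},
        json={"hostname": "app.example.com", "region_key": "eu"},  # hypothetical values
    )
    resp.raise_for_status()
    print(resp.json())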

We’re excited about these new updates that give customers more flexibility in choosing which of Cloudflare’s data centers to use as well as making it easier than ever to configure them. The new region and existing regions are now a one-click configuration option right from the dashboard. As always, we love getting feedback, especially on what new regions you’d like to see us add in the future. In the meantime, if you’re interested in using the Data Localization Suite, please reach out to your account team.

Regional Services comes to India, Japan and Australia

Post Syndicated from Achiel van der Mandele original https://blog.cloudflare.com/regional-services-comes-to-apac/

We announced the Data Localization Suite in 2020, when requirements for data localization were already important in the European Union. Since then, we’ve witnessed a growing trend toward localization globally. We are thrilled to expand our coverage to India, Japan and Australia, allowing more customers to use Cloudflare by giving them precise control over which parts of the Cloudflare network are able to perform advanced functions like WAF or Bot Management that require inspecting traffic.

Regional Services, a recap

In 2020, we introduced Regional Services, a new way for customers to use Cloudflare. With Regional Services, customers can limit which data centers actually decrypt and inspect traffic. This helps because certain customers are affected by regulations on where they are allowed to service traffic. Others have contractual agreements with their customers specifying exactly where traffic is allowed to be decrypted and inspected.

As one German bank told us: “We can look at the rules and regulations and debate them all we want. As long as you promise me that no machine outside the European Union will see a decrypted bank account number belonging to one of my customers, we’re happy to use Cloudflare in any capacity”.

Under normal operation, Cloudflare uses its entire network to perform all functions. This is what most customers want: leverage all of Cloudflare’s data centers so that you always service traffic to eyeballs as quickly as possible. Increasingly, we are seeing customers that wish to strictly limit which data centers service their traffic. With Regional Services, customers can use Cloudflare’s network but limit which data centers perform the actual decryption. Products that require decryption, such as WAF, Bot Management and Workers will only be applied within those data centers.

How does Regional Services work?

You might be asking yourself: how does that even work? Doesn’t Cloudflare operate an anycast network? Cloudflare was built from the bottom up to leverage anycast, a routing technique in which all of Cloudflare’s data centers advertise the same IP addresses through the Border Gateway Protocol. Whichever data center is closest to you from a network point of view is the one that you’ll hit.

This is great for two reasons. The first is that the closer the data center to you, the faster the reply. The second great benefit is that this comes in very handy when dealing with large DDoS attacks. Volumetric DDoS attacks throw a lot of bogus traffic at you, which overwhelms network capacity. Cloudflare’s anycast network is great at taking on these attacks because they get distributed across the entire network.

Anycast doesn’t respect regional borders; it doesn’t even know about them. That’s why, out of the box, Cloudflare can’t guarantee that traffic inside a country will also be serviced there. Although you’ll typically hit a data center inside your country, it’s very possible that your Internet Service Provider will send traffic to a network that routes it to a different country.

Regional Services solves that: when turned on, each data center becomes aware of which region it is operating in. If a user from a country hits a data center that doesn’t match the region that the customer has selected, we simply forward the raw TCP stream in encrypted form. Once it reaches a data center inside the right region, we decrypt and apply all Layer 7 products. This covers products such as CDN, WAF, Bot Management and Workers.

Let’s take an example. A user is in Kerala, India and their Internet Service Provider has determined that the fastest path to one of our data centers is to Colombo, Sri Lanka. In this example, a customer may have selected India as the sole region within which traffic should be serviced. The Colombo data center sees that this traffic is meant for the India region. It does not decrypt, but instead forwards it to the closest data center inside India. There, we decrypt and products such as WAF and Workers are applied as if the traffic had hit the data center directly.

[Diagram: a request arriving in Colombo is forwarded, still encrypted, to a data center inside India before being decrypted and serviced]
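
A highly simplified sketch of that decision, with hypothetical data center names (the real data plane forwards raw, still-encrypted TCP streams rather than calling Python functions):

    # Illustrative sketch of the Regional Services forwarding decision.
    # Data center names are hypothetical; "india" is the customer's chosen region.
    DC_REGION = {"colombo": "apac", "mumbai": "india", "chennai": "india"}

    def closest_dc_in_region(region):
        # Stand-in: the real system picks the closest in-region data center
        # by measured network distance; here we just take the first match.
        return next(dc for dc, r in DC_REGION.items() if r == region)

    def handle_connection(local_dc, customer_region):
        if DC_REGION[local_dc] == customer_region:
            # In-region: decrypt here and apply Layer 7 products (WAF, Workers, ...).
            return f"decrypt and serve at {local_dc}"
        # Out of region: forward the still-encrypted TCP stream in-region.
        return f"forward encrypted stream to {closest_dc_in_region(customer_region)}"

    print(handle_connection("colombo", "india"))  # forward encrypted stream to mumbai
    print(handle_connection("mumbai", "india"))   # decrypt and serve at mumbai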

Bringing Regional Services to Asia

Historically, we’ve seen most interest in Regional Services in geographic regions such as the European Union and the Americas. Over the past few years, however, we have seen a lot of interest from Asia Pacific. Based on customer feedback and analysis of regulations, we quickly concluded there were three key regions we needed to support: India, Japan and Australia. We’re proud to say that all three are generally available for use today.

But we’re not done yet! We realize there are many more customers that require localization to their particular region. We’re looking to add many more in the near future and are working hard to make it easier to support more of them. If you have a region in mind, we’d love to hear it!

India, Japan and Australia are all live today! If you’re interested in using the Data Localization Suite, contact your account team!

Magic Firewall gets Smarter

Post Syndicated from Achiel van der Mandele original https://blog.cloudflare.com/magic-firewall-gets-smarter/

Today, we’re very excited to announce a set of updates to Magic Firewall, adding security and visibility features that are key in modern cloud firewalls. To improve security, we’re adding threat intel integration and geo-blocking. For visibility, we’re adding packet captures at the edge, a way to see packets arriving at our network in near real-time.

Magic Firewall is our network-level firewall which is delivered through Cloudflare to secure your enterprise. Magic Firewall covers your remote users, branch offices, data centers and cloud infrastructure. Best of all, it’s deeply integrated with Cloudflare, giving you a one-stop overview of everything that’s happening on your network.

A brief history of firewalls

We talked a lot about firewalls on Monday, including how our firewall-as-a-service solution is very different from traditional firewalls and helps security teams that want sophisticated inspections at the Application Layer. When we talk about the Application Layer, we’re referring to OSI Layer 7. This means we’re applying security features using semantics of the protocol. The most common example is HTTP, the protocol you’re using to visit this website. We have Gateway and our WAF to protect inbound and outbound HTTP requests, but what about Layer 3 and Layer 4 capabilities? Layer 3 and 4 refer to the packet and connection levels. These security features aren’t applied to HTTP requests, but instead to IP packets and (for example) TCP connections. A lot of folks in the CIO organization want to add extra layers of security and visibility without resorting to decryption at Layer 7. We’re excited to talk to you about two sets of new features that will make your lives easier: geo-blocking and threat intel integration to improve security posture, and packet captures to get you better visibility.

Threat Intel and IP Lists

Magic Firewall is great if you know exactly what you want to allow and block. You can put in rules that match exactly on IP source and destination, as well as bitslicing to verify the contents of various packets. However, there are many situations in which you don’t exactly know who the bad and good actors are: is this IP address that’s trying to access my network a perfectly fine consumer, or is it part of a botnet that’s trying to attack my network?

The same goes the other way: whenever someone inside your network is trying to create a connection to the Internet, how do you know whether it’s an obscure blog or a malware website? Clearly, you don’t want to play whack-a-mole and try to keep track of every malicious actor on the Internet by yourself. For most security teams, it’s nothing more than a waste of time! You’d much rather rely on a company that makes it their business to focus on this.

Today, we’re announcing Magic Firewall support for our in-house Threat Intelligence feed. Cloudflare sees approximately 28 million HTTP requests each second and blocks 76 billion cyber threats each day. With almost 20% of the top 10 million Alexa websites on Cloudflare, we see a lot of novel threats pop up every day. We use that data to detect malicious actors on the Internet and turn it into a list of known malicious IPs. And we don’t stop there: we also integrate with a number of third party vendors to augment our coverage.

To match on any of the threat intel lists, just set up a rule in the UI as normal:

[Screenshot: configuring a Magic Firewall rule that matches on a threat intel list]

Threat intel feed categories include Malware, Anonymizer and Botnet Command-and-Control centers. Malware and Botnet lists cover properties on the Internet distributing malware and known command and control centers. Anonymizers contain a list of known forward proxies that allow attackers to hide their IP addresses.

In addition to the managed lists, you also have the flexibility of creating your own lists, either to add your own known set of malicious IPs or to make management of your known-good network endpoints easier. As an example, you may want to create a list of all your own servers. That way, you can easily block traffic to and from those servers in any rule, without having to replicate the list each time.

Another particularly gnarly problem that many of our customers deal with is geo restrictions. Many are restricted in where they are allowed (or want) to accept traffic from and send traffic to. The challenge is that an IP address by itself tells you nothing about its geolocation. Even worse, IP addresses regularly change hands, moving from one country to another.

As of today, you can easily block or allow traffic to any country, without the management hassle that comes with maintaining lists yourself. Country lists are kept up to date entirely by Cloudflare, all you need to do is set up a rule matching on the country and we’ll take care of the rest.

[Screenshot: a Magic Firewall rule matching on country]
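
For example, rules blocking known botnet command-and-control traffic, or traffic from a particular country, might look something like the following. These are illustrative only: the list and field names are modeled on the $cf.anonymizer example later in this post and on Wireshark conventions, not taken from the product documentation.

(ip.src in $cf.botnetcc)
(ip.geoip.country == "KP")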

Packet captures at the edge

Finally, we’re releasing a very powerful feature: packet captures at the edge. A packet capture is a pcap file that contains all packets that were seen by a particular network box (usually a firewall or router) during a specific time frame. Packet captures are useful if you want to debug your network: why can’t my users connect to a particular website? Or you may want to get better visibility into a DDoS attack, so you can put up better firewall rules.

Traditionally, you’d log into your router or firewall and start up something like tcpdump. You’d set up a filter to only match on certain packets (packet capture files can quickly get very big) and grab the file. But what happens if you want coverage across your entire network: on-premises, offices and all your cloud environments? You’ll likely have different vendors for each of those locations and have to figure out how to get packet captures from all of them. Even worse, some of them might not even support grabbing packet captures.

With Magic Firewall, grabbing packet captures across your entire network becomes simple: because you run a single network-firewall-as-a-service, you can grab packets across your entire network in one go. This gets you instant visibility into exactly where a particular IP is interacting with your network, regardless of physical or virtual location. You have the option of grabbing all network traffic (warning: it might be a lot!) or setting a filter to only grab a subset. Filters follow the same Wireshark syntax that Magic Firewall rules use:

(ip.src in $cf.anonymizer)
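
Or, to narrow a capture down further, a hypothetical filter that only grabs DNS-bound UDP traffic from a single suspect address:

(ip.src == 203.0.113.7 and udp.dstport == 53)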

We think these are great additions to Magic Firewall, giving you powerful primitives to police traffic and tooling to gain visibility into what’s actually going on in your network. Threat Intel, geo-blocking and IP lists are all available today — reach out to your account team to have them activated. Packet captures are entering early access later in December; if you’re interested in those as well, please reach out to your account team!

Announcing Argo for Spectrum

Post Syndicated from Achiel van der Mandele original https://blog.cloudflare.com/argo-spectrum/

Today we’re excited to announce the general availability of Argo for Spectrum, a way to turbo-charge any TCP-based application. With Argo for Spectrum, you can reduce latency and packet loss and improve connectivity for any TCP application, including common protocols like Minecraft, Remote Desktop Protocol and SFTP.

The Internet — more than just a browser

When people think of the Internet, many of us think about using a browser to view websites. Of course, it’s so much more! We often use other ways to connect to each other and to the resources we need for work. For example, you may interact with servers for work using SSH File Transfer Protocol (SFTP), git or Remote Desktop software. At home, you might play a video game on the Internet with friends.

To help protect these services against DDoS attacks, we launched Spectrum in 2018, extending Cloudflare’s DDoS protection to any TCP- or UDP-based protocol. Customers use it for a wide variety of use cases, including protecting video streaming (RTMP), gaming and internal IT systems. Spectrum also supports common VoIP protocols such as SIP and RTP, which have recently seen an increase in DDoS ransomware attacks. A lot of these applications are also highly sensitive to performance issues. No one likes waiting for a file to upload or dealing with a lagging video game.

Latency and throughput are the two metrics people generally discuss when talking about network performance. Latency refers to the amount of time a piece of data (a packet) takes to traverse between two systems. Throughput refers to the number of bits you can actually send per second. This blog will discuss how these two interact and how we improve them with Argo for Spectrum.

Argo to the rescue

There are a number of factors that cause poor performance between two points on the Internet, including network congestion, the distance between the two points, and packet loss. This is a problem many of our customers have, even on web applications. To help, we launched Argo Smart Routing in 2017, a way to reduce latency (or time to first byte, to be precise) for any HTTP request that goes to an origin.

That’s great for folks who run websites, but what if you’re working on an application that doesn’t speak HTTP? Up until now people had limited options for improving performance for these applications. That changes today with the general availability of Argo for Spectrum. Argo for Spectrum offers the same benefits as Argo Smart Routing for any TCP-based protocol.

Argo for Spectrum takes the same smarts from our network traffic and applies it to Spectrum. At time of writing, Cloudflare sits in front of approximately 20% of the Alexa top 10 million websites. That means that we see, in near real-time, which networks are congested, which are slow and which are dropping packets. We use that data and take action by provisioning faster routes, which sends packets through the Internet faster than normal routing. Argo for Spectrum works the exact same way, using the same intelligence and routing plane but extending it to any TCP based application.

Performance

But what does this mean for real application performance? To find out, we ran a set of benchmarks on Catchpoint. Catchpoint is a service that allows you to set up performance monitoring from all over the world. Tests are repeated at intervals and aggregate results are reported. We wanted to use a third party such as Catchpoint to get objective results (as opposed to running the tests ourselves).

For our test case, we used a file server in the Netherlands as our origin. We provisioned various tests on Catchpoint to measure file transfer performance from various places in the world: Rabat, Tokyo, Los Angeles and Lima.

[Chart: Throughput of a 10MB file. Higher is better.]

Depending on location, transfers saw increases of up to 108% (for locations such as Tokyo) and 85% on average. Why is it so much faster? The answer is bandwidth delay product. In layman’s terms, bandwidth delay product means that the higher the latency, the lower the throughput. This is because with transmission protocols such as TCP, we need to wait for the other party to acknowledge that they received data before we can send more.

As an analogy, let’s assume we’re operating a water cleaning facility. We send unprocessed water through a pipe to a cleaning facility, but we’re not sure how much capacity the facility has! To test, we send an amount of water through the pipe. Once the water has arrived, the facility will call us up and say, “we can easily handle this amount of water at a time, please send more.” If the pipe is short, the feedback loop is quick: the water will arrive, and we’ll immediately be able to send more without having to wait. If we have a very, very long pipe, we have to stop sending water for a while before we get confirmation that the water has arrived and there’s enough capacity.

The same happens with TCP: we send an amount of data onto the wire and wait for confirmation that it arrived. If the latency is high, throughput suffers because we’re constantly waiting for confirmation. If latency is low, we can sustain a high throughput. With Spectrum and Argo, we help in two ways: first, Spectrum terminates the TCP connection close to the user, meaning that latency for that link is low. Second, Argo reduces the latency between our edge and the origin. In concert, they create a chain of low-latency connections, resulting in a low overall bandwidth delay product between user and origin. The result is a much higher throughput than you would otherwise get.
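
A rough way to see why this matters: in a simplified model (ignoring slow start, window scaling and loss), TCP throughput is capped at roughly the window size divided by the round-trip time:

    # Simplified model: max TCP throughput ~= window size / round-trip time.
    # Ignores slow start, window scaling and packet loss.
    def max_throughput_mbps(window_bytes, rtt_ms):
        return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

    window = 64 * 1024  # a classic 64 KB TCP window
    print(max_throughput_mbps(window, rtt_ms=150))  # ~3.5 Mbps on one long hop
    print(max_throughput_mbps(window, rtt_ms=30))   # ~17.5 Mbps per short hop

Splitting one long, high-RTT connection into a chain of short, low-RTT ones raises that ceiling on every segment, which is where the throughput gains in the chart above come from.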

Argo for Spectrum supports any TCP based protocol. This includes commonly used protocols like SFTP, git (over SSH), RDP and SMTP, but also media streaming and gaming protocols such as RTMP and Minecraft. Setting up Argo for Spectrum is easy. When creating a Spectrum application, just hit the “Argo Smart Routing” toggle. Any traffic will automatically be smart routed.

[Screenshot: the Argo Smart Routing toggle on a Spectrum application]

Argo for Spectrum covers much more than just these applications: we support any TCP-based protocol. If you’re interested, reach out to your account team today to see what we can do for you.

Introducing Magic Firewall

Post Syndicated from Achiel van der Mandele original https://blog.cloudflare.com/introducing-magic-firewall/

Today we’re excited to announce Magic Firewall™, a network-level firewall delivered through Cloudflare to secure your enterprise. Magic Firewall covers your remote users, branch offices, data centers and cloud infrastructure. Best of all, it’s deeply integrated with Cloudflare One™, giving you a one-stop overview of everything that’s happening on your network.

Cloudflare Magic Transit™ secures IP subnets with the same DDoS protection technology that we built to keep our own global network secure. That helps ensure your network is safe from attack and remains available, and it replaces capacity-limited physical appliances with Cloudflare’s network.

That still leaves some hardware onsite, though, for a different function: firewalls. Networks don’t just need protection from DDoS attacks; administrators need a way to set policies for all traffic entering and leaving the network. With Magic Firewall, we want to help your team deprecate those network firewall appliances and move that burden to the Cloudflare global network.

Firewall boxes are miserable to manage

Network firewalls have always been clunky. Not only are they expensive, they are bound by their own hardware constraints. If you need more CPU or memory, you have to buy more boxes. If you lack capacity, the entire network suffers, directly impacting employees who are trying to do their work. To compensate, network operators and security teams are forced to buy more capacity than they need, paying more than necessary.

We’ve heard this problem from our Magic Transit customers who are constantly running into capacity challenges:

“We’re constantly running out of memory and running into connection limits on our firewalls. It’s a huge problem.”

Network operators find themselves piecing together solutions from different vendors, mixing and matching features, and worrying about keeping policies in sync across the network. The result is more headache and added cost.

The solution isn’t more hardware

Some organizations then turn to even more vendors and purchase additional hardware to manage the patchwork firewall hardware they have deployed. Teams then have to balance refresh cycles, updates, and end-of-life management across even more platforms. These are band-aid solutions that do not solve the fundamental problem: how do we create a single view of the entire network that gives insight into what is happening (good and bad) and lets us apply policy instantaneously, globally?

[Diagram: Traditional Firewall Architecture]

[Diagram: Magic Firewall architecture]

Instead of more band-aids, we’re excited to launch Magic Firewall as a single, comprehensive solution for network filtering. Unlike legacy appliances, Magic Firewall runs in the Cloudflare network, which scales up or down with a customer’s needs at any given time.

Running in our network delivers an added benefit. Many customers backhaul network traffic to single chokepoints in order to perform firewalling operations, adding latency. Cloudflare operates data centers in 200 cities around the world and each of those points of presence is capable of delivering the same solution. Regional offices and data centers can instead rely on a Cloudflare Magic Firewall engine running within 100 milliseconds of their operation.

Integrated with Cloudflare One

Cloudflare One consists of products that allow you to apply a single filtering engine with consistent security controls to your entire network, not just part of it. The same types of controls that your organization wants to apply to traffic leaving your networks should be applied to traffic leaving your devices.

Magic Firewall will integrate with what you’re already using in Cloudflare. For example, traffic leaving endpoints outside of the network can reach Cloudflare using the Cloudflare WARP client where Gateway will apply the same rules your team configures for network level filtering. Branch offices and data centers can connect through Magic Transit with the same set of rules. This gives you a one-stop overview of your entire network instead of having to hunt down information across multiple devices and vendors.

How does it work?

So what is Magic Firewall? Magic Firewall is a way to replace your antiquated on-premises network firewall with an as-a-service solution, pushing your perimeter out to the edge. We already allow you to apply firewall rules at our edge with Magic Transit, but the process to add or change rules has previously involved working with your account team or Cloudflare support. Our first version, generally available in the next few months, will allow all our Magic Transit customers to apply static OSI Layer 3 & 4 mitigations completely self-service, at Cloudflare scale.

[Diagrams: Cloudflare applies firewall policies at every data center, meaning you have firewalls applying policies across the globe]

Our first version of Magic Firewall will focus on static mitigations, allowing you to set a standard set of rules that apply to your entire network, whether devices or applications are sitting in the cloud, an employee’s device or a branch office. You’ll be able to express rules allowing or blocking based on:

  • Protocol
  • Source or destination IP and port
  • Packet length
  • Bit field match

Rules can be crafted in Wireshark syntax, a domain specific language common in the networking world and the same syntax we use across our other products. With this syntax, you can easily craft extremely powerful rules to precisely allow or deny any traffic in or out of your network. If you suspect there’s a bad actor inside or outside of your perimeter, simply log on to the dashboard and block that traffic. Rules are pushed out globally in seconds, shutting down threats at the edge.
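
As an illustrative example (made up for this post, not taken from product documentation), a rule dropping oversized UDP packets aimed at the SSDP port (1900), a common DDoS reflection vector, could be expressed as:

(udp.dstport == 1900 and ip.len > 512)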

[Screenshot: the Magic Firewall rule builder]

Configuring firewalls should be easy and powerful. With Magic Firewall, rules can be configured using an easy UI that allows for complex logic. Or, just type the filter rule manually using Wireshark filter syntax and configure that way. Don’t want to mess with a UI? Rules can be added just as easily through the API.

What’s next?

Looking at packets is not enough… Even with firewall rules, teams still need visibility into what’s actually happening on their network: what’s happening inside of these datastreams? Is this legitimate traffic or do we have malicious actors either inside or outside of our network doing nefarious things? Deploying Cloudflare to sit between any two actors that interact with any of your assets (be they employee devices or services exposed to the Internet) allows us to enforce any policy, anywhere, either on where the traffic is coming from or what’s inside the traffic. Applying policies based on traffic type is just around the corner and we’re excited to announce that we’re planning to add additional capabilities to automatically detect intrusion events based on what’s happening inside datastreams in the near future.

We’re excited about this new journey. With Cloudflare One, we’re reinventing what the network looks like for corporations. We integrate access management, security features and performance across the board: for your network’s visitors but also for anyone inside it. All of this built on top of a network that was #BuiltForThis.

We’ll be opening up Magic Firewall in a limited beta, starting with existing Magic Transit customers. If you’re interested, please let us know.

Announcing support for gRPC

Post Syndicated from Achiel van der Mandele original https://blog.cloudflare.com/announcing-grpc/

Today we’re excited to announce beta support for proxying gRPC, a next-generation protocol that allows you to build APIs at scale. With gRPC on Cloudflare, you get access to the security, reliability and performance features that you’re used to having at your fingertips for traditional APIs. Sign up for the beta today in the Network tab of the Cloudflare dashboard.

gRPC has proven itself to be a popular new protocol for building APIs at scale: it’s more efficient and built to offer superior bi-directional streaming capabilities. However, because gRPC uses newer technology, like HTTP/2, under the covers, existing security and performance tools did not support gRPC traffic out of the box. This meant that customers adopting gRPC to power their APIs had to pick between modernity on one hand, and things like security, performance, and reliability on the other. Because supporting modern protocols and making sure people can operate them safely and performantly is in our DNA, we set out to fix this.

When you put your gRPC APIs on Cloudflare, you immediately gain all the benefits that come with Cloudflare. Apprehensive of exposing your APIs to bad actors? Add security features such as WAF and Bot Management. Need more performance? Turn on Argo Smart Routing to decrease time to first byte. Or increase reliability by adding a Load Balancer.

And naturally, gRPC plugs in to API Shield, allowing you to add more security by enforcing client authentication and schema validation at the edge.

What is gRPC?

Protocols like JSON-REST have been the bread and butter of Internet facing APIs for several years. They’re great in that they operate over HTTP, their payloads are human readable, and a large body of tooling exists to quickly set up an API for another machine to talk to. However, the same things that make these protocols popular are also weaknesses; JSON, as an example, is inefficient to store and transmit, and expensive for computers to parse.

In 2015, Google introduced gRPC, a protocol designed to be fast and efficient, relying on binary protocol buffers to serialize messages before they are transferred over the wire. This prevents (normal) humans from reading them but results in much higher processing efficiency. gRPC has become increasingly popular in the era of microservices because it neatly addresses the shortfalls laid out above.

JSON: { “foo”: “bar” }
Protocol Buffers: 0b111001001100001011000100000001100001010

gRPC relies on HTTP/2 as a transport mechanism. This poses a problem for customers trying to deploy common security technologies like web application firewalls, as most reverse proxy solutions (including Cloudflare’s HTTP stack, until today) downgrade HTTP requests down to HTTP/1.1 before sending them off to an origin.

Beyond microservices in a data center, the original use case for gRPC, adoption has grown in many other contexts. Many popular mobile apps have millions of users that all rely on messages being sent back and forth between mobile phones and servers. We’ve seen many customers wire up API connectivity for their mobile apps by using the same gRPC API endpoints they already have inside their data centers for communication with clients in the outside world.

While this solves the efficiency issues with running services at scale, it exposes critical parts of these customers’ infrastructure to the Internet, introducing security and reliability issues. Today we are introducing support for gRPC at Cloudflare, to secure and improve the experience of running gRPC APIs on the Internet.

How does gRPC + Cloudflare work?

The engineering work our team had to do to add gRPC support is composed of a few pieces:

  1. Changes to the early stages of our request processing pipeline to identify gRPC traffic coming down the wire.
  2. Additional functionality in our WAF to “understand” gRPC traffic, ensuring gRPC connections are handled correctly within the WAF, including inspecting all components of the initial gRPC connection request.
  3. Adding support to establish HTTP/2 connections to customer origins for gRPC traffic, allowing gRPC to be proxied through our edge. HTTP/2 to origin support is currently limited to gRPC traffic, though we expect to expand the scope of traffic proxied back to origin over HTTP/2 soon.

What does this mean for you, a Cloudflare customer interested in using our tools to secure and accelerate your API? Because of the hard work we’ve done, enabling support for gRPC is a click of a button in the Cloudflare dashboard.

Using gRPC to build mobile apps at scale

Why does Cloudflare supporting gRPC matter? To dig in on one use case, let’s look at mobile apps. Apps need quick, efficient ways of interacting with servers to get the information needed to show on your phone. There is no browser, so they rely on APIs to get the information. API stands for application programming interface; an API is a standardized way for machines (say, your phone and a server) to talk to each other.

Let’s say we’re a mobile app developer with thousands, or even millions of users. With this many users, using a modern protocol like gRPC allows us to run less compute infrastructure than would be necessary with older, less efficient protocols like JSON-REST. But exposing these endpoints, naked, on the Internet is really scary. Up until now there were very few, if any, options for protecting gRPC endpoints against application layer attacks with a WAF and guarding against volumetric attacks with DDoS mitigation tools. That changes today, with Cloudflare adding gRPC to its set of supported protocols.
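
To make that concrete, here is a minimal sketch of such a client call using the grpcio package. The ProfileService, its GetProfile method, and api.example.com are hypothetical, and the profile_pb2 modules would be generated by protoc from your own .proto definition:

    # Minimal sketch of calling a gRPC API fronted by Cloudflare.
    # ProfileService, GetProfile and api.example.com are hypothetical;
    # profile_pb2 / profile_pb2_grpc are generated by protoc from your .proto file.
    import grpc
    import profile_pb2
    import profile_pb2_grpc

    # gRPC runs over HTTP/2 with TLS; the proxy terminates and re-establishes it.
    channel = grpc.secure_channel("api.example.com:443", grpc.ssl_channel_credentials())
    stub = profile_pb2_grpc.ProfileServiceStub(channel)

    reply = stub.GetProfile(profile_pb2.ProfileRequest(user_id="1234"), timeout=5.0)
    print(reply)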

With gRPC on Cloudflare, you get the full benefits of our security, reliability and performance products:

  • WAF for inspection of incoming gRPC requests. Use managed rules or craft your own.
  • Load Balancing to increase reliability: configure multiple gRPC backends to handle the load, let Cloudflare distribute the load across them. Backend selection can be done in round-robin fashion, based on health checks or load.
  • Argo Smart Routing to increase performance by transporting your gRPC messages faster than the Internet would be able to route them. Messages are routed around congestion on the Internet, resulting in an average reduction of time to first byte by 30%.

And of course, all of this works with API Shield, an easy way to add mTLS authentication to any API endpoint.

Enabling gRPC support

To enable gRPC support, head to the Cloudflare dashboard and go to the Network tab. From there you can sign up for the beta.

[Screenshot: signing up for the gRPC beta on the Network tab]

We have limited seats available at launch, but will open up more broadly over the next few weeks. After signing up and toggling gRPC support, you’ll have to enable Cloudflare proxying on your domain on the DNS tab to activate Cloudflare for your gRPC API.

We’re excited to bring gRPC support to the masses, allowing you to add the security, reliability and performance benefits that you’re used to getting with Cloudflare. Enabling is just a click away. Take it for a spin and let us know what you think!