Tag Archives: Security Week

Everything you might have missed during Security Week 2023

Post Syndicated from Reid Tatoris original https://blog.cloudflare.com/security-week-2023-wrap-up/

Security Week 2023 is officially in the books. In our welcome post last Saturday, I talked about Cloudflare’s years-long evolution from protecting websites, to protecting applications, to protecting people. Our goal this week was to help our customers solve a broader range of problems, reduce external points of vulnerability, and make their jobs easier.

We announced 34 new tools and integrations that will do just that. Combined, these announcements will help you do five key things faster and more easily:

  1. Making it easier to deploy and manage Zero Trust everywhere
  2. Reducing the number of third parties customers must use
  3. Leveraging machine learning to let humans focus on critical thinking
  4. Opening up more proprietary Cloudflare threat intelligence to our customers
  5. Making it harder for humans to make mistakes

And to help you respond to the most current attacks in real time, we reported on how we’re seeing scammers use the Silicon Valley Bank news to phish new victims, and what you can do to protect yourself.

In case you missed any of the announcements, take a look at the summary and navigation guide below.

Monday

  • Top phished brands and new phishing and brand protections – Today we released insights from our global network on the top 50 brands used in phishing attacks, coupled with the tools customers need to stay safer. Our new phishing and brand protection capabilities, part of Security Center, let customers better preserve brand trust by detecting and even blocking “confusable” and lookalike domains involved in phishing campaigns.
  • How to stay safe from phishing – Phishing attacks come in all sorts of forms to fool people. Email is definitely the most common, but there are others. Following up on our top 50 brands in phishing attacks post, here are some tips to help you catch these scams before you fall for them.
  • Locking down your JavaScript: positive blocking with Page Shield policies – Page Shield now ensures that only vetted and secure JavaScript is executed by browsers, stopping unwanted or malicious JavaScript from loading and keeping end user data safer.
  • Cloudflare Aegis: dedicated IPs for Zero Trust migration – With Aegis, customers can now get dedicated IPs that Cloudflare uses to send them traffic. This allows customers to lock down services and applications at an IP level and build a protected environment that is application, protocol, and even IP aware.
  • Mutual TLS now available for Workers – mTLS support for Workers allows communication with resources that enforce an mTLS connection. mTLS provides greater security for those building on Workers, as identifying and authenticating both the client and the server helps protect sensitive data.
  • Using Cloudflare Access with CNI – We have introduced an innovative new approach to secure hosted applications via Cloudflare Access without the need for any installed software or custom code on application servers.

Tuesday

  • No hassle migration from Zscaler to Cloudflare One with The Descaler Program – Cloudflare is excited to launch the Descaler Program, a frictionless path to migrate existing Zscaler customers to Cloudflare One. With this announcement, Cloudflare is making it even easier for enterprise customers to switch to a faster, simpler, and more agile foundation for security and network transformation.
  • The state of application security in 2023 – For Security Week 2023, we are providing updated insights and trends related to mitigated traffic, bot and API traffic, and account takeover attacks.
  • Adding Zero Trust signals to Sumo Logic for better security insights – Today we’re excited to announce expanded support for automated normalization and correlation of Zero Trust logs for Logpush in Sumo Logic’s Cloud SIEM. Joint customers will reduce alert fatigue and accelerate the triage process by converging security and network data into high-fidelity insights.
  • Cloudflare One DLP integrates with Microsoft Information Protection labels – Cloudflare One now offers Data Loss Prevention (DLP) detections for Microsoft Purview Information Protection labels. This extends the power of Microsoft’s labels to any of your corporate traffic in just a few clicks.
  • Scan and secure Atlassian with Cloudflare CASB – We are unveiling two new integrations for Cloudflare CASB: one for Atlassian Confluence and the other for Atlassian Jira. Security teams can begin scanning for Jira- and Confluence-specific security issues that may be leaving sensitive corporate data at risk.
  • Zero Trust security with Ping Identity and Cloudflare Access – Cloudflare Access and Ping Identity offer a powerful solution for organizations looking to implement Zero Trust security controls to protect their applications and data. Cloudflare now offers full integration support, so Ping Identity customers can easily integrate their identity management solutions with Cloudflare Access to provide a comprehensive security solution for their applications.

Wednesday

  • Announcing Cloudflare Fraud Detection – We are excited to announce Cloudflare Fraud Detection, which will provide precise, easy-to-use tools that can be deployed in seconds to detect and categorize fraud such as fake account creation, card testing, and fraudulent transactions. Fraud Detection will be in early access later this year; those interested can sign up here.
  • Automatically discovering API endpoints and generating schemas using machine learning – Customers can use these new features to enforce a positive security model on their API endpoints, even if they have little-to-no information about their existing APIs today.
  • Detecting API abuse automatically using sequence analysis – With our new Cloudflare Sequence Analytics for APIs, organizations can view the most important sequences of API requests to their endpoints, to better understand potential abuse and where to apply protections first.
  • Using the power of Cloudflare’s global network to detect malicious domains using machine learning – Read our post on how we keep users and organizations safer with machine learning models that detect attackers attempting to evade detection with DNS tunneling and domain generation algorithms.
  • Announcing WAF Attack Score Lite and Security Analytics for business customers – We are making the machine learning-powered WAF Attack Score and the Security Analytics view available to our Business plan customers, to help detect and stop attacks before they are known.
  • Analyze any URL safely using the Cloudflare Radar URL Scanner – We have made Cloudflare Radar’s newest free tool, URL Scanner, available, providing an under-the-hood look at any webpage to make the Internet more transparent and secure for all.

Thursday

  • Post-quantum crypto should be free, so we’re including it for free, forever – One of our core beliefs is that privacy is a human right. To achieve that right, we are announcing that our implementations of post-quantum cryptography will be available to everyone, free of charge, forever.
  • No, AI did not break post-quantum cryptography – The recent news reports of AI cracking post-quantum cryptography are greatly exaggerated. In this blog, we take a deep dive into the world of side-channel attacks and how AI has been used for more than a decade already to aid them.
  • Super Bot Fight Mode is now configurable – We are making Super Bot Fight Mode even more configurable, with new flexibility that lets customers allow legitimate, automated traffic to access their sites.
  • How Cloudflare and IBM partner to help build a better Internet – IBM and Cloudflare continue to partner to help customers meet the unique security, performance, resiliency, and compliance needs of their own customers through exciting new product and service offerings.
  • Protect your key server with Keyless SSL and Cloudflare Tunnel integration – Customers can now use our Cloudflare Tunnel product to send traffic to the key server through a secure channel, without publicly exposing it to the rest of the Internet.

Friday

  • Stop Brand Impersonation with Cloudflare DMARC Management – Brand impersonation continues to be a big problem globally. Setting SPF, DKIM, and DMARC policies is a great way to reduce that risk and protect your domains from being used in spoofed emails. But maintaining a correct SPF configuration can be very costly and time consuming, and that’s why we’re launching Cloudflare DMARC Management.
  • How we built DMARC Management using Cloudflare Workers – At Cloudflare, we use the Workers platform and our product stack to build new services. Read how we built the new DMARC Management solution entirely on top of our APIs.
  • Cloudflare partners with KnowBe4 to equip organizations with real-time security coaching to avoid phishing attacks – Cloudflare’s cloud email security solution now integrates with KnowBe4, allowing mutual customers to offer real-time coaching to employees when a phishing campaign is detected by Cloudflare.
  • Introducing custom pages for Cloudflare Access – We are excited to announce new options to customize the user experience in Access, including customizable login, block, and application launcher pages.
  • Cloudflare Access is the fastest Zero Trust proxy – Cloudflare Access is 75% faster than Netskope and 50% faster than Zscaler, and our network is faster than other providers in 48% of last mile networks.

Saturday

  • One-click ISO 27001 certified deployment of Regional Services in the EU – Cloudflare announces the one-click ISO certified region, a super easy way for customers to limit where traffic is serviced to ISO 27001 certified data centers inside the European Union.
  • Account level Security Analytics and Security Events: better visibility and control over all account zones at once – All WAF customers will benefit from Account Security Analytics and Events, which bring new eyes to your whole account in the Cloudflare dashboard for holistic visibility. No matter how many zones you manage, they are all there!
  • Wildcard and multi-hostname support in Cloudflare Access – We are thrilled to announce full support for wildcard and multi-hostname application definitions in Cloudflare Access. Until now, Access had limitations that restricted it to a single hostname or a limited set of wildcards.

Watch our Security Week sessions on Cloudflare TV

Watch all of the Cloudflare TV segments here.

What’s next?

While that’s it for Security Week 2023, you all know by now that Innovation weeks never end for Cloudflare. Stay tuned for a week full of new developer tools coming soon, and a week dedicated to making the Internet faster later in the year.

Wildcard and multi-hostname support in Cloudflare Access

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/access-wildcard-and-multi-hostname/

We are thrilled to announce full support for wildcard and multi-hostname application definitions in Cloudflare Access. Until now, Access had limitations that restricted it to a single hostname or a limited set of wildcards. Before diving into these new features, let’s review Cloudflare Access and its previous limitations around application definition.

Access and hostnames

Cloudflare Access is the gateway to applications, enforcing security policies based on identity, location, network, and device health. Previously, Access applications were defined as a single hostname. A hostname is a unique identifier assigned to a device connected to the internet, commonly used to identify a website, application, or server. For instance, “www.example.com” is a hostname.

Upon successful completion of the security checks, a user is granted access to the protected hostname via a cookie in their browser, in the form of a JSON Web Token (JWT). This cookie’s session lasts for a specific period of time defined by the administrators, and any request made to the hostname must have this cookie present.
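If a service behind Access wants to validate that JWT itself, it can verify the token against the public keys that Access publishes on the team domain. Here is a minimal sketch in JavaScript using the open-source jose package; the team domain and the application AUD tag are placeholders you would replace with your own values:

import { createRemoteJWKSet, jwtVerify } from "jose";

// Placeholder team domain; Access publishes its signing keys here.
const TEAM_DOMAIN = "https://your-team.cloudflareaccess.com";
const JWKS = createRemoteJWKSet(new URL(`${TEAM_DOMAIN}/cdn-cgi/access/certs`));

export async function verifyAccessJWT(token) {
  // Checks the signature, the issuer, and the application's audience (AUD) tag.
  const { payload } = await jwtVerify(token, JWKS, {
    issuer: TEAM_DOMAIN,
    audience: "your-application-aud-tag", // placeholder
  });
  return payload; // identity claims for the authenticated user
}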

However, a single hostname application definition was not sufficient in certain situations, particularly for organizations with Single Page Applications and/or hundreds of identical hostnames.

Many Single Page Applications have two separate hostnames – one for the front-end user experience and the other for receiving API requests (e.g., app.example.com and api.example.com). This created a problem for Access customers because the front-end service could no longer communicate with the API as they did not share a session, leading to Access blocking the requests. Developers had to use different custom approaches to issue or share the Access JWT between different hostnames.

In many instances, organizations also deploy applications using a consistent naming convention, such as example.service123.example.com, especially for automatically provisioned applications. These applications often have the same set of security requirements. Previously, an Access administrator had to create a unique Access application per unique hostname, even if the services were functionally identical. This resulted in hundreds or thousands of Access applications needing to be created.

We aimed to make things easier for security teams, since easier configuration means a more coherent security architecture and, ultimately, more secure applications.

We introduced two significant changes to Cloudflare Access: Multi-Hostname Applications and Wildcard Support.

Multi-Hostname Applications

Multi-Hostname Applications allow teams to protect multiple subdomains with a single Access app, simplifying the process and reducing the need for multiple apps.

Access also takes care of JWT cookie issuance across all hostnames associated with a given application. This means that a front-end and API service on two different hostnames can communicate securely without any additional software changes.

Wildcards

A wildcard is a special character, in this case *, that defines an application pattern to match instead of requiring each unique application to be defined explicitly. Access applications can now be defined using a wildcard anywhere in the subdomain or path of a hostname. This allows an administrator to protect hundreds of applications with a single application policy.

In a scenario where an application requires additional security controls, Access is configured such that the most specific hostname definition wins (e.g., test.example.com will take precedence over *.example.com).
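To make the precedence rule concrete, here is a rough sketch of creating both applications through the API. The access/apps endpoint is public, but the account ID, token, and request bodies below are simplified placeholders; treat this as an illustration of the pattern rather than a complete integration:

// Simplified sketch: one broad wildcard application, plus a more
// specific application for the hostname that needs stricter controls.
const API = "https://api.cloudflare.com/client/v4";
const ACCOUNT_ID = "your-account-id"; // placeholder
const TOKEN = "your-api-token";       // placeholder

async function createAccessApp(app) {
  const res = await fetch(`${API}/accounts/${ACCOUNT_ID}/access/apps`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(app),
  });
  return res.json();
}

// One application protects every hostname under the domain...
await createAccessApp({ name: "All services", domain: "*.example.com" });

// ...while the most specific definition wins for the sensitive hostname.
await createAccessApp({ name: "Test service", domain: "test.example.com" });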

Give it a try!

Wildcard Applications are now available in open beta on the Cloudflare One Dashboard. Multi-Hostname support will enter an open beta in the coming weeks. For more information, please see our product documentation about multi-hostname applications and wildcards.

Account Security Analytics and Events: better visibility over all domains

Post Syndicated from Radwa Radwan original https://blog.cloudflare.com/account-security-analytics-and-events/

Cloudflare offers many security features, like WAF, Bot Management, DDoS protection, Zero Trust, and more. This suite of products is offered in the form of rules that give basic protection against attacks on common vulnerabilities. These rules are usually configured and monitored per domain, which is very simple when we talk about one, two, maybe three domains (or, in Cloudflare’s terms, “zones”).

The zone-level overview is sometimes not time efficient

If you’re a Cloudflare customer with tens, hundreds, or even thousands of domains under your control, you’d spend hours going through these domains one by one, monitoring and configuring all security features. We know that’s a pain, especially for our Enterprise customers. That’s why last September we announced the Account WAF, where you can create one security rule and have it applied to the configuration of all your zones at once!

Account WAF makes it easy to deploy security configurations. Following the same philosophy, we want to empower our customers by providing visibility over these configurations, or even better, visibility on all HTTP traffic.

Today, Cloudflare is offering holistic views on the security suite by launching Account Security Analytics and Account Security Events. Now, across all your domains, you can monitor traffic, get insights quicker, and save hours of your time.

How do customers get visibility over security traffic today?

Before today, customers had two options to view account-wide analytics or events: go into each zone individually and check its events and analytics dashboards, or use the zone GraphQL Analytics API or Logs to collect data and send it to their preferred storage provider, where they could aggregate it and plot graphs to get insights for all zones under their account, in case ready-made dashboards were not provided.
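As an illustration of that second path, a per-zone query against the GraphQL Analytics API looks roughly like the sketch below. The firewallEventsAdaptive dataset is part of the public schema, while the token, zone tag, and time range are placeholders:

const query = `
  query ($zoneTag: string, $since: Time, $until: Time) {
    viewer {
      zones(filter: { zoneTag: $zoneTag }) {
        firewallEventsAdaptive(
          filter: { datetime_geq: $since, datetime_leq: $until }
          limit: 100
        ) {
          action
          clientIP
          clientCountryName
        }
      }
    }
  }`;

// One request per zone: this is the repetition that the new
// account-level views remove.
const res = await fetch("https://api.cloudflare.com/client/v4/graphql", {
  method: "POST",
  headers: { Authorization: "Bearer your-api-token" }, // placeholder
  body: JSON.stringify({
    query,
    variables: {
      zoneTag: "your-zone-tag",
      since: "2023-03-01T00:00:00Z",
      until: "2023-03-02T00:00:00Z",
    },
  }),
});
const { data } = await res.json();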

Introducing Account Security Analytics and Events

The new views are security-focused, data-driven dashboards. Similar to the zone-level views, both include sampled logs and the top filters over many source dimensions (for example, IP address, host, country, and ASN).

The main difference between them is that Account Security Events focuses on the current configuration of every zone you have, which makes reviewing mitigated requests (rule matches) easy. This step is essential in distinguishing actual threats from false positives, as well as in maintaining an optimal security configuration.

Part of the power of Security Events is showing events “by service”, listing the security-related activity per security feature (for example, WAF, Firewall Rules, API Shield), and events “by action” (for example, allow, block, challenge).

On the other hand, the Account Security Analytics view shows a wider angle: all HTTP traffic on all zones under the account, whether this traffic is mitigated (i.e., the security configurations took an action to prevent the request from reaching your zone) or not. This is essential in fine-tuning your security configuration, finding possible false negatives, or onboarding new zones.

For ease of use, the view also provides quick filters, or insights, for the cases we think are interesting and worth exploring. Many of the view’s components are similar to the zone-level Security Analytics that we introduced recently.

To get to know the components and how they interact, let’s have a look at an actual example.

Analytics walk-through when investigating a spike in traffic

Traffic spikes happen to many customers’ accounts. To investigate the reason behind one and check what’s missing from your configurations, we recommend starting from Analytics, as it shows both mitigated and non-mitigated traffic; then, to review the mitigated requests and double-check for false positives, Security Events is the go-to place. That’s what we’ll do in this walk-through: start with Analytics, find a spike, and check whether we need further mitigation action.

Step 1: To navigate to the new views, sign into the Cloudflare dashboard and select the account you want to monitor. You will find Security Analytics and Security Events in the sidebar under Security Center.

Step 2: In the Analytics dashboard, if you see a big spike in traffic compared to usual levels, there’s a good chance it’s a layer 7 DDoS attack. Once you spot one, zoom into the time interval in the graph.

Zooming into a traffic spike on the timeseries scale

By expanding the top-N lists at the top of the Analytics page, we can make several observations:

We can confirm it’s a DDoS attack, as the peak of traffic does not come from a single IP address; it’s distributed over multiple source IPs. The “edge status code” indicates that a rate limiting rule is applied to this attack, and that it’s a GET method over HTTP/2.

Looking at the right-hand side of the analytics, we can see “Attack Analysis” indicating that these requests were clean of XSS, SQLi, and common RCE attacks, while Bot Analysis indicates automated traffic in the Bot Score distribution. These two products add another layer of intelligence to the investigation process: we can deduce that the attacker is sending clean requests in a high-volume attack from multiple IPs to take the web application down.

Step 3: For this attack we can see we have rules in place to mitigate it. With this visibility, we get the freedom to fine-tune our configurations for a better security posture, if needed. We can filter on this attack’s fingerprint, for instance: add a filter on the referer `www.example.com`, which is receiving the bulk of the attack requests, a filter on path equals `/`, the HTTP method, the query string, and a filter on automated traffic with Bot Score. We will then see the following:

[Screenshot: Security Analytics filtered on the attack fingerprint]

Step 4: Jumping to Security Events to zoom in on our mitigation actions, we see that in this case the spike’s fingerprint is mitigated using two actions: Managed Challenge and Block.

The mitigation happened in Firewall Rules and DDoS configurations; the exact rules are shown in the top events.

Who gets the new views?

Starting this week, all our customers on Enterprise plans will have access to Account Security Analytics and Security Events. We recommend pairing them with Account Bot Management, WAF Attack Score, and Account WAF to get full visibility and the complete set of actions.

What’s next?

The new Account Security Analytics and Events encompass metadata generated by the Cloudflare network for all domains in one place. In the upcoming period we will keep improving the experience to save our customers’ time. We’re currently in beta; log in to the dashboard, check out the views, and let us know your feedback.

One-click ISO 27001 certified deployment of Regional Services in the EU

Post Syndicated from Achiel van der Mandele original https://blog.cloudflare.com/one-click-iso-27001-deployment/

Today, we’re very happy to announce the general availability of a new region for Regional Services that allows you to limit your traffic to only ISO 27001 certified data centers inside the EU. This helps customers that have very strict requirements surrounding which data centers are allowed to decrypt and service traffic. Enabling this feature is a one-click operation right on the Cloudflare dashboard.

Regional Services – a recap

In 2020, we saw an increase in prospects asking about data localization. Specifically, increased regulatory pressure limited them from using vendors that operated at global scale. We launched Regional Services, a new way for customers to use the Cloudflare network. With Regional Services, we put customers back in control over which data centers are used to service traffic. Regional Services operates by limiting exactly which data centers are used to decrypt and service HTTPS traffic. For example, a customer may want to use only data centers inside the European Union to service traffic. Regional Services operates by leveraging our global network for DDoS protection but only decrypting traffic and applying Layer 7 products inside data centers that are located inside the European Union.

We later followed up with the Data Localization Suite and additional regions: India, Singapore and Japan.

With Regional Services, customers get the best of both worlds: we empower them to use our global network for volumetric DDoS protection whilst limiting where traffic is serviced. We do that by accepting the raw TCP connection at the closest data center but forwarding it on to a data center in-region for decryption. That means that only machines of the customer’s choosing actually see the raw HTTP request, which could contain sensitive data such as a customer’s bank account or medical information.

A new region and a new UI

Traditionally we’ve seen requests for data localization largely center around countries or geographic areas. Many types of regulations require companies to make promises about working only with vendors that are capable of restricting where their traffic is serviced geographically. Organizations can have many reasons for being limited in their choices, but they generally fall into two buckets: compliance and contractual commitments.

More recently, we are seeing more and more companies ask about security requirements. An often-asked question about security in IT is: how do you ensure that something is safe? For instance, for a data center you might be wondering how physical access is managed, or how often security policies are reviewed and updated. This is where certifications come in. A common certification in IT is the ISO 27001 certification:

Per ISO.org:

“ISO/IEC 27001 is the world’s best-known standard for information security management systems (ISMS) and their requirements. Additional best practice in data protection and cyber resilience are covered by more than a dozen standards in the ISO/IEC 27000 family. Together, they enable organizations of all sectors and sizes to manage the security of assets such as financial information, intellectual property, employee data and information entrusted by third parties.”

In short, ISO 27001 is a certification that a data center can achieve that ensures that they maintain a set of security standards to keep the data center secure. With the new Regional Services region, HTTPS traffic will only be decrypted in data centers that hold the ISO 27001 certification. Products such as WAF, Bot Management and Workers will only be applied in those relevant data centers.

The other update we’re excited to announce is a brand new User Interface for configuring the Data Localization Suite. The previous UI was limited in that customers had to preconfigure a region for an entire zone: you couldn’t mix and match regions. The new UI allows you to do just that: each individual hostname can be configured for a different region, directly on the DNS tab:

[Screenshot: per-hostname region selection on the DNS tab]
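If you prefer automation over the dashboard, the same per-hostname setup can be sketched against the regional hostnames API. The endpoint below exists for Regional Services, but treat the exact region key for the new ISO-certified region as an assumption; check the product documentation for the current value:

// Hedged sketch: pin one hostname to a region via the addressing API.
await fetch(
  "https://api.cloudflare.com/client/v4/zones/your-zone-id/addressing/regional_hostnames",
  {
    method: "POST",
    headers: {
      Authorization: "Bearer your-api-token", // placeholder
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      hostname: "app.example.com",
      region_key: "eu", // assumption: the ISO 27001 region may use a different key
    }),
  }
);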

Configuring a region for a particular hostname is now just a single click away. Changes take effect within seconds, making this the easiest way to configure data localization yet. For customers using the Metadata Boundary, we’ve also launched a self-serve UI that allows you to configure where logs flow:

[Screenshot: Metadata Boundary log-flow configuration]

We’re excited about these new updates that give customers more flexibility in choosing which of Cloudflare’s data centers to use as well as making it easier than ever to configure them. The new region and existing regions are now a one-click configuration option right from the dashboard. As always, we love getting feedback, especially on what new regions you’d like to see us add in the future. In the meantime, if you’re interested in using the Data Localization Suite, please reach out to your account team.

Cloudflare Access is the fastest Zero Trust proxy

Post Syndicated from David Tuber original https://blog.cloudflare.com/network-performance-update-security-week-2023/

During every Innovation Week, Cloudflare looks at our network’s performance versus our competitors. In past weeks, we’ve focused on how much faster we are compared to reverse proxies like Akamai, or platforms that sell serverless compute comparable to our Supercloud, like Fastly and AWS. This week, we’d like to provide an update on how we compare to other reverse proxies, as well as an update to our application services security product comparison against Zscaler and Netskope. This product is part of our Zero Trust platform, which helps secure applications and Internet experiences out to the public Internet, as opposed to our reverse proxy, which protects your websites from outside users.

In addition to our previous post showing how our Zero Trust platform compared against Zscaler, we have also previously shared extensive network benchmarking results for reverse proxies from 3,000 last mile networks around the world. It’s been a while since we’ve shown you our progress towards being #1 in every last mile network. We want to show that data, as well as revisit our series of tests comparing Cloudflare Access to Zscaler Private Access and Netskope Private Access. For our overall network tests, Cloudflare is #1 in 47% of the top 3,000 most reported networks. For our application security tests, Cloudflare is 50% faster than Zscaler and 75% faster than Netskope.

In this blog we’re going to talk about why performance matters for our products, do a deep dive on what we’re measuring to show that we’re faster, and we’ll talk about how we measured performance for each product.

Why does performance matter?

We talked about it in our last blog, but performance matters because it impacts your employees’ experience and their ability to get their job done. Whether it’s accessing services through access control products, connecting out to the public Internet through a Secure Web Gateway, or securing risky external sites through Remote Browser Isolation, all of these experiences need to be frictionless.

A quick summary: say Bob at Acme Corporation is connecting from Johannesburg out to Slack or Zoom to get some work done. If Acme’s Secure Web Gateway is located far away from Bob in London, then Bob’s traffic may go out of Johannesburg to London, and then back into Johannesburg to reach those services. If Bob tries to do something like a voice call on Slack or Zoom, his performance may be painfully slow while he waits for that round trip. Zoom and Slack both recommend low latency for optimal performance. That extra hop Bob has to take through his gateway could decrease throughput and increase his latency, giving Bob a bad experience.

As we’ve discussed before, if these products or experiences are slow, then something worse might happen than your users complaining: they may find ways to turn off the products or bypass them, which puts your company at risk. A Zero Trust product suite is completely ineffective if no one is using it because it’s slow. Ensuring Zero Trust is fast is critical to the effectiveness of a Zero Trust solution: employees won’t want to turn it off and put themselves at risk if they barely know it’s there at all.

Much like Zscaler, Netskope may outperform many older, antiquated solutions, but their network still fails to measure up to a highly performant, optimized network like Cloudflare’s. We’ve tested all of our Zero Trust products against Netskope equivalents, and we’re even bringing back Zscaler to show you how Zscaler compares against them as well. So let’s dig into the data and show you how and why we’re faster in a critical Zero Trust scenario, comparing Cloudflare Access to Zscaler Private Access and Netskope Private Access.

Cloudflare Access: the fastest Zero Trust proxy

Access control needs to be seamless and transparent to the user: the best compliment for a Zero Trust solution is employees barely notice it’s there. These services allow users to cache authentication information on the provider network, ensuring applications can be accessed securely and quickly to give users that seamless experience they want. So having a network that minimizes the number of logins required while also reducing the latency of your application requests will help keep your Internet experience snappy and reactive.

Cloudflare Access does all that 75% faster than Netskope and 50% faster than Zscaler, ensuring that no matter where you are in the world, you’ll get a fast, secure application experience:

[Chart: Zero Trust Access p95 response times, Cloudflare vs. Zscaler vs. Netskope]

Cloudflare measured application access across ourselves, Zscaler and Netskope from 300 different locations around the world connecting to 6 distinct application servers in Hong Kong, Toronto, Johannesburg, São Paulo, Phoenix, and Switzerland. In each of these locations, Cloudflare’s P95 response time was faster than Zscaler and Netskope. Let’s take a look at the data when the application is hosted in Toronto, an area where Zscaler and Netskope should do well as it’s in a heavily interconnected region: North America.

ZT Access – Response time (95th Percentile) – Toronto
95th Percentile Response (ms)
Cloudflare 2,182
Zscaler 4,071
Netskope 6,072

Cloudflare really stands out in regions with more diverse connectivity options, like South America or Asia Pacific, where Zscaler compares better to Netskope than it does to Cloudflare:

[Chart: Zero Trust Access p95 response times in regions with more diverse connectivity]

When we look at application servers hosted locally in South America, Cloudflare stands out:

ZT Access – Response time (95th Percentile) – South America
95th Percentile Response (ms)
Cloudflare 2,961
Zscaler 9,271
Netskope 8,223

Cloudflare’s network shines here, allowing us to ingress connections close to the users. You can see this by looking at the Connect times in South America:

ZT Access – Connect time (95th Percentile) – South America
95th Percentile Connect (ms)
Cloudflare 369
Zscaler 1,753
Netskope 1,160

Cloudflare’s network sets us apart here because we’re able to get users onto our network faster and find the optimal routes around the world back to the application host. We’re twice as fast as Zscaler and three times faster than Netskope because of this superpower. Across all the different tests, Cloudflare’s connect time is consistently faster across all 300 testing nodes.

In our last blog, we looked at two distinct scenarios that need to be measured individually when we compared Cloudflare and Zscaler. The first scenario is when a user logs into their application and has to authenticate. In this case, the Zero Trust Access service will direct the user to a login page, the user will authenticate, and then be redirected to their application.

This is called a new session, because no authentication information is cached or exists on the Access network. The second scenario is called an existing session, when a user has already been authenticated and that authentication information can be cached. This scenario is usually much faster, because it doesn’t require an extra call to an identity provider to complete.

We like to measure these scenarios separately, because when we look at 95th percentile values, we would almost always be looking at new sessions if we combined new and existing sessions together. But across both scenarios, Cloudflare is consistently faster in every region. Let’s go back and look at an application hosted in Toronto, where users connecting to us connect faster than Zscaler and Netskope for both new and existing sessions.

ZT Access – Response Time (95th Percentile) – Toronto
New Sessions (ms) Existing Sessions (ms)
Cloudflare 1,276 1,022
Zscaler 2,415 1,797
Netskope 5,741 1,822

You can see that new sessions are generally slower as expected, but Cloudflare’s network and optimized software stack provides a consistently fast user experience. In scenarios where end-to-end connectivity can be more challenging, Cloudflare stands out even more. Let’s take a look at users in Asia connecting through to an application in Hong Kong.

ZT Access – Response Time (95th Percentile) – Hong Kong
New Sessions (ms) Existing Sessions (ms)
Cloudflare 2,582 2,075
Zscaler 4,956 3,617
Netskope 5,139 3,902

One interesting thing that stands out here is that while Cloudflare’s network is hyper-optimized for performance, Zscaler more closely compares to Netskope on performance than they do to Cloudflare. Netskope also performs poorly on new sessions, which indicates that their service does not react well when users are establishing new sessions.

We like to separate these new and existing sessions because it’s important to look at similar request paths to do a proper comparison. For example, if we’re comparing a request via Zscaler on an existing session and a request via Cloudflare on a new session, we could see that Cloudflare was much slower than Zscaler because of the need to authenticate. So when we contracted a third party to design these tests, we made sure that they took that into account.

For these tests, Cloudflare configured five application instances hosted in Toronto, Los Angeles, Sao Paulo, and Hong Kong. Cloudflare then used 300 different Catchpoint nodes from around the world to mimic a browser login as follows:

  • User connects to the application from a browser mimicked by a Catchpoint instance – new session
  • User authenticates against their identity provider
  • User accesses resource
  • User refreshes the browser page and tries to access the same resource but with credentials already present – existing session

This allows us to look at Cloudflare versus all the other products for application performance for both new and existing sessions, and we’ve shown that we’re faster. As we’ve mentioned, a lot of that is due to our network and how we get close to our users. So now we’re going to talk about how we compare to other large networks and how we get close to you.

Network effects make the user experience better

Getting closer to users improves the last mile Round Trip Time (RTT). As we discussed in the Access comparison, having a low RTT improves customer performance because new and existing sessions don’t have to travel very far to get to Cloudflare’s Zero Trust network. Embedding ourselves in these last mile networks helps us get closer to our users, which doesn’t just help Zero Trust performance, it helps web performance and developer performance, as we’ve discussed in prior blogs.

To quantify network performance, we have to get enough data from around the world, across all manner of different networks, comparing ourselves with other providers. We used Real User Measurements (RUM) to fetch a 100kb file from several different providers. Users around the world report the performance of different providers. The more users who report the data, the higher fidelity the signal is. The goal is to provide an accurate picture of where different providers are faster, and more importantly, where Cloudflare can improve. You can read more about the methodology in the original Speed Week 2021 blog post here.
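Conceptually, each measurement looks something like the browser-side sketch below, using the Resource Timing API. The test URLs are placeholders, and in practice the test objects must send a Timing-Allow-Origin header for cross-origin timing details to be visible:

// Fetch a ~100 kB test object from each provider and report the timings.
const targets = {
  cloudflare: "https://cloudflare-test.example/100kb.bin", // placeholder
  other: "https://other-provider-test.example/100kb.bin",  // placeholder
};

async function measure(provider, url) {
  await fetch(url, { cache: "no-store" });
  const [t] = performance.getEntriesByName(url);
  return {
    provider,
    tcpConnectMs: t.connectEnd - t.connectStart, // the p95 metric discussed below
    totalMs: t.responseEnd - t.startTime,
  };
}

const results = await Promise.all(
  Object.entries(targets).map(([name, url]) => measure(name, url))
);
navigator.sendBeacon("/rum-report", JSON.stringify(results)); // placeholder endpoint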

We are constantly going through the process of figuring out why we were slow — and then improving. The challenges we faced were unique to each network and highlighted a variety of different issues that are prevalent on the Internet. We’re going to provide an overview of some of the efforts we use to improve our performance for our users.

But before we do, here are the results of our efforts since Developer Week 2022, the last time we showed off these numbers. Out of the top 3,000 networks in the world (by number of IPv4 addresses advertised), here’s a breakdown of the number of networks where each provider is number one in p95 TCP Connection Time, which represents the time it takes for a user on a given network to connect to the provider:

[Chart: number of networks where each provider ranks first in p95 TCP connect time, as of Developer Week 2022]

Here’s what those numbers look like as of this week, Security Week 2023:

[Chart: number of networks where each provider ranks first in p95 TCP connect time, as of Security Week 2023]

As you can see, Cloudflare has extended its lead in being faster in more networks, while other networks that previously were faster like Akamai and Fastly lost their lead. This translates to the effects we see on the World Map. Here’s what that world map looked like in Developer Week 2022:

[Map: where each provider is fastest by country, as of Developer Week 2022]

Here’s how that world map looks today during Security Week 2023:

[Map: where each provider is fastest by country, as of Security Week 2023]

As you can see, Cloudflare has gotten faster in Brazil, many countries in Africa including South Africa, Ethiopia, and Nigeria, as well as Indonesia in Asia, and Norway, Sweden, and the UK in Europe.

A lot of these countries benefited from the Edge Partner Program that we discussed in the Impact Week blog. A quick refresher: the Edge Partner Program encourages last mile ISPs to partner with Cloudflare to deploy Cloudflare locations that are embedded in the last mile ISP. This improves the last mile RTT and improves performance for things like Access. Since we last showed you this map, Cloudflare has deployed more partner locations in places like Nigeria and Saudi Arabia, which have improved performance for users in all scenarios. Efforts like the Edge Partner Program help improve not just the Zero Trust scenarios described above, but also the general web browsing experience for end users who visit websites protected by Cloudflare.

Next-generation performance in a Zero Trust world

In a non-Zero Trust world, you and your IT teams were the network operator — which gave you the ability to control performance. While this control was comforting, it was also a huge burden on your IT teams who had to manage middle mile connections between offices and resources. But in a Zero Trust world, your network is now… well, it’s the public Internet. This means less work for your teams — but a lot more responsibility on your Zero Trust provider, which has to manage performance for every single one of your users. The better your Zero Trust provider is at improving end-to-end performance, the better an experience your users will have and the less risk you expose yourself to. For real-time applications like authentication and secure web gateways, having a snappy user experience is critical.

A Zero Trust provider needs to not only secure your users on the public Internet, but it also needs to optimize the public Internet to make sure that your users continuously stay protected. Moving to Zero Trust doesn’t just reduce the need for corporate networks, it also allows user traffic to flow to resources more naturally. However, given your Zero Trust provider is going to be the gatekeeper for all your users and all your applications, performance is a critical aspect to evaluate to reduce friction for your users and reduce the likelihood that users will complain, be less productive, or turn the solutions off. Cloudflare is constantly improving our network to ensure that users always have the best experience, through programs like the Edge Partner Program and constantly improving our peering and interconnectivity. It’s this tireless effort that makes us the fastest Zero Trust provider.

Stop brand impersonation with Cloudflare DMARC Management

Post Syndicated from Joao Sousa Botto original https://blog.cloudflare.com/dmarc-management/


At the end of 2021 Cloudflare launched Security Center, a unified solution that brings together our suite of security products and unique Internet intelligence. It enables security teams to quickly identify potential security risks and threats to their organizations, map their attack surface and mitigate these risks with just a few clicks. While Security Center initially focused on application security, we are now adding crucial zero trust insights to further enhance its capabilities.

When your brand is loved and trusted, customers and prospects are looking forward to the emails you send them. Now picture them receiving an email from you: it has your brand, the subject is exciting, it has a link to register for something unique — how can they resist that opportunity?

But what if that email didn’t come from you? What if clicking on that link is a scam that takes them down the path of fraud or identity theft? And what if they think you did it? The truth is, even security-minded people occasionally fall for well-crafted spoof emails.

That poses a risk to your business and reputation. A risk you don’t want to take – no one does. Brand impersonation is a significant problem for organizations globally, and that’s why we’ve built DMARC Management – available in Beta today.

With DMARC Management you have full insight on who is sending emails on your behalf. You can one-click approve each source that is a legitimate sender for your domain, and then set your DMARC policy to reject any emails sent from unapproved clients.

When the survey platform your company uses is sending emails from your domain, there’s nothing to worry about – you configured it that way. But if an unknown mail service from a remote country is sending emails using your domain that can be quite scary, and something you’ll want to address. Let’s see how.

Anti-spoofing mechanisms

Sender Policy Framework (SPF), DomainKeys Identified Mail (DKIM) and Domain-based Message Authentication Reporting and Conformance (DMARC) are three common email authentication methods. Together, they help prevent spammers, phishers, and other unauthorized parties from sending emails on behalf of a domain they do not own.

SPF is a way for a domain to list all the servers the company sends emails from. Think of it like a publicly available employee directory that helps someone to confirm if an employee works for an organization. SPF records list all the IP addresses of all the servers that are allowed to send emails from the domain.
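For illustration, an SPF record is just a TXT record on the domain; the IP range and the included sender below are placeholders:

example.com.  IN TXT  "v=spf1 ip4:192.0.2.0/24 include:_spf.mailprovider.example -all"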

DKIM enables domain owners to automatically “sign” emails from their domain. Specifically, DKIM uses public key cryptography:

  1. A DKIM record stores the domain’s public key, and mail servers receiving emails from the domain can check this record to obtain the public key.
  2. The private key is kept secret by the sender, who signs the email’s header with this key.
  3. Mail servers receiving the email can verify that the sender’s private key was used by applying the public key. This also guarantees that the email was not tampered with while in transit.
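For illustration, the public key from step 1 is published in a TXT record under a selector; the selector name and the (truncated) key below are placeholders:

selector1._domainkey.example.com.  IN TXT  "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."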

DMARC tells a receiving email server what to do after evaluating the SPF and DKIM results. A domain’s DMARC policy can be set in a variety of ways — it can instruct mail servers to quarantine emails that fail SPF or DKIM (or both), to reject such emails, or to deliver them.

It’s not trivial to configure and maintain SPF, DKIM, and DMARC, though. If your configuration is too strict, legitimate emails will be dropped or marked as spam. If it’s too relaxed, your domain might be misused for email spoofing. The proof is that these authentication mechanisms (SPF / DKIM / DMARC) have existed for over 10 years, and still there are fewer than 6 million active DMARC records.

DMARC reports can help, and a full solution like DMARC Management reduces the burden of creating and maintaining a proper configuration.

DMARC reports

All DMARC-compliant mailbox providers support sending DMARC aggregated reports to an email address of your choice. Those reports list the services that have sent emails from your domain and the percentage of messages that passed DMARC, SPF and DKIM. They are extremely important because they give administrators the information they need to decide how to adjust their DMARC policies — for instance, that’s how administrators know if their legitimate emails are failing SPF and DKIM, or if a spammer is trying to send illegitimate emails.

But beware, you probably don’t want to send DMARC reports to a human-monitored email address, as these come in fast and furious from virtually every email provider your organization sends messages to, and are delivered in XML format. Typically, administrators set up reports to be sent to a service like our DMARC Management, that boils them down to a more digestible form. Note: These reports do not contain personal identifiable information (PII).
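For context, a single record inside one of those raw XML reports (the aggregate format defined in RFC 7489) looks roughly like this, with illustrative values:

<record>
  <row>
    <source_ip>192.0.2.10</source_ip>
    <count>42</count>
    <policy_evaluated>
      <disposition>none</disposition>
      <dkim>pass</dkim>
      <spf>fail</spf>
    </policy_evaluated>
  </row>
  <identifiers>
    <header_from>example.com</header_from>
  </identifiers>
</record>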

DMARC Management automatically creates an email address for those reports to be sent to, and adds the corresponding RUA record to your Cloudflare DNS to announce to mailbox providers where to send reports to. And yes, if you’re curious, these email addresses are being created using Cloudflare Email Routing.

Note: Today, Cloudflare DNS is a requirement for DMARC Management. Cloudflare Area 1 customers will soon also be able to see DMARC reports even if they’re using third-party DNS services.

As reports are received in this dedicated email address, they are processed by a Worker that extracts the relevant data, parses it and sends it over to our analytics solution. And you guessed again, that’s implemented using Email Workers. You can read more about the technical implementation here.

Taking action

Now that reports are coming in, you can review the data and take action.

Note: It may take up to 24 hours for mailbox providers to start sending reports and for these analytics to be available to you.

At the top of DMARC Management you have an at-a-glance view of the outbound security configuration for your domain, more specifically DMARC, DKIM, and SPF. DMARC Management will soon start reporting on inbound email security as well, which includes STARTTLS, MTA-STS, DANE, and TLS reporting.

The middle section shows the email volume over time, with individual lines showing those that pass DMARC and those that fail.

Below, you have additional details that include the number of email messages sent by each source (per the DMARC reports), and the corresponding DMARC, SPF and DKIM statistics. You can approve (that is, include in SPF) any of these sources by clicking on “…”, and you can easily spot applications that may not have DKIM correctly configured.

Clicking on any source gives you the same DMARC, SPF and DKIM statistics per IP address of that source. This is how you identify if there’s an additional IP address you might need to include in your SPF record, for example.

The ones that fail are the ones you’ll want to take action on, as they will need to either be approved (which technically means being included in the SPF record) if legitimate, or stay unapproved and be rejected by the receiving server when the DMARC policy is configured with p=reject.

Getting to a DMARC reject policy is the goal, but you don’t want to apply such a restrictive policy until you have high confidence that all legitimate sending services are accounted for in SPF (and DKIM, if appropriate). That may take a few weeks, depending on the number of services you have sending messages from your domain, but with DMARC Management you will quickly grasp when you’re ready to go.
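A common rollout path, with illustrative records, is to start in monitoring mode and only tighten the policy as the reports come back clean:

v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com                (monitor only)
v=DMARC1; p=quarantine; pct=25; rua=mailto:dmarc-reports@example.com  (quarantine a sample)
v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com              (full enforcement)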

What else is needed

Once you have approved all your authorized email senders (sources) and configured DMARC to quarantine or reject, you should be confident that your brand and organization are much safer. From then on, keeping an eye on your approved sources list is a very lightweight operation that doesn’t take more than a few minutes per month from your team. Ideally, when new applications that send emails from your domain are deployed in your company, you would proactively include the corresponding IP addresses in your SPF record.

But even if you don’t, you will find notices of new unapproved senders in your Security Center, under the Security Insights tab, alongside other important security issues you can review and manage.

Or you can check the unapproved list on DMARC Management every few weeks.

Whenever you see a legitimate sender source show up as unapproved, you know what to do — click “…” and mark them as approved!

What’s coming next

DMARC Management takes email security to the next level, and this is only the beginning.

We’re excited to demonstrate our investments in features that provide customers even more insight into their security. Up next we’ll be connecting security analytics from Cloudflare’s Cloud Access Security Broker (CASB) into the Security Center.

This product integration will provide customers a way to understand the status of their wider SaaS security at a glance. By surfacing the makeup of CASB Findings (or security issues identified in popular SaaS apps) by severity, health of the SaaS integration, and the number of hidden issues, IT and security administrators will have a way to understand the status of their wider security surface area from a single source.

Stay tuned for more news on CASB in Security Center. In the meantime you can join the waitlist for DMARC Management beta for free today and, if you haven’t yet, we recommend you also check out Cloudflare Area 1 and request a Phishing Risk Assessment to block phishing, spoof and spam emails from coming into your environment.

How we built DMARC Management using Cloudflare Workers

Post Syndicated from André Cruz original https://blog.cloudflare.com/how-we-built-dmarc-management/

What are DMARC reports

DMARC stands for Domain-based Message Authentication, Reporting, and Conformance. It’s an email authentication protocol that helps protect against email phishing and spoofing.

When an email is sent, DMARC allows the domain owner to set up a DNS record that specifies which authentication methods, such as SPF (Sender Policy Framework) and DKIM (DomainKeys Identified Mail), are used to verify the email’s authenticity. When the email fails these authentication checks, DMARC instructs the recipient’s email provider on how to handle the message, either by quarantining it or rejecting it outright.

DMARC has become increasingly important in today’s Internet, where email phishing and spoofing attacks are becoming more sophisticated and prevalent. By implementing DMARC, domain owners can protect their brand and their customers from the negative impacts of these attacks, including loss of trust, reputation damage, and financial loss.

In addition to protecting against phishing and spoofing attacks, DMARC also provides reporting capabilities. Domain owners can receive reports on email authentication activity, including which messages passed and failed DMARC checks, as well as where these messages originated from.

DMARC management involves the configuration and maintenance of DMARC policies for a domain. Effective DMARC management requires ongoing monitoring and analysis of email authentication activity, as well as the ability to make adjustments and updates to DMARC policies as needed.

Some key components of effective DMARC management include:

  • Setting up DMARC policies: This involves configuring the domain’s DMARC record to specify the appropriate authentication methods and policies for handling messages that fail authentication checks. Here’s what a DMARC DNS record looks like:

v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com

This specifies that we are using DMARC version 1, that our policy is to reject emails if they fail the DMARC checks, and the email address to which providers should send DMARC reports.

  • Monitoring email authentication activity: DMARC reports are an important tool for domain owners to ensure email security and deliverability, as well as compliance with industry standards and regulations. By regularly monitoring and analyzing DMARC reports, domain owners can identify email threats, optimize email campaigns, and improve overall email authentication.
  • Making adjustments as needed: Based on analysis of DMARC reports, domain owners may need to make adjustments to DMARC policies or authentication methods to ensure that email messages are properly authenticated and protected from phishing and spoofing attacks.
  • Working with email providers and third-party vendors: Effective DMARC management may require collaboration with email providers and third-party vendors to ensure that DMARC policies are being properly implemented and enforced.
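Returning to the record shown above, here is a tiny illustrative parser of its tag-value syntax. This is our own example, not product code, and the address in the usage comment is a placeholder:

function parseDmarcRecord(txt) {
  // Split "v=DMARC1; p=reject; ..." into its tag-value pairs.
  const tags = {};
  for (const part of txt.split(";")) {
    const [key, ...rest] = part.trim().split("=");
    if (key) tags[key] = rest.join("="); // values like "mailto:..." may contain "="
  }
  return tags;
}

// parseDmarcRecord("v=DMARC1; p=reject; rua=mailto:dmarc@example.com")
// => { v: "DMARC1", p: "reject", rua: "mailto:dmarc@example.com" }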

Today we launched DMARC management. This is how we built it.

How we built it

As a leading provider of cloud-based security and performance solutions, we at Cloudflare take a specific approach to test our products. We “dogfood” our own tools and services, which means we use them to run our business. This helps us identify any issues or bugs before they affect our customers.

We use our own products internally, such as Cloudflare Workers, a serverless platform that allows developers to run their code on our global network. Since its launch in 2017, the Workers ecosystem has grown significantly. Today, there are thousands of developers building and deploying applications on the platform. The power of the Workers ecosystem lies in its ability to enable developers to build sophisticated applications that were previously impossible or impractical to run so close to clients. Workers can be used to build APIs, generate dynamic content, optimize images, perform real-time processing, and much more. The possibilities are virtually endless. We have used Workers to power services like Radar 2.0 and software packages like Wildebeest.

Recently our Email Routing product joined forces with Workers, enabling the processing of incoming emails via Workers scripts. As the documentation states: “With Email Workers you can leverage the power of Cloudflare Workers to implement any logic you need to process your emails and create complex rules. These rules determine what happens when you receive an email.” Rules and verified addresses can all be configured via our API.

Here’s what a simple Email Worker looks like:

export default {
  async email(message, env, ctx) {
    // Only accept mail from senders on the allow list.
    const allowList = ["[email protected]", "[email protected]"];
    if (allowList.indexOf(message.headers.get("from")) == -1) {
      message.setReject("Address not allowed");
    } else {
      // Forward accepted mail to a verified destination address.
      await message.forward("inbox@corp");
    }
  }
}

Pretty straightforward, right?

With the ability to programmatically process incoming emails in place, it seemed like the perfect way to handle incoming DMARC report emails in a scalable and efficient manner, letting Email Routing and Workers do the heavy lifting of receiving an unbounded number of emails from across the globe. A high-level description of what we needed is:

  1. Receive email and extract report
  2. Publish relevant details to analytics platform
  3. Store the raw report

Email Workers enable us to do #1 easily. We just need to create a worker with an email() handler. This handler will receive the SMTP envelope elements, a pre-parsed version of the email headers, and a stream to read the entire raw email.

For #2 we can again look to the Workers platform, where we find the Workers Analytics Engine. We just need to define an appropriate schema, which depends both on what’s present in the reports and on the queries we plan to run later. Afterwards we can query the data using either the GraphQL or SQL API.
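Once data points are written, they can later be retrieved with a plain HTTP request against the SQL API. Here is a rough sketch of ours; the dataset name, account ID, API token, and column layout are placeholders that depend on your own schema:

const resp = await fetch(
  "https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/analytics_engine/sql",
  {
    method: "POST",
    headers: { Authorization: "Bearer <API_TOKEN>" },
    // Count reported messages per reporting organization over the last week.
    body: `SELECT blob1 AS org_name, SUM(double1) AS messages
           FROM dmarc_reports
           WHERE timestamp > NOW() - INTERVAL '7' DAY
           GROUP BY org_name`,
  }
);
console.log(await resp.json());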

For #3 we don’t need to look further than our R2 object storage. It is trivial to access R2 from a Worker. After extracting the reports from the email we will store them in R2 for posterity.

We built this as a managed service that you can enable on your zone, and added a dashboard interface for convenience, but in reality all the tools are available for you to deploy your own DMARC reports processor on top of Cloudflare Workers, in your own account, without having to worry about servers, scalability or performance.

Architecture

[Architecture diagram]

Email Workers is a feature of our Email Routing product. The Email Routing component runs in all our nodes, so any one of them is able to process incoming mail, which is important because we announce the Email ingress BGP prefix from all our datacenters. Sending emails to an Email Worker is as easy as setting a rule in the Email Routing dashboard.


When the Email Routing component receives an email that matches a rule to be delivered to a Worker, it will contact our internal version of the recently open-sourced workerd runtime, which also runs on all nodes. The RPC interface that governs this interaction is defined in a Cap’n Proto schema, and it allows the body of the email to be streamed to Edgeworker as it’s read. If the worker script decides to forward this email, Edgeworker will contact Email Routing using a capability sent in the original request.

jsg::Promise<void> ForwardableEmailMessage::forward(kj::String rcptTo, jsg::Optional<jsg::Ref<Headers>> maybeHeaders) {
  auto req = emailFwdr->forwardEmailRequest();
  req.setRcptTo(rcptTo);

  auto sendP = req.send().then(
      [](capnp::Response<rpc::EmailMetadata::EmailFwdr::ForwardEmailResults> res) mutable {
    auto result = res.getResponse().getResult();
    JSG_REQUIRE(result.isOk(), Error, result.getError());
  });
  auto& context = IoContext::current();
  return context.awaitIo(kj::mv(sendP));
}

In the context of DMARC reports, this is how we handle incoming emails:

  1. Fetch the recipient of the email being processed; this is the RUA that was used. RUA is a DMARC configuration parameter that indicates where aggregate DMARC processing feedback pertaining to a certain domain should be reported. This recipient can be found in the “to” attribute of the message.
const ruaID = message.to
  2. Since we handle DMARC reports for an unbounded number of domains, we use Workers KV to store some information about each one, keyed on the RUA. This also lets us know if we should be receiving these reports.
const accountInfoRaw = await env.KV_DMARC_REPORTS.get(`dmarc:${ruaID}`)
  3. At this point, we want to read the entire email into an arrayBuffer in order to parse it. Depending on the size of the report we may run into the limits of the free Workers plan. If this happens, we recommend that you switch to the Workers Unbound resource model, which does not have this issue.
const rawEmail = new Response(message.raw)
const arrayBuffer = await rawEmail.arrayBuffer()
  4. Parsing the raw email involves, among other things, parsing its MIME parts. There are multiple libraries available that allow one to do this. For example, you could use postal-mime:
const parser = new PostalMime.default()
const email = await parser.parse(arrayBuffer)
  5. Having parsed the email, we now have access to its attachments. These attachments are the DMARC reports themselves, and they can be compressed. The first thing we want to do is store them in their compressed form in R2 for long-term storage. They can be useful later on for re-processing or investigating interesting reports. Doing this is as simple as calling put() on the R2 binding. To facilitate retrieval later, we recommend that you spread the report files across directories based on the current time.
await env.R2_DMARC_REPORTS.put(
    `${date.getUTCFullYear()}/${date.getUTCMonth() + 1}/${attachment.filename}`,
    attachment.content
  )

  6. We now need to look at the attachment’s MIME type. The raw form of DMARC reports is XML, but they can be compressed, in which case we need to decompress them first. DMARC report files can use multiple compression algorithms; we use the MIME type to know which one to apply. For Zlib-compressed reports pako can be used, while for ZIP-compressed reports unzipit is a good choice (see the sketch after this list).

  7. Having obtained the raw XML form of the report, fast-xml-parser has worked well for us in parsing it. Here’s how the DMARC report XML looks:

<feedback>
  <report_metadata>
    <org_name>example.com</org_name>
    <email>[email protected]</email>
    <extra_contact_info>http://example.com/dmarc/support</extra_contact_info>
    <report_id>9391651994964116463</report_id>
    <date_range>
      <begin>1335521200</begin>
      <end>1335652599</end>
    </date_range>
  </report_metadata>
  <policy_published>
    <domain>business.example</domain>
    <adkim>r</adkim>
    <aspf>r</aspf>
    <p>none</p>
    <sp>none</sp>
    <pct>100</pct>
  </policy_published>
  <record>
    <row>
      <source_ip>192.0.2.1</source_ip>
      <count>2</count>
      <policy_evaluated>
        <disposition>none</disposition>
        <dkim>fail</dkim>
        <spf>pass</spf>
      </policy_evaluated>
    </row>
    <identifiers>
      <header_from>business.example</header_from>
    </identifiers>
    <auth_results>
      <dkim>
        <domain>business.example</domain>
        <result>fail</result>
        <human_result></human_result>
      </dkim>
      <spf>
        <domain>business.example</domain>
        <result>pass</result>
      </spf>
    </auth_results>
  </record>
</feedback>
  8. We now have all the data in the report at our fingertips. What we do from here on depends a lot on how we want to present the data. For us, the goal was to display meaningful data extracted from the reports in our dashboard. Therefore, we needed an analytics platform to which we could push the enriched data. Enter Workers Analytics Engine. The Analytics Engine is perfect for this task, since it allows us to send data to it from a Worker and exposes a GraphQL API to interact with the data afterwards. This is how we obtain the data shown in our dashboard.
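To make steps 6 through 8 concrete, here is a rough sketch of how an attachment could be decompressed and recorded. This is our own illustration rather than the production worker: the binding name DMARC_ANALYTICS is a placeholder, and which report fields you record depends on your schema.

import { ungzip } from "pako";
import { unzipRaw } from "unzipit";
import { XMLParser } from "fast-xml-parser";

async function processAttachment(attachment, env) {
  // Pick a decompression method based on the attachment's MIME type.
  let xmlBytes;
  if (attachment.mimeType === "application/zip") {
    const { entries } = await unzipRaw(attachment.content);
    xmlBytes = await entries[0].arrayBuffer();
  } else {
    // e.g. application/gzip
    xmlBytes = ungzip(new Uint8Array(attachment.content));
  }

  // Parse the XML and write one data point per <record> element.
  const report = new XMLParser().parse(new TextDecoder().decode(xmlBytes));
  for (const record of [].concat(report.feedback.record)) {
    env.DMARC_ANALYTICS.writeDataPoint({
      indexes: [record.identifiers.header_from],
      blobs: [
        String(record.row.source_ip),
        record.row.policy_evaluated.disposition,
      ],
      doubles: [Number(record.row.count)],
    });
  }
}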

In the future, we are also considering integrating Queues into the workflow to process reports asynchronously, so that the sending client doesn’t have to wait for processing to complete.

We managed to implement this project end-to-end relying only on the Workers infrastructure, proving that it’s possible, and advantageous, to build non-trivial apps without having to worry about scalability, performance, storage and security issues.

Open sourcing

As we mentioned before, we built a managed service that you can enable and use, and we will manage it for you. But, everything we did can also be deployed by you, in your account, so that you can manage your own DMARC reports. It’s easy, and free. To help you with that, we are releasing an open-source version of a Worker that processes DMARC reports in the way described above: https://github.com/cloudflare/dmarc-email-worker

If you don’t have a dashboard in which to show the data, you can also query the Analytics Engine from a Worker. Or, if you want to store the reports in a relational database, there’s D1 to the rescue. The possibilities are endless and we are excited to find out what you’ll build with these tools.

Please contribute, make your own, we’ll be listening.

Final words

We hope that this post has furthered your understanding of the Workers platform. Today Cloudflare takes advantage of this platform to build most of our services, and we think you should too.

Feel free to contribute to our open-source version and show us what you can do with it.

The Email Routing team is also working on expanding the functionality of the Email Workers API, but that deserves its own blog post soon.

Cloudflare partners with KnowBe4 to equip organizations with real-time security coaching to avoid phishing attacks

Post Syndicated from Ayush Kumar original https://blog.cloudflare.com/knowbe4-emailsecurity-integration/

Cloudflare partners with KnowBe4 to equip organizations with real-time security coaching to avoid phishing attacks


Today, we are very excited to announce that Cloudflare’s cloud email security solution, Area 1, now integrates with KnowBe4, a leading security awareness training and simulated phishing platform. This integration allows mutual customers to offer real-time coaching to their employees when a phishing campaign is detected by Cloudflare’s email security solution.

We are all aware that phishing attacks often use email as a vector to deliver the fraudulent message. Cybercriminals use a range of tactics, such as posing as a trustworthy organization, using urgent or threatening language, or creating a sense of urgency to entice the recipient to click on a link or download an attachment.

Despite the increasing sophistication of these attacks and the solutions to stop them, human error remains the weakest link in this chain of events. This is because humans can be easily manipulated or deceived, especially when they are distracted or rushed. For example, an employee might accidentally click on a link in an email that looks legitimate but is actually a phishing attempt, or they might enter their password into a fake login page without realizing it. According to the 2021 Verizon Data Breach Investigations Report, phishing was the most common form of social engineering attack, accounting for 36% of all breaches. The report also noted that 85% of all breaches involved a human element, such as human error or social engineering.

Therefore, it is essential to educate and train individuals on how to recognize and avoid phishing attacks. This includes raising awareness of common phishing tactics and training individuals to scrutinize emails carefully before clicking on any links or downloading attachments.

Area 1 integrates with KnowBe4

Our integration seamlessly connects Cloudflare’s advanced email security capabilities with KnowBe4’s security awareness training platform, KMSAT, and its real-time coaching product, SecurityCoach. This means that organizations using both products can now benefit from an added layer of security that detects and prevents email-based threats in real time while also training employees to recognize and avoid such threats.

Organizations can offer real-time security coaching to their employees whenever our email security solution detects one of four types of events: malicious attachments, malicious links, spoofed emails, and suspicious emails. IT or security professionals can configure their real-time coaching campaigns to immediately deliver relevant training to their users related to a detected event.

“KnowBe4 is proud to partner with Cloudflare to provide a seamless integration with our new SecurityCoach product, which aims to deliver real-time security coaching and advice to help end users enhance their cybersecurity knowledge and strengthen their role in contributing to a strong security culture. KnowBe4 is actively working with Cloudflare to provide an API-based integration to connect our platform with systems that IT/security professionals already utilize, making rolling out new products to their teams an easy and unified process.”
Stu Sjouwerman, CEO, KnowBe4

By using the integration, organizations can ensure that their employees are not only protected by advanced security technology that detects and blocks malicious emails, but are also educated on how to identify and avoid these threats. This has been one of the features our customers request most often, and we have made it simple to implement.

How it works

Create private key and public key in the Area 1 dashboard

Before you can set up this integration in your KnowBe4 (KMSAT) console, you will need to create a private key and public key with Cloudflare.

  • Log in to your Cloudflare Area 1 email security console as an admin.
  • Click the gear icon in the top-right corner of the page, and then navigate to the Service Accounts tab.
  • Click + Add Service Account.
  • In the NAME field, enter a name for your new service account.
  • Click + Create Service Account.
  • In the pop-up window that opens, copy and save the private key somewhere that you can easily access. You will need this key to complete the setup process in the Set Up the Integration in your KnowBe4 (KMSAT) Console section below.

Set up the integration in your KnowBe4 (KMSAT) Console

Once you have created a private key and public key in your Cloudflare Area 1 email security console, you can set up the integration in your KMSAT console. To register Cloudflare Area 1 email security with SecurityCoach in your KMSAT console, follow the steps below:

  • Log in to your KMSAT console and navigate to SecurityCoach > Setup > Security Vendor Integrations.
  • Locate Cloudflare Area 1 Email Security and click Configure.
  • Enter the public key and private key that you saved in the Create private key and public key in the Area 1 dashboard section above.
  • Click Authorize. Once you’ve successfully authorized this integration, you can manage detection rules for Cloudflare Area 1 on the Detection Rules subtab of SecurityCoach.

SecurityCoach in action

Now that SecurityCoach is set up, users within your organization will receive messages if Area 1 finds that a malicious email was sent to them. An example can be seen below.

[Example SecurityCoach coaching message delivered to a targeted user]

This message not only alerts users to be more vigilant about the emails they receive, since they now know they are being actively targeted, but also provides them with follow-up steps they can take to ensure their account is as safe as possible. The image and text that show up in the message can be configured from the KnowBe4 console, giving customers full flexibility in what they communicate to their employees.


What’s next

We’ll be expanding this integration with KnowBe4 to our other Zero Trust products in the coming months. If you have any questions or feedback on this integration, please contact your account team at Cloudflare. We’re excited to continue closely working with technology partners to expand existing and create new integrations that help customers on their Zero Trust journey.

Introducing custom pages for Cloudflare Access

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/access-custom-pages/

Introducing custom pages for Cloudflare Access


Over 10,000 organizations rely on Cloudflare Access to connect their employees, partners, and contractors to the applications they need. From small teams on our free plan to some of the world’s largest enterprises, Cloudflare Access is the Zero Trust front door to how they work together. As more users start their day with Cloudflare Access, we’re excited to announce new options to customize how those users experience our industry-leading Zero Trust solution: customizable Cloudflare Access pages, including the login page, block pages, and the application launcher.

Where does Cloudflare Access fit in a user’s workflow today?

Most teams we work with start their Zero Trust journey by replacing their existing virtual private network (VPN) with Cloudflare Access. The reasons vary. For some teams, their existing VPN allows too much trust by default and Access allows them to quickly build segmentation based on identity, device posture, and other factors. Other organizations deploy Cloudflare Access because they are exhausted from trying to maintain their VPN and dealing with end user complaints.

When those administrators begin setting up Cloudflare Access, they connect the resources they need to protect to Cloudflare’s network. They can deploy a Cloudflare Tunnel to create a secure, outbound-only, connection to Cloudflare, rely on our existing DNS infrastructure, or even force SaaS application logins through our network. Administrators can then layer on granular Zero Trust rules to determine who can reach a given resource.

To the end user, Cloudflare Access is just a security guard checking for identity, device posture, or other signals at every door. In most cases they should never need to think about us. Instead, they just enjoy a much faster experience with less hassle. When they attempt to reach an application or service, we check each and every request and connection for proof that they should be allowed.

When they do notice Cloudflare Access, they interact with screens that help them make a decision about what they need. In these cases we don’t just want to be a silent security guard – we want to be a helpful tour guide.


Cloudflare Access supports the ability for administrators to configure multiple identity providers simultaneously. Customers love this capability when they work with contractors or acquired teams. This can also be configured on a per-application basis. When users arrive, though, we need to know which direction to send them for their initial authentication. We present this selection screen, along with guiding text provided by the administrator, to the user.


When teams move their applications behind Cloudflare Access, we become the front door to how they work. We use that position to present the user with all of the applications they can reach in a portal that allows them to click on any tile to launch the application.


In some cases, the user lacks sufficient permissions to reach the destination. Even though they are being blocked, we still want to reduce confusion. Instead of just presenting a generic browser error or dropping the connection, we display a block page.

Why do these need to change?

More and more large enterprises are starting to adopt a Zero Trust VPN replacement and they’re selecting Cloudflare to do so. Unlike small teams that can send a short Slack message about an upcoming change to their employee workflow, some of the CIOs and CSOs that deploy Access need to anticipate questions and curiosity from tens of thousands of employees and contractors.

Those users do not know what Cloudflare is and we don’t need them to. Instead, we just want to securely connect them to the tools they need. To solve that, we need to give IT administrators more space to communicate and we need to get our branding out of the way.

What will I be able to customize?

Following the release of Access page customization, administrators will be able to customize the login screen, access-denied errors, and the Access Application Launcher.

What’s next?

We are building page customization in Cloudflare Access following the existing template our reverse proxy customers can use to modify pages presented to end users. We’re excited to bring that standard experience to these workflows as well.

Even though we’re building on that pattern, we still want your feedback. Ahead of a closed beta we are looking for customers who want to provide input as we fine tune this new configuration option. Interested in helping shape this work? Let us know here.

Post-quantum crypto should be free, so we’re including it for free, forever

Post Syndicated from Wesley Evans original https://blog.cloudflare.com/post-quantum-crypto-should-be-free/

Post-quantum crypto should be free, so we’re including it for free, forever


At Cloudflare, helping to build a better Internet is not just a catchy saying. We are committed to the long-term process of standards development. We love the work of pushing the fundamental technology of the Internet forward in ways that are accessible to everyone. Today we are adding even more substance to that commitment. One of our core beliefs is that privacy is a human right. We believe that to achieve that right the most advanced cryptography needs to be available to everyone, free of charge, forever. Today, we are announcing that our implementations of post-quantum cryptography will meet that standard: available to everyone, and included free of charge, forever.

We have a proud history of taking paid encryption products and launching them to the Internet at scale for free, even at the cost of short- and long-term revenue, because it’s the right thing to do. In 2014, we made SSL free for every Cloudflare customer with Universal SSL. As we make our implementations of post-quantum cryptography free forever today, we do it in the spirit of that first major announcement:

“Having cutting-edge encryption may not seem important to a small blog, but it is critical to advancing the encrypted-by-default future of the Internet. Every byte, however seemingly mundane, that flows encrypted across the Internet makes it more difficult for those who wish to intercept, throttle, or censor the web. In other words, ensuring your personal blog is available over HTTPS makes it more likely that a human rights organization or social media service or independent journalist will be accessible around the world. Together we can do great things.”

We hope that others will follow us in making their implementations of PQC free as well so that we can create a secure and private Internet without a “quantum” up-charge.

The Internet has matured since the 1990s and the launch of SSL. What was once an experimental frontier has turned into the underlying fabric of modern society. It runs in our most critical infrastructure like power systems, hospitals, airports, and banks. We trust it with our most precious memories. We trust it with our secrets. That’s why the Internet needs to be private by default. It needs to be secure by default. It’s why we’re committed to ensuring that anyone and everyone can achieve post-quantum security for free, as well as start deploying it at scale today.

Our work on post-quantum crypto is driven by the thesis that quantum computers that can break conventional cryptography create a similar problem to the Year 2000 bug. We know there is going to be a problem in the future that could have catastrophic consequences for users, businesses, and even nation states. The difference this time is we don’t know the date and time that this break in the paradigm of how computers operate will occur. We need to prepare today to be ready for this threat.

To that end we have been preparing for this transition since 2018. At that time we were concerned about the implementation problems other large protocol transitions, like the move to TLS 1.3, had caused our customers and wanted to get ahead of it. Cloudflare Research over the last few years has become a leader and champion of the idea that PQC security wasn’t an afterthought for tomorrow but a real problem that needed to be solved today. We have collaborated with industry partners like Google and Mozilla, contributed to development through participation in the IETF, and even launched an open source experimental cryptography suite to help move the needle. We have tried hard to work with everyone that wanted to be a part of the process and show our work along the way.

As we have worked with our partners in both industry and academia to help prepare us and the Internet for a post-quantum future, we have become dismayed by an emerging trend. There are a growing number of vendors out there that want to cash in on the legitimate fear that nervous executives, privacy advocates, and government leaders have about quantum computing breaking traditional encryption. These vendors offer vague solutions based on unproven technologies like “Quantum Key Distribution” or “Post Quantum Security” libraries that package non-standard algorithms that haven’t been through public review with exorbitant price tags like RSA did in the 1990s. They often love to throw around phrases like “AI” and “Post Quantum” without really showing their work on how any of their systems actually function. Security and privacy are table stakes in the modern Internet, and no one should be charged just to get the baseline protection needed in our contemporary world.


Launch your PQC transition today

Testing and adopting post-quantum cryptography in modern networks doesn’t have to be hard! In fact, Cloudflare customers can test PQC in their systems today, as we describe later in this post.

Currently, we support Kyber for key agreement on any traffic that uses TLS 1.3, including HTTP/3. (If you want a deep dive on our implementation, check out our blog from last fall announcing the beta.) To help you test your traffic to Cloudflare domains with these new key agreement methods, we have open-sourced forks for BoringSSL, Go and quic-go. For BoringSSL and Go, check out the sample code here.

If you use Tunnels with cloudflared then upgrading to PQC is super simple. Make sure you’re on at least version 2022.9.1 and simply run cloudflared --post-quantum.

After testing out how Cloudflare can help you implement PQC in your networks, it’s time to start to prepare yourself for the transition to PQC in all of your systems. This first step of inventorying and identifying is critical to a smooth rollout. We know first hand since we have undertaken an extensive evaluation of all of our systems to earn our FedRAMP Authorization certifications, and we are doing a similar evaluation again to transition all of our internal systems to PQC.


How we are setting ourselves up for the future of quantum computing

Here’s a sneak preview of the path that we are developing right now to fully secure Cloudflare itself against the cryptographic threat of quantum computers. We can break that path down into three parts: internal systems, zero trust, and open source contributions.

The first part of our path to full PQC adoption at Cloudflare covers all of our connections. The connection between you and Cloudflare is just one link in the larger path of a connection. Inside our internal systems, we are implementing two significant upgrades in 2023 to ensure that they are PQC-secure as well.

The first is that we use BoringSSL for a substantial number of connections. We currently use our fork and we are excited that upstream support for Kyber is underway. Any additional internal connections that use a different cryptographic system are being upgraded as well. The second major upgrade we are making is to shift the remaining internal connections that use TLS 1.2 to TLS 1.3. This combination of Kyber and TLS 1.3 will make our internal connections faster and more secure, even though we use a hybrid of classical and post-quantum secure cryptography. It’s a speed and security win-win. And we proved this powerhouse combination would provide that speed and security over three and a half years ago, thanks to the groundbreaking work of Cloudflare Research and Google.

The next part of that path is all about using PQC and zero trust as allies together. As we think about the security posture of tomorrow being based around post-quantum cryptography, we have to look at the other critical security paradigm being implemented today: zero trust. Today, the zero trust vendor landscape is littered with products that fail to support common protocols like IPv6 and TLS 1.2, let alone the next generation of protocols like TLS 1.3 and QUIC that enable PQC. So many middleboxes struggle under the load of today’s modern protocols. They artificially downgrade connections and break end user security all in the name of inspecting traffic because they don’t have a better solution. Organizations big and small struggled to support customers that wanted the highest possible performance and security, while also keeping their businesses safe, because of the resistance of these vendors to adapt to modern standards. We do not want to repeat the mistakes of the past. We are planning and evaluating the needed upgrades to all of our zero trust products to support PQC out of the box. We believe that zero trust and post-quantum cryptography are not at odds with one another, but rather together are the future standard of security.

Finally, it’s not enough for us to do this for ourselves and for our customers. The Internet is only as strong as its weakest links in the connection chains that network us all together. Every connection on the Internet needs the strongest possible encryption so that businesses can be secure, and everyday users can be assured of their privacy. We believe that this core technology should be vendor agnostic and open to everyone. To help make that happen, our final part of the path is all about contributing to open source projects. We have already been focused on releases to CIRCL. CIRCL (Cloudflare Interoperable, Reusable Cryptographic Library) is a collection of cryptographic primitives written in Go. The goal of this library is to be used as a tool for experimental deployment of post-quantum cryptographic algorithms.

Later this year we will be publishing as open source a set of easy-to-adopt, vendor-neutral roadmaps to help you upgrade your own systems to be secure against the future today. We want the security and privacy created by post-quantum crypto to be accessible and free for everyone. We will also keep writing extensively about our post-quantum journey. To learn more about how you can turn on PQC today, and how we have been building post-quantum cryptography at Cloudflare, check out the resources referenced throughout this post.

No, AI did not break post-quantum cryptography

Post Syndicated from Lejla Batina (Guest author) original https://blog.cloudflare.com/kyber-isnt-broken/

No, AI did not break post-quantum cryptography


News coverage of a recent paper caused a bit of a stir with this headline: “AI Helps Crack NIST-Recommended Post-Quantum Encryption Algorithm”. The news article claimed that Kyber, the encryption algorithm in question, which we have deployed world-wide, had been “broken.” Even more dramatically, the news article claimed that “the revolutionary aspect of the research was to apply deep learning analysis to side-channel differential analysis”, which seems aimed at scaring the reader into wondering what Artificial Intelligence (AI) will break next.

Reporting on the paper has been wildly inaccurate: Kyber is not broken and AI has been used for more than a decade now to aid side-channel attacks. To be crystal clear: our concern is with the news reporting around the paper, not the quality of the paper itself. In this blog post, we will explain how AI is actually helpful in cryptanalysis and dive into the paper by Dubrova, Ngo, and Gärtner (DNG), that has been misrepresented by the news coverage. We’re honored to have Prof. Dr. Lejla Batina and Dr. Stjepan Picek, world-renowned experts in the field of applying AI to side-channel attacks, join us on this blog.

We start with some background, first on side-channel attacks and then on Kyber, before we dive into the paper.

Breaking cryptography

When one thinks of breaking cryptography, one imagines a room full of mathematicians puzzling over minute patterns in intercepted messages, aided by giant computers, until they figure out the key. Famously, in World War II the Nazis’ Enigma cipher was completely broken in this way, allowing the Allied forces to read along with their communications.


It’s exceedingly rare for modern established cryptography to get broken head-on in this way. The last catastrophically broken cipher was RC4, designed in 1987, while AES, designed in 1998, stands proud with barely a scratch. The last big break of a cryptographic hash was on SHA-1, designed in 1995, while SHA-2, published in 2001, remains untouched in practice.

So what to do if you can’t break the cryptography head-on? Well, you get clever.

Side-channel attacks

Can you guess the pin code for this gate?

[Photo: a gate’s keypad lock, with some keys visibly more worn than others]

You can clearly see that some of the keys are more worn than the others, suggesting heavy use. This observation gives us some insight into the correct PIN, namely which digits it contains. But the correct order is not immediately clear. It might be 1580, 8510, or even 115085, but this is still a lot easier than trying every possible PIN code. This is an example of a side-channel attack. Using the security feature (entering the PIN) has an unintended consequence (abrading the paint) that leaks information.

There are many different types of side channels, and which one you should worry about depends on the context. For instance, the sounds your keyboard makes as you type leaks what you write, but you should not worry about that if no one is listening in.

Remote timing side channel

When writing cryptography in software, one of the best known side channels is the time it takes for an algorithm to run. Let’s take the classic example of creating an RSA signature. Grossly simplified, to sign a message m with private key d, we compute the signature s as m^d (mod n). Computing the exponent of a big number is hard, but luckily, because we’re doing modular arithmetic, there is the square-and-multiply trick. Here is a naive implementation in pseudocode:

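The original post showed this algorithm as an image. As a stand-in, here is a small JavaScript sketch of ours (using BigInt) that captures the same naive logic:

// m, d, and n are BigInts; computes s = m^d (mod n) via square-and-multiply.
function sign(m, d, n) {
  let s = 1n;
  let powerOfM = m % n;
  while (d > 0n) {
    if (d & 1n) {
      s = (s * powerOfM) % n;   // extra multiply only for 1-bits: this leaks
    }
    powerOfM = (powerOfM * powerOfM) % n;
    d >>= 1n;                   // move on to the next bit of the secret key
  }
  return s;
}

// e.g. sign(12n, 5n, 221n) === 207n, since 12^5 mod 221 = 207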

The algorithm loops over the bits of the secret key, and does a multiply step if the current bit is a 1. Clearly, the runtime depends on the secret key. Not great, but if the attacker can only time the full run, then they only learn the number of 1s in the secret key. The typical catastrophic timing attack against RSA is instead hidden behind the “mod n”. In a naive implementation this modular reduction is slower if the number being reduced is larger than or equal to n. This allows an attacker to send specially crafted messages to tease out the secret key bit-by-bit, and similar attacks are surprisingly practical.

Because of this, the mantra is: cryptography should run in “constant time”. This means that the runtime does not depend on any secret information. In our example, to remove the first timing issue, one would replace the if-statement with something equivalent to:

	s = ((s * powerOfM) mod n) * bit(d, i) + s * (1 - bit(d, i))

This ensures that the multiplication is always done. Similar countermeasures prevent practically all remote timing attacks.

Power side-channel

The story is quite different for power side-channel attacks. Again, the classic example is RSA signatures. If we hook up an oscilloscope to a smartcard that uses the naive algorithm from before, and measure the power usage while it signs, we can read off the private key by eye:

[Power trace of the naive RSA signing algorithm, with the key bits visible]

Even if we use a constant-time implementation, there are still minute changes in power usage that can be detected. The underlying issue is that hardware gates that switch use more power than those that don’t. For instance, computing 127 + 64 takes more energy than 64 + 64.

127+64 and 64+64 in binary. There are more switched bits in the first.

Masking

A common countermeasure against power side-channel leakage is masking. This means that before using the secret information, it is split randomly into shares. Then, the brunt of the computation is done on the shares, which are finally recombined.

In the case of RSA, before creating a new signature, one can generate a random r and compute m^(d+r) (mod n) and m^(-r) (mod n) separately. From these, the final signature m^d (mod n) can be computed with some extra care.

Masking is not a perfect defense. The parts where shares are created or recombined into the final value are especially vulnerable. It does make it harder for the attacker: they will need to collect more power traces to cut through the noise. In our example we used two shares, but we could bump that up even higher. There is a trade-off between power side-channel resistance and implementation cost.

One of the challenging parts in the field is to estimate how much secret information is actually leaked through the traces, and how to extract it. Here machine learning enters the picture.

Machine learning: extracting the key from the traces

Machine learning, of which deep learning is a part, represents the capability of a system to acquire its knowledge by extracting patterns from data — in this case, the secrets from the power traces. Machine learning algorithms can be divided into several categories based on their learning style. The most popular machine learning algorithms in side-channel attacks follow the supervised learning approach. In supervised learning, there are two phases: 1) training, where a machine learning model is trained based on known labeled examples (e.g., side-channel measurements where we know the key) and 2) testing, where, based on the trained model and additional side-channel measurements (now, with an unknown key), the attacker guesses the secret key. A common depiction of such attacks is given in the figure below.

[Diagram: a profiling side-channel attack, with a training phase on a controlled device and a testing phase against the target]

While the threat model may sound counterintuitive, it is actually not difficult to imagine that the attacker will have access (and control) of a device similar to the one being attacked.

In side-channel analysis, the attacks following those two phases (training and testing) are called profiling attacks.

Profiling attacks are not new. The first such attack, called the template attack, appeared in 2002. Diverse machine learning techniques have been used since around 2010, all reporting good results and the ability to break various targets. The big breakthrough came in 2016, when the side-channel community started using deep learning. It greatly increased the effectiveness of power side-channel attacks both against symmetric-key and public-key cryptography, even if the targets were protected with, for instance, masking or some other countermeasures. To be clear: it doesn’t magically figure out the key, but it gets much better at extracting the leaked bits from a smaller number of power traces.

While machine learning-based side-channel attacks are powerful, they have limitations. Carefully implemented countermeasures make the attacks more difficult to conduct. Finding a good machine learning model that can break a target can be far from trivial: this phase, commonly called tuning, can last weeks on powerful clusters.

What will the future bring for machine learning/AI in side-channel analysis? Counterintuitively, we would like to see more powerful and easy-to-use attacks. You’d think that would make us worse off, but on the contrary, it will allow us to better estimate how much actual information is leaked by a device. We also hope that we will be able to better understand why certain attacks work (or not), so that more cost-effective countermeasures can be developed. As such, the future for AI in side-channel analysis is bright, especially for security evaluators, but we are still far from being able to break most of the targets in real-world applications.

Kyber

Kyber is a post-quantum (PQ) key encapsulation method (KEM). After a six-year worldwide competition, the National Institute of Standards and Technology (NIST) selected Kyber as the post-quantum key agreement they will standardize. The goal of a key agreement is for two parties that haven’t talked to each other before to agree securely on a shared key they can use for symmetric encryption (such as Chacha20Poly1305). As a KEM, it works slightly differently, with different terminology, than a traditional Diffie–Hellman key agreement (such as X25519):

[Diagram: KEM key agreement flow between client and server]

When connecting to a website, the client first generates a new ephemeral keypair that consists of a private and public key. It sends the public key to the server. The server then encapsulates a shared key with that public key: this yields a random shared key, which the server keeps, and a ciphertext (in which the shared key is hidden), which the server returns to the client. The client can then use its private key to decapsulate the shared key from the ciphertext. Now the server and client can communicate with each other using the shared key.

Key agreement is particularly important to make secure against attacks of quantum computers. The reason is that an attacker can store traffic today, and crack the key agreement in the future, revealing the shared key and all communication encrypted with it afterwards. That is why we have already deployed support for Kyber across our network.

The DNG paper

With all the background under our belt, we’re ready to take a look at the DNG paper. The authors perform a power side-channel attack on their own masked implementation of Kyber with six shares.

Point of attack

They attack the decapsulation step. In the decapsulation step, after the shared key is extracted, it’s encapsulated again, and compared against the original ciphertext to detect tampering. For this re-encryption step, the precursor of the shared key—let’s call it the secret—is encoded bit-by-bit into a polynomial. To be precise, the 256-bit secret needs to be converted to a polynomial with 256 coefficients modulo q=3329, where the ith coefficient is (q+1)/2 if the ith bit is 1 and zero otherwise.

This function sounds simple enough, but creating a masked version is tricky. The rub is that the natural way to create shares of the secret is to have shares that xor together to be the secret, and that the natural way to share polynomials is to have shares that add together to get to the intended polynomial.

This is the two-shares implementation of the conversion that the DNG paper attacks:

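The implementation itself appeared as an image in the original post. As a simplified illustration of the leaky pattern at its core (our own sketch, not the paper’s code), each bit is expanded into an all-ones or all-zeros mask that gates the addition of (q+1)/2:

const Q = 3329;                      // the Kyber modulus
function bitToCoefficient(bit) {
  const mask = -bit & 0xffff;        // 0xffff if bit == 1, 0 if bit == 0
  return mask & ((Q + 1) / 2);       // add (q+1)/2 only when the bit is set
}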

The code loops over the bits of the two shares. For each bit, it creates a mask that’s 0xffff if the bit was 1 and 0 otherwise. Then this mask is used to add (q+1)/2 to the polynomial share if appropriate. Processing a 1 will use a bit more power. It doesn’t take an AI to figure out that this will be a leaky function. In fact, this pattern was pointed out to be weak back in 2016, and explicitly mentioned to be a risk for masked Kyber in 2020. Apropos, one way to mitigate this is to process multiple bits at once — for the state of the art, tune into April 2023’s NIST PQC seminar. For the moment, let’s allow the paper its weak target.

The authors do not claim any fundamentally new attack here. Instead, they improve the effectiveness of the attack in two ways: the way they train the neural network, and how to use multiple traces more effectively by changing the ciphertext sent. So, what did they achieve?

Effectiveness


To test the attack, they use a ChipWhisperer-Lite board with a Cortex-M4 CPU, which they downclock to 24 MHz. Power usage is sampled at 24 MHz, with high 10-bit precision.

To train the neural networks, 150,000 power traces are collected for decapsulation of different ciphertexts (with known shared key) for the same KEM keypair. This is already a somewhat unusual situation for a real-world attack: for key agreement, KEM keypairs are ephemeral, generated and used only once. Still, there are certainly legitimate use cases for long-term KEM keypairs, such as for authentication, HPKE, and in particular ECH.

The training is a key step: different devices even from the same manufacturer can have wildly different power traces running the same code. Even if two devices are of the same model, their power traces might still differ significantly.

The main contribution highlighted by the authors is that they train their neural networks to attack an implementation with 6 shares, by starting with a neural network trained to attack an implementation with 5 shares. That one can be trained from a model to attack 4 shares, and so on. Thus to apply their method, of these 150,000 power traces, one-fifth must be from an implementation with 6 shares, another one-fifth from one with 5 shares, et cetera. It seems unlikely that anyone will deploy a device where an attacker can switch between the number of shares used in the masking on demand.

Given these affordances, the attack proper can commence. The authors report that, from a single power trace of a two-share decapsulation, they could recover the shared key under these ideal circumstances with probability… 0.12%. They do not report the numbers for single trace attacks on more than two shares.

When we’re allowed multiple traces of the same decapsulation, side-channel attacks become much more effective. The second trick is a clever twist on this: instead of creating a trace of decapsulation of exactly the same message, the authors rotate the ciphertext to move bits of the shared key in more favorable positions. With 4 traces that are rotations of the same message, the success probability against the two-shares implementation goes up to 78%. The six-share implementation stands firm at 0.5%. When allowing 20 traces from the six-share implementation, the shared key can be recovered with an 87% chance.

In practice

The hardware used in the demonstration might be somewhat comparable to a smart card, but it is very different from high-end devices such as smartphones, desktop computers and servers. Simple power analysis side-channel attacks on even just embedded 1GHz processors are much more challenging, requiring tens of thousands of traces using a high-end oscilloscope connected close to the processor. There are much better avenues for attack with this kind of physical access to a server: just connect the oscilloscope to the memory bus.

Except for especially vulnerable applications, such as smart cards and HSMs, power side-channel attacks are widely considered infeasible. Although sometimes, when the planets align, an especially potent power side-channel attack can be turned into a remote timing attack due to throttling, as demonstrated by Hertzbleed. To be clear: the present attack does not even come close.

And even for these vulnerable applications, such as smart cards, this attack is not particularly potent or surprising. In the field, it is not a question of whether a masked implementation leaks its secrets, because it always does. It’s a question of how hard it is to actually pull off. Papers such as the DNG paper contribute by helping manufacturers estimate how many countermeasures to put in place, to make attacks too costly. It is not the first paper studying power side-channel attacks on Kyber and it will not be the last.

Wrapping up

AI did not completely undermine a new wave of cryptography, but instead is a helpful tool to deal with noisy data and discover the vulnerabilities within it. There is a big difference between a direct break of cryptography and a power side-channel attack. Kyber is not broken, and the presented power side-channel attack is not cause for alarm.

Super Bot Fight Mode is now configurable!

Post Syndicated from Adam Martinetti original https://blog.cloudflare.com/configurable-super-bot-fight-mode/

Super Bot Fight Mode is now configurable!


Millions of customers around the world use Cloudflare to keep their applications safe by blocking bot traffic to their website. We block an average of 336 million requests per day for self-service customers using a service called Super Bot Fight Mode. It is a crucial part of how customers keep their websites online.

While most customers use Cloudflare’s Verified Bot directory to securely allow good, automated traffic, some customers also like to write their own localized integration scripts to crawl and update their website, or perform other necessary maintenance functions. Because these bots are only used on a single website, they don’t fit our verified bot criteria the way a Google or Bing crawler does. This makes Super Bot Fight Mode difficult to manage for these types of customers.

Super Bot Fight Mode: now configurable!

Previously, Super Bot Fight Mode ran as an independent service on our global network and other Cloudflare security services were unable to affect its configuration. To solve this, we’ve rewritten Super Bot Fight Mode behind the scenes. It’s now a new managed ruleset in the new WAF, just like the OWASP Core Ruleset or the Cloudflare Managed Ruleset. This doesn’t change the interface, but brings Super Bot Fight Mode closer to where customers are managing their other security exceptions.

As we speak, the WAF team is carefully migrating all self-serve customers from our old Firewall Rules system to a new system. This new system, called Custom Rules, simplifies the exception process in the rules you write with no other changes or loss of functionality. In the old system we had two separate actions, “allow” and “bypass”. In the new Custom Rules, there’s only one action, called “skip”. Rules that “skip” traffic can skip the rest of your custom rules (just like an “allow” rule would) as well as other Cloudflare services. Once your account receives the “skip” action, you will see the option to skip Super Bot Fight Mode. Here’s an example:

[Screenshot: a custom rule using the “Skip” action for Super Bot Fight Mode]
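As a purely illustrative example (the IP and path are placeholders of our own), a skip rule for a self-hosted crawler might use an expression like the following, with its action set to skip Super Bot Fight Mode:

(ip.src eq 203.0.113.7 and http.request.uri.path contains "/site-crawler")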

While we spoke to customers about their use cases for skipping Super Bot Fight Mode, one use-case kept popping up that didn’t quite fit the rest: WordPress Loopback requests. As many people know, as part of WordPress’ self-diagnostic capabilities, a WordPress site will make automated requests back to itself over the Internet to confirm its reachability and functionality. These loopback diagnostics can come from dozens of different community developed plugins, each implementing loopback requests slightly differently. To help accommodate an ever-growing diversity in diagnostic tools used in WordPress, we have added a simple configuration option to securely allow these loop-back requests.


In the future, we will be integrating this feature with the Cloudflare WordPress plugin to make it even easier to use WordPress with Cloudflare.

What’s next?

Self-serve customers with Custom Rules can create “Skip” rules to create exceptions for Super Bot Fight Mode today. We are currently rolling out Custom Rules to all of our customers. If you do not see this option available now, you should expect to see it in the next several weeks. If the lack of flexibility has prevented you from using Super Bot Fight Mode in the past, please log into the Cloudflare dashboard and try it with these new skip rules!

While we’ve added flexibility to customers’ Super Bot Fight Mode deployments, we know that Free plan customers want the same level of customization that self-serve customers do. Now that our migration of Super Bot Fight Mode to the new WAF is complete, we plan to do the same for the original Bot Fight Mode to allow more free customers than ever before to join us in the fight against bots.

Protect your key server with Keyless SSL and Cloudflare Tunnel integration

Post Syndicated from Dina Kozlov original https://blog.cloudflare.com/protect-your-key-server-with-keyless-ssl-and-cloudflare-tunnel-integration/

Protect your key server with Keyless SSL and Cloudflare Tunnel integration


Today, we’re excited to announce a big security enhancement to our Keyless SSL offering. Keyless SSL allows customers to store their private keys on their own hardware, while continuing to use Cloudflare’s proxy services. In the past, the configuration required customers to expose the location of their key server through a DNS record – something that is publicly queryable. Now, customers will be able to use our Cloudflare Tunnels product to send traffic to the key server through a secure channel, without publicly exposing it to the rest of the Internet.

A primer on Keyless SSL

Security has always been a critical aspect of online communication, especially when it comes to protecting sensitive information. Today, Cloudflare manages private keys for millions of domains, which allows the data communicated by a client to stay secure and encrypted. While Cloudflare adopts the strictest controls to secure these keys, certain industries such as financial or medical services may have compliance requirements that prohibit the sharing of private keys. In the past, Cloudflare required customers to upload their private key in order for us to provide our L7 services. That was, until we built out Keyless SSL in 2014, a feature that allows customers to keep their private keys stored on their own infrastructure while continuing to make use of Cloudflare’s services.

While Keyless SSL is compatible with any hardware that supports the PKCS#11 standard, Keyless SSL users frequently opt to secure their private keys within HSMs (Hardware Security Modules): specialized machines designed to be tamper-proof, resistant to unauthorized access or manipulation, secure against attacks, and optimized to efficiently execute cryptographic operations such as signing and decryption. To make this easy for customers to set up, we launched integrations between Keyless SSL and the HSM offerings of all major cloud providers during Security Week in 2021.

Strengthening the security of key servers even further

In order for Cloudflare to communicate with a customer’s key server, we have to know the IP address associated with it. To configure Keyless SSL, we ask customers to create a DNS record that indicates the IP address of their key server. As a security measure, we ask customers to keep this record under a long, random hostname such as “11aa40b4a5db06d4889e48e2f738950ddfa50b7349d09b5f.example.com”. While this adds a layer of obfuscation, it still exposes the IP address of the key server to the public Internet, allowing anyone to send requests to that server. We lock down the connection between Cloudflare and the Keyless server with mutual TLS, so the key server only accepts requests that present a valid Cloudflare client certificate. While this allows the key server to drop any requests with an invalid or missing client certificate, the key server is still publicly exposed, making it susceptible to attacks.

Instead, Cloudflare should be the only party that knows about this key server’s location, as it should be the only party making requests to it.

Enter: Cloudflare Tunnel

Instead of re-inventing the wheel, we decided to make use of an existing Cloudflare product that our customers already use to protect the connections between Cloudflare and their origin servers: Cloudflare Tunnel!

Cloudflare Tunnel gives customers the tools to connect incoming traffic to their private networks without exposing those networks to the Internet through a public hostname. It works by having customers install a Cloudflare daemon called “cloudflared”, which Cloudflare’s client then connects to.

Now, customers will be able to use the same functionality but for connections made to their key server.

Getting started

To set this up, customers will need to configure a virtual network on Cloudflare; this is where customers tell us the IP address or hostname of their key server. Then, when uploading a Keyless certificate, instead of telling us the public hostname associated with the key server, customers can tell us the virtual network that resolves to it. When making requests to the key server, Cloudflare’s gokeyless client will automatically connect to the “cloudflared” server and will continue to use mutual TLS as an additional security layer on top of that connection. For more instructions on how to set this up, check out our Developer Docs.
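
For illustration, uploading a Keyless certificate that points at a tunnel-backed key server might look roughly like the Python call below. The /keyless_certificates endpoint is documented, but the tunnel-related fields shown here are assumptions; follow the Developer Docs for the exact payload.

import requests  # assumed installed

API = "https://api.cloudflare.com/client/v4"
ZONE_ID = "your-zone-id"   # placeholder
TOKEN = "your-api-token"   # placeholder

payload = {
    "name": "keyless-over-tunnel",
    # The certificate is public; its private key never leaves your key server.
    "certificate": open("keyless_cert.pem").read(),
    # Hypothetical fields: the key server's private IP inside the virtual
    # network, plus the ID of the virtual network configured earlier.
    "tunnel": {"private_ip": "10.0.0.1", "vnet_id": "your-vnet-id"},
}

resp = requests.post(
    f"{API}/zones/{ZONE_ID}/keyless_certificates",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()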

If you’re an Enterprise customer and are interested in using Keyless SSL in conjunction with Cloudflare Tunnels, reach out to your account team today to get set up.

How Cloudflare and IBM partner to help build a better Internet

Post Syndicated from David McClure original https://blog.cloudflare.com/ibm-keyless-bots/

How Cloudflare and IBM partner to help build a better Internet

In this blog post, we wanted to highlight some ways that Cloudflare and IBM Cloud work together to drive product innovation and deliver services that address the needs of our mutual customers. On our blog, we often discuss exciting new product developments and how we are solving real-world problems in our effort to make the Internet better, and many of our customers and partners play an important role in that.

IBM Cloud and Cloudflare have been working together since 2018 to integrate Cloudflare application security and performance products natively into IBM Cloud. IBM Cloud Internet Services (CIS) has customers across a wide range of industry verticals and geographic regions, and it also has several specialist groups building unique service offerings.

The IBM Cloud team specializes in serving clients in highly regulated industries, aiming to ensure their resiliency, performance, security and compliance needs are met. One group that we’ve been working with recently is IBM Cloud for Financial Services. This group extends the capabilities of IBM Cloud to help serve the complex security and compliance needs of banks, financial institutions and fintech companies.

Bot Management

As malicious bot attacks get more sophisticated and manual mitigations become more onerous, a dynamic and adaptive solution is required for enterprises running Internet-facing workloads. With Cloudflare Bot Management on IBM Cloud Internet Services, we aim to help IBM clients protect their Internet properties from targeted application abuse such as account takeover attacks, inventory hoarding, carding abuse, and more. Bot Management will be available in the second quarter of 2023.

Threat actors specifically target financial services entities with account takeover attacks, and this is where Cloudflare can help: as much as 71% of the login requests we see come from bots (source: Cloudflare data). Cloudflare’s Bot Management is powered by a global machine learning model that analyzes an average of 45 million HTTP requests a second to track botnets across our network. Cloudflare’s Bot Management solution has the potential to benefit all IBM CIS customers.

Supporting banks, financial institutions, and fintechs

IBM Cloud has been a leader when it comes to providing solutions for the financial services industry and has developed several key management solutions designed so that clients need only store their private keys in custom-built devices.

The IBM CIS team wants to incorporate the right mix of security and performance, which necessitates the use of cloud-based DDoS protection, WAF, and Bot Management. Specifically, they wanted to incorporate the powerful security tools offered through IBM’s Enterprise-level Cloud Internet Services offerings. When using a cloud solution, it is necessary to proxy traffic, which can create a challenge when it comes to managing private keys. While Cloudflare adopts strict controls to protect these keys, organizations in highly regulated industries may have security policies and compliance requirements that prevent them from sharing private keys.

Enter Cloudflare’s Keyless SSL solution.

Cloudflare built Keyless SSL to allow customers to have total control over exactly where private keys are stored. With Keyless SSL and IBM’s key storage solutions, we aim to help enterprises benefit from the robust application protections available through Cloudflare’s WAF, including Cloudflare Bot Management, while still retaining control of their private keys.

“We aim to ensure our clients meet their resiliency, performance, security and compliance needs. The introduction of Keyless SSL and Bot Management security capabilities can further our collaborative accomplishments with Cloudflare and help enterprises, including those in regulated industries, to leverage cloud-native security and adaptive threat mitigation tools.”
Zane Adam, Vice President, IBM Cloud.

“Through our collaboration with IBM Cloud Internet Services, we get to draw on the knowledge and experience of IBM teams, such as the IBM Cloud for Financial Services team, and combine it with our incredible ability to innovate, resulting in exciting new product and service offerings.”
David McClure, Global Alliance Manager, Strategic Partnerships

If you want to learn more about how IBM leverages Cloudflare to protect their customers, visit: https://www.ibm.com/cloud/cloudflare

IBM experts are here to help you if you have any additional questions.

Announcing Cloudflare Fraud Detection

Post Syndicated from Adam Martinetti original https://blog.cloudflare.com/cloudflare-fraud-detection/

Announcing Cloudflare Fraud Detection

The world changed when the COVID-19 pandemic began. Everything moved online to a much greater degree: school, work, and, surprisingly, fraud. Although some degree of online fraud has existed for decades, the Federal Trade Commission reported that consumers lost almost $8.8 billion to fraud in 2022 (a more than 400% increase since 2019), the continuation of a disturbing trend. People spend more time alone than ever before, and that time alone makes them both easier to target and more vulnerable to fraud. Companies are falling victim to these trends just as much as individuals: according to PwC’s Global Economic Crime and Fraud Survey, more than half of companies with at least $10 billion in revenue experienced some sort of digital fraud.

This is a familiar story in the world of bot attacks. Cloudflare Bot Management helps customers identify the automated tools behind online fraud, but it’s important to note that not all fraud is committed by bots. If the target is valuable enough, bad actors will contract out the exploitation of online applications to real people. Security teams need to look at more than just bots to better secure online applications and tackle modern, online fraud.

Today, we’re excited to announce Cloudflare Fraud Detection. Fraud Detection will give you precise, easy-to-use tools that can be deployed in seconds to any website on the Cloudflare network to help detect and categorize fraud. For every type of fraud we detect on your website, you will be able to choose the behavior that makes the most sense to you. While some customers will want to block fraudulent traffic at our edge, others may want to pass this information along in headers to build integrations with their own app, or use our Cloudflare Workers platform to direct high-risk users to an alternate online experience with fewer capabilities.

The online fraud experience today

When we talk to organizations impacted by sophisticated online fraud, the first thing we hear from frustrated security teams is that they know what they could do to stop fraud in a vacuum: they’ve proposed requiring email verification on signup, enforcing two-factor authentication for all logins, or blocking online purchases from anonymizing VPNs or from countries where they repeatedly see a disproportionately high number of chargebacks. While all of these measures would undoubtedly reduce fraud, they would also make the user experience worse. The fear for every company is that a bad UX will mean slower adoption and less revenue, and that’s too steep a price to pay for most run-of-the-mill online fraud.

For those who’ve chosen to preserve that frictionless user experience and bear the cost of fraud, we see two big impacts: higher infrastructure costs and less efficient employees. Bad actors that abuse account creation endpoints or service availability endpoints often do so with floods of highly distributed HTTP requests, quickly rotating through residential proxies to slip under IP-based rate limiting rules. Without a way to identify fraudulent traffic with certainty, companies are forced to scale up their infrastructure to serve new peaks in request traffic, even when they know the majority of this traffic is illegitimate. Engineering and Trust and Safety teams suddenly have a whole new set of responsibilities: regularly banning IP addresses that will probably never be used again, routinely purging fraudulent data from over-capacity databases, and sometimes even becoming de facto fraud investigators. As a result, the organization incurs greater costs without delivering any greater value to its customers.

Reduce modern fraud without hurting UX

Organizations have told us loud and clear that an effective fraud management solution needs to reliably stop bad actors before they can create fraudulent accounts, use stolen credit cards, or steal customer data, all while ensuring a frictionless experience for real users. We are building novel and highly accurate detections for the four fraud types that businesses around the world most often ask us about:

  • Fake Account Creation: Bad actors signing up for many different accounts to gain access to promotional rewards, or more resources than a single user should have access to.
  • Account Takeover: Gaining unauthorized access to legitimate accounts, by means such as using stolen username and password combinations from other websites, guessing weak passwords, or abusing account recovery mechanisms.
  • Card Testing and Fraudulent Transactions: Testing the validity of stolen credit card details or using those same details to purchase goods or services.
  • Expediting: Obtaining limited availability goods or services by circumventing the normal user flow to complete orders more quickly than should be possible.

In order to trust a fraud management solution, organizations have to understand the decisions or predictions behind its detections. This is referred to as explainability. It is not enough to know that a signup attempt was flagged as fraud: you need to know exactly which field supplied by the user led us to think there was an issue, why it was an issue, and whether it was part of a larger pattern. We will pass along this level of detail when we detect fraud so you can verify that we are only keeping the bad actors out.

Every business that deals with modern, online fraud has a different idea of what risks are acceptable, and a different preference for dealing with fraud once it’s been identified. To give customers maximum flexibility, we’re building Cloudflare’s fraud detection signals to be used individually, or combined with other Cloudflare security products in whichever way best fits each customer’s risk profile and use case, all while using the familiar Cloudflare Firewall Rules interface. Templated rules and suggestions will be available to provide guidance and help customers become familiar with the new features, but each customer will have the option of fully customizing how they want to protect each internet application. Customers can either block, rate-limit, or challenge requests at the edge, or send those signals upstream in request headers, to trigger custom in-application behavior.
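
To make the header-based integration concrete, here is a minimal sketch of an origin application reacting to a fraud signal passed upstream. The header name X-Fraud-Score and its numeric format are hypothetical, since the feature's exact header schema has not been published:

from flask import Flask, jsonify, request  # Flask assumed installed

app = Flask(__name__)

@app.route("/signup", methods=["POST"])
def signup():
    # Hypothetical header set by an upstream Cloudflare rule; defaults to 0.
    score = float(request.headers.get("X-Fraud-Score", "0"))
    if score > 0.9:
        # High risk: route to a reduced-capability experience instead of blocking.
        return jsonify(status="pending_review"), 202
    # ... normal account creation logic would run here ...
    return jsonify(status="created"), 201

if __name__ == "__main__":
    app.run()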

Cloudflare provides application performance and security services to millions of sites, and we see 45 million HTTP requests per second on average. The massive diversity and volume of this traffic puts us in a unique position to analyze and defeat online fraud. Cloudflare Bot Management is already built to run our machine learning model that detects automated traffic on every request we see. To tackle more challenging use cases like online fraud, we made our lightning-fast machine learning even more performant: the typical model now executes in under 0.2 milliseconds, giving us the architecture we need to run multiple specialized models in parallel without slowing down content delivery.

Stopping fake account creation and adding to Cloudflare’s defense in depth

The first problem our customers asked us to tackle is detecting fake account creation. Cloudflare is perfectly positioned to solve this because we see more account creation pages than anyone else. Using sampled fake-account attack data from our customers, we started looking at signup submission data and at how threat intelligence curated by our Cloudforce One team might help. We found that the data used in our Cloudflare One products was already able to identify 72% of fake accounts based on the signup details supplied by the bad actor, such as the email address or the domain used in the attack. We are continuing to add sources of threat intelligence specific to fake accounts to push this number close to 100%. On top of these threat-intelligence-based rules, we are also training new machine learning models on this data that will spot trends, such as popular fraud domains, drawing on intelligence from the millions of domains we see across the Cloudflare network.

Making fraud inefficient by expediting detection

The second problem customers asked us to prioritize is expediting. As a reminder, expediting means visiting a succession of web pages faster than would be possible for a normal user, and sometimes skipping ahead in the order of web pages in order to efficiently exploit a resource.

For instance, let’s say that you have an Account Recovery page that is being spammed by a sophisticated group of bad actors looking for vulnerable users whose reset tokens they can steal. In this case, the fraudsters have access to a large number of valid email addresses and are testing which of those addresses may be in use at your website. To prevent your account recovery process from being abused, we need to ensure that no single person can move through the account recovery process faster than, or in a different order than, a real person would.

In order to complete a valid password reset action on your site, you may know that a user should have:

  • Made a GET request to render your login page
  • Made a POST request to the login page (at least one second after receiving the login page HTML)
  • Made a GET request to render the Account Recovery page (at least one second after receiving the POST response)
  • Made a POST request to the password reset page (at least one second after receiving the Account Recovery page HTML)
  • Taken less than five seconds in total to complete the process

To solve this, we will rely on encrypted data stored in a token on the user’s device to determine whether the user has visited all the necessary pages, in a reasonable amount of time, before performing sensitive actions on your site. If your account recovery process is being abused, the encrypted token we supply acts as a VIP pass, allowing only authorized users to successfully complete the password recovery process. Without a pass indicating the user has gone through the normal recovery flow in the correct order and time, they are denied entry to complete a password recovery. By forcing the bad actor to behave the same as a legitimate user, we make their task of checking which of their compromised email addresses might be registered at your site an impossibly slow process, forcing them to move on to other targets.
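
As a minimal sketch of that idea, using a signed rather than encrypted token and the hypothetical four-step flow above (the production implementation is Cloudflare's own and differs in detail):

import base64, hashlib, hmac, json, time

SECRET = b"server-side-secret"          # never leaves the server
FLOW = ["GET /login", "POST /login", "GET /recover", "POST /reset"]
MIN_GAP, MAX_TOTAL = 1.0, 5.0           # seconds, per the rules above

def _sign(state: dict) -> str:
    raw = json.dumps(state, separators=(",", ":")).encode()
    mac = hmac.new(SECRET, raw, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(raw + mac).decode()

def _verify(token: str) -> dict | None:
    blob = base64.urlsafe_b64decode(token)
    raw, mac = blob[:-32], blob[-32:]
    good = hmac.new(SECRET, raw, hashlib.sha256).digest()
    return json.loads(raw) if hmac.compare_digest(mac, good) else None

def advance(token: str | None, step: str) -> str | None:
    """Return an updated token if `step` is the legitimate next request, else None."""
    now = time.time()
    state = {"idx": -1, "start": now, "last": now} if token is None else _verify(token)
    if state is None:
        return None                     # tampered token
    nxt = state["idx"] + 1
    if nxt >= len(FLOW) or FLOW[nxt] != step:
        return None                     # out-of-order request
    if nxt > 0 and (now - state["last"] < MIN_GAP or now - state["start"] > MAX_TOTAL):
        return None                     # too fast between steps, or too slow overall
    return _sign({"idx": nxt, "start": state["start"], "last": now})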

These are just the first two techniques we use to identify and block fraud. We are also building Account Takeover and carding abuse detections that we will be talking about in the future on this blog. As online fraud continues to evolve, we will continue to build new detections, leveraging Cloudflare’s unique position to help keep the Internet safe.

Where do I sign up?

Cloudflare’s mission is to help build a better Internet, and that includes dealing with the evolution of modern online fraud. If you’re spending hours cleaning up after fraud, or are tired of paying to serve web traffic to bad actors, you can join the Cloudflare Fraud Detection Early Access in the second half of 2023 by submitting your contact information here. Early Access customers can opt in to providing training data sets right away, making our models more effective for their use cases. You’ll also get test access to our newest models and future fraud protection features as soon as they roll out.

Automatically discovering API endpoints and generating schemas using machine learning

Post Syndicated from John Cosgrove original https://blog.cloudflare.com/ml-api-discovery-and-schema-learning/

Automatically discovering API endpoints and generating schemas using machine learning

Cloudflare now automatically discovers all API endpoints and learns API schemas for all of our API Gateway customers. Customers can use these new features to enforce a positive security model on their API endpoints even if they have little-to-no information about their existing APIs today.

The first step in securing your APIs is knowing your API hostnames and endpoints. We often hear that customers are forced to start their API cataloging and management efforts with something along the lines of “we email around a spreadsheet and ask developers to list all their endpoints”.

Can you imagine the problems with this approach? Maybe you have seen them firsthand. The “email and ask” approach creates a point-in-time inventory that is likely to change with the next code release. It relies on tribal knowledge that may disappear when people leave the organization. Last but not least, it is susceptible to human error.

Even if you had an accurate API inventory collected by group effort, validating that API was being used as intended by enforcing an API schema would require even more collective knowledge to build that schema. Now, API Gateway’s new API Discovery and Schema Learning features combine to automatically protect APIs across the Cloudflare global network and remove the need for manual API discovery and schema building.

API Gateway discovers and protects APIs

API Gateway discovers APIs through a feature called API Discovery. Previously, API Discovery used customer-specific session identifiers (HTTP headers or cookies) to identify API endpoints and display their analytics to our customers.

Doing discovery in this way worked, but it presented three drawbacks:

  1. Customers had to know which header or cookie they used in order to delineate sessions. While session identifiers are common, finding the proper token to use can take time.
  2. Needing a session identifier for API Discovery precluded us from monitoring and reporting on completely unauthenticated APIs. Customers today still want visibility into session-less traffic to ensure all API endpoints are documented and that abuse is at a minimum.
  3. Once the session identifier was input into the dashboard, customers had to wait up to 24 hours for the Discovery process to complete. Nobody likes to wait.

While this approach had drawbacks, we knew we could quickly deliver value to customers by starting with a session-based product. As we gained customers and passed more traffic through the system, we knew our new labeled data would be extremely useful to further build out our product. If we could train a machine learning model with our existing API metadata and the new labeled data, we would no longer need a session identifier to pinpoint which endpoints were for APIs. So we decided to build this new approach.

We took what we learned from the session identifier-based data and built a machine learning model to uncover all API traffic to a domain, regardless of session identifier. With our new Machine Learning-based API Discovery, Cloudflare continually discovers all API traffic routed through our network without any prerequisite customer input. With this release, API Gateway customers will be able to get started with API Discovery faster than ever, and they’ll uncover unauthenticated APIs that they could not discover before.

Session identifiers are still important to API Gateway, as they form the basis of our volumetric abuse prevention rate limits as well as our Sequence Analytics. See more about how the new approach performs in the “How it works” section below.

API Protection starting from nothing

Now that you’ve found new APIs using API Discovery, how do you protect them? To defend against attacks, API developers must know exactly how they expect their APIs to be used. Luckily, developers can programmatically generate an API schema file which codifies acceptable input to an API and upload that into API Gateway’s Schema Validation.

However, we already talked about how many customers can’t find their APIs as fast as their developers build them. When they do find APIs, it’s very difficult to accurately build a unique OpenAPI schema for each of potentially hundreds of API endpoints, given that security teams seldom see more than the HTTP request method and path in their logs.

When we looked at API Gateway’s usage patterns, we saw that customers would discover APIs but almost never enforce a schema. When we asked them “why not?”, the answer was simple: “Even when I know an API exists, it takes so much time to track down who owns each API so that they can provide a schema. I have trouble prioritizing those tasks higher than other must-do security items.” The lack of time and expertise was the biggest gap preventing our customers from enabling protections.

So we decided to close that gap. We found that the same learning process we used to discover API endpoints could then be applied to endpoints once they were discovered in order to automatically learn a schema. Using this method we can now generate an OpenAPI formatted schema for every single endpoint we discover, in real time. We call this new feature Schema Learning. Customers can then upload that Cloudflare-generated schema into Schema Validation to enforce a positive security model.

How it works

Machine learning-based API discovery

With RESTful APIs, requests are made up of different HTTP methods and paths. Take, for example, the Cloudflare API. You’ll notice a common trend in the paths that makes requests to this API stand out from requests to this blog: API requests all start with /client/v4 and continue with the service name, a unique identifier, and sometimes service feature names and further identifiers.

How could we easily identify API requests? At first glance, these requests seem easy to discover programmatically with a heuristic like “path starts with /client”, yet the core of our new Discovery is a machine-learned model powering a classifier that scores HTTP transactions. If API paths are so structured, why is machine learning needed; can’t one just use a simple heuristic?

The answer boils down to the question: what actually constitutes an API request and how does it differ from a non-API request? Let’s look at two examples.

Like the Cloudflare API, many of our customers’ APIs follow patterns such as prefixing the path of their API request with an “api” identifier and a version, for example:  /api/v2/user/7f577081-7003-451e-9abe-eb2e8a0f103d.

So just looking for “api” or a version in the path is already a pretty good heuristic that tells us this is very likely part of an API, but it is unfortunately not always as easy.

Let’s consider two further examples, /users/7f577081-7003-451e-9abe-eb2e8a0f103d.jpg and /users/7f577081-7003-451e-9abe-eb2e8a0f103d, which differ only in a .jpg extension. The first path could just be a static resource, like the thumbnail of a user. The second path does not give us many clues from the path alone.

Manually crafting such heuristics quickly becomes difficult. While humans are great at finding patterns, building heuristics is challenging at the scale of the data Cloudflare sees each day. As such, we use machine learning to automatically derive these heuristics, so that we know they are reproducible and meet a given accuracy target.

The input to training is a set of features of HTTP request/response samples, such as the content type or file extension, that we collected through the session identifier-based Discovery mentioned earlier. Unfortunately, not everything in this data is clearly an API, and we also need samples that represent non-API traffic. As such, we started with the session-identifier Discovery data, manually cleaned it up, and derived further samples of non-API traffic. We took great care not to overfit the model to the data; that is, we want the model to generalize beyond the training data.
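
As a simplified illustration (the production feature set is not public, so the features below are assumptions drawn from the examples above):

import os
from urllib.parse import urlparse

def extract_features(path: str, content_type: str) -> dict:
    """Derive simple API-vs-non-API signals from one HTTP transaction."""
    segments = [s for s in urlparse(path).path.strip("/").split("/") if s]
    ext = os.path.splitext(segments[-1])[1].lower() if segments else ""
    return {
        "has_api_segment": any(s in ("api", "rest", "graphql") for s in segments),
        "has_version_segment": any(s.startswith("v") and s[1:].isdigit() for s in segments),
        "file_extension": ext,
        "json_response": content_type.startswith("application/json"),
        "path_depth": len(segments),
    }

extract_features("/users/7f577081-7003-451e-9abe-eb2e8a0f103d.jpg", "image/jpeg")
# -> has_api_segment: False, file_extension: '.jpg', json_response: False, ...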

To train the model, we used the CatBoost library, with which we already have a good deal of expertise, as it also powers our Bot Management ML models. As a simplification, one can regard the resulting model as a flow chart that tells us which conditions to check one after another; for example, if the path contains “api”, then also check whether there is no file extension, and so forth. At the end of this flow chart is a score that tells us the likelihood that an HTTP transaction belongs to an API.
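
A toy version of that training step with CatBoost, reusing the illustrative features sketched earlier (the real training data, feature set, and hyperparameters differ):

from catboost import CatBoostClassifier  # pip install catboost

# Rows: [has_api_segment, has_version_segment, file_extension, json_response, path_depth]
X = [
    [1, 1, "",     1, 4],   # /api/v2/user/{id}    -> API
    [1, 0, "",     1, 2],   # /api/health          -> API
    [0, 0, ".jpg", 0, 2],   # /users/{id}.jpg      -> static asset
    [0, 0, ".css", 0, 1],   # /styles.css          -> static asset
    [0, 1, "",     1, 3],   # /v1/orders/{id}      -> API
    [0, 0, ".png", 0, 3],   # /img/icons/logo.png  -> static asset
]
y = [1, 1, 0, 0, 1, 0]

model = CatBoostClassifier(iterations=100, depth=4, verbose=False)
model.fit(X, y, cat_features=[2])  # the file-extension column is categorical

# Score a new transaction: probability that it belongs to an API.
print(model.predict_proba([[0, 0, "", 1, 2]])[0][1])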

Given the trained model, we can input the features of HTTP requests/responses that run through the Cloudflare network and calculate the likelihood that each HTTP transaction belongs to an API. Feature extraction and model scoring are done in Rust and take only a couple of microseconds on our global network. Since Discovery sources data from our powerful data pipeline, it is not actually necessary to score every transaction. We can reduce the load on our servers by scoring only those transactions that we know will end up in our data pipeline to begin with, saving CPU time and keeping the feature cost-effective.

With the classification results in our data pipeline, we can use the same API Discovery mechanism that we’ve been using for the session identifier-based discovery. This existing system works great and allows us to reuse code efficiently. It also aided us when comparing our results with the session identifier-based Discovery, as the systems are directly comparable.

For API Discovery results to be useful, Discovery’s first task is to simplify the unique paths we see into variables. We’ve talked about this before. It is not trivial to deduce the various identifier schemes that we see across the global network, especially when sites use custom identifiers beyond a straightforward GUID or integer format. API Discovery aptly normalizes paths containing variables with the help of a few different variable classifiers and supervised learning.

Only after normalizing paths are the Discovery results ready for our users to use in a straightforward fashion.
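
Conceptually, normalization maps identifier-like path segments to variables. The regex rules below are a simplified stand-in for the variable classifiers and supervised learning mentioned above:

import re

# Stand-ins for the variable classifiers described above.
CLASSIFIERS = [
    (re.compile(r"^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$", re.I), "{guid}"),
    (re.compile(r"^\d+$"), "{int}"),
    (re.compile(r"^[0-9a-f]{16,64}$", re.I), "{hex}"),
]

def normalize(path: str) -> str:
    """Collapse identifier-like path segments into variables."""
    out = []
    for seg in path.strip("/").split("/"):
        for pattern, placeholder in CLASSIFIERS:
            if pattern.fullmatch(seg):
                seg = placeholder
                break
        out.append(seg)
    return "/" + "/".join(out)

print(normalize("/api/v2/user/7f577081-7003-451e-9abe-eb2e8a0f103d"))
# -> /api/v2/user/{guid}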

The results: hundreds of found endpoints per customer

So, how does ML Discovery compare to the session identifier-based Discovery which relies on headers or cookies to tag API traffic?

Our expectation is that it detects a very similar set of endpoints. However, in our data we knew there would be two gaps. First, we sometimes see that customers are not able to cleanly dissect only API traffic using session identifiers. When this happens, Discovery surfaces non-API traffic. Second, since we required session identifiers in the first version of API Discovery, endpoints that are not part of a session (e.g. login endpoints or unauthenticated endpoints) were conceptually not discoverable.

The following graph shows a histogram of the number of endpoints detected on customer domains for both discovery variants.

[Figure: histogram of endpoints detected per domain for both discovery variants]

From a bird’s-eye perspective, the results look very similar, which is a good indicator that ML Discovery performs as it is supposed to. There are some differences already visible in this plot, which is expected, since we also discover endpoints that are conceptually not discoverable with just a session identifier. In fact, a domain-by-domain comparison shows no change for roughly 46% of the domains. The next graph compares the difference (by percent of endpoints) between session-based and ML-based discovery:

[Figure: per-domain difference in endpoints found between session-based and ML-based Discovery]

For ~15% of the domains, we see an increase in endpoints between 1 and 50, and for ~9%, we see a similar reduction. For ~28% of the domains, we find more than 50 additional endpoints.

These results highlight that ML Discovery is able to surface additional endpoints that were previously flying under the radar, and thus expands the set of tools API Gateway offers to help bring order to your API landscape.

On-the-fly API protection through API schema learning

With API Discovery taken care of, how can a practitioner protect the newly discovered endpoints? We already looked at the API request metadata, so now let’s look at the API request body. The compilation of all expected formats for all API endpoints of an API is known as an API schema. API Gateway’s Schema Validation is a great way to protect against OWASP Top 10 API attacks, ensuring the body, path, and query string of a request contains the expected information for that API endpoint in an expected format. But what if you don’t know the expected format?

Even if the schema of a specific API is not known to a customer, the clients using this API will have been programmed to mostly send requests that conform to this unknown schema (or they would not be able to successfully query the endpoint). Schema Learning makes use of this fact and will look at successful requests to this API to reconstruct the input schema automatically for the customer. As an example, an API might expect the user-ID parameter in a request to have the form id12345-a. Even if this expectation is not explicitly stated, clients that want to have a successful interaction with the API will send user-IDs in this format.

Schema Learning first identifies all recent successful requests to an API-endpoint, and then parses the different input parameters for each request according to their position and type. After parsing all requests, Schema Learning looks at the different input values for each position and identifies which characteristics they have in common. After verifying that all observed requests share these commonalities, Schema Learning creates an input schema that restricts input to comply with these commonalities and that can directly be used for Schema Validation.

To allow for more accurate input schemas, Schema Learning identifies when a parameter can receive different types of input. Let’s say you wanted to write an OpenAPIv3 schema file and manually observed in a small sample of requests that a query parameter is a unix timestamp. You write an API schema that forces that query parameter to be an integer greater than the unix timestamp for the start of last year. If your API also allowed that parameter in ISO 8601 format, your new rule would create false positives whenever the differently formatted (yet valid) parameter hit the API. Schema Learning automatically does all this heavy lifting for you and catches what manual inspection can’t.

To prevent false positives, Schema Learning performs a statistical test on the distribution of these values and only writes the schema when the distribution is bounded with high confidence.
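
A heavily simplified sketch of that inference: collect observed values for one parameter, identify the types present, and only then derive bounds (the statistical confidence test is omitted here):

from datetime import datetime

def infer_schema(values: list[str]) -> dict:
    """Infer a simple constraint set for one query parameter from observed values."""
    types = set()
    for v in values:
        if v.lstrip("-").isdigit():
            types.add("integer")
            continue
        try:
            datetime.fromisoformat(v)
            types.add("iso8601")
        except ValueError:
            types.add("string")
    schema: dict = {"types": sorted(types), "maxLength": max(len(v) for v in values)}
    ints = [int(v) for v in values if v.lstrip("-").isdigit()]
    if ints:
        schema.update(minimum=min(ints), maximum=max(ints))
    return schema

print(infer_schema(["1672531200", "2023-03-17T09:30:00"]))
# Allows both unix-timestamp and ISO 8601 forms, avoiding the false positive above.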

So how well does it work? Below are some statistics about the parameter types and values we see:

[Figure: distribution of learned parameter types]

Parameter learning classifies slightly more than half of all parameters as strings, followed by integers which make up almost a third. The remaining 17% are made up of arrays, booleans, and number (float) parameters, while object parameters are seen more rarely in the path and query.

[Figure: number of parameters per endpoint path]

The number of parameters in the path is usually very low, with 94% of all endpoints seeing at most one parameter in their path.

[Figure: number of parameters per endpoint query string]

For the query, we do see a lot more parameters, sometimes reaching 50 different parameters for one endpoint!

Parameter learning is able to estimate numeric constraints with 99.9% confidence for the majority of parameters observed. These constraints can either be a maximum/minimum on the value, length, or size of the parameter, or a limited set of unique values that a parameter has to take.

Protect your APIs in minutes

Starting today, all API Gateway customers can now discover and protect APIs in just a few clicks, even if you’re starting with no previous information. In the Cloudflare dash, click into API Gateway and on to the Discovery tab to observe your discovered endpoints. These endpoints will be immediately available with no action required from you. Then, add relevant endpoints from Discovery into Endpoint Management. Schema Learning runs automatically for all endpoints added to Endpoint Management. After 24 hours, export your learned schema and upload it into Schema Validation.

Pro, Biz, and Enterprise customers that haven’t purchased API Gateway can get started by enabling the API Gateway trial inside the Cloudflare Dashboard or contacting their account manager.

What’s next

We plan to enhance Schema Learning by supporting more learned parameters in more formats, like POST body parameters with both JSON and URL-encoded formats as well as header and cookie schemas. In the future, Schema Learning will also notify customers when it detects changes in the identified API schema and present a refreshed schema.

We’d like to hear your feedback on these new features. Please direct your feedback to your account team so that we can prioritize the right areas of improvement. We look forward to hearing from you!

Detecting API abuse automatically using sequence analysis

Post Syndicated from John Cosgrove original https://blog.cloudflare.com/api-sequence-analytics/

Detecting API abuse automatically using sequence analysis

Today, we’re announcing Cloudflare Sequence Analytics for APIs. Using Sequence Analytics, customers subscribed to API Gateway can view the most important sequences of API requests to their endpoints. This new feature helps customers apply protection to the most important endpoints first.

What is a sequence? It is simply a time-ordered list of HTTP API requests made by a specific visitor as they browse a website, use a mobile app, or interact with a B2B partner via API. For example, a portion of a sequence made during a bank funds transfer could look like:

Order Method Path Description
1 GET /api/v1/users/{user_id}/accounts user_id is the active user
2 GET /api/v1/accounts/{account_id}/balance account_id is one of the user’s accounts
3 GET /api/v1/accounts/{account_id}/balance account_id is a different account belonging to the user
4 POST /api/v1/transferFunds Containing a request body detailing an account to transfer funds from, an account to transfer funds to, and an amount of money to transfer

Why is it important to pay attention to sequences for API security? If the above API received requests for POST /api/v1/transferFunds without any of the prior requests, it would seem suspicious. Think about it: how would the API client know what the relevant account IDs are without listing them for the user? How would the API client know how much money is available to transfer? While this example may be obvious, the sheer number of API requests to any given production API can make it hard for human analysts to spot suspicious usage.

In security, one approach to defending against an untold number of threats that are impossible to screen by a team of humans is to create a positive security model. Instead of trying to block everything that could potentially be a threat, you allow all known good or benign traffic and block everything else by default.

Customers could already create positive security models with API Gateway in two main areas: volumetric abuse protection and schema validation. Sequences will form the third pillar of a positive security model for API traffic. API Gateway will be able to enforce the precedence of endpoints in any given API sequence. By establishing precedence within an API sequence, API Gateway will log or block any traffic that doesn’t match expectations, reducing abusive traffic.

Detecting abuse by sequence

When attackers attempt to exfiltrate data in an abusive way, they rarely follow the patterns of expected API traffic. Attacks often use special software to “fuzz” the API, sending many requests with varying parameters in the hope of finding unexpected responses that indicate opportunities to exfiltrate data. Attackers can also manually send requests to APIs that attempt to trick the API into performing unauthorized actions, such as granting an attacker elevated privileges or access to data through a Broken Object Level Authorization attack. Protecting APIs with rate limits is a common best practice; however, in both of the above examples attackers may deliberately execute request sequences slowly in an attempt to thwart volumetric abuse detection.

Think of the sequence of requests above again, but this time imagine an attacker copying the legitimate funds transfer request and modifying the request payload in an attempt to trick the system:

Order Method Path Description
1 GET /api/v1/users/{user_id}/accounts user_id is the active user
2 GET /api/v1/accounts/{account_id}/balance account_id is one of the user’s accounts
3 GET /api/v1/accounts/{account_id}/balance account_id is a different account belonging to the user
4 POST /api/v1/transferFunds Containing a request body detailing an account to transfer funds from, an account to transfer funds to, and an amount of money to transfer
… attacker copies the request to a debugging tool like Postman …
5 POST /api/v1/transferFunds Attacker has modified the POST body to try and trick the API
6 POST /api/v1/transferFunds A further modified POST body to try and trick the API
7 POST /api/v1/transferFunds Another, further modified POST body to try and trick the API

If the customer knew beforehand that the funds transfer endpoint was critical to protect and only occurred once during a sequence, they could write a rule to ensure that it was never called twice in a row and that a GET /balance always preceded a POST /transferFunds. But without prior knowledge of which endpoint sequences are critical to protect, how would the customer know which rules to define? A low rate limit is too risky, since an API user might legitimately have a few funds transfer requests to perform in a short amount of time. Today there are few tools to prevent this type of abuse, and most customers are left making reactive efforts to clean up abuse with their application teams and fraud departments after it has happened.

Ultimately, we believe that providing our customers with the ability to define positive security models on API request sequences requires a three-pronged approach:

  1. Sequence Analytics: Determining which sequences of API requests occurred and when, as well as summarizing the data into readily understandable form.
  2. Sequence Abuse Detection: Identifying which sequences of API requests are likely of benign or malicious origin.
  3. Sequence Mitigation: Identifying relevant rules on sequences of API requests for deciding which traffic to allow or block.

Challenges of sequence creation

Sequence Analytics presents some difficult technical challenges, because sessions may be long-lived and may consist of many requests. As a result, it is not sufficient to define sequences by session identifier alone. Instead, it was necessary for us to develop a solution capable of automatically identifying multiple sequences which occur within a given session. Additionally, since important sequences are not necessarily characterized by volume alone and the set of possible sequences is large, it was necessary to develop a solution capable of identifying important sequences, as opposed to simply surfacing frequent sequences.

To help illustrate these challenges for the example of api.cloudflare.com, we can group API requests by session and plot the number of distinct sequences versus sequence length:

[Figure: distinct sequences vs. sequence length for api.cloudflare.com]

The plot is based on a one-hour snapshot comprising approximately 88,000 sessions and 300 million API requests, with 302 distinct API endpoints. We process the data by applying a fixed-length sliding window to each session and then counting the total number of distinct fixed-length sequences (“n-grams”) observed as a result. The plot displays results for a window size (“n-gram length”) varying between 1 and 10 requests. We observe a large number of possible sequences, and this number grows with sequence length: as we increase the sliding window size, we see an increasingly large number of distinct sequences in the sample. The smooth trend can be explained by the fact that we apply a sliding window (sessions may themselves contain many sequences), combined with many sessions that are long relative to the sequence length.
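
In code, the sliding-window count behind that plot reduces to something like this sketch:

from collections import Counter

def distinct_ngrams(sessions: list[list[str]], n: int) -> int:
    """Count distinct fixed-length request sequences across all sessions."""
    counts: Counter = Counter()
    for requests in sessions:
        for i in range(len(requests) - n + 1):        # sliding window of size n
            counts[tuple(requests[i : i + n])] += 1
    return len(counts)

sessions = [
    ["GET /accounts", "GET /balance", "GET /balance", "POST /transferFunds"],
    ["GET /accounts", "GET /balance", "POST /transferFunds"],
]
for n in range(1, 5):
    print(n, distinct_ngrams(sessions, n))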

Given the large number of possible sequences, finding abusive sequences is a needle-in-a-haystack situation.

Introducing Sequence Analytics

Here is a screenshot from the API Gateway dashboard highlighting Sequence Analytics:

[Screenshot: Sequence Analytics in the API Gateway dashboard]

Let’s break down the new functionality seen in the screenshot.

API Gateway intelligently determines sequences of requests made by your API consumers using the methods described earlier in this article. API Gateway scores sequences by a metric we call Correlation Score. Sequence Analytics displays the top 20 sequences by highest correlation score, and we refer to these as your most important sequences. High-importance sequences contain API requests which are likely to occur together in order.

You should inspect each of your sequences to understand their correlation scores. High correlation score sequences may consist of rarely used endpoints (potentially anomalous user behavior) as well as commonly used endpoints (likely benign user behavior). Since the endpoints found in these sequences commonly occur together, they represent true usage patterns of your API. You should apply all possible API Gateway protections to these endpoints (rate limiting suggestions, Schema Validation, JWT Validation, and mTLS) and check their specific endpoint order with your development team.

We know customers want to explicitly set allowable behavior on their APIs beyond the active protections offered by API Gateway today. Coming soon, we’re releasing sequence precedence rules and enabling the ability to block requests based on those rules. The new sequence precedence rules will allow customers to specify the exact order of allowable API requests, bringing yet another way of establishing a positive security model to protect your API against unknown threats.

How to get started

All API Gateway customers now have access to Sequence Analytics. Navigate to a zone in the Cloudflare dashboard, then click the Security tab > API Gateway tab > Sequences tab. You’ll see the most important sequences that your API consumers request.

Pro, Biz, and Enterprise customers that haven’t purchased API Gateway can get started by enabling the API Gateway trial inside the Cloudflare Dashboard or contacting their account manager.

What’s next

Sequence-based detection is a powerful and unique capability that unlocks many new opportunities to identify and stop attacks. As we fine-tune the methods of identifying these sequences and shipping them to our global network, we will release custom sequence matching and real-time mitigation features at a future date. We will also ensure you have actionable intelligence to take back to your team about which API users attempted request sequences that don’t match your policy.

Using the power of Cloudflare’s global network to detect malicious domains using machine learning

Post Syndicated from Jesse Kipp original https://blog.cloudflare.com/threat-detection-machine-learning-models/

Using the power of Cloudflare’s global network to detect malicious domains using machine learning

Cloudflare secures outbound Internet traffic for thousands of organizations every day, protecting users, devices, and data from threats like ransomware and phishing. One way we do this is by intelligently classifying what Internet destinations are risky using the domain name system (DNS). DNS is essential to Internet navigation because it enables users to look up addresses using human-friendly names, like cloudflare.com. For websites, this means translating a domain name into the IP address of the server that can deliver the content for that site.

However, attackers can exploit the DNS system itself, and often use techniques to evade detection and control using domain names that look like random strings. In this blog, we will discuss two techniques threat actors use – DNS tunneling and domain generation algorithms – and explain how Cloudflare uses machine learning to detect them.

Domain Generation Algorithm (DGA)

Most websites don’t change their domain name very often. That is the point, after all: a stable, human-friendly name for connecting to a resource on the Internet. As a side effect, however, stable domain names become a point of control, allowing network administrators to use restrictions on domain names to enforce policies, for example blocking access to malicious websites. Cloudflare Gateway, our secure web gateway service for threat defense, makes this easy to do by allowing administrators to block risky and suspicious domains based on integrated threat intelligence.

But what if instead of using a stable domain name, an attacker targeting your users generated random domain names to communicate with, making it more difficult to know in advance what domains to block? This is the idea of Domain Generation Algorithm domains (MITRE ATT&CK technique T1568.002).

After initial installation, malware reaches out to a command-and-control server to receive further instructions; this is called “command and control” (MITRE ATT&CK tactic TA0011). The attacker may send instructions to gather and transmit information about the infected device, download additional stages of malware, steal credentials and private data and send them to the server, or operate as a bot within a network to perform denial-of-service attacks. Using a domain generation algorithm to frequently generate random domain names for command-and-control communication gives malware a way to bypass blocks on fixed domains or IP addresses. Each day the malware generates a random set of domain names. To rendezvous with the malware, the attacker registers one of these domain names and awaits communication from the infected device.

Speed in identifying these domains is important to disrupting an attack. Because the domains rotate each day, by the time the malicious disposition of a domain propagates through the cybersecurity community, the malware may have rotated to a new domain name. However, the random nature of these domain names (they are literally a random string of letters!) also gives us an opportunity to detect them using machine learning.

The machine learning model

To identify DGA domains, we trained a model that extends a pre-trained transformer-based neural network. Transformer-based neural networks are the state-of-the-art technique in natural language processing and underlie large language models and services like ChatGPT. They are trained by using the adjacent words and context around a word or character to “learn” what is likely to come next.

Domain names largely contain words and abbreviations that are meaningful in human language. Looking at the top domains on Cloudflare Radar, we see that they are largely composed of words and common abbreviations, “face” and “book” for example, or “cloud” and “flare”. This makes the knowledge of human language encoded in transformer models a powerful tool for detecting random domain names.

For DGA models, we curated ground-truth data consisting of domain names observed by Cloudflare’s 1.1.1.1 DNS resolver for the negative class and domain names from known domain generation algorithms for the positive class (all use of DNS resolver data is in accordance with our privacy commitments).

Our final training set contained over 250,000 domain names and was weighted to include more negative cases (not DGA domains) than positive ones. We trained three versions of the model with different architectures: an LSTM (Long Short-Term Memory) neural network, LightGBM (binary classification), and a transformer-based model. We selected the transformer-based model because it had the highest accuracy and F1 score (the F1 score is a measure of model fit that penalizes having very different precision and recall; on an imbalanced data set, the highest-accuracy model might be the one that predicts everything either true or false, which is not what we want!), with an accuracy of over 99% on the test data.

To compute the score for a new domain never seen before by the model, the domain name is tokenized (i.e. broken up into individual components, in this case characters), and the sequence of characters is passed to the model. The transformers Python package from Hugging Face makes it easy to use these types of models for a variety of applications: the library supports summarization, question answering, translation, text generation, classification, and more. In this case we use sequence classification, together with a model customized for this task. The output of the model is a score indicating the chance that the domain was generated by a domain generation algorithm. If the score is over our threshold, we label the domain as a DGA domain.
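
Scoring a domain with the transformers library looks roughly like the sketch below. The model name is a placeholder, since Cloudflare's fine-tuned model is internal, and character-level tokenization is an assumption:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "your-org/dga-classifier"   # placeholder: the production model is internal
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

THRESHOLD = 0.9  # illustrative

def dga_score(domain: str) -> float:
    """Return the model's probability that `domain` was machine-generated."""
    inputs = tokenizer(domain, return_tensors="pt")  # character-level tokens assumed
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

if dga_score("xjw9qkcv2mhl.info") > THRESHOLD:
    print("label as DGA")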

Deployment

The expansive view of domain names Cloudflare has from our 1.1.1.1 resolver means we can quickly observe DGA domains after they become active. We process all DNS query names that successfully resolve using this model, so a single successful resolution of the domain name anywhere in Cloudflare’s public resolver network can be detected.

From the queries observed on 1.1.1.1, we filter down first to new and newly seen domain names. We then apply our DGA classifier to the new and newly seen domain names, allowing us to detect activated command and control domains as soon as they are observed anywhere in the world by the 1.1.1.1 resolver.

DNS Tunneling detection

In issuing commands or extracting data from an installed piece of malware, attackers seek to avoid detection. One way to send data and bypass traditional detection methods is to encode data within another protocol. When the attacker controls the authoritative name server for a domain, information can be encoded as DNS queries and responses. Instead of making a DNS query for a simple domain name, such as www.cloudflare.com, and getting a response like 104.16.124.96, attackers can send and receive long DNS queries and responses that contain encoded data.

Here is an example query made by an application performing DNS tunneling (query shortened and partially redacted):

3rroeuvx6bkvfwq7dvruh7adpxzmm3zfyi244myk4gmswch4lcwmkvtqq2cryyi.qrsptavsqmschy2zeghydiff4ogvcacaabc3mpya2baacabqtqcaa2iaaaaocjb.br1ns.example.com

The response data to a query like the one above can vary in length based on the response record type the server uses and the recursive DNS resolvers in the path. Generally, it is at most 255 characters per response record and looks like a random string of characters.

TXT jdqjtv64k2w4iudbe6b7t2abgubis

This ability to take an arbitrary set of bytes and send it to the server as a DNS query and receive a response in the answer data creates a bi-directional communication channel that can be used to transmit any data. The malware running on the infected host encodes the data it wants to transmit as a DNS query name and the infected host sends the DNS query to its resolver.

Since this query is not a true hostname, but actually encodes some data the malware wishes to transmit, the query is very likely to be unique, and is passed on to the authoritative DNS server for that domain.

The authoritative DNS server decodes the query back into the original data and, if necessary, can transmit it elsewhere on the Internet. Responses go back the other direction: the response data is encoded as a query response (for example, a TXT record) and sent back to the malware running on the infected host.
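
To make the mechanics concrete, here is a sketch of how arbitrary bytes can be packed into a query name like the one shown earlier; base32 keeps the payload within DNS's allowed characters, and each label is capped at 63 bytes:

import base64

MAX_LABEL = 63  # DNS caps each dot-separated label at 63 bytes

def encode_as_query(data: bytes, domain: str) -> str:
    """Pack bytes into DNS labels under an attacker-controlled domain."""
    payload = base64.b32encode(data).decode().rstrip("=").lower()
    labels = [payload[i : i + MAX_LABEL] for i in range(0, len(payload), MAX_LABEL)]
    return ".".join(labels + [domain])  # the full name must also stay under 253 bytes

print(encode_as_query(b"user=alice;pass=hunter2", "br1ns.example.com"))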

One challenge with identifying this type of traffic, however, is that many benign applications also use the DNS system to encode or transmit data. An example of a query that was classified as not DNS tunneling:

00641f74-8518-4f03-adc2-792a34ea2612.bbbb.example.com

As humans, we can see that the leading portion of this DNS query is a UUID. Queries like this are often used by security and monitoring applications and network appliances to check in. The leading portion of the query might be the unique id of the device or installation that is performing the check-in.
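As an illustration, here is a hand-written stand-in for one such distinguishing signal — whether the leading label is a well-formed UUID — the kind of feature a classifier can key on:

import re

UUID_LABEL = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")

def leading_label_is_uuid(qname: str) -> bool:
    # Check whether the first DNS label matches the 8-4-4-4-12 UUID shape.
    return bool(UUID_LABEL.match(qname.split(".")[0]))

print(leading_label_is_uuid("00641f74-8518-4f03-adc2-792a34ea2612.bbbb.example.com"))  # True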

During the research and training phase our researchers identified a wide variety of applications that generate large numbers of random-looking DNS queries. Some examples include subdomains of content delivery networks, video streaming, advertising and tracking, security appliances, as well as DNS tunneling. Our researchers investigated and labeled many of these domains, and while doing so, identified features that can be used to distinguish between benign applications and true DNS tunneling.

The model

For this application, we trained a two-stage model. The first stage makes quick yes/no decisions about whether the domain might be a DNS tunneling domain. The second stage of the model makes finer-grained distinctions between legitimate domains that have large numbers of subdomains, such as security appliances or AV false-positive control, and malicious DNS tunneling.

The first stage is a gradient boosted decision tree that gives us an initial classification based on minimal information. A decision tree model is like playing 20 questions – each layer of the decision tree asks a yes or no question, which gets you closer to the final answer. Decision tree models are good at predicting binary yes/no results, readily incorporate binary or nominal attributes into a prediction, and are fast and lightweight to execute, making them a good fit for this application. Gradient boosting is a reliable training technique that is particularly good at combining several attributes with weak predictive power into a strong predictor, and it can be used to train multiple types of models, including decision trees and models that output numeric predictions.
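A minimal sketch of such a first-stage classifier in scikit-learn; the features (name length, entropy of the leading label, label count) and the tiny training set are illustrative assumptions, not Cloudflare's actual feature set:

import math
from collections import Counter
from sklearn.ensemble import GradientBoostingClassifier

def entropy(s: str) -> float:
    # Shannon entropy of the character distribution.
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def features(qname: str) -> list:
    leading = qname.split(".")[0]
    return [len(qname), entropy(leading), qname.count(".") + 1]

queries = [  # toy labels: 1 = possible tunneling, 0 = benign
    ("3rroeuvx6bkvfwq7dvruh7adpxzmm3zfyi244myk4gmswch4lcwmkvtqq2cryyi.br1ns.example.com", 1),
    ("www.cloudflare.com", 0),
    ("jdqjtv64k2w4iudbe6b7t2abgubis.t.example.net", 1),
    ("blog.example.org", 0),
]
X = [features(q) for q, _ in queries]
y = [label for _, label in queries]
stage_one = GradientBoostingClassifier().fit(X, y)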

If the first stage classifies the domain as “yes, potential DNS tunneling”, it is passed to the second stage, which incorporates data observed from Cloudflare’s 1.1.1.1 DNS resolver. This second model is a neural network that refines the categorization of the first in order to distinguish legitimate applications from true tunneling.

In this model, the neural network takes 28 features as input and classifies the domain into one of 17 applications, such as DNS tunneling, IT appliance beacons, or email delivery and spam-related traffic. Figure 2 shows a diagram, generated with the popular Python package Keras, of the layers of this neural network. We see the 28 input features at the top layer and, at the bottom layer, the 17 output values indicating the prediction for each type of application. This neural network is very small, having about 2,000 individual weights that can be set during the training process. In the next section we will see an example of a model that is based on a state-of-the-art pretrained model from a model family that has tens to hundreds of millions of predefined weights.

Fig. 2, The keras.utils.plot_model() function draws a diagram of the neural network layers.
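For scale, a comparably sized network can be written in a few lines of Keras; the hidden-layer width here is an assumption, chosen so the parameter count lands near the ~2,000 weights mentioned above:

from tensorflow import keras
from tensorflow.keras import layers

stage_two = keras.Sequential([
    layers.Dense(40, activation="relu", input_shape=(28,)),  # 28*40 + 40 = 1,160 weights
    layers.Dense(17, activation="softmax"),                  # 40*17 + 17 = 697 weights
])
stage_two.compile(optimizer="adam", loss="categorical_crossentropy")
stage_two.summary()  # ~1,857 trainable parameters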

Figure 3 shows a plot of the feature values of the applications we are trying to distinguish in polar coordinates. Each color is the feature values of all the domains the model classified as a single type of application over a sample period. The position around the circle (theta) is the feature, and the distance from the center (rho) is the value of that feature. We can see how many of the applications have similar feature values.

When we observe a new domain and compute its feature values, our model uses those values to predict which application the new domain resembles. As mentioned, the neural network has 28 inputs, each of which is the value of a single feature, and 17 outputs. The 17 output values represent the prediction that the domain belongs to each of those 17 types of applications, with malicious DNS tunneling being one of them. The job of the model is to convert the sometimes small differences between the feature values into a prediction. If the malicious DNS tunneling output of the neural network is higher than the other outputs, the domain is labeled as a security threat.

Fig. 3, Domains containing high-entropy DNS subdomains, visualized as feature plots. Each section around the circumference of the plot represents a different feature of the observed DNS queries. The distance from the center represents the value of that feature. Each color line is a distinct application, and machine learning helps us distinguish between these and classify them.

Deployment

For the DNS tunneling model, our system consumes the logs from our secure web gateway service. The first stage model is applied to all DNS queries. Domains that are flagged as possible DNS tunneling are then sent to the second stage where the prediction is refined using additional features.
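Putting the pieces together, the deployment flow is roughly the following sketch, reusing stage_one, features, and stage_two from the earlier sketches; resolver_features is a hypothetical stand-in for the 28 features derived from 1.1.1.1 data:

import numpy as np

def resolver_features(qname: str) -> np.ndarray:
    # Hypothetical: in production these 28 features come from resolver observations.
    return np.zeros((1, 28))

def classify(qname: str) -> str:
    # Stage one: cheap gradient boosted tree, applied to all DNS queries.
    if stage_one.predict([features(qname)])[0] == 0:
        return "clean"
    # Stage two: neural network refines flagged names into one of 17 applications.
    probs = stage_two.predict(resolver_features(qname))[0]
    return f"application class {int(np.argmax(probs))} of 17"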

Looking forward: combining machine learning with human expertise

In September 2022, Cloudflare announced the general availability of our threat operations and research team, Cloudforce One, which allows our in-house experts to share insights directly with customers. Layering this human element on top of the ML models that we have already developed helps Cloudflare deliver additional threat protection for our customers, as we plan to explain in the next article in this blog series.

Until then, click here to create a free account, with no time limit for up to 50 users, and point just your DNS traffic, or all traffic (layers 4 to 7), to Cloudflare to protect your team, devices, and data with machine learning-driven threat defense.

Analyze any URL safely using the Cloudflare Radar URL Scanner

Post Syndicated from Stanley Chiang original https://blog.cloudflare.com/radar-url-scanner-early-access/

One of the first steps in an information security investigation is to gather as much context as possible. But compiling that information can become a sprawling task.

Cloudflare is excited to announce early access to a new, free tool — the Radar URL Scanner. Provide us a URL, and our scanner will compile a report containing a myriad of technical details: a phishing scan, SSL certificate data, HTTP request and response data, page performance data, DNS records, whether cookies are set to secure and HttpOnly, what technologies and libraries the page uses, and more.

Let’s walk through a report on John Graham-Cumming’s blog as an example. Conveniently, all reports generated will be publicly accessible.

The first page is the Summary tab, and you’ll see we’ve broken all the available data into the following categories: Security, Cookies, Network, Technology, DOM, and Performance. It’s a lot of content, so we’ll jump through some highlights.

In the Summary tab itself, you’ll notice the submitted URL was https://blog.jgc.org. If we had received a URL short link, the scanner would have followed the redirects and generated a report for the final URL.

The Security tab presents information to help determine whether a page is safe to visit, with phishing and certificate sections. In our blog example, the report confirms the link we provided is not a phishing link; other URLs could easily turn out to be phishing scams trying to harvest personal information, and the scan would surface that. We’re excited to enable wider access to our security infrastructure with this free tool.

The Cookies tab can indicate how privacy friendly a website is to its users. To do this, we show all the cookies set and their attribute values. In this report, the blog loaded 2 cookies. Note the Secure flag: you’ll want that set to true as often as possible, because it means the cookie may only be transmitted over HTTPS, preventing it from being observed by unauthorized parties. Additionally, cookies set to HttpOnly are inaccessible to the JavaScript API, mitigating potential XSS attacks from third-party scripts.
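For reference, here is a minimal sketch of setting a cookie with both attributes using only Python's standard library; the cookie name and value are illustrative:

from http import cookies

jar = cookies.SimpleCookie()
jar["session_id"] = "abc123"
jar["session_id"]["secure"] = True    # only transmitted over HTTPS
jar["session_id"]["httponly"] = True  # hidden from the JavaScript API
print(jar.output())  # Set-Cookie: session_id=abc123; HttpOnly; Secure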

The Technology tab enumerates the technologies, frameworks, and libraries used to power the page being scanned. Understanding the technology stack of a page can be very useful when there are outages in a particular service, when exploits in popular libraries are discovered, or simply to understand what tools are most popular in the industry. John’s blog appears to use 7 different technologies, including Google AdSense, Blogger, and Cloudflare.

The Network tab shows all the HTTP transactions that occur on the page, as well as the hostname’s associated DNS records. HTTP transactions are the requests and responses the page makes to load all its content, telling engineers where the website loads its content from. Our report of John’s blog shows a total of 82 requests.

The tab also contains DNS records, which are a great way to understand more about the fundamentals of the page. And of course, we at Cloudflare are big advocates for enabling DNSSEC.

The DOM (Document Object Model) tab conveniently collates common information you may be looking for within the page. We group together lists of all hyperlinks and global JavaScript variables, and we also provide the raw HTML of the page for further analysis. Our report shows the blog’s landing page has 104 hyperlinks going off to other websites.
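As a rough illustration of what the hyperlink list captures, anchors can be pulled out of the raw HTML the report provides with only Python's standard library:

from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Collect the href attribute of every anchor element.
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href" and v]

collector = LinkCollector()
collector.feed('<a href="https://blog.jgc.org/">Home</a><a href="https://example.com/">Out</a>')
print(len(collector.links), collector.links)  # 2 ['https://blog.jgc.org/', 'https://example.com/']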

The Performance tab presents a breakdown of the time it takes for the website to load. It’s not enough for a page to be secure for users. It must also be usable, and load speeds are a big factor in the overall experience. That’s why we’ve also included Performance Navigation Timing metrics alongside our more security and privacy oriented tabs.

Under the hood, one of the great things about this tool is that the underlying scanning technology uses Cloudflare’s homegrown Workers Browser Rendering API to run all our headless scans. You can follow that link to join the waitlist and try it out for yourself.

In the future, we envision adding features to our scanner to complement the ones from this launch: API endpoints so you don’t need to rely on a GUI, private scans for more sensitive or recurring reports, and also security recommendations with integrations with the Cloudflare Security Center. And since this is a Radar product, not only can users expect the data generated to further enhance our security threat modeling, they can also look forward to us providing back insights and visualizations from the aggregate trends we observe.

The Radar URL Scanner tool’s journey to helping make the Internet more transparent and secure has only just begun, but we’re excited for you all to try it out here. If you have any questions or would like to discuss enterprise level features on your wishlist, feel free to reach out via Twitter at @CloudflareRadar or email us at [email protected].

Announcing WAF Attack Score Lite and Security Analytics for business customers

Post Syndicated from Radwa Radwan original https://blog.cloudflare.com/waf-attack-score-for-business-plan/

In December 2022 we announced the general availability of the WAF Attack Score. The initial release was for our Enterprise customers, but we always believed this product should be available to more users. Today we’re announcing “WAF Attack Score Lite” and “Security Analytics” for our Business plan customers.

Looking back on “What is WAF Attack Score and Security Analytics?”

Vulnerabilities on the Internet appear almost on a daily basis. The CVE (common vulnerabilities and exposures) program has a list with over 197,000 records to track disclosed vulnerabilities.

That makes it really hard for web application owners to harden and update their systems regularly, especially for critical libraries, where exploitation can cause serious damage such as information leaks. That’s why web application owners tend to use WAFs (Web Application Firewalls) to protect their online presence.

Most WAFs use signature-based detections: rules created based on specific attacks that we know about. The signature-based method is very fast, has a low rate of false positives (requests that are categorized as attacks when they are actually legitimate), and is very efficient with most of the attack categories we know. However, signatures have a blind spot when a new attack appears, often called a zero-day attack. As soon as a new vulnerability is found, our security analysts take fast action to stop it in a matter of hours and update the WAF Managed Rules, yet we want to protect our customers during this time gap as well.

This is the main reason Cloudflare created a complementary feature to the WAF managed rules: a smart machine learning layer to help detect unknown attacks, and protect customers even during the time gap until rules are updated.

Early detection + Powerful mitigation = Safer Internet

The performance of any machine learning model depends heavily on the data it was trained on. Our machine learning uses a supervised model trained on hundreds of millions of requests labeled by WAF Managed Rules. The data varies between clean and malicious, and some of it was blended with fuzzing techniques to catch similar patterns, as covered in our blog “Improving the accuracy of our machine learning WAF”. At the moment, there are three types of attacks our machine learning model is optimized to find: SQL Injection (SQLi), Cross Site Scripting (XSS), and a wide range of Remote Code Execution (RCE) attacks such as shell injection, PHP injection, Apache Struts type compromises, Apache log4j, and similar attacks that result in RCE.

We started with these categories based on Cloudflare’s Application Security Report: they represent more than 24% of the layer 7 attacks our WAF mitigated over the last year, and are therefore among the most exploited.

In the full Enterprise WAF Attack Score version we offer more granularity on the attack categories, and we provide scores for each class that can be configured freely per domain.

WAF Attack Score Lite Features for Business Plan

WAF Attack Score Lite and the Security Analytics view offer three main functions:

1- Attack detection: This happens through inspecting every incoming HTTP request, bucketing or classifying the requests into 4 types: Attacks, Likely Attacks, Likely Clean and Clean. At the moment there are three types of attacks our machine learning model is optimized to find: SQL Injection (SQLi), Cross Site Scripting (XSS), and a wide range of Remote Code Execution (RCE) attacks.

2- Attack mitigation: The ability to create WAF Custom Rules or WAF Rate Limiting Rules to mitigate requests. We’re exposing a new field, cf.waf.score.class, that has preset values: attack, likely_attack, likely_clean and clean. Customers can use this field in rule expressions and apply the needed actions; see the example expression after this list.

3- Visibility into your entire traffic: Security Analytics is a new dashboard currently in beta. It provides a comprehensive view across all your HTTP traffic, displaying all requests whether they match rules or not. Security Analytics is a great tool for investigating false negatives and hardening your security configurations. Security Events is still available under Security > Events, and Security Analytics is available in a separate tab (Security > Analytics).
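For example, a custom rule that challenges every request the model classifies as an attack comes down to a one-line expression on the new field, paired with an action such as Managed Challenge (a sketch; the exact syntax in the rule builder may differ slightly):

cf.waf.score.class eq "attack"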

Deployment and configuration

In order to enable WAF Attack Score Lite and Security Analytics, you don’t need to take any action. The HTTP machine learning inspection rollout will start today, and Security Analytics will appear automatically to all Business plan customers by the time the rollout is completed in the upcoming weeks.

It’s worth mentioning that having the detection on and viewing the attack analysis in Security Analytics does not mean you’re blocking traffic. It only offers insights and provides the freedom to create rules and mitigate the desired requests. For the scores to take effect, you need to create a rule that blocks or challenges the bad traffic.

A common use case

Consider an attacker executing an attack using automated web requests to manipulate or disrupt web applications. One of the best ways to identify this type of traffic and mitigate these requests is by combining bot score with WAF Attack Score.

1- Go to the Security Analytics dashboard under Security > Analytics. On the right-hand side, the Attack Analysis panel indicates the attack class. In this case, I can select “Attack” to apply a single filter, or use the quick filters under Insights to apply multiple filters at once. In addition to the attack class, I can also select the Bot “Automated” filter.

2- After filtering, Security Analytics provides the capability of scrolling down to see the logs and validate the results:

3- Once the selected requests are confirmed, I can select the Create WAF Custom Rules option which will direct me to the Security Events with the pre-assigned filters to deploy a rule. In this case, I want to challenge the requests matched by the rule:

And voila! You have a new rule that challenges traffic matching any automated attack variation.
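For reference, the rule created above amounts to an expression along these lines (a sketch: cf.waf.score.class comes from WAF Attack Score Lite as described earlier, while the bot score field assumes Bot Management is available on the zone, where a score of 1 denotes definitively automated traffic):

cf.waf.score.class eq "attack" and cf.bot_management.score eq 1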

Next steps

We have been working hard to provide maximum security and visibility for all our customers. This is only one step on this road! We will keep adding more product-focused analytics, and providing additional security against unknown attacks. Try it out, create a rule, and don’t hesitate to contact our sales team if you need the full version of WAF Attack Score.