What do you do in your current role and how long have you been at AWS?
I’m currently a vice president and distinguished engineer in the infrastructure organization at AWS. My role includes working on the AWS global network backbone, as well as focusing on denial-of-service detection and mitigation systems. I’ve been with AWS for over 12 years.
What initially got you interested in networking and how did that lead you to the anti-abuse and distributed denial of service (DDoS) space?
My interest in large global network infrastructure started when I was a teenager in the 1990s. I remember reading a magazine at the time that cataloged all the large IP transit providers on the internet, complete with network topology maps and sizes of links. It inspired me to want to work on the engineering teams that supported that. Over time, I was fortunate enough to move from working at a small ISP to a telecom carrier where I was able to work on their POP and backbone designs. It was there that I learned about the internet peering ecosystem and started collaborating with network operators from around the globe.
For the last 20-plus years, DDoS has been something I've had to deal with to some extent, mainly through the networking lens of preventing network congestion through traffic engineering and capacity planning, as well as supporting the integration of DDoS traffic scrubbers into network infrastructure.
About three years ago, I became especially intrigued by the network abuse and DDoS space after using AWS network telemetry to observe the size of malicious events in the wild. I started to be interested in how mitigation could be improved, and how to break down the problem into smaller pieces to better understand the true sources of attack traffic. Instead of merely being an observer, I wanted to be a part of the solution and make it better. This required me to immerse myself into the domain, both from the perspective of learning the technical details and by getting hands-on and understanding the DDoS industry and environment as a whole. Part of how I did this was by engaging with my peers in the industry at other companies and absorbing years of knowledge from them.
How do you explain your job to your non-technical friends and family?
I try to explain both areas that I work on. First, that I help build the global network infrastructure that connects AWS and its customers to the rest of the world. I explain that for a home user to reach popular destinations hosted on AWS, data has to traverse a series of networks and physical cables that are interconnected so that the user’s home computer or mobile phone can send packets to another part of the world in less than a second. All that requires coordination with external networks, which have their own practices and policies on how they handle traffic. AWS has to navigate that complexity and build and operate our infrastructure with customer availability and security in mind. Second, when it comes to DDoS and network abuse, I explain that there are bad actors on the internet that use DDoS to cause impairment for a variety of reasons. It could be someone wanting to disrupt online gaming, video conferencing, or regular business operations for any given website or company. I work to prevent those events from causing any sort of impairment and trace back the source to disrupt that infrastructure launching them to prevent it from being effective in the future.
Recently, you were awarded the J.D. Falk Award by the Messaging Malware Mobile Anti-Abuse Working Group (M3AAWG) for IP Spoofing Mitigation. Congratulations! Please tell us more about the efforts that led to this.
Basically, there are three main types of DDoS attacks we observe: botnet-based unicast floods, spoofed amplification/reflection attacks, and proxy-driven HTTP request floods. The amplification/reflection aspect is interesting because it requires DDoS infrastructure providers to acquire compute resources behind providers that permit IP spoofing. IP spoofing itself has a long history on the internet, with a request for comment/best current practice (RFC 2827/BCP 38) first published back in 2000 recommending that providers prevent it from occurring. However, adoption of this practice is still spotty on the internet.
At NANOG76, there was a proposal that these sorts of spoofed attacks could be traced by network operators in the path of the pre-amplification/reflection traffic (before it bounced off the reflectors). I personally started getting involved in this effort about two years ago. AWS operates a large global network and has network telemetry data that helps me identify pre-amplification/reflection traffic entering our network, which allows me to triangulate the source network generating it. I then started directly engaging various networks that we connect to, providing them timestamps, spoofed source IP addresses, and the specific protocols and ports involved with the traffic, in the hope that they could use their own network telemetry to identify the customer generating it. From there, they'd engage with their customer to get the source shut down or, failing that, implement packet filters on their customer to prevent spoofing.
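To make this concrete, here is a hypothetical sketch of the kind of telemetry filtering involved: flag flows whose UDP destination port belongs to a commonly abused reflector service and whose claimed source address falls inside the victim's prefixes, the signature of pre-amplification traffic on its way to the reflectors. The field names and port list are illustrative, not AWS's actual tooling.

```python
import ipaddress

# UDP services commonly abused for amplification/reflection
# (DNS, NTP, CLDAP, SSDP, memcached). Illustrative, not exhaustive.
REFLECTOR_PORTS = {53, 123, 389, 1900, 11211}

def candidate_spoofed_flows(flows, victim_prefixes):
    """Return flow records that look like spoofed pre-amplification traffic:
    UDP, destined to a reflector port, with a source address inside the
    victim's address space (i.e., a forged source)."""
    nets = [ipaddress.ip_network(p) for p in victim_prefixes]
    matches = []
    for flow in flows:
        src = ipaddress.ip_address(flow["src_ip"])
        if (flow["proto"] == 17                      # 17 = UDP
                and flow["dst_port"] in REFLECTOR_PORTS
                and any(src in net for net in nets)):
            matches.append(flow)
    return matches
```

A network in the transit path would run something like this over its flow telemetry, then use the matching flows' ingress interfaces to work out which customer is emitting the spoofed packets.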
Initially, only a few networks were capable of doing this well. This meant I had to spend a fair amount of energy in educating various networks around the globe on what spoofed traffic is, how to use their network telemetry to find it, and how to handle it. This was the most complicated and challenging part because this wasn’t on the radar of many networks out there. Up to this time, frontline network operations and abuse teams at various networks, including some very large ones, were not proficient in dealing with this.
The education I did included a variety of engagements: sharing drawings of the day in the life of a spoofed packet in a reflection attack, providing instructions on how to use their network telemetry tools, connecting them with their network telemetry vendors for help, and even going so far as using other more exotic methods to identify which of their customers were spoofing and pointing out who they needed to analyze more deeply. In the end, it's about getting other networks to be responsive and take action and, in the best cases, find spoofing on their own and act upon it.
Incredible! How did it feel accepting the award at the M3AAWG General Meeting in Brooklyn?
It was an honor to accept it and see some acknowledgement for the behind-the-scenes work that goes on to make the internet a better place.
What’s next for you in your work to suppress IP spoofing?
Continuing the tracing exercises and engaging with external providers, particularly the network providers that experience challenges in dealing with spoofing, to improve their operations. Also, finding more effective ways to educate the hosting providers where IP spoofing is a common issue and getting them to implement proper default controls that don't allow this behavior. Another aspect is being a force multiplier, enabling others to spread the word and be part of the education process.
Looking ahead, what are some of your other goals for improving users’ online experiences and security?
Continually focusing on improving our DDoS defense strategies and working with customers to build tailored solutions that address some of their unique requirements. Across AWS, we have many services that are architected in different ways, so a key part of this is raising the bar from a DDoS defense perspective across each of them. AWS customers also have their own unique architectures and protocols that can require developing new solutions to address their specific needs. On the disruption front, we will continue to focus on disrupting DDoS-as-a-service provider infrastructure, moving beyond disrupting spoofing to disrupting botnets and the infrastructure associated with HTTP request floods.
With HTTP request floods being much more popular than byte-heavy and packet-heavy threat methods, it’s important to highlight the risks open proxies on the internet pose. Some of this emphasizes why there need to be some defaults in software packages to prevent misuse, in addition to network operators proactively identifying open proxies and taking appropriate action. Hosting providers should also recognize when their customer resources are communicating with large fleets of proxies and consider taking appropriate mitigations.
What are the most critical skills people need to be successful in network security?
I’m a huge proponent of being hands-on and diving into problems to truly understand how things are operating. Putting yourself outside your comfort zone, diving deep into the data to understand something, and translating that into outcomes and actions is something I highly encourage. After you immerse yourself in a particular domain, you can be much more effective at developing strategies and rapid prototyping to move forward. You can make incremental progress with small actions. You don’t have to wait for the perfect and complete solution to make some progress. I also encourage collaboration with others because there is incredible value in seeking out diverse opinions. There are resources out there to engage with, provided you’re willing to put in the work to learn and determine how you want to give back. The best people I’ve worked with don’t do it for public attention, blog posts, or social media status. They work in the background and don’t expect anything in return. They do it because of their desire to protect their customers and, where possible, the internet at large.
Lastly, if you had to pick an industry outside of security for your career, what would you be doing?
I’m over the maximum age allowed to start as an air traffic controller, so I suppose an air transport pilot or a locomotive engineer would be pretty neat.
Starting on August 25, 2023, we began to notice some unusually large HTTP attacks hitting many of our customers. These attacks were detected and mitigated by our automated DDoS system. It was not long, however, before they reached record-breaking sizes, eventually peaking just above 201 million requests per second. This was nearly three times larger than our previous largest attack on record.
What is concerning is that the attacker was able to generate such an attack with a botnet of merely 20,000 machines. There are botnets today that are made up of hundreds of thousands or millions of machines. Given that the entire web typically sees only between 1 and 3 billion requests per second, it's not inconceivable that this method could focus an entire web's worth of requests on a small number of targets.
Detecting and Mitigating
This was a novel attack vector at an unprecedented scale, but Cloudflare's existing protections were largely able to absorb the brunt of the attacks. While we initially saw some impact to customer traffic, affecting roughly 1% of requests during the initial wave of attacks, today we've refined our mitigation methods to stop the attack for any Cloudflare customer without it impacting our systems.
We noticed these attacks at the same time that two other major industry players, Google and AWS, were seeing the same thing. We worked to harden Cloudflare's systems to ensure that, today, all our customers are protected from this new DDoS attack method without any customer impact. We've also participated with Google and AWS in a coordinated disclosure of the attack to impacted vendors and critical infrastructure providers.
This attack was made possible by abusing some features of the HTTP/2 protocol and server implementation details (see CVE-2023-44487 for details). Because the attack abuses an underlying weakness in the HTTP/2 protocol, we believe any vendor that has implemented HTTP/2 will be subject to the attack. This includes every modern web server. We, along with Google and AWS, have disclosed the attack method to web server vendors, who we expect will implement patches. In the meantime, the best defense is using a DDoS mitigation service like Cloudflare's in front of any web-facing web or API server.
This post dives into the details of the HTTP/2 protocol, the feature that attackers exploited to generate these massive attacks, and the mitigation strategies we took to ensure all our customers are protected. Our hope is that by publishing these details other impacted web servers and services will have the information they need to implement mitigation strategies. And, moreover, the HTTP/2 protocol standards team, as well as teams working on future web standards, can better design them to prevent such attacks.
Rapid Reset attack details
HTTP is the application protocol that powers the Web. HTTP Semantics are common to all versions of HTTP — the overall architecture, terminology, and protocol aspects such as request and response messages, methods, status codes, header and trailer fields, message content, and much more. Each individual HTTP version defines how semantics are transformed into a "wire format" for exchange over the Internet. For example, a client has to serialize a request message into binary data and send it, then the server parses that back into a message it can process.
HTTP/1.1 uses a textual form of serialization. Request and response messages are exchanged as a stream of ASCII characters, sent over a reliable transport layer like TCP, using the following format (where CRLF means carriage-return and linefeed):
```
HTTP/1.1 200 OK CRLF
Server: cloudflareCRLF
Content-Length: 100CRLF
Content-Type: text/html; charset=UTF-8CRLF
CRLF
<100 bytes of data>
```
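As a minimal sketch, the textual serialization above can be produced in a few lines; the point is simply that the header section is a run of CRLF-separated ASCII lines, ended by an empty line, followed by the body.

```python
CRLF = "\r\n"

def serialize_response(status_line, headers, body: bytes) -> bytes:
    # Header section: status line, then one "Name: value" line per header,
    # then an empty line separating the headers from the body.
    head = status_line + CRLF
    head += CRLF.join(f"{name}: {value}" for name, value in headers) + CRLF
    head += CRLF
    return head.encode("ascii") + body

body = b"x" * 100
wire = serialize_response(
    "HTTP/1.1 200 OK",
    [("Server", "cloudflare"),
     ("Content-Length", str(len(body))),
     ("Content-Type", "text/html; charset=UTF-8")],
    body,
)
```

Parsing reverses the process: split on the first blank line, then read `Content-Length` to know how many body bytes to consume.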
This format frames messages on the wire, meaning that it is possible to use a single TCP connection to exchange multiple requests and responses. However, the format requires that each message is sent whole. Furthermore, in order to correctly correlate requests with responses, strict ordering is required; meaning that messages are exchanged serially and can not be multiplexed. Two GET requests, for https://blog.cloudflare.com/ and https://blog.cloudflare.com/page/2/, would be:
```
GET / HTTP/1.1 CRLF
Host: blog.cloudflare.comCRLF
CRLF
GET /page/2/ HTTP/1.1 CRLF
Host: blog.cloudflare.comCRLF
CRLF
```
With the responses:
```
HTTP/1.1 200 OK CRLF
Server: cloudflareCRLF
Content-Length: 100CRLF
Content-Type: text/html; charset=UTF-8CRLF
CRLF
<100 bytes of data>
HTTP/1.1 200 OK CRLF
Server: cloudflareCRLF
Content-Length: 100CRLF
Content-Type: text/html; charset=UTF-8CRLF
CRLF
<100 bytes of data>
```
Web pages require more complicated HTTP interactions than these examples. When visiting the Cloudflare blog, your browser will load multiple scripts, styles and media assets. If you visit the front page using HTTP/1.1 and decide quickly to navigate to page 2, your browser can pick from two options. Either wait for all of the queued up responses for the page that you no longer want before page 2 can even start, or cancel in-flight requests by closing the TCP connection and opening a new connection. Neither of these is very practical. Browsers tend to work around these limitations by managing a pool of TCP connections (up to 6 per host) and implementing complex request dispatch logic over the pool.
HTTP/2 addresses many of the issues with HTTP/1.1. Each HTTP message is serialized into a set of HTTP/2 frames that have type, length, flags, stream identifier (ID) and payload. The stream ID makes it clear which bytes on the wire apply to which message, allowing safe multiplexing and concurrency. Streams are bidirectional. Clients send frames and servers reply with frames using the same ID.
In HTTP/2 our GET request for https://blog.cloudflare.com would be exchanged across stream ID 1, with the client sending one HEADERS frame, and the server responding with one HEADERS frame, followed by one or more DATA frames. Client requests always use odd-numbered stream IDs, so subsequent requests would use stream ID 3, 5, and so on. Responses can be served in any order, and frames from different streams can be interleaved.
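Every HTTP/2 frame starts with the same 9-byte header. The sketch below packs that header; the payload length of 26 is just an example value, and real frames would be followed by their HPACK-encoded or raw payload bytes.

```python
import struct

# HTTP/2 frame header: 24-bit payload length, 8-bit type, 8-bit flags,
# then 1 reserved bit plus a 31-bit stream identifier (RFC 9113, section 4.1).
FRAME_DATA, FRAME_HEADERS, FRAME_RST_STREAM = 0x0, 0x1, 0x3
FLAG_END_STREAM = 0x1

def frame_header(length: int, ftype: int, flags: int, stream_id: int) -> bytes:
    assert length < 2**24 and 0 <= stream_id < 2**31
    # Pack the length as the low 3 bytes of a big-endian 32-bit int.
    return struct.pack(">I", length)[1:] + struct.pack(">BBI", ftype, flags, stream_id)

# A HEADERS frame header for stream 1 with END_STREAM set (a request with no body):
hdr = frame_header(26, FRAME_HEADERS, FLAG_END_STREAM, 1)
```

Because the stream ID travels in every frame header, frames from different streams can be freely interleaved on the wire and still reassembled into the right messages.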
Stream multiplexing and concurrency are powerful features of HTTP/2. They enable more efficient use of a single TCP connection. HTTP/2 optimizes resource fetching, especially when coupled with prioritization. On the flip side, making it easy for clients to launch large amounts of parallel work can increase the peak demand for server resources when compared to HTTP/1.1. This is an obvious vector for denial of service.
In order to provide some guardrails, HTTP/2 provides a notion of maximum active concurrent streams. The SETTINGS_MAX_CONCURRENT_STREAMS parameter allows a server to advertise its limit of concurrency. For example, if the server states a limit of 100, then only 100 requests can be active at any time. If a client attempts to open a stream above this limit, it must be rejected by the server using a RST_STREAM frame. Stream rejection does not affect the other in-flight streams on the connection.
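A hypothetical sketch of how a server might enforce its advertised limit: a stream opened beyond SETTINGS_MAX_CONCURRENT_STREAMS is refused with RST_STREAM, while the streams already in flight are left untouched.

```python
REFUSED_STREAM = 0x7  # HTTP/2 error code for a stream rejected before processing

class ConcurrencyGuard:
    """Illustrative server-side bookkeeping, not any real server's code."""

    def __init__(self, max_concurrent: int = 100):
        self.max_concurrent = max_concurrent
        self.active = set()  # stream IDs currently in open/half-closed states

    def on_new_stream(self, stream_id):
        if len(self.active) >= self.max_concurrent:
            # Reject just this stream; the connection and other streams survive.
            return ("RST_STREAM", stream_id, REFUSED_STREAM)
        self.active.add(stream_id)
        return ("ACCEPT", stream_id, None)

    def on_stream_closed(self, stream_id):
        self.active.discard(stream_id)
```

Note that the guard only counts *active* streams, which is exactly the property the Rapid Reset attack later exploits.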
The true story is a little more complicated. Streams have a lifecycle. Below is a diagram of the HTTP/2 stream state machine. Client and server manage their own views of the state of a stream. HEADERS, DATA and RST_STREAM frames trigger transitions when they are sent or received. Although the views of the stream state are independent, they are synchronized.
HEADERS and DATA frames include an END_STREAM flag which, when set to 1 (true), can trigger a state transition.
Let's work through this with an example of a GET request that has no message content. The client sends the request as a HEADERS frame with the END_STREAM flag set to 1. The client first transitions the stream from idle to open state, then immediately transitions into half-closed state. The client half-closed state means that it can no longer send HEADERS or DATA, only WINDOW_UPDATE, PRIORITY or RST_STREAM frames. It can receive any frame however.
Once the server receives and parses the HEADERS frame, it transitions the stream state from idle to open and then half-closed, so it matches the client. The server half-closed state means it can send any frame but receive only WINDOW_UPDATE, PRIORITY or RST_STREAM frames.
The response to the GET contains message content, so the server sends HEADERS with END_STREAM flag set to 0, then DATA with END_STREAM flag set to 1. The DATA frame triggers the transition of the stream from half-closed to closed on the server. When the client receives it, it also transitions to closed. Once a stream is closed, no frames can be sent or received.
Applying this lifecycle back into the context of concurrency, HTTP/2 states:
Streams that are in the "open" state or in either of the "half-closed" states count toward the maximum number of streams that an endpoint is permitted to open. Streams in any of these three states count toward the limit advertised in the SETTINGS_MAX_CONCURRENT_STREAMS setting.
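The lifecycle and counting rule above can be sketched from the client's point of view; this is a minimal model, and real implementations track many more states and error cases.

```python
IDLE, OPEN, HALF_CLOSED, CLOSED = "idle", "open", "half-closed (local)", "closed"

class ClientStream:
    """Minimal sketch of one endpoint's view of a stream's lifecycle."""

    def __init__(self):
        self.state = IDLE

    def send_headers(self, end_stream: bool):
        # idle -> open; with END_STREAM set, immediately on to half-closed (local).
        self.state = HALF_CLOSED if end_stream else OPEN

    def recv_end_stream(self):
        # The peer finishing its side closes a half-closed stream.
        if self.state == HALF_CLOSED:
            self.state = CLOSED

    def send_rst_stream(self):
        # RST_STREAM closes the stream immediately.
        self.state = CLOSED

    @property
    def counts_toward_limit(self) -> bool:
        # Only open and half-closed streams count against
        # SETTINGS_MAX_CONCURRENT_STREAMS; idle and closed streams do not.
        return self.state in (OPEN, HALF_CLOSED)
```

Walking the GET example through this model: `send_headers(end_stream=True)` puts the stream in half-closed (counting toward the limit), and receiving the server's final DATA frame closes it (no longer counting).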
In theory, the concurrency limit is useful. However, there are practical factors that hamper its effectiveness, which we will cover later in the post.
HTTP/2 request cancellation
Earlier, we talked about client cancellation of in-flight requests. HTTP/2 supports this in a much more efficient way than HTTP/1.1. Rather than needing to tear down the whole connection, a client can send a RST_STREAM frame for a single stream. This instructs the server to stop processing the request and to abort the response, which frees up server resources and avoids wasting bandwidth.
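Part of what makes this cancellation so cheap is that a complete RST_STREAM frame is just 13 bytes on the wire: the 9-byte frame header plus a 4-byte error code. A sketch of building one:

```python
import struct

RST_STREAM_TYPE = 0x3
CANCEL = 0x8  # error code meaning "the stream is no longer needed"

def rst_stream_frame(stream_id: int, error_code: int = CANCEL) -> bytes:
    # 9-byte frame header: 24-bit length (always 4 for RST_STREAM),
    # type, flags (none defined), stream ID; then the 4-byte error code.
    header = struct.pack(">I", 4)[1:]
    header += struct.pack(">BBI", RST_STREAM_TYPE, 0, stream_id)
    return header + struct.pack(">I", error_code)
```

Compare that with HTTP/1.1, where the only way to abandon a response is to tear down and re-establish an entire TCP (and usually TLS) connection.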
Let's consider our previous example of three requests. This time the client cancels the request on stream 1 after all of the HEADERS have been sent. The server parses this RST_STREAM frame before it is ready to serve the response, and instead responds only to streams 3 and 5:
Request cancellation is a useful feature. For example, when scrolling a webpage with multiple images, a web browser can cancel images that fall outside the viewport, meaning that images entering it can load faster. HTTP/2 makes this behaviour a lot more efficient compared to HTTP/1.1.
A request stream that is canceled rapidly transitions through the stream lifecycle. The client's HEADERS with the END_STREAM flag set to 1 transitions the state from idle to open to half-closed, then RST_STREAM immediately causes a transition from half-closed to closed.
Recall that only streams that are in the open or half-closed state contribute to the stream concurrency limit. When a client cancels a stream, it instantly gets the ability to open another stream in its place and can send another request immediately. This is the crux of what makes CVE-2023-44487 work.
Rapid resets leading to denial of service
HTTP/2 request cancellation can be abused to rapidly reset an unbounded number of streams. When an HTTP/2 server is able to process client-sent RST_STREAM frames and tear down state quickly enough, such rapid resets do not cause a problem. Where issues start to crop up is when there is any kind of delay or lag in tidying up. The client can churn through so many requests that a backlog of work accumulates, resulting in excess consumption of resources on the server.
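The pattern itself can be sketched in a few lines: every request is followed immediately by its cancellation, so the number of *active* streams never exceeds one, yet the server is handed work for every single request.

```python
def rapid_reset_frames(num_requests: int):
    """Simulate the frame sequence an attacking client emits, tracking how
    many streams are active at once versus how much work is dispatched."""
    frames = []
    active = peak_active = dispatched = 0
    stream_id = 1
    for _ in range(num_requests):
        frames.append(("HEADERS", stream_id))      # opens the stream; work dispatched
        active += 1
        dispatched += 1
        peak_active = max(peak_active, active)
        frames.append(("RST_STREAM", stream_id))   # closed before any response
        active -= 1
        stream_id += 2                             # client streams use odd IDs
    return frames, peak_active, dispatched

frames, peak, work = rapid_reset_frames(1000)
```

Even a server limit of 100 concurrent streams is never violated here (the peak stays at 1), which is exactly why the concurrency limit alone cannot mitigate the attack.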
A common HTTP deployment architecture is to run an HTTP/2 proxy or load-balancer in front of other components. When a client request arrives it is quickly dispatched and the actual work is done as an asynchronous activity somewhere else. This allows the proxy to handle client traffic very efficiently. However, this separation of concerns can make it hard for the proxy to tidy up the in-process jobs. Therefore, these deployments are more likely to encounter issues from rapid resets.
When Cloudflare's reverse proxies process incoming HTTP/2 client traffic, they copy the data from the connection's socket into a buffer and process that buffered data in order. As each request is read (HEADERS and DATA frames), it is dispatched to an upstream service. When RST_STREAM frames are read, the local state for the request is torn down and the upstream is notified that the request has been canceled. Rinse and repeat until the entire buffer is consumed. However, this logic can be abused: when a malicious client started sending an enormous chain of requests and resets at the start of a connection, our servers would eagerly read them all and create stress on the upstream servers to the point of being unable to process any new incoming requests.
Something that is important to highlight is that stream concurrency on its own cannot mitigate rapid reset. The client can churn requests to create high request rates no matter the server's chosen value of SETTINGS_MAX_CONCURRENT_STREAMS.
Rapid Reset dissected
Here's an example of rapid reset reproduced using a proof-of-concept client attempting to make a total of 1,000 requests. We used an off-the-shelf server without any mitigations, listening on port 443 in a test environment. The traffic is dissected using Wireshark and filtered to show only HTTP/2 traffic for clarity. Download the pcap to follow along.
It's a bit difficult to see, because there are a lot of frames. We can get a quick summary via Wireshark's Statistics > HTTP2 tool:
The first frame in this trace, in packet 14, is the server's SETTINGS frame, which advertises a maximum stream concurrency of 100. In packet 15, the client sends a few control frames and then starts making requests that are rapidly reset. The first HEADERS frame is 26 bytes long, all subsequent HEADERS are only 9 bytes. This size difference is due to a compression technology called HPACK. In total, packet 15 contains 525 requests, going up to stream 1051.
Interestingly, the RST_STREAM for stream 1051 doesn't fit in packet 15, so in packet 16 we see the server respond with a 404 response. Then in packet 17 the client does send the RST_STREAM, before moving on to sending the remaining 475 requests.
Note that although the server advertised 100 concurrent streams, both packets sent by the client contained far more HEADERS frames than that. The client did not have to wait for any return traffic from the server; it was only limited by the size of the packets it could send. No server RST_STREAM frames are seen in this trace, indicating that the server did not observe a concurrent stream violation.
Impact on customers
As mentioned above, as requests are canceled, upstream services are notified and can abort requests before wasting too many resources on them. This was the case with this attack: most malicious requests were never forwarded to the origin servers. However, the sheer size of these attacks did cause some impact.
First, as the rate of incoming requests reached peaks never seen before, we had reports of increased levels of 502 errors seen by clients. This happened on our most impacted data centers as they were struggling to process all the requests. While our network is meant to deal with large attacks, this particular vulnerability exposed a weakness in our infrastructure. Let's dig a little deeper into the details, focusing on how incoming requests are handled when they hit one of our data centers:
We can see that our infrastructure is composed of a chain of different proxy servers with different responsibilities. In particular, when a client connects to Cloudflare to send HTTPS traffic, it first hits our TLS decryption proxy: it decrypts TLS traffic, processes HTTP/1.x, HTTP/2, or HTTP/3 traffic, then forwards it to our "business logic" proxy. This proxy is responsible for loading all the settings for each customer and routing the requests correctly to other upstream services, and, more importantly in our case, it is also responsible for security features. This is where L7 attack mitigation is processed.
The problem with this attack vector is that it manages to send a lot of requests very quickly in every single connection. Each of them had to be forwarded to the business logic proxy before we had a chance to block it. As the request throughput became higher than our proxy capacity, the pipe connecting these two services reached its saturation level in some of our servers.
When this happens, the TLS proxy can no longer connect to its upstream proxy, which is why some clients saw a bare "502 Bad Gateway" error during the most serious attacks. It is important to note that, as of today, the logs used to create HTTP analytics are also emitted by our business logic proxy. The consequence is that these errors are not visible in the Cloudflare dashboard. Our internal dashboards show that about 1% of requests were impacted during the initial wave of attacks (before we implemented mitigations), with peaks of around 12% for a few seconds during the most serious one on August 29th. The following graph shows the ratio of these errors over a two-hour window while this was happening:
We worked to reduce this number dramatically in the following days, as detailed later in this post. Thanks both to changes in our stack and to mitigations that considerably reduce the size of these attacks, this number is now effectively zero:
499 errors and the challenges for HTTP/2 stream concurrency
Another symptom reported by some customers is an increase in 499 errors. The reason for this is a bit different and is related to the maximum stream concurrency in an HTTP/2 connection, detailed earlier in this post.
HTTP/2 settings are exchanged at the start of a connection using SETTINGS frames. In the absence of an explicit parameter, default values apply. Once a client establishes an HTTP/2 connection, it can wait for the server's SETTINGS (slow) or it can assume the default values and start making requests (fast). For SETTINGS_MAX_CONCURRENT_STREAMS, the default is effectively unlimited (stream IDs use a 31-bit number space, and requests use odd numbers, so the actual limit is 1073741824 streams). The specification recommends that a server offer no fewer than 100 streams. Clients are generally biased towards speed, so they don't tend to wait for server settings, which creates a bit of a race condition. Clients are taking a gamble on what limit the server might pick; if they pick wrong, the request will be rejected and will have to be retried. Gambling on 1073741824 streams is a bit silly, so instead many clients decide to limit themselves to issuing 100 concurrent streams, in the hope that servers follow the specification's recommendation. Where servers pick something below 100, this client gamble fails and streams are reset.
There are many reasons a server might reset a stream beyond a client overstepping the concurrency limit. HTTP/2 is strict and requires a stream to be closed when there are parsing or logic errors. In 2019, Cloudflare developed several mitigations in response to HTTP/2 DoS vulnerabilities. Several of those vulnerabilities were caused by a client misbehaving, leading the server to reset a stream. A very effective strategy to clamp down on such clients is to count the number of server resets during a connection and, when that exceeds some threshold value, close the connection with a GOAWAY frame. Legitimate clients might make one or two mistakes in a connection, and that is acceptable. A client that makes too many mistakes is probably either broken or malicious, and closing the connection addresses both cases.
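A hedged sketch of that counting strategy; the threshold and the choice of error code here are illustrative, not Cloudflare's actual values.

```python
ENHANCE_YOUR_CALM = 0xB  # HTTP/2 error code signalling excessive load or abuse

class ServerResetCounter:
    """Per-connection counter of server-initiated stream resets."""

    def __init__(self, threshold: int = 50):
        self.threshold = threshold
        self.resets = 0

    def on_server_reset(self):
        """Call each time the server resets a stream on this connection.
        Returns a GOAWAY action once the client has made too many mistakes."""
        self.resets += 1
        if self.resets > self.threshold:
            return ("GOAWAY", ENHANCE_YOUR_CALM)
        return None
```

A handful of resets is tolerated; a connection that keeps forcing them is closed outright.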
While responding to DoS attacks enabled by CVE-2023-44487, Cloudflare reduced maximum stream concurrency to 64. Before making this change, we were unaware that clients don't wait for SETTINGS and instead assume a concurrency of 100. Some web pages, such as an image gallery, do indeed cause a browser to send 100 requests immediately at the start of a connection. Unfortunately, the 36 streams above our limit all needed to be reset, which triggered our counting mitigations. This meant that we closed connections on legitimate clients, leading to a complete page load failure. As soon as we realized this interoperability issue, we changed the maximum stream concurrency to 100.
Actions from the Cloudflare side
In 2019, several DoS vulnerabilities were uncovered related to implementations of HTTP/2. Cloudflare developed and deployed a series of detections and mitigations in response. CVE-2023-44487 is a different manifestation of an HTTP/2 vulnerability. However, to mitigate it we were able to extend the existing protections to monitor client-sent RST_STREAM frames and close connections when they are being used for abuse. Legitimate client uses of RST_STREAM are unaffected.
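One way such monitoring could work, sketched here as an assumption rather than Cloudflare's actual implementation: track client-sent RST_STREAM frames over a sliding window and close the connection if the rate is abusive. The window and threshold values are illustrative.

```python
from collections import deque

class RapidResetDetector:
    """Per-connection sliding-window counter of client-sent RST_STREAM frames."""

    def __init__(self, max_resets: int = 200, window_seconds: float = 1.0):
        self.max_resets = max_resets
        self.window = window_seconds
        self.timestamps = deque()

    def on_client_rst_stream(self, now: float) -> bool:
        """Record one client RST_STREAM; True means the connection should close."""
        self.timestamps.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_resets
```

A browser canceling a few off-screen image loads never approaches the threshold, while a rapid-reset client trips it within the first fraction of a second.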
In addition to the direct fix, we have implemented several improvements to the server's HTTP/2 frame processing and request dispatch code. Furthermore, the business logic server has received improvements to queuing and scheduling that reduce unnecessary work and improve cancellation responsiveness. Together, these lessen the impact of various potential abuse patterns and give the server more room to process requests before saturating.
Mitigate attacks earlier
Cloudflare already had systems in place to efficiently mitigate very large attacks with less expensive methods. One of them is named "IP Jail". For hyper-volumetric attacks, this system collects the client IPs participating in the attack and stops them from connecting to the attacked property, either at the IP level or in our TLS proxy. This system, however, needs a few seconds to become fully effective; during those precious seconds, the origins are already protected but our infrastructure still needs to absorb all the HTTP requests. As this new botnet has effectively no ramp-up period, we need to be able to neutralize attacks before they can become a problem.
To achieve this, we expanded the IP Jail system to protect our entire infrastructure: once an IP is "jailed", not only is it blocked from connecting to the attacked property, we also forbid the corresponding IP from using HTTP/2 to any other domain on Cloudflare for some time. Because such protocol abuses are not possible over HTTP/1.x, this limits the attacker's ability to run large attacks, while any legitimate client sharing the same IP would only see a very small performance decrease during that time. IP-based mitigations are a very blunt tool, which is why we have to be extremely careful when using them at that scale and seek to avoid false positives as much as possible. Moreover, the lifespan of a given IP in a botnet is usually short, so any long-term mitigation is likely to do more harm than good. The following graph shows the churn of IPs in the attacks we witnessed:
As we can see, many new IPs spotted on a given day disappear very quickly afterwards.
As all these actions happen in our TLS proxy at the beginning of our HTTPS pipeline, this saves considerable resources compared to our regular L7 mitigation system. This allowed us to weather these attacks much more smoothly and now the number of random 502 errors caused by these botnets is down to zero.
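Given how quickly botnet IPs churn, a jail entry only needs a short lifetime. A minimal sketch of such a time-limited blocklist (entirely hypothetical; Cloudflare's actual implementation is not public) might look like:

```python
import time

class IpJail:
    """Hypothetical sketch of a short-lived IP "jail": jailed IPs are
    denied HTTP/2 for a fixed TTL, matching the observation that
    botnet IPs churn quickly and long-term blocks do more harm than good."""

    def __init__(self, ttl_seconds=600, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable for testing
        self._jailed = {}           # ip -> expiry timestamp

    def jail(self, ip):
        self._jailed[ip] = self.clock() + self.ttl

    def is_jailed(self, ip):
        expiry = self._jailed.get(ip)
        if expiry is None:
            return False
        if self.clock() >= expiry:
            del self._jailed[ip]    # lazily expire stale entries
            return False
        return True
```

The short TTL is the key design choice: as the churn graph shows, most attacking IPs vanish within a day, so a block that outlives the attack mostly penalizes legitimate clients that later receive the same IP.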
Another front on which we are making changes is observability. Returning errors to clients without those errors being visible in customer analytics is unsatisfactory. Fortunately, a project to overhaul these systems has been underway since long before the recent attacks. It will eventually allow each service within our infrastructure to log its own data, instead of relying on our business logic proxy to consolidate and emit log data. This incident underscored the importance of this work, and we are redoubling our efforts.
We are also working on better connection-level logging, allowing us to spot such protocol abuses much more quickly to improve our DDoS mitigation capabilities.
While this was the latest record-breaking attack, we know it won’t be the last. As attacks continue to become more sophisticated, Cloudflare works relentlessly to proactively identify new threats — deploying countermeasures to our global network so that our millions of customers are immediately and automatically protected.
Cloudflare has provided free, unmetered and unlimited DDoS protection to all of our customers since 2017. In addition, we offer a range of additional security features to suit the needs of organizations of all sizes. Contact us if you’re unsure whether you’re protected or want to understand how you can be.
Earlier today, Cloudflare, along with Google and Amazon Web Services (AWS), disclosed the existence of a novel zero-day vulnerability dubbed the “HTTP/2 Rapid Reset” attack. This attack exploits a weakness in the HTTP/2 protocol to generate enormous, hyper-volumetric Distributed Denial of Service (DDoS) attacks. Cloudflare has mitigated a barrage of these attacks in recent months, including an attack three times larger than any previous attack we’ve observed, which exceeded 201 million requests per second (rps). Since the end of August 2023, Cloudflare has mitigated more than 1,100 other attacks with over 10 million rps — and 184 attacks that were greater than our previous DDoS record of 71 million rps.
This zero-day provided threat actors with a critical new tool in their Swiss Army knife of vulnerabilities to exploit and attack their victims at a magnitude that has never been seen before. While at times complex and challenging to combat, these attacks allowed Cloudflare the opportunity to develop purpose-built technology to mitigate the effects of the zero-day vulnerability.
If you are using Cloudflare for HTTP DDoS mitigation, you are protected. And below, we’ve included more information on this vulnerability, and resources and recommendations on what you can do to secure yourselves.
Deconstructing the attack: What every CSO needs to know
In late August 2023, our team at Cloudflare noticed a new zero-day vulnerability, developed by an unknown threat actor, that exploits the standard HTTP/2 protocol — a fundamental protocol that is critical to how the Internet and all websites work. This novel zero-day vulnerability attack, dubbed Rapid Reset, leverages HTTP/2’s stream cancellation feature by sending a request and immediately canceling it over and over.
By automating this trivial “request, cancel, request, cancel” pattern at scale, threat actors are able to create a denial of service and take down any server or application running the standard implementation of HTTP/2. Furthermore, one crucial thing to note about the record-breaking attack is that it involved a modestly sized botnet, consisting of roughly 20,000 machines. Cloudflare regularly detects botnets that are orders of magnitude larger than this — comprising hundreds of thousands and even millions of machines. That a relatively small botnet can output such a large volume of requests, with the potential to incapacitate nearly any server or application supporting HTTP/2, underscores how menacing this vulnerability is for unprotected networks.
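To see why cancellation is so cheap for the attacker, consider the wire format: each "cancel" is a 13-byte RST_STREAM frame. The following sketch (purely illustrative frame construction per RFC 9113, not taken from any attack tool, and performing no network activity) shows the byte-level shape of the pattern:

```python
import struct

def frame(ftype, flags, stream_id, payload):
    # 9-byte HTTP/2 frame header (RFC 9113): 24-bit length, 8-bit type,
    # 8-bit flags, 31-bit stream identifier
    header = struct.pack("!I", len(payload))[1:] + bytes([ftype, flags])
    header += struct.pack("!I", stream_id & 0x7FFFFFFF)
    return header + payload

def rapid_reset_frames(n_streams, encoded_headers):
    """Illustrative only: the "request, cancel" pattern is a HEADERS frame
    followed immediately by a tiny RST_STREAM frame on each stream."""
    out = b""
    for i in range(n_streams):
        stream_id = 1 + 2 * i  # client-initiated streams use odd IDs
        # HEADERS frame, flags END_STREAM | END_HEADERS
        out += frame(0x1, 0x5, stream_id, encoded_headers)
        # RST_STREAM frame carrying the 4-byte CANCEL (0x8) error code
        out += frame(0x3, 0x0, stream_id, struct.pack("!I", 0x8))
    return out
```

With a one-byte HPACK-indexed header block, each request/cancel pair costs the client only 23 bytes on the wire, while the server pays the full request-dispatch cost before the cancellation arrives. That asymmetry is what lets a 20,000-machine botnet generate record request rates.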
Threat actors used botnets in tandem with the HTTP/2 vulnerability to amplify requests at rates we have never seen before. As a result, our team at Cloudflare experienced some intermittent edge instability. While our systems were able to mitigate the overwhelming majority of incoming attacks, the volume overloaded some components in our network, impacting a small number of customers’ performance with intermittent 4xx and 5xx errors — all of which were quickly resolved.
Once we successfully mitigated these issues and halted potential attacks for all customers, our team immediately kicked off a responsible disclosure process. We entered into conversations with industry peers to see how we could work together to help move our mission forward and safeguard the large percentage of the Internet that relies on our network prior to releasing this vulnerability to the general public.
How are Cloudflare and the industry thwarting this attack?
There is no such thing as a “perfect disclosure.” Thwarting attacks and responding to emerging incidents requires organizations and security teams to live by an assume-breach mindset — because there will always be another zero-day, new evolving threat actor groups, and never-before-seen novel attacks and techniques.
This “assume-breach” mindset is a key foundation for information sharing and for ensuring, in instances such as this, that the Internet remains safe. While Cloudflare was experiencing and mitigating these attacks, we were also working with industry partners to guarantee that the industry at large could withstand this attack.
During the process of mitigating this attack, our Cloudflare team developed and purpose-built new technology to stop these DDoS attacks and further improve our own mitigations for this and other future attacks of massive scale. These efforts have significantly increased our overall mitigation capabilities and resiliency. If you are using Cloudflare, we are confident that you are protected.
Our team also alerted web server software partners who are developing patches to ensure this vulnerability cannot be exploited — check their websites for more information.
Disclosures are never one and done. The lifeblood of Cloudflare is to ensure a better Internet, which stems from instances such as these. When we have the opportunity to work with our industry partners and governments to ensure there are no widespread impacts on the Internet, we are doing our part in increasing the cyber resiliency of every organization no matter the size or vertical.
What are the origins of the HTTP/2 Rapid Reset and these record-breaking attacks on Cloudflare?
It may seem odd that Cloudflare was one of the first companies to witness these attacks. Why would threat actors attack a company that has some of the most robust defenses against DDoS attacks in the world?
The reality is that Cloudflare often sees attacks before they are turned on more vulnerable targets. Threat actors need to develop and test their tools before they deploy them in the wild. Threat actors who possess record-shattering attack methods can have an extremely difficult time testing and understanding how large and effective they are, because they don't have the infrastructure to absorb the attacks they are launching. Because of the transparency that we share on our network performance, and the measurements of attacks they could glean from our public performance charts, this threat actor was likely targeting us to understand the capabilities of the exploit.
But that testing, and the ability to see the attack early, helps us develop mitigations for the attack that benefit both our customers and industry as a whole.
From CSO to CSO: What should you do?
I have been a CSO for over 20 years, on the receiving end of countless disclosures and announcements like this. But whether it was Log4J, SolarWinds, EternalBlue, WannaCry/NotPetya, Heartbleed, or Shellshock, all of these security incidents have a commonality: a tremendous explosion that ripples across the world and creates an opportunity to completely disrupt any of the organizations that I have led, regardless of the industry or the size.
Many of these were attacks or vulnerabilities that we may not have been able to control. But regardless of whether the issue arose from something within my control or not, what has set the successful initiatives I have led apart from those that did not lean in our favor was the ability to respond when zero-day vulnerabilities and exploits like this are identified.
While I wish I could say that Rapid Reset may be different this time around, it is not. I am calling all CSOs — no matter if you’ve lived through the decades of security incidents that I have, or this is your first day on the job — this is the time to ensure you are protected and stand up your cyber incident response team.
We’ve kept the information restricted until today to give as many security vendors as possible the opportunity to react. However, at some point, the responsible thing becomes to publicly disclose zero-day threats like this. Today is that day. That means that after today, threat actors will be largely aware of the HTTP/2 vulnerability, and it will inevitably become trivial to exploit and kick off the race between defenders and attackers — first to patch vs. first to exploit. Organizations should assume that systems will be tested, and take proactive measures to ensure protection.
To me, this is reminiscent of a vulnerability like Log4J, due to the many variants that are emerging daily, and will continue to come to fruition in the weeks, months, and years to come. As more researchers and threat actors experiment with the vulnerability, we may find different variants with even shorter exploit cycles that contain even more advanced bypasses.
And just like Log4J, managing incidents like this isn’t as simple as “run the patch, now you’re done”. You need to turn incident management, patching, and evolving your security protections into ongoing processes — because the patches for each variant of a vulnerability reduce your risk, but they don’t eliminate it.
I don’t mean to be alarmist, but I will be direct: you must take this seriously. Treat this as a full active incident to ensure nothing happens to your organization.
Recommendations for a New Standard of Change
While no one security event is ever identical to the next, there are lessons that can be learned. CSOs, here are my recommendations that must be implemented immediately. Not only in this instance, but for years to come:
Understand your external and partner networks’ external connectivity to remediate any Internet-facing systems with the mitigations below.
Understand your existing security protection and capabilities you have to protect, detect and respond to an attack and immediately remediate any issues you have in your network.
Ensure your DDoS protection resides outside of your data center, because if the traffic gets to your data center, it will be difficult to mitigate the DDoS attack.
Ensure you have DDoS protection for applications (Layer 7) and web application firewalls. Additionally, as a best practice, ensure you have complete DDoS protection for DNS, network traffic (Layer 3), and API firewalls.
Ensure web server and operating system patches are deployed across all Internet-facing web servers. Also, ensure all automation, like Terraform builds and images, is fully patched so older versions of web servers are not deployed into production over the secure images by accident.
As a last resort, consider turning off HTTP/2 and HTTP/3 (likely also vulnerable) to mitigate the threat. This is a last resort only, because there will be significant performance issues if you downgrade to HTTP/1.1.
Consider a secondary, cloud-based DDoS L7 provider at perimeter for resilience.
Cloudflare’s mission is to help build a better Internet. If you are concerned about your current state of DDoS protection, we are more than happy to provide you with our DDoS capabilities and resilience for free to mitigate any attempts of a successful DDoS attack. We know the stress that you are facing, as we have fought off these attacks for the last 30 days and made our already best-in-class systems even better.
Application teams should consider the impact unexpected traffic floods can have on an application’s availability. Internet-facing applications can be susceptible to traffic that some distributed denial of service (DDoS) mitigation systems can’t detect. For example, hit-and-run events are a popular approach that use short-lived floods that reoccur at random intervals. Each burst is small enough to go unnoticed by mitigation systems, but still occur often enough and are large enough to be disruptive. Automatically detecting and blocking temporary sources of invalid traffic, combined with other best practices, can strengthen the resiliency of your applications and maintain customer trust.
Use resilient architectures
AWS customers can use prescriptive guidance to improve DDoS resiliency by reviewing the AWS Best Practices for DDoS Resiliency. It describes a DDoS-resilient reference architecture as a guide to help you protect your application’s availability.
The best practices above address the needs of most AWS customers; however, in this blog we cover a few outlier examples that fall outside normal guidance. Here are a few examples that might describe your situation:
You need to operate functionality that isn’t yet fully supported by an AWS managed service that takes on the responsibility of DDoS mitigation.
Migrating to an AWS managed service such as Amazon Route 53 isn’t immediately possible and you need an interim solution that mitigates risks.
Network ingress must be allowed from a wide public IP space that can’t be restricted.
Network floods are disruptive but not significant enough (too infrequent or too low volume) to be detected by your managed DDoS mitigation systems.
For these situations, VPC network ACLs can be used to deny invalid traffic. Normally, the limit on rules per network ACL makes them unsuitable for handling truly distributed network floods. However, they can be effective at mitigating network floods that aren’t distributed enough or large enough to be detected by DDoS mitigation systems.
Given the dynamic nature of network traffic and the limited size of network ACLs, it helps to automate the lifecycle of network ACL rules. In the following sections, I show you a solution that uses network ACL rules to automatically detect and block infrastructure layer traffic within 2–5 minutes and automatically removes the rules when they’re no longer needed.
Detecting anomalies in network traffic
You need a way to block disruptive traffic while not impacting legitimate traffic. Anomaly detection can isolate the right traffic to block. Every workload is unique, so you need a way to automatically detect anomalies in the workload’s traffic pattern. You can determine what is normal (a baseline) and then detect statistical anomalies that deviate from the baseline. This baseline can change over time, so it needs to be calculated based on a rolling window of recent activity.
Z-scores are a common way to detect anomalies in time-series data. The process for creating a Z-score is to first calculate the average and standard deviation (a measure of how much the values are spread out) across all values over a span of time. Then for each value in the time window calculate the Z-score as follows:
Z-score = (value – average) / standard deviation
A Z-score exceeding 3.0 indicates the value is an outlier that is greater than 99.7 percent of all other values.
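As a quick sketch, the formula above maps directly onto Python's statistics module (this uses the population standard deviation; the post doesn't specify which variant the solution uses):

```python
import statistics

def z_scores(values):
    """Z-score for each value in a time window: how many standard
    deviations it sits above or below the window's average."""
    avg = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return [0.0] * len(values)  # perfectly flat traffic: nothing is an outlier
    return [(v - avg) / stdev for v in values]
```

For example, `z_scores([100, 110, 90, 105, 95, 1000])` gives the final value a Z-score above 2, while the baseline values all land near zero.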
To calculate the Z-score for detecting network anomalies, you need to establish a time series for network traffic. This solution uses VPC flow logs to capture information about the IP traffic in your VPC. Each flow log record provides a packet count aggregated over an interval of 60 seconds or less, but there isn’t a consistent time boundary for each record, so raw flow log records aren’t a predictable way to build a time series. To address this, the solution processes flow logs into packet bins for time series values. A packet bin is the number of packets sent by a unique source IP address within a specific time window. A source IP address is considered an anomaly if any of its packet bins over the past hour exceed the Z-score threshold (default is 3.0).
When overall traffic levels are low, there might be source IP addresses with a high Z-score that aren’t a risk. To mitigate against false positives, source IP addresses are only considered to be an anomaly if the packet bin exceeds a minimum threshold (default is 12,000 packets).
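Putting the binning and both thresholds together, a simplified model of the detection step might look like the following (the real solution does this work in CloudWatch Logs Insights and Timestream; the record layout here is a stand-in for illustration):

```python
import statistics
from collections import defaultdict

def packet_bins(flow_records, bin_seconds=60):
    """Aggregate flow log records into per-source-IP, per-minute packet bins.
    Each record here is a (source_ip, start_timestamp, packet_count) tuple;
    real flow log records carry more fields."""
    bins = defaultdict(int)
    for src_ip, start, packets in flow_records:
        minute = int(start) // bin_seconds
        bins[(src_ip, minute)] += packets
    return bins

def anomalous_ips(bins, min_z=3.0, min_packets=12_000):
    """Flag a source IP if any of its bins exceeds both the Z-score
    threshold and the minimum packet threshold (defaults from the post)."""
    values = list(bins.values())
    avg = statistics.mean(values)
    stdev = statistics.pstdev(values)
    flagged = set()
    for (src_ip, _), packets in bins.items():
        z = (packets - avg) / stdev if stdev else 0.0
        if z >= min_z and packets >= min_packets:
            flagged.add(src_ip)
    return flagged
```

The `min_packets` guard is what prevents false positives during quiet periods: a source IP with a high Z-score but a small absolute bin size is never blocked.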
Let’s review the overall solution architecture.
This solution, shown in Figure 1, uses VPC flow logs to capture information about the traffic reaching the network interfaces in your public subnets. CloudWatch Logs Insights queries are used to summarize the most recent IP traffic into packet bins that are stored in Timestream. The time series table is queried to identify source IP addresses responsible for traffic that meets the anomaly threshold. Anomalous source IP addresses are published to an Amazon Simple Notification Service (Amazon SNS) topic. A Lambda function receives the SNS message and decides how to update the network ACL.
Figure 1: Automating the detection and mitigation of traffic floods using network ACLs
How it works
The numbered steps that follow correspond to the numbers in Figure 1.
Capture VPC flow logs. Your VPC is configured to stream flow logs to CloudWatch Logs. To minimize cost, the flow logs are limited to particular subnets and only include log fields required by the CloudWatch query. When protecting an endpoint that spans multiple subnets (such as a Network Load Balancer using multiple availability zones), each subnet shares the same network ACL and is configured with a flow log that shares the same CloudWatch log group.
Scheduled flow log analysis. Amazon EventBridge starts an AWS Step Functions state machine on a time interval (60 seconds by default). The state machine starts a Lambda function immediately, and then again after 30 seconds. The Lambda function performs steps 3–6.
Summarize recent network traffic. The Lambda function runs a CloudWatch Logs Insights query. The query scans the most recent flow logs (5-minute window) to summarize packet frequency grouped by source IP. These groupings are called packet bins, where each bin represents the number of packets sent by a source IP within a given minute of time.
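A Logs Insights query for this grouping might look like the one below (hypothetical; the query shipped with the solution may differ in field names and filters):

```python
def build_bin_query():
    """Hypothetical CloudWatch Logs Insights query that groups packet
    counts into per-source-IP, per-minute bins. Field names assume a
    custom flow log format exposing srcAddr and packets."""
    return (
        "fields srcAddr, packets "
        "| stats sum(packets) as bin_packets by srcAddr, bin(1m) "
        "| sort bin_packets desc "
        "| limit 10000"
    )

# A Lambda handler might run it with boto3 (not executed here):
#   logs = boto3.client("logs")
#   qid = logs.start_query(logGroupName=LOG_GROUP, startTime=start,
#                          endTime=end, queryString=build_bin_query())
```

Note the `limit 10000`: Logs Insights caps results at 10,000 records, which is the source of the per-minute baseline limitation discussed later in this post.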
Update time series database. A time series database in Timestream is updated with the most recent packet bins.
Use statistical analysis to detect abusive source IPs. A Timestream query is used to perform several calculations. The query calculates the average bin size over the past hour, along with the standard deviation. These two values are then used to calculate the maximum Z-score for all source IPs over the past hour. This means an abusive IP will remain flagged for one hour even if it stops sending traffic. Z-scores are sorted so that the most abusive source IPs are prioritized. A source IP is considered abusive if it meets both of these criteria:
Maximum Z-score exceeds a threshold (defaults to 3.0).
Packet bin exceeds a threshold (defaults to 12,000). This avoids flagging source IPs during periods of overall low traffic when there is no need to block traffic.
Publish anomalous source IPs. Publish a message to an Amazon SNS topic with a list of anomalous source IPs. The function also publishes CloudWatch metrics to help you track the number of unique and abusive source IPs over time. At this point, the flow log summarizer function has finished its job until the next time it’s invoked from EventBridge.
Receive anomalous source IPs. The network ACL updater function is subscribed to the SNS topic. It receives the list of anomalous source IPs.
Update the network ACL. The network ACL updater function uses two network ACLs called blue and green. This ensures that the active rules remain in place while the rules in the inactive network ACL are updated. When the inactive network ACL rules are updated, the function swaps network ACLs on each subnet. By default, each network ACL has a limit of 20 rules. If the number of anomalous source IPs exceeds the network ACL limit, the source IPs with the highest Z-score are prioritized. CloudWatch metrics are provided to help you track the number of source IPs blocked, and how many source IPs couldn’t be blocked due to network ACL limits.
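The prioritization step can be sketched as a pure function (the function shape and names are mine; the default of 18 reflects the 20-rule limit minus the two default rules):

```python
def select_nacl_rules(anomalies, max_rules=18):
    """Sketch of rule prioritization: when anomalous source IPs exceed
    the network ACL's free rule slots, keep the highest Z-scores.
    `anomalies` maps source IP -> maximum Z-score."""
    ranked = sorted(anomalies.items(), key=lambda kv: kv[1], reverse=True)
    blocked = [ip for ip, _ in ranked[:max_rules]]
    # Overflow IPs can't be blocked; the solution surfaces this count
    # as a CloudWatch metric.
    overflow = [ip for ip, _ in ranked[max_rules:]]
    return blocked, overflow
```

This is why, in the scenario described later, a flood from 20 unique source IPs leaves two of them unblocked: the 18-slot budget is spent on the worst offenders first.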
This solution assumes you have one or more public subnets used to operate an internet-facing endpoint.
Deploy the solution
Follow these steps to deploy and validate the solution.
Figure 2: Save auto-nacl.zip to the auto-nacl-deploy directory
Step 2. Upload the CloudFormation templates and Python code to an S3 bucket
I extract the auto-nacl.zip file into my auto-nacl-deploy directory.
Figure 3: Expand auto-nacl.zip into the auto-nacl-deploy directory
The template.yaml file is used to create a CloudFormation stack with four nested stacks. You copy all files to an S3 bucket prior to creating the stacks.
To stage these files in Amazon S3, use an existing bucket or create a new one. For this example, I used an existing S3 bucket named auto-nacl-us-east-1. Using the Amazon S3 console, I created a folder named artifacts and then uploaded the extracted files to it. My bucket now looks like Figure 4.
Figure 4: Upload the extracted files to Amazon S3
Step 3. Gather information needed for the CloudFormation template parameters
There are six parameters required by the CloudFormation template.
The ID of the VPC that runs your application.
A comma-delimited list of public subnet IDs used by your endpoint.
The port your endpoint listens on.
The protocol (TCP or UDP) used by your endpoint.
The S3 bucket that contains the files you uploaded in Step 2. This bucket must be in the same AWS Region as the CloudFormation stack.
The S3 prefix (folder) of the files you uploaded in Step 2.
For the VpcId parameter, I use the VPC console to find the VPC ID for my application.
Figure 5: Find the VPC ID
For the SubnetIds parameter, I use the VPC console to find the subnet IDs for my application. My VPC has public and private subnets. For this solution, you only need the public subnets.
Figure 6: Find the subnet IDs
My application uses a Network Load Balancer that listens on port 80 to handle TCP traffic. I use 80 for ListenerPort and TCP for ListenerProtocol.
The next two parameters are based on the Amazon S3 location I used earlier. I use auto-nacl-us-east-1 for SourceCodeS3Bucket and artifacts for SourceCodeS3Prefix.
Step 4. Create the CloudFormation stack
I use the CloudFormation console to create a stack. The Amazon S3 URL format is https://<bucket>.s3.<region>.amazonaws.com/<prefix>/template.yaml. I enter the Amazon S3 URL for my environment, then choose Next.
Figure 7: Specify the CloudFormation template
I enter a name for my stack (for example, auto-nacl-1) along with the parameter values I gathered in Step 3. I leave all optional parameters as they are, then choose Next.
Figure 8: Provide the required parameters
I review the stack options, then scroll to the bottom and choose Next.
Figure 9: Review the default stack options
I scroll down to the Capabilities section and acknowledge the capabilities required by CloudFormation, then choose Submit.
Figure 10: Acknowledge the capabilities required by CloudFormation
I wait for the stack to reach CREATE_COMPLETE status. It takes 10–15 minutes to create all of the nested stacks.
Figure 11: Wait for the stacks to complete
Step 5. Monitor traffic mitigation activity using the CloudWatch dashboard
After the CloudFormation stacks are complete, I navigate to the CloudWatch console to open the dashboard. In my environment, the dashboard is named auto-nacl-1-MitigationDashboard-YS697LIEHKGJ.
Figure 12: Find the CloudWatch dashboard
Initially, the dashboard, shown in Figure 13, has little information to display. After an hour, I can see the following metrics from my sample environment:
The Network Traffic graph shows how many packets are allowed and rejected by network ACL rules. No anomalies have been detected yet, so this only shows allowed traffic.
The All Source IPs graph shows how many total unique source IP addresses are sending traffic.
The Anomalous Source Networks graph shows how many anomalous source networks are being blocked by network ACL rules (or not blocked due to network ACL rule limit). This graph is blank unless anomalies have been detected in the last hour.
The Anomalous Source IPs graph shows how many anomalous source IP addresses are being blocked (or not blocked) by network ACL rules. This graph is blank unless anomalies have been detected in the last hour.
The Packet Statistics graph can help you determine if the sensitivity should be adjusted. This graph shows the average packets-per-minute and the associated standard deviation over the past hour. It also shows the anomaly threshold, which represents the minimum number of packets-per-minute for a source IP address to be considered an anomaly. The anomaly threshold is calculated based on the CloudFormation parameter MinZScore.
anomaly threshold = (MinZScore * standard deviation) + average
Increasing the MinZScore parameter raises the threshold and reduces sensitivity. You can also adjust the CloudFormation parameter MinPacketsPerBin to mitigate against blocking traffic during periods of low volume, even if a source IP address exceeds the minimum Z-score.
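Plugging in numbers makes the sensitivity tradeoff concrete. With an hourly average of 1,000 packets per minute and a standard deviation of 200, the default MinZScore of 3.0 yields a threshold of 1,600 packets per minute:

```python
def anomaly_threshold(avg_packets, stdev_packets, min_z_score=3.0):
    """Anomaly threshold shown on the Packet Statistics graph: the minimum
    packets-per-minute for a source IP to be considered an anomaly."""
    return (min_z_score * stdev_packets) + avg_packets
```

Raising MinZScore to 4.0 in the same environment moves the threshold to 1,800 packets per minute, so fewer source IPs are flagged.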
The Blocked IPs grid shows which source IP addresses are being blocked during each hour, along with the corresponding packet bin size and Z-score. This grid is blank unless anomalies have been detected in the last hour.
Figure 13: Observe the dashboard after one hour
Let’s review a scenario to see what happens when my endpoint sees two waves of anomalous traffic.
By default, my network ACL allows a maximum of 20 inbound rules. The two default rules count toward this limit, so I only have room for 18 more inbound rules. My application sees a spike of network traffic from 20 unique source IP addresses. When the traffic spike begins, the anomaly is detected in less than five minutes. Network ACL rules are created to block the top 18 source IP addresses (sorted by Z-score). Traffic is blocked for about 5 minutes until the flood subsides. The rules remain in place for 1 hour by default. When the same 20 source IP addresses send another traffic flood a few minutes later, most traffic is immediately blocked. Some traffic is still allowed from two source IP addresses that can’t be blocked due to the limit of 18 rules.
Figure 14: Observe traffic blocked from anomalous source IP addresses
Customize the solution
You can customize the behavior of this solution to fit your use case.
Block many IP addresses per network ACL rule. To enable blocking more source IP addresses than your network ACL rule limit, change the CloudFormation parameter NaclRuleNetworkMask (default is 32). This sets the network mask used in network ACL rules and lets you block IP address ranges instead of individual IP addresses. By default, the IP address 192.0.2.1 is blocked by a network ACL rule for 192.0.2.1/32. Setting this parameter to 24 results in a network ACL rule that blocks 192.0.2.0/24. As a reminder, address ranges that are too wide might result in blocking legitimate traffic.
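The mask widening can be reproduced with Python's ipaddress module (a sketch of the effect, not the solution's actual code; the parameter name mirrors the CloudFormation parameter):

```python
import ipaddress

def rule_cidr(source_ip, nacl_rule_network_mask=32):
    """Show how a wider NaclRuleNetworkMask turns a single source IP
    into a blocked range. strict=False zeroes the host bits."""
    network = ipaddress.ip_network(
        f"{source_ip}/{nacl_rule_network_mask}", strict=False
    )
    return str(network)
```

For example, `rule_cidr("192.0.2.1", 24)` returns `"192.0.2.0/24"`, blocking 256 addresses with a single rule.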
Only block source IPs that exceed a packet volume threshold. Use the CloudFormation parameter MinPacketsPerBin (default is 12,000) to set the minimum packets per minute. This mitigates against blocking source IPs (even if their Z-score is high) during periods of overall low traffic when there is no need to block traffic.
Adjust the sensitivity of anomaly detection. Use the CloudFormation parameter MinZScore to set the minimum Z-score for a source IP to be considered an anomaly. The default is 3.0, which only blocks source IPs with packet volume that exceeds 99.7 percent of all other source IPs.
Exclude trusted source IPs from anomaly detection. Specify an allow list object in Amazon S3 that contains a list of IP addresses or CIDRs that you want to exclude from network ACL rules. The network ACL updater function reads the allow list every time it handles an SNS message.
As covered in the preceding sections, this solution has a few limitations to be aware of:
CloudWatch Logs queries can only return up to 10,000 records. This means the traffic baseline can only be calculated based on the observation of 10,000 unique source IP addresses per minute.
The traffic baseline is based on a rolling 1-hour window. You might need to increase this if a 1-hour window results in a baseline that allows false positives. For example, you might need a longer baseline window if your service normally handles abrupt spikes that occur hourly or daily.
By default, a network ACL can only hold 20 inbound rules. This includes the default allow and deny rules, so there’s room for 18 deny rules. You can increase this limit from 20 to 40 with a support case; however, it means that a maximum of 18 (or 38) source IP addresses can be blocked at one time.
The speed of anomaly detection is dependent on how quickly VPC flow logs are delivered to CloudWatch. This usually takes 2–4 minutes but can take over 6 minutes.
CloudWatch Logs Insights queries are the main element of cost for this solution. See CloudWatch pricing for more information. The cost is about 7.70 USD per GB of flow logs generated per month.
To optimize the cost of CloudWatch queries, the VPC flow log record format only includes the fields required for anomaly detection. The CloudWatch log group is configured with a retention of 1 day. You can tune your cost by adjusting the anomaly detector function to run less frequently (the default is twice per minute). The tradeoff is that the network ACL rules won’t be updated as frequently. This can lead to the solution taking longer to mitigate a traffic flood.
Maintaining high availability and responsiveness is important to keeping the trust of your customers. The solution described above can help you automatically mitigate a variety of network floods that can impact the availability of your application even if you’ve followed all the applicable best practices for DDoS resiliency. There are limitations to this solution, but it can quickly detect and mitigate disruptive sources of traffic in a cost-effective manner. Your feedback is important. You can share comments below and report issues on GitHub.
Cloudflare has a unique vantage point on the Internet. From this position, we are able to see, explore, and identify trends that would otherwise go unnoticed. In this report we are doing just that and sharing our insights into Internet-wide application security trends.
Since the last report, our network is bigger and faster: we are now processing an average of 46 million HTTP requests/second and 63 million at peak. We consistently handle approximately 25 million DNS queries per second. That's around 2.1 trillion DNS queries per day, and 65 trillion queries a month. This is the sum of authoritative and resolver requests served by our infrastructure. Summing up both HTTP and DNS requests, we get to see a lot of malicious traffic. Focusing on HTTP requests only, in Q2 2023 Cloudflare blocked an average of 112 billion cyber threats each day, and this is the data that powers this report.
But as usual, before we dive in, we need to define our terms.
Throughout this report, we will refer to the following terms:
Mitigated traffic: any eyeball HTTP* request that had a “terminating” action applied to it by the Cloudflare platform. These include the following actions: BLOCK, CHALLENGE, JS_CHALLENGE and MANAGED_CHALLENGE. This does not include requests that had the following actions applied: LOG, SKIP, ALLOW. In contrast to last year, we now exclude requests that had CONNECTION_CLOSE and FORCE_CONNECTION_CLOSE actions applied by our DDoS mitigation system, as these technically only slow down connection initiation. They also accounted for a relatively small percentage of requests. Additionally, we improved our calculation regarding the CHALLENGE type actions to ensure that only unsolved challenges are counted as mitigated. A detailed description of actions can be found in our developer documentation.
Bot traffic/automated traffic: any HTTP* request identified by Cloudflare’s Bot Management system as being generated by a bot. This includes requests with a bot score between 1 and 29 inclusive. This has not changed from last year’s report.
API traffic: any HTTP* request with a response content type of XML or JSON. Where the response content type is not available, such as for mitigated requests, the equivalent Accept content type (specified by the user agent) is used instead. In this latter case, API traffic won’t be fully accounted for, but it still provides a good representation for the purposes of gaining insights.
Unless otherwise stated, the time frame evaluated in this post is the 3 month period from April 2023 through June 2023 inclusive.
Finally, please note that the data is calculated based only on traffic observed across the Cloudflare network and does not necessarily represent overall HTTP traffic patterns across the Internet.
* When referring to HTTP traffic we mean both HTTP and HTTPS.
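The three definitions above can be summarized in code. This is a minimal sketch against a simplified request record; the field names and helper functions are illustrative, not Cloudflare's internal schema:

```python
# Hedged sketch of the report's definitions of mitigated, bot, and API traffic.
from typing import Optional

TERMINATING_ACTIONS = {"BLOCK", "CHALLENGE", "JS_CHALLENGE", "MANAGED_CHALLENGE"}
CHALLENGE_ACTIONS = {"CHALLENGE", "JS_CHALLENGE", "MANAGED_CHALLENGE"}
API_CONTENT_TYPES = ("application/json", "application/xml", "text/xml")

def is_mitigated(action: str, challenge_solved: bool = False) -> bool:
    # Only unsolved challenges count as mitigated; LOG/SKIP/ALLOW never do.
    if action in CHALLENGE_ACTIONS:
        return not challenge_solved
    return action == "BLOCK"

def is_bot(bot_score: int) -> bool:
    # Bot Management scores from 1 to 29 inclusive are treated as automated.
    return 1 <= bot_score <= 29

def is_api(response_content_type: Optional[str], accept_header: str = "") -> bool:
    # Fall back to the Accept header when the response content type is
    # unavailable (e.g. for mitigated requests).
    ctype = response_content_type or accept_header
    return any(t in ctype for t in API_CONTENT_TYPES)
```

For example, a request answered with a solved managed challenge would not count toward mitigated traffic, matching the improved calculation described above.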
Global traffic insights
Mitigated daily traffic stable at 6%, spikes reach 8%
Although daily mitigated HTTP requests decreased by 2 percentage points to 6% on average compared to last year, days with larger than usual malicious activity can be clearly seen across the network. One clear example is shown in the graph below: towards the end of May 2023, a spike reaching nearly 8% can be seen. This is attributable to large DDoS events and other activity that does not follow standard daily or weekly cycles, and is a constant reminder that large malicious events can still have a visible impact at a global level, even at Cloudflare scale.
75% of mitigated HTTP requests were outright BLOCKed. This is a 6 percentage point decrease compared to the previous report. The majority of other requests are mitigated with the various CHALLENGE type actions, with managed challenges leading with ~20% of this subset.
Shields up: customer configured rules now biggest contributor to mitigated traffic
In our previous report, our automated DDoS mitigation system accounted for, on average, more than 50% of mitigated traffic. Over the past two quarters, due both to increased WAF adoption and, most likely, to organizations better configuring and locking down their applications against unwanted traffic, a new trend has emerged: WAF mitigated traffic has surpassed DDoS mitigation. Most of the increase has been driven by WAF Custom Rule BLOCKs rather than our WAF Managed Rules, indicating that these mitigations are generated by customer configured rules for business logic or related purposes. This can be clearly seen in the chart below.
Note that our WAF Managed Rules mitigations (yellow line) are negligible compared to overall WAF mitigated traffic, also indicating that customers are adopting positive security models by allowing known good traffic rather than only blocking known bad traffic. That said, WAF Managed Rules mitigations still reached as much as 1.5 billion per day during the quarter.
Our DDoS mitigation is, of course, volumetric and the amount of traffic matching our DDoS layer 7 rules should not be underestimated, especially given that we are observing a number of novel attacks and botnets being spun up across the web. You can read a deep dive on DDoS attack trends in our Q2 DDoS threat report.
Aggregating the source of mitigated traffic, the WAF now accounts for approximately 57% of all mitigations. Tabular format below with other sources for reference.
Application owners are increasingly relying on geolocation blocks
Given the increase in mitigated traffic from customer defined WAF rules, we thought it would be interesting to dive one level deeper and better understand what customers are blocking and how they are doing it. We can do this by reviewing rule field usage across our WAF Custom Rules to identify common themes. Of course, the data needs to be interpreted carefully, as not all customers have access to all fields; availability varies by contract and plan level. We can still make some inferences based on field “categories”. By reviewing all ~7M WAF Custom Rules deployed across the network and focusing on main groupings only, we get the following field usage distribution:
Field usage by main grouping (share of rules using each):
- Geolocation fields: 40%
- Other HTTP fields (excluding URI)
- Bot Management fields: 11%
- IP reputation score
Notably, 40% of all deployed WAF Custom Rules use geolocation-related fields to decide how to treat traffic. This is a common technique used to implement business logic or to exclude geographies from which no traffic is expected. While these are coarse controls unlikely to stop a sophisticated attacker, they are still effective at reducing the attack surface.
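As an illustration of this approach, a geography-based WAF Custom Rule can be written in Cloudflare's rule expression language; the country codes below are placeholders, not a recommendation:

```
(ip.geoip.country in {"XX" "YY"})
```

Paired with a Block action, a rule like this drops all traffic originating from the listed countries before it reaches the application.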
Another notable observation is the usage of Bot Management related fields in 11% of WAF Custom Rules. This number has been steadily increasing over time as more customers adopt machine learning-based classification strategies to protect their applications.
Old CVEs are still exploited en masse
Contributing ~32% of WAF Managed Rules mitigated traffic overall, HTTP Anomaly is still the most common attack category blocked by the WAF Managed Rules. SQLi moved up to second position, surpassing Directory Traversal with 12.7% and 9.9% respectively.
If we look at the start of April 2023, we notice the DoS category far exceeding the HTTP Anomaly category. Rules in the DoS category are WAF layer 7 HTTP signatures that are sufficiently specific to match (and block) single requests without looking at cross request behavior and that can be attributed to either specific botnets or payloads that cause denial of service (DoS). Normally, as is the case here, these requests are not part of “distributed” attacks, hence the lack of the first “D” for “distributed” in the category name.
Tabular format for reference (top 10 categories):
Zooming in, and filtering on the DoS category only, we find that most of the mitigated traffic is attributable to one rule: 100031 / ce02fd… (old WAF and new WAF rule ID respectively). This rule, with a description of “Microsoft IIS – DoS, Anomaly:Header:Range – CVE:CVE-2015-1635” pertains to a CVE dating back to 2015 that affected a number of Microsoft Windows components resulting in remote code execution*. This is a good reminder that old CVEs, even those dating back more than 8 years, are still actively exploited to compromise machines that may be unpatched and still running vulnerable software.
* Due to rule categorisation, some CVE specific rules are still assigned to a broader category such as DoS in this example. Rules are assigned to a CVE category only when the attack payload does not clearly overlap with another more generic category.
Another interesting observation is the increase in Broken Authentication rule matches starting in June. This increase is also attributable to a single rule deployed across all our customers, including our FREE users: “WordPress – Broken Access Control, File Inclusion”. This rule is blocking attempts to access wp-config.php – the WordPress default configuration file which is normally found in the web server document root directory, but of course should never be accessed directly via HTTP.
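The managed rule's exact definition isn't reproduced here, but a custom rule with the same effect could be sketched in Cloudflare's expression language as:

```
(http.request.uri.path contains "wp-config.php")
```

Paired with a Block action, this denies any direct HTTP request for the WordPress configuration file, which legitimate clients never need to fetch.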
On a similar note, CISA/CSA recently published a report highlighting the 2022 Top Routinely Exploited Vulnerabilities. We took this opportunity to explore how each CVE mentioned in CISA’s report was reflected in Cloudflare’s own data. The CISA/CSA report discusses 12 vulnerabilities that malicious cyber actors routinely exploited in 2022. However, based on our analysis, two CVEs mentioned in the report are responsible for the vast majority of attack traffic we have seen in the wild: Log4J and Atlassian Confluence Code Injection. Our data clearly suggests a major difference in exploit volume between the top two and the rest of the list. The following chart compares the attack volume (on a logarithmic scale) of the top 6 vulnerabilities of the CISA list according to our logs.
Bot traffic insights
Our confidence in the Bot Management classification output remains very high. If we plot the bot scores across the analyzed time frame, we find a very clear distribution, with most requests either being classified as definitely bot (score below 30) or definitely human (score greater than 80), with most requests actually scoring less than 2 or greater than 95. This equates, over the same time period, to 33% of traffic being classified as automated (generated by a bot). Over longer time periods we do see the overall bot traffic percentage stable at 29%, and this reflects the data shown on Cloudflare Radar.
On average, more than 10% of non-verified bot traffic is mitigated
Compared to the last report, non-verified bot HTTP traffic mitigation is currently on a downward trend (down 6 percentage points). However, the Bot Management field usage within WAF Custom Rules is non negligible, standing at 11%. This means that there are more than 700k WAF Custom Rules deployed on Cloudflare that are relying on bot signals to perform some action. The most common field used is cf.client.bot, an alias to cf.bot_management.verified_bot which is powered by our list of verified bots and allows customers to make a distinction between “good” bots and potentially “malicious” non-verified ones.
Enterprise customers have access to the more powerful cf.bot_management.score which provides direct access to the score computed on each request, the same score used to generate the bot score distribution graph in the prior section.
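As an example of how these fields compose, a WAF Custom Rule that challenges likely-automated traffic while sparing verified bots could be written as follows; the score threshold of 30 matches the bot definition used in this report, but the rule itself is illustrative:

```
(not cf.client.bot and cf.bot_management.score lt 30)
```

Paired with a Managed Challenge action, this targets requests classified as automated (score below 30) that do not come from a known good bot.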
The above data is also validated by looking at what Cloudflare service is mitigating unverified bot traffic. Although our DDoS mitigation system is automatically blocking HTTP traffic across all customers, this only accounts for 13% of non-verified bot mitigations. On the other hand, WAF, and mostly customer defined rules, account for 77% of such mitigations, much higher than mitigations across all traffic (57%) discussed at the start of the report. Note that Bot Management is specifically called out but refers to our “default” one-click rules, which are counted separately from the bot fields used in WAF Custom Rules.
Tabular format for reference:
API traffic insights
58% of dynamic (non cacheable) traffic is API related
The growth of overall API traffic observed by Cloudflare is not slowing down. We now see 58% of total dynamic traffic classified as API related, a 3 percentage point increase compared to Q1.
Our investment in API Gateway is also following a similar growth trend. Over the last quarter we have released several new API security features.
First, we’ve made API Discovery easier to use with a new inbox view. API Discovery inventories your APIs to prevent shadow IT and zombie APIs, and now customers can easily filter to show only new endpoints found by API Discovery. Saving endpoints from API Discovery places them into our Endpoint Management system.
Next, we’ve added a brand new API security feature offered only at Cloudflare: the ability to control API access by client behavior. We call it Sequence Mitigation. Customers can now create positive or negative security models based on the order of API paths accessed by clients. You can now ensure that clients following your application’s normal flow are the only ones accessing your API, shutting out brute-force attempts that ignore normal application functionality. For example, in a banking application you can now enforce that the funds transfer endpoint can only be accessed after a user has accessed the account balance check endpoint.
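The underlying idea can be sketched in a few lines. This is a hedged illustration of the concept, not Cloudflare's implementation, and the endpoint paths are hypothetical:

```python
# Sketch of sequence-based access control: a sensitive endpoint is only
# allowed after the client has visited its prerequisite endpoint.
from collections import defaultdict

# Hypothetical policy: funds transfer requires a prior balance check.
PREREQUISITES = {"/api/transfer": "/api/balance"}

# Paths each client has already accessed (keyed by a client identifier).
seen_paths: dict[str, set[str]] = defaultdict(set)

def allow_request(client_id: str, path: str) -> bool:
    required = PREREQUISITES.get(path)
    if required is not None and required not in seen_paths[client_id]:
        return False  # sequence violated: block or challenge the request
    seen_paths[client_id].add(path)
    return True
```

A client that jumps straight to `/api/transfer` is rejected, while one that first calls `/api/balance` proceeds normally, mirroring the banking example above.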
We’re excited to continue releasing API security and API management features for the remainder of 2023 and beyond.
65% of global API traffic is generated by browsers
The percentage of API traffic generated by browsers has remained very stable over the past quarter. By this we mean browser-issued HTTP requests that do not serve HTML content rendered directly by the browser, more commonly known as AJAX calls, which would normally serve JSON based responses.
HTTP Anomalies are the most common attack vector on API endpoints
Just like last quarter, HTTP Anomalies remain the most common mitigated attack vector on API traffic. SQLi attacks, however, are non-negligible, contributing approximately 11% of total mitigated traffic, closely followed by XSS attacks at around 9%.
Tabular format for reference (top 5):
As we move our application security report to a quarterly cadence, we plan to deepen some of the insights and to provide additional data from some of our newer products such as Page Shield, allowing us to look beyond HTTP traffic, and explore the state of third party dependencies online.
Stay tuned and keep an eye on Cloudflare Radar for more frequent application security reports and insights.
In 2023, cybersecurity continues to be, in most cases, a need-to-have for those who don’t want to take chances on getting caught in a cyberattack and its consequences. Attacks have grown more sophisticated, while conflicts, both online and offline and often at the same time, continue, including in Ukraine. Governments have heightened their cyber warnings and put together strategies, including around critical infrastructure such as health and education. All of this at a time when there have never been so many online risks, or so many people online: over five billion in July 2023, 64.5% of the world’s total population of eight billion.
Here we take a look at what we’ve been discussing in 2023, so far, in our Cloudflare blog related to attacks and online security in general, with several August reading list suggestions. From new trends, products, initiatives or partnerships, including AI service safety, to record-breaking blocked cyberattacks. On that note, our AI hub (ai.cloudflare.com) was just launched.
Our global network, a.k.a. Supercloud, gives us a unique vantage point. Cloudflare’s extensive scale also helps enhance security, with preventive services powered by machine learning, like our recent WAF attack scoring system that stops attacks, and even malware, before they are known.
Recently, we announced our presence in more than 300 cities across over 100 countries, with interconnections to over 12,000 networks and still growing. We provide services for around 20% of websites online and to millions of Internet properties.
Attacks increasing. A readiness and trust game
Let’s start by providing some context. There are all sorts of attacks, but generally speaking they have been increasing. In Q2 2023, Cloudflare blocked an average of 140 billion cyber threats per day. One year ago, when we wrote a similar blog post, the figure was 124 billion, a 13% increase year over year. Attackers are not holding back: more sophisticated attacks are on the rise, with sectors such as education and healthcare among the targets.
Artificial intelligence (AI), like machine learning, is not new, but it has been trending in 2023, and certain capabilities are more generally available. This has raised concerns about the quality of deception and even AI hackers.
This year, governments have also continued to release reports and warnings. In 2022, the US Cybersecurity and Infrastructure Security Agency (CISA) created the Shields Up initiative in response to Russia's invasion of Ukraine. In March 2023, the Biden-Harris Administration released the National Cybersecurity Strategy aimed at securing the Internet.
That said, here are reading suggestions related to country-level attacks, as well as cybersecurity policy and trust:
One year of war in Ukraine: Internet trends, attacks, and resilience (✍️)
This blog post reports on Internet insights during the war in Europe, and discusses how Ukraine's Internet remained resilient in spite of dozens of attacks, and disruptions in three different stages of the conflict.
The White House’s National Cybersecurity Strategy asks the private sector to step up to fight cyber attacks. Cloudflare is ready (✍️)
In March 2023, the White House released the National Cybersecurity Strategy, aimed at preserving and extending an open, free, global, interoperable, reliable, and secure Internet. Cloudflare welcomed the Strategy, and the much-needed policy initiative, highlighting the need to defend critical infrastructure, where Zero Trust plays a big role. In the same month, Cloudflare announced its commitment to the 2023 Summit for Democracy. Also related to these initiatives, in March 2022 we launched our very own Critical Infrastructure Defense Project (CIDP), and in December 2022 Cloudflare launched Project Safekeeping, offering Zero Trust solutions to certain eligible entities in Australia, Japan, Germany, Portugal and the United Kingdom.
Secure by default: recommendations from the CISA’s newest guide, and how Cloudflare follows these principles to keep you secure (✍️)
In this April 2023 post we reviewed the “secure by default” posture and the recommendations that were the focus of a recently published guide jointly authored by several international agencies, with contributions from the US, UK, Australia, Canada, Germany, the Netherlands, and New Zealand. Long story short: using the right tools, machine learning, a secure-by-default and secure-by-design approach, and a few guiding principles makes all the difference.
Nine years of Project Galileo and how the last year has changed it (✍️) + Project Galileo Report (✍️)
Between July 1, 2022, and May 5, 2023, Cloudflare mitigated 20 billion attacks against organizations protected under Project Galileo. This is an average of nearly 67.7 million cyber attacks per day over the last 10 months.
For LGBTQ+ organizations, we saw an average of 790,000 attacks mitigated per day over the last 10 months, with a majority of those classified as DDoS attacks.
Attacks targeting civil society organizations are generally increasing. We have broken down an attack aimed at a prominent organization, with the request volume climbing as high as 667,000 requests per second. Before and after this time the organization saw little to no traffic.
In Ukraine, spikes in traffic to organizations that provide emergency response and disaster relief coincide with bombings of the country over the 10-month period.
Project Cybersafe Schools: bringing security tools for free to small K-12 school districts in the US (✍️)
In August 2023, Cloudflare introduced an initiative aimed at small K-12 public school districts: Project Cybersafe Schools. Announced as part of the Back to School Safely: K-12 Cybersecurity Summit at the White House on August 7, Project Cybersafe Schools will support eligible K-12 public school districts with a package of Zero Trust cybersecurity solutions, for free and with no time limit. In Q2 2023, Cloudflare blocked an average of 70 million cyber threats each day targeting the U.S. education sector, and saw a 47% increase in DDoS attacks on that sector quarter-over-quarter.
Privacy concerns also go hand in hand with security online, and we’ve provided further details on this topic earlier this year in relation to our investment in security to protect data privacy. Cloudflare also achieved a new EU Cloud Code of Conduct privacy validation.
DDoS attacks (distributed denial-of-service) are not new, but they’re still one of the main tools used by attackers. In Q2 2023, Cloudflare witnessed an unprecedented escalation in DDoS attack sophistication, and our report delves into this phenomenon. Pro-Russian hacktivists REvil, Killnet and Anonymous Sudan joined forces to attack Western sites. Mitel vulnerability exploits surged by a whopping 532%, and attacks on crypto rocketed up by 600%. Also, more broadly, attacks exceeding three hours have increased by 103% quarter-over-quarter.
This blog post and the corresponding Cloudflare Radar report shed light on some of these trends. On the other hand, in our Q1 2023 DDoS threat report we observed a surge in hyper-volumetric attacks leveraging a new generation of botnets composed of Virtual Private Servers (VPS).
Killnet and AnonymousSudan DDoS attack Australian university websites, and threaten more attacks — here’s what to do about it (✍️)
In late March 2023, Cloudflare observed HTTP DDoS attacks targeting university websites in Australia. Universities were the first of several groups publicly targeted by the pro-Russian hacker group Killnet and their affiliate AnonymousSudan. This post not only shows a trend in targeted attacks by these organized groups, but also provides specific recommendations.
Uptick in healthcare organizations experiencing targeted DDoS attacks (✍️)
In early February 2023, Cloudflare, as well as other sources, observed an uptick in healthcare organizations targeted by a pro-Russian hacktivist group claiming to be Killnet. There was an increase in the number of these organizations seeking our help to defend against such attacks. Additionally, healthcare organizations that were already protected by Cloudflare experienced mitigated HTTP DDoS attacks.
Cloudflare mitigates record-breaking 71 million request-per-second DDoS attack (✍️)
Also in early February, Cloudflare detected and mitigated dozens of hyper-volumetric DDoS attacks, one of which broke the record. The majority of attacks peaked in the ballpark of 50-70 million requests per second (rps), with the largest exceeding 71M rps. This was the largest reported HTTP DDoS attack on record to date, more than 54% higher than the previously reported record of 46M rps in June 2022.
SLP: a new DDoS amplification vector in the wild (✍️)
This blog post from April 2023 highlights how researchers have published the discovery of a new DDoS reflection/amplification attack vector leveraging the SLP protocol (Service Location Protocol). The prevalence of SLP-based DDoS attacks is also expected to rise, but our automated DDoS protection system keeps Cloudflare customers safe.
Additionally, this year, also in April, a new and improved Network Analytics dashboard was introduced, providing security professionals insights into their DDoS attack and traffic landscape.
For the second year in a row we published our Application Security Report. There’s a lot to unpack here, in a year when, according to Netcraft, Cloudflare became the most commonly used web server vendor within the top million sites (it now has a 22% market share). Here are some highlights:
6% of daily HTTP requests (proxied by the Cloudflare network) are mitigated on average. It’s down two percentage points compared to last year.
DDoS mitigation accounts for more than 50% of all mitigated traffic, so it’s still the largest contributor to mitigated layer 7 (application layer) HTTP requests.
Compared to last year, however, mitigation by the Cloudflare WAF (Web Application Firewall) has grown significantly, and now accounts for nearly 41% of mitigated requests.
HTTP Anomaly (examples include malformed method names, null byte characters in headers, etc.) is the most frequent layer 7 attack vector mitigated by the WAF.
30% of HTTP traffic is automated (bot traffic). 55% of dynamic (non cacheable) traffic is API related. 65% of global API traffic is generated by browsers.
16% of non-verified bot HTTP traffic is mitigated.
HTTP Anomaly surpasses SQLi (code injection technique used to attack data-driven applications) as the most common attack vector on API endpoints. Brute force account takeover attacks are increasing. Also, Microsoft Exchange is attacked more than WordPress.
How Cloudflare can help stop malware before it reaches your app (✍️)
In April 2023, we made the job of application security teams easier by providing a content scanning engine integrated with our Web Application Firewall (WAF), so that malicious files uploaded by end users never reach origin servers in the first place. Since September 2022, our Cloudflare WAF has become smarter at helping stop attacks before they are known.
Announcing WAF Attack Score Lite and Security Analytics for business customers (✍️)
In March 2023, we announced that our machine learning powered WAF and Security Analytics view were made available to our Business plan customers, to help detect and stop attacks before they are known. In a nutshell: Early detection + Powerful mitigation = Safer Internet.
Phishing remains the primary way to breach organizations. According to CISA, 90% of cyber attacks begin with it. The FBI has been publishing Internet Crime Reports, and in the most recent one, phishing continues to be ranked #1 among the top five Internet crime types. Reported phishing crimes and victim losses increased by 1038% since 2018, reaching 300,497 incidents in 2022. The FBI has also referred to Business Email Compromise as the $43 billion problem facing organizations, with complaints increasing by 127% in 2022 compared to 2021, resulting in $3.31 billion in related losses.
In 2022, Cloudflare Area 1 kept 2.3 billion unwanted messages out of customer inboxes. This year, that number will be easily surpassed.
In August 2023, Cloudflare published its first phishing threats report — fully available here. The report explores key phishing trends and related recommendations, based on email security data from May 2022 to May 2023.
Some takeaways include how deceptive links were the #1 phishing tactic, and how attackers are evolving both how they get you to click and when they weaponize the link. Also, identity deception takes multiple forms (including business email compromise (BEC) and brand impersonation) and can easily bypass email authentication standards.
More than one year ago, Cloudflare acquired Area 1 Security, and with that we added to our Cloudflare Zero Trust platform an essential cloud-native email security service that identifies and blocks attacks before they hit user inboxes. This year, we’ve obtained one of the best ways to provide customers assurance that the sensitive information they send to us can be kept safe: a SOC 2 Type II report.
Email Link Isolation: your safety net for the latest phishing attacks (✍️)
Back in January, during our CIO Week, Email Link Isolation was made generally available to all our customers. What is it? A safety net for the suspicious links that end up in inboxes and that users may click — anyone can click on the wrong link by mistake. This added protection turns Cloudflare Area 1 into the most comprehensive email security solution when it comes to protecting against malware, phishing attacks, etc. Also, in true Cloudflare fashion, it’s a one-click deployment.
Phishing attacks come in all sorts of forms to fool people. This high-level “phish” guide goes over the different types (while email is definitely the most common, there are others) and provides some tips to help you catch these scams before you fall for them.
Top 50 most impersonated brands in phishing attacks and new tools you can use to protect your employees from them (✍️)
Here we go over arguably one of the hardest challenges any security team constantly faces: detecting, blocking, and mitigating the risks of phishing attacks. During our Security Week in March, a Top 50 list of the most impersonated brands in phishing attacks was presented (spoiler alert: AT&T Inc., PayPal, and Microsoft are on the podium).
Additionally, we announced the expansion of the phishing protections available to Cloudflare One customers by automatically identifying and blocking so-called “confusable” domains. What is Cloudflare One? It’s our Zero Trust network-as-a-service platform: customizable, integrated with what a company already uses, and built for peace of mind and fearless online use. Cloudflare One, along with the use of physical security keys, was what thwarted the sophisticated “Oktapus” phishing attack targeting Cloudflare employees last summer.
Groundbreaking technology brings groundbreaking challenges. Cloudflare has experience protecting some of the largest AI applications in the world, and this blog post shares some tips and best practices for securing generative AI applications. Success in consumer-facing applications inherently exposes the underlying AI systems to millions of users, vastly increasing the potential attack surface.
Using the power of Cloudflare’s global network to detect malicious domains using machine learning (✍️)
With the objective of preventing threats before they create havoc, here we cover how Cloudflare recently developed proprietary models leveraging machine learning and other advanced analytical techniques. These are able to detect security threats that take advantage of the domain name system (DNS), known as the phonebook of the Internet.
How sophisticated scammers and phishers are preying on customers of Silicon Valley Bank (✍️)
Threat actors overwhelmingly use topical events as lures to breach trust and trick unsuspecting victims. The news about what happened at Silicon Valley Bank earlier this year was one of the latest such events, calling for vigilance against opportunistic phishing campaigns using SVB as the lure. At that time, Cloudforce One (Cloudflare’s threat operations and research team) significantly increased our brand monitoring focused on SVB’s digital presence.
Analyze any URL safely using the Cloudflare Radar URL Scanner (✍️)
Cloudflare Radar is our free platform for Internet insights. In March, our URL Scanner was launched, allowing anyone to analyze a URL safely. The report it creates contains a myriad of technical details, including a phishing scan. Many users rely on it for security reasons, while others just use it for an under-the-hood look at any webpage.
Unmasking the top exploited vulnerabilities of 2022 (✍️)
Last but not least, this August 2023 blog post focuses on the most commonly exploited vulnerabilities of 2022, according to the Cybersecurity and Infrastructure Security Agency (CISA). Given Cloudflare’s role as a reverse proxy to a large portion of the Internet, we delve into how the Common Vulnerabilities and Exposures (CVEs) mentioned by CISA are being exploited on the Internet, and what has been learned.
If you want to learn about making a website more secure (and faster) while loading third-party tools like Google Analytics 4, Facebook CAPI, TikTok, and others, you can get to know our Cloudflare Zaraz solution. It reached general availability in July 2023.
“The Internet was not built for what it has become”.
This is how one of Cloudflare’s S-1 document sections begins. It is also commonly referenced in our blog to show how this remarkable experiment, the network of networks, wasn’t designed for the role it now plays in our daily lives and work. Security, performance and privacy are crucial in a time when anyone can be the target of an attack, threat, or vulnerability. While AI can aid in mitigating attacks, it also adds complexity to attackers' tactics.
With that in mind, as we've highlighted in this 2023 reading list and online attacks guide, prioritizing the prevention of harmful attack outcomes remains the best strategy. Hopefully, it will let some of the attacks on your company go unnoticed or pass without consequence, or even turn them into interesting stories to share when you check your security dashboard.
And because the future of a private and secure Internet is also on our minds, it's worth mentioning that in March 2022, Cloudflare enabled post-quantum cryptography support for all our customers. The topic of post-quantum cryptography, designed to be secure against the threat of quantum computers, is quite interesting and worth delving into, but even without knowing the details, it's good to know that protection is already here.
Welcome to the second DDoS threat report of 2023. DDoS attacks, or distributed denial-of-service attacks, are a type of cyber attack that aims to disrupt websites (and other types of Internet properties) to make them unavailable for legitimate users by overwhelming them with more traffic than they can handle — similar to a driver stuck in a traffic jam on the way to the grocery store.
We see a lot of DDoS attacks of all types and sizes and our network is one of the largest in the world spanning more than 300 cities in over 100 countries. Through this network we serve over 63 million HTTP requests per second at peak and over 2 billion DNS queries every day. This colossal amount of data gives us a unique vantage point to provide the community access to insightful DDoS trends.
For our regular readers, you might notice a change in the layout of this report. We used to follow a set pattern to share our insights and trends about DDoS attacks. But with the landscape of DDoS threats changing as DDoS attacks have become more powerful and sophisticated, we felt it's time for a change in how we present our findings. So, we'll kick things off with a quick global overview, and then dig into the major shifts we're seeing in the world of DDoS attacks.
Reminder: an interactive version of this report is also available on Cloudflare Radar. Furthermore, we’ve also added a new interactive component that will allow you to dive deeper into attack activity in each country or region.
The DDoS landscape: a look at global patterns
The second quarter of 2023 was characterized by thought-out, tailored and persistent waves of DDoS attack campaigns on various fronts, including:
Multiple DDoS offensives orchestrated by pro-Russian hacktivist groups REvil, Killnet and Anonymous Sudan against Western interest websites.
An increase in deliberately engineered and targeted DNS attacks alongside a 532% surge in DDoS attacks exploiting the Mitel vulnerability (CVE-2022-26143). Cloudflare contributed to disclosing this zero-day vulnerability last year.
Attacks targeting Cryptocurrency companies increased by 600%, alongside a broader 15% increase in HTTP DDoS attacks. Among these, we've noticed an alarming escalation in attack sophistication, which we cover in more depth below.
Additionally, one of the largest attacks we’ve seen this quarter was an ACK flood DDoS attack which originated from a Mirai-variant botnet comprising approximately 11K IP addresses. The attack targeted an American Internet Service Provider. It peaked at 1.4 terabits per second (Tbps) and was automatically detected and mitigated by Cloudflare’s systems.
Despite general figures indicating an increase in overall attack durations, most attacks are short-lived, and so was this one: it lasted only two minutes. More broadly, however, we’ve seen that attacks exceeding 3 hours have increased by 103% QoQ.
Now having set the stage, let’s dive deeper into these shifts we’re seeing in the DDoS landscape.
Hacktivist alliance dubbed “Darknet Parliament” aims at Western banks and SWIFT network
On June 14, Pro-Russian hacktivist groups Killnet, a resurgence of REvil and Anonymous Sudan announced that they have joined forces to execute “massive” cyber attacks on the Western financial system including European and US banks, and the US Federal Reserve System. The collective, dubbed “Darknet Parliament”, declared its first objective was to paralyze SWIFT (Society for Worldwide Interbank Financial Telecommunication). A successful DDoS attack on SWIFT could have dire consequences because it's the main service used by financial institutions to conduct global financial transactions.
Beyond a handful of publicized events such as the Microsoft outage which was reported by the media, we haven’t observed any novel DDoS attacks or disruptions targeting our customers. Our systems have been automatically detecting and mitigating attacks associated with this campaign. Over the past weeks, as many as 10,000 of these DDoS attacks were launched by the Darknet Parliament against Cloudflare-protected websites (see graph below).
Despite the hacktivists’ statements, Banking and Financial Services websites were only the ninth most attacked industry — based on attacks we’ve seen against our customers as part of this campaign.
The most attacked industries were Computer Software, Gambling & Casinos and Gaming. Telecommunications and Media outlets came in fourth and fifth, respectively. Overall, the largest attack we witnessed in this campaign peaked at 1.7 million requests per second (rps) and the average was 65,000 rps.
For perspective, earlier this year we mitigated the largest attack in recorded history, peaking at 71 million rps. So these attacks were very small compared to Cloudflare’s scale, but not necessarily for an average website. Therefore, we shouldn’t underestimate their damage potential against unprotected or suboptimally configured websites.
Sophisticated HTTP DDoS attacks
An HTTP DDoS attack is a DDoS attack over the Hypertext Transfer Protocol (HTTP). It targets HTTP Internet properties such as websites and API gateways. Over the past quarter, HTTP DDoS attacks increased by 15% quarter-over-quarter (QoQ) despite a 35% decrease year-over-year (YoY).
Additionally, we've observed an alarming uptick in highly randomized and sophisticated HTTP DDoS attacks over the past few months. It appears that the threat actors behind these attacks have deliberately engineered them to overcome mitigation systems by imitating browser behavior very accurately, in some cases by introducing a high degree of randomization in various properties such as user agents and JA3 fingerprints, to name a few. An example of such an attack is provided below. Each color represents a different randomization feature.
Furthermore, in many of these attacks, the threat actors appear to keep their attack rates-per-second relatively low to avoid detection and hide amongst legitimate traffic.
This level of sophistication has previously been associated with state-level and state-sponsored threat actors, and it seems these capabilities are now at the disposal of cyber criminals. Their operations have already targeted prominent businesses such as a large VoIP provider, a leading semiconductor company, and a major payment & credit card provider to name a few.
Protecting websites against sophisticated HTTP DDoS attacks requires intelligent protection that is automated and fast, that leverages threat intelligence, traffic profiling and Machine Learning/statistical analysis to differentiate between attack traffic and user traffic. Moreover, even increasing caching where applicable can help reduce the risk of attack traffic impacting your origin. Read more about DDoS protection best practices here.
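To make the randomization signal concrete, here is a minimal sketch of one heuristic such traffic-profiling systems can use (the field names are hypothetical and this is a drastic simplification, not Cloudflare's implementation): legitimate browsers reuse a stable (user agent, JA3) fingerprint across many requests, while randomizing attack tools present a fresh one on nearly every request.

```python
from collections import defaultdict

def fingerprint_randomization_score(requests):
    """Score each client by how many distinct (user_agent, ja3) pairs
    it presents relative to its request count. Legitimate browsers
    reuse a stable fingerprint; highly randomized attack tools do not.

    `requests` is an iterable of (client_ip, user_agent, ja3) tuples.
    Returns {client_ip: distinct_fingerprints / total_requests}.
    """
    seen = defaultdict(set)
    totals = defaultdict(int)
    for ip, ua, ja3 in requests:
        seen[ip].add((ua, ja3))
        totals[ip] += 1
    return {ip: len(seen[ip]) / totals[ip] for ip in totals}

traffic = (
    # a browser: four requests, one stable fingerprint
    [("203.0.113.7", "Mozilla/5.0", "ja3-aaaa")] * 4
    # a randomizing bot: every request carries a fresh fingerprint
    + [(f"198.51.100.9", f"UA-{i}", f"ja3-{i}") for i in range(4)]
)
scores = fingerprint_randomization_score(traffic)
# browser: 0.25 (one fingerprint over four requests); bot: 1.0
```

A real system would combine many such signals with threat intelligence and machine learning, but a score near 1.0 over a large request count is already a strong hint of randomized attack traffic.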
DNS Laundering DDoS attacks
The Domain Name System, or DNS, serves as the phone book of the Internet. DNS helps translate the human-friendly website address (e.g. www.cloudflare.com) to a machine-friendly IP address (e.g. 22.214.171.124). By disrupting DNS servers, attackers impact the machines’ ability to connect to a website, and by doing so making websites unavailable to users.
Over the past quarter, the most common attack vector was DNS-based DDoS attacks — 32% of all DDoS attacks were over the DNS protocol. Amongst these, one of the more concerning attack types we’ve seen increasing is the DNS Laundering attack which can pose severe challenges to organizations that operate their own authoritative DNS servers.
The term “Laundering” in the DNS Laundering attack name refers to the analogy of money laundering, the devious process of making illegally-gained proceeds, often referred to as "dirty money," appear legal. Similarly, in the DDoS world, a DNS Laundering attack is the process of making bad, malicious traffic appear as good, legitimate traffic by laundering it via reputable recursive DNS resolvers.
In a DNS Laundering attack, the threat actor will query subdomains of a domain that is managed by the victim’s DNS server. The prefix that defines the subdomain is randomized and is never used more than once or twice in such an attack. Due to the randomization element, recursive DNS servers will never have a cached response and will need to forward the query to the victim’s authoritative DNS server. The authoritative DNS server is then bombarded by so many queries that it can no longer serve legitimate queries, or even crashes altogether.
From the protection point of view, DNS administrators cannot block the attack source, because the source includes reputable recursive DNS servers like Google’s 8.8.8.8 and Cloudflare’s 1.1.1.1. Nor can they block all queries to the attacked domain, because it is a valid domain for which they want to preserve access for legitimate queries.
The above factors make it very challenging to distinguish legitimate queries from malicious ones. A large Asian financial institution and a North American DNS provider are amongst recent victims of such attacks. An example of such an attack is provided below.
Similar to the protection strategies outlined for HTTP applications, protecting DNS servers also requires a precise, fast, and automated approach. Leveraging a managed DNS service or a DNS reverse proxy such as Cloudflare’s can help absorb and mitigate the attack traffic. For those more sophisticated DNS attacks, a more intelligent solution is required that leverages statistical analysis of historical data to be able to differentiate between legitimate queries and attack queries.
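As an illustration of the kind of statistical analysis involved, the sketch below (a simplification with an assumed entropy threshold, not Cloudflare's actual implementation) flags queries whose subdomain prefix is both previously unseen and high-entropy — the signature of a laundering attack in which every query is a guaranteed cache miss:

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a DNS label.
    Random prefixes like 'x7f3kq9z' score high; common hostnames
    such as 'www' score low."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def suspicious_queries(qnames, zone, entropy_threshold=2.5):
    """Flag queries under `zone` whose first label looks random and
    has never been seen before (so it cannot be served from cache)."""
    seen = set()
    flagged = []
    for q in qnames:
        if not q.endswith("." + zone):
            continue
        prefix = q[: -len(zone) - 1].split(".")[-1]
        if prefix not in seen and label_entropy(prefix) > entropy_threshold:
            flagged.append(q)
        seen.add(prefix)
    return flagged

flagged = suspicious_queries(
    ["www.example.com", "x7f3kq9z.example.com", "mail.example.com"],
    "example.com",
)
# only the random-looking, never-seen prefix is flagged
```

Production-grade detection would baseline each zone's normal query distribution over time rather than rely on a fixed threshold, but the core idea of separating cacheable, repeated names from one-shot random prefixes is the same.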
The rise of the Virtual Machine Botnets
As we’ve previously disclosed, we are witnessing an evolution in botnet DNA. The era of VM-based DDoS botnets has arrived and with it hyper-volumetric DDoS attacks. These botnets are comprised of Virtual Machines (VMs, or Virtual Private Servers, VPS) rather than Internet of Things (IoT) devices which makes them so much more powerful, up to 5,000 times stronger.
Because of the computational and bandwidth resources that are at the disposal of these VM-based botnets, they’re able to generate hyper-volumetric attacks with a much smaller fleet size compared to IoT-based botnets.
These botnets have executed some of the largest recorded DDoS attacks, including the 71 million request per second DDoS attack. Multiple organizations, including an industry-leading gaming platform provider, have already been targeted by this new generation of botnets.
Cloudflare has proactively collaborated with prominent cloud computing providers to combat these new botnets. Through the quick and dedicated actions of these providers, significant components of these botnets have been neutralized. Since this intervention, we have not observed any further hyper-volumetric attacks, a testament to the efficacy of our collaboration.
While we already enjoy a fruitful alliance with the cybersecurity community in countering botnets when we identify large-scale attacks, our goal is to streamline and automate this process further. We extend an invitation to cloud computing providers, hosting providers, and other general service providers to join Cloudflare’s free Botnet Threat Feed. This would provide visibility into attacks originating within their networks, contributing to our collective efforts to dismantle botnets.
“Startblast”: Exploiting Mitel vulnerabilities for DDoS attacks
This exploit operates by reflecting traffic off vulnerable servers, amplifying it in the process, with a factor as high as 220 billion percent. The vulnerability stems from an unauthenticated UDP port exposed to the public Internet, which could allow malicious actors to issue a 'startblast' debugging command, simulating a flurry of calls to test the system.
As a result, for each test call, two UDP packets are sent to the issuer, enabling an attacker to direct this traffic to any IP and port number to amplify a DDoS attack. Despite the vulnerability, only a few thousand of these devices are exposed, limiting the potential scale of attack, and attacks must run serially, meaning each device can only launch one attack at a time.
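The bandwidth amplification factor of a reflection vector like this is simply the ratio of bytes arriving at the victim to bytes the attacker had to send. A toy calculation (the packet sizes and call counts here are illustrative assumptions, not measurements of the Mitel vector):

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification factor of a reflection vector:
    bytes delivered to the victim per byte sent by the attacker."""
    return response_bytes / request_bytes

# Illustrative numbers only: one small spoofed request triggers a
# long run of test calls, each emitting two UDP packets toward the
# spoofed (victim) address.
req = 100                    # bytes in the attacker's spoofed request
resp = 2 * 1_000 * 56        # 2 packets per call x 1,000 calls x 56 bytes
factor = amplification_factor(req, resp)  # 1120.0
```

The same arithmetic explains why a vector that keeps emitting packets long after a single trigger request can reach the extreme amplification figures quoted above.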
Overall, in the past quarter, we’ve seen additional emerging threats such as DDoS attacks abusing the TeamSpeak3 protocol. This attack vector increased by a staggering 403% this quarter.
TeamSpeak is a proprietary voice-over-Internet Protocol (VoIP) application that runs over UDP to help gamers talk with one another in real time. Talking, instead of just typing, can significantly improve a gaming team’s efficiency and help them win. DDoS attacks targeting TeamSpeak servers may be launched by rival groups attempting to disrupt their communication during real-time multiplayer games and thus hurt their team’s performance.
DDoS hotspots: The origins of attacks
Overall, HTTP DDoS attacks increased by 15% QoQ despite a 35% decrease YoY. Additionally, network-layer DDoS attacks decreased this quarter by approximately 14%.
In terms of total volume of attack traffic, the US was the largest source of HTTP DDoS attacks. Three out of every thousand requests we saw were part of HTTP DDoS attacks originating from the US. China came in second place and Germany in third place.
Some countries naturally receive more traffic due to factors such as market size, and therefore attract more attacks. So while it’s interesting to understand the total amount of attack traffic originating from a given country, it is also helpful to remove that bias by normalizing the attack traffic against all traffic for that country.
When doing so, we see a different pattern. The US doesn’t even make it into the top ten. Instead, Mozambique, Egypt and Finland take the lead as the source countries of the most HTTP DDoS attack traffic relative to all of their traffic. Almost a fifth of all HTTP traffic originating from Mozambique IP addresses was part of DDoS attacks.
Using the same calculation methodology but for bytes, Vietnam remains the largest source of network-layer DDoS attacks (aka L3/4 DDoS attacks) for the second consecutive quarter — and the amount even increased by 58% QoQ. Over 41% of all bytes that were ingested in Cloudflare’s Vietnam data centers were part of L3/4 DDoS attacks.
Industries under attack: examining DDoS attack targets
When examining HTTP DDoS attack activity in Q2, Cryptocurrency websites were targeted with the largest amount of HTTP DDoS attack traffic. Six out of every ten thousand HTTP requests towards Cryptocurrency websites behind Cloudflare were part of these attacks. This represents a 600% increase compared to the previous quarter.
After Crypto, Gaming and Gambling websites came in second place as their attack share increased by 19% QoQ. Marketing and Advertising websites were not far behind in third place, with little change in their share of attacks.
However, when we look at the amount of attack traffic relative to all traffic for any given industry, the numbers paint a different picture. Last quarter, Non-profit organizations were attacked the most: 12% of traffic to Non-profits was HTTP DDoS attacks. Cloudflare protects more than 2,271 Non-profit organizations in 111 countries as part of Project Galileo, which celebrated its ninth anniversary this year. Over the past months, an average of 67.7 million cyber attacks targeted Non-profits on a daily basis.
Overall, the amount of DDoS attacks on Non-profits increased by 46%, bringing the percentage of attack traffic to 17.6%. However, despite this growth, the Management Consulting industry jumped to first place, with 18.4% of its traffic being DDoS attacks.
When descending the layers of the OSI model, the Internet networks that were most targeted belonged to the Information Technology and Services industry. Almost every third byte routed to them was part of L3/4 DDoS attacks.
Surprisingly enough, companies operating in the Music industry were the second most targeted industry, followed by Broadcast Media and Aviation & Aerospace.
Top attacked industries: a regional perspective
Cryptocurrency websites experienced the highest number of attacks worldwide, while Management Consulting and Non-profit sectors were the most targeted considering their total traffic. However, when we look at individual regions, the situation is a bit different.
The Telecommunications industry remains the most attacked industry in Africa for the second consecutive quarter. The Banking, Financial Services and Insurance (BFSI) industry follows as the second most attacked. The majority of the attack traffic originated from Asia (35%) and Europe (25%).
For the past two quarters, the Gaming and Gambling industry was the most targeted industry in Asia. In Q2, however, the Gaming and Gambling industry dropped to second place and Cryptocurrency took the lead as the most attacked industry (~50%). Substantial portions of the attack traffic originated from Asia itself (30%) and North America (30%).
For the third consecutive quarter, the Gaming & Gambling industry remains the most attacked industry in Europe. The Hospitality and Broadcast Media industries follow not too far behind as the second and third most attacked. Most of the attack traffic came from within Europe itself (40%) and from Asia (20%).
Surprisingly, half of all attack traffic targeting Latin America was aimed at the Sporting Goods industry. In the previous quarter, the BFSI was the most attacked industry. Approximately 35% of the attack traffic originated from Asia, and another 25% originated from Europe.
The Media & Newspaper industries were the most attacked in the Middle East. The vast majority of attack traffic originated from Europe (74%).
For the second consecutive quarter, Marketing & Advertising companies were the most attacked in North America (approximately 35%). Manufacturing and Computer Software companies came in second and third places, respectively. The main sources of the attack traffic were Europe (42%) and the US itself (35%).
This quarter, the Biotechnology industry was the most attacked. Previously, it was the Health & Wellness industry. Most of the attack traffic originated from Asia (38%) and Europe (25%).
Countries and regions under attack: examining DDoS attack targets
When examining the total volume of attack traffic, last quarter Israel leaped to the front as the most attacked country. This quarter, attacks targeting Israeli websites decreased by 33%, bringing it down to fourth place. The US takes the lead again as the most attacked country, followed by Canada and Singapore.
If we normalize the data per country and region and divide the attack traffic by the total traffic, we get a different picture. Palestine jumps to the first place as the most attacked country. Almost 12% of all traffic to Palestinian websites were HTTP DDoS attacks.
Last quarter, we observed a striking deviation at the network layer, with Finnish networks under Cloudflare's shield emerging as the primary target. This surge was likely correlated with the diplomatic talks that precipitated Finland's formal integration into NATO. Roughly 83% of all incoming traffic to Finland comprised cyberattacks, with China a close second at 68% attack traffic.
This quarter, however, paints a very different picture. Finland has receded from the top ten, and Chinese Internet networks behind Cloudflare have ascended to the first place. Almost two-thirds of the byte streams towards Chinese networks protected by Cloudflare were malicious. Following China, Switzerland saw half of its inbound traffic constituting attacks, and Turkey came third, with a quarter of its incoming traffic identified as hostile.
Ransom DDoS attacks
Occasionally, DDoS attacks are carried out to extort ransom payments. We’ve been surveying Cloudflare customers for over three years now, tracking the occurrence of Ransom DDoS attack events.
Unlike Ransomware attacks, where victims typically fall prey to downloading a malicious file or clicking on a compromised email link which locks, deletes or leaks their files until a ransom is paid, Ransom DDoS attacks can be much simpler for threat actors to execute. Ransom DDoS attacks bypass the need for deceptive tactics such as luring victims into opening dubious emails or clicking on fraudulent links, and they don't necessitate a breach into the network or access to corporate resources.
Over the past quarter, reports of Ransom DDoS attacks decreased. One out of ten respondents reported being threatened or subject to Ransom DDoS attacks.
Wrapping up: the ever-evolving DDoS threat landscape
In recent months, there's been an alarming escalation in the sophistication of DDoS attacks. And even the largest and most sophisticated attacks that we’ve seen may last only a few minutes or even seconds, which doesn’t give a human sufficient time to respond. Before the PagerDuty alert is even sent, the attack may be over and the damage done. Recovering from a DDoS attack can take much longer than the attack itself, just as a boxer might need a while to recover from a punch that lasted only a fraction of a second.
Security is not one single product or a click of a button, but rather a process involving multiple layers of defense to reduce the risk of impact. Cloudflare's automated DDoS defense systems consistently safeguard our clients from DDoS attacks, freeing them up to focus on their core business operations. These systems are complemented by the vast breadth of Cloudflare capabilities such as firewall, bot detection, API protection and even caching which can all contribute to reducing the risk of impact.
The DDoS threat landscape is evolving and increasingly complex, demanding more than just quick fixes. Thankfully, with Cloudflare's multi-layered defenses and automatic DDoS protections, our clients are equipped to navigate these challenges confidently. Our mission is to help build a better Internet, and so we continue to stand guard, ensuring a safer and more reliable digital realm for all.
How we calculate Ransom DDoS attack insights
Cloudflare’s systems constantly analyze traffic and automatically apply mitigation when DDoS attacks are detected. Each attacked customer is prompted with an automated survey to help us better understand the nature of the attack and the success of the mitigation. For over two years, Cloudflare has been surveying attacked customers. One of the questions in the survey asks the respondents if they received a threat or a ransom note. Over the past two years, on average, we collected 164 responses per quarter. The responses of this survey are used to calculate the percentage of Ransom DDoS attacks.
How we calculate geographical and industry insights
Source country
At the application layer, we use the attacking IP addresses to understand the origin country of the attacks. That is because at that layer, IP addresses cannot be spoofed (i.e., altered). However, at the network layer, source IP addresses can be spoofed. So, instead of relying on IP addresses to understand the source, we use the location of our data centers where the attack packets were ingested. We’re able to achieve geographical accuracy due to our large global coverage in over 285 locations around the world.

Target country
For both application-layer and network-layer DDoS attacks, we group attacks and traffic by our customers’ billing country. This lets us understand which countries are subject to more attacks.

Target industry
For both application-layer and network-layer DDoS attacks, we group attacks and traffic by our customers’ industry according to our customer relations management system. This lets us understand which industries are subject to more attacks.

Total volume vs. percentage
For both source and target insights, we look at the total volume of attack traffic compared to all traffic as one data point. Additionally, we also look at the percentage of attack traffic from a specific country, to a specific country, or to a specific industry. This gives us an “attack activity rate” for a given country or industry, normalized by its total traffic levels. This helps us remove the bias of a country or industry that normally receives a lot of traffic, and therefore a lot of attack traffic as well.
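The normalization described above can be sketched in a few lines (all figures here are hypothetical):

```python
def attack_activity_rate(attack_requests, total_requests):
    """Share of each country's (or industry's) traffic that was attack
    traffic. Normalizing by total traffic removes the bias toward
    geographies that simply see a lot of traffic overall."""
    return {k: attack_requests[k] / total_requests[k] for k in total_requests}

# Hypothetical figures: country A emits far more attack traffic in
# absolute terms, but country B is far more attack-heavy relatively.
attacks = {"A": 3_000_000, "B": 200_000}
totals  = {"A": 1_000_000_000, "B": 1_000_000}
rates = attack_activity_rate(attacks, totals)
# A: 0.003 (3 in every 1,000 requests), B: 0.2 (a fifth of all traffic)
```

This is why a country can top the absolute-volume ranking while not appearing at all in the normalized top ten, as seen with the US and Mozambique earlier in the report.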
How we calculate attack characteristics
To calculate the attack size, duration, attack vectors and emerging threats, we bucket attacks and then provide the share of each bucket out of the total amount for each dimension. On the new Radar component, these trends are calculated by number of bytes instead. Since attacks may vary greatly in number of bytes from one another, this could lead to trends differing between the reports and the Radar component.
General disclaimer and clarification
When we describe ‘top countries’ as the source or target of attacks, it does not necessarily mean that the country itself was attacked, but rather that organizations using that country as their billing country were targeted. Similarly, attacks originating from a country do not mean that the country launched them, but rather that the attack was launched from IP addresses that have been mapped to that country. Threat actors operate global botnets with nodes all over the world, and in many cases also use Virtual Private Networks and proxies to obfuscate their true location. So if anything, the source country could indicate the presence of exit nodes or botnet nodes within that country.
Earlier today, April 25, 2023, researchers Pedro Umbelino at Bitsight and Marco Lux at Curesec published their discovery of CVE-2023-29552, a new DDoS reflection/amplification attack vector leveraging the SLP protocol. If you are a Cloudflare customer, your services are already protected from this new attack vector.
Service Location Protocol (SLP) is a “service discovery” protocol invented by Sun Microsystems in 1997. Like other service discovery protocols, it was designed to allow devices in a local area network to interact without prior knowledge of each other. SLP is a relatively obsolete protocol and has mostly been supplanted by more modern alternatives like UPnP, mDNS/Zeroconf, and WS-Discovery. Nevertheless, many commercial products still offer support for SLP.
Since SLP has no method for authentication, it should never be exposed to the public Internet. However, Umbelino and Lux have discovered that upwards of 35,000 Internet endpoints have their devices’ SLP service exposed and accessible to anyone. Additionally, they have discovered that the UDP version of this protocol has an amplification factor of up to 2,200x, which is the third largest discovered to-date.
Cloudflare expects the prevalence of SLP-based DDoS attacks to rise significantly in the coming weeks as malicious actors learn how to exploit this newly discovered attack vector.
Cloudflare customers are protected
If you are a Cloudflare customer, our automated DDoS protection system already protects your services from these SLP amplification attacks. If you are a network operator, you should ensure that you are not exposing the SLP protocol directly to the public Internet, to avoid your devices being exploited to launch attacks. You should consider blocking UDP port 427 via access control lists or other means. This port is rarely used on the public Internet, meaning it is relatively safe to block without impacting legitimate traffic. Cloudflare Magic Transit customers can use the Magic Firewall to craft and deploy such rules.
We’re pleased to introduce Cloudflare’s new and improved Network Analytics dashboard. It’s now available to Magic Transit and Spectrum customers on the Enterprise plan.
The dashboard provides network operators better visibility into traffic behavior, firewall events, and DDoS attacks as observed across Cloudflare’s global network. Some of the dashboard’s data points include:
Top traffic and attack attributes
Visibility into DDoS mitigations and Magic Firewall events
Detailed packet samples including full packet headers and metadata
This dashboard was the outcome of a full refactoring of our network-layer data logging pipeline. The new data pipeline is decentralized and much more flexible than the previous one — making it more resilient, performant, and scalable for when we add new mitigation systems, introduce new sampling points, and roll out new services. A technical deep-dive blog is coming soon, so stay tuned.
In this blog post, we will demonstrate how the dashboard helps network operators:
Understand their network better
Respond to DDoS attacks faster
Easily generate security reports for peers and managers
Understand your network better
One of the main responsibilities network operators bear is ensuring the operational stability and reliability of their network. Cloudflare’s Network Analytics dashboard shows network operators where their traffic is coming from, where it’s heading, and what type of traffic is being delivered or mitigated. These insights, along with user-friendly drill-down capabilities, help network operators identify changes in traffic, surface abnormal behavior, and alert on critical events that require their attention, helping them ensure their network’s stability and reliability.
Starting at the top, the Network Analytics dashboard shows network operators their traffic rates over time along with the total throughput. The entire dashboard is filterable: you can drill down using select-to-zoom, change the time range, and toggle between a packet and bit/byte view. This helps you gain a quick understanding of traffic behavior and identify sudden dips or surges in traffic.
Cloudflare customers advertising their own IP prefixes from the Cloudflare network can also see annotations for BGP advertisement and withdrawal events. This provides additional context atop the traffic rates and behavior.
One of the many benefits of Cloudflare’s Network Analytics dashboard is its geographical accuracy. Identification of the traffic source usually involves correlating the source IP addresses to a city and country. However, network-layer traffic is subject to IP spoofing. Malicious actors can spoof (alter) their source IP address to obfuscate their origin (or their botnet’s nodes) while attacking your network. Correlating the location (e.g., the source country) based on spoofed IPs would therefore result in spoofed countries. Using spoofed countries would skew the global picture network operators rely on.
To overcome this challenge and provide our users accurate geoinformation, we rely on the location of the Cloudflare data center wherein the traffic was ingested. We’re able to achieve geographical accuracy with high granularity, because we operate data centers in over 285 locations around the world. We use BGP Anycast which ensures traffic is routed to the nearest data center within BGP catchment.
Detailed mitigation analytics
The dashboard lets network operators understand exactly what is happening to their traffic while it’s traversing the Cloudflare network. The All traffic tab provides a summary of attack traffic that was dropped by the three mitigation systems, and the clean traffic that was passed to the origin.
Each additional tab focuses on one mitigation system, showing traffic dropped by the corresponding mitigation system and traffic that was passed through it. This provides network operators almost the same level of visibility as our internal support teams have. It allows them to understand exactly what Cloudflare systems are doing to their traffic and where in the Cloudflare stack an action is being taken.
Using the detailed tabs, users can better understand the systems’ decisions and which rules are being applied to mitigate attacks. For example, in the Advanced TCP Protection tab, you can view how the system is classifying TCP connection states. In the screenshot below, you can see the distribution of packets according to connection state. For example, a sudden spike in Out of sequence packets may result in the system dropping them.
Note that the available tabs differ slightly for Spectrum customers, who do not have access to the Advanced TCP Protection and Magic Firewall tabs and therefore only see the first two tabs.
Respond to DDoS attacks faster
Cloudflare detects and mitigates the majority of DDoS attacks automatically. However, when a network operator responds to a sudden increase in traffic or a CPU spike in their data centers, they need to understand the nature of the traffic. Is this a legitimate surge due to a new game release for example, or an unmitigated DDoS attack? In either case, they need to act quickly to ensure there are no disruptions to critical services.
The Network Analytics dashboard can help network operators quickly pattern traffic by switching the time series’ grouping dimensions. They can then use that pattern to drop packets using the Magic Firewall. The default dimension is the outcome, indicating whether traffic was dropped or passed. But by changing the time-series dimension to another field such as the TCP flag, Packet size, or Destination port, a pattern can emerge.
In the example below, we have zoomed in on a surge of traffic. By setting the Protocol field as the grouping dimension, we can see that there is a 5 Gbps surge of UDP packets (totaling 840 GB of throughput out of 991 GB in this time period). This is clearly not the traffic we want, so we can hover and click the UDP indicator to filter by it.
We can then continue to pattern the traffic, and so we set the Source port to be the grouping dimension. We can immediately see that, in this case, the majority of traffic (838 GB) is coming from source port 123. That’s no bueno, so let’s filter by that too.
We can continue iterating to identify the main pattern of the surge. An example of a field that is not necessarily helpful in this case is the Destination port. The time series is only showing us the top five ports but we can already see that it is quite distributed.
We move on to see what other fields can contribute to our investigation. Using the Packet size dimension yields good results: over 771 GB of the traffic was delivered in 286-byte packets.
Assuming that our attack is now sufficiently patterned, we can create a Magic Firewall rule to block the attack by combining those fields. You can combine additional fields to ensure you do not impact your legitimate traffic. For example, if the attack is only targeting a single prefix (e.g., 192.0.2.0/24), you can limit the scope of the rule to that prefix.
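As an illustration, a rule combining the fields patterned above might look like the following expression. This is a hedged sketch: the syntax follows Magic Firewall’s Wireshark-like rule language, but the exact field names and values should be confirmed against the Magic Firewall rule-language reference and your own traffic before deploying.

```
# Illustrative Magic Firewall expression: drop the patterned UDP
# reflection traffic, scoped to the targeted prefix to avoid
# impacting legitimate traffic elsewhere.
ip.dst in {192.0.2.0/24}
  and ip.proto == "udp"
  and udp.srcport == 123
  and ip.len == 286
```

Scoping the rule to the targeted prefix keeps the blast radius small; if the attack shifts, the expression can be iterated using the same patterning workflow.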
If needed for attack mitigation or network troubleshooting, you can also view and export packet samples along with the packet headers. This can help you identify the pattern and sources of the traffic.
Another important role of the network security team is to provide decision makers an accurate view of their threat landscape and network security posture. Understanding those will enable teams and decision makers to prepare and ensure their organization is protected and critical services are kept available and performant. This is where, again, the Network Analytics dashboard comes in to help. Network operators can use the dashboard to understand their threat landscape — which endpoints are being targeted, by which types of attacks, where they are coming from, and how that compares to the previous period.
Using the Network Analytics dashboard, users can create a custom report — filtered and tuned to provide their decision makers a clear view of the attack landscape that’s relevant to them.
In addition, Magic Transit and Spectrum users also receive an automated weekly Network DDoS Report which includes key insights and trends.
Extending visibility from Cloudflare’s vantage point
As we’ve seen in many cases, being unprepared can cost organizations substantial revenue, damage their reputation, erode users’ trust, and burn out teams that must constantly put out fires reactively. Furthermore, impact to organizations that operate in healthcare, water, electricity, and other critical infrastructure industries can cause very serious real-world problems, e.g., hospitals not being able to provide care for patients.
The Network Analytics dashboard aims to reduce the effort and time it takes network teams to investigate and resolve issues, as well as to simplify and automate security reporting. The data is also available via the GraphQL API and Logpush, allowing teams to integrate it into their internal systems and cross-reference it with additional data points.
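For teams pulling the data programmatically, a request to the GraphQL API boils down to POSTing a query document with variables. The sketch below builds such a request body; the endpoint URL is Cloudflare’s public GraphQL Analytics endpoint, but the dataset and field names in the query are illustrative assumptions — consult the GraphQL Analytics API schema for the exact names available to your account.

```python
import json

# Cloudflare's GraphQL Analytics endpoint (POST with an
# "Authorization: Bearer <API token>" header).
GRAPHQL_ENDPOINT = "https://api.cloudflare.com/client/v4/graphql"

# NOTE: dataset/field names below are hypothetical placeholders.
QUERY = """
query NetworkAnalytics($accountTag: string!, $since: Time!, $until: Time!) {
  viewer {
    accounts(filter: { accountTag: $accountTag }) {
      # hypothetical dataset name for network-layer traffic samples
      networkAnalyticsSamples(
        filter: { datetime_geq: $since, datetime_leq: $until }
        limit: 100
      ) {
        sum { bits packets }
        dimensions { outcome protocol }
      }
    }
  }
}
"""

def build_payload(account_tag: str, since: str, until: str) -> str:
    """Serialize the GraphQL request body for an HTTP POST."""
    return json.dumps({
        "query": QUERY,
        "variables": {"accountTag": account_tag, "since": since, "until": until},
    })

body = build_payload("0123456789abcdef", "2023-03-01T00:00:00Z", "2023-03-02T00:00:00Z")
assert json.loads(body)["variables"]["accountTag"] == "0123456789abcdef"
```

From here, the serialized body can be sent with any HTTP client, and the returned rows fed into internal dashboards or alerting pipelines.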
Welcome to the first DDoS threat report of 2023. DDoS attacks, or distributed denial-of-service attacks, are a type of cyber attack that aim to overwhelm Internet services such as websites with more traffic than they can handle, in order to disrupt them and make them unavailable to legitimate users. In this report, we cover the latest insights and trends about the DDoS attack landscape as we observed across our global network.
Kicking off 2023 with a bang
Threat actors kicked off 2023 with a bang. The start of the year was characterized by a series of hacktivist campaigns against Western targets including banking, airports, healthcare and universities — mainly by the pro-Russian Telegram-organized groups Killnet and more recently by AnonymousSudan.
While Killnet-led and AnonymousSudan-led cyberattacks stole the spotlight, we haven’t witnessed any novel or exceedingly large attacks by them.
We did see, however, an increase of hyper-volumetric DDoS attacks launched by other threat actors — with the largest one peaking above 71 million requests per second (rps) — exceeding Google’s previous world record of 46M rps by 55%.
Back to Killnet and AnonymousSudan, while no noteworthy attacks were reported, we shouldn’t underestimate the potential risks. Unprotected Internet properties can still be, and have been, taken down by Killnet-led or AnonymousSudan-led cyber campaigns. Organizations should take proactive defensive measures to reduce the risks.
Business as usual for South American Telco targeted by terabit-strong attacks thanks to Cloudflare
Another large attack we saw in Q1 was a 1.3 Tbps (terabits per second) DDoS attack that targeted a South American telecommunications provider. The attack lasted only a minute. It was a multi-vector attack involving DNS and UDP attack traffic. The attack was part of a broader campaign which included multiple terabit-strong attacks originating from a 20,000-strong Mirai-variant botnet. Most of the attack traffic originated from the US, Brazil, Japan, Hong Kong, and India. Cloudflare systems automatically detected and mitigated it without any impact to the customer’s networks.
Hyper-volumetric attacks leverage a new generation of botnets composed of Virtual Private Servers (VPS) instead of Internet of Things (IoT) devices.
Historically, large botnets relied on exploitable IoT devices such as smart security cameras to orchestrate their attacks. Despite the limited throughput of each IoT device, together — usually numbering in the hundreds of thousands or millions — they generated enough traffic to disrupt their targets.
The new generation of botnets uses a fraction of the number of devices, but each device is substantially stronger. Cloud computing providers offer virtual private servers to allow startups and businesses to create performant applications. The downside is that they also allow attackers to create high-performance botnets that can be as much as 5,000x stronger. Attackers gain access to virtual private servers by compromising unpatched servers and hacking into management consoles using leaked API credentials.
Cloudflare has been working with key cloud computing providers to crack down on these VPS-based botnets. Substantial portions of such botnets have been disabled thanks to the cloud computing providers’ rapid response and diligence. Since then, we have yet to see additional hyper-volumetric attacks — a testament to the fruitful collaboration.
We have excellent collaboration with the cyber-security community to take down botnets once we detect such large-scale attacks, but we want to make this process even simpler and more automated.
We invite Cloud computing providers, hosting providers and general service providers to sign up for Cloudflare’s free Botnet Threat Feed to gain visibility on attacks launching from within their networks — and help us dismantle botnets.
Key highlights from this quarter
In Q1, 16% of surveyed customers reported a Ransom DDoS attack — steady compared to the previous quarter, but a 60% increase YoY.
Non-profit organizations and Broadcast Media were two of the most targeted industries. Finland was the largest source of HTTP DDoS attacks in terms of percentage of attack traffic, and the main target of network-layer DDoS attacks. Israel was the most attacked country worldwide by HTTP DDoS attacks.
Large-scale volumetric DDoS attacks — attacks above 100 Gbps — increased by 6% QoQ. DNS-based attacks became the most popular vector. Similarly, we observed surges in SPSS-based DDoS attacks, DNS amplification attacks, and GRE-based DDoS attacks.
Ransom DDoS attacks
Often, DDoS attacks are carried out to extort ransom payments. We continue to survey Cloudflare customers and track the ratio of DDoS events where the target received a ransom note. This number has been steadily rising through 2022 and currently stands at 16% – the same as in Q4 2022.
As opposed to Ransomware attacks, where the victim is usually tricked into downloading a file or clicking an email link that encrypts and locks their computer files until they pay a ransom fee, Ransom DDoS attacks can be much easier for attackers to execute. Ransom DDoS attacks don’t require tricking the victim into opening an email or clicking a link, nor do they require a network intrusion or a foothold in the corporate assets.
In a Ransom DDoS attack, the attacker doesn’t need access to the victim’s computer; they just need to bombard it with a sufficiently large amount of traffic to take down websites, DNS servers, or any other type of Internet-connected property, making it unavailable or poorly performing for users. The attacker will demand a ransom payment, usually in the form of Bitcoin, to stop and/or avoid further attacks.
The months of January 2023 and March 2023 were the second highest in terms of Ransom DDoS activity as reported by our users. The highest month thus far remains November 2022 — the month of Black Friday, Thanksgiving, and Singles Day in China — a lucrative month for threat actors.
Who and what are being attacked?
Top targeted countries
Perhaps related to the judicial reform and the protests opposing it, in Q1 Israel jumped to first place as the country targeted by the most HTTP DDoS attack traffic — even above the United States. This is an astonishing figure: just short of a single percent of all HTTP traffic that Cloudflare processed in the first quarter of the year was part of HTTP DDoS attacks targeting Israeli websites. Following closely behind Israel are the US, Canada, and Turkey.
In terms of the percentage of attack traffic compared to all traffic to a given country, Slovenia and Georgia came out on top. Approximately 20% of all traffic to Slovenian and Georgian websites was HTTP DDoS attack traffic. Next in line were the small Caribbean dual-island nation of Saint Kitts and Nevis, and Turkey. While Israel topped the previous graph, here it ranks as the ninth most attacked country — above Russia, and still high compared to previous quarters.
Looking at the total amount of network-layer DDoS attack traffic, China came in first place: almost 18% of all network-layer DDoS attack traffic came from China. Singapore came in a close second with a 17% share. The US came in third, followed by Finland.
When we normalize attacks to a country by all traffic to that country, Finland jumps to the first place, perhaps due to its newly approved NATO membership. Nearly 83% of all traffic to Finland was network-layer attack traffic. China followed closely with 68% and Singapore again with 49%.
Top targeted industries
In terms of overall bandwidth, globally, Internet companies saw the largest amount of HTTP DDoS attack traffic, followed by the Marketing and Advertising, Computer Software, Gaming / Gambling, and Telecommunications industries.
By percentage of attack traffic out of total traffic to an industry, Non-profits were the most targeted in the first quarter of the year, followed by Accounting firms. Despite the uptick in attacks on healthcare, it didn’t make it into the top ten. Also near the top were the Chemicals, Government, and Energy Utilities & Waste industries. Looking at the US, almost 2% of all traffic to US federal websites was part of DDoS attacks.
On a regional scale, the Gaming & Gambling industry was the most targeted in Asia, Europe, and the Middle East. In South and Central America, the Banking, Financial Services and Insurance (BFSI) industry was the most targeted. In North America it was the Marketing & Advertising industry followed by Telecommunications — which was also the most attacked industry in Africa. Last but not least, in Oceania, the Health, Wellness and Fitness industry was the most targeted by HTTP DDoS attacks.
Diving lower in the OSI stack, based on the total volume of L3/4 attack traffic, the most targeted industries were Information Technology and Services, Gaming / Gambling, and Telecommunications.
When comparing the attack traffic to the total traffic per industry, we see a different picture. Almost every second byte transmitted to Broadcast Media companies was L3/4 DDoS attack traffic.
Where attacks are coming from
Top source countries
In the first quarter of 2023, Finland was the largest source of HTTP DDoS attacks in terms of the percentage of attack traffic out of all traffic per country. Closely after Finland, the British Virgin Islands came in second place, followed by Libya and Barbados.
In terms of absolute volumes, the most HTTP DDoS attack traffic came from US IP addresses. China came in second, followed by Germany, Indonesia, Brazil, and Finland.
On the L3/4 side of things, Vietnam was the largest source of L3/4 DDoS attack traffic. Almost a third of all L3/4 traffic we ingested in our Vietnam data centers was attack traffic. Following Vietnam were Paraguay, Moldova, and Jamaica.
What attack types and sizes we see
Attack size and duration
When looking at the types of attacks that are launched against our customers and our own network and applications, we can see that the majority of attacks are short and small; 86% of network-layer DDoS attacks end within 10 minutes, and 91% of attacks never exceed 500 Mbps.
Only one out of every fifty attacks ever exceeds 10 Gbps, and only one out of every thousand attacks exceeds 100 Gbps.
Having said that, larger attacks are slowly increasing in quantity and frequency. Last quarter, attacks exceeding 100 Gbps saw a 67% increase QoQ in their quantity. This quarter, the growth has slowed down a bit to 6%, but it’s still growing. In fact, there was an increase in all volumetric attacks excluding the ‘small’ bucket where the majority fall into — as visualized in the graph below. The largest growth was in the 10-100 Gbps range; an 89% increase QoQ.
This quarter we saw a tectonic shift. With a 22% share, SYN floods scooched down to second place, making DNS-based DDoS attacks the most popular attack vector (30%). Almost a third of all L3/4 DDoS attacks were DNS-based: either DNS floods or DNS amplification/reflection attacks. Not far behind, UDP-based attacks came in third with a 21% share.
Every quarter we see the reemergence of old, sometimes even ancient, attack vectors. What this tells us is that even decade-old vulnerabilities are still being exploited to launch attacks. Threat actors recycle and reuse old methods — perhaps hoping that organizations have dropped their protections against them.
In the first quarter of 2023, there was a massive surge in SPSS-based DDoS attacks, DNS amplification attacks and GRE-based DDoS attacks.
SPSS-based DDoS attacks increased by 1,565% QoQ
The Statistical Product and Service Solutions (SPSS) is an IBM-developed software suite for use cases such as data management, business intelligence, and criminal investigation. The Sentinel RMS License Manager server is used to manage licensing for software products such as the IBM SPSS system. Back in 2021, two vulnerabilities (CVE-2021-22713 and CVE-2021-38153) were identified in the Sentinel RMS License Manager server which can be used to launch reflection DDoS attacks. Attackers can send large amounts of specially crafted license requests to the server, causing it to generate a response that is much larger than the original request. This response is sent back to the victim’s IP address, effectively amplifying the size of the attack and overwhelming the victim’s network with traffic. This type of attack is known as a reflection DDoS attack, and it can cause significant disruption to the availability of software products that rely on the Sentinel RMS License Manager, such as IBM SPSS Statistics. Applying the available patches to the license manager is essential to prevent these vulnerabilities from being exploited and to protect against reflection DDoS attacks.
DNS amplification DDoS attacks increased by 958% QoQ
DNS amplification attacks are a type of DDoS attack that involves exploiting vulnerabilities in the Domain Name System (DNS) infrastructure to generate large amounts of traffic directed at a victim’s network. Attackers send DNS requests to open DNS resolvers that have been misconfigured to allow recursive queries from any source, and use these requests to generate responses that are much larger than the original query. The attackers then spoof the victim’s IP address, causing the large responses to be directed at the victim’s network, overwhelming it with traffic and causing a denial of service. The challenge of mitigating DNS amplification attacks is that the attack traffic can be difficult to distinguish from legitimate traffic, making it difficult to block at the network level. To mitigate DNS amplification attacks, organizations can take steps such as properly configuring DNS resolvers, implementing rate-limiting techniques, and using traffic filtering tools to block traffic from known attack sources.
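The power of amplification rests on a simple ratio: bytes reflected toward the victim per byte the attacker sends. A minimal sketch, using illustrative packet sizes (the specific numbers are assumptions, not measurements from the attacks described above):

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """How many bytes reach the victim per byte the attacker sends."""
    return response_bytes / request_bytes

# Illustrative sizes: a ~60-byte DNS query that elicits a ~3,000-byte
# response from a misconfigured open resolver yields ~50x amplification.
factor = amplification_factor(60, 3000)
assert factor == 50.0

# At 50x, an attacker needs only ~2 Gbps of spoofed queries to direct
# a 100 Gbps flood at the victim.
assert 100e9 / factor == 2e9
```

This is why closing open resolvers matters: shrinking the response (or refusing to answer at all) collapses the ratio and, with it, the economics of the attack.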
GRE-based DDoS attacks increased by 835% QoQ
GRE-based DDoS attacks involve using the Generic Routing Encapsulation (GRE) protocol to flood a victim’s network with large amounts of traffic. Attackers create multiple GRE tunnels between compromised hosts to send traffic to the victim’s network. These attacks are difficult to detect and filter, as the traffic appears as legitimate traffic on the victim’s network. Attackers can also use source IP address spoofing to make it appear that the traffic is coming from legitimate sources, making it difficult to block at the network level. GRE-based DDoS attacks pose several risks to targeted organizations, including downtime, disruption of business operations, and potential data theft or network infiltration. Mitigating these attacks requires the use of advanced traffic filtering tools that can detect and block attack traffic based on its characteristics, as well as techniques such as rate limiting and source IP address filtering to block traffic from known attack sources.
The DDoS threat landscape
In recent months, there has been an increase in longer and larger DDoS attacks across various industries, with volumetric attacks being particularly prominent. Non-profit and Broadcast Media companies were some of the top targeted industries. DNS DDoS attacks also became increasingly prevalent.
As DDoS attacks are typically carried out by bots, automated detection and mitigation are crucial for effective defense. Cloudflare’s automated systems provide constant protection against DDoS attacks for our customers, allowing them to focus on other aspects of their business. We believe that DDoS protection should be easily accessible to organizations of all sizes, and have been offering free and unlimited protection since 2017.
At Cloudflare, our mission is to help build a better Internet — one that is more secure and faster for all.
We invite you to join our DDoS Trends Webinar to learn more about emerging threats and effective defense strategies.
A note about methodologies
How we calculate Ransom DDoS attack insights
Cloudflare’s systems constantly analyze traffic and automatically apply mitigation when DDoS attacks are detected. Each attacked customer is prompted with an automated survey to help us better understand the nature of the attack and the success of the mitigation. For over two years, Cloudflare has been surveying attacked customers. One of the questions in the survey asks the respondents if they received a threat or a ransom note. Over the past two years, on average, we collected 164 responses per quarter. The responses of this survey are used to calculate the percentage of Ransom DDoS attacks.
How we calculate geographical and industry insights
Source country
At the application layer, we use the attacking IP addresses to understand the origin country of the attacks. That is because at that layer, IP addresses cannot be spoofed (i.e., altered). However, at the network layer, source IP addresses can be spoofed. So, instead of relying on IP addresses to understand the source, we use the location of our data centers where the attack packets were ingested. We’re able to get geographical accuracy due to our large global coverage in over 285 locations around the world.
Target country
For both application-layer and network-layer DDoS attacks, we group attacks and traffic by our customers’ billing country. This lets us understand which countries are subject to more attacks.
Target industry
For both application-layer and network-layer DDoS attacks, we group attacks and traffic by our customers’ industry according to our customer relations management system. This lets us understand which industries are subject to more attacks.
Total volume vs. percentage
For both source and target insights, we look at the total volume of attack traffic compared to all traffic as one data point. Additionally, we also look at the percentage of attack traffic from a specific country, to a specific country, or to a specific industry. This gives us an “attack activity rate” for a given country or industry which is normalized by its total traffic levels. This helps us remove the bias of a country or industry that normally receives a lot of traffic, and therefore a lot of attack traffic as well.
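The normalization described above can be sketched as a simple ratio; the traffic figures below are made up for illustration:

```python
def attack_activity_rate(attack: float, total: float) -> float:
    """Share of all traffic to (or from) a country/industry that is attack traffic."""
    return attack / total

# Illustrative figures: a high-traffic country can receive more attack
# bytes in absolute terms yet have a lower normalized attack rate.
big_country   = {"attack": 400.0, "total": 100_000.0}   # 0.4% attack traffic
small_country = {"attack":  90.0, "total":     450.0}   # 20% attack traffic

assert big_country["attack"] > small_country["attack"]  # absolute view favors big
assert attack_activity_rate(**big_country) < attack_activity_rate(**small_country)
```

The absolute view and the normalized rate can rank the same countries very differently, which is why the report presents both.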
How we calculate attack characteristics
To calculate the attack size, duration, attack vectors and emerging threats, we bucket attacks and then provide the share of each bucket out of the total amount for each dimension.
General disclaimer and clarification
When we describe ‘top countries’ as the source or target of attacks, it does not necessarily mean that the country itself was attacked, but rather that organizations using that country as their billing country were targeted. Similarly, attacks originating from a country do not mean that that country launched them, but rather that the attacks were launched from IP addresses that have been mapped to that country. Threat actors operate global botnets with nodes all over the world, and in many cases also use Virtual Private Networks and proxies to obfuscate their true location. So if anything, the source country could indicate the presence of exit nodes or botnet nodes within that country.
This was a weekend of record-breaking DDoS attacks. Over the weekend, Cloudflare detected and mitigated dozens of hyper-volumetric DDoS attacks. The majority of attacks peaked in the ballpark of 50-70 million requests per second (rps), with the largest exceeding 71 million rps. This is the largest reported HTTP DDoS attack on record, more than 54% higher than the previous reported record of 46M rps in June 2022.
The attacks were HTTP/2-based and targeted websites protected by Cloudflare. They originated from over 30,000 IP addresses. Some of the attacked websites included a popular gaming provider, cryptocurrency companies, hosting providers, and cloud computing platforms. The attacks originated from numerous cloud providers, and we have been working with them to crack down on the botnet.
Over the past year, we’ve seen more attacks originate from cloud computing providers. For this reason, we will be providing service providers that operate their own autonomous system a free Botnet Threat Feed. The feed will provide service providers threat intelligence about their own IP space — attacks originating from within their autonomous system. Service providers that operate their own IP space can now sign up to the early access waiting list.
Is this related to the Super Bowl or Killnet?
No. This campaign of attacks arrives less than two weeks after the Killnet DDoS campaign that targeted healthcare websites. Based on the methods and targets, we do not believe that these recent attacks are related to the healthcare campaign. Furthermore, yesterday was the US Super Bowl, and we also do not believe that this attack campaign is related to the game event.
What are DDoS attacks?
Distributed denial-of-service (DDoS) attacks are cyber attacks that aim to take down Internet properties and make them unavailable for users. These types of cyberattacks can be very effective against unprotected websites, and they can be very inexpensive for attackers to execute.
An HTTP DDoS attack usually involves a flood of HTTP requests towards the target website. The attacker’s objective is to bombard the website with more requests than it can handle. Given a sufficiently high number of requests, the website’s server will not be able to process all of the attack requests along with the legitimate user requests. Users will experience this as website-load delays, timeouts, and eventually not being able to connect to their desired websites at all.
To make attacks larger and more complicated, attackers usually leverage a network of bots — a botnet. The attacker will orchestrate the botnet to bombard the victim’s websites with HTTP requests. A sufficiently large and powerful botnet can generate very large attacks as we’ve seen in this case.
However, building and operating botnets requires a lot of investment and expertise. What is the average Joe to do? Well, an average Joe who wants to launch a DDoS attack against a website doesn’t need to start from scratch. They can hire one of numerous DDoS-as-a-Service platforms for as little as $30 per month. The more you pay, the larger and longer an attack you’re going to get.
Why DDoS attacks?
Over the years, it has become easier, cheaper, and more accessible for attackers and attackers-for-hire to launch DDoS attacks. But as easy as it has become for the attackers, we want to make sure that it is even easier – and free – for defenders of organizations of all sizes to protect themselves against DDoS attacks of all types.
Unlike Ransomware attacks, Ransom DDoS attacks don’t require an actual system intrusion or a foothold within the targeted network. Usually Ransomware attacks start once an employee naively clicks an email link that installs and propagates the malware. There’s no need for that with DDoS attacks. They are more like a hit-and-run attack. All a DDoS attacker needs to know is the website’s address and/or IP address.
Is there an increase in DDoS attacks?
Yes. The size, sophistication, and frequency of attacks have been increasing over the past months. In our latest DDoS threat report, we saw that the number of HTTP DDoS attacks increased by 79% year-over-year. Furthermore, the number of volumetric attacks exceeding 100 Gbps grew by 67% quarter-over-quarter (QoQ), and the number of attacks lasting more than three hours increased by 87% QoQ.
But it doesn’t end there. The audacity of attackers has been increasing as well. In our latest DDoS threat report, we saw that Ransom DDoS attacks steadily increased throughout the year. They peaked in November 2022, when one out of every four surveyed customers reported being subject to Ransom DDoS attacks or threats.
Should I be worried about DDoS attacks?
Yes. If your website, server, or network is not protected against volumetric DDoS attacks by a cloud service that provides automatic detection and mitigation, we strongly recommend that you consider one.
Cloudflare customers shouldn’t be worried, but should be aware and prepared. Below is a list of recommended steps to ensure your security posture is optimized.
What steps should I take to defend against DDoS attacks?
Cloudflare’s systems have been automatically detecting and mitigating these DDoS attacks.
Cloudflare offers many features and capabilities that you may already have access to but may not be using. So, as an extra precaution, we recommend taking advantage of these capabilities to improve and optimize your security posture:
Ensure all DDoS Managed Rules are set to default settings (High sensitivity level and mitigation actions) for optimal DDoS activation.
Cloudflare Enterprise customers that are subscribed to the Advanced DDoS Protection service should consider enabling Adaptive DDoS Protection, which mitigates attacks more intelligently based on your unique traffic patterns.
Deploy firewall rules and rate limiting rules to enforce a combined positive and negative security model. Reduce the traffic allowed to your website based on your known usage.
Ensure your origin is not exposed to the public Internet (i.e., only enable access to Cloudflare IP addresses). As an extra security precaution, we recommend contacting your hosting provider and requesting new origin server IPs if they have been targeted directly in the past.
Customers with access to Managed IP Lists should consider leveraging those lists in firewall rules. Customers with Bot Management should consider leveraging the threat scores within the firewall rules.
Enable caching as much as possible to reduce the strain on your origin servers, and when using Workers, avoid overwhelming your origin server with more subrequests than necessary.
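Rate limiting, recommended above, is commonly implemented as a token bucket: requests spend tokens that refill at a fixed rate, so short bursts are tolerated but sustained floods are dropped. The sketch below is a minimal, deterministic illustration of the idea (not Cloudflare’s implementation; timestamps are passed explicitly to keep it testable):

```python
class TokenBucket:
    """Minimal token-bucket rate limiter: allow at most `rate`
    requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float, start: float = 0.0):
        self.rate = rate          # tokens (requests) added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.last = start

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over the limit: drop or challenge the request

bucket = TokenBucket(rate=10, capacity=10)      # ~10 requests/second
burst = [bucket.allow(now=0.0) for _ in range(15)]  # burst of 15 at t=0
assert burst.count(True) == 10                  # burst capped at capacity
assert bucket.allow(now=0.1) is True            # 0.1 s later, one token refilled
```

Legitimate users rarely exceed the bucket, while a bot flood exhausts it immediately, which is why pairing rate limits with known-usage baselines (the "positive security model" above) is effective.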
Defending against DDoS attacks is critical for organizations of all sizes. While attacks may be initiated by humans, they are executed by bots — and to play to win, you must fight bots with bots. Detection and mitigation must be automated as much as possible, because relying solely on humans to mitigate in real time puts defenders at a disadvantage. Cloudflare’s automated systems constantly detect and mitigate DDoS attacks for our customers, so they don’t have to. This automated approach, combined with our wide breadth of security capabilities, lets customers tailor the protection to their needs.
We’ve been providing unmetered and unlimited DDoS protection for free to all of our customers since 2017, when we pioneered the concept. Cloudflare’s mission is to help build a better Internet. A better Internet is one that is more secure, faster, and reliable for everyone – even in the face of DDoS attacks.
Over the past few days, Cloudflare, as well as other sources, have observed healthcare organizations targeted by a pro-Russian hacktivist group claiming to be Killnet. There has been an increase in the amount of healthcare organizations coming to us to help get out from under these types of attacks. Multiple healthcare organizations behind Cloudflare have also been targeted by HTTP DDoS attacks and Cloudflare has helped them successfully mitigate these attacks. The United States Department of Health and Human Services issued an Analyst Note detailing the threat of Killnet-related cyberattacks to the healthcare industry.
A rise in political tensions and escalation of the conflict in Ukraine are all factors that play into the current cybersecurity threat landscape. Unlike traditional warfare, the Internet has enabled and empowered groups of individuals to carry out targeted attacks regardless of their location or involvement. Distributed denial-of-service (DDoS) attacks have the unfortunate advantage of not requiring an intrusion or a foothold to be launched, and they have become more accessible than ever before.
The attacks observed by the Cloudflare global network do not show a clear indication that they are originating from a single botnet and the attack methods and sources seem to vary. This could indicate the involvement of multiple threat actors acting on behalf of Killnet or it could indicate a more sophisticated, coordinated attack.
Cloudflare application services customers are protected against the attacks. Cloudflare systems have been automatically detecting and mitigating the attacks on behalf of our customers. Our team continues to monitor the situation closely and is prepared to deploy countermeasures, if needed.
As an extra precaution, customers in the Healthcare industry are advised to follow the mitigation recommendations in the “How to Prepare” section below.
Who is Killnet?
Killnet is a group of pro-Russian individuals that gather and communicate on a Telegram channel. The channel provides a space for pro-Russian sympathizers to volunteer their expertise by participating in cyberattacks against Western interests. In the fourth quarter of 2022, Killnet called on its followers to attack US airport websites.
Why DDoS attacks?
DDoS attacks, unlike ransomware, do not require an intrusion or foothold in the target network to be launched. Much like how physical addresses are publicly available via directories or for services like mail delivery, IP addresses and domain names are also publicly available. Unfortunately, this means that every domain name (layer 7) and every network that connects to the Internet (layers 3 & 4) must proactively prepare to defend against DDoS attacks. DDoS attacks are not new threats, but they have become larger, more sophisticated, and more frequent in recent years.
How to prepare
While Cloudflare’s systems have been automatically detecting and mitigating these DDoS attacks, we recommend additional precautionary measures to improve your security posture:
Ensure all other DDoS Managed Rules are set to default settings (High sensitivity level and mitigation actions) for optimal DDoS activation
Cloudflare Enterprise customers with Advanced DDoS should consider enabling Adaptive DDoS Protection, which mitigates traffic that deviates based on your traffic profiles
Deploy firewall rules and rate-limiting rules to enforce a combined positive and negative security model. Reduce the traffic allowed to your website based on your known usage.
Ensure your origin is not exposed to the public Internet (i.e., only enable access to Cloudflare IP addresses)
Customers with access to Managed IP Lists should consider leveraging those lists in firewall rules
Enable caching as much as possible to reduce the strain on your origin servers
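The origin-lockdown advice above can be sketched as a simple source-IP allowlist check at the origin. The CIDR ranges below are placeholders, not Cloudflare's actual ranges; the authoritative list is published by Cloudflare and should be fetched and kept current:

```python
import ipaddress

# Placeholder ranges standing in for the published Cloudflare IP list.
ALLOWED_RANGES = [ipaddress.ip_network(cidr) for cidr in (
    "203.0.113.0/24",   # hypothetical proxy range 1
    "198.51.100.0/24",  # hypothetical proxy range 2
)]

def origin_allows(source_ip: str) -> bool:
    """Return True only if the connection comes from an allowed proxy range,
    so direct-to-origin traffic (which bypasses DDoS protection) is refused."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_RANGES)
```

The same allowlist is more commonly enforced at the host firewall or load balancer rather than in application code, but the logic is identical.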
Though attacks are launched by humans, they are carried out by bots. Defenders who do not leverage automated defenses are at a disadvantage. Cloudflare has helped, and will continue to help, our customers in the healthcare industry prepare for and respond to these attacks.
Under attack? We can help. Visit this webpage or call us at +1 (888) 99 FLARE
Welcome to our DDoS Threat Report for the fourth and final quarter of 2022. This report includes insights and trends about the DDoS threat landscape – as observed across Cloudflare’s global network.
In the last quarter of the year, as billions around the world celebrated holidays and events such as Thanksgiving, Christmas, Hanukkah, Black Friday, Singles’ Day, and New Year, DDoS attacks persisted and even increased in size, frequency, and sophistication whilst attempting to disrupt our way of life.
Cloudflare’s automated DDoS defenses stood firm and mitigated millions of attacks in the last quarter alone. We’ve aggregated and analyzed all of those attacks and distilled the key takeaways to help you better understand the threat landscape.
Global DDoS insights
In the last quarter of the year, despite a year-long decline, the amount of HTTP DDoS attack traffic still increased by 79% YoY. While most of these attacks were small, Cloudflare constantly saw terabit-strong attacks, DDoS attacks in the hundreds of millions of packets per second, and HTTP DDoS attacks peaking in the tens of millions of requests per second launched by sophisticated botnets.
Volumetric attacks surged; the number of attacks exceeding rates of 100 gigabits per second (Gbps) grew by 67% quarter-over-quarter (QoQ), and the number of attacks lasting more than three hours increased by 87% QoQ.
Ransom DDoS attacks steadily increased this year. In Q4, over 16% of respondents reported receiving a threat or ransom demand as part of the DDoS attack that targeted their Internet properties.
Industries most targeted by DDoS attacks
HTTP DDoS attacks constituted 35% of all traffic to Aviation and Aerospace Internet properties.
Similarly, over a third of all traffic to the Gaming/Gambling and Finance industries was network-layer DDoS attack traffic.
A whopping 92% of traffic to Education Management companies was part of network-layer DDoS attacks. Likewise, 73% of traffic to the Information Technology and Services and the Public Relations & Communications industries was also network-layer DDoS attack traffic.
Source and targets of DDoS attacks
In Q4, 93% of network-layer traffic to Chinese Internet properties behind Cloudflare was part of network-layer DDoS attacks. Similarly, over 86% of traffic to Cloudflare customers in Lithuania and 80% of traffic to Cloudflare customers in Finland was attack traffic.
At the application layer, over 42% of all traffic to Georgian Internet properties behind Cloudflare was part of HTTP DDoS attacks, followed by Belize with 28%, and San Marino in third place with just below 20%. Almost 20% of all traffic from Libya that Cloudflare saw was application-layer DDoS attack traffic.
Over 52% of all traffic recorded in Cloudflare’s data centers in Botswana was network-layer DDoS attack traffic. Similarly, in Cloudflare’s data centers in Azerbaijan, Paraguay, and Palestine, network-layer DDoS attack traffic constituted approximately 40% of all traffic.
Quick note: this quarter, we’ve made a change to our algorithms to improve the accuracy of our data, which means that some of these data points are not comparable to previous quarters. Read more about these changes in the next section, Changes to the report methodologies.
Sign up for the DDoS Trends Webinar to learn more about the emerging threats and how to defend against them.
Changes to the report methodologies
Since our first report in 2020, we’ve always used percentages to represent attack traffic, i.e., the percentage of attack traffic out of all traffic including legitimate/user traffic. We did this to normalize the data, avoid data biases, and be more flexible when it comes to incorporating new mitigation system data into the report.
In this report, we’ve introduced changes to the methods used to calculate some of those percentages when we bucket attacks by certain dimensions such as target country, source country, or target industry. In the application-layer sections, we previously divided the amount of attack HTTP/S requests to a given dimension by all the HTTP/S requests to all dimensions. In the network-layer section, specifically in Target industries and Target countries, we used to divide the amount of attack IP packets to a given dimension by the total attack packets to all dimensions.
From this report onwards, we now divide the attack requests (or packets) to a given dimension only by the total requests (or packets) to that given dimension. We made these changes in order to align our calculation methods throughout the report and improve the data accuracy so it better represents the attack landscape.
For example, the top industry attacked by application-layer DDoS attacks using the previous method was the Gaming and Gambling industry. The attack requests towards that industry accounted for 0.084% of all traffic (attack and non-attack) to all industries. Using that same old method, the Aviation and Aerospace industry came in 12th place. Attack traffic towards the Aviation and Aerospace industry accounted for 0.0065% of all traffic (attack and non-attack) to all industries. However, using the new method, the Aviation and Aerospace industry came in as the number one most attacked industry — attack traffic formed 35% of all traffic (attack and non-attack) towards that industry alone. Again using the new method, the Gaming and Gambling industry came in 14th place — 2.4% of its traffic was attack traffic.
The old calculation method used in previous reports to calculate the percentage of attack traffic for each dimension was the following:
The new calculation method used from this report onwards is the following:
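Based on the prose descriptions above, the two formulas can be reconstructed as follows (the notation is ours; the original report presented these as figures):

```latex
% Old method: attack traffic to a dimension d (industry, country, etc.)
% normalized by total traffic across ALL dimensions
\text{attack\%}_{\text{old}}(d) =
  \frac{\text{attack requests (or packets) to } d}
       {\sum_{d'} \text{total requests (or packets) to } d'}

% New method: normalized by total traffic to that dimension ONLY
\text{attack\%}_{\text{new}}(d) =
  \frac{\text{attack requests (or packets) to } d}
       {\text{total requests (or packets) to } d}
```

This explains the Aviation and Aerospace example: a small industry can contribute a tiny share of global traffic (0.0065% under the old method) while still having most of its own traffic (35% under the new method) be attack traffic.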
The changes apply to the following metrics:
Target industries of application-layer DDoS attacks
Target countries of application-layer DDoS attacks
Source of application-layer DDoS attacks
Target industries of network-layer DDoS attacks
Target countries of network-layer DDoS attacks
No other changes were made in the report. The Source of network-layer DDoS attacks metrics have used this method since the first report. Also, no changes were made to the Ransom DDoS attacks, DDoS attack rate, DDoS attack duration, DDoS attack vectors, and Top emerging threats sections. These metrics do not take legitimate traffic into consideration, so no methodology alignment was needed.
With that in mind, let’s dive in deeper and explore these insights and trends. You can also view an interactive version of this report on Cloudflare Radar.
Ransom DDoS attacks
As opposed to Ransomware attacks, where the victim is tricked into downloading a file or clicking on an email link that encrypts and locks their computer files until they pay a ransom fee, Ransom DDoS attacks can be much easier for attackers to launch. Ransom DDoS attacks don’t require tricking the victim into opening an email or clicking a link, nor do they require a network intrusion or a foothold to be carried out.
In a Ransom DDoS attack, the attacker doesn’t need access to the victim’s computer but rather just floods them with enough traffic to negatively impact their Internet services. The attacker will demand a ransom payment, usually in the form of Bitcoin, to stop and/or avoid further attacks.
In the last quarter of 2022, 16% of Cloudflare customers that responded to our survey reported being targeted by HTTP DDoS attacks accompanied by a threat or a ransom note. This represents a 14% increase QoQ but a 16% decrease YoY in reported Ransom DDoS attacks.
How we calculate Ransom DDoS attack trends
Cloudflare’s systems constantly analyze traffic and automatically apply mitigation when DDoS attacks are detected. Each DDoS’d customer is prompted with an automated survey to help us better understand the nature of the attack and the success of the mitigation. For over two years, Cloudflare has been surveying attacked customers. One of the questions in the survey asks the respondents if they received a threat or a ransom note. Over the past two years, on average, we collected 187 responses per quarter. The responses of this survey are used to calculate the percentage of Ransom DDoS attacks.
Application-layer DDoS attack landscape
Application-layer DDoS attacks, specifically HTTP/S DDoS attacks, are cyber attacks that usually aim to disrupt web servers by making them unable to process legitimate user requests. If a server is bombarded with more requests than it can process, the server will drop legitimate requests and – in some cases – crash, resulting in degraded performance or an outage for legitimate users.
Application-layer DDoS attack trends
When we look at the graph below, we can see a clear downward trend in attacks each quarter this year. However, despite the downward trend, HTTP DDoS attacks still increased by 79% when compared to the same quarter of the previous year.
Target industries of application-layer DDoS attacks
In the quarter when many people travel for the holidays, Aviation and Aerospace was the most attacked industry. Approximately 35% of traffic to the industry was part of HTTP DDoS attacks. In second place, the Events Services industry saw over 16% of its traffic as HTTP DDoS attacks.
Next were the Media and Publishing, Wireless, Government Relations, and Non-profit industries. To learn more about how Cloudflare protects non-profit and human rights organizations, read our recent Impact Report.
When we break it down regionally, and after excluding generic industry buckets like Internet and Software, we can see that in North America and Oceania the Telecommunications industry was the most targeted. In South America and Africa, the Hospitality industry was the most targeted. In Europe and Asia, Gaming & Gambling industries were the most targeted. And in the Middle East, the Education industry saw the most attacks.
Target countries of application-layer DDoS attacks
Bucketing attacks by our customers’ billing address helps us understand which countries are more frequently attacked. In Q4, over 42% of all traffic to Georgian HTTP applications behind Cloudflare was DDoS attack traffic.
In second place, Belize-based companies saw almost a third of their traffic as DDoS attacks, followed by San Marino in third with just below 20% of its traffic being DDoS attack traffic.
Source of application-layer DDoS attacks
Quick note before we dive in: if a country is found to be a major source of DDoS attacks, it doesn’t necessarily mean the attacks are launched from within that country. Most often with DDoS attacks, attackers launch attacks remotely in an attempt to hide their true location. Top source countries are more often an indicator that botnet nodes, perhaps hijacked servers or IoT devices, are operating from within that country.
In Q4, almost 20% of all HTTP traffic originating from Libya was part of HTTP DDoS attacks. Similarly, 18% of traffic originating from Timor-Leste, an island country in Southeast Asia just north of Australia, was attack traffic. DDoS attack traffic also accounted for 17% of all traffic originating from the British Virgin Islands and 14% of all traffic originating from Afghanistan.
Network-layer DDoS attacks
While application-layer attacks target the application (Layer 7 of the OSI model) running the service that end users are trying to access (HTTP/S in our case), network-layer DDoS attacks aim to overwhelm network infrastructure, such as in-line routers and servers, and the Internet link itself.
Network-layer DDoS attack trends
After a year of steady increases in network-layer DDoS attacks, in the fourth and final quarter of the year, the number of attacks actually decreased by 14% QoQ and 13% YoY.
Now let’s dive a little deeper to understand the various attack properties such as the attack volumetric rates, durations, attack vectors, and emerging threats.
DDoS attack rate
While the vast majority of attacks are relatively short and small, we did see a spike in longer and larger attacks this quarter. The number of volumetric network-layer DDoS attacks with a rate exceeding 100 Gbps increased by 67% QoQ. Similarly, attacks in the range of 1-100 Gbps increased by ~20% QoQ, and attacks in the range of 500 Mbps to 1 Gbps increased by 108% QoQ.
Below is an example of one of those attacks exceeding 100 Gbps that took place the week after Thanksgiving. This was a 1 Tbps DDoS attack targeted at a Korean-based hosting provider. This particular attack was an ACK flood, and it lasted roughly one minute. Since the hosting provider was using Magic Transit, Cloudflare’s L3 DDoS protection service, the attack was automatically detected and mitigated.
While bit-intensive attacks usually aim to clog up the Internet connection to cause a denial of service event, packet-intensive attacks attempt to crash in-line devices. If an attack sends more packets than you can handle, the servers and other in-line appliances might not be able to process legitimate user traffic, or even crash altogether.
DDoS attack duration
In Q4, the number of shorter attacks lasting less than 10 minutes decreased by 76% QoQ, while the number of longer attacks increased. Most notably, attacks lasting 1-3 hours increased by 349% QoQ, and attacks lasting more than three hours increased by 87% QoQ. Most of the attacks (over 67%) lasted 10-20 minutes.
DDoS attack vectors
The attack vector is the term used to describe the attack method. In Q4, SYN floods remained the attackers’ method of choice — in fact, almost half of all network-layer DDoS attacks were SYN floods.
As a recap, SYN floods are floods of SYN packets (TCP packets with the Synchronize flag set to 1). SYN floods take advantage of the statefulness of the three-way TCP handshake — the process of establishing a connection between a client and a server.
The client starts by sending a SYN packet; the server responds with a Synchronize-Acknowledgement (SYN/ACK) packet and waits for the client’s Acknowledgement (ACK) packet. For every pending connection, a certain amount of memory is allocated. In a SYN flood, the attacker may spoof (alter) the source IP addresses, causing the server to send its SYN/ACK packets to the spoofed addresses — which most likely ignore them. The server then naively waits for ACK packets that will never arrive to complete the handshake. After a while, the server times out and releases those resources, but a sufficient number of SYN packets in a short period can drain the server’s resources, rendering it unable to handle legitimate user connections, or even crashing it altogether.
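The resource-exhaustion dynamic described above can be modeled with a toy half-open-connection table. The backlog size and timeout here are illustrative, not real kernel defaults:

```python
class ToyServer:
    """Toy model of a TCP listener's half-open connection table.
    A real kernel's SYN backlog behaves similarly in spirit."""
    def __init__(self, backlog=128, syn_timeout=30.0):
        self.backlog, self.syn_timeout = backlog, syn_timeout
        self.half_open = {}  # (src_ip, src_port) -> time SYN/ACK was sent

    def on_syn(self, src, now):
        # Expire half-open entries whose ACK never arrived.
        self.half_open = {k: t for k, t in self.half_open.items()
                          if now - t < self.syn_timeout}
        if len(self.half_open) >= self.backlog:
            return "dropped"          # table full: new SYNs are refused
        self.half_open[src] = now     # allocate state, send SYN/ACK, await ACK
        return "syn_ack_sent"

server = ToyServer(backlog=128)
# Attacker floods 500 SYNs from spoofed addresses in a fraction of a second...
results = [server.on_syn((f"198.51.100.{i % 250}", i), now=0.1) for i in range(500)]
# ...so a legitimate client's SYN arriving right after is also dropped.
legit = server.on_syn(("192.0.2.10", 44321), now=0.5)
```

Only once the timeout expires the spoofed entries does the table free up again, which is exactly the window the flood keeps refilling.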
After SYN floods, with a massive drop in share, DNS floods and amplification attacks came in second place, accounting for ~15% of all network-layer DDoS attacks. In third place were UDP-based DDoS attacks and floods, with a 9% share.
Emerging DDoS threats
In Q4, Memcached-based DDoS attacks saw the highest growth — a 1,338% increase QoQ. Memcached is a database caching system used to speed up websites and networks. Memcached servers that support UDP can be abused to launch amplification/reflection DDoS attacks. In this case, the attacker requests content from the caching system while spoofing the victim’s IP address as the source IP in the UDP packets. The victim is then flooded with the Memcached responses, which can be amplified by a factor of up to 51,200x.
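The quoted amplification factor is simply the ratio of reflected response bytes to request bytes. A back-of-the-envelope sketch, with illustrative packet sizes:

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification = bytes reflected at the victim
    per byte the attacker sends with a spoofed source address."""
    return response_bytes / request_bytes

# Illustrative Memcached reflection: a tiny spoofed request for a large
# cached value can elicit up to ~51,200x its size in response traffic.
req = 15               # bytes in a small spoofed request (illustrative)
resp = req * 51_200    # worst-case reflected bytes for that request
factor = amplification_factor(req, resp)
```

The same arithmetic applies to the SNMP and VxWorks reflection vectors discussed in this section, just with much smaller factors.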
In second place, SNMP-based DDoS attacks increased by 709% QoQ. Simple Network Management Protocol (SNMP) is a UDP-based protocol that is often used to discover and manage network devices such as printers, switches, routers, and firewalls of a home or enterprise network, on well-known UDP port 161. In an SNMP reflection attack, the attacker sends numerous SNMP queries to devices on the network while spoofing the source IP address in the packets to be the target’s address; the devices, in turn, reply to that address. The numerous responses from the devices result in the target network being DDoSed.
In third place, VxWorks-based DDoS attacks increased by 566% QoQ. VxWorks is a real-time operating system (RTOS) often used in embedded systems such as Internet of Things (IoT) devices. It is also used in networking and security devices, such as switches, routers, and firewalls. By default, it has a debug service enabled that not only allows anyone to do pretty much anything to those systems, but can also be used for DDoS amplification attacks. This exploit (CVE-2010-2965) was exposed as early as 2010, and as we can see, it is still being used in the wild to generate DDoS attacks.
Target industries of network-layer DDoS attacks
In Q4, the Education Management industry saw the highest percentage of network-layer DDoS attack traffic — 92% of all traffic routed to the industry was network-layer DDoS attack traffic.
Not too far behind, in second and third place, the Information Technology and Services and Public Relations and Communications industries also saw a significant amount of network-layer DDoS attack traffic (~73%). Trailing by a wide margin, the Finance, Gaming / Gambling, and Medical Practice industries came next, with approximately a third of their traffic flagged as attack traffic.
Target countries of network-layer DDoS attacks
Grouping attacks by our customers’ billing country lets us understand which countries are subject to more attacks. In Q4, a staggering 93% of traffic to Chinese Internet properties behind Cloudflare was network-layer DDoS attack traffic.
In second place, Lithuanian Internet properties behind Cloudflare saw 87% of their traffic belonging to network-layer DDoS attacks. Following were Finland, Singapore, and Taiwan with the next highest percentages of attack traffic.
Source of network-layer DDoS attacks
At the application layer, we used the attacking IP addresses to understand the origin country of the attacks. That is because at that layer, IP addresses cannot be spoofed (i.e., altered). However, at the network layer, source IP addresses can be spoofed. So, instead of relying on IP addresses to understand the source, we use the location of the data centers where the attack packets were ingested. We’re able to achieve geographical accuracy due to our large global coverage of more than 275 locations around the world.
In Q4, over 52% of the traffic we ingested in our Botswana-based data center was attack traffic. Not too far behind, over 43% of traffic in Azerbaijan was attack traffic, followed by Paraguay, Palestine, Laos, and Nepal.
Please note: Internet Service Providers may sometimes route traffic differently which may skew results. For example, traffic from China may be hauled through California due to various operational considerations.
Understanding the DDoS threat landscape
This quarter, longer and larger attacks became more frequent. Attack durations increased across the board, volumetric attacks surged, and Ransom DDoS attacks continued to rise. During the 2022 holiday season, the top targeted industries for DDoS attacks at the application layer were Aviation/Aerospace and Events Services. Network-layer DDoS attacks targeted Gaming/Gambling, Finance, and Education Management companies. We also saw a shift in the top emerging threats, with Memcached-based DDoS attacks continuing to increase in prevalence.
Defending against DDoS attacks is critical for organizations of all sizes. While attacks may be initiated by humans, they are executed by bots — and to play to win, you must fight bots with bots. Detection and mitigation must be automated as much as possible, because relying solely on humans puts defenders at a disadvantage. Cloudflare’s automated systems constantly detect and mitigate DDoS attacks for our customers, so they don’t have to.
Over the years, it has become easier, cheaper, and more accessible for attackers and attackers-for-hire to launch DDoS attacks. But as easy as it has become for the attackers, we want to make sure that it is even easier – and free – for defenders of organizations of all sizes to protect themselves against DDoS attacks of all types. We’ve been providing unmetered and unlimited DDoS protection for free to all of our customers since 2017 — when we pioneered the concept. Cloudflare’s mission is to help build a better Internet. A better Internet is one that is more secure, faster, and reliable for everyone – even in the face of DDoS attacks.
Sign up for the DDoS Trends Webinar to learn more about the emerging threats and how to defend against them.
Welcome to our DDoS Threat Report for the third quarter of 2022. This report includes insights and trends about the DDoS threat landscape – as observed across Cloudflare’s global network.
Multi-terabit strong DDoS attacks have become increasingly frequent. In Q3, Cloudflare automatically detected and mitigated multiple attacks that exceeded 1 Tbps. The largest attack was a 2.5 Tbps DDoS attack launched by a Mirai botnet variant, aimed at the Minecraft server, Wynncraft. This is the largest attack we’ve ever seen from the bitrate perspective.
It was a multi-vector attack consisting of UDP and TCP floods. However, Wynncraft, a massively multiplayer online role-playing Minecraft server where hundreds of thousands of users can play on the same server, didn’t even notice the attack, since Cloudflare filtered it out for them.
General DDoS attack trends
Overall this quarter, we’ve seen:
An increase in DDoS attacks compared to last year.
Longer-lasting volumetric attacks and a spike in attacks generated by the Mirai botnet and its variants.
Surges in attacks targeting Taiwan and Japan.
Application-layer DDoS attacks
HTTP DDoS attacks increased by 111% YoY, but decreased by 10% QoQ.
HTTP DDoS attacks targeting Taiwan increased by 200% QoQ; attacks targeting Japan increased by 105% QoQ.
Reports of Ransom DDoS attacks increased by 67% YoY and 15% QoQ.
Network-layer DDoS attacks
L3/4 DDoS attacks increased by 97% YoY and 24% QoQ.
L3/4 DDoS attacks by Mirai botnets increased by 405% QoQ.
The Gaming / Gambling industry was the most targeted by L3/4 DDoS attacks, including a massive 2.5 Tbps DDoS attack.
This report is based on DDoS attacks that were automatically detected and mitigated by Cloudflare’s DDoS Protection systems. To learn more about how it works, check out this deep-dive blog post.
Ransom DDoS attacks are attacks where the attacker demands a ransom payment, usually in the form of Bitcoin, to stop/avoid the attack. In Q3, 15% of Cloudflare customers that responded to our survey reported being targeted by HTTP DDoS attacks accompanied by a threat or a ransom note. This represents a 15% increase QoQ and 67% increase YoY of reported ransom DDoS attacks.
Diving into Q3, we can see that since June 2022, there was a steady decline in reports of ransom attacks. However, in September, the reports of ransom attacks spiked again. In the month of September, almost one out of every four respondents reported receiving a ransom DDoS attack or threat — the highest month in 2022 so far.
How we calculate Ransom DDoS attack trends
Our systems constantly analyze traffic and automatically apply mitigation when DDoS attacks are detected. Each DDoS’d customer is prompted with an automated survey to help us better understand the nature of the attack and the success of the mitigation. For over two years, Cloudflare has been surveying attacked customers. One of the questions in the survey asks the respondents if they received a threat or a ransom note demanding payment in exchange to stop the DDoS attack. Over the past year, on average, we collected 174 responses per quarter. The responses of this survey are used to calculate the percentage of Ransom DDoS attacks.
Application-layer DDoS attacks
Application-layer DDoS attacks, specifically HTTP DDoS attacks, are attacks that usually aim to disrupt a web server by making it unable to process legitimate user requests. If a server is bombarded with more requests than it can process, the server will drop legitimate requests and – in some cases – crash, resulting in degraded performance or an outage for legitimate users.
Application-layer DDoS attack trends
When we look at the graph below, we can see a clear trend of approximately 10% decrease in attacks each quarter since 2022 Q1. However, despite the downward trend, when comparing Q3 of 2022 to Q3 of 2021, we can see that HTTP DDoS attacks still increased by 111% YoY.
When we dive into the months of the quarter, attacks in September and August were fairly evenly distributed, at 36% and 35% respectively. In July, the share of attacks was the lowest for the quarter (29%).
Application-layer DDoS attacks by industry
By bucketing the attacks by our customers’ industry of operation, we can see that HTTP applications operated by Internet companies were the most targeted in Q3. Attacks on the Internet industry increased by 131% QoQ and 300% YoY.
The second most attacked industry was the Telecommunications industry with an increase of 93% QoQ and 2,317% (!) YoY. In third place was the Gaming / Gambling industry with a more conservative increase of 17% QoQ and 36% YoY.
Application-layer DDoS attacks by target country
Bucketing attacks by our customers’ billing address gives us an understanding of which countries are attacked more. HTTP applications operated by US companies were the most targeted in Q3; US-based websites saw a 60% QoQ and 105% YoY increase in attacks targeting them. After the US came China, with a 332% increase QoQ and an 800% increase YoY.
Looking at Ukraine, we can see that attacks targeting Ukrainian websites increased by 67% QoQ but decreased by 50% YoY. Furthermore, attacks targeting Russian websites increased by 31% QoQ and 2,400% (!) YoY.
In East Asia, we can see that attacks targeting Taiwanese companies increased by 200% QoQ and 60% YoY, and attacks targeting Japanese companies increased by 105% QoQ.
When we zoom in on specific countries, we can identify the below trends that may reveal interesting insights regarding the war in Ukraine and geopolitical events in East Asia:
In Ukraine, we see a surprising change in the attacked industries. Over the past two quarters, Broadcasting, Online Media and Publishing companies were targeted the most in what appeared to be an attempt to silence information and make it unavailable to civilians. However, this quarter, those industries dropped out of the top 10 list. Instead, the Marketing & Advertising industry took the lead (40%), followed by Education companies (20%), and Government Administration (8%).
In Russia, attacks on the Banking, Financial Services and Insurance (BFSI) industry continue to persist (25%). Be that as it may, attacks on the BFSI sector still decreased by 44% QoQ. In second place is the Events Services industry (20%), followed by Cryptocurrency (16%), Broadcast Media (13%), and Retail (11%). A significant portion of the attack traffic came from Germany-based IP addresses, and the rest were globally distributed.
In Taiwan, the two most attacked industries were Online Media (50%) and Internet (23%). Attacks on those industries were globally distributed, indicating the use of botnets.
In Japan, the most attacked industries were Media & Internet (52%), Business Services (12%), and Government – National (11%).
Application-layer DDoS attack traffic by source country
Before digging into specific source country metrics, it is important to note that while the country of origin is interesting, it is not necessarily indicative of where the attacker is located. DDoS attacks are often launched remotely, and attackers go to great lengths to hide their actual location in an attempt to avoid being caught. If anything, the source country indicates where botnet nodes are located. With that being said, by mapping the attacking IP addresses to their locations, we can understand where attack traffic is coming from.
After two consecutive quarters in which the US was the main source of HTTP DDoS attack traffic, China took its place in Q3 as the largest source. Attack traffic from China-registered IP addresses increased by 29% YoY and 19% QoQ. India followed as the second-largest source of HTTP DDoS attack traffic, with an increase of 61% YoY. After India, the main sources were the US and Brazil.
Looking at Ukraine, we can see that this quarter there was a drop in attack traffic originating from Ukrainian and Russian IP addresses — a decrease of 29% and 11% QoQ, respectively. However, YoY, attack traffic from within those countries still increased by 47% and 18%, respectively.
Another interesting data point is that attack traffic originating from Japanese IP addresses increased by 130% YoY.
Network-layer DDoS attacks
While application-layer attacks target the application (Layer 7 of the OSI model) running the service that end users are trying to access (HTTP/S in our case), network-layer attacks aim to overwhelm network infrastructure (such as in-line routers and servers) and the Internet link itself.
Network-layer DDoS attack trends
In Q3, we saw a large surge in L3/4 DDoS attacks — an increase of 97% YoY and 24% QoQ. Furthermore, the graph shows a clear upward trend in attacks over the past three quarters.
Drilling down into the quarter, it’s apparent that the attacks were, for the most part, evenly distributed throughout the quarter — with a slightly larger share for July.
Network-layer DDoS attacks by Industry
The Gaming / Gambling industry was hit by the most L3/4 DDoS attacks in Q3. Almost one out of every five bytes Cloudflare ingested towards Gaming / Gambling networks was part of a DDoS attack. This represents a whopping 381% increase QoQ.
The second most targeted industry was Telecommunications — almost 6% of bytes towards Telecommunications networks were part of DDoS attacks. This represents a 58% drop from the previous quarter, when Telecommunications was the most attacked industry by L3/4 DDoS attacks.
Following were the Information Technology and Services industry along with the Software industry. Both saw significant growth in attacks — 89% and 150% QoQ, respectively.
Network-layer DDoS attacks by target country
In Q3, Singapore-based companies saw the most L3/4 DDoS attacks — over 15% of all bytes to their networks were associated with a DDoS attack. This represents a dramatic 1,175% increase QoQ.
The US comes in second after a 45% decrease QoQ in attack traffic targeting US networks. In third, China, with a 62% QoQ increase. Attacks on Taiwan companies also increased by 200% QoQ.
Network-layer DDoS attacks by ingress country
In Q3, Cloudflare’s data centers in Azerbaijan saw the largest percentage of attack traffic. More than a third of all packets ingested there were part of a L3/4 DDoS attack. This represents a 44% increase QoQ and a huge 59-fold increase YoY.
Similarly, our data centers in Tunisia saw a dramatic increase in attack packets – 173x the amount in the previous year. Zimbabwe and Germany also saw significant increases in attacks.
Zooming into East Asia, we can see that our data centers in Taiwan saw an increase of attacks — 207% QoQ and 1,989% YoY. We saw similar numbers in Japan where attacks increased by 278% QoQ and 1,921% YoY.
Looking at Ukraine, we actually see a dip in the number of attack packets observed in our Ukraine-based and Russia-based data centers — 49% and 16% QoQ, respectively.
Attack vectors & Emerging threats
An attack vector is the method used to launch the attack or the method of attempting to achieve denial-of-service. With a combined share of 71%, SYN floods and DNS attacks remain the most popular DDoS attack vectors in Q3.
Last quarter, we saw a resurgence of attacks abusing the CHARGEN protocol, the Ubiquiti Discovery Protocol, and Memcached reflection attacks. This quarter, while Memcached DDoS attacks grew slightly (48%), there was a more dramatic increase in attacks abusing the BitTorrent protocol (1,221%), as well as attacks launched by the Mirai botnet and its variants.
BitTorrent DDoS attacks increased by 1,221% QoQ

The BitTorrent protocol is a communication protocol used for peer-to-peer file sharing. To find and download files efficiently, BitTorrent clients may use BitTorrent Trackers or Distributed Hash Tables (DHT) to identify the peers that are seeding the desired file. This concept can be abused to launch DDoS attacks. A malicious actor can spoof the victim’s IP address as a seeder IP address within Tracker and DHT systems. Clients then request the files from those IPs. Given a sufficient number of clients requesting the file, the victim can be flooded with more traffic than it can handle.
Mirai DDoS attacks increased by 405% QoQ

Mirai is malware that infects smart devices that run on ARC processors, turning them into a network of bots that can be used to launch DDoS attacks. These processors run a stripped-down version of the Linux operating system. If the default username-and-password combination is not changed, Mirai is able to log in to the device, infect it, and take it over. The botnet operator can then instruct the botnet to launch a flood of UDP packets at the victim’s IP address to bombard it.
Network-layer DDoS attacks by Attack Rates & Duration
While Terabit-strong attacks are becoming more frequent, they are still the outliers. The majority of attacks are tiny (in terms of Cloudflare scale). Over 95% of attacks peaked below 50,000 packets per second (pps) and over 97% below 500 Megabits per second (Mbps). We call this “cyber vandalism”.
What is cyber vandalism? As opposed to “classic” vandalism, where the purpose is deliberate destruction of or damage to public or private physical property — such as graffiti on the side of a building — cyber vandalism is the act of causing deliberate damage to Internet properties. Today, the source code for various botnets is available online, and there are a number of free tools that can be used to launch a flood of packets. Any script kiddie can direct those tools at Internet properties to launch attacks against their school during exam season, or against any other website they desire to take down or disrupt. This is in contrast to organized crime, Advanced Persistent Threat actors, and state-level actors, who can launch much larger and more sophisticated attacks.
Similarly, most of the attacks are very short and end within 20 minutes (94%). This quarter we did see an increase of 9% in attacks of 1-3 hours, and a 3% increase in attacks over 3 hours — but those are still the outliers.
Even with the largest attacks, such as the 2.5 Tbps attack we mitigated earlier this quarter and the 26M request per second attack we mitigated back in the summer, the peaks were short-lived. The entire 2.5 Tbps attack lasted about 2 minutes, and the peak of the 26M rps attack only 15 seconds. This emphasizes the need for automated, always-on solutions. Security teams can’t respond quickly enough. By the time the security engineer looks at the PagerDuty notification on their phone, the attack has subsided.
Attacks may be initiated by humans, but they are executed by bots — and to play to win, you must fight bots with bots. Detection and mitigation must be automated as much as possible, because relying solely on humans puts defenders at a disadvantage. Cloudflare’s automated systems constantly detect and mitigate DDoS attacks for our customers, so they don’t have to.
Over the years, it has become easier, cheaper, and more accessible for attackers and attackers-for-hire to launch DDoS attacks. But as easy as it has become for the attackers, we want to make sure that it is even easier – and free – for defenders of organizations of all sizes to protect themselves against DDoS attacks of all types. We’ve been providing unmetered and unlimited DDoS protection for free to all of our customers since 2017 — when we pioneered the concept.
Cloudflare’s mission is to help build a better Internet. A better Internet is one that is more secure, faster, and reliable for everyone – even in the face of DDoS attacks.
In 2017, we made unmetered DDoS protection available to all our customers, regardless of their size or whether they were on a Free or paid plan. Today we are doing the same for Rate Limiting, one of the most successful products of the WAF family.
Rate Limiting is a very effective tool to manage targeted volumetric attacks, takeover attempts, bots scraping sensitive data, attempts to overload computationally expensive API endpoints and more. To manage these threats, customers deploy rules that limit the maximum rate of requests from individual visitors on specific paths or portions of their applications.
Until today, customers on a Free, Pro or Business plan were able to purchase Rate Limiting as an add-on with usage-based cost of $5 per million requests. However, we believe that an essential security tool like Rate Limiting should be available to all customers without restrictions.
Since we launched unmetered DDoS, we have mitigated huge attacks, like a 2 Tbps multi-vector attack or the most recent 26 million requests per second attack. We believe that releasing an unmetered version of Rate Limiting will increase the overall security posture of millions of applications protected by Cloudflare.
Today, we are announcing that Free, Pro and Business plans include Rate Limiting rules without extra charges.
…and we are not just dropping the Rate Limiting extra charges; we are also releasing an updated version of the product, built on the powerful ruleset engine, that lets you build rules just as you do in Custom Rules. This is the same engine that powers the enterprise-grade Advanced Rate Limiting. The new ‘Rate limiting rules’ will appear in your dashboard starting this week.
No more usage-based charges: just rate limiting when you need it, and as much as you need.
Note: starting today, September 29th, Pro and Business customers have the new product available in their dashboard. Free customers will get their rules enabled during the week starting on October 3rd 2022.
End of usage-based charges
New customers get the new Rate Limiting by default, while existing customers will be able to run both the new and the previous version in parallel.
For new customers, new Rate Limiting rules will be included in each plan according to the following table:
Number of rules
When using these rules, no additional charges will be added to your account, no matter how much traffic these rules handle.
Existing customers will be granted the same amount of rules in the new, unmetered, system as the rules they’re currently using in the previous version (as of September 20, 2022). For example, if you are a Business customer with nine active rules in the previous version, you will get nine rules in the new system as well.
The previous version of Rate Limiting will still be subject to charges when in use. If you want to take advantage of the unmetered option, we recommend rewriting your rules in the new engine. As outlined below, the new Rate Limiting offers all the capabilities of the previous version and more. In the future, the previous version of Rate Limiting will be deprecated; however, we will give customers plenty of time to migrate their rules.
New rate limiting engine for all
A couple of weeks ago, we announced that Cloudflare was named a Leader in the Gartner® Magic Quadrant™ for Web Application and API Protection (WAAP). One of the key services offered in our WAAP portfolio is Advanced Rate Limiting.
Advanced Rate Limiting has shown great success among our Enterprise customers, allowing an unprecedented level of control over incoming traffic rates. We decided to give the same rule-building experience, along with some of its new features, to all of our customers.
A summary of the feature set is outlined below, across plans up to Enterprise (ENT) with WAF Essential and ENT with Advanced Rate Limiting:

Fields available (request): Host, URI Path, Full URI, and Query on lower plans; higher plans add Method, Source IP, and User Agent. ENT with Advanced Rate Limiting offers the same fields as WAF Essential, plus request Bot score(1) and body fields(2).

Response fields: available on higher plans, with access to response headers and response status code.

Counting dimensions: IP, and IP with NAT awareness; Advanced Rate Limiting adds Query, Host, Headers, Cookie, ASN, Country, Path, JA3(2), and JSON field (New!).

Max counting period: varies by plan.

Cost: included in the monthly subscription on self-serve plans, and in the contracted plan for Enterprise.

(1): Requires Bots Management add-on (2): Requires specific plan
Leveraging the ruleset engine. The previous version of Rate Limiting allowed customers to scope a rule based only on a single path and the method of the request. Thanks to the ruleset engine, customers can now write rules as they do in Custom Rules and combine multiple parameters of the HTTP request.
For example, Pro domains can combine multiple paths in the same rule using the OR or AND operators. Business domains can also write rules using Source IP or User Agent. This allows enforcing different rates for specific User Agents. Furthermore, Business customers can now scope Rate Limiting to specific IPs (using IP List, for example) or exclude IPs where no attack is expected.
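For illustration, a rule expression of that shape might look like the following, combining two login paths and excluding a trusted IP List (the list name `trusted_ips` is a hypothetical example, not a built-in):

```
(http.request.uri.path eq "/login" or http.request.uri.path eq "/signin")
and not ip.src in $trusted_ips
```

Fields such as `http.request.uri.path` and `ip.src` come from the same ruleset engine expression language used by Custom Rules.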
Counting and mitigation expressions are now separate. A feature request we often heard was the ability to track the rate of requests on a specific path (such as ‘/login’) and, when an IP exceeds the threshold, block every request from that IP anywhere on your domain. Business and Enterprise customers can now achieve this by using the counting expression, which is separate from the mitigation expression. The former defines which requests are used to compute the rate, while the latter defines which requests are mitigated once the threshold has been reached.
Another use case for the counting expression is when you need to use the Origin Status Code or HTTP Response Headers. If you need these fields, we recommend creating a counting expression that includes the response parameters and explicitly writing a filter that defines which request parameters will trigger the block action.
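The separation between counting and mitigation can be sketched in a few lines of Python. The class, predicates, and the 10-minute mitigation timeout below are illustrative assumptions, not Cloudflare’s implementation:

```python
from collections import defaultdict

class RateLimiter:
    def __init__(self, limit, period, counts_request, mitigates_request):
        self.limit = limit                          # max counted requests per period
        self.period = period                        # counting window, in seconds
        self.counts_request = counts_request        # the "counting expression"
        self.mitigates_request = mitigates_request  # the "mitigation expression"
        self.hits = defaultdict(list)               # per-IP timestamps of counted requests
        self.blocked_until = {}                     # per-IP mitigation expiry time

    def handle(self, ip, path, now):
        # An IP under mitigation is blocked wherever the mitigation
        # expression matches -- here, anywhere on the domain.
        if self.blocked_until.get(ip, 0.0) > now and self.mitigates_request(path):
            return "block"
        # Only requests matching the counting expression contribute to the rate.
        if self.counts_request(path):
            window = [t for t in self.hits[ip] if t > now - self.period]
            window.append(now)
            self.hits[ip] = window
            if len(window) > self.limit:
                self.blocked_until[ip] = now + 600  # mitigate for 10 minutes
                return "block"
        return "allow"

# Count only /login, but mitigate everywhere on the domain.
rl = RateLimiter(
    limit=5,
    period=60,
    counts_request=lambda path: path == "/login",
    mitigates_request=lambda path: True,
)
```

Once an IP exceeds five `/login` requests in a minute, every request it makes, to any path, is blocked for the mitigation period, which is exactly the "count on /login, block everywhere" behavior described above.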
Counting dimensions. As in the previous version, Free, Pro and Business customers get IP-based Rate Limiting. By IP-based we refer to the way we group (or count) requests: you can set a rule that enforces a maximum rate of requests from the same IP. If you set a rule to limit 10 requests over one minute, we will count requests from each individual IP until it reaches the limit, and then block it for a period of time.
Advanced Rate Limiting users are able to group requests based on additional characteristics, such as API keys, cookies, session headers, ASN, query parameters, a JSON body field (e.g. the username value of a login request) and more.
What do Enterprise customers get? Enterprise customers can get Rate Limiting as part of their contract. When included in their contract, they get access to 100 rules, a more comprehensive list of fields available in the rule builder, and they get to upgrade to Advanced Rate Limiting. Please reach out to your account team to learn more.
More information on how to use new Rate Limiting can be found in the documentation.
Additional information for existing customers
If you are a Free, Pro or Business customer, you will automatically get the new product in the dashboard. We will entitle you with as many unmetered Rate Limiting rules as you are using in the previous version.
If you are an Enterprise customer using the previous version of Rate Limiting, please reach out to the account team to discuss the options to move to new Rate Limiting.
To take advantage of the unmetered functionality, you will need to migrate your rules to the new system. The previous version will keep working as usual, and you might be charged based on the traffic that its rules evaluate.
Long term, the previous version of Rate Limiting will be deprecated, and when this happens, all rules still running on the old system will cease to run.
The WAF team has plans to further expand our Rate Limiting capabilities. Features we are considering include better analytics to support rule creation. Furthermore, the new Rate Limiting can benefit from new fields made available in the WAF as soon as they are released. For example, Enterprise customers can combine Bot Score or the new WAF Attack Score to create a more fine-grained security posture.
We’re pleased to introduce Cloudflare’s free Botnet Threat Feed for Service Providers. This includes all types of service providers, ranging from hosting providers to ISPs and cloud compute providers.
This feed will give service providers threat intelligence on their own IP addresses that have participated in HTTP DDoS attacks as observed from the Cloudflare network — allowing them to crack down on abusers, take down botnet nodes, reduce their abuse-driven costs, and ultimately reduce the amount and force of DDoS attacks across the Internet. We’re giving away this feed for free as part of our mission to help build a better Internet.
Service providers that operate their own IP space can now sign up to the early access waiting list.
Cloudflare’s unique vantage point on DDoS attacks
Cloudflare provides services to millions of customers ranging from small businesses and individual developers to large enterprises, including 29% of Fortune 1000 companies. Today, about 20% of websites rely directly on Cloudflare’s services. This gives us a unique vantage point on tremendous amounts of DDoS attacks that target our customers.
DDoS attacks, by definition, are distributed. They originate from botnets of many sources — in some cases, from hundreds of thousands to millions of unique IP addresses. In the case of HTTP DDoS attacks, where the victims are flooded with HTTP requests, we know that the source IP addresses that we see are the real ones — they’re not spoofed (altered). We know this because to initiate an HTTP request a connection must be established between the client and server. Therefore, we can reliably identify the sources of the attacks to understand the origins of the attacks.
As we’ve seen in previous attacks, such as the 26 million request per second DDoS attack that was launched by the Mantis botnet, a significant portion originated from service providers such as French-based OVH (Autonomous System Number 16276), the Indonesian Telkomnet (ASN 7713), the US-based iboss (ASN 137922), the Libyan Ajeel (ASN 37284), and others.
The service providers are not to blame; their networks and infrastructure are abused by attackers to launch attacks. But it can be hard for service providers to identify the abusers. In some cases, we’ve seen as few as a single IP address of a service provider participate in a DDoS attack consisting of thousands of bots — all scattered across many service providers. So the service providers usually only see a small fraction of the attack traffic leaving their network, and it can be hard to correlate it with malicious activity.
Even more so, in the case of HTTPS DDoS attacks, the service provider would only see encrypted gibberish leaving their network, with no way to decrypt it or determine whether it is malicious or legitimate traffic. At Cloudflare, however, we see the entire attack and all of its sources, and can use that visibility to help service providers stop the abusers and the attacks.
Leveraging our unique vantage point, we go to great lengths to ensure that our threat intelligence includes actual attackers and not legitimate clients.
Partnering with service providers around the world to help build a better Internet
Since our previous experience mitigating Mantis botnet attacks, we’ve been working with providers around the world to help them crack down on abusers. We realized the potential and decided to double down on this effort. The result is that each service provider can subscribe to a feed of their own offending IPs, for free, so they can take action and take down the abused systems.
Our mission at Cloudflare is to help build a better Internet — one that is safer, more performant, and more reliable for everyone. We believe that providing this threat intelligence will help us all move in that direction — cracking down on DDoS attackers and taking down malicious botnets.
If you are a service provider and operate your own IP space, you can now sign up to the early access waiting list.
Forrester has recognized Cloudflare as a Leader in The Forrester Wave™: Web Application Firewalls, Q3 2022 report. The report evaluated 12 Web Application Firewall (WAF) providers on 24 criteria across current offering, strategy, and market presence.
You can register for a complimentary copy of the report here. The report helps security and risk professionals select the correct offering for their needs.
We believe this achievement, along with recent WAF developments, reinforces our commitment and continued investment in the Cloudflare Web Application Firewall (WAF), one of our core product offerings.
The WAF, along with our DDoS Mitigation and CDN services, has in fact been an offering since Cloudflare’s founding, and we could not think of a better time to receive this recognition: Birthday Week.
We’d also like to take this opportunity to thank Forrester.
Leading WAF in strategy
Cloudflare received the highest score of all assessed vendors in the strategy category. We also received the highest possible scores in 10 criteria, including:
Rule creation and modification
Security operations feedback loops
According to Forrester, “Cloudflare Web Application Firewall shines in configuration and rule creation”, “Cloudflare stands out for its active online user community and its associated response time metrics”, and “Cloudflare is a top choice for those prioritizing usability and looking for a unified application security platform.”
Protecting web applications
The core value of any WAF is to keep web applications safe from external attacks by stopping any compromise attempt. Compromises can in fact lead to complete application takeover and data exfiltration, resulting in financial and reputational damage to the targeted organization.
The Log4Shell criterion in the Forrester Wave report is an excellent example of a real world use case to demonstrate this value.
Log4Shell was a high severity vulnerability discovered in December 2021 that affected the popular Apache Log4J software commonly used by applications to implement logging functionality. The vulnerability, when exploited, allows an attacker to perform remote code execution and consequently take over the target application.
Due to the popularity of this software component, many organizations worldwide were potentially at risk after the immediate public announcement of the vulnerability on December 9, 2021.
We believe we received the highest possible score in the Log4Shell criterion due to our fast response to the announcement, ensuring that all customers using the Cloudflare WAF were protected against the exploit in less than 17 hours, globally.
We did this by deploying new managed rules (virtual patching) that were made available to all customers. The rules were deployed with a block action ensuring exploit attempts never reached customer applications.
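As a toy illustration of the kind of signature such virtual patching looks for, the sketch below matches a simplified Log4Shell-style JNDI lookup string. Real WAF managed rules are far more thorough (handling obfuscation, nesting, and encodings), and this is not Cloudflare’s actual rule:

```python
import re

# Simplified pattern for a JNDI lookup string such as
# "${jndi:ldap://attacker.example/a}". Illustrative only.
JNDI_PATTERN = re.compile(r"\$\{jndi:(ldaps?|rmi|dns)://", re.IGNORECASE)

def is_log4shell_attempt(value: str) -> bool:
    """Return True if a request field contains a JNDI-lookup-like string."""
    return bool(JNDI_PATTERN.search(value))
```

A WAF would run a check like this against headers, query strings, and bodies, and block the request before it reaches the origin, which is what "virtual patching" means in practice.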
The Cloudflare WAF ultimately “bought” valuable time for our customers to patch their back-end systems before attackers were able to find and attempt to compromise vulnerable applications.
You can read about our response and our actions following the Log4Shell announcement in great detail on our blog.
Use the Cloudflare WAF today
We’re pleased to introduce Advanced DDoS Alerts. Advanced DDoS Alerts are customizable and provide users the flexibility they need when managing many Internet properties. Users can easily define which alerts they want to receive — for which DDoS attack sizes, protocols and for which Internet properties.
This release includes two types of Advanced DDoS Alerts:
Advanced HTTP DDoS Attack Alerts – Available to WAF/CDN customers on the Enterprise plan, who have also subscribed to the Advanced DDoS Protection service.
Advanced L3/4 DDoS Attack Alerts – Available to Magic Transit and Spectrum BYOIP customers on the Enterprise plan.
Standard DDoS Alerts are available to customers on all plans, including the Free plan. Advanced DDoS Alerts are part of Cloudflare’s Advanced DDoS service.
Distributed Denial of Service (DDoS) attacks are cyber attacks that aim to take down your Internet properties and make them unavailable to your users. In 2017, Cloudflare pioneered Unmetered DDoS Protection to provide all customers with DDoS protection, without limits, ensuring that their Internet properties remain available. We’re able to provide this level of commitment thanks to our automated DDoS protection systems. But if the systems operate automatically, why be alerted at all?
Well, to put it plainly, when our DDoS protection systems kick in, they insert ephemeral rules inline to mitigate the attack. Many of our customers operate business-critical applications and services. When our systems decide to insert a rule, customers may want to verify that all the malicious traffic is mitigated and that legitimate user traffic is not. Our DDoS alerts begin firing as soon as our systems make a mitigation decision. By informing customers of a decision to insert a rule in real time, we let them observe and verify that their Internet properties are both protected and available.
Managing many Internet properties
The standard DDoS Alerts notify you of DDoS attacks that target any and all of your Cloudflare-protected Internet properties. However, some of our customers manage large numbers of Internet properties, ranging from hundreds to hundreds of thousands. The standard DDoS Alerts would notify users every time one of those properties came under attack — which could become very noisy.
The Advanced DDoS Alerts address this concern by allowing users to select the specific Internet properties that they want to be notified about; zones and hostnames for WAF/CDN customers, and IP prefixes for Magic Transit and Spectrum BYOIP customers.
One (attack) size doesn’t fit all
The standard DDoS Alerts notify you of DDoS attacks of almost any size; we implemented minimal alert thresholds to avoid spamming our customers’ email inboxes. Those limits are very small and not customer-configurable. As we’ve seen in the recent DDoS trends report, most attacks are very small — another reason the standard DDoS Alerts could become noisy for customers that only care about very large attacks. On the opposite end of the spectrum, disabling alerts entirely would be too quiet for customers that do want to be notified about smaller attacks.
The Advanced DDoS Alerts let customers choose their own alert threshold. WAF/CDN customers can define the minimum request-per-second rate of an HTTP DDoS attack alert. Magic Transit and Spectrum BYOIP customers can define the packet-per-second and Megabit-per-second rates of a L3/4 DDoS attack alert.
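The filtering logic described above can be sketched as a pair of predicates. The field and key names here are illustrative assumptions, not Cloudflare’s actual alert schema:

```python
def should_alert(attack, watched_hostnames, min_rps):
    """HTTP variant: fire only for selected hostnames above a request rate."""
    return attack["hostname"] in watched_hostnames and attack["rps"] >= min_rps

def should_alert_l34(attack, cfg):
    """L3/4 variant: filter by prefix, protocol, and pps/Mbps thresholds."""
    return (
        attack["prefix"] in cfg["prefixes"]
        and attack["protocol"] in cfg["protocols"]
        and (attack["pps"] >= cfg["min_pps"] or attack["mbps"] >= cfg["min_mbps"])
    )
```

A small attack on an unwatched hostname, or one below the configured rate, simply never fires a notification, which is the noise reduction the configurable thresholds are meant to provide.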
Not all protocols are created equal
As part of the Advanced L3/4 DDoS Alerts, we also let our users define the protocols to be alerted on. If a Magic Transit customer manages mostly UDP applications, they may not care about TCP-based DDoS attacks. Similarly, if a Spectrum BYOIP customer only cares about HTTP/TCP traffic, attacks over other protocols may be of no concern to them.
Creating an Advanced DDoS Alert
We’ll show here how to create an Advanced HTTP DDoS Alert, but the process to create a L3/4 alert is similar. You can view a more detailed guide on our developers website.
First, log in to your Cloudflare account, navigate to Notifications, and click Add. Then select the Advanced HTTP DDoS Attack Alert or Advanced L3/4 DDoS Attack Alert (based on your eligibility). Give your alert a name and an optional description, add your preferred delivery method (e.g., webhook), and click Next.
Second, select the domains you’d like to be alerted on. You can also narrow it down to specific hostnames. Define the minimum request-per-second rate to be alerted on, click Save, and voilà.
Actionable alerts for making better decisions
Cloudflare Advanced DDoS Alerts aim to provide our customers with configurable controls to make better decisions for their own environments. Customers can now be alerted on attacks based on which domain/prefix is being attacked, the size of the attack, and the protocol of the attack. We recognize that the power to configure and control DDoS attack alerts should ultimately be left up to our customers, and we are excited to announce the availability of this functionality.
Want to learn more about Advanced DDoS Alerts? Visit our developer site.
Interested in upgrading to get Advanced DDoS Alerts? Contact your account team.
Every Internet property is unique, with its own traffic behaviors and patterns. For example, a website may only expect user traffic from certain geographies, and a network might only expect to see a limited set of protocols.
Understanding that the traffic patterns of each Internet property are unique is what led us to develop the Adaptive DDoS Protection system. Adaptive DDoS Protection joins our existing suite of automated DDoS defenses and takes it to the next level. The new system learns your unique traffic patterns and adapts to protect against sophisticated DDoS attacks.
Adaptive DDoS Protection is now generally available to Enterprise customers:
HTTP Adaptive DDoS Protection – available to WAF/CDN customers on the Enterprise plan, who have also subscribed to the Advanced DDoS Protection service.
L3/4 Adaptive DDoS Protection – available to Magic Transit and Spectrum customers on an Enterprise plan.
Adaptive DDoS Protection learns your traffic patterns
The Adaptive DDoS Protection system creates a traffic profile by looking at a customer’s maximal rates of traffic every day, for the past seven days. The profiles are recalculated every day using the past seven-day history. We then store the maximal traffic rates seen for every predefined dimension value. Every profile uses one dimension and these dimensions include the source country of the request, the country where the Cloudflare data center that received the IP packet is located, user agent, IP protocol, destination ports and more.
So, for example, for the profile that uses the source country as a dimension, the system will log the maximal traffic rates seen per country, e.g., 2,000 requests per second (rps) for Germany, 3,000 rps for France, 10,000 rps for Brazil, and so on. This example is for HTTP traffic, but Adaptive DDoS Protection also profiles L3/4 traffic for our Magic Transit and Spectrum Enterprise customers.
Another note on the maximal rates: we use the 95th percentile. This means that we look at the maximal rates and discard the top 5% of the highest rates, in order to eliminate outliers from the calculations.
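The profiling step above can be sketched in a few lines: per dimension value (source country here), keep observed rate samples and take the 95th percentile, so the top 5% of rates are discarded as outliers. The nearest-rank percentile method and the data layout are assumptions for illustration, not Cloudflare’s implementation:

```python
def p95(samples):
    """Nearest-rank 95th percentile; the top ~5% of samples are ignored."""
    s = sorted(samples)
    return s[int(0.95 * (len(s) - 1))]

def build_profile(rates_by_country):
    """rates_by_country: {country: [observed rps samples over the window]}."""
    return {country: p95(rates) for country, rates in rates_by_country.items()}
```

With a profile like `{"DE": 2000, "FR": 3000}`, traffic from a country that sustains rates well above its profiled ceiling becomes a candidate for mitigation, while a handful of extreme-but-legitimate spikes in the history do not inflate the baseline.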
Calculating traffic profiles is done asynchronously — meaning that it does not induce any latency to our customers’ traffic. The system then distributes a compact profile representation across our network that can be consumed by our DDoS protection systems to be used to detect and mitigate DDoS attacks in a much more cost-efficient manner.
In addition to the traffic profiles, the Adaptive DDoS Protection also leverages Cloudflare’s Machine Learning generated Bot Scores as an additional signal to differentiate between user and automated traffic. The purpose of using these scores is to differentiate between legitimate spikes in user traffic that deviates from the traffic profile, and a spike of automated and potentially malicious traffic.
Out of the box and easy to use
Adaptive DDoS Protection just works out of the box. It automatically creates the profiles, and then customers can tweak and tune the settings as they need via DDoS Managed Rules. Customers can change the sensitivity level, leverage expression fields to create overrides (e.g. exclude this type of traffic), and change the mitigation action to tailor the behavior of the system to their specific needs and traffic patterns.
Adaptive DDoS Protection complements the existing DDoS protection systems, which leverage dynamic fingerprinting to detect and mitigate DDoS attacks. The two work in tandem to protect our customers from DDoS attacks. When Cloudflare customers onboard a new Internet property to Cloudflare, dynamic fingerprinting protects them automatically and out of the box — without requiring any user action. Once Adaptive DDoS Protection learns their legitimate traffic patterns and creates a profile, users can turn it on to provide an extra layer of protection.
Rules included as part of the Adaptive DDoS Protection
As part of this release, we’re pleased to announce the following capabilities as part of Cloudflare’s Adaptive DDoS Protection:
For WAF/CDN customers on the Enterprise plan with Advanced DDoS:
- Client IP country and region
- User Agent (globally, not per customer*)
For Magic Transit & Spectrum Enterprise customers:
- Combination of IP protocol and destination port
*The User-Agent-aware feature analyzes, learns, and profiles the top user agents that we see across the Cloudflare network. This feature helps us identify DDoS attacks that leverage legacy or misconfigured user agents.
Excluding UA-aware DDoS Protection, Adaptive DDoS Protection rules are deployed in Log mode. Customers can observe the traffic that’s flagged, tweak the sensitivity if needed, and then deploy the rules in mitigation mode. You can follow the steps outlined in this guide to do so.
Making the impact of DDoS attacks a thing of the past
Our mission at Cloudflare is to help build a better Internet. The DDoS Protection team’s vision is derived from this mission: our goal is to make the impact of DDoS attacks a thing of the past. Cloudflare’s Adaptive DDoS Protection takes us one step closer to achieving that vision: making Cloudflare’s DDoS protection even more intelligent, sophisticated, and tailored to our customers’ unique traffic patterns and individual needs.
Want to learn more about Cloudflare’s Adaptive DDoS Protection? Visit our developer site.
Interested in upgrading to get access to Adaptive DDoS Protection? Contact your account team.
A protection group is a resource that you create by grouping your Shield Advanced protected resources, so that the service considers them to be a single protected entity. A protection group can contain many different resources that compose your application, and the resources may be part of multiple protection groups spanning different AWS Regions within an AWS account. Common patterns that you might use when designing protection groups include aligning resources to applications, application teams, or environments (such as production and staging), and by product tiers (such as free or paid). For more information about setting up protection groups, see Managing AWS Shield Advanced protection groups.
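Concretely, a protection group is defined by an ID, an aggregation type, a membership pattern, and (for arbitrary patterns) a list of member ARNs. The following is a sketch of the request parameters you might pass to the Shield API’s CreateProtectionGroup operation via the AWS SDK for Python (boto3); the group ID and member ARNs are placeholders for your own resources:

```python
# Request parameters for shield.create_protection_group (AWS SDK for
# Python). The ID and member ARNs below are placeholders.
params = {
    "ProtectionGroupId": "prod-web-tier",
    # How member traffic is aggregated into a group-level baseline:
    # SUM, MEAN, or MAX.
    "Aggregation": "SUM",
    # ARBITRARY: you list the members explicitly. Alternatives are ALL
    # (every Shield Advanced protected resource) and BY_RESOURCE_TYPE.
    "Pattern": "ARBITRARY",
    "Members": [
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/blue/abc123",
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/green/def456",
    ],
}

# To actually create the group (requires credentials and a Shield
# Advanced subscription):
# import boto3
# boto3.client("shield").create_protection_group(**params)
```

The same parameters can be set from the console or with `aws shield create-protection-group` on the CLI.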
Why should you consider using a protection group?
The benefits of protection groups differ for infrastructure layer (layer 3 and layer 4) events and application layer (layer 7) events. For layer 3 and layer 4 events, protection groups can reduce the time it takes for Shield Advanced to begin mitigations. For layer 7 events, protection groups add an additional reporting mechanism. There is no change in the mechanism that Shield Advanced uses internally for detection of an event, and you do not lose the functionality of individual resource-level detections. You receive both group-level and individual resource-level Amazon CloudWatch metrics to consume for operational use. Let’s look at the benefits for each layer in more detail.
Layers 3 and 4: Accelerate time to mitigate for DDoS events
For infrastructure layer (layer 3 and layer 4) events, Shield Advanced monitors the traffic volume to your protected resource. An abnormal traffic deviation signals the possibility of a DDoS attack, and Shield Advanced then puts mitigations in place. By default, Shield Advanced observes the elevation of traffic to a resource over multiple consecutive time intervals to establish confidence that a layer 3/layer 4 event is under way. In the absence of a protection group, Shield Advanced follows the default behavior of waiting to establish confidence before it puts mitigation in place for each resource. However, if the resources are part of a protection group, and if the service detects that one resource in a group is targeted, Shield Advanced uses that confidence for other resources in the group. This can accelerate the process of putting mitigations in place for those resources.
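The effect can be illustrated with a toy model. The interval counts here are made up — Shield Advanced’s real thresholds are internal — but the shape of the behavior is as described above: a standalone resource must show elevated traffic for several consecutive intervals before mitigation begins, while a member of a group that already has a confirmed attack needs fewer:

```python
def intervals_to_mitigate(elevated, required=3, group_confident=False):
    """Return the 1-based interval at which mitigation would begin.

    elevated: list of booleans, one per time interval, True when traffic
    to the resource is abnormally elevated.
    required: consecutive elevated intervals needed to establish
    confidence (an illustrative value, not Shield's actual threshold).
    group_confident: True when another member of the same protection
    group is already confirmed under attack, which lowers the bar.
    """
    needed = 1 if group_confident else required
    streak = 0
    for i, is_elevated in enumerate(elevated, start=1):
        streak = streak + 1 if is_elevated else 0
        if streak >= needed:
            return i
    return None  # confidence never established

traffic = [True, True, True, True]
standalone = intervals_to_mitigate(traffic)                     # interval 3
grouped = intervals_to_mitigate(traffic, group_confident=True)  # interval 1
```

In this toy model, the grouped resource is mitigated two intervals earlier — the same qualitative speed-up that protection groups provide when an attack shifts between members.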
Consider a case where you have an application deployed in different AWS Regions, and each stack is fronted with a Network Load Balancer (NLB). When you enable Shield Advanced on the Elastic IP addresses associated with the NLB in each Region, you can optionally add those Elastic IP addresses to a protection group. If an actor targets one of the NLBs in the protection group and a DDoS attack is detected, Shield Advanced will lower the threshold for implementing mitigations on the other NLBs associated with the protection group. If the scope of the attack shifts to target the other NLBs, Shield Advanced can potentially mitigate the attack faster than if the NLB was not in the protection group.
Note: This benefit applies only to Elastic IP addresses and Global Accelerator resource types.
Layer 7: Reduce false positives and improve accuracy of detection for DDoS events
Shield Advanced detects application layer (layer 7) events when you associate an AWS WAF web access control list (web ACL) with the protected resource. Shield Advanced consumes request data for the associated web ACL, analyzes it, and builds a traffic baseline for your application. The service then uses this baseline to detect anomalies in traffic patterns that might indicate a DDoS attack.
When you group resources in a protection group, Shield Advanced aggregates the data from individual resources and creates the baseline for the whole group. It then uses this aggregated baseline to detect layer 7 events for the group resource. It also continues to monitor and report for the resources individually, regardless of whether they are part of protection groups or not.
Shield Advanced provides three types of aggregation to choose from (sum, mean, and max) to aggregate the volume data of individual resources to use as a baseline for the whole group. We’ll look at the three types of aggregation, with a use case for each, in the next section.
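As a minimal sketch of the three choices (the per-resource request rates are made up), aggregating individual resource volumes into a group baseline looks like this:

```python
def group_baseline(resource_rates, aggregation):
    """Aggregate per-resource traffic rates into a group-level baseline.

    resource_rates: mapping of resource name -> observed rate (e.g. rps).
    aggregation: "sum" (combined traffic, e.g. blue/green deployments),
    "mean" (resources with similar traffic patterns), or "max"
    (non-uniform sharing, e.g. CloudFront in front of its origins).
    """
    rates = resource_rates.values()
    if aggregation == "sum":
        return sum(rates)
    if aggregation == "mean":
        return sum(rates) / len(resource_rates)
    if aggregation == "max":
        return max(rates)
    raise ValueError(f"unknown aggregation type: {aggregation}")

rates = {"alb-blue": 8000, "alb-green": 2000}
baseline = group_baseline(rates, "sum")  # 10000 rps for the group
```

Note that with `sum`, traffic shifting from blue to green leaves the group baseline unchanged even as each member’s individual volume swings — which is exactly the property the blue/green case below relies on.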
Note: Traffic aggregation is applicable only for layer 7 detection.
Case 1: Blue/green deployments
Blue/green is a popular deployment strategy that increases application availability and reduces deployment risk when rolling out changes. The blue environment runs the current application version, and the green environment runs the new application version. When testing is complete, live application traffic is directed to the green environment, and the blue environment is dismantled.
During blue/green deployments, the traffic to your green resources can go from zero load to full load in a short period of time. Shield Advanced layer 7 detection uses traffic baselining for individual resources, so newly created resources like an Application Load Balancer (ALB) that are part of a blue/green operation would have no baseline, and the rapid increase in traffic could cause Shield Advanced to declare a DDoS event. In this scenario, the DDoS event could be a false positive.
Figure 1: A blue/green deployment with ALBs in a protection group. Shield is using the sum of total traffic to the group to baseline layer 7 traffic for the group as a single unit
In the example architecture shown in Figure 1, we have configured Shield to include all resources of type ALB in a single protection group with aggregation type sum. Shield Advanced will use the sum of traffic to all resources in the protection group as an additional baseline. We have only one ALB (called blue) to begin with. When you add the green ALB as part of your deployment, you can optionally add it to the protection group. As traffic shifts from blue to green, the total traffic to the protection group remains the same even though the volume of traffic changes for the individual resources that make up the group. After the blue ALB is deleted, the Shield Advanced baseline for that ALB is deleted with it. At this point, the green ALB hasn’t existed for sufficient time to have its own accurate baseline, but the protection group baseline persists. You could still receive a DDoSDetected CloudWatch metric with a value of 1 for individual resources, but with a protection group you have the flexibility to set one or more alarms based on the group-level DDoSDetected metric. Depending on your application’s use case, this can reduce non-actionable event notifications.
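As a sketch, such a group-level alarm could be configured through CloudWatch’s PutMetricAlarm API. The metric namespace and the `ProtectionGroupId` dimension name used below are our assumptions for illustration — verify the exact names your metrics carry against the Shield Advanced documentation before relying on them:

```python
# Keyword arguments for cloudwatch.put_metric_alarm (AWS SDK for
# Python). Namespace and dimension names are illustrative assumptions;
# check the Shield Advanced documentation for the exact values.
alarm_kwargs = {
    "AlarmName": "ddos-detected-prod-web-tier",
    "Namespace": "AWS/DDoSProtection",
    "MetricName": "DDoSDetected",
    # Alarm on the protection group as a whole rather than on each
    # member resource's individual DDoSDetected metric.
    "Dimensions": [{"Name": "ProtectionGroupId", "Value": "prod-web-tier"}],
    "Statistic": "Maximum",
    "Period": 60,
    "EvaluationPeriods": 1,
    "Threshold": 1,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
}

# To actually create the alarm (requires credentials):
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm_kwargs)
```

Alarming on the group metric instead of (or alongside) per-resource metrics is what filters out the individual-resource DDoSDetected signal during a blue/green cutover.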
Note: You might already have alarms set for individual resources, because the onboarding wizard in Shield Advanced provides you an option to create alarms when you add protection to a resource. So, you should review the alarms you already have configured before you create a protection group. Simply adding a resource to a protection group will not reduce false positives.
Case 2: Resources that have traffic patterns similar to each other
Client applications might interact with multiple services as part of a single transaction or workflow. These services can be behind their own dedicated ALBs or CloudFront distributions and can have traffic patterns similar to each other. In the example architecture shown in Figure 2, we have two services that are always called to satisfy a user request. Consider a case where you add a new service to the mix. Before protection groups existed, setting up a new protected resource, such as an ALB or a CloudFront distribution, required Shield Advanced to build a brand-new baseline. You had to wait for a certain minimum period before Shield Advanced could start monitoring the resource, and the service would need to monitor traffic for a few days for the baseline to be accurate.
Figure 2: Deploying a new service and including it in a protection group with an existing baseline. Shield is using the mean aggregation type to baseline traffic for the group.
For improved accuracy of detection of layer 7 events, you can have Shield Advanced inherit the baseline of existing services that are part of the same transaction or workflow. To do so, you can put your new resource in a protection group along with an existing service or services, and set the aggregation type to mean. Shield Advanced will take some time to build up an accurate baseline for the new service. However, the protection group has an established baseline, so the new service won’t be susceptible to decreased accuracy of detection during that period. Note that this setting will not stop Shield Advanced from sending notifications for the new service individually; however, you might prefer to take corrective action based on the detection for the group instead.
Case 3: Resources that share traffic in a non-uniform way
Consider the case of a CloudFront distribution with an ALB as origin. If the content is cached in CloudFront edge locations, the traffic reaching the application will be lower than that received by the edge locations. Similarly, if there are multiple origins of a CloudFront distribution, the traffic volumes of individual origins will not reflect the aggregate traffic for the application. Scenarios like invalidation of cache or an origin failover can result in increased traffic at one of the ALB origins. This could cause Shield Advanced to send “1” as the value for the DDoSDetected CloudWatch metric for that ALB. However, you might not want to initiate an alarm or take corrective action in this case.
Figure 3: CloudFront and ALBs in a protection group with aggregation type max. Shield is using CloudFront’s baseline for the group
You can combine the CloudFront distribution and origin (or origins) in a protection group with the aggregation type set to max. Shield Advanced will consider the CloudFront distribution’s traffic volume as the baseline for the protection group as a whole. In the example architecture in Figure 3, a CloudFront distribution fronts two ALBs and balances the load between the two. We have bundled all three resources (CloudFront and two ALBs) into a protection group. In case one ALB fails, the other ALB will receive all the traffic. This way, although you might receive an event notification for the active ALB at the individual resource level if Shield detects a volumetric event, you might not receive it for the protection group because Shield Advanced will use CloudFront traffic as the baseline for determining the increase in volume. You can set one or more alarms and take corrective action according to your application’s use case.
In this blog post, we showed you how AWS Shield Advanced provides you with the capability to group resources in order to consider them a single logical entity for DDoS detection and mitigation. This can help reduce the number of false positives and accelerate the time to mitigation for your protected applications.
A Shield Advanced subscription provides additional capabilities, beyond those discussed in this post, that supplement your perimeter protection. It provides integration with AWS WAF for layer 7 DDoS detection, health-based detection for reducing false positives, enhanced visibility into DDoS events, assistance from the Shield Response Team, custom mitigations, and cost-protection safeguards. You can learn more about Shield Advanced capabilities in the AWS Shield Advanced User Guide.
If you have feedback about this blog post, submit comments in the Comments section below. You can also start a new thread on AWS Shield re:Post to get answers from the community.
Want more AWS Security news? Follow us on Twitter.