All posts by Patrick R. Donahue

Secure by default: recommendations from the CISA’s newest guide, and how Cloudflare follows these principles to keep you secure

Post Syndicated from Patrick R. Donahue original https://blog.cloudflare.com/secure-by-default-understanding-new-cisa-guide/

When you buy a new house, you shouldn’t have to worry that everyone in the city can unlock your front door with a universal key before you change the lock. You also shouldn’t have to walk around the house with a screwdriver and tighten the window locks and back door so that intruders can’t pry them open. And you really shouldn’t have to take your alarm system offline every few months to apply critical software updates that the alarm vendor could have fixed with better software practices before they installed it.

Similarly, you shouldn’t have to worry that when you buy a network discovery tool it can be accessed by any attacker until you change the password, or that your expensive hardware-based firewalls can be recruited to launch DDoS attacks or run arbitrary code without the need to authenticate.

This “default secure” posture is the focus of a recently published guide jointly authored by the Cybersecurity and Infrastructure Security Agency (CISA), the NSA, the FBI, and six other international agencies representing the United Kingdom, Australia, Canada, Germany, the Netherlands, and New Zealand. In the guide, the authors implore technology vendors to follow Secure-by-Design and Secure-by-Default principles, shifting the burden of security as much as possible away from the end user and back towards the manufacturer:

The authoring agencies strongly encourage every technology manufacturer to build their products in a way that prevents customers from having to constantly perform monitoring, routine updates, and damage control on their systems to mitigate cyber intrusions. Manufacturers are encouraged to take ownership of improving the security outcomes of their customers. Historically, technology manufacturers have relied on fixing vulnerabilities found after the customers have deployed the products, requiring the customers to apply those patches at their own expense. Only by incorporating Secure-by-Design practices will we break the vicious cycle of creating and applying fixes.

In this post we’ll review some of the authors’ recommendations, discuss how Cloudflare applies these principles to the products that we build, and provide some suggestions on what other organizations can do to support similar initiatives internally.

Secure-by-Default: building products that require minimal hardening

Cloudflare makes cybersecurity products that protect employees, applications, and networks from attack. Typically, the ideas for new products and features come from one of two places: i) customers who are expressing a risk they’re worried about; or ii) our own internal Security team asking for help better securing Cloudflare’s internal network from threats. (The products that we build for our Security team are also then made available to our customers, once they’re battle tested internally.)

Whatever the source, when a product manager thinks through a new product offering, they first socialize the idea around the company for feedback. Often this feedback includes encouragement to make the product more “magical”. What this means in practice is that customers should have to do less, but get more; our job is essentially to make security administrators’ lives easier so they can focus their time where it’s most needed. An early example of this approach can be found in our blog post announcing Universal SSL in 2014:

For all customers, we will now automatically provision a SSL certificate on CloudFlare’s network that will accept HTTPS connections for a customer’s domain and subdomains.

The idea sounds simple but in 2014 this approach to SSL/TLS was unique in the industry: every other platform required customers to take some action before their website was encrypted-in-transit using HTTPS to protect against snooping and impersonation. Security administrators either had to go acquire the certificate themselves and upload (and renew) it, or manually perform some steps to demonstrate ownership to a certificate authority (CA). Because Cloudflare both manages authoritative DNS for our customers and runs a global reverse proxy, we can take care of all these steps automagically. Additionally, as new SSL/TLS attacks are discovered, we automatically improve how our servers negotiate encryption with browsers and API clients to keep our customers secure. No customer configuration or oversight is required.

We agree with CISA’s statement that “[t]he complexity of security configuration should not be a customer problem,” and we aim to build products that materially improve security with little to no customer action beyond putting their employees, applications, and networks behind Cloudflare:

Secure-by-Default products are those that are secure to use “out of the box” with little to no configuration changes necessary and security features available without additional cost. Together, these two principles move much of the burden of staying secure to manufacturers and reduce the chances that customers will fall victim to security incidents resulting from misconfigurations, insufficiently fast patching, or many other common issues.

Another example of our Secure-by-Default approach is how we protect against “0 day” attacks in our Web Application Firewall using machine learning (ML). Zero day attacks are security vulnerabilities discovered by attackers or researchers before the software vendor is aware of the issue (or has had a chance to release a patch). Often the vulnerability is exploited “in the wild” before customers are able to plug the holes in their systems, or their upstream security vendors are able to virtually patch the issue. A recent, widely exploited 0 day was the Log4j vulnerability; software manufacturers using this library in their code raced to update their software as quickly as possible. But many took days, weeks, or even months to do so.

Cloudflare is proud of the speed at which we responded to Log4j, and the fact we provide the highest severity WAF protections to all plans including Free—but it’s always a race against the clock. We created the ML-computed WAF Attack Score to provide our customers with a more Secure-by-Default system that didn’t rely on new rules being raced out, or making reactive configuration changes. The way most WAFs work is they match properties of an incoming HTTP request against a set of “signatures”, which are essentially patterns described using regular expressions. We do that too, but we also train ML models on the “true positive” matches, which allow us to infer the likelihood a new request is malicious even when it doesn’t match a signature. Customers can write one rule up front that blocks high-confidence malicious requests, and they’re protected against 0 days thereafter. Secure by default, even against attacks that have not yet been discovered.
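To make the “one rule up front” idea concrete, below is a minimal TypeScript sketch that deploys a custom WAF rule blocking requests the attack-scoring model considers highly likely to be malicious. The zone ID, API token, score threshold, and exact Rulesets API path are illustrative assumptions rather than authoritative values; the WAF documentation and dashboard rule builder remain the source of truth.

```typescript
// Hedged sketch: deploy a custom WAF rule that blocks requests with a low
// (attack-like) WAF Attack Score. The endpoint path, threshold, and IDs
// below are illustrative assumptions.
const ZONE_ID = "YOUR_ZONE_ID";     // assumption: your zone identifier
const API_TOKEN = "YOUR_API_TOKEN"; // assumption: token with WAF edit rights

async function blockLikelyAttacks(): Promise<void> {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/rulesets/phases/http_request_firewall_custom/entrypoint`,
    {
      method: "PUT",
      headers: {
        Authorization: `Bearer ${API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        rules: [
          {
            description: "Block requests the ML model scores as likely attacks",
            // Lower scores indicate higher attack likelihood; 20 is an example cutoff.
            expression: "cf.waf.score le 20",
            action: "block",
          },
        ],
      }),
    }
  );
  if (!res.ok) throw new Error(`Rule deployment failed: ${res.status}`);
}

blockLikelyAttacks().catch(console.error);
```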

One final example of this approach can be found in how we designed Cloudflare One, the zero trust suite we initially built to protect Cloudflare’s own employees and networks. When we opened Cloudflare One to businesses of all sizes, we wanted a secure-by-default way to connect and protect corporate networks that didn’t require poking a bunch of holes in network firewalls.

Instead of this traditional route that requires security administrators to make upfront changes and avoid firewall configuration drift over time, we designed Cloudflare Tunnel to establish mutually authenticated, encrypted connections directly to Cloudflare’s edge. Additionally, we wanted to completely shut off access to our customers’ networks by default, except for access to specific applications by strongly-authenticated users rather than IP and port holes that aren’t tied to a known identity.

Secure-by-Design: continual (re)investment in secure development practices

Secure defaults that require minimal customer intervention are critically important, but not sufficient on their own to protect our customers. How the products are built by engineering and evaluated by our CSO organization for adherence to secure development practices is just as important in minimizing vulnerabilities that may result in customer compromise. And none of that matters without support from executive leadership to make significant investments that may not immediately result in visible customer benefit.

Cloudflare’s engineering team builds products using the most secure development practices and tools available at the time of implementation, that sufficiently meet the requirements and architectural constraints. The options available evolve over time of course, so what was most appropriate (and secure) back in 2013 when we wrote the initial version of the Cloudflare WAF may no longer be the best option in 2023. Lua made sense for us for the reasons outlined in this talk, but when the WAF was starting to show its age in 2017 we had a choice: continue bolting on features quickly to try to close the gap with competitive products, or invest in a memory-safe language that improved security and performance at the cost of near-term momentum?

We knew that if we designed our underlying WAF platform correctly, customers—at scale—could more easily adopt other Application Security products such as Bot Management and our new API Gateway. Our existing core WAF functionality would also benefit from new evaluation engines, running 40% faster and becoming more resilient. But proposing an entire rewrite of a system that processed millions of requests per second in a relatively nascent language, Rust, was not a small undertaking or ask. Fortunately we had the full support of Cloudflare’s executive and technical leadership teams to make this investment, which is critical for security as CISA et al. write (emphasis added):

Secure-by-Design development requires the investment of significant resources by software manufacturers at each layer of the product design and development process that cannot be “bolted on” later. It requires strong leadership by the manufacturer’s top business executives to make security a business priority, not just a technical feature.

[and]

Manufacturers are encouraged to make hard tradeoffs and investments, including those that will be “invisible” to the customers, such as migrating to programming languages that eliminate widespread vulnerabilities. They should prioritize features, mechanisms, and implementation of tools that protect customers rather than product features that seem appealing but enlarge the attack surface.

The end result of our efforts was a new WAF rule evaluation engine written entirely in Rust—a performant, memory-safe language that is immune to buffer overflow attacks and has positioned us well for the future. After that rewrite, our Cache team embarked on a similarly XL-sized project: replacing NGINX with a Rust-based reverse proxy engine called Pingora. These projects were costly, but improved the security posture of our customers:

The authoring agencies acknowledge that taking ownership of the security outcomes for customers and ensuring this level of customer security may increase development costs.

However, investing in “Secure-by-Design” practices while developing new technology products and maintaining existing ones can substantially improve the security posture of customers and reduce the likelihood of being compromised.

Secure-by-Default and Secure-by-Design: implementing these principles into your organization

Building secure products that are easy to adopt and require minimal ongoing customer oversight is paramount in today’s threat environment, but it takes an aligned organization to deliver. Below are some techniques that Cloudflare employs to solve our customers’ security problems, and shift the operational burden away from their network towards ours:

1. Perform as much logic as you can in code you control and can update without user intervention
Like many readers, I’m the technical support person for my parents. Their home networking equipment is quite modern and sends me alerts when there are critical security updates, but I’m always afraid that if I apply updates without being onsite, something might go wrong.

Professional security administrators face the same problem when dealing with enterprise networking equipment. When software gets shipped into heterogeneous customer environments, things can go wrong. Having a single software stack that runs on every server in our fleet has made it immeasurably easier to stay on top of software updates for our customers.

The more your organization can shift the operational burden away from your customers and onto your own infrastructure, the easier it will be for people to adopt your products and keep them secure. Relying on overburdened administrators to apply patches, especially if there’s risk of downtime, is a difficult proposition.

2. Educate executive leadership on the importance of continual reinvestment in modern security standards, and run experiments to build credibility
Today’s economic environment is challenging: customers are being forced to do more with less, while the software providers they depend upon are no longer hiring at the rate they once were. The appropriate prioritization of scarce engineering resources across new features, technical debt, and security hardening is not obvious and is likely met internally with differing opinions. Laying out clear business cases for adopting secure-by-default and secure-by-design mindsets is thus even more critical for improving security outcomes without obvious customer-visible benefit.

Projects should also be appropriately scoped, and experiments should be run early and often. Do not wait until you are nearly through a project before letting others play with and review your proof-of-concepts and code. You may find support within the organization where you did not expect it, accelerating projects and increasing the likelihood that they succeed. You’ll also be able to demonstrate unexpected benefits that customers will embrace, helping build a base of support for the sustained effort.

3. Empower your security practitioners to provide feedback early and often in the development cycle
The skill set required to code new products and features does not perfectly overlap with the skill set required to spot security risks in them. Application security experts are helpful because they can quickly pattern match security “code smells” with other projects they’ve previously reviewed and helped harden.

You should embed your security experts within your product engineering teams so that they can provide guidance at the earliest (and lowest cost) phase of development. Having these experts review your functional specifications may save development cycles downstream.

4. Incentivize products that do more for customers “automagically”
People respond to incentives. If your business is built on selling professional services to enterprise customers, there is little incentive for your software developers to minimize the effort required during the installation, tuning, and hardening process.

If your products are designed to be consumed by hundreds of thousands of customers of all sizes, you have no choice but to do more for customers out-of-the-box. Otherwise, your support organization will be overwhelmed and your customers will be vulnerable.

5. Avoid default passwords at all costs
Every day, Cloudflare mitigates DDoS attacks launched by botnets comprised of insecure-by-default devices. Manufacturers ship IoT devices and home networking equipment with default or easy-to-guess passwords, and many proxy vendors require no authentication out of the box.

If these manufacturers followed the principles outlined in the CISA guide, these attacks would decrease in both intensity and frequency, as fewer and fewer devices could be recruited for attack amplification.

Killnet and AnonymousSudan DDoS attack Australian university websites, and threaten more attacks — here’s what to do about it

Post Syndicated from Patrick R. Donahue original https://blog.cloudflare.com/ddos-attacks-on-australian-universities/

Over the past 24 hours, Cloudflare has observed HTTP DDoS attacks targeting university websites in Australia. Universities were the first of several groups publicly targeted by the pro-Russian hacker group Killnet and their affiliate AnonymousSudan, as revealed in a recent Telegram post. The threat actors called for additional attacks against 8 universities, 10 airports, and 8 hospital websites in Australia beginning on Tuesday, March 28.

Killnet is a loosely formed group of individuals who collaborate via Telegram. Their Telegram channels provide a space for pro-Russian sympathizers to volunteer their expertise by participating in cyberattacks against western interests.

Figure: % of traffic constituting DDoS attacks for organizations in Australia

This is not the first time Cloudflare has reported on Killnet activity. On February 2, 2023, we noted in a blog post that a pro-Russian hacktivist group — claiming to be part of Killnet — was targeting multiple healthcare organizations in the US. In October 2022, Killnet called for attacks on US airport websites, and attacked the US Treasury the following month.

As seen with past attacks from this group, these most recent attacks do not seem to be originating from a single botnet, and the attack methods and sources seem to vary, suggesting the involvement of multiple individual threat actors with varying degrees of skill.

DDoS (Distributed Denial of Service) attacks often make headlines due to their ability to disrupt critical services. Cloudflare recently announced that it had blocked the largest attack to date, which peaked at 71 million requests per second (rps) and was 54% higher than the previous record attack from June 2022.

DDoS attacks are designed to overwhelm networks with massive amounts of malicious traffic, and when executed correctly, can disrupt service or take networks offline. The size, sophistication, and frequency of attacks have been increasing over the past months.

What are Killnet and AnonymousSudan?

Killnet is not a traditional hacking group: it does not have membership, it does not have tools or infrastructure, and it does not operate for financial gain. Instead, Killnet is a space for pro-Russian “hacktivist” sympathizers to volunteer their expertise by participating in cyberattacks against western interests. This collaboration happens entirely in the open via Telegram, where anyone is welcome to join.

Killnet was formed shortly after (and likely in response to) the IT Army of Ukraine, and it emulates their tactics. Most days, administrators of the Killnet Telegram channel will put out a call for volunteers to attack some particular target. Participants share many different tools and techniques for launching successful attacks, and inexperienced individuals are often coached on how to launch cyberattacks by those who are more experienced.

AnonymousSudan is another nontraditional hacking group similar to Killnet, ostensibly composed of Sudanese “hacktivists”. The two groups have recently begun collaborating to attack various western interests.

Attackers, including these groups, are becoming more audacious in the size and scale of the organizations they target. What this means for businesses, especially those with limited cyber resources, is an increasing threat level against vulnerable networks.

Organizations of all sizes need to be prepared for the eventuality of a significant DDoS attack against their networks. Detection and mitigation of attacks should ideally be automated as much as possible, because relying solely on humans to mitigate in real time puts attackers in the driver’s seat.

How should I protect my organization against DDoS?

Cloudflare customers are protected against DDoS attacks; our systems have been automatically detecting and mitigating these attacks. Our team continues to monitor the situation and will deploy countermeasures as needed.

As an additional precaution, customers in the Education, Travel, and Healthcare industries are advised to follow the recommendations below.

  1. Ensure all DDoS Managed Rules are set to default settings (High sensitivity level and mitigation actions).
  2. Enterprise customers with Advanced DDoS should consider enabling Adaptive DDoS Protection.
  3. Deploy firewall rules and rate-limiting rules to enforce a combined positive and negative security model. Reduce the traffic allowed to your website based on your known usage.
  4. Turn on Bot Fight Mode or the equivalent level (SBFM, Enterprise Bot Management) available to you.
  5. Ensure your origin is not exposed to the public Internet, i.e., only enable access to Cloudflare IP addresses (see the sketch after this list for one way to fetch the published ranges).
  6. Enable caching as much as possible to reduce the strain on your origin servers, and when using Workers, avoid overwhelming your origin server with more subrequests than necessary.
  7. Enable DDoS alerting.
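To support recommendation 5 above, the short sketch below pulls Cloudflare’s published IP ranges from the public, unauthenticated /client/v4/ips endpoint so they can be fed into an origin firewall allowlist. How the allowlist is applied (cloud security group, iptables, appliance policy) is environment-specific and left out here.

```typescript
// Sketch: fetch Cloudflare's published IP ranges to build an origin allowlist.
// The /client/v4/ips endpoint is public; applying the CIDRs to your firewall
// is left to your environment.
interface CloudflareIps {
  result: { ipv4_cidrs: string[]; ipv6_cidrs: string[]; etag: string };
  success: boolean;
}

async function printOriginAllowlist(): Promise<void> {
  const res = await fetch("https://api.cloudflare.com/client/v4/ips");
  if (!res.ok) throw new Error(`Failed to fetch IP ranges: ${res.status}`);
  const data = (await res.json()) as CloudflareIps;
  if (!data.success) throw new Error("Cloudflare API reported failure");
  // Allow only these CIDRs to reach your origin on ports 80/443.
  for (const cidr of [...data.result.ipv4_cidrs, ...data.result.ipv6_cidrs]) {
    console.log(cidr);
  }
}

printOriginAllowlist().catch(console.error);
```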

As easy as it has become for the attackers to launch DDoS attacks, we want to make sure that it is even easier – and free – for defenders of organizations of all sizes to protect themselves against DDoS attacks of all types. We’ve been providing unmetered and unlimited DDoS protection for free to all of our customers since 2017. Cloudflare’s mission is to help build a better Internet. A better Internet is one that is more secure, faster, and reliable for everyone – even in the face of DDoS attacks.

If you’d like to learn more about key DDoS trends, download the Cloudflare DDoS Threat Report for quarterly insights.

Cloudforce One is now generally available: empower your security team with threat data, tooling, and access to industry experts

Post Syndicated from Patrick R. Donahue original https://blog.cloudflare.com/cloudforce-one-is-now-ga/

Cloudflare’s threat operations and research team, Cloudforce One, is now open for business and has begun conducting threat briefings. Access to the team is available via an add-on subscription, and includes threat data and briefings, security tools, and the ability to make requests for information (RFIs) to the team.

Fill out this form or contact your account team to learn more.

Subscriptions come in two packages, and are priced based on the number of employees: “Premier” includes our full history of threat data, bundled RFIs, and an API quota designed to support integrations with SIEMs. The “Core” package includes reduced history and quotas. Both packages include access to all available security tools, including a threat investigation portal and sinkholes-as-a-service.

If you’re an enterprise customer interested in understanding the type of threat briefings that Cloudforce One customers receive, you can register here for “YackingYeti: How a Russian threat group targets Ukraine—and the world”, scheduled for October 12. The briefing will include Q&A with Blake Darché, head of Cloudforce One, and an opportunity to learn more about the team and offering.

Requests for Information (RFIs) and Briefings

The Cloudforce One team is composed of analysts assigned to five subteams: Malware Analysis, Threat Analysis, Active Mitigation and Countermeasures, Intelligence Analysis, and Intelligence Sharing. Collectively, they have tracked many of the most sophisticated cyber criminals on the Internet while at the National Security Agency (NSA), USCYBERCOM, and Area 1 Security, and have worked closely with similar organizations and governments to disrupt these threat actors. They’ve also been prolific in publishing “finished intel” reports on security topics of significant geopolitical importance, such as targeted attacks against governments, technology companies, the energy sector, and law firms, and have regularly briefed top organizations around the world on their efforts.


Included with a Cloudforce One subscription is the ability to make “requests for information” (RFIs) to these experts. RFIs can be on any security topic of interest, and will be analyzed and responded to in a timely manner. For example, the Cloudforce One Malware Analysis team can accept uploads of possible malware and provide a technical analysis of the submitted resource. Each plan level comes with a fixed number of RFIs, and additional requests can be added.

In addition to customer-specific requests, Cloudforce One conducts regular briefings on a variety of threats and threat actors—those targeting specific industries as well as more general topics of interest.

Threat Data

The best way to understand threats facing networks and applications connected to the Internet is to operate and protect critical, large-scale Internet infrastructure, and to defend millions of customers, large and small, against attacks. Since our early days, Cloudflare has set out to build one of the world’s largest global networks to do just that. Every day we answer trillions of DNS queries, track the issuance of millions of SSL/TLS certificates in our CT log, inspect millions of emails for threats, route multiple petabytes of traffic to our customers’ networks, and proxy trillions of HTTP requests destined for our customers’ applications. Each one of these queries and packets provides a unique data point that can be analyzed at scale and anonymized into actionable threat data—now available to our Cloudforce One customers.

Data sets now available in the dashboard and via API for subscribers include IP, ASN, and domain intelligence, as well as passive DNS resolutions; threat actor cards with indicators of compromise (IoCs), open port data, and new Managed IP Lists are planned for release later this year.
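As a rough illustration of programmatic access to these data sets, the TypeScript sketch below queries domain intelligence over the API. The account ID, token, endpoint path, and response shape are assumptions for illustration only; the current API documentation is the authoritative reference.

```typescript
// Hedged sketch: look up domain intelligence via the Cloudflare API.
// Account ID, token, endpoint path, and response fields are assumptions.
const ACCOUNT_ID = "YOUR_ACCOUNT_ID"; // assumption
const API_TOKEN = "YOUR_API_TOKEN";   // assumption: token with intel read access

async function lookupDomain(domain: string): Promise<void> {
  const url =
    `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}` +
    `/intel/domain?domain=${encodeURIComponent(domain)}`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${API_TOKEN}` },
  });
  if (!res.ok) throw new Error(`Lookup failed: ${res.status}`);
  // Print whatever intelligence is returned (categories, risk signals, etc.).
  console.log(JSON.stringify(await res.json(), null, 2));
}

lookupDomain("example.com").catch(console.error);
```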

Security Tools

Security analysts and threat hunting teams are being forced to do more with less in today’s operating environment, but that doesn’t reduce their need for reliable tools that can quickly identify and eliminate risks.

Bundled with Cloudforce One are several security tools that can be deployed as services to expedite threat hunting and remediation:

Threat Investigation Portal

  • Located within Security Center, the Investigate tab is your portal for querying current and historical threat data on IPs, ASNs, URLs (new!), and domains.
  • URLs can now be scanned for phishing contents, with heuristic and machine learning-scored results presented on demand.

Brand Protection (new!)

  • Also located within the Security Center, the Brand Protection tab can be used to register keywords or assets (e.g., corporate logos) that customers wish to be notified about when they appear on the Internet.

Sinkholes (new!)

  • Sinkholes can be created on-demand, as a service, to monitor hosts infected with malware and prevent them from communicating with command-and-control (C2) servers.
  • After creating a sinkhole via API, an IP will be returned which can be used with DNS products like Cloudflare Gateway to route web requests to safe sinkholes (and away from C2 servers); a rough sketch of this flow appears after this list. Sinkholes can also be used to intercept SMTP traffic.
  • Premier customers can also bring their own IP address space to use for sinkholes, to accommodate egress firewall filtering or other use cases. In the future we plan to extend our sinkhole capability to the network layer, which will allow it to be deployed alongside offerings such as Magic Transit and Magic WAN.
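As referenced in the list above, here is a rough sketch of the sinkhole workflow. The sinkhole creation endpoint shown is hypothetical and the response fields are placeholders; the general flow is to create a sinkhole, receive an IP, and then point DNS resolution of known-malicious hostnames at that IP via a Gateway policy.

```typescript
// Hedged sketch of the sinkhole flow. The endpoint path and response fields
// are hypothetical placeholders, not a documented API surface.
const ACCOUNT_ID = "YOUR_ACCOUNT_ID"; // assumption
const API_TOKEN = "YOUR_API_TOKEN";   // assumption

async function createSinkhole(): Promise<void> {
  const res = await fetch(
    // Hypothetical endpoint name, for illustration only.
    `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/cloudforce-one/sinkholes`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ name: "malware-c2-sinkhole" }),
    }
  );
  const data = await res.json();
  const sinkholeIp: string = data?.result?.ip ?? "203.0.113.10"; // placeholder field
  // Next step (dashboard or Gateway API): create a DNS override policy that
  // resolves the malicious hostnames to sinkholeIp instead of the real C2 server.
  console.log(`Point Gateway DNS overrides for C2 hostnames at ${sinkholeIp}`);
}

createSinkhole().catch(console.error);
```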

Getting Started with Cloudforce One

Cloudforce One is open for business and ready to answer your security inquiries. Speak to your account manager or fill out this form to learn more. We hope to see you on the upcoming webinar!

Bring your own license and threat feeds to use with Cloudflare One

Post Syndicated from Patrick R. Donahue original https://blog.cloudflare.com/bring-your-own-threat-feeds-with-cloudflare-one/

At Cloudflare, we strive to make our customers’ lives simpler by building products that solve their problems, are extremely easy to use, and integrate well with their existing tech stack. Another element of ensuring that we fit well with existing deployments is integrating seamlessly with additional solutions that customers subscribe to, and making sure those solutions work collaboratively together to solve a pain point.

Today, we are announcing new integrations that enable our customers to integrate third-party threat intel data with the rich threat intelligence from Cloudflare One products — all within the Cloudflare dashboard. We are releasing this feature in partnership with Mandiant, Recorded Future, and VirusTotal, and will be adding new partners in the coming months.

Customers of these threat intel partners can upload their API keys to the Cloudflare Security Center to enable the use of additional threat data to create rules within Cloudflare One products such as Gateway and Magic Firewall, and infrastructure security products including the Web Application Firewall and API Gateway. Additionally, search results from Security Center’s threat investigations portal will also be automatically enriched with licensed data.

Entering your API keys

Customers will be able to enter their keys by navigating to Security Center → Reference Data, and clicking on the ellipsis next to desired rows and selecting “Edit API key”. Once a valid key has been added, the status listed on the row should change from “No key provided” to “Active key”.


Mandiant

Mandiant Advantage customers with a Threat Intelligence subscription can enter their API keys and leverage Mandiant’s most popular feeds of FQDN and IP address indicators of security threats and their related context throughout Cloudflare One products.

These include lists organized by threat category and aggregations of most active malicious infrastructure. By curating the most recent data and data relevant to your infrastructure on the Cloudflare network, Cloudflare will make it easy to take advantage of active and relevant indicators of malicious activity from Mandiant’s extensive threat intelligence data. Cloudflare takes care of importing the data and refreshing it regularly to help protect you from the latest threats Mandiant sees on the frontlines. Cloudflare products such as Gateway, Magic Firewall, and Web Application Firewall (WAF) will have access to the threat intelligence data and make it easy to operationalize using the same rule builder you use today.

“As cyber threats continue to rapidly evolve, organizations require up-to-date and relevant intelligence integrated with their preferred technology solutions to comprehensively protect their environments. Together, Mandiant and Cloudflare are enabling our mutual customers to better protect themselves from malicious actors that are active on the front lines right now”.
– Robert Wallace, Senior Director, Strategy, Mandiant


Recorded Future

Recorded Future customers can upload their API key to unlock use of Security Control Feeds. Once you have set up your API key, Recorded Future intelligence will also be available in the rule builder of Cloudflare Gateway and Magic Firewall. Cloudflare will present the intelligence that is relevant to and actionable by the product being configured. Intelligence will be regularly updated for you, freeing you to focus on the security policies and actions that are relevant for your organization.

For example, customers will be able to create a rule that blocks connections where the source or destination IP is in the Security Control feed “Command and Control – IP Addresses [Prevent]”. This list will be automatically updated daily for each customer who has a valid API key.
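To make that example concrete, rules like this are written in Cloudflare’s filter expression syntax and reference the feed as a named IP list. The list name below is hypothetical; the real identifier is surfaced in the Magic Firewall or Gateway rule builder once the Recorded Future integration is active.

```typescript
// Hedged sketch: the shape of the filter expression such a rule uses.
// The list name ($recorded_future_c2_prevent) is a hypothetical placeholder.
const blockC2Expression =
  "ip.src in $recorded_future_c2_prevent or ip.dst in $recorded_future_c2_prevent";

// The expression is attached to a "block" action via the rule builder (or the
// corresponding API) rather than evaluated locally; printing it here is only
// to show the syntax.
console.log(blockC2Expression);
```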


As threats accelerate and converge in the world around us, Recorded Future and Cloudflare are working together to empower customers with the right intelligence at the right time, to keep our people and infrastructure safe.
– Craig Adams, Chief Product & Engineering officer, Recorded Future

VirusTotal

VirusTotal Premium customers can upload their API key to augment and enrich Security Center search results for IPs, domains, and URLs. In the future we plan to add additional object types such as binary files.

Results will be automatically populated within a new card in the ‘Investigate’ tab. When searching an IP address, you will see a summary of the IP address information from VirusTotal, including the overall results of the last analysis (e.g., harmless, suspicious, malicious), reputation score, tags, community votes, and the top files (if any) that have been observed communicating with that IP address.

“Cybersecurity teams face a challenging environment as attackers become more sophisticated. They need complete visibility and real-time threat intelligence from multiple sources to combat malicious threats. We are partnering with Cloudflare to help our mutual customers outsmart adversaries.”
– Emiliano Martinez Contreras, Head of Product for VirusTotal — Google

Want to get started?

If you are interested in gaining access during our beta testing phase, please complete this form. And if there are additional data vendors you would like to see us integrate with, including your own sources, click here.

Democratizing email security: protecting individuals and businesses of all sizes from phishing and malware attacks

Post Syndicated from Patrick R. Donahue original https://blog.cloudflare.com/democratizing-email-security/

Since our founding, Cloudflare has been on a mission to take expensive, complex security solutions typically only available to the largest companies and make them easy to use and accessible to everyone. In 2011 and 2015 we did this for the web application firewall and SSL/TLS markets, simplifying the process of protecting websites from application vulnerabilities and encrypting HTTP requests down to single clicks; in 2020, during the start of the COVID-19 pandemic, we made our Zero Trust suite available to everyone; and today—in the face of heightened phishing attacks—we’re doing the same for the email security market.

Once the acquisition of Area 1 closes, as we expect early in the second quarter of 2022, we plan to give all paid self-serve plans access to their email security technology at no additional charge. Control, customization, and visibility via analytics will vary with plan level, and the highest flexibility and support levels will be available to Enterprise customers for purchase.

All self-serve users will also get access to a more feature-packed version of the Zero Trust solution we made available to everyone in 2020. Zero Trust services are incomplete without an email security solution, and CISA’s recent report makes that clearer than ever: over 90% of successful cyber attacks start with a phishing email, so we expect that over time analysts will have no choice but to include email in their definitions of Zero Trust and secure access service edge (SASE).

If you’re interested in reserving your place in line, register your interest by logging into your Cloudflare account at dash.cloudflare.com, selecting your domain, clicking Email, and then “Join Waitlist” at the top of the page; we’ll reach out after the Area 1 acquisition is completed, and the integration is ready, in the order we received your request.

One-click deployment

If you’re already managing your authoritative DNS with Cloudflare, as nearly 100% of non-Enterprise plans are, getting started will take just a single click. Once clicked, we’ll start returning different MX records to anyone trying to send email to your domain. This change will attract all emails destined for your domain, and they’ll be run through Area 1’s models and potentially be quarantined or flagged. Customers of Microsoft Office 365 will also be able to take advantage of APIs for an even deeper integration and capabilities like post-delivery message redaction.


In addition to routing and filtering email, we’ll also automagically take care of your DNS email security records such as SPF, DKIM, DMARC, etc. We launched a tool to help with this last year, and soon we’ll be making it even more comprehensive and easier to use.
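For readers curious what these records look like in practice, here is a minimal sketch that creates example SPF and DMARC TXT records through the DNS records API. The zone ID, token, domain, SPF include, and DMARC reporting address are placeholders; the correct values depend on your mail provider and policy.

```typescript
// Minimal sketch: create example SPF and DMARC TXT records via the DNS
// records API. Zone ID, token, domain, and record contents are placeholders.
const ZONE_ID = "YOUR_ZONE_ID";     // assumption
const API_TOKEN = "YOUR_API_TOKEN"; // assumption: token with DNS edit rights

async function createTxtRecord(name: string, content: string): Promise<void> {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ type: "TXT", name, content, ttl: 3600 }),
    }
  );
  if (!res.ok) throw new Error(`Failed to create ${name}: ${res.status}`);
}

async function main(): Promise<void> {
  // SPF: authorize your mail provider's senders (the include value is a placeholder).
  await createTxtRecord("example.com", "v=spf1 include:_spf.your-mail-provider.com ~all");
  // DMARC: quarantine failures and send aggregate reports to a placeholder mailbox.
  await createTxtRecord(
    "_dmarc.example.com",
    "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
  );
}

main().catch(console.error);
```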

Integration with other Zero Trust products

As we wrote in the acquisition announcement post on this blog, we’re excited to integrate email security with other products in our Zero Trust suite. For customers of Gateway and Remote Browser Isolation (RBI), we’ll automatically route potentially suspicious domains and links through these protective layers. Our built-in data loss prevention (DLP) technology will also be wired into Area 1’s technology in deployments where visibility into outbound email is available.

Improving threat intelligence with new data sources

In addition to integrating directly with Zero Trust products, we’re excited about connecting threat data sources from Area 1 into existing Cloudflare products and vice versa. For example, phishing infrastructure identified during Area 1’s Internet-wide scans will be displayed within the recently launched Cloudflare Security Center, and 1.1.1.1’s trillions of queries per month will help Area 1 identify new domains that may be threats. Domains that are newly registered, or registered with slight variations of legitimate domains, are often warning signs of an upcoming phishing attack.

Getting started

Cloudflare has been a happy customer of Area 1’s technology for years, and we’re excited to open it up to all of our customers as soon as possible. If you’re as excited as we are about being able to use this in your Pro or Business plan, reserve your place in line today within the Email tab for your domain. Or if you’re an Enterprise customer and want to get started immediately, fill out this form or contact your Customer Success Manager.

Investigating threats using the Cloudflare Security Center

Post Syndicated from Patrick R. Donahue original https://blog.cloudflare.com/security-center-investigate/

Cloudflare blocks a lot of diverse security threats, with some of the more interesting attacks targeting the “long tail” of the millions of Internet properties we protect. The data we glean from these attacks trains our machine learning models and improves the efficacy of our network and application security products, but historically hasn’t been available to query directly. This week, we’re changing that.

All customers will soon be granted access to our new threat investigations portal, Investigate, in the Cloudflare Security Center (first launched in December 2021). Additionally, we’ll be annotating threats across our analytics platform with this intelligence to streamline security workflows and tighten feedback loops.

What sorts of data might you want to look up here? Let’s say you’re seeing an IP address in your logs and want to learn which hostnames have pointed to it via DNS, or you’re seeing a cluster of attacks come from an autonomous system (AS) you’re not familiar with. Or maybe you want to investigate a domain name to see how it’s been categorized from a threat perspective. Simply enter any of those items into the omni search box, and we’ll tell you everything we know.

IPs and hostnames will be available to query this week, followed by AS details to give you insight into the networks that communicate with your Cloudflare accounts. Next month, as we move to general availability, we’ll add more data types and properties. Integrations with partners will allow you to use your existing license keys to see all your threat data in a single, unified interface. We also plan to show how both your infrastructure and corporate employees are interacting with any objects you look up, e.g., you can see how many times an IP triggers a WAF or API Shield rule, or how many times your employees attempted to resolve a domain that’s known to serve malware.

Annotations in the dashboard: actionable intelligence in context

Looking up threat data on an ad hoc basis is great, but it’s better when that data is annotated directly in logs and analytics. Starting this week, we will begin rolling out the intelligence available in Investigate into the dashboard wherever it is relevant to your workflow. We’re starting with the web application firewall analytics for your websites that are behind Cloudflare.

Say you are investigating a security alert for a large number of requests that are blocked by a web application firewall rule. You might see that the alert was caused by an IP address probing your website for commonly exploited software vulnerabilities. If the IP in question were a cloud IP or flagged as an anonymizer, contextual intelligence will show that information directly on the analytics page.

This context can help you see patterns. Are attacks coming from anonymizers or the Tor network? Are they coming from cloud virtual machines? An IP is just an IP. But seeing a credential stuffing attack coming from anonymizers is a pattern that enables a proactive response, “Is my bot management configuration up-to-date?”


Cloudflare’s network vantage point and how this informs our data

The scale at which each product suite operates at Cloudflare is staggering. At peak, Cloudflare handles 44 million HTTP requests a second, from more than 250 cities in over 100 countries. The Cloudflare network responds to over 1.2 trillion DNS queries per day, and it has 121 Tbps of network capacity to serve traffic and mitigate denial of service attacks across all products. But on top of this immense scale, Cloudflare’s architecture enables refining raw data and combining intelligence from all of our products to paint a holistic picture of the security landscape.

We are able to take signals refined from the raw data generated by each product and combine them with signals from other products and capabilities to enhance our network and threat data capabilities. It is a common paradigm for security products to be built to have positive flywheel effects among users of the products. If one customer sees a new piece of malware, an endpoint protection vendor can deploy an update that will detect and block this malware for all their other customers. If a botnet attacks one customer, this provides information that can be used to find the signature of that botnet and protect other customers. If a device participates in a DDoS (Distributed Denial of Service) attack, that information can be used to help the network detect and mitigate future DDoS attacks faster. Cloudflare’s breadth of product offerings means that the flywheel effect benefits accumulate not just between users, but between products as well.

Let’s look at some examples:

DNS resolution and certificate transparency

First, Cloudflare operates 1.1.1.1, one of the largest recursive DNS resolvers in the world. We operate it in a privacy-forward manner, so here at Cloudflare we do not know who or what IP performed a query, nor are we able to correlate queries together to distinct anonymous users. However, through the requests the resolver handles, Cloudflare sees newly registered and newly seen domains. Additionally, Cloudflare has one of the most advanced SSL/TLS encryption products on the market, and as part of that effort helps maintain the Certificate Transparency logs. These are public logs of every TLS certificate issued by a root certificate authority that is trusted by web browsers. Between these two products, Cloudflare has an unmatched view of what domains are out there on the Internet and when they become active. We use this information not only to populate our new and newly seen domains categories for our Gateway product, but we also feed these domains into machine learning models that label suspicious or potentially malicious domains early in their lifecycle.

Email security

As another example, with the acquisition of Area 1, Cloudflare will bring a new set of mutually-reinforcing capabilities into its product offering. All the signals we can generate for a domain from our 1.1.1.1 resolver will become available to help identify malicious email, and Area 1’s years of expertise in identifying malicious email will be able to feed back into Cloudflare’s Gateway product and 1.1.1.1 for Families DNS resolver. In the past, data integrations like this would have been performed by IT or security teams. Instead, data will be able to flow seamlessly between the points on your organization’s attack surface, mutually reinforcing the quality of the analysis and classification. The entire Cloudflare Zero Trust toolkit, including request logging, blocking, and remote browser isolation will be available to handle potentially malicious links delivered via email, using the same policies already in place for other security risks.

Over the last few years, Cloudflare has integrated the use of machine learning in many of our product offerings, but today we’ve launched a new tool that puts the data and signals that power our network security into our customers’ hands as well. Whether responding to security incidents, threat hunting, or proactively setting security policies to protect your organization, you, human, can now be part of the Cloudflare network as well. Cloudflare’s unique position in the network means that your insights can be fed back into the network not just to protect your organization across all the Cloudflare products it uses, but also to contribute to mutual insight and defense among all Cloudflare customers.

Looking forward

Cloudflare can cover your organization’s whole attack surface: defending websites, protecting devices and SaaS applications with Cloudflare Zero Trust, your locations and offices with Magic Transit, and your email communications. Security Center is here to make sure you have all the information you need to understand the cyber security risks present today, and to help you defend your organization using Cloudflare.

“What is the wiper malware that I hear about on the news, and how do I protect my company from it?” We hear your questions, and we’re going to give you answers. Not just raw information, but what is relevant to you and how you use the Internet. We have big plans for Security Center. A file scanning portal will provide you with information about JavaScript files seen by Page Shield, executable files scanned by Gateway, and the ability to upload and scan files. Indicators of Compromise like IP addresses and domains will link to information about the relevant threat actors, when known, giving you more information about the techniques and tactics you are faced with, and information about how Cloudflare products can be used to defend against them. CVE search will let you find information on software vulnerabilities, along with the same easy-to-understand Cloudflare perspective you are used to reading on this blog to help decode the jargon and technical language. With today’s release, we’re just getting started.

May I ask who’s calling, please? A recent rise in VoIP DDoS attacks

Post Syndicated from Patrick R. Donahue original https://blog.cloudflare.com/attacks-on-voip-providers/

Over the past month, multiple Voice over Internet Protocol (VoIP) providers have been targeted by Distributed Denial of Service (DDoS) attacks from entities claiming to be REvil. The multi-vector attacks combined both L7 attacks targeting critical HTTP websites and API endpoints, as well as L3/4 attacks targeting VoIP server infrastructure. In some cases, these attacks resulted in significant impact to the targets’ VoIP services and website/API availability.

Cloudflare’s network is able to effectively protect and accelerate voice and video infrastructure because of our global reach, sophisticated traffic filtering suite, and unique perspective on attack patterns and threat intelligence.

If you or your organization have been targeted by DDoS attacks, ransom attacks and/or extortion attempts, seek immediate help to protect your Internet properties. We recommend not paying the ransom, and reporting it to your local law enforcement agencies.

Voice (and video, emojis, conferences, cat memes and remote classrooms) over IP

Voice over IP (VoIP) is a term that’s used to describe a group of technologies that allow for communication of multimedia over the Internet. This technology enables your FaceTime call with your friends, your virtual classroom lessons over Zoom and even some “normal” calls you make from your cell phone.


The principles behind VoIP are similar to traditional digital calls over circuit-switched networks. The main difference is that the encoded media, e.g., voice or video, is partitioned into small units of bits that are transferred over the Internet as the payloads of IP packets according to specially defined media protocols.

This “packet switching” of voice data, as compared to traditional “circuit switching”, results in much more efficient use of network resources. As a result, calling over VoIP can be much more cost-effective than calls made over the POTS (“plain old telephone service”). Switching to VoIP can cut down telecom costs for businesses by more than 50%, so it’s no surprise that one in every three businesses has already adopted VoIP technologies. VoIP is flexible, scalable, and has been especially useful in bringing people together remotely during the pandemic.

A key protocol behind most VoIP calls is the heavily adopted Session Initiation Protocol (SIP). SIP was originally defined in RFC 2543 (1999) and designed to serve as a flexible and modular protocol for initiating calls (“sessions”), whether voice or video, two-party or multiparty.

Speed is key for VoIP

Real-time communication between people needs to feel natural, immediate and responsive. Therefore, one of the most important features of a good VoIP service is speed. The user experiences this as natural-sounding audio and high-definition video, without lag or stutter. Users’ perceptions of call quality are typically closely measured and tracked using metrics like Perceptual Evaluation of Speech Quality and Mean Opinion Scores. While SIP and other VoIP protocols can be implemented using TCP or UDP as the underlying protocol, UDP is typically chosen because it’s faster for routers and servers to process.


UDP is a protocol that is unreliable, stateless and comes with no Quality of Service (QoS) guarantees. What this means is that the routers and servers typically use less memory and computational power to process UDP packets and therefore can process more packets per second. Processing packets faster results in quicker assembly of the packets’ payloads (the encoded media), and therefore a better call quality.

Under the “faster is better” guideline, VoIP servers will attempt to process the packets as fast as possible on a first-come-first-served basis. Because UDP is stateless, it doesn’t know which packets belong to existing calls and which attempt to initiate a new call. Those details are in the SIP headers in the form of requests and responses, which are not processed until further up the network stack.

When the rate of packets per second increases beyond the router’s or server’s capacity, the faster-is-better guideline actually turns into a disadvantage. While a traditional circuit-switched system will refuse new connections when its capacity is reached and attempt to maintain the existing connections without impairment, a VoIP server, in its race to process as many packets as possible, will not be able to handle all packets or all calls when its capacity is exceeded. This results in latency and disruptions for ongoing calls, and failed attempts to make or receive new calls.

Without proper protection in place, the race for a superb call experience comes at a security cost which attackers learned to take advantage of.

DDoSing VoIP servers

Attackers can take advantage of UDP and the SIP protocol to overwhelm unprotected VoIP servers with floods of specially-crafted UDP packets. One way attackers overwhelm VoIP servers is by pretending to initiate calls. Each time a malicious call initiation request is sent to the victim, their server uses computational power and memory to authenticate the request. If the attacker can generate enough call initiations, they can overwhelm the victim’s server and prevent it from processing legitimate calls. This is a classic DDoS technique applied to SIP.


A variation on this technique is a SIP reflection attack. As with the previous technique, malicious call initiation requests are used. However, in this variation, the attacker doesn’t send the malicious traffic to the victim directly. Instead, the attacker sends it to many thousands of random, unwitting SIP servers all across the Internet, spoofing the source of the malicious traffic to be the address of the intended victim. That causes thousands of SIP servers to start sending unsolicited replies to the victim, who must then use computational resources to discern whether they are legitimate. This too can starve the victim’s server of resources needed to process legitimate calls, resulting in a widespread denial of service event for users. Without the proper protection in place, VoIP services can be extremely susceptible to DDoS attacks. Once again, a classic DDoS attack type being used against SIP.

The graph below shows a recent multi-vector UDP DDoS attack that targeted VoIP infrastructure protected by Cloudflare’s Magic Transit service. The attack peaked just above 70 Gbps and 16M packets per second. While it’s not the largest attack we’ve ever seen, attacks of this size can have large impact on unprotected infrastructure. This specific attack lasted a bit over 10 hours and was automatically detected and mitigated.

[Alt text: Graph of a 70 Gbps DDoS attack against a VoIP provider]

Below are two additional graphs of similar attacks seen last week against SIP infrastructure. In the first chart we see multiple protocols being used to launch the attack, with the bulk of traffic coming from (spoofed) DNS reflection and other common amplification and reflection vectors. These attacks peaked at over 130 Gbps and 17.4M pps.

[Alt text: Graph of a 130 Gbps DDoS attack against a different VoIP provider]

Protecting VoIP services without sacrificing performance

One of the most important factors for delivering a quality VoIP service is speed. The lower the latency, the better. Cloudflare’s Magic Transit service can help protect critical VoIP infrastructure without impacting latency and call quality.

[alt text: Diagram of Cloudflare Magic Transit routing]

Cloudflare’s Anycast architecture, coupled with the size and scale of our network, minimizes added latency and can even improve performance for traffic routed through Cloudflare compared to the public Internet. Check out our recent post from Cloudflare’s Speed Week for more details on how this works, including test results demonstrating a performance improvement of 36% on average across the globe for a real customer network using Magic Transit.

[alt text: World map of Cloudflare locations]

Furthermore, every packet that is ingested in a Cloudflare data center is analyzed for DDoS attacks using multiple layers of out-of-path detection to avoid latency. Once an attack is detected, the edge generates a real-time fingerprint that matches the characteristics of the attack packets. The fingerprint is then matched in the Linux kernel eXpress Data Path (XDP) to quickly drop attack packets at wirespeed without inflicting collateral damage on legitimate packets. We have also recently deployed additional specific mitigation rules to inspect UDP traffic to determine whether it is valid SIP traffic.

Detection and mitigation are performed autonomously within every single Cloudflare edge server; there is no “scrubbing center” with limited capacity and limited deployment scope in the equation. Additionally, threat intelligence is automatically shared across our network in real time to ‘teach’ other edge servers about the attack.

Edge detections are also completely configurable. Cloudflare Magic Transit customers can use the L3/4 DDoS Managed Ruleset to tune and optimize their DDoS protection settings, and also craft custom packet-level (including deep packet inspection) firewall rules using the Magic Firewall to enforce a positive security model.

Bringing people together, remotely

Cloudflare’s mission is to help build a better Internet. A big part of that mission is making sure that people around the world can communicate with their friends, family and colleagues uninterrupted — especially during these times of COVID. Our network is uniquely positioned to help keep the world connected, whether that is by helping developers build real-time communications systems or by keeping VoIP providers online.

Our network’s speed and our always-on, autonomous DDoS protection technology helps VoIP providers to continue serving their customers without sacrificing performance or having to give in to ransom DDoS extortionists.

Talk to a Cloudflare specialist to learn more.

Under attack? Contact our hotline to speak with someone immediately.

Upgrading the Cloudflare China Network: better performance and security through product innovation and partnership

Post Syndicated from Patrick R. Donahue original https://blog.cloudflare.com/upgrading-the-cloudflare-china-network/

Core to Cloudflare’s mission of helping build a better Internet is making it easy for our customers to improve the performance, security, and reliability of their digital properties, no matter where in the world they might be. This includes Mainland China. Cloudflare has had customers using our service in China since 2015 and recently, we expanded our China presence through a partnership with JD Cloud, the cloud division of Chinese Internet giant, JD.com. We’ve also had a local office in Beijing for several years, which has given us a deep understanding of the Chinese Internet landscape as well as local customers.

The new Cloudflare China Network built in partnership with JD Cloud has been live for several months, with significant performance and security improvements compared to the previous in-country network. Today, we’re excited to describe the improvements we made to our DNS and DDoS systems, and provide data demonstrating the performance gains customers are seeing. All customers licensed to operate in China can now benefit from these innovations, with the click of a button in the Cloudflare dashboard or via the API.

Serving DNS inside China

With over 14% of all domains on the Internet using Cloudflare’s nameservers, we are the largest DNS provider. Furthermore, we pride ourselves on consistently being among the fastest authoritative nameservers, answering about 12 million DNS queries per second on average (in Q2 2021). We achieve this scale and performance by running our DNS platform on our global network in more than 200 cities, in over 100 countries.

Not too long ago, a user in mainland China accessing a website using Cloudflare DNS did not fully benefit from these advantages. Their DNS queries had to leave the country and, in most cases, cross the Pacific Ocean to reach our nameservers outside of China. This network distance introduced latency and sometimes even packet drops, resulting in a poor user experience.

With the new China Network offering built on JD Cloud’s infrastructure, customers are now able to serve their DNS in mainland China. This means DNS queries are answered directly from one of the JD Cloud Points of Presence (PoPs), leading to faster response times and improved reliability.

Once a user signs up a domain and opts in to serve their DNS in China, we will assign two nameservers, drawn from two of the following three domains:

cf-ns.com
cf-ns.net
cf-ns.tech

We selected these Top Level Domains (TLDs) because they offer the best possible performance from within mainland China. They are chosen to always be different from the TLD of the domain using them. For example, example.com will be assigned nameservers using the .tech and .net TLDs. This gives us “glueless delegations” for customers’ nameservers, allowing us to dynamically return nameserver IP addresses instead of static glue records.

A “glue record” (or just “glue”) is a mapping between nameservers and IPs that’s added by registrars to break circular lookup dependencies when a domain uses a nameserver with the same TLD. For example, imagine a resolver asks the .com TLD nameserver: “Where do I find the nameservers for example.com?” and this domain is using ns1.example.com and ns2.example.com as nameservers. If .com just replied: “Go and ask ns1.example.com or ns2.example.com.” the resolver would come back to .com with the same question and this would never stop. One solution is to add glue at .com, so the answer can be: “The nameservers for example.com are ns1.example.com and ns2.example.com, and they can be reached at 192.0.2.78 and 203.0.113.55.”.

By using different TLDs, as described above, we don’t need to rely on glue records for customers’ nameservers. This way, we can ensure that queries will always be answered from the nearest point of presence (PoP) leading to a faster DNS response. Another advantage of serving dynamic nameserver IPs is the ability to distribute queries across different PoPs, which helps to spread load more efficiently and mitigate attacks.
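If you want to see the difference for yourself, querying a TLD nameserver directly with dig makes it visible. This is only a quick sketch: a.gtld-servers.net is one of the public .com TLD servers, while ns1.cf-ns.net is a hypothetical nameserver name used purely for illustration.

# Ask a .com TLD server for the delegation of example.com, without recursion.
# When the delegated nameservers also live under .com, their IPs appear as
# glue in the ADDITIONAL section; with a glueless delegation (nameservers
# under a different TLD, as described above) only the NS names come back.
$ dig +norecurse NS example.com @a.gtld-servers.net

# The follow-up lookup a resolver performs for a glueless delegation,
# which can then be answered dynamically from the nearest PoP:
$ dig +short A ns1.cf-ns.net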

Mitigating DDoS attacks within China

Everywhere in the world except for China and India, we use a technique known as anycast routing to distribute DDoS attacks and absorb them in data centers as close to the traffic source as possible. But as we first wrote in 2015, the Internet in China works a bit differently than the rest of the world so anycast-based mitigation was not an option:

Unlike much of the rest of the world where network routing is open, in China core Internet access is largely controlled by two ISPs: China Telecom and China Unicom. [Today this list also includes China Mobile.] These ISPs control IP address allocation and routing inside the country. Even the Chinese Internet giants rarely own their own IP address allocations, or use BGP to control routing across the Chinese Internet. This makes BGP Anycast and many of the other routing techniques we use across Cloudflare’s network impossible inside of China.

The lack of anycast in China requires a different approach to mitigating attacks, and our expansion with JD Cloud pushed us to further improve the edge-based mitigation system we wrote about earlier this year. Most importantly, we pushed the detection and mitigation of application (L7) attacks to the edge, reducing our time to mitigate and improving the resiliency of the system by removing a dependency on other core data centers for instructions. In the first quarter of 2021, we mitigated 81% of all L7 attacks at the edge.

For the larger network-based (L3/L4) attacks, we worked closely with JD Cloud to augment our in-data center protections with remote signaling to China Telecom, China Unicom, and China Mobile. These integrations allow us to remotely — and automatically — signal from our edge-based mitigation systems when we want upstream filtering assistance from the ISP. Mitigating attacks at the edge is faster than relying on centralized data centers, and in the first quarter of 2021 98.6% of all L3/4 DDoS attacks were mitigated without centralized communication. Attacks exceeding certain thresholds can also be re-routed to large scrubbing centers, a technique that doesn’t make sense in an anycast world but is useful when unicast is the only option.

Beyond the improved mitigation controls, we also developed new traffic engineering processes to move traffic from overloaded data centers to locations with more spare resources. These controls are already used outside of China, but doing so within the country required integration with our DNS systems.

Lastly, because all of our data centers run the same software stack, the work we did to improve the underlying components of DDoS detection and mitigation systems within China has already made its way back to our data centers outside of China.

Improving performance

Cloudflare on JD Cloud is significantly faster than our previous in-country network, allowing us to accelerate the delivery of our customers’ web properties in China.

To compare the Cloudflare PoPs on JD Cloud with our previous in-country network, we deployed a test zone to simulate a customer website on both China networks. Each test zone used the same two origins, both hosted on commonly used public cloud providers: one in the northwestern United States, and the other in Western Europe.

For both zones, we assigned DNS nameservers in China to reduce out-of-country latency incurred during DNS lookups (more details on DNS above). To test our caching, we used a monitoring and benchmarking service with a wide variety of clients in various Chinese cities and provinces to download 100 kilobyte, 1 megabyte, and 10 megabyte files every 15 minutes over the course of 36 hours.
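For readers who want to run a rough spot check of their own, curl’s --write-out timings expose the same basic metrics (DNS lookup time, connection setup, TTFB) from a single client. The URL below is a placeholder, and a one-off measurement from a single vantage point is of course far noisier than the distributed benchmarking service we used:

$ curl -o /dev/null -s \
    -w 'dns=%{time_namelookup}s  connect=%{time_connect}s  tls=%{time_appconnect}s  ttfb=%{time_starttransfer}s  total=%{time_total}s\n' \
    https://test-zone.example.com/100kb.bin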

Latency, as measured by Round Trip Time (RTT) from the client to our JD Cloud PoPs, was reduced at least 30% across tests for all file sizes. This subsequently reduced our Time to First Byte (TTFB) metrics. Reducing latency — and making it more consistent, i.e., improving jitter — has the most impact on other performance metrics, as latency and the slow-start process is the bottleneck for the vast majority of TCP connections.

Our latency reduction comes from the quality of the JD Cloud network, their placement of the PoPs within China, and our ability to direct clients to the closest PoP. As we continue to add more capacity and PoPs in partnership with JD Cloud in the future, we only expect our latency metrics to get even better.

Dynamic Content

[Graph: dynamic content response times, previous in-country network vs. Cloudflare on JD Cloud]

Static Content

[Graph: static content response times, previous in-country network vs. Cloudflare on JD Cloud]

DNS Response Time

[Graph: DNS response times, previous in-country network vs. Cloudflare on JD Cloud]

Looking forward and welcoming new customers in China

Cloudflare’s sustained product investments in China, in partnership with JD Cloud, have resulted in significant performance and security improvements over our previous in-country network first launched in 2015.

Specifically, innovations in DNS and DDoS mitigation technology, alongside an improved network design and distribution of PoPs, have resulted in better security for our customers and at least a 30% performance boost.

This new network is open for business, and interested customers should reach out to learn more.

Protecting against recently disclosed Microsoft Exchange Server vulnerabilities: CVE-2021-26855, CVE-2021-26857, CVE-2021-26858, and CVE-2021-27065

Post Syndicated from Patrick R. Donahue original https://blog.cloudflare.com/protecting-against-microsoft-exchange-server-cves/

Enabling the Cloudflare WAF and Cloudflare Specials ruleset protects against exploitation of unpatched CVEs: CVE-2021-26855, CVE-2021-26857, CVE-2021-26858, and CVE-2021-27065.

Cloudflare has deployed managed rules protecting customers against a series of remotely exploitable vulnerabilities that were recently found in Microsoft Exchange Server. Web Application Firewall customers with the Cloudflare Specials ruleset enabled are automatically protected against CVE-2021-26855, CVE-2021-26857, CVE-2021-26858, and CVE-2021-27065.

If you are running Exchange Server 2013, 2016, or 2019, and do not have the Cloudflare Specials ruleset enabled, we strongly recommend that you do so. You should also follow Microsoft’s urgent recommendation to patch your on-premise systems immediately. These vulnerabilities are actively being exploited in the wild by attackers to exfiltrate email inbox content and move laterally within organizations’ IT systems.

Edge Mitigation

If you are running the Cloudflare WAF and have enabled the Cloudflare Specials ruleset, there is nothing else you need to do. We have taken the unusual step of immediately deploying these rules in “Block” mode given active attempted exploitation.

If you wish to disable the rules for any reason, e.g., you are experiencing a false positive mitigation, you can do so by following these instructions (an API-based sketch follows the list):

  1. Log in to the Cloudflare Dashboard, click on the Cloudflare Firewall tab, and then Managed Rules.
  2. Click on the “Advanced” link at the bottom of the Cloudflare Managed Ruleset card and search for rule ID 100179. Select any appropriate action or disable the rule.
  3. Repeat step #2 for rule ID 100181.
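For teams that prefer to script this, a rough equivalent using the legacy WAF packages API is sketched below. Treat it as an illustration rather than a recipe: YOUR_PACKAGE_ID refers to the Cloudflare Managed Ruleset package on your zone (discoverable via the first call), and zones already migrated to the newer WAF engine use different endpoints.

# List the WAF rule packages on the zone to find the Cloudflare Managed Ruleset package ID
$ curl -s -H 'X-Auth-Email: YOUR_EMAIL' -H 'X-Auth-Key: YOUR_API_KEY' \
    https://api.cloudflare.com/client/v4/zones/YOUR_ZONE_ID/firewall/waf/packages

# Disable rule 100179 within that package (repeat with 100181)
$ curl -s -X PATCH -H 'X-Auth-Email: YOUR_EMAIL' -H 'X-Auth-Key: YOUR_API_KEY' \
    -H 'Content-Type: application/json' \
    -d '{"mode":"disable"}' \
    https://api.cloudflare.com/client/v4/zones/YOUR_ZONE_ID/firewall/waf/packages/YOUR_PACKAGE_ID/rules/100179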

Server Side Mitigation

In addition to blocking attacks at the edge, we recommend that you follow Microsoft’s urgent recommendation to patch your on-premise systems immediately. For those that are unable to immediately patch their systems, Microsoft yesterday posted interim mitigations that can be applied.

To determine whether your system is (still) exploitable, you can run an Nmap script posted by Microsoft to GitHub: https://github.com/microsoft/CSS-Exchange/blob/main/Security/http-vuln-cve2021-26855.nse.
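Assuming nmap is installed and the script has been downloaded locally, an invocation would look something like the following (mail.example.com is a placeholder for your Exchange server’s public hostname):

$ nmap -p 443 --script ./http-vuln-cve2021-26855.nse mail.example.com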

Vulnerability Details

The attacks observed in the wild take advantage of multiple CVEs that can result in exfiltration of email inboxes and remote code execution when chained together. Security researchers at Volexity have published a detailed analysis of the zero-day vulnerabilities.

Briefly, attackers are:

  1. First exploiting a server-side request forgery (SSRF) vulnerability documented as CVE-2021-26855 to send arbitrary HTTP requests and authenticate as the Microsoft Exchange server.
  2. Using this SYSTEM-level authentication to send SOAP payloads that are insecurely deserialized by the Unified Messaging Service, as documented in CVE-2021-26857. An example of the malicious SOAP payload can be found in the Volexity post linked above.
  3. Additionally taking advantage of CVE-2021-26858 and CVE-2021-27065 to upload arbitrary files such as webshells that allow further exploitation of the system along with a base to move laterally to other systems and networks. These file writes require authentication but this can be bypassed using CVE-2021-26855.

All four of the CVEs listed above are blocked by the recently deployed Cloudflare Specials rules 100179 and 100181. Additionally, existing rule ID 100173, also set to Block by default, partially mitigates the vulnerability by blocking the upload of certain scripts.

Additional Recommendations

Organizations can deploy additional protections against this type of attack by adopting a Zero Trust model and making the Exchange server available only to trusted connections. The CVE guidance recommends deploying a VPN or other solutions to block attempts to reach public endpoints. In addition to the edge mitigations from the Cloudflare WAF, your team can protect your Exchange server by using Cloudflare for Teams to block all unauthorized requests.

Holistic web protection: industry recognition for a prolific 2020

Post Syndicated from Patrick R. Donahue original https://blog.cloudflare.com/cloudflare-named-the-innovation-leader-in-holistic-web-protection/

I love building products that solve real problems for our customers. These days I don’t get to do so as much directly with our Engineering teams. Instead, about half my time is spent with customers listening to and learning from their security challenges, while the other half of my time is spent with other Cloudflare Product Managers (PMs) helping them solve these customer challenges as simply and elegantly as possible. While I miss the deeply technical engineering discussions, I am proud to have the opportunity to look back every year on all that we’ve shipped across our application security teams.

Taking the time to reflect on what we’ve delivered also helps to reinforce my belief in the Cloudflare approach to shipping product: release early, stay close to customers for feedback, and iterate quickly to deliver incremental value. To borrow a term from the investment world, this approach brings the benefits of compounded returns to our customers: we put new products that solve real-world problems into their hands as quickly as possible, and then reinvest the proceeds of our shared learnings immediately back into the product.

It is these sustained investments that allow us to release a flurry of small improvements over the course of a year, and be recognized by leading industry analyst firms for the capabilities we’ve accumulated and distributed to our customers. Today we’re excited to announce that Frost & Sullivan has named Cloudflare the Innovation Leader in their Frost Radar™: Global Holistic Web Protection Market Report. Frost & Sullivan’s view that this market “will gradually absorb the markets formed around legacy and point solutions” is consistent with our view of the world, and we’re leading the way in “the consolidation of standalone WAF, DDoS mitigation, and Bot Risk Management solutions” they believe is “poised to happen before 2025”.

Image © 2020 Frost & Sullivan from Frost Radar™: Global Holistic Web Protection Market Report

We are honored to receive this recognition, based on the analysis of 10 providers’ competitive strengths and opportunities as assessed by Frost & Sullivan. The rest of this post explains some of the capabilities that we shipped in 2020 across our Web Application Firewall (WAF), Bot Management, and Distributed Denial-of-Service product lines—the scope of Frost & Sullivan’s report. Get a copy of the Frost & Sullivan Frost Radar™ report to see why Cloudflare was named the Innovation Leader.

2020 Web Security Themes and Roundup

Before jumping into specific product and feature launches, I want to briefly explain how we think about building and delivering our web security capabilities. The most important “product” by far that’s been built at Cloudflare over the past 10 years is the massive global network that moves bits securely around the world, as close to the speed of light as possible. Building our features atop this network allows us to reject the legacy tradeoff of performance or security. And equipping customers with the ability to program and extend the network with Cloudflare Workers and Firewall Rules allows us to focus on quickly delivering useful security primitives such as functions, operators, and ML-trained data—then later packaging them up in streamlined user interfaces.

We talk internally about building up the “toolbox” of security controls so customers can express their desired security posture, and that’s how we think about many of the releases over the past year that are discussed below. We begin by providing the saw, hammer, and nails, and let expert builders construct whatever defenses they see fit. By watching how these tools are put to use and observing the results of billions of attempts to evade the erected defenses, we learn how to improve and package them together as a whole for those less inclined to build from components. Most recently we did this with API Shield, providing a guided template to create “positive security” models within Firewall Rules using existing primitives plus new data structures for strong authentication such as Cloudflare-managed client SSL/TLS certificates. Each new tool added to the toolbox increases the value of the existing tools. Each new web request—good or bad—improves the models that our threat intelligence and Bot Management capabilities depend upon.

Web application firewall (WAF) usability at scale

Last year we spoke with many customers about our plan to decouple configuration from the zone/domain model and allow rules to be set for arbitrary paths and groups of services across an account. In 4Q2020 we put this granular control in the hands of a few developers and some of our most sophisticated enterprise customers, and we’re currently collecting and incorporating feedback before defaulting the capabilities on for new customers.

Rules are great, especially with increased flexibility, but without data structures and request enrichment at the edge (such as the Bot Management techniques described below) they cannot act on anything beyond static properties of the request. In 3Q2020 we released our IP Lists capabilities and customers have been steadily uploading their home-grown and third-party subscription lists. These lists can be referenced anywhere in a customer’s account as named variables and then combined with all other attributes of the request, even Bot Management scores, e.g., http.request.uri.path contains "/login" and (not ip.src in $pingdom_probes and cf.bot_management.score < 30) is a Firewall Rule filter that blocks all bots except Pingdom from accessing the login endpoint.
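For context on how a list like $pingdom_probes comes to exist in the first place, the sketch below uses the account-level Lists API; the account ID, list ID, and IP addresses are placeholders, and the same steps can be done from the dashboard.

# Create a named IP list at the account level
$ curl -s -X POST -H 'X-Auth-Email: YOUR_EMAIL' -H 'X-Auth-Key: YOUR_API_KEY' \
    -H 'Content-Type: application/json' \
    -d '{"name":"pingdom_probes","kind":"ip","description":"Pingdom monitoring IPs"}' \
    https://api.cloudflare.com/client/v4/accounts/YOUR_ACCOUNT_ID/rules/lists

# Add entries to the list, then reference it in Firewall Rules as $pingdom_probes
$ curl -s -X POST -H 'X-Auth-Email: YOUR_EMAIL' -H 'X-Auth-Key: YOUR_API_KEY' \
    -H 'Content-Type: application/json' \
    -d '[{"ip":"203.0.113.10"},{"ip":"203.0.113.11"}]' \
    https://api.cloudflare.com/client/v4/accounts/YOUR_ACCOUNT_ID/rules/lists/YOUR_LIST_ID/items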

Requests that are blocked or challenged need to find their way as quickly as possible to our customers’ SOCs for triage, investigation and, occasionally, incident response, so we upgraded our edge-logging framework in 2Q2020 to push real time security-specific logs directly to customer SIEMs. And in 4Q2020, we released the ability to encrypt sensitive payloads within these logs using customer-provided encryption keys and novel encryption algorithms termed “Hybrid Public Key Encryption” (HPKE), and a data localization suite to provide control over where our customers’ data is stored and protected.

Built predominantly in 4Q2020 and currently being tested in the Firewall Rules engine is a brand new implementation of our Rate Limiting engine. By moving this matching and enforcement logic from a standalone tool to a component within a performant, memory-safe, expressive engine built in Rust, we have increased the utility of existing functions. Additional examples of improving this library of capabilities include the work completed in 1Q2020 to add HMAC functions and regex-based HTTP header and body inspection to the engine.

Bots and machine learning (ML)

In addition to making edge data sets accessible for request evaluation, we continued to invest heavily within our Bot Management team to provide actionable data so that our customers could decide what (if any) automated traffic they wanted to allow to interact with their applications. Our highest priority for Bot research and development has always been efficacy, and last year was no different. A significant portion of our engineering effort was dedicated to our detection engines — both updating and iterating on existing systems or creating entirely new detection engines from scratch.

In 1Q2020 we completed a total rewrite of our Machine Learning engine, and are continually focused on improving the efficacy of our ML engines. To do this, we draw on one of our major competitive advantages: the massive amount of data flowing through Cloudflare’s network. The early 2020 upgrade to our ML model nearly doubled the number of features we use to evaluate and score requests. And to help customers better understand why requests are flagged as bots, we have recently complemented the bot likelihood score in our logs with attribution to the specific engine that generated the score.

Also in 1Q2020, we upgraded our behavioral analysis engine to incorporate more features and increase overall accuracy. This engine conducts histogram-based outlier scoring and is now fully deployed to nearly all Bot Management zones.

In 2Q2020, we developed a lightweight JavaScript element that further advanced our browser fingerprinting capabilities and aids in detection. Specifically, we now silently challenge browsers and detect if a browser is misrepresenting its User Agent. This technique will be incorporated into our ML models and combined with our heuristics engine for more accurate browser fingerprinting. This feature is entirely optional and can be enabled or disabled by customers through our UI and API. Customers with extremely performance sensitive zones or traffic types that are unsuitable for JavaScript (such as API or some mobile app traffic) can still be accurately scored by our Bot Management engine.

In addition to detection, we also spent (and will continue to spend) engineering effort on mitigation. Our entire JavaScript and CAPTCHA challenge platform was rewritten in the last year and deployed to our customer zones in a staged fashion in the second half of 2020. Our new platform is faster and more robust at detecting automated systems attempting to solve the challenges. More importantly, this platform allows us to further invest in new challenge types and modes as we enter 2021.

The biggest and most well received feature released in 2020 was our dedicated Bot Management analytics, released in 3Q2020. We now present informative graphs that double as diagnostic tools. Customers have found that analytics are far more than interesting charts and statistics: in the case of Bot Management, analytics are essential to spotting and subsequently eliminating false positives.

Last but definitely not least, we announced the deprecation of the __cfduid cookie in 4Q2020. The cookie was used primarily to detect bots, but it caused confusion for some customers, including questions about whether they needed to display a cookie banner because of it.

To get a sense of the Bot Attack trends we saw in the first half of 2020, take a read through this blog post. And if you’re curious about how our ML models and heuristic engines work to keep your properties safe, this deep dive by Alex Bocharov, Machine Learning Tech Lead on the Bots team, is an excellent guide.

API and IoT security and protection

At the beginning of 4Q2020, we released a product called API Shield that was purpose built to secure, protect, and accelerate API traffic — and will eventually provide much of the common functionality expected in traditional API Gateways. The UI for API Shield was built on top of Firewall Rules for maximum flexibility, and will serve as the jump-off point for configuring additional API security features we have planned this year.

As part of API Shield, every customer now gets a fully managed, domain-scoped private CA generated for each of their zones, and we plan to continue working closely with the SSL/TLS team to expand CA management options based on feedback. Since the release, we’ve seen great adoption, particularly from IoT companies focused on locking down their APIs using short-lived client certificates distributed out to devices. Customers can also now upload OpenAPI schemas to be matched against incoming requests from these devices, with bad requests being dropped at the edge rather than passed on to origin infrastructure.

Another capability we released in 4Q2020 was support for gRPC-based API traffic. Since that release, customers have expressed significant interest in using Cloudflare as a secure API gateway between easy-to-use customer-facing JSON endpoints and internal-facing gRPC or GraphQL endpoints. Like most customer challenges at Cloudflare, early adopters are looking to solve these use cases initially with Cloudflare Workers, but we’re keeping an eye on whether there are aspects for which we’ll want to provide first-class feature support.

Distributed Denial-of-Service (DDoS) protections for web applications and APIs

The application-layer security of a web application or API is of minimal importance if the service itself is not available due to a persistent DDoS attack at L3-L7. While mitigating such attacks has long been one of Cloudflare’s strengths, attack methodologies evolve and we continued to invest heavily in 2020 to drop attacks more quickly, more efficiently, and more precisely; as a result, automatic mitigation techniques are applied immediately and most malicious traffic is blocked in less than 3 seconds.

Early in 2020 we responded to a persistent increase in smaller, more localized attacks by fine-tuning a system that can autonomously detect attacks on any server in any datacenter. In the month prior to us first posting about this tool, it mitigated almost 300,000 network-layer attacks, roughly 55 times more than the tool we previously relied upon. This new tool, dubbed “dosd”, leverages Linux’s eXpress Data Path (XDP) and allows our system to quickly and automatically deploy eBPF rules that run on each packet received. We further enhanced our edge mitigation capabilities in 3Q2020 by developing and releasing a protection layer that can operate even in environments where we only see one side of the TCP flow. These network layer protections help protect our customers who leverage both Magic Transit to protect their IP ranges and our WAF to protect their applications and APIs.

To document and provide visibility into these attacks, we released a GraphQL-backed interface in 1Q2020 called Network Analytics. Network Analytics extends the visibility of attacks against our customers’ services from L7 to L3, and includes detailed attack logs containing data such as top source and destination IPs and ports, ASNs, data centers, countries, bit rates, protocol and TCP flag distributions. A litany of improvements made to this graphical rendering engine over the course of 2020 have benefitted all analytics tools using the same front-end. In 4Q2020, Network Analytics was extended to provide traffic and attack insights into Cloudflare Spectrum-protected applications, which are terminated at L4 (TCP/UDP).

Towards the end of 4Q2020, we released real-time DDoS attack alerting capable of sending emails or pages via PagerDuty to alert security teams of ongoing attacks and mitigations. This capability was released just in time to assist with the onslaught of ransomware attacks that Cloudflare helped detect and defend against. For additional context on unique attacks we fought off in 2020, consider reading about an acoustics-inspired attack, a 754 million packet-per-second attack, or a roundup of attacks from 1Q2020, 2Q2020, or 3Q2020.

Wrapping up and looking towards 2021

2020 was a tough year around the world. Throughout what has also been, and continues to be, a period of heightened cyberattacks and breaches, we feel proud that our teams were able to release a steady flow of new and improved capabilities across several critical security product areas reviewed by Frost & Sullivan. These releases culminated in far greater protections for customers at the end of the year than the beginning, and a recognition for our sustained efforts.

We are pleased to have been named the Innovation Leader in their Frost Radar™: Global Holistic Web Protection Market Report, which “addresses organizations’ demand for consolidated, single pane of glass solutions, which not only reduce the security gaps of legacy products but also provide simplified management capabilities”.

As we look towards 2021 we plan to continue releasing early and often, listening to feedback from our customers, and delivering incremental value along the way. If you have ideas on what additional capabilities you’d like to use to protect your applications and networks, we’d love to hear them below in the comments.

Introducing API Shield

Post Syndicated from Patrick R. Donahue original https://blog.cloudflare.com/introducing-api-shield/

APIs are the lifeblood of modern Internet-connected applications. Every millisecond they carry requests from mobile applications—place this food delivery order, “like” this picture—and directions to IoT devices—unlock the car door, start the wash cycle, my human just finished a 5k run—among countless other calls.

They’re also the target of widespread attacks designed to perform unauthorized actions or exfiltrate data, as data from Gartner increasingly shows: “by 2021, 90% of web-enabled applications will have more surface area for attack in the form of exposed APIs rather than the UI, up from 40% in 2019”, and “Gartner predicted that, by 2022, API abuses will move from an infrequent to the most-frequent attack vector, resulting in data breaches for enterprise web applications”. Of the 18 million requests per second that traverse Cloudflare’s network, 50% are directed towards APIs, with the majority of these requests blocked as malicious.

To combat these threats, Cloudflare is making it simple to secure APIs through the use of strong client certificate-based identity and strict schema-based validation. As of today, these capabilities are available free for all plans within our new “API Shield” offering. And as of today, the security benefits also extend to gRPC-based APIs, which use binary formats such as protocol buffers rather than JSON, and have been growing in popularity with our customer base.

Continue reading to learn more about the new capabilities, or jump right to the “Demonstration” section for examples of how to get started configuring your first API Shield rule.

Positive security models and client certificates

A “positive security” model is one that allows only known behavior and identities, while rejecting everything else. It is the opposite of the traditional “negative security” model enforced by a Web Application Firewall (WAF) that allows everything except for requests coming from problematic IPs, ASNs, countries or requests with problematic signatures (SQL injection attempts, etc.).

Implementing a positive security model for APIs is the most direct way to eliminate the noise of credential stuffing attacks and other automated scanning tools. And the first step towards a positive model is deploying strong authentication such as mutual TLS authentication, which is not vulnerable to the reuse or sharing of passwords.

Just as we simplified the issuance of server certificates back in 2014 with Universal SSL, API Shield reduces the process of issuing client certificates to clicking a few buttons in the Cloudflare Dashboard. By providing a fully hosted private public key infrastructure (PKI), you can focus on your applications and features—rather than operating and securing your own certificate authority (CA).

Enforcing valid requests with schema validation

Once developers can be sure that only legitimate clients (with SSL certificates in hand) are connecting to their APIs, the next step in implementing a positive security model is making sure that those clients are making valid requests. Extracting a client certificate from a device and reusing it elsewhere is difficult, but not impossible, so it’s also important to make sure that the API is being called as intended.

Requests containing extraneous input may not have been anticipated by the API developer, and can cause problems if processed directly by the application, so these should be dropped at the edge if possible. API Schema validation works by matching the contents of API requests—the query parameters that come after the URL and contents of the POST body—against a contract or “schema” that contains the rules for what is expected. If validation fails, the API call is blocked, protecting the origin from an invalid request or a malicious payload.
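To make that concrete, here is a minimal sketch of the kind of schema a developer might supply for a temperature-reporting endpoint like the one in the demonstration later in this post. It is an illustrative OpenAPI-style fragment, not a tested configuration, and the exact format accepted by the beta may differ:

$ cat <<'EOF' > temps-schema.json
{
  "openapi": "3.0.0",
  "info": { "title": "Temperature API", "version": "1.0" },
  "paths": {
    "/temps": {
      "post": {
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "type": "object",
                "additionalProperties": false,
                "required": ["temperature", "time"],
                "properties": {
                  "temperature": { "type": "number" },
                  "time": { "type": "string", "format": "date-time" }
                }
              }
            }
          }
        }
      }
    }
  }
}
EOF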

Schema validation is currently in closed beta for JSON payloads, with gRPC/protocol buffer support on the roadmap. If you would like to join the beta, please open a support ticket with the subject “API Schema Validation Beta”. After the beta has ended, we plan to make schema validation available as part of the API Shield user interface.

Demonstration

To demonstrate how the APIs powering IoT devices and mobile applications can be secured, we have built an API Shield demonstration using client certificates and schema validation.

Temperatures are captured by an IoT device, represented in the demo by a Raspberry Pi 3 Model B+ with an external infrared temperature sensor, and then transmitted via a POST request to a Cloudflare-protected API. Temperatures are subsequently retrieved by GET requests and then displayed in a mobile application built in Swift for iOS.

In both cases, the API was actually built using Cloudflare Workers® and Workers KV, but can be replaced by any Internet-accessible endpoint.

1. API Configuration

Before configuring the IoT device and mobile application to communicate securely with the API, we need to bootstrap the API endpoints. To keep the example simple, while also allowing for additional customization, we’ve implemented the API as a Cloudflare Worker (borrowing code from the To-Do List tutorial).

In this particular example the temperatures are stored in Workers KV using the source IP address as a key, but this could easily be replaced by a value from the client certificate, e.g., the fingerprint. The code below saves a temperature and timestamp into KV when a POST is made, and returns the most recent 5 temperatures when a GET request is made.

const defaultData = { temperatures: [] }

const getCache = key => TEMPERATURES.get(key)
const setCache = (key, data) => TEMPERATURES.put(key, data)

async function addTemperature(request) {

    // pull previously recorded temperatures for this client
    const ip = request.headers.get('CF-Connecting-IP')
    const cacheKey = `data-${ip}`
    let data
    const cache = await getCache(cacheKey)
    if (!cache) {
        await setCache(cacheKey, JSON.stringify(defaultData))
        data = defaultData
    } else {
        data = JSON.parse(cache)
    }

    // append the recorded temperatures with the submitted reading (assuming it has both temperature and a timestamp)
    try {
        const body = await request.text()
        const val = JSON.parse(body)

        if (val.temperature && val.time) {
            data.temperatures.push(val)
            await setCache(cacheKey, JSON.stringify(data))
            return new Response("", { status: 201 })
        } else {
            return new Response("Unable to parse temperature and/or timestamp from JSON POST body", { status: 400 })
        }
    } catch (err) {
        return new Response(err, { status: 500 })
    }
}

function compareTimestamps(a,b) {
    return -1 * (Date.parse(a.time) - Date.parse(b.time))
}

// return the 5 most recent temperature measurements
async function getTemperatures(request) {
    const ip = request.headers.get('CF-Connecting-IP')
    const cacheKey = `data-${ip}`

    const cache = await getCache(cacheKey)
    if (!cache) {
        return new Response(JSON.stringify(defaultData), { status: 200, headers: { 'content-type': 'application/json' } })
    } else {
        const data = JSON.parse(cache)
        const retval = JSON.stringify(data.temperatures.sort(compareTimestamps).splice(0,5))
        return new Response(retval, { status: 200, headers: { 'content-type': 'application/json' } })
    }
}

async function handleRequest(request) {

    if (request.method === 'POST') {
        return addTemperature(request)
    } else {
        return getTemperatures(request)
    }

}

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

Before adding mutual TLS authentication, we’ll test POST’ing a random temperature reading:

$ TEMPERATURE=$(echo $((361 + RANDOM %11)) | awk '{printf("%.2f",$1/10.0)}')
$ TIMESTAMP=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

$ echo -e "$TEMPERATURE\n$TIMESTAMP"
36.30
2020-09-28T02:57:49Z

$ curl -v -H "Content-Type: application/json" -d '{"temperature":'''$TEMPERATURE''', "time": "'''$TIMESTAMP'''"}' https://shield.upinatoms.com/temps 2>&1 | grep "< HTTP/2"
< HTTP/2 201 

And here’s a subsequent read of that temperature, along with the previous 4 that were submitted:

$ curl -s https://shield.upinatoms.com/temps | jq .
[
  {
    "temperature": 36.3,
    "time": "2020-09-28T02:57:49Z"
  },
  {
    "temperature": 36.7,
    "time": "2020-09-28T02:54:56Z"
  },
  {
    "temperature": 36.2,
    "time": "2020-09-28T02:33:08Z"
  },
  {
    "temperature": 36.5,
    "time": "2020-09-28T02:29:22Z"
  },
  {
    "temperature": 36.9,
    "time": "2020-09-28T02:27:19Z"
  } 
]

2. Client certificate issuance

With our API in hand, it’s time to lock it down to require a valid client certificate. Before doing so we’ll want to generate those certificates. To do so, you can either go to the SSL/TLS → Client Certificates tab of the Cloudflare Dashboard and click “Create Certificate” or you can automate the process via API calls.

Because most developers at scale will be generating their own private keys and CSRs and requesting that they be signed via API, we’ll show that process here. Using Cloudflare’s PKI toolkit CFSSL we’ll first create a bootstrap certificate for the iOS application, and then we’ll create a certificate for the IoT device:

$ cat <<'EOF' | tee -a csr.json
{
    "hosts": [
        "ios-bootstrap.devices.upinatoms.com"
    ],
    "CN": "ios-bootstrap.devices.upinatoms.com",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [{
        "C": "US",
        "L": "Austin",
        "O": "Temperature Testers, Inc.",
        "OU": "Tech Operations",
        "ST": "Texas"
    }]
}
EOF

$ cfssl genkey csr.json | cfssljson -bare certificate
2020/09/27 21:28:46 [INFO] generate received request
2020/09/27 21:28:46 [INFO] received CSR
2020/09/27 21:28:46 [INFO] generating key: rsa-2048
2020/09/27 21:28:47 [INFO] encoded CSR

$ mv certificate-key.pem ios-key.pem
$ mv certificate.csr ios.csr

// and do the same for the IoT sensor
$ sed -i.bak 's/ios-bootstrap/sensor-001/g' csr.json
$ cfssl genkey csr.json | cfssljson -bare certificate
...
$ mv certificate-key.pem sensor-key.pem
$ mv certificate.csr sensor.csr
Generate a private key and CSR for the IoT device and iOS application

// we need to replace actual newlines in the CSR with ‘\n’ before POST’ing
$ CSR=$(cat ios.csr | perl -pe 's/\n/\\n/g')
$ request_body=$(< <(cat <<EOF
{
  "validity_days": 3650,
  "csr":"$CSR"
}
EOF
))

// save the response so we can view it and then extract the certificate
$ curl -H 'X-Auth-Email: YOUR_EMAIL' -H 'X-Auth-Key: YOUR_API_KEY' -H 'Content-Type: application/json' -d "$request_body" https://api.cloudflare.com/client/v4/zones/YOUR_ZONE_ID/client_certificates > response.json

$ cat response.json | jq .
{
  "success": true,
  "errors": [],
  "messages": [],
  "result": {
    "id": "7bf7f70c-7600-42e1-81c4-e4c0da9aa515",
    "certificate_authority": {
      "id": "8f5606d9-5133-4e53-b062-a2e5da51be5e",
      "name": "Cloudflare Managed CA for account 11cbe197c050c9e422aaa103cfe30ed8"
    },
    "certificate": "-----BEGIN CERTIFICATE-----\nMIIEkzCCA...\n-----END CERTIFICATE-----\n",
    "csr": "-----BEGIN CERTIFICATE REQUEST-----\nMIIDITCCA...\n-----END CERTIFICATE REQUEST-----\n",
    "ski": "eb2a48a19802a705c0e8a39489a71bd586638fdf",
    "serial_number": "133270673305904147240315902291726509220894288063",
    "signature": "SHA256WithRSA",
    "common_name": "ios-bootstrap.devices.upinatoms.com",
    "organization": "Temperature Testers, Inc.",
    "organizational_unit": "Tech Operations",
    "country": "US",
    "state": "Texas",
    "location": "Austin",
    "expires_on": "2030-09-26T02:41:00Z",
    "issued_on": "2020-09-28T02:41:00Z",
    "fingerprint_sha256": "84b045d498f53a59bef53358441a3957de81261211fc9b6d46b0bf5880bdaf25",
    "validity_days": 3650
  }
}

$ cat response.json | jq .result.certificate | perl -npe 's/\\n/\n/g; s/"//g' > ios.pem

// now ask that the second client certificate signing request be signed
$ CSR=$(cat sensor.csr | perl -pe 's/\n/\\n/g')
$ request_body=$(< <(cat <<EOF
{
  "validity_days": 3650,
  "csr":"$CSR"
}
EOF
))

$ curl -H 'X-Auth-Email: YOUR_EMAIL' -H 'X-Auth-Key: YOUR_API_KEY' -H 'Content-Type: application/json' -d "$request_body" https://api.cloudflare.com/client/v4/zones/YOUR_ZONE_ID/client_certificates | perl -npe 's/\\n/\n/g; s/"//g' > sensor.pem

Ask Cloudflare to sign the CSRs with the private CA issued for your zone

3. API Shield rule creation

With certificates in hand we can now configure the API endpoint to require their use. Below is a demonstration of how to create such a rule.

The steps include specifying which hostnames to prompt for certificates, e.g., shield.upinatoms.com, and then creating the API Shield rule.
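Under the hood, enforcement boils down to a Firewall Rules expression that blocks any request to the protected paths that lacks a verified client certificate. The sketch below creates a roughly equivalent rule via the API; the expression generated by the API Shield UI may differ, and the hostname, path, and zone ID shown are placeholders for this demo:

$ curl -s -X POST -H 'X-Auth-Email: YOUR_EMAIL' -H 'X-Auth-Key: YOUR_API_KEY' \
    -H 'Content-Type: application/json' \
    -d '[{
          "action": "block",
          "description": "Require a verified client certificate for the temps API",
          "filter": {
            "expression": "(http.host eq \"shield.upinatoms.com\" and http.request.uri.path eq \"/temps\" and not cf.tls_client_auth.cert_verified)"
          }
        }]' \
    https://api.cloudflare.com/client/v4/zones/YOUR_ZONE_ID/firewall/rules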

4. IoT Device Communication

To prepare the IoT device for secure communication with our API endpoint we need to embed the certificate on the device, and then point our application to it so it can be used when making the POST request to the API endpoint.

We securely copied the private key and certificate into /etc/ssl/private/sensor-key.pem and /etc/ssl/certs/sensor.pem, and then modified our sample script to point to these files:

import requests
import json
from datetime import datetime

def readSensor():

    # Take a simulated reading from the temperature sensor and return it as a dict

    dateTimeObj = datetime.now()
    timestampStr = dateTimeObj.strftime('%Y-%m-%dT%H:%M:%SZ')

    measurement = {'temperature':str(36.5),'time':timestampStr}
    return measurement

def main():

    print("Cloudflare API Shield [IoT device demonstration]")

    temperature = readSensor()
    payload = json.dumps(temperature)
    
    url = 'https://shield.upinatoms.com/temps'
    json_headers = {'Content-Type': 'application/json'}
    cert_file = ('/etc/ssl/certs/sensor.pem', '/etc/ssl/private/sensor-key.pem')
    
    r = requests.post(url, headers = json_headers, data = payload, cert = cert_file)
    
    print("Request body: ", r.request.body)
    print("Response status code: %d" % r.status_code)

When the script attempts to connect to https://shield.upinatoms.com/temps, Cloudflare requests that a client certificate be sent, and our script sends the contents of sensor.pem before demonstrating it has possession of sensor-key.pem as required to complete the SSL/TLS handshake.
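The same handshake can be reproduced from the command line with curl, which is a handy sanity check before debugging device code. A minimal sketch, assuming the certificate and key from step 2 are in the current directory:

# Presents the client certificate during the TLS handshake; omit --cert/--key
# to watch the request get rejected once the API Shield rule is in place.
$ curl -v --cert sensor.pem --key sensor-key.pem https://shield.upinatoms.com/temps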

If we fail to send the client certificate, or if we attempt to include extraneous fields in the API request (schema validation configuration not shown), the request is rejected:

Cloudflare API Shield [IoT device demonstration]
Request body:  {"temperature": "36.5", "time": "2020-09-28T15:52:19Z"}
Response status code: 403

If instead a valid certificate is presented and the payload follows the schema previously uploaded, our script POSTs the latest temperature reading to the API.

Cloudflare API Shield [IoT device demonstration]
Request body:  {"temperature": "36.5", "time": "2020-09-28T15:56:45Z"}
Response status code: 201

5. Mobile Application (iOS) Communication

Now that temperature requests have been sent to our API endpoint, it’s time to read them securely from our mobile application using one of the client certificates.

For purposes of brevity, we’re going to embed a “bootstrap” certificate and key as a PKCS#12 file within the application bundle. In a real world deployment, this bootstrap certificate should only be used alongside users’ credentials to authenticate to an API endpoint that can return a unique user certificate. Corporate users will want to use MDM to distribute certificates so that the underlying mobile operating system can manage and protect them on each device.

Package the certificate and private key

Before adding the bootstrap certificate and private key, we need to combine them into a binary PKCS#12 file. This binary file will then be added to our iOS application bundle.

$ openssl pkcs12 -export -out bootstrap-cert.pfx -inkey ios-key.pem -in ios.pem
Enter Export Password:
Verifying - Enter Export Password:

Add the certificate bundle to your iOS application

Within XCode, click File → Add Files To “[Project Name]” and select your .pfx file. Make sure to check “Add to target” before confirming.

Modify your URLSession code to use the client certificate

This article provides a nice walkthrough of using a PKCS#12 file with a URLSessionDelegate to modify your application to complete mutual TLS authentication when connecting to an API that requires it.

Looking Forward

In the coming months, we plan to expand API Shield with a number of additional features designed to protect API traffic. For customers that want to use their own PKI, we will provide the ability to import their own CAs, something available today as part of Cloudflare Access.

As we receive feedback on our schema validation beta, we will look to make the capability generally available to all customers. If you’re trying out the beta and have thoughts to share, we’d love to hear your feedback.

Beyond certificates and schema validation, we’re excited to layer on additional API security capabilities as well as deep analytics to help you better understand your APIs. If there are features you’d like to see, let us know in the comments below!