Tag Archives: General Availability

General availability for WAF Content Scanning for file malware protection

Post Syndicated from Radwa Radwan original https://blog.cloudflare.com/waf-content-scanning-for-malware-detection


File upload is a common feature in many web applications. Applications may allow users to upload files like images of flood damage to file an insurance claim, PDFs like resumes or cover letters to apply for a job, or other documents like receipts or income statements. However, beneath the convenience lies a potential threat, since allowing unrestricted file uploads can expose the web server and your enterprise network to significant risks related to security, privacy, and compliance.

Cloudflare recently introduced WAF Content Scanning, our in-line malware file detection and prevention solution to stop malicious files from reaching the web server, offering our Enterprise WAF customers an additional line of defense against security threats.

Today, we’re pleased to announce that the feature is now generally available. It will be automatically rolled out to existing WAF Content Scanning customers before the end of March 2024.

In this blog post we will share more details about the new version of the feature, what we have improved, and some of the technical challenges we faced while building it. This feature is available to Enterprise WAF customers as an add-on license; contact your account team to get access.

What to expect from the new version?

The feedback from the early access version has resulted in additional improvements. The main one is expanding the maximum size of scanned files from 1 MB to 15 MB. This change required a complete redesign of the solution’s architecture and implementation. Additionally, we are improving the dashboard visibility and the overall analytics experience.

Let’s quickly review how malware scanning operates within our WAF.

Behind the scenes

WAF Content Scanning operates in a few stages: users activate and configure it, the scanning engine detects which requests contain files, the files are sent to the scanner, which returns the scan result fields, and finally users can build custom rules with those fields. We will dig deeper into each step in this section.

Activate and configure

Customers can enable the feature via the API, or through the Settings page in the dashboard (Security → Settings), where a new section has been added for configuring and enabling incoming traffic detections. Once enabled, the configuration is distributed across the Cloudflare network and scanning of incoming traffic begins.
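For example, enabling the feature for a zone over the API looks roughly like the following (a sketch: the endpoint path follows the WAF content scanning API documentation, and $ZONE_ID and $API_TOKEN are placeholders for your own values):

```
curl -X POST \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/content-upload-scan/enable" \
  -H "Authorization: Bearer $API_TOKEN"
```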

Customers can also add a custom configuration depending on the file upload method, such as a base64 encoded file in a JSON string, which allows the specified file to be parsed and scanned automatically.

In the example below, the customer wants us to look at JSON bodies for the key “file” and scan their contents. The rule is written using the wirefilter syntax:

lookup_json_string(http.request.body.raw, "file")

Engine runs on traffic and scans the content

As soon as the feature is activated and configured, the scanning engine runs the pre-scanning logic, and identifies content automatically via heuristics. In this case, the engine logic does not rely on the Content-Type header, as it’s easy for attackers to manipulate. When relevant content or a file has been found, the engine connects to the antivirus (AV) scanner in our Zero Trust solution to perform a thorough analysis and return the results of the scan. The engine uses the scan results to propagate useful fields that customers can use.

Integrate with WAF

For every request where a file is found, the scanning engine returns various fields, including:

  • cf.waf.content_scan.has_malicious_obj
  • cf.waf.content_scan.obj_sizes
  • cf.waf.content_scan.obj_types
  • cf.waf.content_scan.obj_results

The scanning engine integrates with the WAF where customers can use those fields to create custom WAF rules to address various use cases. The basic use case is primarily blocking malicious files from reaching the web server. However, customers can construct more complex logic, such as enforcing constraints on parameters such as file sizes, file types, endpoints, or specific paths.
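For example, the primary use case maps to a very small rule: a custom rule with a Block action whose expression is just the boolean field above. A sketch (adapt the expression and action to your setup):

```
cf.waf.content_scan.has_malicious_obj
```

A stricter variant could also scope the rule, for instance to a specific upload endpoint: http.request.uri.path eq "/upload" and cf.waf.content_scan.has_malicious_obj.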

In-line scanning limitations and file types

One question that often comes up is about the file types we detect and scan in WAF Content Scanning. Initially, addressing this query posed a challenge since HTTP requests do not have a definition of a “file”, and scanning all incoming HTTP requests does not make sense as it adds extra processing and latency. So, we had to decide on a definition to spot HTTP requests that include files, or as we call it, “uploaded content”.

The WAF Content Scanning engine makes that decision by filtering out certain content types identified by heuristics. Any content types not included in a predefined list, such as text/html, text/x-shellscript, application/json, and text/xml, are considered uploaded content and are sent to the scanner for examination. This allows us to scan a wide range of content types and file types without affecting the performance of all requests by adding extra processing. The wide range of files we scan includes:

  • Executable (e.g., .exe, .bat, .dll, .wasm)
  • Documents (e.g., .doc, .docx, .pdf, .ppt, .xls)
  • Compressed (e.g., .7z, .gz, .zip, .rar)
  • Image (e.g., .jpg, .png, .gif, .webp, .tif)
  • Video and audio files within the 15 MB file size range.
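The pre-filtering step can be pictured as a small predicate over the detected (not declared) content type. The following is an illustrative sketch, not Cloudflare's implementation, and the allow-list shown is the abridged one from the text:

```python
# Content types treated as ordinary web content; everything else is
# considered "uploaded content" and handed to the malware scanner.
TEXTUAL_TYPES = {"text/html", "text/x-shellscript", "application/json", "text/xml"}

def is_uploaded_content(detected_type: str) -> bool:
    # The type comes from heuristics on the body, never from the
    # easily spoofed Content-Type header.
    return detected_type not in TEXTUAL_TYPES
```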

The 15 MB file size limit comes from the fact that in-line file scanning runs in real time. This offers safety to the web server and instant access to clean files, but it also sits in the request delivery path. It is therefore crucial to scan the payload without causing significant delays or interruptions, namely increased CPU time and latency.

Scaling the scanning process to 15 MB

In the early design of the product, we built a system that could handle requests with a maximum body size of 1 MB, and increasing the limit to 15 MB had to happen without adding any extra latency. As mentioned, this latency is not added to all requests, but only to the requests that have uploaded content. However, increasing the size with the same design would have increased the latency by 15x for those requests.

In this section, we use scanning files embedded in JSON request bodies as an example. We first discuss how the former architecture handled it and why expanding the file size with the same design was challenging, then compare it in detail with the changes made in the new release to overcome the extra latency.

Old architecture used for the Early Access release

In order for customers to use the content scanning functionality in scanning files embedded in JSON request bodies, they had to configure a rule like:

lookup_json_string(http.request.body.raw, "file")

This means we should look in the request body, but only at the “file” key, which contains a base64-encoded string, such as an encoded image.

When the request hits our Front Line (FL) NGINX proxy, we buffer the request body. This will be in an in-memory buffer, or written to a temporary file if the size of the request body exceeds the NGINX configuration of client_body_buffer_size. Then, our WAF engine executes the lookup_json_string function and returns the base64 string which is the content of the file key. The base64 string gets sent via Unix Domain Sockets to our malware scanner, which does MIME type detection and returns a verdict to the file upload scanning module.
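The buffering behavior described above is controlled by standard NGINX directives; a representative snippet (illustrative values, not Cloudflare's production configuration) might be:

```
# Bodies larger than this in-memory buffer are written to a temporary file
client_body_buffer_size 16k;

# Where NGINX writes those temporary body files
client_body_temp_path /var/cache/nginx/client_body;
```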

This architecture had a bottleneck that made it hard to expand: the latency cost. The request body is first buffered in NGINX and then copied into our WAF engine, where rules are executed. The malware scanner then receives the execution result, which in the worst case is the entire request body, over a Unix domain socket. In other words, once NGINX buffers the request body, we send and buffer it in two other services.

New architecture for the General Availability release

In the new design, the requirements were to scan larger files (15x larger) while not compromising on performance. To achieve this, we decided to bypass our WAF engine, which is where we introduced the most latency.

In the new architecture, we made the malware scanner aware of what is needed to execute the rule, hence bypassing the Ruleset Engine (RE). For example, the configuration lookup_json_string(http.request.body.raw, "file") will be represented roughly as:

{
   "function": "lookup_json_string",
   "args": ["file"]
}

This is achieved by walking the Abstract Syntax Tree (AST) when the rule is configured, and deploying the sample struct above to our global network. The struct’s values will be read by the malware scanner, and rule execution and malware detection will happen within the same service. This means we don’t need to read the request body, execute the rule in the Ruleset Engine (RE) module, and then send the results over to the malware scanner.
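As an illustration of that pre-compilation step, the following sketch (a hypothetical helper, not Cloudflare's code) reduces a rule of this shape to the struct above by pulling out the function name and its literal arguments:

```python
import re

def compile_scan_config(rule: str) -> dict:
    """Reduce a rule like lookup_json_string(http.request.body.raw, "file")
    to the minimal struct the malware scanner needs."""
    m = re.match(r'(\w+)\(([^)]*)\)', rule.strip())
    if not m:
        raise ValueError(f"unsupported rule: {rule}")
    func, raw_args = m.group(1), m.group(2)
    # Keep only the quoted literals; request fields such as
    # http.request.body.raw are implicit inputs, not arguments.
    args = re.findall(r'"([^"]*)"', raw_args)
    return {"function": func, "args": args}
```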

The malware scanner will now read the request body from the temporary file directly, perform the rule execution, and return the verdict to the file upload scanning module.

The file upload scanning module populates these fields, so they can be used to write custom rules and take actions. For example:

all(cf.waf.content_scan.obj_results[*] == "clean")

This module also enriches our logging pipelines with these fields, which can then be read in Log Push, Edge Log Delivery, Security Analytics, and Firewall Events in the dashboard. For example, this is the security log in the Cloudflare dashboard (Security → Analytics) for a web request that triggered WAF Content Scanning:

WAF content scanning detection visibility

Using the concept of incoming traffic detection, WAF Content Scanning enables users to identify hidden risks through their traffic signals in the analytics before blocking or mitigating matching requests. This reduces false positives and lets security teams make decisions based on well-informed data. This isn’t the only place we apply this idea: we do the same for a number of other products, such as WAF Attack Score and Bot Management.

We have integrated helpful information into our security products, like Security Analytics, to provide this data visibility. The Content Scanning tab, located on the right sidebar, displays traffic patterns even if there were no WAF rules in place. The same data is also reflected in the sampled requests, and you can create new rules from the same view.

On the other hand, if you want to fine-tune your security settings, you will get better visibility in Security Events, which shows the requests that matched the specific rules you have created in the WAF.

Last but not least, in our Logpush datastream, we have included the scan fields that can be selected to send to any external log handler.

What’s next?

Before the end of March 2024, all current and new customers who have enabled WAF Content Scanning will be able to scan uploaded files up to 15 MB. Next, we’ll focus on improving how we handle files in the rules, including adding a dynamic header functionality. Quarantining files is also another important feature we will be adding in the future. If you’re an Enterprise customer, reach out to your account team for more information and to get access.

Post-quantum cryptography goes GA

Post Syndicated from Wesley Evans original http://blog.cloudflare.com/post-quantum-cryptography-ga/


Over the last twelve months, we have been talking about the new baseline of encryption on the Internet: post-quantum cryptography. During Birthday Week last year we announced that our beta of Kyber was available for testing, and that Cloudflare Tunnel could be enabled with post-quantum cryptography. Earlier this year, we made our stance clear that this foundational technology should be available to everyone for free, forever.

Today, we have hit a milestone after six years and 31 blog posts in the making: we’re starting to roll out General Availability[1] of post-quantum cryptography support to our customers, services, and internal systems as described more fully below. This includes products like Pingora for origin connectivity, 1.1.1.1, R2, Argo Smart Routing, Snippets, and so many more.

This is a milestone for the Internet. We don't yet know when quantum computers will have enough scale to break today's cryptography, but the benefits of upgrading to post-quantum cryptography now are clear. Fast connections and future-proofed security are all possible today because of the advances made by Cloudflare, Google, Mozilla, the National Institute of Standards and Technology in the United States, the Internet Engineering Task Force, and numerous academic institutions.


What does General Availability mean? In October 2022 we enabled X25519+Kyber as a beta for all websites and APIs served through Cloudflare. However, it takes two to tango: the connection is only secured if the browser also supports post-quantum cryptography. Starting August 2023, Chrome is slowly enabling X25519+Kyber by default.

The user’s request is then routed through Cloudflare’s network (2). We have upgraded many of these internal connections to use post-quantum cryptography, and expect to finish upgrading all of them by the end of 2024. That leaves the connection (3) between us and the origin server as the final link.

We are happy to announce that we are rolling out support for X25519+Kyber for most inbound and outbound connections as Generally Available, including connections to origin servers and Cloudflare Workers fetch() calls.
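Conceptually, a hybrid key agreement such as X25519+Kyber derives the session secret from both exchanges, so an attacker would have to break both the classical and the post-quantum primitive. A toy sketch of the combining idea (illustrative only; real TLS derives keys via HKDF over the concatenated shared secrets as the draft specifies):

```python
import hashlib

def hybrid_shared_secret(classical_ss: bytes, pq_ss: bytes) -> bytes:
    # The derived secret depends on both inputs: recovering it
    # requires breaking X25519 *and* Kyber, not just one of them.
    return hashlib.sha256(classical_ss + pq_ss).digest()
```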

Support for post-quantum outbound connections, by plan:

  • Free: roll-out started; aiming for 100% by the end of October.
  • Pro and Business: aiming for 100% by the end of the year.
  • Enterprise: roll-out begins February 2024; 100% by March 2024.

For our Enterprise customers, we will be sending out additional information regularly over the course of the next six months to help you prepare for the roll-out. Pro, Business, and Enterprise customers can skip the roll-out and opt in for their zones today, or opt out ahead of time using the API described in our companion blog post. Before rolling out for Enterprise in February 2024, we will add a toggle on the dashboard to opt out.

If you're excited to get started now, check out our blog with the technical details and flip on post-quantum cryptography support via the API!

What’s included and what is next?

With an upgrade of this magnitude, we wanted to focus on our most used products first and then expand outward to cover our edge cases. This process has led us to include the following products and systems in this roll out:

1.1.1.1
AMP
API Gateway
Argo Smart Routing
Auto Minify
Automatic Platform Optimization
Automatic Signed Exchange
Cloudflare Egress
Cloudflare Images
Cloudflare Rulesets
Cloudflare Snippets
Cloudflare Tunnel
Custom Error Pages
Flow Based Monitoring
Health checks
Hermes
Host Head Checker
Magic Firewall
Magic Network Monitoring
Network Error Logging
Project Flame
Quicksilver
R2 Storage
Request Tracer
Rocket Loader
Speed on Cloudflare Dash
SSL/TLS
Traffic Manager
WAF, Managed Rules
Waiting Room
Web Analytics

If a product or service you use is not listed here, we have not started rolling out post-quantum cryptography to it yet. We are actively working on rolling out post-quantum cryptography to all products and services including our Zero Trust products. Until we have achieved post-quantum cryptography support in all of our systems, we will publish an update blog in every Innovation Week that covers which products we have rolled out post-quantum cryptography to, the products that will be getting it next, and what is still on the horizon.

Products we are working on bringing post-quantum cryptography support to soon:

Cloudflare Gateway
Cloudflare DNS
Cloudflare Load Balancer
Cloudflare Access
Always Online
Zaraz
Logging
D1
Cloudflare Workers
Cloudflare WARP
Bot Management

Why now?

As we announced earlier this year, post-quantum cryptography will be included for free in all Cloudflare products and services that can support it. The best encryption technology should be accessible to everyone – free of charge – to help support privacy and human rights globally.

As we mentioned in March:

“What was once an experimental frontier has turned into the underlying fabric of modern society. It runs in our most critical infrastructure like power systems, hospitals, airports, and banks. We trust it with our most precious memories. We trust it with our secrets. That’s why the Internet needs to be private by default. It needs to be secure by default.”

Our work on post-quantum cryptography is driven by the thesis that quantum computers that can break conventional cryptography create a problem similar to the Year 2000 bug. We know there is going to be a problem in the future that could have catastrophic consequences for users, businesses, and even nation states. The difference this time is that we don’t know the date and time that this break in the computational paradigm will occur. Worse, any traffic captured today could be decrypted in the future. We need to prepare today to be ready for this threat.

We are excited for everyone to adopt post-quantum cryptography into their systems. To follow the latest developments of our deployment of post-quantum cryptography and third-party client/server support, check out pq.cloudflareresearch.com and keep an eye on this blog.

***

[1] We are using a preliminary version of Kyber, NIST’s pick for post-quantum key agreement. Kyber has not been finalized. We expect a final standard to be published in 2024 under the name ML-KEM, which we will then adopt promptly while deprecating support for X25519Kyber768Draft00.

Announcing Cloudflare Incident Alerts

Post Syndicated from Mia Malden original http://blog.cloudflare.com/incident-alerts/


A lot of people rely on Cloudflare. We serve over 46 million HTTP requests per second on average; millions of customers use our services, including 31% of the Fortune 1000. And these numbers are only growing.

Given the privileged position we sit in to help the Internet to operate, we’ve always placed a very large emphasis on transparency during incidents. But we’re constantly striving to do better.

That’s why today we are excited to announce Incident Alerts — available via email, webhook, or PagerDuty. These notifications are easily accessible in the Cloudflare dashboard, and they’re customizable to prevent notification overload. Best of all, they’re available to everyone; you simply need a free account to get started.

Lifecycle of an incident


Without proper transparency, incidents cause confusion and waste resources for anyone who relies on the Internet. With so many different entities working together to make the Internet operate, diagnosing and troubleshooting problems can be complicated and time-consuming. By far the best solution is for providers to offer transparent and proactive alerting, so that any time something goes wrong, it’s clear exactly where the problem is.

Cloudflare incident response

We understand the importance of proactive and transparent alerting around incidents. We have worked to improve communications by directly alerting enterprise-level customers and allowing everyone to subscribe to an RSS feed or leverage the Cloudflare Status API. Additionally, we update the Cloudflare status page — which catalogs incident reports, updates, and resolutions — throughout an incident’s lifecycle, as well as tracking scheduled maintenance there.

However, not everyone wants to use the Status API or subscribe to an RSS feed. Both options require some infrastructure and programmatic effort on the customer’s end, and neither offers simple configuration to filter out noise like scheduled maintenance. For those who don’t want to build anything themselves, visiting the status page is still a pull, rather than a push, model: customers need to take it upon themselves to monitor Cloudflare’s status, and timeliness in these situations can make a world of difference.

Without a proactive channel of communication, there can be a disconnect between Cloudflare and our customers during incidents. Although we update the status page as soon as possible, the lack of a push notification represents a gap in meeting our customers’ expectations. The new Cloudflare Incident Alerts aim to remedy that.

Simple, free, and fast notifications

We want to proactively notify you as soon as a Cloudflare incident may be affecting your service — without any programmatic steps on your end. Unlike the Status API and an RSS feed, Cloudflare Incident Alerts are configurable through just a few clicks in the dashboard, and you can choose to receive email, PagerDuty, or webhook alerts for incidents involving specific products at different levels of impact. The Status API will continue to be available.

With this multidimensional granularity, you can filter notifications by specific service and severity. If you are, for example, a Cloudflare for SaaS customer, you may want alerts for delays in custom hostname activation but not for increased latency on Stream. Likewise, you may only care about critical incidents instead of getting notified for minor incidents. Incident Alerts give you the ability to choose.

Lifecycle of an Incident

How to filter incidents to fit your needs

You can filter incident notifications with the following categories:

  • Cloudflare Sites and Services: get notified when an incident is affecting certain products or product areas.
  • Impact level: get notified for critical, major, and/or minor incidents.

These categories are not mutually exclusive. Here are a few possible configurations:

  • Notify me via email for all critical incidents.
  • Notify me via webhook for critical & major incidents affecting Pages.
  • Notify me via PagerDuty for all incidents affecting Stream.

With over fifty different alerts available via the dashboard, you can tailor your notifications to what you need. You can customize not only which alerts you are receiving but also how you would like to be notified. With PagerDuty, webhooks, and email integrated into the system, you have the flexibility of choosing what will work best with your working environment. Plus, with multiple configurations within many of the available notifications, we make it easy to only get alerts about what you want, when you want them.
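The resulting behavior is a simple predicate over a notification's component and impact level. A sketch (hypothetical field names; see the alert documentation for the actual payload schema):

```python
def alert_matches(alert: dict, components: set, impacts: set) -> bool:
    # An empty filter set means "notify for everything", mirroring the
    # dashboard default when a field is left blank.
    component_ok = not components or alert["component"] in components
    impact_ok = not impacts or alert["impact"] in impacts
    return component_ok and impact_ok
```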

Try it out

You can start to configure incident alerts on your Cloudflare account today. Here’s how:

  1. Navigate to the Cloudflare dashboard → Notifications.
  2. Select “Add”.
  3. Select “Incident Alerts”.
  4. Enter your notification name and description.
  5. Select the impact level(s) and component(s) for which you would like to be notified. If either field is left blank, it will default to all impact levels or all components, respectively.
  6. Select how you want to receive the notifications:
     • Check PagerDuty
     • Add Webhook
     • Add email recipient
  7. Select “Save”.
  8. Test the notification by selecting “Test” on the right side of its row.

For more information on Cloudflare’s Alert Notification System, visit our documentation here.

Cloudflare Zaraz steps up: general availability and new pricing

Post Syndicated from Yair Dovrat original http://blog.cloudflare.com/cloudflare-zaraz-steps-up-general-availability-and-new-pricing/


This post is also available in Deutsch, Français.

Cloudflare Zaraz has transitioned out of beta and is now generally available to all customers. It is included under the free, paid, and enterprise plans of the Cloudflare Developer Platform. Visit our docs to learn more on our different plans.


Zaraz is part of the Cloudflare Developer Platform

Cloudflare Zaraz is a solution that developers and marketers use to load third-party tools like Google Analytics 4, Facebook CAPI, TikTok, and others. With Zaraz, Cloudflare customers can easily transition to server-side data collection with just a few clicks, without the need to set up and maintain their own cloud environment or make additional changes to their website for installation. Server-side data collection, as facilitated by Zaraz, shifts analytics reporting to the server rather than loading numerous JavaScript files in the user's browser. It's a rapidly growing trend due to browser limitations on using third-party solutions and cookies. The result is significantly faster websites, plus enhanced security and privacy on the web.

We've had Zaraz in beta for a year and a half now. Throughout this time, we've dedicated our efforts to meeting as many customers as we could, gathering feedback, and getting a deep understanding of our users' needs before addressing them. We've been shipping features at a high rate and have now reached a stage where our product is robust, flexible, and competitive. It also offers unique features not found elsewhere, thanks to being built on Cloudflare’s global network, such as Zaraz’s Worker Variables. We have cultivated a strong and vibrant Discord community, and we have certified Zaraz developers ready to help anyone with implementation and configuration.

With more than 25,000 websites running Zaraz today – from personal sites to those of some of the world's biggest companies – we feel confident it's time to go out of beta, and introduce our new pricing system. We believe this pricing is not only generous to our customers, but also competitive and sustainable. We view this as the next logical step in our ongoing commitment to our customers, for whom we're building the future.

If you're building a web application, there's a good chance you've spent at least some time implementing third-party tools for analytics, marketing performance, conversion optimization, A/B testing, customer experience and more. Indeed, according to the Web Almanac report, 94% of mobile pages used at least one third-party solution in 2022, and third-party requests accounted for 45% of all requests made by websites. It's clear that third-party solutions are everywhere. They have become an integral part of how the web has evolved. Third-party tools are here to stay, and they require effective developer solutions. We are building Zaraz to help developers manage the third-party layer of their website properly.

Starting today, Cloudflare Zaraz is available to everyone for free under their Cloudflare dashboard, and the paid version of Zaraz is included in the Workers Paid plan. The Free plan is designed to meet the needs of most developers who want to use Zaraz for personal use cases. For a price starting at $5/month, customers of the Workers Paid plan can enjoy the extensive list of features that makes Zaraz powerful, deploy Zaraz on their professional projects, and utilize the pay-as-you-go system. This is in addition to everything else included in the Workers Paid plan. The Enterprise plan, on the other hand, addresses the needs of larger businesses looking to leverage our platform to its fullest potential.

How is Zaraz priced?

Zaraz pricing is based on two components: Zaraz Loads and the set of features. A Zaraz Load is counted each time a web page loads the Zaraz script and/or the Pageview trigger is activated. For Single Page Applications, each URL navigation is counted as a new Zaraz Load. Under the Zaraz Monitoring dashboard, you can find a report showing how many Zaraz Loads your website has generated during a specific time period. Zaraz Loads and features are factored into our billing as follows:


Free plan

The Free Plan has a limit of 100,000 Zaraz Loads per month per account. This should allow almost everyone wanting to use Zaraz for personal use cases, like personal websites or side projects, to do so for free. After 100,000 Zaraz Loads, Zaraz will simply stop functioning.

Following the same logic, the free plan includes everything you need in order to use Zaraz for personal use cases. That includes Auto-injection, Zaraz Debugger, Zaraz Track and Zaraz Set from our Web API, Consent Management Platform (CMP), Data Layer compatibility mode, and many more.

If your websites generate more than 100,000 Zaraz loads combined, you will need to upgrade to the Workers Paid plan to avoid service interruption. If you desire some of the more advanced features, you can upgrade to Workers Paid and get access for only $5/month.

Workers Paid plan

The Workers Paid Plan includes the first 200,000 Zaraz Loads per month per account, free of charge.

If you exceed the free Zaraz Loads allocations, you'll be charged $0.50 for every additional 1,000 Zaraz Loads, but the service will continue to function. (You can set notifications to get notified when you exceed a certain threshold of Zaraz Loads, to keep track of your usage.)
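Putting those numbers together, a Workers Paid bill for Zaraz can be estimated as follows (a sketch based on the published figures; whether partial thousands are rounded up is an assumption, and actual billing may differ):

```python
import math

def zaraz_monthly_cost(loads: int) -> float:
    BASE = 5.00          # Workers Paid subscription per month
    INCLUDED = 200_000   # Zaraz Loads included per month per account
    PER_1000 = 0.50      # price per additional 1,000 Zaraz Loads
    extra = max(0, loads - INCLUDED)
    return BASE + math.ceil(extra / 1000) * PER_1000
```

For example, 201,000 Zaraz Loads in a month would come to $5.50 under these assumptions.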

Workers Paid customers can enjoy most of Zaraz’s robust feature set, including: Zaraz E-commerce from our Web API, Custom Endpoints, Workers Variables, Preview/Publish Workflow, Privacy Features, and more.

If your websites generate Zaraz Loads in the millions, you might want to consider the Workers Enterprise plan. Beyond the free 200,000 Zaraz Loads per month for your account, it offers additional volume discounts based on your Zaraz Loads usage as well as Cloudflare’s professional services.

Enterprise plan

The Workers Enterprise Plan includes the first 200,000 Zaraz Loads per month per account free of charge. Based on your usage volume, Cloudflare’s sales representatives can offer compelling discounts. Get in touch with us here. Workers Enterprise customers enjoy all paid enterprise features.

I already use Zaraz, what should I do?

If you were using Zaraz under the free beta, you have a period of two months to adjust and decide how you want to go about this change. Nothing will change until September 20, 2023. In the meantime we advise you to:

  1. Get clarity on your Zaraz Loads usage. Visit Monitoring to check how many Zaraz Loads you had in the previous couple of months. If you are worried about generating more than 100,000 Zaraz Loads per month, you might want to consider upgrading to Workers Paid via the plans page to avoid service interruption. If you generate a large number of Zaraz Loads, you’ll probably want to reach out to your sales representative and get volume discounts. You can leave your details here, and we’ll get back to you.
  2. Check if you are using one of the paid features as listed in the plans page. If you are, then you would need to purchase a Workers Paid subscription, starting at $5/month via the plans page. On September 20, these features will cease to work unless you upgrade.

* Please note, as of now, free plan users won't have access to any paid features. However, if you're already using a paid feature without a Workers Paid subscription, you can continue to use it risk-free until September 20. After this date, you'll need to upgrade to keep using any paid features.

We are here for you

As we make this important transition, we want to extend our sincere gratitude to all our beta users who have provided invaluable feedback and have helped us shape Zaraz into what it is today. We are excited to see Zaraz move beyond its beta stage and look forward to continuing to serve your needs and helping you build better, faster, and more secure web experiences. We know this change comes with adjustments, and we are committed to making the transition as smooth as possible. In the next couple of days, you can expect an email from us, with clear next steps and a way to get advice in case of need. You can always get in touch directly with the Cloudflare Zaraz team on Discord, or the community forum.

Thank you for joining us on this journey and for your ongoing support and trust in Cloudflare Zaraz. Let's continue to build the future of the web together!

GA Week 2022: what you may have missed

Post Syndicated from John Graham-Cumming original https://blog.cloudflare.com/ga-week-2022-recap/

Back in 2019, we worked on a chart for Cloudflare’s IPO S-1 document that showed major releases since Cloudflare was launched in 2010. Here’s that chart:

[Chart: major Cloudflare product releases since launch in 2010]

Of course, that chart doesn’t show everything we’ve shipped, but the curve demonstrates a truth about a growing company: we keep shipping more and more products and services. Some of those things start with a beta, sometimes open and sometimes private. But all of them become generally available after the beta period.

Back in, say, 2014, we only had a few major releases per year. But as the years have progressed and the company has grown, we have shipped constant updates, releases, and changes. This year, a confluence of products becoming generally available in September meant it made sense to wrap them all up into GA Week.

GA Week has now finished, and the team is working to put the finishing touches on Birthday Week (coming this Sunday!), but here’s a recap of everything that we launched this week.

| What launched | Summary | Available for |
| --- | --- | --- |
| **Monday (September 19)** | | |
| Cloudforce One | Our threat operations and research team, Cloudforce One, is now open for business and has begun conducting threat briefings. | Enterprise |
| Improved Access Control: Domain Scoped Roles are now generally available | Scope your users' access to specific domains with Domain Scoped Roles, granting each user only the roles and zone access they need. | Currently available to all Free plans, and coming to Enterprise shortly |
| Account WAF now available to Enterprise customers | Users can manage and configure the WAF for all of their zones from a single pane of glass. This includes custom rulesets and managed rulesets (Core/OWASP and Managed). | Enterprise |
| Introducing Cloudflare Adaptive DDoS Protection – our new traffic profiling system for mitigating DDoS attacks | Cloudflare's new Adaptive DDoS Protection system learns your unique traffic patterns and constantly adapts to protect you against sophisticated DDoS attacks. | Built into our Advanced DDoS product |
| Introducing Advanced DDoS Alerts | Cloudflare's Advanced DDoS Alerts provide tailored and actionable notifications in real time. | Built into our Advanced DDoS product |
| **Tuesday (September 20)** | | |
| Detect security issues in your SaaS apps with Cloudflare CASB | By leveraging API-driven integrations, receive comprehensive visibility and control over SaaS apps to prevent data leaks, detect Shadow IT, block insider threats, and avoid compliance violations. | Enterprise Zero Trust |
| Cloudflare Data Loss Prevention now Generally Available | Data Loss Prevention is now available for Cloudflare customers, giving customers more options to protect their sensitive data. | Enterprise Zero Trust |
| Cloudflare One Partner Program acceleration | The Cloudflare One Partner Program gains traction with existing and prospective partners. | Enterprise Zero Trust |
| Isolate browser-borne threats on any network with WAN-as-a-Service | Defend any network from browser-borne threats with Cloudflare Browser Isolation by connecting legacy firewalls over IPsec/GRE. | Zero Trust |
| Cloudflare Area 1 – how the best Email Security keeps getting better | Cloudflare started using Area 1 in 2020 and later acquired the company in 2022. We were most impressed by how phishing, responsible for 90+% of cyberattacks, basically became a non-issue overnight when we deployed Area 1. But our vision is much bigger than preventing phishing attacks. | Enterprise Zero Trust |
| **Wednesday (September 21)** | | |
| R2 is now Generally Available | R2 gives developers object storage minus the egress fees. With the GA of R2, developers will be free to focus on innovation instead of worrying about the costs of storing their data. | All plans |
| Stream Live is now Generally Available | Stream live video to viewers at a global scale. | All plans |
| The easiest way to build a modern SaaS application | With Workers for Platforms, your customers can build custom logic to meet their needs right into your application. | Enterprise |
| Going originless with Cloudflare Workers – Building a Todo app – Part 1: The API | Part 1 in a series on building completely serverless applications on Cloudflare's Developer Platform. | Free for all Workers users |
| Store and Retrieve your logs on R2 | Log Storage on R2: a cost-effective solution to store event logs for any of our products! | Enterprise (as part of Logpush) |
| SVG support in Cloudflare Images | Cloudflare Images now supports storing and delivering SVG files. | Part of Cloudflare Images |
| **Thursday (September 22)** | | |
| Regional Services Expansion | Cloudflare is launching the Data Localization Suite for Japan, India and Australia. | Enterprise |
| API Endpoint Management and Metrics are now GA | API Shield customers can save, update, and monitor the performance of API endpoints. | Enterprise |
| Cloudflare Zaraz supports Managed Components and DLP to make third-party tools private | Third-party tools are the only thing you can't control on your website, unless you use Managed Components with Cloudflare Zaraz. | Available on all plans |
| Logpush: now lower cost and with more visibility | Logpush jobs can now be filtered to contain only logs of interest. Also, you can receive alerts when jobs are failing, as well as get statistics on the health of your jobs. | Enterprise |

Of course, you won't have to wait a year for more products to become GA. We'll be shipping betas and making products generally available throughout the year. And we'll continue iterating on our products so that every one of them becomes a leader in its category.

As we said at the start of GA Week:

“But it’s not just about making products work and be available, it’s about making the best-of-breed. We ship early and iterate rapidly. We’ve done this over the years for WAF, DDoS mitigation, bot management, API protection, CDN and our developer platform. Today, analyst firms such as Gartner, Forrester and IDC recognize us as leaders in all those areas.”

Now, onwards to Birthday Week!

Logpush: now lower cost and with more visibility

Post Syndicated from Duc Nguyen original https://blog.cloudflare.com/logpush-filters-alerts/


Logs are a critical part of every successful application. Cloudflare products and services around the world generate massive amounts of logs upon which customers of all sizes depend. Structured logs from our products are used by customers for purposes including analytics, debugging performance issues, monitoring application health, maintaining security standards for compliance reasons, and much more.

Logpush is Cloudflare’s product for pushing these critical logs to customer systems for consumption and analysis. Whenever our products generate logs as a result of traffic or data passing through our systems from anywhere in the world, we buffer these logs and push them directly to customer-defined destinations like Cloudflare R2, Splunk, AWS S3, and many more.

Today we are announcing three key new features for Cloudflare's Logpush product. First, the ability to push only logs matching certain criteria. Second, the ability to get alerted when logs are failing to be pushed, whether because a customer destination is having issues or because of network problems between Cloudflare and the destination. Third, the ability to query analytics on the health of Logpush jobs, such as how many bytes and records were pushed and how many pushes succeeded or failed.

Filtering logs before they are pushed

Because logs are both critical and generated in high volume, many customers must maintain complex infrastructure just to ingest and store them, and must deal with ever-increasing related costs. On a typical day, one real example customer receives about 21 billion records, or 2.1 terabytes of gzip-compressed logs (about 24.9 TB uncompressed). Over the course of a month, that easily adds up to hundreds of billions of events and hundreds of terabytes of data.
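A quick back-of-the-envelope calculation shows how those daily figures scale to the monthly totals cited above:

```python
# Scale the example customer's daily log volume to a 30-day month.
daily_records = 21e9          # ~21 billion records per day
daily_compressed_tb = 2.1     # gzip-compressed terabytes per day
daily_uncompressed_tb = 24.9  # uncompressed terabytes per day
days = 30

monthly_records = daily_records * days
monthly_compressed_tb = daily_compressed_tb * days
monthly_uncompressed_tb = daily_uncompressed_tb * days

print(monthly_records / 1e9, "billion records")           # 630.0 billion records
print(round(monthly_compressed_tb), "TB compressed")      # 63 TB compressed
print(round(monthly_uncompressed_tb), "TB uncompressed")  # 747 TB uncompressed
```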

It is often unnecessary to store and analyze all of this data; customers could get by with specific subsets matching certain criteria. For example, a customer might want only the HTTP data with status code >= 400, or only the firewall data where the action taken was to block the user.

You can now achieve this in Logpush jobs by setting filters on the fields of the log messages themselves. Filters can be configured through either our API or the Cloudflare dashboard.

To do this in the dashboard, either create a new Logpush job or modify an existing job. You will see the option to set certain filters. For example, an ecommerce customer might want to receive logs only for the checkout page where the bot score was non-zero:

[Screenshot: configuring filters on a Logpush job in the dashboard]
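Under the hood, a filter like the dashboard example above is expressed as a JSON "where" clause over log fields. The sketch below builds such an expression in Python; the field names (ClientRequestPath, BotScore) come from the HTTP requests dataset, and the exact payload shape should be checked against the Logpush API docs before use:

```python
import json

# Illustrative Logpush filter: only push HTTP request logs for the checkout
# page where the bot score is non-zero. The "where" clause combines leaf
# conditions with "and"/"or"; each leaf names a log field, an operator,
# and a value.
checkout_filter = {
    "where": {
        "and": [
            {"key": "ClientRequestPath", "operator": "contains", "value": "/checkout"},
            {"key": "BotScore", "operator": "gt", "value": 0},
        ]
    }
}

# The filter is supplied as a JSON string in the Logpush job configuration.
print(json.dumps(checkout_filter))
```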

Logpush job alerting

When logs are a critical part of your infrastructure, you want peace of mind that your logging pipeline is healthy. With that in mind, we are announcing the ability to get notified when a Logpush job has been retrying and failing to push for 24 hours.

To set up alerts in the Cloudflare dashboard:

1. First, navigate to “Notifications” in the left-panel of the account view

2. Next, click the "add" button

3. Select the alert “Failing Logpush Job Disabled”

4. Configure the alert and click Save.

That’s it — you will receive an email alert if your Logpush job is disabled.

Logpush Job Health API

We have also added the ability to query stats related to the health of your Logpush jobs via our GraphQL API. Customers can now query for things like the number of bytes pushed, number of compressed bytes pushed, number of records pushed, the status of each push, and much more. Using these stats, customers gain greater visibility into a core part of their infrastructure. The GraphQL API is self-documenting, so full details about the new logpushHealthAdaptiveGroups node can be found using any GraphQL client, but head to the GraphQL docs for more information.

Below are a couple of example queries showing how you can use the GraphQL API to find stats related to your Logpush jobs.

Query for number of pushes to S3 that resulted in status code != 200

query ($zoneTag: string)
{
  viewer
  {
    zones(filter: { zoneTag: $zoneTag})
    {
      logpushHealthAdaptiveGroups(filter: {
        datetime_gt:"2022-08-15T00:00:00Z",
        destinationType:"s3",
        status_neq:200
      }, 
      limit:10)
      {
        count,
        dimensions {
          jobId,
          status,
          destinationType
        }
      }
    }
  }
}

Getting the number of bytes, compressed bytes and records that were pushed

query ($zoneTag: string)
{
  viewer
  {
    zones(filter: { zoneTag: $zoneTag})
    {
      logpushHealthAdaptiveGroups(filter: {
        datetime_gt:"2022-08-15T00:00:00Z",
        destinationType:"s3",
        status:200
      }, 
      limit:10)
      {
        sum {
          bytes,
          bytesCompressed,
          records
        }
      }
    }
  }
}
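As an illustration of how the returned health data might be consumed, the sketch below tallies failed pushes per job from a response shaped like the first query's results (the response layout shown here is an assumption for illustration, not a verbatim API response):

```python
# Assumed response shape for the logpushHealthAdaptiveGroups query above.
response = {
    "data": {
        "viewer": {
            "zones": [{
                "logpushHealthAdaptiveGroups": [
                    {"count": 12, "dimensions": {"jobId": 111, "status": 403, "destinationType": "s3"}},
                    {"count": 3,  "dimensions": {"jobId": 222, "status": 500, "destinationType": "s3"}},
                    {"count": 12, "dimensions": {"jobId": 111, "status": 503, "destinationType": "s3"}},
                ]
            }]
        }
    }
}

# Tally non-200 pushes per Logpush job across all returned zones.
failures = {}
for zone in response["data"]["viewer"]["zones"]:
    for group in zone["logpushHealthAdaptiveGroups"]:
        job = group["dimensions"]["jobId"]
        failures[job] = failures.get(job, 0) + group["count"]

print(failures)  # {111: 24, 222: 3}
```

A tally like this could feed a dashboard or trigger an internal alert when a single job accounts for most failures.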

Summary

Logpush is a robust and flexible platform for customers who need to integrate their own logging and monitoring systems with Cloudflare. Different Logpush jobs can be deployed to support multiple destinations or, with filtering, multiple subsets of logs.

Customers who haven’t created Logpush jobs are encouraged to do so. Try pushing your logs to R2 for safe-keeping! For customers who don’t currently have access to this powerful tool, consider upgrading your plan.

Cloudflare Zaraz supports Managed Components and DLP to make third-party tools private

Post Syndicated from Yo'av Moshe original https://blog.cloudflare.com/zaraz-uses-managed-components-and-dlp-to-make-tools-private/

When it comes to privacy, much is in your control as a website owner. You decide what information to collect, how to transmit it, how to process it, and where to store it. If you care about the privacy of your users, you're probably taking action to ensure that these steps are handled sensitively and carefully. If your website includes no third-party tools at all – no analytics, no conversion pixels, no widgets, nothing at all – then that's probably enough! But if your website is one of the other 94% of the Internet, you have some third-party code running on it. Unfortunately, you probably can't tell what exactly that code is doing.

Third-party tools are great. Your product team, marketing team, BI team – they're all right when they say that these tools make a better website. Third-party tools can help you understand your users better, embed information such as maps or chat widgets, or measure and attribute conversions more accurately. The problem doesn't lie with the tools themselves, but with the way they are implemented: third-party scripts.

Third-party scripts are pieces of JavaScript that your website loads, often from a remote web server. Those scripts are then parsed by the browser, and they can generally do everything that your website can do. They can change the page completely, write cookies, read form inputs and URLs, track visitors, and more. It is mostly an unrestricted system. Scripts were built this way because it used to be the only way to create a third-party tool.

Over the years, companies have suffered a lot from third-party scripts. Those scripts were sometimes hacked, and started hijacking information from visitors to the websites using them. More often, third-party scripts simply collect information that could be sensitive, exposing website visitors in ways the website owner never intended.

Recently we announced that we're open sourcing Managed Components. Managed Components are a new API for loading third-party tools in a secure and privacy-aware way. They change how third-party tools load, because by default there are no third-party scripts involved at all. Instead, there are components, which are controlled with a Component Manager like Cloudflare Zaraz.

In this blog post we will discuss how to use Cloudflare Zaraz for granting and revoking permissions from components, and for controlling what information flows into them. Even more exciting, we're also announcing the upcoming DLP features of Cloudflare Zaraz, which can report, mask, and remove PII from information shared with third parties by mistake.

How are Managed Components better

Because Managed Components run isolated inside a Component Manager, they are more private by design. Unlike a script that gets unlimited access to everything on your website, a Managed Component is transparent about what kind of access it needs, and operates under a Component Manager that grants and revokes permissions.

When you add a Managed Component to your website, the Component Manager will list all the permissions required for this component. Such permissions could be “setting cookies”, “making client-side network requests”, “installing a widget” and more. Depending on the tool, you’ll be able to remove permissions that are optional, if your website maintains a more restrictive approach to privacy.

Aside from permissions, the Component Manager also lets you choose what information is exposed to each Managed Component. Perhaps you don't want to send IP addresses to Facebook? Or would rather not send user agent strings to Mixpanel? Managed Components put you in control by telling you exactly what information is consumed by each tool, and letting you filter, mask, or hide it according to your needs.

Data Loss Prevention with Cloudflare Zaraz

Another area we’re working on is developing DLP features that let you decide what information to forward to different Managed Components not only by the field type, e.g. “user agent header” or “IP address”, but by the actual content. DLP filters can scan all information flowing into a Managed Component and detect names, email addresses, SSN and more – regardless of which field they might be hiding under.

Our DLP filters will be highly flexible. You can enable them only for users from specific geographies, for users on specific pages, for users with a certain cookie, and you can even mix and match different rules. After configuring a DLP filter, you set which Managed Components it applies to, letting you filter information differently depending on the receiving target.

For each DLP filter you can choose an action type. For example, you might want to drop any event in which the system detected a Social Security number, but only report a warning if a first name was detected. Masking replaces a detected email address with an obfuscated placeholder, so events containing email addresses are still sent without exposing the address itself.
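A masking step along these lines might look like the following sketch (the regex and placeholder policy are illustrative, not Zaraz's actual implementation):

```python
import re

# Replace every email address in a payload with a masked placeholder,
# keeping the event itself intact. The pattern is a deliberately simple
# illustration, not a full RFC 5322 matcher.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def mask_emails(payload: str, placeholder: str = "***@***.***") -> str:
    return EMAIL_RE.sub(placeholder, payload)

event = '{"user": "jane.doe@example.com", "page": "/checkout"}'
print(mask_emails(event))  # {"user": "***@***.***", "page": "/checkout"}
```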

While there are many DLP tools available on the market, we believe the integration between Cloudflare Zaraz's DLP features and Managed Components is the safest approach, because the DLP rules fence the information not only before it is sent, but before the component even accesses it.

Getting started with Managed Components and DLP

Cloudflare Zaraz is the most advanced Component Manager, and you can start using it today. We recently also announced an integrated Consent Management Platform. If your third-party tool doesn't yet have a Managed Component, you can always write one of your own, as the technology is completely open source.

While we’re working on bringing advanced permissions handling, data masking and DLP Filters to all users, you can sign up for the closed beta, and we’ll contact you shortly.

API Endpoint Management and Metrics are now GA

Post Syndicated from Jin-Hee Lee original https://blog.cloudflare.com/api-management-metrics/

The Internet is an endless flow of conversations between computers. These conversations, the constant exchange of information from one computer to another, are what allow us to interact with the Internet as we know it. Application Programming Interfaces (APIs) are the vital channels that carry these conversations, and their usage is quickly growing: in fact, more than half of the traffic handled by Cloudflare is for APIs, and this is increasing twice as fast as traditional web traffic.

In March, we announced that we’re expanding our API Shield into a full API Gateway to make it easy for our customers to protect and manage those conversations. We already offer several features that allow you to secure your endpoints, but there’s more to endpoints than their security. It can be difficult to keep track of many endpoints over time and understand how they’re performing. Customers deserve to see what’s going on with their API-driven domains and have the ability to manage their endpoints.

Today, we’re excited to announce that the ability to save, update, and monitor the performance of all your API endpoints is now generally available to API Shield customers. This includes key performance metrics like latency, error rate, and response size that give you insights into the overall health of your API endpoints.

A Refresher on APIs

The bar for what we expect an application to do for us has risen tremendously over the past few years. When we open a browser, app, or IoT device, we expect to be able to connect to data instantly, compare dozens of flights within seconds, choose a menu item from a food delivery app, or see the weather for ten locations at once.

How are applications able to provide this kind of dynamic engagement for their users? They rely on APIs, which provide access to data and services—either from the application developer or from another company. APIs are fundamental in how computers (or services) talk to each other and exchange information.

You can think of an API as a waiter: say a customer orders a delicious bowl of Mac n Cheese. The waiter accepts this order from the customer, communicates the request to the chef in a format the chef can understand, and then delivers the Mac n Cheese back to the customer (assuming the chef has the ingredients in stock). The waiter is the crucial channel of communication, which is exactly what the API does.

Managing API Endpoints

The first step in managing APIs is to get a complete list of all the endpoints exposed to the Internet. API Discovery automatically does this for any traffic flowing through Cloudflare. Undiscovered APIs can't be monitored by security teams (since they don't know about them), and are thus less likely to have proper security policies and best practices applied. However, customers have told us they also want the ability to manually add and manage APIs that are not yet deployed, or to ignore certain endpoints (for example, those in the process of deprecation). Now, API Shield customers can choose to save endpoints found by Discovery or manually add endpoints to API Shield.

But security vulnerabilities aren’t the only risk or area of concern with APIs – they can be painfully slow or connections can be unsuccessful. We heard questions from our customers such as: what are my most popular endpoints? Is this endpoint significantly slower than it was yesterday? Are any endpoints returning errors that may indicate a problem with the application?

That’s why we built Performance Metrics into API Shield, which allows our customers to quickly answer these questions themselves with real-time data.

Prioritizing Performance

Once you’ve discovered, saved, or removed endpoints, you want to know what’s going well and what’s not. To end-users, a huge part of what defines the experience as “going well” is good performance. Poor performance can lead to a frustrating experience: when you’re shopping online and press a button to check out, you don’t want to wait around for minutes for the page to load. And you certainly never want to see a dreaded error symbol telling you that you can’t get what you came for.

Exposing performance metrics of API endpoints puts concrete numerical data into your developers’ hands to tell you how things are going. When things are going poorly, these dashboard metrics will point out exactly which aspect of performance is causing concern: maybe you expected to see a spike in requests, but find out that request count is normal and latency is just higher than usual.

Empowering our customers to make data-driven decisions to better manage their APIs ends up being a win for our customers and our customers’ customers, who expect to seamlessly engage with the domain’s APIs and get exactly what they came for.

Management and Performance Metrics in the Dashboard

So, what’s available today? Log onto your Cloudflare dashboard, go to the domain-level Security tab, and open up the API Shield page. Here, you’ll see the Endpoint Management tab, which shows you all the API endpoints that you’ve saved, alongside placeholders for metrics that will soon be gathered.

Here you can easily delete endpoints you no longer want to track, or manually add additional endpoints. You can also export schemas for each host to share internally or externally.

Once you've saved the endpoints that you want to keep tabs on, Cloudflare will start collecting data on their performance and make it available to you as soon as possible.

In Endpoint Management, you can see a few summary metrics in the collapsed view of each endpoint, including recommended rate limits, average latency, and error rate. It can be difficult to tell whether things are going well or not just from seeing a value alone, so we added sparklines that show relative performance, comparing an endpoint’s current metrics with its usual or previous data.

If you want to view further details about a given endpoint, you can expand it for additional metrics such as response size and errors separated by 4xx and 5xx. The expanded view also allows you to view all metrics at a single timestamp by hovering over the charts.

For each saved endpoint, customers can see the following metrics:

  • Request count: total number of requests to the endpoint over time.
  • Rate limiting recommendation per 10 minutes, which is guided by the request count.
  • Latency: average origin response time, in milliseconds (ms). How long does it take from the moment a visitor makes a request to the moment the visitor gets a response back from the origin?
  • Error rate vs. overall traffic: grouped by 4xx, 5xx, and their sum.
  • Response size: average size of the response (in bytes) returned to the request.

You can toggle between viewing these metrics on a 24-hour period or a 7-day period, depending on the scale on which you’d like to view your data. And in the expanded view, we provide a percentage difference between the averages of the current vs. the previous period. For example, say I’m viewing my metrics on a 24-hour timeline. My average latency yesterday was 10 ms, and my average latency today is 30 ms, so the dashboard shows a 200% increase. We also use anomaly detection to bring attention to endpoints that have concerning performance changes.
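The period-over-period comparison in that example works out as a simple percentage-difference calculation:

```python
def percent_change(previous: float, current: float) -> float:
    """Percentage difference between the previous and current period averages."""
    return (current - previous) / previous * 100

# Average latency was 10 ms yesterday and 30 ms today,
# so the dashboard shows a 200% increase:
print(percent_change(10, 30))  # 200.0
```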

Additional improvements to Discovery and Schema Validation

As part of making endpoint management GA, we’re also adding two additional enhancements to API Shield.

First, API Discovery now accepts cookies — in addition to authorization headers — to discover endpoints and suggest rate limiting thresholds. Previously, you could only identify an API session with HTTP headers, which didn’t allow customers to protect endpoints that use cookies as session identifiers. Now these endpoints can be protected as well. Simply go to the API Shield tab in the dashboard, choose edit session identifiers, and either change the type, or click Add additional identifier.

Second, we added the ability to validate the body of requests via Schema Validation for all customers. Schema Validation allows you to provide an OpenAPI schema (a template for your API traffic) and have Cloudflare block non-conformant requests as they arrive at our edge. Previously, you provided specific headers, cookies, and other features to validate. Now that we can validate the body of requests, you can use Schema Validation to confirm every element of a request matches what is expected. If a request contains strange information in the payload, we’ll notice. Note: customers who have already uploaded schemas will need to re-upload to take advantage of body validation.
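Conceptually, body validation checks each request payload against the fields and types the uploaded OpenAPI schema declares. The sketch below illustrates that idea in miniature; it is not Cloudflare's implementation, and the schema fragment and field names are made up for the example:

```python
# Minimal illustration of body validation: a request body must carry the
# fields the schema declares as required, with the declared primitive types.
schema = {
    "type": "object",
    "required": ["item_id", "quantity"],
    "properties": {
        "item_id": {"type": "string"},
        "quantity": {"type": "integer"},
    },
}

TYPES = {"string": str, "integer": int}

def conforms(body: dict, schema: dict) -> bool:
    # Reject bodies missing a required field.
    if any(field not in body for field in schema["required"]):
        return False
    # Reject bodies whose fields have the wrong primitive type.
    return all(
        isinstance(body[name], TYPES[prop["type"]])
        for name, prop in schema["properties"].items()
        if name in body
    )

print(conforms({"item_id": "sku-42", "quantity": 2}, schema))      # True
print(conforms({"item_id": "sku-42", "quantity": "two"}, schema))  # False
```

In API Shield, a request whose payload fails checks like these would be flagged or blocked at the edge according to the configured action.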

Take a look at our developer documentation for more details on both of these features.

Get started

Endpoint Management, performance metrics, schema exporting, discovery via cookies, and schema body validation are all available now for all API Shield customers. To use them, log into the Cloudflare dashboard, click on Security in the navigation bar, and choose API Shield. Once API Shield is enabled, you’ll be able to start discovering endpoints immediately. You can also use all features through our API.

If you aren’t yet protecting a website with Cloudflare, it only takes a few minutes to sign up.

Regional Services comes to India, Japan and Australia

Post Syndicated from Achiel van der Mandele original https://blog.cloudflare.com/regional-services-comes-to-apac/

This post is also available in Deutsch, Français.

We announced the Data Localization Suite in 2020, when requirements for data localization were already important in the European Union. Since then, we've witnessed a growing trend toward localization globally. We are thrilled to expand our coverage to India, Japan and Australia, allowing more customers to use Cloudflare by giving them precise control over which parts of the Cloudflare network are able to perform advanced functions, like WAF or Bot Management, that require inspecting traffic.

Regional Services, a recap

In 2020, we introduced Regional Services, a new way for customers to use Cloudflare. With Regional Services, customers can limit which data centers actually decrypt and inspect traffic. This helps because certain customers are subject to regulations on where they are allowed to service traffic. Others have agreements with their customers, as part of contracts, specifying exactly where traffic is allowed to be decrypted and inspected.

As one German bank told us: “We can look at the rules and regulations and debate them all we want. As long as you promise me that no machine outside the European Union will see a decrypted bank account number belonging to one of my customers, we’re happy to use Cloudflare in any capacity”.

Under normal operation, Cloudflare uses its entire network to perform all functions. This is what most customers want: leverage all of Cloudflare’s data centers so that you always service traffic to eyeballs as quickly as possible. Increasingly, we are seeing customers that wish to strictly limit which data centers service their traffic. With Regional Services, customers can use Cloudflare’s network but limit which data centers perform the actual decryption. Products that require decryption, such as WAF, Bot Management and Workers will only be applied within those data centers.

How does Regional Services work?

You might be asking yourself: how does that even work? Doesn't Cloudflare operate an anycast network? Cloudflare was built from the bottom up to leverage anycast, a routing technique in which all of Cloudflare's data centers advertise the same IP addresses through the Border Gateway Protocol. Whichever data center is closest to you from a network point of view is the one that you'll hit.

This is great for two reasons. The first is that the closer the data center to you, the faster the reply. The second great benefit is that this comes in very handy when dealing with large DDoS attacks. Volumetric DDoS attacks throw a lot of bogus traffic at you, which overwhelms network capacity. Cloudflare’s anycast network is great at taking on these attacks because they get distributed across the entire network.

Anycast doesn't respect regional borders; it doesn't even know about them. That's why, out of the box, Cloudflare can't guarantee that traffic originating inside a country will also be serviced there. Although you'll typically hit a data center inside your country, it's very possible that your Internet Service Provider will send traffic to a network that routes it to a different country.

Regional Services solves that: when turned on, each data center becomes aware of which region it is operating in. If a user from a country hits a data center that doesn’t match the region that the customer has selected, we simply forward the raw TCP stream in encrypted form. Once it reaches a data center inside the right region, we decrypt and apply all Layer 7 products. This covers products such as CDN, WAF, Bot Management and Workers.

Let’s take an example. A user is in Kerala, India, and their Internet Service Provider has determined that the fastest path to one of our data centers leads to Colombo, Sri Lanka. In this example, the customer has selected India as the sole region within which traffic should be serviced. The Colombo data center sees that this traffic is meant for the India region. It does not decrypt, but instead forwards the stream to the closest data center inside India. There, we decrypt, and products such as WAF and Workers are applied as if the traffic had hit that data center directly.
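As a rough mental model (a hypothetical sketch, not Cloudflare's actual implementation), the per-connection decision each data center makes looks like this:

```javascript
// Hypothetical sketch of the Regional Services routing decision.
// Region identifiers and return values are illustrative only.
function handleConnection(dataCenterRegion, customerRegion) {
  if (dataCenterRegion === customerRegion) {
    // Matching region: decrypt here and apply Layer 7 products
    // such as CDN, WAF, Bot Management and Workers.
    return "decrypt-and-process";
  }
  // Any other region: relay the raw TCP stream, still encrypted,
  // toward the closest data center inside the selected region.
  return "forward-encrypted";
}
```

In the Kerala example above, the Colombo data center would take the "forward-encrypted" branch, and the Indian data center receiving the stream would take the other.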

Bringing Regional Services to Asia

Historically, we’ve seen most interest in Regional Services in geographic regions such as the European Union and the Americas. Over the past few years, however, we have seen a lot of interest from Asia Pacific. Based on customer feedback and analysis of regulations, we quickly concluded there were three key regions we needed to support: India, Japan and Australia. We’re proud to say that all three are now generally available for use today.

But we’re not done yet! We realize there are many more customers that require localization to their particular region. We’re looking to add many more in the near future and are working hard to make it easier to support more of them. If you have a region in mind, we’d love to hear it!

India, Japan and Australia are all live today! If you’re interested in using the Data Localization Suite, contact your account team!

Store and retrieve your logs on R2

Post Syndicated from Shelley Jones original https://blog.cloudflare.com/store-and-retrieve-logs-on-r2/

Following today’s announcement of General Availability of Cloudflare R2 object storage, we’re excited to announce that customers can also store and retrieve their logs on R2.

Cloudflare’s Logging and Analytics products provide vital insights into customers’ applications. Logs in particular play a pivotal role in understanding what occurs at a granular level: we produce detailed logs containing metadata generated by Cloudflare products as events flow through our network, and customers depend on them to illustrate or investigate anything (and everything) from the general performance or health of applications to security incidents.

Until today, we have only provided customers with the ability to export logs to third-party destinations, both to store them and to perform analysis. With Log Storage on R2, we can now offer customers a cost-effective solution to store event logs for any of our products.

The cost conundrum

We’ve unpacked the commercial impact in a previous blog post, but to recap: the cost of storage can vary broadly depending on the volume of requests Internet properties receive. On top of that – and specifically pertaining to logs – there are usually steep fees to access that data whenever the need arises. This can be incredibly problematic, especially when customers have to balance their budget against the need to access their logs – whether it’s to mitigate a potential catastrophe or just out of curiosity.

With R2, not only do we not charge customers egress costs, but we also provide the opportunity to make further operational savings by centralizing storage and retrieval. Most of all, though, we just want to make it easy and convenient for customers to access their logs via our Retrieval API – all you need to do is provide a time range!

Logs on R2: get started!

Why would you want to store your logs on Cloudflare R2? First, R2 is S3 API compatible, so your existing tooling will continue to work as is. Second, not only is R2 cost-effective for storage, we also do not charge any egress fees if you want to get your logs out of Cloudflare to be ingested into your own systems. You can store logs for any Cloudflare product, and you can also store what you need for as long as you need; retention is completely within your control.

Storing Logs on R2

To create Logpush jobs pushing to R2, you can use either the dashboard or Cloudflare API. Using the dashboard, you can create a job and select R2 as the destination during configuration:

[Screenshot: creating a Logpush job in the dashboard with R2 selected as the destination]

To use the Cloudflare API to create the job, do something like:

curl -s -X POST 'https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs' \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <API_KEY>" \
-d '{
  "name": "<DOMAIN_NAME>",
  "destination_conf": "r2://<BUCKET_PATH>/{DATE}?account-id=<ACCOUNT_ID>&access-key-id=<R2_ACCESS_KEY_ID>&secret-access-key=<R2_SECRET_ACCESS_KEY>",
  "dataset": "http_requests",
  "logpull_options": "fields=ClientIP,ClientRequestHost,ClientRequestMethod,ClientRequestURI,EdgeEndTimestamp,EdgeResponseBytes,EdgeResponseStatus,EdgeStartTimestamp,RayID&timestamps=rfc3339",
  "kind": "edge"
}' | jq .

Please see Logpush over R2 docs for more information.

Log Retrieval on R2

If your logs are pushed to R2, you can use the Cloudflare API to retrieve them for a specific time range, like the following:

curl -s -g -X GET 'https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/logs/retrieve?start=2022-09-25T16:00:00Z&end=2022-09-25T16:05:00Z&bucket=<YOUR_BUCKET>&prefix=<YOUR_FILE_PREFIX>/{DATE}' \
-H "X-Auth-Email: <EMAIL>" \
-H "X-Auth-Key: <API_KEY>" \
-H "R2-Access-Key-Id: <R2_ACCESS_KEY_ID>" \
-H "R2-Secret-Access-Key: <R2_SECRET_ACCESS_KEY>" | jq .

See Log Retrieval API for more details.
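Because the Retrieval API is keyed on a time range, a small helper that assembles the query URL keeps call sites tidy. This is a hypothetical convenience wrapper (the endpoint shape follows the curl example above; the helper itself is not part of the API):

```javascript
// Hypothetical helper that builds a Log Retrieval API URL from a time range.
// accountId, bucket and prefix are placeholders for your own values.
function buildRetrievalUrl({ accountId, bucket, prefix, start, end }) {
  const base = `https://api.cloudflare.com/client/v4/accounts/${accountId}/logs/retrieve`;
  const params = new URLSearchParams({
    start: start.toISOString(), // RFC 3339 timestamps, as the API expects
    end: end.toISOString(),
    bucket,
    prefix,
  });
  return `${base}?${params}`;
}
```

You would then pass the resulting URL to fetch (or curl) along with the R2 credential headers shown above.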

Now that you have critical logging infrastructure on Cloudflare, you probably want to be able to monitor the health of these Logpush jobs as well as get relevant alerts when something needs your attention.

Looking forward

While we have a vision to build out log analysis and forensics capabilities on top of R2 – and a roadmap to get us there – we’d still love to hear your thoughts on any improvements we can make, particularly to our retrieval options.

Get set up on R2 to start pushing logs today! If your current plan doesn’t include Logpush, storing logs on R2 is another great reason to upgrade!

SVG support in Cloudflare Images

Post Syndicated from Paulo Costa original https://blog.cloudflare.com/svg-support-in-cloudflare-images/

Cloudflare Images was announced one year ago on this very blog to help you solve the problem of delivering images in the right size, right quality and fast. Very fast.

It doesn’t really matter if you run only a personal blog, or a portal with thousands of vendors and millions of end users. It doesn’t matter if you need one hundred images served one thousand times each at most, or if you deal with tens of millions of new, unoptimized images that you deliver billions of times per month.

We want to remove the complexity of having to store, process, resize, re-encode and serve images using multiple platforms and vendors.

At the time we wrote:

Images is a single product that stores, resizes, optimizes and serves images. We built Cloudflare Images, so customers of all sizes can build a scalable and affordable image pipeline in minutes.

We supported the most common formats, such as JPG, WebP, PNG and GIF.

We did not feel the need to support SVG files. SVG files are inherently scalable, so there is nothing to resize on the server side before serving them to your audience. One can even argue that SVG files are documents that can generate images through mathematical formulas of vectors and nodes, but are not images per se.

There was also the clear notion that SVG files were a potential risk due to known and well-documented vulnerabilities. We knew we could do something from the security angle, but still, why take on that work if it didn’t make sense in the first place to consider SVG a supported format?

Not supporting SVG files, though, did bring a set of challenges to an increasing number of our customers. Some stats already show that around 50% of websites serve SVG files, which matches the pulse we got from talking with many of you, customers and community.

If you relied on SVGs, you had to select a second storage location or a second image platform elsewhere. That commonly resulted in an egress fee when serving an uncached file from that source, and it goes against what we want for our product: one image pipeline to cover all your needs.

We heard loud and clear, and starting from today, you can store and serve SVG files, safely, with Cloudflare Images.

SVG, what is so special about them?

The Scalable Vector Graphics file type is great for serving all kinds of illustrations, charts, logos, and icons.

SVG files don’t represent images as pixels, but as geometric shapes (lines, arcs, polygons) that can be drawn with perfect sharpness at any resolution.

Let’s now use a complex image as an example, filled with more than four hundred paths and ten thousand nodes:

[Image: a complex Toroid illustration rendered from an SVG with over four hundred paths]

Contrary to bitmaps, where pixels arrange together to create the visual perception of an image to the human eye, that vector image can be resized with no quality loss. That happens because resizing the SVG to 300% of its original size redefines the size of the vectors to 300%, rather than expanding pixels to 300%.

This becomes evident when we’re dealing with small resolution images.

Here is the 100px-wide SVG of the Toroid shown above:

[Image: the Toroid SVG at 100px width]

and the corresponding 100px-wide PNG:

[Image: the corresponding PNG at 100px width]

Now here is the same SVG with the HTML width attribute set to 300px:

[Image: the same SVG displayed with width="300"]

and the same PNG you saw before, upscaled by 3x so the width is also 300px:

[Image: the 100px PNG upscaled 3x to 300px width, visibly pixelated]

The visual quality loss on the PNG is obvious when it gets scaled up.

Keep in mind: the Toroid shown above is stored in an SVG file of 142 KB. And that is already a very complex and heavy SVG file.

Now, if you do want to display a PNG with an original width of 1024px to present a high-quality image of the same Toroid, the size becomes an issue:

[Image: the Toroid as a 1024px-wide PNG]

The new 1024px PNG, however, weighs 344 KB. That’s about 2.4 times the weight of the single SVG that you could use at any size.

Think about the storage and bandwidth savings when all you need to do to display the exact same image from an SVG is use width="1024" in your HTML. It requires less than half the kilobytes used by the PNG.

Couple all of this with the flexibility of attributes like viewBox in your HTML code, and you can pan, zoom, crop and scale, all without ever needing anything other than the one original SVG file.
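That flexibility is easy to picture with a hand-written snippet (illustrative only, not taken from the post): one drawing, where the viewBox attribute selects which part of it you see, at whatever rendered size you choose.

```xml
<!-- The full drawing, rendered at 300x300 -->
<svg viewBox="0 0 100 100" width="300" height="300">
  <circle cx="50" cy="50" r="40" fill="teal"/>
</svg>

<!-- The top-left quadrant of the same drawing: a 2x zoom and crop,
     achieved purely by changing viewBox, with no second asset -->
<svg viewBox="0 0 50 50" width="300" height="300">
  <circle cx="50" cy="50" r="40" fill="teal"/>
</svg>
```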

Here’s an example of an SVG being resized on the client side, with no visual quality loss:

Let’s do a quick summary of what we covered so far: SVG files are wonderful for vector images like illustrations, charts and logos, and are infinitely scalable with no need to resize on the server side; the same image as a bitmap is either heavier than the SVG at high resolutions, or shows very noticeable loss of visual quality when scaled up from a lower resolution.

So, what are the downsides of using SVG files?

SVG files aren’t just images. They are XML-based documents that are as powerful as HTML pages. They can contain arbitrary JavaScript, fetch external content from other URLs or embed HTML elements. This gives SVG files much more power than expected from a simple image.

Throughout the years, numerous exploits have been known, identified and corrected.

Some old attacks were very rudimentary, yet effective. The famous Billion Laughs attack exploited how XML uses entities, declares them in the Document Type Definition, and handles recursion.

Entities can be something as simple as a declaration of a text string, or a nested reference to other previous entities.

If you define a first entity containing a simple string, then a second entity referencing the first one ten times, then a third referencing the second ten times, and so on up to a tenth of the same kind, you are asking the parser to generate a billion copies of that first string. This would most commonly exhaust resources on the server parsing the XML, forming a denial of service. While that particular weakness in XML parsing has been widely addressed through parser memory caps and lazy loading of entities, more complex attacks have become a regular occurrence in recent years.
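The classic payload is only a few lines of XML, reproduced here in abbreviated form for illustration (don't feed it to an unhardened parser):

```xml
<?xml version="1.0"?>
<!DOCTYPE lolz [
  <!ENTITY lol "lol">
  <!ENTITY lol1 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
  <!ENTITY lol2 "&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;">
  <!-- ... lol3 through lol8 each reference the previous entity ten times ... -->
  <!ENTITY lol9 "&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;">
]>
<lolz>&lol9;</lolz>
```

Nine levels of tenfold expansion turn one tiny document into roughly a billion copies of the string "lol".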

The common themes in these more recent attacks have been XSS (cross-site scripting) and foreign objects referenced in the XML content. In both cases, using SVG inside <object> tags in your HTML is an invitation for any ill-intentioned file to reach your end users. So, what exactly can we do about it to make you trust any SVG file you serve?

The SVG filter

We’ve developed a filter that simplifies SVG files to only features used for images, so that serving SVG images from any source is just as safe as serving a JPEG or PNG, while preserving SVG’s vector graphics capabilities.

  • We remove scripting. This prevents SVG files from being used for cross-site scripting attacks. Although browsers don’t allow scripts in <img>, they would run scripts when SVG files are opened directly as a top-level document.
  • We remove hyperlinks to other documents. This makes SVG files less attractive for SEO spam and phishing.
  • We remove references to cross-origin resources. This stops 3rd parties from tracking who is viewing the image.

What’s left is just an image.

SVG files can also contain embedded images in other formats, like JPEG and PNG, in the form of data URLs. We treat these embedded images just like other images that we process, and optimize them too. We don’t support SVG files recursively embedded in SVG, though, as that opens the door to recursive parsing and resource exhaustion in the parser. While the most common browsers already limit SVG recursion to one level, the potential to exploit that door led us to leave this capability out of our filter, at least for now.

We do set Content-Security-Policy (CSP) headers on all our HTTP responses to disable unwanted features. That alone acts as a first line of defense, but filtering works in more depth in case these headers are lost (e.g. if the image is saved as a file and served elsewhere).
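For context, a CSP that treats the SVG as a static image typically blocks scripts, external fetches and plugins. An illustrative header value (an assumption for demonstration, not necessarily the exact header Cloudflare sends) might look like:

```http
Content-Security-Policy: default-src 'none'; style-src 'unsafe-inline'; sandbox
```

Here default-src 'none' blocks scripts and external resource loads, inline styles stay allowed for rendering, and sandbox strips remaining active capabilities.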

Our tool is open-source. It’s written in Rust and can filter SVG files in a streaming fashion without buffering, so it’s fast enough for filtering on the fly.

The SVG format is pretty complex, with lots of features. If there is safe SVG functionality that we don’t support yet, you can report issues and contribute to development of the filter.

You can see how the tool actually works by looking at the tests folder in the open-source repository, where a sample unfiltered XML file and the filtered version are present.

Here’s what a diff of those files looks like:

[Image: diff between the unfiltered and filtered SVG test files]

Removed are the external references, foreignObjects and any other potential threats.

How you can use SVG files in Cloudflare Images

Starting now you can upload SVG files to Cloudflare Images and serve them at will. Uploading the images can be done like for any other supported format, via UI or API.

Variants, named or flexible, are intended to transform bitmap (raster) images into whatever size you want to serve them.

SVG files, as vector images, do not require resizing inside the Images pipeline.

This results in a banner with the following message when you’re previewing an SVG in the UI:

[Screenshot: dashboard banner explaining that variants don't resize SVG files]

And as a result, all variants listed will show the exact same image in the exact same dimensions.

Because an image is worth a thousand words, especially when trying to describe behaviors, here is what it will look like if you scroll through the variants preview:

With Cloudflare Images you get a default public variant when you start using the product, so you can immediately start serving your SVG files with it, just like this:

https://imagedelivery.net/<your_account_hash>/<your_SVG_ID>/public

And, as shown above, you can use any of your variant names to serve the image; it won’t affect the output at all.

If you’re an Image Resizing customer, you can also benefit from serving your files with our tool. Make sure you head to the Developer Documentation pages to see how.

What’s next?

You can subscribe to Cloudflare Images directly in the dashboard, and starting from today you can use the product to store and serve SVG files.

If you want to contribute to further development of the filtering tool and help expand its abilities, check out our svg-hush tool repo.

You can also connect directly with the team in our Cloudflare Developers Discord Server.

Going originless with Cloudflare Workers – Building a Todo app – Part 1: The API

Post Syndicated from Kabir Sikand original https://blog.cloudflare.com/workers-todo-part-1/

A few months ago we launched Custom Domains into an open beta. Custom Domains allow you to hook up your Workers to the Internet, without having to deal with DNS records or certificates – just enter a valid hostname and Cloudflare will do the rest! The beta’s over, and Custom Domains are now GA.

Custom Domains aren’t just about a seamless developer experience; they also allow you to build a globally distributed instantly scalable application on Cloudflare’s Developer Platform. That’s because Workers leveraging Custom Domains have no concept of an ‘Origin Server’. There’s no ‘home’ to phone to – and that also means your application can use the power of Cloudflare’s global network to run your application, well, everywhere. It’s truly serverless.

Let’s build “Todo”, but without the servers

Today we’ll start a series of posts outlining a simple todo list application. We’ll start with an API and hook it up to the Internet using Custom Domains.

With Custom Domains, you’re treating the whole network as the application server. Any time a request comes into a Cloudflare data center, Workers are triggered in that data center and connect to resources across the network as needed. Our developers don’t need to think about regions, or replication, or spinning up the right number of instances to handle unforeseen load. Instead, just deploy your Workers and Cloudflare will handle the rest.

For our todo application, we begin by building an API gateway Worker that performs routing, runs any authorization checks, and drops invalid requests. We then fan out to a separate Worker for each individual use case, so our teams can independently update or add features to each endpoint without redeploying the whole application. Finally, each Worker has a D1 binding so it can create, read, update, and delete records in the database. All of this happens on Cloudflare’s global network, so your API is truly available everywhere. The architecture will look something like this:

[Diagram: an API gateway Worker fanning out to per-endpoint Workers, each bound to D1]

Bootstrap the D1 Database

First off, we’re going to need a D1 database set up, with a schema for our todo application to run on. If you’re not familiar with D1, it’s Cloudflare’s serverless database offering – explained in more detail here. To get started, we use the wrangler d1 command to create a new database:

npx wrangler d1 create <todo | custom-database-name>

After executing this command, you will be asked to add a snippet of code to your wrangler.toml file that looks something like this:

[[ d1_databases ]]
binding = "db" # i.e. available in your Worker on env.db
database_name = "<todo | custom-database-name>"
database_id = "<UUID>"

Let’s save that for now, and we’ll put these into each of our private microservices in a few moments. Next, we’re going to create our database schema. It’s a simple todo application, so it’ll look something like this, with some seeded data:

db/schema.sql

DROP TABLE IF EXISTS todos;
CREATE TABLE todos (id INTEGER PRIMARY KEY, todo TEXT, todoStatus BOOLEAN NOT NULL CHECK (todoStatus IN (0, 1)));
INSERT INTO todos (todo, todoStatus) VALUES ("Fold my laundry", 0),("Get flowers for mum’s birthday", 0),("Find Nemo", 0),("Water the monstera", 1);

You can bootstrap your new D1 database by running:

npx wrangler d1 execute <todo | custom-database-name> --file=./db/schema.sql

Then validate your new data by running a query through Wrangler using the following command:

npx wrangler d1 execute <todo | custom-database-name> --command='SELECT * FROM todos';

Great! We’ve now got a database running entirely on Cloudflare’s global network.

Build the endpoint Workers

To talk to the database, we’ll spin up a series of private microservices, one for each endpoint in our application: we want to be able to create, read, update, delete, and list our todos. The full source code for each is available here. Below is code from the Worker that lists all our todos from D1.

list/src/list.js

export default {
  async fetch(request, env) {
    const { results } = await env.db.prepare(
      "SELECT * FROM todos"
    ).all();
    return Response.json(results);
  },
};

The ‘todo-list’ Worker needs to be able to access D1 via the environment variable db. To do this, we define the D1 binding in our wrangler.toml file. We also set workers_dev to false, preventing a preview from being generated via workers.dev (we want this to be a private microservice).

list/wrangler.toml

name = "todo-list"
main = "src/list.js"
compatibility_date = "2022-09-07"
workers_dev = false
usage_model = "unbound"

[[ d1_databases ]]
binding = "db" # i.e. available in your Worker on env.db
database_name = "<todo | custom-database-name>"
database_id = "<UUID>"

Finally, use wrangler publish to deploy this microservice.

todo/list on ∞main [!] 
› wrangler publish
 ⛅️ wrangler 0.0.0-893830aa
-----------------------------------------------------------------------
Retrieving cached values for account from ../../../node_modules/.cache/wrangler
Your worker has access to the following bindings:
- D1 Databases:
  - db: todo (UUID)
Total Upload: 4.71 KiB / gzip: 1.60 KiB
Uploaded todo-list (0.96 sec)
No publish targets for todo-list (0.00 sec)

Notice that wrangler mentions there are no ‘publish targets’ for todo-list. That’s because we haven’t hooked todo-list up to any HTTP endpoints. That’s fine! We’re going to use Service Bindings to route requests through a gateway worker, as described in the architecture diagram above.

Next, reuse these steps to create similar microservices for each of our create, read, update, and delete endpoints. The source code is available to follow along.

Tying it all together with an API Gateway

Each of our Workers are able to talk to the D1 database, but how can our application talk to our API? We’ll build out a simple API gateway to route incoming requests to the appropriate microservice. For the purposes of our application, we’re using a combination of URL pathname and request method to detect which endpoint is appropriate.

gateway/src/gateway.js

export default {
  async fetch(request, env) {
    try {
      const url = new URL(request.url)
      const idPattern = new URLPattern({ pathname: '/:id' })
      if (idPattern.test(request.url)) {
        switch (request.method) {
          case 'GET':
            return await env.get.fetch(request.clone())
          case 'PATCH':
            return await env.update.fetch(request.clone())
          case 'DELETE':
            return await env.delete.fetch(request.clone())
          default:
            return new Response("Unsupported method for endpoint /:id", {status: 405})
        }
      } else if (url.pathname == '/') {
        switch (request.method) {
          case 'GET':
            return await env.list.fetch(request.clone())
          case 'POST':
            return await env.create.fetch(request.clone())
          default:
            return new Response("Unsupported method for endpoint /", {status: 405})
        }
      }
      return new Response("Not found. Supported endpoints are /:id and /", {status: 404})
    } catch (e) {
      return new Response(e.message, {status: 500})
    }
  },
};
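The gateway is also the natural place for the authorization checks mentioned at the start of this section. A minimal illustrative check (the bearer scheme and token handling here are hypothetical; a real deployment would read the expected token from a secret binding rather than a parameter):

```javascript
// Hypothetical pre-dispatch authorization check for the gateway Worker.
// Returns true only for requests carrying the expected bearer token.
function isAuthorized(request, expectedToken) {
  const header = request.headers.get("Authorization") || "";
  const [scheme, token] = header.split(" ");
  return scheme === "Bearer" && token === expectedToken;
}
```

The gateway's fetch handler could call this before the routing switch and return a 401 response when the check fails.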

With our API gateway all set, we just need to expose our application to the Internet using a Custom Domain, and hook up our Service Bindings, so the gateway Worker can access each appropriate microservice. We’ll set this up in a wrangler.toml.

gateway/wrangler.toml

name = "todo-gateway"
main = "src/gateway.js"
compatibility_date = "2022-09-07"
workers_dev = false
usage_model = "unbound"
 
routes =  [
   {pattern="todos.radiobox.tv", custom_domain=true, zone_name="radiobox.tv"}
]
 
services = [
   {binding = "get",service = "todo-get"},
   {binding = "delete",service = "todo-delete"},
   {binding = "create",service = "todo-create"},
   {binding = "update",service = "todo-update"},
   {binding = "list",service = "todo-list"}
]

Next, use wrangler publish to deploy your application to the Cloudflare network. Seconds later, you’ll have a simple, functioning todo API built entirely on Cloudflare’s Developer Platform!

› wrangler publish
 ⛅️ wrangler 0.0.0-893830aa
-----------------------------------------------------------------------
Retrieving cached values for account from ../../../node_modules/.cache/wrangler
Your worker has access to the following bindings:
- Services:
  - get: todo-get
  - delete: todo-delete
  - create: todo-create
  - update: todo-update
  - list: todo-list
Total Upload: 1.21 KiB / gzip: 0.42 KiB
Uploaded todo-gateway (0.62 sec)
Published todo-gateway (0.51 sec)
  todos.radiobox.tv (custom domain - zone name: radiobox.tv)

Natively Global

Since it’s built natively on Cloudflare, you can also include Cloudflare’s security suite in front of the application. If we want to prevent SQL Injection attacks for this endpoint, we can enable the appropriate Managed WAF rules on our todos API endpoint. Alternatively, if we wanted to prevent global access to our API (only allowing privileged clients to access the application), we can simply put Cloudflare Access in front, with custom Access Rules.

[Screenshot: Cloudflare security products configured in front of the todos API]

With Custom Domains on Workers, you’re able to easily create applications that are native to Cloudflare’s global network, instantly. Best of all, your developers don’t need to worry about maintaining DNS records or certificate renewal – Cloudflare handles it all on their behalf. We’d like to give a huge shout out to the 5,000+ developers who used Custom Domains during the open beta period, and those that gave feedback along the way to make this possible. Can’t wait to see what you build next! As always, if you have any questions or would like to get involved, please join us on Discord.

Tune in next time to see how we can build a frontend for our application. In the meantime, you can play around with the todos API we built today at todos.radiobox.tv.

The easiest way to build a modern SaaS application

Post Syndicated from Tanushree Sharma original https://blog.cloudflare.com/workers-for-platforms-ga/

The Software as a Service (SaaS) model has changed the way we work – 80% of businesses use at least one SaaS application. Instead of investing in building proprietary software or installing and maintaining on-prem licensed software, SaaS vendors provide businesses with out-of-the-box solutions.

SaaS has many benefits over the traditional software model: cost savings, continuous updates and scalability, to name a few. However, any managed solution comes with trade-offs. As a business, one of the biggest challenges in adopting SaaS tooling is loss of customization. Not every business uses software in the same way and as you grow as a SaaS company it’s not long until you get customers saying “if only I could do X”.

Enter Workers for Platforms – Cloudflare’s serverless functions offering for SaaS businesses. With Workers for Platforms, your customers can build custom logic to meet their requirements right into your application.

We’re excited to announce that Workers for Platforms is now in GA for all Enterprise customers! If you’re an existing customer, reach out to your Customer Success Manager (CSM) to get access. For new customers, fill out our contact form to get started.

The conundrum of customization

As a SaaS business invested in capturing the widest market, you want to build a universal solution that can be used by customers of different sizes, in various industries and regions. However, every one of your customers has a unique set of tools and vendors and best practices. A generalized platform doesn’t always meet their needs.

For SaaS companies, once you get in the business of creating customizations yourself, it can be hard to keep up with seemingly never ending requests. You want your engineering teams to focus on building out your core business instead of building and maintaining custom solutions for each of your customer’s use cases.

With Workers for Platforms, you can give your customers the ability to write code that customizes your software. This gives your customers the flexibility to meet their exact use case while also freeing up internal engineering time  – it’s a win-win!

How is this different from Workers?

Workers is Cloudflare’s serverless execution environment that runs your code on Cloudflare’s global network. Workers is lightning fast and scalable, running in data centers in 275+ cities globally and serving requests from as close as possible to the end user. Workers for Platforms extends the power of Workers to our customers’ developers!

So, what’s new?

Dispatch Worker: As a platform customer, you want to have full control over how end developer code fits in with your APIs. A Dispatch Worker is written by our platform customers to run their own logic before dispatching (aka routing) to Workers written by end developers. In addition to routing, it can be used to run authentication, create boilerplate functions and sanitize responses.

User Workers: User Workers are written by end developers, that is, our customers’ developers. End developers can deploy User Workers to script automated actions, create integrations or modify response payload to return custom content. Unlike self-managed Function-as-a-Service (FaaS) options, with Workers for Platforms, end developers don’t need to worry about setting up and maintaining their code on any 3rd party platform. All they need to do is upload their code and you – or rather Cloudflare – takes care of the rest.

Unlimited Scripts: Yes, you read that correctly! With hundreds of end developers, the existing 100-script limit for Workers won’t cut it for Workers for Platforms customers. Some of our Workers for Platforms customers even deploy a new script each time their end developers change their code, in order to maintain version control and easily revert to a previous state if a bug is deployed.

Dynamic Dispatch Namespaces: If you’ve used Workers before, you may be familiar with a feature we released earlier this year, Service Bindings. Service Bindings are a way for two Workers to communicate with each other. They allow developers to break up their applications into modules that can be chained together. Service Bindings explicitly link two Workers together, and they’re meant for use cases where you know exactly which Workers need to communicate with each other.

Service Bindings don’t work in the Workers for Platforms model because User Workers are uploaded ad hoc. Dynamic Dispatch Namespaces are our solution to this! A Dispatch Namespace is composed of a collection of User Workers. With Dispatch Namespaces, a Dispatch Worker can call any User Worker in a namespace (similar to how Service Bindings work), but without needing to explicitly pre-define the relationship.

Read more about how to use these features below!

How to use Workers for Platforms

The easiest way to build a modern SaaS application

Dispatch Workers

Dispatch Workers are the entry point for requests to Workers in a Dispatch Namespace. A Dispatch Worker can run any functions ahead of User Workers, can make a request to any User Worker in the Dispatch Namespace, and ultimately handles the routing to User Workers.

Dispatch Workers are created the same way as a regular Worker, except they need a Dispatch Namespace binding in the project’s wrangler.toml configuration file.

[[dispatch_namespaces]]
binding = "dispatcher"
namespace = "api-prod"

In the example below, the Dispatch Worker reads the subdomain from the hostname and calls the appropriate User Worker. Alternatively, you can use KV, D1 or your data store of choice to map identifying parameters from an incoming request to a User Worker.

export default {
  async fetch(request, env) {
    try {
      // parse the URL and read the subdomain from the hostname
      let worker_name = new URL(request.url).host.split('.')[0]
      let user_worker = env.dispatcher.get(worker_name)
      return user_worker.fetch(request)
    } catch (e) {
      if (e.message == 'Error: Worker not found.') {
        // we tried to get a worker that doesn't exist in our dispatch namespace
        return new Response('', {status: 404})
      }
      // this could be any other exception from `fetch()` *or* an exception
      // thrown by the called worker (e.g. if the dispatched worker has
      // `throw MyException()`, you could check for that here).
      return new Response(e.message, {status: 500})
    }
  }
}
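The table-driven alternative mentioned above can be sketched as a pure lookup function. This is only a sketch: the Map stands in for a KV or D1 lookup, and all hostnames and worker names are hypothetical.

```javascript
// Sketch: resolve a User Worker name for an incoming request URL.
// The Map stands in for a KV or D1 lookup; the entries are illustrative.
const routes = new Map([
  ["acme.example.com", "acme-prod"],
  ["globex.example.com", "globex-prod"],
]);

function resolveWorkerName(requestUrl) {
  const host = new URL(requestUrl).host;
  // Prefer an explicit mapping; otherwise fall back to the
  // subdomain convention used by the Dispatch Worker above.
  return routes.get(host) ?? host.split(".")[0];
}
```

A Dispatch Worker using this helper would simply call `env.dispatcher.get(resolveWorkerName(request.url))`.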

Uploading User Workers

User Workers must be uploaded to a Dispatch Namespace through the Cloudflare API (wrangler support coming soon!). The code snippet below uses a simple HTML form to take in a script and customer ID, and then uploads the script to the Dispatch Namespace.

export default {
  async fetch(request: Request) {
    try {
      // on form submit
      if (request.method === "POST") {
        const upload_obj = await request.json()
        await upload(upload_obj.customerID, upload_obj.script)
      }
      // render the form (`html` holds the form markup, omitted here)
      return new Response(html, {
        headers: {
          "Content-Type": "text/html"
        }
      })
    } catch (e) {
      // form error
      return new Response(e.message, {status: 500})
    }
  }
}

async function upload(customerID: string, script: string) {
  const scriptName = customerID;
  const scriptContent = script;
  const accountId = "<ACCOUNT_ID>";
  const dispatchNamespace = "api-prod";
  const url = `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/dispatch/namespaces/${dispatchNamespace}/scripts/${scriptName}`;
  // construct and send request
  const response = await fetch(url, {
    method: "PUT",
    body: scriptContent,
    headers: {
      "Content-Type": "application/javascript",
      "X-Auth-Email": "<EMAIL>",
      "X-Auth-Key": "<API_KEY>"
    }
  });

  if (response.status != 200) {
    const result = await response.json();
    throw new Error(`Upload error: ${JSON.stringify(result)}`);
  }
}

It’s that simple. With Dispatch Namespaces and Dispatch Workers, we’re giving you the building blocks to customize your SaaS applications. Along with the Platforms APIs, we’re also releasing a Workers for Platforms UI on the Cloudflare dashboard where you can view your Dispatch Namespaces, search scripts and view analytics for User Workers.

To view an end to end example, check out our Workers for Platforms example application.

Get started today!

We’re releasing Workers for Platforms to all Cloudflare Enterprise customers. If you’re interested, reach out to your Customer Success Manager (CSM) to get access. To get started, take a look at our Workers for Platforms starter project and developer documentation.

We also have plans to release this down to the Workers Paid plan. Stay tuned on the Cloudflare Discord (channel name: workers-for-platforms-beta) for updates.

What’s next?

We’ve heard lots of great feature requests from our early Workers for Platforms customers. Here’s a preview of what’s coming next on the roadmap:

  • Fine-grained controls over User Workers: custom script limits, allowlist/blocklist for fetch requests
  • GraphQL and Logs: Metrics for User Workers by tag
  • Plug and play Platform Development Kit
  • Tighter integration with Cloudflare for SaaS custom domains

If you have specific feature requests in mind, please reach out to your CSM or get in touch through the Discord!

Stream Live is now Generally Available

Post Syndicated from Brendan Irvine-Broque original https://blog.cloudflare.com/stream-live-ga/

Today, we’re excited to announce that Stream Live is out of beta, available to everyone, and ready for production traffic at scale. Stream Live is a feature of Cloudflare Stream that allows developers to build live video features in websites and native apps.

Since its beta launch, developers have used Stream to broadcast live concerts from some of the world’s most popular artists directly to fans, build brand-new video creator platforms, operate a global 24/7 live OTT service, and more. While in beta, Stream has ingested millions of minutes of live video and delivered it to viewers all over the world.

Bring your big live events, ambitious new video subscription service, or the next mobile video app with millions of users — we’re ready for it.

Streaming live video at scale is hard

Live video uses a massive amount of bandwidth. For example, a one-hour live stream at 1080p at 8 Mbps comes to 3.6 GB per viewer. At typical cloud provider egress prices, those gigabytes multiply with audience size and can quickly break the bank.
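The arithmetic behind that 3.6GB figure is worth making explicit, since delivered bytes scale linearly with both bitrate and watch time:

```javascript
// Bytes delivered to a single viewer: bitrate (megabits/s) x duration,
// converted from megabits to gigabytes (decimal units).
function deliveredGB(bitrateMbps, seconds) {
  return (bitrateMbps * seconds) / 8 / 1000; // Mb -> MB -> GB
}

// One hour of 1080p at 8 Mbps, per viewer:
const gb = deliveredGB(8, 60 * 60); // 3.6 GB
```

Multiply by concurrent viewers and hours streamed, and the egress bill grows just as linearly.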

Live video must be encoded on-the-fly, in real-time. People expect to be able to watch live video on their phone, while connected to mobile networks with less bandwidth, higher latency and frequently interrupted connections. To support this, live video must be re-encoded in real-time into multiple resolutions, allowing someone’s phone to drop to a lower resolution and continue playback. This can be complex (Which bitrates? Which codecs? How many?) and costly: running a fleet of virtual machines doesn’t come cheap.

Ingest location matters — Live streaming protocols like RTMPS send video over TCP. If a single packet is dropped or lost, the entire connection is brought to a halt while the packet is found and re-transmitted. This is known as “head of line blocking”. The further away the broadcaster is from the ingest server, the more network hops, and the more likely packets will be dropped or lost, ultimately resulting in latency and buffering for viewers.

Delivery location matters — Live video must be cached and served from points of presence as close to viewers as possible. The longer the network round trips, the more likely videos will buffer or drop to a lower quality.

Broadcasting protocols are in flux — The most widely used protocol for streaming live video, RTMPS, has been abandoned since 2012, and dates back to the era of Flash video in the early 2000s. A new emerging standard, SRT, is not yet supported everywhere. And WebRTC has only recently evolved into an option for high definition one-to-many broadcasting at scale.

The old way to solve this has been to stitch together separate cloud services from different vendors. One vendor provides excellent content delivery, but no encoding. Another provides APIs or hardware to encode, but leaves you to fend for yourself and build your own storage layer. As a developer, you have to learn, manage and write a layer of glue code around the esoteric details of video streaming protocols, codecs, encoding settings and delivery pipelines.

We built Stream Live to make streaming live video easy, like adding an <img> tag to a website. Live video is now a fundamental building block of Internet content, and we think any developer should have the tools to add it to their website or native app.

With Stream, you or your users stream live video directly to Cloudflare, and Cloudflare delivers video directly to your viewers. You never have to manage internal encoding, storage, or delivery systems — it’s just live video in and live video out.

Our network, our hardware = a solution only Cloudflare can provide

We’re not the only ones building APIs for live video — but we are the only ones with our own global network and hardware that we control and optimize for video. That lets us do things that others can’t, like sub-second glass-to-glass latency using RTMPS and SRT playback at scale.

Newer video codecs require specialized hardware encoders, and while others are beholden to the hardware limitations of public cloud providers, we’re already hard at work installing the latest encoding hardware in our own racks, so that you can deliver high resolution video with even less bandwidth. Our goal is to make what is otherwise only available to video giants available directly to you — stay tuned for some exciting updates on this next week.

Most providers limit how many destinations you can restream or simulcast to. Because we operate our own network, we’ve never even worried about this, and let you restream to as many destinations as you need.

Operating our own network lets us price Stream based on minutes of video delivered — unlike others, we don’t pay someone else for bandwidth and then pass along their costs to you at a markup. The status quo of charging for bandwidth or per-GB storage penalizes you for delivering or storing high resolution content. If you ask why a few times, most of the time you’ll discover that others are pushing their own cost structures onto you.

Encoding video is compute-intensive, delivering video is bandwidth intensive, and location matters when ingesting live video. When you use Stream, you don’t need to worry about optimizing performance, finding a CDN, and/or tweaking configuration endlessly. Stream takes care of this for you.

Free your live video from the business models of big platforms

Nearly every business uses live video, whether to engage with customers, broadcast events or monetize live content. But few have the specialized engineering resources to deliver live video at scale on their own, and wire together multiple low level cloud services. To date, many of the largest content creators have been forced to depend on a shortlist of social media apps and streaming services to deliver live content at scale.

Unlike incumbent platforms, which force you to put your live video in their apps and services and to fit their business models, Stream gives you full control of your live video: on your website or app, on any device, at scale, without pushing your users to someone else’s service.

Free encoding. Free ingestion. Free analytics. Simple per-minute pricing.

                   Others                    Stream
Encoding           $ per minute              Free
Ingestion          $ per GB                  Free
Analytics          Separate product          Free
Live recordings    Minutes or hours later    Instant
Storage            $ per GB                  $ per minute stored
Delivery           $ per GB                  $ per minute delivered

Other platforms charge for ingestion and encoding. Many even force you to consider where you’re streaming to and from, the bitrate and frames per second of your video, and even which of their datacenters you’re using.

With Stream, encoding and ingestion are free. Other platforms charge for delivery based on bandwidth, penalizing you for delivering high quality video to your viewers. If you stream at a high resolution, you pay more.

With Stream, you don’t pay a penalty for delivering high resolution video. Stream’s pricing is simple — minutes of video delivered and stored. Because you pay per minute, not per gigabyte, you can stream at the ideal resolution for your viewers without worrying about bandwidth costs.

Other platforms charge separately for analytics, requiring you to buy another product to get metrics about your live streams.

With Stream, analytics are free. Stream provides an API and Dashboard for both server-side and client-side analytics, that can be queried on a per-video, per-creator, per-country basis, and more. You can use analytics to identify which creators in your app have the most viewed live streams, inform how much to bill your customers for their own usage, identify where content is going viral, and more.

Other platforms tack on live recordings or DVR mode as a separate add-on feature, and recordings only become available minutes or even hours after a live stream ends.

With Stream, live recordings are a built-in feature, made available instantly after a live stream ends. Once a live stream is available, it works just like any other video uploaded to Stream, letting you seamlessly use the same APIs for managing both pre-recorded and live content.

Build live video into your website or app in minutes

Cloudflare Stream enables you or your users to go live using the same protocols and tools that broadcasters big and small use to go live to YouTube or Twitch, but gives you full control over access and presentation of live streams.

Step 1: Create a live input

Create a new live input from the Stream Dashboard or use the Stream API:

Request

curl -X POST \
-H "Authorization: Bearer <YOUR_API_TOKEN>" \
-d '{"recording": {"mode": "automatic"}}' \
https://api.cloudflare.com/client/v4/accounts/<YOUR_CLOUDFLARE_ACCOUNT_ID>/stream/live_inputs

Response

{
  "result": {
    "uid": "<UID_OF_YOUR_LIVE_INPUT>",
    "rtmps": {
      "url": "rtmps://live.cloudflare.com:443/live/",
      "streamKey": "<PRIVATE_RTMPS_STREAM_KEY>"
    },
    ...
  }
}
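If you’d rather make the same call from JavaScript, the request can be assembled like this. This is a sketch: the `buildLiveInputRequest` helper is our own, and the account ID and API token are placeholders you must supply.

```javascript
// Build the same live-input creation request as the curl example above.
function buildLiveInputRequest(accountId, apiToken) {
  return {
    url: `https://api.cloudflare.com/client/v4/accounts/${accountId}/stream/live_inputs`,
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      // automatic recording, as in the curl example
      body: JSON.stringify({ recording: { mode: "automatic" } }),
    },
  };
}

// Usage:
// const { url, init } = buildLiveInputRequest("<ACCOUNT_ID>", "<API_TOKEN>");
// const liveInput = await (await fetch(url, init)).json();
```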

Step 2: Use the RTMPS key with any live broadcasting software, or in your own native app

Copy the RTMPS URL and key, and use them with your live streaming application. We recommend using Open Broadcaster Software (OBS) to get started, but any RTMPS or SRT compatible software should be able to interoperate with Stream Live.

Enter the Stream RTMPS URL and the Stream Key from Step 1:

Step 3: Preview your live stream in the Cloudflare Dashboard

In the Stream Dashboard, within seconds of going live, you will see a preview of what your viewers will see, along with the real-time connection status of your live stream.

Step 4: Add live video playback to your website or app

Stream your video using our Stream Player embed code, or use any video player that supports HLS or DASH — live streams can be played in both websites and native iOS and Android apps.

For example, on iOS, all you need to do is provide AVPlayer with the URL to the HLS manifest for your live input, which you can find via the API or in the Stream Dashboard.

import SwiftUI
import AVKit

struct MyView: View {
    // Change the url to the Cloudflare Stream HLS manifest URL
    private let player = AVPlayer(url: URL(string: "https://customer-9cbb9x7nxdw5hb57.cloudflarestream.com/8f92fe7d2c1c0983767649e065e691fc/manifest/video.m3u8")!)

    var body: some View {
        VideoPlayer(player: player)
            .onAppear() {
                player.play()
            }
    }
}

struct MyView_Previews: PreviewProvider {
    static var previews: some View {
        MyView()
    }
}

To run a complete example app in Xcode, follow this guide in the Stream Developer docs.

Companies are building whole new video platforms on top of Stream

Developers want control, but most don’t have time to become video experts. And even video experts building innovative new platforms don’t want to manage live streaming infrastructure.

Switcher Studio’s whole business is live video — their iOS app allows creators and businesses to produce their own branded, multi-camera live streams. Switcher uses Stream as an essential part of their live streaming infrastructure. In their own words:

“Since 2014, Switcher has helped creators connect to audiences with livestreams. Now, our users create over 100,000 streams per month. As we grew, we needed a scalable content delivery solution. Cloudflare offers secure, fast delivery, and even allowed us to offer new features, like multistreaming. Trusting Cloudflare Stream lets our team focus on the live production tools that make Switcher unique.”

While Stream Live has been in beta, we’ve worked with many customers like Switcher, where live video isn’t just one product feature, it is the core of their product. Even as experts in live video, they choose to use Stream, so that they can focus on the unique value they create for their customers, leaving the infrastructure of ingesting, encoding, recording and delivering live video to Cloudflare.

Start building live video into your website or app today

It takes just a few minutes to sign up and start your first live stream, using the Cloudflare Dashboard, with no code required to get started, but APIs for when you’re ready to start building your own live video features. Give it a try — we’re ready for you, no matter the size of your audience.

R2 is now Generally Available

Post Syndicated from Aly Cabral original https://blog.cloudflare.com/r2-ga/

R2 gives developers object storage, without the egress fees. Before R2, cloud providers taught us to expect a data transfer tax every time we actually used the data we stored with them. Who stores data with the goal of never reading it? No one. Yet, every time you read data, the egress tax is applied. R2 gives developers the ability to access data freely, breaking the ecosystem lock-in that has long tied the hands of application builders.

In May 2022, we launched R2 into open beta. In just four short months we’ve been overwhelmed with over 12k developers (and rapidly growing) getting started with R2. Those developers came to us with a wide range of use cases, from podcast applications to video platforms to ecommerce websites, and customers like Vecteezy, which was spending six figures on egress fees. We’ve learned quickly, gotten great feedback, and today we’re excited to announce R2 is now generally available.

We wouldn’t ask you to bet on tech we weren’t willing to bet on ourselves. While in open beta, we spent time moving our own products to R2. One such example, Cloudflare Images, proudly serving thousands of customers in production, is now powered by R2.

What can you expect from R2?

S3 Compatibility

R2 gives developers a familiar interface for object storage, the S3 API. With S3 compatibility, you can easily migrate your applications and start taking advantage of what R2 has to offer right out of the gate.

Let’s take a look at some basic data operations in JavaScript. To try this out on your own, you’ll need to generate an Access Key.

// First we import our bindings as usual
import {
  S3Client,
  ListBucketsCommand,
} from "@aws-sdk/client-s3";

// Then we create a new client. Note that while R2 requires a region for S3 compatibility, only “auto” is supported
const S3 = new S3Client({
  region: "auto",
  endpoint: `https://${ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: ACCESS_KEY_ID, //  fill in your own
    secretAccessKey: SECRET_ACCESS_KEY, // fill in your own
  },
});

// And now we can use our client to list associated buckets just like we would with any other S3 compatible object storage
console.log(
  await S3.send(
    new ListBucketsCommand({})
  )
);

Regardless of the language, the S3 API offers familiarity. We have examples in Go, Java, PHP, and Ruby.

Region: Automatic

We don’t want to live in a world where developers are spending time looking into a crystal ball and predicting where application traffic might come from. Choosing a region as the first step in application development forces optimization decisions long before the first users show up.

While S3 compatibility requires you to specify a region, the only region we support is ‘auto’. Today, R2 automatically selects a bucket location in the closest available region to the create bucket request. If I create a bucket from my home in Austin, that bucket will live in the closest available R2 region to Austin.

In the future, R2 will use data access patterns to automatically optimize where data is stored for the best user experience.

Cloudflare Workers Integration

The Workers platform offers developers powerful compute across Cloudflare’s network. When you deploy on Workers, your code is deployed to Cloudflare’s 280+ locations across the globe, automatically. When paired with R2, Workers allows developers to add custom logic around their data without any performance overhead. Workers is built on isolates and not containers, and as a result you don’t have to deal with lengthy cold starts.

Let’s try creating a simple REST API for an R2 bucket! First, create your bucket and then add an R2 binding to your worker.
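The binding goes in the project’s wrangler.toml, following the same pattern as the Dispatch Namespace binding earlier in this archive. The bucket name below is illustrative; `MY_BUCKET` is the binding name the code that follows refers to.

```toml
[[r2_buckets]]
binding = "MY_BUCKET"      # exposed as env.MY_BUCKET in the Worker
bucket_name = "my-bucket"  # the bucket you created
```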

export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    const key = url.pathname.slice(1); // we’ll derive a key from the url path

    switch (request.method) {
      // For writes, we capture the request body and write that out to our bucket under the associated key
      case 'PUT':
        await env.MY_BUCKET.put(key, request.body);
        return new Response(`Put ${key} successfully!`);

      // For reads, we’ll use our key to perform a lookup
      case 'GET':
        const object = await env.MY_BUCKET.get(key);

        // if we don’t find the given key we’ll return a 404 error
        if (object === null) {
          return new Response('Object Not Found', { status: 404 });
        }

        const headers = new Headers();
        object.writeHttpMetadata(headers);
        headers.set('etag', object.httpEtag);

        return new Response(object.body, {
          headers,
        });

      // reject any other method
      default:
        return new Response('Method Not Allowed', {
          status: 405,
          headers: { Allow: 'PUT, GET' },
        });
    }
  },
};

Through this Workers API, we can add all sorts of useful logic to the hot path of an R2 request.

Presigned URLs

Sometimes you’ll want to give your users permissions to specific objects in R2 without requiring them to jump through hoops. Through pre-signed URLs you can delegate your permissions to your users for any unique combination of object and action. Mint a pre-signed URL to let a user upload a file or share a file without giving access to the entire bucket.

import {
  S3Client,
  PutObjectCommand
} from "@aws-sdk/client-s3";

import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const S3 = new S3Client({
  region: "auto",
  endpoint: `https://${ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: ACCESS_KEY_ID,
    secretAccessKey: SECRET_ACCESS_KEY,
  },
});

// With getSignedUrl we can produce a custom url with a one hour expiration which will allow our end user to upload their dog pic
console.log(
  await getSignedUrl(S3, new PutObjectCommand({Bucket: 'my-bucket-name', Key: 'dog.png'}), { expiresIn: 3600 })
)

Presigned URLs make it easy for developers to build applications that let end users safely access R2 directly.

Public buckets

Enabling public access for an R2 bucket allows you to expose that bucket to unauthenticated requests. While doing so on its own is of limited use, when those buckets are linked to a domain under your account on Cloudflare, you can enable other Cloudflare features, such as Access, Cache and bot management, seamlessly on top of your data in R2.

Bottom line: public buckets help to bridge the gap between domain oriented Cloudflare features and the buckets you have in R2.

Transparent Pricing

R2 will never charge for egress. The pricing model depends on three factors alone: storage volume, Class A operations (writes, lists) and Class B operations (reads).

  • Storage is priced at $0.015 / GB, per month.
  • Class A operations cost $4.50 / million.
  • Class B operations cost $0.36 / million.

But before you’re ready to start paying for R2, we allow you to get up and running at absolutely no cost. The included usage is as follows:

  • 10 GB-months of stored data
  • 1,000,000 Class A operations, per month
  • 10,000,000 Class B operations, per month
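Putting the prices and the included usage together, a monthly bill can be estimated with a small helper. This is only a sketch using the numbers listed above; the function name and input shape are our own.

```javascript
// Estimate a monthly R2 bill from the published prices, after
// subtracting the included free usage listed above.
function monthlyCostUSD({ gbMonths, classAOps, classBOps }) {
  const billable = (used, free) => Math.max(0, used - free);
  const storage = billable(gbMonths, 10) * 0.015;            // $0.015 / GB-month
  const classA = (billable(classAOps, 1_000_000) / 1_000_000) * 4.5;    // $4.50 / M
  const classB = (billable(classBOps, 10_000_000) / 1_000_000) * 0.36;  // $0.36 / M
  return storage + classA + classB;
}

// Staying inside the included usage costs nothing:
// monthlyCostUSD({ gbMonths: 10, classAOps: 1e6, classBOps: 1e7 }) === 0
```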

What’s next?

Making R2 generally available is just the beginning of our object storage journey. We’re excited to share what we plan to build next.

Object Lifecycles

In the future R2 will allow developers to set policies on objects. For example, setting a policy that deletes an object 60 days after it was last accessed. Object Lifecycles pushes object management down to the object store.

Jurisdictional Restrictions

While we don’t have plans to support regions explicitly, we know that data locality is important for a good deal of compliance use cases. Jurisdictional restrictions will allow developers to set a jurisdiction, like the ‘EU’, that guarantees data will not leave that jurisdiction.

Live Migration without Downtime

For large datasets, migrations are live and ongoing, as it takes time to move data over. Cache Reserve is an easy way to quickly migrate your assets into a managed R2 instance to reduce your egress costs at the touch of a button. In the future, we’ll be extending this mechanism so that you can migrate any of your existing S3 object storage buckets to R2.

We invite everyone to sign up and get started with R2 today. Join the growing community of developers building on Cloudflare. If you have any feedback or questions, find us on our Discord server here! We can’t wait to see what you build.

Cloudflare Area 1 – how the best Email Security keeps getting better

Post Syndicated from Joao Sousa Botto original https://blog.cloudflare.com/email-security/

On February 23, 2022, after being a customer for two years and seeing phishing attacks virtually disappear from our employees’ mailboxes, Cloudflare announced the acquisition of Area 1 Security.

Thanks to its unique technology (more on that below) Cloudflare Area 1 can proactively identify and protect against phishing campaigns before they happen, and potentially prevent the 90%+ of all cyberattacks that Deloitte research identified as starting with an email. All with little to no impact on employee productivity.

But preventing 90% of the attacks is not enough, and that’s why Cloudflare Area 1 email security is part of our Zero Trust platform. Here’s what’s new.

Email Security on your Cloudflare Dashboard

Starting today you will find a dedicated Email Security section on your Cloudflare dashboard. That’s the easiest way for any Cloudflare customer to get familiar with and start using Cloudflare Area 1 Email Security.

From there you can easily request a trial, which gives you access to the full product for 30 days.

Our team will guide you through the setup, which will take just a few minutes. That’s the beauty of not having to install and tune a Secure Email Gateway (SEG). You can simply configure Area 1 inline or connect through the API, journaling, or other connectors – none of these options disrupt mail flow or the end user experience. And you don’t need any new hardware, appliances or agents.

Once the trial starts, you’ll be able to review detection metrics and forensics in real time, and will receive real-time updates from the Area 1 team on incidents that require immediate attention.

At the end of the trial you will also have a Phishing Risk Assessment where our team will walk you through the impact of the mitigated attacks and answer your questions.

Another option you’ll see on the Email Security section of the Cloudflare Dashboard is to explore the Area 1 demo.

At the click of a button you’ll enter the Area 1 portal of a fictitious company where you can see the product in action. You can interact with the full product, including our advanced message classifiers, the BEC protections, real time view of spoofed domains, and our unique message search and trace capabilities.

Product Expansions

Being cloud-native has allowed us to develop some unique capabilities. Most notably, we scan the Internet for attacker infrastructure, sources and delivery mechanisms to stop phishing attacks days before they hit an inbox. This scanning is driven by state-of-the-art machine-learning models that use the threat intelligence data Area 1 has accumulated since it was founded nine years ago, and those models now also incorporate data from the 124 billion cyber threats that Cloudflare blocks each day and its 1.7 trillion daily DNS queries.

Since the product is cloud-based and no local appliances are involved, these unique datasets and models benefit every customer immediately and apply to the full range of email attack types (URLs, payloads, BEC), vectors (email, web, network), and attack channels (external, internal, trusted partners). Additionally, the threat datasets, observables and Indicators of Compromise (IOC) are now additional signals to Cloudflare Gateway (part of Zero Trust), extending protection beyond email and giving Cloudflare customers the industry’s utmost protection against converged or blended threats.

The expertise Area 1 gained through this relentless focus on Threat Research and Threat Operations (i.e., disrupting actors once identified) is also leading to a new large scale initiative to make every Cloudflare customer, and the broader Internet, safer – Cloudforce One.

The Cloudforce One team is composed of analysts assigned to five subteams: Malware Analysis, Threat Analysis, Active Mitigation and Countermeasures, Intelligence Analysis, and Intelligence Sharing. Collectively, they have tracked many of the most sophisticated cyber criminals on the Internet while at the National Security Agency (NSA), USCYBERCOM, and Area 1 Security, and have worked closely with similar organizations and governments to disrupt these threat actors. They’ve also been prolific in publishing “finished intel” reports on security topics of significant geopolitical importance, such as targeted attacks against governments, technology companies, the energy sector, and law firms, and have regularly briefed top organizations around the world on their efforts.

The team will help protect all Cloudflare customers by working closely with our existing product, engineering, and security teams to improve our products based on tactics, techniques, and procedures (TTPs) observed in the wild. Customers will get better protection without having to take any action.

Additionally, customers can purchase a subscription to Cloudforce One (now generally available), and get access to threat data and briefings, dedicated security tools, and the ability to make requests for information (RFIs) to the team’s threat operations staff. RFIs can be on any security topic of interest, and will be analyzed and responded to in a timely manner. For example, the Cloudforce One Malware Analysis team can accept uploads of possible malware and provide a technical analysis of the submitted resource.

Lastly, SPF/DKIM/DMARC policies are another tool that can be used to prevent Email Spoofing and have always been a critical part of Area 1’s threat models. Cloudflare Area 1 customers receive weekly DMARC sender reports to understand the efficacy of their configuration, but customers have also asked for help in setting up SPF/DKIM/DMARC records for their own domains.

It was only logical to make Cloudflare’s Email Security DNS Wizard part of our Email Security stack to guide customers through their initial SPF, DKIM and DMARC configuration. The wizard is now available to all customers using Cloudflare DNS, and will soon be available to Cloudflare Area 1 customers using a third party DNS. Getting SPF/DKIM/DMARC right can be complex, but it is a necessary and vital part of making the Internet safer, and this solution will help you build a solid foundation.
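For reference, the records such a wizard guides you toward look roughly like the zone-file entries below. The values are illustrative only: your SPF include, DMARC policy, and report address will differ, and the DKIM public key is published by your mail provider.

```
; SPF: which servers may send mail for the domain (example value)
example.com.         IN TXT "v=spf1 include:_spf.example.net ~all"

; DMARC: what receivers should do with mail failing SPF/DKIM alignment
_dmarc.example.com.  IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"

; DKIM: the public key lives at <selector>._domainkey.<domain>,
; with the selector and key supplied by your mail provider
```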

You’ll be hearing from us very soon regarding more expansions to the Area 1 feature set. In the meantime, if you want to experience Area 1 first-hand, sign up for a Phishing Risk Assessment here or explore the interactive demo through the Email section of your Cloudflare Dashboard.

Isolate browser-borne threats on any network with WAN-as-a-Service

Post Syndicated from Tim Obezuk original https://blog.cloudflare.com/magic-gateway-browser-isolation/

Defending corporate networks from emerging threats is no easy task for security teams who manage complex stacks of firewalls, DNS and HTTP filters, and DLP and sandboxing appliances. Layering new defenses, such as Remote Browser Isolation to mitigate browser-borne threats that target vulnerabilities in unpatched browsers, can be complex for administrators who first have to plan how to integrate a new solution within their existing networks.

Today, we’re making it easier for administrators to integrate Cloudflare Browser Isolation into their existing network from any traffic source such as IPsec and GRE via our WAN-as-a-service, Magic WAN. This new capability enables administrators to connect on-premise networks to Cloudflare and protect Internet activity from browser-borne malware and zero day threats, without installing any endpoint software or nagging users to update their browsers.

Before diving into the technical details, let’s recap how Magic WAN and Browser Isolation fit into network perimeter architecture and a defense-in-depth security strategy.

Securing networks at scale with Magic WAN

Companies have historically secured their networks by building a perimeter out of on-premise routers, firewalls, dedicated connectivity and additional appliances for each layer of the security stack. Expanding the security perimeter pushes networks to their limits as centralized solutions become saturated, congested and add latency, and decentralizing adds complexity, operational overhead and cost.

These challenges are further compounded as security teams introduce more sophisticated security measures such as Browser Isolation. Cloudflare eliminates the complexity, fragility and performance limitations of legacy network perimeters by displacing on-premise firewalls with cloud firewalls hosted on our global network. This enables security teams to focus on delivering a layered security approach and successfully deploy Browser Isolation without the latency and scale constraints of legacy approaches.

Securing web browsing activity with Browser Isolation

A far cry from their humble origins as document viewers, web browsers have evolved into extraordinarily complex pieces of software capable of running untrusted code from any connected server on the planet. In 2022 alone, Chromium, the engine that powers more than 70% of all web browsing activity and is used daily to access sensitive data in email and internal applications, has seen six disclosed zero-day vulnerabilities.

Despite this persistent security risk, patching browsers is often left to end users, who choose when to hit update (while also restarting their browser and disrupting productivity). Rolling a patch out across an organization typically takes days, and users remain exposed to malicious website code until it is complete.

To combat this risk Browser Isolation takes a zero trust approach to web browsing and executes all website code in a remote browser. Should malicious code be executed, it occurs remotely from the user in an isolated container. The end-user and their connected network is insulated from the impact of the attack.

Magic WAN + Browser Isolation

Customers who have networks protected by Magic WAN can now enable Browser Isolation through HTTP policies.

Connect your network to Cloudflare and enable Secure Web Gateway

Magic WAN enables you to connect any network to Cloudflare over IPsec, GRE, or private network connectivity. The steps for this process may vary significantly depending on your vendor. See our developer documentation for more information.

Create an isolation policy

Isolation policies function the same with Magic WAN as they do for traffic sourced from devices with our Roaming Client (WARP) installed.

Navigate to the Cloudflare Zero Trust dashboard → Gateway → HTTP Policies and create a new HTTP policy with an isolate action.
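The same policy can also be scripted instead of clicked together. The sketch below builds the JSON body for a Gateway HTTP rule with an isolate action and posts it to the Cloudflare API. The endpoint path, field names, and the traffic filter expression are assumptions based on the Zero Trust Gateway rules API; treat this as a hypothetical illustration and consult the API documentation for the authoritative schema.

```python
import json
import os
import urllib.request

# Hypothetical sketch: creating an "isolate" Gateway HTTP policy via the API
# rather than the dashboard. Field names and endpoint path are assumptions
# drawn from the Zero Trust Gateway rules API.

def build_isolate_policy(name: str, traffic_filter: str) -> dict:
    """Build the JSON body for a Gateway HTTP policy with an isolate action."""
    return {
        "name": name,
        "enabled": True,
        "action": "isolate",
        "filters": ["http"],        # apply the rule at the HTTP layer
        "traffic": traffic_filter,  # wirefilter-style match expression
    }

policy = build_isolate_policy(
    "Isolate example.com",
    'http.request.host == "example.com"',  # hypothetical filter expression
)

# Only send the request when credentials are configured.
account_id = os.environ.get("CF_ACCOUNT_ID")
token = os.environ.get("CF_API_TOKEN")
if account_id and token:
    req = urllib.request.Request(
        f"https://api.cloudflare.com/client/v4/accounts/{account_id}/gateway/rules",
        data=json.dumps(policy).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)
```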

See our developer documentation to learn more about isolation policies.

Enable non-identity on-ramp support

Prior to this release, Magic WAN + Browser Isolation traffic presented a block page. Existing customers will continue to see this block page. To enable Browser Isolation traffic for Magic Gateway navigate to: Cloudflare Zero Trust → Settings → Browser Isolation → Non-identity on-ramp support and select Enable.

Configuration complete

Once configured, traffic that matches your isolation criteria is transparently intercepted and served through a remote browser. End users are automatically connected to a remote browser in the closest Cloudflare data center. This keeps latency to a minimum, ensuring a positive end-user experience while mitigating security threats.

Try Cloudflare Browser

Interested in testing our remote browsing experience? Visit this landing page to request demo access to Browser Isolation. This service is hosted on our global network, and you’ll be connected to a real remote browser hosted in a nearby Cloudflare data center.

What’s next?

We’re excited to continue integrating new on-ramps to consistently protect users from web based threats on any device and any network. Stay tuned for updates on deploying Browser Isolation via Proxy PAC files and deploying in-line on top of self-hosted Access applications.

Cloudflare One Partner Program acceleration

Post Syndicated from Steve Pataky original https://blog.cloudflare.com/cloudflare-one-partner-program-acceleration/

Just a few short months ago, Cloudflare announced the launch of the Cloudflare One Partner Program. Many customers want to start their journeys to Zero Trust but are not sure where or how to start. It became clear there was a significant opportunity to partner with the channel: to combine Cloudflare’s complete Zero Trust portfolio with a broad set of Cloudflare-enabled, channel-delivered professional services that help customers navigate meaningful ways to adopt a Zero Trust architecture. Underscoring this need to partner was the fact that over the last six months we saw a 50% increase in new Cloudflare Zero Trust customers won with the channel.

Clearly customers are ready to cut through the market hype of Zero Trust and start implementing – with the right platform of products and services – and the right value contribution of their channel partners.

Since the launch of the Cloudflare One Partner Program, we’ve engaged with hundreds of partners through our recruiting campaigns and in our Zero Trust Roadshow. This has provided a tremendous amount of feedback on what is working and why we believe we have the right program at the right time. This feedback has consistently centered around a few key themes:

A broad Zero Trust platform – our channel partners see the value in having a broad Zero Trust platform that acknowledges the Zero Trust journey for their customers is not one size fits all. It takes the right set of cloud-native technologies to fulfill the varied requirements of everyone from smaller, mid-market customers to the largest enterprises. One customer may start the transition to Zero Trust Network Access (ZTNA) by phasing out VPNs for third parties, while another may start by replacing VPNs for their remote workers.

For others, the journey may start with the need to streamline their SaaS security or a compliance-driven need to protect web traffic from modern threats. We even see customers starting their Zero Trust journey by applying advanced, cloud-native protection to their email.

Each of these real customer use cases represents an “on-ramp” to Zero Trust architecture, rooted in a specific business need and desired outcome for the customer. Our partners tell us that having a broad Zero Trust platform comprising each of the services needed to fulfill these use cases means they are enabled to assess exactly what their customers need and compose the best starting point for their entry to Zero Trust.

Bundles make configuration and design easy – The Cloudflare One Partner Program provides exclusive access to a set of Zero Trust solution bundles optimized for the real use cases that partners encounter when helping their customers map out a Zero Trust strategy.

Cloudflare Zero Trust Essentials, Advanced, and Premier bundles combine the required services to deliver a well-orchestrated solution and are available directly from Cloudflare or through distributors. The feedback from our partners’ SE community is that the bundles can save a significant amount of time in solution design and configuration.

Partner-delivered professional services – Customers of all sizes need channel partners to help them find the value in a Zero Trust architecture – to identify that first use case that will allow them to start their transformation. The Cloudflare One Partner Program acknowledges this critical role the channel plays in assessing customer requirements, designing and implementing the solution, and providing ongoing support.

For partners with existing services practices, our new enablement, certification, service blueprints, and tools help them light up their own Zero Trust services offerings. For partners who don’t yet possess these capabilities, Cloudflare backstops them with packaged service offerings delivered by authorized service partners. This creates a selling environment that ensures we can all find the best possible solution for every customer, design and deliver that solution in a highly efficient way, and provide consistent ongoing support.

Two representative tools get consistently positive feedback at our partner recruiting events: A Roadmap to Zero Trust Architecture and our 90 Minute Zero Trust Assessment. Both are proving highly valuable in helping partners jump-start a meaningful Zero Trust dialog with their customers.

Reward for Value – In addition to delivering the broad Zero Trust platform, bundles, and services enablement, the Cloudflare One Partner Program acknowledges the critical role and full contribution of our partners in bringing Zero Trust to life for their customers. Reward for Value is our partner financial incentive structure that rewards partners for developing Zero Trust opportunities (deal registration), designing bundled solutions, and delivering professional services. It is an important acknowledgement that we can bring Zero Trust architectures to market faster with the channel than we could on our own. Our partners love the Reward for Value model, and we believe it’s an important foundation for building a mutually rewarding relationship with the channel.

If the Cloudflare One Partner Program resonates with you, and you’re serious about helping your customers find value in a Zero Trust architecture, let’s talk. We’d love to share more about all the Program elements outlined in this blog and how you can put them to work for your business. We’re building our Zero Trust channel one great partner at a time – are you next?

For more details visit our Cloudflare One Partner Program website.

Detect security issues in your SaaS apps with Cloudflare CASB

Post Syndicated from Alex Dunbrack original https://blog.cloudflare.com/casb-ga/

This post is also available in 简体中文, 日本語, Deutsch, Français and Español.

It’s GA Week here at Cloudflare, meaning some of our latest and greatest endeavors are here and ready to be put in the hands of Cloudflare customers around the world. One of those releases is Cloudflare’s API-driven Cloud Access Security Broker, or CASB, one of the newest additions to our Zero Trust platform.

Starting today, IT and security administrators can begin using Cloudflare CASB to connect, scan, and monitor their third-party SaaS applications for a wide variety of security issues – all in just a few clicks.

Whether it’s auditing Google Drive for data exposure and file oversharing, checking Microsoft 365 for misconfigurations and insecure settings, or reviewing third-party access for Shadow IT, CASB is now here to help organizations establish a direct line of sight into their SaaS app security and DLP posture.

The problem

Try to think of a business or organization that uses fewer than 10 SaaS applications. Hard, isn’t it?

It’s 2022, and by now most of us have watched mass SaaS adoption balloon over recent years, with some organizations utilizing hundreds of third-party services across a slew of internal functions. Google Workspace and Microsoft 365 for business collaboration. Slack and Teams for communication. Salesforce for customer management, GitHub for version control… the list goes on and on and on.

And while the average employee might see these products as simply tools used in their day-to-day work, the reality is much starker than that. Inside these services lie some of an organization’s most precious, sensitive, business-critical data – something IT and security teams don’t take lightly and strive to protect at all costs.

But there hasn’t been a great way for these teams to ensure their data, and the applications that contain it, are kept secure. Go user by user, file by file, SaaS app by SaaS app and review everything for potential problems? For most organizations, that’s simply not realistic.

So, doing what Cloudflare does best, how are we helping our users get a grip on this wave of growing security risk in an intuitive and manageable way?

The solution

Connect your most critical SaaS applications in just minutes and clicks

It all starts with a simple integration process: connect your favorite SaaS applications to Cloudflare CASB in just a few clicks. Once connected, you’ll instantly begin to see Findings (identified security issues) appear on your CASB home page.

CASB utilizes each vendor’s API to scan and identify a range of application-specific security issues that span several domains of information security, including misconfigurations and insecure settings, file sharing security, Shadow IT, best practices not being followed, and more.
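As a concrete illustration of the kind of check such an integration performs, here is a standalone sketch that flags files shared with anyone who has the link. This is not Cloudflare’s scanner; the dict shape only loosely mirrors a Google Drive v3 file-plus-permissions response, and the file names are made up.

```python
# Flag files whose sharing settings grant access to permission type "anyone",
# i.e. anyone with the link. A simplified, standalone illustration of one
# check an API-driven CASB can run against a Drive-style file inventory.

def find_link_shared(files: list) -> list:
    """Return names of files that grant access to permission type 'anyone'."""
    return [
        f["name"]
        for f in files
        if any(p.get("type") == "anyone" for p in f.get("permissions", []))
    ]

inventory = [
    {"name": "q3-board-deck.pdf",
     "permissions": [{"type": "anyone", "role": "reader"}]},
    {"name": "meeting-notes.txt",
     "permissions": [{"type": "user", "emailAddress": "a@example.com"}]},
]

print(find_link_shared(inventory))  # ['q3-board-deck.pdf']
```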

Today CASB supports integrations with Google Workspace, Microsoft 365, Slack, and GitHub, with a growing list of other critical applications not far behind. Have a SaaS app you want to see next? Let us know!

See how all your files have been shared

One of the easiest ways for employees to accidentally expose internal information is with just the flick of a switch: changing a sharing setting to Share this file to anyone with the link.

Cloudflare CASB provides users an exhaustive list of files that have questionable, often insecure, sharing settings, giving them a fast and reliable way to address low-hanging fruit exposures and get ahead of data protection incidents.

Identify insecure settings and bad practices

How we configure our SaaS apps dictates how they keep our data secure. Would you know if that one important GitHub repository had its visibility changed from Private to Public overnight? And why does one of our IT admins not have 2FA enabled on their account?

With Cloudflare CASB, users can now see those issues in just a few clicks and prioritize misconfigurations that might not expose just one file, but the entirety of them across your organization’s SaaS footprint.

Discover third-party apps with shadowy permissions

With the advent of frictionless product signups comes the rise of third-party applications that have breezed past approval processes and internal security reviews to lay claim to data and other sensitive resources. You guessed it, we’re talking about Shadow IT.

Cloudflare CASB adds a layer of access visibility beyond what traditional network-based Shadow IT discovery tools (like Cloudflare Gateway) can accomplish on their own, providing a detailed list of access that’s been granted to third-party services via those easy Sign in with Google buttons.

So, why does this matter in the context of Zero Trust?

While we’re here to talk about CASB, it would be remiss if we didn’t acknowledge how CASB is only one piece of the puzzle in the wider context of Zero Trust.

Zero Trust is all about broad security coverage and simple interconnectivity with how employees access, navigate, and leverage the complex systems and services needed to operate every day. Where Cloudflare Access and Gateway have provided users with granular access control and visibility into how employees traverse systems, and where Browser Isolation and our new in-line DLP offering protect users from malicious sites and limit sensitive data flying over the wire, CASB adds coverage for one of enterprise security’s final frontiers: visibility into data at-rest, who/what has access to it, and the practices that make it easier or harder for someone to access it inappropriately.

How to get started

As we’ve found through CASB’s beta program over the last few months, SaaS sprawl and misuse compound with time. We’ve already identified more than five million potential security issues across beta users, with some organizations seeing several thousand files flagged as needing a sharing-setting review.

So don’t hesitate to get started on your SaaS app wrangling and cleanup journey; it’s easier than you might think.

To get started, create a free Zero Trust account to try it out with 50 free seats, and then get in touch with our team here to learn more about how Cloudflare CASB can help at your organization. We can’t wait to hear what you think.

Cloudflare Data Loss Prevention now Generally Available

Post Syndicated from Noelle Gotthardt original https://blog.cloudflare.com/inline-dlp-ga/

This post is also available in 简体中文, 日本語, Deutsch, Français and Español.

In July 2022, we announced beta access to our newest Zero Trust product, Data Loss Prevention (DLP). Today, we are even more excited to announce that DLP is Generally Available to customers! Any customer can now get visibility and control of sensitive data moving into, out of, and around their corporate network. If you are interested, check out the bottom of this post.

What is DLP?

Data Loss Prevention helps you overcome one of your biggest challenges: identifying and protecting sensitive data. The migration to the cloud has made tracking and controlling sensitive information more difficult than ever. Employees are using an ever-growing list of tools to manipulate a vast amount of data. Meanwhile, IT and security managers struggle to identify who should have access to sensitive data, how that data is stored, and where that data is allowed to go.

Data Loss Prevention enables you to protect your data based on its characteristics, such as keywords or patterns. As traffic moves into and out of corporate infrastructure, the traffic is inspected for indicators of sensitive data. If the indicators are found, the traffic is allowed or blocked based on the customers’ rules.

The most common use for DLP is the protection of Personally Identifiable Information (PII), but many customers are interested in protecting intellectual property, source code, corporate financial information, or any other information vital to the business. Tracking proper data usage can include knowing who used the data, where the data was sent, and how the data is stored.

How does DLP see my corporate traffic?

DLP is part of Cloudflare One, our Zero Trust network-as-a-service platform that connects users to enterprise resources. Cloudflare One runs traffic from data centers, offices, and remote users, through the Cloudflare network. This offers a wide variety of opportunities to secure the traffic, including validating identity and device posture, filtering corporate traffic to protect from malware and phishing, checking the configurations on SaaS applications, and using Browser Isolation to make web surfing safer for employees. All of this is done with the performance of our global network and managed with one control plane.

How does it work?

DLP leverages the HTTP filtering abilities of Cloudflare One. As your traffic runs through our network, you can apply rules and route traffic based on information in the HTTP request. There are a wide variety of options for filtering, such as domain, URL, application, HTTP method, and many more. You can use these options to segment the traffic you wish to DLP inspect.

When DLP is applied, the relevant HTTP requests are decompressed, decoded, and scanned for regex matches. Numeric regex matches are then algorithmically validated where possible, for example with checksum calculations such as the Luhn algorithm. However, some numeric detections, such as US Social Security numbers, do not lend themselves to algorithmic validation.
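Cloudflare’s validation code isn’t published, but a generic sketch of the Luhn checksum shows why this step matters: a random digit string that happens to match a card-number regex will usually fail the checksum, so validating matches cuts false positives.

```python
# Generic Luhn checksum, the algorithm used to validate credit card numbers.
# A sketch of the validation idea, not Cloudflare's implementation.

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(c) for c in number if c.isdigit()]
    if len(digits) < 13:  # card numbers are 13-19 digits long
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:    # double every second digit from the right
            d *= 2
            if d > 9:     # digits above 9 are reduced by 9
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111 1111 1111 1111"))  # True  (a well-known test number)
print(luhn_valid("4111 1111 1111 1112"))  # False
```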

If sensitive data is identified by the detection, the data transfer can be allowed or blocked according to the customer’s ruleset.

How do I use it?

Let’s dive in to see how this all comes to life. To use DLP in the Zero Trust Dashboard, navigate to the DLP Profiles tab under Gateway.

Decide on the type of data you want to protect. We currently detect credit card numbers and US Social Security numbers, but this is where we intend to grow a robust library of DLP detections. Our next steps are custom and additional predefined detections, including more international identifiers and financial record numbers, which will arrive soon.

When you have decided, select Configure to enable detections.

Enable the detections you want to use. As described above, card number detections are made using regexes and validated with the Luhn algorithm. You can make numeric detections for card numbers or detect strings matching card names, such as “American Express.”

Then apply the detections to a Gateway HTTP policy on the traffic of your choosing. Here we applied DLP to Google Drive traffic. This policy will block uploads and downloads to Google Drive that contain US Social Security numbers.

Holistic data protection with Cloudflare Zero Trust

Inspecting HTTP traffic for the presence of sensitive data with DLP is one critical way organizations can reduce the risk of data exfiltration, strengthen regulatory compliance, and improve overall data governance.

Implementing DLP is just one step towards a more holistic approach to securing data.

To that end, our Cloudflare Zero Trust platform offers more comprehensive controls over how any user on any device accesses and interacts with data, all from a single management interface.

We have architected our DLP service to work seamlessly with these ZTNA, SWG, CASB, and other security services. As we continue to deepen our DLP capabilities, this platform approach uniquely equips us to address our customers’ needs with flexibility.

Get Access to Data Loss Prevention

To get access to DLP, reach out for a consultation, or contact your account manager.