Tag Archives: Cloudflare Workers

Building scheduling system with Workers and Durable Objects

Post Syndicated from Adam Janiš original https://blog.cloudflare.com/building-scheduling-system-with-workers-and-durable-objects/


We rely on technology to help us on a daily basis. If you are not good at keeping track of time, your calendar can remind you when it’s time to prepare for your next meeting. And if you made a reservation at a really nice restaurant, you don’t want to miss it – you appreciate the app that reminds you of your plans a day in advance.

However, who tells the application when it’s the right time to send you a notification? For this, we generally rely on scheduled events. And when you are relying on them, you really want to make sure they actually occur. It turns out this can get difficult: the scheduler and the storage backend need to be designed with scale in mind, otherwise you may hit limitations quickly.

Workers, Durable Objects, and Alarms are a perfect match for this type of workload. Thanks to the distributed architecture of Durable Objects and their storage, they are a reliable and scalable option. Each Durable Object has access to its own isolated storage and alarm scheduler, both automatically replicated and failed over in case of failures.

There are many use cases where having a reliable scheduler can come in handy: running a webhook service, sending emails to your customers a week after they sign up to keep them engaged, sending invoice reminders, and more!

Today, we’re going to show you how to build a scalable service that schedules HTTP requests – either on a recurring schedule or as a one-off at a specific time – as a guide for any use case that requires scheduled events.


Quick intro to the application stack

Before we dive in, here are some of the tools we’re going to be using today: Cloudflare Workers, Durable Objects with their transactional storage and Alarms, Wrangler, and TypeScript.

The application is going to have the following components:

  • Scheduling system API to accept scheduled requests and manage Durable Objects
  • A unique Durable Object per scheduled request, each with:
      • Storage – keeping the request metadata, such as URL, body, or headers
      • Alarm – a timer (trigger) to wake the Durable Object up

While we will focus on building the application, the Cloudflare global network will take care of the rest – storing and replicating our data, and making sure to wake our Durable Objects up when the time’s right. Let’s build it!

Initialize new Workers project

Get started by generating a completely new Workers project using the wrangler init command, which makes creating new projects quick & easy.

wrangler init -y durable-objects-request-scheduler


For more information on how to install, authenticate, or update Wrangler, check out https://developers.cloudflare.com/workers/wrangler/get-started

Preparing TypeScript types

From my personal experience, having at least a draft of your TypeScript types up front significantly helps productivity down the road, so let’s describe our scheduled request in advance. Create a file types.ts in the src directory and paste in the following TypeScript definitions.

src/types.ts

export interface Env {
    DO_REQUEST: DurableObjectNamespace
}

export interface ScheduledRequest {
  url: string // URL of the request
  triggerEverySeconds?: number // optional, how often to repeat the request, in seconds
  triggerAt?: number // optional, unix timestamp in milliseconds, defaults to `new Date()`
  requestInit?: RequestInit // optional, includes method, headers, body
}

A scheduled request Durable Object class & alarm

Based on our architecture design, each scheduled request will be saved into its own Durable Object, effectively separating storage and alarms from each other and allowing our scheduling system to scale horizontally – there is no limit to the number of Durable Objects we create.

In the end, the Durable Object class is a matter of a couple of lines. The code snippet below accepts and saves the request body to persistent storage and sets the alarm timer. The Workers runtime will then wake the Durable Object up and call the alarm() method to process the request.

The alarm method reads the scheduled request data from the storage, then processes the request, and in the end reschedules itself in case it’s configured to be executed on a recurring schedule.

src/request-durable-object.ts

import { ScheduledRequest } from "./types";
 
export class RequestDurableObject {
  id: string|DurableObjectId
  storage: DurableObjectStorage

  constructor(state:DurableObjectState) {
    this.storage = state.storage
    this.id = state.id
  }

  async fetch(request:Request) {
    // read scheduled request from request body
    const scheduledRequest:ScheduledRequest = await request.json()

    // save scheduled request data to Durable Object storage, set the alarm, and return Durable Object id
    await this.storage.put("request", scheduledRequest)
    await this.storage.setAlarm(scheduledRequest.triggerAt || new Date())
    return new Response(JSON.stringify({
      id: this.id.toString()
    }), {
      headers: {
        "content-type": "application/json"
      }
    })
  }

  async alarm() {
    // read the scheduled request from Durable Object storage
    const scheduledRequest:ScheduledRequest|undefined = await this.storage.get("request")

    // call fetch on the scheduled request URL with the optional requestInit
    if (scheduledRequest) {
      await fetch(scheduledRequest.url, scheduledRequest.requestInit)

      // reschedule the alarm for recurring requests, otherwise clean up the storage
      if (scheduledRequest.triggerEverySeconds) {
        await this.storage.setAlarm(Date.now() + scheduledRequest.triggerEverySeconds * 1000)
      } else {
        await this.storage.deleteAll()
      }
    }
  }
}

Wrangler configuration

Once we have the Durable Object class, we need to create a Durable Object binding by instructing Wrangler where to find it and what the exported class name is.

wrangler.toml

name = "durable-objects-request-scheduler"
main = "src/index.ts"
compatibility_date = "2022-08-02"
 
# added Durable Objects configuration
[durable_objects]
bindings = [
  { name = "DO_REQUEST", class_name = "RequestDurableObject" },
]
 
[[migrations]]
tag = "v1"
new_classes = ["RequestDurableObject"]

Scheduling system API

The API Worker will accept the POST HTTP method only, and expects a JSON body with the scheduled request data – what URL to call, and optionally what method, headers, or body to send. Any method other than POST will return a 405 Method Not Allowed error.

HTTP POST /:scheduledRequestId? will create or override a scheduled request, where :scheduledRequestId is an optional Durable Object ID returned by the scheduling system API earlier.

src/index.ts

import { Env } from "./types"
export { RequestDurableObject } from "./request-durable-object"
 
export default {
    async fetch(
        request: Request,
        env: Env
    ): Promise<Response> {
        if (request.method !== "POST") {
            return new Response("Method Not Allowed", {status: 405})
        }
 
        // parse the URL and get Durable Object ID from the URL
        const url = new URL(request.url)
        const idFromUrl = url.pathname.slice(1)
 
        // construct the Durable Object ID, use the ID from pathname or create a new unique id
        const doId = idFromUrl ? env.DO_REQUEST.idFromString(idFromUrl) : env.DO_REQUEST.newUniqueId()
 
        // get the Durable Object stub for our Durable Object instance
        const stub = env.DO_REQUEST.get(doId)
 
        // pass the request to Durable Object instance
        return stub.fetch(request)
    },
}

It’s worth mentioning that the script above does not implement any listing of scheduled or processed requests. Depending on how the scheduling system is integrated, you can save each created Durable Object ID to your existing backend, or write your own registry using one of the Workers storage options, as sketched below.
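For illustration, here is a minimal sketch of such a registry using Workers KV. The SCHEDULER_REGISTRY binding and the key layout are hypothetical – one possible shape, not part of the project above:

import { Env } from "./types"

// hypothetical extension of Env with a KV namespace acting as a registry;
// assumes a matching kv_namespaces binding in wrangler.toml
interface EnvWithRegistry extends Env {
    SCHEDULER_REGISTRY: KVNamespace
}

export default {
    async fetch(request: Request, env: EnvWithRegistry): Promise<Response> {
        if (request.method !== "POST") {
            return new Response("Method Not Allowed", {status: 405})
        }

        // create or address the Durable Object exactly as before
        const idFromUrl = new URL(request.url).pathname.slice(1)
        const doId = idFromUrl ? env.DO_REQUEST.idFromString(idFromUrl) : env.DO_REQUEST.newUniqueId()
        const response = await env.DO_REQUEST.get(doId).fetch(request)

        // record the Durable Object ID so scheduled requests can be listed later
        await env.SCHEDULER_REGISTRY.put(`scheduled-request:${doId.toString()}`, JSON.stringify({ createdAt: Date.now() }))

        return response
    },
}

Listing scheduled requests is then a single env.SCHEDULER_REGISTRY.list({ prefix: "scheduled-request:" }) call.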

Starting a local development server and testing our application

We are almost done! Before we publish our scheduling system to the Cloudflare edge, let’s start Wrangler in local mode to run a couple of tests against it and see it in action – this works even without an Internet connection!

wrangler dev --local

The development server is listening on localhost:8787, which we will use for scheduling our first request. The JSON request payload should match the TypeScript schema we defined in the beginning – required URL, and optional triggerEverySeconds number or triggerAt unix timestamp. When only the required URL is passed, the request will be dispatched right away.

Here is an example request payload that will send a GET request to https://example.com every 30 seconds:

{
	"url": "https://example.com",
	"triggerEverySeconds": 30
}
> curl -X POST -d '{"url": "https://example.com", "triggerEverySeconds": 30}' http://localhost:8787
{"id":"000000018265a5ecaa5d3c0ab6a6997bf5638fdcb1a8364b269bd2169f022b0f"}

From the Wrangler logs we can see that the scheduled request ID 000000018265a5ecaa5d3c0ab6a6997bf5638fdcb1a8364b269bd2169f022b0f is being triggered at 30-second intervals.

Need to double the interval? No problem, just send a new POST request and pass the request ID as a pathname.

> curl -X POST -d '{"url": "https://example.com", "triggerEverySeconds": 60}' http://localhost:8787/000000018265a5ecaa5d3c0ab6a6997bf5638fdcb1a8364b269bd2169f022b0f
{"id":"000000018265a5ecaa5d3c0ab6a6997bf5638fdcb1a8364b269bd2169f022b0f"}

Every scheduled request gets a unique Durable Object ID with its own storage and alarm. As we demonstrated, the ID comes in handy when you need to change the settings of a scheduled request, or to deschedule it completely – the sketch below shows one way to do the latter.
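Descheduling isn’t implemented above, but a minimal sketch could look like this – a hypothetical extension of RequestDurableObject handling the DELETE method (the API Worker would also need to forward DELETE requests instead of rejecting all non-POST methods):

// hypothetical addition to the fetch() method in src/request-durable-object.ts
async fetch(request:Request) {
  if (request.method === "DELETE") {
    // cancel the pending alarm, if any, and wipe the stored request metadata
    await this.storage.deleteAlarm()
    await this.storage.deleteAll()
    return new Response(null, { status: 204 })
  }
  // ... existing logic for storing the request and setting the alarm ...
}

With that in place, curl -X DELETE http://localhost:8787/<id> would cancel a scheduled request.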

Publishing to the network

The following command will bundle our Workers application, export and bind the Durable Object class, and deploy it to our workers.dev subdomain.

wrangler publish

That’s it, we are live! 🎉 The URL of your deployment is shown in the Wrangler output. In a reasonably short time, we managed to write our own scheduling system that is ready to handle requests at scale.

You can check the full source code in the Workers templates repository, or experiment from your browser without installing any local dependencies using the StackBlitz template.


Running Zig with WASI on Cloudflare Workers

Post Syndicated from Daniel Harper original https://blog.cloudflare.com/running-zig-with-wasi-on-cloudflare-workers/


After the recent announcement regarding WASI support in Workers, I decided to see what it would take to get code written in Zig to run as a Worker, and it turned out to be trivial. This post documents the process I followed as a new user of Zig. It’s so exciting to see how Cloudflare Workers is a polyglot platform allowing you to write programs in the language you love, or the language you’re learning!

Hello, World!

I’m not a Zig expert by any means and, to keep things entirely honest, I’ve only just started looking into the language – but we all have to start somewhere. So, if my Zig code isn’t perfect, please bear with me. My goal was to build a real, small program using Zig, deploy it on Cloudflare Workers, and see how fast I could go from a blank screen to production code.

My goal wasn’t ambitious: just read some text from stdin and print it to stdout with line numbers, like running cat -n. But it does show just how easy the Workers paradigm is. This Zig program works identically on the command line on my laptop and as an HTTP API deployed on Cloudflare Workers.

Here’s my code. It reads a line from stdin and outputs the same line prefixed with a line number. It terminates when there’s no more input.

const std = @import("std");

pub fn main() anyerror!void {
    // set up the allocator
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer std.debug.assert(!gpa.deinit());
    const allocator = gpa.allocator();

    // set up the streams
    const stdout = std.io.getStdOut().writer();
    const in = std.io.getStdIn();
    var reader = std.io.bufferedReader(in.reader()).reader();

    var counter: u32 = 1;

    // read input line by line
    while (try reader.readUntilDelimiterOrEofAlloc(allocator, '\n', std.math.maxInt(usize))) |line| {
        defer allocator.free(line);
        try stdout.print("{d}\t{s}\n", .{counter, line});
        counter = counter + 1;
    }
}

To build Zig code, you create a build.zig file that defines how to build your project. For this trivial case, I just opted to build an executable from the sources:

const std = @import("std");

pub fn build(b: *std.build.Builder) void {
	const target = b.standardTargetOptions(.{});
	const mode = b.standardReleaseOptions();

	const exe = b.addExecutable("print-with-line-numbers", "src/main.zig");
	exe.setTarget(target);
	exe.setBuildMode(mode);
	exe.install();
}

By running zig build, the compiler will run and output a binary under zig-out/bin:

$ zig build

$ ls zig-out/bin
print-with-line-numbers

$ echo "Hello\nWorld" | ./zig-out/bin/print-with-line-numbers
1    Hello
2    World

WASI

The next step is to get this running on Workers, but first I need to compile it into WASM with WASI support.

Thankfully, this comes out of the box with recent versions of Zig, so you can just tell the compiler to build your executable using the wasm32-wasi target, which will produce a file that can be run on any WASI-compatible WebAssembly runtime, such as wasmtime.

This same .wasm file can be run in wasmtime and deployed directly to Cloudflare Workers. This makes building, testing and deploying seamless.

$ zig build -Dtarget=wasm32-wasi

$ ls zig-out/bin
print-with-line-numbers.wasm

$ echo "Hello\nWorld" | wasmtime ./zig-out/bin/print-with-line-numbers.wasm
1    Hello
2    World

Zig on Workers

With our binary ready to go, the last piece is to get it running on Cloudflare Workers using wrangler2. That is as simple as publishing the .wasm file on workers.dev. If you don’t have a workers.dev account, you can follow the tutorial on our getting started guide that will get you from code to deployment within minutes!

In fact, once I signed up for my account, all I needed to do was complete the first two steps: install wrangler and log in.

$ npx wrangler@wasm login
Attempting to login via OAuth...
Opening a link in your default browser: https://dash.cloudflare.com/oauth2/auth
Successfully logged in.

Then, I ran the following command to publish my worker:

$ npx wrangler@wasm publish --name print-with-line-numbers --compatibility-date=2022-07-07 zig-out/bin/print-with-line-numbers.wasm
Uploaded print-with-line-numbers (3.04 sec)
Published print-with-line-numbers (6.28 sec)
  print-with-line-numbers.workers.dev

With that step completed, the worker is ready to run and can be invoked by calling the URL printed from the output above.

echo "Hello\nWorld" | curl https://print-with-line-numbers.workers.dev -X POST --data-binary @-
1    Hello
2    World

Success!

Conclusion

What impressed me the most here was just how easy this process was.

First, I had a binary compiled for the architecture of my laptop, then I compiled the code into WebAssembly by just passing a flag to the compiler, and finally I had this running on workers without having to change any code.

Granted, this program was not very complicated and does not do anything other than read from STDIN and write to STDOUT, but it gives me confidence in what is possible, especially as technology like WASI matures.

Announcing support for WASI on Cloudflare Workers

Post Syndicated from Ben Yule original https://blog.cloudflare.com/announcing-wasi-on-workers/


Today, we are announcing experimental support for WASI (the WebAssembly System Interface) on Cloudflare Workers and support within wrangler2 to make it a joy to work with. We continue to be incredibly excited about the entire WebAssembly ecosystem and are eager to adopt the standards as they are developed.

A Quick Primer on WebAssembly

So what is WASI anyway? To understand WASI, and why we’re excited about it, it’s worth a quick recap of WebAssembly, and the ecosystem around it.

WebAssembly promised us a future in which code written in compiled languages could be compiled to a common binary format and run in a secure sandbox, at near native speeds. While WebAssembly was designed with the browser in mind, the model rapidly extended to server-side platforms such as Cloudflare Workers (which has supported WebAssembly since 2017).

WebAssembly was originally designed to run alongside JavaScript, and requires developers to interface directly with JavaScript in order to access the world outside the sandbox. To put it another way, WebAssembly does not provide any standard interface for I/O tasks such as interacting with files, accessing the network, or reading the system clock. This means if you want to respond to an event from the outside world, it’s up to the developer to handle that event in JavaScript, and directly call functions exported from the WebAssembly module. Similarly, if you want to perform I/O from within WebAssembly, you need to implement that logic in JavaScript and import it into the WebAssembly module, as the sketch below illustrates.
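To make that concrete, here is a hedged sketch of the kind of JavaScript glue this implies. The module bytes, the now_ms and log imports, and the exported handle_event function are all hypothetical, purely to illustrate the pattern:

declare const wasmBytes: BufferSource; // assumed: the compiled module's bytes

let memory: WebAssembly.Memory;

// WebAssembly alone has no I/O, so the embedder passes capabilities in as imports
const imports = {
  env: {
    // lets the module read the system clock
    now_ms: (): number => Date.now(),
    // lets the module log a UTF-8 string from its linear memory
    log: (ptr: number, len: number): void => {
      const bytes = new Uint8Array(memory.buffer, ptr, len);
      console.log(new TextDecoder().decode(bytes));
    },
  },
};

const { instance } = await WebAssembly.instantiate(wasmBytes, imports);
memory = instance.exports.memory as WebAssembly.Memory;

// ...and the embedder must call into the module explicitly to hand it an event
(instance.exports.handle_event as () => void)();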

Custom toolchains such as Emscripten or libraries such as wasm-bindgen have emerged to make this easier, but they are language specific and add a tremendous amount of complexity and bloat. We’ve even built our own library, workers-rs, using wasm-bindgen that attempts to make writing applications in Rust feel native within a Worker – but this has proven not only difficult to maintain, but requires developers to write code that is Workers specific, and is not portable outside the Workers ecosystem.

We need more.

The WebAssembly System Interface (WASI)

WASI aims to provide a standard interface that any language compiling to WebAssembly can target. You can read the original post by Lin Clark here, which gives an excellent introduction – code cartoons and all. In a nutshell, Lin describes WebAssembly as an assembly language for a ‘conceptual machine’, whereas WASI is a systems interface for a ‘conceptual operating system.’

This standardization of the system interface has paved the way for existing toolchains to cross-compile existing codebases to the wasm32-wasi target. A tremendous amount of progress has already been made, specifically within Clang/LLVM via the wasi-sdk and Rust toolchains. These toolchains leverage a version of Libc, which provides POSIX standard API calls, that is built on top of WASI ‘system calls.’ There are even basic implementations in more fringe toolchains such as TinyGo and SwiftWasm.

Practically speaking, this means that you can now write applications that not only interoperate with any WebAssembly runtime implementing the standard, but also with any POSIX compliant system! The exact same ‘Hello World!’ can run on your local Linux/Mac/Windows WSL machine and on any WASI-compatible runtime.

Show me the code

WASI sounds great, but does it actually make my life easier? You tell us. Let’s run through an example of how this would work in practice.

First, let’s generate a basic Rust “Hello, world!” application, compile, and run it.

$ cargo new hello_world
$ cd ./hello_world
$ cargo build --release
   Compiling hello_world v0.1.0 (/Users/benyule/hello_world)
    Finished release [optimized] target(s) in 0.28s
$ ./target/release/hello_world
Hello, world!

It doesn’t get much simpler than this. You’ll notice we only define a main() function followed by a println to stdout.

fn main() {
    println!("Hello, world!");
}

Now, let’s take the exact same program and compile against the wasm32-wasi target, and run it in an ‘off the shelf’ wasm runtime such as Wasmtime.

$ cargo build --target wasm32-wasi --release
$ wasmtime target/wasm32-wasi/release/hello_world.wasm

Hello, world!

Neat! The same code compiles and runs in multiple POSIX environments.

Finally, let’s take the binary we just generated for Wasmtime, but instead publish it to Workers using Wrangler2.

$ npx wrangler@wasm dev target/wasm32-wasi/release/hello_world.wasm
$ curl http://localhost:8787/

Hello, world!

Unsurprisingly, it works! The same code is compatible with multiple POSIX environments, and the same binary is compatible across multiple WebAssembly runtimes.

Running your CLI apps in the cloud

The attentive reader may notice that we played a small trick with the HTTP request made via cURL. In this example, we actually stream stdin and stdout to/from the Worker using the HTTP request and response body respectively. This pattern enables some really interesting use cases; specifically, programs designed to run on the command line can be deployed as ‘services’ to the cloud.

‘Hexyl’ is an example that works completely out of the box. Here, we ‘cat’ a binary file on our local machine and ‘pipe’ the output to curl, which will then POST that output to our service and stream the result back. Following the steps we used to compile our ‘Hello World!’, we can compile hexyl.

$ git clone git@github.com:sharkdp/hexyl.git
$ cd ./hexyl
$ cargo build --target wasm32-wasi --release

And without further modification we were able to take a real-world program and create something we can now run or deploy. Again, let’s tell wrangler2 to preview hexyl, but this time give it some input.

$ npx wrangler@wasm dev target/wasm32-wasi/release/hexyl.wasm
$ echo "Hello, world\!" | curl -X POST --data-binary @- http://localhost:8787

┌────────┬─────────────────────────┬─────────────────────────┬────────┬────────┐
│00000000│ 48 65 6c 6c 6f 20 77 6f ┊ 72 6c 64 21 0a          │Hello wo┊rld!_   │
└────────┴─────────────────────────┴─────────────────────────┴────────┴────────┘

Give it a try yourself by hitting https://hexyl.examples.workers.dev.

echo "Hello world\!" | curl https://hexyl.examples.workers.dev/ -X POST --data-binary @- --output -

A more useful example, though it requires a bit more work, is deploying a utility such as swc (swc.rs) to the cloud and using it as an on-demand JavaScript/TypeScript transpilation service. Here, we have a few extra steps to ensure that the compiled output is as small as possible, but it otherwise runs out of the box. Those steps are detailed in https://github.com/zebp/wasi-example-swc, but for now let’s gloss over that and interact with the hosted example.

$ echo "const x = (x, y) => x * y;" | curl -X POST --data-binary @- https://swc-wasi.examples.workers.dev/ --output -

var x=function(a,b){return a*b}

Finally, we can also do the same with C/C++, but it requires a little more lifting to get our Makefile right. Here we show an example of compiling zstd and deploying it as a streaming compression service.

https://github.com/zebp/wasi-example-zstd

$ echo "Hello world\!" | curl https://zstd.examples.workers.dev/ -s -X POST --data-binary @- | file -

What if I want to use WASI from within a JavaScript Worker?

Wrangler can make it really easy to deploy code without having to worry about the Workers ecosystem, but in some cases you may actually want to invoke your WASI-based WASM module from JavaScript. This can be achieved with the following simple boilerplate. An updated README will be kept at https://github.com/cloudflare/workers-wasi.

import { WASI } from "@cloudflare/workers-wasi";
import demoWasm from "./demo.wasm";

export default {
  async fetch(request, _env, ctx) {
    // Creates a TransformStream we can use to pipe our stdout to our response body.
    const stdout = new TransformStream();
    const wasi = new WASI({
      args: [],
      stdin: request.body,
      stdout: stdout.writable,
    });

    // Instantiate our WASM with our demo module and our configured WASI import.
    const instance = new WebAssembly.Instance(demoWasm, {
      wasi_snapshot_preview1: wasi.wasiImport,
    });

    // Keep our worker alive until the WASM has finished executing.
    ctx.waitUntil(wasi.start(instance));

    // Finally, let's reply with the WASM's output.
    return new Response(stdout.readable);
  },
};

Now with our JavaScript boilerplate and wasm, we can easily deploy our worker with Wrangler’s WASM feature.
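For reference, a minimal wrangler.toml for this setup might look like the sketch below; the name and entry point are illustrative, and the rules entry tells the bundler to treat .wasm imports as compiled WebAssembly modules:

name = "wasi-javascript"
main = "src/index.js"
compatibility_date = "2022-07-07"

# treat .wasm imports as compiled WebAssembly modules
rules = [
  { type = "CompiledWasm", globs = ["**/*.wasm"] }
]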

$ npx wrangler publish
Total Upload: 473.89 KiB / gzip: 163.79 KiB
Uploaded wasi-javascript (2.75 sec)
Published wasi-javascript (0.30 sec)
  wasi-javascript.zeb.workers.dev

Back to the future

For those of you who have been around for the better part of the past couple of decades, you may notice this looks very similar to RFC 3875, better known as CGI (the Common Gateway Interface). While our example here certainly does not conform to the specification, you can imagine how this can be extended to turn the stdin of a basic ‘command line’ application into a full-blown HTTP handler.

We are thrilled to learn where developers take this from here. Share what you build with us on Discord or Twitter!

Internship Experience: Software Development Intern

Post Syndicated from Ulysses Kee original https://blog.cloudflare.com/internship-experience-software-development-intern/


Before we dive into my experience interning at Cloudflare, let me quickly introduce myself. I am currently a master’s student at the National University of Singapore (NUS) studying Computer Science. I am passionate about building software that improves people’s lives and making the Internet a better place for everyone. Back in December 2021, I joined Cloudflare as a Software Development Intern on the Partnerships team to help improve the experience that Partners have when using the platform. I was extremely excited about this opportunity and jumped at the prospect of working on serverless technology to build viable tools for our partners and customers. In this blog post, I detail my experience working at Cloudflare and the many highlights of my internship.

Interview Experience

The process began for me back when I was taking a software engineering module at NUS where one of my classmates had shared a job post for an internship at Cloudflare. I had known about Cloudflare’s DNS service prior and was really excited to learn more about the internship opportunity because I really resonated with the company’s mission to help build a better Internet.

I knew right away that this would be a great opportunity and submitted my application. Soon after, I heard back from the recruiting team and went through the interview process – the entire process was extremely accommodating and was definitely the most enjoyable interview experience I have had. Throughout, I was constantly asked about the kind of things I would like to work on and the relevance of the work that I would be doing. I felt that this thorough communication carried on throughout the internship and really was a cornerstone of my experience interning at Cloudflare.

My Internship

My internship began with onboarding and training, after which I had discussions with my mentor, Ayush Verma, on the projects we aimed to complete during the internship and the order of objectives. The main issue we wanted to address was the manual process that our internal teams and partners go through when they want to duplicate the configuration settings of a zone, or compare one zone to other zones to ensure that there are no misconfigurations. As you can imagine, with the number of different configurations offered on the Cloudflare dashboard, it can take significant time to copy over every setting and rule manually from one zone to another. Additionally, this manual process poses a potential threat of misconfiguration due to human error. Furthermore, as more and more customers onboard different zones onto Cloudflare, there needs to be a more automated and improved way for them to make these configuration setups.

Initially, we discussed using Terraform, as Cloudflare already supports Terraform automation. However, this approach would only cater to customers and users with more technical resources and, in true Cloudflare spirit, we wanted to keep it simple enough to be used by anyone and everyone. Therefore, we decided to leverage the publicly available Cloudflare APIs and create a browser-based application that interacts with these APIs to display configurations and make changes easily from a simple UI.

With the end goal of simplifying the experience for our partners and customers in duplicating zone configurations, we decided to build a Zone Copier web application built solely on Cloudflare Workers. This tool, at the click of a button, automatically copies over every setting that can be copied from one zone to another, significantly reducing the amount of time and effort required to make the changes.

Alongside the Zone Copier, we would have some auxiliary tools such as a Zone Viewer, and Zone Comparison, where a customer can easily have a full view of their configurations on a single webpage and be able to compare different zones that they use respectively. These other applications improve upon the existing methods through which Cloudflare users can view their zone configurations, and allow for the direct comparison between different zones.

Importantly, these applications are not to replace the Cloudflare Dashboard, but to complement it instead – for deeper dives into a single particular configuration setting, the Cloudflare Dashboard remains the way to go.

To begin building the web application, I spent the first few weeks diving into the publicly available APIs offered by Cloudflare as part of the v4 API to verify the outputs of each endpoint, and the type of data that would be sent as a response from a request. This took much longer than expected as certain endpoints provided different default responses for a zone that has either an empty setting – for example, not having any Firewall Rules created – or uses a nested structure for its relevant response. These different potential responses have to be examined so that when the web application calls the respective API endpoint, the responses are handled appropriately. This process was quite manual as each endpoint had to be verified individually to ensure the output would work seamlessly with the application.

Once I completed my research, I was able to start designing the web application. Building the web application was a very interesting experience as the stack rested solely on Workers, a serverless application platform. My prior experiences building web applications used servers that require the deployment of a server built using Express and Node.js, whereas for my internship project, I completely relied on a backend built using the itty-router library on Workers to interface with the publicly available Cloudflare APIs. I found this extremely exciting as building a serverless application required less overhead compared to setting up a server and deploying it, and using Workers itself has many other added benefits such as zero cold starts. This introduction to serverless technology and my experience deep-diving into the capabilities of Workers has really opened my eyes to the possibilities that Workers as a platform can offer. With Workers, you can deploy any application on Cloudflare’s global network like I did!
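As an illustration of the pattern (not the project’s actual code), a Workers-only backend built with itty-router can be as small as the sketch below – the route shape and the CF_API_TOKEN binding are hypothetical, and the upstream call is one of the public v4 API endpoints mentioned above:

import { Router } from "itty-router";

interface Env {
  CF_API_TOKEN: string;
}

const router = Router();

// proxy a zone's settings from the public Cloudflare v4 API
router.get("/api/zones/:zoneId/settings", async (request: any, env: Env) => {
  const upstream = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${request.params.zoneId}/settings`,
    { headers: { authorization: `Bearer ${env.CF_API_TOKEN}` } }
  );
  return new Response(await upstream.text(), {
    headers: { "content-type": "application/json" },
  });
});

router.all("*", () => new Response("Not Found", { status: 404 }));

export default {
  fetch: (request: Request, env: Env) => router.handle(request, env),
};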

For the frontend of the web application, I used React and the Chakra-UI library to build the user interface that the Zone Viewer, Zone Comparison, and Zone Copier are based on. The routing between different pages was done using React Router, and the application is deployed directly through Workers.

Here is a screenshot of the application:


Presenting the prototype application

As developers will know, the best way to obtain feedback for the tool that you’re building is to directly have your customers use them and let you know what they think of your application and the kind of features they want to have built on top of it. Therefore, once we had a prototype version of the web application for the Zone Viewer and Zone Comparison complete, we presented the application to the Solutions Engineering team to hear their thoughts on the impact the tool would have on their work and additional features they would like to see built on the application. I found this process very enriching as they collectively mentioned how impactful the application would be for their work and the value add this project provides to them.

Some interesting feedback and feature requests I received were:

  1. The Zone Copier would definitely be very useful for our partners who have to replicate the configuration of one zone to another regularly, and it’s definitely going to help make sure there are fewer human errors in the process of configuring the setups.
  2. Besides duplicating configurations from zone-to-zone, could we use this to replicate the configurations from a best-in-class setup for different use cases and allow partners to deploy this with a few clicks?
  3. Can we use this tool to generate quarterly reports?
  4. The Zone Viewer would be very helpful for us when we produce documentation on a particular zone’s configuration as part of a POC report.
  5. The Zone Viewer will also give us much deeper insight into the current zone configurations, helping us better understand them and provide recommendations for improvement.

It was also a very cool experience speaking to the broad Solutions Engineering team as I found that many were very technically inclined and had many valid suggestions for improving the architecture and development of the applications. A special thanks to Edwin Wong for setting up the sharing session with the internal team, and many thanks to Xin Meng, AQ Jiao, Yonggil Choi, Steve Molloy, Kyouhei Hayama, Claire Lim and Jamal Boutkabout for their great insight and suggestions!

Impact of Cloudflare outside of work

While Cloudflare is known for its impeccable transparency throughout the company, and the stellar products it provides in helping make the Internet better, I wanted to take this opportunity to talk about the company’s other endeavors too.

Cloudflare is part of the Pledge 1%, dedicating 1% of its products and 1% of employee time to give back to the local communities as well as all the communities we support online around the world.

I took part in one of these activities, where we spent a morning cleaning up parts of the East Coast Park beach, by picking up trash and litter that had been left behind by other park users. Here’s a picture of us from that morning:


From day one, I have been thoroughly impressed by Cloudflare’s commitment to its culture and the effort everyone at Cloudflare puts in to make the company a great place to work and have a positive impact on the surrounding community.

In addition to giving back to the community, other aspects of company culture include having a good team spirit and a safe working environment where you feel appreciated and taken care of. At Cloudflare, I have found that everyone is very understanding of work commitments. I faced a few challenges during the internship where I had to spend additional time on university-related projects and work, and my manager was always supportive and understanding when I required additional time to complete parts of the internship project.

Concluding takeaways

My experience interning at Cloudflare has been extremely positive, and I have seen first hand how transparent the company is with not only its employees but also its customers, and it truly is a great place to work. Cloudflare’s collaborative culture allowed me to access members from different teams, to obtain their thoughts and assistance with certain issues that I faced from time to time. I would not have been able to produce an impactful project without the help of the different brilliant, and motivated, people I worked with across the span of the internship, and I am truly grateful for such a rewarding experience.

We are getting ready to open intern roles for this coming Fall, so we encourage you to visit our careers page frequently to stay up to date on all the opportunities we have within our teams.

Announcing our Spring Developer Challenge

Post Syndicated from Albert Zhao original https://blog.cloudflare.com/announcing-our-spring-developer-challenge/


After many announcements from Platform Week, we’re thrilled to make one more: our Spring Developer Challenge!

The theme for this challenge is building real-time, collaborative applications — one of the most exciting use-cases emerging in the Cloudflare ecosystem. This is an opportunity for developers to merge their ideas with our newly released features, earn recognition on our blog, and take home our best swag yet.

Here’s a list of our tools that will get you started:

  • Workers can either be powerful middleware connecting your app to different APIs and an origin – or the entire application itself. We recommend using Worktop, a popular framework for Workers, if you need TypeScript support, routing, and well-organized submodules. Worktop can also complement your existing app even if it already uses a framework, such as Svelte.
  • Cloudflare Pages makes it incredibly easy to deploy sites, which you can make into truly dynamic apps by putting a Worker in front or using the Pages Functions (beta).
  • Durable Objects are great for collaborative apps because you can use websockets while coordinating state at the edge, seen in this chat demo. To help scale any load, we also recommend Durable Object Groups.
  • Workers KV provides a global key-value data store that securely stores and quickly serves data across Cloudflare’s network. R2 allows you to store enormous amounts of data without trapping you with costly egress services.

Last year, our Developer Spotlight series highlighted how developers around the world built entire applications on Cloudflare. Our Discord server maintained that momentum with users demonstrating that any type of application can be built. Need a way to organize thousands of lines of JSON? JSON Hero, built with Remix and deployed with Workers, provides an incredibly readable UI for your JSON files. Trying to deploy a GraphQL server for your app that scales? helix-flare deploys a GraphQL server easily through Workers and uses Durable Objects to coordinate data.

We hope developers continue to explore the boundaries of what they can build on Cloudflare as our platform evolves. During our Summer Developer Challenge in 2021, we received over 1,200 submissions that revealed you can build almost any app imaginable with Workers, Pages, and the rest of the developer ecosystem. We sent out hundreds of swag boxes to participants, to show our appreciation. The ensuing unboxing videos on Twitter and YouTube thrilled our team.

This year’s Spring Developer Challenge is all about making real-time, collaborative apps such as chat rooms, games, web-based editing tools, or anything else in your imagination! Here are the rules:

  • You must be at least 18 years old to participate
  • You can work in teams of up to 10 people per submission
  • The deadline to submit your repo is May 24

Enter the challenge by going to this site.

As you build your app, join our Discord if you or your team need any help. We will be enthusiastically reviewing submissions, promoting them on Twitter, and sending out swag boxes.

If you’re new to Cloudflare or have an exciting idea as a developer, this is your opportunity to see how far our platform has evolved and get rewarded for it!

Send email using Workers with MailChannels

Post Syndicated from Erwin van der Koogh original https://blog.cloudflare.com/sending-email-from-workers-with-mailchannels/


Here at Cloudflare we often talk about HTTP and related protocols as we work to help build a better Internet. However, the Simple Mail Transfer Protocol (SMTP) — used to send emails — is still a massive part of the Internet too.

Even though SMTP is turning 40 years old this year, most businesses still rely on email to validate user accounts, send notifications, announce new features, and more.

Sending an email is simple from a technical standpoint, but getting an email actually delivered to an inbox can be extremely tricky. Because of the enormous amount of spam that is sent every single day, all major email providers are very wary of things like new domains and IP addresses that start sending emails.

That is why we are delighted to announce a partnership with MailChannels. MailChannels has created an email sending service specifically for Cloudflare Workers that removes all the friction associated with sending emails. To use their service, you do not need to validate a domain or create a separate account. MailChannels filters spam before sending out an email, so you can feel safe putting user-submitted content in an email and be confident that it won’t ruin your domain reputation with email providers. But the absolute best part? Thanks to our friends at MailChannels, it is completely free to send email.

In the words of their CEO Ken Simpson: “Cloudflare Workers and Pages are changing the game when it comes to ease of use and removing friction to get started. So when we sat down to see what friction we could remove from sending out emails, it turns out that with our incredible anti-spam and anti-phishing, the answer is “everything”. We can’t wait to see what applications the community is going to build on top of this.”

The only constraint currently is that the integration only works when the request comes from a Cloudflare IP address. So it won’t work yet when you are developing on your local machine or running a test on your build server.

First, let’s walk through how to send out your first email using a Worker.

export default {
  async fetch(request) {
    const send_request = new Request('https://api.mailchannels.net/tx/v1/send', {
      method: 'POST',
      headers: {
        'content-type': 'application/json',
      },
      body: JSON.stringify({
        personalizations: [
          {
            to: [{ email: 'test@example.com', name: 'Test Recipient' }],
          },
        ],
        from: {
          email: 'sender@example.com',
          name: 'Workers - MailChannels integration',
        },
        subject: 'Look! No servers',
        content: [
          {
            type: 'text/plain',
            value: 'And no email service accounts and all for free too!',
          },
        ],
      }),
    })

    // dispatch the email and return the MailChannels response
    return await fetch(send_request)
  },
}

That is all there is to it. You can modify the example to make it send whatever email you want.
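For instance, sending an HTML email is just a matter of swapping the content entry – a small variation on the payload above (the addresses remain placeholders, and the shape mirrors the snippet we just saw):

// hypothetical payload variation: an HTML email instead of plain text
const htmlPayload = {
  personalizations: [
    { to: [{ email: 'test@example.com', name: 'Test Recipient' }] },
  ],
  from: {
    email: 'sender@example.com',
    name: 'Workers - MailChannels integration',
  },
  subject: 'Look! No servers',
  content: [
    {
      type: 'text/html',
      value: '<h1>Hello!</h1><p>This email was sent from a Worker.</p>',
    },
  ],
}
// pass it as the body of the same MailChannels request:
// body: JSON.stringify(htmlPayload)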

The MailChannels integration makes it easy to send emails to and from anywhere with Workers. However, we also wanted to make it easier to send emails to yourself from a form on your website. This is perfect for quickly and painlessly setting up pages such as “Contact Us” forms, landing pages, and sales inquiries.

The Pages Plugin Framework that we announced earlier this week allows other people to email you without exposing your email address.

The only thing you need to do is copy and paste the following code snippet in your /functions/_middleware.ts file. Now, every form that has the data-static-form-name attribute will automatically be emailed to you.

import mailchannelsPlugin from "@cloudflare/pages-plugin-mailchannels";

export const onRequest = mailchannelsPlugin({
  personalizations: [
    {
      to: [{ name: "ACME Support", email: "support@example.com" }],
    },
  ],
  from: { name: "Enquiry", email: "no-reply@example.com" },
  respondWith: () =>
    new Response(null, {
      status: 302,
      headers: { Location: "/thank-you" },
    }),
});

Here is an example of what such a form would look like. You can make the form as complex as you like, the only thing it needs is the data-static-form-name attribute. You can give it any name you like to be able to distinguish between different forms.

<!DOCTYPE html>
<html>
  <body>
    <h1>Contact</h1>
    <form data-static-form-name="contact">
      <div>
        <label>Name<input type="text" name="name" /></label>
      </div>
      <div>
        <label>Email<input type="email" name="email" /></label>
      </div>
      <div>
        <label>Message<textarea name="message"></textarea></label>
      </div>
      <button type="submit">Send!</button>
    </form>
  </body>
</html>

So, as you can see, there is no barrier left when it comes to sending out emails. You can copy and paste the above Worker or Pages code into your projects and immediately start sending email for free.

If you have any questions about using MailChannels in your Workers, or want to learn more about Workers in general, please join our Cloudflare Developer Discord server.

Route to Workers, automate your email processing

Post Syndicated from Joao Sousa Botto original https://blog.cloudflare.com/announcing-route-to-workers/


Cloudflare Email Routing has quickly grown to a few hundred thousand users, and we’re incredibly excited with the number of feature requests that reach our product team every week. We hear you, we love the feedback, and we want to give you all that you’ve been asking for. What we don’t like is making you wait, or making you feel like your needs are too unique to be addressed.

That’s why we’re taking a different approach – we’re giving you the power tools that you need to implement any logic you can dream of to process your emails in the fastest, most scalable way possible.

Today we’re announcing Route to Workers, for which we’ll start a closed beta soon. You can join the waitlist today.

How this works

When using Route to Workers, your Email Routing rules can have a Worker process the messages reaching any of your custom email addresses.


Even if you haven’t used Cloudflare Workers before, we are making onboarding as easy as can be. You can start creating Workers straight from the Email Routing dashboard, with just one click.


After clicking Create, you will be able to choose a starter that allows you to get up and running with minimal effort. Starters are templates that pre-populate your Worker with the code you would write for popular use cases such as creating a blocklist or allowlist, content based filtering, tagging messages, pinging you on Slack for urgent emails, etc.


You can then use the code editor to make your new Worker process emails in exactly the way you want it to – the options are endless.


And for those of you who prefer to jump right into writing your own code, you can go straight to the editor without using a starter. You can write Workers in a language you likely already know: Cloudflare built Workers to execute JavaScript and WebAssembly, and has continuously added support for new languages.

The Workers you’ll use for processing emails are just regular Workers that listen to incoming events, implement some logic, and reply accordingly. You can use all the features that a normal Worker would.

The main difference being that instead of:

export default {
  async fetch(request, env, ctx) {
    handleRequest(request);
  }
}

You’ll have:

export default {
  async email(message, env, ctx) {
    handleEmail(message);
  }
}

The new `email` event will provide you with the “from” and “to” fields, the full headers, and the raw body of the message. You can then use them in any way that fits your use case, including calling other APIs and orchestrating complex decision workflows. In the end, you can decide what action to take, including rejecting the message or forwarding it to one of your Email Routing destination addresses.

With these capabilities, you can easily create logic that, for example, only accepts messages coming from one specific address and, when one matches the criteria, forwards it to one or more of your verified destination addresses while also immediately alerting you on Slack. Code for such a feature could be as simple as this:

export default {
  async email(message, env, ctx) {
    switch (message.to) {
      case "marketing@example.com":
        await fetch("https://webhook.slack/notification", {
          body: `Got a marketing email from ${ message.from }, subject: ${ message.headers.get("subject") }`,
        });
        sendEmail(message, [
          "person1@example.com",
          "person2@example.com",
        ]);
        break;

      default:
        message.reject("Unknown address");
    }
  },
};

Route to Workers enables everyone to programmatically process their emails and use them as triggers for any other action. We think this is pretty powerful.

Process up to 100,000 emails/day for free

The first 100,000 Worker requests (or Email Triggers) each day are free, and paid plans start at just $5 per 10 million requests. You will be able to keep track of your Email Workers usage right from the Email Routing dashboard.


Join the Waitlist

You can join the waitlist today by going to the Email section of your dashboard, navigating to the Email Workers tab, and clicking the Join Waitlist button.


We are expecting to start the closed beta in just a few weeks, and can’t wait to hear about what you’ll build with it!

As usual, if you have any questions or feedback about Email Routing, please come see us in the Cloudflare Community and the Cloudflare Discord.

Introducing Custom Domains for Workers

Post Syndicated from Kabir Sikand original https://blog.cloudflare.com/custom-domains-for-workers/


Today, we’re happy to announce Custom Domains for Workers. Custom Domains allow you to hook up a domain to your Worker, without having to fuss about certificates, origin servers or DNS – it just works. Let’s take a look at how we built Custom Domains and how you can use them.

The magic of Cloudflare DNS

Under the hood, we’re leveraging Cloudflare DNS to register your Worker as the origin for your domain. All you need to do is head to your Worker, go to the Triggers tab, and click Add Custom Domain. Cloudflare will handle creating the DNS record and issuing a certificate on your behalf. In seconds, your domain will point to your Worker, and all you need to worry about is writing your code. We’ll also help guide you through the process of creating these new records and replace any existing ones. We built this with a straightforward ethos in mind: we should be clear and transparent about actions we’re taking, and make it easy to understand.

We’ve made a few welcome changes when you’re using a Custom Domain on your Worker. First off, when you send a request to any path on that Custom Domain, your Worker will be triggered. No need to create a route with /* at the end.


Second, your Custom Domain Worker is considered the ‘origin server’. That means, no need to `fetch(event.request)` once you’re in that Worker; instead, talk to any internal or external services you need to by creating request objects in your code, or talk to other Workers services using any of our available bindings. We’ve increased the limit of external requests you can make, when using our Unbound usage model, to 1,000. You can talk to any services you’d like to – things like payment, communication, analytics, or tracking services come to mind, not to mention your databases. If that’s not enough for your use case, feel free to reach out via Discord or support, and we’ll happily chat.


Finally, what if you need to talk to your Worker from another one? Since these Workers act as an origin server, you can just send a normal request to the associated endpoint, and we’ll invoke that Worker – even if it’s on the same Cloudflare zone.
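As a quick sketch of what that looks like in practice (the hostname is a placeholder for a Custom Domain you have configured):

export default {
  async fetch(request) {
    // The Worker behind api.example.com is the origin for that hostname, so a
    // plain fetch invokes it directly - even from within the same Cloudflare zone.
    const result = await fetch("https://api.example.com/status");
    return new Response(`Downstream Worker said: ${await result.text()}`);
  },
};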


Let’s build an example application

We’ll start by writing our application. I have an example that I’ve called api-gateway below. It checks the incoming request path, and delegates work to downstream Workers using Service Bindings. For any privileged endpoints, it performs an authorization check before executing that code:

export default {
 async fetch(request, environment) {
   const url = new URL(request.url);
   switch (url.pathname) {
     case '/login':
       return await environment.login.fetch(request);

     case '/logout':
       return await environment.logout.fetch(request);

     case '/admin': {
       // Check that the "Authorization" header is sent when authenticated.
       const authCheck = await environment.auth.fetch(request.clone());
       if (authCheck.status != 200) { return authCheck }
       // If the auth check passes, send the request to the /admin endpoint
       return await environment.admin.fetch(request);
     }

     case '/robots.txt':
       return new Response(null, { status: 204 });
   }
   return new Response('Not Found.', { status: 404 });
 }
}

Now that I have a working application, I want to serve it on my Custom Domain. To hook this up, head over to Workers, Triggers, click ‘Add Custom Domain’, and type in your desired hostname. You’ll be guided through a simple workflow to generate your new Worker record, and your Worker will be the target.

Best of all, with Custom Domains you can reap the performance benefits of DNS-routable Workers; Cloudflare never has to look through a routing table to invoke your Worker. And, by leveraging Service Bindings, you can customize your routing to your heart’s content – using URL parameters, headers, request content, or query strings to appropriately invoke the right Worker at the right time.
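As a hedged sketch of that flexibility, here is a variant of the gateway above that routes on a header instead of the path – the v1/v2 Service Bindings are hypothetical:

export default {
  async fetch(request, environment) {
    // route by an API version header instead of the URL path
    const version = request.headers.get('x-api-version');
    if (version === '2') {
      return await environment.v2.fetch(request);
    }
    return await environment.v1.fetch(request);
  }
}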

We’re excited to see what you build with Custom Domains. Custom Domains are available in an Open Beta starting today. Support is built right into the Cloudflare Dashboard and APIs, and CLI support via Wrangler is coming soon.

Durable Objects Alarms — a wake-up call for your applications

Post Syndicated from Matt Alonso original https://blog.cloudflare.com/durable-objects-alarms/


Since we launched Durable Objects, developers have leveraged them as a novel building block for distributed applications.

Durable Objects provide globally unique instances of a JavaScript class a developer writes, accessed via a unique ID. The Durable Object associated with each ID implements some fundamental component of an application — a banking application might have a Durable Object representing each bank account, for example. The bank account object would then expose methods for incrementing a balance, transferring money or any other actions that the application needs to do on the bank account.

Durable Objects work well as a stateful backend for applications — while Workers can instantiate a new instance of your code in any of Cloudflare’s data centers in response to a request, Durable Objects guarantee that all requests for a given Durable Object will reach the same instance on Cloudflare’s network.

Each Durable Object is single-threaded and has access to a stateful storage API, making it easy to build consistent and highly-available distributed applications on top of them.

This system makes distributed systems development easier — we’ve seen some impressive applications launched atop Durable Objects, from collaborative whiteboarding tools to conflict-free replicated data type (CRDT) systems for coordinating distributed state.

However, up until now, there’s been a piece missing — how do you invoke a Durable Object when a client Worker is not making requests to it?

As with any distributed system, Durable Objects can become unavailable and stop running. Perhaps the machine you were running on was unplugged, or the datacenter burned down and is never coming back, or an individual object exceeded its memory limit and was reset. Before today, a subsequent request would reinitialize the Durable Object on another machine, but there was no way to programmatically wake up an Object.

Durable Objects Alarms are here to change that, unlocking new use cases for Durable Objects like queues and deferred processing.

What is a Durable Object Alarm?

Durable Object Alarms allow you, from within your Durable Object, to schedule the object to be woken up at a time in the future. When the alarm’s scheduled time comes, the Durable Object’s alarm() handler will be called. If this handler throws an exception, the alarm will be automatically retried using exponential backoff until it succeeds — alarms have guaranteed at-least-once execution.

How are Alarms different from Workers Cron Triggers?

Alarms are more fine-grained than Cron Triggers. While a Workers service can have up to three Cron Triggers configured at once, it can have an unlimited number of Durable Objects, each of which can have a single alarm active at a time.

Alarms are directly scheduled from and invoke a function within your Durable Object. Cron Triggers, on the other hand, are not programmatic — they execute based on their schedules, which have to be configured via the Cloudflare Dashboard or centralized configuration APIs.

How do I use Alarms?

First, you’ll need to add the durable_object_alarms compatibility flag to your wrangler.toml.

compatibility_flags = ["durable_object_alarms"]

Next, implement an alarm() handler in your Durable Object that will be called when the alarm executes. From anywhere else in your Durable Object, call state.storage.setAlarm() and pass in a time for the alarm to run at. You can use state.storage.getAlarm() to retrieve the currently set alarm time.
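
Stripped to its essentials, the pattern looks like this (a minimal sketch; the class name and timing are illustrative, and the fuller example below builds on the same pieces):

export class Reminder {
  constructor(state, env) {
    this.storage = state.storage;
  }

  async fetch(request) {
    // Schedule an alarm one minute from now, unless one is already pending.
    if ((await this.storage.getAlarm()) == null) {
      await this.storage.setAlarm(Date.now() + 60 * 1000);
    }
    return new Response("Alarm scheduled");
  }

  async alarm() {
    // Called by the runtime at (or shortly after) the scheduled time.
    // Throwing here would cause the alarm to be retried with exponential backoff.
    console.log("Alarm fired at", new Date().toISOString());
  }
}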

In the example below, we implement an alarm handler that wakes the Durable Object up once every 10 seconds to batch requests to a single Durable Object, deferring processing until there is enough work in the queue for it to be worthwhile to process them.

export default {
  async fetch(request, env) {
    let id = env.BATCHER.idFromName("foo");
    return await env.BATCHER.get(id).fetch(request);
  },
};

const SECONDS = 1000;

export class Batcher {
  constructor(state, env) {
    this.state = state;
    this.storage = state.storage;
    // Restore the batch counter from storage on wake-up;
    // blockConcurrencyWhile() holds incoming requests until this finishes.
    this.state.blockConcurrencyWhile(async () => {
      let vals = await this.storage.list({ reverse: true, limit: 1 });
      this.count = vals.size == 0 ? 0 : parseInt(vals.keys().next().value);
    });
  }
  async fetch(request) {
    this.count++;

    // If there is no alarm currently set, set one for 10 seconds from now
    // Any further POSTs in the next 10 seconds will be part of this batch.
    let currentAlarm = await this.storage.getAlarm();
    if (currentAlarm == null) {
      this.storage.setAlarm(Date.now() + 10 * SECONDS);
    }

    // Add the request to the batch.
    await this.storage.put(this.count, await request.text());
    return new Response(JSON.stringify({ queued: this.count }), {
      headers: {
        "content-type": "application/json;charset=UTF-8",
      },
    });
  }
  async alarm() {
    // The alarm handler runs once the scheduled time arrives.
    let vals = await this.storage.list();
    // Send the whole batch upstream as JSON, then clear the queue.
    await fetch("http://example.com/some-upstream-service", {
      method: "POST",
      body: JSON.stringify(Array.from(vals.values())),
    });
    await this.storage.deleteAll();
    this.count = 0;
  }
}

Once every 10 seconds, the alarm() handler will be called. In the event an unexpected error terminates the Durable Object, it will be re-instantiated on another machine, following a short delay, after which it can continue processing.

Under the hood, Alarms are implemented by making reads and writes to the storage layer. This means Alarm get and set operations follow the same rules as any other storage operation – writes are coalesced with other writes, and reads have a defined ordering. See our blog post on the caching layer we implemented for Durable Objects for more information.

Durable Objects Alarms guarantee fault-tolerance

Alarms are designed to have no single point of failure and to run entirely on our edge – every Cloudflare data center running Durable Objects is capable of running alarms, including migrating Durable Objects from unhealthy data centers to healthy ones as necessary to ensure that their Alarm executes. Single failures should resolve in under 30 seconds, while multiple failures may take slightly longer.

We achieve this by storing alarms in the same distributed datastore that backs the Durable Object storage API. This allows alarm reads and writes to behave identically to storage reads and writes and to be performed atomically with them, and ensures that alarms are replicated across multiple datacenters.

Within each data center capable of running Durable Objects, there are multiple processes responsible for tracking upcoming alarms and triggering them, providing fault tolerance and scalability within the data center. A single elected leader in each data center is responsible for detecting failure of other data centers and assigning responsibility of those alarms to healthy local processes in its own data center. In the event of leader failure, another leader will be elected and become responsible for executing Alarms in the data center. This allows us to guarantee at-least-once execution for all Alarms.

How do I get started?

Alarms are a great way to build new distributed primitives, like queues, atop Durable Objects. They also provide a method for guaranteeing work within a Durable Object will complete, without relying on a client request to “kick” the Object.

You can get started with Alarms now by enabling Durable Objects in the Cloudflare dashboard. For more info, check the developer docs or jump in our Discord.

Announcing Workers for Platforms: making every application on the Internet more programmable

Post Syndicated from Rita Kozlov original https://blog.cloudflare.com/workers-for-platforms/

As a business, whether a startup or Fortune 500 company, your number one priority is to make your customers happy and successful with your product. To your customers, however, success and happiness sometimes seems to be just one feature away.

“If only you could customize X, we’ll be able to use your product” – the largest prospect in your pipeline. “If you just let us do Y, we’ll expand our usage of your product by 10x” – your most strategic existing customer.

You want your product to be everything to everybody, but engineering can only move so quickly. So, what gives?

Today, we’re announcing Workers for Platforms, our tool suite to help make any product programmable, and help our customers deliver value to their customers and developers instantaneously.

A more programmable interface

One way to give your customers the ability to programmatically interact with your product is by providing them with APIs. That is a big part of why APIs are so prolific today — enabling code (whether your own, or that of a 3rd party) to engage with your applications is nothing short of revolutionary.

But there’s still a problem. While APIs can give developers the ability to interact with your application programmatically, developers are ultimately always limited by the abstractions exposed to them by the API. You, an application owner, have to have predicted how the customer would use your product, and then built out the API to support the use case. If there’s one thing I have learned as a product manager, it’s almost impossible to predict how customers will use a product. And if there’s a second thing I’ve learned, it’s that even with plentiful engineering resources, it’s also almost impossible to build all the functionality required to keep said customers happy.

There is another way, however.

Functions, in contrast to APIs, provide the lowest level primitives (rather than abstractions on top of them). This lets the developer define the right behavior from there — and they can even define their own APIs on top.

In this sense, functions and APIs are actually complementary to each other — you may even choose to call another API directly from your function. For example, if you’re handling an event in a messaging system, you could implement your own feature to send an email by calling an email API, or create a ticket in your ticketing system, etc.
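
Here’s a minimal sketch of that idea, where the event shape and the email endpoint are hypothetical:

export default {
  async fetch(request, env) {
    // Hypothetical event delivered by the messaging system.
    const event = await request.json();

    if (event.type === "message.created") {
      // Define your own behavior: here, calling out to a (hypothetical)
      // email API in response to the event.
      await fetch("https://api.example-email.com/send", {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify({
          to: event.author,
          subject: "New message received",
          text: event.text,
        }),
      });
    }
    return new Response("ok");
  },
};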

This gets at why we’re so excited about Workers for Platforms: it enables you to expose a direct way for your customers’ developers to bring their own logic to any application. We think it’s going to unlock a wave of customer-led innovation on top of companies that adopt it, and has the potential to be as impactful to building applications on the web as the API has been.

A better experience for developers

While Workers for Platforms exposes a more powerful paradigm for making products programmable, it also results in a better experience for you as a developer.

Today, as a developer, before you can even get started using APIs or webhooks, there’s a list of tedious tasks you have to deal with first. You have to set up somewhere to host your code, whether a server or a serverless function, and expose it via an external endpoint so it can receive events. You have to deal with ops, custom tokens, and figuring out a new authentication schema, all before you get started. Then you have to maintain that service, and make sure it stays up so that events are always processed properly.

With functions embedded directly into the products you’re using, you can just start writing the code.

Why hasn’t this model been embraced until now?

Allowing developers to program how events work seems obvious, but just because it’s obvious doesn’t mean it’s easy.

At Cloudflare, we encountered this very problem five years ago — we were onboarding larger and larger customers onto our network, each needing to dictate the fate of a request in their own way. While Page Rules offered a way to modify behavior by URL, customers wanted to control behavior based on cookie, header, geolocation, and more!

We realized our engineering team couldn’t keep up with every request, so we decided to allow customers to bend our product to their own needs.

As we looked for an approach to this problem, we looked for a solution that would meet the two following requirements:

  1. Performance requirement: it’s unacceptable for a CDN, which should make your site faster, to introduce latency. How do we make this so fast you don’t even notice it’s there?
  2. Security requirement: how do we run untrusted code securely?

While these requirements are especially critical when you offer performance and security products, solving these challenges matters whenever you give your customers the ability to program your product. If the function needs to run on the critical path to your user, introducing latency is equally unacceptable. And of course, no one wants to get breached just so their users can program their product.

Creating a really fast and secure multi-tenant environment is no easy feat.

When we evaluated our options for solving this problem, we first turned to technologies that existed for solving it on the server — serverless functions already existed at the time, but were powered by containers, which would introduce cold starts, which was, well, a non-starter. So we turned to the browser, or specifically Chrome, which was powered by V8, and decided to take the same approach, and run it on our servers.

And while the approach sounds simple (and perhaps in retrospect obvious, as these things tend to seem), running a large multi-tenant development platform at scale is no small effort. If a part of the purpose of allowing customers to program your offering is to free up engineering efforts to focus on building new features, the effort it takes to maintain and scale such a development platform may defeat the purpose.

What we realized recently was that we weren’t alone in trying to solve this problem.

Companies like Shopify, building their next generation programmable storefront, Oxygen, were trying to solve the same thing. They wanted to enable their customers to run custom storefronts, and be able to offer the best performance possible, while maintaining a secure, multi-tenant environment.

“Shopify is the Internet’s commerce infrastructure, with millions of merchants using the platform,” said Zach Koch, product director, custom storefronts, at Shopify. “Partnering with Cloudflare, we’re able to give developers the tools they need to build unique and performant storefronts. We are excited to work with Cloudflare to alleviate some complexities of building commerce experiences – like scalability and global availability – so that developers can instead focus on what makes their brand distinct.”

How can you build your next platform on Workers for Platforms?

Working with platforms like Shopify to help them address their developers’ needs helped us realize another thing — developer experience is not one-size-fits-all. That is, while we’re building our platform for a broad set of developers, eCommerce developers might have a much more specialized set of needs, best solved by a tailored developer experience. And, while the underlying technology is the same, it doesn’t make sense to make platforms build their experiences using the same high-level concepts our direct customers use.

Since no one knows your customers better than you, we want you, the platform provider,  to design the best experience for your users. Workers for Platforms exposes a new set of tools and APIs to integrate directly into the deployment flow you want to design (see what we did there?).

Tags API to manage your functions at scale

Whenever a developer wants to deploy a script on your platform, you can call our APIs to deploy a new Worker in the background. Unlike our traditional Workers offering, Workers for Platforms is designed to be used at scale, managing hundreds of thousands to millions of Cloudflare Workers.

Depending on how you manage your deployment services or users, we now also provide the option to use tags to manage groupings of scripts. For example, if a user deletes their account, you may want to make sure all their Workers are cleaned up. With tags, you can add arbitrary tags (such as a user ID) to each script to enable bulk actions, as sketched below.
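
Here’s a sketch of what that cleanup could look like; the endpoint and the tags query parameter are assumptions for illustration, so consult the Workers for Platforms API documentation for the real shape:

// Hypothetical sketch: remove every script tagged with a user's ID
// when that user deletes their account. The endpoint and "tags"
// query parameter are assumptions, not the documented API.
async function cleanUpUserScripts(accountId, userId, apiToken) {
  const url =
    `https://api.cloudflare.com/client/v4/accounts/${accountId}` +
    `/workers/scripts?tags=${encodeURIComponent(userId)}`;
  const response = await fetch(url, {
    method: "DELETE",
    headers: { Authorization: `Bearer ${apiToken}` },
  });
  return response.ok;
}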

Trace Workers

Where there’s smoke, there’s fire, and where there’s code, well, bugs are also bound to be. When giving developers the tools to write and deploy code, you must also give them the means to debug it.

Trace Workers allow you to collect any information about a request that was handled by a Worker, including any logs or exceptions, and pass them onto your customer. A Trace Worker is a Worker that will receive information about the execution of other Workers, and can forward it to a destination of your choosing, enabling use cases such as live logging or long term storage.

Here is a simple trace Worker that sends its trace data to an HTTP endpoint:

addEventListener("trace", event => {
  event.waitUntil(fetch("http://example.com/trace", {
    method: "POST",
    body: JSON.stringify(event.traces),
  }))
})

Here is an example of what the data in event.traces might look like:

[
  {
    "scriptName": "Example script",
    "outcome": "exception",
    "eventTimestamp": 1587058642005,
    "event": {
      "request": {
        "url": "https://example.com/some/requested/url",
        "method": "GET",
        "headers": [
          "cf-ray": "57d55f210d7b95f3",
          "x-custom-header-name": "my-header-value"
        ],
        "cf": {
          "colo": "SJC"
        }
      },
    },
    "logs": [
      {
        "message": ["string passed to console.log()"],
        "level": "log",
        "timestamp": 1587058642005
      }
    ],
    "exceptions": [
      {
        "name": "Error",
        "message": "Threw a sample exception",
        "timestamp": 1587058642005
      }
    ]
  }
]

Chaining multiple Workers together using Dynamic Dispatch

From working with a few of our early customers, another need we were hearing about often was the ability to run your own code, before running your customer’s code. Perhaps you want to run a layer of authentication, sanitize input or output, or even provide useful information downstream (like user or account IDs).

For this you may want to maintain your own Worker. However, when it’s done executing, you want to be able to call the next Worker, with your customer’s code.

Example:

let user_worker = dispatcher.get('customer-worker-123');
let response = await user_worker.fetch(request);
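
Putting it together, a dispatch Worker might run the platform’s own checks first and then hand off to the customer’s Worker. This is a sketch; the dispatcher binding and naming scheme are illustrative:

export default {
  async fetch(request, env) {
    // Run the platform's own logic first, e.g. authentication.
    const apiKey = request.headers.get("x-api-key");
    if (!apiKey) {
      return new Response("Missing API key", { status: 401 });
    }

    // Decide which customer Worker should handle this request
    // (here derived from the hostname; any scheme works).
    const customerId = new URL(request.url).hostname.split(".")[0];
    const userWorker = env.dispatcher.get(`customer-worker-${customerId}`);

    // Hand the request off to the customer's code.
    return await userWorker.fetch(request);
  },
};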

Custom domains, and more!

The features above are only the new Workers features we enabled for our customers as of this week, but our goal is to provide all the tools you need to build your platform. For example, you can use Workers for Platforms with Cloudflare for SaaS to create custom domains. (And stay tuned for the “and more!”).

How do I get access?

As is the case with any new product we release, we have no doubt we have so much to learn from our customers and their use cases. Since we want to support you, and make sure you’re set up for success, if you’re interested, we’d love to get to know you and your use case, and get you set up with all the tools you need. To get started, we ask that you fill out our form, and we’ll  get in touch with you.

In the meantime, you’re welcome to get started checking out our developer docs, or saying hi in our Discord.

Just getting started

We faced this problem ourselves five years ago — we needed to give our customers the ability to augment our offering in a way that worked for them, so we did just that when we launched Cloudflare Workers. Allowing our customers to program our global network to meet their needs has enabled us to support more customers on our development platform, while enabling our engineering team to focus on turning the most requested customizations into features.

We look forward to seeing both what your developers build on your platform (we believe you yourself will be surprised by the use cases developers come up with, ones you could never have dreamed up), and what your engineering team is able to tackle in parallel!

Workers visibility: announcing Logpush for Worker’s Trace Events

Post Syndicated from Tanushree Sharma original https://blog.cloudflare.com/logpush-for-workers/

Writing an application is like building a rocket. Countless hours in development and thousands of moving parts all come down to one moment – launch day. Picture the countdown: T minus 10 seconds. The entire team is making sure that things are running smoothly by monitoring dashboards that measure the health of every part of the system.

It’s every developer’s dream to get the level of visibility that NASA has in their mission control room, but for their own code. For flight directors and engineering directors alike, it’s important to have visibility into the systems that are built throughout development and after release. Today, we’re excited to announce Logpush for Worker’s Trace Events, making it easier than ever to gain visibility into applications built on Workers.

Workers Visibility Today

Today, we have lots of tools that are used to find out what’s happening in a Worker.

These tools are awesome for debugging, generalizing trends, and monitoring Workers with third-party services. They emphasize ease of use and make it effortless to get visibility quickly from your Workers.

As Workers have evolved, we’re now seeing more adoption from larger enterprises and platform companies who are using Workers as a foundation for their customers to develop on top of. When building complex and dynamic applications, customers, especially those building on Workers in the SaaS world, need access to raw logs.

Bringing Trace Events to Logpush

Coming soon — we’re adding Workers execution logs to Logpush! Cloudflare Enterprise customers get access to Logpush for any of our available products, including CDN, WAF, Spectrum and many more. We’re excited to add Workers to this list.

Logpush for Worker’s Trace Events will include unstructured console.log() messages, exceptions, and metadata about requests/responses. Here is a sample Trace Event for a fetch request:

{"accountID":123456,"scriptName":"cloudflare-workers-script","outcome":"ok","duration":0.5,"CPUTime":4,"eventType":"fetch","event":{"request":{"url":"https://workersdevtest.com","method":"GET","headers":{"accept":"*/*","accept-encoding":"gzip","connection":"Keep-Alive"},"cf":{"clientTcpRtt":40,"tlsVersion":"TLSv1.2","httpProtocol":"HTTP/2","edgeRequestKeepAliveStatus":1,"country":"CA","asn":16591}},"subrequests":{"request":{"url":"https://example.com","method":"GET","headers":{"x-custom-header":"my-header-value"}},"logs":[{"message":["foo"],"level":"log","timestamp":1587491479166}],"exceptions":[]}}}

With this new dataset, our Enterprise customers will be able to send Workers logs to their preferred cloud storage destination such as GCS or R2 or to analysis platforms like Splunk or New Relic. Logpush handles batching and can scale no matter how much traffic your Worker gets.

This brings new ways to get transparency into Workers! You can pinpoint when a fetch request fails, find out which call is adding the most lag in your application, and track down specific log lines to debug the end user experience. Also, combine Workers logs with HTTP request logs to get a better picture of the full request lifecycle.

It also opens up doors for SaaS companies building on Workers. SaaS companies can get visibility into how their customers’ applications are performing and expose logs to their customers’ developers to make debugging and troubleshooting much easier.

Mission Control in the making

wrangler tail, Workers Analytics Engine (coming later this week!) and Logpush for Worker’s Trace Events are an elite trio to give visibility into every aspect of a Worker.

When you’re deep in the mix of development, wrangler tail is by your side to help you crush bugs and eliminate errors.  With Workers Analytics Engine, you can instrument business logic and query aggregates within seconds to populate dashboards for monitoring. Logpush for Trace Events is there for when you need to debug very specific cases and get an exact record of what happened.

Customers big and small are using Cloudflare Workers for their next launch, and we’re building tools to make that happen successfully. We’re bringing Logpush for Trace Events to our Enterprise customers very soon. Stay tuned for updates.

A new era for Cloudflare Pages builds

Post Syndicated from Nevi Shah original https://blog.cloudflare.com/cloudflare-pages-build-improvements/

Music is flowing through your headphones. Your hands are flying across the keyboard. You’re stringing together a masterpiece of code. The momentum is building up as you put on the finishing touches of your project. And at last, it’s ready for the world to see. Heart pounding with excitement and the feeling of victory, you push changes to the main branch…. only to end up waiting for the build to execute each step and spit out the build logs.

Starting afresh

Since the launch of Cloudflare Pages, there is no doubt that the build experience has been its biggest source of criticism. From the amount of waiting to the inflexibility of the CI workflow, Pages had a lot of opportunity for growth and improvement. With Pages, our North Star has always been designing a developer platform that fits right into your workflow and oozes simplicity. User pain points have been and always will be our priority, which is why today we are thrilled to share a list of exciting updates to our build times, logs and settings!

Over the last three quarters, we implemented a new build infrastructure that speeds up Pages builds, so you can iterate quickly and efficiently. In February, we soft released the Pages Fast Builds Beta, allowing you to opt in to this new infrastructure on a per-project basis. This not only allowed us to test our implementation, but also gave our community the opportunity to try it out and give us direct feedback in Discord. Today we are excited to announce the new build infrastructure is now generally available and automatically enabled for all existing and new projects!

Faster build times

As a developer, your time is extremely valuable, and we realize Pages builds were slow. It was obvious that creating an infrastructure that built projects faster and smarter was one of our top requirements.

Looking at a Pages build, there are four main steps: (1) initializing the build environment, (2) cloning your git repository, (3) building the application, and (4) deploying to Cloudflare’s global network. Each of these steps is a crucial part of the build process, and upon investigating areas suitable for optimization, we directed our efforts to cutting down on build initialization time.

In our old infrastructure, every time a build job was submitted, we created a new virtual machine to run that build, costing our users precious dev time. In our new infrastructure, we start jobs on machines that are ready and waiting to be used, taking a major chunk of time away from the build initialization step. This step previously ran for 2+ minutes, but with our new infrastructure update, projects are expected to see a build initialization time cut down to 2-3 SECONDS.

This means less time waiting and more time iterating on your code.

Fast and secure

In our old build infrastructure, because we spun up a new virtual machine (VM) for every build, it would take several minutes to boot up and initialize with the Pages build image needed to execute the build. Alternatively, one could reuse a pool of machines, assigning each new build to the next available container; containers are quick to start, but they share a kernel with the host operating system, making them far less isolated and posing a huge security risk. This could allow a malicious actor to perform a “container escape” to break out of their sandbox. We wanted the best of both worlds: the speed of a container with the isolation of a virtual machine.

Enter gVisor, a container sandboxing technology that drastically limits the attack surface of a host. In the new infrastructure, each container running with gVisor is given its own independent application “kernel,” instead of directly sharing the kernel with its host. Then, to address the speed, we keep a cluster of virtual machines warm and ready to execute builds so that when a new Pages deployment is triggered, it takes just a few seconds for a new gVisor container to start up and begin executing meaningful work in a secure sandbox with near native performance.

Stream your build logs

After we solidified a fast and secure build, we wanted to enhance the user facing build experience. Because a build may not be successful every time, providing you with the tools you need to debug and access that information as fast as possible is crucial. While we have a long list of future improvements for a better logging experience, today we are starting by enabling you to stream your build logs.

Prior to today, you had to wait for a build to run through all the steps above before you could view the resulting build logs. Easily addressable issues, like a mistyped build command or a missing environment variable, required waiting for the entire build to finish before you could understand the problem.

Today, we’re giving you the power to understand your build issues as soon as they happen. Spend less time waiting for your logs and start debugging the events of your builds within a second or less after they happen!

Control Branch Builds

Finally, the build experience does not just include the events during execution but everything leading up to the trigger of a build. For our final trick, we’re enabling our users to have full control of the precise branches they’d like to include and exclude for automatic deployments.

Before today, Pages submitted builds for every commit in both production and preview environments, which led to queued builds and even more waiting if you exceeded your concurrent build limit. We wanted to provide even more flexibility to control your CI workflow. Now you can configure your build settings to specify branches to build, as well as skip ad hoc commits.

Specify branches to build

While “unlimited staging” is one of Pages’ greatest advantages, depending on your setup, sometimes automatic deployments to the preview environment can cause extra noise.

In the Pages build configuration setting, you can specify automatic deployments to be turned off for the production environment, the preview environment, or specific preview branches. In a more extreme case, you can even pause all deployments so that any commit sent to your git source will not trigger a new Pages build.

Additionally, in your project’s settings, you can now configure the specific Preview branches you would like to include and exclude for automatic deployments. To make this configuration an even more powerful tool, you can use wildcard syntax to set rules for existing branches as well as any newly created preview branches.

Read more in our Pages docs on how to get started with configuring automatic deployments with Wildcard Syntax.

Using CI Skip

Sometimes commits need to be skipped on an ad hoc basis. A small update to copy or a set of changes within a small timespan don’t always require an entire site rebuild. That’s why we also implemented a CI Skip command for your commit message, signaling to Pages that the update should be skipped by our builder.

With both CI Skip and configured build rules, you can keep track of your site changes in Pages’ deployment history.

Where we’re going

We’re extremely excited to bring these updates to you today, but of course, this is only the beginning of improving our build experience. Over the next few quarters, we will be bringing more to the build experience to create a seamless developer journey from site inception to launch.

Incremental builds and caching

From beta testing, we noticed that our new infrastructure can be less impactful on larger projects that use heavier frameworks such as Gatsby. We believe that every user on our developer platform, regardless of their use case, has the right to fast builds. Up next, we will be implementing incremental builds to help Pages identify only the deltas between commits and rebuild only files that were directly updated. We will also be implementing other caching strategies such as caching external dependencies to save time on subsequent builds.

Build image updates

Because we’ve been using the same build image we launched Pages with back in 2021, we are going to make some major updates. Languages release new versions all the time, and we want to make sure we update and maintain the latest versions. An updated build image will mean faster builds, more security and of course supporting all the latest versions of languages and tools we provide. With new build image versions being released, we will allow users to opt in to the updated builds in order to maintain compatibility with all existing projects.

Productive error messaging

Lastly, while streaming build logs helps you to identify those easily addressable issues, the infamous “Internal error occurred” is sometimes a little more cryptic to decipher depending on the failure. While we recently published a “Debugging Cloudflare Pages” guide, in the future we’d like to provide the error feedback in a more productive manner, so you can easily identify the issue.

Have feedback?

As always, your feedback defines our roadmap. With all the updates we’ve made to our build experience, it’s important we hear from you! You can get in touch with our team directly through Discord. Navigate to our Pages specific section and check out our various channels specific to different parts of the product!

Join us at Cloudflare Connect!

Interested in learning more about building with Cloudflare Pages? If you’re based in the New York City area, join us on Thursday, May 12th for a series of workshops on how to build a full stack application on Pages! Follow along with a fully hands-on lab, featuring Pages in conjunction with other products like Workers, Images and Cloudflare Gateway, and hear directly from our product managers. Register now!

Cloudflare Relay Worker

Post Syndicated from Matt Boyle original https://blog.cloudflare.com/cloudflare-relay-worker/

Our Notification Center offers first class support for a variety of popular services (a list of which are available here). However, even with such extensive support, you may use a tool that isn’t on that list. In that case, it is possible to leverage Cloudflare Workers in combination with a generic webhook to deliver notifications to any service that accepts webhooks.

Today, we are excited to announce that we are open sourcing a Cloudflare Worker that will make it as easy as possible for you to transform our generic webhook response into any format you require. Here’s how to do it.

For this example, we are going to write a Cloudflare Worker that takes a generic webhook response, transforms it into the correct format and delivers it to Rocket Chat, a popular customer service messaging platform.  When Cloudflare sends you a generic webhook, it will have the following schema, where “text” and “data” will vary depending on the alert that has fired:

{
   "name": "Your custom webhook",
   "text": "The alert text",
   "data": {
       "some": "further",
       "info": [
           "about",
           "your",
           "alert",
           "in"
       ],
       "json": "format"
   },
   "ts": 123456789
}

Whereas Rocket Chat is looking for this format:

{
   "text": "Example message",
   "attachments": [
       {
           "title": "Rocket Chat",
           "title_link": "https://rocket.chat",
           "text": "Rocket.Chat, the best open source chat",
           "image_url": "/images/integration-attachment-example.png",
           "color": "#764FA5"
       }
   ]
}

Getting Started

Firstly, you’ll need to ensure you are ready to develop on the Cloudflare Workers platform. You can find more information on how to do that here. For the purpose of this example, we will assume you have a Cloudflare account and Wrangler, the Workers CLI, setup.

Next, let us see the steps to extend the notifications system in detail.

Step 1
Clone the webhook relay worker GitHub repository: git clone git@github.com:cloudflare/cf-webhook-relay.git

Step 2
Check the webhook payload format required by your communication tool. In this specific case, it would look like the Rocket Chat example payload shared above.

Step 3
Sign up for Rocket Chat and add a webhook integration to accept incoming webhook notifications.

Step 4
Configure an encrypted wrangler secret for request authentication, and the Rocket Chat URL for sending requests, in your Worker: Environment variables · Cloudflare Workers docs (for this example, the Rocket Chat URL is not encrypted).

Step 5
Modify your Worker to accept POST webhook requests, authenticating them by checking that the configured secret is present in the cf-webhook-auth header.

if (request.headers.get("cf-webhook-auth") !== WEBHOOK_SECRET) {
    return new Response(":(", {
        headers: {'content-type': 'text/plain'},
        status: 401
    })
}

Step 6
Convert the incoming request payload from the notification system (like in the example shared above) to the Rocket Chat format in the worker.

let incReq = await request.json()
let msg = incReq.text
let webhookName = incReq.name
let rocketBody = {
    "text": webhookName,
    "attachments": [
        {
            "title": "Cloudflare Webhook",
            "text": msg,
            "title_link": "https://cloudflare.com",
            "color": "#764FA5"
        }
    ]
}

Step 7
Configure the Worker to send POST requests to the Rocket Chat webhook with the converted payload.

const rocketReq = {
    headers: {
        'content-type': 'application/json',
    },
    method: 'POST',
    body: JSON.stringify(rocketBody),
}
const response = await fetch(
    ROCKET_CHAT_URL,
    rocketReq,
)
const res = await response.json()
console.log(res)
return new Response(":)", {
    headers: {'content-type': 'text/plain'},
})
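
Putting steps 5 through 7 together, the core of the relay Worker looks roughly like this (a sketch using the service-worker syntax, assuming WEBHOOK_SECRET and ROCKET_CHAT_URL are configured as described in step 4):

addEventListener("fetch", (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // Step 5: authenticate the incoming webhook.
  if (request.headers.get("cf-webhook-auth") !== WEBHOOK_SECRET) {
    return new Response(":(", {
      headers: { "content-type": "text/plain" },
      status: 401,
    });
  }

  // Step 6: convert the generic payload into Rocket Chat's format.
  const incReq = await request.json();
  const rocketBody = {
    text: incReq.name,
    attachments: [
      {
        title: "Cloudflare Webhook",
        text: incReq.text,
        title_link: "https://cloudflare.com",
        color: "#764FA5",
      },
    ],
  };

  // Step 7: forward the converted payload to Rocket Chat.
  await fetch(ROCKET_CHAT_URL, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(rocketBody),
  });

  return new Response(":)", { headers: { "content-type": "text/plain" } });
}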

Step 8
Set up deployment configuration in your wrangler.toml file and publish your Worker. You can now see the Worker in the Cloudflare dashboard.

Step 9
You can manage and monitor the Worker with a variety of available tools.

Step 10
Add the Worker URL as a generic webhook to the notification destinations in the Cloudflare dashboard: Configure webhooks · Cloudflare Fundamentals docs.

Step 11
Create a notification with the destination as the configured generic webhook: Create a Notification · Cloudflare Fundamentals docs.

Step 12
Tada! With your Cloudflare Worker running, you can now receive all notifications in Rocket Chat. The same approach works for any communication tool.

We know that a notification system is essential for proactively monitoring any issues that may arise within a project. With this announcement, we are excited to make notifications available for any communication service without you having to worry too much about compatibility. We have lots of updates planned, like adding more alertable events to choose from and extending our support to a wide range of webhook services to receive them.

If you’re interested in building scalable services and solving interesting technical problems, we are hiring engineers on our team in Austin & Lisbon.

Happy Earth Day: Announcing Green Compute open beta

Post Syndicated from Kabir Sikand original https://blog.cloudflare.com/earth-day-2022-green-compute-open-beta/

At Cloudflare, we are on a mission to help build a better Internet. We continue to grow our network, and it is important for us to do so responsibly.

Since Earth Day 2021, some pieces of this effort have included:

And we are just getting started. We are working to make the Cloudflare network — and our customers’ websites, applications, and networks — as efficient as possible in terms of design, hardware, systems, and protocols. After all, we do not want to lose sight of our responsibilities to our home: our planet Earth.

Green Compute for Workers Cron Triggers

During Impact Week last year, we began testing Green Compute in a closed beta. Green Compute makes Workers Cron Triggers run only in facilities that are powered by renewable energy. We are hoping to incentivize more facilities to implement responsible climate and energy policies.

With Green Compute enabled, Workers Cron Triggers will run only on Cloudflare points of presence that are located in data centers that are powered by 100% renewable energy.

Based on carbon accounting and renewable energy standards like RE100, all Cloudflare operations are considered 100% powered by renewable energy because we purchase the same amount of renewable energy as the total energy we use globally.

However, the data centers we operate in are co-located with facilities owned by other companies. Although our renewable energy purchases cover the energy used by our equipment, all other energy consumed at that facility may or may not be renewable.

Renewable energy can be purchased in a number of ways, including:

  • Through on-site generation (wind turbines, solar panels)
  • Directly from renewable energy producers through contractual agreements called Power Purchase Agreements (PPA)
  • In the form of Renewable Energy Credits (RECs, IRECs, GoOs) from an energy credit market that helps offset energy use on-site

How to get started with Green Compute

Today, we are happy to announce that we are bringing this function to all our developers — Green Compute is now in open beta. To get started, head to your Workers App, click Change Setting under Compute Setting, and select Green Compute.

Now, any time you create a scheduled workload, you will see a leaf icon indicating that your schedules will be run in data centers powered by renewable energy sources. With just one click, you can help build a better Internet.

Next steps

  • Want to get more involved with the Cloudflare Developer community, help shape what we build next, or give feedback on Green Compute? Enter our Discord and join the conversation.
  • Learn more about how Cloudflare is helping build a better Internet via Cloudflare Impact.

Get updates on the health of your origin where you need them

Post Syndicated from Darius Jankauskas original https://blog.cloudflare.com/get-updates-on-the-health-of-your-origin-where-you-need-them/

We are thrilled to announce the availability of Health Checks in the Cloudflare Dashboard’s Notifications tab, available to all Pro, Business, and Enterprise customers. Now, you can get critical alerts on the health of your origin without checking your inbox! Keep reading to learn more about how this update streamlines notification management and unlocks countless ways to stay informed on the health of your servers.

Keeping your site reliable

We first announced Health Checks when we realized some customers were setting up Load Balancers for their origins to monitor the origins’ availability and responsiveness. The Health Checks product provides a similarly powerful interface to Load Balancing, offering users the ability to ensure their origins meet criteria such as reachability, responsiveness, correct HTTP status codes, and correct HTTP body content. Customers can also receive email alerts when a Health Check finds their origin is unhealthy based on their custom criteria. In building a more focused product, we’ve added a slimmer, monitoring-based configuration, Health Check Analytics, and made it available for all paid customers. Health Checks run in multiple locations within Cloudflare’s edge network, meaning customers can monitor site performance across geographic locations.

What’s new with Health Checks Notifications

Health Checks email alerts have allowed customers to respond quickly to incidents and guarantee minimum disruption for their users. As before, Health Checks users can still select up to 20 email recipients to notify if a Health Check finds their site to be unhealthy. And if email is the right tool for your team, we’re excited to share that we have jazzed up our notification emails and added details on which Health Check triggered the notification, as well as the status of the Health Check:

New Health Checks email format with Time, Health Status, and Health Check details

That being said, monitoring an inbox is not ideal for many customers needing to stay proactively informed. If email is not the communication channel your team typically relies upon, checking emails can at best be inconvenient and, at worst, allow critical updates to be missed. That’s where the Notifications dashboard comes in.

Users can now create Health Checks notifications within the Cloudflare Notifications dashboard. By integrating Health Checks with Cloudflare’s powerful Notification platform, we have unlocked myriad new ways to ensure customers receive critical updates for their origin health. One of the key benefits of Cloudflare’s Notifications is webhooks, which give customers the flexibility to sync notifications with various external services. Webhook responses contain JSON-encoded information, allowing users to ingest them into their internal or third-party systems.

For real-time updates, users can now use webhooks to send Health Check alerts directly to their preferred instant messaging platforms such as Slack, Google Chat, or Microsoft Teams, to name a few. Beyond instant messaging, customers can also use webhooks to send Health Checks notifications to their internal APIs or their Security Information and Event Management (SIEM) platforms, such as DataDog or Splunk, giving customers a single pane of glass for all Cloudflare activity, including notifications and event logs. For more on how to configure popular webhooks, check out our developer docs. Below, we’ll walk you through a couple of powerful webhook applications. But first, let’s highlight a couple of other ways the update to Health Checks notifications improves user experience.

By including Health Checks in the Notifications tab, users only need to access one page, a single source of truth, where they can manage their account notifications. For added ease of use, users can also migrate to Notification setup directly from the Health Check creation page. From within a Health Check, users will also be able to see which Notifications are tied to it.

Additionally, configuring notifications for multiple Health Checks is now simplified. Instead of configuring notifications one Health Check at a time, users can now set up notifications for multiple or even all Health Checks from a single workflow.

Also, users can now access Health Checks notification history, and Enterprise customers can integrate Health Checks directly with PagerDuty. Last but certainly not least, as Cloudflare’s Notifications infrastructure grows in capabilities, Health Checks will be able to leverage all of these improvements. This guarantees Health Checks users the most timely and versatile notification capabilities that Cloudflare offers now and into the future.

Setting up a Health Checks webhook

To get notifications for health changes at an origin, we first need to set up a Health Check for it. In this example, we’ll monitor HTTP responses: leave the Type and adjacent fields to their defaults. We will also monitor HTTP response codes: add `200` as an expected code, which will cause any other HTTP response code to trigger an unhealthy status.
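
If you prefer to script this step rather than use the dashboard, the Health Checks API can create the same monitor. The sketch below assumes the zones/{zone_id}/healthchecks endpoint and field names from Cloudflare’s public API reference, so double-check them against the current docs:

// Sketch: create the Health Check described above via the API.
// Endpoint and field names follow Cloudflare's public API reference
// and should be verified against the current docs.
async function createHealthCheck(zoneId, apiToken) {
  const response = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${zoneId}/healthchecks`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiToken}`,
        "content-type": "application/json",
      },
      body: JSON.stringify({
        name: "configuration-service",
        address: "example.com",
        type: "HTTP", // monitor HTTP responses, as above
        http_config: {
          // Any response code other than 200 marks the origin unhealthy.
          expected_codes: ["200"],
        },
      }),
    }
  );
  return await response.json();
}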

Creating the webhook notification policy

Once we’ve got our Health Check set up, we can create a webhook to link it to. Let’s start with a popular use case and send our Health Checks to a Slack channel. Before creating the webhook in the Cloudflare Notifications dashboard, we enable webhooks in our Slack workspace and retrieve the webhook URL of the Slack channel we want to send notifications to. Next, we navigate to our account’s Notifications tab to add the Slack webhook as a Destination, entering the name and URL of our webhook — the secret will populate automatically.

Webhook creation page with the following user input: Webhook name, Slack webhook url, and auto-populated Secret

Once we hit Save and Test, we will receive a message in the designated Slack channel verifying that our webhook is working.

Message sent in Slack via the configured webhook, verifying the webhook is working

This webhook can now be used for any notification type available in the Notifications dashboard. To have a Health Check notification sent to this Slack channel, simply add a new Health Check notification from the Notifications dashboard, selecting the Health Check(s) to tie to this webhook and the Slack webhook we just created. And, voilà! Anytime our Health Check detects a response status code other than 200 or goes from unhealthy to healthy, this Slack channel will be notified.

Health Check notification sent to Slack, indicating our server is online and Healthy.

Create an Origin Health Status Page

Let’s walk through another powerful webhooks implementation with Health Checks. Using the Health Check we configured in our last example, let’s create a simple status page using Cloudflare Workers and Durable Objects that stores an origin’s health, updates it upon receiving a webhook request, and displays a status page to visitors.

Writing our worker
You can find the code for this example in this GitHub repository, if you want to clone it and try it out.

We’ve got our Health Check set up, and we’re ready to write our worker and durable object. To get started, we first need to install wrangler, our CLI tool for testing and deploying more complex worker setups.

$ wrangler -V
wrangler 1.19.8

The examples in this blog were tested in this wrangler version.

Then, to speed up writing our worker, we will use a template to generate a project:

$ wrangler generate status-page https://github.com/cloudflare/durable-objects-template
$ cd status-page

The template has a Durable Object with the name Counter. We’ll rename that to Status, as it will store and update the current state of the page.

For that, we update wrangler.toml to use the correct name and type, and rename the Counter class in index.mjs.

name = "status-page"
# type = "javascript" is required to use the `[build]` section
type = "javascript"
workers_dev = true
account_id = "<Cloudflare account-id>"
route = ""
zone_id = ""
compatibility_date = "2022-02-11"
 
[build.upload]
# Upload the code directly from the src directory.
dir = "src"
# The "modules" upload format is required for all projects that export a Durable Objects class
format = "modules"
main = "./index.mjs"
 
[durable_objects]
bindings = [{name = "status", class_name = "Status"}]

Now, we’re ready to fill in our logic. We want to serve two different kinds of requests: one at /webhook that we will pass to the Notification system for updating the status, and another at / for a rendered status page.

First, let’s write the /webhook logic. We will receive a JSON object with a data and a text field. The `data` object contains the following fields:

time - The time when the Health Check status changed.
status - The status of the Health Check.
reason - The reason why the Health Check failed.
name - The Health Check name.
expected_codes - The status code the Health Check is expecting.
actual_code - The actual code received from the origin.
health_check_id - The id of the Health Check pushing the webhook notification. 

For the status page we are using the Health Check name, status, and reason (the reason a Health Check became unhealthy, if any) fields. The text field contains a user-friendly version of this data, but it is more complex to parse.

 async handleWebhook(request) {
  const json = await request.json();
 
  // Ignore webhook test notification upon creation
  if ((json.text || "").includes("Hello World!")) return;
 
  let healthCheckName = json.data?.name || "Unknown"
  let details = {
    status: json.data?.status || "Unknown",
    failureReason: json.data?.reason || "Unknown"
  }
  await this.state.storage.put(healthCheckName, details)
 }

Now that we can store status changes, we can use our state to render a status page:

 async statusHTML() {
  const statuses = await this.state.storage.list()
  let statHTML = ""
 
  for (let [hcName, details] of statuses) {
   const status = details.status || ""
   const failureReason = details.failureReason || ""
   let hc = `<p>HealthCheckName: ${hcName} </p>
         <p>Status: ${status} </p>
         <p>FailureReason: ${failureReason}</p> 
         <br/>`
   statHTML = statHTML + hc
  }
 
  return statHTML
 }
 
 async handleRoot() {
  // Empty until the first webhook notification has been received
  const statuses = await this.statusHTML()
 
  return new Response(`
     <!DOCTYPE html>
     <head>
      <title>Status Page</title>
      <style>
       body {
        font-family: Courier New;
        padding-left: 10vw;
        padding-right: 10vw;
        padding-top: 5vh;
       }
      </style>
     </head>
     <body>
       <h1>Status of Production Servers</h1>
       <p>${statuses}</p>
     </body>
     `,
   {
    headers: {
     'Content-Type': "text/html"
    }
   })
 }

Then, we can direct requests to our two paths while also returning an error for invalid paths within our durable object with a fetch method:

async fetch(request) {
  const url = new URL(request.url)
  switch (url.pathname) {
   case "/webhook":
    await this.handleWebhook(request);
    return new Response()
   case "/":
    return await this.handleRoot();
   default:
    return new Response('Path not found', { status: 404 })
  }
 }

Finally, we can call that fetch method from our worker, allowing the external world to access our durable object.

export default {
 async fetch(request, env) {
  return await handleRequest(request, env);
 }
}
async function handleRequest(request, env) {
 let id = env.status.idFromName("A");
 let obj = env.status.get(id);
 
 return await obj.fetch(request);
}

Testing and deploying our worker

When we’re ready to deploy, it’s as simple as running the following command:

$ wrangler publish --new-class Status

To test the change, we create a webhook pointing to the path the worker was deployed to. On the Cloudflare account dashboard, navigate to Notifications > Destinations, and create a webhook.

Webhook creation page, with the following user input: name of webhook, destination URL, and optional secret is left blank.

Then, while still in the Notifications dashboard, create a Health Check notification tied to the status page webhook and Health Check.

Notification creation page, with the following user input: Notification name and the Webhook created in the previous step added.

Before getting any updates the status-page worker should look like this:

Status page without updates reads: “Status of Production Servers”

Webhooks get triggered when the Health Check status changes. To simulate the change of Health Check status we take the origin down, which will send an update to the worker through the webhook. This causes the status page to get updated.

Status page showing the name of the Health Check as “configuration-service”, Status as “Unhealthy”, and the failure reason as “TCP connection failed”. 

Next, we simulate a return to normal by changing the Health Check expected status back to 200. This will make the status page show the origin as healthy.

Status page showing the name of the Health Check as “configuration-service”, Status as “Healthy”, and the failure reason as “No failures”. 

If you add more Health Checks and tie them to the webhook durable object, you will see the data being added to your account.

Authenticating webhook requests

We already have a working status page! However, anyone possessing the webhook URL would be able to forge a request, pushing arbitrary data to an externally-visible dashboard. Obviously, that’s not ideal. Thankfully, webhooks provide the ability to authenticate these requests by supplying a secret. Let’s create a new webhook that will provide a secret on every request. We can generate a secret by generating a pseudo-random UUID with Python:

$ python3
>>> import uuid
>>> uuid.uuid4()
UUID('4de28cf2-4a1f-4abd-b62e-c8d69569a4d2')
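
If you’d rather stay in JavaScript, modern runtimes (including Workers) expose the same capability through crypto.randomUUID():

// Generate a random UUID to use as the webhook secret.
const secret = crypto.randomUUID();
console.log(secret); // e.g. "4de28cf2-4a1f-4abd-b62e-c8d69569a4d2"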

Now we create a new webhook with the secret.

Webhook creation page, with the following user input: name of webhook, destination URL, and now with a secret added.

We also want to provide this secret to our worker. Wrangler has a command that lets us save the secret.

$ wrangler secret put WEBHOOK_SECRET
Enter the secret text you'd like assigned to the variable WEBHOOK_SECRET on the script named status-page:
4de28cf2-4a1f-4abd-b62e-c8d69569a4d2
🌀 Creating the secret for script name status-page
✨ Success! Uploaded secret WEBHOOK_SECRET.

Wrangler will prompt us for the secret, then provide it to our worker. Now we can check for the token upon every webhook notification, as the secret is provided by the header cf-webhook-auth. By checking the header’s value against our secret, we can authenticate incoming webhook notifications as genuine. To do that, we modify handleWebhook:

 async handleWebhook(request) {
  // ensure the request we receive is from the Webhook destination we created
  // by examining its secret value, and rejecting it if it's incorrect
  if (request.headers.get('cf-webhook-auth') !== this.env.WEBHOOK_SECRET) {
   return
  }
  // ...old code here
 }

This origin health status page is just one example of the versatility of webhooks, which let us leverage Cloudflare Workers and Durable Objects to support a custom Health Checks application. From highly custom use cases such as this one to more straightforward, out-of-the-box solutions, pairing webhooks with Health Checks lets users respond to critical origin health updates by delivering that information where it will be most impactful.

Migrating to the Notifications Dashboard

The Notifications dashboard is now the centralized location for alerting across most Cloudflare services. In the interest of consistency and streamlined administration, we will soon limit Health Checks notification setup to the Notifications dashboard. Many existing Health Checks customers have emails configured via our legacy alerting system. Over the next three months, we will continue to support the legacy system, giving customers time to transition their Health Checks notifications to the Notifications dashboard. Customers can expect the following timeline for phasing out existing Health Checks notifications:

  • For now, customers subscribed to legacy emails will continue to receive them unchanged, so any parsing infrastructure will still work. From within a Health Check, you will see two options for configuring notifications: the legacy format and a deep link to the Notifications dashboard.
  • On May 24, 2022, we will disable the legacy method for the configuration of email notifications from the Health Checks dashboard.
  • On June 28, 2022, we will stop sending legacy emails, and adding new emails at the /healthchecks endpoint will no longer send email notifications.

We strongly encourage all our users to migrate existing Health Checks notifications to the Notifications dashboard within this timeframe to avoid lapses in alerts.

IDC MarketScape positions Cloudflare as a Leader among worldwide Commercial CDN providers

Post Syndicated from Vivek Ganti original https://blog.cloudflare.com/idc-marketscape-cdn-leader-2022/

We are thrilled to announce that Cloudflare has been positioned in the Leaders category in the IDC MarketScape: Worldwide Commercial CDN 2022 Vendor Assessment (doc #US47652821, March 2022).

You can download a complimentary copy here.

The IDC MarketScape evaluated 10 CDN vendors based on their current capabilities and future strategies for delivering Commercial CDN services. Cloudflare is recognized as a Leader.

At Cloudflare, we release products at a dizzying pace. When we talk to our customers, we hear again and again that they appreciate Cloudflare for our relentless innovation. In 2021 alone, over the course of seven Innovation Weeks, we launched a diverse set of products and services that made our customers’ experiences on the Internet even faster, more secure, more reliable, and more private.

We leverage economies of scale and network effects to innovate at a fast pace. Of course, there’s more to our secret sauce than our pace of innovation. In the report, IDC notes that Cloudflare is “a highly innovative vendor and continues to invest in its competencies to support advanced technologies such as virtualization, serverless, AI/ML, IoT, HTTP3, 5G and (mobile) edge computing.” In addition, IDC also recognizes Cloudflare for its “integrated SASE offering (that) is appealing to global enterprise customers.”

Built for the modern Internet

Building fast scalable applications on the modern Internet requires more than just caching static content on servers around the world. Developers need to be able to build applications without worrying about underlying infrastructure. A few years ago, we set out to revolutionize the way applications are built, so developers didn’t have to worry about scale, speed, or even compliance. Our goal was to let them build the code, while we handle the rest. Our serverless platform, Cloudflare Workers, aimed to be the easiest, most powerful, and most customizable platform for developers to build and deploy their applications.

Workers was designed from the ground up for an edge-first serverless model. Since Cloudflare started with a distributed edge network, rather than trying to push compute from large centralized data centers out into the edge, working under those constraints forced us to innovate.

Today, Workers serves hundreds of thousands of developers, ranging from hobbyists to enterprises all over the world, handling millions of requests per second.

According to the IDC MarketScape: “The Cloudflare Workers developer platform, based on an isolate serverless architecture, is highly customizable and provides customers with a shortened time to market which is crucial in this digitally led market.”

Data you care about, at your fingertips

Our customers today have access to extensive analytics on the dashboard and via the API around network performance, firewall actions, cache ratios, and more. We provide analytics based on raw events, which means that we go beyond simple metrics and provide powerful filtering and analysis capabilities on high-dimensionality data.

And our insights are actionable. For example, customers looking to optimize cache performance can analyze specific URLs and see not just hits and misses but also content that is expired or revalidated (indicating a short cache TTL). All events, both directly in the console and in the logs, are available within 30 seconds or less.

The IDC MarketScape notes that the “self-serve portal and capabilities that include dashboards with detailed analytics as well as actionable content delivery and security analytics are complemented by a comprehensive enhanced services suite for enterprise grade customers.”

The only unified SASE solution in the market

Cloudflare’s SASE offering, Cloudflare One, continues to grow and provides a unified and comprehensive solution to our customers. With our SASE offering, we aim to be the network that any business can plug into to deliver the fastest, most reliable, and most secure experiences to their customers. Cloudflare One combines network connectivity services with Zero Trust security services on our purpose-built global network.

Cloudflare Access and Gateway services natively work together to secure connectivity for any user to any application and Internet destination. To enhance threat and data protection, Cloudflare Browser Isolation and CASB services natively work across both Access and Gateway to fully control data in transit, at rest, and in use.

The old model of the corporate network has been made obsolete by mobile, SaaS, and the public cloud. We believe Zero Trust networking is the future, and we are proud to be enabling that future. The IDC MarketScape notes: “Cloudflare’s enterprise security Zero Trust services stack is extensive and meets secure access requirements of the distributed workforce. Its data localization suite and integrated SASE offering is appealing to global enterprise customers.“

A one-stop, truly global solution

Many global companies looking to do business in China are often restricted in their operations by the country’s unique regulatory, political, and trade policies.

Core to Cloudflare’s mission of helping build a better Internet is making it easy for our customers to improve the performance, security, and reliability of their digital properties, no matter where in the world they might be, and this includes mainland China. Our partnership with JD Cloud & AI allows international businesses to grow their online presence in China without having to worry about managing separate tools with separate vendors for security and performance in China.

Just last year, we made advancements that allow customers to serve their DNS in mainland China. This means DNS queries are answered directly from one of the JD Cloud Points of Presence (PoPs), leading to faster response times and improved reliability. This is in addition to providing DDoS protection, WAF, serverless compute, SSL/TLS, and caching services from more than 35 locations in mainland China.

Here’s what the IDC MarketScape noted about Cloudflare’s China network: “Cloudflare’s strategic partnership with JD Cloud enables the vendor to provide its customers cached content in-country at any of their China data centers from origins outside of mainland China and provide the same Internet performance, security, and reliability experience in China as the rest of the world.”

A unified network that is fast, secure, reliable, customizable, and global

One of the earliest architectural decisions we made was to run the same software stack of our services across our ever-growing fleet of servers and data centers. So whether it is content caching, serverless compute, zero trust functionality, or our other performance, security, or reliability services, we run them from all of our physical points of presence. This also translates into faster performance and robust security policies for our customers, all managed from the same dashboard or APIs. This strategy has been a key enabler for us to expand our customer base significantly over the years. Today, Cloudflare’s network spans 250 cities across 100+ countries and has millions of customers, of which more than 140,000 are paying customers.

In the IDC MarketScape: Worldwide Commercial CDN 2022 Vendor Assessment, IDC notes, “[Cloudflare’s] clear strategy to invest in new technology but also expand its network as well as its sales machine across these new territories has resulted in a tremendous growth curve in the past years.”

To that, we’d humbly like to say that we are just getting started.

Stay tuned for more product and feature announcements on our blog. If you’re interested in contributing to Cloudflare’s mission, we’d love to hear from you.

A Workers optimization that reduces your bill

Post Syndicated from Kenton Varda original https://blog.cloudflare.com/workers-optimization-reduces-your-bill/

Recently, we made an optimization to the Cloudflare Workers runtime which reduces the amount of time Workers need to spend in memory. We’re passing the savings on to you for all your Unbound Workers.

Background

Workers are often used to implement HTTP proxies, where JavaScript is used to rewrite an HTTP request before sending it on to an origin server, and then to rewrite the response before sending it back to the client. You can implement any kind of rewrite in a Worker, including both rewriting headers and bodies.

Many Workers, though, do not actually modify the response body, but instead simply allow the bytes to pass through from the origin to the client. In this case, the Worker’s application code has finished executing as soon as the response headers are sent, before the body bytes have passed through. Historically, the Worker was nevertheless considered to be “in use” until the response body had fully finished streaming.
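
For illustration, such a pass-through proxy can be as small as the following sketch (the header name is just an example, not part of the runtime):

export default {
  async fetch(request) {
    // Copy the incoming request so its headers can be modified
    const upstream = new Request(request)
    upstream.headers.set('X-Forwarded-By', 'example-proxy') // illustrative header only
    // Return the origin response untouched: the body bytes stream straight through
    return fetch(upstream)
  }
}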

For billing purposes, under the Workers Unbound pricing model, we charge duration-memory (gigabyte-seconds) for the time in which the Worker is in use.
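
To make the unit concrete with illustrative numbers: a Worker occupying 128 MB that is in use for 200 ms of a request accrues 0.125 GB × 0.2 s = 0.025 GB-s for that request.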

The change

On December 15-16, we made a change to the way we handle requests whose responses stream through the Worker without content modification. This change means that we can mark application code as “idle” as soon as the response headers are returned.

Since no further application code will execute on behalf of the request, the system does not need to keep the request state in memory – it only needs to track the low-level native sockets and pump the bytes through. So now, during this time, the Worker will be considered idle, and could even be evicted before the stream completes (though this would be unlikely unless the stream lasts for a very long time).

As a result of this change, we’ve seen that the time a Worker is considered “in use” by any particular request has dropped by an average of 70%. Of course, this number varies a lot depending on the details of each Worker. Some may see no benefit, others may see an even larger benefit.

This change is totally invisible to the application. To any external observer, everything behaves as it did before. But, since the system now considers a Worker to be idle during response streaming, the response streaming time will no longer be billed. So, if you saw a drop in your bill, this is why!

But it doesn’t stop there!

The change also applies to a few other frequently used scenarios, namely WebSocket proxying, reading from the cache, and streaming from KV.

WebSockets: once a Worker has arranged to proxy through a WebSocket, as long as it isn’t handling individual messages in your Worker code, the Worker does not remain in use during the proxying. The change applies to regular stateless Workers, but not to Durable Objects, which are not usually used for proxying.

export default {
  async fetch(request: Request) {
    // Do anything before
    const upgradeHeader = request.headers.get('Upgrade')
    if (upgradeHeader === 'websocket') {
      // Hand the WebSocket through; the Worker goes idle while proxying
      return await fetch(request)
    }
    // Or handle other requests
  }
}

Reading from Cache: If you return the response from a cache.match call, the Worker is considered idle as soon as the response headers are returned.

export default {
  async fetch(request: Request) {
    let response = await caches.default.match('https://example.com')
    if (response) {
      return response
    }
    // get/create response and put into cache
  }
}

Streaming from KV: And lastly, when you stream from KV. This one is a bit trickier to get right, because often people retrieve the value from KV as a string, or JSON object and then create a response with that value. But if you fetch the value as a stream, as done in the example below, you can create a Response with the ReadableStream.

interface Env {
  MY_KV_NAME: KVNamespace
}

export default {
  async fetch(request: Request, env: Env) {
    const readableStream = await env.MY_KV_NAME.get('hello_world.pdf', { type: 'stream' })
    if (readableStream) {
      return new Response(readableStream, { headers: { 'content-type': 'application/pdf' } })
    }
  },
}

Interested in Workers Unbound?

If you are already using Unbound, your bill will have automatically dropped already.

Now is a great time to check out Unbound if you haven’t already, especially since recently, we’ve also removed the egress fees. Unbound allows you to build more complex workloads on our platform and only pay for what you use.

We are always looking for opportunities to make Workers better. Often that improvement takes the form of powerful new features such as the soon-to-be-released Service Bindings and, of course, performance enhancements. This time, we are delighted to make Cloudflare Workers even cheaper than they already were.

Miniflare 2.0: fully-local development and testing for Workers

Post Syndicated from Brendan Coll original https://blog.cloudflare.com/miniflare/

In July 2021, I launched Miniflare 1.0, a fun, full-featured, fully-local simulator for Workers, on the Cloudflare Workers Discord server. What began as a pull request to the cloudflare-worker-local project has now become an official Cloudflare project and a core part of the Workers ecosystem, being integrated into wrangler 2.0. Today, I’m thrilled to announce the release of the next major version: a more modular, lightweight, and accurate Miniflare 2.0. 🔥

Background: Why Miniflare was created

At the end of 2020, I started to build my first Workers app. Initially I used the then recently released wrangler dev, but found it was taking a few seconds before changes were reflected. While this was still impressive considering it was running on the Workers runtime, I was using Vite to develop the frontend, so I knew a significantly faster developer experience was possible.

I then found cloudflare-worker-local and cloudworker, which were local Workers simulators, but didn’t have support for newer features like Workers Sites. I wanted a magical simulator that would just work ✨ in existing projects, focusing on the developer experience, and — by the reception of Miniflare 1.0 — I wasn’t the only one.

Miniflare 1.0 brought near instant reloads, source map support (so you could see where errors were thrown), cleaner logs (no more { unknown object }s or massive JSON stack traces), a pretty error page that highlighted the cause of the error, step-through debugger support, and more.

Pretty-error page powered by `youch`

The next iteration: What’s new in version 2

In the relatively short time since the launch of Miniflare 1.0 in July, Workers as a platform has improved dramatically. Durable Objects now have input and output gates for ensuring consistency without explicit transactions, Workers has compatibility dates allowing developers to opt-into backwards-incompatible fixes, and you can now write Workers using JavaScript modules.

Miniflare 2 supports all these features and has been completely redesigned with three primary design goals:

  1. Modular: Miniflare 2 splits Workers components (KV, Durable Objects, etc.) into separate packages (@miniflare/kv, @miniflare/durable-objects, etc.) that you can import on their own for testing. This will also make it easier to add support for new, unreleased features like R2 Storage.
  2. Lightweight: Miniflare 1 included 122 third-party packages with a total install size of 88.3MB. Miniflare 2 reduces this to 23 packages and 6MB by leveraging features included with Node.js 16.
  3. Accurate: Miniflare 2 replicates the quirks and thrown errors of the real Workers runtime, so you’ll know before you deploy if things are going to break. Of course, wrangler dev will always be the most accurate preview, running on the real edge with real data, but Miniflare 2 is really close!

It also adds a new live-reload feature and first-class support for testing with Jest for an even more enjoyable developer experience.

Getting started with local development

As mentioned in the introduction, Miniflare 2.0 is now integrated into wrangler 2.0, so you just need to run npx wrangler@beta dev --local to start a fully-local Worker development server, or npx wrangler@beta pages dev to start a Cloudflare Pages Functions server. Make sure you’ve got the latest release of Node.js installed.

However, if you’re using Wrangler 1 or want to customize your local environment, you can install Miniflare standalone. If you’ve got an existing worker with a wrangler.toml file, just run npx miniflare --live-reload to start a live-reloading development server. Miniflare will automatically load configuration like KV namespaces or Durable Object bindings from your wrangler.toml file and secrets from a .env file.

Miniflare is highly configurable. For example, if you want to persist KV data between restarts, include the --kv-persist flag. See the Miniflare docs or run npx miniflare --help for many more options, like running multiple workers or starting an HTTPS server.

If you’ve got a scheduled event handler, you can manually trigger it by visiting http://localhost:8787/cdn-cgi/mf/scheduled in your browser.
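
If you are not sure what such a handler looks like, here is a minimal modules-format sketch (the cron expression and logging are placeholders):

// src/scheduled.mjs: a minimal modules-format scheduled handler
export default {
  async scheduled(controller, env, ctx) {
    // Runs on the cron schedule configured in wrangler.toml, e.g.
    // [triggers]
    // crons = ["*/30 * * * *"]
    console.log(`scheduled event fired at ${new Date(controller.scheduledTime)}`)
  },
}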

Testing for Workers with Jest

Jest is one of the most popular JavaScript testing frameworks, so it made sense to add first-class support for it. Miniflare 2.0 includes a custom test environment that gives your tests access to Workers runtime APIs.

For example, suppose we have the following worker, written using JavaScript modules, that stores the number of times each URL is visited in Workers KV:

Aside: Workers KV is not designed for counters as it’s eventually consistent. In a real worker, you should use Durable Objects. This is just a simple example.

// src/index.mjs
export async function increment(namespace, key) {
  // Get the current count from KV
  const currentValue = await namespace.get(key);
  // Increment the count, defaulting it to 0
  const newValue = parseInt(currentValue ?? "0") + 1;
  // Store and return the new count
  await namespace.put(key, newValue.toString());
  return newValue;
}

export default {
  async fetch(request, env, ctx) {
    // Use the pathname for a key
    const url = new URL(request.url);
    const key = url.pathname;
    // Increment the key
    const value = await increment(env.COUNTER_NAMESPACE, key);
    // Return the new incremented count
    return new Response(`count for ${key} is now ${value}`);
  },
};

# wrangler.toml
kv_namespaces = [
  { binding = "COUNTER_NAMESPACE", id = "..." }
]

[build.upload]
format = "modules"
dist = "src"
main = "./index.mjs"

…we can write unit tests like so:

// test/index.spec.mjs
import worker, { increment } from "../src/index.mjs";

// When using `format = "modules"`, bindings are included in the `env` parameter,
// which we don't have access to in tests. Miniflare therefore provides a custom
// global method to access these.
const { COUNTER_NAMESPACE } = getMiniflareBindings();

test("should increment the count", async () => {
  // Seed the KV namespace
  await COUNTER_NAMESPACE.put("a", "3");

  // Perform the increment
  const newValue = await increment(COUNTER_NAMESPACE, "a");
  const storedValue = await COUNTER_NAMESPACE.get("a");

  // Check the return value of increment
  expect(newValue).toBe(4);
  // Check increment had the side effect of updating KV
  expect(storedValue).toBe("4");
});

test("should return new count", async () => {
  // Note we're using Worker APIs in our test, without importing anything extra
  const request = new Request("http://localhost/a");
  const response = await worker.fetch(request, { COUNTER_NAMESPACE });

  // Each test gets its own isolated storage environment, so the changes to "a"
  // are *undone* automatically. This means at the start of this test, "a"
  // wasn't in COUNTER_NAMESPACE, so it defaulted to 0, and the count is now 1.
  expect(await response.text()).toBe("count for /a is now 1");
});

// jest.config.js
const { defaults } = require("jest-config");

module.exports = {
  testEnvironment: "miniflare", // ✨
  // Tell Jest to look for tests in .mjs files too
  testMatch: [
    "**/__tests__/**/*.?(m)[jt]s?(x)",
    "**/?(*.)+(spec|test).?(m)[tj]s?(x)",
  ],
  moduleFileExtensions: ["mjs", ...defaults.moduleFileExtensions],
};

…and run them with:

# Install dependencies
$ npm install -D jest jest-environment-miniflare
# Run tests with experimental ES modules support
$ NODE_OPTIONS=--experimental-vm-modules npx jest

For more details about the custom test environment and isolated storage, see the Miniflare docs or this example project that also uses TypeScript and Durable Objects.

Not using Jest? Miniflare lets you write your own integration tests with vanilla Node.js or any other test framework. For an example using AVA, see the Miniflare docs or this repository.
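
For instance, a bare-bones Node.js integration test for the counter worker above might use Miniflare’s programmatic API directly (a sketch; check the Miniflare docs for the exact options your version supports):

// test.mjs: run with `node test.mjs`
import assert from "assert";
import { Miniflare } from "miniflare";

const mf = new Miniflare({
  modules: true,
  scriptPath: "src/index.mjs",
  kvNamespaces: ["COUNTER_NAMESPACE"],
});

// Dispatch a fetch event to the worker without starting an HTTP server
const res = await mf.dispatchFetch("http://localhost:8787/a");
assert.strictEqual(await res.text(), "count for /a is now 1");
console.log("passed");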

How Miniflare works

Let’s now dig deeper into how some interesting parts of Miniflare work.

Miniflare is powered by Node.js, a JavaScript runtime built on Chrome’s V8 JavaScript engine. V8 is the same engine that powers the Cloudflare Workers runtime, but Node and Workers implement different runtime APIs on top of it. To ensure Node’s APIs aren’t visible to users’ worker code and to inject Workers’ APIs, Miniflare uses the Node.js vm module. This lets you run arbitrary code in a custom V8 context.

At the core of Workers are the Request and Response classes. Miniflare gets these from undici, a project written by the Node team to bring fetch to Node. For service workers, we also need a way to add event listeners and dispatch events using the EventTarget API, which was added in Node 15.

With that we can build a mini-miniflare:

import vm from "vm";
import { Request, Response } from "undici";

// An instance of this class will become the global scope of our Worker,
// extending EventTarget for addEventListener and dispatchEvent
class ServiceWorkerGlobalScope extends EventTarget {
  constructor() {
    super();

    // Add Worker runtime APIs
    this.Request = Request;
    this.Response = Response;

    // Make sure this is bound correctly when EventTarget methods are called
    this.addEventListener = this.addEventListener.bind(this);
    this.removeEventListener = this.removeEventListener.bind(this);
    this.dispatchEvent = this.dispatchEvent.bind(this);
  }
}

// An instance of this class will be passed as the event parameter to "fetch"
// event listeners
class FetchEvent extends Event {
  constructor(type, init) {
    super(type);
    this.request = init.request;
  }

  respondWith(response) {
    this.response = response;
  }
}

// Create a V8 context to run user code in
const globalScope = new ServiceWorkerGlobalScope();
const context = vm.createContext(globalScope);

// Example user worker code, this could be loaded from the file system
const workerCode = `
addEventListener("fetch", (event) => {
  event.respondWith(new Response("Hello mini-miniflare!"));
})
`;
const script = new vm.Script(workerCode);

// Run the user's code, registering the "fetch" event listener
script.runInContext(context);

// Create an example request, this could come from an incoming HTTP request
const request = new Request("http://localhost:8787/");
const event = new FetchEvent("fetch", { request });

// Dispatch the event and log the response
globalScope.dispatchEvent(event);
console.log(await event.response.text()); // Hello mini-miniflare!

Plugins

Dependency graph of the Miniflare monorepo.

There are a lot of Workers runtime APIs, so adding and configuring them all manually as above would be tedious. Therefore, Miniflare 2 has a plugin system that allows each package to export globals and bindings to be included in the sandbox. Options have annotations describing their type, CLI flag, and where to find them in Wrangler configuration files:

@Option({
  // Define type for runtime validation of the CLI flag
  type: OptionType.ARRAY,
  // Use --kv instead of auto-generated --kv-namespace for the CLI flag
  name: "kv",
  // Define -k as an alias
  alias: "k",
  // Displayed in --help
  description: "KV namespace to bind",
  // Where to find this option in wrangler.toml
  fromWrangler: (config) => config.kv_namespaces?.map(({ binding }) => binding),
})
kvNamespaces?: string[];

Durable Objects

Before input and output gates were added, you usually needed to use the transaction() method to ensure consistency:

async function incrementCount() {
  let value;
  await this.storage.transaction(async (txn) => {
    value = await txn.get("count");
    await txn.put("count", value + 1);
  });
  return value;
}

Miniflare implements this using optimistic concurrency control (OCC). However, input and output gates are now available, so to avoid race conditions when simulating newly-written Durable Object code, Miniflare 2 needed to implement them too.
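
To sketch what the OCC approach to transaction() looks like (illustrative names only, not Miniflare’s actual internals): each committed write bumps a per-key version, and the whole closure retries if any key it read has changed by commit time.

class OCCStorage {
  private data = new Map<string, unknown>();
  private versions = new Map<string, number>();

  async transaction<T>(
    closure: (txn: { get(key: string): unknown; put(key: string, value: unknown): void }) => Promise<T>
  ): Promise<T> {
    for (;;) {
      const reads = new Map<string, number>();   // key -> version seen
      const writes = new Map<string, unknown>(); // buffered writes
      const txn = {
        get: (key: string) => {
          reads.set(key, this.versions.get(key) ?? 0);
          return writes.has(key) ? writes.get(key) : this.data.get(key);
        },
        put: (key: string, value: unknown) => {
          writes.set(key, value);
        },
      };
      const result = await closure(txn);
      // Commit only if no key we read changed while the closure was running
      const conflict = [...reads].some(([key, version]) => (this.versions.get(key) ?? 0) !== version);
      if (!conflict) {
        for (const [key, value] of writes) {
          this.data.set(key, value);
          this.versions.set(key, (this.versions.get(key) ?? 0) + 1);
        }
        return result;
      }
      // Conflict: another commit touched our read set, so retry from scratch
    }
  }
}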

From the description in the gates announcement blog post:

Input gates: While a storage operation is executing, no events shall be delivered to the object except for storage completion events. Any other events will be deferred until such a time as the object is no longer executing JavaScript code and is no longer waiting for any storage operations. We say that these events are waiting for the "input gate" to open.

…we can see input gates need to have two methods, one for closing the gate while a storage operation is running and one for waiting until the input gate is open:

class InputGate {
  async runWithClosed<T>(closure: () => Promise<T>): Promise<T> {
    // 1. Close the input gate
    // 2. Run the closure and store the result
    // 3. Open the input gate
    // 4. Return the result
  }

  async waitForOpen(): Promise<void> {
    // 1. Check if the input gate is open
    // 2. If it is, return
    // 3. Otherwise, wait until it is
  }
}
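
One minimal way to fill in this skeleton (ignoring the nesting we will need later; a hypothetical implementation, not Miniflare’s actual code) is a lock counter plus a queue of pending resolvers:

class InputGate {
  private lockCount = 0;
  private resolvers: (() => void)[] = [];

  async runWithClosed<T>(closure: () => Promise<T>): Promise<T> {
    this.lockCount++; // 1. Close the input gate
    try {
      return await closure(); // 2. Run the closure and store the result
    } finally {
      if (--this.lockCount === 0) {
        // 3. Open the input gate: wake every deferred event
        const resolvers = this.resolvers;
        this.resolvers = [];
        for (const resolve of resolvers) resolve();
      }
    }
  }

  waitForOpen(): Promise<void> {
    if (this.lockCount === 0) return Promise.resolve(); // gate already open
    return new Promise((resolve) => this.resolvers.push(resolve));
  }
}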

Each Durable Object has its own InputGate. In the storage implementation, we call runWithClosed to defer other events until the storage operation completes:

class DurableObjectStorage {
  async get<Value>(key: string): Promise<Value | undefined> {
    return this.inputGate.runWithClosed(() => {
      // Get key from storage
    });
  }
}

…and whenever we’re ready to deliver another event, we call waitForOpen:

import { fetch as baseFetch } from "undici";

async function fetch(input, init) {
  const response = await baseFetch(input, init);
  await inputGate.waitForOpen();
  return response;
}

You may have noticed a problem here. Where does inputGate come from in fetch? We only have one global scope for the entire Worker and all its Durable Objects, so we can’t have a fetch per Durable Object InputGate. We also can’t ask the user to pass it around as another parameter to all functions that need it. We need some way of storing it in a context that’s passed around automatically between potentially async functions. For this, we can use another lesser-known Node module, async_hooks, which includes the AsyncLocalStorage class:

import { AsyncLocalStorage } from "async_hooks";

const inputGateStorage = new AsyncLocalStorage<InputGate>();

const inputGate = new InputGate();
await inputGateStorage.run(inputGate, async () => {
  // This closure will run in an async context with inputGate
  await fetch("https://example.com");
});

async function fetch(input: RequestInfo, init: RequestInit): Promise<Response> {
  const response = await baseFetch(input, init);
  // Get the input gate in the current async context
  const inputGate = inputGateStorage.getStore();
  await inputGate.waitForOpen();
  return response;
}

Durable Objects also include a blockConcurrencyWhile(closure) method that defers events until the closure completes. This is exactly the runWithClosed() method:

class DurableObjectState {
  // ...

  blockConcurrencyWhile<T>(closure: () => Promise<T>): Promise<T> {
    return this.inputGate.runWithClosed(closure);
  }
}

However, there’s a problem with what we’ve got at the moment. Consider the following code:

export class CounterObject {
  constructor(state: DurableObjectState) {
    state.blockConcurrencyWhile(async () => {
      const res = await fetch("https://example.com");
      this.data = await res.text();
    });
  }
}

blockConcurrencyWhile closes the input gate, but fetch won’t return until the input gate is open, so we’re deadlocked! To fix this, we need to make InputGates nested:

class InputGate {
  constructor(private parent?: InputGate) {}

  async runWithClosed<T>(closure: () => Promise<T>): Promise<T> {
    // 1. Close the input gate, *and any parents*
    // 2. *Create a new child input gate with this as its parent*
    const childInputGate = new InputGate(this);
    // 3. Run the closure, *under the child input gate's context*
    // 4. Open the input gate, *and any parents*
    // 5. Return the result
  }
}

Now the input gate outside of blockConcurrencyWhile will be closed, so fetches to the Durable Object will be deferred, but the input gate inside the closure will be open, so the fetch can return.

This glosses over some details, but you can check out the gates implementation for additional context and comments. 🙂

HTMLRewriter

HTMLRewriter is another novel class that allows parsing and transforming HTML streams. In the edge Workers runtime, it’s powered by C-bindings to the lol-html Rust library. Luckily, Ivan Nikulin built WebAssembly bindings for this, so we’re able to use the same library in Node.js.

However, these were missing support for async handlers that allow you to access external resources when rewriting:

class UserElementHandler {
  async element(node) {
    const response = await fetch("/user");
    // ...
  }
}

The WebAssembly bindings Rust code includes something like:

macro_rules! make_handler {
  ($handler:ident, $JsArgType:ident, $this:ident) => {
    move |arg: &mut _| {
      // `js_arg` here is the `node` parameter from above
      let js_arg = JsValue::from(arg);
      // $handler here is the `element` method from above
      match $handler.call1(&$this, &js_arg) {
        Ok(res) => {
          // Check if this is an async handler
          if let Some(promise) = res.dyn_ref::<JsPromise>() {
            await_promise(promise);
          }
          Ok(())
        }
        Err(e) => ...,
      }
    }
  };
}

The key thing to note here is that the Rust move |...| { ... } closure is synchronous, but handlers can be asynchronous. This is like trying to await a Promise in a non-async function.

To solve this, we use the Asyncify feature of Binaryen, a set of tools for working with WebAssembly modules. Whenever we call await_promise, Asyncify unwinds the current WebAssembly stack into some temporary storage. Then in JavaScript, we await the Promise. Finally, we rewind the stack from the temporary storage to the previous state and continue rewriting where we left off.

You can find the full implementation in the html-rewriter-wasm package.

The future of Miniflare

As mentioned earlier, Miniflare is now included in wrangler 2.0. Try it out and let us know what you think!

I’d like to thank everyone on the Workers team at Cloudflare for building such an amazing platform and supportive community. Special thanks to anyone who’s contributed to Miniflare, opened issues, given suggestions, or asked questions in the Discord server.

Maybe now I can finish off my original workers project… 😅

Supporting Remix with full stack Cloudflare Pages

Post Syndicated from Greg Brimble original https://blog.cloudflare.com/remix-on-cloudflare-pages/

We announced the open beta of full stack Cloudflare Pages in November and have since seen widespread uptake from developers looking to add dynamic functionality to their applications. Today, we’re excited to announce Pages’ support for Remix applications, powered by our full stack platform.

The new kid on the block: Remix

Remix is a new framework that is focused on fully utilizing the power of the web. Like Cloudflare Workers, it uses modern JavaScript APIs, and it places emphasis on web fundamentals such as meaningful HTTP status codes, caching and optimizing for both usability and performance. One of the biggest features of Remix is its transportability: Remix provides a platform-agnostic interface and adapters allowing it to be deployed to a growing number of providers. Cloudflare Workers was available at Remix’s launch, but what makes Workers different in this case, is the native compatibility that Workers can offer.

One of the main inspirations for Remix was the way Cloudflare Workers uses native web APIs for handling HTTP requests and responses. It’s a brilliant decision because developers are able to reuse knowledge on the server that they gained building apps in the browser! Remix runs natively on Cloudflare Workers, and the results we’ve seen so far are fantastic. We are incredibly excited about the potential that Cloudflare Workers and Pages unlocks for building apps that run at the edge!
Michael Jackson, CEO at Remix

This native compatibility means that as you learn how to write applications in Remix, you’re also learning how to write Cloudflare Workers (and vice versa). But it also means better performance! Rather than having a Node.js process running on a server — which could be far away from your users, could be overwhelmed in the case of high traffic, and has to map between Node.js’ runtime and the modern Fetch API — you can deploy to Cloudflare’s network and requests will be routed to any one of our 250+ locations. This means better performance for your users, with 95% of the entire Internet-connected world lying within 50ms of a Cloudflare presence, and 80% of the Internet-connected world within 20ms.

Integrating with Cloudflare

More often than not, full stack applications need some place to store data. Cloudflare offers three all-encompassing options here:

  • KV, our high performance and globally replicated key-value datastore.
  • Durable Objects, our strongly consistent coordination primitive which can be restricted to a given jurisdiction.
  • R2 (coming soon!), our fast and reliable object storage.

Remix already tightly integrates with KV for session storage, and a Durable Objects integration is in progress. Additionally, Cloudflare’s other features, such as geolocating incoming requests, HTMLRewriter and our Cache API, are all available from within your Remix application.

Deploying to Cloudflare Pages

Cloudflare Pages was already capable of serving static assets from the Cloudflare edge, but now with November’s release of serverless functions powered by Cloudflare Workers, it has evolved into an entire platform perfectly suited for hosting full stack applications.

To get started with Remix and Cloudflare Pages today, run the following in your terminal, and select “Cloudflare Pages” when asked “Where do you want to deploy?”:

npx create-remix@latest

Then create a repository on GitHub or GitLab, git commit, and git push the newly created folder. Finally, navigate to Cloudflare Pages, select your repository, and select “Remix” from the dropdown of framework presets. Your new application will be available on your pages.dev subdomain, or you can connect it to any of your custom domains.

Your folder will have a functions/[[path]].ts file. This is the functions integration where we serve your Remix application on all paths of your website. The app folder is where the bulk of your Remix application’s logic is. With Pages’ support for rollbacks and preview deployments, you can safely test any changes to your application, and, with the wrangler 2.0 beta, testing locally is just a simple case of npm run dev.
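
If you peek inside, the generated file looks roughly like the following (a sketch of what the Remix template produces; exact imports may vary between template versions):

// functions/[[path]].ts
import { createPagesFunctionHandler } from "@remix-run/cloudflare-pages";
// @ts-ignore: the server build is generated by `remix build`
import * as build from "../build";

export const onRequest = createPagesFunctionHandler({ build });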

The future of frameworks on Cloudflare Pages

Remix is the second framework to integrate natively with full stack Cloudflare Pages, following SvelteKit, which was available at launch. But this is just the beginning! We have a lot more in store for our integration with Remix and other frameworks. Stay tuned for improvements on Pages’ build times and other areas of the developer experience, as well as new features to the platform.

Join our community!

If you are new to the Cloudflare Pages and Workers world, join our Discord server and show us what you’re building. Whether it’s a new full stack application on Remix or even a simple static site, we’d love to hear from you.

Maximum redirects, minimum effort: Announcing Bulk Redirects

Post Syndicated from Sam Marsh original https://blog.cloudflare.com/maximum-redirects-minimum-effort-announcing-bulk-redirects/

The Internet is a dynamic place. Websites are constantly changing as technologies and business practices evolve. What was front-page news is quickly moved into a sub-directory. To ensure website visitors continue to see the correct webpage even if it has been moved, administrators often implement URL redirects.

A URL redirect is a mapping from one location on the Internet to another, effectively telling the visitor’s browser that the location of the page has changed, and where they can now find it. This is achieved by providing a virtual ‘link’ between the content’s original and new location.
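
At the HTTP level, that ‘link’ is simply a redirect status code plus a Location header. For example:

GET /oldpage HTTP/1.1
Host: www.example.com

HTTP/1.1 301 Moved Permanently
Location: https://www.example.com/newpage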

URL redirects have typically been implemented as Page Rules within Cloudflare, up to a maximum of 125 URL redirects per zone. This limitation meant customers who needed more URL redirects had to implement alternative solutions such as Cloudflare Workers to achieve their goals.

To simplify the management and implementation of URL redirects at scale we have created Bulk Redirects. Bulk Redirects is a new product that allows an administrator to upload and enable hundreds of thousands of URL redirects within minutes, without having to write a single line of code.

We’ve moved!

Mail forwarding is a product offered by postal services such as USPS and Royal Mail that allows you to continue to receive letters and parcels even if they are sent to an address where you no longer reside.

The postal services achieve this by effectively maintaining a register of your new location and your old location. This allows the systems to detect ‘this letter is for Sam Marsh at address A, but he now lives at address B, therefore send the mail there’.

Photo by Christian Lue on Unsplash

This problem can be solved by manually updating my bank, online shops, etc. and having them send the parcels and letters directly to my new address. However, that assumes I know of every person and business who has my address. It also relies on those people to remember I moved. For example, Grandma Marsh might have forgotten about my new address — I’ve moved a lot — and she may send my birthday card to my old address. Or all those Christmas cards from people I don’t speak with regularly; those will go to my old address too. To solve this, I can use mail forwarding to ensure I still receive my cards and other mail, even though I no longer live at that address.

URL redirects are the Internet equivalent of mail forwarding.

URL redirects are effectively a table with two columns: what traffic am I looking for, and where should I send that traffic? This mapping allows an administrator to define “whenever visitors go to https://www.cloudflare.com/bots, I want to redirect them to the new location https://www.cloudflare.com/pg-lp/bot-mitigation-fight-mode”.

With this technology, our sales and marketing teams can use the vanity URL all across the Internet, safe in the knowledge that should the backend systems change they won’t need to go to all the places this URL has been posted and update it. Instead, the intermediary system that handles the URL redirects can be updated. One location. Not thousands.

Why use URL redirects?

URL redirects are used to solve a number of use cases. One such common use case is to use URL redirects to force all visitors to connect to the website over a secure HTTPS connection, instead of via plain HTTP, to improve security. It’s such a common use case we created a toggle in the Cloudflare dashboard, “Always use HTTPS”, which redirects all HTTP requests to HTTPS when enabled.

URL redirects are also used for vanity domains and hyperlinks. In these scenarios, URL redirects are deployed to provide a mapping of short, user-friendly URLs to long, server-friendly URLs. Not only are shorter URLs more memorable, but they also score better from an SEO perspective. According to Backlinko, ‘Excessively long URLs may hurt a page’s search engine visibility. In fact, several industry studies have found that short URLs tend to have a slight edge in Google’s search results.’

Photo by Hal Gatewood on Unsplash

Another use case is where a company may have a local domain for each of their markets, which they want to redirect back to the main website, e.g., redirect www.example.fr and www.example.de to www.example.com/eu/fr and www.example.com/eu/de, respectively.

This also covers company acquisition, where a company is acquired and the acquiring company wants to redirect hyperlinks to the relevant pages on their own website, e.g., redirect www.example.com to www.companyB.com/portfolio/example.

Finally, one of the most common use cases for URL redirects is to maintain uptime during a website migration. As companies migrate their websites from one platform to another, or one domain to another, URL redirects ensure visitors continue to see the correct content. Without these URL redirects, hyperlinks in emails, blogs, marketing brochures, etc. would fail to load, potentially costing the business revenue in lost sales and brand damage. For example, www.example.com/products/golf/product-goes-here would redirect to the new website at products.example.com/golf/product-goes-here.

How are URL redirects implemented today?

Ensuring these URL redirects are executed correctly is often the job of the reverse proxy — a server which sits between the client and the origin whose job is, amongst many others, to re-route received traffic to the correct destination.

For example, when using NGINX, a popular web server, the administrator would have a line in the config similar to the one below to implement a URL redirect:

rewrite ^/oldpage$ http://www.example.com/newpage permanent;

Historically, these web servers were located physically within a company’s data center. Administrators then had full control over the URLs received, and could create the redirect rules as and when needed.

As the world rapidly migrates on-premise applications and solutions to the cloud, administrators can find themselves in a situation where they can no longer do what they previously could. Not being responsible for the origin has a number of benefits, but it also comes with drawbacks such as lack of ‘control’. Previously, an administrator could quickly add a few config lines to the web server in front of their ecommerce platform. Moving to an online hosted platform makes this much more difficult to do.

As such, administrators have moved to platforms like Cloudflare where functionality such as URL redirects can be implemented in the cloud without the need to have administrator access to the origin.

The first way to implement a URL redirect in Cloudflare is via a Forwarding URL Page Rule. Users can create a Page Rule which matches on a specific URL and redirects matching traffic to another specific URL, along with a status code — either a permanent redirect (301) or a temporary redirect (302).

Another method is to use Cloudflare Workers to implement URL redirects, either individually or as a map. For example, the code below is used to create a URL redirect map which runs when the Worker is invoked:

// `externalHostname` is the host to redirect to, e.g.:
const externalHostname = "examples.cloudflareworkers.com"

const redirectMap = new Map([
 ["/bulk1", "https://" + externalHostname + "/redirect2"],
 ["/bulk2", "https://" + externalHostname + "/redirect3"],
 ["/bulk3", "https://" + externalHostname + "/redirect4"],
 ["/bulk4", "https://google.com"],
])

This snippet is taken from the Cloudflare Workers examples library and can be used to scale beyond the 125 URL redirect limit of Page Rules. However, it does require the administrator to be comfortable working with code and correctly configuring their Cloudflare Workers.
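
For context, the Worker then consults this map from its request handler along these lines (a sketch in the spirit of the examples library, not the complete example):

async function handleRequest(request) {
  const requestURL = new URL(request.url)
  const path = requestURL.pathname
  const location = redirectMap.get(path)
  if (location) {
    // A matching entry was found: issue a permanent redirect to the target
    return Response.redirect(location, 301)
  }
  // No entry for this path: pass the request through to the origin
  return fetch(request)
}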

Introducing: Bulk Redirects

Speaking with Cloudflare users about URL redirects and their experience with our product offerings, “Give me a product which lets me upload thousands of URL redirects to Cloudflare via a GUI” was a very common request. Customers we interviewed typically wanted a simple way to upload a list of ‘from,to,response code’ without having to write a single line of code. And that’s what we are announcing today.

Bulk Redirects is now available for all Cloudflare plans. It is an account/organization-level product capable of supporting hundreds of thousands of URL redirects, all configured via the dashboard without having to write a single line of code.

The system is implemented in two parts. The first part is the Bulk Redirect List. This is effectively the redirect map, or ‘edge dictionary’, where users can upload their URL redirects.

Each URL redirect within the list contains three main elements. The first two elements are Source URL (the URL we are looking for) and Target URL (the URL we are going to redirect matching traffic to).

There is also the Status code. This is the ‘type’ of redirect. In addition to 301 (Moved Permanently) and 302 (Moved Temporarily) redirects, we have added support for the newer 307 (Temporary Redirect) and 308 (Permanent Redirect) redirect status codes.

We have added support for specifying destination ports within the Target URL field also, allowing URL redirects to non-standard ports, e.g., “Target URL: www.example.com:8443”.

If you have many URL redirects, you can upload them via a CSV file.

There are also four additional parameters available for each individual URL redirect.

Firstly, we have added two options to replace the ambiguity and confusion caused by the use of asterisks as wildcards. Take this source URL as an example: *.example.com/a/b. Would you expect www.example.com/a/b to match? How about example.com/a/b, or www.example.com/path*? Asterisks used as wildcards cause confusion and misunderstanding, and also increase the cost of implementation and maintenance from an engineering perspective. Therefore, we are not implementing them in Bulk Redirects.

Instead, we have added two discrete options: Include subdomains and Subpath matching. The Include subdomains option, once enabled, will match all subdomains to the left of the domain portion of the URL as well as the domain specified. For example, if there is a URL redirect with a source URL of example.com/a then traffic to b.example.com/a and c.b.example.com/a will also be redirected.

The Subpath matching option focuses on the opposite end of the URL. If this option is enabled, the redirect applies to the URL as well as all its subpaths. For example, if we have a URL redirect on www.example.com/foo with subpath matching enabled, we will match on that specific URL as well as all subpaths, e.g., www.example.com/foo/a, www.example.com/foo/a/, www.example.com/foo/a/b/c, etc., but not www.example.com/foobar.

These options provide a tremendous amount of flexibility and granularity for each URL redirect. However, for most use cases only the source URL, target URL, and status code options will need to be set.

Secondly, we have added two options relating to retaining portions of the original HTTP request: Preserve path suffix and Preserve query string. If subpath matching is enabled, Preserve path suffix can be used to copy the URI path from the originally requested URL and add it to the destination URL. For example, if there is a URL redirect of Source URL: example.co.uk, Target URL: www.example.com/a, then requests to example.co.uk/target will be redirected to www.example.com/a/target with both options enabled. Preserve query string can be used independently of the other options, and carries forward the URI query from the originally requested URL to the new URL.

Lists by themselves do not provide any redirection, they are simply the ‘lookup table’. To enable them we need to reference them via a Bulk Redirect Rule.

The rules themselves are very simple. By default, the user experience is to provide a name for the rule, a description, and select the Bulk Redirect List that should be invoked.

For users who require more granularity and control, there are additional settings available under the Advanced options toggle. Within this section there are two editable fields: Expression and Key.

The first field, Expression, specifies the conditions that must be met in order for the rule to run. By default, all URL redirects of the specified list will apply.

The second field, Key, is closely related to the expression. The key is used in combination with the specified list to select the URL redirect to apply. The field used for the key should always be the same as the field used in the expression, i.e., the key should be http.request.full_uri if the field in the expression is http.request.full_uri, or conversely, the key should be raw.http.request.full_uri if the field in the expression is raw.http.request.full_uri.

There are two main use cases for modifying these settings. Firstly, users can edit these options to increase specificity in the trigger, e.g., ip.src.country == "GB" and http.request.full_uri in $redirect_list. This is useful for ensuring Bulk Redirect Lists are only applied when a visitor comes from specific countries, subnets, or ASNs — or also only applying a URL redirect list if the visitor is a verified bot, or the bot score is >35.

Secondly, users can edit these options to amend the URL being matched and used as a lookup in the given list, i.e., the user may choose to have URL redirects in their list(s) specifically for URLs that would be normalized, e.g., URLs containing specific percent-encoding.  To ensure these URL redirects still trigger, the settings in Advanced options should be used to edit the expression and key to use the raw.http.request.full_uri field instead.

Automating via the API

Another way to manage bulk redirects is via our API. Customers wishing to automate the addition of bulk redirects can use the API to either simply add URL redirects to an existing list, or automate the entire workflow — creating a list, adding URL redirects to the list, and enabling the list via a new redirect rule.

There are three main calls when creating bulk redirects via the API:

  1. Create the redirect list
  2. Load with URL redirects
  3. Enable via a rule (You will also need to create the ruleset if doing this for the first time).

For step 1, first create a mass redirect list via the API call:

curl --location --request POST 'https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/rules/lists' \
--header 'X-Auth-Email: <EMAIL_ADDRESS>' \
--header 'Content-Type: application/json' \
--header 'X-Auth-Key: <API_KEY>' \
--data-raw '{
 "name": "my_redirect_list_2",
 "description": "My redirect list 2",
 "kind": "redirect"
}'

The output will look similar to:

{
  "result": {
    "id": "499b94da726d4dbc9ce6bf6c96ef8b6a",
    "name": "my_redirect_list_2",
    "description": "My redirect list 2",
    "kind": "redirect",
    "num_items": 0,
    "num_referencing_filters": 0,
    "created_on": "2021-12-04T06:43:43Z",
    "modified_on": "2021-12-04T06:43:43Z"
  },
  "success": true,
  "errors": [],
  "messages": []
}

Capture the value of “id”, as this is the list id we will then add URL redirects to.
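
If you are scripting this, the id can be captured directly, for example with jq (assuming it is installed):

$ LIST_ID=$(curl -s --location --request POST 'https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/rules/lists' \
  --header 'X-Auth-Email: <EMAIL_ADDRESS>' \
  --header 'X-Auth-Key: <API_KEY>' \
  --header 'Content-Type: application/json' \
  --data-raw '{"name": "my_redirect_list_2", "kind": "redirect"}' | jq -r '.result.id')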

Next, in step 2 we will add URL redirects to our newly created list by executing a POST call to the id we captured previously – with our URL redirects in the body:

curl --location --request POST 'https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/rules/lists/<LIST_ID>/items' \
--header 'X-Auth-Email: <EMAIL_ADDRESS>' \
--header 'Content-Type: application/json' \
--header 'X-Auth-Key: <API_KEY>' \
--data-raw '[
 {
   "redirect": {
     "source_url": "www.example.com/a",
     "target_url": "https://www.example.net/a"
   }
 },
 {
   "redirect": {
     "source_url": "www.example.com/b",
     "target_url": "https://www.example.net/a/b",
     "status_code": 307,
     "include_subdomains": true
   }
 },
 {
   "redirect": {
     "source_url": "www.example.com/c",
     "target_url": "www.example.net/c",
     "status_code": 307,
     "include_subdomains": true
   }   
 }
]'

The output will look similar to:

{
  "result": {
    "operation_id": "491ab6411acf4a12a6c72df1385b095a"
  },
  "success": true,
  "errors": [],
  "messages": []
}

In step 3 we enable this list by creating a new mass redirect rule within the mass redirect account-level ruleset.

Note: if this is the first time you are creating a redirect rule, you will need a different API call to create the ruleset itself; see the Rulesets API documentation for more details. All subsequent updates to the ruleset are made with calls similar to those below.
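For reference, creating that ruleset for the first time is a POST to the rulesets endpoint with the http_request_redirect phase and kind root. The following is a sketch only; consult the Rulesets API documentation for the authoritative request shape:

curl --location --request POST 'https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/rulesets' \
--header 'X-Auth-Email: <EMAIL_ADDRESS>' \
--header 'Content-Type: application/json' \
--header 'X-Auth-Key: <API_KEY>' \
--data-raw '{
 "name": "My redirect ruleset",
 "description": "",
 "kind": "root",
 "phase": "http_request_redirect",
 "rules": []
}'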

Firstly, we need to find the ID of our account-level redirect ruleset. To do this, we request the list of all account-level rulesets and look for the one with the phase http_request_redirect:

curl --location --request GET 'https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/rulesets' \
--header 'X-Auth-Email: <EMAIL_ADDRESS>' \
--header 'Content-Type: application/json' \
--header 'X-Auth-Key: <API_KEY>'

The output will look similar to:

{
   "result": [
       {
           "id": "efb7b8c949ac4650a09736fc376e9aee",
           "name": "Cloudflare Managed Ruleset",
           "description": "Created by the Cloudflare security team, this ruleset is designed to provide fast and effective protection for all your applications. It is frequently updated to cover new vulnerabilities and reduce false positives.",
           "source": "firewall_managed",
           "kind": "managed",
           "version": "34",
           "last_updated": "2021-10-25T18:33:27.512161Z",
           "phase": "http_request_firewall_managed"
       },
       {
           "id": "4814384a9e5d4991b9815dcfc25d2f1f",
           "name": "Cloudflare OWASP Core Ruleset",
           "description": "Cloudflare's implementation of the Open Web Application Security Project (OWASP) ModSecurity Core Rule Set. We routinely monitor for updates from OWASP based on the latest version available from the official code repository",
           "source": "firewall_managed",
           "kind": "managed",
           "version": "33",
           "last_updated": "2021-10-25T18:33:29.023088Z",
           "phase": "http_request_firewall_managed"
       },
       {
           "id": "5ff4477e596448749d67da859230ac3d",
           "name": "My redirect ruleset",
           "description": "",
           "kind": "root",
           "version": "1",
           "last_updated": "2021-12-04T06:32:58.058744Z",
           "phase": "http_request_redirect"
       }
   ],
   "success": true,
   "errors": [],
   "messages": []
}
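If you are scripting this step, you can extract the redirect ruleset's ID without scanning the output by eye. A small sketch using jq (assumed to be installed):

curl --silent --request GET 'https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/rulesets' \
  --header 'X-Auth-Email: <EMAIL_ADDRESS>' \
  --header 'X-Auth-Key: <API_KEY>' \
  | jq -r '.result[] | select(.phase == "http_request_redirect") | .id'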

Our redirect ruleset is the last entry in the output. Next, we will add our new Bulk Redirect Rule to this ruleset. Note that updating a ruleset via PUT replaces its entire rules array, so include any existing rules you want to keep in the request body:

curl --location --request PUT 'https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/rulesets/<RULESET_ID>' \
--header 'X-Auth-Email: <EMAIL_ADDRESS>' \
--header 'Content-Type: application/json' \
--header 'X-Auth-Key: <API_KEY>' \
--data-raw '{
     "rules": [
   {
     "expression": "http.request.full_uri in $my_redirect_list",
     "description": "Bulk Redirect rule 2",
     "action": "redirect",
     "action_parameters": {
       "from_list": {
         "name": "my_redirect_list_2",
         "key": "http.request.full_uri"
       }
     }
   }
 ]
}'

The output will look similar to:

{
  "result": {
    "id": "5ff4477e596448749d67da859230ac3d",
    "name": "My redirect ruleset",
    "description": "",
    "kind": "root",
    "version": "2",
    "rules": [
      {
        "id": "615cf6ac24c04f439138fdc16bd20535",
        "version": "1",
        "action": "redirect",
        "action_parameters": {
          "from_list": {
            "name": "my_redirect_list_2",
            "key": "http.request.full_uri"
          }
        },
        "expression": "http.request.full_uri in $my_redirect_list",
        "description": "Bulk Redirect rule 2",
        "last_updated": "2021-12-04T07:04:16.701379Z",
        "ref": "615cf6ac24c04f439138fdc16bd20535",
        "enabled": true
      }
    ],
    "last_updated": "2021-12-04T07:04:16.701379Z",
    "phase": "http_request_redirect"
  },
  "success": true,
  "errors": [],
  "messages": []
}

With those API calls executed, our new list is created, loaded with URL redirects, and enabled by the Bulk Redirect Rule. Visitors to the URLs specified in our list will now be redirected appropriately.
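For completeness, the whole workflow can be chained into a single script. The sketch below combines the three calls, using jq to pass the list ID along; the list name and redirect entry are illustrative, <RULESET_ID> is the ID found in the previous step, and the placeholders must be filled in before running:

#!/usr/bin/env bash
set -euo pipefail

API='https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>'
AUTH=(--header 'X-Auth-Email: <EMAIL_ADDRESS>' --header 'X-Auth-Key: <API_KEY>' --header 'Content-Type: application/json')

# Step 1: create the redirect list and capture its ID.
LIST_ID=$(curl --silent --request POST "${API}/rules/lists" "${AUTH[@]}" \
  --data-raw '{"name": "my_redirect_list_2", "description": "My redirect list 2", "kind": "redirect"}' \
  | jq -r '.result.id')

# Step 2: load the list with URL redirects.
curl --silent --request POST "${API}/rules/lists/${LIST_ID}/items" "${AUTH[@]}" \
  --data-raw '[{"redirect": {"source_url": "www.example.com/a", "target_url": "https://www.example.net/a"}}]' \
  | jq '.success'

# Step 3: enable the list via a Bulk Redirect Rule.
# Reminder: PUT replaces the ruleset's entire rules array.
curl --silent --request PUT "${API}/rulesets/<RULESET_ID>" "${AUTH[@]}" \
  --data-raw '{
    "rules": [{
      "expression": "http.request.full_uri in $my_redirect_list_2",
      "description": "Bulk Redirect rule 2",
      "action": "redirect",
      "action_parameters": {"from_list": {"name": "my_redirect_list_2", "key": "http.request.full_uri"}}
    }]
  }' | jq '.success'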

Account-level benefits

One of the driving forces behind this product is the desire to make life easier for those customers with a large number of zones on Cloudflare. For these customers, URL redirects are a pain point when using Page Rules, as they need to navigate into each zone and configure URL redirects one at a time. This doesn’t scale very well.

Bulk Redirects add real value for customers in this situation. Instead of having to navigate into 400 zones and create one or two Page Rules for each, an administrator can now create and upload a single Bulk Redirect List, which contains all the URL redirects for the zones under management.

This means that if the customer simply wants 399 of those 400 zones to redirect to the “primary zone”, they can create a Bulk Redirect List with 399 entries, all pointing to example.com, with the Subpath matching and Include subdomains options enabled on each, as sketched below. This vastly simplifies the management of the estate.
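In list-entry terms, each of those 399 entries might look like the following sketch. The include_subdomains field appears in the API calls earlier in this post; subpath_matching is our assumption for the Subpath matching option's field name, so verify it against the API documentation:

{
  "redirect": {
    "source_url": "old-zone-1.com/",
    "target_url": "https://example.com/",
    "include_subdomains": true,
    "subpath_matching": true
  }
}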

The same premise also applies to SSL for SaaS customers. For example, if example.com has 20 custom hostnames in their zone, customers can now create a Bulk Redirect List and Rule for each custom hostname, grouping each customer’s URL redirects into their own logical buckets.

Bulk Redirects is a game changer for companies with a large number of zones and customers under management.

Allowances

Bulk Redirects are available for all accounts. The packaging model for Bulk Redirects closely resembles that of “IP Lists”: accounts are entitled to a set number of Edge Rules (from which Bulk Redirect Rules draw down), Bulk Redirect Lists, and URL Redirects, depending on the highest Cloudflare plan within the account.

Feature                                       Enterprise   Business   Pro   Free
Edge Rules (for use of Bulk Redirect Rules)   50+          15         15    15
Bulk Redirect Lists                           25+          5          5     5
URL Redirects                                 10,000+      500        500   20

For example, an account with ten zones, all on the Free plan, would be entitled to 15 Edge Rules, 5 Bulk Redirect Lists, and 20 URL Redirects that can be stored within those lists.

An account with one Pro zone and two Free plan zones would be entitled to 15 Edge Rules, 5 Bulk Redirect Lists, and 500 URL Redirects that can be stored within those lists.

Enterprise customers have a default of 10,000 URL Redirects to be used across 25 lists. However, these numbers are negotiable on enquiry.

Planned enhancements

We intend to make a number of incremental improvements to the product in the coming months, specifically to the list experience, to allow editing of URL redirects and searching within lists.

In the near future we intend to bring to market a product that fulfils the other common request for URL redirects, and deliver ‘Dynamic URL Redirects’. Whilst Bulk Redirects supports hundreds of thousands of URL redirects, those URL redirects are relatively prescriptive (from a.com/b to b.com/a, for example). There is still a requirement for more complex, rich URL redirects, e.g., device-specific URL redirects, country-specific URL redirects, URL redirects that allow regular expressions in their target URL, and so forth. We aspire to offer a full range of functionality to support as many use cases as possible.

Try it now

Bulk Redirects can be used to improve operations, simplify complex configurations, and ease website management, amongst many other use cases. Try out Bulk Redirects yourself today.