Tag Archives: Cloudflare Workers

A Workers optimization that reduces your bill

Post Syndicated from Kenton Varda original https://blog.cloudflare.com/workers-optimization-reduces-your-bill/

Recently, we made an optimization to the Cloudflare Workers runtime which reduces the amount of time Workers need to spend in memory. We’re passing the savings on to you for all your Unbound Workers.

Background

Workers are often used to implement HTTP proxies, where JavaScript is used to rewrite an HTTP request before sending it on to an origin server, and then to rewrite the response before sending it back to the client. You can implement any kind of rewrite in a Worker, including both rewriting headers and bodies.

Many Workers, though, do not actually modify the response body, but instead simply allow the bytes to pass through from the origin to the client. In this case, the Worker’s application code has finished executing as soon as the response headers are sent, before the body bytes have passed through. Historically, the Worker was nevertheless considered to be “in use” until the response body had fully finished streaming.
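
To make this concrete, here is a minimal sketch of such a pass-through proxy (the origin hostname and the header being rewritten are placeholders):

export default {
  async fetch(request: Request) {
    // Rewrite the request before forwarding it to the origin
    const url = new URL(request.url)
    url.hostname = 'origin.example.com' // placeholder origin
    const headers = new Headers(request.headers)
    headers.set('X-Forwarded-By', 'my-worker') // placeholder header
    // Returning the origin response as-is lets the body bytes stream
    // straight through to the client with no further application code
    return await fetch(url.toString(), { method: request.method, headers, body: request.body })
  }
}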

For billing purposes, under the Workers Unbound pricing model, we charge duration-memory (gigabyte-seconds) for the time in which the Worker is in use.
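
As a worked example with hypothetical numbers: a Worker allocated 128 MB that is in use for two seconds accrues 0.125 GB × 2 s = 0.25 GB-s. If most of those two seconds were spent streaming the response body, ending the billable window when the headers are returned shrinks that figure accordingly.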

The change

On December 15 and 16, we made a change to the way we handle requests whose responses stream through the Worker without the content being modified. This change means that we can mark application code as “idle” as soon as the response headers are returned.

Since no further application code will execute on behalf of the request, the system does not need to keep the request state in memory – it only needs to track the low-level native sockets and pump the bytes through. So now, during this time, the Worker will be considered idle, and could even be evicted before the stream completes (though this would be unlikely unless the stream lasts for a very long time).


As a result of this change, we’ve seen that the time a Worker is considered “in use” by any particular request has dropped by an average of 70%. Of course, this number varies a lot depending on the details of each Worker. Some may see no benefit, others may see an even larger benefit.

This change is totally invisible to the application. To any external observer, everything behaves as it did before. But, since the system now considers a Worker to be idle during response streaming, the response streaming time will no longer be billed. So, if you saw a drop in your bill, this is why!

But it doesn’t stop there!

The change also applies to a few other frequently used scenarios, namely WebSocket proxying, reading from the cache, and streaming from KV.

WebSockets: once a Worker has arranged to proxy a WebSocket through, as long as it isn’t handling individual messages in Worker code, the Worker does not remain in use during the proxying. The change applies to regular stateless Workers, but not to Durable Objects, which are not usually used for proxying.

export default {
  async fetch(request: Request) {
    // Do anything before
    const upgradeHeader = request.headers.get('Upgrade')
    if (upgradeHeader === 'websocket') {
      // Proxy the WebSocket straight through to the origin
      return await fetch(request)
    }
    // Or handle other requests
  }
}

Reading from Cache: If you return the response from a cache.match call, the Worker is considered idle as soon as the response headers are returned.

export default {
  async fetch(request: Request) {
    let response = await caches.default.match('https://example.com')
    if (response) {
      return response
    }
    // get/create response and put into cache
  }
}
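
For completeness, one way the miss path might look (a sketch that assumes everything is safe to cache; a real Worker would apply its own cacheability rules):

export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext) {
    const cache = caches.default
    let response = await cache.match(request)
    if (!response) {
      response = await fetch(request)
      // Store a copy without delaying the response to the client
      ctx.waitUntil(cache.put(request, response.clone()))
    }
    return response
  }
}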

Streaming from KV: And lastly, when you stream from KV. This one is a bit trickier to get right, because people often retrieve the value from KV as a string or JSON object and then create a response with that value. But if you fetch the value as a stream, as done in the example below, you can create a Response with the ReadableStream directly.

interface Env {
  MY_KV_NAME: KVNamespace
}

export default {
  async fetch(request: Request, env: Env) {
    const readableStream = await env.MY_KV_NAME.get('hello_world.pdf', { type: 'stream' })
    if (readableStream) {
      return new Response(readableStream, { headers: { 'content-type': 'application/pdf' } })
    }
  },
}

Interested in Workers Unbound?

If you are already using Unbound, your bill will have automatically dropped already.

Now is a great time to check out Unbound if you haven’t already, especially since we’ve also recently removed egress fees. Unbound allows you to build more complex workloads on our platform and only pay for what you use.

We are always looking for opportunities to make Workers better. Often that improvement takes the form of powerful new features such as the soon-to-be released Service Bindings and, of course, performance enhancements. This time, we are delighted to make Cloudflare Workers even cheaper than they already were.

Miniflare 2.0: fully-local development and testing for Workers

Post Syndicated from Brendan Coll original https://blog.cloudflare.com/miniflare/


In July 2021, I launched Miniflare 1.0, a fun, full-featured, fully-local simulator for Workers, on the Cloudflare Workers Discord server. What began as a pull request to the cloudflare-worker-local project has now become an official Cloudflare project and a core part of the Workers ecosystem, being integrated into wrangler 2.0. Today, I’m thrilled to announce the release of the next major version: a more modular, lightweight, and accurate Miniflare 2.0. 🔥

Background: Why Miniflare was created


At the end of 2020, I started to build my first Workers app. Initially I used the then recently released wrangler dev, but found it was taking a few seconds before changes were reflected. While this was still impressive considering it was running on the Workers runtime, I was using Vite to develop the frontend, so I knew a significantly faster developer experience was possible.

I then found cloudflare-worker-local and cloudworker, which were local Workers simulators, but didn’t have support for newer features like Workers Sites. I wanted a magical simulator that would just work ✨ in existing projects, focusing on the developer experience, and — by the reception of Miniflare 1.0 — I wasn’t the only one.

Miniflare 1.0 brought near instant reloads, source map support (so you could see where errors were thrown), cleaner logs (no more { unknown object }s or massive JSON stack traces), a pretty error page that highlighted the cause of the error, step-through debugger support, and more.

Pretty error page powered by `youch`

The next iteration: What’s new in version 2

In the relatively short time since the launch of Miniflare 1.0 in July, Workers as a platform has improved dramatically. Durable Objects now have input and output gates for ensuring consistency without explicit transactions, Workers has compatibility dates allowing developers to opt-into backwards-incompatible fixes, and you can now write Workers using JavaScript modules.

Miniflare 2 supports all these features and has been completely redesigned with three primary design goals:

  1. Modular: Miniflare 2 splits Workers components (KV, Durable Objects, etc.) into separate packages (@miniflare/kv, @miniflare/durable-objects, etc.) that you can import on their own for testing. This will also make it easier to add support for new, unreleased features like R2 Storage.
  2. Lightweight: Miniflare 1 included 122 third-party packages with a total install size of 88.3MB. Miniflare 2 reduces this to 23 packages and 6MB by leveraging features included with Node.js 16.
  3. Accurate: Miniflare 2 replicates the quirks and thrown errors of the real Workers runtime, so you’ll know before you deploy if things are going to break. Of course, wrangler dev will always be the most accurate preview, running on the real edge with real data, but Miniflare 2 is really close!

It also adds a new live-reload feature and first-class support for testing with Jest for an even more enjoyable developer experience.

Getting started with local development

As mentioned in the introduction, Miniflare 2.0 is now integrated into wrangler 2.0, so you just need to run npx wrangler@beta dev --local to start a fully-local Worker development server, or npx wrangler@beta pages dev to start a Cloudflare Pages Functions server. Make sure you’ve got the latest release of Node.js installed.

However, if you’re using Wrangler 1 or want to customize your local environment, you can install Miniflare standalone. If you’ve got an existing worker with a wrangler.toml file, just run npx miniflare --live-reload to start a live-reloading development server. Miniflare will automatically load configuration like KV namespaces or Durable Object bindings from your wrangler.toml file and secrets from a .env file.

Miniflare is highly configurable. For example, if you want to persist KV data between restarts, include the --kv-persist flag. See the Miniflare docs or run npx miniflare --help for many more options, like running multiple workers or starting an HTTPS server.
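
For instance, these are illustrative invocations (run npx miniflare --help for the exact flags in your version):

# Persist KV data to disk between restarts
$ npx miniflare --kv-persist
# Serve over HTTPS with a self-signed certificate
$ npx miniflare --https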

If you’ve got a scheduled event handler, you can manually trigger it by visiting http://localhost:8787/cdn-cgi/mf/scheduled in your browser.
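
If you don’t have one yet, a scheduled handler can be as small as the following sketch (service worker syntax; the cleanup function is hypothetical):

addEventListener("scheduled", (event) => {
  // event.scheduledTime is the time the cron trigger fired
  event.waitUntil(handleScheduled(event.scheduledTime));
});

async function handleScheduled(scheduledTime) {
  // hypothetical periodic work, e.g. expiring stale KV entries
}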

Testing for Workers with Jest

Jest is one of the most popular JavaScript testing frameworks, so it made sense to add first-class support for it. Miniflare 2.0 includes a custom test environment that gives your tests access to Workers runtime APIs.

For example, suppose we have the following worker, written using JavaScript modules, that stores the number of times each URL is visited in Workers KV:

Aside: Workers KV is not designed for counters as it’s eventually consistent. In a real worker, you should use Durable Objects. This is just a simple example.

// src/index.mjs
export async function increment(namespace, key) {
  // Get the current count from KV
  const currentValue = await namespace.get(key);
  // Increment the count, defaulting it to 0
  const newValue = parseInt(currentValue ?? "0") + 1;
  // Store and return the new count
  await namespace.put(key, newValue.toString());
  return newValue;
}

export default {
  async fetch(request, env, ctx) {
    // Use the pathname for a key
    const url = new URL(request.url);
    const key = url.pathname;
    // Increment the key
    const value = await increment(env.COUNTER_NAMESPACE, key);
    // Return the new incremented count
    return new Response(`count for ${key} is now ${value}`);
  },
};

# wrangler.toml
kv_namespaces = [
  { binding = "COUNTER_NAMESPACE", id = "..." }
]

[build.upload]
format = "modules"
dist = "src"
main = "./index.mjs"

…we can write unit tests like so:

// test/index.spec.mjs
import worker, { increment } from "../src/index.mjs";

// When using `format = "modules"`, bindings are included in the `env` parameter,
// which we don't have access to in tests. Miniflare therefore provides a custom
// global method to access these.
const { COUNTER_NAMESPACE } = getMiniflareBindings();

test("should increment the count", async () => {
  // Seed the KV namespace
  await COUNTER_NAMESPACE.put("a", "3");

  // Perform the increment
  const newValue = await increment(COUNTER_NAMESPACE, "a");
  const storedValue = await COUNTER_NAMESPACE.get("a");

  // Check the return value of increment
  expect(newValue).toBe(4);
  // Check increment had the side effect of updating KV
  expect(storedValue).toBe("4");
});

test("should return new count", async () => {
  // Note we're using Worker APIs in our test, without importing anything extra
  const request = new Request("http://localhost/a");
  const response = await worker.fetch(request, { COUNTER_NAMESPACE });

  // Each test gets its own isolated storage environment, so the changes to "a"
  // are *undone* automatically. This means at the start of this test, "a"
  // wasn't in COUNTER_NAMESPACE, so it defaulted to 0, and the count is now 1.
  expect(await response.text()).toBe("count for /a is now 1");
});

// jest.config.js
const { defaults } = require("jest-config");

module.exports = {
  testEnvironment: "miniflare", // ✨
  // Tell Jest to look for tests in .mjs files too
  testMatch: [
    "**/__tests__/**/*.?(m)[jt]s?(x)",
    "**/?(*.)+(spec|test).?(m)[tj]s?(x)",
  ],
  moduleFileExtensions: ["mjs", ...defaults.moduleFileExtensions],
};

…and run them with:

# Install dependencies
$ npm install -D jest jest-environment-miniflare
# Run tests with experimental ES modules support
$ NODE_OPTIONS=--experimental-vm-modules npx jest

For more details about the custom test environment and isolated storage, see the Miniflare docs or this example project that also uses TypeScript and Durable Objects.

Not using Jest? Miniflare lets you write your own integration tests with vanilla Node.js or any other test framework. For an example using AVA, see the Miniflare docs or this repository.

How Miniflare works

Let’s now dig deeper into how some interesting parts of Miniflare work.

Miniflare is powered by Node.js, a JavaScript runtime built on Chrome’s V8 JavaScript engine. V8 is the same engine that powers the Cloudflare Workers runtime, but Node and Workers implement different runtime APIs on top of it. To ensure Node’s APIs aren’t visible to users’ worker code and to inject Workers’ APIs, Miniflare uses the Node.js vm module. This lets you run arbitrary code in a custom V8 context.

The Request and Response classes are a core part of Workers. Miniflare gets these from undici, a project written by the Node team to bring fetch to Node. For service workers, we also need a way to add event listeners and dispatch events using the EventTarget API, which was added in Node 15.

With that we can build a mini-miniflare:

import vm from "vm";
import { Request, Response } from "undici";

// An instance of this class will become the global scope of our Worker,
// extending EventTarget for addEventListener and dispatchEvent
class ServiceWorkerGlobalScope extends EventTarget {
  constructor() {
    super();

    // Add Worker runtime APIs
    this.Request = Request;
    this.Response = Response;

    // Make sure this is bound correctly when EventTarget methods are called
    this.addEventListener = this.addEventListener.bind(this);
    this.removeEventListener = this.removeEventListener.bind(this);
    this.dispatchEvent = this.dispatchEvent.bind(this);
  }
}

// An instance of this class will be passed as the event parameter to "fetch"
// event listeners
class FetchEvent extends Event {
  constructor(type, init) {
    super(type);
    this.request = init.request;
  }

  respondWith(response) {
    this.response = response;
  }
}

// Create a V8 context to run user code in
const globalScope = new ServiceWorkerGlobalScope();
const context = vm.createContext(globalScope);

// Example user worker code, this could be loaded from the file system
const workerCode = `
addEventListener("fetch", (event) => {
  event.respondWith(new Response("Hello mini-miniflare!"));
})
`;
const script = new vm.Script(workerCode);

// Run the user's code, registering the "fetch" event listener
script.runInContext(context);

// Create an example request, this could come from an incoming HTTP request
const request = new Request("http://localhost:8787/");
const event = new FetchEvent("fetch", { request });

// Dispatch the event and log the response
globalScope.dispatchEvent(event);
console.log(await event.response.text()); // Hello mini-miniflare!

Plugins

Dependency graph of the Miniflare monorepo.

There are a lot of Workers runtime APIs, so adding and configuring them all manually as above would be tedious. Therefore, Miniflare 2 has a plugin system that allows each package to export globals and bindings to be included in the sandbox. Options have annotations describing their type, CLI flag, and where to find them in Wrangler configuration files:

@Option({
  // Define type for runtime validation of the CLI flag
  type: OptionType.ARRAY,
  // Use --kv instead of auto-generated --kv-namespace for the CLI flag
  name: "kv",
  // Define -k as an alias
  alias: "k",
  // Displayed in --help
  description: "KV namespace to bind",
  // Where to find this option in wrangler.toml
  fromWrangler: (config) => config.kv_namespaces?.map(({ binding }) => binding),
})
kvNamespaces?: string[];

Durable Objects

Before input and output gates were added, you usually needed to use the transaction() method to ensure consistency:

async function incrementCount() {
  let value;
  await this.storage.transaction(async (txn) => {
    value = await txn.get("count");
    await txn.put("count", value + 1);
  });
  return value;
}

Miniflare implements this using optimistic concurrency control (OCC). However, input and output gates are now available, so to avoid race conditions when simulating newly written Durable Object code, Miniflare 2 needed to implement them.

From the description in the gates announcement blog post:

Input gates: While a storage operation is executing, no events shall be delivered to the object except for storage completion events. Any other events will be deferred until such a time as the object is no longer executing JavaScript code and is no longer waiting for any storage operations. We say that these events are waiting for the "input gate" to open.

…we can see input gates need to have two methods, one for closing the gate while a storage operation is running and one for waiting until the input gate is open:

class InputGate {
  async runWithClosed<T>(closure: () => Promise<T>): Promise<T> {
    // 1. Close the input gate
    // 2. Run the closure and store the result
    // 3. Open the input gate
    // 4. Return the result
  }

  async waitForOpen(): Promise<void> {
    // 1. Check if the input gate is open
    // 2. If it is, return
    // 3. Otherwise, wait until it is
  }
}
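
As a rough illustration only (not Miniflare’s actual implementation), the two methods could be backed by a lock counter and a queue of waiters:

class InputGate {
  private lockCount = 0;
  private waiters: (() => void)[] = [];

  async runWithClosed<T>(closure: () => Promise<T>): Promise<T> {
    this.lockCount++; // close the gate
    try {
      return await closure();
    } finally {
      this.lockCount--;
      if (this.lockCount === 0) {
        // Gate is open again: release everyone blocked in waitForOpen
        for (const waiter of this.waiters.splice(0)) waiter();
      }
    }
  }

  waitForOpen(): Promise<void> {
    if (this.lockCount === 0) return Promise.resolve();
    return new Promise((resolve) => this.waiters.push(resolve));
  }
}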

Each Durable Object has its own InputGate. In the storage implementation, we call runWithClosed to defer other events until the storage operation completes:

class DurableObjectStorage {
  async get<Value>(key: string): Promise<Value | undefined> {
    return this.inputGate.runWithClosed(() => {
      // Get key from storage
    });
  }
}

…and whenever we’re ready to deliver another event, we call waitForOpen:

import { fetch as baseFetch } from "undici";

async function fetch(input, init) {
  const response = await baseFetch(input, init);
  await inputGate.waitForOpen();
  return response;
}

You may have noticed a problem here. Where does inputGate come from in fetch? We only have one global scope for the entire Worker and all its Durable Objects, so we can’t have a fetch per Durable Object InputGate. We also can’t ask the user to pass it around as another parameter to all functions that need it. We need some way of storing it in a context that’s passed around automatically between potentially async functions. For this, we can use another lesser-known Node module, async_hooks, which includes the AsyncLocalStorage class:

import { AsyncLocalStorage } from "async_hooks";

const inputGateStorage = new AsyncLocalStorage<InputGate>();

const inputGate = new InputGate();
await inputGateStorage.run(inputGate, async () => {
  // This closure will run in an async context with inputGate
  await fetch("https://example.com");
});

async function fetch(input: RequestInfo, init: RequestInit): Promise<Response> {
  const response = await baseFetch(input, init);
  // Get the input gate in the current async context
  const inputGate = inputGateStorage.getStore();
  await inputGate.waitForOpen();
  return response;
}

Durable Objects also include a blockConcurrencyWhile(closure) method that defers events until the closure completes. This is exactly the runWithClosed() method:

class DurableObjectState {
  // ...

  blockConcurrencyWhile<T>(closure: () => Promise<T>): Promise<T> {
    return this.inputGate.runWithClosed(closure);
  }
}

However, there’s a problem with what we’ve got at the moment. Consider the following code:

export class CounterObject {
  constructor(state: DurableObjectState) {
    state.blockConcurrencyWhile(async () => {
      const res = await fetch("https://example.com");
      this.data = await res.text();
    });
  }
}

blockConcurrencyWhile closes the input gate, but fetch won’t return until the input gate is open, so we’re deadlocked! To fix this, we need to make InputGates nested:

class InputGate {
  constructor(private parent?: InputGate) {}

  async runWithClosed<T>(closure: () => Promise<T>): Promise<T> {
    // 1. Close the input gate, *and any parents*
    // 2. *Create a new child input gate with this as its parent*
    const childInputGate = new InputGate(this);
    // 3. Run the closure, *under the child input gate's context*
    // 4. Open the input gate, *and any parents*
    // 5. Return the result
  }
}

Now the input gate outside of blockConcurrencyWhile will be closed, so fetches to the Durable Object will be deferred, but the input gate inside the closure will be open, so the fetch can return.

This glosses over some details, but you can check out the gates implementation for additional context and comments. 🙂

HTMLRewriter

HTMLRewriter is another novel class that allows parsing and transforming HTML streams. In the edge Workers runtime, it’s powered by C-bindings to the lol-html Rust library. Luckily, Ivan Nikulin built WebAssembly bindings for this, so we’re able to use the same library in Node.js.

However, these were missing support for async handlers that allow you to access external resources when rewriting:

class UserElementHandler {
  async element(node) {
    const response = await fetch("/user");
    // ...
  }
}

The WebAssembly bindings Rust code includes something like:

macro_rules! make_handler {
  ($handler:ident, $JsArgType:ident, $this:ident) => {
    move |arg: &mut _| {
      // `js_arg` here is the `node` parameter from above
      let js_arg = JsValue::from(arg);
      // $handler here is the `element` method from above
      match $handler.call1(&$this, &js_arg) {
        Ok(res) => {
          // Check if this is an async handler
          if let Some(promise) = res.dyn_ref::<JsPromise>() {
            await_promise(promise);
          }
          Ok(())
        }
        Err(e) => ...,
      }
    }
  };
}

The key thing to note here is that the Rust move |...| { ... } closure is synchronous, but handlers can be asynchronous. This is like trying to await a Promise in a non-async function.

To solve this, we use the Asyncify feature of Binaryen, a set of tools for working with WebAssembly modules. Whenever we call await_promise, Asyncify unwinds the current WebAssembly stack into some temporary storage. Then in JavaScript, we await the Promise. Finally, we rewind the stack from the temporary storage to the previous state and continue rewriting where we left off.

You can find the full implementation in the html-rewriter-wasm package.


The future of Miniflare

As mentioned earlier, Miniflare is now included in wrangler 2.0. Try it out and let us know what you think!

I’d like to thank everyone on the Workers team at Cloudflare for building such an amazing platform and supportive community. Special thanks to anyone who’s contributed to Miniflare, opened issues, given suggestions, or asked questions in the Discord server.

Maybe now I can finish off my original workers project… 😅

Supporting Remix with full stack Cloudflare Pages

Post Syndicated from Greg Brimble original https://blog.cloudflare.com/remix-on-cloudflare-pages/


We announced the open beta of full stack Cloudflare Pages in November and have since seen widespread uptake from developers looking to add dynamic functionality to their applications. Today, we’re excited to announce Pages’ support for Remix applications, powered by our full stack platform.

The new kid on the block: Remix

Remix is a new framework that is focused on fully utilizing the power of the web. Like Cloudflare Workers, it uses modern JavaScript APIs, and it places emphasis on web fundamentals such as meaningful HTTP status codes, caching and optimizing for both usability and performance. One of the biggest features of Remix is its portability: Remix provides a platform-agnostic interface and adapters allowing it to be deployed to a growing number of providers. Cloudflare Workers was available at Remix’s launch, but what makes Workers different in this case is the native compatibility it can offer.

One of the main inspirations for Remix was the way Cloudflare Workers uses native web APIs for handling HTTP requests and responses. It’s a brilliant decision because developers are able to reuse knowledge on the server that they gained building apps in the browser! Remix runs natively on Cloudflare Workers, and the results we’ve seen so far are fantastic. We are incredibly excited about the potential that Cloudflare Workers and Pages unlocks for building apps that run at the edge!
Michael Jackson, CEO at Remix

This native compatibility means that as you learn how to write applications in Remix, you’re also learning how to write Cloudflare Workers (and vice versa). But it also means better performance! Rather than having a Node.js process running on a server — which could be far away from your users, could be overwhelmed in the case of high traffic, and has to map between Node.js’ runtime and the modern Fetch API — you can deploy to Cloudflare’s network and requests will be routed to any one of our 250+ locations. This means better performance for your users, with 95% of the entire Internet-connected world lying within 50ms of a Cloudflare presence, and 80% of the Internet-connected world within 20ms.

Integrating with Cloudflare

More often than not, full stack applications need some place to store data. Cloudflare offers three all-encompassing options here:

  • KV, our high performance and globally replicated key-value datastore.
  • Durable Objects, our strongly consistent coordination primitive which can be restricted to a given jurisdiction.
  • R2 (coming soon!), our fast and reliable object storage.

Remix already tightly integrates with KV for session storage, and a Durable Objects integration is in progress. Additionally, Cloudflare’s other features, such as geolocating incoming requests, HTMLRewriter and our Cache API, are all available from within your Remix application.
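
As a sketch of what the KV session integration looks like (the import path and the SESSION_KV binding are assumptions; check the Remix docs for your version):

import { createCloudflareKVSessionStorage } from "remix";

// SESSION_KV is an assumed KV namespace binding for your deployment
declare const SESSION_KV: KVNamespace;

const { getSession, commitSession, destroySession } =
  createCloudflareKVSessionStorage({
    kv: SESSION_KV,
    cookie: {
      name: "__session",
      secrets: ["s3cr3t"], // placeholder secret
      sameSite: "lax",
    },
  });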

Deploying to Cloudflare Pages

Cloudflare Pages was already capable of serving static assets from the Cloudflare edge, but now with November’s release of serverless functions powered by Cloudflare Workers, it has evolved into an entire platform perfectly suited for hosting full stack applications.

To get started with Remix and Cloudflare Pages today, run the following in your terminal, and select “Cloudflare Pages” when asked “Where do you want to deploy?”:

npx create-remix@latest

Then create a repository on GitHub or GitLab, git commit, and git push the newly created folder. Finally, navigate to Cloudflare Pages, select your repository, and select “Remix” from the dropdown of framework presets. Your new application will be available on your pages.dev subdomain, or you can connect it to any of your custom domains.

Your folder will have a functions/[[path]].ts file. This is the functions integration where we serve your Remix application on all paths of your website. The app folder is where the bulk of your Remix application’s logic is. With Pages’ support for rollbacks and preview deployments, you can safely test any changes to your application, and, with the wrangler 2.0 beta, testing locally is just a simple case of npm run dev.
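
For a sense of the mechanics, a catch-all Pages Function has roughly this shape (the stub below stands in for the generated Remix request handler):

// functions/[[path]].ts
export const onRequest: PagesFunction = async (context) => {
  // Pages invokes this for every path; the generated file delegates to
  // Remix's server build instead of responding directly like this stub
  return new Response(`Handled ${new URL(context.request.url).pathname}`)
}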

The future of frameworks on Cloudflare Pages

Remix is the second framework to integrate natively with full stack Cloudflare Pages, following SvelteKit, which was available at launch. But this is just the beginning! We have a lot more in store for our integration with Remix and other frameworks. Stay tuned for improvements to Pages’ build times and other areas of the developer experience, as well as new features on the platform.

Join our community!

If you are new to the Cloudflare Pages and Workers world, join our Discord server and show us what you’re building. Whether it’s a new full stack application on Remix or even a simple static site, we’d love to hear from you.

Maximum redirects, minimum effort: Announcing Bulk Redirects

Post Syndicated from Sam Marsh original https://blog.cloudflare.com/maximum-redirects-minimum-effort-announcing-bulk-redirects/


The Internet is a dynamic place. Websites are constantly changing as technologies and business practices evolve. What was front-page news is quickly moved into a sub-directory. To ensure website visitors continue to see the correct webpage even if it has been moved, administrators often implement URL redirects.

A URL redirect is a mapping from one location on the Internet to another, effectively telling the visitor’s browser that the location of the page has changed, and where they can now find it. This is achieved by providing a virtual ‘link’ between the content’s original and new location.

URL redirects have typically been implemented as Page Rules within Cloudflare, up to a maximum of 125 URL redirects per zone. This limitation meant customers with a need for more URL redirects had to implement alternative solutions such as Cloudflare Workers to achieve their goals.

To simplify the management and implementation of URL redirects at scale we have created Bulk Redirects. Bulk Redirects is a new product that allows an administrator to upload and enable hundreds of thousands of URL redirects within minutes, without having to write a single line of code.

We’ve moved!

Mail forwarding is a product offered by postal services such as USPS and Royal Mail that allows you to continue to receive letters and parcels even if they are sent to an address where you no longer reside.

The postal services achieve this by effectively maintaining a register of your new location and your old location. This allows the systems to detect ‘this letter is for Sam Marsh at address A, but he now lives at address B, therefore send the mail there’.

Photo by Christian Lue on Unsplash

This problem can be solved by manually updating my bank, online shops, etc. and having them send the parcels and letters directly to my new address. However, that assumes I know of every person and business who has my address. And it also relies on those people to remember I moved address and make updates on their side. For example, Grandma Marsh might have forgotten about my new address — I’ve moved a lot — and she may send my birthday card to my old address. Or all those Christmas cards from people who I don’t speak with regularly. Those will go to my old address also.  To solve this, I can use mail forwarding to ensure I still receive my cards and other mail, even though I no longer live at that address.

URL redirects are the Internet equivalent of mail forwarding.

URL redirects are effectively a table with two columns: what traffic am I looking for, and where should I send that traffic? This mapping allows an administrator to define “whenever visitors go to https://www.cloudflare.com/bots I want to redirect them to the new location https://www.cloudflare.com/pg-lp/bot-mitigation-fight-mode”.

With this technology, our sales and marketing teams can use the vanity URL all across the Internet, safe in the knowledge that should the backend systems change they won’t need to go to all the places this URL has been posted and update it. Instead, the intermediary system that handles the URL redirects can be updated. One location. Not thousands.

Why use URL redirects?

URL redirects are used to solve a number of use cases. One such common use case is to use URL redirects to force all visitors to connect to the website over a secure HTTPS connection, instead of via plain HTTP, to improve security. It’s such a common use case we created a toggle in the Cloudflare dashboard, “Always use HTTPS”, which redirects all HTTP requests to HTTPS when enabled.

URL redirects are also used for vanity domains and hyperlinks. In these scenarios, URL redirects are deployed to provide a mapping of short, user-friendly URLs to long, server-friendly URLs. Not only are shorter URLs more memorable, but they also score better from an SEO perspective. According to Backlinko, ‘Excessively long URLs may hurt a page’s search engine visibility. In fact, several industry studies have found that short URLs tend to have a slight edge in Google’s search results.’

Photo by Hal Gatewood on Unsplash

Another use case is where a company may have a local domain for each of their markets, which they want to redirect back to the main website, e.g., redirect www.example.fr and www.example.de to www.example.com/eu/fr and www.example.com/eu/de, respectively.

This also covers company acquisition, where a company is acquired and the acquiring company wants to redirect hyperlinks to the relevant pages on their own website, e.g., redirect www.example.com to www.companyB.com/portfolio/example.

Finally, one of the most common use cases for URL redirects is to maintain uptime during a website migration. As companies migrate their websites from one platform to another, or one domain to another, URL redirects ensure visitors continue to see the correct content. Without these URL redirects, hyperlinks in emails, blogs, marketing brochures, etc. would fail to load, potentially costing the business revenue in lost sales and brand damage. For example, www.example.com/products/golf/product-goes-here would redirect to the new website at products.example.com/golf/product-goes-here.

How are URL redirects implemented today?

Ensuring these URL redirects are executed correctly is often the job of the reverse proxy — a server which sits between the client and the origin whose job is, amongst many others, to re-route received traffic to the correct destination.

For example, when using NGINX, a popular web server, the administrator would have a line in the config similar to the one below to implement a URL redirect:

`rewrite ^/oldpage$ http://www.example.com/newpage permanent;`

Historically, these web servers were located physically within a company’s data center. Administrators then had full control over the URLs received, and could create the redirect rules as and when needed.

As the world rapidly migrates on-premise applications and solutions to the cloud, administrators can find themselves in a situation where they can no longer do what they previously could. Not being responsible for the origin has a number of benefits, but it also comes with drawbacks such as lack of ‘control’. Previously, an administrator could quickly add a few config lines to the web server in front of their ecommerce platform. Moving to an online hosted platform makes this much more difficult to do.

As such, administrators have moved to platforms like Cloudflare where functionality such as URL redirects can be implemented in the cloud without the need to have administrator access to the origin.

The first way to implement a URL Redirect in Cloudflare is via a Forwarding URL Page Rule. Users can create a Page Rule which matches on a specific URL and redirects matching traffic to another specific URL, along with a status code — either a permanent redirect (301) or a temporary redirect (302):


Another method is to use Cloudflare Workers to implement URL redirects, either individually or as a map. For example, the code below is used to create a URL redirect map which runs when the Worker is invoked:

const redirectMap = new Map([
 ["/bulk1", "https://" + externalHostname + "/redirect2"],
 ["/bulk2", "https://" + externalHostname + "/redirect3"],
 ["/bulk3", "https://" + externalHostname + "/redirect4"],
 ["/bulk4", "https://google.com"],
])
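
For context, the full example from the library resolves the map inside a fetch handler along these lines (with externalHostname being a variable you define):

async function handleRequest(request) {
  const requestURL = new URL(request.url)
  const location = redirectMap.get(requestURL.pathname)
  if (location) {
    // Issue a permanent redirect to the mapped location
    return Response.redirect(location, 301)
  }
  // No mapping matched: pass the request through to the origin
  return fetch(request)
}

addEventListener("fetch", (event) => {
  event.respondWith(handleRequest(event.request))
})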

This snippet is taken from the Cloudflare Workers examples library and can be used to scale beyond the 125 URL redirect limit of Page Rules. However, it does require the administrator to be comfortable working with code and correctly configuring their Cloudflare Workers.

Introducing: Bulk Redirects

Speaking with Cloudflare users about URL redirects and their experience with our product offerings, “Give me a product which lets me upload thousands of URL redirects to Cloudflare via a GUI” was a very common request. Customers we interviewed typically wanted a simple way to upload a list of ‘from,to,response code’ without having to write a single line of code. And that’s what we are announcing today.

Bulk Redirects is now available for all Cloudflare plans. It is an account/organization-level product capable of supporting hundreds of thousands of URL redirects, all configured via the dashboard without having to write a single line of code.

The system is implemented in two parts. The first part is the Bulk Redirect List. This is effectively the redirect map, or ‘edge dictionary’, where users can upload their URL redirects:


Each URL redirect within the list contains three main elements. The first two elements are Source URL (the URL we are looking for) and Target URL (the URL we are going to redirect matching traffic to).

There is also the Status code. This is the ‘type’ of redirect. In addition to 301 (Moved Permanently) and 302 (Moved Temporarily) redirects, we have added support for the newer 307 (Temporary Redirect) and 308 (Permanent Redirect) redirect status codes.

We have also added support for specifying destination ports within the Target URL field, allowing URL redirects to non-standard ports, e.g., “Target URL: www.example.com:8443”.

If you have many URL redirects, you can upload them via a CSV file.

There are also four additional parameters available for each individual URL redirect.

Firstly, we have added two options to replace the ambiguity and confusion caused by the use of asterisks as wildcards. Take this source URL as an example: *.example.com/a/b. Would you expect www.example.com/a/b to match? How about example.com/a/b, or www.example.com/path*? Asterisks used as wildcards cause confusion and misunderstanding, and also increase the cost of implementation and maintenance from an engineering perspective. Therefore, we are not implementing them in Bulk Redirects.

Instead, we have added two discrete options: Include subdomains and Subpath matching. The Include subdomains option, once enabled, will match all subdomains to the left of the domain portion of the URL as well as the domain specified. For example, if there is a URL redirect with a source URL of example.com/a then traffic to b.example.com/a and c.b.example.com/a will also be redirected.

The Subpath matching option focuses on the opposite end of the URL. If this option is enabled, the redirect applies to the URL as well as all its subpaths. For example, if we have a URL redirect on www.example.com/foo with subpath matching enabled, we will match on that specific URL as well as all subpaths, e.g., www.example.com/foo/a, www.example.com/foo/a/, www.example.com/foo/a/b/c, etc., but not www.example.com/foobar.

These options provide a tremendous amount of flexibility and granularity for each URL redirect. However, for most use cases only the source URL, target URL, and status code options will need to be set.

Secondly, we have added two options relating to retaining portions of the original HTTP request: Preserve path suffix and Preserve query string. If subpath matching is enabled, Preserve path suffix can be used to copy the URI path from the originally requested URL and add it to the destination URL. For example, if there is a URL redirect of Source URL: example.co.uk, Target URL: www.example.com/a, then requests to example.co.uk/target will be redirected to www.example.com/a/target with both options enabled. Preserve query string can be used independently of the other options, and carries forward the URI query from the originally requested URL to the new URL.

Lists by themselves do not provide any redirection; they are simply the ‘lookup table’. To enable them we need to reference them via a Bulk Redirect Rule.

The rules themselves are very simple. By default, the user experience is to provide a name for the rule, a description, and select the Bulk Redirect List that should be invoked.


For users who require more granularity and control there are additional settings available under the Advanced options toggle. Within this section there are two editable sections: Expression and Key.


The first field, Expression, specifies the conditions that must be met in order for the rule to run. By default, all URL redirects of the specified list will apply.

The second field, Key, is closely related to the expression. The key is used in combination with the specified list to select the URL redirect to apply. The field used for the key should always be the same as the field used in the expression, i.e., the key should be http.request.full_uri if the field in the expression is http.request.full_uri, or conversely, the key should be raw.http.request.full_uri if the field in the expression is raw.http.request.full_uri.

There are two main use cases for modifying these settings. Firstly, users can edit these options to increase specificity in the trigger, e.g., ip.src.country == "GB" and http.request.full_uri in $redirect_list. This is useful for ensuring Bulk Redirect Lists are only applied when a visitor comes from specific countries, subnets, or ASNs — or also only applying a URL redirect list if the visitor is a verified bot, or the bot score is >35.

Secondly, users can edit these options to amend the URL being matched and used as a lookup in the given list, i.e., the user may choose to have URL redirects in their list(s) specifically for URLs that would be normalized, e.g., URLs containing specific percent-encoding.  To ensure these URL redirects still trigger, the settings in Advanced options should be used to edit the expression and key to use the raw.http.request.full_uri field instead.

Automating via the API

Another way to manage bulk redirects is via our API. Customers wishing to automate the addition of bulk redirects can use the API to either simply add URL redirects to an existing list, or automate the entire workflow — creating a list, adding URL redirects to the list, and enabling the list via a new redirect rule.

There are three main calls when creating bulk redirects via the API:

  1. Create the redirect list
  2. Load with URL redirects
  3. Enable via a rule (You will also need to create the ruleset if doing this for the first time).

For step 1, first create a mass redirect list via the API call:

curl --location --request POST 'https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/rules/lists' \
--header 'X-Auth-Email: <EMAIL_ADDRESS>' \
--header 'Content-Type: application/json' \
--header 'X-Auth-Key: <API_KEY>' \
--data-raw '{
 "name": "my_redirect_list_2",
 "description": "My redirect list 2",
 "kind": "redirect"
}'

The output will look similar to:

{
  "result": {
    "id": "499b94da726d4dbc9ce6bf6c96ef8b6a",
    "name": "my_redirect_list_2",
    "description": "My redirect list 2",
    "kind": "redirect",
    "num_items": 0,
    "num_referencing_filters": 0,
    "created_on": "2021-12-04T06:43:43Z",
    "modified_on": "2021-12-04T06:43:43Z"
  },
  "success": true,
  "errors": [],
  "messages": []
}

Capture the value of “id”, as this is the list id we will then add URL redirects to.

Next, in step 2 we will add URL redirects to our newly created list by executing a POST call to the id we captured previously – with our URL redirects in the body:

curl --location --request POST 'https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/rules/lists/<LIST_ID>/items' \
--header 'X-Auth-Email: <EMAIL_ADDRESS>' \
--header 'Content-Type: application/json' \
--header 'X-Auth-Key: <API_KEY>' \
--data-raw '[
 {
   "redirect": {
     "source_url": "www.example.com/a",
     "target_url": "https://www.example.net/a"
   }
 },
 {
   "redirect": {
     "source_url": "www.example.com/b",
     "target_url": "https://www.example.net/a/b",
     "status_code": 307,
     "include_subdomains": true
   }
 },
 {
   "redirect": {
     "source_url": "www.example.com/c",
     "target_url": "www.example.net/c",
     "status_code": 307,
     "include_subdomains": true
   }   
 }
]'

The output will look similar to:

{
  "result": {
    "operation_id": "491ab6411acf4a12a6c72df1385b095a"
  },
  "success": true,
  "errors": [],
  "messages": []
}

In step 3 we enable this list by creating a new mass redirect rule within the mass redirect account-level ruleset.

Note, if this is the first time you are creating a redirect rule you will need to use a different API call to create the ruleset. See the documentation here for more details. All subsequent updates to the rulesets are made by calls similar to below.

Firstly, we need to find our account-level ruleset’s id. To do this, we get a list of all account-level rulesets and look for the ruleset with the phase http_request_redirect:

curl --location --request GET 'https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/rulesets' \
--header 'X-Auth-Email: <EMAIL_ADDRESS>' \
--header 'Content-Type: application/json' \
--header 'X-Auth-Key: <API_KEY>'

The output will look similar to:

{
   "result": [
       {
           "id": "efb7b8c949ac4650a09736fc376e9aee",
           "name": "Cloudflare Managed Ruleset",
           "description": "Created by the Cloudflare security team, this ruleset is designed to provide fast and effective protection for all your applications. It is frequently updated to cover new vulnerabilities and reduce false positives.",
           "source": "firewall_managed",
           "kind": "managed",
           "version": "34",
           "last_updated": "2021-10-25T18:33:27.512161Z",
           "phase": "http_request_firewall_managed"
       },
       {
           "id": "4814384a9e5d4991b9815dcfc25d2f1f",
           "name": "Cloudflare OWASP Core Ruleset",
           "description": "Cloudflare's implementation of the Open Web Application Security Project (OWASP) ModSecurity Core Rule Set. We routinely monitor for updates from OWASP based on the latest version available from the official code repository",
           "source": "firewall_managed",
           "kind": "managed",
           "version": "33",
           "last_updated": "2021-10-25T18:33:29.023088Z",
           "phase": "http_request_firewall_managed"
       },
       {
           "id": "5ff4477e596448749d67da859230ac3d",
           "name": "My redirect ruleset",
           "description": "",
           "kind": "root",
           "version": "1",
           "last_updated": "2021-12-04T06:32:58.058744Z",
           "phase": "http_request_redirect"
       }
   ],
   "success": true,
   "errors": [],
   "messages": []
}

Our redirect ruleset is at the bottom of the output. Next we will add our new bulk redirect rule to this ruleset:

curl --location --request PUT 'https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/rulesets/<RULESET_ID>' \
--header 'X-Auth-Email: <EMAIL_ADDRESS>' \
--header 'Content-Type: application/json' \
--header 'X-Auth-Key: <API_KEY>' \
--data-raw '{
     "rules": [
   {
     "expression": "http.request.full_uri in $my_redirect_list",
     "description": "Bulk Redirect rule 2",
     "action": "redirect",
     "action_parameters": {
       "from_list": {
         "name": "my_redirect_list_2",
         "key": "http.request.full_uri"
       }
     }
   }
 ]
}'

The output will look similar to:

{
  "result": {
    "id": "5ff4477e596448749d67da859230ac3d",
    "name": "My redirect ruleset",
    "description": "",
    "kind": "root",
    "version": "2",
    "rules": [
      {
        "id": "615cf6ac24c04f439138fdc16bd20535",
        "version": "1",
        "action": "redirect",
        "action_parameters": {
          "from_list": {
            "name": "my_redirect_list_2",
            "key": "http.request.full_uri"
          }
        },
        "expression": "http.request.full_uri in $my_redirect_list",
        "description": "Bulk Redirect rule 2",
        "last_updated": "2021-12-04T07:04:16.701379Z",
        "ref": "615cf6ac24c04f439138fdc16bd20535",
        "enabled": true
      }
    ],
    "last_updated": "2021-12-04T07:04:16.701379Z",
    "phase": "http_request_redirect"
  },
  "success": true,
  "errors": [],
  "messages": []
}

With those API calls executed, our new list is created, loaded with URL redirects and enabled by the bulk redirect rule. Visitors to the URLs specified in our list will now be redirected appropriately.

Account-level benefits

One of the driving forces behind this product is the desire to make life easier for those customers with a large number of zones on Cloudflare. For these customers, URL redirects are a pain point when using Page Rules, as they need to navigate into each zone and configure URL redirects one at a time. This doesn’t scale very well.

Bulk Redirects add real value for customers in this situation. Instead of having to navigate into 400 zones and create one or two Page Rules for each, an administrator can now create and upload a single Bulk Redirect List, which contains all the URL redirects for the zones under management.

This means that if the customer simply wants 399 of those 400 zones to redirect to the “primary zone”, they can create a bulk redirect list with 399 entries, all pointing to example.com, and enable the Subpath matching and Include subdomains options on each. This vastly simplifies the management of the estate.

The same premise also applies to SSL for SaaS customers. For example, if example.com has 20 custom hostnames in their zone, customers can now create a Bulk Redirect List and Rule for each custom hostname, grouping each customer’s URL redirects into their own logical buckets.

Bulk Redirects is a game changer for companies with a large number of zones and customers under management.

Allowances

Bulk Redirects are available for all accounts. The packaging model for Bulk Redirects closely resembles that of “IP Lists”. Accounts are entitled to a set number of Edge Rules (from which “Bulk Redirect Rules” draws down), Bulk Redirect Lists, and URL Redirects depending on the highest Cloudflare plan within their account.

Feature                                     | Enterprise | Business | Pro | Free
Edge Rules (for use of Bulk Redirect Rules) | 50+        | 15       | 15  | 15
Bulk Redirect Lists                         | 25+        | 5        | 5   | 5
URL Redirects                               | 10,000+    | 500      | 500 | 20

For example, an account with ten zones, all on the Free plan, would be entitled to 15 Edge Rules, 5 Bulk Redirect Lists, and 20 URL Redirects that can be stored within those lists.

An account with one Pro zone and 2 Free plan zones would be entitled to 15 Edge Rules, 5 Bulk Redirect Lists, and 500 URL Redirects that can be stored within those lists.

Enterprise customers have a default of 10,000 URL Redirects to be used across 25 lists. However, these numbers are negotiable on enquiry.

Planned enhancements

We intend to make a number of incremental improvements to the product in the coming months, specifically to the list experience to allow for the editing of URL redirects and also for searching within lists.

In the near future we intend to bring to market a product to fulfill the other common request for URL redirects, and deliver ‘Dynamic URL Redirects’. Whilst Bulk Redirects supports hundreds of thousands of URL redirects, those URL redirects are relatively prescriptive — from a.com/b to b.com/a, for example.  There is still a requirement for supporting more complex, rich URL redirects, e.g., device-specific URL redirects, country-specific URL redirects, URL redirects that allow regular expressions in their target URL, and so forth. We aspire to offer a full range of functionality to support as many use cases as possible.

Try it now

Bulk Redirects can be used to improve operations, simplify complex configurations, and ease website management, amongst many other use cases.  Try out Bulk Redirects yourself today.

Zaraz use Workers to make third-party tools secure and fast

Post Syndicated from Yo'av Moshe original https://blog.cloudflare.com/zaraz-use-workers-to-make-third-party-tools-secure-and-fast/


We decided to create Zaraz around the end of March 2020. We were working on another product when we noticed everyone was asking us about the performance impact of having many third-parties on their website. Third-party content is an important part of the majority of websites today, powering analytics, chatbots, conversion pixels, widgets — you name it. A third party is an asset, often JavaScript, hosted outside the primary site-user relationship, that is not under the direct control of the site owner but is present with its “approval”. Yair wrote in detail about the process of measuring the impact of these third-party tools, and how we pivoted our startup, but I wanted to write about how we built Zaraz and what it actually does behind the scenes.

Third parties are great in that they let you integrate already-made solutions with your website, and you barely need to do any coding. Analytics? Just drop this code snippet. Chat widget? Just add this one. Third-party vendors will usually instruct you on how to add their tool, and from that point on things should just be working. Right? But when you add third-party code, it usually fetches even more code from remote sources, meaning you have less and less control over whatever is happening in your visitors’ browsers. How can you guarantee that none of the multitude of third parties on your website has been hacked and started stealing information, mining cryptocurrencies, or logging key presses on your visitors’ computers?

It doesn’t even have to be a deliberate hack. As we investigated more and more third-party tools, we noticed a pattern — sometimes it’s easier for a third-party vendor to collect everything, rather than being selective or careful about it. More often than not, user emails would find their way into a third-party tool, which could very easily put the website owner in trouble due to GDPR, CCPA, or similar.

How third-party tools work today

Usually, when you add a third party to your page, you’re asked to add a piece of JavaScript code to the <head> of your HTML. Google Analytics is by far the most popular third-party, so let’s see how it’s done there:

<!-- Google Analytics -->
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-XXXXX-Y', 'auto');
ga('send', 'pageview');
</script>
<!-- End Google Analytics -->

In this case, and in most other cases, the snippet that you’re pasting actually calls more JavaScript code to be executed. The snippet above creates a new <script> element, gives it the https://www.google-analytics.com/analytics.js src attribute, and appends it to the DOM. The browser then loads the analytics.js script, which includes more JavaScript code than the snippet itself, and sometimes asks the browser to download even more scripts, some of them bigger than analytics.js itself. So far, however, no analytics data has been captured at all, although this is why you’ve added Google Analytics in the first place.

The last line in the snippet, ga('send', 'pageview');, uses a function defined in the analytics.js file to finally send the pageview. The function is what actually captures the analytics data — it collects the browser type, the screen resolution, the language, and so on, then constructs a URL that includes all the data and sends a request to it. Only after this step is the analytics information captured. Every user behavior event you record using Google Analytics will result in another request.
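
To make this concrete, here is a simplified sketch (illustrative only, not Google’s actual code) of what such a “send” call boils down to:

function sendPageview() {
  // Gather a few of the browser properties an analytics script typically reads
  const params = new URLSearchParams({
    ua: navigator.userAgent,
    sr: `${screen.width}x${screen.height}`, // screen resolution
    ul: navigator.language,
    dl: document.location.href, // current page URL
  })
  // Ship the collected data to the vendor's endpoint, often via an image beacon
  new Image().src = 'https://analytics.example.com/collect?' + params.toString()
}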

The reality is that the vast majority of tools consist of more than one resource file, and that it’s practically impossible to know in advance what a tool is going to load without testing it on your website. You can use Request Map Generator to get a visual representation of all the resources loaded on your website, including how they call each other. Below is a Request Map of a demo e-commerce website we created:

[Image: Request Map of the demo e-commerce website]

That big blue circle is our website’s resources, and all other circles are third-party tools. You can see how the big green circle is actually a sub-request of the main Facebook pixel (fbevents.js), and how many tools, like LinkedIn at the top right, create a redirect chain in order to sync some data, at the expense of forcing the browser to make more and more network requests.

A new place to run a tag manager — the edge

Since we want to make third-parties faster, more secure, and private, we had to develop a fundamentally new way of thinking about them and a new system for how they run. We came up with a plan: build a platform where third-parties can run code outside the browser, while still getting access to the information they need and being able to talk with the DOM when necessary. We don’t believe third parties are evil: they never intended to slow down the Internet for everyone, they just didn’t have another option. Being able to run code on the edge, and run it fast, opened up new possibilities and changed all that, but the transition is hard.

By moving third-party code to run outside the browser, we get multiple wins.

  • The website will load faster and be more interactive. The browser rendering your website can now focus on the most important thing — your website. The downloading, parsing, and execution of all the third-party scripts will no longer compete with, or even block, the rendering and interactivity of your website.
  • Control over the data sent to third-parties. Third-party tools often automatically collect information from the page and from the browser to, for example, measure site behaviour/usage. In many cases, this information should stay private. For example, most tools collect the document.location, but we often see a “reset password” page including the user email in the URL, meaning emails are unknowingly being sent and saved by third-party providers, usually without consent. Moving the execution of the third parties to the edge means we have full visibility into what is being sent. This means we can provide alerts and filters in case tools are trying to collect Personally Identifiable Information, or mask the private parts of the data before they reach third-party servers (a sketch of this kind of masking follows this list). This feature is currently not available in the public beta, but contact us if you want to start using it today.
  • By reducing the amount of code being executed in the browser and by scanning all code that is executed in it, we can continuously verify that the code hasn’t been tampered with and that it only does what it is intended to do. We are working to connect Zaraz with Cloudflare Page Shield to do this automatically.
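
For illustration, here is a minimal sketch of that kind of masking (a hypothetical example, not Zaraz’s actual filtering code) which redacts email addresses from a URL before it is forwarded to a third party:

function maskEmails(url) {
  // Replace anything that looks like an email address with a redaction marker
  return url.replace(
    /[^\s@&=/?]+@[^\s@&=/?]+\.[^\s@&=/?]+/g,
    '[email redacted]'
  )
}

// maskEmails('https://example.com/reset?email=jane@example.com')
// => 'https://example.com/reset?email=[email redacted]'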

When you configure a third-party tool through a normal tag manager, a lot happens in your visitors’ browsers, out of your control. The tag manager loads and then evaluates all trigger rules to decide which tools to load. It then usually appends the script tags of those tools to the DOM of the page, making the browser fetch and execute the scripts. These scripts come from untrusted or unknown origins, increasing the risk of malicious code execution in the browser. They can also block the browser from becoming interactive until they have finished executing. They are generally free to do whatever they want in the browser, but most commonly they collect some information and send it to an endpoint on the third-party server. With Zaraz, the browser essentially does none of that.

Choosing Cloudflare Workers

When we set about coding Zaraz, we quickly understood that our infrastructure decisions would have a massive impact on our service. In fact, choosing the wrong one could mean we’d have no service at all. The most common alternative to Zaraz is traditional tag management software, which generally has no server-side component: whenever a user “publishes” a configuration, a JavaScript file is rendered and hosted as a static asset on a CDN. With Zaraz, the idea is to move most of the code evaluation out of the browser and respond with dynamically generated JavaScript code each time. We needed to find a solution that would allow us to have a server-side component but would be as fast as a CDN. Otherwise, there was a risk we might end up slowing down websites instead of making them faster.

We needed Zaraz to be served from a place close to the visiting user. Since setting up servers all around the world seemed like too big of a task for a very young startup, we looked at a few distributed serverless platforms. We approached this search with a small list of requirements:

  • Run JavaScript: Third-party tools all use JavaScript. If we were to port them to run in a cloud environment, the easiest way to do so would be to be able to use JavaScript as well.
  • Secure: We are processing sensitive data. We can’t afford the risk of someone hacking into our EC2 instance. We wanted to make sure that data doesn’t stay on some server after we’ve sent our HTTP response.
  • Fully programmable: Some CDNs allow setting complicated rules for handling a request, but altering HTTP headers, setting redirects or HTTP response codes isn’t enough. We need to generate JavaScript code on the fly, meaning we need full control over the responses. We also need to use some external JavaScript libraries.
  • Extremely fast and globally distributed: In the very early stages of the company, we already had customers in the USA, Europe, India, and Israel. As we were preparing to show them a Proof of Concept, we needed to be sure it would be fast wherever they were. We were competing with tag managers and Customer Data Platforms that have pretty fast response times, so we needed to be able to respond as fast as if our content were statically hosted on a CDN, or faster.

Initially we thought we would need to create Docker containers that would run around the globe and would use their own HTTP server, but then a friend from our Y Combinator batch said we should check out Cloudflare Workers.

At first, we thought it wouldn’t work — Workers doesn’t work like a Node.js application, and we felt that limitation would prevent us from building what we wanted. We planned to let Workers handle the requests coming from users’ browsers, and then use an AWS Lambda for the heavy lifting of actually processing data and sending it to third-party vendors.

Our first attempt with Workers was very simple: just confirming we could use it to actually return dynamic browser-side JavaScript that is generated on-the-fly:

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  let code = '(function() {'

  // The user-agent header may be absent, so fall back to an empty string
  if ((request.headers.get('user-agent') || '').includes('Firefox')) {
    code += `console.log('Hello Firefox!');`
  } else {
    code += `console.log('Hey other browsers...');`
  }

  code += '})();'

  return new Response(code, {
    headers: { 'content-type': 'text/javascript' }
  })
}

It was a tiny example, but I remember calling Yair afterwards and saying “this could actually work!”. It proved the flexibility of Workers. We had just created an endpoint that served a dynamically generated JavaScript file, and the response time was less than 10ms. We could now put <script src="path/to/worker.js"> in our HTML and treat this Worker like a normal JavaScript file.

As we took a deeper look, we found Workers answering demand after demand from our list, and learned we could even do the most complicated things inside Workers. The Lambda function started doing less and less, and was eventually removed. Our little Node.js proof-of-concept was easily converted to Workers.

Using the Cloudflare Workers platform: “standing on the shoulders of giants”

When we raised our seed round, we heard many questions like “if this can work, how come it wasn’t built before?” We often said that while the problem is a long-standing one, accessible edge computing is a new possibility. Later, in our first investor update after creating the prototype, we told them about the unbelievably fast response time we managed to achieve and got much praise for it — talk about “standing on the shoulders of giants”. Workers simply checked all our boxes. Running JavaScript and using the same V8 engine as the browser meant that we could keep the same environment when porting tools to run on the cloud (it also helped with hiring). It also opened the possibility of later using WebAssembly for certain tasks. The fact that Workers are serverless and stateless by default was a selling point for our own trustworthiness: we told customers we couldn’t save their personal data even by mistake, which was true. The integration between webpack and Wrangler meant that we could write a full-blown application — with modules and external dependencies — and shift 100% of our logic into our Worker. And the performance helped us ace all our demos.

As we were building Zaraz, the Workers platform got more advanced. We ended up using Workers KV for storing user configuration, and Durable Objects for communicating between Workers. Our main Worker holds server-side implementations of more than 50 popular third-party tools, replacing hundreds of thousands of lines of JavaScript that traditionally run inside browsers. It’s an ever-growing list, and we recently published an SDK that allows third-party vendors to build support for their tools themselves. For the first time, they can do it in a secure, private, and fast environment.

A new way to build third-parties

Most third-party tools do two fundamental things: first, they collect some information from the browser, such as screen resolution, current URL, page title, or cookie content. Second, they send it to their server. It is often simple, but when a website has tens of these tools, and each of them queries for the information it needs and then sends its requests, it can cause a real slowdown. On Zaraz, this looks very different: every tool provides a run function, and when Zaraz evaluates the user request and decides to load a tool, it executes this run function. This is how we built integrations for over 50 different tools, all from different categories, and this is how we’re inviting third-party vendors to write their own integrations into Zaraz.

run({system, utils}) { 
  // The `system` object includes information about the current page, browser, and more 
  const { device, page, cookies } = system
  // The `utils` are a set of functions we found useful across multiple tools
  const { getCookieString, waitUntil } = utils

  // Get the existing cookie content, or create a new UUID instead
  const cookieName = 'visitor-identifier'
  const sessionCookie = cookies[cookieName] || crypto.randomUUID()

  // Build the payload
  const payload = {
    session: sessionCookie,
    ip: device.ip,
    resolution: device.resolution,
    ua: device.userAgent,
    url: page.url.href,
    title: page.title,
  }

  // Construct the URL
  const baseURL = 'https://example.com/collect?'
  const params = new URLSearchParams(payload)
  const finalURL = baseURL + params

  // Send a request to the third-party server from the edge
  waitUntil(fetch(finalURL))
  
  // Save or update the cookie in the browser
  return getCookieString(cookieName, sessionCookie)
}

The above code runs in our Cloudflare Worker instead of the browser. Previously, having 10x more tools meant 10x more requests the browser rendering your website needed to make, and 10x more JavaScript code it needed to evaluate. This code would often be repetitive; for example, almost every tool implements its own “get cookie” function. It also meant 10x more origins you had to trust not to tamper with anything. When tools run on the edge, none of this affects the browser: you can add as many tools as you want, but they won’t load in the browser, so they will have no effect.

In this example, we first check for the existence of a cookie that identifies the session, called “visitor-identifier”. If it exists, we read its value; if not, we generate a new UUID for it. Note that the full power of Workers is accessible here: we use crypto.randomUUID() just as we can use any other Workers functionality. We then collect all the information our example tool needs — user agent, current URL, page title, screen resolution, client IP address — and the content of the “visitor-identifier” cookie. We construct the final URL that the Worker needs to send a request to, and we then use waitUntil to make sure the request gets there. Zaraz’s version of fetch gives our tools automatic logging, data-loss prevention, and retry capabilities.

Lastly, we return the value of the getCookieString function. Whatever string is returned by the run function is passed to the visitor as browser-side JavaScript. In this case, getCookieString returns something like document.cookie = 'visitor-identifier=5006e6fa-7ce6-45ef-8724-c846f1953369; Path=/; Max-age=31536000';, causing the browser to create a first-party cookie. The next time a user loads a page, the visitor-identifier cookie should exist, causing Zaraz to reuse the UUID instead of creating a new one.
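
For illustration, a minimal version of such a utility might look like this (a hypothetical sketch, not Zaraz’s actual implementation):

// Returns browser-side JavaScript that persists the value as a first-party cookie
function getCookieString(name, value, maxAge = 31536000) {
  return `document.cookie = '${name}=${value}; Path=/; Max-age=${maxAge}';`
}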

This system of run functions allows us to separate and isolate each tool to run independently of the rest of the system, while still providing it with all the required context and data coming from the browser, and the capabilities of Workers. We are inviting third-party vendors to work with us to build the future of secure, private and fast third-party tools.

A new events system

Many third-party tools need to collect behavioral information during a user visit. For example, you might want to place a conversion pixel right after a user clicks “submit” on the credit card form. Since we moved tools to the cloud, you can’t access their libraries from the browser context anymore. For that we created zaraz.track() — a method that allows you to call tools programmatically, and optionally provide them with more information:

document.getElementById("credit-card-form").addEventListener("submit", () => {
  zaraz.track("card-submission", {
    value: document.getElementById("total").innerHTML,
    transaction: "X-98765",
  });
});

In this example, we’re letting Zaraz know about a trigger called “card-submission”, and we associate some data with it — the value of the transaction that we’re taking from an element with the ID total, and a transaction code that is hardcoded and gets printed directly from our backend.

In the Zaraz interface, configured tools can be subscribed to different and multiple triggers. When the code above gets triggered, Zaraz checks, on the edge, what tools are subscribed to the card-submission trigger, and it then calls them with the right additional data supplied, populating their requests with the transaction code and its value.

This is different from how traditional tag managers work: GTM’s dataLayer.push serves a similar purpose, but is evaluated client-side. The result is that GTM itself, when used intensively, grows its script so much that it can become the heaviest tool a website loads. Each event sent using dataLayer.push causes repeated evaluation of code in the browser, and each tool that matches the evaluation executes code in the browser, possibly fetching more external assets. As these events are usually coupled with user interactions, this often makes interacting with a website feel slow, because running the tools occupies the main thread. With Zaraz, these tools exist and are evaluated only at the edge, improving the website’s speed and security.

You don’t have to be coder to use triggers. The Zaraz dashboard allows you to choose from a predefined set of templates like click listeners, scroll events and more, that you can attach to any element on your website without touching your code. When you combine zaraz.track() with the ability to program your own tools, what you get is essentially a one-liner integration of Workers into your website. You can write any backend code you want and Zaraz will take care of calling it exactly at the right time with the right parameters.

Joining Cloudflare

When new customers started using Zaraz, we noticed a pattern: the best teams we worked with chose Cloudflare, and some were also moving parts of their backend infrastructure to Workers. We figured we could further improve performance and integration for companies using Cloudflare as well. We could inline parts of the code inside the page and further reduce the number of network requests. Integration also allowed us to remove the time it takes to DNS-resolve our script, because we could use Workers to proxy Zaraz into our customers’ domains. Integrating with Cloudflare made our offering even more compelling.

Back when we were doing Y Combinator in Winter 2020 and realized how much third parties could affect a website’s performance, we saw a grand mission ahead of us: creating a faster, more private, and more secure web by reducing the amount of third-party bloat. That mission remains the same to this day. As our conversations with Cloudflare got deeper, we were excited to realize that we were talking with people who share the same vision. We are thrilled at the opportunity to scale our solutions to millions of websites on the Internet, making them faster and safer and even reducing carbon emissions.

If you would like to explore the free beta version, please click here. If you are an enterprise and have additional/custom requirements, please click here to join the waitlist. To join our Discord channel, click here.

Announcing native support for Stripe’s JavaScript SDK in Cloudflare Workers

Post Syndicated from Kristian Freeman original https://blog.cloudflare.com/announcing-stripe-support-in-workers/

This post is also available in 日本語, 简体中文.

Handling payments inside your apps is crucial to building a business online. For many developers, the leading choice for handling payments is Stripe. Since my first encounter with Stripe about seven years ago, the service has evolved far beyond simple payment processing. In the e-commerce example application I shared last year, Stripe managed a complete seller marketplace, using the Connect product. Stripe’s product suite is great for developers looking to go beyond accepting payments.

Earlier versions of Stripe’s SDK had core Node.js dependencies, like many popular JavaScript packages. In Stripe’s case, it interacted directly with core Node.js libraries like net/http to handle HTTP interactions. For Cloudflare Workers, a V8-based runtime, this meant that the official Stripe JS library didn’t work; you had to fall back to using Stripe’s (very well-documented) REST API. By doing so, you’d lose the benefits of the native JS library — things like automatic type-checking in your editor, and the simplicity of interacting with Stripe’s functionality through calls like stripe.customers.create() instead of manually constructed HTTP requests.

In April, we wrote that we were focused on increasing the number of JavaScript packages that are compatible with Workers:

We won’t stop until our users can import popular Node.js libraries seamlessly. This effort will be large-scale and ongoing for us, but we think it’s well worth it.

I’m excited to announce general availability for the stripe JS package for Cloudflare Workers. You can now use the native Stripe SDK directly in your projects! To get started, install stripe in your project: npm i stripe.

This also opens up a great opportunity for developers who have deployed sites to Cloudflare Pages to begin using Stripe right alongside their applications. With this week’s announcement of serverless function support in Cloudflare Pages, you can accept payments for your digital product, or handle subscriptions for your membership site, with a few lines of JavaScript added to your Pages project. There’s no additional configuration, and it scales automatically, just like your Pages site.

To showcase this, we’ve prepared an example open-source repository showing how you can integrate Stripe Checkout into your Pages application. There’s no additional configuration needed — as we announced yesterday, Pages’ new Workers Functions feature allows you to deploy infinitely scalable functions just by adding a functions folder to your project. See it in action at stripe.pages.dev, or check out the open-source repository on GitHub.

With the SDK installed, you can begin accepting payments directly in your applications. The below example shows how to initiate a new Checkout session and redirect to Stripe’s hosted checkout page:

const Stripe = require("stripe");

const stripe = Stripe(STRIPE_API_KEY, {
  httpClient: Stripe.createFetchHttpClient()
});

export default {
  async fetch(request) {
    const session = await stripe.checkout.sessions.create({
      line_items: [{
        price_data: {
          currency: 'usd',
          product_data: {
            name: 'T-shirt',
          },
          unit_amount: 2000,
        },
        quantity: 1,
      }],
      payment_method_types: [
        'card',
      ],
      mode: 'payment',
      success_url: `${YOUR_DOMAIN}/success.html`,
      cancel_url: `${YOUR_DOMAIN}/cancel.html`,
    });

    return Response.redirect(session.url)
  }
}

With support for the Stripe SDK natively in Cloudflare Workers, you aren’t just limited to payment processing either. Any JavaScript example that’s currently in Stripe’s extensive documentation works directly in Workers, without needing to make any changes.

In particular, using Workers to handle the multitude of available Stripe webhooks means that you can get better visibility into how your existing projects are working, without needing to spin up any new infrastructure. The below example shows how you can securely validate incoming webhook requests to a Workers function, and perform business logic by parsing the data inside that webhook:

const Stripe = require("stripe");

const stripe = Stripe(STRIPE_API_KEY, {
  httpClient: Stripe.createFetchHttpClient()
});

export default {
  async fetch(request) {
    const body = await request.text()
    const sig = request.headers.get('stripe-signature')
    // `secret` is this endpoint's webhook signing secret from the Stripe
    // dashboard; it's assumed here to be provided as an environment variable
    const secret = STRIPE_WEBHOOK_SECRET
    const event = stripe.webhooks.constructEvent(body, sig, secret)

    // Handle the event
    switch (event.type) {
      case 'payment_intent.succeeded':
        const paymentIntent = event.data.object;
        // Then define and call a method to handle the successful payment intent.
        // handlePaymentIntentSucceeded(paymentIntent);
        break;
      case 'payment_method.attached':
        const paymentMethod = event.data.object;
        // Then define and call a method to handle the successful attachment of a PaymentMethod.
        // handlePaymentMethodAttached(paymentMethod);
        break;
      // ... handle other event types
      default:
        console.log(`Unhandled event type ${event.type}`);
    }

    // Return a response to acknowledge receipt of the event
    return new Response(JSON.stringify({ received: true }), {
      headers: { 'Content-type': 'application/json' }
    })
  }
}
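
One caveat worth noting: if synchronous signature verification is unavailable in your runtime, Stripe’s SDK also provides an async variant backed by SubtleCrypto. A hedged sketch (check the SDK documentation for your version):

const cryptoProvider = Stripe.createSubtleCryptoProvider()
const event = await stripe.webhooks.constructEventAsync(body, sig, secret, undefined, cryptoProvider)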

We’re also announcing a new Workers template in partnership with Stripe. The template will help you get up and running with Stripe and Workers, using our best practices.

In less than five minutes, you can begin accepting payments for your next digital product or membership business. You can also handle and validate incoming webhooks at the edge, from a single codebase. This comes with no upfront cost, server provisioning, or any of the standard scaling headaches. Take your serverless functions, deploy them to Cloudflare’s edge, and start making money!

“We’re big fans of Cloudflare Workers over here at Stripe. Between the wild performance at the edge and fantastic serverless development experience, we’re excited to see what novel ways you all use Stripe to make amazing apps.”
Brian Holt, Stripe

We’re thrilled with this incredible update to Stripe’s JavaScript SDK. We’re also excited to see what you’ll build with native Stripe support in Workers. Join our Discord server and share what you’ve built in the #what-i-built channel — get your invite here.

An Open-Source CMS on the Cloudflare Stack: Introductory Post

Post Syndicated from Luke Edwards original https://blog.cloudflare.com/production-saas-intro/

The Cloudflare documentation is a great resource when learning concepts, reviewing API usage notes, or when you’re in need of a concise snippet to illustrate those APIs or concepts. But, as comprehensive as it is, new users of the Cloudflare Workers platform must bridge a large gap to go from the introductory example snippets to a real, production-ready application. While some of this may be specific to Workers (as with any platform), developers everywhere are figuring out how applications should be built in a serverless world. Building large serverless applications entails a learning curve, regardless of a developer’s experience level.

At Cloudflare, we’re intimately aware of this because we also had to go through the same transition. Our engineers are world-class and expertfully design and craft products that compliment the distributed paradigm… but experts aren’t born overnight! We have been there, and we want to help jumpstart and aid others’ understanding.

With this in mind, we decided to do something unique in the industry: we are developing an example feature-complete SaaS application that will be built entirely on the Cloudflare stack. It is, and will continue to be, completely free, open-sourced on GitHub, and developed in public. It can also serve as a template for launching your own SaaS applications. In fact, you can clone the GitHub repository, update a few service tokens, and deploy the pre-built application to your own Cloudflare account within minutes!

Of course, examples and templates are great, but technologies and best practices never stop evolving. Cloudflare is no exception and is constantly iterating and introducing new products and product features. By extension, this requires the SaaS application to be a living example that evolves alongside the Workers platform — and this is part of our commitment.

|| Don’t miss out! Watch the project on GitHub to track its development progress and stay current with our latest changes and recommendations.

Application Overview

Now, aside from actually building the application, we needed to find a balance between picking an example SaaS application that is both complex enough to serve as a convincing case study and simple — or self-contained — enough so that developers can quickly dive in, follow the source, and understand the components involved and the reasons why and/or how they are used.

Ultimately we decided to build an example content management system (CMS) which, as an application archetype, has also transformed over the years. Traditionally, a CMS operated on rented hardware, which was home to a long-lived server that handled incoming requests and queried an SQL-like database in order to retrieve the requested content, render it to an HTML page, and repeat the process over and over again. WordPress was — and still is — a very common example of this approach.

Naturally, this application architecture was improved over the years: layers of caching were introduced, database schemas were redesigned to minimize the number of rows processed, and some frameworks began to skip the database entirely, preferring a build-step to render all content upfront as static HTML pages. (This is now known as “static-site generation” and is still a very popular approach.)

Today, in the serverless era, there are a number of “headless CMS” options available. These are made “headless” because they are not monolithic web servers that render HTML for each request. Instead, they offer API endpoints that will return the content as raw JSON data. This allows web developers to build completely custom templates for their website using whatever tools and/or frameworks they prefer. This approach grants an enormous amount of flexibility to the developer without losing the ability to organize their content, image assets, etc. WordPress, the seasoned veteran, is one of few that is able to offer a headless and a “headful” mode. Other headless choices, like Sanity.io and Contentful, are also quite popular.

The CMS application model is a great case study for our open-source example. One of the primary tenets of an edge-first design is that content should be made available as close as physically possible to the users asking for it. And the serverless architecture means that there’s no longer — or should not be — a single point of failure. These both directly benefit the CMS archetype and, when implemented, will yield clear performance gains.

Current Progress

Before diving into the roadmap and explaining how this project will progress, it’s important to call out that this project has already been — and will continue to be — an ongoing effort! Today, you can find the project on GitHub and inspect the work that’s already been completed. As of now, the application already combines Workers, Workers KV, Cloudflare for SaaS, and Rate Limiting with Pages and Durable Objects additions to come in later milestones.

Phase 1 (see below) is nearing completion and, when finished, will mark the end of a very significant milestone. A new update to this blog post series will be issued, covering the highlights and technical overview of the project so far. This is important and immediately useful because, on its own, Phase 1 qualifies as a successful, full-stack application.

Development Milestones

The CMS application will progress in milestones. We have already released the project and will continue to build upon it in accordance with the roadmap (below). GitHub stargazers will be able to keep tabs on its progress, or at the very least, only subscribe to updates for the milestones they’re interested in following.

Each milestone is a sizable checkpoint on its own. As you’ll see below, the project roadmap is planned in a way such that each phase adds a considerable amount of new features and/or integrates with a new Cloudflare product. At every point, the application will remain functional and maintain a live, interactive demo to immediately demonstrate the latest functionality to passersby.

This format is chosen because it’s how real applications — and real products — are developed. Our goal is to ensure that the GitHub repository is never out of date. And, because of the development structure, one may always traverse the list of past milestones to review the changes that were necessary to migrate X or how product Y was integrated.

|| Note: Visit the GitHub milestones to view more details and to subscribe for updates. There is so much more than can be listed here.

Phase 1 – JSON API

The project must begin with some API endpoints to start managing and manipulating data. Using Workers and Workers KV, the work within this milestone will focus on building a robust JSON API that handles the core functionality that the rest of the application will need.

There is no HTML, CSS, or client-side JavaScript involved in this phase. Instead, work here focuses solely on the data: how it’s accessed, how models relate to one another, and how best to structure and store these relationships within Workers KV. For example, individuals should be able to create and manage workspaces tied to their personal user accounts or to the organization(s) they belong to.

Additionally, when creating content, the document should be validated against an existing schema that the document was assigned. This feature is critical in any CMS platform that plans to handle thousands of documents within a workspace. Without it, there’s very little confidence that your contents’ JSON representation is consistently structured.
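
A hypothetical sketch of the kind of check this describes (the binding name CMS_KV, the key layout, and the schema shape are all illustrative, not the project’s actual source): before a document is saved, its fields are validated against the schema it was assigned, with both stored in Workers KV.

async function saveDocument(env, workspaceId, doc) {
  // Look up the schema this document was assigned (illustrative key layout)
  const schema = await env.CMS_KV.get(`schema:${workspaceId}:${doc.schemaId}`, 'json')
  if (!schema) throw new Error('Unknown schema')

  // Reject documents that are missing required fields or use the wrong types
  for (const [field, rule] of Object.entries(schema.fields)) {
    const value = doc.fields[field]
    if (rule.required && value === undefined) {
      throw new Error(`Missing required field: ${field}`)
    }
    if (value !== undefined && typeof value !== rule.type) {
      throw new Error(`Field "${field}" must be of type ${rule.type}`)
    }
  }

  await env.CMS_KV.put(`doc:${workspaceId}:${doc.id}`, JSON.stringify(doc))
}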

A number of other features are planned — subscription management and invoicing through Stripe, sending transactional emails through SendGrid, and assigning vanity domains to workspaces through Cloudflare for SaaS. Finally, of course, the standard house-keeping tasks will be set up. This includes continuous integration (CI) with API testing and automated, continuous deployments (CD).

By the end of this phase, the project will exist as a collection of API endpoints that, on its own, is a complete application. While it may only be accessible through curl commands (or any other preferred method for manually constructing HTTP requests), the completion of Phase 1 already qualifies the project as a full-stack application that could serve a real-world SaaS product.

Additionally, the repository will include all the best practices for writing tests, automating deployments, and organizing the source in a way for long-term growth and maintenance. And, because we started with the JSON API, the project is immediately useful and capable of integrating with your existing build tools and frameworks. In other words, stargazers could deploy the project to their own accounts as their own personalized Headless CMS. Perhaps some of you will build Gatsby or Eleventy plugins — if you do, please let us know!

Phase 2 — Dashboard UI

As fun as curl may be, most people prefer some form of visual interface they can interact with. This phase will be all about assembling a frontend to serve as the CMS application’s dashboard.

We will use Svelte, a JavaScript framework for building user interfaces. While not everyone may enjoy or agree with this decision, the templating syntax resembles standard HTML markup, which will allow non-frontend developers to follow along and gauge what’s going on.

Svelte will be paired with Tailwind CSS for the design system. Tailwind is a very popular, utility-first CSS framework that allows developers to compose styles through predefined, reusable HTML "class" names.

The result will be a single-page application (SPA) and will be hosted on Cloudflare Pages. This means that, out of the box, the dashboard will be able to take advantage of Access-protected preview deployments, instant rollbacks, automated deployments, comprehensive analytics, and more.

Finally, now that Pages integrates with Workers directly, the JSON API from Phase 1 will migrate into a new repository structure. While this may seem like an innocent refactor, it actually unlocks an incredible set of features for the JSON API: Access-protected preview deployments, instant rollbacks, and automated deployments. Yes — these are the same Pages features mentioned above! This is amazing because it means that our API is continuously and atomically versioned, allowing its development to continue safely alongside the client dashboard that depends on it. In other words, there is zero risk of the API and the dashboard diverging, which would let their expectations of one another fall out of sync. Instant rollbacks will also apply to the API, since the entire application operates as a single Pages unit.

The previous phase will have built the core SaaS product functionality, but completing this phase will make it feel like a real-world product that can be launched and used on a daily basis. In fact, the end of Phase 2 marks the application as a possible contender in the Headless CMS service space.

Phase 3 — Article Edge-Rendering

The previous phases are focused on assembling a minimum viable Headless CMS product, but Phase 3 looks to grow outside this archetypal box. This will happen by allowing the application to render HTML web pages by injecting the JSON content into predefined templates.

Like WordPress, the CMS application should allow its users to choose whether they want to continue using the “headless” feature or enlist the complete template engine. Should they opt for HTML output, the Cloudflare project will only include a few premade templates that a user may select from — but, of course, this can be customized in your own projects.

Even though this phase reintroduces the monolithic CMS archetype, it’s a significantly safer, faster, and more resilient architecture than the single, all-in-one server of yesteryear. The CMS contents will still be distributed around the world, close to the customers’ readers — but now, the content can be rendered from anywhere in the world at extremely low latencies, too.

Phase 4 — Feature Upgrades

At this stage, the application is — for the most part — complete. It’s functional, looks nice, performs well globally, and can be used in two very distinct ways.

In the context of a real SaaS product, development begins to shift towards adding new features that excite users or towards maintenance health of the project. For example, Phase 4 will utilize Durable Objects to introduce a document editor that allows multiple users to edit the same document in a real-time, collaborative environment.

It’s also very likely that Cloudflare R2 Storage will be introduced as a backend for media assets, allowing users to upload and manage images within a workspace. Or perhaps we decide to use Cloudflare Images for this and R2 is used for importing and exporting content backups.

As you may expect, this milestone is full of unknowns, but that’s because the future holds unlimited possibilities. The project will continue to evolve and expand with Cloudflare and with time.

Of course, if you have ideas or suggestions for features, start a discussion with us on GitHub. We would love to hear from you!

Next Steps

This was the introductory post of (what will be) an ongoing series. When each milestone is completed, we will publish a new post in this series with a retrospective and with technical walkthroughs of key aspects from that chapter’s work.

We’re at the beginning of an exciting journey, and we hope you’re as interested as we are!

You can show your support by starring or following the project on GitHub. All releases, discussions, and milestone tracking will reside within the repository. The next generation of SaaS applications will be built on Cloudflare — subscribe and dive in early!

Build your next video application on Cloudflare

Post Syndicated from Jonathan Kuperman original https://blog.cloudflare.com/build-video-applications-cloudflare/

Historically, building video applications has been very difficult. There’s a lot of complicated tech behind recording, encoding, and playing videos. Luckily, Cloudflare Stream abstracts all the difficult parts away, so you can build custom video and streaming applications easily. Let’s look at how we can combine Cloudflare Stream, Access, Pages, and Workers to create a high-performance video application with very little code.

Today, we’re going to build a video application inspired by Cloudflare TV. We’ll have user authentication and the ability for administrators to upload recorded videos or livestream new content. Think about being able to build your own YouTube or Twitch using Cloudflare services!

Fetching a list of videos

On the main page of our application, we want to display a list of all videos. The videos are uploaded and stored with Cloudflare Stream, but more on that later! This code could be changed to display only the “trending” videos or a selection of videos chosen for each user. For now, we’ll use the search API and pass in an empty string to return all.

import { getSignedStreamId } from "../../src/cfStream"

export async function onRequestGet(context) {
    const {
        request,
        env,
        params,
    } = context

    const { id } = params

    if (id) {
        const res = await fetch(`https://api.cloudflare.com/client/v4/accounts/${env.CF_ACCOUNT_ID}/stream/${id}`, {
            method: "GET",
            headers: {
                "Authorization": `Bearer ${env.CF_API_TOKEN_STREAM}`
            }
        })

        const video = (await res.json()).result

        if (video.meta.visibility !== "public") {
            return new Response(null, {status: 401})
        }

        const signedId = await getSignedStreamId(id, env.CF_STREAM_SIGNING_KEY)

        return new Response(JSON.stringify({
            signedId: `${signedId}`
        }), {
            headers: {
                "content-type": "application/json"
            }
        })
    } else {
        const url = new URL(request.url)
        const res = await (await fetch(`https://api.cloudflare.com/client/v4/accounts/${env.CF_ACCOUNT_ID}/stream?search=${url.searchParams.get("search") || ""}`, {
            headers: {
                "Authorization": `Bearer ${env.CF_API_TOKEN_STREAM}`
            }
        })).json()

        const filteredVideos = res.result.filter(x => x.meta.visibility === "public") 
        const videos = await Promise.all(filteredVideos.map(async x => {
            const signedId = await getSignedStreamId(x.uid, env.CF_STREAM_SIGNING_KEY)
            return {
                uid: x.uid,
                status: x.status,
                thumbnail: `https://videodelivery.net/${signedId}/thumbnails/thumbnail.jpg`,
                meta: {
                    name: x.meta.name
                },
                created: x.created,
                modified: x.modified,
                duration: x.duration,
            }
        }))
        return new Response(JSON.stringify(videos), {headers: {"content-type": "application/json"}})
    }
}

We’ll go through each video, filter out any private videos, and pull out the metadata we need, such as the thumbnail URL, ID, and created date.

Playing the videos

To allow users to play videos from your application, they need to be public, or you’ll have to sign each request. Marking your videos as public makes this process easier. However, there are many reasons you might want to control access to your videos. If you want users to log in before they play them or the ability to limit access in any way, mark them as private and use signed URLs to control access. You can find more information about securing your videos here.

If you are testing your application locally or expect to have fewer than 10,000 requests per day, you can call the /token endpoint to generate a signed token. If you expect more than 10,000 requests per day, sign your own tokens as we do here using JSON Web Tokens.
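
The getSignedStreamId helper is imported from ../../src/cfStream but not shown in the post. The sketch below is one plausible implementation based on Stream’s signed-URL documentation; it assumes CF_STREAM_SIGNING_KEY holds the JSON returned by Stream’s /keys endpoint (a key id plus a base64-encoded JWK):

async function getSignedStreamId(videoId, signingKey) {
  const { id: keyId, jwk } = JSON.parse(signingKey)
  const encoder = new TextEncoder()

  // Base64url-encode a byte buffer (JWTs use the URL-safe alphabet, unpadded)
  const b64url = (data) =>
    btoa(String.fromCharCode(...new Uint8Array(data)))
      .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '')

  const header = b64url(encoder.encode(JSON.stringify({ alg: 'RS256', kid: keyId })))
  const payload = b64url(encoder.encode(JSON.stringify({
    sub: videoId, // the video UID this token grants access to
    kid: keyId,
    exp: Math.floor(Date.now() / 1000) + 3600, // valid for one hour
  })))

  const key = await crypto.subtle.importKey(
    'jwk',
    JSON.parse(atob(jwk)),
    { name: 'RSASSA-PKCS1-v1_5', hash: 'SHA-256' },
    false,
    ['sign']
  )
  const signature = await crypto.subtle.sign(
    'RSASSA-PKCS1-v1_5',
    key,
    encoder.encode(`${header}.${payload}`)
  )

  return `${header}.${payload}.${b64url(signature)}`
}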

Allowing users to upload videos

The next step is to build out an admin page where users can upload their videos. You can find documentation on allowing user uploads here.

This process is made easy with the Cloudflare Stream API. You use your API token and account ID to generate a unique, one-time upload URL. Just make sure your token has the Stream:Edit permission. We hook into all POST requests from our application and return the generated upload URL.

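The handler itself isn’t reproduced here, so the following is a hedged sketch of what it could look like, reusing the CF_ACCOUNT_ID and CF_API_TOKEN_STREAM variables from the earlier snippets and Stream’s direct_upload endpoint (the maxDurationSeconds value is an arbitrary example):

export async function onRequestPost(context) {
    const { env } = context

    // Ask Stream for a unique, one-time direct upload URL
    const res = await fetch(`https://api.cloudflare.com/client/v4/accounts/${env.CF_ACCOUNT_ID}/stream/direct_upload`, {
        method: "POST",
        headers: {
            "Authorization": `Bearer ${env.CF_API_TOKEN_STREAM}`,
            "content-type": "application/json"
        },
        body: JSON.stringify({ maxDurationSeconds: 3600 })
    })

    const upload = (await res.json()).result

    // The admin page posts its FormData straight to this URL
    return new Response(JSON.stringify({ uploadURL: upload.uploadURL }), {
        headers: { "content-type": "application/json" }
    })
}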

The admin page contains a form allowing users to drag and drop or upload videos from their computers. When a logged-in user hits submit on the upload form, the application generates a unique URL and then posts the FormData to it. This code would work well for building a video sharing site or with any application that allows user-generated content.

async function getOneTimeUploadUrl() {
    const res = await fetch('/api/admin/videos', {method: 'POST', headers: {'accept': 'application/json'}})
    const upload = await res.json()
    return upload.uploadURL
}

async function uploadVideo() {
    const videoInput = document.getElementById("video");

    const oneTimeUploadUrl = await getOneTimeUploadUrl();
    const video = videoInput.files[0];
    const formData = new FormData();
    formData.append("file", video);

    const uploadResult = await fetch(oneTimeUploadUrl, {
        method: "POST",
        body: formData,
    })
}

Adding real time video with Stream Live

You can add a livestreaming section to your application as well, using Stream Live in conjunction with the techniques we’ve already covered. You could allow logged-in users to start a broadcast and then allow other logged-in users, or even the public, to watch it in real time! The streams will automatically save to your account, so they can be viewed immediately after the broadcast finishes in the main section of your application.

Securing our app with middleware

We put all authenticated pages behind this middleware function. It checks the request headers to make sure the user is sending a valid authenticated user email.

export const cfTeamsAccessAuthMiddleware = async ({request, data, env, next}) => {
    try {
        const userEmail = request.headers.get("cf-access-authenticated-user-email")

        if (!userEmail) {
            throw new Error("User not found, make sure application is behind Cloudflare Access")
        }
  
        // Pass user info to next handlers
        data.user = {
            email: userEmail
        }
  
        return next()
    } catch (e) {
        return new Response(e.toString(), {status: 401})
    }
}

export const onRequest = [
    cfTeamsAccessAuthMiddleware
]

Putting it all together with Pages

We have Cloudflare Access controlling our log-in flow. We use the Stream APIs to manage uploading, displaying, and watching videos. We use Workers for managing fetch requests and handling API calls. Now it’s time to tie it all together using Cloudflare Pages!

Pages provides an easy way to deploy and host static websites. But now, Pages seamlessly integrates with the Workers platform. With this new integration, we can deploy this entire application with a single, readable repository.

Controlling access

Some applications are better public; others contain sensitive data and should be restricted to specific users. The main page is public for this application, and we’ve used Cloudflare Access to limit the admin page to employees. You could just as easily use Access to protect the entire application if you’re building an internal learning service or even if you want to beta launch a new site!

When a user clicks the admin link on our demo site, they will be prompted for an email address. If they enter a valid Cloudflare email, the application will send them an access code. Otherwise, they won’t be able to access that page.

Check out the source code and get started building your own video application today!

Workers, Now Even More Unbound: 15 Minutes, 100 Scripts, and No Egress

Post Syndicated from Kabir Sikand original https://blog.cloudflare.com/workers-now-even-more-unbound/

Our mission is to enable developers to build their applications, end to end, on our platform, and ruthlessly eliminate limitations that may get in the way. Today, we’re excited to announce you can build large, data-intensive applications on our network, all without breaking the bank; starting today, we’re dropping egress fees to zero.

More Affordable: No Egress Fees

Building more on any platform historically comes with a caveat — high data transfer cost. These costs often come in the form of egress fees. Especially in the case of data intensive workloads, egress data transfer costs can come at a high premium, depending on the provider.

What exactly are data egress fees? They are the costs of retrieving data from a cloud provider. Cloud infrastructure providers generally pay for bandwidth based on capacity, but often bill customers based on the amount of data transferred. Curious to learn more about what this means for end users? We recently wrote an analysis of AWS’ Egregious Egress — a good read if you would like to learn more about the ‘Hotel California’ model AWS has spun up. Effectively, data egress fees lock you into their platform, making you choose your provider based not on which provider has the best infrastructure for your use case, but instead choosing the provider where your data resides.

At Cloudflare, we’re working to flip the script for our customers. Our recently announced R2 Storage waives the data egress fees other providers implement for similar products. Cloudflare is a founding member of the Bandwidth Alliance, aiming to help our mutual customers overcome these data transfer fees.

We’re keeping true to our mission and, effective immediately, dropping all Egress Data Transfer fees associated with Workers Unbound and Durable Objects. If you’re using Workers Unbound today, your next bill will no longer include Egress Data Transfer fees. If you’re not using Unbound yet, now is a great time to experiment. With Workers Unbound, get access to longer CPU time limits and pay only for what you use, and don’t worry about the data transfer cost. When paired with Bandwidth Alliance partners, this is a cost-effective solution for any data intensive workloads.

More Unbound: 15 Minutes

This week has been about defining what the future of computing is going to look like. Workers are great for your latency sensitive workloads, with zero-milliseconds cold start times, fast global deployment, and the power of Cloudflare’s network. But Workers are not limited to lightweight tasks — we want you to run your heavy workloads on our platform as well. That’s why we’re announcing you can now use up to 15 minutes of CPU time on your Workers! You can run your most compute-intensive tasks on Workers using Cron Triggers. To get started, head to the Settings tab in your Worker and select the ‘Unbound’ usage model.

Once you’ve confirmed your Usage Model is Unbound, switch to the Triggers tab and click Add Cron Trigger. You’ll see a ‘Maximum Duration’ is listed, indicating whether your schedule is eligible for 15 Minute workloads.

Wait, there’s more (literally!)

That’s not all. As a platform, it is validating to see our customers want to grow even more with us, and we’ve been working to address these restrictions. That’s why, starting today, all customers will be allowed to deploy up to 100 Worker scripts. With the introduction of Services, that represents up to 100 environments per account. This higher limit will allow our customers to migrate more use cases to the Workers platform.

We’re also delighted to announce that, alongside this increase, the Workers platform will plan to support scripts larger in size. This increase will allow developers to build Workers with more libraries and new possibilities, like running Golang with WASM. Check out an example of esbuild running on a Worker, in a script that’s just over 2MB compressed. If you’re interested in larger script sizes, sign up here.

The future of cloud computing is here, and it’s on Cloudflare. Workers has always been the secure, fast serverless offering, and has recently been named a leader in the space. Now, it is even more affordable and flexible too.

We can’t wait to see what ambitious projects our customers build. Developers are now better positioned than ever to deploy large and complex applications on Cloudflare. Excited to build using Workers, or get engaged with the community? Join our Discord server to keep up with the latest on Cloudflare Workers.

Developer Spotlight: Automating Workflows with Airtable and Cloudflare Workers

Post Syndicated from Erwin van der Koogh original https://blog.cloudflare.com/developer-spotlight-jacob-hands-tritrails/

Next up on the Developer Spotlight is another favourite of mine. Today’s post is by Jacob Hands. Jacob operates TriTails Premium Beef, which is an online store for meat, a very perishable good. So he has a lot of unique challenges when it comes to shipping. To deal with their growth, Jacob, a developer by trade, turned to Airtable and Cloudflare Workers to automate a lot of their workflow.

One of Jacob’s quotes is one of my favourites:

“Sure, Cloudflare Workers allows you to scale to billions of requests per day, but it is also awesome for a few hundred requests a day.”

Here is Jacob talking about how it only took him a few days to put together a fully customised workflow tool by integrating Airtable and Workers. And how it saves them multiple hours every single day.

Shipping Requirements

Working at a new e-commerce business shipping perishable goods has several challenges as operations scale up. One of our biggest challenges is that daily shipping throughput is limited: partly because a small workspace limits how many employees can pack orders simultaneously, and partly because, despite having a requested pickup time with UPS, they often show up hours early, requiring packers to stop and scramble to meet them before they leave. Packing is also time-consuming because it’s a game of Tetris getting all products to fit with enough dry ice to keep everything frozen.

This is what a regular box looks like:

[Photo: a regular packed box]

Ensuring time-in-transit stays as low as possible is critical for ensuring that products stay frozen when arriving at the customer’s doorstep. Because of this requirement, avoiding packages staying in transit during the weekend is a must. We learned that the hard way after a package got delayed by a day, which wouldn’t have been too bad, but that meant it stayed in a sorting centre over the weekend, which wasn’t as pleasant.

Luckily, we caught it on time, and we were able to send a replacement set of steaks overnight and save a dinner party. But after that, we started triaging our orders to make sure that the correct packages were shipped at the right time.

Order Triage, The Hard Way

In the early days, we could pack orders after lunch and be done in an hour, but as we grew we needed to be careful about what, when, and how we ship. First, all open orders were copied to a Google Sheet. Next, the time-in-transit was manually checked for each order and added to the sheet. The sheet was then sorted by transit time (with paid priority air at the top), and each set of orders was separated into groups. Finally, the Google Sheet was printed for the packing team to work through.

Transit times are so crucial to the shipment process that they need to be on each packing slip so that the packing team knows how much dry ice and packaging each order needs. So the transit times were typed into each packing slip in Adobe Acrobat before printing. While this is a very tedious process, it is vital to ensure that each package is packed according to what they need to arrive in good condition.

Once the packing team finished packing orders, the box weights and sizes were added to the Google Sheet based on the worksheet filled out by the packers. Next, each order label was created, individually copying weights and sizes from the Google Sheet into ShipStation, the application we use to manage logistics with our providers. Finally, the packages would be picked up and start their journey to the customer’s doorstep.

This process worked fine for ten orders, but as operations scaled up, triaging and organizing the orders became a full-time job: checking and double-checking that everything was entered correctly and that no human mistakes occurred (spoiler: they still happened!).

Automation

At first, I just wanted to automate the most tedious step: calculating transit times. This step took so long that it held back how early the packing team could start packing orders, further limiting our throughput. Cloudflare Workers is easy to use and quick to get running, so it seemed like a great place to start. The plan was to point =IMPORTDATA() in Google Sheets at a Workers endpoint and eliminate that step in the process.
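
For a sense of what that first endpoint might look like, here is a minimal sketch of a Worker that returns a transit-time estimate as plain text for =IMPORTDATA() to ingest. The route, the zip parameter, and the lookupTransitDays helper are all hypothetical stand-ins for the real Shippo integration:

// A hypothetical sketch, not the actual TriTails Worker.
// Imagine lookupTransitDays() wraps a call to a rates API such as Shippo.
async function lookupTransitDays(zip) {
  // Placeholder: a real implementation would query the carrier API here.
  return 2;
}

export default {
  async fetch(request) {
    const url = new URL(request.url);
    const zip = url.searchParams.get("zip");
    if (!zip) return new Response("Missing ?zip= parameter", { status: 400 });

    const days = await lookupTransitDays(zip);

    // Plain text is exactly what Google Sheets' =IMPORTDATA() can consume.
    return new Response(String(days), {
      headers: { "Content-Type": "text/plain" },
    });
  },
};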

Automating just one thing is powerful, and it opened a flood of ideas about how our workflow could be further improved. With the first 30 minutes of daily work automated, what else could be done? That’s when I set out to automate as much of the workflow as possible, excited about the possibilities.

Triaging the Triaging

Problem-solving is often about figuring out what to prioritize, and automating this workflow is no different. Our order triaging process has many steps, and setting out to automate the entire thing at once wasn’t possible because of the limited blocks of time to work on it. Instead, I decided to only solve the highest priority problems, one step at a time. Triaging the triaging process helped me build everything needed to automate an entire workflow without it ever feeling overwhelming, and gaining efficiency each step along the way.

With the time-in-transit calculation API working, the next part I automated was getting the orders that need shipping from Shopify via the API instead of copy-pasting every time. This is where the limits of Google Sheets started to become apparent. While automation can be done in Sheets, it can quickly become a black box full of hacks. So it was time to move to a better platform, but which one?

While I had often heard of Airtable and played with it a few times since it launched in 2012, the pricing and limitations never seemed to fit any of my use cases. But with the small amount of data we needed to store at any one time, it seemed worth trying, since it has an easy-to-use API and supports strict cell formats, which are much harder to enforce in Sheets. Airtable has an intuitive UI, and it is easy to create custom fields for each type of data needed.

Once I found out Airtable had a built-in Scripting app, it was obvious this was the right tool for the job.

Building Airtable Scripting Apps

Airtable Scripting is a powerful tool for building functionality directly within Airtable using JavaScript. Unfortunately, there are some limitations. For example, it isn’t possible to share code between different instances of the Scripting app without copying and pasting. There’s also no source control, so reverting changes relies on the Undo button.

Cloudflare Workers, on the other hand, is a full developer platform. You can easily use source control, and it has a great developer experience with Wrangler and Miniflare, so testing and deploying is fast and seamless.

Airtable Scripting and Cloudflare Workers work together beautifully. Building APIs on Workers allows more complex tasks to run on the Cloudflare network. These APIs are then fetched by Airtable scripts, solving the code-sharing issue and speeding up development.

Shopify Order Importing

First, we needed to import orders from Shopify into Airtable. The API endpoint I created in Workers goes through all open orders and figures out which ones should be shipped this week. The orders are then cached in the Workers Cache API, so we can request this endpoint as much as needed without hitting Shopify API’s limits.
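
A sketch of that caching pattern with the Workers Cache API follows; the fetchOpenOrdersFromShopify helper and the one-hour TTL are assumptions for illustration:

// Hypothetical sketch of caching upstream results with the Cache API.
async function fetchOpenOrdersFromShopify() {
  // Placeholder: a real implementation would page through Shopify's REST API.
  return [];
}

export default {
  async fetch(request) {
    const cache = caches.default;
    const cached = await cache.match(request);
    if (cached) return cached;

    const orders = await fetchOpenOrdersFromShopify();
    const response = new Response(JSON.stringify(orders), {
      headers: {
        "Content-Type": "application/json",
        // Cache for an hour so repeated runs don't hit Shopify rate limits.
        "Cache-Control": "max-age=3600",
      },
    });
    await cache.put(request, response.clone());
    return response;
  },
};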

From there, the Airtable Scripting app checks the transit time for each order using our Workers API that makes calls to Shippo (a multi-carrier shipping API) to get time-in-transit estimates for the carrier. Finally, each row in Airtable is updated with the respective transit times, automatically sorted with priority paid air at the top, followed by the longest to the shortest transit times.
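
On the Airtable side, a Scripting app script can call that Workers API and write the results back to the table. A rough sketch, using Airtable’s remoteFetchAsync and updateRecordAsync script APIs; the table name, field names, and URL are placeholders:

// Hypothetical Airtable script; "Orders", "Order Number" and "Transit Days"
// are placeholder table and field names.
const table = base.getTable("Orders");
const query = await table.selectRecordsAsync({ fields: ["Order Number"] });

for (const record of query.records) {
  const orderNumber = record.getCellValueAsString("Order Number");
  const response = await remoteFetchAsync(
    "https://shipping-api.example.com/transit?order=" + encodeURIComponent(orderNumber)
  );
  const days = Number(await response.text());
  await table.updateRecordAsync(record, { "Transit Days": days });
}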

Going from an entirely manual process of getting a list of triaged orders in 45 minutes to clicking a button and having Airtable and Workers do it all for me in seconds was one of the most significant “lightbulb” moments I’ve ever had programming.

Printing Packing Slips in Order

The next big thing to tackle was printing the packing slips. They need to be printed in the triaged order rather than chronologically. Previously, this required manually searching for each order; now a button in Airtable generates links to Shopify search with each batch of orders prefilled.

Printing the Order Worksheet

Of course, we just couldn’t stop there.

To keep track of orders as they are packed, we use a printed worksheet with all orders listed and columns for each order’s box size and weight. Unfortunately, Airtable does not have a good way to customize the printout of a table.

Ironically, this brought us back to Google Sheets! Since Sheets is the easiest way to format a table, it seemed like the best choice. But copying data from Airtable to Sheets is tedious. Instead, I created an API endpoint in Workers to get the data from Airtable and format it as a CSV the way we need it to look when printing. From Sheets, the IMPORTDATA function imports the day’s orders automatically when opened, ready for printing.
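
That endpoint can be sketched as a Worker that reads from the Airtable REST API and responds with CSV. The base ID, table, and field names below are placeholders, and the Airtable token would live in a secret:

// Hypothetical sketch of the Airtable-to-CSV endpoint.
export default {
  async fetch(request, env) {
    const response = await fetch("https://api.airtable.com/v0/BASE_ID/Orders", {
      headers: { Authorization: `Bearer ${env.AIRTABLE_TOKEN}` },
    });
    const { records } = await response.json();

    const rows = [["Order", "Box Size", "Weight"]];
    for (const record of records) {
      rows.push([
        record.fields["Order"] ?? "",
        record.fields["Box Size"] ?? "",
        record.fields["Weight"] ?? "",
      ]);
    }

    // Google Sheets' IMPORTDATA() reads this URL straight into the worksheet.
    const csv = rows.map((row) => row.join(",")).join("\n");
    return new Response(csv, { headers: { "Content-Type": "text/csv" } });
  },
};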

Sending Package Details to ShipStation

Once the packing team has finished packing and filling out the shipment worksheet, box sizes and weights are entered into Airtable for each order. Rather than also typing these details into ShipStation, I built an endpoint in our Workers API that sets the weight and size using the ShipStation API. ShipStation order updates are keyed on the order’s ID, so the script first lists all open orders and writes the order name-to-ID mapping to Workers KV; future requests to this API can then skip the ShipStation list API, which is slow and has strict limits.
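
A sketch of that KV lookup pattern, with illustrative binding and helper names:

// Hypothetical sketch: resolve a ShipStation order ID from KV, refreshing
// the name-to-ID mapping from the slow list API only on a cache miss.
async function listAllOpenOrdersFromShipStation() {
  // Placeholder: a real implementation would call ShipStation's orders API.
  return [];
}

async function getShipStationOrderId(env, orderName) {
  let id = await env.ORDER_IDS.get(orderName);
  if (id !== null) return id;

  const openOrders = await listAllOpenOrdersFromShipStation();
  for (const order of openOrders) {
    await env.ORDER_IDS.put(order.name, String(order.id));
    if (order.name === orderName) id = String(order.id);
  }
  return id;
}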

Next, I built another Airtable script to send the details of each box to this API. In addition to setting the weight and size, the order is also tagged with today’s date, making it easy to identify what orders in ShipStation are ready to be labeled. Finally, the labels are created and printed in ShipStation in bulk and applied to their respective packages.

Putting it all together

An overview of the entire system: all clients connect to Airtable, and Airtable makes calls out to the Worker APIs, which connect to and coordinate between all the third-party APIs.

Why Workers and Airtable Work Well Together

While it might have been possible to build this entire workflow in Airtable, integrating Workers has made it much easier to build, test, and reuse code, both between Airtable scripts and across other platforms.

Development Experience

The Airtable Scripting app makes it quick and easy to build scripts that work with the data stored in Airtable, with a decent editor and autocomplete, but it is hard to build more complex scripts in it.

Funnily enough for this project, latency and scaling weren’t all that important. But Cloudflare Workers makes development and testing incredibly easy: no excessive configuration or deployment pipelines.

Reliability and Security

We are running a business, and having to babysit servers is a massive distraction we certainly don’t need. With Workers being fully serverless, I never have to worry about anything breaking because a server is down.

And we can safely store all the secrets needed to access third-party systems with Cloudflare’s secret environment variables, keeping those tokens and keys fully encrypted and secure.
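
For example, a token can be added from the command line with Wrangler; the variable name here is just an illustration:

# Prompts for the value, then stores it encrypted alongside the Worker
wrangler secret put SHIPSTATION_API_KEY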

Airtable is a great database and UI in one

Building UIs around data entry and visualisation takes a lot of time and resources. By utilizing Airtable, I built out an entire workflow without ever touching HTML, let alone front-end frameworks. Instead, I could focus solely on core business logic. Airtable’s dashboard feature also allows building reports with high-level overviews of the types of packages being sent, helping us forecast future packing supplies needed.

While building workflows in spreadsheets can feel like a hack when custom scripting gets involved, Airtable is the opposite. The extensibility and good UX have made Airtable a great tool to use going forward.

Improvements Going Forward

Now that we had the basics covered, I noticed one of the most powerful things about this setup: how easy it was to add features. I started noticing minor issues with the workflow that could be improved. For example, when an order has to be split into multiple packages, the row in Airtable has to be duplicated and have a suffix added to the order number for each order. Automating order splitting was not a priority previously, but it quickly became one of the most time-consuming parts of the process. Thirty minutes later, every row had a “Split order” button, built with another Airtable script.

Another issue was when a customer was not going to be home on a Wednesday, which meant that if the order got shipped on Monday, it would go bad sitting on their doorstep. Thankfully, adding an optional minimum ship date tag to the Workers API that gets shippable orders was quick and easy. Now, our sales team can add tags for minimum ship dates when customers are not home, and the rest of the workflow will automatically take it into account when deciding what to ship.

Conclusion

Many businesses are turning to Workers for their incredible performance and scaling to millions or billions of requests, but we couldn’t be happier with how much value we get with the few hundred Workers requests we do every day.

Cloudflare Workers, especially in combination with tools like Airtable, makes it really easy to create your own internal tools, built to your exact specifications, and that will bring this capability to so many more businesses.

Cloudflare is not affiliated with Formagrid, Inc., dba Airtable. The views and opinions expressed in this blog post are solely those of the guest author and do not necessarily represent those of Cloudflare, Inc.

The Cloudflare Developer Expert Program: apply today!

Post Syndicated from Albert Zhao original https://blog.cloudflare.com/developer-expert-program/

Today we’re launching the Cloudflare Developer Expert Program: an initiative to support and recognize our VIP users who build with Workers, Pages, and the entire Cloudflare developer ecosystem.

A Cloudflare Developer Expert is an early adopter of new releases, a frequent participant in feedback sessions, and an evangelist for Cloudflare products made for the larger developer community.

But first, what are the benefits of becoming a Cloudflare Developer Expert?

  • Early access to features (e.g., private betas)
  • Admission to a private community of power users
  • Routine calls with product managers, engineers, and developer advocates
  • Sponsorships for OSS work
  • Our best swag, of course

We have already sent invites to our first batch of power users, but if you’d like to join or want to nominate a developer, please fill out this form.

Why We Made This Program

We ship very quickly at Cloudflare.

This is because we want feedback early in development, allowing users to challenge our assumptions and validate what we’re building. In the Workers team, this strategy has been very successful.

For example, we began beta testing custom builds for Wrangler (our CLI tool) that allow you to run any JavaScript bundler you want. This was a huge release because it introduced the ES Modules syntax for the first time in Workers, significantly increasing the number of usable JavaScript packages and libraries. To get feedback before public release, we opened a private Discord channel and invited around 50 users for testing.

We were blown away by the feedback.

Our users quickly discovered edge cases that weren’t working, such as needing support for Workers Unbound. This made it easy for us to prioritize what to fix before GA. We also discovered actionable steps to improve documentation.

“The Workers team wanted our input early on for such a big release, and it really shows how seriously they’re taking developer experience,” said James Ross, CTO of Nodecraft.

After seeing the success of this small group of users, we figured it was time to make this a regular part of development.

We threw together a list of users, sent NDAs, and opened a private Discord channel for one of our biggest releases of the year: running functions directly on Cloudflare Pages.

“We’re able to ship this feature more quickly and confidently because of feedback in our Discord,” said Nevi Shah, product manager for Cloudflare Pages. “Users let us know quickly what can be better and what features they need first.”

Developers, Developers, Developers

Back in April, we launched our first Developer Week with a central focus: how to get developers to build more on Cloudflare. This included exciting releases like Cloudflare Pages and the Durable Objects open beta.

Since then, after receiving so much feedback in our Discord and other channels, we learned developers either expect their code to automatically run on Cloudflare’s infrastructure (Cloudflare Pages), or, if it’s a new technology (such as Durable Objects), they want as much direct guidance as possible to reliably get up and running. We realized that involving users earlier in development allowed us to support more happy paths.

And since developers like doing things their own way, we aim to support as many happy paths as possible on our platform.

“I started developing with Cloudflare Workers shortly after it was announced. Over that time, Cloudflare has only increased its emphasis on developer experience,” said David Barratt, staff software engineer at Drizly. “The Cloudflare Developer Expert Program has been a fantastic way to have a quick feedback loop between the developers who have a lot of experience using the platform and the developers building that platform.”

Apply now!

If you are a developer who deploys with Workers, Pages, and our other tools, we want you to apply! We’re hoping to review applicants with experience deploying to production with our developer tools.

And again, Cloudflare Developer Experts get extra-special care from our team.

To apply for the Cloudflare Developer Expert Program, fill out this form.

Cloudflare Pages Goes Full Stack

Post Syndicated from Nevi Shah original https://blog.cloudflare.com/cloudflare-pages-goes-full-stack/

When we announced Cloudflare Pages as generally available in April, we promised you it was just the beginning. The journey of our platform started with support for static sites with small bits of dynamic functionality like setting redirects and custom headers. But we wanted to give even more power to you and your teams to begin building the unimaginable. We envisioned a future where your entire application — frontend, APIs, storage, data — could all be deployed with a single commit, easily testable in staging and requiring a single merge to deploy to production. So in the spirit of “Full Stack” Week, we’re bringing you the tools to do just that.

Welcome to the future, everyone. We’re thrilled to announce that Pages is now a full stack platform, with help from Cloudflare Workers. But how?

It works the exact same way Pages always has: write your code, git push to your git provider (now with GitLab support!) and we’ll deploy your entire site for you. The only difference is that it won’t just be your frontend, but your backend too, using Cloudflare Workers to deploy serverless functions.

The integration you’ve been waiting for

Cloudflare Workers provides a serverless execution environment that allows you to create entirely new applications or augment existing ones without configuring or maintaining infrastructure. Before today, it was possible to connect Workers to a Pages project by installing Wrangler and manually deploying a Worker, writing your app across both Pages and Workers. But we didn’t just want “possible”; we wanted something that came as second nature to you, so you wouldn’t have to think twice about adding dynamic functionality to your site.

How it works

By using your repo’s filesystem convention and exporting one or more function handlers, Pages can leverage Workers to deploy serverless functions on your behalf. To begin, simply add a ./functions directory in the root of your project, and inside a JavaScript or TypeScript file, export a function handler. For example, let’s say in your ./functions directory, you have a file, hello.js, containing:

// GET requests to /hello would return "Hello, world!"
export const onRequestGet = () => {
  return new Response("Hello, world!")
}
 
// POST requests to /hello with a JSON-encoded body would return "Hello, <name>!"
export const onRequestPost = async ({ request }) => {
  const { name } = await request.json()
  return new Response(`Hello, ${name}!`)
}

Performing a git commit and push will trigger a new Pages build and deploy your dynamic site! During the build pipeline, Pages traverses your directory, mapping the filenames to URLs relative to your repo structure.

Under the hood, Pages generates Workers which include all your routing and functionality from the source. Functions supports deeply-nested routes, wildcard matching, middleware for things like authentication and error-handling, and more! To demonstrate all of its bells and whistles, we’ve created a blog post to walk through an example full stack application.

Letting you do what you do best

As your site grows in complexity, with Pages’ new full stack functionality, your developer experience doesn’t have to. You can enjoy the workflow you know and love while unlocking even more depth to your site.

Seamlessly build

In the same way we’ve handled builds and deployments with your static sites — with a `git commit` and `git push` — we’ll deploy your functions for you automatically. As long as your directory follows the proper structure, Pages will identify and deploy your functions to our network with your site.

Define your bindings

As you bring your Workers to Pages, bindings are a big part of what makes your application a full stack application. We’re so excited to bring to Pages all the bindings you’ve previously used with regular Workers!

  • KV namespace: Our serverless and globally accessible key-value storage solution. Within Pages, you can integrate with any of the KV namespaces you set in your Workers dashboard for your Pages project.
  • Durable Object namespace: Our strongly consistent coordination primitive that makes connecting WebSockets, handling state and building entire applications a breeze. As with KV, you can set your namespaces within the Workers dashboard and choose from that list within the Pages interface.
  • R2 (coming soon!): Our S3-compatible Object Storage solution that’s slashing egress fees to zero.
  • Environment Variable: An injected value that can be accessed by your functions and is stored as plain-text. You can set your environment variables directly within the Pages interface for both your production and preview environments at build-time and run-time.
  • Secret (coming soon!): An encrypted environment variable, which cannot be viewed by wrangler or any dashboard interfaces. Secrets are a great home for sensitive data including passwords and API tokens.

Preview deployments — now for your backend too

With the deployment of your serverless functions, you can still enjoy the ease of collaboration and testing like you did previously. Before you deploy to production, you can easily deploy your project to a preview environment to stage your changes. Even with your functions, Pages lets you keep a version history of every commit with a unique URL for each, making it easy to gather feedback whether it’s from a fellow developer, PM, designer or marketer! You can also enjoy the same infinite staging privileges that you did for static sites, with a consistent URL for the latest changes.

Develop and preview locally too

However, we realize that building and deploying on every small change just to stage it can be cumbersome when you’re iterating quickly. You can now develop full stack Pages applications with the latest release of our wrangler CLI. Backed by Miniflare, you can run your entire application locally with support for mocked secrets, environment variables, and KV (Durable Objects support coming soon!). Point wrangler at a directory of static assets, or seamlessly connect to your existing tools:

# Install wrangler v2 beta
npm install wrangler@beta

# Serve a folder of static assets
npx wrangler pages dev ./dist

# Or automatically proxy your existing tools
npx wrangler pages dev -- npx react-scripts start

This is just the beginning of Pages’ integrations with wrangler. Stay tuned as we continue to enhance your developer experience.

What else can you do?

Everything you can do with HTTP Workers today!

When deploying a Pages application with functions, Pages is compiling and deploying first class Workers on your behalf. This means there is zero functionality loss when deploying a Worker within your Pages application — instead, there are only new benefits to be gained!

Integrate with SvelteKit — out of the box!

SvelteKit is a web framework for building Svelte applications. It’s built and maintained by the Svelte team, which makes it the Svelte user’s go-to solution for all their application needs. Out of the box, SvelteKit allows users to build projects with complex API backends.

As of today, SvelteKit projects can attach and configure the @sveltejs/adapter-cloudflare package. After doing this, the project can be added to Pages and is ready for its first deployment! With Pages, your SvelteKit project(s) can deploy with API endpoints and full server-side rendering support. Better yet, the entire project — including the API endpoints — can enjoy the benefits of preview deployments, too! This, even on its own, is a huge victory for advanced projects that were previously on the Workers adapter. Check out this example to see the SvelteKit adapter for Pages in action!

Use server-side rendering

You are now able to intercept any request that comes into your Pages project. This means that you can define Workers logic that will receive incoming URLs and, instead of serving static HTML, your Worker can render fresh HTML responses with dynamic data.

For example, an application with a product page can define a single product/[id].js file that will receive the id parameter, retrieve the product information from a Workers KV binding, and then generate an HTML response for that page. Compared to a static-site generator approach, this is more succinct and easier to maintain over time since you do not need to build a static HTML page per product at build-time… which may potentially be tens or even hundreds of thousands of pages!
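
As a rough sketch (the PRODUCTS KV binding and the stored fields are assumptions for illustration), that file could look like:

// ./functions/product/[id].js: a hypothetical server-side rendering sketch.
export const onRequestGet = async ({ params, env }) => {
  const product = await env.PRODUCTS.get(`product:${params.id}`, "json");
  if (!product) return new Response("Product not found", { status: 404 });

  // Render fresh HTML per request instead of pre-building a page per product.
  const html = `<!DOCTYPE html>
<html><body><h1>${product.name}</h1><p>${product.description}</p></body></html>`;
  return new Response(html, {
    headers: { "Content-Type": "text/html;charset=UTF-8" },
  });
};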

Already have a Worker? We’ve got you!

If you already have a single Worker and want to bring it right on over to Pages to reap the developer experience benefits of our platform, our announcement today also enables you to do precisely that. Your build can generate an ES module Worker called _worker.js in the output directory of your project, perform your git commands to deploy, and we’ll take care of the rest! This can be especially advantageous to you if you’re a framework author or have a more complex use case that doesn’t follow our provided file structure.
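
A minimal _worker.js might look like the following sketch; the ASSETS binding is how Pages exposes your static build output to an advanced-mode Worker, and the /api route is illustrative:

// ./_worker.js: a minimal sketch of an "advanced mode" Pages Worker.
export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    if (url.pathname.startsWith("/api/")) {
      return new Response(JSON.stringify({ ok: true }), {
        headers: { "Content-Type": "application/json" },
      });
    }
    // Everything else falls through to the static assets of the build.
    return env.ASSETS.fetch(request);
  },
};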

Try it at no cost — for a limited time only

We’re thrilled to be releasing our open beta today for everyone to try at no additional cost to your Cloudflare plan. While we will still have limits in place, we are using this open beta period to learn more about how you and your teams are deploying functions with your Pages projects. For the time being, we encourage you to lean into your creativity and build out that site you’ve been thinking about for a long time — without the worry of getting billed.

In just a few short months, when we announce General Availability, you can expect our billing to reflect that of the Workers Bundled plan — after all, these are just Workers under the hood!

Coming up…

As we’re only announcing this release as an open beta, we have some really exciting things planned for the coming weeks and months. We want to improve on the quick and easy Pages developer experience that you’re already familiar with by adding support for integrated logging and more analytics for your deployed functions.

Beyond that, we’ll be expanding our first-class support for the next generation of frontend frameworks. As we’ve shown with SvelteKit, Pages’ ability to seamlessly deploy both static and dynamic code together enables unbeatable end-user performance and developer ease, and we’re excited to unlock that for more people. Work is underway on making NextJS, NuxtJS, React Server Components, Remix, Shopify Hydrogen and more integrate seamlessly, so Pages can become the primary home for your preferred frameworks. Stay tuned to this blog for more announcements, or better yet, come join us and help make it happen!

Finally, we’re working to speed up those build times, so you can focus on pushing changes and iterating quickly — without the wait!

Getting started

To get started head over to our Pages docs and check out our demo blog to learn more about how to deploy serverless functions to Pages using Cloudflare Workers.

Of course, what we love most is seeing what you build! Pop into our Discord and show us how you’re using Pages to build your full stack apps.

Building a full stack application with Cloudflare Pages

Post Syndicated from Greg Brimble original https://blog.cloudflare.com/building-full-stack-with-pages/

We were so excited to announce support for full stack applications in Cloudflare Pages that we knew we had to show it off in a big way. We’ve built a sample image-sharing platform to demonstrate how you can add serverless functions right from within Pages with help from Cloudflare Workers. With just one new file in your project, you can add dynamic rendering, interact with other APIs, and persist data with KV and Durable Objects. The possibilities for full-stack applications, in combination with Pages’ quick development cycles and unlimited preview environments, give you the power to create almost any application.

Today, we’re walking through our example image-sharing platform. We want to be able to share pictures with friends while still also keeping some images private. We’ll build a JSON API with Functions (storing data on KV and Durable Objects), integrate with Cloudflare Images and Cloudflare Access, and use React for our front end.

If you want to dive right into the good stuff, our demo instance is published here, and the code is on GitHub, but stick around for a more gentle approach.

Building serverless functions with Cloudflare Pages

File-based routing

If you’re not already familiar, Cloudflare Pages connects with your git provider (GitHub and GitLab), and automates the deployment of your static site to Cloudflare’s network. Functions lets you enhance these apps by sprinkling in dynamic data. If you haven’t already, you can sign up here.

In our project, let’s create a new function:

// ./functions/time.js

export const onRequest = () => {
  return new Response(new Date().toISOString())
}

git commit-ing and pushing this file should trigger a build and deployment of your first Pages function. Any requests for /time will be served by this function, and all other requests will fall back to the static assets of your project. Placing Functions files in directories works as you’d expect: ./functions/api/time.js would be available at /api/time and ./functions/some_directory/index.js would be available at /some_directory.

We also support TypeScript (./functions/time.ts would work just the same), as well as parameterized files:

  • ./functions/todos/[id].js with single square brackets will match all requests like /todos/123;
  • and ./functions/todos/[[path]].js with double square brackets, will match requests for any number of path segments (e.g. /todos/123/subtasks).

We declare a PagesFunction type in the @cloudflare/workers-types library, which you can use to type-check your Functions.
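
For instance, a parameterized function can read its path segment from params, and PagesFunction keeps it type-checked. A minimal sketch:

// ./functions/todos/[id].ts
// A request for /todos/123 arrives here with params.id === "123".
export const onRequestGet: PagesFunction = async ({ params }) => {
  return new Response(`You asked for todo #${params.id}`);
};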

Dynamic data

So, returning to our image-sharing app, let’s assume we already have some images uploaded, and we want to display them on the homepage. We’ll need an endpoint which will return a list of these images, which the front-end can call:

// ./functions/api/images.ts

export const jsonResponse = (value: any, init: ResponseInit = {}) =>
  new Response(JSON.stringify(value), {
    headers: { "Content-Type": "application/json", ...init.headers },
    ...init,
  });

const generatePreviewURL = ({
  previewURLBase,
  imagesKey,
  isPrivate,
}: {
  previewURLBase: string;
  imagesKey: string;
  isPrivate: boolean;
}) => {
  // If isPrivate, generates a signed URL for the 'preview' variant
  // Else, returns the 'blurred' variant URL which never requires signed URLs
  // https://developers.cloudflare.com/images/cloudflare-images/serve-images/serve-private-images-using-signed-url-tokens

  return "SIGNED_URL";
};

export const onRequestGet: PagesFunction<{
  IMAGES: KVNamespace;
}> = async ({ env }) => {
  const { imagesKey } = (await env.IMAGES.get("setup", "json")) as Setup;

  const kvImagesList = await env.IMAGES.list<ImageMetadata>({
    prefix: `image:uploaded:`,
  });

  const images = kvImagesList.keys
    .map((kvImage) => {
      try {
        const { id, previewURLBase, name, alt, uploaded, isPrivate } =
          kvImage.metadata as ImageMetadata;

        const previewURL = generatePreviewURL({
          previewURLBase,
          imagesKey,
          isPrivate,
        });

        return {
          id,
          previewURL,
          name,
          alt,
          uploaded,
          isPrivate,
        };
      } catch {
        return undefined;
      }
    })
    .filter((image) => image !== undefined);

  return jsonResponse({ images });
};

Eagle-eyed readers will notice we’re exporting onRequestGet, which lets us respond only to GET requests.

We’re also using a KV namespace (accessed with env.IMAGES) to store information about images that have been uploaded. To create a binding in your Pages project, navigate to the “Settings” tab.

Interfacing with other APIs

Cloudflare Images is an inexpensive, high-performance, and featureful service for hosting and transforming images. You can create multiple variants to render your images in different ways and control access with signed URLs. We’ll add a function to interface with this service’s API and upload incoming files to Cloudflare Images:

// ./functions/api/admin/upload.ts

export const onRequestPost: PagesFunction<{
  IMAGES: KVNamespace;
}> = async ({ request, env }) => {
  const { apiToken, accountId } = (await env.IMAGES.get(
    "setup",
    "json"
  )) as Setup;

  // Prepare the Cloudflare Images API request body
  const formData = await request.formData();
  formData.set("requireSignedURLs", "true");
  const alt = formData.get("alt") as string;
  formData.delete("alt");
  const isPrivate = formData.get("isPrivate") === "on";
  formData.delete("isPrivate");

  // Upload the image to Cloudflare Images
  const response = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${accountId}/images/v1`,
    {
      method: "POST",
      body: formData,
      headers: {
        Authorization: `Bearer ${apiToken}`,
      },
    }
  );

  // Store the image metadata in KV
  const {
    result: {
      id,
      filename: name,
      uploaded,
      variants: [url],
    },
  } = await response.json<{
    result: {
      id: string;
      filename: string;
      uploaded: string;
      requireSignedURLs: boolean;
      variants: string[];
    };
  }>();

  const metadata: ImageMetadata = {
    id,
    previewURLBase: url.split("/").slice(0, -1).join("/"),
    name,
    alt,
    uploaded,
    isPrivate,
  };

  await env.IMAGES.put(
    `image:uploaded:${uploaded}`,
    "Values stored in metadata.",
    { metadata }
  );
  await env.IMAGES.put(`image:${id}`, JSON.stringify(metadata));

  return jsonResponse(true);
};

Persisting data

We’re already using KV to store information that is read often but rarely written to. What about features that require a bit more synchronicity?

Let’s add a download counter to each of our images. We can create a highres variant in Cloudflare Images, and increment the counter every time a user requests a link. This requires a bit more setup, but unlocking the power of Durable Objects in your projects is absolutely worth it.

We’ll need to create and publish the Durable Object class capable of maintaining this download count:

// ./durable_objects/downloadCounter.js

export class DownloadCounter {
  constructor(state) {
    this.state = state;
    // `blockConcurrencyWhile()` ensures no requests are delivered until initialization completes.
    this.state.blockConcurrencyWhile(async () => {
      let stored = await this.state.storage.get("value");
      this.value = stored || 0;
    });
  }

  async fetch(request) {
    const url = new URL(request.url);
    let currentValue = this.value;

    if (url.pathname === "/increment") {
      currentValue = ++this.value;
      await this.state.storage.put("value", currentValue);
    }

    return jsonResponse(currentValue);
  }
}
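
With the class deployed, a function can route each download through a per-image instance of the counter. A sketch, assuming a hypothetical DOWNLOAD_COUNTER Durable Object binding:

// ./functions/api/download/[id].js: hypothetical sketch of using the counter.
export const onRequestGet = async ({ params, env }) => {
  // One Durable Object instance, and therefore one counter, per image ID.
  const id = env.DOWNLOAD_COUNTER.idFromName(params.id);
  const stub = env.DOWNLOAD_COUNTER.get(id);
  return stub.fetch("https://counter/increment");
};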

Middleware

If you need to execute some code (such as authentication or logging) before your function runs, Pages offers easy-to-use middleware which can be applied at any level in your file-based routing. By creating a _middleware.ts file in a directory, Pages knows to run that file first, and then execute your function when next() is called.

In our application, we want to prevent unauthorized users from uploading images (/api/admin/upload) or deleting images (/api/admin/delete). Cloudflare Access lets us apply role-based access control to all or part of our application, and you only need a single file to integrate it into our serverless functions. We create ./functions/api/admin/_middleware.ts which will apply to all incoming requests at /api/admin/*:

// ./functions/api/admin/_middleware.ts

const validateJWT = async (jwtAssertion: string | null, aud: string) => {
  // If the JWT is valid, return the JWT payload
  // Else, return false
  // https://developers.cloudflare.com/cloudflare-one/identity/users/validating-json

  return jwtPayload;
};

const cloudflareAccessMiddleware: PagesFunction<{ IMAGES: KVNamespace }> =
  async ({ request, env, next, data }) => {
    const { aud } = (await env.IMAGES.get("setup", "json")) as Setup;

    const jwtPayload = await validateJWT(
      request.headers.get("CF-Access-JWT-Assertion"),
      aud
    );

    if (jwtPayload === false)
      return new Response("Access denied.", { status: 403 });

    // We could also use the data object to pass information between middlewares
    data.user = jwtPayload.email;

    return await next();
  };

export const onRequest = [cloudflareAccessMiddleware];

Middleware is a powerful tool at your disposal allowing you to easily protect parts of your application with Cloudflare Access, or quickly integrate with observability and error logging platforms such as Honeycomb and Sentry.

Integrating as Jamstack

The “Jam” of “Jamstack” stands for JavaScript, API and Markup. Cloudflare Pages previously provided the ‘J’ and ‘M’, and with Workers in the middle, you can truly go full-stack Jamstack.

We’ve built the front end of this image sharing platform with Create React App as an approachable example, but Cloudflare Pages natively integrates with an ever-growing number of frameworks (currently 23), and you can always configure your own entirely custom build command.

Your front end simply needs to make a call to the Functions we’ve already configured, and render out that data. We’re using SWR to simplify things, but you could do this with entirely vanilla JavaScript fetch-es, if that’s your preference.

// ./src/components/ImageGrid.tsx

export const ImageGrid = () => {
  const { data, error } = useSWR<{ images: Image[] }>("/api/images");

  if (error || data === undefined) {
    return <div>An unexpected error has occurred when fetching the list of images. Please try again.</div>;
  }

  return (
    <div>
      {data.images.map((image) => (
        <ImageCard image={image} key={image.id} />
      ))}
    </div>
  );
};

Local development

No matter how fast it is, iterating on a project like this can be painful if you have to push up every change in order to test how it works. We’ve released a first-class integration with wrangler for local development of Pages projects, including full support for Functions, Workers, secrets, environment variables and KV. Durable Objects support is coming soon.

Install from npm:

npm install wrangler@beta

and either serve a folder of static assets, or proxy your existing tooling:

# Serve a directory
npx wrangler pages dev ./public

# or integrate with your other tools
npx wrangler pages dev -- npx react-scripts start

Go forth, and build!

If you like puppies, we’ve deployed our image-sharing application here, and if you like code, that’s over on GitHub. Feel free to fork and deploy it yourself! There’s a five-minute setup wizard, and you’ll need Cloudflare Images, Access, Workers, and Durable Objects.

We are so excited about the future of the Pages platform, and we want to hear what you’re building! Show off your full-stack applications in the #what-i-built channel, or get assistance in the #pages-help channel on our Discord server.

Cloudflare Pages now offers Gitlab support

Post Syndicated from Nevi Shah original https://blog.cloudflare.com/cloudflare-pages-partners-with-gitlab/

In the early stages of our ideation of Pages, we set out to build a platform with a smooth developer experience that integrates seamlessly with your existing workflow. However, after announcing Pages’ general availability, we realized our platform may not actually be usable by every developer. Before today, only those of you who used GitHub as your source code management tool could take advantage of the Pages experience.

As part of Full Stack Week, we’re opening the doors of our platform to even more users by announcing our integration with GitLab, the DevOps platform! You can now create new Pages projects by connecting your repos stored on GitLab and make site changes there via your usual git commands. And what’s more? We’re also launching an official partnership with GitLab to bring you even better integrations with the git provider in the months to come.

Why GitLab?

As a Jamstack platform, our goal is to enable you, the developer, to focus on what you do best — code, code, code — without the heavy lifting! Not only does this mean giving you all the tools you need to build out a full stack site, but also providing you with integrations that fit your development needs. By expanding our platform ecosystem to GitLab, Cloudflare can now serve the needs of a broader developer community collaborating on their sites.

Since our April launch, one of the most common questions and pieces of feedback we’ve received in customer calls, on Discord/Twitter, and on our community threads centered around GitLab. We knew our git integration story couldn’t just stop at one provider, especially given the diversity in tooling we see among our community. So it became glaringly obvious we needed to extend Pages to the GitLab community.

Our partnership

Today, we’re proud to now be official technology partners with GitLab Inc. In addition to our git integration, the goal of our partnership is to improve existing and develop future integrations, so your teams can seamlessly collaborate and accelerate site delivery and updates at scale. As you begin using Pages with GitLab, our teams will be working closely together in a cross-collaborative approach for new integrations.

Developers can be more productive when they create, test, secure and deploy software from a single devops application instead of bouncing between multiple different tools. Cloudflare Pages’ integration with GitLab makes it easier for joint users to develop and deploy new code to Cloudflare’s network using the same syntax and git commands they’re already comfortable using.
— Michael LeBeau, Alliance Manager at GitLab

Get started

To set up your first project with GitLab, just create a new project in the Pages dashboard. Select “GitLab” and Pages will bring you to your GitLab sign-in screen where you can sign in to your account. Then, select the repo with which you’d like to create your project, configure your build settings, and deploy! From here, you can begin making changes to your site directly via commits to GitLab, triggering a new build every time.

Have questions? To get started, check out the Pages docs and be sure to leave us some feedback by clicking the “Give Feedback” button there. Show us what you build by joining the chatter in our Discord channel.

Happy developing!

wrangler 2.0 — a new developer experience for Cloudflare Workers

Post Syndicated from Ashcon Partovi original https://blog.cloudflare.com/wrangler-v2-beta/

Much of a developer’s work is about making trade-offs: consistency versus availability, speed versus correctness, you name it. While there will always be different technical trade-offs to consider, we believe there are some that you, as a developer, should never need to make.

One of those is an easy-to-use development environment. Whether you’re onboarding a new developer to your team or you simply want to develop faster, it’s important that even the smallest of things are optimized for speed and simplicity.

That’s why we’re excited to announce the second generation of our developer tooling for Cloudflare Workers. It’s a new developer experience that works out-of-the-box, is lightning fast, and can even run Workers on a local machine. (Yes!)

If you’re already familiar with our existing tools, we’re not just talking about the wrangler CLI, we’re talking about its next major release: wrangler 2.0. Stick around to get a sneak peek at the new experience.

No config? No problem

We’ve made it much easier to get started with Cloudflare Workers. All you need is a single JavaScript file to run a Worker — no configuration needed. You don’t even need to decide on a name!

When you run wrangler dev <filename>, your code is automatically bundled and deployed to a development environment. Then, you can send HTTP requests to that environment using a localhost proxy. Here’s what that looks like in-action:

Live debugging, just like that

Now there’s a completely redesigned experience for debugging a Worker. With a simple command you get access to a remote debugger, the same one used by Chrome when you click “Inspect,” which provides an interactive view of logs, exceptions, and requests. It feels like your Worker is running locally, yet it’s actually running on the Cloudflare network.

A debugger that “just works” and auto-detects your changes makes all the difference when you’re just trying to code. We’ve also made a number of improvements to make the debugging experience even easier:

  • Keybind shortcuts, to quickly toggle features or open a window.
  • Support for “--public <path>”, to automatically serve your static assets.
  • Faster and more reliable file-watching.

To start a debugging session, just run: wrangler dev <filename>, then hit the “D” keybind.

Local mode? Flip a switch

Another aspect of the new debugging experience is the ability to switch to “local mode,” which runs your Worker on your local machine. In fact, you can easily switch between “network” and “local” mode with just a keybind shortcut.

How does this work? Recently, we announced that Miniflare (created by Brendan Coll), a project to locally emulate Workers in Node.js, has joined the Cloudflare organization. Miniflare is great for unit testing and situations where you’d like to debug Workers without an Internet connection. Now we’ve integrated it directly into the local development experience, so you can enjoy the benefits of both the network and your localhost!

Let us know what you think!

Serverless should be simple. We’re really excited about these improvements to the developer experience for Workers, and we have a lot more planned.

While we’re still working on wrangler 2.0, you can try the beta release by running npm install wrangler@beta, or by visiting the repository to see what we’re working on. If you’re already using wrangler to deploy existing applications, we recommend continuing to use wrangler 1.0 until the 2.0 release is officially out. We will continue to develop and maintain wrangler 1.0 until we’re finished with backwards-compatibility for 2.0.

If you’re starting a project or want to try out the new experience, we’d love to hear your feedback! Let us know what we’re missing or what you’d like to see in wrangler 2.0. You can create a feature request or start a discussion in the repository. (We’ll merge them into the existing wrangler repository when 2.0 is out of beta.)

Thank you to all of our developers out there, and we look forward to seeing what you build!

Automatically generating types for Cloudflare Workers

Post Syndicated from Brendan Coll original https://blog.cloudflare.com/automatically-generated-types/

Historically, keeping our Rust and TypeScript type repositories up to date has been hard. They were manually generated, which meant they ran the risk of being inaccurate or out of date. Until recently, the workers-types repository needed to be manually updated whenever the types changed. We also used to add type information for mostly-complete browser APIs. This led to confusion when people tried to use browser APIs that aren’t supported by the Workers runtime: their code would compile but throw errors when run.

That all changed this summer when Brendan Coll, whilst interning with us, built an automated pipeline for generating the types. It runs every time we build the Workers runtime, generating types for our TypeScript and Rust repositories. Now everything is up to date and accurate.

A quick overview

Every time the Workers runtime code is built, a script runs over the public APIs and generates the Rust and TypeScript types as well as a JSON file containing an intermediate representation of the static types. The types are sent to the appropriate repositories and the JSON file is uploaded as well in case people want to create their own types packages. More on that later.

This means the static types will always be accurate and up to date. It also allows projects running Workers in other, statically-typed languages to generate their own types from our intermediate representation. Here is an example PR from our Cloudflare bot. It’s detected a change in the runtime types and is updating the TypeScript files as well as the intermediate representation.

Using the auto-generated types

To get started, use wrangler to generate a new TypeScript project:

$ wrangler generate my-typescript-worker https://github.com/cloudflare/worker-typescript-template

If you already have a TypeScript project, you can install the latest version of workers-types with:

$ npm install --save-dev @cloudflare/workers-types

And then add @cloudflare/workers-types to your project’s tsconfig.json file:

{
  "compilerOptions": {
    "target": "ES2020",
    "module": "CommonJS",
    "lib": ["ES2020"],
    "types": ["@cloudflare/workers-types"]
  }
}

After that, you should get automatic type completion in your IDE of choice.

How it works

Here is some example code from the Workers runtime codebase.

class Blob: public js::Object {
public:
  typedef kj::Array<kj::OneOf<kj::Array<const byte>, kj::String, js::Ref<Blob>>> Bits;
  struct Options {
    js::Optional<kj::String> type;
    JS_STRUCT(type);
  };
  static js::Ref<Blob> constructor(js::Optional<Bits> bits, js::Optional<Options> options);
  int getSize();
  js::Ref<Blob> slice(js::Optional<int> start, js::Optional<int> end);
  JS_RESOURCE_TYPE(Blob) {
    JS_READONLY_PROPERTY(size, getSize);
    JS_METHOD(slice);
  }
};

A Python script runs over this code during each build and generates an abstract syntax tree containing information about each function, including its identifier, argument types, and return types.

{
  "name": "Blob",
  "kind": "class",
  "members": [
    {
      "name": "size",
      "type": {
        "name": "integer"
      },
      "readonly": true
    },
    {
      "name": "slice",
      "type": {
        "params": [
          {
            "name": "start",
            "type": {
              "name": "integer",
              "optional": true
            }
          },
          {
            "name": "end",
            "type": {
              "name": "integer",
              "optional": true
            }
          }
        ],
        "returns": {
          "name": "Blob"
        }
      }
    }
  ]
}

Finally, the TypeScript types repositories are automatically sent PRs with the updated types.

declare type BlobBits = (ArrayBuffer | string | Blob)[];

interface BlobOptions {
  type?: string;
}

declare class Blob {
  constructor(bits?: BlobBits, options?: BlobOptions);
  readonly size: number;
  slice(start?: number, end?: number, type?: string): Blob;
}

Overrides

In some cases, TypeScript supports concepts that our C++ runtime does not. Namely, generics and function overloads. In these cases, we override the generated types with partial declarations. For example, DurableObjectStorage makes heavy use of generics for its getter and setter functions.

declare abstract class DurableObjectStorage {
  get<T = unknown>(key: string, options?: DurableObjectStorageOperationsGetOptions): Promise<T | undefined>;
  get<T = unknown>(keys: string[], options?: DurableObjectStorageOperationsGetOptions): Promise<Map<string, T>>;

  list<T = unknown>(options?: DurableObjectStorageOperationsListOptions): Promise<Map<string, T>>;

  put<T>(key: string, value: T, options?: DurableObjectStorageOperationsPutOptions): Promise<void>;
  put<T>(entries: Record<string, T>, options?: DurableObjectStorageOperationsPutOptions): Promise<void>;

  delete(key: string, options?: DurableObjectStorageOperationsPutOptions): Promise<boolean>;
  delete(keys: string[], options?: DurableObjectStorageOperationsPutOptions): Promise<number>;

  transaction<T>(closure: (txn: DurableObjectTransaction) => Promise<T>): Promise<T>;
}

You can also write type overrides using Markdown. Here is an example of overriding types of KVNamespace.

Creating your own types

The JSON IR (intermediate representation) has been open sourced alongside the TypeScript types and can be found in this GitHub repository. We’ve also open sourced the type schema itself, which describes the format of the IR. If you’re interested in generating Workers types for your own language, you can take the IR, which describes the declaration in a “normalized” data structure, and generate types from it.

The declarations inside `workers.json` contain the elements to derive function signatures and other elements needed for code generation such as identifiers, argument types, return types and error management. A concrete use-case would be to generate external function declarations for a language that compiles to WebAssembly, to import precisely the set of available function calls available from the Workers runtime.
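
If you’re curious what consuming the IR might look like, here is a toy sketch (not an official tool) that walks a declaration shaped like the Blob example above and prints TypeScript-style member signatures. The file name and the assumption of one declaration per file are ours:

// toy-ir-walk.mjs: hypothetical sketch; assumes a single declaration
// shaped like the Blob IR shown earlier.
import { readFileSync } from "node:fs";

const decl = JSON.parse(readFileSync("blob.json", "utf8"));

for (const member of decl.members) {
  if (member.type.params) {
    const params = member.type.params
      .map((p) => `${p.name}${p.type.optional ? "?" : ""}: ${p.type.name}`)
      .join(", ");
    console.log(`${member.name}(${params}): ${member.type.returns.name};`);
  } else {
    const prefix = member.readonly ? "readonly " : "";
    console.log(`${prefix}${member.name}: ${member.type.name};`);
  }
}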

Conclusion

Cloudflare cares deeply about supporting the TypeScript and Rust ecosystems. Brendan created a tool which will ensure the type information for both languages is always up-to-date and accurate. We are also open-sourcing the type information itself in JSON format, so that anyone interested can create type data for any language they’d like!

JavaScript modules are now supported on Cloudflare Workers

Post Syndicated from Ashcon Partovi original https://blog.cloudflare.com/workers-javascript-modules/

We’re excited to announce that JavaScript modules are now supported on Cloudflare Workers. If you’ve ever taken a look at an example Worker written in JavaScript, you might recognize the following code snippet, which has been floating around the Internet for the past few years:

addEventListener("fetch", (event) => {
  event.respondWith(new Response("Hello Worker!"));
});

The above syntax is known as the “Service Worker” API, and it was proposed to be standardized for use in web browsers. The idea is that you can attach a JavaScript file to a web page to modify its HTTP requests and responses, acting like a virtual endpoint. It was exactly what we needed for Workers, and it even integrated well with standard Web APIs like fetch() and caches.

Before introducing modules, we want to make it clear that we will continue to support the Service Worker API. No developer wants to get an email saying that you need to rewrite your code because an API or feature is being deprecated; and you won’t be getting one from us. If you’re interested in learning why we made this decision, you can read about our commitment to backwards-compatibility for Workers.

What are JavaScript modules?

JavaScript modules, also known as ECMAScript (abbreviated “ES”) modules, are the standard API for importing and exporting code in JavaScript. They were introduced by the “ES6” language specification for JavaScript, and have been implemented by most Web browsers, Node.js, Deno, and now Cloudflare Workers. Here’s an example to demonstrate how it works:

// filename: ./src/util.js
export function getDate(time) {
  return new Date(time).toISOString().split("T")[0]; // "YYYY-MM-DD"
}

The “export” keyword indicates that the “getDate” function should be exported from the current module. Then, you can use “import” from another module to use that function.

// filename: ./src/index.js
import { getDate } from "./util.js"

console.log("Today’s date:", getDate(Date.now()));

Those are the basics, but there’s a lot more you can do with modules. It allows you to organize, maintain, and re-use your code in an elegant way that just works. While we can’t go over every aspect of modules here, if you’d like to learn more we’d encourage you to read the MDN guide on modules, or a more technical deep-dive by Lin Clark.

How can I use modules in Workers?

You can export a default module, which will represent your Worker. Instead of using “addEventListener,” each event handler is defined as a function on that module. Today, we support “fetch” for HTTP and WebSocket requests and “scheduled” for cron triggers.

export default {
  async fetch(request, environment, context) {
    return new Response("I’m a module!");
  },
  async scheduled(controller, environment, context) {
    // await doATask();
  }
}

You may also notice some other differences, such as the parameters on each event handler. Instead of receiving a single “Event” object, the parameters you need the most are spread out as their own arguments. The first parameter is specific to the event type: for “fetch” it’s the Request object, and for “scheduled” it’s a controller that contains the cron schedule.

The second parameter is an object that contains your environment variables (also known as “bindings”). Previously, each variable was inserted into the global scope of the Worker. While a simple solution, it was confusing to have variables magically appear in your code. Now, with an environment object, you can control which modules and libraries get access to your environment variables. This mechanism is more secure, as it can prevent a compromised or nosy third-party library from enumerating all your variables or secrets.
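
For example, here’s a minimal sketch of that pattern (the METRICS_TOKEN binding and the metrics endpoint are hypothetical): the helper only ever receives the one secret it needs.

// filename: ./src/metrics.js
// Hypothetical helper: it receives a single token, never the whole environment.
export async function sendMetrics(token, url) {
  await fetch("https://metrics.example.com/collect", {
    method: "POST",
    headers: { "Authorization": `Bearer ${token}` },
    body: url
  });
}

// filename: ./src/worker.js
import { sendMetrics } from "./metrics.js"

export default {
  async fetch(request, environment) {
    // Hand the library one binding, not the whole environment,
    // so it cannot enumerate your other variables or secrets.
    await sendMetrics(environment.METRICS_TOKEN, request.url);
    return new Response("Hello!");
  }
}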

The third parameter is a context object, which allows you to register background tasks using waitUntil(). This is useful for tasks like logging or error reporting that should not block the execution of the event.

When you put that all together, you can import and export multiple modules, as well as use the new event handler syntax.

// filename: ./src/error.js
export async function logError(url, error) {
  await fetch(url, {
     method: "POST",
     body: error.stack
  })
}

// filename: ./src/worker.js
import { logError } from "./error.js"

export default {
  async fetch(request, environment, context) {
    try {
       return await fetch(request);
    } catch (error) {
       context.waitUntil(logError(environment.ERROR_URL, error));
       return new Response("Oops!", { status: 500 });
    }
  }
}

Let’s not forget about Durable Objects, which became generally available earlier this week! You can also export classes, which is how you define a Durable Object class. Here’s another example with a “Counter” Durable Object that responds with an incrementing value.

// filename: ./src/counter.js
export class Counter {
  value = 0;
  fetch() {
    this.value++;
    return new Response(this.value.toString());
  }
}

// filename: ./src/worker.js
// We need to re-export the Durable Object class in the Worker module.
export { Counter } from "./counter.js"

export default {
  async fetch(request, environment) {
    const clientId = request.headers.get("cf-connecting-ip");
    const counterId = environment.Counter.idFromName(clientId);
    // Each IP address gets its own Counter.
    const counter = environment.Counter.get(counterId);
    return counter.fetch("https://counter.object/increment");
  }
}
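
Note that the Counter above keeps its count in memory, so the value resets if the object is ever evicted. For a count that survives eviction, here’s a sketch of a persistent variant using the Durable Object storage API (the constructor receives the object’s state):

// filename: ./src/counter.js
// A sketch of a Counter that persists its value in durable storage.
export class Counter {
  constructor(state) {
    this.state = state;
  }
  async fetch() {
    // Read the last value from storage, defaulting to 0.
    let value = (await this.state.storage.get("value")) || 0;
    value++;
    await this.state.storage.put("value", value);
    return new Response(value.toString());
  }
}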

Are there non-JavaScript modules?

Yes! While modules are primarily for JavaScript, we also support other module types, some of which are not yet standardized.

For instance, you can import WebAssembly as a module. Previously, with the Service Worker API, WebAssembly was included as a binding. We think that was a mistake, since WebAssembly should be represented as code and not an external resource. With modules, here’s the new way to import WebAssembly:

import module from "./lib/hello.wasm"

export default {
  async fetch(request) {
    const instance = await WebAssembly.instantiate(module);
    const result = instance.exports.hello();
    return new Response(result);
  }
}

While not supported today, we look forward to a future where WebAssembly and JavaScript modules can be more tightly integrated, as outlined in this proposal. The ergonomics improvement, demonstrated below, could go a long way toward making WebAssembly a first-class citizen of the JavaScript ecosystem.

import { hello } from "./lib/hello.wasm"

export default {
  async fetch(request) {
    return new Response(hello());
  }
}

We’ve also added support for text and binary modules, which allow you to import a String and an ArrayBuffer, respectively. While not standardized, they make it easy to import resources like an HTML file or an image.

<!-- filename: ./public/index.html -->
<!DOCTYPE html>
<html><body>
<p>Hello!</p>
</body></html>

import html from "../public/index.html"

export default {
  fetch(request) {
    if (request.url.endsWith("/index.html")) {
      return new Response(html, {
        headers: { "Content-Type": "text/html" }
      });
    }
    return fetch(request);
  }
}
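
In the same spirit, a binary module could serve an image. Here’s a sketch, assuming a build rule maps .png files to binary modules (see the configuration example below); the file path is illustrative:

// Sketch: import an image as an ArrayBuffer and serve it directly.
import image from "../public/logo.png"

export default {
  fetch(request) {
    if (request.url.endsWith("/logo.png")) {
      return new Response(image, {
        headers: { "Content-Type": "image/png" }
      });
    }
    return fetch(request);
  }
}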

How can I get started?

There are many ways to get started with modules.

First, you can try out modules in your browser using our playground (which doesn’t require an account) or by using the dashboard quick editor. Your browser will automatically detect when you’re using modules to allow you to seamlessly switch from the Service Worker API. For now, you’ll only be able to create one JavaScript module in the browser, though supporting multiple modules is something we’ll be improving soon.

If you’re feeling adventurous and want to start a new project using modules, you can try out the beta release of wrangler 2.0, the next generation of the command-line interface (CLI) for Workers.

For existing projects, we still recommend using wrangler 1.0 (release 1.17 or later). To enable modules, you can adapt your “wrangler.toml” configuration to the following example:

name = "my-worker"
type = "javascript"
workers_dev = true

[build.upload]
format = "modules"
dir = "./src"
main = "./worker.js" # becomes "./src/worker.js"

[[build.upload.rules]]
type = "ESModule"
globs = "**/*.js"

# Uncomment if you have a build script.
# [build]
# command = "npm run build"
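
If you import text or binary modules, like the HTML and image examples above, you would also declare rules for them. A sketch, with illustrative globs:

[[build.upload.rules]]
type = "Text"
globs = "**/*.html"

[[build.upload.rules]]
type = "Data"
globs = "**/*.png"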

We’ve updated our documentation to provide more details about modules, though some examples will still use the Service Worker API as we transition to showing both formats (and TypeScript as a bonus!).

If you experience an issue or notice something strange with modules, please let us know, and we’ll take a look. Happy coding, and we’re excited to see what you build with modules!

Introducing Services: Build Composable, Distributed Applications on Cloudflare Workers

Post Syndicated from Ashcon Partovi original https://blog.cloudflare.com/introducing-worker-services/

First, there was the Worker script. It was simple, yet elegant. With just a few lines of code, you could rewrite an HTTP request, append a header, or make a quick fix to your website.

Though, what if you wanted to build an entire application on Workers? You’d need a lot more tools in your developer toolbox. That’s why we’ve introduced extensions to the Workers platform like KV, our distributed key-value store; Durable Objects, a strongly consistent, object-oriented database; and soon R2, the no-egress object storage. While these tools allow you to build a more robust application, there’s still a gap when it comes to building a system architecture composed of many applications or services.

Imagine you’ve built an authentication service that authorizes requests to your API. You’d want to re-use that logic among all your other services. Moreover, when you make changes to that authentication service, you’d want to test it in a controlled environment that doesn’t affect those other services in production. Well, you don’t need to imagine anymore.

Introducing Services

Services are the new building block for deploying applications on Cloudflare Workers. Unlike a script, a service is composable, which allows services to talk to each other. Services also support multiple environments, which allow you to test changes in a preview environment, then promote them to production once you’re confident they work.

To enable a seamless transition to services, we’ve automatically migrated every script to become a service with one “production” environment — no action needed.

Services have environments

Each service comes with a production environment and the ability to create or clone dozens of preview environments. Every aspect of an environment is overridable: the code, environment variables, and even resources like a KV namespace. You can create and switch between environments with just a few clicks in the dashboard.

Each environment is resolvable at a unique hostname, which is automatically generated when you create or rename the environment. There’s no waiting around after you deploy. Everything you need, like DNS records, SSL certificates, and more, is ready-to-go seconds later. If you’d like a more advanced setup, you can also add custom routes from your domain to an environment.

Once you’ve tested your changes in a preview environment, you’re ready to promote to production. We’ve made it really easy to promote code from one environment to another, without the need to rebuild or upload your code again. Environments also manage code separately from settings, so you don’t need to manually edit environment variables when you promote from staging to production.

Services are versioned

Every change to a service is versioned and audited. Mistakes do happen, but when they do, it’s important to be able to quickly roll back, then have the tools to answer the age-old question: “who changed what, when?”

Each environment in a service has its own version history. Every time there is a code change or an environment variable is updated, the version number of that environment is incremented. You can also append additional metadata to each version, like a git commit or a deployment tag.

Services can talk to each other

Services are composable, allowing one service to talk to another service. To support this, we’re introducing a new API to facilitate service-to-service communication: service bindings.

A service binding allows you to send HTTP requests to another service, without those requests going over the Internet. That means you can invoke other Workers directly from your code! Service bindings open up a new world of composability. In the example below, requests are validated by an authentication service.

export default {
  async fetch(request, environment) {
    const response = await environment.AUTH.fetch(request);
    if (response.status !== 200) {
      return response;
    }
    return new Response("Authenticated!");
  }
}

Service bindings use the standard fetch API, so you can continue to use your existing utilities and libraries. You can also change the environment of a service binding, so you can test a new version of a service. In the next example, 1% of requests are routed to a “canary” deployment of a service. If a request to the canary fails, it’s sent to the production deployment for another chance.

function canRetry(request) {
  return request.method === "GET" || request.method === "HEAD";
}

export default {
  async fetch(request, environment) {
    if (Math.random() < 0.01) {
      const response = await environment.CANARY.fetch(request.clone());
      if (response.status < 500 || !canRetry(request)) {
        return response;
      }
    }
    return environment.PRODUCTION.fetch(request);
  }
}

While the interface among services is HTTP, the networking is not. In fact, there is no networking! Unlike the typical “microservice architecture,” where services communicate over a network and can suffer from latency or interruption, service bindings are a zero-cost abstraction. When you deploy a service, we build a dependency graph of its service bindings, then package all of those services into a single deployment. When one service invokes another, there is no network delay; the request is executed immediately.

This zero-cost model enables teams to share and reuse code within their organizations, without sacrificing latency or performance. Forget the days of convoluted YAML templates or exponential backoff to orchestrate services — just write code, and we’ll stitch it all together.

Try out the future, today!

We’re excited to announce that you can start using Services today! If you’ve already used Workers, you’ll notice that each of your scripts has been upgraded to a service with one “production” environment. The dashboard and all the existing Cloudflare APIs will continue to “just work” with services.

You can also create and deploy code to multiple “preview” environments, as part of the open-beta launch. We’re still working on service bindings and versioning, and we’ll provide an update as soon as you can start using them.

For more information about Services, check out any of the resources below:

Introducing Relational Database Connectors

Post Syndicated from Kabir Sikand original https://blog.cloudflare.com/relational-database-connectors/

At Cloudflare, we’re building the best compute platform in the world. We want to make it easy, seamless, and obvious to build your applications with us. But simply making the best compute platform is not enough — at the heart of your applications are the data they interact with.

Cloudflare has multiple data storage solutions available today: Workers KV, R2, and Durable Objects. All three follow Cloudflare’s design goals for Workers: global by default, infinitely scalable, and delightful for developers to use. We’ve partnered with third-party storage solutions like Fauna, MongoDB and Prisma, who have built data platforms that align beautifully with our design goals and written tutorials for databases that already support HTTP connections.

The one area that’s been sorely missed: relational databases. Cloudflare itself runs on relational databases, and we’re not alone. In April, we asked which Node libraries you wanted us to support, and four of the top five requests were related to databases. For this Full Stack Week, we asked ourselves: how could we support relational databases in a way that aligned with our design goals?

Today, we’re taking a first step towards that world by announcing support for connecting to relational databases, including Postgres and MySQL, from Workers.

Connecting to a database is no simple task — if it were as easy as passing a connection string to a database driver, we would have already done it. We’ve had to overcome several hurdles to reach this point, and have several more still to conquer.  

Our goal with this announcement is to work with you, our developers, to solve the unique pain points that come from accessing databases inside Workers. If you’d like to work with us, fill out this form or join us on Discord — this is just the beginning. If you’d just like to grab the code and play around, use this example to get started connecting to your own database, or check out our demo.

Why are Database Connectors so hard to build?

Serverless database connections are challenging to support for several reasons.

Databases are needy — they often require TCP connections, since they assume long-lived connections between an application server and the database. The Workers runtime doesn’t currently support TCP connections, so we’ve only been able to support HTTP-based databases or proxies.

Like a relationship, establishing a connection isn’t quite enough. Developers use client libraries for databases to make submitting queries and managing the responses easy. Since the Workers runtime is not entirely Node.js compatible, we need to either roll our own database library or find one that does not use unsupported built-in libraries.

Finally, databases are sensitive. It often takes external libraries to manage shared connections between an application server and a database, since these connections tend to be expensive to establish.

Moving past these challenges

Our approach today gives us the foundation to address each of these challenges in creative ways going forward.

First, we’re leveraging cloudflared to create a secure tunnel between Cloudflare and a private network within your existing infrastructure. Cloudflared already supports proxying HTTP to TCP over WebSockets — our challenge is providing interfaces that look like the socket interfaces existing libraries expect, while rewiring the implementations to redirect reads and writes to our WebSocket. This method is fast, safe, and secure, but limiting in that we lack control over where to direct the final connections. This is a problem we will solve soon, but until then our approach is essential to gathering latency and performance data to see where else we need to improve.

Next, we’ve created a shim layer that adapts the socket API from a popular runtime to connect directly to databases using a WebSocket. This allows us to bundle code as-is, without forking or otherwise making significant changes to the database library. As part of this announcement, we’ve published a tutorial on how to connect to and query a Postgres database from your Workers, using existing Cloudflare technology and a driver from the growing community at Deno. We’re excited to work with the upstream maintainers on expanding support.
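
As an illustration of that shim idea, here is a minimal sketch (not Cloudflare’s actual implementation): an object that exposes socket-like read and write methods, backed by a WebSocket to the tunnel.

// A sketch only: a socket-like interface backed by a WebSocket,
// so a database driver can read and write as if it had a TCP socket.
export class TunnelSocket {
  constructor(websocket) {
    this.ws = websocket;
    this.chunks = [];
    this.waiters = [];
    this.ws.addEventListener("message", (event) => {
      // Hand incoming bytes to a pending read, or buffer them.
      const waiter = this.waiters.shift();
      if (waiter) waiter(event.data);
      else this.chunks.push(event.data);
    });
  }

  // The driver "writes to the socket"; we forward over the WebSocket.
  write(bytes) {
    this.ws.send(bytes);
  }

  // The driver "reads from the socket"; resolves with the next chunk.
  read() {
    if (this.chunks.length > 0) return Promise.resolve(this.chunks.shift());
    return new Promise((resolve) => this.waiters.push(resolve));
  }
}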

Finally, we’re most excited for how this approach will let us begin to manage connection pooling and connection establishment overhead. While our current tech demo requires setting up the Cloudflare Tunnel on your own infrastructure, we’re looking for customers who’d like to pilot a model where Cloudflare hosts the tunnel for you.

Where we’re going

We’re just getting started. Our goal with today’s announcement is to find customers who are looking to build new applications or migrate existing applications to Workers while working with data that’s stored in a relational database.

Just as Cloudflare started by providing security, performance, and reliability for customers’ websites, we’re excited about a future where Cloudflare manages database connections, handles replication of data across cloud providers, and provides low-latency access to data globally.

First, we’re looking to add native support for TCP to the runtime. With native TCP support, we’ll not only have better support for databases, but also expand the Workers runtime to work with data infrastructure more broadly.

Our position in the network layer of the stack makes it possible to provide performance and security benefits, as well as dramatically reduced egress costs, for global databases. To do so, we’ll repurpose the HTTP-to-TCP proxy service that we’ve built and run it for developers as a connection pooling service, managing connections to their databases on their behalf.

Finally, our network makes it possible to cache data and make it accessible globally at low latency. Once we have connections back to your data, making it globally accessible in Cloudflare’s network will unlock fundamentally new architectures for distributed data.

Take our connectors for a spin

Want to check things out? There are three main steps to getting up-and-running:

  1. Deploying cloudflared within your infrastructure.
  2. Deploying a database that connects to cloudflared.
  3. Deploying a Worker with the database driver that submits queries.

The Postgres tutorial is available here.

When you’re all done, it’ll look a little something like this:

import { Client } from './driver/postgres/postgres'

export default {
  async fetch(request: Request, env, ctx: ExecutionContext) {
    try {
      const client = new Client({
        user: 'postgres',
        database: 'postgres',
        hostname: 'https://db.example.com',
        password: '',
        port: 5432,
      })
      await client.connect()
      const result = await client.queryArray('SELECT * FROM users WHERE uuid=1;')
      ctx.waitUntil(client.end())
      return new Response(JSON.stringify(result.rows[0]))
    } catch (e) {
      return new Response((e as Error).message)
    }
  },
}

Hit any snags? Fill out this form, join our Discord or shoot us an email and let’s chat!

Developer Spotlight: Winning a Game Jam with Jamstack and Durable Objects

Post Syndicated from Erwin van der Koogh original https://blog.cloudflare.com/developer-spotlight-guido-zuidhof-full-tilt/

Welcome to a new blog post series called Developer Spotlight. In this series we will be showcasing interesting applications built on top of the Cloudflare Workers ecosystem.

And to celebrate Durable Objects going GA, what better way to kick off the series than with a really cool tech demo of Durable Objects called Full Tilt?

Full Tilt by Guido Zuidhof is a game jam entry for Ludum Dare, one of the biggest and oldest game jams around, where he won first place in the innovation category. A game jam is like a hackathon for games, where you have a very short amount of time (usually 48-72 hours) to create a game from scratch.

We love Full Tilt, not just because Guido used Workers and Durable Objects to build a cool game where you control a game running on your computer with your phone, but especially because it shows how powerful Durable Objects are. In less than 48 hours, Guido was able to write all the logic to instantly spin up a personal gaming server anywhere in the world, as close to that player as possible. And it is so fast that you feel like you are controlling the computer directly.

But here is Guido, to tell you about his experience developing Full Tilt.

Last October I participated in the Ludum Dare 49 game jam. The thing I love about game jams is that the time constraint forces you to work quickly, iterate often, and most importantly cut scope.

In this game jam I created Full Tilt, a game in which you seamlessly use your phone as a Wiimote-like motion controller for a game that is run on your laptop in a web browser. The secret sauce to making it so smooth was in combining a game jam with Jamstack and mixing in some Durable Objects.

You can try the game right here.

Phone as a controller

Full Tilt is a browser game in which you move the player character by moving your hand around. There are a few different ways we can do that. One idea would be to use your computer’s webcam to track a marker object through 3D space. While that is possible, it is tricky to get working in every situation (a dark room can be problematic!), and some players may feel uncomfortable turning on their webcam for a simple web game.

A smartphone has a bunch of sensors built in, including a magnetometer and gyroscope — those sensors are exactly what we need, and we can assume that the majority of our potential players have a smartphone within arm’s reach. Requiring a native mobile application to read these sensors would add a lot of friction (users now have to install an app), as well as a bunch of implementation work. Fortunately, modern web browsers allow you to read from these sensors through the DeviceMotion API: a small web app will do the job perfectly!
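
For illustration, the controller page might read tilt data roughly like this (deviceorientation is the companion event to devicemotion; sendToGameRoom is a hypothetical helper that forwards readings over the WebSocket):

// Sketch: read the phone's tilt and forward it to the game room.
window.addEventListener("deviceorientation", (event) => {
  // beta: front-to-back tilt, gamma: left-to-right tilt, in degrees.
  const reading = { beta: event.beta, gamma: event.gamma };
  sendToGameRoom(reading); // hypothetical helper: sends over the WebSocket
});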

The next challenge is communicating the sensor readings from the phone to the game running on the main computer. For that we will use a combination of Cloudflare Workers and Durable Objects. There needs to be some shared point of contact (i.e., a game server) that both the main computer and the smartphone talk to. Using a serverless solution makes a lot of sense for a web game. And as we only have 48 hours here, less stuff to worry about is a big selling point too. Workers and Durable Objects also make it easy to keep the game running after the game jam, without having to pay for — and more importantly worry about — keeping servers running.

Setting up a line of communication

There are roughly two ways for browsers to communicate: through a shared connection (a game server somewhere), or a peer-to-peer connection, where browsers talk directly to each other without a middleman by leveraging WebRTC DataChannels.

WebRTC can be very challenging to make reliable and may still need a proxy server for some especially problematic network setups behind multiple NATs. Setting up a WebRTC connection may have been the perfect solution for the lowest possible latency, but it was out of scope for such a short game jam.

The game we are creating has low latency requirements: when I tilt my phone, the game should respond quickly. We can probably get away with anything under 100ms, although low double digits or less is definitely preferred! If I’m geographically located in one place and the game server is geographically close enough, then this game server will be able to pass messages from my smartphone to my laptop without too much delay. Edge computing is hot nowadays, and for this gaming use case it is not just a nice-to-have but also a necessity.

Enough background, let’s architect the system.

On-demand game rooms

The first thing I needed to set up was game rooms. Something that both the phone and the computer browsers could connect to, so they can pass messages between them.

Durable Objects allow us to create a large number of small stateful “mini servers” or “rooms” that multiple clients can connect to simultaneously over WebSockets, providing perfect small on-demand game servers. Think of a Durable Object as a single thread of JavaScript that runs in a data center close to where the user first asked for it to be created.

After the browser on the computer starts the game, it asks our Cloudflare Worker API to create a room for the current game session. This API request is a simple POST request; the server responds with a four-character room code that uniquely identifies the room that was created. The reason we want the room code to be short is that the user may need to copy this code and type it into their smartphone if they are unable to scan a QR code.
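
From the computer’s browser, that might look something like this sketch (the endpoint path is illustrative; the response shape matches the room handler shown later):

// Sketch: ask the Worker API to create a room, then display its code.
const response = await fetch("/api/room", { method: "POST" });
const { room } = await response.json();
console.log("Room code:", room.roomCode); // e.g. shown as text and a QR code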

Humans are notoriously bad at copying random character strings as different characters look alike, so we use a limited character set that excludes the most commonly confused characters:

const DICTIONARY = "2345679ADEFGHJKLMNPQRSTUVWXYZ"; // 29 chars (no 0, O, I, 1, B, 8)
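
For illustration, a generateRoomCode() helper (which the room handler below relies on) might look like this sketch:

// Sketch: pick `length` random characters from the safe dictionary.
function generateRoomCode(length = 4) {
  let code = "";
  for (let i = 0; i < length; i++) {
    code += DICTIONARY[Math.floor(Math.random() * DICTIONARY.length)];
  }
  return code;
}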

A four-character code lets us uniquely address 29^4 = 707,281, or roughly 700,000, different rooms, which seems like it would be enough even if our game got quite popular! What’s more: these game sessions don’t last forever. After some period of time (say, 24 hours) we can be certain enough that the game session has ended, and we can re-use that room code.

Room code coordination

In your Cloudflare Worker script, there are two ways to create a Durable Object: either we ask for one to be created with an ID generated from a name, or we ask for one to be created with a random unique ID. The naive solution would be to create a Durable Object with an ID generated from the room code. However, that is not a good idea here because a Durable Object gets created in a data center that is geographically close to the end user.

A problematic situation would be if a user in Mumbai requests a room and gets room code ABCD. Initially, they play the game for a bit, and it works great.

The issue comes a week later, when that room code is reused for another player based in Los Angeles. The game room Durable Object will be revived in Mumbai, and latency will be awful for our Los Angeles player. In the future, Durable Objects may get migrated between data centers, but that’s not yet guaranteed.

Instead, what we can do is create a new Durable Object with a random ID for every new game session and keep a mapping from the four character room code to this random ID. We are introducing some state in our system: we will need a central source of truth which is where Durable Objects can come to the rescue again.

We will solve this by creating a single “Room Hub” Durable Object that keeps track of this mapping from room code to Durable Object ID. This Durable Object will have two endpoints, one to request a new room and another to look up a room’s information.

Here’s our request handler for the Room Request endpoint (the argument is a Sunder Context; Sunder is the web framework I used for this project):

export async function handleRoomRequest(ctx: Context<Env>) {
    const now = Date.now();    
    const reqBody = await ctx.request.json();

    // We make a few attempts to find a room code that is available.
    const attempts = 5;

    let roomCode: string;
    let roomStorageKey: string;

    for (let i = 0; i < attempts; i++) {
        roomCode = generateRoomCode();
        roomStorageKey = ROOM_STATE_PREFIX + roomCode;
        const room = await ctx.state.storage.get<RoomData>(roomStorageKey);
        if (room === undefined) {
            break;
        } else if (now - room.createdAt > MAX_ROOM_AGE) {
            await ctx.state.storage.delete(roomStorageKey);
            break;
        }
        if (i === attempts-1) {
            return ctx.throw("Couldn't find available room code :(");
        }
    }

    const roomData: RoomData = {
        roomCode: roomCode,
        durableObjectId: reqBody.durableObjectId,
        createdAt: now,
    }

    await ctx.state.storage.put<RoomData>(roomStorageKey, roomData);

    ctx.response.body = {
        room: roomData
    };
    ctx.response.status = HttpStatus.Created;
}

In a nutshell, we generate a few room codes until we find one that has never been used or hasn’t been used in a long enough time.

There is a subtle but important nuance in this code: the Durable Object gets created in the Cloudflare Worker that talks to our Room Hub, not in the Room Hub itself. Our Room Hub will run in a single data center somewhere on Cloudflare’s network. If we create the game room from there it may still be far away from our end user!
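
In other words, the Worker allocates the game room near the user first, then registers it with the hub. A sketch of that flow (the GAME_ROOM and ROOM_HUB binding names and the hub URL are hypothetical):

export default {
  async fetch(request, env) {
    // newUniqueId() creates the Durable Object near this Worker,
    // which runs close to the user who made the request.
    const roomId = env.GAME_ROOM.newUniqueId();

    // The Room Hub is a single well-known object that stores the
    // mapping from room code to Durable Object ID.
    const hub = env.ROOM_HUB.get(env.ROOM_HUB.idFromName("hub"));

    return hub.fetch("https://hub.internal/room", {
      method: "POST",
      body: JSON.stringify({ durableObjectId: roomId.toString() }),
    });
  }
}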

Looking up a room’s information is simpler: we return either the room data or a 404 status.

export async function handleRoomLookup(ctx: Context<Env, {roomCode: string}>) {
    const now = Date.now();

    let roomStorageKey = ROOM_STATE_PREFIX + ctx.params.roomCode;
    const roomData = await ctx.state.storage.get<RoomData>(roomStorageKey);

    if (roomData === undefined) {
        ctx.throw(404, "Room not found");
        return;
    }

    if (now - roomData.createdAt > MAX_ROOM_AGE) {
        // About time we cleaned it up.
        await ctx.state.storage.delete(roomStorageKey);
        ctx.response.status = HttpStatus.NotFound;
        return;
    }

    ctx.response.body = {
        room: roomData
    };
}

The smartphone browser needs to connect to that same room. To make things easy for the user, we generate a four-character room code which points at that specific “Game Room” Durable Object. This way the user can take their smartphone, navigate to the website address https://ld49.pages.dev, and enter the code “ABCD” (or more commonly, they scan a QR code with the link https://ld49.pages.dev?room=ABCD).

Game Room Durable Object

Our game room Durable Object can be pretty simple since it is only responsible for passing along messages from the smartphone to the laptop with the latest sensor readings. I was able to modify the Durable Objects chat room example to do exactly this — a good time saver for a game jam!

When a browser connects, it connects in either a “peer” or a “host” role. Any messages sent by peers are forwarded to the host, and all messages from the host get forwarded to all peers. In this case, our host is the laptop browser running the game, and the peer is the smartphone controller. Implementation-wise, this means that we keep two lists of users: the peers and the hosts. Whenever a message comes in, we loop over the relevant list to broadcast it to all connections of the other role. In practice, the code is a bit more involved to deal with users disconnecting.
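
In sketch form, the forwarding logic looks roughly like this (simplified, without the disconnect handling):

// Forward peer messages to hosts, and host messages to peers.
function broadcast(message, senderRole, hosts, peers) {
  const recipients = senderRole === "peer" ? hosts : peers;
  for (const socket of recipients) {
    socket.send(message);
  }
}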

Full Tilt is a single-player game, but adapting it to be a multiplayer game would be easy with this setup. Imagine a Mario Kart-like game that runs in your browser, in which multiple friends can join using their smartphones as steering wheel controllers! Unfortunately, there was not enough time in this game jam to make a polished game.

Front end

With the backend ready, we still need to create the actual game and the controller web app.

My initial plan was to create a game in which you fly a 3D plane by tilting your phone, collecting stars and completing aerobatic tricks. It would fit the theme “Unstable” as I expected this control method to be pretty flimsy! Since there was no time left to create anything close to that, it was time to cut scope.

I ended up using the Phaser game engine and put the entire system together in a Svelte app. This was certainly a challenge, as I had only used Phaser once before, many years ago, and had never used Svelte. Luckily, I was able to put together something simple quickly enough: a snake-like game in which you move a thingy to collect blips that appear randomly on the screen.

In order for a game to be a game there needs to be some goal for the user, usually some game over condition. I changed the game to go faster and faster over time, added a score counter, and added a game over condition where you lose the game if you touch the edge of the screen. I made my “art” in MS Paint, generated some sound effects using the online tool sfxr, and “composed” the music soundtrack in Chrome’s Music Lab Song Maker.

Simultaneously I wrote a small client for my game server and duct-taped together the smartphone controller app which is powered by the browser’s DeviceMotion APIs. To distribute my game I used Cloudflare Pages, which worked like a charm the first time.

All done

And then the deadline was there — I barely made it, but I submitted something I was proud of. A not-so-great game, but with an interesting backend system and a novel input method. Do try the game yourself here, and the source code is available here (warning: it’s pretty hacky code of course!).

The reception from my co-jammers was great. While everybody agreed the game itself and its graphics were awful, it was novel. The game ended up rated first in the Innovation category for this game jam, out of hundreds of other games!

Finally, is this the future of game servers? For indie developers and smaller studios, setting up and maintaining a globally distributed fleet of game servers is both a huge distraction and cost. Serverless and in particular Durable Objects can provide an amazing solution.

But not every game is well-suited for a WebSocket-based backend. In some real-time games you are not interested in what happened a second ago and only the latest information matters. This is where the reliable and ordered nature of WebSockets can get in the way.

All in all, my first impressions of Durable Objects are very positive: it is a great tool to have in your toolbelt for all kinds of web projects. You can now tackle problems in mere minutes that would otherwise take days. I am very excited to see what other problems will be made easy by Durable Objects, even ones I haven’t thought of yet.