All posts by James M Snell

More Node.js APIs in Cloudflare Workers — Streams, Path, StringDecoder

Post Syndicated from James M Snell original http://blog.cloudflare.com/workers-node-js-apis-stream-path/

Today we are announcing support for three additional APIs from Node.js in Cloudflare Workers. This increases compatibility with the existing ecosystem of open source npm packages, allowing you to use your preferred libraries in Workers, even if they depend on APIs from Node.js.

We recently added support for AsyncLocalStorage, EventEmitter, Buffer, assert, and parts of util. Today, we are adding support for Streams, Path, and StringDecoder.

We are also sharing a preview of a new module type, available in the open-source Workers runtime, that mirrors a Node.js environment more closely by making some APIs available as globals, and allowing imports without the node: specifier prefix.

You can start using these APIs today, in the open-source runtime that powers Cloudflare Workers, in local development, and when you deploy your Worker. Get started by enabling the nodejs_compat compatibility flag for your Worker.

Stream

The Node.js streams API is the original API for working with streaming data in JavaScript that predates the WHATWG ReadableStream standard. Now, a full implementation of Node.js streams (based directly on the official implementation provided by the Node.js project) is available within the Workers runtime.

Let's start with a quick example:

import {
  Readable,
  Transform,
} from 'node:stream';

import {
  text,
} from 'node:stream/consumers';

import {
  pipeline,
} from 'node:stream/promises';

// A Node.js-style Transform that converts data to uppercase
// and appends a newline to the end of the output.
class MyTransform extends Transform {
  constructor() {
    super({ encoding: 'utf8' });
  }
  _transform(chunk, _, cb) {
    this.push(chunk.toString().toUpperCase());
    cb();
  }
  _flush(cb) {
    this.push('\n');
    cb();
  }
}

export default {
  async fetch() {
    const chunks = [
      "hello ",
      "from ",
      "the ",
      "wonderful ",
      "world ",
      "of ",
      "node.js ",
      "streams!"
    ];

    function nextChunk(readable) {
      readable.push(chunks.shift());
      if (chunks.length === 0) readable.push(null);
      else queueMicrotask(() => nextChunk(readable));
    }

    // A Node.js-style Readable that emits chunks from the
    // array...
    const readable = new Readable({
      encoding: 'utf8',
      read() { nextChunk(readable); }
    });

    const transform = new MyTransform();
    await pipeline(readable, transform);
    return new Response(await text(transform));
  }
};

In this example we create two Node.js stream objects: one stream.Readable and one stream.Transform. The stream.Readable simply emits a sequence of individual strings, piped through the stream.Transform which converts those to uppercase and appends a new-line as a final chunk.

The example is straightforward and illustrates the basic operation of the Node.js API. For anyone already familiar with using standard WHATWG streams in Workers the pattern here should be recognizable.

The Node.js streams API is used by countless modules published on npm. Now that the Node.js streams API is available in Workers, many packages that depend on it can be used in your Workers. For example, the split2 module is a simple utility that can break a stream of data up and reassemble it so that every line is a distinct chunk. While simple, the module is downloaded over 13 million times each week and has over a thousand direct dependents on npm (and many more indirect dependents). Previously it was not possible to use split2 within Workers without also pulling in a large and complicated polyfill implementation of streams along with it. Now split2 can be used directly within Workers with no modifications and no additional polyfills, reducing the size and complexity of your Worker by thousands of lines.

import {
  PassThrough,
} from 'node:stream';

import { default as split2 } from 'split2';

const enc = new TextEncoder();

export default {
  async fetch() {
    const pt = new PassThrough();
    const readable = pt.pipe(split2());

    pt.end('hello\nfrom\nthe\nwonderful\nworld\nof\nnode.js\nstreams!');
    for await (const chunk of readable) {
      console.log(chunk);
    }

    return new Response("ok");
  }
};

Path

The Node.js Path API provides utilities for working with file and directory paths. For example:

import path from "node:path"
path.join('/foo', 'bar', 'baz/asdf', 'quux', '..');

// Returns: '/foo/bar/baz/asdf'

Note that in the Workers implementation of path, the path.win32 variants of the path API are not implemented and will throw an exception.
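
Beyond join(), the other path utilities behave as they do in Node.js. A quick sketch of a few of them, assuming the posix-style behavior described above:

import path from "node:path";

path.basename('/foo/bar/baz.txt');        // Returns: 'baz.txt'
path.extname('/foo/bar/baz.txt');         // Returns: '.txt'
path.dirname('/foo/bar/baz.txt');         // Returns: '/foo/bar'
path.normalize('/foo/bar//baz/asdf/..');  // Returns: '/foo/bar/baz'
path.parse('/foo/bar/baz.txt').name;      // Returns: 'baz'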

StringDecoder

The Node.js StringDecoder API is a simple legacy utility that predates the WHATWG standard TextEncoder/TextDecoder API and serves roughly the same purpose. It is used by Node.js' stream API implementation as well as a number of popular npm modules for the purpose of decoding UTF-8, UTF-16, Latin1, Base64, and Hex encoded data.

import { StringDecoder } from 'node:string_decoder';
import { Buffer } from 'node:buffer';

const decoder = new StringDecoder('utf8');

const cent = Buffer.from([0xC2, 0xA2]);
console.log(decoder.write(cent)); // Prints: ¢

const euro = Buffer.from([0xE2, 0x82, 0xAC]);
console.log(decoder.write(euro)); // Prints: €

In the vast majority of cases your Worker should keep using the standard TextEncoder/TextDecoder APIs, but StringDecoder is now available directly within Workers without relying on polyfills.
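
The main reason to reach for StringDecoder is that it correctly handles multi-byte characters that arrive split across separate chunks, which is common when decoding streaming data. A minimal sketch of that behavior:

import { StringDecoder } from 'node:string_decoder';
import { Buffer } from 'node:buffer';

const decoder = new StringDecoder('utf8');

// The two bytes of the cent sign (0xC2 0xA2) arrive in separate chunks.
// decoder.write() buffers the incomplete sequence rather than emitting a
// replacement character, and completes it on the next write.
console.log(decoder.write(Buffer.from([0xC2]))); // Prints: '' (nothing yet)
console.log(decoder.write(Buffer.from([0xA2]))); // Prints: ¢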

Node.js Compat Modules

One Worker can already be a bundle of multiple assets. This allows a single Worker to be made up of multiple individual ESM modules, CommonJS modules, JSON, text, and binary data files.

Soon there will be a new type of module that can be included in a Worker bundle: the NodeJsCompatModule.

A NodeJsCompatModule is designed to emulate the Node.js environment as much as possible. Within these modules, common Node.js global variables such as process, Buffer, and even __filename will be available. More importantly, it is possible to require() our Node.js core API implementations without using the node: specifier prefix. This maximizes compatibility with existing NPM packages that depend on globals from Node.js being present, or don’t import Node.js APIs using the node: specifier prefix.
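
As a rough sketch of the idea (the exact packaging and configuration details are still being finalized, so treat the specifics below as illustrative only), a NodeJsCompatModule could run CommonJS code like the following without modification:

// Illustrative CommonJS module: note the bare 'events' specifier and the use
// of Buffer, process, and __filename without importing anything for them.
const { EventEmitter } = require('events');

class Logger extends EventEmitter {
  log(message) {
    this.emit('log', `${process.env.APP_NAME ?? 'app'}: ${message}`);
  }
}

console.log(__filename, Buffer.from('hi').toString('hex'));
module.exports = { Logger };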

Support for this new module type has landed in the open source workerd runtime, with deeper integration with Wrangler coming soon.

What’s next

We’re adding support for more Node.js APIs each month, and as we introduce new APIs, they will be added under the nodejs_compat compatibility flag — no need to take any action or update your compatibility date.

Have an NPM package that you wish worked on Workers, or an API you’d like to be able to use? Join the Cloudflare Developers Discord and tell us what you’re building, and what you’d like to see next.

Node.js compatibility for Cloudflare Workers – starting with Async Context Tracking, EventEmitter, Buffer, assert, and util

Post Syndicated from James M Snell original https://blog.cloudflare.com/workers-node-js-asynclocalstorage/

Over the coming months, Cloudflare Workers will start to roll out built-in compatibility with Node.js core APIs as part of an effort to support increased compatibility across JavaScript runtimes.

We are happy to announce today that the first of these Node.js APIs – AsyncLocalStorage, EventEmitter, Buffer, assert, and parts of util – are now available for use. These APIs are provided directly by the open-source Cloudflare Workers runtime, with no need to bundle polyfill implementations into your own code.

These new APIs are available today — start using them by enabling the nodejs_compat compatibility flag in your Workers.

Async Context Tracking with the AsyncLocalStorage API

The AsyncLocalStorage API provides a way to track context across asynchronous operations. It allows you to pass a value through your program, even across multiple layers of asynchronous code, without having to pass a context value between operations.

Consider an example where we want to add debug logging that works through multiple layers of an application, where each log contains the ID of the current request. Without AsyncLocalStorage, it would be necessary to explicitly pass the request ID down through every function call that might invoke the logging function:

function logWithId(id, state) {
  console.log(`${id} - ${state}`);
}

function doSomething(id) {
  // We don't actually use id for anything in this function!
  // It's only here because logWithId needs it.
  logWithId(id, "doing something");
  setTimeout(() => doSomethingElse(id), 10);
}

function doSomethingElse(id) {
  logWithId(id, "doing something else");
}

let idSeq = 0;

export default {
  async fetch(req) {
    const id = idSeq++;
    doSomething(id);
    logWithId(id, 'complete');
    return new Response("ok");
  }
}

While this approach works, it can be cumbersome to coordinate correctly, especially as the complexity of an application grows. Using AsyncLocalStorage, this becomes significantly easier by eliminating the need to explicitly pass the context around. Our application functions (doSomething and doSomethingElse in this case) never need to know about the request ID at all, while the logWithId function does exactly what we need it to:

import { AsyncLocalStorage } from 'node:async_hooks';

const requestId = new AsyncLocalStorage();

function logWithId(state) {
  console.log(`${requestId.getStore()} - ${state}`);
}

function doSomething() {
  logWithId("doing something");
  setTimeout(() => doSomethingElse(), 10);
}

function doSomethingElse() {
  logWithId("doing something else");
}

let idSeq = 0;

export default {
  async fetch(req) {
    return requestId.run(idSeq++, () => {
      doSomething();
      logWithId('complete');
      return new Response("ok");
    });
  }
}

With the nodejs_compat compatibility flag enabled, import statements are used to access specific APIs. The Workers implementation of these APIs requires the use of the node: specifier prefix that was introduced recently in Node.js (e.g. node:async_hooks, node:events, etc.).

We implement a subset of the AsyncLocalStorage API in order to keep things as simple as possible. Specifically, we’ve chosen not to support the enterWith() and disable() APIs that are found in the Node.js implementation, simply because they make async context tracking more brittle and error-prone.

Conceptually, at any given moment within a worker, there is a current “Asynchronous Context Frame”, which consists of a map of storage cells, each holding a store value for a specific AsyncLocalStorage instance. Calling asyncLocalStorage.run(...) causes a new frame to be created, inheriting the storage cells of the current frame, but using the newly provided store value for the cell associated with asyncLocalStorage.

const als1 = new AsyncLocalStorage();
const als2 = new AsyncLocalStorage();

// Code here runs in the root frame. There are two storage cells,
// one for als1, and one for als2. The store value for each is
// undefined.

als1.run(123, () => {
  // als1.run(...) creates a new frame (1). The store value for als1
  // is set to 123, the store value for als2 is still undefined.
  // This new frame is set to "current".

  als2.run(321, () => {
    // als2.run(...) creates another new frame (2). The store value
    // for als1 is still 123, the store value for als2 is set to 321.
    // This new frame is set to "current".
    console.log(als1.getStore(), als2.getStore());
  });

  // Frame (1) is restored as the current. The store value for als1
  // is still 123, but the store value for als2 is undefined again.
});

// The root frame is restored as the current. The store values for
// both als1 and als2 are both undefined again.

Whenever an asynchronous operation is initiated in JavaScript, for example, creating a new JavaScript promise, scheduling a timer, etc, the current frame is captured and associated with that operation, allowing the store values at the moment the operation was initialized to be propagated and restored as needed.

const als = new AsyncLocalStorage();
const p1 = als.run(123, () => {
  return Promise.resolve(1).then(() => console.log(als.getStore())); // prints 123
});

const p2 = Promise.resolve(1);
const p3 = als.run(321, () => {
  return p2.then(() => console.log(als.getStore())); // prints 321
});

als.run('ABC', () => setInterval(() => {
  // prints "ABC" to the console once a second…
  console.log(als.getStore());
}, 1000));

als.run('XYZ', () => queueMicrotask(() => {
  console.log(als.getStore());  // prints "XYZ"
}));

Note that for unhandled promise rejections, the “unhandledrejection” event will automatically propagate the context that is associated with the promise that was rejected. This behavior is different from other types of events emitted by EventTarget implementations, which will propagate whichever frame is current when the event is emitted.

const asyncLocalStorage = new AsyncLocalStorage();

asyncLocalStorage.run(123, () => Promise.reject('boom'));
asyncLocalStorage.run(321, () => Promise.reject('boom2'));

addEventListener('unhandledrejection', (event) => {
  // prints 123 for the first unhandled rejection ('boom'), and
  // 321 for the second unhandled rejection ('boom2')
  console.log(asyncLocalStorage.getStore());
});

Workers can use the AsyncLocalStorage.snapshot() method to create their own objects that capture and propagate the context:

const asyncLocalStorage = new AsyncLocalStorage();

class MyResource {
  #runInAsyncFrame = AsyncLocalStorage.snapshot();

  doSomething(...args) {
    return this.#runInAsyncFrame((...args) => {
      console.log(asyncLocalStorage.getStore());
    }, ...args);
  }
}

const resource1 = asyncLocalStorage.run(123, () => new MyResource());
const resource2 = asyncLocalStorage.run(321, () => new MyResource());

resource1.doSomething();  // prints 123
resource2.doSomething();  // prints 321

For more, refer to the Node.js documentation about the AsyncLocalStorage API.

There is currently an effort underway to add a new AsyncContext mechanism (inspired by AsyncLocalStorage) to the JavaScript language itself. While it is still early days for the TC-39 proposal, there is good reason to expect it to progress through the committee. Once it does, we look forward to being able to make it available in the Cloudflare Workers platform. We expect our implementation of AsyncLocalStorage to be compatible with this new API.

The proposal for AsyncContext provides an excellent set of examples and description of the motivation of why async context tracking is useful.

Events with EventEmitter

The EventEmitter API is one of the most fundamental Node.js APIs and is critical to supporting many other higher level APIs, including streams, crypto, net, and more. An EventEmitter is an object that emits named events that cause listeners to be called.

import { EventEmitter } from 'node:events';

const emitter = new EventEmitter();
emitter.on('hello', (...args) => {
  console.log(...args);
});

emitter.emit('hello', 1, 2, 3);

The implementation in the Workers runtime fully supports the entire Node.js EventEmitter API including the captureRejections option that allows improved handling of async functions as event handlers:

const emitter = new EventEmitter({ captureRejections: true });
emitter.on('hello', async (...args) => {
  throw new Error('boom');
});
emitter.on('error', (err) => {
  // the async promise rejection is emitted here!
});

Please refer to the Node.js documentation for more details on the use of the EventEmitter API: https://nodejs.org/dist/latest-v19.x/docs/api/events.html#events.

Buffer

The Buffer API in Node.js predates the introduction of the standard TypedArray and DataView APIs in JavaScript by many years and has persisted as one of the most commonly used Node.js APIs for manipulating binary data. Today, every Buffer instance extends from the standard Uint8Array class but adds a range of unique capabilities such as built-in base64 and hex encoding/decoding, byte-order manipulation, and encoding-aware substring searching.

import { Buffer } from 'node:buffer';

const buf = Buffer.from('hello world', 'utf8');

console.log(buf.toString('hex'));
// Prints: 68656c6c6f20776f726c64
console.log(buf.toString('base64'));
// Prints: aGVsbG8gd29ybGQ=
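
The byte-order helpers and encoding-aware searching mentioned above also work as they do in Node.js. A quick sketch:

import { Buffer } from 'node:buffer';

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78]);

// Byte-order aware reads of the same two bytes
console.log(buf.readUInt16BE(0).toString(16)); // Prints: 1234
console.log(buf.readUInt16LE(0).toString(16)); // Prints: 3412

// Encoding-aware searching
const text = Buffer.from('hello world', 'utf8');
console.log(text.indexOf('world'));                          // Prints: 6
console.log(text.indexOf(Buffer.from('776f726c64', 'hex'))); // Prints: 6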

Because a Buffer extends from Uint8Array, it can be used in any Workers API that currently accepts Uint8Array, such as creating a new Response:

const response = new Response(Buffer.from("hello world"));

Or interacting with streams:

const writable = getWritableStreamSomehow();
const writer = writable.getWriter();
writer.write(Buffer.from("hello world"));

Please refer to the Node.js documentation for more details on the use of the Buffer API: https://nodejs.org/dist/latest-v19.x/docs/api/buffer.html.

Assertions

The assert module in Node.js provides a number of assertions that are useful when building tests.

import {
  strictEqual,
  deepStrictEqual,
  ok,
  doesNotReject,
} from 'node:assert';

strictEqual(1, 1); // ok!
strictEqual(1, "1"); // fails! throws AssertionError

deepStrictEqual({ a: { b: 1 }}, { a: { b: 1 }});// ok!
deepStrictEqual({ a: { b: 1 }}, { a: { b: 2 }});// fails! throws AssertionError

ok(true); // ok!
ok(false); // fails! throws AssertionError

await doesNotReject(async () => {}); // ok!
await doesNotReject(async () => { throw new Error('boom') }); // fails! throws AssertionError

In the Workers implementation of assert, all assertions run in what Node.js calls the “strict assertion mode”, which means that non-strict methods behave like their corresponding strict methods. For instance, deepEqual() will behave like deepStrictEqual().
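
For example, because the Workers implementation always runs in strict assertion mode, the following deepEqual() call behaves like deepStrictEqual() and throws, even though it would pass under Node.js' legacy (loose) mode:

import { deepEqual } from 'node:assert';

// 1 and "1" are loosely equal but not strictly equal, so in strict
// assertion mode this throws an AssertionError.
deepEqual({ a: 1 }, { a: "1" });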

Please refer to the Node.js documentation for more details on the use of the assertion API: https://nodejs.org/dist/latest-v19.x/docs/api/assert.html.

Promisify/Callbackify

The promisify and callbackify APIs in Node.js provide a means of bridging between a Promise-based programming model and a callback-based model.

The promisify method allows taking a Node.js-style callback function and converting it into a Promise-returning async function:

import { promisify } from 'node:util';

function foo(args, callback) {
  try {
    callback(null, 1);
  } catch (err) {
    // Errors are emitted to the callback via the first argument.
    callback(err);
  }
}

const promisifiedFoo = promisify(foo);
await promisifiedFoo(args);

Similarly, callbackify converts a Promise-returning async function into a Node.js-style callback function:

import { callbackify } from 'node:util';

async function foo(args) {
  throw new Error('boom');
}

const callbackifiedFoo = callbackify(foo);

callbackifiedFoo(args, (err, value) => {
  if (err) throw err;
});

Together these utilities make it easy to handle the tricky nuances involved in correctly bridging between callbacks and promises.

Please refer to the Node.js documentation for more information on how to use these APIs: https://nodejs.org/dist/latest-v19.x/docs/api/util.html#utilcallbackifyoriginal, https://nodejs.org/dist/latest-v19.x/docs/api/util.html#utilpromisifyoriginal.

Type brand-checking with util.types

The util.types API provides a reliable and generally more efficient way of checking that values are instances of various built-in types.

import { types } from 'node:util';

types.isAnyArrayBuffer(new ArrayBuffer());  // Returns true
types.isAnyArrayBuffer(new SharedArrayBuffer());  // Returns true
types.isArrayBufferView(new Int8Array());  // true
types.isArrayBufferView(Buffer.from('hello world')); // true
types.isArrayBufferView(new DataView(new ArrayBuffer(16)));  // true
types.isArrayBufferView(new ArrayBuffer());  // false
function foo() {
  types.isArgumentsObject(arguments);  // Returns true
}
types.isAsyncFunction(function foo() {});  // Returns false
types.isAsyncFunction(async function foo() {});  // Returns true
// .. and so on

Please refer to the Node.js documentation for more information on how to use the type check APIs: https://nodejs.org/dist/latest-v19.x/docs/api/util.html#utiltypes. The Workers implementation currently does not provide implementations of the util.types.isExternal(), util.types.isProxy(), util.types.isKeyObject(), or util.types.isWebAssemblyCompiledModule() APIs.

What’s next

Keep your eyes open for more Node.js core APIs coming to Cloudflare Workers soon! We currently have implementations of the string decoder, streams and crypto APIs in active development. These will be introduced into the workers runtime incrementally over time and any worker using the nodejs_compat compatibility flag will automatically pick up the new modules as they are added.

The road to a more standards-compliant Workers API

Post Syndicated from James M Snell original https://blog.cloudflare.com/standards-compliant-workers-api/

Earlier this year, we announced our participation in a new W3C Community Group for the advancement of Web-interoperable API standards. Since then, this new WinterCG has been hard at work identifying the common API standards around which all JavaScript runtimes can build. Today I want to give a peek at some of the work the WinterCG has been doing, and show off some of the improvements we have been making in the Workers runtime to increase alignment with Web Platform standards around event handling, task cancellation using AbortController, text encoding and decoding, URL parsing and pattern matching, and streams support.

The WinterCG Minimum Common Web Platform API

Right at the start of the WinterCG activity, the group took some time to evaluate and compare the various non-browser JavaScript runtimes such as Node.js, Deno, Bun, and Workers with the purpose of identifying the Web Platform APIs they all had in common. Following a very simple criteria, we looked at the standard APIs that were already implemented and supported by at least two of these runtimes and compiled those into a list that the WinterCG calls the “Minimum Common Web Platform API“. This list will serve as the basis for what the community group defines as the minimum set of Web Platform APIs that should be implemented consistently across runtimes that claim to be “Web-interoperable”.

The current list is straightforward:

AbortController
AbortSignal
ByteLengthQueuingStrategy
CompressionStream
CountQueuingStrategy
Crypto
CryptoKey
DecompressionStream
DOMException
Event
EventTarget
ReadableByteStreamController
ReadableStream
ReadableStreamBYOBReader
ReadableStreamBYOBRequest
ReadableStreamDefaultController
ReadableStreamDefaultReader
SubtleCrypto
TextDecoder
TextDecoderStream
TextEncoder
TextEncoderStream
TransformStream
TransformStreamDefaultController
URL
URLPattern
URLSearchParams
WritableStream
WritableStreamDefaultController

In addition to these, the WinterCG also expects Web-interoperable runtimes to have implementations of the atob(), btoa(), queueMicrotask(), structuredClone(), setTimeout(), clearTimeout(), setInterval(), clearInterval(), console, and crypto.subtle APIs available on the global scope.

Today, we are happy to say that the Workers runtime has compliant or nearly compliant implementations of every one of these WinterCG Minimum Common Web Platform APIs. Some of these APIs intentionally diverge from the standards, either due to backwards compatibility concerns, Workers-specific features, or performance optimizations. Others diverge because we are still in the process of updating them to align with the specifications.

Improving standards compliance in the Workers runtime

The Workers runtime has, from the beginning, had the mission to align its developer experience with JavaScript and Web Platform standards as much as possible. Over the past year we have worked hard to continue advancing that mission forward both by improving the standards-compliance of existing APIs such as Event, EventTarget, URL, and streams; and the introduction of new Web Platform APIs such as URLPattern, encoding streams, and compression streams.

Event and EventTarget

The Workers runtime has provided an implementation of the Event and EventTarget Web Platform APIs from the very beginning. These were, however, only limited implementations of what the WHATWG DOM specification defines. Specifically, Workers had only implemented the bare minimum of the Event API that it itself needed to operate.

Today, the Event and EventTarget implementations in Workers provide a more complete implementation.

Let’s look at the official definition of Event as defined by the WHATWG DOM standard:

[Exposed=*]
interface Event {
  constructor(DOMString type, optional EventInit eventInitDict = {});
 
  readonly attribute DOMString type;
  readonly attribute EventTarget? target;
  readonly attribute EventTarget? srcElement; // legacy
  readonly attribute EventTarget? currentTarget;
  sequence<EventTarget> composedPath();
 
  const unsigned short NONE = 0;
  const unsigned short CAPTURING_PHASE = 1;
  const unsigned short AT_TARGET = 2;
  const unsigned short BUBBLING_PHASE = 3;
  readonly attribute unsigned short eventPhase;
 
  undefined stopPropagation();
           attribute boolean cancelBubble; // legacy alias of .stopPropagation()
  undefined stopImmediatePropagation();
 
  readonly attribute boolean bubbles;
  readonly attribute boolean cancelable;
           attribute boolean returnValue;  // legacy
  undefined preventDefault();
  readonly attribute boolean defaultPrevented;
  readonly attribute boolean composed;
 
  [LegacyUnforgeable] readonly attribute boolean isTrusted;
  readonly attribute DOMHighResTimeStamp timeStamp;
 
  undefined initEvent(DOMString type, optional boolean bubbles = false, optional boolean cancelable = false); // legacy
};
 
dictionary EventInit {
  boolean bubbles = false;
  boolean cancelable = false;
  boolean composed = false;
};

Web Platform API specifications are always written in terms of a definition language called Web IDL. Every attribute defined in the interface is a property that is exposed on the object. Event objects, then, are supposed to have properties like type, target, srcElement, currentTarget, bubbles, cancelable, returnValue, defaultPrevented, composed, isTrusted, and timeStamp. They are also expected to have methods such as composedPath(), stopPropagation(), and stopImmediatePropagation(). Because most of these were not immediately needed by Workers, most were not provided originally.

Today, all standard, non-legacy properties and methods defined by the specification are available for use:

 const event = new Event('foo', {
    bubbles: false,
    cancelable: true,
    composed: true,
  });
 
  console.log(event.bubbles);
  console.log(event.cancelable);
  console.log(event.composed);
  
  addEventListener('foo', (event) => {
    console.log(event.eventPhase);  // 2 AT_TARGET
    console.log(event.currentTarget);
    console.log(event.composedPath());
  });
 
  dispatchEvent(event);

While we were at it, we also fixed a long standing bug in the implementation of Event that prevented user code from properly subclassing the Event object to create their own custom event types. This change is protected by a compatibility flag that is now enabled by default for all Workers using a compatibility date on or past 2022-01-31.

  class MyEvent extends Event {
    constructor() {
      super('my-event')
    }
 
    get type() { return super.type.toUpperCase() }
  }
 
  const myEvent = new MyEvent();
  // Previously, this would print "my-event" instead of "MY-EVENT" as expected.
  console.log(myEvent.type);

The EventTarget implementation has also been updated to support once handlers (event handlers that are triggered at-most once then automatically unregistered), cancelable handlers (using AbortSignal), and event listener objects, all in line with the standard.

Using a one-time event handler

  addEventListener('foo', (event) => {
    console.log('printed only once');
  }, {
    once: true
  });
 
  dispatchEvent(new Event('foo'));
  dispatchEvent(new Event('foo'));

Once handlers are key for preventing memory leaks in your applications when you know that a particular event is only ever going to be emitted once, or whenever you only care about handling it once. The stored reference to the function or object that is handling the event is removed immediately upon the first invocation, allowing the memory to be garbage collected.

Using a cancelable event handler

  const ac = new AbortController();
 
  addEventListener('foo', (event) => {
    console.log('not printed at all');
  }, {
    signal: ac.signal
  });
 
  ac.abort();
 
  dispatchEvent(new Event('foo'));

Using an event listener object

While passing a function to addEventListener() is the most common case, the standard actually allows an event listener to be an object with a handleEvent() method as well.

  const listener = {
    handleEvent(event) {
      console.log(event.type);
    }
  };
 
  addEventListener('foo', listener);
  addEventListener('bar', listener);
 
  dispatchEvent(new Event('foo'));
  dispatchEvent(new Event('bar'));

AbortController and AbortSignal

As illustrated in the cancelable event example above, we have also introduced an implementation of the AbortController and AbortSignal APIs into Workers. These provide a standard, interoperable way of signaling cancellation for several kinds of tasks.

The AbortController/AbortSignal pattern is straightforward: An AbortSignal is just a type of EventTarget that will emit a single “abort” event when it is triggered:

  const ac = new AbortController();
 
  ac.signal.addEventListener('abort', (event) => {
    console.log(event.reason);  // 'just because'
  }, { once: true });
 
  ac.abort('just because');

The AbortController is used to actually trigger the abort event, optionally with a reason argument that is passed on to the event. The reason is typically an Error object but can be any JavaScript value.

The AbortSignal can only be triggered once, so the “abort” event should only ever be emitted once.
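
For instance, calling abort() a second time is a no-op, and the signal keeps the reason from the first call (a small sketch):

const ac = new AbortController();

ac.signal.addEventListener('abort', () => console.log('aborted'));

ac.abort('first reason');   // logs "aborted" exactly once
ac.abort('second reason');  // no-op; the signal is already aborted

console.log(ac.signal.aborted); // true
console.log(ac.signal.reason);  // 'first reason'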

It is also possible to create AbortSignals that timeout after a specified period of time:

const signal = AbortSignal.timeout(10);

Or an AbortSignal that is pre-emptively triggered immediately on creation (these will never actually emit the “abort” event):

const signal = AbortSignal.abort('for reasons');

Currently, within Workers, AbortSignal and AbortController have been integrated with the EventTarget, fetch(), and streams APIs in alignment with the relevant standard specifications for each.

Using AbortSignal to cancel a fetch()

  const ac = new AbortController();
 
  const res = fetch('https://example.org', {
    signal: ac.signal
  });
 
  ac.abort(new Error('canceled'))
 
  try {
    await res;
  } catch (err) {
    console.log(err);
  }

TextEncoderStream and TextDecoderStream

The Workers runtime has long provided basic implementations of the TextEncoder and TextDecoder APIs. Initially, these were limited to only supporting encoding and decoding of UTF-8 text. The standard definition of TextDecoder, however, defines a much broader range of text encodings that are now fully supported by the Workers implementation. Per the standard, TextEncoder currently only supports UTF-8.

  const win1251decoder = new TextDecoder("windows-1251");
  const bytes = new Uint8Array([
    207, 240, 232, 226, 229, 242, 44, 32, 236, 232, 240, 33,
  ]);
  console.log(win1251decoder.decode(bytes)); // Привет, мир!

In addition to supporting the full range of encodings defined by the standard, Workers also now provides implementations of the TextEncoderStream and TextDecoderStream, which provide TransformStream implementations that apply encoding and decoding to streaming data:

  const { writable, readable } = new TextDecoderStream("windows-1251");
 
  const writer = writable.getWriter();
  writer.write(new Uint8Array([
    207, 240, 232, 226, 229, 242, 44, 32, 236, 232, 240, 33,
  ]));
 
  const reader = readable.getReader();
  const res = await reader.read();
  console.log(res.value); // Привет, мир!

Using the encoding streams requires the use of the transformstream_enable_standard_constructor compatibility flag.
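
TextEncoderStream works the same way in the other direction, taking a stream of strings and producing UTF-8 encoded bytes. A minimal sketch:

  const { writable, readable } = new TextEncoderStream();

  const writer = writable.getWriter();
  writer.write('Привет, мир!');
  writer.close();

  const reader = readable.getReader();
  const res = await reader.read();
  console.log(res.value instanceof Uint8Array); // true; these are the UTF-8 encoded bytes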

CompressionStream and DecompressionStream

Streaming compression and decompression is also now supported in the runtime using the standard CompressionStream and DecompressionStream APIs.

const ds = new DecompressionStream('gzip');
const decompressedStream = blob.stream().pipeThrough(ds);

const cs = new CompressionStream('gzip');
const compressedStream = blob.stream().pipeThrough(cs);

These are TransformStream implementations that fully conform to the standard definitions. Use of the compression streams does not require a compatibility flag to enable.
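
Because these are ordinary TransformStreams, a compression and decompression round trip can be expressed with two pipeThrough() calls. A small sketch:

// Compress a string with gzip, then immediately decompress it again.
const source = new Response('hello hello hello hello').body;

const roundTripped = source
  .pipeThrough(new CompressionStream('gzip'))
  .pipeThrough(new DecompressionStream('gzip'));

console.log(await new Response(roundTripped).text()); // "hello hello hello hello"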

URL and URLPattern

Similar to Event, there has been an implementation of the Web Platform standard URL API available within Workers from nearly the beginning. But also like Event, the implementation was not entirely compatible with the standard.

The incompatibilities were subtle, for instance, in the original implementation, the URL string “https://a//b//c//” would be parsed incorrectly as “https://a/b/c” (note that the extra empty path segments are removed) whereas the standard parsing algorithm would produce “https://a//b//c/” as a result. Such inconsistent results were causing interoperability issues with JavaScript written to run across multiple JavaScript runtimes and needed to be fixed.

A new implementation of the URL parsing algorithm has been provided, and as of October 31, 2022 it has been enabled by default for all newly deployed Workers. Older Workers can begin using the new implementation by updating their compatibility dates to 2022-10-31 or by enabling the url_standard compatibility flag.

Along with the updated URL implementation, Workers now provides an implementation of the standard URLPattern API.

URLPattern provides a regular-expression-like syntax for matching a URL string against a pattern. For instance, consider this example taken from the MDN documentation for URLPattern:

  // Matching a pathname
  let pattern1 = new URLPattern('https://example.com/books/:id')
  // same as
  let pattern2 = new URLPattern(
    '/books/:id',
    'https://example.com',
  );
  // or
  let pattern3 = new URLPattern({
    protocol: 'https',
    hostname: 'example.com',
    pathname: '/books/:id',
  });
  // or
  let pattern4 = new URLPattern({
    pathname: '/books/:id',
    baseURL: 'https://example.com',
  });
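
Once constructed, a pattern can be tested against URL strings with test(), or matched with exec() to extract the named groups:

  const pattern = new URLPattern('https://example.com/books/:id');

  console.log(pattern.test('https://example.com/books/123'));   // true
  console.log(pattern.test('https://example.com/movies/123'));  // false

  const match = pattern.exec('https://example.com/books/123');
  console.log(match.pathname.groups.id);                        // "123"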

ReadableStream, WritableStream, and TransformStream

Last, but absolutely not least, our most significant effort over the past year has been providing new standards compliant implementations of the ReadableStream, WritableStream, and TransformStream APIs.

The Workers runtime has always provided an implementation of these objects but they were never fully conformant to the standard. User code was not capable of creating custom ReadableStream and WritableStream instances, and TransformStreams were limited to simple identity pass-throughs of bytes. The implementations have been updated now to near complete compliance with the standard (near complete because we still have a few edge cases and features we are working on).

The new streams implementation will be enabled by default in all new Workers as of November 30, 2022, or can be enabled earlier using the streams_enable_constructors and transformstream_enable_standard_constructor compatibility flags.

Creating a custom ReadableStream

async function handleRequest(request) {
  const enc = new TextEncoder();

  const rs = new ReadableStream({
    pull(controller) {
      controller.enqueue(enc.encode('hello world'));
      controller.close();
    }
  });

  return new Response(rs);
}

The new implementation supports both “regular” and “bytes” ReadableStream types, supports BYOB readers, and includes performance optimizations for both tee() and pipeThrough().
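
For example, here is a sketch of a "bytes" stream being consumed with a BYOB reader, which lets the consumer supply the buffer that the source fills (assuming the streams_enable_constructors flag is enabled):

const rs = new ReadableStream({
  type: 'bytes',
  pull(controller) {
    // Fill the buffer supplied by the reader below.
    const view = controller.byobRequest.view;
    view[0] = 0x68; // 'h'
    view[1] = 0x69; // 'i'
    controller.byobRequest.respond(2);
    controller.close();
  }
});

const reader = rs.getReader({ mode: 'byob' });
const { value } = await reader.read(new Uint8Array(16));
console.log(new TextDecoder().decode(value)); // "hi"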

Creating a custom WritableStream

  const ws = new WritableStream({
    write(chunk) {
      console.log(chunk);  // "hello world"
    }
  });
 
  const writer = ws.getWriter();
  writer.write("hello world");

WritableStreams are fairly simple objects that can accept any JavaScript value written to them.

Creating a custom TransformStream

  const { readable, writable } = new TransformStream({
    transform(chunk, controller) {
      controller.enqueue(chunk.toUpperCase());
    }
  });
 
  const writer = writable.getWriter();
  const reader = readable.getReader();
 
  writer.write("hello world");
 
  const res = await reader.read();
  console.log(res.value);  // "HELLO WORLD"

It has always been possible in Workers to call new TransformStream() (with no arguments) to create a limited version of a TransformStream that only accepts bytes and only acts as a pass-through, passing the bytes written to the writer on to the reader without any modification.

That original implementation is now available within Workers using the IdentityTransformStream class.

  const { readable, writable } = new IdentityTransformStream();
 
  const writer = writable.getWriter();
  const reader = readable.getReader();
 
  const enc = new TextEncoder();
  const dec = new TextDecoder();
 
  writer.write(enc.encode("hello world"));
 
  const res = await reader.read();
  console.log(dec.decode(res.value));  // "hello world"

If your code is using new TransformStream() today as this kind of pass-through, the new implementation will continue to work except for one very important difference: the old, non-standard implementation of new TransformStream() supported BYOB reads on the readable side (i.e. readable.getReader({ mode: 'byob' })). The new implementation (enabled via a compatibility flag and becoming the default on November 30) does not support BYOB reads as required by the stream standard.

What’s next

It is clear that we have made a lot of progress in improving the standards compliance of the Workers runtime over the past year, but there is far more to do. Next we will be turning our attention to the implementation of the fetch() and WebSockets APIs, as well as actively seeking closer alignment with other runtimes through collaboration in the Web-interoperable Runtimes Community Group.

If you are interested in helping drive the implementation of Web Platform APIs forward, and advancing interoperability between JavaScript runtime environments, the Workers Runtime team at Cloudflare is hiring! Reach out, or see our open positions here.

A Community Group for Web-interoperable JavaScript runtimes

Post Syndicated from James M Snell original https://blog.cloudflare.com/introducing-the-wintercg/

Today, Cloudflare – in partnership with Vercel, Shopify, and individual core contributors to both Node.js and Deno – is announcing the establishment of a new Community Group focused on the interoperable implementation of standardized web APIs in non-web browser, JavaScript-based development environments.

The W3C and the Web Hypertext Application Technology Working Group (or WHATWG) have long pioneered the efforts to develop standardized APIs and features for the web as a development environment. APIs such as fetch(), ReadableStream and WritableStream, URL, URLPattern, TextEncoder, and more have become ubiquitous and valuable components of modern web development. However, the charters of these existing groups have always been explicitly limited to considering only the specific needs of web browsers, resulting in the development of standards that are not readily optimized for any environment that does not look exactly like a web browser. A good example of this effect is that some non-browser implementations of the Streams standard are an order of magnitude slower than the equivalent Node.js streams and Deno reader implementations due largely to how the API is specified in the standard.

Serverless environments such as Cloudflare Workers, or runtimes like Node.js and Deno, have a broad range of requirements, issues, and concerns that are simply not relevant to web browsers, and vice versa. This disconnect, and the lack of clear consideration of these differences while the various specifications have been developed, has led to a situation where the non-browser runtimes have implemented their own bespoke, ad-hoc solutions for functionality that is actually common across the environments.

This new effort is changing that by providing a venue to discuss and advocate for the common requirements of all web environments, deployed anywhere throughout the stack.

What’s in it for developers?

Developers want their code to be portable. Once they write it, if they choose to move to a different environment (from Node.js to Deno, for instance) they don’t want to have to completely rewrite it just to make it keep doing the exact same thing it already was.

One of the more common questions we get from Cloudflare users is how they can make use of some arbitrary module published to npm that makes use of some set of Node.js-specific or Deno-specific APIs. The answer usually involves pulling in some arbitrary combination of polyfill implementations. The situation is similar with the Deno project, which has opted to integrate a polyfill of the full Node.js core API directly into their standard library. The more these environments implement the same common standards, the more the developer ecosystem can depend on the code they write just working, regardless of where it is being run.

Cloudflare Workers, Node.js, Deno, and web browsers are all very different from each other, but they share a good number of common functions. For instance, they all provide APIs for generating cryptographic hashes; they all deal in some way with streaming data; they all provide the ability to send an HTTP request somewhere. Where this overlap exists, and where the requirements and functionality are the same, the environments should all implement the same standardized mechanisms.

The Web-interoperable Runtimes Community Group

The new Web-interoperable Runtimes Community Group (or “WinterCG”) operates under the established processes of the W3C.

The naming of this group is something that took us a while to settle on because it is critical to understanding the goals the group is trying to achieve (and what it is not). The key element is the phrase “web-interoperable”.

We use “web” in exactly the same sense that the W3C and WHATWG communities use the term – precisely: web browsers. The term “web-interoperable”, then, means implementing features in a manner that is either identical or at least as consistent as possible with the way those features are implemented in web browsers. For instance, the way that the new URL() constructor works in browsers is exactly how the new URL() constructor should work in Node.js, in Deno, and in Cloudflare Workers.

It is important, however, to acknowledge the fact that Node.js, Deno, and Cloudflare Workers are explicitly not web browsers. While this point should be obvious, it is important to call out because the differences between the various JavaScript environments can greatly impact the design decisions of standardized APIs. Node.js and Deno, for instance, each provide full access to the local file system. Cloudflare Workers, in contrast, has no local file system; and web browsers necessarily restrict applications from manipulating the local file system. Likewise, while web browsers inherently include a concept of a website’s “origin” and implement mechanisms such as CORS to protect users against a variety of security threats, there is no equivalent concept of “origins” on the server-side where Node.js, Deno, and Cloudflare Workers operate.

Up to now, the W3C and WHATWG have concerned themselves strictly with the needs of web browsers. The new Web-interoperable Runtimes Community Group is explicitly addressing and advocating for the needs of everyone else.

It is not intended that WinterCG will go off and publish its own set of independent standard APIs. Ideas for new specifications that emerge from WinterCG will first be submitted for consideration by existing work streams in the W3C and WHATWG with the goal of gaining the broadest possible consensus. However, should it become clear that web browsers have no particular need for, or interest in, a feature that the other environments (such as Cloudflare Workers) have need for, WinterCG will be empowered to move forward with a specification of its own – with the constraint that nothing will be introduced that intentionally conflicts with or is incompatible with the established web standards.

WinterCG will be open for anyone to participate; it will operate under the established W3C processes and policies; all work will be openly accessible via the “wintercg” GitHub organization; and everything it does will be centered on the goal of maximizing interoperability.

Work in Progress

WinterCG has already started work on a number of important work items.

The Minimum Common Web API

From the introduction in the current draft of the specification:

“The Minimum Common Web Platform API is a curated subset of standardized web platform APIs intended to define a minimum set of capabilities common to Browser and Non-Browser JavaScript-based runtime environments.”

Or put another way: It is a minimal set of existing web APIs that will be implemented consistently and correctly in Node.js, Deno, and Cloudflare Workers. Most of the APIs, with some exceptions and nuances, already exist in these environments, so the bulk of the work remaining is to ensure that those implementations are conformant to their relative specifications and portable across environments.

The table below lists all the APIs currently included in this subset (along with an indication of whether the API is currently or likely soon to be supported by Node.js, Deno, and Cloudflare Workers):

API | Node.js | Deno | Cloudflare Workers
AbortController | ✔️ | ✔️ | ✔️
AbortSignal | ✔️ | ✔️ | ✔️
ByteLengthQueuingStrategy | ✔️ | ✔️ | ✔️
CompressionStream | ✔️ | ✔️ | ✔️
CountQueuingStrategy | ✔️ | ✔️ | ✔️
Crypto | ✔️ | ✔️ | ✔️
CryptoKey | ✔️ | ✔️ | ✔️
DecompressionStream | ✔️ | ✔️ | ✔️
DOMException | ✔️ | ✔️ | ✔️
Event | ✔️ | ✔️ | ✔️
EventTarget | ✔️ | ✔️ | ✔️
ReadableByteStreamController | ✔️ | ✔️ | ✔️
ReadableStream | ✔️ | ✔️ | ✔️
ReadableStreamBYOBReader | ✔️ | ✔️ | ✔️
ReadableStreamBYOBRequest | ✔️ | ✔️ | ✔️
ReadableStreamDefaultController | ✔️ | ✔️ | ✔️
ReadableStreamDefaultReader | ✔️ | ✔️ | ✔️
SubtleCrypto | ✔️ | ✔️ | ✔️
TextDecoder | ✔️ | ✔️ | ✔️
TextDecoderStream | ✔️ | ✔️ | (soon)
TextEncoder | ✔️ | ✔️ | ✔️
TextEncoderStream | ✔️ | ✔️ |
TransformStream | ✔️ | ✔️ | ✔️
TransformStreamDefaultController | ✔️ | ✔️ | (soon)
URL | ✔️ | ✔️ | ✔️
URLPattern | ? | ✔️ | ✔️
URLSearchParams | ✔️ | ✔️ | ✔️
WritableStream | ✔️ | ✔️ | ✔️
WritableStreamDefaultController | ✔️ | ✔️ | ✔️
globalThis.self | ? | ✔️ | (soon)
globalThis.atob() | ✔️ | ✔️ | ✔️
globalThis.btoa() | ✔️ | ✔️ | ✔️
globalThis.console | ✔️ | ✔️ | ✔️
globalThis.crypto | ✔️ | ✔️ | ✔️
globalThis.navigator.userAgent | ? | ✔️ | ✔️
globalThis.queueMicrotask() | ✔️ | ✔️ | ✔️
globalThis.setTimeout() / globalThis.clearTimeout() | ✔️ | ✔️ | ✔️
globalThis.setInterval() / globalThis.clearInterval() | ✔️ | ✔️ | ✔️
globalThis.structuredClone() | ✔️ | ✔️ | ✔️

Whenever one of the environments diverges from the standardized definition of the API (such as Node.js implementation of setTimeout() and setInterval()), clear documentation describing the differences will be made available. Such differences should only exist for backwards compatibility with existing code.

Web Cryptography Streams

The Web Cryptography API provides a minimal (and very limited) API for common cryptography operations. One of its key limitations is the fact that – unlike Node.js’ built-in crypto module – it does not have any support for streaming inputs and outputs to symmetric cryptographic algorithms. All Web Cryptography features operate on chunks of data held in memory, all at once. This strictly limits the performance and scalability of cryptographic operations. Using these APIs in any environment that is not a web browser, and trying to make them perform well, quickly becomes painful.
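
For context, this is roughly what a digest looks like with the Web Cryptography API today; the entire input has to be materialized in memory before subtle.digest() can run (a minimal sketch):

const data = new TextEncoder().encode('hello world');

// The full payload is held in memory; there is no way to feed it in as chunks.
const digest = await crypto.subtle.digest('SHA-256', data);

const hex = [...new Uint8Array(digest)]
  .map((b) => b.toString(16).padStart(2, '0'))
  .join('');
console.log(hex);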

To address that issue, WinterCG has started drafting a new specification for Web Crypto Streams that will be submitted to the W3C for consideration as part of a larger effort currently being bootstrapped by the W3C to update the Web Cryptography specification. The goal is to bring streaming crypto operations to the whole of the web, including web browsers, in a way that conforms with existing standards.

A subset of fetch() for servers

With the recent release of version 18.0.0, Node.js has joined the collection of JavaScript environments that provide an implementation of the WHATWG standardized fetch() API. There are, however, a number of important differences between the way Node.js, Deno, and Cloudflare Workers implement fetch() versus the way it is implemented in web browsers.

For one, server environments do not have a concept of “origin” like a web browser does. Features such as CORS intended to protect against cross-site scripting vulnerabilities are simply irrelevant on the server. Likewise, where web browsers are generally used by one individual user at a time and have a concept of a globally-scoped cookie store, server and serverless applications can be used by millions of users simultaneously and a globally-scoped cookie store that potentially contains session and authentication details would be both impractical and dangerous.

Because of the acute differences in the environments, it is often difficult to reason about, and gain consensus on, proposed changes in the fetch standard. Some proposed new API, for instance, might be fantastically relevant to fetch users on a server but completely useless to fetch users in a web browser. Some set of security concerns that are relevant to the Browser might have no impact whatsoever on the server.

To address this issue, and to make it easier for non-web browser environments to implement fetch in a consistent way, WinterCG is working on documenting a subset of the fetch standard that deals specifically with those different requirements and constraints.

Critically, this subset will be fully compatible with the fetch standard; and is being cooperatively developed by the same folks who have worked on fetch in Node.js, Deno, and Cloudflare Workers. It is not intended that this will become a competing definition of the fetch standard, but rather a set of documented guidelines on how to implement fetch correctly in these other environments.

We’re just getting started

The Web-interoperable Runtimes Community Group is just getting started, and we have a number of ambitious goals. Participation is open to everyone, and all work will be done in the open via GitHub at https://github.com/wintercg. We are actively seeking collaboration with the W3C, the WHATWG, and the JavaScript community at large to ensure that web features are available, work consistently, and meet the requirements of all web developers working anywhere across the stack.

For more information on the WinterCG, refer to https://wintercg.org. For details on how to participate, refer to https://github.com/wintercg/admin.

Making connections with TCP and Sockets for Workers

Post Syndicated from James M Snell original https://blog.cloudflare.com/introducing-socket-workers/

Today we are excited to announce that we are developing APIs and infrastructure to support more TCP, UDP, and QUIC-based protocols in Cloudflare Workers. Once released, these new capabilities will make it possible to use non-HTTP socket connections to and from a Worker or Durable Object as easily as one can use HTTP and WebSockets today.

Out of the box, Workers already support the fetch and WebSocket APIs. With just a few internal changes to make it operational in Workers, we’ve developed an example using an off-the-shelf driver (in this example, a Deno-based Postgres client driver) to communicate with a remote Postgres server via WebSocket over a secure Cloudflare Tunnel.

import { Client } from './driver/postgres/postgres'

export default {
  async fetch(request: Request, env, ctx: ExecutionContext) {
    try {
      const client = new Client({
        user: 'postgres',
        database: 'postgres',
        hostname: 'https://db.example.com',
        password: '',
        port: 5432,
      })
      await client.connect()
      const result = await client.queryArray('SELECT * FROM users WHERE uuid=1;')
      ctx.waitUntil(client.end())
      return new Response(JSON.stringify(result.rows[0]))
    } catch (e) {
      return new Response((e as Error).message)
    }
  },
}

The example works by replacing the bits of the Postgres client driver that use the Deno-specific TCP socket APIs with standard fetch and WebSockets APIs. We then establish a WebSocket connection with a remote Cloudflare Tunnel daemon running adjacent to the Postgres server, establishing what is effectively TCP-over-WebSockets.

While the fact we were able to build the example and communicate effectively and efficiently with the Postgres server — without making any changes to the Cloudflare Workers runtime — is impressive, there are limitations to the approach. For one, the solution requires additional infrastructure to establish and maintain the WebSocket tunnel — in this case, the instance of the Cloudflare Tunnel daemon running adjacent to the Postgres server. While we are certainly happy to provide that daemon to customers, it would just be better if that component were not required at all. Second, tunneling TCP over WebSockets, which is itself tunneled via HTTP over TCP is a bit suboptimal. It works, but we can do better.

Making connections from Cloudflare Workers

Currently, there is no standard API for socket connections in JavaScript. We want to change that.

If you’ve used Node.js before, then you’re most likely familiar with the net.Socket and tls.TLSSocket objects. If you use Deno, then you might know that they’ve recently introduced the Deno.connect() and Deno.connectTls() APIs. When you look at those APIs, what should immediately be apparent is how different they are from one another despite doing the exact same thing.

When we decided that we would add the ability to open and use socket connections from within Workers, we also agreed that we really have no interest in developing yet another non-standard, platform-specific API that is unlike the APIs provided by other platforms. Therefore, we are extending an invitation to all JavaScript runtime platforms that need socket capabilities to collaborate on a new (and eventually standardized) API that just works no matter which runtime you choose to develop on.

Here’s a rough example of what we have in mind for opening and reading from a simple TCP client connection:

const socket = new Socket({
  remote: { address: '123.123.123.123', port: 1234 },
})
for await (const chunk of socket.readable)
  console.log(chunk)

Or, this example, sending a simple “hello world” packet using UDP:

const socket = new Socket({
  type: 'udp',
  remote: { address: '123.123.123.123', port: 1234 },
});
const enc = new TextEncoder();
const writer = socket.writable.getWriter();
await writer.write(enc.encode('hello world'));
await writer.close();

The API will be designed generically enough to work both client and server-side; for TCP, UDP, and QUIC; with or without TLS, and will not rely on any mechanism specific to any single JavaScript runtime. It will build on existing broadly supported Web Platform standards such as EventTarget, ReadableStream, WritableStream, AbortSignal, and promises. It will be familiar to developers who are already familiar with the fetch() API, service workers, and promises using async/await.

interface Socket : EventTarget {
  constructor(object SocketInit);

  Promise<undefined> update(object SocketInit);

  readonly attribute ReadableStream readable;
  readonly attribute WritableStream writable;
  
  readonly attribute Promise<undefined> ready;
  readonly attribute Promise<undefined> closed;

  Promise<undefined> abort(optional any reason);
  readonly attribute AbortSignal signal;
 
  readonly attribute SocketStats stats;
  readonly attribute SocketInfo info;
}

This is just a proposal at this point and the details will very likely change from the examples above by the time the capability is delivered in Workers. It is our hope that other platforms will join us in the effort of developing and supporting this new API so that developers have a consistent foundation upon which to build regardless of where they run their code.

Introducing Socket Workers

The ability to open socket client connections is only half of the story.

When we first started talking about adding these capabilities an obvious question was asked: What about using non-HTTP protocols to connect to Workers? What if instead of just having the ability to connect a Worker to some other back-end database, we could implement the entire database itself on the edge, inside Workers, and have non-HTTP clients connect to it? For that matter, what if we could implement an SMTP server in Workers? Or an MQTT message queue? Or a full VoIP platform? Or implement packet filters, transformations, inspectors, or protocol transcoders?

Workers are far too powerful to limit to just HTTP and WebSockets, so we will soon introduce Socket Workers — that is, Workers that can be connected to directly using raw TCP, UDP, or QUIC protocols without using HTTP.

What will this new Workers feature look like? Many of the details are still being worked through, but the idea is to deploy a Worker script that understands and responds to “connect” events in much the same way that “fetch” events work today. Importantly, this would build on the same common socket API being developed for client connections:

addEventListener('connect', (event) => {
  const enc = new TextEncoder();
  const writer = event.socket.writable.getWriter();
  writer.write(enc.encode('Hello World'));
  writer.close();
});

Next Steps (and a call to action)

The new socket API for JavaScript and Socket Workers are under active development, with focus initially on enabling better and more efficient ways for Workers to connect to databases on the backend — you can sign up here to join the waitlist for access to Database Connectors and Socket Workers. We are excited to work with early users, as well as our technology partners to develop, refine, and test these new capabilities.

Once released, we expect Socket Workers to blow the doors wide open on the types of intelligent distributed applications that can be deployed to the Cloudflare network edge, and we are excited to see what you build with them.