Automate an isolated browser instance with just a few lines of code

Post Syndicated from Tanushree Sharma original https://blog.cloudflare.com/introducing-workers-browser-rendering-api/

If you’ve ever created a website that shows any kind of analytics, you’ve probably also thought about adding a “Save Image” or “Save as PDF” button to store and share results. This isn’t as easy as it seems (I can attest to this firsthand) and it’s not long before you go down a rabbit hole of trying 10 different libraries, hoping one will work.

This is why we’re excited to announce a private beta of the Workers Browser Rendering API, improving the browser automation experience for developers. With browser automation, you can programmatically do anything that a user can do when interacting with a browser.

The Workers Browser Rendering API, or just Rendering API for short, is our out-of-the-box solution for simplifying developer workflows, including capturing images or screenshots, by running browser automation in Workers.

Browser automation, everywhere

As with many of the best Cloudflare products, Rendering API was born out of an internal need. Many of our teams were setting up or wanted to set up their own tools to perform what sounds like an incredibly simple task: taking automated screenshots.

When gathering use cases, we realized that much of what our internal teams wanted would also be useful for our customers. Some notable ones are:

  • Taking screenshots for social sharing thumbnails or preview images
  • Emailed daily screenshots of dashboards (requested specifically by our SVP of Engineering)
  • Reporting bugs on websites and sending them directly to frontend teams

Not to mention use cases for other browser automation functions like:

Testing UI/UX Flows
End-to-end (E2E) testing is used to mimic user behaviour and can identify bugs that unit tests or integration tests have missed. And let’s be honest – no developer wants to manually check the user flow each time they make changes to their application. E2E tests can be especially useful for verifying logic on your customer’s critical path, like account creation, authentication or checkout.
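
As a rough sketch of what such a test can look like with Puppeteer (the staging URL and selectors below are hypothetical):

import puppeteer from 'puppeteer'

// Minimal E2E sketch: verify that a login flow lands on the dashboard.
async function testLoginFlow(): Promise<void> {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()
  await page.goto('https://staging.example.com/login') // hypothetical staging URL
  await page.type('#email', 'qa@example.com')
  await page.type('#password', 'correct-horse-battery-staple')
  await page.click('button[type="submit"]')
  await page.waitForSelector('#dashboard') // throws if the critical path breaks
  await browser.close()
}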

Performance Tests
Application performance metrics, such as page load time, directly impact your user’s experience and your SEO rankings. To avoid performance regressions, you want to test the impact on latency in conditions that are as close as possible to your production environment before you merge. By automating performance testing, you can measure whether your proposed changes will result in a degraded experience for your users and make improvements accordingly.
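
A sketch of one way to automate such a check with Puppeteer, assuming a hypothetical staging URL and a two-second latency budget:

import puppeteer from 'puppeteer'

// Sketch: fail the build if page load time regresses past a budget.
async function checkPageLoadTime(): Promise<void> {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()
  const start = Date.now()
  await page.goto('https://staging.example.com', { waitUntil: 'load' }) // hypothetical URL
  const elapsed = Date.now() - start
  await browser.close()
  if (elapsed > 2000) { // hypothetical 2,000 ms budget
    throw new Error(`Page load took ${elapsed} ms, over the 2000 ms budget`)
  }
}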

Unlocking a new building block

One of the most common browser automation frameworks is Puppeteer. It’s common to run Puppeteer in a containerization tool like Docker or in a serverless environment. Taking automated screenshots should be as easy as writing some code, hitting deploy and having it run when a particular event is triggered or on a regular schedule.

It should be, but it’s not.

Even on a serverless solution like AWS Lambda, running Puppeteer means packaging it, making sure dependencies are covered, uploading packages to S3 and deploying using Layers. Whether using Docker or something like Lambda, it’s clear that this is not easy to set up.

One of the pillars of Cloudflare’s development platform is to provide our developers with tools that are incredibly simple, yet powerful to build on. Rendering API is our out-of-the-box solution for running Puppeteer in Workers.

Screenshotting made simple

To start, the Rendering API will have support for navigating to a webpage and taking a screenshot, with more functions to follow. To use it, all you need to do is add our new browser binding to your project’s wrangler.toml file:

wrangler.toml

bindings = [
 { name = "MY_BROWSER", type = "browser" }
]

From there, taking a screenshot and saving it to R2 is as simple as:

script.ts

import puppeteer from '@cloudflare/puppeteer'

// The browser and R2 bindings configured in wrangler.toml
interface Env {
    MY_BROWSER: any
    MY_BUCKET: R2Bucket
}

export default {
    async fetch(request: Request, env: Env): Promise<Response> {
        const browser = await puppeteer.launch({
            browserBinding: env.MY_BROWSER
        })
        const page = await browser.newPage()

        // navigate and capture the screenshot as binary data
        await page.goto("https://example.com/")
        const img = await page.screenshot() as Buffer
        await browser.close()

        // upload the screenshot to R2
        try {
            await env.MY_BUCKET.put("screenshot.jpg", img);
            return new Response(`Success!`);
        } catch (e) {
            return new Response('', { status: 400 })
        }
    }
}

Down the line, we have plans to add full Puppeteer support, including functions like page.type, page.click, page.evaluate!

What’s happening under the hood?

Remote browser isolation technology is an integral part of our Zero Trust product offering. Remote browser isolation lets users interact with a web browser that instead of running on the client’s device, runs in a remote environment. The Rendering API repurposes this under the hood!

We’ve wrapped the Puppeteer library so that it can be run directly from your own Worker. You can think of your Worker as the client. Each of Cloudflare’s data centers has a pool of warm browsers ready to go; when a Worker requests a browser, one is returned instantly and connected to over a WebSocket. Once the WebSocket connection is established, our internal browser API Worker handles all communication with the browser session via the Chrome DevTools Protocol.
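
Conceptually, this is similar to using Puppeteer’s connect API against a remote browser endpoint, although the Rendering API manages the connection for you. A sketch, with a purely illustrative endpoint:

import puppeteer from 'puppeteer'

// Illustrative only: connecting to an already-running browser over a WebSocket.
const browser = await puppeteer.connect({
  browserWSEndpoint: 'wss://remote-browser.example.internal/devtools' // hypothetical endpoint
})
const page = await browser.newPage()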

To ensure the security of your Worker, individual remote browsers are run as disposable instances – one instance per request, and never shared. They are secured using gVisor to protect against kernel level exploits. On top of that, the browser is running sandboxed processes with the lowest privilege level using a Linux seccomp profile.

The Rendering API is intended for building and testing your own applications. To prevent abuse, Cloudflare Bot Management has baked-in signals indicating that a request comes from a Worker running Puppeteer. If you are a Cloudflare Bot Management customer, this traffic will automatically be added to your blocklist, with the option to explicitly opt in and allow it.

How can you get started?

We’re introducing the Workers Browser Rendering API in closed beta. If you’re interested, please tell us a bit about your use case and join the waitlist. We would love to hear what else you want to build using the Workers Browser Rendering API. Let us know in the Workers channel on the Cloudflare Developers Discord!

Making static sites dynamic with Cloudflare D1

Post Syndicated from Kristian Freeman original https://blog.cloudflare.com/making-static-sites-dynamic-with-cloudflare-d1/

Introduction

There are many ways to store data in your applications. For example, in Cloudflare Workers applications, we have Workers KV for key-value storage and Durable Objects for real-time, coordinated storage without compromising on consistency. Outside the Cloudflare ecosystem, you can also plug in other tools like NoSQL and graph databases.

But sometimes, you want SQL. Indexes allow us to retrieve data quickly. Joins enable us to describe complex relationships between different tables. SQL declaratively describes how our application’s data is validated, created, and performantly queried.

D1 was released today in open alpha, and to celebrate, I want to share my experience building apps with D1: specifically, how to get started, and why I’m excited about D1 joining the long list of tools you can use to build apps on Cloudflare.

D1 is remarkable because it’s an instant value-add to applications without needing new tools or stepping out of the Cloudflare ecosystem. Using wrangler, we can do local development on our Workers applications, and with the addition of D1 in wrangler, we can now develop proper stateful applications locally as well. Then, when it’s time to deploy the application, wrangler allows us both to access and execute commands against our D1 database and to deploy the API itself.

What we’re building

In this blog post, I’ll show you how to use D1 to add comments to a static blog site. To do this, we’ll construct a new D1 database and build a simple JSON API that allows the creation and retrieval of comments.

As I mentioned, separating D1 from the app itself – an API and database that remains separate from the static site – allows us to abstract the static and dynamic pieces of our website from each other. It also makes it easier to deploy our application: we will deploy the frontend to Cloudflare Pages, and the D1-powered API to Cloudflare Workers.

Building a new application

First, we’ll add a basic API in Workers. Create a new directory and initialize a new wrangler project inside it:

$ mkdir d1-example && cd d1-example
$ wrangler init

In this example, we’ll use Hono, an Express.js-style framework, to rapidly build our API. To use Hono in this project, install it using NPM:

$ npm install hono

Then, in src/index.ts, we’ll initialize a new Hono app and define a few endpoints – GET /api/posts/:slug/comments and POST /api/posts/:slug/comments.

import { Hono } from 'hono'
import { cors } from 'hono/cors'

const app = new Hono()

// allow the static frontend on another origin to call this API
app.use('/api/*', cors())

app.get('/api/posts/:slug/comments', async c => {
  // do something
})

app.post('/api/posts/:slug/comments', async c => {
  // do something
})

export default app

Now we’ll create a D1 database. In Wrangler 2, there is support for the wrangler d1 subcommand, which allows you to create and query your D1 databases directly from the command line. So, for example, we can create a new database with a single command:

$ wrangler d1 create d1-example

With our database created, we can take the database name and ID and associate them with a binding inside of wrangler.toml, wrangler’s configuration file. Bindings allow us to access Cloudflare resources, like D1 databases, KV namespaces, and R2 buckets, using a simple variable name in our code. Below, we’ll create the binding DB and use it to represent our new database:

[[d1_databases]]
binding = "DB" # i.e. available in your Worker on env.DB
database_name = "d1-example"
database_id = "4e1c28a9-90e4-41da-8b4b-6cf36e5abb29"

Note that this directive, the [[d1_databases]] field, currently requires a beta version of wrangler. You can install this for your project using the command npm install -D wrangler@beta.

With the database configured in our wrangler.toml, we can start interacting with it from the command line and inside our Workers function.

First, you can issue direct SQL commands using wrangler d1 execute:

$ wrangler d1 execute d1-example --command "SELECT name FROM sqlite_schema WHERE type ='table'"
Executing on d1-example:
┌─────────────────┐
│ name            │
├─────────────────┤
│ sqlite_sequence │
└─────────────────┘

You can also pass a SQL file – perfect for initial data seeding in a single command. Create src/schema.sql, which will create a new comments table for our project:

drop table if exists comments;
create table comments (
  id integer primary key autoincrement,
  author text not null,
  body text not null,
  post_slug text not null
);
create index idx_comments_post_slug on comments (post_slug);

-- Optionally, uncomment the below query to create data

-- insert into comments (author, body, post_slug)
-- values ("Kristian", "Great post!", "hello-world");

With the file created, execute the schema file against the D1 database by passing it with the flag --file:

$ wrangler d1 execute d1-example --file src/schema.sql

We’ve created a SQL database with just a few commands and seeded it with initial data. Now we can add a route to our Workers function to retrieve data from that database. Based on our wrangler.toml config, the D1 database is now accessible via the DB binding. In our code, we can use the binding to prepare SQL statements and execute them, for instance, to retrieve comments:

app.get('/api/posts/:slug/comments', async c => {
  const { slug } = c.req.param()
  const { results } = await c.env.DB.prepare(`
    select * from comments where post_slug = ?
  `).bind(slug).all()
  return c.json(results)
})

In this function, we accept a slug URL path parameter and prepare a SQL statement that selects all comments whose post_slug matches that parameter. We then return the results as a simple JSON response.

So far, we’ve built read-only access to our data. But inserting values into SQL is, of course, possible as well. So let’s define another function that allows POST-ing to an endpoint to create a new comment:

type Comment = {
  author: string
  body: string
}

app.post('/api/posts/:slug/comments', async c => {
  const { slug } = c.req.param()
  const { author, body } = await c.req.json<Comment>()

  if (!author) return c.text("Missing author value for new comment")
  if (!body) return c.text("Missing body value for new comment")

  const { success } = await c.env.DB.prepare(`
    insert into comments (author, body, post_slug) values (?, ?, ?)
  `).bind(author, body, slug).run()

  if (success) {
    c.status(201)
    return c.text("Created")
  } else {
    c.status(500)
    return c.text("Something went wrong")
  }
})

In this example, we built a comments API for powering a blog. To see the source for this D1-powered comments API, you can visit cloudflare/templates/worker-d1-api.
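
On the static site itself, rendering those comments only takes a small client-side fetch against the deployed API. A sketch, where the Worker hostname and the element ID are hypothetical:

// Client-side sketch: load comments for the current post and render them.
async function loadComments(slug: string): Promise<void> {
  const resp = await fetch(`https://d1-example.example.workers.dev/api/posts/${slug}/comments`) // hypothetical host
  const comments: { author: string; body: string }[] = await resp.json()
  const list = document.querySelector('#comments')
  for (const comment of comments) {
    const item = document.createElement('li')
    item.textContent = `${comment.author}: ${comment.body}`
    list?.appendChild(item)
  }
}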

Conclusion

One of the things most exciting about D1 is the opportunity to augment existing applications or websites with dynamic, relational data. As a former Ruby on Rails developer, one of the things I miss most about that framework in the world of JavaScript and serverless development tools is the ability to rapidly spin up full data-driven applications without needing to be an expert in managing database infrastructure. With D1 and its easy onramp to SQL-based data, we can build true data-driven applications without compromising on performance or developer experience.

This shift corresponds nicely with the advent of static sites in the last few years, using tools like Hugo or Gatsby. A blog built with a static site generator like Hugo is incredibly performant – it will build in seconds with small asset sizes.

But by trading a tool like WordPress for a static site generator, you lose the opportunity to add dynamic information to your site. Many developers have patched over this problem by adding more complexity to their build processes: fetching and retrieving data and generating pages using that data as part of the build.

This addition of complexity in the build process attempts to fix the lack of dynamism in applications, but it still isn’t genuinely dynamic. Instead of being able to retrieve and display new data as it’s created, the application rebuilds and redeploys whenever data changes so that it appears to be a live, dynamic representation of data. With a D1-powered API, your application can remain static, and the dynamic data will live geographically close to the users of your site, accessible via a queryable and expressive API.

Get started with Cloudflare Workers with ready-made templates

Post Syndicated from Gift Egwuenu original https://blog.cloudflare.com/cloudflare-workers-templates/

One of the things we prioritize at Cloudflare is enabling developers to build their applications on our developer platform with ease. We’re excited to share a collection of ready-made templates that’ll help you start building your next application on Workers. We want developers to get started as quickly as possible, so that they can focus on building and innovating instead of spending time configuring and setting up their projects.

Introducing Cloudflare Workers Templates

Cloudflare Workers enables you to build applications with exceptional performance, reliability, and scale. We are excited to share a collection of templates that help you get started quickly and give you an idea of what is possible to build on our developer platform.

We have made available a set of starter templates highlighting different use cases of Workers. We understand that you have different ideas you would love to build on top of Workers, and you may have questions or wonder whether it is possible. These templates go beyond the conventional ‘Hello, World’ starter. They’ll help shape your idea of what kinds of applications you can build with Workers, as well as with other products in the Cloudflare Developer Ecosystem.

We are excited to introduce a new collection of starter templates at workers.new/templates. This shortcut will serve you a collection of templates that you can immediately start using to build your next application.

Cloudflare Workers Template Collection

How to use Cloudflare Workers templates

We created this collection of templates to support you in getting started with Workers. Some of the examples combine multiple Cloudflare developer platform products to build applications similar to ones you might use every day. We know that Workers are used for many different use cases today, and we want to showcase some of them to you.

We have templates for building an image sharing website with Pages Functions, direct creator upload to Cloudflare Stream, Durable Object-powered request scheduler and many more.

One example to highlight is a template that lets you accept payment for video content. It is powered by Pages Functions, Cloudflare Stream and Stripe Checkout.  This app shows how you can use Cloudflare Workers with external payment and authentication systems to create a logged-in experience, without ever having to manage a database or persist any state yourself.

Once a user has paid, Stripe Checkout redirects to a URL that verifies a token from Stripe, and generates a signed URL using Cloudflare Stream to view the video. This signed URL is only valid for a specified amount of time, and access can be restricted by country or IP address.

To use the template, you need to either click the Deploy with Workers button or open the template in StackBlitz.

The Deploy with Workers button will redirect you to a page where you can authorize GitHub to deploy a fork of the repository using GitHub Actions, while opening with StackBlitz creates a new fork of the template for you to start working with.

Deploy to Workers Demo

These templates also come bundled with additional features I would like to share with you:

Integrated Deploy with Workers Button

We added a deploy button to all templates listed in the templates repository and the collection website, so you can quickly get up to speed with your code. The Deploy with Workers button lets you deploy your code in under five minutes; it uses a GitHub Action powered by Cloudflare Workers to do this.

GitHub Repository with Deploy to Workers Button

Support for Test Driven Development

As developers, we need to write tests for our code to ensure we’re shipping quality, production-grade software and that our code is bug-free. We support writing tests in Workers using the Wrangler unstable_dev API to write integration and end-to-end tests. We want to not only enable a great developer experience but also nudge developers to follow best practices and prioritize TDD in development. We configured a number of templates to support integration tests against a local server; these will serve as a starting point to help you set up tests in your own projects.

Here’s an example using Wrangler’s unstable_dev API and Vitest test framework to test the code written in an example ‘Hello worker’ starter template:

import { unstable_dev } from 'wrangler';
import { describe, expect, it, beforeAll, afterAll } from 'vitest';

describe('Worker', () => {
  let worker;

  beforeAll(async () => {
    // start a local instance of the Worker before the suite runs
    worker = await unstable_dev('index.js', {}, { disableExperimentalWarning: true });
  });

  afterAll(async () => {
    await worker.stop();
  });

  it('should return Hello worker!', async () => {
    const resp = await worker.fetch();
    if (resp) {
      const text = await resp.text();
      expect(text).toMatchInlineSnapshot(`"Hello worker!"`);
    }
  });
});

Online IDE Integration with StackBlitz

We announced StackBlitz’s partnership with Cloudflare Workers during Platform Week early this year. We believe a developer’s experience should be of utmost priority because we want them to build with ease on our developer platform.

StackBlitz is an online development platform for building web applications. It is powered by WebContainers, the first WebAssembly-based operating system which boots Node.js environments in milliseconds, securely within your browser tab.

We made it even easier to get started with Workers with an integrated Open with StackBlitz button for each starter template, making it simple to create a fork of a template. The great thing is that you only need a web browser to build your application.

Everything we’ve highlighted in this post leads to one thing: how can we create a better experience for developers getting started with Workers? We introduced these ready-made templates to make you more efficient, give you a better developer experience, and improve your time to deployment. We want you to spend less time getting started with building on the Workers developer platform, so what are you waiting for?

Next Steps

You can start building your own Worker today using the available templates provided in the templates collection. If you would like to contribute your own templates to the collection, be sure to send in a pull request; we’re more than happy to review it and add it to the growing collection. Share what you have built with us in the #builtwith channel on our Discord community, and make sure to follow us on Twitter or join our Discord Developers Community server.

Easy Postgres integration on Cloudflare Workers with Neon.tech

Post Syndicated from Erwin van der Koogh original https://blog.cloudflare.com/neon-postgres-database-from-workers/

It’s no wonder that Postgres is one of the world’s favorite databases. It’s easy to learn, a pleasure to use, and can scale all the way up from your first database in an early-stage startup to the system of record for giant organizations. Postgres has been an integral part of Cloudflare’s journey, so we know this fact well. But when it comes to connecting to Postgres from environments like Cloudflare Workers, there are unfortunately a bunch of challenges, as we mentioned in our Relational Database Connector post.

Neon.tech not only solves these problems; it also has other cool features such as branching databases — being able to branch your database in exactly the same way you branch your code: instant, cheap and completely isolated.

How to use it

It’s easy to get started. Neon’s client library @neondatabase/serverless is a drop-in replacement for node-postgres, the npm pg package with which you may already be familiar. After going through the getting started process to set up your Neon database, you can easily create a Worker to ask Postgres for the current time like so:

  1. Create a new Worker — Run npx wrangler init neon-cf-demo and accept all the defaults. Enter the new folder with cd neon-cf-demo.
  2. Install the Neon package — Run npm install @neondatabase/serverless.
  3. Provide connection details — For deployment, run npx wrangler secret put DATABASE_URL and paste in your connection string when prompted (you’ll find this in your Neon dashboard: something like postgres://user:password@your-project-name.cloud.neon.tech/main). For development, create a new file .dev.vars with the contents DATABASE_URL= plus the same connection string.
  4. Write the code — Lastly, replace src/index.ts with the following code:

import { Client } from '@neondatabase/serverless';
interface Env { DATABASE_URL: string; }

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    const client = new Client(env.DATABASE_URL);
    await client.connect();
    const { rows: [{ now }] } = await client.query('select now();');
    ctx.waitUntil(client.end());  // this doesn’t hold up the response
    return new Response(now);
  }
}

To try this locally, type npm start. To deploy it around the globe, type npx wrangler publish.

You can also check out the source for a slightly more complete demo app. This shows your nearest UNESCO World Heritage sites using IP geolocation in Cloudflare Workers and nearest-neighbor sorting in PostGIS.

How does this work? In this case, we take the coordinates supplied to our Worker in request.cf.longitude and request.cf.latitude. We then feed these coordinates to a SQL query that uses the PostGIS distance operator <-> to order our results:

const { longitude, latitude } = request.cf
const { rows } = await client.query(`
  select 
    id_no, name_en, category,
    st_makepoint($1, $2) <-> location as distance
  from whc_sites_2021
  order by distance limit 10`,
  [longitude, latitude]
);

Since we created a spatial index on the location column, the query is blazing fast. The result (rows) looks like this:

[{
  "id_no": 308,
  "name_en": "Yosemite National Park",
  "category": "Natural",
  "distance": 252970.14782223428
},
{
  "id_no": 134,
  "name_en": "Redwood National and State Parks",
  "category": "Natural",
  "distance": 416334.3926827573
},
/* … */
]

For even lower latencies, we could cache these results at a slightly coarser geographical resolution — rounding, say, to one sixtieth of a degree (one arc minute) of longitude and latitude, which is a little under a mile.
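
A sketch of how such a cache key could be derived, assuming we round both coordinates to the nearest arc minute:

// Sketch: round coordinates to one arc minute (1/60 of a degree) for a cache key.
function coarseCacheKey(longitude: number, latitude: number): string {
  const minutesPerDegree = 60
  const lon = Math.round(longitude * minutesPerDegree) / minutesPerDegree
  const lat = Math.round(latitude * minutesPerDegree) / minutesPerDegree
  return `nearest-sites:${lon}:${lat}`
}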

Sign up to Neon using the invite code serverless and try the @neondatabase/serverless driver with Cloudflare Workers.

Why we did it

Cloudflare Workers has enormous potential to improve back-end development and deployment. It’s cost-effective, admin-free, and radically scalable.

The use of V8 isolates means Workers are now fast and lightweight enough for nearly any use case. But it has a key drawback: Cloudflare Workers don’t yet support raw TCP communication, which has made database connections a challenge.

Even when Workers eventually support raw TCP communication, we will not have fully solved our problem, because database connections are expensive to set up and also have quite a bit of memory overhead.

This is what the solution looks like:

It consists of three parts:

  1. Connection pooling built into the platform — Given Neon’s serverless compute model, which splits storage and compute operations, it is not recommended to rely on a one-to-one mapping between external clients and Postgres connections. Instead, you can turn on connection pooling simply by flicking a switch (it’s in the Settings area of your Neon dashboard).
  2. WebSocket proxy — We deploy our own WebSocket-to-TCP proxy, written in Go. The proxy simply accepts WebSocket connections from Cloudflare Worker clients, relays message payloads to a requested (Neon-only) host over plain TCP, and relays back the responses.
  3. Client library — Our driver library is based on node-postgres but provides the necessary shims for Node.js features that aren’t present in Cloudflare Workers. Crucially, we replace Node’s net.Socket and tls.connect with code that redirects network reads and writes via the WebSocket connection. To support end-to-end TLS encryption between Workers and the database, we compile WolfSSL to WebAssembly with emscripten. Then we use esbuild to bundle it all together into an easy-to-use npm package.

The @neondatabase/serverless package is currently in public beta. We have plans to improve, extend, and explain it further in the near future on the Neon blog. In line with our commitment to open source, you can configure our serverless driver and/or run our WebSocket proxy to provide access to Postgres databases hosted anywhere — just see the respective repos for details.

So try Neon using invite code serverless, sign up and connect to it with Cloudflare Workers, and you’ll have a fully flexible back-end service running in next to no time.

Build applications of any size on Cloudflare with the Queues open beta

Post Syndicated from Rob Sutter original https://blog.cloudflare.com/cloudflare-queues-open-beta/

Message queues are a fundamental building block of cloud applications—and today the Cloudflare Queues open beta brings queues to every developer building for Region: Earth. Cloudflare Queues follows Cloudflare Workers and Cloudflare R2 in a long line of innovative services built for the Workers Developer Platform, enabling developers to build more complex applications without configuring networks, choosing regions, or estimating capacity. Best of all, like many other Cloudflare services, there are no egregious egress charges!

If you’ve ever purchased something online and seen a message like “you will receive confirmation of your order shortly,” you’ve interacted with a queue. When you completed your order, your shopping cart and information were stored and the order was placed into a queue. At some later point, the order fulfillment service picks and packs your items and hands them off to the shipping service—again, via a queue. Your order may sit for only a minute, or much longer if an item is out of stock or a warehouse is busy, and queues enable all of this functionality.

Message queues are great at decoupling components of applications, like the checkout and order fulfillment services for an ecommerce site. Decoupled services are easier to reason about, deploy, and implement, allowing you to ship features that delight your customers without worrying about synchronizing complex deployments.

Queues also allow you to batch and buffer calls to downstream services and APIs. This post shows you how to enroll in the open beta, walks you through a practical example of using Queues to build a log sink, and tells you how we built Queues using other Cloudflare services. You’ll also learn a bit about the roadmap for the open beta.

Getting started

Enrolling in the open beta

Open the Cloudflare dashboard and navigate to the Workers section. Select Queues from the Workers navigation menu and choose Enable Queues Beta.

Review your order and choose Proceed to Payment Details.

Note: If you are not already subscribed to a Workers Paid Plan, one will be added to your order automatically.

Enter your payment details and choose Complete Purchase. That’s it – you’re enrolled in the open beta! Choose Return to Queues on the confirmation page to return to the Cloudflare Queues home page.

Creating your first queue

After enabling the open beta, open the Queues home page and choose Create Queue. Name your queue `my-first-queue` and choose Create queue. That’s all there is to it!

The dashboard displays a confirmation message along with a list of all the queues in your account.

Note: As of the writing of this blog post, each account is limited to ten queues. We intend to raise this limit as we build towards general availability.

Managing your queues with Wrangler

You can also manage your queues from the command line using Wrangler, the CLI for Cloudflare Workers. In this section, you build a simple but complete application implementing a log aggregator or sink to learn how to integrate Workers, Queues, and R2.

Setting up resources
To create this application, you need access to a Cloudflare Workers account with a subscription plan, access to the Queues open beta, and an R2 plan.

Install and authenticate Wrangler then run wrangler queues create log-sink from the command line to create a queue for your application.

Run wrangler queues list and note that Wrangler displays your new queue.

Note: The following screenshots use the jq utility to format the JSON output of wrangler commands. You do not need to install jq to complete this application.

Finally, run wrangler r2 bucket create log-sink to create an R2 bucket to store your aggregated logs. After the bucket is created, run wrangler r2 bucket list to see your new bucket.

Creating your Worker
Next, create a Workers application with two handlers: a fetch() handler to receive individual incoming log lines and a queue() handler to aggregate a batch of logs and write the batch to R2.

In an empty directory, run wrangler init to create a new Cloudflare Workers application. When prompted:

  • Choose “y” to create a new package.json
  • Choose “y” to use TypeScript
  • Choose “Fetch handler” to create a new Worker at src/index.ts

Open wrangler.toml and replace the contents with the following:

wrangler.toml

name = "queues-open-beta"
main = "src/index.ts"
compatibility_date = "2022-11-03"

[[queues.producers]]
queue = "log-sink"
binding = "BUFFER"

[[queues.consumers]]
queue = "log-sink"
max_batch_size = 100
max_batch_timeout = 30

[[r2_buckets]]
bucket_name = "log-sink"
binding = "LOG_BUCKET"

The [[queues.producers]] section creates a producer binding for the Worker at src/index.ts called BUFFER that refers to the log-sink queue. This Worker can place messages onto the log-sink queue by calling await env.BUFFER.send(log);

The [[queues.consumers]] section creates a consumer binding for the log-sink queue for your Worker. Once the log-sink queue has a batch ready to be processed (or consumed), the Workers runtime will look for the queue() event handler in src/index.ts and invoke it, passing the batch as an argument. The queue() function signature looks as follows:

async queue(batch: MessageBatch<Error>, env: Env): Promise<void> {

The final section in your wrangler.toml creates a binding for the log-sink R2 bucket that makes the bucket available to your Worker via env.LOG_BUCKET.

src/index.ts

Open src/index.ts and replace the contents with the following code:

export interface Env {
  BUFFER: Queue;
  LOG_BUCKET: R2Bucket;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // parse the incoming log line and buffer it onto the queue
    let log = await request.json();
    await env.BUFFER.send(log);
    return new Response("Success!");
  },
  async queue(batch: MessageBatch<Error>, env: Env): Promise<void> {
    // serialize the batch of buffered messages and write it to R2
    const logBatch = JSON.stringify(batch.messages);
    await env.LOG_BUCKET.put(`logs/${Date.now()}.log.json`, logBatch);
  },
};

The export interface Env section exposes the two bindings you defined in wrangler.toml: a queue named BUFFER and an R2 bucket named LOG_BUCKET.

The fetch() handler parses the request body as JSON, places the body onto the BUFFER queue, then returns an HTTP 200 response with the message Success!

The queue() handler receives a batch of messages that each contain log entries, serializes the batch to a JSON string, then writes that string to the LOG_BUCKET R2 bucket using the current timestamp as the filename.

Publishing and running your application
To publish your log sink application, run wrangler publish. Wrangler packages your application and its dependencies and deploys it to Cloudflare’s global network.

Note that the output of wrangler publish includes the BUFFER queue binding, indicating that this Worker is a producer and can place messages onto the queue. The final line of output also indicates that this Worker is a consumer for the log-sink queue and can read and remove messages from the queue.

Use your favorite API client, like curl, httpie, or Postman, to send JSON log entries to the published URL for your Worker via HTTP POST requests. Navigate to your log-sink R2 bucket in the Cloudflare dashboard and note that the logs prefix is now populated with aggregated logs from your request.
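
For example, a single log entry could be posted like this (a sketch; substitute the workers.dev URL that wrangler publish printed for your Worker):

// Sketch: send one JSON log entry to the deployed Worker.
await fetch('https://queues-open-beta.your-subdomain.workers.dev', { // hypothetical URL
  method: 'POST',
  headers: { 'content-type': 'application/json' },
  body: JSON.stringify({ level: 'info', message: 'user signed in' }),
})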

Download and open one of the logfiles to view the JSON array inside. That’s it – with fewer than 45 lines of code and config, you’ve built a log aggregator to ingest and store data in R2!

Buffering R2 writes with Queues in the real world

In the previous example, you created a simple Workers application that buffers data into batches before writing the batches to R2. This reduces the number of calls to the downstream service, reducing load on the service and saving you money.

UUID.rocks, the fastest UUIDv4-as-a-service, wanted to confirm whether their API truly generates unique IDs on every request. With 80,000 requests per day, it wasn’t trivial to find out. They decided to write every generated UUID to R2 to compare IDs across the entire population. However, writing directly to R2 at the rate UUIDs are generated is inefficient and expensive.

To reduce writes and costs, UUID.rocks introduced Cloudflare Queues into their UUID generation workflow. Each time a UUID is requested, a Worker places the value of the UUID into a queue. Once enough messages have been received, the buffered batch of JSON objects is written to R2. This avoids invoking an R2 write on every API call, saving costs and making the data easier to process later.

The uuid-queue application consists of a single Worker with three event handlers:

  1. A fetch handler that receives a JSON object representing the generated UUID and writes it to a Cloudflare Queue.
  2. A queue handler that writes batches of JSON objects to R2 in CSV format.
  3. A scheduled handler that combines batches from the previous hour into a single file for future processing.

To view the source or deploy this application into your own account, visit the repository on GitHub.

How we built Cloudflare Queues

Like many of the Cloudflare services you use and love, we built Queues by composing other Cloudflare services like Workers and Durable Objects. This enabled us to rapidly solve two difficult challenges: securely invoking your Worker from our own service and maintaining a strongly consistent state at scale. Several recent Cloudflare innovations helped us overcome these challenges.

Securely invoking your Worker

In the Before Times (early 2022), invoking one Worker from another Worker meant a fresh HTTP call from inside your script. This was a brittle experience, requiring you to know your downstream endpoint at deployment time. Nested invocations ran as HTTP calls, passing all the way through the Cloudflare network a second time and adding latency to your request. It also meant security was on you – if you wanted to control how that second Worker was invoked, you had to create and implement your own authentication and authorization scheme.

Worker to Worker requests
During Platform Week in May 2022, Service Worker Bindings entered general availability. With Service Worker Bindings, your Worker code has a binding to another Worker in your account that you invoke directly, avoiding the network penalty of a nested HTTP call. This removes the performance and security barriers discussed previously, but it still requires that you hard-code your nested Worker at compile time. You can think of this setup as “static dispatch,” where your Worker has a static reference to another Worker where it can dispatch events.
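
In code, a Service Worker Binding looks like an ordinary fetch on a binding rather than a call across the network. A sketch, with a hypothetical binding name:

interface Env {
  AUTH_SERVICE: Fetcher // service binding to another Worker in your account (hypothetical name)
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Invoked directly, with no second trip through the Cloudflare network.
    return env.AUTH_SERVICE.fetch(request)
  }
}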

Dynamic dispatch
As Service Worker Bindings entered general availability, we also launched a closed beta of Workers for Platforms, our tool suite to help make any product programmable. With Workers for Platforms, software as a service (SaaS) and platform providers can allow users to upload their own scripts and run them safely via Cloudflare Workers. User scripts are not known at compile time, but are dynamically dispatched at runtime.

Workers for Platforms entered general availability during GA week in September 2022, and is available for all customers to build with today.

With dynamic dispatch generally available, we now have the ability to discover and invoke Workers at runtime without the performance penalty of HTTP traffic over the network. We use dynamic dispatch to invoke your queue’s consumer Worker whenever a message or batch of messages is ready to be processed.

Consistent stateful data with Durable Objects

Another challenge we faced was storing messages durably without sacrificing performance. We took the design goal of ensuring that all messages were persisted to disk in multiple locations before we confirmed receipt of the message to the user. Again, we turned to an existing Cloudflare product—Durable Objects—which entered general availability nearly one year ago today.

Durable Objects are named instances of JavaScript classes that are guaranteed to be unique across Cloudflare’s entire network. Durable Objects process messages in order on a single thread, allowing for coordination across messages, and provide a strongly consistent storage API for key-value pairs. Offloading the hard problem of storing data durably in a distributed environment to Durable Objects allowed us to reduce the time to build Queues and prepare it for open beta.
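
As a rough illustration of those primitives (not the actual Queues implementation), a Durable Object can serialize incoming messages and persist them with its transactional storage API:

// Sketch: a Durable Object that appends messages with strong consistency.
export class MessageLog {
  constructor(private state: DurableObjectState) {}

  async fetch(request: Request): Promise<Response> {
    const message = await request.text()
    // Storage calls are serialized per object, so this counter cannot race.
    const count = ((await this.state.storage.get<number>('count')) ?? 0) + 1
    await this.state.storage.put(`message:${count}`, message)
    await this.state.storage.put('count', count)
    return new Response(`stored message ${count}`)
  }
}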

Open beta roadmap

Our open beta process empowers you to influence feature prioritization and delivery. We’ve set ambitious goals for ourselves on the path to general availability, most notably supporting unlimited throughput while maintaining 100% durability. We also have many other great features planned, like first-in first-out (FIFO) message processing and API compatibility layers to ease migrations, but we need your feedback to build what you need most, first.

Conclusion

Cloudflare Queues is a global message queue for the Workers developer. Building with Queues makes your applications more performant, resilient, and cost-effective—but we’re not done yet. Join the Open Beta today and share your feedback to help shape the Queues roadmap as we deliver application integration services for the next generation cloud.

Cloudflare Workers scale too well and broke our infrastructure, so we are rebuilding it on Workers

Post Syndicated from Jonathan Norris (Guest Blogger) original https://blog.cloudflare.com/devcycle-customer-story/

While scaling our new Feature Flagging product DevCycle, we’ve encountered an interesting challenge: our Cloudflare Workers-based infrastructure can handle way more instantaneous load than our traditional AWS infrastructure. This led us to rethink how we design our infrastructure to always use Cloudflare Workers for everything.

The origin of DevCycle

For almost 10 years, Taplytics has been a leading provider of no-code A/B testing and feature flagging solutions for product and marketing teams across a wide range of use cases for some of the largest consumer-facing companies in the world. So when we applied ourselves to build a new engineering-focused feature management product, DevCycle, we built upon our experience using Workers which have served over 140 billion requests for Taplytics customers.

The inspiration behind DevCycle is to build a focused feature management tool for engineering teams, empowering them to build their software more efficiently and deploy it faster, helping engineering teams reach their goals, whether that’s continuous deployment, a lower change failure rate, or a faster recovery time. DevCycle is the culmination of our vision of how teams should use Feature Management to build high-quality software faster. We’ve used DevCycle to build DevCycle, enabling us to implement continuous deployment successfully.

DevCycle architecture

One of the first things we asked ourselves when ideating DevCycle was how we could get out of the business of managing thousands of vCPUs worth of AWS instances and move our core business logic closer to our end-user devices. Based on our experience with Cloudflare Workers at Taplytics, we knew we wanted it to be a core part of our future infrastructure for DevCycle.

By using the global computing power of Workers and moving as much logic to the SDKs as possible with our local bucketing server-side SDKs, we were able to massively reduce or eliminate the latency of fetching feature flag configurations for our users. In addition, we used a shared WASM library across our Workers and local bucketing SDKs to dramatically reduce the amount of code we need to maintain per SDK, and increase the consistency of our platform. This architecture has also fundamentally changed our business’s cost structure to easily serve any customer of any scale.

The core architecture of DevCycle revolves around publishing and consuming JSON configuration files per project environment. The publishing side is managed in our AWS services, while Cloudflare manages the consumption of these config files at scale. This split in responsibilities allows for all high-scale requests to be managed by Cloudflare, while keeping our AWS services simple and low-scale.

“Workers are breaking our events pipeline”

One of the primary challenges as a feature management platform is that we don’t have direct control over the load from our customers’ applications using our SDKs; our systems need the ability to scale instantly to match their load. For example, we have a couple of large customers whose mobile traffic is primarily driven by push notifications, which cause massive instantaneous spikes in traffic to our APIs, in the range of 10x increases in load. As you can imagine, no traditional auto-scaled API service and load balancer can manage that type of increase in load. Thus, our choices are to dramatically increase the minimum size of our cluster and load balancer to handle these unknown load spikes, accept that some requests will be rate-limited, or move to an architecture that can handle this load.

Given that all our SDK API requests are already served with Workers, they have no problem scaling instantly to 10x+ their base load. Sadly we can’t say the same about the traditional parts of our infrastructure.

For each feature flag configuration request to a Worker, a corresponding events request is sent to our AWS events infrastructure. The events are received by our events API container in Kubernetes, where they are then published to Kafka and eventually ingested by Snowflake. While Cloudflare Workers have no problem handling instantaneous spikes in feature flag requests, the events system can’t keep up. Our cluster and events API containers need to be scaled up faster to prevent the existing instances from being overwhelmed. Even the load balancer has issues accepting the sudden increase. Cloudflare Workers just work too well in comparison to EC2 instances + EKS.

To solve this issue we are moving towards a new events Cloudflare Worker which will be able to handle the instantaneous events load from these requests and make use of the Kinesis Data Firehose to write events to our existing S3 bucket which is ingested by Snowflake. In the future, we look forward to testing out Cloudflare Queues writing to R2 once a Snowflake connector has been created. This architecture should allow us to ingest events at almost any scale and withstand instantaneous traffic spikes with a predictable and efficient cost structure.

Building without a database next to your code

Workers provide many benefits, including fast response times, infinite scalability, serverless architecture, and excellent up-time performance. However, if you want to see all these benefits, you need to architect your Workers to assume that you don’t have direct access to a centralized SQL / NoSQL database (or D1) like you would with a traditional API service. For example, suppose you build your workers to require reaching out to a database to fetch and update user data every time a request is made to your Workers. In that case, your request latency will be tied to the geographic distance between your Worker and the database plus the latency of the database. In addition, your Workers will be able to scale significantly beyond the number of database connections you can support, and your uptime will be tied to the uptime of your external database. Therefore, when architecting your systems to use Workers, we advise relying primarily on data sent as part of the API request and cacheable data on Cloudflare’s global network.
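
For example, a Worker can serve a configuration file from Cloudflare’s cache and only reach back to a slower origin on a miss. A sketch, with a hypothetical config URL:

export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    const configUrl = 'https://config.example.com/project.json' // hypothetical origin
    const cached = await caches.default.match(configUrl)
    if (cached) return cached
    const response = await fetch(configUrl)
    // Cache the config close to the user without delaying this response.
    ctx.waitUntil(caches.default.put(configUrl, response.clone()))
    return response
  }
}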

Cloudflare provides multiple products and services to help with data on their global network:

  • KV: “global, low-latency, key-value data store.”
      ◦ However, the lowest-latency way of retrieving data from within a Worker is limited by a minimum 60-second TTL, so you’ll need to be OK with cached data that is up to 60 seconds stale.
  • Durable Objects: “provide low-latency coordination and consistent storage for the Workers platform through two features: global uniqueness and a transactional storage API.”
      ◦ Ability to store user-level information closer to the end user.
      ◦ Unfamiliar Worker interface for accessing data for developers with SQL / NoSQL experience.
  • R2: “store large amounts of unstructured data.”
      ◦ Ability to store arbitrarily large amounts of unstructured data using familiar S3 APIs.
      ◦ Cloudflare’s cache can be used to provide low-latency access within Workers.
  • D1: “serverless SQLite database”

Each of these tools that Cloudflare provides has made building APIs far more accessible than when Workers launched initially; however, each service has aspects which need to be accounted for when architecting your systems. Because Workers is an open platform, you can also access any publicly available database you want from a Worker. For example, we are making use of Macrometa for our EdgeDB product built into our Workers to help customers access their user data.

The predictable cost structure of Workers

One of the greatest advantages of moving most of our workloads towards Cloudflare Workers is the predictable cost structure that can scale 1:1 with our request loads and can be easily mapped to usage-based billing for our customers. In addition, we no longer have to run excess EC2 instances to handle random spikes in load, just in case they happen.

Too many SaaS services have opaque billing based on max usage or other metrics that don’t relate directly to their costs. Moving from our legacy AWS architecture, with high fixed costs like databases and caching layers, to Workers has resulted in infrastructure spending that is directly tied to usage of our APIs and SDKs. For DevCycle, this architecture has been roughly 5x more cost-efficient to operate.

The future of DevCycle and Cloudflare

With DevCycle we will continue to invest in leveraging serverless computing and moving our core business logic as close to our users as possible, either on Cloudflare’s global network or locally within our SDKs. We’re excited to integrate even more deeply with the Cloudflare developer platform as new services evolve. We already see future use cases for R2, Queues and Durable Objects and look forward to what’s coming next from Cloudflare.

Cloudflare Workers and micro-frontends: made for one another

Post Syndicated from Peter Bacon Darwin original https://blog.cloudflare.com/better-micro-frontends/

To help developers build better web applications we researched and devised a fragments architecture to build micro-frontends using Cloudflare Workers that is lightning fast, cost-effective to develop and operate, and scales to the needs of the largest enterprise teams without compromising release velocity or user experience.

Here we share a technical overview and a proof of concept of this architecture.

Why micro-frontends?

One of the challenges of modern frontend web development is that applications are getting bigger and more complex. This is especially true for enterprise web applications supporting e-commerce, banking, insurance, travel, and other industries, where a unified user interface provides access to a large amount of functionality. In such projects it is common for many teams to collaborate to build a single web application. These monolithic web applications, usually built with JavaScript technologies like React, Angular, or Vue, span thousands, or even millions of lines of code.

When a monolithic JavaScript architecture is used with applications of this scale, the result is a slow and fragile user experience with low Lighthouse scores. Furthermore, collaborating development teams often struggle to maintain and evolve their parts of the application, as their fates are tied to the fates of all the other teams, so the mistakes and tech debt of one team often impact all.

Drawing on ideas from microservices, the frontend community has started to advocate for micro-frontends to enable teams to develop and deploy their features independently of other teams. Each micro-frontend is a self-contained mini-application that can be developed and released independently and is responsible for rendering a “fragment” of the page. The application then combines these fragments together so that from the user’s perspective it feels like a single application.

An application consisting of multiple micro-frontends

Fragments could represent vertical application features, like “account management” or “checkout”, or horizontal features, like “header” or “navigation bar”.

Client-side micro-frontends

A common approach to micro-frontends is to rely upon client-side code to lazy load and stitch fragments together (e.g. via Module Federation). Client-side micro-frontend applications suffer from a number of problems.

Common code must either be duplicated or published as a shared library. Shared libraries are problematic themselves: it is not possible to tree-shake unused library code at build time, resulting in more code than necessary being downloaded to the browser, and coordinating between teams when shared libraries need to be updated can be complex and awkward.

Also, the top-level container application must bootstrap before the micro-frontends can even be requested, and they also need to boot before they become interactive. If they are nested, then you may end up with a waterfall of requests to fetch micro-frontends, leading to further runtime delays.

These problems can result in a sluggish application startup experience for the user.

Server-side rendering could be used with client-side micro-frontends to improve how quickly a browser displays the application but implementing this can significantly increase the complexity of development, deployment and operation. Furthermore, most server-side rendering approaches still suffer from a hydration delay before the user can fully interact with the application.

Addressing these challenges was the main motivation for exploring an alternative solution, which relies on the distributed, low latency properties provided by Cloudflare Workers.

Micro-frontends on Cloudflare Workers

Cloudflare Workers is a compute platform that offers a highly scalable, low latency JavaScript execution environment that is available in over 275 locations around the globe. In our exploration we used Cloudflare Workers to host and render micro-frontends from anywhere on our global network.

Fragments architecture

In this architecture the application consists of a tree of “fragments” each deployed to Cloudflare Workers that collaborate to server-side render the overall response. The browser makes a request to a “root fragment”, which will communicate with “child fragments” to generate the final response. Since Cloudflare Workers can communicate with each other with almost no overhead, applications can be server-side rendered quickly by child fragments, all working in parallel to render their own HTML, streaming their results to the parent fragment, which combines them into the final response stream delivered to the browser.

A high-level overview of a fragments architecture

We have built an example of a “Cloud Gallery” application to show how this can work in practice. It is deployed to Cloudflare Workers at https://cloud-gallery.web-experiments.workers.dev/

The demo application is a simple filtered gallery of cloud images built using our fragments architecture. Try selecting a tag in the type-ahead to filter the images listed in the gallery. Then change the delay on the stream of cloud images to see how the type-ahead filtering can be interactive before the page finishes loading.

Multiple Cloudflare Workers

The application is composed of a tree of six collaborating but independently deployable Cloudflare Workers, each rendering their own fragment of the screen and providing their own client-side logic, and assets such as CSS stylesheets and images.

Architectural overview of the Cloud Gallery app

The “main” fragment acts as the root of the application. The “header” fragment has a slider to configure an artificial delay to the display of gallery images. The “body” fragment contains the “filter” fragment and “gallery” fragments. Finally, the “footer” fragment just shows some static content.

The full source code of the demo app is available on GitHub.

Benefits and features

This architecture of multiple collaborating server-side rendered fragments, deployed to Cloudflare Workers, has some interesting features.

Encapsulation

Fragments are entirely encapsulated, so they can control what they own and what they make available to other fragments.

Fragments can be developed and deployed independently

Updating one of the fragments is as simple as redeploying that fragment. The next request to the main application will use the new fragment. Also, fragments can host their own assets (client-side JavaScript, images, etc.), which are streamed through their parent fragment to the browser.

Server-only code is not sent to the browser

As well as reducing the cost of downloading unnecessary code to the browser, security-sensitive code that is only needed for server-side rendering the fragment is never exposed to other fragments and is not downloaded to the browser. Also, features can be safely hidden behind feature flags in a fragment, allowing more flexibility when rolling out new behavior.

Composability

Fragments are fully composable – any fragment can contain other fragments. The resulting tree structure gives you more flexibility in how you architect and deploy your application. This helps larger projects to scale their development and deployment. Also, fine-grained control over how fragments are composed could allow fragments that are expensive to server-side render to be cached individually.

Fantastic Lighthouse scores

Streaming server-rendered HTML results in great user experiences and Lighthouse scores, which in practice means happier users and a higher chance of conversion for your business.

Lighthouse scores for the Cloud Gallery app

Each fragment can parallelize requests to its child fragments and pipe the resulting HTML streams into its own single streamed server-side rendered response. Not only can this reduce the time to render the whole page but streaming each fragment through to the browser reduces the time to the first byte of each fragment.

Eager interactivity

One of the powers of a fragments architecture is that fragments can become interactive even while the rest of the application (including other fragments) is still being streamed down to the browser.

In our demo, the “filter” fragment is immediately interactive as soon as it is rendered, even if the image HTML for the “gallery” fragment is still loading.

To make it easier to see this, we added a slider to the top of the “header” that can simulate a network or database delay that slows down the HTML stream which renders the “gallery” images. Even when the “gallery” fragment is still loading, the type-ahead input, in the “filter” fragment, is already fully interactive.

Just think of all the frustration that this eager interactivity could avoid for web application users with unreliable Internet connections.

Under the hood

As discussed already, this architecture relies upon deploying the application as many cooperating Cloudflare Workers. Let’s look into some details of how this works in practice.

We experimented with various technologies, and while this approach can be used with many frontend libraries and frameworks, we found the Qwik framework to be a particularly good fit, because of its HTML-first focus and low JavaScript overhead, which avoids any hydration problems.

Implementing a fragment

Each fragment is a server-side rendered Qwik application deployed to its own Cloudflare Worker. This means that you can even browse to these fragments directly. For example, the “header” fragment is deployed to https://cloud-gallery-header.web-experiments.workers.dev/.

A screenshot of the self-hosted “header” fragment

The header fragment is defined as a Header component using Qwik. This component is rendered in a Cloudflare Worker via a fetch() handler:

export default {
  fetch(request: Request, env: Record<string, unknown>): Promise<Response> {
    return renderResponse(request, env, <Header />, manifest, "header");
  },
};

cloud-gallery/header/src/entry.ssr.tsx

The renderResponse() function is a helper we wrote that server-side renders the fragment and streams it into the body of a Response that we return from the fetch() handler.

The header fragment serves its own JavaScript and image assets from its Cloudflare Worker. We configure Wrangler to upload these assets to Cloudflare and serve them from our network.
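As an illustration only, a fragment’s wrangler.toml could use Workers Sites to have Wrangler upload a directory of built assets; the demo repository is the source of truth for the exact configuration:

# Hypothetical configuration for the "header" fragment's Worker.
name = "cloud-gallery-header"

[site]
# Directory of built client-side JavaScript and images to upload.
bucket = "./dist"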

Implementing fragment composition

Fragments that contain child fragments have additional responsibilities:

  • Request and inject child fragments when rendering their own HTML.
  • Proxy requests for child fragment assets through to the appropriate fragment.

Injecting child fragments

The position of a child fragment inside its parent can be specified by a FragmentPlaceholder helper component that we have developed. For example, the “body” fragment has the “filter” and “gallery” fragments.

<div class="content">
  <FragmentPlaceholder name="filter" />
  <FragmentPlaceholder name="gallery" />
</div>

cloud-gallery/body/src/root.tsx

The FragmentPlaceholder component is responsible for making a request for the fragment and piping the fragment stream into the output stream.

Proxying asset requests

As mentioned earlier, fragments can host their own assets, especially client-side JavaScript files. When a request for an asset arrives at the parent fragment, it needs to know which child fragment should receive the request.

In our demo we use a convention that such asset paths will be prefixed with /_fragment/<fragment-name>. For example, the header logo image path is /_fragment/header/cf-logo.png. We developed a tryGetFragmentAsset() helper which can be added to the parent fragment’s fetch() handler to deal with this:

export default {
  async fetch(
    request: Request,
    env: Record<string, unknown>
  ): Promise<Response> {
    // Proxy requests for assets hosted by a fragment.
    const asset = await tryGetFragmentAsset(env, request);
    if (asset !== null) {
      return asset;
    }
    // Otherwise server-side render the template injecting child fragments.
    return renderResponse(request, env, <Root />, manifest, "div");
  },
};

cloud-gallery/body/src/entry.ssr.tsx
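To give a feel for what such a helper involves, here is a simplified sketch (not the demo’s actual implementation). It assumes each child fragment is exposed through a service binding named after the fragment, and it simply matches the path prefix and forwards the request:

// A simplified sketch of an asset-proxying helper: match the
// /_fragment/<fragment-name>/ prefix and forward the request to the
// corresponding child fragment's service binding.
async function tryGetFragmentAssetSketch(
  env: Record<string, Fetcher>,
  request: Request
): Promise<Response | null> {
  const url = new URL(request.url);
  const match = /^\/_fragment\/([^/]+)\/(.+)$/.exec(url.pathname);
  if (match === null) return null;
  const [, fragmentName, assetPath] = match;
  const binding = env[fragmentName.toUpperCase()]; // e.g. "header" -> HEADER
  if (binding === undefined) return null;
  // Rewrite the URL so the child fragment sees the asset path it expects.
  url.pathname = `/${assetPath}`;
  return binding.fetch(new Request(url.toString(), request));
}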

Fragment asset paths

If a fragment hosts its own assets, then we need to ensure that any HTML it renders uses the special /_fragment/<fragment-name> path prefix mentioned above when referring to these assets. We have implemented a strategy for this in the helpers we developed.

The FragmentPlaceholder component adds a base searchParam to the fragment request to tell it what this prefix should be. The renderResponse() helper extracts this prefix and provides it to the Qwik server-side renderer. This ensures that any request for client-side JavaScript has the correct prefix. Fragments can apply a hook that we developed called useFragmentRoot(). This allows components to gather the prefix from a FragmentContext context.

For example, since the “header” fragment hosts the Cloudflare and GitHub logos as assets, it must call the useFragmentRoot() hook:

export const Header = component$(() => {
  useStylesScoped$(HeaderCSS);
  useFragmentRoot();

  return (...);
});

cloud-gallery/header/src/root.tsx

The FragmentContext value can then be accessed in components that need to apply the prefix. For example, the Image component:

export const Image = component$((props: Record<string, string | number>) => {
  const { base } = useContext(FragmentContext);
  return <img {...props} src={base + props.src} />;
});

cloud-gallery/helpers/src/image/image.tsx

Service-binding fragments

Cloudflare Workers provide a mechanism called service bindings that lets Workers make requests to each other efficiently, without incurring actual network requests. In the demo we use this mechanism to make the requests from parent fragments to their child fragments with almost no performance overhead, while still allowing the fragments to be independently deployed.
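For illustration, wiring up service bindings in wrangler.toml looks roughly like this (the binding and service names here are assumptions rather than the demo’s exact configuration):

# Hypothetical configuration for the "body" fragment's Worker, binding its
# child fragments as services.
name = "cloud-gallery-body"
services = [
  { binding = "FILTER", service = "cloud-gallery-filter" },
  { binding = "GALLERY", service = "cloud-gallery-gallery" }
]

The parent can then call env.FILTER.fetch(...) as if it were making an ordinary HTTP request, and the runtime routes the request directly to the child Worker.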

Comparison to current solutions

This fragments architecture has three properties that distinguish it from other current solutions.

Unlike monoliths, or client-side micro-frontends, fragments are developed and deployed as independent server-side rendered applications that are composed together on the server side. This significantly improves rendering speed and lowers interaction latency in the browser.

Unlike server-side rendered micro-frontends with Node.js or cloud functions, Cloudflare Workers is a globally distributed compute platform with a region-less deployment model. It has incredibly low latency, and a near-zero communication overhead between fragments.

Unlike solutions based on module federation, a fragment’s client-side JavaScript is very specific to the fragment it is supporting. This means that it is small enough that we don’t need to have shared library code, eliminating the version skew issues and coordination problems when updating shared libraries.

Future possibilities

This demo is just a proof of concept, so there are still areas to investigate. Here are some of the features we’d like to explore in the future.

Caching

Each micro-frontend fragment can be cached independently of the others based on how static its content is. When requesting the full page, the fragments only need to run server-side rendering for micro-frontends that have changed.

An application where the output of some fragments are cached

With per-fragment caching you can return the HTML response to the browser faster, and avoid incurring compute costs in re-rendering content unnecessarily.
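As a hypothetical sketch of how this might look in a parent fragment, using the Workers Cache API (the demo does not implement this yet):

// A hypothetical per-fragment caching helper, not part of the demo.
async function fetchFragmentCached(
  fragment: Fetcher,
  request: Request,
  ctx: ExecutionContext
): Promise<Response> {
  const cache = caches.default;
  // Serve the fragment's HTML from cache when we have it...
  const cached = await cache.match(request);
  if (cached !== undefined) return cached;
  // ...otherwise render it, and cache a copy without delaying the response.
  const response = await fragment.fetch(request);
  ctx.waitUntil(cache.put(request, response.clone()));
  return response;
}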

Fragment routing and client-side navigation

Our demo application used micro-frontend fragments to compose a single page. We could however use this approach to implement page routing as well. When server-side rendering, the main fragment could insert the appropriate “page” fragment based on the visited URL. When navigating, client-side, within the app, the main fragment would remain the same while the displayed “page” fragment would change.

An application where each route is delegated to a different fragment

This approach combines the best of server-side and client-side routing with the power of fragments.

Using other frontend frameworks

Although the Cloud Gallery application uses Qwik to implement all fragments, it is possible to use other frameworks as well. If really necessary, it’s even possible to mix and match frameworks.

To achieve good results, the framework of choice should be capable of server-side rendering, and should have a small client-side JavaScript footprint. HTML streaming capabilities, while not required, can significantly improve performance of large applications.

An application using different frontend frameworks

Incremental migration strategies

Adopting a new architecture, compute platform, and deployment model is a lot to take in all at once, and for existing large applications it is prohibitively risky and expensive. To make this fragment-based architecture available to legacy projects, an incremental adoption strategy is key.

Developers could test the waters by migrating just a single piece of the user-interface within their legacy application to a fragment, integrating with minimal changes to the legacy application. Over time, more of the application could then be moved over, one fragment at a time.

Convention over configuration

As you can see in the Cloud Gallery demo application, setting up a fragment-based micro-frontend requires quite a bit of configuration. A lot of this configuration is very mechanical and could be abstracted away via conventions and better tooling. Following the productivity-focused precedent set by Ruby on Rails and by filesystem-based routing meta-frameworks, we could make a lot of this configuration disappear.

Try it yourself!

There is still so much to dig into! Web applications have come a long way in recent years and their growth in scale and complexity is hard to overstate. Traditional implementations of micro-frontends have had only mixed success in helping developers scale development and deployment of large applications. Cloudflare Workers, however, unlock new possibilities which can help us tackle many of the existing challenges and help us build better web applications.

Thanks to the generous free plan offered by Cloudflare Workers, you can check out the Gallery Demo code and deploy it yourself.

If all of this sounds interesting to you, and you would like to work with us on improving the developer experience for Cloudflare Workers, we are also happy to share that we are hiring!

Bringing the best live video experience to Cloudflare Stream with AV1

Post Syndicated from Renan Dincer original https://blog.cloudflare.com/av1-cloudflare-stream-beta/

Consumer hardware is pushing the limits of consumers’ bandwidth.

VR headsets support 5760 x 3840 resolution — 22.1 million pixels per frame of video. Nearly all new TVs and smartphones sold today now support 4K — 8.8 million pixels per frame. It’s now normal for most people on a subway to be casually streaming video on their phone, even as they pass through a tunnel. People expect all of this to just work, and get frustrated when it doesn’t.

Consumer Internet bandwidth hasn’t kept up. Even advanced mobile carriers still limit streaming video resolution to prevent network congestion. Many mobile users still have to monitor and limit their mobile data usage. Higher Internet speeds require expensive infrastructure upgrades, and 30% of Americans still say they often have problems simply connecting to the Internet at home.

We talk to developers every day who are pushing up against these limits, trying to deliver the highest quality streaming video without buffering or jitter, challenged by viewers’ expectations and bandwidth. Developers building live video experiences hit these limits the hardest — buffering doesn’t just delay video playback, it can cause the viewer to get out of sync with the live event. Buffering can cause a sports fan to miss a key moment as playback suddenly skips ahead, or find out in a text message about the outcome of the final play, before they’ve had a chance to watch.

Today we’re announcing a big step towards breaking the ceiling of these limits — support in Cloudflare Stream for the AV1 codec for live videos and their recordings, available today to all Cloudflare Stream customers in open beta. Read the docs to get started, or watch an AV1 video from Cloudflare Stream in your web browser. AV1 is an open and royalty-free video codec that uses 46% less bandwidth than H.264, the most commonly used video codec on the web today.

What is AV1, and how does it improve live video streaming?

Every piece of information that travels across the Internet, from web pages to photos, requires data to be transmitted between two computers. A single character usually takes one byte, so a two-page letter would be 3600 bytes or 3.6 kilobytes of data transferred.

One pixel in a photo takes 3 bytes, one each for red, green and blue in the pixel. A 4K photo, at 3840 × 2160 pixels, would take 24,883,200 bytes, or roughly 25 Megabytes. A video is like a photo that changes 30 times a second, which works out to almost 45 Gigabytes per minute of uncompressed video. That’s a lot!
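Working through that arithmetic in code:

const pixelsPerFrame = 3840 * 2160;             // 8,294,400 pixels in a 4K frame
const bytesPerFrame = pixelsPerFrame * 3;       // 3 bytes per pixel ≈ 24.9 MB
const bytesPerMinute = bytesPerFrame * 30 * 60; // 30 frames per second, 60 seconds
console.log((bytesPerMinute / 1e9).toFixed(1)); // ≈ 44.8 GB per minute, uncompressed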

To reduce the amount of bandwidth needed to stream video, before video is sent to your device, it is compressed using a codec. When your device receives video, it decodes this into the pixels displayed on your screen. These codecs are essential to both streaming and storing video.

Video compression codecs combine multiple advanced techniques, and are able to compress video to one percent of the original size, with your eyes barely noticing a difference. This also makes video codecs computationally intensive and hard to run. Smartphones, laptops and TVs have specific media decoding hardware, separate from the main CPU, optimized to decode specific protocols quickly, using the minimum amount of battery life and power.

Every few years, as researchers invent more efficient compression techniques, standards bodies release new codecs that take advantage of these improvements. Each generation of improvements in compression technology increases the requirements for computers that run them. With higher requirements, new chips are made available with increased compute capacity. These new chips allow your device to display higher quality video while using less bandwidth.

AV1 takes advantage of recent advances in compute to deliver video with dramatically fewer bytes, even compared to other relatively recent video protocols like VP9 and HEVC.

AV1 leverages the power of new smartphone chips

One of the biggest developments of the past few years has been the rise of custom chip designs for smartphones. Much of what’s driven the development of these chips is the need for advanced on-device image and video processing, as companies compete on the basis of which smartphone has the best camera.

This means the phones we carry around have an incredible amount of compute power. One way to think about AV1 is that it shifts work from the network to the viewer’s device. AV1 is fewer bytes over the wire, but computationally harder to decode than prior formats. When AV1 was first announced in 2018, it was dismissed by some as too slow to encode and decode, but smartphone chips have become radically faster in the past four years, more quickly than many saw coming.

AV1 hardware decoding is already built into the latest Google Pixel smartphones as part of the Tensor chip design. The Samsung Exynos 2200 and MediaTek Dimensity 1000 SoC mobile chipsets both support hardware accelerated AV1 decoding. It appears that Google will require that all devices that support Android 14 support decoding AV1. And AVPlayer, the media playback API built into iOS and tvOS, now includes an option for AV1, which hints at future support. It’s clear that the industry is heading towards hardware-accelerated AV1 decoding in the most popular consumer devices.

With hardware decoding comes battery life savings — essential for both today’s smartphones and tomorrow’s VR headsets. For example, a Google Pixel 6 with AV1 hardware decoding uses only minimal battery and CPU to decode and play our test video:

AV1 encoding requires even more compute power

Just as decoding is significantly harder for end-user devices, it is also significantly harder to encode video using AV1. When AV1 was announced in 2018, many doubted whether hardware would be able to encode it efficiently enough for the protocol to be adopted quickly enough.

To demonstrate this, we encoded the 4K rendering of Big Buck Bunny (a classic among video engineers!) into AV1, using an AMD EPYC 7642 48-Core Processor with 256 GB RAM. This CPU continues to be a workhorse of our compute fleet, as we have written about previously. We used the following command to re-encode the video, based on the example in the ffmpeg AV1 documentation:

ffmpeg -i bbb_sunflower_2160p_30fps_normal.mp4 -c:v libaom-av1 -crf 30 -b:v 0 -strict -2 av1_test.mkv

Using a single core, encoding just two seconds of video at 30fps took over 30 minutes. Even if all 48 cores were used to encode, it would take at minimum over 43 seconds to encode just two seconds of video. Live encoding using only CPUs would require over 20 servers running at full capacity.

Special-purpose AV1 software encoders like rav1e and SVT-AV1 that run on general purpose CPUs can encode somewhat faster than libaom-av1 with ffmpeg, but still consume a huge amount of compute power to encode AV1 in real-time, requiring multiple servers running at full capacity in many scenarios.

Cloudflare Stream encodes your video to AV1 in real-time

At Cloudflare, we control both the hardware and software on our network. So to solve the CPU constraint, we’ve installed dedicated AV1 hardware encoders, designed specifically to encode AV1 at blazing fast speeds. This end-to-end control is what lets us encode your video to AV1 in real-time. This is entirely out of reach for most public cloud customers, including the video infrastructure providers who depend on them for compute power.

Encoding in real-time means you can use AV1 for live video streaming, where saving bandwidth matters most. With a pre-recorded video, the client video player can fetch future segments of video well in advance, relying on a buffer that can be many tens of seconds long. With live video, buffering is constrained by latency — it’s not possible to build up a large buffer when viewing a live stream. There is less margin for error with live streaming, and every byte saved means that if a viewer’s connection is interrupted, it takes less time to recover before the buffer is empty.

Stream lets you support AV1 with no additional work

AV1 has a chicken-and-egg dilemma. And we’re helping solve it.

Companies with large video libraries often re-encode their entire content library to a new codec before using it. But AV1 is so computationally intensive that re-encoding whole libraries has been cost prohibitive. Companies have to choose specific videos to re-encode, and guess which content will be most viewed ahead of time. This is particularly challenging for apps with user generated content, where content can suddenly go viral, and viewer patterns are hard to anticipate.

This has slowed down the adoption of AV1 — content providers wait for more devices to support AV1, and device manufacturers wait for more content to use AV1. Which will come first?

With Cloudflare Stream there is no need to manually trigger re-encoding, re-upload video, or manage the bulk encoding of a large video library. This is a unique approach that is made possible by integrating encoding and delivery into a single product — it is not possible to encode on-demand using the old way of encoding first, and then pointing a CDN at a bucket of pre-encoded files.

We think this approach can accelerate the adoption of AV1. Consider a video app with millions of minutes of user-generated video. Most videos will never be watched again. In the old model, developers would have to spend huge sums of money to encode upfront, or pick and choose which videos to re-encode. With Stream, we can help anyone incrementally adopt AV1, without re-encoding upfront. As we work towards making AV1 Generally Available, we’ll be working to make supporting AV1 simple and painless, even for videos already uploaded to Stream, with no special configuration necessary.

Open, royalty-free, and widely supported

At Cloudflare, we are committed to open standards and fighting patent trolls. While there are multiple competing options for new video codecs, we chose to support AV1 first in part because it is open source and has royalty-free licensing.

Other encoding codecs force device manufacturers to pay royalty fees in order to adopt their standard in consumer hardware, and their backers have been quick to file lawsuits against competing video codecs. The group behind the open and royalty-free VP8 and VP9 codecs has been pushing back against this model for more than a decade, and AV1 is the successor to these codecs, with support from all the biggest technology companies, both software and hardware. Beyond its technical accomplishments, AV1 is a clear message from the industry that the future of video encoding should be open, royalty-free, and free from patent litigation.

Try AV1 right now with your live stream or live recording

Support for AV1 is currently in open beta. You can try using AV1 on your own live video with Cloudflare Stream right now — just add the ?betaCodecSuggestion=av1 query parameter to the HLS or DASH manifest URL for any live stream or live recording created after October 1st in Cloudflare Stream. Read the docs to get started. If you don’t yet have a Cloudflare account, you can sign up here and start using Cloudflare Stream in just a few minutes.
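For example, here is a sketch of the only change a player integration needs; the host and video UID below are placeholders you would copy from your own Stream dashboard:

// Opt a Stream manifest into the AV1 beta by appending the documented query parameter.
const manifestUrl =
  "https://customer-<CODE>.cloudflarestream.com/<VIDEO_UID>/manifest/video.m3u8";
const av1ManifestUrl = `${manifestUrl}?betaCodecSuggestion=av1`;
// Hand av1ManifestUrl to your HLS player instead of the original URL.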

We also have a recording of a live video, encoded using AV1, that you can watch here. Note that Safari does not yet support AV1.

We encourage you to try AV1 with your test streams, and we’d love your feedback. Join our Discord channel and tell us what you’re building, and what kinds of video you’re interested in using AV1 with. We’d love to hear from you!

The world’s leading venture capitalist firms commit $1.25 BILLION to back startups built on Cloudflare Workers

Post Syndicated from Mia Wang original https://blog.cloudflare.com/workers-launchpad/

From our earliest days, Cloudflare has stood for helping build a better Internet that’s accessible to all. It’s core to our mission that anyone who wants to start building on the Internet should be able to do so easily, and without the barriers of prohibitively expensive or difficult to use infrastructure.

Nowhere is this philosophy more important – and more impactful to the Internet – than with our developer platform, Cloudflare Workers. Workers is, quite simply, where developers and entrepreneurs start on Day 1. It’s a full developer platform that includes cloud storage; website hosting; SQL databases; and of course, the industry’s leading serverless product. The platform’s ease-of-use and accessible pricing (all the way down to free) are critical in advancing our mission. For startups, this translates into fast, easy deployment and iteration, that scales seamlessly with predictable, transparent and cost-effective pricing. Building a great business from scratch is hard enough – we ought to know! – and so we’re aiming to take all the complexity out of your application infrastructure.

Announcing the Workers Launchpad funding program

Today, we’re taking things a step further and making it easier for startups to build the business of their dreams. We’re announcing a $1.25 billion Workers Launchpad funding program in partnership with some of the world’s leading venture capital firms. Any startup built on Workers can apply. As is the case with the Workers Platform itself, we’ve tried to make applying dead simple: it should take you less than five minutes to submit your application through the Workers Launchpad portal.

How does it work? The only requirement for being eligible for the funding program is that you’ve built your core infrastructure on Workers. If you’re new to Cloudflare and Cloudflare Workers, check out our Startup Plan to get started. We hope these resources will be helpful to all startups and help level the playing field, no matter where in the world you might be.

Once you submit your application, it will be reviewed by our Launchpad team, several of whom are former entrepreneurs and venture capital folks themselves. They’ll match promising applicants with the VC partners who have the most expertise in their space (more on them below). Every quarter, we’ll announce the winners of our Launchpad program. Winners, our “Workers Founders”, will be guaranteed the opportunity to pitch the VC partner(s) we’ve determined would be a good match for their business. It’s a win-win all around. VCs get the opportunity to invest in businesses they know are being built on a forward-looking, world-class development platform. Entrepreneurs get connected to world-class VCs. And for the first class of winners, we’ll have a few added perks that we describe in more detail below.

Who are the VCs that you might get a chance to pitch to?

When we approached our friends in the venture community with our vision for the Workers Launchpad, we received incredibly positive feedback and excitement. Many have seen firsthand the competitive advantages of building on Workers through their own portfolio companies. Moreover, Cloudflare is home to one of the largest developer communities on Earth with approximately 20% of the world’s websites on our network. As such, we can play a unique role in matching great entrepreneurs with great VCs to further not only the Workers platform, but also the Internet ecosystem, for everyone.

We’re honored to announce a world-class group of VC Launch Partners supporting this program and the ecosystem of Workers-based startups:

More on why we’re doing this

So why are we doing this? The simple answer is we’re proud of our Workers Developer Platform and think that everyone should be using it. Entrepreneurs who develop on Workers can ship faster, more easily, and more cost-effectively, and in a way that future-proofs their infrastructure:

Speed. Development velocity isn’t just a convenience for an entrepreneur. It’s a massive competitive advantage. In fact, development velocity is one of Cloudflare’s competitive advantages – we’re able to develop quickly because we build on Workers. When you develop on Workers, you don’t need to spend time configuring DNS records, maintaining certificates, scaling up clusters, or building complex deployment pipelines. Focus on developing your application, and Cloudflare will handle the rest.

Ease of use. Startup teams and founders are some of the busiest people on Earth. You shouldn’t have to think about – or make complicated decisions about – IT infrastructure. Questions like: “Which availability zone should I choose?”, or “Will I be able to scale up my infrastructure in time for our next viral marketing campaign?” shouldn’t have to cross your mind! And on the Workers Platform, they don’t. The code you and your team write automatically deploys quickly and consistently across Cloudflare’s global network in 275+ cities in over 100 countries. Cloudflare securely and scalably connects your users to your applications, regardless of where those applications are hosted or how many users suddenly sign up for your product. Developers can easily manage globally distributed applications with a programmable network that easily connects to whatever services they need to talk to.

Future-proofing your infrastructure and your wallet. Cloudflare’s massive global network – distributed across 275+ cities in over 100 countries – is able to scale with your business, no matter how large it grows to become. We also help you remain compliant with local laws and regulations as you expand around the world, with capabilities like Workers’ Jurisdictional Restrictions for Durable Objects. You can sleep soundly at night instead of worrying about how to level up your infrastructure in the midst of shifting regulations, and, equally importantly, knowing that you will not wake up to any surprise bills. Many of us have had the experience of being charged unexpected and/or exorbitant fees by our cloud providers. For example, providers will often make it easy and free to onboard your application or data, but charge exorbitant rates when you want to move them out (i.e. egress fees). Cloudflare will never charge for egress. Our pricing is simple, and we constantly aim to be the low-cost provider, no matter how large your business grows to be.

Dogfooding our own product

We’re excited about Workers not only because we’ve built our own infrastructure on it, but also because we’re seeing the incredible things others have built on it. In fact, we acquired a company built entirely on Workers at the end of last year, an Israeli start-up named Zaraz, which secures and accelerates third party web tools. Workers allowed Zaraz to replace the multiple network requests of each tool running on a website with one single request, effectively streamlining a messy web of extensions into a single lightweight application. This acquisition opened our eyes to the power of the global community that’s built on our platform, and left us motivated to help startups built on Workers find the funding, mentorship, and support needed to grow.

But wait, there’s more!

To make it even easier for startups to take advantage of all the benefits that Workers has to offer, applicants to the Workers Launchpad program who have raised less than $3 million in total external funding will automatically have the option to receive Cloudflare’s Startup Plan. This plan includes all the elements of Cloudflare’s Pro and Business Plans ($2,400 annual value) plus higher tiers of our Stream video product, our Teams Zero Trust security suite and the Workers platform. To make sure the full range of our developer platform is accessible to startups, we recently more than tripled the number of products available in this plan, which now includes email security, R2, Pages, KV, and many others.

Furthermore, all startups that apply by October 31, 2022, will be eligible to be selected for the Winter 2022 class of Workers Founders, which will unlock additional support, mentorship, and marketing opportunities. Being selected as a Workers Founder will get you a chance to practice your pitch with investors, engage with leaders from Cloudflare, and get advice on how to build a successful business from topics like recruiting to marketing, sales, and beyond during a virtual Workers Founders Bootcamp Week. The program will culminate in a virtual Demo Day, so you can show the world what you’ve been building. We’re leaning in to help promising entrepreneurs join us in our mission to help build a better Internet.

Helping make the Internet better for all

Accessibility and ease of use are core to everything we do at Cloudflare. We will always make our products and platforms so easy to use that even the smallest business or hobbyist can easily use them. We hope the Workers Launchpad funding program encourages entrepreneurs from all around the world, and from all backgrounds, to start building on Workers, and makes it easier for you to find the funding you need to build the business of your dreams.

Head to the Workers Launchpad page to apply and join the Cloudflare Developer Discord to engage with the Workers community. If you’re a VC that is interested in supporting the program, reach out to [email protected].

Cloudflare is not providing any funding or making any funding decisions, and there is no guarantee that any particular company will receive funding through the program. All funding decisions will be made by the venture capital firms that participate in the program. Cloudflare is not a registered broker-dealer, investment adviser, or other similar intermediary.

Cloudflare Queues: globally distributed queues without the egress fees

Post Syndicated from Ashcon Partovi original https://blog.cloudflare.com/introducing-cloudflare-queues/

Developers continue to build more complex applications on Cloudflare’s Developer Platform. We started with Workers, which brought compute, then introduced KV, Durable Objects, R2, and soon D1, which enabled persistence. Now, as we enable developers to build larger, more sophisticated, and more reliable applications, it’s time to unlock another foundational building block: messaging.

Thus, we’re excited to announce the private beta of Cloudflare Queues, a global message queuing service that allows applications to reliably send and receive messages using Cloudflare Workers. It offers at-least-once message delivery, supports batching of messages, and charges no bandwidth egress fees. Let’s queue up the details.

What is a Queue?

Queues enable developers to send and receive messages with a guarantee of delivery. Think of it like the postal service for the Internet. You give it a message, then it handles all the hard work to ensure the message gets delivered in a timely manner. Unlike the real postal service, where it’s possible for a message to get lost, Queues provide a guarantee that each message is delivered at least once, no matter what. This lets you focus on your application, rather than worrying about the chaos of transactions, retries, and backoffs to prevent data loss.

Queues also allow you to scale your application to process large volumes of data. Imagine a million people send you a package in the mail, at the same time. Instead of a million postal workers suddenly showing up at your front door, you would want them to aggregate your mail into batches, then ask you when you’re ready to receive each batch. This lets you decouple and spread load among services that have different throughput characteristics.

How does it work?

Queues are integrated into the fabric of the Cloudflare Workers runtime, with simple APIs that make it easy to send and receive messages. First, you’ll want to send messages to the Queue. You can do this by defining a Worker, referred to as a “producer,” which has a binding to the Queue.

In the example below, a Worker catches JavaScript exceptions and sends them to a Queue. You might notice that any object can be sent to the Queue, including an error. That’s because messages are encoded using the standard structuredClone() algorithm.

export default {
  async fetch(request: Request, env: Environment) {
    try {
      return await doRequest(request);
    } catch (error) {
      // Send the exception to the Queue, then respond with the error.
      await env.ERROR_QUEUE.send(error);
      return new Response(error.stack, { status: 500 });
    }
  }
}

Second, you’ll want to process messages in the Queue. You can do this by defining a Worker, referred to as the “consumer,” which will receive messages from the Queue. To facilitate this, there is a new type of event handler, named “queue,” which receives the messages sent to the consumer.

This Worker is configured to receive messages from the previous example. It appends the stack trace of each Error to a log file, then saves it to an R2 bucket.

export default {
  async queue(batch: MessageBatch<Error>, env: Environment) {
    // Concatenate the stack traces from every message in the batch...
    let logs = "";
    for (const message of batch.messages) {
      logs += message.body.stack;
    }
    // ...and persist them as a single log file in an R2 bucket.
    await env.ERROR_BUCKET.put(`errors/${Date.now()}.log`, logs);
  }
}

Configuration is also easy. You can change the message batch size, message retries, delivery wait time, and dead-letter queue. Here’s a snippet of the wrangler.toml configuration used when deploying with wrangler, our command-line interface:

name = "my-producer"
[queues]
producers = [{ queue = "errors", binding = "ERROR_QUEUE" }]
# ---
name = "my-consumer"
[queues]
consumers = [{ queue = "errors", max_batch_size = 100, max_retries = 3 }]

Above are two different wrangler.tomls, one for a producer and another for a consumer. It is also possible for a producer and consumer to be implemented by the same Worker. To see the full list of options and examples, see the documentation.
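For instance, here is a sketch of a single Worker acting as both producer and consumer, assuming a queue binding named MY_QUEUE is configured for it in wrangler.toml (the names are illustrative):

export default {
  // Producer: enqueue some details about each incoming request.
  async fetch(request: Request, env: { MY_QUEUE: Queue }): Promise<Response> {
    await env.MY_QUEUE.send({ url: request.url, receivedAt: Date.now() });
    return new Response("enqueued");
  },
  // Consumer: receive and process batches from the same queue.
  async queue(batch: MessageBatch, env: { MY_QUEUE: Queue }): Promise<void> {
    for (const message of batch.messages) {
      console.log("processing", message.body);
    }
  },
};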

What can you build with it?

You can use Cloudflare Queues to defer tasks and guarantee they get processed, decouple load among different services, batch events and process them together, and send messages from Worker to Worker.

To demonstrate, we’ve put together a demo application that you can run on your local machine using wrangler. It shows how Queues can batch messages and handle failures in your code.

In addition to batching, here are other examples of what you can build with Queues:

  • Off-load tasks from the critical path of a Workers request.
  • Guarantee messages get delivered to a service that talks HTTP.
  • Transform, filter, and fan-out messages to multiple Queues.

Cloudflare Queues gives you the flexibility to decide where to route messages. Instead of static configuration files that define routing keys and patterns, you can use JavaScript to define custom logic for how you filter and fan-out to multiple Queues. In the next example, you can distribute work to different Queues based on the attributes of a user.

export default {
  async queue(batch: MessageBatch, env: Environment) {
    for (const message of batch.messages) {
      const user = message.body;
      if (isEUResident(user)) {
        await env.EU_QUEUE.send(user);
      }
      if (isForgotten(user)) {
        await env.DELETION_QUEUE.send(user);
      }
    }
  }
}

We will also support integrations with Cloudflare products, like R2. For example, you might configure an R2 bucket to send lifecycle events to a Queue, or archive messages to an R2 bucket for long-term storage.

How much does it cost?

Cloudflare Queues has a simple, transparent pricing model that’s easy to predict. It costs $0.40 per million operations, where an operation is defined as each 64 KB chunk of data that is written, read, or deleted. There are also no bandwidth egress fees for data in or out, unlike Amazon’s SQS or Google’s Pub/Sub.

To deliver a message effectively, it usually takes three operations: one write, one read, and one acknowledgement. You can therefore estimate your usage from the cost of messages delivered, which is $1.20 per million (calculated as 3 × $0.40).
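As a back-of-envelope example using these numbers:

// Estimated monthly cost for 50 million delivered messages, assuming each
// message stays within a single 64 KB chunk per operation.
const messagesPerMonth = 50_000_000;
const opsPerMessage = 3; // one write, one read, one acknowledgement
const costPerMillionOps = 0.40;
const monthlyCost = (messagesPerMonth * opsPerMessage / 1_000_000) * costPerMillionOps;
console.log(`$${monthlyCost.toFixed(2)}`); // $60.00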

When can I try it?

You can register to join the waitlist as we work towards a beta launch. You’ll have an opportunity to try it out, for free. Once it’s ready, we’ll launch an open beta for everyone to try.

In the meantime, you can read the documentation to view our code samples, see which features will be supported, and learn what you can build. If you’re in our developer Discord, you can stay up to date by joining the #queues-beta channel. If you’re an Enterprise customer, reach out to your account team to schedule a session with our Product team.

We’re excited to see what you build with Cloudflare Queues. Let the queuing begin!

WebRTC live streaming to unlimited viewers, with sub-second latency

Post Syndicated from Kyle Boutette original https://blog.cloudflare.com/webrtc-whip-whep-cloudflare-stream/

Creators and broadcasters expect to be able to go live from anywhere, on any device. Viewers expect “live” to mean “real-time”. The protocols that power most live streams are unable to meet these growing expectations.

In talking to developers building live streaming into their apps and websites, we’ve heard near universal frustration with the limitations of existing live streaming technologies. Developers in 2022 rightly expect to be able to deliver low latency to viewers, broadcast reliably, and use web standards rather than old protocols that date back to the era of Flash.

Today, we’re excited to announce in open beta that Cloudflare Stream now supports live video streaming over WebRTC, with sub-second latency, to unlimited concurrent viewers. This is a new feature of Cloudflare Stream, and you can start using it right now in the Cloudflare Dashboard — read the docs to get started.

WebRTC with Cloudflare Stream leapfrogs existing tools and protocols, exclusively uses open standards with zero dependency on a specific SDK, and empowers any developer to build both low latency live streaming and playback into their website or app.

The status quo of streaming live video is broken

The status quo of streaming live video has high latency, depends on archaic protocols and is incompatible with the way developers build apps and websites. A reasonable person’s expectations of what the Internet should be able to deliver in 2022 are simply unmet by the dominant set of protocols carried over from past eras.

Viewers increasingly expect “live” to mean “real-time”. People want to place bets on sports broadcasts in real-time, interact and ask questions to presenters in real-time, and never feel behind their friends at a live event.

In practice, the HLS and DASH standards used to deliver video have 10+ seconds of latency. LL-HLS and LL-DASH bring this down to closer to 5 seconds, but only as a hack on top of the existing protocol that delivers segments of video in individual HTTP requests. Sending mini video clips over TCP simply cannot deliver video in real-time. HLS and DASH are here to stay, but aren’t the future of real-time live video.

Creators and broadcasters expect to be able to go live from anywhere, on any device.

In practice, people creating live content are stuck with a limited set of native apps, and can’t go live using RTMP from a web browser. Because it’s built on top of TCP, the RTMP broadcasting protocol struggles under even the slightest network disruption, making it a poor or often unworkable option when broadcasting from mobile networks. RTMP, originally built for use with Adobe Flash Player, was last updated in 2012, and while Stream supports the newer SRT protocol, creators need an option that works natively on the web and can more easily be integrated in native apps.

Developers expect to be able to build using standard APIs that are built into web browsers and native apps.

In practice, RTMP can’t be used from a web browser, and creating a native app that supports RTMP broadcasting typically requires diving into lower-level programming languages like C and Rust. Only those with expertise in both live video protocols and these languages have full access to the tools needed to create novel live streaming client applications.

We’re solving this by using new open WebRTC standards: WHIP and WHEP

WebRTC is the real-time communications protocol, supported across all web browsers, that powers video calling services like Zoom and Google Meet. Since inception it’s been designed for real-time, ultra low-latency communications.

While WebRTC is well established, for most of its history it’s lacked standards for:

  • Ingestion — how broadcasters should send media content (akin to RTMP today)
  • Egress — how viewers request and receive media content (akin to DASH or HLS today)

As a result, developers have had to implement this on their own, and client applications on both sides are often tightly coupled to provider-specific implementations. Developers we talk to often express frustration, having sunk months of engineering work into building around a specific vendor’s SDK, unable to switch without a significant rewrite of their client apps.

At Cloudflare, our mission is broader — we’re helping to build a better Internet. Today we’re launching not just a new feature of Cloudflare Stream, but a vote of confidence in new WebRTC standards for both ingestion and egress. We think you should be able to start using Stream without feeling locked into an SDK or implementation specific to Cloudflare, and we’re committed to using open standards whenever possible.

For ingestion, WHIP is an IETF draft on the Standards Track, with many applications already successfully using it in production. For delivery (egress), WHEP is an IETF draft with broad agreement. Combined, they provide a standardized end-to-end way to broadcast one-to-many over WebRTC at scale.

Cloudflare Stream is the first cloud service to let you both broadcast using WHIP and playback using WHEP — no vendor-specific SDK needed. Here’s how it works:

Cloudflare Stream is already built on top of the Cloudflare developer platform, using Workers and Durable Objects running on Cloudflare’s global network, within 50ms of 95% of the world’s Internet-connected population.

Our WebRTC implementation extends this to relay WebRTC video through our network. Broadcasters stream video using WHIP to the point of presence closest to their location, which tells the Durable Object where the live stream can be found. Viewers request streaming video from the point of presence closest to them, which asks the Durable Object where to find the stream, and video is routed through Cloudflare’s network, all with sub-second latency.

Using Durable Objects, we achieve this with zero centralized state. And just like the rest of Cloudflare Stream, you never have to think about regions, both in terms of pricing and product development.

While existing ultra low-latency streaming providers charge significantly more to stream over WebRTC, because Stream runs on Cloudflare’s global network, we’re able to offer WebRTC streaming at the same price as delivering video over HLS or DASH. We don’t think you should be penalized with higher pricing when choosing which technology to rely on to stream live video. Once generally available, WebRTC streaming will cost $1 per 1000 minutes of video delivered, just like the rest of Stream.

What does sub-second latency let you build?

Ultra low latency unlocks interactivity within your website or app, removing the time delay between creators, in-person attendees, and those watching remotely.

Developers we talk to are building everything from live sports betting, to live auctions, to live viewer Q&A and even real-time collaboration in video post-production. Even streams without in-app interactivity can benefit from real-time — no sports fan wants to get a text from their friend at the game that ruins the moment, before they’ve had a chance to watch the final play. Whether you’re bringing an existing app or have a new idea in mind, we’re excited to see what you build.

If you can write JavaScript, you can let your users go live from their browser

While hobbyist and professional creators might take the time to download and learn how to use an application like OBS Studio, most Internet users won’t get past the friction of installing new tools and copying RTMP keys from one tool to another. To empower more people to go live, they need to be able to broadcast from within your website or app, just by granting access to the camera and microphone.

Cloudflare Stream with WebRTC lets you build live streaming into your app as a front-end developer, without any special knowledge of video protocols. And our approach, using the WHIP and WHEP open standards, means you can do this with zero dependencies, using code that is 100% yours and under your control.

Go live from a web browser with just a few lines of code

You can go live right now, from your web browser, by creating a live input in the Cloudflare Stream dashboard, and pasting a URL into the example linked below.

Read the docs or run the example code below in your browser using StackBlitz.

<video id="input-video" autoplay autoplay muted></video>

import WHIPClient from "./WHIPClient.js";

const url = "<WEBRTC_URL_FROM_YOUR_LIVE_INPUT>";
const videoElement = document.getElementById("input-video");
const client = new WHIPClient(url, videoElement);

This example uses an example WHIP client, written in just 100 lines of JavaScript, using APIs that are native to web browsers, with zero dependencies. But because WHIP is an open standard, you can use any WHIP client you choose. Support for WHIP is growing across the video streaming industry — it has recently been added to GStreamer, and one of the authors of the WHIP specification has written a JavaScript client implementation. We intend to support the full WHIP specification, including supporting Trickle ICE for fast NAT traversal.
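To give a sense of how little is involved, here is a condensed sketch of the core WHIP handshake (the real WHIPClient.js in the example also handles ICE gathering, errors, and cleanup):

async function startWhipBroadcast(whipUrl: string): Promise<RTCPeerConnection> {
  // Capture the user's camera and microphone.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const pc = new RTCPeerConnection();
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Create an SDP offer and POST it to the WHIP endpoint.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  const response = await fetch(whipUrl, {
    method: "POST",
    headers: { "Content-Type": "application/sdp" },
    body: offer.sdp,
  });

  // Apply the SDP answer returned by the server.
  await pc.setRemoteDescription({ type: "answer", sdp: await response.text() });
  return pc;
}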

Play a live stream in a browser, with sub-second latency, no SDK required

Once you’ve started streaming, copy the playback URL from the live input you just created, and paste it into the example linked below.

Read the docs or run the example code below in your browser using StackBlitz.

<video id="playback" controls autoplay muted></video>

import WHEPClient from './WHEPClient.js';
const url = "<WEBRTC_PLAYBACK_URL_FROM_YOUR_LIVE_INPUT>";
const videoElement = document.getElementById("playback");
const client = new WHEPClient(url, videoElement);

Just like the WHIP example before, this one uses an example WHEP client we’ve written that has zero dependencies. WHEP is a more recent IETF draft than WHIP, first published in July of this year, but adoption is moving quickly. People in the community have already written open-source client implementations in both JavaScript and C, with more to come.

Start experimenting with real-time live video, in open beta today

WebRTC streaming is in open beta today, ready for you to use as an integrated feature of Cloudflare Stream. Once Generally Available, WebRTC streaming will be priced like the rest of Cloudflare Stream, based on minutes of video delivered and minutes stored.

Read the docs to get started.

Build your next startup on Cloudflare with our comprehensive Startup Plan, v2.0

Post Syndicated from Jade Q. Wang original https://blog.cloudflare.com/startup-program-v2/

Starting a business is hard. And we know that the first few years of your business are crucial to your success.

Cloudflare’s Startup Plan is here to help.

Last year, we piloted a free program for a select group of startups, with a selection of products that are very high leverage for young startups early in their product development, like Workers, Stream, and Zero Trust.

Over the past year, startup founders repeatedly wrote into [email protected], and most of these emails followed one of two patterns:

  1. A startup would like to request additional products that are not a part of the startup plan, often Workers KV, Pages, Cloudflare for SaaS, R2, Argo, etc.
  2. A startup that is not a part of any accelerator program but would like to get on the startup plan.

Based on this feedback, we are thrilled to announce that today we are expanding the scope of the program to include these popularly requested products! Beyond that, we’re also super excited to be broadening the eligibility criteria, so more startups can qualify for the plan.

What does the Cloudflare Startup Plan include?

There’s a lot of additional value that’s in the latest version of the plan. Here’s how things have evolved from v1.0:

Included in both v1.0 and v2.0*:

  • Content Delivery Network (CDN): Cloudflare’s CDN is a distributed network of servers that provides several advantages for a website. By caching website content, Cloudflare helps improve page load speeds, reduce bandwidth usage, and reduce CPU usage on the server.
  • Web Application Firewall (WAF): Cloudflare WAF protects a customer’s Internet properties from common vulnerabilities like SQL injection attacks, cross-site scripting, and cross-site forgery requests, with no changes to its existing infrastructure.
  • DNS: Cloudflare DNS is an enterprise-grade authoritative DNS service that offers the fastest response time, unparalleled redundancy, and advanced security with built-in DDoS mitigation and DNSSEC.
  • Advanced DDoS Protection: Cloudflare DDoS protection secures websites, applications, and entire networks while ensuring the performance of legitimate traffic is not compromised.
  • Page Rules: Page Rules allows you to conditionally change zone settings such as security level or Rocket Loader settings. It is also how cache settings are configured for traffic passing through the zone.
  • Workers: Cloudflare Workers allows developers to augment existing applications or create entirely new ones through a lightweight execution environment, without configuring or maintaining infrastructure.
  • Stream: Cloudflare Stream provides out-of-the-box video infrastructure that handles storage, encoding, and delivery. Using the Stream API, customers can build video functionality affordably and with minimal engineering effort.
  • Zero Trust: Cloudflare Zero Trust replaces legacy security perimeters with our global edge, making the Internet faster and safer for teams around the world.

New in v2.0*:

  • R2: Cloudflare R2 Storage allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.
  • Pages: Cloudflare Pages allows frontend developers to quickly and easily build, collaborate on, and deploy websites.
  • Workers KV: Workers KV is a global, low-latency, key-value data store. It supports high read volumes with low latency, making it possible to build highly dynamic APIs and websites which respond as quickly as a cached static file would.
  • Workers Unbound: Workers Unbound is a serverless platform for compute-intensive workloads, supporting up to 30s CPU time (GA) and 15-minute cron triggers (beta).
  • Cloudflare for SaaS: Offers the benefits of Cloudflare’s SSL management to a SaaS company’s customers, allowing those customers to use their custom vanity domains while maintaining a branded customer experience, improved SEO rankings, and all the perks of dedicated certificates.
  • Advanced Certificate Manager (ACM): Advanced Certificate Manager is a flexible and customizable way to issue and manage certificates in Cloudflare. Customers use Advanced Certificate Manager when they want something more customizable than Universal SSL but still want the convenience of SSL certificate issuance and renewal.
  • Image Resizing: Resize images for a variety of device types and connections from a single-source master image. Images can be manipulated by dimensions, compression ratios, and format (WebP conversion where supported). It is a feature of Speed→Optimization on the Dashboard.
  • Images: Cloudflare Images lets you set up an image pipeline in minutes. Build a scalable image pipeline to store, resize, optimize, and deliver images in a fast and secure manner.
  • Argo Smart Routing: Argo Smart Routing improves Internet performance by intelligently routing end users through less congested and more reliable paths over the Internet using our network.
  • Spectrum: DDoS protection, firewalling features, and performance benefits for any TCP- or UDP-based application (such as gaming, FTP, and email).
  • Load Balancing: Improve application performance and availability by steering traffic away from unhealthy origin servers and dynamically distributing it to the most available and responsive server pools.
  • Rate Limiting: Protect your site or API from malicious traffic by blocking client IP addresses that hit a URL pattern and exceed a threshold you define.
  • Waiting Room: Cloudflare Waiting Room allows organizations to route excess users to a custom-branded waiting room, helping preserve customer experience and protect origin servers from being overwhelmed with requests.
  • Area 1 Email Security: A comprehensive email security solution that eliminates the risk from phishing, which is the root cause of damage in 95% of all cybersecurity incidents. Configuration with Google and Microsoft takes 5 minutes.

*Product-specific usage limits apply

With the expanded product basket, the Startup Plan v2.0 provides four times the value of the original program. As an added bonus, when new Cloudflare products are in beta, participants on the Startup Plan are welcome to share their use case and potentially receive early access.

How to join the Cloudflare Startup Program

Eligibility criteria for startups new to the program

To be eligible for the Startup Plan, your startup must be at an early stage (i.e., having raised less than $3 million) and enrolled in, or recently graduated from, a participating accelerator program.

If you are a founder in a participating accelerator program, navigate to the Cloudflare perk from your program’s vendor perk page and follow the instructions there. Those instructions will lead you to this plan page where you can select your program from the list.

If your accelerator program is not on the list, please email us at [email protected] and include the details of your perk manager. We’ll get in touch with them directly.

What if I’m already on the V1 Startup Plan? Can I get access to the products on the V2 plan?

Yes! Startups currently on the Startup Plan will automatically have the additional products added to their accounts in the coming weeks, no action required. This upgrade will apply for the remainder of the year that your company is on the Startup Plan.

If you have any questions about making the most of your plan, drop a line to [email protected].

Special Eligibility: Referral Program, Alumni Program, Workers Launchpad

Over our years at Cloudflare, we have had the privilege of working with some remarkable technologists who have gone on to work on extraordinary projects after their time at Cloudflare. Moreover, there are startup founders who interface with teams at Cloudflare, whether through participation in the Cloudflare Community or because they were former coworkers elsewhere.

These startups are now also eligible:

  • Workers Launchpad funding program: If you are building on Cloudflare Workers, simultaneously apply for the Workers Launchpad funding program here, which will make you eligible for introductions to leading investors, additional mentorship opportunities, and other perks.
  • Referral Program: “Referred by (include details in comments)” is now an option in the Select Accelerator dropdown menu on the Startup Plan landing page. Please include the email address of your contact at Cloudflare in the comments field. Your account will be upgraded once your contact validates the referral.
  • Alumni Program: “Cloudflare Alumni” is now also an option in the Select Accelerator dropdown menu.

Questions about the program? Drop us a line at [email protected], and we’ll be in touch.

The easiest way to build a modern SaaS application

Post Syndicated from Tanushree Sharma original https://blog.cloudflare.com/workers-for-platforms-ga/

The Software as a Service (SaaS) model has changed the way we work – 80% of businesses use at least one SaaS application. Instead of investing in building proprietary software or installing and maintaining on-prem licensed software, SaaS vendors provide businesses with out-of-the-box solutions.

SaaS has many benefits over the traditional software model: cost savings, continuous updates and scalability, to name a few. However, any managed solution comes with trade-offs. As a business, one of the biggest challenges in adopting SaaS tooling is loss of customization. Not every business uses software in the same way and as you grow as a SaaS company it’s not long until you get customers saying “if only I could do X”.

Enter Workers for Platforms – Cloudflare’s serverless functions offering for SaaS businesses. With Workers for Platforms, your customers can build custom logic to meet their requirements right into your application.

We’re excited to announce that Workers for Platforms is now in GA for all Enterprise customers! If you’re an existing customer, reach out to your Customer Success Manager (CSM) to get access. For new customers, fill out our contact form to get started.

The conundrum of customization

As a SaaS business invested in capturing the widest market, you want to build a universal solution that can be used by customers of different sizes, in various industries and regions. However, every one of your customers has a unique set of tools, vendors, and best practices. A generalized platform doesn’t always meet their needs.

For SaaS companies, once you get into the business of creating customizations yourself, it can be hard to keep up with seemingly never-ending requests. You want your engineering teams to focus on building out your core business instead of building and maintaining custom solutions for each of your customers’ use cases.

With Workers for Platforms, you can give your customers the ability to write code that customizes your software. This gives your customers the flexibility to meet their exact use case while also freeing up internal engineering time – it’s a win-win!

How is this different from Workers?

Workers is Cloudflare’s serverless execution environment that runs your code on Cloudflare’s global network. Workers is lightning fast and scalable, running in data centers in 275+ cities globally and serving requests from as close as possible to the end user. Workers for Platforms extends the power of Workers to our customers’ developers!

So, what’s new?

Dispatch Worker: As a platform customer, you want to have full control over how end developer code fits in with your APIs. A Dispatch Worker is written by our platform customers to run their own logic before dispatching (aka routing) to Workers written by end developers. In addition to routing, it can be used to run authentication, create boilerplate functions and sanitize responses.

User Workers: User Workers are written by end developers, that is, our customers’ developers. End developers can deploy User Workers to script automated actions, create integrations, or modify response payloads to return custom content. Unlike self-managed Function-as-a-Service (FaaS) options, with Workers for Platforms, end developers don’t need to worry about setting up and maintaining their code on any third-party platform. All they need to do is upload their code, and you – or rather, Cloudflare – take care of the rest.

Unlimited Scripts: Yes, you read that correctly! With hundreds of end developers or more, the existing 100-script limit for Workers won’t cut it for Workers for Platforms customers. Some of our Workers for Platforms customers even deploy a new script each time their end developers change their code, in order to maintain version control and easily revert to a previous state if a bug is deployed.

Dynamic Dispatch Namespaces: If you’ve used Workers before, you may be familiar with a feature we released earlier this year, Service Bindings. Service Bindings are a way for two Workers to communicate with each other. They allow developers to break up their applications into modules that can be chained together. Service Bindings explicitly link two Workers together, and they’re meant for use cases where you know exactly which Workers need to communicate with each other.

Service Bindings don’t work in the Workers for Platforms model because User Workers are uploaded ad hoc. Dynamic Dispatch Namespaces is our solution to this! A Dispatch Namespace is composed of a collection of User Workers. With Dispatch Namespaces, a Dispatch Worker can be used to call any User Worker in a namespace (similar to how Service Bindings work) but without needing to explicitly pre-define the relationship.

Read more about how to use these features below!

How to use Workers for Platforms

Dispatch Workers

Dispatch Workers are the entry point for requests to Workers in a Dispatch Namespace. A Dispatch Worker can run any functions ahead of User Workers, make a request to any User Worker in the Dispatch Namespace, and ultimately handle the routing to User Workers.

Dispatch Workers are created the same way as a regular Worker, except they need a Dispatch Namespace binding in the project’s wrangler.toml configuration file.

[[dispatch_namespaces]]
binding = "dispatcher"
namespace = "api-prod"

In the example below, this Dispatch Worker reads the subdomain from the hostname and calls the appropriate User Worker. Alternatively, you can use Workers KV, D1, or your data store of choice to map identifying parameters from an incoming request to a User Worker; a sketch of the KV approach follows the example.

export default {
  async fetch(request, env) {
    try {
      // parse the URL and read the subdomain to use as the User Worker's name
      let worker_name = new URL(request.url).host.split('.')[0]
      let user_worker = env.dispatcher.get(worker_name)
      return user_worker.fetch(request)
    } catch (e) {
      if (e.message == 'Error: Worker not found.') {
        // we tried to get a worker that doesn't exist in our dispatch namespace
        return new Response('', { status: 404 })
      }
      // this could be any other exception from `fetch()` *or* an exception
      // thrown by the called worker (e.g. if the dispatched worker has
      // `throw MyException()`, you could check for that here).
      return new Response(e.message, { status: 500 })
    }
  }
}
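The example above routes on the subdomain alone. As mentioned, you can instead keep the mapping in a data store. Here is a minimal sketch of the Workers KV approach; the KV binding name (CUSTOMER_WORKERS) and its hostname-to-script key scheme are our own assumptions for illustration:

export default {
  async fetch(request, env) {
    try {
      // look up which User Worker should handle this hostname
      // (assumes a KV namespace bound as CUSTOMER_WORKERS whose keys are
      // hostnames and whose values are User Worker script names)
      const hostname = new URL(request.url).hostname
      const worker_name = await env.CUSTOMER_WORKERS.get(hostname)
      if (worker_name === null) {
        return new Response('No User Worker registered for this hostname', { status: 404 })
      }
      const user_worker = env.dispatcher.get(worker_name)
      return user_worker.fetch(request)
    } catch (e) {
      return new Response(e.message, { status: 500 })
    }
  }
}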

Uploading User Workers

User Workers must be uploaded to a Dispatch Namespace through the Cloudflare API (wrangler support coming soon!). The code snippet below uses a simple HTML form to take in a script and customer ID and then uploads it to the Dispatch Namespace.

export default {
  async fetch(request: Request) {
    try {
      // on form submit, read the customer ID and script from the JSON body
      if (request.method === "POST") {
        const upload_obj: any = await request.json()
        await upload(upload_obj.customerID, upload_obj.script)
      }
      // render the form
      return new Response(html, {
        headers: {
          "Content-Type": "text/html"
        }
      })
    } catch (e) {
      // form or upload error
      return new Response(e.message, { status: 500 })
    }
  }
}

// minimal placeholder for the form markup, which POSTs a JSON body of
// the shape { customerID, script } back to this Worker
const html = `<!DOCTYPE html><p>Upload form goes here</p>`

async function upload(customerID: string, script: string) {
  const scriptName = customerID;
  const scriptContent = script;
  const accountId = "<ACCOUNT_ID>";
  const dispatchNamespace = "api-prod";
  const url = `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/dispatch/namespaces/${dispatchNamespace}/scripts/${scriptName}`;
  // construct and send the upload request to the Cloudflare API
  const response = await fetch(url, {
    method: "PUT",
    body: scriptContent,
    headers: {
      "Content-Type": "application/javascript",
      "X-Auth-Email": "<EMAIL>",
      "X-Auth-Key": "<API_KEY>"
    }
  });

  if (response.status != 200) {
    throw new Error(`Upload error: ${response.status} ${await response.text()}`);
  }
}

It’s that simple. With Dispatch Namespaces and Dispatch Workers, we’re giving you the building blocks to customize your SaaS applications. Along with the platform APIs, we’re also releasing a Workers for Platforms UI on the Cloudflare dashboard where you can view your Dispatch Namespaces, search scripts, and view analytics for User Workers.

To view an end to end example, check out our Workers for Platforms example application.

Get started today!

We’re releasing Workers for Platforms to all Cloudflare Enterprise customers. If you’re interested, reach out to your Customer Success Manager (CSM) to get access. To get started, take a look at our Workers for Platforms starter project and developer documentation.

We also have plans to release this down to the Workers Paid plan. Stay tuned on the Cloudflare Discord (channel name: workers-for-platforms-beta) for updates.

What’s next?

We’ve heard lots of great feature requests from our early Workers for Platforms customers. Here’s a preview of what’s coming next on the roadmap:

  • Fine-grained controls over User Workers: custom script limits, allowlist/blocklist for fetch requests
  • GraphQL and Logs: Metrics for User Workers by tag
  • Plug and play Platform Development Kit
  • Tighter integration with Cloudflare for SaaS custom domains

If you have specific feature requests in mind, please reach out to your CSM or get in touch through the Discord!

Stream Live is now Generally Available

Post Syndicated from Brendan Irvine-Broque original https://blog.cloudflare.com/stream-live-ga/

Today, we’re excited to announce that Stream Live is out of beta, available to everyone, and ready for production traffic at scale. Stream Live is a feature of Cloudflare Stream that allows developers to build live video features in websites and native apps.

Since its beta launch, developers have used Stream to broadcast live concerts from some of the world’s most popular artists directly to fans, build brand-new video creator platforms, operate a global 24/7 live OTT service, and more. While in beta, Stream has ingested millions of minutes of live video and delivered it to viewers all over the world.

Bring your big live events, ambitious new video subscription service, or the next mobile video app with millions of users — we’re ready for it.

Streaming live video at scale is hard

Live video uses a massive amount of bandwidth. For example, a one-hour live stream at 1080p at 8Mbps is 3.6GB. At typical cloud provider egress prices, even a little egress can break the bank.

Live video must be encoded on-the-fly, in real-time. People expect to be able to watch live video on their phone, while connected to mobile networks with less bandwidth, higher latency and frequently interrupted connections. To support this, live video must be re-encoded in real-time into multiple resolutions, allowing someone’s phone to drop to a lower resolution and continue playback. This can be complex (Which bitrates? Which codecs? How many?) and costly: running a fleet of virtual machines doesn’t come cheap.

Ingest location matters — Live streaming protocols like RTMPS send video over TCP. If a single packet is dropped or lost, the entire connection is brought to a halt while the packet is found and re-transmitted. This is known as “head of line blocking”. The further away the broadcaster is from the ingest server, the more network hops, and the more likely packets will be dropped or lost, ultimately resulting in latency and buffering for viewers.

Delivery location matters — Live video must be cached and served from points of presence as close to viewers as possible. The longer the network round trips, the more likely videos will buffer or drop to a lower quality.

Broadcasting protocols are in flux — The most widely used protocol for streaming live video, RTMPS, has been abandoned since 2012, and dates back to the era of Flash video in the early 2000s. A new emerging standard, SRT, is not yet supported everywhere. And WebRTC has only recently evolved into an option for high definition one-to-many broadcasting at scale.

The old way to solve this has been to stitch together separate cloud services from different vendors. One vendor provides excellent content delivery, but no encoding. Another provides APIs or hardware to encode, but leaves you to fend for yourself and build your own storage layer. As a developer, you have to learn, manage and write a layer of glue code around the esoteric details of video streaming protocols, codecs, encoding settings and delivery pipelines.

We built Stream Live to make streaming live video easy, like adding an <img> tag to a website. Live video is now a fundamental building block of Internet content, and we think any developer should have the tools to add it to their website or native app.

With Stream, you or your users stream live video directly to Cloudflare, and Cloudflare delivers video directly to your viewers. You never have to manage internal encoding, storage, or delivery systems — it’s just live video in and live video out.

Our network, our hardware = a solution only Cloudflare can provide

We’re not the only ones building APIs for live video — but we are the only ones with our own global network and hardware that we control and optimize for video. That lets us do things that others can’t, like sub-second glass-to-glass latency using RTMPS and SRT playback at scale.

Newer video codecs require specialized hardware encoders, and while others are beholden to the hardware limitations of public cloud providers, we’re already hard at work installing the latest encoding hardware in our own racks, so that you can deliver high resolution video with even less bandwidth. Our goal is to make what is otherwise only available to video giants available directly to you — stay tuned for some exciting updates on this next week.

Most providers limit how many destinations you can restream or simulcast to. Because we operate our own network, we’ve never even worried about this, and let you restream to as many destinations as you need.

Operating our own network lets us price Stream based on minutes of video delivered — unlike others, we don’t pay someone else for bandwidth and then pass along their costs to you at a markup. The status quo of charging for bandwidth or per-GB storage penalizes you for delivering or storing high resolution content. If you ask why a few times, most of the time you’ll discover that others are pushing their own cost structures on to you.

Encoding video is compute-intensive, delivering video is bandwidth intensive, and location matters when ingesting live video. When you use Stream, you don’t need to worry about optimizing performance, finding a CDN, and/or tweaking configuration endlessly. Stream takes care of this for you.

Free your live video from the business models of big platforms

Nearly every business uses live video, whether to engage with customers, broadcast events or monetize live content. But few have the specialized engineering resources to deliver live video at scale on their own, and wire together multiple low level cloud services. To date, many of the largest content creators have been forced to depend on a shortlist of social media apps and streaming services to deliver live content at scale.

Unlike incumbents that force you to put your live video in their apps and services and fit their business models, Stream gives you full control of your live video, on your website or app, on any device, at scale, without pushing your users to someone else’s service.

Free encoding. Free ingestion. Free analytics. Simple per-minute pricing.

                   Others                  Stream
Encoding           $ per minute            Free
Ingestion          $ per GB                Free
Analytics          Separate product        Free
Live recordings    Minutes or hours later  Instant
Storage            $ per GB                Per minute stored
Delivery           $ per GB                Per minute delivered

Other platforms charge for ingestion and encoding. Many even force you to consider where you’re streaming to and from, the bitrate and frames per second of your video, and even which of their datacenters you’re using.

With Stream, encoding and ingestion are free. Other platforms charge for delivery based on bandwidth, penalizing you for delivering high quality video to your viewers. If you stream at a high resolution, you pay more.

With Stream, you don’t pay a penalty for delivering high resolution video. Stream’s pricing is simple — minutes of video delivered and stored. Because you pay per minute, not per gigabyte, you can stream at the ideal resolution for your viewers without worrying about bandwidth costs.

Other platforms charge separately for analytics, requiring you to buy another product to get metrics about your live streams.

With Stream, analytics are free. Stream provides an API and dashboard for both server-side and client-side analytics that can be queried on a per-video, per-creator, or per-country basis, among others. You can use analytics to identify which creators in your app have the most-viewed live streams, inform how much to bill your customers for their own usage, identify where content is going viral, and more.

Other platforms tack on live recordings or DVR mode as a separate add-on feature, and recordings only become available minutes or even hours after a live stream ends.

With Stream, live recordings are a built-in feature, made available instantly after a live stream ends. Once a live stream is available, it works just like any other video uploaded to Stream, letting you seamlessly use the same APIs for managing both pre-recorded and live content.

Build live video into your website or app in minutes

Cloudflare Stream enables you or your users to go live using the same protocols and tools that broadcasters big and small use to go live to YouTube or Twitch, but gives you full control over access and presentation of live streams.

Step 1: Create a live input

Create a new live input from the Stream Dashboard or use the Stream API:

Request

curl -X POST \
-H "Authorization: Bearer <YOUR_API_TOKEN>" \
-d '{"recording": { "mode": "automatic" } }' \
https://api.cloudflare.com/client/v4/accounts/<YOUR_CLOUDFLARE_ACCOUNT_ID>/stream/live_inputs

Response

{
  "result": {
    "uid": "<UID_OF_YOUR_LIVE_INPUT>",
    "rtmps": {
      "url": "rtmps://live.cloudflare.com:443/live/",
      "streamKey": "<PRIVATE_RTMPS_STREAM_KEY>"
    },
    ...
  }
}

Step 2: Use the RTMPS key with any live broadcasting software, or in your own native app

Copy the RTMPS URL and key, and use them with your live streaming application. We recommend using Open Broadcaster Software (OBS) to get started, but any RTMPS- or SRT-compatible software should be able to interoperate with Stream Live.

Enter the Stream RTMPS URL and the Stream Key from Step 1 into your broadcasting software.

Step 3: Preview your live stream in the Cloudflare Dashboard

In the Stream Dashboard, within seconds of going live, you will see a preview of what your viewers will see, along with the real-time connection status of your live stream.

Step 4: Add live video playback to your website or app

Stream your video using our Stream Player embed code, or use any video player that supports HLS or DASH; live streams can be played both on websites and in native iOS and Android apps.
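For a web page, one option is the open-source hls.js library, which plays HLS via Media Source Extensions in browsers that lack native support. This is a rough sketch rather than an official Stream snippet; the manifest URL below is a placeholder for the one from your live input:

import Hls from 'hls.js';

const manifestUrl = 'https://customer-<CODE>.cloudflarestream.com/<LIVE_INPUT_UID>/manifest/video.m3u8';
const video = document.getElementById('live-player');

if (video.canPlayType('application/vnd.apple.mpegurl')) {
  // Safari plays HLS natively
  video.src = manifestUrl;
} else if (Hls.isSupported()) {
  // elsewhere, hls.js fetches the manifest and feeds segments to the <video> element
  const hls = new Hls();
  hls.loadSource(manifestUrl);
  hls.attachMedia(video);
}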

For example, on iOS, all you need to do is provide AVPlayer with the URL to the HLS manifest for your live input, which you can find via the API or in the Stream Dashboard.

import SwiftUI
import AVKit

struct MyView: View {
    // Change the url to the Cloudflare Stream HLS manifest URL
    private let player = AVPlayer(url: URL(string: "https://customer-9cbb9x7nxdw5hb57.cloudflarestream.com/8f92fe7d2c1c0983767649e065e691fc/manifest/video.m3u8")!)

    var body: some View {
        VideoPlayer(player: player)
            .onAppear() {
                player.play()
            }
    }
}

struct MyView_Previews: PreviewProvider {
    static var previews: some View {
        MyView()
    }
}

To run a complete example app in Xcode, follow this guide in the Stream Developer docs.

Companies are building whole new video platforms on top of Stream

Developers want control, but most don’t have time to become video experts. And even video experts building innovative new platforms don’t want to manage live streaming infrastructure.

Switcher Studio’s whole business is live video — their iOS app allows creators and businesses to produce their own branded, multi-camera live streams. Switcher uses Stream as an essential part of their live streaming infrastructure. In their own words:

“Since 2014, Switcher has helped creators connect to audiences with livestreams. Now, our users create over 100,000 streams per month. As we grew, we needed a scalable content delivery solution. Cloudflare offers secure, fast delivery, and even allowed us to offer new features, like multistreaming. Trusting Cloudflare Stream lets our team focus on the live production tools that make Switcher unique.”

While Stream Live has been in beta, we’ve worked with many customers like Switcher, where live video isn’t just one product feature, it is the core of their product. Even as experts in live video, they choose to use Stream, so that they can focus on the unique value they create for their customers, leaving the infrastructure of ingesting, encoding, recording and delivering live video to Cloudflare.

Start building live video into your website or app today

It takes just a few minutes to sign up and start your first live stream using the Cloudflare Dashboard, with no code required to get started and APIs ready for when you want to build your own live video features. Give it a try; we’re ready for you, no matter the size of your audience.

Using Cloudflare R2 as an apt/yum repository

Post Syndicated from Sudarsan Reddy original https://blog.cloudflare.com/using-cloudflare-r2-as-an-apt-yum-repository/

In this blog post, we’re going to talk about how we use Cloudflare R2 as an apt/yum repository to bring cloudflared (the Cloudflare Tunnel daemon) to your Debian/Ubuntu and CentOS/RHEL systems and how you can do it for your own distributable in a few easy steps!

I work on Cloudflare Tunnel, a product which enables customers to quickly connect their private networks and services through the Cloudflare global network without needing to expose any public IPs or ports through their firewall. Cloudflare Tunnel is managed for users by cloudflared, a tool that runs on the same network as the private services. It proxies traffic for these services via Cloudflare, and users can then access these services securely through the Cloudflare network.

Our connector, cloudflared, was designed to be lightweight and flexible enough to be effectively deployed on a Raspberry Pi, a router, your laptop, or a server running on a data center with applications ranging from IoT control to private networking. Naturally, this means cloudflared comes built for a myriad of operating systems, architectures and package distributions: You could download the appropriate package from our GitHub releases, brew install it or apt/yum install it (https://pkg.cloudflare.com).

In the rest of this blog post, I’ll use cloudflared as an example of how to create an apt/yum repository backed by Cloudflare’s object storage service R2. Note that this can be any binary/distributable. I simply use cloudflared as an example because this is something we recently did and actually use in production.

How apt-get works

Let’s see what happens when you run something like this on your terminal.

$ apt-get install cloudflared

Let’s also assume that apt sources were already added like so:

  $ echo 'deb [signed-by=/usr/share/keyrings/cloudflare-main.gpg] https://pkg.cloudflare.com/cloudflared buster main' | sudo tee /etc/apt/sources.list.d/cloudflared.list


$ apt-get update

From the sources.list entry above, apt first looks up the Release file (or InRelease if the repository is signed, as cloudflared’s is, but we will ignore this for the sake of simplicity).

A Release file lists the supported architectures and the md5, sha1, and sha256 checksums of the repository’s index files. It looks something like this:

$ curl https://pkg.cloudflare.com/cloudflared/dists/buster/Release
Origin: cloudflared
Label: cloudflared
Codename: buster
Date: Thu, 11 Aug 2022 08:40:18 UTC
Architectures: amd64 386 arm64 arm armhf
Components: main
Description: apt repository for cloudflared - buster
MD5Sum:
 c14a4a1cbe9437d6575ae788008a1ef4 549 main/binary-amd64/Packages
 6165bff172dd91fa658ca17a9556f3c8 374 main/binary-amd64/Packages.gz
 9cd622402eabed0b1b83f086976a8e01 128 main/binary-amd64/Release
 5d2929c46648ea8dbeb1c5f695d2ef6b 545 main/binary-386/Packages
 7419d40e4c22feb19937dce49b0b5a3d 371 main/binary-386/Packages.gz
 1770db5634dddaea0a5fedb4b078e7ef 126 main/binary-386/Release
 b0f5ccfe3c3acee38ba058d9d78a8f5f 549 main/binary-arm64/Packages
 48ba719b3b7127de21807f0dfc02cc19 376 main/binary-arm64/Packages.gz
 4f95fe1d9afd0124a32923ddb9187104 128 main/binary-arm64/Release
 8c50559a267962a7da631f000afc6e20 545 main/binary-arm/Packages
 4baed71e49ae3a5d895822837bead606 372 main/binary-arm/Packages.gz
 e472c3517a0091b30ab27926587c92f9 126 main/binary-arm/Release
 bb6d18be81e52e57bc3b729cbc80c1b5 549 main/binary-armhf/Packages
 31fd71fec8acc969a6128ac1489bd8ee 371 main/binary-armhf/Packages.gz
 8fbe2ff17eb40eacd64a82c46114d9e4 128 main/binary-armhf/Release
SHA1:
…
SHA256:
…

Depending on your system’s architecture, apt picks the appropriate Packages location. A Packages file contains metadata about the binary apt wants to install, including its location and checksum. Let’s say it’s an amd64 machine. This means we’ll go here next:

$ curl https://pkg.cloudflare.com/cloudflared/dists/buster/main/binary-amd64/Packages
Package: cloudflared
Version: 2022.8.0
License: Apache License Version 2.0
Vendor: Cloudflare
Architecture: amd64
Maintainer: Cloudflare <[email protected]>
Installed-Size: 30736
Homepage: https://github.com/cloudflare/cloudflared
Priority: extra
Section: default
Filename: pool/main/c/cloudflared/cloudflared_2022.8.0_amd64.deb
Size: 15286808
SHA256: c47ca10a3c60ccbc34aa5750ad49f9207f855032eb1034a4de2d26916258ccc3
SHA1: 1655dd22fb069b8438b88b24cb2a80d03e31baea
MD5sum: 3aca53ccf2f9b2f584f066080557c01e
Description: Cloudflare Tunnel daemon

Notice the Filename field. This is where apt gets the deb from before running a dpkg command on it. What all of this means is that an apt repository (and a yum repository, too) is basically a structured file system of mostly plaintext index files plus the debs themselves.

The deb file is a Debian software package that contains two things: installable data (in our case, the cloudflared binary) and metadata about the installable.

Building our own apt repository

Now that we know what happens when an apt-get install runs, let’s work our way backwards to construct the repository.

Create a deb file out of the binary

Note: It is optional, but recommended, to sign your packages. See the section about how apt verifies packages here.

Debian files can be built by the dpkg-buildpackage tool in a linux or Debian environment. We use a handy command line tool called fpm (https://fpm.readthedocs.io/en/v1.13.1/) to do this because it works for both rpm and deb.

$ fpm -s dir -t deb -C /path/to/project --name <project_name> --version <version>

This yields a .deb file.

Create plaintext files needed by apt to lookup downloads

Again, these files can be built by hand, but there are multiple tools available to generate them. We use reprepro.

Using it is as simple as:

$ reprepro includedeb buster <path/to/the/deb>

reprepro neatly creates a bunch of folders in the structure we looked into above.
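Based on the paths we fetched earlier, the layout reprepro produces looks roughly like this:

dists/
  buster/
    Release
    InRelease
    main/
      binary-amd64/
        Packages
        Packages.gz
        Release
      binary-arm64/
        ...
pool/
  main/
    c/
      cloudflared/
        cloudflared_2022.8.0_amd64.deb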

Upload them to Cloudflare R2

We use Cloudflare R2 as the host for this structured file system. R2 lets us upload and serve objects in this structured format. This means we just need to upload the files in the same structure in which reprepro created them.

Here is a copyable example of how we do it for cloudflared.
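If you are scripting your own upload instead, one option (our suggestion, not the exact script linked above) is R2’s S3-compatible API with the AWS CLI, pointing the endpoint at your account:

$ aws s3 cp --recursive ./dists s3://my-apt-bucket/dists \
    --endpoint-url https://<ACCOUNT_ID>.r2.cloudflarestorage.com
$ aws s3 cp --recursive ./pool s3://my-apt-bucket/pool \
    --endpoint-url https://<ACCOUNT_ID>.r2.cloudflarestorage.com

Here, my-apt-bucket is a placeholder bucket name, and your R2 access key and secret are expected in the AWS CLI credentials.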

Serve them from an R2 worker

For fine-grained control, we’ll write a very lightweight Cloudflare Worker to serve as the front-end API that apt talks to. For an apt repository, we only need it to handle GET requests.

Here’s an example of how to do this: https://developers.cloudflare.com/r2/examples/demo-worker/
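As a rough sketch, assuming an R2 bucket bound to the Worker as REPO_BUCKET (a placeholder name of our own), such a read-only Worker can be as small as:

export default {
  async fetch(request, env) {
    // apt only needs to GET files from the repository
    if (request.method !== 'GET') {
      return new Response('Method not allowed', { status: 405 })
    }
    // map the URL path, e.g. /dists/buster/Release, to an R2 object key
    const key = new URL(request.url).pathname.slice(1)
    const object = await env.REPO_BUCKET.get(key)
    if (object === null) {
      return new Response('Not found', { status: 404 })
    }
    const headers = new Headers()
    object.writeHttpMetadata(headers)
    headers.set('etag', object.httpEtag)
    return new Response(object.body, { headers })
  }
}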

Putting it all together

Here is a handy script we use to push cloudflared to R2 ready for apt/yum downloads and includes signing and publishing the pubkey.

And that’s it! You now have your own apt/yum repo with no server to maintain, no egress fees for downloads, and the Cloudflare global network protecting it against high request volumes. You can automate many of these steps to make them part of a release process.

Today, this is how cloudflared is distributed on the apt and yum repositories: https://pkg.cloudflare.com/

Announcing our Spring Developer Challenge

Post Syndicated from Albert Zhao original https://blog.cloudflare.com/announcing-our-spring-developer-challenge/

After many announcements from Platform Week, we’re thrilled to make one more: our Spring Developer Challenge!

The theme for this challenge is building real-time, collaborative applications — one of the most exciting use-cases emerging in the Cloudflare ecosystem. This is an opportunity for developers to merge their ideas with our newly released features, earn recognition on our blog, and take home our best swag yet.

Here’s a list of our tools that will get you started:

  • Workers can either be powerful middleware connecting your app to different APIs and an origin, or they can be the entire application itself. We recommend using Worktop, a popular framework for Workers, if you need TypeScript support, routing, and well-organized submodules. Worktop can also complement your existing app even if it already uses a framework, such as Svelte.
  • Cloudflare Pages makes it incredibly easy to deploy sites, which you can make into truly dynamic apps by putting a Worker in front or using the Pages Functions (beta).
  • Durable Objects are great for collaborative apps because you can use WebSockets while coordinating state at the edge, as seen in this chat demo (there is also a small sketch after this list). To help scale any load, we also recommend Durable Object Groups.
  • Workers KV provides a global key-value data store that securely stores and quickly serves data across Cloudflare’s network. R2 allows you to store enormous amounts of data without trapping you with costly egress services.
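To make the Durable Objects point concrete, here is a minimal sketch of a Durable Object that accepts WebSocket connections and broadcasts every message to all connected clients. The class name and the details are ours, not a prescribed pattern:

export class ChatRoom {
  constructor(state, env) {
    // connected WebSocket sessions, held in memory by this single object
    this.sessions = []
  }

  async fetch(request) {
    // each client connects with an Upgrade: websocket request
    if (request.headers.get('Upgrade') !== 'websocket') {
      return new Response('Expected a WebSocket', { status: 426 })
    }
    const [client, server] = Object.values(new WebSocketPair())
    server.accept()
    this.sessions.push(server)
    // broadcast every incoming message to all connected clients
    server.addEventListener('message', (event) => {
      for (const session of this.sessions) {
        session.send(event.data)
      }
    })
    server.addEventListener('close', () => {
      this.sessions = this.sessions.filter((s) => s !== server)
    })
    return new Response(null, { status: 101, webSocket: client })
  }
}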

Last year, our Developer Spotlight series highlighted how developers around the world built entire applications on Cloudflare. Our Discord server maintained that momentum with users demonstrating that any type of application can be built. Need a way to organize thousands of lines of JSON? JSON Hero, built with Remix and deployed with Workers, provides an incredibly readable UI for your JSON files. Trying to deploy a GraphQL server for your app that scales? helix-flare deploys a GraphQL server easily through Workers and uses Durable Objects to coordinate data.

We hope developers continue to explore the boundaries of what they can build on Cloudflare as our platform evolves. During our Summer Developer Challenge in 2021, we received over 1,200 submissions that revealed you can build almost any app imaginable with Workers, Pages, and the rest of the developer ecosystem. We sent out hundreds of swag boxes to participants to show our appreciation. The ensuing unboxing videos on Twitter and YouTube thrilled our team.

This year’s Spring Developer Challenge is all about making real-time, collaborative apps such as chat rooms, games, web-based editing tools, or anything else in your imagination! Here are the rules:

  • You must be at least 18 years old to participate
  • You can work in teams of up to 10 people per submission
  • The deadline to submit your repo is May 24

Enter the challenge by going to this site.

As you build your app, join our Discord if you or your team need any help. We will be enthusiastically reviewing submissions, promoting them on Twitter, and sending out swag boxes.

If you’re new to Cloudflare or have an exciting idea as a developer, this is your opportunity to see how far our platform has evolved and get rewarded for it!

Announcing the Cloudflare Images Sourcing Kit

Post Syndicated from Paulo Costa original https://blog.cloudflare.com/cloudflare-images-sourcing-kit/

When we announced Cloudflare Images to the world, we introduced a way to store images within the product and help customers move away from the egress fees incurred when using remote sources for their deliveries via Cloudflare.

To store the images in Cloudflare, customers can upload them via UI with a simple drag and drop, or via API for scenarios with a high number of objects for which scripting their way through the upload process makes more sense.

To create flexibility in how you import images, we’ve recently also added the ability to upload via URL or to define custom names and paths for your images, allowing a simple mapping between customer repositories and the objects in Cloudflare. It’s also possible to serve from a custom hostname, which creates flexibility in how your end users see the path, improves delivery performance by removing the need for additional TLS negotiations, and improves your brand recognition through URL consistency.

Still, there was no simple way to tell our product: “Tens of millions of images are at this repository URL. Go and grab them all for me.”

In some scenarios, our customers have buckets with millions of images to upload to Cloudflare Images. Their goal is to migrate all objects to Cloudflare through a one-time process, allowing them to drop the external storage altogether.

In another common scenario, different departments in larger companies use independent systems configured with varying storage repositories, all of which they feed at specific times with uneven upload volumes. It would be best if they could reuse source definitions to pull all those new images into Cloudflare, keeping the portfolio up to date while not paying egregious egress fees by serving the public directly from those multiple storage providers.

These situations required the upload process to Cloudflare Images to include logistical coordination and scripting knowledge. Until now.

Announcing the Cloudflare Images Sourcing Kit

Today, we are happy to share with you our Sourcing Kit, where you can define one or more sources containing the objects you want to migrate to Cloudflare Images.

But, what exactly is Sourcing? In industries like manufacturing, it implies a number of operations, from selecting suppliers, to vetting raw materials and delivering reports to the process owners.

So, we borrowed that definition and translated it into a Cloudflare Images set of capabilities allowing you to:

  1. Define one or multiple repositories of images to bulk import;
  2. Reuse those sources and import only new images;
  3. Make sure that only actual usable images are imported, and not other objects or file types that exist in that source;
  4. Define the target path and filename for imported images;
  5. Obtain logs for the bulk operations.

The new kit does it all. So let’s go through it.

How the Cloudflare Images Sourcing Kit works

In the Cloudflare Dashboard, you will soon find the Sourcing Kit under Images.

In it, you will be able to create a new source definition, view existing ones, and view the status of the last operations.

Clicking on the create button will launch the wizard that will guide you through the first bulk import from your defined source.

First, you will need to input the Name of the Source and the URL for accessing it. You’ll be able to save the definitions and reuse the source whenever you wish.

After running the necessary validations, you’ll be able to define the rules for the import process.

The first option allows an optional Path Prefix. Defining a prefix gives the images uploaded from this particular source a unique identifier, differentiating them from images imported from other sources.

The naming rule in place already respects the source image’s name and path, so let’s assume there’s a puppy image to be retrieved at:

https://my-bucket.s3.us-west-2.amazonaws.com/folderA/puppy.png

When imported without any Path Prefix, you’ll find the image at

https://imagedelivery.net/<AccountId>/folderA/puppy.png

Now, you might want to create an additional Path Prefix to identify the source, for example by mentioning that this bucket is from the Technical Writing department. In the puppy case, the result would be:

https://imagedelivery.net/<AccountId>/techwriting/folderA/puppy.png

Custom Path prefixes also provide a way to prevent name clashes coming from other sources.

Still, there will be times when customers don’t want to use them. And, when reusing the source to import images, a clash between identical path and filename destinations might occur.

By default, we don’t overwrite existing images, but we allow you to select that option to refresh the catalog present in Cloudflare.

Once these inputs are defined, a click on the Create and start migration button at the bottom will trigger the upload process.

This action will show the final wizard screen, where the migration status is displayed. The progress log will report any errors obtained during the upload and is also available to download.

You can reuse, edit, or delete source definitions when no operations are running, and at any point, from the home page of the kit, you can check the status of and return to the ongoing or most recent migration report.

What’s next?

With the Beta version of the Cloudflare Images Sourcing Kit, we will allow you to define AWS S3 buckets as a source for the imports. In the following versions, we will enable definitions for other common repositories, such as the ones from Azure Storage Accounts or Google Cloud Storage.

And while we’re aiming for this to be a simple UI, we also plan to make everything available through CLI: from defining the repository URL to starting the upload process and retrieving a final report.

Apply for the Beta version

We will be releasing the Beta version of this kit in the following weeks, allowing you to source your images from third party repositories to Cloudflare.

If you want to be the first to use Sourcing Kit, request to join the waitlist on the Cloudflare Images dashboard.

Send email using Workers with MailChannels

Post Syndicated from Erwin van der Koogh original https://blog.cloudflare.com/sending-email-from-workers-with-mailchannels/

Here at Cloudflare we often talk about HTTP and related protocols as we work to help build a better Internet. However, the Simple Mail Transfer Protocol (SMTP) — used to send emails — is still a massive part of the Internet too.

Even though SMTP is turning 40 years old this year, most businesses still rely on email to validate user accounts, send notifications, announce new features, and more.

Sending an email is simple from a technical standpoint, but getting an email actually delivered to an inbox can be extremely tricky. Because of the enormous amount of spam that is sent every single day, all major email providers are very wary of things like new domains and IP addresses that start sending emails.

That is why we are delighted to announce a partnership with MailChannels. MailChannels has created an email sending service specifically for Cloudflare Workers that removes all the friction associated with sending emails. To use their service, you do not need to validate a domain or create a separate account. MailChannels filters spam before sending out an email, so you can feel safe putting user-submitted content in an email and be confident that it won’t ruin your domain reputation with email providers. But the absolute best part? Thanks to our friends at MailChannels, it is completely free to send email.

In the words of their CEO Ken Simpson: “Cloudflare Workers and Pages are changing the game when it comes to ease of use and removing friction to get started. So when we sat down to see what friction we could remove from sending out emails, it turns out that with our incredible anti-spam and anti-phishing, the answer is “everything”. We can’t wait to see what applications the community is going to build on top of this.”

The only constraint currently is that the integration only works when the request comes from a Cloudflare IP address. So it won’t work yet when you are developing on your local machine or running a test on your build server.

First let’s walk you through how to send out your first email using a Worker.

export default {
  async fetch(request) {
    const send_request = new Request('https://api.mailchannels.net/tx/v1/send', {
      method: 'POST',
      headers: {
        'content-type': 'application/json',
      },
      body: JSON.stringify({
        personalizations: [
          {
            to: [{ email: '[email protected]', name: 'Test Recipient' }],
          },
        ],
        from: {
          email: '[email protected]',
          name: 'Workers - MailChannels integration',
        },
        subject: 'Look! No servers',
        content: [
          {
            type: 'text/plain',
            value: 'And no email service accounts and all for free too!',
          },
        ],
      }),
    })
    // actually send the email and surface the MailChannels response
    const resp = await fetch(send_request)
    return new Response(await resp.text(), { status: resp.status })
  },
}

That is all there is to it. You can modify the example to make it send whatever email you want.

The MailChannels integration makes it easy to send emails to and from anywhere with Workers. However, we also wanted to make it easier to send emails to yourself from a form on your website. This is perfect for quickly and painlessly setting up pages such as “Contact Us” forms, landing pages, and sales inquiries.

Built on the Pages Plugins framework that we announced earlier this week, the MailChannels Pages plugin allows other people to email you without exposing your email address.

The only thing you need to do is copy and paste the following code snippet in your /functions/_middleware.ts file. Now, every form that has the data-static-form-name attribute will automatically be emailed to you.

import mailchannelsPlugin from "@cloudflare/pages-plugin-mailchannels";

export const onRequest = mailchannelsPlugin({
  personalizations: [
    {
      to: [{ name: "ACME Support", email: "[email protected]" }],
    },
  ],
  from: { name: "Enquiry", email: "[email protected]" },
  respondWith: () =>
    new Response(null, {
      status: 302,
      headers: { Location: "/thank-you" },
    }),
});

Here is an example of what such a form would look like. You can make the form as complex as you like, the only thing it needs is the data-static-form-name attribute. You can give it any name you like to be able to distinguish between different forms.

<!DOCTYPE html>
<html>
  <body>
    <h1>Contact</h1>
    <form data-static-form-name="contact">
      <div>
        <label>Name<input type="text" name="name" /></label>
      </div>
      <div>
        <label>Email<input type="email" name="email" /></label>
      </div>
      <div>
        <label>Message<textarea name="message"></textarea></label>
      </div>
      <button type="submit">Send!</button>
    </form>
  </body>
</html>

So, as you can see, there is no barrier left when it comes to sending out emails. You can copy and paste the above Worker or Pages code into your projects and immediately start sending email for free.

If you have any questions about using MailChannels in your Workers, or want to learn more about Workers in general, please join our Cloudflare Developer Discord server.

Route to Workers, automate your email processing

Post Syndicated from Joao Sousa Botto original https://blog.cloudflare.com/announcing-route-to-workers/

Cloudflare Email Routing has quickly grown to a few hundred thousand users, and we’re incredibly excited by the number of feature requests that reach our product team every week. We hear you, we love the feedback, and we want to give you all that you’ve been asking for. What we don’t like is making you wait, or making you feel like your needs are too unique to be addressed.

That’s why we’re taking a different approach – we’re giving you the power tools that you need to implement any logic you can dream of to process your emails in the fastest, most scalable way possible.

Today we’re announcing Route to Workers, for which we’ll start a closed beta soon. You can join the waitlist today.

How this works

When using Route to Workers, your Email Routing rules can have a Worker process the messages reaching any of your custom email addresses.

Even if you haven’t used Cloudflare Workers before, we are making onboarding as easy as can be. You can start creating Workers straight from the Email Routing dashboard, with just one click.

After clicking Create, you will be able to choose a starter that allows you to get up and running with minimal effort. Starters are templates that pre-populate your Worker with the code you would write for popular use cases such as creating a blocklist or allowlist, content-based filtering, tagging messages, or pinging you on Slack for urgent emails.

You can then use the code editor to make your new Worker process emails in exactly the way you want it to – the options are endless.

And for those of you who prefer to jump right into writing your own code, you can go straight to the editor without using a starter. You can write Workers in a language you likely already know. Cloudflare built Workers to execute JavaScript and WebAssembly and has continuously added support for new languages.

The Workers you’ll use for processing emails are just regular Workers that listen to incoming events, implement some logic, and reply accordingly. You can use all the features that a normal Worker would.

The main difference being that instead of:

export default {
  async fetch(request, env, ctx) {
    handleRequest(request);
  }
}

You’ll have:

export default {
  async email(message, env, ctx) {
    handleEmail(message);
  }
}

The new `email` event will provide you with the “from” and “to” fields, the full headers, and the raw body of the message. You can then use them in any way that fits your use case, including calling other APIs and orchestrating complex decision workflows. In the end, you can decide what action to take, including rejecting the email or forwarding it to one of your Email Routing destination addresses.

With these capabilities you can easily create logic that, for example, only accepts messages coming from one specific address and, when one matches the criteria, forwards to one or more of your verified destination addresses while also immediately alerting you on Slack. Code for such feature could be as simple as this:

export default {
   async email(message, env, ctx) {
       switch (message.to) {
           case "[email protected]":
               // ping Slack about the new message
               await fetch("https://webhook.slack/notification", {
                   body: `Got a marketing email from ${ message.from }, subject: ${ message.headers.get("subject") }`,
               });
               // sendEmail is an illustrative helper that forwards the message
               // to the given verified destination addresses
               sendEmail(message, [
                   "marketing@corp",
                   "sales@corp",
               ]);
               break;

           default:
               message.reject("Unknown address");
       }
   },
};

Route to Workers enables everyone to programmatically process their emails and use them as triggers for any other action. We think this is pretty powerful.

Process up to 100,000 emails/day for free

The first 100,000 Worker requests (or Email Triggers) each day are free, and paid plans start at just $5 per 10 million requests. You will be able to keep track of your Email Workers usage right from the Email Routing dashboard.

Join the Waitlist

You can join the waitlist today by going to the Email section of your dashboard, navigating to the Email Workers tab, and clicking the Join Waitlist button.

We are expecting to start the closed beta in just a few weeks, and can’t wait to hear about what you’ll build with it!

As usual, if you have any questions or feedback about Email Routing, please come see us in the Cloudflare Community and the Cloudflare Discord.

Announcing D1: our first SQL database

Post Syndicated from Rita Kozlov original https://blog.cloudflare.com/introducing-d1/

We announced Cloudflare Workers in 2017, giving developers access to compute on our network. We were excited about the possibilities this unlocked, but we quickly realized — most real world applications are stateful. Since then, we’ve delivered KV, Durable Objects, and R2, giving developers access to various types of storage.

Today, we’re excited to announce D1, our first SQL database.

While the wait for beta access shouldn’t be long (we’ll start letting folks in as early as June; sign up here), we’re excited to share some details of what’s to come.

Meet D1, the database designed for Cloudflare Workers

D1 is built on SQLite. Not only is SQLite the most ubiquitous database in the world, used by billions of devices a day, it’s also the first ever serverless database. Surprised? SQLite was so ahead of its time that it dubbed itself “serverless” before the term became associated with cloud services; back then, it literally meant “not involving a server”.

Since Workers itself runs between the server and the client, and was inspired by technology built for the client, SQLite seemed like the perfect fit for our first entry into databases.

So what can you build with D1? The true answer is “almost anything!”, but that might not be very helpful in sparking the imagination, so how about a live demo?

D1 Demo: Northwind Traders

You can check out an example of D1 in action by trying out our demo running here: northwind.d1sql.com.

If you’re wondering “Who are Northwind Traders?”, it’s the “Hello, World!” of databases, if you will: a sample database that Microsoft provided alongside Microsoft Access as a tutorial. It first appeared 25 years ago, in 1997, and you’ll find many examples of its use on the Internet.

It’s a typical business application, with a realistic schema, many foreign keys, and many different tables — a truly timeless representation of data.

When was the most recent order of Queso Cabrales shipped, and what ship was it on? You can quickly find out. Someone calling in about ordering some Chai? Good thing Exotic Liquids still has 39 units in stock, for just $18 each.

We welcome you to play and poke around, and answer any questions you have about Northwind Traders’ business.

The Northwind Traders demo also features a dashboard where you can find details and metrics about the D1 SQL queries happening behind the scenes.

What can you build with D1?

Going back to our original question from before the demo: what can you build with D1?

While you may not be running Northwind Traders yourself, you’re likely running a very similar piece of software somewhere. Even at the very core of Cloudflare’s service is a database. A SQL database filled with tables, materialized views and a plethora of stored procedures. Every time a customer interacts with our dashboard they end up changing state in that database.

The reality is that databases are everywhere. They are inside the web browser you’re reading this on, inside every app on your phone, and they are the storage for your bank transactions, travel reservations, business applications, and on and on. Our goal with D1 is to help you build anything from APIs to rich and powerful applications, including eCommerce sites, accounting software, SaaS solutions, and CRMs.

You can even combine D1 with Cloudflare Access and create internal dashboards and admin tools that are securely locked to only the people in your organization. The world, truly, is your oyster.

The D1 developer experience

We’ll talk about the capabilities and upcoming features further down in the post, but at the core of it, the strength of D1 is the developer experience: allowing you to go from nothing to a full-stack application in an instant. Think back to a tool you’ve used that made development feel magical — that’s exactly what we want developing with Workers and D1 to feel like.

To give you a sense of it, here’s what getting started with D1 will look like.

Creating your first D1 database

With D1, you will be able to create a database in just a few clicks — define the tables, then insert or upload some data. There’s no need to memorize any commands unless you want to.

Of course, if the command-line is your jam, earlier this week, we announced the new and improved Wrangler 2, the best tool for wrangling and deploying your Workers, and soon also your tool for deploying D1. Wrangler will also come with native D1 support, so you can create and manage databases with a few simple commands.
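A sketch of what those commands might look like (the command names here are illustrative; the exact CLI wasn’t final at the time of writing):

# Create a new database and list the databases on your account
wrangler d1 create my-database
wrangler d1 list

# Run a query against it
wrangler d1 execute my-database --command "SELECT count(*) FROM Product;"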

Accessing D1 from your Worker

Attaching D1 to your Worker is as easy as creating a new binding. Each D1 database you attach gets its own binding on the env parameter:

export default {
  async fetch(request, env, ctx) {
    const { pathname } = new URL(request.url)
    if (pathname === '/num-products') {
      // env.DB is the binding for the D1 database attached to this Worker
      const { result } = await env.DB.get(`SELECT count(*) AS num_products FROM Product;`)
      return new Response(`There are ${result.num_products} products in the D1 database!`)
    }
    // Fall through for any other path so the Worker always returns a Response
    return new Response('Not found', { status: 404 })
  }
}

Or, for a slightly more complex example, you can safely pass parameters from the URL to the database using a Router and parameterised queries (binding values instead of interpolating them into the SQL string protects you from injection attacks):

import { Router } from 'itty-router';
const router = Router();

router.get('/product/:id', async ({ params }, env) => {
  // The $id parameter is bound by the database, never interpolated into the SQL
  const { result } = await env.DB.get(
    `SELECT * FROM Product WHERE ID = $id;`,
    { $id: params.id }
  )
  return new Response(JSON.stringify(result), {
    headers: {
      'content-type': 'application/json'
    }
  })
})

// Catch-all so unmatched routes return a 404 instead of throwing
router.all('*', () => new Response('Not found', { status: 404 }));

export default {
  fetch: router.handle,
}

So what can you expect from D1?

First and foremost, we want you to be able to develop with D1 without having to worry about cost.

At Cloudflare, we don’t believe in holding your data hostage, so D1, like R2, will be free of egress charges. Our plan is to price D1 the way we price our storage products: a charge for the base storage plus the database operations performed.

But, again, we don’t want our customers worrying about the cost or what happens if their business takes off, and they need more storage or have more activity. We want you to be able to build applications as simple or complex as you can dream up. We will ensure that D1 costs less and performs better than comparable centralized solutions. The promise of serverless and a global network like Cloudflare’s is performance and lower cost driven by our architecture.

Here’s a small preview of the features in D1.

Read replication

With D1, we want to make it easy to store your whole application’s state in one place, so you can perform arbitrary queries across the full data set. That’s what makes relational databases so powerful.

However, we don’t think powerful should be synonymous with cumbersome. Most relational databases are huge, monolithic things and configuring replication isn’t trivial, so in general, most systems are designed so that all reads and writes flow back to a single instance. D1 takes a different approach.

With D1, we want to take configuration off your hands, and take advantage of Cloudflare’s global network. D1 will create read-only clones of your data, close to where your users are, and constantly keep them up-to-date with changes.
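Crucially, this shouldn’t require any changes to your code. As a minimal sketch, assuming the env.DB.get API shown above, the exact same read query could be answered by whichever copy of the data is closest:

// A fragment, not a full Worker: reads like this can be served by a
// nearby read-only clone, while writes keep flowing to the primary.
const { result } = await env.DB.get(`SELECT * FROM Product WHERE ID = 1;`)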

Batching

Many operations in an application don’t just generate a single query. If your logic is running in a Worker near your user, but each of these queries needs to execute on the database, then sending them across the wire one-by-one is extremely inefficient.

D1’s API includes batching: anywhere you can send a single SQL statement you can also provide an array of them, meaning you only need a single HTTP round-trip to perform multiple operations. This is perfect for transactions that need to execute and commit atomically:

// env is passed in so the function can reach the D1 binding
async function recordPurchase(env, userId, productId, amount) {
  const result = await env.DB.exec([
    [
      `UPDATE users SET balance = balance - $amount WHERE user_id = $user_id`,
      { $amount: amount, $user_id: userId },
    ],
    [
      `UPDATE product SET total_sales = total_sales + $amount WHERE product_id = $product_id`,
      { $amount: amount, $product_id: productId },
    ],
  ])
  return result
}

Embedded compute

But we’re going further. With D1, it will be possible to define a chunk of your Worker code that runs directly next to the database, giving you total control and maximum performance — each request first hits your Worker near your users, but depending on the operation, can hand off to another Worker deployed alongside a replica or your primary D1 instance to complete its work.
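None of this API is final, but as a purely hypothetical sketch (the DB_WORKER binding and the hand-off call are our illustration, not the announced design), the hand-off could feel much like calling a service binding:

export default {
  async fetch(request, env, ctx) {
    // Runs near the user: cheap validation and routing happen here.
    const { pathname } = new URL(request.url)
    if (pathname.startsWith('/report')) {
      // Hand query-heavy work off to a Worker deployed alongside a D1
      // replica or the primary (hypothetical binding, for illustration).
      return env.DB_WORKER.fetch(request)
    }
    return new Response('OK')
  }
}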

Backups and redundancy

There are few things as critical as the data stored in your main application’s database, so D1 will automatically save snapshots of your database to Cloudflare’s cloud storage service, R2, at regular intervals, with a one-click restoration process. And, since we’re building on the redundant storage of Durable Objects, your database can physically move locations as needed, resulting in self-healing from even the most catastrophic problems in seconds.

Importing and exporting data

While D1 already supports the SQLite API, making it easy for you to write your queries, you might also need data to run them on. If you’re not creating a brand-new application, you may want to import an existing dataset from another source or database, which is why we’ll be working on allowing you to bring your own data to D1.
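Sketching with the same illustrative CLI as above, a bulk import could be as simple as replaying a SQL dump into your database:

# Hypothetical: import an existing dataset from a SQL dump
wrangler d1 execute my-database --file ./northwind-dump.sql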

Likewise, one of SQLite’s advantages is its portability. If your application has a dedicated staging environment, say, you’ll be able to clone a snapshot of that data down to your local machine to develop against. And we’ll be adding more flexibility, such as the ability to create a new database with a set of test data for each new pull request on your Pages project.

What’s next?

This wouldn’t be a Cloudflare announcement if we didn’t conclude with “we’re just getting started!” — and it’s true! We are really excited about all the powerful possibilities that a database on our global network opens up.

Are you already thinking about what you’re going to build with D1 and Workers? Same. Give us your details, and we’ll give you access as soon as we can — look out for a beta invite from us starting as early as June 2022!