Tag Archives: Developers

A New Hope for Object Storage: R2 enters open beta

Post Syndicated from Greg McKeon original https://blog.cloudflare.com/r2-open-beta/

In September, we announced that we were building our own object storage solution: Cloudflare R2. R2 is our answer to egregious egress charges from incumbent cloud providers, letting developers store as much data as they want without worrying about the cost of accessing that data.

The response has been overwhelming.

  • Independent developers had bills too small for cloud providers to negotiate fair egress rates with them. Egress charges were the largest line-item on their cloud bills, strangling side projects and the new businesses they were building.
  • Large corporations had written off multi-cloud storage – and thus multi-cloud itself – as a pipe dream. They came to us with excitement, pitching new products that integrated data with partner companies.
  • Non-profit research organizations were paying massive egress fees just to share experiment data with one another. Egress fees were having a real impact on their ability to collaborate, driving silos between organizations and restricting the experiments and analyses they could run.

Cloudflare exists to help build a better Internet. Today, the Internet gets what it deserves: R2 is now in open beta.

Self-serve customers can enable R2 in the Cloudflare dashboard. Enterprise accounts can reach out to their CSM for onboarding.

Internal and external APIs

R2 has two APIs: an API accessible only from within Workers, which we call the In-Worker API, and an S3-compatible API, which exposes your bucket at a URL of the form https://<account_id>.r2.cloudflarestorage.com/<bucket>. Before you can make requests to R2, you’ll need to be authenticated — R2 buckets are private by default.

In-Worker API

With the in-Worker API, a bucket is “bound” to a specific Worker, which can then perform PUT, GET, DELETE and LIST operations against the bucket.
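
For illustration, here is a minimal sketch of an in-Worker handler, assuming a bucket has been bound to the Worker under the name MY_BUCKET (the binding name is an assumption for this example):

export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    const key = url.pathname.slice(1); // e.g. /my-object -> "my-object"

    switch (request.method) {
      case "PUT":
        // Store the request body under the given key.
        await env.MY_BUCKET.put(key, request.body);
        return new Response(`Put ${key}`);
      case "GET": {
        const object = await env.MY_BUCKET.get(key);
        if (object === null) {
          return new Response("Not found", { status: 404 });
        }
        return new Response(object.body);
      }
      case "DELETE":
        await env.MY_BUCKET.delete(key);
        return new Response(`Deleted ${key}`);
      default:
        return new Response("Method not allowed", { status: 405 });
    }
  }
}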

S3-compatible API

For the S3-compatible API, authentication is done the same way as on S3: SigV4 against an R2 URL. SigV4 signs requests using a secret key to authenticate them to R2. This means public access to R2 over the Internet is only possible today by hosting a Worker, connecting it to R2, and routing requests through it.

The easiest way to test the S3-compatible API is to use an S3 client. One of the most popular S3 clients is the boto3 SDK.

In Python, copy the following script and fill in the account_id, access_key_id, and secret_access_key fields with your R2 account credentials.

#!/usr/bin/env python
import boto3
import pprint
from botocore.client import Config

account_id = ''
access_key_id = ''
secret_access_key = ''
endpoint = f'https://{account_id}.r2.cloudflarestorage.com'

# Configure the client to talk to R2's S3-compatible endpoint.
cl = boto3.client(
    's3',
    aws_access_key_id=access_key_id,
    aws_secret_access_key=secret_access_key,
    endpoint_url=endpoint,
    config=Config(
        region_name='auto',
        s3={'addressing_style': 'path'},
        retries={'max_attempts': 0},
    ),
)

printer = pprint.PrettyPrinter().pprint

printer(cl.head_bucket(Bucket='some-bucket'))
printer(cl.create_bucket(Bucket='some-other-bucket'))
printer(cl.put_object(Bucket='some-bucket', Key='my-object', Body='some payload'))

Features

R2 comes with support for all basic create/read/update/delete S3 features through both of its APIs.

During the open beta period, we’re targeting R2 to sustain 1,000 GET operations per second and 100 PUT operations per second, per bucket. R2 supports objects up to approximately 5 TB in size, with individual parts limited to 5 GB of data.

R2 provides strongly consistent access to data. Once a PUT is confirmed by R2, future GET operations will always reflect the new key/value pair. The only exception to this is when deleting a bucket. For a short period of time following deletion, the bucket may still exist and continue to allow reads/writes.

Pricing

When we initially announced R2, we included preliminary pricing numbers. One of our main goals with R2 has been to serve the developers who can’t negotiate large discounts with cloud vendors. To that end, we’re also announcing a forever-free tier that lets developers start building on R2 with no charges at all.

R2 charges depend on the total volume of data stored and the type of operation performed on the data:

  • Storage is priced at $0.015 / GB, per month.
  • Class A operations (including writes and lists) cost $4.50 / million.
  • Class B operations cost $0.36 / million.

Class A operations tend to mutate state, such as creating a bucket, listing objects in a bucket, or writing an object. Class B operations tend to read existing state, for example reading an object from a bucket. You can find more information on pricing and a full list of operation types in the docs.

Of course, there is no charge for egress bandwidth from R2. You can access your bucket to your heart’s content.
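
As a worked example, a bucket holding 1 TB (1,000 GB) of data that serves 1 million Class A operations and 10 million Class B operations in a month would cost 1,000 × $0.015 + 1 × $4.50 + 10 × $0.36 = $15.00 + $4.50 + $3.60 = $23.10 for that month, regardless of how much of that data is downloaded.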

R2’s forever-free tier includes:

  • 10 GB-months of stored data
  • 1,000,000 Class A operations, per month
  • 10,000,000 Class B operations, per month

Free usage resets each month. While in the open beta phase, R2 usage over the free tier will be billed.

Future plans

We’ve spent the past six months in closed beta with a number of design partners, building out our storage solution. Backed by Durable Objects, R2’s novel architecture delivers both high availability and consistent performance.

While we’ve made great progress on R2, we still have plenty left to build in the coming months.

Improving performance

Our first priority is to improve performance and reliability. While we’ve thrown internal usage and our design partners’ demands at R2, there’s no substitute for live production traffic.

During the open beta period, R2 can sustain a maximum of 1,000 GET operations per second and 100 PUT operations per second, per bucket. We’ll look to raise these limits as we get comfortable operating the system. If you have higher needs, reach out to us!

When you create a bucket, you won’t see a region selector. Our vision for R2 includes automatically globally distributed storage, where R2 seamlessly places each object into the storage region closest to where the request comes from. Today, R2 primarily stores data in North America, which can lead to higher latencies when accessing content from other regions. We’ll first look to address this by adding additional regions where objects can be created, before adding automatic migration of existing objects across regions. Similar to what we’ve built with jurisdictional restrictions for Durable Objects, we’ll also enable restricting where an R2 bucket places data to comply with privacy regulations.

Expanding R2’s feature set

We’ll then focus on expanding R2 capabilities beyond the basic S3 API. In the near term, we’re focused on delivering:

  • Support for TTLs, so data can automatically be deleted from buckets over time.
  • Public buckets, so a bucket can be exposed to the Internet without writing a Worker.
  • Pre-signed URL support, which delegates read and write access for a specific key to a token.
  • Integration with Cloudflare’s cache, to scale read requests and provide global distribution of data.

If you have additional feature requests that aren’t listed above, we want to hear from you! Reach out and let us know what you need to make R2 your new, zero-cost egress object store.

Announcing our Spring Developer Speaker Series

Post Syndicated from Kristian Freeman original https://blog.cloudflare.com/announcing-our-spring-developer-speaker-series/

We love developers.

Late last year, we hosted Full Stack Week, with a focus on new products, features, and partnerships to continue growing Cloudflare’s developer platform. As part of Full Stack Week, we also hosted the Developer Speaker Series, bringing 12 speakers in the web dev community to our 24/7 online TV channel, Cloudflare TV. The talks covered topics across the web development ecosystem, which you can rewatch at any time.

We loved organizing the Developer Speaker Series last year. But as developers know far too well, our ecosystem changes rapidly: what may have been cutting edge back in November 2021 can be old news just a few months later in 2022. That’s what makes conferences and live speaking events so valuable: they serve as an up-to-date reference of best practices and future-facing developments in the industry. With that in mind, we’re excited to announce a new edition of our Developer Speaker Series for 2022!

Check out the eleven expert web dev speakers, developers, and educators that we’ve invited to speak live on Cloudflare TV! Here are the talks you’ll be able to watch, starting tomorrow morning (May 9 at 09:00 PT):

The Bootcamper’s Companion – Caitlyn Greffly
In her recent book, The Bootcamper’s Companion, Caitlyn dives into the specifics of how to build connections in the tech field, understand confusing tech jargon, and make yourself a stand-out candidate when looking for your first job. She’ll talk about some top tips and share a bit about her experience as well as what she has learned from navigating tech as a career changer.

Engaging Ecommerce with the Visual Web – Colby Fayock
Experiences on the web have grown increasingly visual, from displaying product images to interactive NFTs, but not paying attention to how media is delivered can impact Core Web Vitals, creating a bad UX with slow-loading pages, hurting your store’s conversion and potentially losing sales.

How can we effectively leverage media to showcase products creating engaging experiences for our store? We’ll talk about the media’s role in ecomm and how we can take advantage of it while optimizing delivery.

Testing Web Applications with Playwright – Debbie O’Brien
Testing is hard, testing takes time to learn and to write, and time is money. As developers, we want to test. We know we should, but we don’t have time. So how can we get more developers to do testing? We can create better tools.

Let me introduce you to Playwright, a reliable tool for end-to-end cross browser testing for modern web apps, by Microsoft and fully open source. Playwright’s codegen generates tests for you in JavaScript, TypeScript, Dot Net, Java or Python. Now you really have no excuses. It’s time to play your tests wright.

Building serverless APIs: how Fauna and Workers make it easy – Rob Sutter
Building APIs has always been tricky when it comes to setting up architecture. FaunaDB and Workers remove that burden by letting you write code and watch it run everywhere.

Business context is developer productivity – John Feminella
A major factor in developer productivity is whether they have the context to make decisions on their own, or if instead they can only execute someone else’s plan. But how do organizations give engineers the appropriate context to make those decisions when they weren’t there from the beginning?

On the edge of my server – Brian Rinaldi
Edge functions can be potentially game changing. You get the power of serverless functions but running at the CDN level – meaning the response is incredibly fast. With Cloudflare Workers, every worker is an edge function. In this talk, we’ll explore why edge functions can be powerful and explore examples of how to use them to do things a normal serverless function can’t do.

Ten things I love about Wrangler 2 – Sunil Pai
We spent the last six months rewriting wrangler, the CLI for building and deploying Cloudflare Workers. Almost every single feature has been upgraded to be more powerful and user-friendly, while still remaining backward compatible with the original version of wrangler. In this talk, we’ll go through some of the best parts about the rewrite, and how it provides the foundation for all the things we want to build in the future.

L is for Literacy – Henri Helvetica
It’s 2022, and web performance is now abundantly important, with an abundance of available metrics, used by — you guessed it — an abundance of developers, new and experienced. All quips aside, the complexities of the web have led to increased complexities in web performance. Literacy in web performance is as important as the four basic language skills. ‘L is for Literacy’ is a lively look at performance lexicon, backed by enlightening data all will enjoy.

Cloudflare Pages Updates – Greg Brimble
Greg Brimble, a Systems Engineer working on Pages, will showcase some of this week’s announcements live on Cloudflare TV. Tune in to see what is now possible for your Cloudflare Pages projects. We’re excited to show you what the team has been working on!

Migrating to Cloudflare Pages: A look into git control, performance, and scalability – James Ross
James Ross, CTO of Nodecraft, will discuss how moving to Pages brought an improved experience for both users and his team building the future of game servers.

If you want to see the full schedule for the Developer Speaker Series, go to our landing page. It shows each talk, including speaker info and timing, as well as time zones for international viewers. When a talk goes live, tuning in is simple – just visit cloudflare.tv to start watching.

New this year, we’ve also prepared a Discord channel to follow the live conversation with other viewers! If you haven’t joined Cloudflare’s Discord server, get your invite.

Cloudflare Relay Worker

Post Syndicated from Matt Boyle original https://blog.cloudflare.com/cloudflare-relay-worker/

Our Notification Center offers first-class support for a variety of popular services (a list of which is available here). However, even with such extensive support, you may use a tool that isn’t on that list. In that case, it is possible to leverage Cloudflare Workers in combination with a generic webhook to deliver notifications to any service that accepts webhooks.

Today, we are excited to announce that we are open sourcing a Cloudflare Worker that will make it as easy as possible for you to transform our generic webhook response into any format you require. Here’s how to do it.

For this example, we are going to write a Cloudflare Worker that takes a generic webhook response, transforms it into the correct format, and delivers it to Rocket Chat, a popular customer service messaging platform. When Cloudflare sends you a generic webhook, it will have the following schema, where “text” and “data” will vary depending on the alert that has fired:

{
   "name": "Your custom webhook",
   "text": "The alert text",
   "data": {
       "some": "further",
       "info": [
           "about",
           "your",
           "alert",
           "in"
       ],
       "json": "format"
   },
   "ts": 123456789
}

Whereas Rocket Chat is looking for this format:

{
   "text": "Example message",
   "attachments": [
       {
           "title": "Rocket Chat",
           "title_link": "https://rocket.chat",
           "text": "Rocket.Chat, the best open source chat",
           "image_url": "/images/integration-attachment-example.png",
           "color": "#764FA5"
       }
   ]
}

Getting Started

Firstly, you’ll need to ensure you are ready to develop on the Cloudflare Workers platform. You can find more information on how to do that here. For the purpose of this example, we will assume you have a Cloudflare account and Wrangler, the Workers CLI, setup.

Next, let us see the steps to extend the notifications system in detail.

Step 1
Clone the webhook relay worker GitHub repository: git clone git@github.com:cloudflare/cf-webhook-relay.git

Step 2
Check the webhook payload format required by your communication tool. In this specific case, it would look like the Rocket Chat example payload shared above.

Step 3
Sign up for Rocket Chat and add a webhook integration to accept incoming webhook notifications.

Step 4
Configure an encrypted wrangler secret for request authentication and the Rocket Chat URL for sending requests in your Worker: Environment variables · Cloudflare Workers docs. (For this example, the Rocket Chat URL is stored as a plain-text variable rather than an encrypted secret.)

Step 5
Modify your Worker to accept POST webhook requests, authenticating them against the secret sent in the cf-webhook-auth request header.

if (headers.get("cf-webhook-auth") !== WEBHOOK_SECRET) {
    return new Response(":(", {
        headers: {'content-type': 'text/plain'},
        status: 401
    })
}

Step 6
Convert the incoming request payload from the notification system (like in the example shared above) to the Rocket Chat format in the worker.

let incReq = await request.json()
let msg = incReq.text
let webhookName = incReq.name
let rocketBody = {
    "text": webhookName,
    "attachments": [
        {
            "title": "Cloudflare Webhook",
            "text": msg,
            "title_link": "https://cloudflare.com",
            "color": "#764FA5"
        }
    ]
}

Step 7
Configure the Worker to send POST requests to the Rocket Chat webhook with the converted payload.

const rocketReq = {
    headers: {
        'content-type': 'application/json',
    },
    method: 'POST',
    body: JSON.stringify(rocketBody),
}
const response = await fetch(
    ROCKET_CHAT_URL,
    rocketReq,
)
const res = await response.json()
console.log(res)
return new Response(":)", {
    headers: {'content-type': 'text/plain'},
})
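
For reference, the pieces from steps 5–7 can be assembled into one Worker. Here is a minimal sketch in module syntax, where WEBHOOK_SECRET and ROCKET_CHAT_URL are the values configured in step 4 (exposed on env rather than as globals in this syntax):

export default {
  async fetch(request, env) {
    // Step 5: authenticate the incoming webhook.
    if (request.headers.get("cf-webhook-auth") !== env.WEBHOOK_SECRET) {
      return new Response(":(", {
        headers: {'content-type': 'text/plain'},
        status: 401
      })
    }
    // Step 6: convert the generic webhook payload to Rocket Chat's format.
    const incReq = await request.json()
    const rocketBody = {
      "text": incReq.name,
      "attachments": [
        {
          "title": "Cloudflare Webhook",
          "text": incReq.text,
          "title_link": "https://cloudflare.com",
          "color": "#764FA5"
        }
      ]
    }
    // Step 7: forward the converted payload to Rocket Chat.
    const response = await fetch(env.ROCKET_CHAT_URL, {
      headers: {'content-type': 'application/json'},
      method: 'POST',
      body: JSON.stringify(rocketBody),
    })
    console.log(await response.json())
    return new Response(":)", {
      headers: {'content-type': 'text/plain'},
    })
  }
}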

Step 8
Set up deployment configuration in your wrangler.toml file and publish your Worker. You can now see the Worker in the Cloudflare dashboard.

Step 9
You can manage and monitor the Worker with a variety of available tools.

Step 10
Add the Worker URL as a generic webhook to the notification destinations in the Cloudflare dashboard: Configure webhooks · Cloudflare Fundamentals docs.

Step 11
Create a notification with the destination as the configured generic webhook: Create a Notification · Cloudflare Fundamentals docs.

Step 12
Tada! With your Cloudflare Worker running, you can now receive all your notifications in Rocket Chat. The same approach works for any communication tool that accepts webhooks.

We know that a notification system is essential to proactively monitor any issues that may arise within a project. With this announcement, we are excited to make notifications available to any communication service, without you having to worry too much about compatibility. We have lots of updates planned, like adding more alertable events to choose from and extending our support to a wide range of webhook services to receive them.

If you’re interested in building scalable services and solving interesting technical problems, we are hiring engineers on our team in Austin & Lisbon.

The Cloudflare Developer Expert Program: apply today!

Post Syndicated from Albert Zhao original https://blog.cloudflare.com/developer-expert-program/

Today we’re launching the Cloudflare Developer Expert Program: an initiative to support and recognize our VIP users who build with Workers, Pages, and the entire Cloudflare developer ecosystem.

A Cloudflare Developer Expert is an early adopter of new releases, a frequent participant in feedback sessions, and an evangelist for Cloudflare products made for the larger developer community.

But first, what are the benefits of becoming a Cloudflare Developer Expert?

  • Early access to features (e.g., private betas)
  • Admission to a private community of power users
  • Routine calls with product managers, engineers, and developer advocates
  • Sponsorships for OSS work
  • Our best swag, of course

We have already sent invites to our first batch of power users, but if you’d like to join or want to nominate a developer, please fill out this form.

Why We Made This Program

We ship very quickly at Cloudflare.

This is because we want feedback early in development, allowing users to challenge our assumptions and validate what we’re building. In the Workers team, this strategy has been very successful.

For example, we began beta testing custom builds for Wrangler (our CLI tool) that allow you to run any JavaScript bundler you want. This was a huge release because it introduced the ES Modules syntax for the first time in Workers, significantly increasing the number of usable JavaScript packages and libraries. To get feedback before public release, we opened a private Discord channel and invited around 50 users for testing.

We were blown away by the feedback.

Our users quickly discovered edge cases that weren’t working, such as needing support for Workers Unbound. This made it easy for us to prioritize what to fix before GA. We also discovered actionable steps to improve documentation.

“The Workers team wanted our input early on for such a big release, and it really shows how seriously they’re taking developer experience,” said James Ross, CTO of Nodecraft.

After seeing the success of this small group of users, we figured it was time to make this a regular part of development.

We threw together a list of users, sent NDAs, and opened a private Discord channel for one of our biggest releases of the year: running functions directly on Cloudflare Pages.

“We’re able to ship this feature more quickly and confidently because of feedback in our Discord,” said Nevi Shah, product manager for Cloudflare Pages. “Users let us know quickly what can be better and what features they need first.”

Developers, Developers, Developers

Back in April, we launched our first Developer Week with a central focus: how to get developers to build more on Cloudflare. This included exciting releases like Cloudflare Pages and the Durable Objects open beta.

Since then, after receiving so much feedback in our Discord and other channels, we learned developers either expect their code to automatically run on Cloudflare’s infrastructure (Cloudflare Pages), or, if it’s a new technology (such as Durable Objects), they want as much direct guidance as possible to reliably get up and running. We realized that involving users earlier in development allowed us to support more happy paths.

And since developers like doing things their own way, we aim to support as many happy paths as possible on our platform.

“I started developing with Cloudflare Workers shortly after it was announced. Over that time, Cloudflare has only increased its emphasis on developer experience,” said David Barratt, staff software engineer at Drizly. “The Cloudflare Developer Expert Program has been a fantastic way to have a quick feedback loop between the developers who have a lot of experience using the platform and the developers building that platform.”

Apply now!

If you are a developer who deploys with Workers, Pages, and our other tools, we want you to apply! We’re hoping to review applicants with experience deploying to production with our developer tools.

And again, Cloudflare Developer Experts get extra special care from our team.

To apply for the Cloudflare Developer Expert Program, fill out this form.

Developer Spotlight: Chris Coyier, CodePen

Post Syndicated from Kristian Freeman original https://blog.cloudflare.com/developer-spotlight-codepen/

Chris Coyier has been building on the web for over 15 years. Chris made his mark on the web development world with CSS-Tricks in 2007, one of the web’s leading publications for frontend and full-stack developers.

In 2012, Chris co-founded CodePen, which is an online code editor that lives in the browser and allows developers to collaborate and share code examples written in HTML, CSS, and JavaScript.

Due to the nature of CodePen — namely, hosting code and an incredibly popular embedding feature, allowing developers to share their CodePen “pens” around the world — any sort of optimization can have a massive impact on CodePen’s business. Increasingly, CodePen relies on the ability to both execute code and store data on Cloudflare’s network as a first stop for those optimizations. As Chris puts it, CodePen uses Cloudflare Workers for “so many things”:

“We pull content from an external CMS and use Workers to manipulate HTML before it arrives to the user’s browser. For example, we fetch the original page, fetch the content, then stitch them together for a full response.”

Workers allows you to work with responses directly using the native Request/Response classes and, with the addition of our streaming HTMLRewriter engine, you can modify, combine, and parse HTML without any loss in performance. Because Workers functions are deployed to Cloudflare’s network, CodePen has the ability to instantly deploy highly-intelligent middleware in-between their origin servers and their clients, without needing to spin up any additional infrastructure.
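
As a rough illustration of that pattern (not CodePen’s actual code), a Worker can fetch the origin response and rewrite its HTML as it streams through; the selector and inserted content below are assumptions:

export default {
  async fetch(request) {
    // Fetch the original page from the origin.
    const page = await fetch(request);
    // Rewrite the HTML on the fly, without buffering the whole document.
    return new HTMLRewriter()
      .on("div#content", {
        element(element) {
          // Stitch in content fetched elsewhere (hard-coded here for brevity).
          element.setInnerContent("Content stitched in at the edge");
        },
      })
      .transform(page);
  }
}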

In a popular video on Chris Coyier’s YouTube channel, he sits down with a front-end engineer at CodePen and covers how they use Cloudflare Workers to build crucial CodePen features. Here’s Chris:

“Cloudflare Workers are like serverless functions that always run at the edge, making them incredibly fast. Not only that, but the tooling around them is really nice. They can run at particular routes on your own website, removing any awkward CORS troubles, and helping you craft clean APIs. But they also have special superpowers, like access to KV storage (also at the edge), image manipulation and optimization, and HTML rewriting.”

CodePen also leverages Workers KV to store data. This allows them to avoid an immense amount of repetitive processing work by caching results and making them accessible on Cloudflare’s network, geographically near their users:

“We use Workers combined with the KV Store to run expensive jobs. For example, we check the KV Store to see if we need to do some processing work, or if that work has already been done. If we need to do the work, we do it and then update KV to know it’s been done and where the result of that work is.”
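
The pattern Chris describes is a classic cache-aside flow. Here’s a minimal sketch (not CodePen’s actual code), assuming a KV namespace bound as RESULTS and using a stand-in for the expensive work:

// A stand-in for the expensive processing work described above.
async function expensiveJob(key) {
  return `processed:${key}`;
}

export default {
  async fetch(request, env) {
    const key = new URL(request.url).pathname;

    // If the work has already been done, serve the stored result from KV.
    const cached = await env.RESULTS.get(key);
    if (cached !== null) {
      return new Response(cached);
    }

    // Otherwise do the work, then record the result in KV for next time.
    const result = await expensiveJob(key);
    await env.RESULTS.put(key, result);
    return new Response(result);
  }
}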

In a follow-up video on his YouTube channel, Chris dives into Workers KV and shows how you can build a simple serverless function — with storage — and deploy it to Cloudflare. With the addition of Workers KV, you can persist complex data structures side-by-side with your Workers function, without compromising on performance or scalability.

Chris and the CodePen team are invested in Workers and, most importantly, they enjoy developing with Cloudflare’s developer tooling. “The DX around them is suspiciously nice. Coming from other cloud functions services, there seems to be a just-right amount of tooling to do the things we need to do.”

CodePen is a great example of what’s possible when you integrate the Cloudflare Workers developer environment into your stack. Across all parts of the business, Workers, and tools like Workers KV and HTMLRewriter, allow CodePen to build highly-performant applications that look towards the future.

If you’d like to learn more about Cloudflare Workers, and deploy your own serverless functions to Cloudflare’s network, check out workers.cloudflare.com!

wrangler 2.0 — a new developer experience for Cloudflare Workers

Post Syndicated from Ashcon Partovi original https://blog.cloudflare.com/wrangler-v2-beta/

Much of a developer’s work is about making trade-offs: consistency versus availability, speed over correctness, you name it. While there will always be different technical trade-offs to consider, we believe there are some that you, as a developer, should never need to make.

One of those is an easy-to-use development environment. Whether you’re onboarding a new developer to your team or you simply want to develop faster, it’s important that even the smallest of things are optimized for speed and simplicity.

That’s why we’re excited to announce the second generation of our developer tooling for Cloudflare Workers. It’s a new developer experience that’s out-of-the-box, lightning fast, and can even run Workers on a local machine. (Yes!)

If you’re already familiar with our existing tools, we’re not just talking about the wrangler CLI; we’re talking about its next major release: wrangler 2.0. Stick around to get a sneak peek at the new experience.

No config? No problem

We’ve made it much easier to get started with Cloudflare Workers. All you need is a single JavaScript file to run a Worker — no configuration needed. You don’t even need to decide on a name!

When you run wrangler dev <filename>, your code is automatically bundled and deployed to a development environment, and you can send HTTP requests to that environment using a localhost proxy.

Live debugging, just like that

Now there’s a completely redesigned experience for debugging a Worker. With a simple command you get access to a remote debugger, the same used by Chrome when you click “Inspect,” which provides an interactive view of logs, exceptions, and requests. It feels like your Worker is running locally, yet it’s actually running on the Cloudflare network.

A debugger that “just works” and auto-detects your changes makes all the difference when you’re just trying to code. We’ve also made a number of improvements to make the debugging experience even easier:

  • Keybind shortcuts, to quickly toggle features or open a window.
  • Support for “--public <path>”, to automatically serve your static assets.
  • Faster and more reliable file-watching.

To start a debugging session, just run: wrangler dev <filename>, then hit the “D” keybind.

Local mode? Flip a switch

Another aspect of the new debugging experience is the ability to switch to “local mode,” which runs your Worker on your local machine. In fact, you can easily switch between “network” and “local” mode with just a keybind shortcut.

How does this work? Recently, we announced that Miniflare (created by Brendan Coll), a project to locally emulate Workers in Node.js, has joined the Cloudflare organization. Miniflare is great for unit testing and situations where you’d like to debug Workers without an Internet connection. Now we’ve integrated it directly into the local development experience, so you can enjoy the benefits of both the network and your localhost!
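
Miniflare also exposes a Node.js API, which is handy for unit tests. Here’s a small sketch, assuming Miniflare 2’s API:

import { Miniflare } from "miniflare";

const mf = new Miniflare({
  // An inline Worker script for brevity; you would normally point at a file.
  script: `
    addEventListener("fetch", (event) => {
      event.respondWith(new Response("Hello from local mode!"));
    });
  `,
});

// Dispatch a fetch event to the Worker entirely on the local machine.
const res = await mf.dispatchFetch("http://localhost:8787/");
console.log(await res.text()); // "Hello from local mode!"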

Let us know what you think!

Serverless should be simple. We’re really excited about these improvements to the developer experience for Workers, and we have a lot more planned.

While we’re still working on wrangler 2.0, you can try the beta release by running: npm install wrangler@beta or by visiting the repository to see what we’re working on. If you’re already using wrangler to deploy existing applications, we recommend continuing to use wrangler 1.0 until the 2.0 release is officially out. We will continue to develop and maintain wrangler 1.0 until we’re finished with backwards-compatibility for 2.0.

If you’re starting a project or want to try out the new experience, we’d love to hear your feedback! Let us know what we’re missing or what you’d like to see in wrangler 2.0. You can create a feature request or start a discussion in the repository (we’ll merge them into the existing wrangler repository when 2.0 is out of beta).

Thank you to all of our developers out there, and we look forward to seeing what you build!

Automatically generating types for Cloudflare Workers

Post Syndicated from Brendan Coll original https://blog.cloudflare.com/automatically-generated-types/

Historically, keeping our Rust and TypeScript type repos up to date has been hard. They were manually generated, which means they ran the risk of being inaccurate or out of date. Until recently, the workers-types repository needed to be manually updated whenever the types changed. We also used to add type information for mostly complete browser APIs. This led to confusion: people would try to use browser APIs that aren’t supported by the Workers runtime, and their code would compile but throw errors at runtime.

That all changed this summer when Brendan Coll, whilst he was interning with us, built an automated pipeline for generating these types. It runs every time we build the Workers runtime, generating types for our TypeScript and Rust repositories. Now everything is up-to-date and accurate.

A quick overview

Every time the Workers runtime code is built, a script runs over the public APIs and generates the Rust and TypeScript types as well as a JSON file containing an intermediate representation of the static types. The types are sent to the appropriate repositories and the JSON file is uploaded as well in case people want to create their own types packages. More on that later.

This means the static types will always be accurate and up to date. It also allows projects running Workers in other, statically-typed languages to generate their own types from our intermediate representation. Here is an example PR from our Cloudflare bot. It’s detected a change in the runtime types and is updating the TypeScript files as well as the intermediate representation.

Using the auto-generated types

To get started, use wrangler to generate a new TypeScript project:

$ wrangler generate my-typescript-worker https://github.com/cloudflare/worker-typescript-template

If you already have a TypeScript project, you can install the latest version of workers-types with:

$ npm install --save-dev @cloudflare/workers-types

And then add @cloudflare/workers-types to your project’s tsconfig.json file.

{
  "compilerOptions": {
    "target": "ES2020",
    "module": "CommonJS",
    "lib": ["ES2020"],
    "types": ["@cloudflare/workers-types"]
  }
}

After that, you should get automatic type completion in your IDE of choice.

How it works

Here is some example code from the Workers runtime codebase.

class Blob: public js::Object {
public:
  typedef kj::Array<kj::OneOf<kj::Array<const byte>, kj::String, js::Ref<Blob>>> Bits;
  struct Options {
    js::Optional<kj::String> type;
    JS_STRUCT(type);
  };
  static js::Ref<Blob> constructor(js::Optional<Bits> bits, js::Optional<Options> options);
  int getSize();
  js::Ref<Blob> slice(js::Optional<int> start, js::Optional<int> end);
  JS_RESOURCE_TYPE(Blob) {
    JS_READONLY_PROPERTY(size, getSize);
    JS_METHOD(slice);
  }
};

A Python script runs over this code during each build and generates an Abstract Syntax Tree containing information about each function, including its identifier, argument types, and return types.

{
  "name": "Blob",
  "kind": "class",
  "members": [
    {
      "name": "size",
      "type": {
        "name": "integer"
      },
      "readonly": true
    },
    {
      "name": "slice",
      "type": {
        "params": [
          {
            "name": "start",
            "type": {
              "name": "integer",
              "optional": true
            }
          },
          {
            "name": "end",
            "type": {
              "name": "integer",
              "optional": true
            }
          }
        ],
        "returns": {
          "name": "Blob"
        }
      }
    }
  ]
}

Finally, the TypeScript types repositories are automatically sent PRs with the updated types.

declare type BlobBits = (ArrayBuffer | string | Blob)[];

interface BlobOptions {
  type?: string;
}

declare class Blob {
  constructor(bits?: BlobBits, options?: BlobOptions);
  readonly size: number;
  slice(start?: number, end?: number, type?: string): Blob;
}

Overrides

In some cases, TypeScript supports concepts that our C++ runtime does not. Namely, generics and function overloads. In these cases, we override the generated types with partial declarations. For example, DurableObjectStorage makes heavy use of generics for its getter and setter functions.

declare abstract class DurableObjectStorage {
  get<T = unknown>(key: string, options?: DurableObjectStorageOperationsGetOptions): Promise<T | undefined>;
  get<T = unknown>(keys: string[], options?: DurableObjectStorageOperationsGetOptions): Promise<Map<string, T>>;

  list<T = unknown>(options?: DurableObjectStorageOperationsListOptions): Promise<Map<string, T>>;

  put<T>(key: string, value: T, options?: DurableObjectStorageOperationsPutOptions): Promise<void>;
  put<T>(entries: Record<string, T>, options?: DurableObjectStorageOperationsPutOptions): Promise<void>;

  delete(key: string, options?: DurableObjectStorageOperationsPutOptions): Promise<boolean>;
  delete(keys: string[], options?: DurableObjectStorageOperationsPutOptions): Promise<number>;

  transaction<T>(closure: (txn: DurableObjectTransaction) => Promise<T>): Promise<T>;
}

You can also write type overrides using Markdown, for example to override the types of KVNamespace.

Creating your own types

The JSON IR (intermediate representation) has been open sourced alongside the TypeScript types and can be found in this GitHub repository. We’ve also open sourced the type schema itself, which describes the format of the IR. If you’re interested in generating Workers types for your own language, you can take the IR, which describes the declaration in a “normalized” data structure, and generate types from it.

The declarations inside `workers.json` contain the elements to derive function signatures and other elements needed for code generation such as identifiers, argument types, return types and error management. A concrete use-case would be to generate external function declarations for a language that compiles to WebAssembly, to import precisely the set of available function calls available from the Workers runtime.
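
As a toy illustration of consuming the IR (this is not the official tooling, and it assumes workers.json is shaped like the single Blob declaration shown above), a script could walk the file and print TypeScript-style declarations:

import { readFileSync } from "node:fs";

// Load the IR. This sketch assumes a single class-shaped declaration,
// matching the Blob example above; the real file may differ.
const decl = JSON.parse(readFileSync("workers.json", "utf8"));

// The example IR says "integer" where TypeScript says "number".
function renderType(type) {
  return type.name === "integer" ? "number" : type.name;
}

function renderMember(member) {
  if (member.type.params) {
    // A method: render its parameter list and return type.
    const params = member.type.params
      .map((p) => `${p.name}${p.type.optional ? "?" : ""}: ${renderType(p.type)}`)
      .join(", ");
    return `  ${member.name}(${params}): ${renderType(member.type.returns)};`;
  }
  // A property, possibly read-only.
  return `  ${member.readonly ? "readonly " : ""}${member.name}: ${renderType(member.type)};`;
}

console.log([`declare class ${decl.name} {`, ...decl.members.map(renderMember), "}"].join("\n"));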

Conclusion

Cloudflare cares deeply about supporting the TypeScript and Rust ecosystems. Brendan created a tool which will ensure the type information for both languages is always up-to-date and accurate. We also are open-sourcing the type information itself in JSON format, so that anyone interested can create type data for any language they’d like!

JavaScript modules are now supported on Cloudflare Workers

Post Syndicated from Ashcon Partovi original https://blog.cloudflare.com/workers-javascript-modules/

We’re excited to announce that JavaScript modules are now supported on Cloudflare Workers. If you’ve ever taken a look at an example Worker written in JavaScript, you might recognize the following code snippet, which has been floating around the Internet the past few years:

addEventListener("fetch", (event) => {
  event.respondWith(new Response("Hello Worker!"));
});

The above syntax is known as the “Service Worker” API, and it was proposed to be standardized for use in web browsers. The idea is that you can attach a JavaScript file to a web page to modify its HTTP requests and responses, acting like a virtual endpoint. It was exactly what we needed for Workers, and it even integrated well with standard Web APIs like fetch() and caches.

Before introducing modules, we want to make it clear that we will continue to support the Service Worker API. No developer wants to get an email saying that you need to rewrite your code because an API or feature is being deprecated; and you won’t be getting one from us. If you’re interested in learning why we made this decision, you can read about our commitment to backwards-compatibility for Workers.

What are JavaScript modules?

JavaScript modules, also known as ECMAScript (abbreviated “ES”) modules, are the standard API for importing and exporting code in JavaScript. The module syntax was introduced by the “ES6” language specification for JavaScript, and has been implemented by most Web browsers, Node.js, Deno, and now Cloudflare Workers. Here’s an example to demonstrate how it works:

// filename: ./src/util.js
export function getDate(time) {
  return new Date(time).toISOString().split("T")[0]; // "YYYY-MM-DD"
}

The “export” keyword indicates that the “getDate” function should be exported from the current module. Then, you can use “import” from another module to use that function.

// filename: ./src/index.js
import { getDate } from "./util.js"

console.log("Today’s date:", getDate(Date.now()));

Those are the basics, but there’s a lot more you can do with modules. They allow you to organize, maintain, and re-use your code in an elegant way that just works. While we can’t go over every aspect of modules here, if you’d like to learn more we’d encourage you to read the MDN guide on modules, or a more technical deep-dive by Lin Clark.

How can I use modules in Workers?

You can export a default module, which will represent your Worker. Instead of using “addEventListener,” each event handler is defined as a function on that module. Today, we support “fetch” for HTTP and WebSocket requests and “scheduled” for cron triggers.

export default {
  async fetch(request, environment, context) {
    return new Response("I’m a module!");
  },
  async scheduled(controller, environment, context) {
    // await doATask();
  }
}

You may also notice some other differences, such as the parameters on each event handler. Instead of a single “Event” object, the parameters that you need the most are spread out on their own. The first parameter is specific to the event type: for “fetch” it’s the Request object, and for “scheduled” it’s a controller that contains the cron schedule.

The second parameter is an object that contains your environment variables (also known as “bindings”). Previously, each variable was inserted into the global scope of the Worker. While a simple solution, it was confusing to have variables magically appear in your code. Now, with an environment object, you can control which modules and libraries get access to your environment variables. This mechanism is more secure, as it can prevent a compromised or nosy third-party library from enumerating all your variables or secrets.

The third parameter is a context object, which allows you to register background tasks using waitUntil(). This is useful for tasks like logging or error reporting that should not block the execution of the event.

When you put that all together, you can import and export multiple modules, as well as use the new event handler syntax.

// filename: ./src/error.js
export async function logError(url, error) {
  await fetch(url, {
     method: "POST",
     body: error.stack
  })
}

// filename: ./src/worker.js
import { logError } from "./error.js"

export default {
  async fetch(request, environment, context) {
    try {
       return await fetch(request);
    } catch (error) {
       context.waitUntil(logError(environment.ERROR_URL, error));
       return new Response("Oops!", { status: 500 });
    }
  }
}

Let’s not forget about Durable Objects, which became generally available earlier this week! You can also export classes, which is how you define a Durable Object class. Here’s another example with a “Counter” Durable Object, that responds with an incrementing value.

// filename: ./src/counter.js
export class Counter {
  value = 0;
  fetch() {
    this.value++;
    return new Response(this.value.toString());
  }
}

// filename: ./src/worker.js
// We need to re-export the Durable Object class in the Worker module.
export { Counter } from "./counter.js"

export default {
  async fetch(request, environment) {
    const clientId = request.headers.get("cf-connecting-ip");
    const counterId = environment.Counter.idFromName(clientId);
    // Each IP address gets its own Counter.
    const counter = environment.Counter.get(counterId);
    return counter.fetch("https://counter.object/increment");
  }
}

Are there non-JavaScript modules?

Yes! While modules are primarily for JavaScript, we also support other module types, some of which are not yet standardized.

For instance, you can import WebAssembly as a module. Previously, with the Service Worker API, WebAssembly was included as a binding. We think that was a mistake, since WebAssembly should be represented as code and not an external resource. With modules, here’s the new way to import WebAssembly:

import module from "./lib/hello.wasm"

export default {
  async fetch(request) {
    const instance = await WebAssembly.instantiate(module);
    const result = instance.exports.hello();
    return new Response(result);
  }
}

While not supported today, we look forward to a future where WebAssembly and JavaScript modules can be more tightly integrated, as outlined in this proposal. The ergonomics improvement, demonstrated below, can go a long way to make WebAssembly more included in the JavaScript ecosystem.

import { hello } from "./lib/hello.wasm"

export default {
  async fetch(request) {
    return new Response(hello());
  }
}

We’ve also added support for text and binary modules, which allow you to import a String and ArrayBuffer, respectively. While not standardized, it allows you to easily import resources like an HTML file or an image.

<!-- filename: ./public/index.html -->
<!DOCTYPE html>
<html><body>
<p>Hello!</p>
</body></html>

import html from "../public/index.html"

export default {
  fetch(request) {
    if (request.url.endsWith("/index.html")) {
       return new Response(html, {
          headers: { "Content-Type": "text/html" }
       });
    }
    return fetch(request);
  }
}

How can I get started?

There are many ways to get started with modules.

First, you can try out modules in your browser using our playground (which doesn’t require an account) or by using the dashboard quick editor. Your browser will automatically detect when you’re using modules to allow you to seamlessly switch from the Service Worker API. For now, you’ll only be able to create one JavaScript module in the browser, though supporting multiple modules is something we’ll be improving soon.

If you’re feeling adventurous and want to start a new project using modules, you can try out the beta release of wrangler 2.0, the next-generation of the command-line interface (CLI) for Workers.

For existing projects, we still recommend using wrangler 1.0 (release 1.17 or later). To enable modules, you can adapt your “wrangler.toml” configuration to the following example:

name = "my-worker"
type = "javascript"
workers_dev = true

[build.upload]
format = "modules"
dir = "./src"
main = "./worker.js" # becomes "./src/worker.js"

[[build.upload.rules]]
type = "ESModule"
globs = "**/*.js"

# Uncomment if you have a build script.
# [build]
# command = "npm run build"

We’ve updated our documentation to provide more details about modules, though some examples will still be using the Service Worker API as we transition to showing both formats (and TypeScript as a bonus!).

If you experience an issue or notice something strange with modules, please let us know, and we’ll take a look. Happy coding, and we’re excited to see what you build with modules!

Introducing Services: Build Composable, Distributed Applications on Cloudflare Workers

Post Syndicated from Ashcon Partovi original https://blog.cloudflare.com/introducing-worker-services/

First, there was the Worker script. It was simple, yet elegant. With just a few lines of code, you could rewrite an HTTP request, append a header, or make a quick fix to your website.

Though, what if you wanted to build an entire application on Workers? You’d need a lot more tools in your developer toolbox. That’s why we’ve introduced extensions to the Workers platform like KV, our distributed key-value store; Durable Objects, a strongly consistent, object-oriented database; and soon R2, the no-egress object storage. While these tools allow you to build a more robust application, there’s still a gap when it comes to building a system architecture composed of many applications or services.

Imagine you’ve built an authentication service that authorizes requests to your API. You’d want to re-use that logic among all your other services. Moreover, when you make changes to that authentication service, you’d want to test it in a controlled environment that doesn’t affect those other services in production. Well, you don’t need to imagine anymore.

Introducing Services

Services are the new building block for deploying applications on Cloudflare Workers. Unlike the script, a service is composable, which allows services to talk to each other. Services also support multiple environments, which allow you to test changes in a preview environment, then promote to production when you’re confident it worked.

To enable a seamless transition to services, we’ve automatically migrated every script to become a service with one “production” environment — no action needed.

Services have environments

Each service comes with a production environment and the ability to create or clone dozens of preview environments. Every aspect of an environment is overridable: the code, environment variables, and even resources like a KV namespace. You can create and switch between environments with just a few clicks in the dashboard.

Each environment is resolvable at a unique hostname, which is automatically generated when you create or rename the environment. There’s no waiting around after you deploy. Everything you need, like DNS records, SSL certificates, and more, is ready-to-go seconds later. If you’d like a more advanced setup, you can also add custom routes from your domain to an environment.

Once you’ve tested your changes in a preview environment, you’re ready to promote to production. We’ve made it really easy to promote code from one environment to another, without the need to rebuild or upload your code again. Environments also manage code separately from settings, so you don’t need to manually edit environment variables when you promote from staging to production.

Services are versioned

Every change to a service is versioned and audited. Mistakes do happen, but when they do, it’s important to be able to quickly roll back, then have the tools to answer the age-old question: “who changed what, when?”

Each environment in a service has its own version history. Every time there is a code change or an environment variable is updated, the version number of that environment is incremented. You can also append additional metadata to each version, like a git commit or a deployment tag.

Services can talk to each other

Services are composable, allowing one service to talk to another service. To support this, we’re introducing a new API to facilitate service-to-service communication: service bindings.

A service binding allows you to send HTTP requests to another service, without those requests going over the Internet. That means you can invoke other Workers directly from your code! Service bindings open up a new world of composability. In the example below, requests are validated by an authentication service.

export default {
  async fetch(request, environment) {
    const response = await environment.AUTH.fetch(request);
    if (response.status !== 200) {
      return response;
    }
    return new Response("Authenticated!");
  }
}

Service bindings use the standard fetch API, so you can continue to use your existing utilities and libraries. You can also change the environment of a service binding, so you can test a new version of a service. In the next example, 1% of requests are routed to a “canary” deployment of a service. If a request to the canary fails, it’s sent to the production deployment for another chance.

export default {
  canRetry(request) {
    return request.method === "GET" || request.method === "HEAD";
  },
  async fetch(request, environment) {
    if (Math.random() < 0.01) {
      const response = await environment.CANARY.fetch(request.clone());
      if (response.status < 500 || !this.canRetry(request)) {
        return response;
      }
    }
    return environment.PRODUCTION.fetch(request);
  }
}

While the interface among services is HTTP, the networking is not. In fact, there is no networking! Unlike the typical “microservice architecture,” where services communicate over a network and can suffer from latency or interruption, service bindings are a zero-cost abstraction. When you deploy a service, we build a dependency graph of its service bindings, then package all of those services into a single deployment. When one service invokes another, there is no network delay; the request is executed immediately.

This zero-cost model enables teams to share and reuse code within their organizations, without sacrificing latency or performance. Forget the days of convoluted YAML templates or exponential backoff to orchestrate services — just write code, and we’ll stitch it all together.

Try out the future, today!

We’re excited to announce that you can start using Services today! If you’ve already used Workers, you’ll notice that each of your scripts has been upgraded to a service with one “production” environment. The dashboard and all the existing Cloudflare APIs will continue to “just work” with services.

You can also create and deploy code to multiple “preview” environments, as part of the open-beta launch. We’re still working on service bindings and versioning, and we’ll provide an update as soon as you can start using them.

For more information about Services, check out the Workers documentation.

How we build software at Cloudflare

Post Syndicated from Nick Wood original https://blog.cloudflare.com/building-software-at-cloudflare/

Cloudflare provides a broad range of products — ranging from security to performance and serverless compute — which are used by millions of Internet properties worldwide. Often, these products are built by multiple teams in close collaboration, and delivering them can be a complex task. So ever wondered how we do so consistently and safely at scale?

Software delivery consists of all the activities needed to get working software into the hands of customers. It’s usual to talk about software delivery with reference to a model, or framework, which provides the scaffolding for the process. In order to minimise operational friction, it’s usual for a company to tailor its approach to suit its business context and culture.

For example, a company that designs the autopilot systems for passenger aircraft will require very strict tolerances, as a failure could cost hundreds of lives. They would want a different process to a cutting edge tech startup, who may value time to market over system uptime or stability.

Before outlining the approach we use at Cloudflare it’s worth quickly running through a couple of commonly used delivery models.

The Waterfall Approach

Waterfall has its foundations (pun intended) in construction and manufacturing. It breaks a project up into phases and presumes that each phase is completed before the next begins. Each phase “cascades” into the next, a bit like a waterfall, hence the name.

The main criticism of waterfall approaches arises when flaws are discovered downstream, which may necessitate a return to earlier phases — though this can be managed through governance processes that allows for adjusting scope, budgets or timelines.

More recently there are a number of modified waterfall models which have been developed as a response to its perceived inflexibility. Some notable examples are the Rational Unified Process (RUP), which encourages iteration within phases, and Sashimi which provides partial overlap between phases.

Despite falling out of favour in recent years, waterfall still has a place in modern technology companies. It tends to be reserved for projects where the scope and requirements can be defined upfront and are unlikely to change. At Cloudflare, we use it for infrastructure rollouts, for example. It also has a place in very large projects with complex dependencies to manage.

Agile Approaches

Agile isn’t a single well-defined process, but rather a family of approaches which share similar philosophies — those of the agile manifesto. Implementations vary, but most agile flavours tend to share a number of common traits:

  • Short release cycles, such that regular feedback (ideally from real users) can be incorporated.
  • Teams maintain a prioritized to-do list of upcoming work (often called a ‘backlog’), with the most valuable items at the top.
  • Teams should be self-organizing, and work at a sustainable pace.
  • A philosophy of Continuous Improvement, where teams seek to improve their ways of working over time.

Continuous improvement is very much the heart of agile, meaning these approaches are less about nailing down “the correct process” and more about regular reflection and change. This means variance between any two teams is expected, and encouraged.

Agile approaches can be divided into two main branches — iterative and flow-based. Scrum is probably the most prevalent of the iterative agile methods. In Scrum a team aims to build shippable increments of code at regular intervals called sprints (or “iterations”). Flow-based approaches on the other hand (such as Kanban) instead pick up new items from their backlog on an ad hoc basis. They use a number of techniques to try and minimise work in progress across the team.

The main differences between the two branches can be typified by looking at two example teams:

  • The “Green” Team has a set of products they support and wants to update them regularly; production issues for them are rare, and there is very little ad hoc work. An iterative approach allows them to make long-term plans whilst also being able to incorporate feedback from users with some regularity.
  • The “Blue” Team meanwhile is an operational team, where a big part of their role is to monitor production systems and investigate issues as they arise. For them, a flow based approach is much more appropriate, so they can update their plans on the fly as new items arise.

Which approach does Cloudflare follow?

Cloudflare comprises dozens of globally distributed engineering teams, each with their own unique challenges and contexts. A team usually has an Engineering Manager, a Product Manager, and fewer than 10 engineers, all focused on a single product or mission. The DDoS team, for example, is one such team.

A team that supports a newly released product will likely want to rapidly incorporate feedback from customers, whereas a team that manages shared internal platforms will prize platform stability over speed of innovation. There is a spectrum of different contexts within Cloudflare which makes it impossible to define a single software delivery method for all teams to follow.

Instead, we take a more nuanced approach where we allow teams to decide which methodology they wish to follow within the team, whilst also defining a number of high-level concepts and language that are common to all teams. In other words, we are more concerned with macro-management than micro-management.

“SHIP”s and “EPIC”s

At the highest level, our unit of work is called a “SHIP”  — this is a change to a service or product which we intend to ship to customers, hence the name. All live SHIPs are published on our internal roadmap, called our “SHIP-board”. Transparency and collaboration are part of our DNA at Cloudflare, so for us, it’s important that anyone in Cloudflare can view the SHIP-board.

Individual SHIPs are sized such that they can be comfortably delivered within a month or two, though we have a strong preference towards shorter timescales. We’d much rather deliver three small feature sets monthly than one big launch every quarter.

A single SHIP might need work from multiple teams in order for customers to use it. We manage this by ensuring there is an EPIC within the SHIP for each team contributing. To prevent circular dependencies, a SHIP can’t contain another SHIP. SHIPs are owned by their Product Manager, and EPICs are usually owned by an Engineering Manager. We also allow for EPICs to be created that don’t deliver against SHIPs — this is where technical improvement initiatives are typically managed.

Below the level of EPICs we don’t enforce any strict delivery model on teams, though teams will usually link their contributory work to the EPIC for ease of tracking. Teams are free to use whichever delivery framework they wish.

Within the Product Engineering organisation, all Product Managers and Engineering Managers meet weekly to discuss progress and blockers on their live SHIPs/EPICs. Due to the number of people involved, this is a very rapid-fire meeting, facilitated by our automated “SHIP-board”. The SHIP-board has a built-in linter to highlight potential issues, which owners are expected to address prior to the meeting. We run through each team one by one, starting with the team with the most outstanding lints.

There are also a few icons which let us visualise the status of a SHIP or EPIC at a glance. For example, a monkey means the target date for an item moved in the last week. Bananas count the total number of times the date has “slipped”, i.e. changed. A typical fragment of the SHIP-board is shown below.

[Figure: a typical fragment of the SHIP-board]

Planning

Planning takes place every quarter. This lets us deliver aggressively, without having to change plans too frequently. It also forces us to make conscious choices about what to include and exclude from a SHIP so that extraneous work is minimised.

About a month before quarter-end, product managers will begin to compile the SHIPs that would deliver the most value to customers. They’ll work with their engineering teams to understand how the work might be done, and what work is required of other teams (e.g. the UI team might need to build a frontend whilst another team builds the API).

The team will likely estimate the work at this stage (though the exact mechanism is left up to them). Where work is required of other teams we’ll also begin to reach out to them, so they can factor it into their work for the quarter, and estimate their effort too. It’s also important at this stage to understand what kind of dependency this is — do we need one piece to fully complete before the other, or can they be done in parallel and integrated towards the end? Usually the latter.

The final aspect of planned work is unlinked EPICs — these are things that don’t necessarily contribute meaningfully to a SHIP, but that the team would still like to get completed. Examples are performance improvements, or changes/fixes to backend tooling.

We deliver continuously through the quarter to avoid a scramble of deployments at quarter-end, and our target dates reflect that. We also allow anything delivered in the first two weeks of the following quarter to still count as being on-time — stability of the network is more important than hitting arbitrary dates.

We also take a fairly pragmatic approach towards target dates. A natural part of software delivery is that as we begin to explore the solution space we may uncover additional complexity. As long as we can justify a change of date it’s perfectly acceptable to amend the dates of SHIPs/EPICs to reflect the latest information. The exception to this is where we’ve made an external commitment to deliver something, so changing the delivery dates is subject to greater scrutiny.

Keeping us safe

You might think that letting teams set their own process would lead to chaos, but in my experience the opposite is true. By allowing teams to define their own methods we are empowering them to make better decisions and understand their own context within Cloudflare. We explicitly define the interfaces we use between teams, and that allows teams the flexibility to do what works best for them.

We don’t go as far as to say “there are no rules”. Last quarter Cloudflare blocked an average of 87 billion cyber threats each day, and in July 2021 we blocked the largest DDoS attack ever recorded. If we have an outage, our customers feel it, and we feel it too. To manage this we have strict, though simple, rules governing how code reaches our data centers. For example, we mandate a minimum number of reviews for each piece of code, and our deployments are phased so that changes are tested on a subset of live traffic, so any issues can be localised.

The main takeaway is to find the right balance between freedom and rules, and appreciate that this may vary for different teams within the organisation. Enforcing an unnecessarily strict process can cause a lot of friction in teams, and that’s a shortcut to losing great people. Our ideal process is one that minimises red tape, such that our team can focus on the hard job of protecting our customers.

P.S. — we’re hiring!

Do you want to come and work on advanced technologies where every code push impacts millions of Internet properties? Join our team!

Custom Headers for Cloudflare Pages

Post Syndicated from Nevi Shah original https://blog.cloudflare.com/custom-headers-for-pages/

Until today, Cloudflare Workers has been a great solution for setting headers, but we wanted to create an even smoother developer experience. Today, we’re excited to announce that Pages now natively supports custom headers on your projects! Simply create a _headers file in the build directory of your project and, within it, define the rules you want to apply.

/developer-docs/*
  X-Hiring: Looking for a job? We're hiring engineers (https://www.cloudflare.com/careers/jobs)

What can you set with custom headers?

Being able to set custom headers is useful for a variety of reasons — let’s explore some of your most popular use cases.

Search Engine Optimization (SEO)

When you create a Pages project, a pages.dev deployment is created for your project which enables you to get started immediately and easily preview changes as you iterate. However, we realize this poses an issue — publishing multiple copies of your website can harm your rankings in search engine results. One way to solve this is by disabling indexing on all pages.dev subdomains, but we see many using their pages.dev subdomain as their primary domain. With today’s announcement you can attach headers such as X-Robots-Tag to hint to Google and other search engines how you’d like your deployment to be indexed.

For example, to prevent your pages.dev deployment from being indexed, you can add the following to your _headers file:

https://:project.pages.dev/*
  X-Robots-Tag: noindex

Security

Customizing headers doesn’t just help with your site’s search result ranking — a number of browser security features can be configured with headers. A few headers that can enhance your site’s security are:

  • X-Frame-Options: You can prevent click-jacking by informing browsers not to embed your application inside another (e.g. with an <iframe>).
  • X-Content-Type-Options: nosniff: To prevent browsers from interpreting a response as any other content-type than what is defined with the Content-Type header.
  • Referrer-Policy: This allows you to customize how much information visitors give about where they’re coming from when they navigate away from your page.
  • Permissions-Policy: Browser features can be disabled to varying degrees with this header (recently renamed from Feature-Policy).
  • Content-Security-Policy: And if you need fine-grained control over the content in your application, this header allows you to configure a number of security settings, including similar controls to the X-Frame-Options header.

You can configure these headers to protect an /app/* path, with the following in your _headers file:

/app/*
  X-Frame-Options: DENY
  X-Content-Type-Options: nosniff
  Referrer-Policy: no-referrer
  Permissions-Policy: document-domain=()
  Content-Security-Policy: script-src 'self'; frame-ancestors 'none';

CORS

Modern browsers implement a security protection called CORS or Cross-Origin Resource Sharing. This prevents one domain from being able to force a user’s action on another. Without CORS, a malicious site owner might be able to do things like make requests to unsuspecting visitors’ banks and initiate a transfer on their behalf. However, with CORS, requests are prevented from one origin to another to stop the malicious activity.

There are, however, some cases where it is safe to allow these cross-origin requests. So-called “simple requests” (such as linking to an image hosted on a different domain) are permitted by the browser. Fetching these resources dynamically is often where the difficulty arises, and the browser is sometimes overzealous in its protection. Simple static assets on Pages are safe to serve to any domain, since the request takes no action and there is no visitor session. Because of this, a domain owner can attach CORS headers in the _headers file to specify exactly which requests should be allowed, for fine-grained and explicit control.

For example, the use of the asterisk will enable any origin to request any asset from your Pages deployment:

/*
  Access-Control-Allow-Origin: *

To be more restrictive and limit requests to only be allowed from a ‘staging’ subdomain, we can do the following:

https://:project.pages.dev/*
  Access-Control-Allow-Origin: https://staging.:project.pages.dev

How we built support for custom headers

To support all these use cases for custom headers, we had to build a new engine to determine which rules to apply for each incoming request. Backed, of course, by Workers, this engine supports splats and placeholders, and allows you to include those matched values in your headers.

Although we don’t support all of its features, we’ve modeled this matching engine after the URLPattern specification which was recently shipped with Chrome 95. We plan to be able to fully implement this specification for custom headers once URLPattern lands in the Workers runtime, and there should hopefully be no breaking changes to migrate.
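For example, a hypothetical rule like the one below (an illustration, not taken from the engine’s actual docs) could capture a placeholder from the matched path and echo it back in a response header:

/movies/:title
  x-movie-name: You are watching ":title"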

Enhanced support for redirects

With this same engine, we’re bringing these features to your _redirects file as well. You can now configure your redirects with splats, placeholders and status codes as shown in the example below:

/blog/* https://blog.example.com/:splat 301
/products/:code/:name /products?name=:name&code=:code
/submit-form https://static-form.example.com/submit 307

Get started

Custom headers and redirects for Cloudflare Pages can be configured today. Check out our documentation to get started, and let us know how you’re using it in our Discord server. We’d love to hear about what this unlocks for your projects!

Coming up…

And finally, if a _headers file and enhanced support for _redirects just isn’t enough for you, we also have something big coming very soon which will give you the power to build even more powerful projects. Stay tuned!

Cloudflare for SaaS for All, now Generally Available!

Post Syndicated from Dina Kozlov original https://blog.cloudflare.com/cloudflare-for-saas-for-all-now-generally-available/

During Developer Week a few months ago, we opened up the Beta for Cloudflare for SaaS: a one-stop shop for SaaS providers looking to provide fast load times, unparalleled redundancy, and the strongest security to their customers.

Since then, we’ve seen numerous developers integrate with our technology, allowing them to spend their time building out their solution instead of focusing on the burdens of running a fast, secure, and scalable infrastructure — after all, that’s what we’re here for.

Today, we are very excited to announce that Cloudflare for SaaS is generally available, so that every customer, big and small, can use Cloudflare for SaaS to continue scaling and building their SaaS business.

What is Cloudflare for SaaS?

If you’re running a SaaS company, you have customers that are fully reliant on you for your service. That means you’re responsible for keeping their domain fast, secure, and protected. But this isn’t simple. There’s a long checklist you need to get through to put a solution in your customers’ hands:

  • Set up an origin server
  • Encrypt your customers’ traffic
  • Keep your customers online
  • Boost the performance of global customers
  • Support vanity domains
  • Protect against attacks and bots
  • Scale for growth
  • Provide insights and analytics

And on top of that, you need to also focus on building out your solution and your business. As a developer or startup with limited resources, this can delay your product launch by weeks or months.

That’s what we’re here to help with! We have numerous engineering teams whose sole focus is to work on products that take care of each one of these tasks, so you don’t have to!

The Cloudflare solution:

  • Set up an origin server  → Workers
  • Encrypt your customers’ traffic →  SSL for SaaS
  • Keep your customers online → Cloudflare’s global Anycast network
  • Boost the performance of global customers → Argo Smart Routing/Cache
  • Support vanity domains → Custom Hostnames
  • Protect against attacks and bots → WAF and Bot Management
  • Scale for growth → Workers
  • Provide insights and analytics → Custom Hostname Analytics

Pricing, Made for Developers

Starting today, Cloudflare for SaaS is available to purchase on Free, Pro, and Business plans. We wanted to make sure that the pricing made sense for developers. At the time of building, you don’t know how many customers you’ll have, so we wanted to offer flexibility by keeping the pricing as simple as possible: only pay for the customers you use.

Each customer domain using the service is called a Custom Hostname. For each Custom Hostname, we automatically provision a TLS certificate. But not just that!  Beyond the TLS certificate, each of your Custom Hostnames inherits the full suite of Cloudflare products that you set up on your SaaS zone. From Bot Management to Argo Smart Routing, you can extend these add-ons that protect and accelerate your domain to your customers.

Custom Hostnames cost two dollars per month. We will only charge you after each Custom Hostname has been onboarded, prorated according to when you created it. That means that if you created 10 Custom Hostnames at the start of the month and 10 Custom Hostnames halfway through, at the end of the month you will be billed $30: $2 for each of the first ten (a full month each) plus $1 for each of the second ten (half a month each).
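To see the proration spelled out, here’s a quick sketch of that arithmetic in Python (assuming, for simplicity, that “halfway through” means exactly half the month):

# Proration sketch: $2 per Custom Hostname per month, scaled by the
# fraction of the month each hostname was active.
def monthly_bill(active_fractions):
    return sum(2 * fraction for fraction in active_fractions)

print(monthly_bill([1.0] * 10 + [0.5] * 10))  # 30.0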

This way, you’re only charged for the Custom Hostnames that you provision. It’s also a great incentive to make sure you clean up after your churned customers.

If you’re an Enterprise customer and want to learn more about the benefits that you can get from Cloudflare for SaaS, make sure you check out our blog post about the latest developments.

Show us what you’re building!

During the beta alone, we’ve seen incredible projects built out on the platform. We wanted to showcase these developers to show you what’s possible. And even better, some of these have been built on our Workers platform! We’d love to see what you’re working on. Join our Discord channel and showcase your work! Have feature requests for us? Let us know!

mmm.page: Simple Personal Websites

mmm.page is a drag-and-drop website builder that makes it dead simple to create auto-responsive, collage-like websites: websites with overlapping text, images, GIFs, YouTube videos, Spotify embeds, and (a lot) more. To make it easier, all the standard website tedium — uptime, usability, performance, reliability, responsiveness, SEO, etc. — are handled under the hood so all you have to worry about is adding content and arranging it how you want.

Under their hood is Cloudflare. Cloudflare’s CDN allows both the flexibility of server-side pages as well as the instant loading times of static pages — not to mention an 80% reduction in server costs. Custom Hostnames alone saved months of development time by handling domain names and SSL management (which are otherwise tricky to get perfect and reliable).

They’ve used Workers for increasingly more tasks that would’ve otherwise taken an order of magnitude more time if implemented with their current backend monolith — the ease of deployment and comparatively low cost of Workers is something that keeps them coming back.

The longer-term hope is for pages to be used as a sort of beacon signal, an easy-to-make yet unbounded way to express to others the things you’re interested in, especially for things that aren’t so easily describable or captured in words. They look forward to a world of a ton more DIY micro-sites. Cloudflare has been crucial in taking care of much of the difficult technical plumbing and giving them more time to work on designs and features that get them closer to this hope.

Lightfunnels

Lightfunnels is a performance driven e-commerce and lead generation platform. It focuses on delivering fast, reliable, and highly converting sales funnels to its users and their customers.

With Cloudflare for SaaS, Lightfunnels allows users to preserve their brand by easily connecting their own domain names with SSL to use on their funnels.

The platform handles large e-commerce traffic volume through Cloudflare Workers. This helps Lightfunnels serve pages from the closest edge to the customer, wherever they are in the world, allowing for blazing fast page load speeds.

Workers also come with a powerful caching API that eliminates a great percentage of back-end trips and reduces the stress on their servers.

“Our aim is to build the best performing e-commerce and lead generation platform on the market. Page load speeds play a significant role in performance. Using Cloudflare for SaaS along with Cloudflare Workers made building a reliable, secure, and fast infrastructure a breeze.”
Yassir Ennazk, Co-founder & CEO at Lightfunnels

Ventrata

Ventrata is a SaaS multi-channel booking platform for large attractions and tour operators. They power booking sites and B2B booking portals for clients that run on other domains. Cloudflare for SaaS has allowed them to leverage all of Cloudflare’s tools, including Firewall, image caching, Workers, and free TLS certificates on Custom Hostnames, while allowing their clients to keep full control of their brand. Their implementation involved just 4 lines of code without any infrastructure/DevOps help required, which would have been impossible before.

Currently a part of the Beta?

If you were accepted as a part of the Cloudflare for SaaS Beta, you will get a notice next week about migrating to the paid version.  

Help build a better Internet

Want to be a part of the Cloudflare team and work on the products that power Cloudflare for SaaS? We’re hiring!

Backwards-compatibility in Cloudflare Workers

Post Syndicated from Kenton Varda original https://blog.cloudflare.com/backwards-compatibility-in-cloudflare-workers/

Cloudflare Workers is our serverless platform that runs your code in 250+ cities worldwide.

On the Workers team, we have a policy:

A change to the Workers Runtime must never break an application that is live in production.

It seems obvious enough, but this policy has deep consequences. What if our API has a bug, and some deployed Workers accidentally depend on that bug? Then, seemingly, we can’t fix the bug! That sounds… bad?

This post will dig deeper into our policy, explaining why Workers is different from traditional server stacks in this respect, and how we’re now making backwards-incompatible changes possible by introducing “compatibility dates”.

TL;DR: Developers may now opt into backwards-incompatible fixes by setting a compatibility date.

Serverless demands strict compatibility

Workers is a serverless platform, which means we maintain the server stack for you. You do not have to manage the runtime version, you only manage your own code. This means that when we update the Workers Runtime, we update it for everyone. We do this at least once a week, sometimes more.

This means that if a runtime upgrade breaks someone’s application, it’s really bad. The developer didn’t make any change, so won’t be watching for problems. They may be asleep, or on vacation. If we want people to trust serverless, we can’t let this happen.

This is very different from traditional server platforms, where the developer maintains their own stack. For example, when a developer maintains a traditional VM-based server running Node.js applications, then the developer must decide exactly when to upgrade to a new version of Node.js. Careful developers do not upgrade Node.js 14 to Node.js 16 in production without testing first. They typically verify that their application works in a staging environment before going to production. A developer who doesn’t have time to spend testing each new version may instead choose to rely on a long-term support release, applying only low-risk security patches.

In the old world, if the Node.js maintainers decide to make a breaking change to an obscure API between releases, it’s OK. Downstream developers are expected to test their code before upgrading, and address any breakages. But in the serverless world, it’s not OK: developers have no control over when upgrades happen, therefore upgrades must never break anything.

But sometimes we need to fix things

Sometimes, we get things wrong, and we need to fix them. But sometimes, the fix would break people.

For example, in Workers, the fetch() function is used to make outgoing HTTP requests. Unfortunately, due to an oversight, our original implementation of fetch(), when given a non-HTTP URL, would silently interpret it as HTTP instead. So if you did fetch("ftp://example.com"), you’d get the same result as fetch("http://example.com").

This is obviously not what we want and could lead to confusion or deeper bugs. Instead, fetch() should throw an exception in these cases. However, we couldn’t simply fix the problem, because a surprising number of live Workers depended on the behavior. For whatever reason, some Workers fetch FTP URLs and expect to get a result back. Perhaps they are fetching from sites that support both FTP and HTTP, and they arbitrarily chose FTP and it worked. Perhaps the fetches aren’t actually working, but changing a 404 error result into an exception would break things worse. When you have tens of thousands of new developers deploying applications every month, inevitably there’s always someone relying on any bug. We can’t “fix” the bug because it would break these applications.

The obvious solutions don’t work

Could we contact developers and ask them to fix their code?

No, because the problem is our fault, not the application developer’s, and the developer may not have time to help us fix our problems.

The fact that a Worker is doing something “wrong” — like using an FTP URL when they should be using HTTP — doesn’t necessarily mean the developer did anything wrong. Everyone writes code with bugs. Good developers rely on careful testing to make sure their code does what it is supposed to.

But what if the test only worked because of a bug in the underlying platform that caused it to do the right thing by accident? Well, that’s the platform’s fault. The developer did everything they could: they tested their code thoroughly, and it worked.

Developers are busy people. Nobody likes hearing that they need to drop whatever they are doing to fix a problem in code that they thought worked — especially code that has been working fine for years without anyone touching it. We think developers have enough on their plates already, we shouldn’t be adding more work.

Could we run multiple versions of the Workers Runtime?

No, for three reasons.

First, in order for edge computing to be effective, we need to be able to host a very large number of applications in each instance of the Workers Runtime. This is what allows us to run your code in hundreds of locations around the world at minimal cost. If we ran a separate copy of the runtime for each application, we’d need to charge a lot more, or deploy your code to far fewer locations. So, realistically it is infeasible for us to have different Workers asking for different versions of the runtime.

Second, part of the promise of serverless is that developers shouldn’t have to worry about updating their stack. If we start letting people pin old versions, then we have to start telling people how long they are allowed to do so, alerting people about security updates, giving people documentation that differentiates versions, and so on. We don’t want developers to have to think about any of that.

Third, this doesn’t actually solve the real problem anyway. We can easily implement multiple behaviors within the same runtime binary. But how do we know which behavior to use for any particular Worker?

Introducing Compatibility Dates

Going forward, every Worker is assigned a “compatibility date”, which must be a date in the past. The date is specified inside the project’s metadata (for Wrangler projects, in wrangler.toml). This metadata is passed to the Cloudflare API along with the application code whenever it is updated and deployed. A compatibility date typically starts out as the date when the Worker was first created, but can be updated from time to time.

# wrangler.toml
compatibility_date = "2021-09-20"

We can now introduce breaking changes. When we do, the Workers Runtime must implement both the old and the new behavior, and chooses behavior based on the compatibility date. Each time we introduce a new change, we choose a date in the future when that change will become the default. Workers with a later compatibility date will see the change; Workers with an older compatibility date will retain the old behavior.
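As a loose illustration of that mechanism (not Cloudflare’s actual implementation, and with a hypothetical flag name and rollout date), the selection logic amounts to keeping both code paths and branching on the Worker’s compatibility date. Here is a sketch in Python, reusing the fetch() FTP example from earlier:

# A toy sketch of date-gated behavior selection. The constant name and
# rollout date are hypothetical, not Cloudflare's actual values.
from datetime import date

FETCH_REJECTS_UNKNOWN_PROTOCOLS = date(2021, 11, 10)  # hypothetical

def resolve_fetch_url(url, compatibility_date):
    """Return the URL that fetch() would actually request."""
    if url.startswith("ftp://"):
        if compatibility_date >= FETCH_REJECTS_UNKNOWN_PROTOCOLS:
            # New behavior: non-HTTP protocols are an error.
            raise ValueError("unsupported protocol: " + url)
        # Old behavior: silently reinterpret the URL as HTTP.
        return "http://" + url[len("ftp://"):]
    return url

# A Worker with an old compatibility date keeps the old behavior:
assert resolve_fetch_url("ftp://example.com", date(2021, 9, 20)) == "http://example.com"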

A page in our documentation lists the history of breaking changes — and only breaking changes. When you wish to update your Worker’s compatibility date, you can refer to this page to quickly determine what might be affected, so that you can test for problems.

We will reserve the compatibility system strictly for changes which cannot be made without causing a breakage. We don’t want to force people to update their compatibility date to get regular updates, including new features, non-breaking bug fixes, and so on.

If you’d prefer never to update your compatibility date, that’s OK! Old compatibility dates are intended to be supported forever. However, if you are frequently updating your code, you should update your compatibility date along with it.

Acknowledgement

While the details are a bit different, we were inspired by Stripe’s API versioning, as well as the absolute promise of backwards compatibility maintained by both the Linux kernel system call API and the Web Platform implemented by browsers.

Building Cloudflare Images in Rust and Cloudflare Workers

Post Syndicated from Yevgen Safronov original https://blog.cloudflare.com/building-cloudflare-images-in-rust-and-cloudflare-workers/

This post explains how we implemented the Cloudflare Images product with reusable Rust libraries and Cloudflare Workers. It covers the technical design of Cloudflare Image Resizing and Cloudflare Images. Using Rust and Cloudflare Workers helps us quickly iterate and deliver product improvements over the coming weeks and months.

Reuse of code in Rusty image projects

We developed Image Resizing in Rust. It’s a web server that receives HTTP requests for images along with resizing options, fetches the full-size images from the origin, applies resizing and other image processing operations, compresses, and returns the HTTP response with the optimized image.

Rust makes it easy to split projects into libraries (called crates). The image processing and compression parts of Image Resizing are usable as libraries.

We also have a product called  Polish, which is a Golang-based service that recompresses images in our cache. Polish was initially designed to run command-line programs like jpegtran and pngcrush. We took the core of Image Resizing and wrapped it in a command-line executable. This way, when Polish needs to apply lossy compression or generate WebP images or animations, it can use Image Resizing via a command-line tool instead of a third-party tool.

Reusing libraries has allowed us to easily unify processing between Image Resizing and Polish (for example, to ensure that both handle metadata and color profiles in the same way).

Cloudflare Images is another product we’ve built in Rust. It added support for a custom storage back-end, variants (size presets), support for signing URLs and more. We made it as a collection of Rust crates, so we can reuse pieces of it in other services running anywhere in our network. Image Resizing provides image processing for Cloudflare Images and shares libraries with Images to understand the new URL scheme, access the storage back-end, and database for variants.

How Image Resizing works

The Image Resizing service runs at the edge and is deployed on every server of the Cloudflare global network. Thanks to Cloudflare’s global Anycast network, the closest Cloudflare data center will handle eyeball image resizing requests. Image Resizing is tightly integrated with the Cloudflare cache and handles eyeball requests only on a cache miss.

There are two ways to use Image Resizing. The default URL scheme provides an easy, declarative way of specifying image dimensions and other options. The other way is to use a JavaScript API in a Worker. Cloudflare Workers give powerful programmatic control over every image resizing request.

How Cloudflare Images work

Cloudflare Images consists of the following components:

  • The Images core service that powers the public API to manage image assets.
  • The Image Resizing service responsible for image transformations and caching.
  • The Image delivery Cloudflare Worker responsible for serving images and passing corresponding parameters through to the Image Resizing service.
  • Image storage that provides access and storage for original image assets.

To support Cloudflare Images scenarios for image transformations, we made several changes to the Image Resizing service:

  • Added access to Cloudflare storage with original image assets.
  • Added access to variant definitions (size presets).
  • Added support for signing URLs.

Image delivery

The primary use case for Cloudflare Images is to provide a simple and easy-to-use way of managing image assets. To cover egress costs, we provide image delivery through the Cloudflare-managed imagedelivery.net domain. It is configured with Tiered Caching to maximize the cache hit ratio for image assets. imagedelivery.net provides image hosting without the need to configure a custom domain to proxy through Cloudflare.

A Cloudflare Worker powers image delivery. It parses image URLs and passes the corresponding parameters to the image resizing service.

How we store Cloudflare Images

There are several places we store information on Cloudflare Images:

  • image metadata in Cloudflare’s core data centers
  • variant definitions in Cloudflare’s edge data centers
  • original images in core data centers
  • optimized images in Cloudflare cache, physically close to eyeballs.

Image variant definitions are stored and delivered to the edge using Cloudflare’s distributed key-value store called Quicksilver. We use a single source of truth for variants. The Images core service makes calls to Quicksilver to read and update variant definitions.

The rest of the information about the image is stored in the image URL itself:
https://imagedelivery.net/<encoded account id>/<image id>/<variant name>

<image id> contains a flag indicating whether the image is publicly available or requires access verification. It’s not feasible to store any image metadata in Quicksilver, as the data volume would increase linearly with the number of images we host. Instead, we only allow a finite number of variants per account, so we use the available disk space on the edge responsibly. The downside of storing image metadata as part of <image id> is that <image id> changes whenever the image’s access setting changes.
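To make the scheme concrete, here is a small parsing sketch in Python (illustrative only; the production delivery Worker is JavaScript, and the values below are made up):

# Decomposing a delivery URL according to the scheme above.
from urllib.parse import urlparse

def parse_delivery_url(url):
    account_id, image_id, variant_name = urlparse(url).path.lstrip("/").split("/")
    return account_id, image_id, variant_name

print(parse_delivery_url("https://imagedelivery.net/abc123/some-image-id/thumbnail"))
# ('abc123', 'some-image-id', 'thumbnail')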

How we keep Cloudflare Images up to date

The only way to access images is through the use of variants. Each variant is a named image resizing configuration. Once the image asset is fetched, we cache the transformed image in the Cloudflare cache. The critical question is how we keep processed images up to date. The answer is by purging the Cloudflare cache when necessary. There are two use cases:

  • access to the image is changed
  • the variant definition is updated

In the first instance, we purge the cache by calling a URL:
https://imagedelivery.net/<encoded account id>/<image id>

When the customer updates a variant, we issue a cache purge request by tag:
account-id/variant-name

To support cache purge by tag, the image resizing service adds the necessary tags for all transformed images.

How we restrict access to Cloudflare Images

The Image Resizing service supports restricted access to images by using URL signatures with expiration. URLs are signed using HMAC-SHA256. The steps to produce a valid signature are listed below, with a short sketch following the list:

  1. Take the path and query string (the path starts with /).
  2. Compute the path’s SHA-256 HMAC with the query string, using the Images’ URL signing key as the secret. The key is configured in the Dashboard.
  3. If the URL is meant to expire, compute the Unix timestamp (number of seconds since 1970) of the expiration time, and append ?exp= and the timestamp as an integer to the URL.
  4. Append ? or & to the URL as appropriate (? if it had no query string; & if it had a query string).
  5. Append sig= and the HMAC as hex-encoded 64 characters.
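As a rough sketch, the steps above translate to a few lines of Python. This follows the steps exactly as listed (it is not an official client), using the example secret from below:

#!/usr/bin/env python
import hashlib
import hmac

signing_key = b'this is a secret'  # your Images URL signing key from the Dashboard

def sign_url(path_and_query, exp=None):
    # Steps 1-2: compute the SHA-256 HMAC of the path and query string.
    sig = hmac.new(signing_key, path_and_query.encode(), hashlib.sha256).hexdigest()
    url = path_and_query
    # Step 3: optionally append the expiration timestamp.
    if exp is not None:
        url += ('&' if '?' in url else '?') + f'exp={exp}'
    # Steps 4-5: append ? or & as appropriate, then sig= and the hex-encoded HMAC.
    url += ('&' if '?' in url else '?') + f'sig={sig}'
    return url

# Prints /hello/world?sig=6293f9144b4e9adc83416d1b059abcac750bf05b2c5c99ea72fd47cc9c2ace34
# (matching the example below).
print(sign_url('/hello/world'))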

A signed URL looks like this:

https://imagedelivery.net/<encoded account id>/<image id>/<variant name>?sig=<64 hex characters>

A signed URL with an expiration timestamp looks like this:

https://imagedelivery.net/<encoded account id>/<image id>/<variant name>?exp=<timestamp>&sig=<64 hex characters>

For example, signing the /hello/world path with the secret ‘this is a secret’ yields 6293f9144b4e9adc83416d1b059abcac750bf05b2c5c99ea72fd47cc9c2ace34:

https://imagedelivery.net/hello/world?sig=6293f9144b4e9adc83416d1b059abcac750bf05b2c5c99ea72fd47cc9c2ace34

Direct creator uploads with Cloudflare Worker and KV

Similar to Cloudflare Stream, Images supports direct creator uploads, which allow users to upload images without API tokens. Direct creator uploads are typically used by web apps, client-side applications, or mobile apps where users upload content directly to Cloudflare Images.

Once again, we used our serverless platform to support direct creator uploads. The successful API call stores the account’s information in Workers KV with the specified expiration date. A simple Cloudflare Worker handles the upload URL, which reads the KV value and grants upload access only on a successful call to KV.
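The flow looks roughly like the following sketch. The real implementation is a Cloudflare Worker backed by Workers KV; every name, TTL, and URL below is illustrative, not the actual API:

import time
import uuid

kv = {}  # stands in for Workers KV

def create_direct_upload(account_id, ttl_seconds=1800):
    # The API call stores the account's information with an expiration date.
    token = uuid.uuid4().hex
    kv[token] = {'account': account_id, 'expires': time.time() + ttl_seconds}
    return f'https://upload.example.com/{token}'  # hypothetical upload URL

def handle_upload(token):
    # The upload Worker grants access only if the KV entry exists and is fresh.
    entry = kv.get(token)
    if entry is None or entry['expires'] < time.time():
        return 403  # reject: missing or expired
    return 200  # accept: proceed with storing the image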

Future Work

Cloudflare Images product has an exciting product roadmap. Let’s review what’s possible with the current architecture of Cloudflare Images.

Resizing hints on upload

At the moment, no image transformations happen on upload. That means we can serve the image globally once it is uploaded to Image storage. We are considering adding resizing hints on image upload. That won’t necessarily schedule image processing in all cases but could provide a valuable signal to resize the most critical image variants. An example could be to generate an AVIF variant for the most vital image assets.

Serving images from custom domains

We think serving images from a domain we manage (with Tiered Caching) is a great default option for many customers. The downside is that loading Cloudflare Images requires additional TLS negotiations on the client side, adding latency and impacting loading performance. On the other hand, serving Cloudflare Images from custom domains will be a viable option for customers who set up a website through Cloudflare. The good news is that we can support such functionality with the current architecture, without radical changes.

Conclusion

The Cloudflare Images product runs on top of the Cloudflare global network. We built Cloudflare Images in Rust and Cloudflare Workers, which lets us reuse Rust libraries across several products such as Cloudflare Images, Image Resizing, and Polish. Cloudflare’s serverless platform is an indispensable tool for building Cloudflare products internally. If you are interested in building innovative products in Rust and Cloudflare Workers, we’re hiring.

Pin, Unpin, and why Rust needs them

Post Syndicated from Adam Chalmers original https://blog.cloudflare.com/pin-and-unpin-in-rust/

Using async Rust libraries is usually easy. It’s just like using normal Rust code, with a little async or .await here and there. But writing your own async libraries can be hard. The first time I tried this, I got really confused by arcane, esoteric syntax like T: ?Unpin and Pin<&mut Self>. I had never seen these types before, and I didn’t understand what they were doing. Now that I understand them, I’ve written the explainer I wish I could have read back then. In this post, we’re gonna learn

  • What Futures are
  • What self-referential types are
  • Why they were unsafe
  • How Pin/Unpin made them safe
  • Using Pin/Unpin to write tricky nested futures

What are Futures?

A few years ago, I needed to write some code which would take some async function, run it and collect some metrics about it, e.g. how long it took to resolve. I wanted to write a type TimedWrapper that would work like this:

// Some async function, e.g. polling a URL with [https://docs.rs/reqwest]
// Remember, Rust functions do nothing until you .await them, so this isn't
// actually making a HTTP request yet.
let async_fn = reqwest::get("http://adamchalmers.com");

// Wrap the async function in my hypothetical wrapper.
let timed_async_fn = TimedWrapper::new(async_fn);

// Call the async function, which will send a HTTP request and time it.
let (resp, time) = timed_async_fn.await;
println!("Got a HTTP {} in {}ms", resp.unwrap().status(), time.as_millis())

I like this interface, it’s simple and should be easy for the other programmers on my team to use. OK, let’s implement it! I know that, under the hood, Rust’s async functions are just regular functions that return a Future. The Future trait is pretty simple. It just means a type which:

  • Can be polled
  • When it’s polled, it might return “Pending” or “Ready”
  • If it’s pending, you should poll it again later
  • If it’s ready, it responds with a value. We call this “resolving”.

Here’s a really easy example of implementing a Future. Let’s make a Future that returns a random u16.

use std::{future::Future, pin::Pin, task::{Context, Poll}};

/// A future which returns a random number when it resolves.
/// (Requires the `rand` crate in Cargo.toml.)
#[derive(Default)]
struct RandFuture;

impl Future for RandFuture {
	// Every future has to specify what type of value it returns when it resolves.
	// This particular future will return a u16.
	type Output = u16;

	// The `Future` trait has only one method, named "poll".
	fn poll(self: Pin<&mut Self>, _cx: &mut Context) -> Poll<Self::Output> {
		Poll::Ready(rand::random())
	}
}

Not too hard! I think we’re ready to implement TimedWrapper.

Trying and failing to use nested Futures

Let’s start by defining the type.

pub struct TimedWrapper<Fut: Future> {
	start: Option<Instant>,
	future: Fut,
}

OK, so a TimedWrapper is generic over a type Fut, which must be a Future. And it will store a future of that type as a field. It’ll also have a start field which will record when it was first polled. Let’s write a constructor:

impl<Fut: Future> TimedWrapper<Fut> {
	pub fn new(future: Fut) -> Self {
		Self { future, start: None }
	}
}

Nothing too complicated here. The new function takes a future and wraps it in the TimedWrapper. Of course, we have to set start to None, because it hasn’t been polled yet. So, let’s implement the poll method, which is the only thing we need to implement Future and make it .awaitable.

impl<Fut: Future> Future for TimedWrapper<Fut> {
	// This future will output a pair of values:
	// 1. The value from the inner future
	// 2. How long it took for the inner future to resolve
	type Output = (Fut::Output, Duration);

	fn poll(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Self::Output> {
		// Call the inner poll, measuring how long it took.
		let start = self.start.get_or_insert_with(Instant::now);
		let inner_poll = self.future.poll(cx);
		let elapsed = start.elapsed();

		match inner_poll {
			// The inner future needs more time, so this future needs more time too
			Poll::Pending => Poll::Pending,
			// Success!
			Poll::Ready(output) => Poll::Ready((output, elapsed)),
		}
	}
}

OK, that wasn’t too hard. There’s just one problem: this doesn’t work.

So, the Rust compiler reports an error on self.future.poll(cx), which is “no method named poll found for type parameter Fut in the current scope”. This is confusing, because we know Fut is a Future, so surely it has a poll method? OK, but Rust continues: Fut doesn’t have a poll method, but Pin<&mut Fut> has one. What is this weird type?

Well, we know that methods have a “receiver”, which is some way it can access self. The receiver might be self, &self or &mut self, which mean “take ownership of self,” “borrow self,” and “mutably borrow self” respectively. So this is just a new, unfamiliar kind of receiver. Rust is complaining because we have Fut and we really need a Pin<&mut Fut>. At this point I have two questions:

  1. What is Pin?
  2. If I have a T value, how do I get a Pin<&mut T>?

The rest of this post is going to be answering those questions. I’ll explain some problems in Rust that could lead to unsafe code, and why Pin safely solves them.

Self-reference is unsafe

Pin exists to solve a very specific problem: self-referential datatypes, i.e. data structures which have pointers into themselves. For example, a binary search tree might have self-referential pointers, which point to other nodes in the same struct.

Self-referential types can be really useful, but they’re also hard to make memory-safe. To see why, let’s use this example type with two fields, an i32 called val and a pointer to an i32 called pointer.

So far, everything is OK. The pointer field points to the val field in memory address A, which contains a valid i32. All the pointers are valid, i.e. they point to memory that does indeed encode a value of the right type (in this case, an i32). But the Rust compiler often moves values around in memory. For example, if we pass this struct into another function, it might get moved to a different memory address. Or we might Box it and put it on the heap. Or if this struct was in a Vec<MyStruct>, and we pushed more values in, the Vec might outgrow its capacity and need to move its elements into a new, larger buffer.

When we move it, the struct’s fields change their address, but not their value. So the pointer field is still pointing at address A, but address A now doesn’t have a valid i32. The data that was there was moved to address B, and some other value might have been written there instead! So now the pointer is invalid. This is bad — at best, invalid pointers cause crashes, at worst they cause hackable vulnerabilities. We only want to allow memory-unsafe behaviour in unsafe blocks, and we should be very careful to document this type and tell users to update the pointers after moves.

Unpin and !Unpin

To recap, all Rust types fall into two categories.

  1. Types that are safe to move around in memory. This is the default, the norm. For example, this includes primitives like numbers, strings, bools, as well as structs or enums entirely made of them. Most types fall into this category!
  2. Self-referential types, which are not safe to move around in memory. These are pretty rare. An example is the intrusive linked list inside some Tokio internals. Another example is most types which implement Future and also borrow data, for reasons explained in the Rust async book.

Types in category (1) are totally safe to move around in memory. You won’t invalidate any pointers by moving them around. But if you move a type in (2), then you invalidate pointers and can get undefined behaviour, as we saw before. In earlier versions of Rust, you had to be really careful using these types to not move them, or if you moved them, to use unsafe and update all the pointers. But since Rust 1.33, the compiler can automatically figure out which category any type is in, and make sure you only use it safely.

Any type in (1) implements a special auto trait called Unpin. Weird name, but its meaning will become clear soon. Again, most “normal” types implement Unpin, and because it’s an auto trait (like Send or Sync or Sized), you don’t have to worry about implementing it yourself. If you’re unsure if a type can be safely moved, just check it on docs.rs and see if it impls Unpin!

Types in (2) are creatively named !Unpin (the ! in a trait means “does not implement”). To use these types safely, we can’t use regular pointers for self-reference. Instead, we use special pointers that “pin” their values into place, ensuring they can’t be moved. This is exactly what the Pin type does.

Pin wraps a pointer and stops its value from moving. The only exception is if the value impls Unpin — then we know it’s safe to move. Voila! Now we can write self-referential structs safely! This is really important, because as discussed above, many Futures are self-referential, and we need them for async/await.

Using Pin

So now we understand why Pin exists, and why our Future poll method takes a pinned reference, Pin<&mut Self>, instead of a regular &mut self. So let’s get back to the problem we had before: I need a pinned reference to the inner future. More generally: given a pinned struct, how do we access its fields?

The solution is to write helper functions which give you references to the fields. These references might be normal Rust references like &mut, or they might also be pinned. You can choose whichever one you need. This is called projection: if you have a pinned struct, you can write a projection method that gives you access to all its fields.

Projecting is really just getting data into and out of Pins. For example, we get the start: Option<Instant> field from the Pin<&mut Self>, and we need to put the future: Fut into a Pin so we can call its poll method. If you read the Pin methods you’ll see this is always safe if it points to an Unpin value, but requires unsafe otherwise.

// Putting data into Pin
pub        fn new          <P: Deref<Target: Unpin>>(pointer: P) -> Pin<P>;
pub unsafe fn new_unchecked<P>                     (pointer: P) -> Pin<P>;

// Getting data from Pin
pub        fn into_inner          <P: Deref<Target: Unpin>>(pin: Pin<P>) -> P;
pub unsafe fn into_inner_unchecked<P>                      (pin: Pin<P>) -> P;

I know unsafe can be a bit scary, but it’s OK to write unsafe code! I think of unsafe as the compiler saying “hey, I can’t tell if this code follows the rules here, so I’m going to rely on you to check for me.” The Rust compiler does so much work for us, it’s only fair that we do some of the work every now and then. If you want to learn how to write your own projection methods, I can highly recommend this fasterthanli.me blog post on the topic. But we’re going to take a little shortcut.

Using pin-project instead

So, OK, look, it’s time for a confession: I don’t like using unsafe. I know I just explained why it’s OK, but still, given the option, I would rather not.

I didn’t start writing Rust because I wanted to carefully think about the consequences of my actions, damnit, I just want to go fast and not break things. Luckily, someone sympathized with me and made a crate which generates totally safe projections! It’s called pin-project and it’s awesome. All we need to do is change our definition:

#[pin_project::pin_project] // This generates a `project` method
pub struct TimedWrapper<Fut: Future> {
	// For each field, we need to choose whether `project` returns an
	// unpinned (&mut T) or pinned (Pin<&mut T>) reference to the field.
	// By default, it assumes unpinned:
	start: Option<Instant>,
	// Opt into pinned references with this attribute:
	#[pin]
	future: Fut,
}

For each field, you have to choose whether its projection should be pinned or not. By default, you should use a normal reference, just because they’re easier and simpler. But if you know you need a pinned reference — for example, because you want to call .poll(), whose receiver is Pin<&mut Self> — then you can do that with #[pin].

Now we can finally poll the inner future!

fn poll(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Self::Output> {
	// This returns a type with all the same fields, with all the same types,
	// except that the fields defined with #[pin] will be pinned.
	let mut this = self.project();
	
	// Call the inner poll, measuring how long it took.
	let start = this.start.get_or_insert_with(Instant::now);
	let inner_poll = this.future.as_mut().poll(cx);
	let elapsed = start.elapsed();

	match inner_poll {
		// The inner future needs more time, so this future needs more time too
		Poll::Pending => Poll::Pending,
		// Success!
		Poll::Ready(output) => Poll::Ready((output, elapsed)),
	}
}

Finally, our goal is complete — and we did it all without any unsafe code.

Summary

If a Rust type has self-referential pointers, it can’t be moved safely. After all, moving doesn’t update the pointers, so they’ll still be pointing at the old memory address, so they’re now invalid. Rust can automatically tell which types are safe to move (and will auto impl the Unpin trait for them). If you have a Pin-ned pointer to some data, Rust can guarantee that nothing unsafe will happen (if it’s safe to move, you can move it, if it’s unsafe to move, then you can’t). This is important because many Future types are self-referential, so we need Pin to safely poll a Future. You probably won’t have to poll a future yourself (just use async/await instead), but if you do, use the pin-project crate to simplify things.

I hope this helped — if you have any questions, please ask me on Twitter. And if you want to get paid to talk to me about Rust and networking protocols, my team at Cloudflare is hiring, so be sure to visit careers.cloudflare.com.

Introducing logs from the dashboard for Cloudflare Workers

Post Syndicated from Ashcon Partovi original https://blog.cloudflare.com/workers-dashboard-logs/

If you’re writing code: what can go wrong, will go wrong.

Many developers know the feeling: “It worked in the local testing suite, it worked in our staging environment, but… it’s broken in production?” Testing can reduce mistakes and debugging can help find them, but logs give us the tools to understand and improve what we are creating.

if (this === undefined) {
  console.log("there’s no way… right?") // Narrator: there was.
}

While logging can help you understand when the seemingly impossible is actually possible, it’s something that no developer really wants to set up or maintain on their own. That’s why we’re excited to launch a new addition to the Cloudflare Workers platform: logs and exceptions from the dashboard.

Starting today, you can view and filter the console.log output and exceptions from a Worker… at no additional cost with no configuration needed!

View logs, just a click away

When you view a Worker in the dashboard, you’ll now see a “Logs” tab which you can click on to view a detailed stream of logs and exceptions. Here’s what it looks like in action:

Each log entry contains an event with a list of logs, exceptions, and request headers if it was triggered by an HTTP request. We also automatically redact sensitive URLs and headers such as Authorization, Cookie, or anything else that appears to have a sensitive name.

If you are in the Durable Objects open beta, you will also be able to view the logs and requests sent to each Durable Object. This is a great tool to help you understand and debug the interactions between your Worker and a Durable Object.

For now, we support filtering by event status and type, though you can expect more filters to be added to the dashboard very soon! Today, we already support advanced filtering with the wrangler CLI, which is discussed later in this post.

console.log(), and you’re all set

It’s really simple to get started with logging for Workers. Simply invoke one of the standard console APIs, such as console.log(), and we handle the rest. That’s it! There’s no extra setup, no configuration needed, and no hidden logging fees.

function logRequest (request) {
  const { cf, headers } = request
  const { city, region, country, colo, clientTcpRtt } = cf
  
  console.log("Detected location:", [city, region, country].filter(Boolean).join(", "))
  if (clientTcpRtt) {
     console.debug("Round-trip time from client to", colo, "is", clientTcpRtt, "ms")
  }

  // You can also pass an object, which will be interpreted as JSON.
  // This is great if you want to define your own structured log schema.
  console.log({ headers })
}

In fact, you don’t even need to use console.log to view an event from the dashboard. If your Worker doesn’t generate any logs or exceptions, you will still be able to see the request headers from the event.

Advanced filters, from your terminal

If you need more advanced filters you can use wrangler, our command-line tool for deploying Workers. We’ve updated the wrangler tail command to support sampling and a new set of advanced filters. You also no longer need to install or configure cloudflared to use the command. Not to mention it’s much faster: no more waiting around for logs to appear. Here are a few examples:

# Filter by your own IP address, and if there was an uncaught exception.
wrangler tail --format=pretty --ip-address=self --status=error

# Filter by HTTP method, then apply a 10% sampling rate.
wrangler tail --format=pretty --method=GET --sampling-rate=0.1

# Filter using a generic search query.
wrangler tail --format=pretty --search="TypeError"

We recommend using the “pretty” format, since wrangler will output your logs in a colored, human-readable format. (We’re also working on a similar display for the dashboard.)

However, if you want to access structured logs, you can use the “json” format. This is great if you want to pipe your logs to another tool, such as jq, or save them to a file. Here are a few more examples:

# Parses each log event, but only outputs the url.
wrangler tail --format=json | jq .event.request?.url

# You can also specify --once to disconnect the tail after receiving the first log.
# This is useful if you want to run tests in a CI/CD environment.
wrangler tail --format=json --once > event.json

Try it out!

Both logs from the dashboard and wrangler tail are available and free for existing Workers customers. If you would like more information or a step-by-step guide, check out any of the resources below.

Cloudflare Developer Summer Challenge

Post Syndicated from Jenanne Vaccaro original https://blog.cloudflare.com/developer-summer-challenge/

Cloudflare Developer Summer Challenge

There are a lot of experiences we have all grown to miss over the last year and a half. After hearing from our community, two of the top experiences they miss are collaborating with peers, and getting Cloudflare swag. Perhaps even in the reverse order! In-person events like conferences were once a key channel to satisfy both these interests, however today’s remote world makes that much harder. But does it have to?

Today, we are excited to introduce the Cloudflare Developer Summer Challenge. We will be rewarding 300 participants with boxes of our most popular swag, while enabling collaboration with other participants through our Workers Discord channel.

To participate, you have to build a project that uses Cloudflare Workers and at least one other product in our rapidly expanding developer platform. We will judge submissions and award swag boxes to those with the most innovative projects, limited to one box per person. The Challenge is open for submissions starting today and closing November 1, 2021. See a full list of terms and conditions here.

What are the details?

Cloudflare’s developer platform offers all the building blocks to create end-to-end applications with products across compute, storage, and frontend services. To successfully participate in the Cloudflare Developer Summer Challenge, you need to build a project with at least two of the following products in the Cloudflare developer platform. Bonus points for using more! These products include:

  • Cloudflare Workers: an edge-based serverless computing platform where you can deploy code automatically worldwide, with speed, security, and scale baked in.
  • Workers KV: a global low-latency key value data store for exceptionally high read volumes, making it possible to build highly dynamic APIs and websites which respond as quickly as a cached static file would.
  • Durable Objects: a storage platform providing low-latency coordination and consistent storage at the edge, enabling stateful serverless use cases.
  • Cloudflare Pages: a Jamstack web development platform to collaborate on and deploy high performance sites quickly across the world.

How will submissions be judged?

Submissions will be based on three criteria:

  • The first criterion is meeting the basic requirements of the challenge. This includes using at least two products in the platform, and submitting the live link to your project and your code repository. You must submit before November 1, 2021.
  • The second criterion is innovation. Projects that are unique and useful to their users will be more likely to win the swag boxes.
  • The third criterion is the breadth of the Cloudflare platform you use. The more products you integrate, the better!

How do I participate?

Step 1: Get Started: You can get started with the developer platform by visiting the Cloudflare Workers Quickstart Guide, and reviewing the Workers KV and/or Durable Objects documentation. To host your frontend site on Cloudflare Pages, you can visit the Pages Quickstart Guide.

Step 2: Build your Project: If you would like inspiration, you can view our tutorials, and examples. You can also view our Built With Workers page. If you want to try and build something more advanced, you can see some additional examples below.

Step 3: Share your Project: Successful submission includes a link to your live project, and a link to your code repository. We also encourage you to share your work, or pictures of you unboxing or using your swag, by tagging @CloudflareDev on Twitter with the hashtag #CloudflareSummerChallenge.

Optional: Engage with the Community

To promote collaboration and interaction with peers, we will have a dedicated channel in the Cloudflare Workers Discord server. If you want to, you can learn what others are building, share your project, or join Q&A sessions if you have questions or need help getting started. You can also meet developers participating around the world.

What are examples of advanced projects I can build?

The Cloudflare developer platform is ideal for a wide range of use cases – from augmenting existing applications to building entirely new ones without needing to maintain underlying infrastructure. Essentially, you write the code, and we handle the rest. Running on the Cloudflare edge network, applications scale automatically and deploy worldwide within seconds.

Build a Serverless API for your Frontend

Cloudflare Workers’ high performance and edge network make it well-suited for building APIs. It is also a great companion to your frontend applications on Cloudflare Pages. You can use Workers as the backend, and build your frontend with frameworks such as React, Gatsby, Hugo, Svelte, and more — and then deploy your site onto Pages.

You can easily begin building a serverless API for your frontend by creating a new Workers project with the Wrangler CLI:

wrangler generate serverless-api https://github.com/cloudflare/worker-typescript-template

To complete this type of project, you can follow the rest of the steps outlined in our Pages documentation.
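To give a flavor of the Worker half, a minimal JSON endpoint can be a single fetch handler; the route and payload below are hypothetical, purely for illustration:

// A minimal sketch of a JSON API on Workers
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const { pathname } = new URL(request.url);

  // Hypothetical endpoint consumed by the Pages frontend
  if (pathname === '/api/greeting') {
    const body = JSON.stringify({ message: 'Hello from the edge!' });
    return new Response(body, {
      headers: { 'Content-Type': 'application/json' },
    });
  }

  return new Response('Not found', { status: 404 });
}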

Build an Interactive Game

Combining Cloudflare Workers, Durable Objects, and WebSockets makes a powerful platform for managing state at the edge. Running on Cloudflare’s global network enables exceptionally low latency, so users can interact instantly worldwide. Examples of interactive applications can range from chat rooms to multiplayer video games.

To build interactive video games, you can integrate with popular tools such as Unity and WebGL using an authoritative client model (we have an example of how to achieve this). You do not even need expertise in building video games. Following the example above, the client can run a compiled game directly in the browser with WebAssembly. The server, running on Cloudflare Workers, can be interacted with via WebSockets, and uses Durable Objects to manage game state.
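As a rough sketch of the server side, a Durable Object class can accept WebSocket connections and broadcast each message to every connected session (the class name and relay logic here are hypothetical, not the linked example’s code):

export class GameRoom {
  constructor(state, env) {
    this.sessions = [];
  }

  async fetch(request) {
    // Each player connects via a WebSocket upgrade request
    const [client, server] = Object.values(new WebSocketPair());
    server.accept();
    this.sessions.push(server);

    // Relay every incoming message to all connected players
    server.addEventListener('message', (msg) => {
      for (const ws of this.sessions) ws.send(msg.data);
    });

    return new Response(null, { status: 101, webSocket: client });
  }
}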

Build an E-Commerce Experience

Whether you are building a small e-commerce app or an online ordering site with millions of requests per month, your users can experience exceptional performance and reliability with Cloudflare’s developer platform.

At a small scale, you can instantly personalize a user’s e-commerce experience by setting up A/B testing, performing localization, or providing geo-specific targeting, such as local currency or exchange rates. You can see the linked code samples to quickly get started.

Integration with Workers KV and other third party tools can also streamline the development process. Instead of having to build out an entire database for your application, you can use Workers KV to store product information such as product ID, name, description, price, etc. Outside the Workers platform, you can integrate with popular tools such as Stripe or Shopify. To learn more about how to build an e-commerce application with Workers, Workers KV, and Stripe, you can read this blog post on building an e-commerce experience.
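As a sketch, a product lookup against Workers KV is a single call; the PRODUCTS binding and the stored JSON shape here are assumptions for illustration:

// Assumes a KV namespace bound as PRODUCTS, with values stored as JSON,
// e.g. { "name": "T-shirt", "description": "...", "price": 2500 }
async function getProduct(id) {
  const product = await PRODUCTS.get(`product:${id}`, 'json');
  if (!product) {
    return new Response('Product not found', { status: 404 });
  }
  return new Response(JSON.stringify(product), {
    headers: { 'Content-Type': 'application/json' },
  });
}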

At a larger scale, we have customers building their entire online ordering site on Workers and Pages. Dig, a popular American restaurant chain with nearly 50 locations nationwide, decided to run their ordering site on Cloudflare’s developer platform. They needed high performance and reliability as traffic spiked during the pandemic. The team used a headless React application that is entirely hosted on Cloudflare Pages. Javascript calls an underlying API to get and handle dynamic ordering logic. You can learn more about their story in this case study.

Conclusion

We have heard from our community, and we want to help bring back some of their favorite pre-pandemic experiences: collaboration and swag. The Cloudflare Developer Summer Challenge is meant to do just that. While the last year and a half has transformed the way we all live, it has also been a period of significant expansion of the Cloudflare developer platform. With new products across compute, storage, and frontend services, you now have all the building blocks to quickly create powerful applications end-to-end on our edge network. We hope you enjoy the experience (and the swag!). We cannot wait to see what you build!

Building the Cloudflare Summer Challenge Application

Post Syndicated from Luke Edwards original https://blog.cloudflare.com/building-the-cloudflare-summer-challenge-application/

Building the Cloudflare Summer Challenge Application

If you haven’t already heard, we’re hosting the Cloudflare Summer Developer Challenge, a contest for the Cloudflare community at large. Anybody – yes, including you – can sign up for free and compete for a chance to win one of 300 available prizes. To submit, you need to use at least two products from the Cloudflare developer platform, which makes this contest a great opportunity to give them a try if you haven’t already! The top 300 submissions will receive a box of our most popular swag, so you should give it a go!

Coincidentally, the Cloudflare Summer Developer Challenge’s landing page and signup workflow qualifies as a valid project submission (so meta), so if you’re looking for some inspiration, this walkthrough will shed some light on how it was built.

Overview

At its core, the application is a series of static HTML pages, most of which have a form to submit, with a backend API to handle those submissions, and a storage layer to persist the data. In a Cloudflare lens, this would point towards using Pages, a Worker, and Workers KV. And while this should be the preferred stack for a project like this, truthfully, this “application” was originally intended to be a single HTML page with a single form, but its list of requirements grew over time, as things tend to do. So instead, this project began as–and remains–a Workers Site project, comprised of a single Worker and a single Workers KV namespace.

Workers Sites, the precursor to our Pages product, is a pattern where your Worker handles all the requests for your site’s assets. While doing this, your Worker Site can still include backend-y things, like offering a collection of JSON API endpoints. Basically, Workers Sites is a coined term for building monoliths within a Worker, but without the negative associations that the word “monolith” can bring. Given that a Workers Site is still a Worker, this means your monolith is deployed globally – tough to beat!

As with all Workers Sites, routing is the primary concern. For this, I used the worktop web framework, which includes a router among many other utilities. (Disclosure: I am also the author of worktop.) This allowed me to quickly structure the layout of the entire application:

import { Router } from 'worktop';
import * as Cache from 'worktop/cache';

const API = new Router();

API.add('GET', '/', (req, res) => {
  res.send(200, 'TODO: send HTML for landing page');
});

API.add('GET', '/rules', (req, res) => {
  res.send(200, 'TODO: send HTML for terms & conditions');
});

API.add('POST', '/signup', (req, res) => {
  res.send(201, 'TODO: parse & save initial registration');
});

API.add('GET', '/submit', (req, res) => {
  res.send(200, 'TODO: render the unique submission form');
});

API.add('POST', '/submit', (req, res) => {
  res.send(201, 'TODO: parse, validate, save submission data');
});

// init; w/ Cache API
Cache.listen(API.run);

At this point, nothing useful is happening, but having an application skeleton laid out like this is my preferred format for a TODO list. It’s very satisfying to go through and fill out the handler bodies as development progresses. Additionally, the Cache.listen helper at the bottom of the file integrates the entire application with the Cache API, which I know I’ll want since most of the requests will be for the static HTML pages anyway.

Building and Optimizing the Client pages

Historically, deploying a Workers Site meant uploading all of your assets into a KV namespace. Then you would include something like @cloudflare/kv-asset-handler into your Worker so that incoming requests would seamlessly route to keys within the namespace. However, I chose to go a different route.

Knowing that each of my static pages would – at most – have one CSS stylesheet and sometimes only one JavaScript file, I thought it would be pretty nifty to include a build system that would inline these assets into the built HTML page. This would mean that my static HTML pages would have absolutely zero network requests for additional resources, which is generally good news for performance.

And while I would love to say that I did this purely for performance reasons, I must also admit that the lazy-me appreciated that I didn’t have to set up additional URL routing, deal with KV asset uploading, or deal with additional Cache lifespans. A win-win in this case!

The trouble is: avoiding any external assets is not a common goal. In fact, this is very much a side quest I bestowed upon myself. And since no frameworks (that I know of, at least) can do this, I had to assemble my own miniature toolkit to accommodate my needs.

In the end, it proved to be a fun detour and didn’t take very long at all to put together. I incorporated Stylus, my preferred CSS preprocessor, and came up with a rather simple convention to inline CSS and/or JS files where needed. Instead of fancy AST parsers and transformers, I opted to simply read the HTML file contents as strings and search for HTML comments that matched the <!-- inject:(path) --> format:

<!-- submit/index.html -->
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8"/>
    <title>Submit Project | Cloudflare Developer Summer Challenge</title>
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <link rel="icon" type="image/png" href="https://www.cloudflare.com/favicon-128.png">
    <!-- inject:submit/index.styl -->
    <!-- inject:index.js -->
  </head>
  <body>
    <!-- ... -->
  </body>
</html>

In this example, the submit/index.html file is injecting the submit/index.styl, which is its own stylesheet, and the index.js script, which does not live within the `submit` directory because it’s used by other pages. The toolkit looks at both asset paths, converts the Stylus to plain CSS, and then embeds the contents into the appropriate <script> or <style> HTML tags.
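The injection step needs nothing more exotic than a regular-expression pass over the HTML string. Here is a minimal sketch, where injectAssets is a hypothetical name and compileStylus stands in for the real Stylus-to-CSS conversion:

import { promises as fs } from 'fs';

async function injectAssets(html, baseDir) {
  const pattern = /<!-- inject:(\S+) -->/g;

  for (const [comment, path] of [...html.matchAll(pattern)]) {
    let contents = await fs.readFile(`${baseDir}/${path}`, 'utf8');
    if (path.endsWith('.styl')) {
      contents = await compileStylus(contents); // hypothetical Stylus -> CSS helper
      html = html.replace(comment, `<style>${contents}</style>`);
    } else {
      html = html.replace(comment, `<script>${contents}</script>`);
    }
  }

  return html;
}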

Finally, for production builds, the setup will pass the final HTML source through a minifier, which compresses the entire document, including any CSS or JavaScript that was injected. This step is optional, but it never hurts to send fewer bytes down the wire.

Once these pages were built, I was satisfied with the Network Activity panel when loading the main page:

(Screenshot: the Network Activity panel for the built landing page.)

You can see how the localhost document loads, only dispatching a single request for the favicon-128.png file, which is hosted externally. The three data:image/* requests are Blob URLs and don’t actually transfer network packets. All in all, this means that the HTML document is fully self-contained.

Including HTML into the Worker

Workers can send anything in a Response. Of course, this includes a HTML string. If I wanted to make things incredibly difficult for myself, I could have skipped the /src directory with its own build system, and instead written the HTML, CSS, and JS entirely within a JS string. This would certainly work, but it would be a nightmare to maintain and (for me, at least) be extremely error prone:

API.add('GET', '/', (req, res) => {
  // Note: Worktop APIs
  res.setHeader('Content-Type', 'text/html;charset=utf-8');
  res.send(200, `
    <!doctype html>
    <html lang="en">
      <head>
        <title>Demo | Insanity</title>
        <style>
          body {
            background: #fff;
            color: #424242;
          }
          /* more */
        </style>
        <script>
          $('form').onsubmit = function (ev) {
            ev.preventDefault();
            // ...
          };
        </script>
      </head>
      <body>
        <!-- my entire page content -->
      </body>
    </html>
  `);
});

Thankfully, I planned ahead and already have a build system that produces better HTML files anyway. So now I just needed a way to load those built outputs into my Worker code.

Now for the second half of this project’s toolkit; I find it perfectly acceptable to have a two-step build pipeline. Here, this means that the static site should be built first, followed by building the Worker. I was planning to use TypeScript to author my Worker anyway, which meant I was already going to need a build step – the only change here is that these build steps would now have to be sequential and ordered.

The Worker is built using esbuild, an extremely quick JavaScript bundler and compiler that can translate TypeScript, too. It also has its own plugin system, which gave me the opportunity to add the “inline my HTML files” behavior I needed. The Worker’s build script actually isn’t too intimidating and allows the Worker to `import` HTML files directly, which means the insanity from above can be safely replaced with this pattern:

import { Router } from 'worktop';
import * as Cache from 'worktop/cache';

// loaded via esbuild plugin
import LANDING from 'index.html';
import RULES from 'rules/index.html';

const API = new Router();

API.add('GET', '/', (req, res) => {
  res.setHeader('Content-Type', 'text/html;charset=utf-8');
  res.setHeader('Cache-Control', 'public,max-age=60');
  res.send(200, LANDING);
});

API.add('GET', '/rules', (req, res) => {
  res.setHeader('Content-Type', 'text/html;charset=utf-8');
  res.setHeader('Cache-Control', 'public,max-age=1800');
  res.send(200, RULES);
});

// ...

// init; w/ Cache API
Cache.listen(API.run);
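As an aside, a custom plugin is one way to do this; esbuild’s built-in “text” loader can also import .html files as plain strings. A build script along these lines (the paths are illustrative) would be enough:

// build.js (sketch): bundle the Worker, loading *.html imports as strings
require('esbuild').build({
  entryPoints: ['worker/index.ts'],
  bundle: true,
  outfile: 'dist/worker.js',
  loader: { '.html': 'text' },
}).catch(() => process.exit(1));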

Of course, the import pattern is much cleaner and more sensible in the long run. Clarity makes it easier to identify and extract common patterns into utility functions. I took the opportunity to introduce a render function, the first of many reusable helpers this project would accumulate:

// worker/utils.ts
import type { ServerResponse } from 'worktop/response';

export function render(res: ServerResponse, template: string) {
  res.setHeader('Content-Type', 'text/html;charset=UTF-8');
  res.send(200, template);
}

// worker/index.ts
import * as utils from './utils';

API.add('GET', '/', (req, res) => {
  res.setHeader('Cache-Control', 'public,max-age=60');
  return utils.render(res, LANDING);
});

API.add('GET', '/rules', (req, res) => {
  res.setHeader('Cache-Control', 'public,max-age=1800');
  return utils.render(res, RULES);
});

Finally, most of the pages need to dynamically insert values into the HTML markup. For example, the submission form should render with the participant’s name and email address, and the landing page needs to reflect the current number of remaining prizes. Much like any other monolithic application, the Worker Site is fully aware of these values and capable of injecting them where needed.

To do this, I standardized the {{ variable }} syntax in my project’s HTML. Each of these variables would be replaced during the Worker request with the appropriate value. Of course, it also requires that each endpoint actually provide the correct information to make the substitutions. With this in mind, I modified the `render` utility and updated the landing page’s route handler:

// worker/utils.ts
import type { KV } from 'worktop/kv';
import type { ServerResponse } from 'worktop/response';

// TypeScript placeholder
// Defines the `DATA` KV binding
declare const DATA: KV.Namespace;

export function render(res: ServerResponse, template: string, values: Record<string, string> = {}) {
  for (let key in values) {
    template = template.replace('{{ ' + key + ' }}', values[key]);
  }
  res.setHeader('Content-Type', 'text/html;charset=UTF-8');
  res.send(200, template);
}
  
export function toCount(): Promise<string> {
  return DATA.get('::remain', 'text').then(v => v || '300+');
}
  
// worker/index.ts
import * as utils from './utils';

API.add('GET', '/', async (req, res) => {
  // Get the "::remain" count from KV
  const count = await utils.toCount();
  
  // Short-term TTL for remaining swag updates
  res.setHeader('Cache-Control', 'public,max-age=60');
  
  // Render the HTML, passing in `count` variable
  return utils.render(res, LANDING, { count });
});

With these changes, the landing page will always check the KV namespace for the latest ::remain value and inject it into the correct location. If you’re interested in checking out the project’s source code, you’ll find that this pattern is used in nearly every HTML response.

Accepting Form Submissions

As expected, this application made heavy use of form submissions. Luckily, the Fetch API offers a variety of built-in body parsers to make retrieval of the data trivial. Additionally, worktop offers a convenience function that will automatically invoke the correct parser based on the request’s Content-Type header. It’s aptly named req.body().

It’s easy to parse and retrieve user data, but it still has to be validated. There are a number of ways to do this, all of which boil down to an input object, a group of rules, and a loop through those rules, collecting any error messages into an errors object. This is precisely what my utils.validate helper does, allowing me to clearly define and manage my rules inline.
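The helper itself can be tiny. Here is a minimal sketch of a validator with that shape (not the project’s exact code):

// worker/utils.ts (sketch)
type Rule = (value: any) => true | string;

export function validate(input: Record<string, any>, rules: Record<string, Rule>) {
  let invalid = false;
  let errors: Record<string, string> = {};

  for (let key in rules) {
    // Each rule returns `true` when valid, or an error message string
    let result = rules[key](input[key]);
    if (result !== true) {
      invalid = true;
      errors[key] = result;
    }
  }

  return { errors, invalid };
}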

Let’s see how this looks within the POST /signup handler, which accepts the initial registration form:

// worker/index.ts
import * as utils from './utils';

API.add('POST', '/signup', async (req, res) => {
  try {
    var input = await req.body<Entry>();
  } catch (err) {
    return toError(res, 400, 'Error parsing input');
  }

  let { email, firstname, lastname } = input || {};
  firstname = String(firstname||'').trim();
  lastname = String(lastname||'').trim();
  email = String(email||'').trim();

  let { errors, invalid } = utils.validate({
    email, firstname, lastname
  }, {
    email(val: string) {
      if (val.length < 1) return 'Required';
      return utils.isEmail(val) || 'Invalid email address';
    },
    firstname(val: string) {
      return val.length > 1 || 'Required';
    },
    lastname(val: string) {
      return val.length > 1 || 'Required';
    }
  });

  if (invalid) {
    return res.send(422, errors);
  }
      
  // The `input` is valid!
  
  return res.send(200, 'TODO: finish me');
});

Only after the data is considered valid can it be stored into KV for future use. For the initial registration, a number of things need to happen:

  1. Ensure that the input.email hasn’t already been registered;
  2. Persist the new registration using the `input` values, identifying each document with the user:<email> key;
  3. Generate and save a unique code for the registration, which will be used later to ensure (a) that unregistered persons cannot submit projects and (b) that a registrant can only submit once;
  4. Send the user an email, containing their unique submission link; and
  5. Render a confirmation page, reminding the user to check their inbox for their link.

It can seem like a lot, but after piecing together a few utility helpers and abstractions, it can actually feel quite approachable:

// worker/index.ts
import * as utils from './utils';
import * as Sparkpost from './emails';
import * as Signup from './signup';
import * as Code from './code';

function toError(res: ServerResponse, status: number, reason: string) {
  return res.send(status, { status, reason });
}

API.add('POST', '/signup', async (req, res) => {
  try {
    var input = await req.body<Entry>();
  } catch (err) {
    return toError(res, 400, 'Error parsing input');
  }
  
  let { email, firstname, lastname } = input || {};
  firstname = String(firstname||'').trim();
  lastname = String(lastname||'').trim();
  email = String(email||'').trim();
  
  // truncated: validation
  
  // Ensure email is not already in use
  let exists = await Signup.find(email);
  if (exists) return toError(res, 400, 'You have already signed up');

  // Generate new `Entry` record
  let entry = Signup.prepare({ email, firstname, lastname });

  // create "user:<unique email>" document
  let isOK = await Signup.save(entry);
  if (!isOK) return toError(res, 500, 'Error persisting entry');

  // create "code:<unique value>" document
  isOK = await Code.save(entry);
  if (!isOK) return toError(res, 500, 'Error saving unique code');

  // dispatch "We received your registration" email
  let sent = await Sparkpost.confirm(entry);
  if (!sent) return toError(res, 500, 'Error sending confirmation email');

  // render "Thank you, check your {{ email }} for next steps" page
  return utils.render(res, CONFIRM, { email: entry.email });
});

A full HTML response is returned, which means that the client-side form handler should be able to see this content and render it directly in the browser window. This can be seen in the following index.js snippet, which was referenced earlier in the submit/index.html as an injected asset:

// (client) index.js

$('form').onsubmit = async function (ev) {
  ev.preventDefault();

  var form = ev.target;
  var res = await fetch(form.action, {
    method: form.method || 'POST',
    body: new FormData(form),
  });

  // truncate: clear existing errors

  if (res.ok) {
    form.reset();
    // Receive HTML response
    let html = await res.text();
    // Force-write the new HTML into this window
    document.documentElement.innerHTML = html;
  } else {
    // truncate: render errors
  }
};

BONUS: Because a full HTML response is returned, and all the client-side <form> elements are semantically correct, the form submission workflow will work even with JavaScript disabled! The validation will remain functional, but as a degraded experience: the error dialog won’t pop up, and error messages will not appear beneath their respective form inputs.

Sending Transactional Emails

It should (hopefully) come as no surprise that programmatically sending an email is pretty straightforward these days. We chose to use SparkPost, but practically every service has the same API mechanics:

  • Obtain an API Token
  • Send a POST request to an endpoint with:
    • your API Token as an Authorization header
    • your recipient, sender identity, and text and/or HTML content as the POST body
  • Wait for a 200-level response, or deal with any API errors

Most email-as-a-service providers let you define templates, which allow you to replace variables with unique values per email – essentially the same thing our utils.render function is doing with our HTML contents. The benefit of this is that you only have to worry about writing your emails once; then you’re just POST’ing new values to the API endpoint.

SparkPost allows templates to be referenced by a custom name rather than a randomly generated identifier, which makes it easy to track and debug templates over time.

// worker/emails.ts
import type { Entry } from './signup';

// wrangler secret
// @see https://developers.sparkpost.com/api/#header-authentication
declare const SPARKPOST_KEY: string;

/**
 * Assemble the POST request for all SparkPost email triggers
 * @see https://developers.sparkpost.com/api/transmissions/#transmissions-post-send-a-template
 */
async function send(
  templateid: string,
  recipient: Entry,
  values?: Record<string, string>
): Promise<boolean> {
  const res = await fetch('https://api.sparkpost.com/api/v1/transmissions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': SPARKPOST_KEY,
    },
    body: JSON.stringify({
      content: {
        template_id: templateid,
      },
      recipients: [{
        address: {
          email: recipient.email,
          name: recipient.firstname + ' ' + recipient.lastname,
        },
        substitution_data: values || {},
      }]
    })
  });

  let data = await res.json() as {
    results: {
      id: string;
      total_rejected_recipients: number;
      total_accepted_recipients: number;
    }
  };

  return res.ok && data.results.total_accepted_recipients === 1;
}
    
/**
 * Confirming user's signup
 * Sending unique submission form
 */
export function confirm(entry: Entry): Promise<boolean> {
  return send('devchallenge-confirm', entry, {
    firstname: entry.firstname,
    code: entry.code,
  });
}

The above snippet includes the entire POST request formatter – there’s nearly more type-hinting than there is code! Also shown is an example confirm method, which is responsible for sending the unique submission link to the newly-registered user. You’ll notice that firstname and code are the injected variables, required by the “devchallenge-confirm” template.

Overall Performance

I’d call this a success!

Even though this certainly wasn’t my first Worker project – and definitely won’t be my last – I’m consistently amazed how much the Workers runtime lets me get away with. I mean, if you could only take away two points from this article, they should be:

  1. I was able to build a moderately complex application, from scratch, while incorporating a Cache layer, a globally-replicated storage layer, and a super-performant JS runtime, all of which live under the same roof.
  2. I (probably) spent more time fussing with a custom client-side build pipeline than I did piecing together the mission-critical API form handlers.

The cherry on top: Should this contest go viral and lure in millions of visitors, I’d only be paying a couple of dollars at the end of the month. Obviously I have a bias here, but it’s pretty amazing really.

Finally, performance-wise, this may justify the time spent fiddling with the HTML build output:

(Screenshot: performance scores for the landing page.)

Lessons Learned

As I alluded to earlier, if I were to rebuild this application, or if I were to add more to it down the road, I would replace the Workers Site architecture with a Pages project and deploy a Worker in front of it for my API requirements and dynamic KV injections.

Since the static assets would no longer be embedded into the Worker’s source, I would need to replace the `utils.render` approach with another utility that fetches the URL from Pages (which becomes my “origin server”) and then uses HTMLRewriter to inject the variables. Also, while I was nowhere near the 1MB size limit, the largest contributor to my Worker’s bytesize would disappear.
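For illustration, that HTMLRewriter-based utility might look something like this; the data-count selector is a hypothetical stand-in for wherever the template variable used to live:

// Sketch: fetch the static page from Pages, then inject live values
async function renderFromPages(request: Request, values: Record<string, string>) {
  const origin = await fetch(request); // Pages acts as the origin server

  return new HTMLRewriter()
    .on('[data-count]', {
      element(el) {
        el.setInnerContent(values.count); // e.g. the "::remain" value from KV
      },
    })
    .transform(origin);
}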

But, more significantly, this refactor would also reduce my total tooling since the majority of the project’s complexity lies in the custom build system for the frontend assets. In other words, the entire /src directory could have been built and deployed like a normal static website, which would allow me to make use of existing frameworks and/or toolkits instead of taking my self-imposed detour. There would have been no need to create a custom frontend toolkit and its bridge to get the static assets loaded into my Worker.

However, none of this is to say that Workers Sites was a bad approach for this application. It’s quite the contrary! This is all to highlight the flexibility of Workers Sites – and the Workers platform at large. Cloudflare Pages exists so that I, the developer, can lean into existing, well-traveled paths and let the experts worry about toolkits, build pipelines, and deployments… But that doesn’t prevent you, the resident expert, from customizing every aspect if that’s your desire.

Announcing Green Compute on Cloudflare Workers

Post Syndicated from Aly Cabral original https://blog.cloudflare.com/announcing-green-compute/

Announcing Green Compute on Cloudflare Workers

All too often we are confronted with the choice to move quickly or act responsibly. Whether the topic is safety, security, or in this case sustainability, we’re asked to make the trade off of halting innovation to protect ourselves, our users, or the planet. But what if that didn’t always need to be the case? At Cloudflare, our goal is to bring sustainable computing to you without the need for any additional time, work, or complexity.

Enter Green Compute on Cloudflare Workers.

Green Compute can be enabled for any Cron-triggered Worker. The concept is simple: when turned on, we’ll take your compute workload and run it exclusively on parts of our edge network located in facilities powered by renewable energy. Even though all of Cloudflare’s edge network is powered by renewable energy already, some of our data centers are located in third-party facilities that are not 100% powered by renewable energy. Green Compute takes our commitment to sustainability one step further by ensuring that not only our network equipment but also the building facility as a whole are powered by renewable energy. There are absolutely no code changes needed. Now, whether you need to update a leaderboard every five minutes or do DNA sequencing directly on our edge (yes, that’s a real use case!), you can minimize the impact of any scheduled work, regardless of how complex or energy intensive.

How it works

Cron triggers allow developers to set time-based invocations for their Workers. These Workers run on a recurring schedule, as opposed to being triggered by application users via HTTP requests. Developers specify a job schedule in familiar cron syntax either through wrangler or within the Workers Dashboard. To set up a scheduled job, first create a Worker that performs a periodic task, then navigate to the ‘Triggers’ tab to define a Cron Trigger.
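For reference, a Cron-triggered Worker is just a scheduled event listener, paired with a schedule in wrangler.toml; the task below is hypothetical:

// wrangler.toml:
//   [triggers]
//   crons = ["*/5 * * * *"]   # every five minutes

addEventListener('scheduled', (event) => {
  // event.scheduledTime is when this invocation was due to run
  event.waitUntil(updateLeaderboard());
});

async function updateLeaderboard() {
  // hypothetical periodic task, e.g. recompute scores and write them to KV
}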

The great thing about cron triggered Workers is that there is no human on the other side waiting for a response in real time. There is no end user we need to run the job close to. Instead, these Workers are scheduled to run as (often computationally expensive) background jobs making them a no-brainer candidate to run exclusively on sustainable hardware, even when that hardware isn’t the closest to your user base.

Cloudflare’s massive global network is logically one distributed system with all the parts connected, secured, and trusted. Because our network works as a single system, as opposed to a system with logically isolated regions, we have the flexibility to seamlessly move workloads around the world keeping your impact goals in mind without any additional management complexity for you.

When you set up a Cron Trigger with Green Compute enabled, the Cloudflare network will route all scheduled jobs to green energy hardware automatically, without any application changes needed. To turn on Green Compute today, sign up for our beta.

Real world use

If you haven’t ever had the pleasure of writing a cron job yourself, you might be wondering — what do you use scheduled compute for anyway?

There are a wide range of periodic maintenance tasks necessary to power any application. In my working life, I’ve built a scheduled job that ran every minute to monitor the availability of the system I was responsible for, texting me if any service was unavailable. In another instance, a job ran every five minutes, keeping the core database and search feature in sync by pulling all new application data, transforming it, then inserting it into a search database. In yet another example, a periodic job ran every half hour to iterate over all user sessions and clean up sessions that were no longer active.

Scheduled jobs are the backbone of real-world systems. Now, with Green Compute on Cloudflare Workers, all these real-world systems and their computationally expensive background maintenance tasks can take advantage of running compute exclusively on machines powered by renewable energy.

The Green Network

Our mission at Cloudflare is to help you tackle your sustainability goals. Today, with the launch of the Carbon Impact Report, we gave you visibility into your environmental impact. Our collaboration with the Green Web Foundation brought green hosting certification to Cloudflare Pages. And our launch of Green Compute on Cloudflare Workers allows you to run compute exclusively on hardware powered by renewable energy. The best part? No additional system complexity is required for any of the above.

Cloudflare is focused on making it easy to hit your ambitious goals. We are just getting started.

Introducing Workers Usage Notifications

Post Syndicated from Aly Cabral original https://blog.cloudflare.com/introducing-workers-usage-notifications/

Introducing Workers Usage Notifications

So you’ve built an application on the Workers platform. The first thing you might be wondering after pushing your code out into the world is “what does my production traffic look like?” How many requests is my Worker handling? How long are those requests taking? And as your production traffic evolves over time, it can be a lot to keep up with. The last thing you want is to be surprised by the traffic your serverless application is handling. But you have a million things to do in your day job, and having to log in to the Workers dashboard every day to check usage statistics is one extra thing you shouldn’t need to worry about.

Today we’re excited to launch Workers usage notifications that proactively send relevant usage information directly to your inbox. Usage notifications come in two flavors. The first is a weekly summary of your Workers usage with a breakdown of your most popular Workers. The second flavor is an on-demand usage notification, triggered when a Worker’s CPU usage is 25% above its average CPU usage over the previous seven days. This on-demand notification helps you proactively catch large changes in Workers usage as soon as they occur, whether from a huge spike in traffic or a change in your code.

As of today, if you create a new free account with Workers, we’ll enable both the weekly summary and the CPU usage notification by default. All other account types will not have Workers usage notifications turned on by default, but the notifications can be enabled as needed. Once we collect substantial user feedback, our goal is to turn these notifications on by default for all accounts.

The Worker Weekly Summary

The mission of the Worker Weekly Summary is to give you a high-level view of your Workers usage automatically, in your inbox, without needing to sign in to the Worker’s dashboard. The summary includes valuable information such as total request counts, duration and egress data transfer, aggregated across your account, with breakouts for your most popular Workers that include median CPU time. Where duration accounts for the entire time your Worker spends responding to a request, CPU time is the time a script spends in computational work on the Cloudflare edge, discounting any time spent waiting on network requests, including requests to third-party APIs.

Workers Usage Report

Where the Workers Weekly Summary provides a high-level view of your account usage statistics, the Workers Usage Report is targeted, event-driven and potentially actionable. It identifies those Workers with greater than a 25% increase in CPU time compared to the average of the previous 7 days (i.e., those Workers taking significantly more CPU resources now than in the recent past).

While sometimes these increases may be no surprise, perhaps because you’ve recently pushed a new deployment that you expected to do more CPU heavy work, at other times they may indicate a bug or reveal that a script is unintentionally expensive. The Workers Usage Report gives us an opportunity to let you know when a noticeable change in your compute footprint occurs, so that you can remedy any potential problems right away.

Enabling Notifications

If you’d like to explicitly opt in to notifications, you can start off by clicking Add in the Notifications section of the Cloudflare zone dashboard.

After clicking Add, note the two new entries below “Create Notifications” under the “Workers” product.

Click Select in line with “Weekly Summary” for the weekly roll-up, which will then allow you to configure email recipients, webhooks, or a PagerDuty connection for the notification.

Clicking Select next to “Usage Report” for CPU threshold notifications will send you to a similar configuration experience where you can customize email recipients and other integrations.

What’s next?

As mentioned above, we’re enabling notifications by default for all new free plans on Workers. Before rolling out these notifications by default to all our users, we want to hear from you. Let us know your experience with our Workers usage notifications by joining our Developer Community discord or by sending feedback via the survey in the email notifications.

Our Workers Weekly Summary and on-demand CPU usage notification are just the beginning of our journey to support a wide range of useful, relevant notifications that help you get visibility into your deployments. We want to surface the right usage information, exactly when you need it.