Magic NAT: everywhere, unbounded, and lower cost

Post Syndicated from Annika Garbers original https://blog.cloudflare.com/magic-nat/

Network Address Translation (NAT) is one of the most common and versatile network functions, used by everything from your home router to the largest ISPs. Today, we’re delighted to introduce a new approach to NAT that solves the problems of traditional hardware and virtual solutions. Magic NAT is free from capacity constraints, available everywhere through our global Anycast architecture, and operates across any network (physical or cloud). For Internet connectivity providers, Magic NAT for Carriers operates across high volumes of traffic, removing the complexity and cost associated with NATing thousands or millions of connections.

What does NAT do?

The main function of NAT is in its name:  NAT is responsible for translating the network address in the header of an IP packet from one address to another – for example, translating the private IP 192.168.0.1 to the publicly routable IP 192.0.2.1. Organizations use NAT to grant Internet connectivity from private networks, enable routing within private networks with overlapping IP space, and preserve limited IP resources by mapping thousands of connections to a single IP. These use cases are typically accomplished with a hardware appliance within a physical network or a managed service delivered by a cloud provider.

Let’s look at those different use cases.

Allowing traffic from private subnets to connect to the Internet

Resources within private subnets often need to reach out to the public Internet. The most common example of this is connectivity from your laptop, which might be allocated a private address like 192.168.0.1, reaching out to a public resource like google.com. In order for Google to respond to a request from your laptop, the source IP of your request needs to be publicly routable on the Internet. To accomplish this, your ISP translates the private source IP in your request to a public IP (and reverse-translates for the responses back to you). This use case is often referred to as public NAT, performed by hardware or software acting as a “NAT gateway.”

Public NAT translates private IP addresses to public ones so that traffic from within private networks can access the Internet.
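
To make the mechanics concrete, here is a toy model of the state a public NAT gateway maintains (illustrative only, not how any production NAT is implemented): a translation table that maps each outbound private address and port to a public port, and is consulted in reverse for return traffic.

// Toy model of a public NAT gateway's translation table (illustrative only).
type Endpoint = { ip: string; port: number };

const publicIp = "192.0.2.1"; // the gateway's public address (example range)
const table = new Map<number, Endpoint>(); // public port -> private endpoint
let nextPort = 40000;

// Outbound: rewrite the private source to the public IP and a fresh port,
// remembering the mapping for the return path.
function translateOutbound(src: Endpoint): Endpoint {
  const publicPort = nextPort++;
  table.set(publicPort, src);
  return { ip: publicIp, port: publicPort };
}

// Inbound: look up which private endpoint a response on this public port belongs to.
function translateInbound(dstPort: number): Endpoint | undefined {
  return table.get(dstPort);
}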

Users might also have requirements around the specific IP addresses that outgoing packets are NAT’d to. For example, they may need packets to egress from only one or a small subset of IPs so that the services they’re reaching out to can positively identify them – e.g. “only allow traffic from this specific source IP and block everything else.” They might also want traffic to NAT to IPs that accurately reflect the source’s geolocation, in order to pass the “pizza test”: are the results returned for the search term “pizza near me” geographically relevant? These requirements can increase the complexity of a customer’s NAT setup.

Enabling communication between private subnets with overlapping IP space

NATs are also used for routing traffic within fully private networks, in order to enable communication between resources with overlapping IP space. One example: imagine that you’re an IT architect at a retail company with a hundred geographically distributed store locations and a central data center. To make your life easier, you want to use the same IP address management scheme for all of your stores – e.g. host all of your printers on 10.0.1.0/24, point of sale devices on 10.0.2.0/24, and security cameras on 10.0.3.0/24. These devices need to reach out to resources hosted in your data center, which is also on your private network. The challenge: if multiple devices across your stores have the same source IP, how do return packets from your data center get back to the right device? This is where private NAT comes in.

Private NAT translates IPs into a different private range so that devices with overlapping IP space can communicate with each other.

A NAT gateway sitting in a private network can enable connectivity between overlapping subnets by translating the original source IP (the one shared by multiple resources) to an IP in a different range. This can enable communication between mirrored subnets and other resources (like in our store → datacenter example), as well as between the mirrored subnets themselves – e.g. if traffic needed to flow between our store locations directly, such as a VoIP call from one store to another.

Conserving IP address space

As of 2019, the available pool of allocatable IPv4 space has been exhausted, making addresses a limited resource. In order to conserve their IPv4 space while the industry slowly transitions to IPv6, ISPs have adopted carrier-grade NAT solutions to map multiple users to a single IP, maximizing the mileage of the space they have available. This uses the same mechanisms for address translation we’ve already described, but at a large scale – ISPs need to deploy devices that can handle thousands or millions of concurrent connections without impacting traffic performance.

Challenges with existing NAT solutions

Today, users accomplish the use cases we’ve described with a physical appliance (often a firewall) or a virtual appliance delivered as a managed service from a cloud provider. These approaches have the same fundamental limitations as other hardware and virtualized hardware solutions traditionally used to accomplish most network functions.

Geography constraints

Physical or virtual devices performing NAT are deployed in one or a few specific locations (e.g. within a company’s data center or in a specific cloud region). Traffic may need to be backhauled out of its way through those specific locations to be NAT’d. A common example is the hub and spoke network architecture, where all Internet-bound traffic is backhauled from geographically distributed locations to be filtered and passed through a NAT gateway to the Internet at a central “hub.” (We’ve written about this challenge previously in the context of hardware firewalls.)

Managed NAT services offered by cloud providers require customers to deploy NAT gateway instances in specific availability zones. This means that if customers have origin services in multiple availability zones, they either need to backhaul traffic from one zone to another, incurring fees and latency, or deploy instances in multiple zones. They also need to plan for redundancy – for example, AWS recommends configuring a NAT gateway in every availability zone for “zone-independent architecture.”

Capacity constraints

Each appliance or virtual device can only support up to a certain amount of traffic, and higher supported traffic volumes usually come at a higher cost. Beyond these limits, users need to deploy multiple NAT instances and design mechanisms to load balance traffic across them, adding additional hardware and network hops to their stack.

Cost challenges

Physical devices that perform NAT functionality have several costs associated – in addition to the upfront CAPEX for device purchases, organizations need to plan for installation, maintenance, and upgrade costs. While managed cloud services don’t carry the same cost line items of traditional hardware, leading providers’ models include multiple costs and variable pricing that can be hard to predict. A combination of hourly charges, data processing charges, and data transfer charges can lead to surprises at the end of the month, especially if traffic experiences momentary spikes.

Hybrid infrastructure challenges

More and more customers we talk to are embracing hybrid (datacenter/cloud), multi-cloud, or poly-cloud infrastructure to diversify their spend and leverage the best-of-breed features offered by each provider. This means deploying separate NAT instances across each of these networks, which introduces additional complexity, management overhead, and cost.

Magic NAT: everywhere, unbounded, cross-platform, and predictably priced

Over the past few years, as we’ve been growing our portfolio of network services, we’ve heard over and over from customers that they want an alternative to the NAT solutions currently available on the market and a better way to address the challenges we’ve described. We’re excited to introduce Magic NAT, the latest entrant in our “Magic” family of services designed to help customers build their next-generation networks on Cloudflare.

How does it work?

Magic NAT is built on the foundational components of Cloudflare One, our Zero Trust network-as-a-service platform. You can follow a few simple steps to get set up:

  1. Connect to Cloudflare. Magic NAT works with all of our network-layer on-ramps including Anycast GRE or IPsec, CNI, and WARP. Users set up a tunnel or direct connection and route privately sourced traffic across it; packets land at the closest Cloudflare location automatically.
  2. Upgrade for Internet connectivity. Users can enable Internet-bound TCP and UDP traffic (any port) to access resources on the Internet from Cloudflare IPs.
  3. (Optional) Enable dedicated egress IPs. Available if you need traffic to egress from one or multiple dedicated IPs rather than a shared pool. Dedicated egress IPs may be useful if you interact with services that “allowlist” specific IP addresses or otherwise care about which IP addresses are seen by servers on the Internet.
  4. (Optional) Layer on security policies for safe access. Magic NAT works natively with Cloudflare One security tools including Magic Firewall and our Secure Web Gateway. Users can add policies on top of East/West and Internet-bound traffic to secure all network traffic with L3 through L7 protection.

Address translation between IP versions will also be supported, including 4to6 and 6to4 NAT capabilities to ensure backwards and forwards compatibility when clients or servers are only reachable via IPv4 or IPv6.

Anycast: Magic NAT is everywhere, automatically

With Cloudflare’s Anycast architecture and global network of over 275 cities across the world, users no longer need to think about deploying NAT capabilities in specific locations or “availability zones.” Anycast on-ramps mean that traffic automatically lands at the closest Cloudflare location. If that location becomes unavailable (e.g. for maintenance), traffic fails over automatically to the next closest – zero configuration work from customers required. Failover from Cloudflare to customer networks is also automatic; we’ll always route traffic across the healthiest available path to you.

Scale: Magic NAT leverages Cloudflare’s entire network capacity

Cloudflare’s global capacity is at 141 Tbps and counting, and automated traffic management systems like Unimog allow us to take full advantage of that capacity to serve high volumes of traffic smoothly. We absorb some of the largest DDoS attacks on the Internet, process hundreds of Gbps for customers through Magic Firewall, and provide privacy for millions of user devices across the world – and Magic NAT is built with this scale in mind. You’ll never need to provision and load balance across multiple instances or worry about traffic throttling or congestion again.

Cost: no more hardware costs and no surprises

Magic NAT, like our other network services, is priced based on the 95th percentile of clean bandwidth for your network: no installation, maintenance, or upgrades, and no surprise charges for data transfer spikes. Unlike managed services offered by cloud providers, we won’t charge you for traffic twice. This means fair, predictable billing based on what you actually use.

Hybrid and multi-cloud: simplify networking across environments

Today, customers deploying NAT across on-prem environments and cloud properties need to manage separate instances for each network. As with Cloudflare’s other products that provide an overlay across multiple environments (e.g. Magic Firewall), we can dramatically simplify this architecture by giving users a single place for all their traffic to NAT through regardless of source/origin network.

Summary

Traditional NAT solutions | Magic NAT
Location-dependent: deploy physical or virtual appliances in one or more locations, with additional cost for redundancy. | Anycast: no more planning availability zones – Magic NAT is everywhere and extremely fault-tolerant, automatically.
Capacity-limited: physical and virtual appliances have upper limits for throughput; overcoming them means deploying and load balancing across multiple devices. | Scalable: no more planning for capacity or deploying multiple instances to load balance traffic across – Magic NAT leverages Cloudflare’s entire network capacity, automatically.
High (hardware) and/or unpredictable (cloud) cost: CAPEX plus installation, maintenance, and upgrades, or hourly, data processing, and data transfer charges for a managed cloud service. | Fairly and predictably priced: no more sticker shock from unexpected data processing charges at the end of the month.
Tied to a physical network or a single cloud: multiple instances needed to cover traffic flows across the entire network. | Multi-cloud: simplify networking across environments, with one control plane for all of your traffic flows.

Learn more

Magic NAT is currently in beta, translating network addresses globally for a variety of workloads, large and small. We’re excited to get your feedback about it and other new capabilities we’re cooking up to help you simplify and future-proof your network – learn more or contact your account team about getting access today!

Introducing Workers Analytics Engine

Post Syndicated from Jon Levine original https://blog.cloudflare.com/workers-analytics-engine/

Today we’re excited to introduce Workers Analytics Engine, a new way to get telemetry about anything using Cloudflare Workers. Workers Analytics Engine provides time series analytics built for the serverless era.

Workers Analytics Engine uses the same technology that powers Cloudflare’s analytics for millions of customers, who generate tens of millions of events per second. This unique architecture provides significant benefits over traditional metrics systems – and even enables our customers to build analytics for their customers.

Why use Workers Analytics Engine

Workers Analytics Engine can be used to get telemetry about just about anything.

Our initial motivation for building Workers Analytics Engine was to help internal teams at Cloudflare better understand what’s happening in their Workers. For example, one early internal customer is our R2 storage product. The R2 team is using the Analytics Engine to measure how many reads and writes happen in R2, how many users make these requests, how many bytes are transferred, how long the operations take, and so forth.

After seeing quick adoption from internal teams at Cloudflare, we realized that many customers could benefit from using this product.

For example, Workers Analytics Engine can also be used to build custom security rules. You could use it to implement something like fail2ban, a program that can ban malicious traffic. Every time someone logs in, you could record information like their location and IP. On subsequent logins, you could query the rate of login attempts from these attackers, and block them if they’ve attempted to sign in too many times in a given period.
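
As a sketch of that idea, using the writeDataPoint() API described later in this post (the LOGINS binding name and the field layout are our own inventions for illustration, and the credential check is stubbed out):

// Hypothetical Worker that records each login attempt as a data point.
// Labels: [username, country, IP]; metrics: [1 if the attempt failed, 0 otherwise].
export default {
  async fetch(request, env) {
    const ip = request.headers.get("cf-connecting-ip") ?? "unknown";
    const country = request.cf?.country ?? "unknown";
    const failed = true; // stand-in for the result of your real credential check
    env.LOGINS.writeDataPoint({
      labels: ["alice", country, ip],
      metrics: [failed ? 1 : 0],
    });
    return new Response(failed ? "denied" : "ok", { status: failed ? 403 : 200 });
  },
};

A query over the last few minutes, grouped by the IP label, would then tell you whether an address has crossed your attempt threshold and should be blocked.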

Workers Analytics Engine can even be used to track things in the world that have nothing (yet!) to do with Workers. For example, imagine you have a network of IoT sensors that connect to the Internet to report weather and air quality data, like temperature, air pressure, wind speed, and PM2.5 pollution. Using Workers Analytics Engine, you could deploy a Worker in just a few minutes that collects these reports, and then query and visualize the data using our analytics APIs.

How to use Workers Analytics Engine

There are three steps to get started with Workers Analytics Engine:

  1. Configure your analytics using Wrangler
  2. Write data using the Workers Runtime API
  3. Query your data using our SQL or GraphQL API.

Configuring Workers Analytics Engine in Wrangler

To start using Workers Analytics Engine, you first need to configure it in Wrangler. This is done by creating a binding in wrangler.toml.

[analytics_engine]
bindings = [
    { name = "WEATHER" }
]

Your analytics can be named after the event in the world that they represent. For example, readings from our weather sensor above might be named “WEATHER.”

For our current beta release, customers may only create one binding at a time. In the future, we plan to enable customers to define multiple bindings, or even define them on-the-fly from within the Workers runtime.

Writing data from the Workers runtime

Once a binding is declared in Wrangler, you get a new environment variable in the Workers runtime that represents your Analytics Engine. This variable has a method, writeDataPoint(). A “data point” is a structured event which consists of a vector of labels and a vector of metrics.

A metric is just a “number” type field that can be aggregated in some way – for example, it could be summed, averaged, or quantiled. A label is a “string” type field that can be used for grouping or filtering.

For example, suppose you are collecting air quality samples. Each data point would represent a reading from your weather sensor. Metrics might include numbers like the temperature or air pressure reading. The labels could include the location of the sensor and the hardware identifier of the sensor.

Here’s what this looks like in code:

export default {
  async fetch(request: Request, env) {
    env.WEATHER.writeDataPoint({
      labels: ["Seattle", "USA", "pro_sensor_9000"],
      metrics: [25, 0.5]
    });
    return new Response("OK!");
  }
}

In our initial version, developers are responsible for providing fields in a consistent order, so that they have the same semantics when querying. In a future iteration, we plan to let developers name their labels and metrics in the binding, and then use these names when writing data points in the runtime.
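
Until then, a thin wrapper can pin that ordering down in a single place. A minimal sketch (the WeatherReading shape and its field order are our own convention, not part of the API):

// Fix the label/metric order in one place so every write agrees with every query.
interface WeatherReading {
  city: string;        // becomes label_1
  country: string;     // becomes label_2
  sensorId: string;    // becomes label_3
  temperature: number; // becomes metric_1
  humidity: number;    // becomes metric_2
}

function writeWeather(env, reading: WeatherReading) {
  env.WEATHER.writeDataPoint({
    labels: [reading.city, reading.country, reading.sensorId],
    metrics: [reading.temperature, reading.humidity],
  });
}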

Querying and visualizing data

To query your data, Cloudflare provides a rich SQL API. For example:

SELECT label_1 as city, avg(metric_2) as avg_humidity
FROM WEATHER
WHERE metric_1 > 0
GROUP BY city
ORDER BY avg_humidity DESC
LIMIT 10

The results would show you the top 10 cities that had the highest average humidity readings when the temperature was above 0.

Note that, for our initial version, labels and metrics are accessed via names that have 1-based indexing. In the future, when we let developers name labels and metrics in their binding, these names will also be available via the SQL API.

Workers Analytics Engine is optimized for powering time series analytics that can be visualized using tools like Grafana. Every event written from the runtime is automatically populated with a timestamp field. This makes it incredibly easy to make time series charts in Grafana:

[Screenshot: a Grafana time series chart built from a Workers Analytics Engine query]

The macro $timeSeries simply expands to intDiv(toUInt32(timestamp), 60) * 60 * 1000 – i.e. the timestamp rounded to the nearest minute (as defined by our $step parameter) and converted into milliseconds. Grafana also provides a $timeFilter macro, which can be adjusted at the dashboard level. We could easily add another series here by simply grouping on another field like “city”.

Data can also be queried using our GraphQL API. At this time, the GraphQL API only supports querying total counts for each named binding.

Finally, the Cloudflare dashboard also provides a high-level count of the total number of data points seen for each binding. In the future, we plan to offer rich analytical abilities through the dashboard.

How is this different from traditional metrics systems?

Many developers are familiar with metrics systems like Prometheus. We built Workers Analytics Engine based on our experience providing analytics for millions of Cloudflare customers. Writing structured event logs and querying them using a relational database model is very different from writing metrics – but it’s also much more powerful.

Here are some of the benefits of our model, compared with metrics systems:

  • Unlimited cardinality of label values: In a traditional metrics system, like Prometheus, every time you add a new label value, under the hood you are actually adding a new metric. If you have multiple labels for one data point, this can rapidly increase the number of metrics. Nearly everyone using a metrics system runs into challenges with cardinality. For example, you may start by including a “customer ID” in a label – but what happens when you have thousands or millions of customers? In contrast, when using Workers Analytics Engine, every label value is stored independently – so every data point can have unique label values with no problem.
  • Low latency reporting: Pull-based metrics systems must check for new metrics at some fixed interval, known as a scrape interval. Commonly this is set to one minute or longer – and this is the absolute fastest that your data can be collected. With Workers Analytics Engine, we can report on new data points within a few seconds.
  • Fast queries at any timescale: Everyone who uses Prometheus knows what happens when you expand that range selector in Grafana to change from looking back 30 minutes to seven days… you wait, and you’re lucky if you get any results at all. Whole new pieces of software exist just for the challenge of storing Prometheus metrics long-term. In contrast, Workers Analytics Engine is superfast at querying anything from the last five minutes of data to the last seven days. See for yourself!

And of course, Workers Analytics Engine runs on Cloudflare’s global network. So rather than worrying about running your own Prometheus server, setting up Thanos, and closely tracking cardinality, you can just write data and query it using our SQL API.

What’s next

Today we’re introducing a closed beta for Workers Analytics Engine. You can join the waitlist by signing up here. We already have many teams at Cloudflare happily using this and would love to get your feedback at this early stage, as we are quickly adding new functionality.

We have an ambitious roadmap ahead of us. One critical use case we plan to support is building analytics and usage-based billing for your customers – so if you’re a platform that is looking to build analytics into your product, we’d love to talk to you!

And of course, if this sounds fun to work on, we’re hiring engineers on the Data team to work in San Francisco, London, or remote locations!

Announcing D1: our first SQL database

Post Syndicated from Rita Kozlov original https://blog.cloudflare.com/introducing-d1/

We announced Cloudflare Workers in 2017, giving developers access to compute on our network. We were excited about the possibilities this unlocked, but we quickly realized — most real world applications are stateful. Since then, we’ve delivered KV, Durable Objects, and R2, giving developers access to various types of storage.

Today, we’re excited to announce D1, our first SQL database.

While the wait for beta access shouldn’t be long — we’ll start letting folks in as early as June (sign up here) — we’re excited to share some details of what’s to come.

Meet D1, the database designed for Cloudflare Workers

D1 is built on SQLite. Not only is SQLite the most ubiquitous database in the world, used by billions of devices a day, it’s also the first ever serverless database. Surprised? SQLite was so ahead of its time that it dubbed itself “serverless” before the term became associated with cloud services; originally, it meant literally “not involving a server”.

Since Workers itself runs between the server and the client, and was inspired by technology built for the client, SQLite seemed like the perfect fit for our first entry into databases.

So what can you build with D1? The true answer is “almost anything!”, but that might not be very helpful in triggering the imagination, so how about a live demo?

D1 Demo: Northwind Traders

You can check out an example of D1 in action by trying out our demo running here: northwind.d1sql.com.

If you’re wondering “Who are Northwind Traders?”, Northwind Traders is the “Hello, World!” of databases, if you will: a sample database that Microsoft provided alongside Microsoft Access as a tutorial. It first appeared 25 years ago, in 1997, and you’ll find many examples of its use on the Internet.

It’s a typical business application with a realistic schema: many foreign keys across many different tables — a truly timeless representation of data.

When was the most recent order of Queso Cabrales shipped, and what ship was it on? You can quickly find out. Someone calling in about ordering some Chai? Good thing Exotic Liquids still has 39 units in stock, for just $18 each.

We welcome you to play and poke around, and answer any questions you have about Northwind Traders’ business.

The Northwind Traders demo also features a dashboard where you can find details and metrics about the D1 SQL queries happening behind the scenes.

What can you build with D1?

Going back to our original question before the demo, however, what can you build with D1?

While you may not be running Northwind Traders yourself, you’re likely running a very similar piece of software somewhere. Even at the very core of Cloudflare’s service is a database. A SQL database filled with tables, materialized views and a plethora of stored procedures. Every time a customer interacts with our dashboard they end up changing state in that database.

The reality is that databases are everywhere. They are inside the web browser you’re reading this on, inside every app on your phone, and the storage for your bank transactions, travel reservations, business applications, and on and on. Our goal with D1 is to help you build anything from APIs to rich and powerful applications, including eCommerce sites, accounting software, SaaS solutions, and CRMs.

You can even combine D1 with Cloudflare Access and create internal dashboards and admin tools that are securely locked to only the people in your organization. The world, truly, is your oyster.

The D1 developer experience

We’ll talk about the capabilities, and upcoming features further down in the post, but at the core of it, the strength of D1 is the developer experience: allowing you to go from nothing to a full stack application in an instant. Think back to a tool you’ve used that made development feel magical — that’s exactly what we want developing with Workers and D1 to feel like.

To give you a sense of it, here’s what getting started with D1 will look like.

Creating your first D1 database

With D1, you will be able to create a database in just a few clicks: define the tables, insert or upload some data, no need to memorize any commands unless you want to.

Of course, if the command-line is your jam, earlier this week, we announced the new and improved Wrangler 2, the best tool for wrangling and deploying your Workers, and soon also your tool for deploying D1. Wrangler will also come with native D1 support, so you can create & manage databases with a few simple commands:
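
For example, commands along these lines (the names and flags here are illustrative of the planned D1 support, not a finished release):

wrangler d1 create northwind
wrangler d1 execute northwind --command "SELECT count(*) FROM Product;"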

Accessing D1 from your Worker

Attaching D1 to your Worker is as easy as creating a new binding. Each D1 database that you attach to your Worker gets attached with its own binding on the env parameter:

export default {
  async fetch(request, env, ctx) {
    const { pathname } = new URL(request.url)
    if (pathname === '/num-products') {
      const { result } = await env.DB.get(`SELECT count(*) AS num_products FROM Product;`)
      return new Response(`There are ${result.num_products} products in the D1 database!`)
    }
    // Fall through with a 404 for any other path
    return new Response('Not found', { status: 404 })
  }
}

Or, for a slightly more complex example, you can safely pass parameters from the URL to the database using a Router and parameterised queries:

import { Router } from 'itty-router';
const router = Router();

router.get('/product/:id', async ({ params }, env) => {
  const { result } = await env.DB.get(
    `SELECT * FROM Product WHERE ID = $id;`,
    { $id: params.id }
  )
  return new Response(JSON.stringify(result), {
    headers: {
      'content-type': 'application/json'
    }
  })
})

export default {
  fetch: router.handle,
}

So what can you expect from D1?

First and foremost, we want you to be able to develop with D1, without having to worry about cost.

At Cloudflare, we don’t believe in holding your data hostage, so D1, like R2, will be free of egress charges. Our plan is to price D1 like we price our storage products: by charging for the base storage plus database operations performed.

But, again, we don’t want our customers worrying about the cost or what happens if their business takes off, and they need more storage or have more activity. We want you to be able to build applications as simple or complex as you can dream up. We will ensure that D1 costs less and performs better than comparable centralized solutions. The promise of serverless and a global network like Cloudflare’s is performance and lower cost driven by our architecture.

Here’s a small preview of the features in D1.

Read replication

With D1, we want to make it easy to store your whole application’s state in one place, so you can perform arbitrary queries across the full data set. That’s what makes relational databases so powerful.

However, we don’t think powerful should be synonymous with cumbersome. Most relational databases are huge, monolithic things and configuring replication isn’t trivial, so in general, most systems are designed so that all reads and writes flow back to a single instance. D1 takes a different approach.

With D1, we want to take configuration off your hands, and take advantage of Cloudflare’s global network. D1 will create read-only clones of your data, close to where your users are, and constantly keep them up-to-date with changes.

Batching

Many operations in an application don’t just generate a single query. If your logic is running in a Worker near your user, but each of these queries needs to execute on the database, then sending them across the wire one-by-one is extremely inefficient.

D1’s API includes batching: anywhere you can send a single SQL statement you can also provide an array of them, meaning you only need a single HTTP round-trip to perform multiple operations. This is perfect for transactions that need to execute and commit atomically:

async function recordPurchase(env, userId, productId, amount) {
  const result = await env.DB.exec([
    [
      `UPDATE users SET balance = balance - $amount WHERE user_id = $user_id`,
      { $amount: amount, $user_id: userId },
    ],
    [
      `UPDATE product SET total_sales = total_sales + $amount WHERE product_id = $product_id`,
      { $amount: amount, $product_id: productId },
    ],
  ])
  return result
}

Embedded compute

But we’re going further. With D1, it will be possible to define a chunk of your Worker code that runs directly next to the database, giving you total control and maximum performance—each request first hits your Worker near your users, but depending on the operation, can hand off to another Worker deployed alongside a replica or your primary D1 instance to complete its work.

Backups and redundancy

There are few things as critical as the data stored in your main application’s database, so D1 will automatically save snapshots of your database to Cloudflare’s cloud storage service, R2, at regular intervals, with a one-click restoration process. And, since we’re building on the redundant storage of Durable Objects, your database can physically move locations as needed, resulting in self-healing from even the most catastrophic problems in seconds.

Importing and exporting data

While D1 already supports the SQLite API, making it easy for you to write your queries, you might also need data to run them on. If you’re not creating a brand-new application, you may want to import an existing dataset from another source or database, which is why we’ll be working on allowing you to bring your own data to D1.

Likewise, one of SQLite’s advantages is its portability. If your application has a dedicated staging environment, say, you’ll be able to clone a snapshot of that data down to your local machine to develop against. And we’ll be adding more flexibility, such as the ability to create a new database with a set of test data for each new pull request on your Pages project.

What’s next?

This wouldn’t be a Cloudflare announcement if we didn’t conclude on “we’re just getting started!” — and it’s true! We are really excited about all the powerful possibilities our database on our global network opens up.

Are you already thinking about what you’re going to build with D1 and Workers? Same. Give us your details, and we’ll give you access as soon as we can — look out for a beta invite from us starting as early as June 2022!

Logs on R2: slash your logging costs

Post Syndicated from Tanushree Sharma original https://blog.cloudflare.com/logs-r2/

Hot on the heels of the R2 open beta announcement, we’re excited that Cloudflare enterprise customers can now use Logpush to store logs on R2!

Raw logs from our products are used by our customers for debugging performance issues, to investigate security incidents, to keep up security standards for compliance and much more. You shouldn’t have to make tradeoffs between keeping logs that you need and managing tight budgets. With R2’s low costs, we’re making this decision easier for our customers!

Getting into the numbers

Cloudflare helps customers at different levels of scale — from a few requests per day, up to a million requests per second. Because of this, the cost of log storage also varies widely. For customers with higher-traffic websites, log storage costs can grow large, quickly.

As an example, imagine a website that gets 100,000 requests per second. This site would generate about 9.2 TB of HTTP request logs per day, or 850 GB/day after gzip compression. Over a month, you’ll be storing about 26 TB (compressed) of HTTP logs.
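
To sanity-check those numbers: 100,000 requests per second is 100,000 × 86,400 ≈ 8.6 billion log lines per day, so 9.2 TB/day implies a little over 1 KB per HTTP request log line; 850 GB/day corresponds to roughly an 11:1 gzip compression ratio, and 30 days of that compressed output comes to about 26 TB.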

For a typical use case, imagine that you write and read the data exactly once – for example, you might write the data to object storage before ingesting it into an alerting system. Compare the costs of R2 and S3 (note that this excludes costs per operation to read/write data).

Provider | Storage price | Data transfer price | Total cost, assuming data is read once
R2 | $0.015/GB | $0 | $390/month
S3 (Standard, US East) | $0.023/GB | $0.09/GB for the first 10 TB, then $0.085/GB | $2,858/month
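
The totals follow directly from those rates: on R2, 26,000 GB × $0.015/GB ≈ $390 per month, with nothing owed for data transfer; on S3, 26,000 GB × $0.023/GB ≈ $598 for storage, plus (10,000 GB × $0.09/GB) + (16,000 GB × $0.085/GB) ≈ $2,260 to read the data out once, for roughly $2,858 in total.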

In this example, R2 leads to 86% savings! It’s worth noting that querying logs is where another hefty price tag comes in because Amazon Athena charges based on the amount of data scanned. If your team is looking back through historical data, each query can be hundreds of dollars.

Many of our customers have tens to hundreds of domains behind Cloudflare and the majority of our Enterprise customers also use multiple Cloudflare products. Imagine how costs will scale if you need to store HTTP, WAF and Spectrum logs for all of your Internet properties behind Cloudflare.

For SaaS customers that are building the next big thing on Cloudflare, logs are important to get visibility into customer usage and performance. Your customer’s developers may also want access to raw logs to understand errors during development and to troubleshoot production issues. Costs for storing logs multiply and add up quickly!

The flip side: log retrieval

When designing products, one of Cloudflare’s core principles is ease of use. We take on the complexity, so you don’t have to. Storing logs is only half the battle; you also need to be able to access relevant logs when you need them – in the heat of an incident or when doing an in-depth analysis.

Our product, Logpull, offers seven days of log retention and an easy-to-use API to access them. Our customers love that Logpull doesn’t need any setup on third parties since it’s completely managed by Cloudflare. However, Logpull is limited in the retention of logs, the type of logs that we store (only HTTP request logs), and the amount of data that can be queried at one time.

We’re building tools for log retrieval that make it super easy to get your data out of R2 from any of our datasets. Similar to Logpull, we’ll start by supporting lookups by time period and rayId. From there, we’ll tackle more complex functions like returning logs within time X and Y that have 500 errors or where WAF action = block.

We’re looking for customers to join a closed beta for our Log Retrieval API. If you’re interested in testing it out, giving feedback, and ultimately helping us shape the product, sign up here.

Logs on R2: How to get started

Enterprise customers first need to get R2 added to their contract. Reach out to your account team if this is something you’re interested in! Once enabled, create an R2 bucket for your logs and follow the Logpush setup flow to create your job.

It’s that simple! If you have questions, our Logpush to R2 developer docs go into more detail.

More to come

We’re continuing to build out more advanced Logpush features with a focus on customization. Here’s a preview of what’s next on the roadmap:

  • New datasets: Network Analytics Logs, Workers Invocation Logs
  • Log filtering
  • Custom log formatting

We also have exciting plans to build out log analysis and forensics capabilities on top of R2. We want to make log storage tightly coupled to the Cloudflare dash so you can see high level analytics and drill down into individual log lines all in one view. Stay tuned to the blog for more!

Introducing Cache Reserve: massively extending Cloudflare’s cache

Post Syndicated from Alex Krivit original https://blog.cloudflare.com/introducing-cache-reserve/

One hundred percent. 100%. One-zero-zero. That’s the cache ratio we’re all chasing. Having a high cache ratio means that more of a website’s content is served from a Cloudflare data center close to where a visitor is requesting the website. Serving content from Cloudflare’s cache means it loads faster for visitors, saves website operators money on egress fees from origins, and provides multiple layers of resiliency and protection to make sure that content is always available to be served.

Today, I’m delighted to announce a massive extension of the benefits of caching with Cache Reserve: a new way to persistently serve all static content from Cloudflare’s global cache. By using Cache Reserve, customers can see higher cache hit ratios and lower egress bills.

Why is getting a 100% cache ratio difficult?

Every second, Cloudflare serves tens of millions of requests from our cache, which equates to multiple terabytes per second of cached data being delivered to website visitors around the world. With this massive scale, we must ensure that the most requested content is cached in the areas where it is most popular. Otherwise, visitors might wait too long for content to be delivered from farther away and our network would be running inefficiently. If cache storage in a certain region is full, our network avoids imposing these inefficiencies on our customers by evicting less-popular content from the data center and replacing it with more-requested content.

This works well for the majority of use cases, but all customers have long tail content that is rarely requested and may be evicted from cache. This can be a cause of concern for customers, as this unpopular content can be a major cost driver if it is evicted repeatedly and needs to be served from an origin. This concern can be especially significant for customers with massive content libraries. So how can we make sure to keep this less popular content in cache to shield the customer from origin egress?

Cache Reserve removes customer content from this popularity contest and ensures that even if the specific content hasn’t been requested in months, it can still be served from Cloudflare’s cache – avoiding the need to pull it from the origin and saving the customer money on egress. Cache Reserve helps get customers closer to that 100% cache ratio and helps serve all of their content from our global CDN, forever.  

Why is cache eviction needed?

Most content served from our cache starts its journey from an origin server – where content is hosted. In order to be admitted to Cloudflare’s cache the content sent from the origin must meet certain eligibility criteria that ensures it can be reused to respond to other requests for a website (content that doesn’t change based on who is visiting the site).

After content is admitted to cache, the next question to consider is how long it should remain in cache. Since cache ratios are calculated by taking the number of requests for content and identifying the portion that are answered from a cache server instead of an origin server, ensuring content remains cached in an area where it is highly requested is paramount to achieving a high cache ratio.

Some CDNs use a pay-to-play model that allows customers to pay more money to ensure content is cached in certain areas for some length of time. At Cloudflare, we don’t charge customers based on where or for how long something is cached. This means that we have to use signals other than a customer’s willingness to pay to make sure that the right content is cached for the right amount of time and in the right areas.

Where to cache a piece of content is pretty straightforward (where it’s being requested); how long content should remain in cache can be highly variable.

Beyond headers like cache-control or cdn-cache-control, which help determine how long a customer wants something to be served from cache, the other element that CDNs must consider is whether they need to evict content early to optimize storage of more popular assets. We do eviction based on an algorithm called “least recently used” or LRU. This means that the least-requested content can be evicted from cache first to make space for more popular content when storage space is full.

This caching strategy requires keeping track of a lot of information about when requests come in and constantly updating the cache to make sure that the hottest content is kept in cache and the least popular content is evicted. This works well and is fair for the wide array of customers our CDN supports.
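
For intuition, here is a minimal sketch of the bookkeeping an LRU cache performs (a toy illustration of the eviction policy, not Cloudflare’s implementation):

// Toy LRU cache: JavaScript Maps iterate in insertion order, so re-inserting
// a key on every access keeps the least recently used entry at the front.
class LRUCache<V> {
  private map = new Map<string, V>();
  constructor(private capacity: number) {}

  get(key: string): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      this.map.delete(key); // move to the most-recently-used position
      this.map.set(key, value);
    }
    return value;
  }

  set(key: string, value: V): void {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // Evict the least recently used entry (the first key in iteration order).
      const lru = this.map.keys().next().value as string;
      this.map.delete(lru);
    }
  }
}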

However, if a customer has a large library of content that might go through cycles of popularity and which they’d like to serve from cache regardless, then LRU might mean additional origin egress as assets that are requested sparingly over a long time frame are pulled more from the origin.    

That’s where Cache Reserve comes in. Cache Reserve is not an alternative to our popularity-based cache but a complement to it. By backstopping all cacheable content in Cache Reserve, customers don’t have to worry about cache eviction or ephemerality any longer.

Cache Reserve

Cache Reserve is a large, persistent data store that is implemented on top of R2. By pushing a single button in the dashboard, all of your website’s cacheable content will be written to Cache Reserve. In the same way that Tiered Cache builds a hierarchy of caches between your visitors and your origin, Cache Reserve serves as the ultimate upper-tier cache that will reserve storage space for your assets for as long as you want. This ensures that your content is always served from cache, shielding your origin from unneeded egress fees, and improving response performance.

How Does Cache Reserve Work?

Cache Reserve sits between our edge data centers and your origin and provides guaranteed SLAs for how long your content can remain in cache.

As content is pulled from the origin, it will be written to Cache Reserve, then to upper-tier data centers, and then to lower-tier data centers until it reaches the client to fulfill the request. Subsequent requests for the same content will not need to go all the way back to the origin for the response and can instead be served from a cache closer to the visitor, improving both the performance and the cost of serving the assets. As content gets evicted from lower-tiers and upper-tiers, it will be backstopped by Cache Reserve.

Cache Reserve avoids the request-based eviction implemented in LRU and ensures that assets will remain in cache as long as they are needed. Cache Reserve extends the benefits of Tiered Cache by reducing the number of times Cloudflare’s network needs to ask an origin for content we should have in cache, while simultaneously limiting the number of connections and requests that our data centers need to open to your origin to ask for missing content. Using Cache Reserve with Tiered Cache helps collapse the number of requests that result from multiple concurrent cache misses from lower-tiers for the same content.

As an example, let’s assume a cold request for example.com, something our network has never seen before. If a client request comes into the closest lower-tier data center and it is a miss, that lower-tier is mapped to an upper-tier data center. When the lower-tier asks the upper-tier for the content and it is also a miss, the upper-tier will ask Cache Reserve for the content. Now, being the ultimate upper-tier, it will be the only data center that can ask the origin for content if it is not stored on our network. This will help limit the origin resources you need to devote to serving this content as once it’s written to Cache Reserve, your origin doesn’t need to fan out the content to any other part of Cloudflare’s network.

When your content does need updating, Cache Reserve will respect cache-control headers and purge requests. This means that if you want to control how long something remains fresh in Cache Reserve, before Cloudflare goes back to your origin to revalidate the content, set it as a cache-control header and it will be respected without risk of early eviction. Or if you want to update content on the fly, you can send a purge request which will be respected in both Cloudflare’s cache and in Cache Reserve.
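
For example, serving an asset with the response header cache-control: max-age=2592000 marks it as fresh for 30 days; within that window, Cache Reserve can keep answering requests for it without revalidating against your origin.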

How do you use Cache Reserve?

Currently, Cache Reserve is in closed beta, meaning that anyone can sign up, but we will be slowly rolling it out to customers over the coming weeks to make sure that we are quickly triaging edge cases and making fundamental improvements before we make it generally available to everyone.

To sign up for the Cache Reserve beta:

  • Simply go to the Caching tile in the dashboard.
  • Navigate to the Cache Reserve page and push the sign up button.

The Cache Reserve Plan will mimic the low cost of R2. Storage will be $0.015 per GB per month, and operations will be $0.36 per million reads and $4.50 per million writes. For more information about pricing, please refer to the R2 page to get a general idea (a Cache Reserve pricing page will be out soon).

Try it out!

Cache Reserve holds tremendous promise to increase cache hit ratios — which will improve the economics of running any website while speeding up visitors’ experiences. We’re excited to begin letting people use Cache Reserve soon. Be sure to check out the beta and let us know what you think.

Durable Objects Alarms — a wake-up call for your applications

Post Syndicated from Matt Alonso original https://blog.cloudflare.com/durable-objects-alarms/

Since we launched Durable Objects, developers have leveraged them as a novel building block for distributed applications.

Durable Objects provide globally unique instances of a JavaScript class a developer writes, accessed via a unique ID. The Durable Object associated with each ID implements some fundamental component of an application — a banking application might have a Durable Object representing each bank account, for example. The bank account object would then expose methods for incrementing a balance, transferring money or any other actions that the application needs to do on the bank account.

Durable Objects work well as a stateful backend for applications — while Workers can instantiate a new instance of your code in any of Cloudflare’s data centers in response to a request, Durable Objects guarantee that all requests for a given Durable Object will reach the same instance on Cloudflare’s network.

Each Durable Object is single-threaded and has access to a stateful storage API, making it easy to build consistent and highly-available distributed applications on top of them.
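
To make that concrete, here is a minimal sketch of the bank account example (simplified, with routing and error handling omitted):

// Minimal Durable Object sketch: one instance per account ID, with the balance
// persisted in the object's transactional storage.
export class BankAccount {
  constructor(state, env) {
    this.storage = state.storage;
  }

  // All requests for a given account ID reach this same instance,
  // so the read-modify-write on the balance below is safe.
  async fetch(request) {
    const url = new URL(request.url);
    let balance = (await this.storage.get("balance")) ?? 0;
    if (url.pathname === "/deposit") {
      balance += Number(url.searchParams.get("amount") ?? 0);
      await this.storage.put("balance", balance);
    }
    return new Response(JSON.stringify({ balance }));
  }
}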

This system makes distributed systems development easier — we’ve seen some impressive applications launched atop Durable Objects, from collaborative whiteboarding tools to conflict-free replicated data type (CRDT) systems for coordinating distributed state.

However, up until now, there’s been a piece missing — how do you invoke a Durable Object when a client Worker is not making requests to it?

As with any distributed system, Durable Objects can become unavailable and stop running. Perhaps the machine you were running on was unplugged, or the datacenter burned down and is never coming back, or an individual object exceeded its memory limit and was reset. Before today, a subsequent request would reinitialize the Durable Object on another machine, but there was no way to programmatically wake up an Object.

Durable Objects Alarms are here to change that, unlocking new use cases for Durable Objects like queues and deferred processing.

What is a Durable Object Alarm?

Durable Object Alarms allow you, from within your Durable Object, to schedule the object to be woken up at a time in the future. When the alarm’s scheduled time comes, the Durable Object’s alarm() handler will be called. If this handler throws an exception, the alarm will be automatically retried using exponential backoff until it succeeds — alarms have guaranteed at-least-once execution.

How are Alarms different from Workers Cron Triggers?

Alarms are more fine-grained than Cron Triggers. While a Workers service can have up to three Cron Triggers configured at once, it can have an unlimited number of Durable Objects, each of which can have a single alarm active at a time.

Alarms are directly scheduled from and invoke a function within your Durable Object. Cron Triggers, on the other hand, are not programmatic — they execute based on their schedules, which have to be configured via the Cloudflare Dashboard or centralized configuration APIs.

How do I use Alarms?

First, you’ll need to add the durable_object_alarms compatibility flag to your wrangler.toml.

compatibility_flags = ["durable_object_alarms"]

Next, implement an alarm() handler in your Durable Object that will be called when the alarm executes. From anywhere else in your Durable Object, call state.storage.setAlarm() and pass in a time for the alarm to run at. You can use state.storage.getAlarm() to retrieve the currently set alarm time.

In this example, we implement an alarm handler that wakes the Durable Object up once every 10 seconds to batch incoming requests, deferring processing until there is enough work in the queue to make processing worthwhile.

export default {
  async fetch(request, env) {
    let id = env.BATCHER.idFromName("foo");
    return await env.BATCHER.get(id).fetch(request);
  },
};

const SECONDS = 1000;

export class Batcher {
  constructor(state, env) {
    this.state = state;
    this.storage = state.storage;
    this.state.blockConcurrencyWhile(async () => {
      let vals = await this.storage.list({ reverse: true, limit: 1 });
      this.count = vals.size == 0 ? 0 : parseInt(vals.keys().next().value);
    });
  }
  async fetch(request) {
    this.count++;

    // If there is no alarm currently set, set one for 10 seconds from now
    // Any further POSTs in the next 10 seconds will be part of this batch.
    let currentAlarm = await this.storage.getAlarm();
    if (currentAlarm == null) {
      this.storage.setAlarm(Date.now() + 10 * SECONDS);
    }

    // Add the request to the batch.
    await this.storage.put(this.count, await request.text());
    return new Response(JSON.stringify({ queued: this.count }), {
      headers: {
        "content-type": "application/json;charset=UTF-8",
      },
    });
  }
  async alarm() {
    let vals = await this.storage.list();
    await fetch("http://example.com/some-upstream-service", {
      method: "POST",
      // Serialize the batched values before sending them upstream.
      body: JSON.stringify(Array.from(vals.values())),
    });
    await this.storage.deleteAll();
    this.count = 0;
  }
}

Once every 10 seconds, the alarm() handler will be called. In the event an unexpected error terminates the Durable Object, it will be re-instantiated on another machine, following a short delay, after which it can continue processing.

Under the hood, Alarms are implemented by making reads and writes to the storage layer. This means Alarm get and set operations follow the same rules as any other storage operation – writes are coalesced with other writes, and reads have a defined ordering. See our blog post on the caching layer we implemented for Durable Objects for more information.

Durable Objects Alarms guarantee fault-tolerance

Alarms are designed to have no single point of failure and to run entirely on our edge – every Cloudflare data center running Durable Objects is capable of running alarms, including migrating Durable Objects from unhealthy data centers to healthy ones as necessary to ensure that their Alarm executes. Single failures should resolve in under 30 seconds, while multiple failures may take slightly longer.

We achieve this by storing alarms in the same distributed datastore that backs the Durable Object storage API. This allows alarm reads and writes to behave identically to storage reads and writes and to be performed atomically with them, and ensures that alarms are replicated across multiple datacenters.

Within each data center capable of running Durable Objects, there are multiple processes responsible for tracking upcoming alarms and triggering them, providing fault tolerance and scalability within the data center. A single elected leader in each data center is responsible for detecting failure of other data centers and assigning responsibility of those alarms to healthy local processes in its own data center. In the event of leader failure, another leader will be elected and become responsible for executing Alarms in the data center. This allows us to guarantee at-least-once execution for all Alarms.

How do I get started?

Alarms are a great way to build new distributed primitives, like queues, atop Durable Objects. They also provide a method for guaranteeing work within a Durable Object will complete, without relying on a client request to “kick” the Object.

You can get started with Alarms now by enabling Durable Objects in the Cloudflare dashboard. For more info, check the developer docs or jump in our Discord.

A New Hope for Object Storage: R2 enters open beta

Post Syndicated from Greg McKeon original https://blog.cloudflare.com/r2-open-beta/

In September, we announced that we were building our own object storage solution: Cloudflare R2. R2 is our answer to egregious egress charges from incumbent cloud providers, letting developers store as much data as they want without worrying about the cost of accessing that data.

The response has been overwhelming.

  • Independent developers had bills too small for cloud providers to negotiate fair egress rates with them. Egress charges were the largest line-item on their cloud bills, strangling side projects and the new businesses they were building.
  • Large corporations had written off multi-cloud storage – and thus multi-cloud itself – as a pipe dream. They came to us with excitement, pitching new products that integrated data with partner companies.
  • Non-profit research organizations were paying massive egress fees just to share experiment data with one another. Egress fees were having a real impact on their ability to collaborate, driving silos between organizations and restricting the experiments and analyses they could run.

Cloudflare exists to help build a better Internet. Today, the Internet gets what it deserves: R2 is now in open beta.

Self-serve customers can enable R2 in the Cloudflare dashboard. Enterprise accounts can reach out to their CSM for onboarding.

Internal and external APIs

R2 has two APIs: an API accessible only from within Workers, which we call the In-Worker API, and an S3-compatible API, which exposes your bucket at a URL of the form <account_id>.r2.cloudflarestorage.com/<bucket> (the same endpoint used in the script below). Before you can make requests to R2, you’ll need to be authenticated — R2 buckets are private by default.

In-Worker API

With the in-Worker API, a bucket is “bound” to a specific Worker, which can then perform PUT, GET, DELETE and LIST operations against the bucket.
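
As a rough sketch of what that looks like, assuming a bucket bound to the Worker under the name MY_BUCKET (you choose the binding name when configuring the Worker):

export default {
  async fetch(request, env) {
    // PUT: write an object into the bound bucket.
    await env.MY_BUCKET.put('greeting.txt', 'hello from R2');

    // GET: read it back (returns null if the key does not exist).
    const object = await env.MY_BUCKET.get('greeting.txt');
    const body = object ? await object.text() : 'not found';

    // LIST: enumerate keys in the bucket, then DELETE the object.
    const listing = await env.MY_BUCKET.list();
    await env.MY_BUCKET.delete('greeting.txt');

    return new Response(`${body} (bucket holds ${listing.objects.length} objects)`);
  }
}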

S3-compatible API

For the S3-compatible API, authentication is done the same way as on S3: SigV4 against an R2 URL. SigV4 signs requests using a secret key to authenticate them to R2. This means public access to R2 over the Internet is only possible today by hosting a Worker, connecting it to R2, and routing requests through it.

The easiest way to test the S3-compatible API is to use an S3 client. One of the most popular S3 clients is the boto3 SDK.

In Python, copy the following script and fill in the account_id, access_key_id, and secret_access_key fields with your R2 account credentials.

#!/usr/bin/env python
import boto3
import pprint
from botocore.client import Config

account_id = ''
access_key_id = ''
secret_access_key = ''
endpoint = f'https://{account_id}.r2.cloudflarestorage.com'

cl = boto3.client(
    's3',
    aws_access_key_id=access_key_id,
    aws_secret_access_key=secret_access_key,
    endpoint_url=endpoint,
    config=Config(
        region_name='auto',
        s3={'addressing_style': 'path'},
        retries={'max_attempts': 0},
    ),
)

printer = pprint.PrettyPrinter().pprint

# Create a bucket, verify it exists, then write an object into it.
printer(cl.create_bucket(Bucket='some-bucket'))
printer(cl.head_bucket(Bucket='some-bucket'))
printer(cl.put_object(Bucket='some-bucket', Key='my-object', Body='some payload'))

Features

R2 comes with support for all basic create/read/update/delete S3 features through both of its APIs.

During the open beta period, we’re targeting R2 to sustain 1,000 GET operations per second and 100 PUT operations per second, per bucket. R2 supports objects up to approximately 5 TB in size, with individual parts limited to 5 GB of data.

R2 provides strongly consistent access to data. Once a PUT is confirmed by R2, future GET operations will always reflect the new key/value pair. The only exception to this is when deleting a bucket. For a short period of time following deletion, the bucket may still exist and continue to allow reads/writes.

Pricing

When we initially announced R2, we included preliminary pricing numbers. One of our main goals with R2 has been to serve the developers who can’t negotiate large discounts with cloud vendors. To that end, we’re also announcing a forever-free tier that lets developers start building on R2 with no charges at all.

R2 charges depend on the total volume of data stored and the type of operation performed on the data:

  • Storage is priced at $0.015 / GB, per month.
  • Class A operations (including writes and lists) cost $4.50 / million.
  • Class B operations cost $0.36 / million.

Class A operations tend to mutate state, such as creating a bucket, listing objects in a bucket, or writing an object. Class B operations tend to read existing state, for example reading an object from a bucket. You can find more information on pricing and a full list of operation types in the docs.

Of course, there is no charge for egress bandwidth from R2. You can access your bucket to your heart’s content.

R2’s forever-free tier includes:

  • 10 GB-months of stored data
  • 1,000,000 Class A operations, per month
  • 10,000,000 Class B operations, per month

Free usage resets each month. While in the open beta phase, R2 usage over the free tier will be billed.
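
As a back-of-the-envelope example (hypothetical usage, at the rates above): storing 1 TB for a month while performing 2 million Class A and 20 million Class B operations would cost (1,000 − 10) GB × $0.015 + (2 − 1) million × $4.50 + (20 − 10) million × $0.36 ≈ $14.85 + $4.50 + $3.60 = $22.95, with nothing extra for egress.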

Future plans

We’ve spent the past six months in closed beta with a number of design partners, building out our storage solution. Backed by Durable Objects, R2’s novel architecture delivers both high availability and consistent performance.

While we’ve made great progress on R2, we still have plenty left to build in the coming months.

Improving performance

Our first priority is to improve performance and reliability. While we’ve thrown internal usage and our design partners’ demands at R2, there’s no substitute for live production traffic.

During the open beta period, R2 can sustain a maximum of 1,000 GET operations per second and 100 PUT operations per second, per bucket. We’ll look to raise these limits as we get comfortable operating the system. If you have higher needs, reach out to us!

When you create a bucket, you won’t see a region selector. Our vision for R2 includes automatically globally distributed storage, where R2 seamlessly places each object into the storage region closest to where the request comes from. Today, R2 primarily stores data in North America, which can lead to higher latencies when accessing content from other regions. We’ll first look to address this by adding additional regions where objects can be created, before adding automatic migration of existing objects across regions. Similar to what we’ve built with jurisdictional restrictions for Durable Objects, we’ll also enable restricting where an R2 bucket places data to comply with privacy regulations.

Expanding R2’s feature set

We’ll then focus on expanding R2 capabilities beyond the basic S3 API. In the near term, we’re focused on delivering:

  • Support for TTLs, so data can automatically be deleted from buckets over time.
  • Public buckets, so a bucket can be exposed to the Internet without writing a Worker.
  • Pre-signed URL support, which delegates read and write access for a specific key to a token.
  • Integration with Cloudflare’s cache, to scale read requests and provide global distribution of data.

If you have additional feature requests that aren’t listed above, we want to hear from you! Reach out and let us know what you need to make R2 your new, zero-cost egress object store.

Announcing Workers for Platforms: making every application on the Internet more programmable

Post Syndicated from Rita Kozlov original https://blog.cloudflare.com/workers-for-platforms/

As a business, whether a startup or Fortune 500 company, your number one priority is to make your customers happy and successful with your product. To your customers, however, success and happiness sometimes seems to be just one feature away.

“If only you could customize X, we’ll be able to use your product” – the largest prospect in your pipeline. “If you just let us do Y, we’ll expand our usage of your product by 10x” – your most strategic existing customer.

You want your product to be everything to everybody, but engineering can only keep up so quickly. So, what gives?

Today, we’re announcing Workers for Platforms, our tool suite to help make any product programmable, and help our customers deliver value to their customers and developers instantaneously.

A more programmable interface

One way to give your customers the ability to programmatically interact with your product is by providing them with APIs. That is a big part of why APIs are so prolific today — enabling code (whether your own, or that of a 3rd party) to engage with your applications is nothing short of revolutionary.

But there’s still a problem. While APIs can give developers the ability to interact with your application programmatically, developers are ultimately always limited by the abstractions exposed to them by the API. You, an application owner, have to have predicted how the customer would use your product, and then built out the API to support the use case. If there’s one thing I have learned as a product manager, it’s almost impossible to predict how customers will use a product. And if there’s a second thing I’ve learned, it’s that even with plentiful engineering resources, it’s also almost impossible to build all the functionality required to keep said customers happy.

There is another way, however.

Functions, in contrast to APIs, provide the lowest level primitives (rather than abstractions on top of them). This lets the developer define the right behavior from there — and they can even define their own APIs on top.

In this sense, functions and APIs are actually complementary to each other — you may even choose to call another API directly from your function. For example, if you’re handling an event in a messaging system, you could implement your own feature to send an email by calling an email API, or create a ticket in your ticketing system, etc.
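
As an illustrative sketch (the event shape and the email endpoint here are hypothetical, not a real product API), a customer-supplied function might look like:

export default {
  async messageReceived(message) {
    // Custom logic: page the on-call address for urgent messages by
    // calling a (hypothetical) third-party email API.
    if (message.priority === 'urgent') {
      await fetch('https://email.example.com/send', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          to: 'oncall@example.com',
          subject: `Urgent: ${message.subject}`,
        }),
      });
    }
  }
}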

This gets at why we’re so excited about Workers for Platforms: it enables you to expose a direct way for your customers’ developers to bring their own logic to any application. We think it’s going to unlock a wave of customer-led innovation on top of companies that adopt it, and has the potential to be as impactful to building applications on the web as the API has been.

A better experience for developers

While Workers for Platforms exposes a more powerful paradigm for making products programmable, it also results in a better experience for you as a developer.

Today, as a developer, before you can even get started using APIs or webhooks, there’s a list of tedious tasks to deal with. First, you have to set up somewhere to host your code, whether a server or a serverless function, and expose it via an external endpoint so it can be reached. You have to deal with ops, custom tokens, and figuring out a new authentication scheme, all before you get started. Then you have to maintain that service, and make sure that it stays up to ensure that events are always processed properly.

With functions embedded directly into the products you’re using, you can just start writing the code.

Why hasn’t this model been embraced until now?

Allowing developers to program how events work seems obvious, but just because it’s obvious doesn’t mean it’s easy.

At Cloudflare, we encountered this very problem five years ago — we were onboarding larger and larger customers onto our network, each needing to dictate the fate of a request in their own way. While Page Rules offered a way to modify behavior by URL, customers wanted to control behavior based on cookie, header, geolocation, and more!

We realized our engineering team couldn’t keep up with every request, so we decided to allow customers to bend our product to their own needs.

As we looked for an approach to this problem, we wanted a solution that would meet the following two requirements:

  1. Performance requirement: it’s unacceptable for a CDN, which should make your site faster, to introduce latency. How do we make this so fast you don’t even notice it’s there?
  2. Security requirement: how do we run untrusted code securely?

While these requirements are especially critical when you offer performance and security products, solving these challenges is critical when giving your customers the ability to program your product. If the function needs to run on the critical path to your user, introducing latency is equally unacceptable. And of course, no one wants to get breached just to have their users be able to program.

Creating a really fast and secure multi-tenant environment is no easy feat.

When we evaluated our options for solving this problem, we first turned to technologies that already existed on the server. Serverless functions existed at the time, but were powered by containers, which would introduce cold starts – which was, well, a non-starter. So we turned to the browser, or specifically Chrome, which was powered by V8, and decided to take the same approach and run it on our servers.

And while the approach sounds simple (and perhaps in retrospect obvious, as these things tend to seem), running a large multi-tenant development platform at scale is no small effort. If a part of the purpose of allowing customers to program your offering is to free up engineering efforts to focus on building new features, the effort it takes to maintain and scale such a development platform may defeat the purpose.

What we realized recently was that we weren’t alone in trying to solve this problem.

Companies like Shopify, building their next generation programmable storefront, Oxygen, were trying to solve the same thing. They wanted to enable their customers to run custom storefronts, and be able to offer the best performance possible, while maintaining a secure, multi-tenant environment.

“Shopify is the Internet’s commerce infrastructure, with millions of merchants using the platform,” said Zach Koch, product director, custom storefronts, at Shopify. “Partnering with Cloudflare, we’re able to give developers the tools they need to build unique and performant storefronts. We are excited to work with Cloudflare to alleviate some complexities of building commerce experiences – like scalability and global availability – so that developers can instead focus on what makes their brand distinct.”

How can you build your next platform on Workers for Platforms?

Working with platforms like Shopify to help them address their developers’ needs helped us realize another thing — developer experience is not one-size-fits-all. While we’re building our platform for a broad set of developers, eCommerce developers might have a much more specialized set of needs, best solved by a tailored developer experience. And while the underlying technology is the same, it doesn’t make sense for platforms to build their experiences using the same high-level concepts as our direct customers.

Since no one knows your customers better than you, we want you, the platform provider,  to design the best experience for your users. Workers for Platforms exposes a new set of tools and APIs to integrate directly into the deployment flow you want to design (see what we did there?).

Tags API to manage your functions at scale

Whenever a developer wants to deploy a script on your platform, you can call our APIs to deploy a new Worker in the background. Unlike our traditional Workers offering, Workers for Platforms is designed to be used at scale, to manage hundreds of thousands to millions of Cloudflare Workers.

Depending on how you manage your deployment services, or your users, we now also provide the option to use tags to manage groupings of scripts. For example, if a user deletes their account, you may want to make sure all their Workers are cleaned up. With tags, you can attach arbitrary tags (such as a user ID) to each script to enable bulk actions.

Trace Workers

Where there’s smoke, there’s fire, and where there’s code, well, bugs are also bound to be. When giving developers the tools to write and deploy code, you must also give them the means to debug it.

Trace Workers allow you to collect any information about a request that was handled by a Worker, including any logs or exceptions, and pass them onto your customer. A Trace Worker is a Worker that will receive information about the execution of other Workers, and can forward it to a destination of your choosing, enabling use cases such as live logging or long term storage.

Here is a simple trace Worker that sends its trace data to an HTTP endpoint:

addEventListener("trace", event => {
  event.waitUntil(fetch("http://example.com/trace", {
    method: "POST",
    body: JSON.stringify(event.traces),
  }))
})

Here is an example of what the data in event.traces might look like:

[
  {
    "scriptName": "Example script",
    "outcome": "exception",
    "eventTimestamp": 1587058642005,
    "event": {
      "request": {
        "url": "https://example.com/some/requested/url",
        "method": "GET",
        "headers": [
          "cf-ray": "57d55f210d7b95f3",
          "x-custom-header-name": "my-header-value"
        ],
        "cf": {
          "colo": "SJC"
        }
      },
    },
    "logs": [
      {
        "message": ["string passed to console.log()"],
        "level": "log",
        "timestamp": 1587058642005
      }
    ],
    "exceptions": [
      {
        "name": "Error",
        "message": "Threw a sample exception",
        "timestamp": 1587058642005
      }
    ]
  }
]

Chaining multiple Workers together using Dynamic Dispatch

From working with a few of our early customers, another need we were hearing about often was the ability to run your own code, before running your customer’s code. Perhaps you want to run a layer of authentication, sanitize input or output, or even provide useful information downstream (like user or account IDs).

For this you may want to maintain your own Worker. However, when it’s done executing, you want to be able to call the next Worker, with your customer’s code.

Example:

// "dispatcher" is a binding configured on your Worker that can look up
// your customers' Workers by name.
let user_worker = dispatcher.get('customer-worker-123');
let response = await user_worker.fetch(request);

Custom domains, and more!

The features above are only the new Workers features we enabled for our customers as of this week, but our goal is to provide all the tools you need to build your platform. For example, you can use Workers for Platforms with Cloudflare for SaaS to create custom domains. (And stay tuned for the “and more!”).

How do I get access?

As is the case with any new product we release, we have no doubt we have so much to learn from our customers and their use cases. Since we want to support you, and make sure you’re set up for success, if you’re interested, we’d love to get to know you and your use case, and get you set up with all the tools you need. To get started, we ask that you fill out our form, and we’ll  get in touch with you.

In the meantime, you’re welcome to get started checking out our developer docs, or saying hi in our Discord.

Just getting started

We faced this problem ourselves five years ago — we needed to give our customers the ability to augment our offering in a way that worked for them, so we did just that when we launched Cloudflare Workers. Allowing our customers to program our global network to meet their needs has enabled us to support more customers on our development platform, while enabling our engineering team to focus on turning the most requested customizations into features.

We look forward to seeing both what your developers build on your platform (we believe you yourself will be surprised by the use cases developers come up with that you could never have dreamed up), and what your engineering team is able to tackle in parallel!

Service Bindings are generally available, with efficient pricing

Post Syndicated from Kabir Sikand original https://blog.cloudflare.com/service-bindings-ga/

Today, we’re happy to unveil a new way to communicate between your Workers. In the spirit of baking more and more flexibility into our Developer Platform, our team has been hard at work building a new API to facilitate Worker to Worker communication: Service Bindings. Service Bindings allow your Workers to send requests to other Workers Services, from your code, without those requests going over the Internet. It opens up a world of composability that was previously closed off by a difficult interface, and makes it a lot easier for you to build complex applications on our developer platform.

Service Bindings allow teams to segment application logic across multiple Workers. By segmenting your logic, your teams can build with more confidence, deploying narrowly scoped changes to your applications instead of redeploying the whole application every time. Service Bindings give developers both composability and confidence. We’ve seen some excellent uses so far, and today we’ll go through one of the more common examples. Alongside this functionality, we’ll show you how Cloudflare’s cost efficiency will save you money.

Example: An API Gateway

Service Bindings allow you to easily expand the number of services running on a single request. Developers can now create a pipeline of Workers that call one another and create a complex series of compute blocks. The ability to separate and compose application logic together has opened Cloudflare Workers up to even more uses.

With Service Bindings, one of our customers has moved multiple services off of their legacy infrastructure by creating a gateway Worker that serves as the entry point of a request. This gateway Worker handles decision-making about request routing and quickly shifts requests to appropriate services – be it on their legacy application servers or their newly created Workers. This project enabled several new teams to onboard as a result, each managing their Worker independently. Large teams need a development ecosystem that allows for granular deployments, minimizing the scope of impact when a bad push to production occurs.

Let’s walk through a simple example of an API gateway Worker that handles routing and user authentication. We’ll build an application that takes in a user request and checks for authorization. If the user isn’t authorized, we block the request. If the user has valid credentials, we’ll fetch the user data. The application will also implement login and logout to change the user authentication state.

Here, the api-gateway Worker calls login and logout Workers for authentication to privileged endpoints like /getuser. The api-gateway Worker also checks each request for authorization via the auth Worker and allows valid requests to call the get-user Worker. The get-user Worker then makes an outbound network request to gather the required user information, and passes that data back to the client via our api-gateway Worker. The api-gateway Worker is therefore bound to four other Worker Services: auth, get-user, login, and logout.

Let’s take a look at the code for the api-gateway Worker. We’ll see the routes /login, /logout, and /getuser are implemented on this API. For the /getuser route, the api-gateway Worker requires authorization via the auth Worker. Requests to any other endpoints will return a 404 HTTP status code.

export default {
 async fetch(request, environment) {
   const url = new URL(request.url);
   switch (url.pathname) {
     case '/login':
       return await environment.login.fetch(request);

     case '/logout':
       return await environment.logout.fetch(request);

     case '/getuser': {
       // Check that the "Authorization" header is sent when authenticated.
       const authCheck = await environment.auth.fetch(request.clone());
       if (authCheck.status != 200) { return authCheck }
        // If the auth check passes, forward the request to the get-user Worker
       return await environment.getuser.fetch(request);
     }
   }
   return new Response('Not Found.', { status: 404 });
 }
}

The code really is that simple. The separation of concerns allows your teams to work independently of each other, relying on each service to do what it is supposed to do in production. It allows you to separate your code by use case, developing, testing, and debugging more effectively.
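
For completeness, here is one way the auth Worker referenced above might be written. This is a hypothetical sketch; the token comparison stands in for real validation logic, such as verifying a signed token or checking a session store:

export default {
  async fetch(request) {
    const header = request.headers.get('Authorization') || '';
    const token = header.replace(/^Bearer /, '');
    // Stand-in check only; replace with real token validation.
    const authorized = token !== '' && token === 'expected-token';
    return authorized
      ? new Response('OK', { status: 200 })
      : new Response('Unauthorized', { status: 401 });
  }
}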

But your next question might be, what am I charged for? Before we get into price, let’s first talk about where the compute execution is happening using our example above. A request to /getuser may look something like this, when looking across the request’s lifecycle:

The get-user Worker makes a network call to gather user information while the auth Worker executes entirely within the Workers runtime. Now that we understand what a single execution looks like, let’s talk about cost efficiency.

Cost efficiency that saves you money

Service Bindings are available for you to use starting today. They cost the same as any normal Worker: each invocation is charged as if it’s a request from the Internet – with one important difference. We’re removing the concept of “idle resources” across Workers. You will be charged a single billable duration across all Workers triggered by a single incoming request. This is possible because Cloudflare can share compute resources used by each request across your Workers and pass the resulting cost savings on to our customers.

Revisiting our example above, the api-gateway Worker may be waiting on other dependencies to perform some work, while it sits idle. When we say idle, we mean the time the api-gateway Worker is awaiting a response from the auth and get-user Workers – represented by the gray bars in the request lifetime graphic.

When using Service Bindings, you no longer have to pay for those “idle resources”. With the Workers model, customers can execute work on a single shared compute thread across multiple individual Services, for each and every request. Cloudflare will charge for the amount of time that thread is allocated to your Workers and the time your Workers are awaiting external dependencies. Cloudflare won’t double charge for any overlap.
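
To make that concrete with hypothetical numbers: if api-gateway runs for 5ms, waits 20ms while auth executes, and then spends another 10ms returning the response, you are billed for one 35ms duration spanning both Workers – not 35ms for api-gateway plus another 20ms for auth.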

This is in stark contrast to classic serverless compute models (like Amazon Web Services’ Lambda), where resources are allocated on a per-instance basis, and as such cost is passed to the customer even when those resources are not actively being used. That extra charge is represented by the magenta portions of the request lifetime graphic below.

Cloudflare is able to squash duration down to a single charge, since Cloudflare can share the compute resources between your services. We pass those cost savings on to our customers, so you can pay only for the work you need done, when you need it done, every time.

Getting Started

Excited to try our Service Bindings? Head over to the Settings => Variables tab of your Worker, and click ‘Edit Variables’ under Service Bindings. You can then reference those bindings within your code and call fetch() on any one of them.

We can’t wait to see what you build. Check us out on Discord to join the conversation.

Workers visibility: announcing Logpush for Worker’s Trace Events

Post Syndicated from Tanushree Sharma original https://blog.cloudflare.com/logpush-for-workers/

Writing an application is like building a rocket. Countless hours in development and thousands of moving parts all come down to one moment – launch day. Picture the countdown: T minus 10 seconds. The entire team is making sure that things are running smoothly by monitoring dashboards that measure the health of every part of the system.

It’s every developer’s dream to get the level of visibility that NASA has in their mission control room, but for their own code. For flight directors and engineering directors alike, it’s important to have visibility into the systems that are built throughout development and after release. Today, we’re excited to announce Logpush for Worker’s Trace Events, making it easier than ever to gain visibility into applications built on Workers.

Workers Visibility Today

Today, we have lots of tools that are used to find out what’s happening in a Worker.

These tools are awesome for debugging, spotting general trends, and monitoring Workers through third-party tools. They emphasize ease of use and make it effortless to get visibility quickly from your Workers.

As Workers have evolved, we’re now seeing more adoption from larger enterprises and platform companies who are using Workers as a foundation for their customers to develop on top of. When building complex and dynamic applications, customers – especially those building on Workers in the SaaS world – need access to raw logs.

Bringing Trace Events to Logpush

Coming soon — we’re adding Workers execution logs to Logpush! Cloudflare Enterprise customers get access to Logpush for any of our available products, including CDN, WAF, Spectrum and many more. We’re excited to add Workers to this list.

Logpush for Worker’s Trace Events will include unstructured console.log() messages, exceptions, and metadata about requests/responses. Here is a sample Trace Event for a fetch request:

{"accountID":123456,"scriptName":"cloudflare-workers-script","outcome":"ok","duration":0.5,"CPUTime":4,"eventType":"fetch","event":{"request":{"url":"https://workersdevtest.com","method":"GET","headers":{"accept":"*/*","accept-encoding":"gzip","connection":"Keep-Alive"},"cf":{"clientTcpRtt":40,"tlsVersion":"TLSv1.2","httpProtocol":"HTTP/2","edgeRequestKeepAliveStatus":1,"country":"CA","asn":16591}},"subrequests":{"request":{"url":"https://example.com","method":"GET","headers":{"x-custom-header":"my-header-value"}},"logs":[{"message":["foo"],"level":"log","timestamp":1587491479166}],"exceptions":[]}}}

With this new dataset, our Enterprise customers will be able to send Workers logs to their preferred cloud storage destination such as GCS or R2 or to analysis platforms like Splunk or New Relic. Logpush handles batching and can scale no matter how much traffic your Worker gets.

This brings new ways to get transparency into Workers! You can pinpoint when a fetch request fails, find out which call is adding the most lag in your application, and track down specific log lines to debug the end user experience. Also, combine Workers logs with HTTP request logs to get a better picture of the full request lifecycle.

It also opens up doors for SaaS companies building on Workers. SaaS companies can get visibility into how their customers’ applications are performing and expose logs to their customers’ developers to make debugging and troubleshooting much easier.

Mission Control in the making

wrangler tail, Workers Analytics Engine (coming later this week!) and Logpush for Worker’s Trace Events are an elite trio to give visibility into every aspect of a Worker.

When you’re deep in the mix of development, wrangler tail is by your side to help you crush bugs and eliminate errors.  With Workers Analytics Engine, you can instrument business logic and query aggregates within seconds to populate dashboards for monitoring. Logpush for Trace Events is there for when you need to debug very specific cases and get an exact record of what happened.

Customers big and small are using Cloudflare Workers for their next launch, and we’re building tools to make that happen successfully. We’re bringing Logpush for Trace Events to our Enterprise customers very soon. Stay tuned for updates.

A new era for Cloudflare Pages builds

Post Syndicated from Nevi Shah original https://blog.cloudflare.com/cloudflare-pages-build-improvements/

Music is flowing through your headphones. Your hands are flying across the keyboard. You’re stringing together a masterpiece of code. The momentum is building up as you put on the finishing touches of your project. And at last, it’s ready for the world to see. Heart pounding with excitement and the feeling of victory, you push changes to the main branch… only to end up waiting for the build to execute each step and spit out the build logs.

Starting afresh

Since the launch of Cloudflare Pages, there is no doubt that the build experience has been its biggest source of criticism. From the amount of waiting to the inflexibility of the CI workflow, Pages had a lot of opportunity for growth and improvement. With Pages, our North Star has always been designing a developer platform that fits right into your workflow and oozes simplicity. User pain points have been and always will be our priority, which is why today we are thrilled to share a list of exciting updates to our build times, logs and settings!

Over the last three quarters, we implemented a new build infrastructure that speeds up Pages builds, so you can iterate quickly and efficiently. In February, we soft released the Pages Fast Builds Beta, allowing you to opt in to this new infrastructure on a per-project basis. This not only allowed us to test our implementation, but also gave our community the opportunity to try it out and give us direct feedback in Discord. Today we are excited to announce the new build infrastructure is now generally available and automatically enabled for all existing and new projects!

Faster build times

As a developer, your time is extremely valuable, and we realize Pages builds were slow. It was obvious that creating an infrastructure that built projects faster and smarter was one of our top requirements.

Looking at a Pages build, there are four main steps: (1) initializing the build environment, (2) cloning your git repository, (3) building the application, and (4) deploying to Cloudflare’s global network. Each of these steps is a crucial part of the build process, and upon investigating areas suitable for optimization, we directed our efforts to cutting down on build initialization time.

In our old infrastructure, every time a build job was submitted, we created a new virtual machine to run that build, costing our users precious dev time. In our new infrastructure, we start jobs on machines that are ready and waiting to be used, taking a major chunk of time away from the build initialization step. This step previously ran for 2+ minutes, but with our new infrastructure update, projects are expected to see a build initialization time cut down to 2-3 SECONDS.

This means less time waiting and more time iterating on your code.

Fast and secure

In our old build infrastructure, because we spun up a new virtual machine (VM) for every build, it would take several minutes to boot up and initialize with the Pages build image needed to execute the build. Alternatively, one could reuse a pool of containers, assigning a new build to the next available container, but containers share a kernel with the host operating system, making them far less isolated and posing a huge security risk. This could allow a malicious actor to perform a “container escape” to break out of their sandbox. We wanted the best of both worlds: the speed of a container with the isolation of a virtual machine.

Enter gVisor, a container sandboxing technology that drastically limits the attack surface of a host. In the new infrastructure, each container running with gVisor is given its own independent application “kernel,” instead of directly sharing the kernel with its host. Then, to address the speed, we keep a cluster of virtual machines warm and ready to execute builds so that when a new Pages deployment is triggered, it takes just a few seconds for a new gVisor container to start up and begin executing meaningful work in a secure sandbox with near native performance.

Stream your build logs

After we solidified a fast and secure build, we wanted to enhance the user facing build experience. Because a build may not be successful every time, providing you with the tools you need to debug and access that information as fast as possible is crucial. While we have a long list of future improvements for a better logging experience, today we are starting by enabling you to stream your build logs.

Prior to today, with the aforementioned build steps required to complete a Pages build, you were required to wait until the build completed in order to view the resulting build logs. Easily addressable issues like incorrectly inputting the build command or specifying an environment variable would have required waiting for the entire build to finish before understanding the problem.

Today, we’re giving you the power to understand your build issues as soon as they happen. Spend less time waiting for your logs and start debugging the events of your builds within a second or less after they happen!

Control Branch Builds

Finally, the build experience does not just include the events during execution but everything leading up to the trigger of a build. For our final trick, we’re enabling our users to have full control of the precise branches they’d like to include and exclude for automatic deployments.

Before today, Pages submitted builds for every commit in both production and preview environments, which led to queued builds and even more waiting if you exceeded your concurrent build limit. We wanted to provide even more flexibility to control your CI workflow. Now you can configure your build settings to specify branches to build, as well as skip ad hoc commits.

Specify branches to build

While “unlimited staging” is one of Pages’ greatest advantages, depending on your setup, sometimes automatic deployments to the preview environment can cause extra noise.

In the Pages build configuration setting, you can specify automatic deployments to be turned off for the production environment, the preview environment, or specific preview branches. In a more extreme case, you can even pause all deployments so that any commit sent to your git source will not trigger a new Pages build.

Additionally, in your project’s settings, you can now configure the specific Preview branches you would like to include and exclude for automatic deployments. To make this configuration an even more powerful tool, you can use wildcard syntax to set rules for existing branches as well as any newly created preview branches.
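
For example (an illustrative pattern): you might include production and releases/* for automatic deployments while excluding branches matching dependabot/*.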

Read more in our Pages docs on how to get started with configuring automatic deployments with Wildcard Syntax.

Using CI Skip

Sometimes commits need to be skipped on an ad hoc basis. A small update to copy or a set of changes within a small timespan don’t always require an entire site rebuild. That’s why we also implemented a CI Skip command for your commit message – for example, appending [CI Skip] to a commit message – signaling to Pages that the update should be skipped by our builder.

With both CI Skip and configured build rules, you can keep track of your site changes in Pages’ deployment history.

Where we’re going

We’re extremely excited to bring these updates to you today, but of course, this is only the beginning of improving our build experience. Over the next few quarters, we will be bringing more to the build experience to create a seamless developer journey from site inception to launch.

Incremental builds and caching

From beta testing, we noticed that our new infrastructure can be less impactful on larger projects that use heavier frameworks such as Gatsby. We believe that every user on our developer platform, regardless of their use case, has the right to fast builds. Up next, we will be implementing incremental builds to help Pages identify only the deltas between commits and rebuild only files that were directly updated. We will also be implementing other caching strategies such as caching external dependencies to save time on subsequent builds.

Build image updates

Because we’ve been using the same build image we launched Pages with back in 2021, we are going to make some major updates. Languages release new versions all the time, and we want to make sure we update and maintain the latest versions. An updated build image will mean faster builds, more security and of course supporting all the latest versions of languages and tools we provide. With new build image versions being released, we will allow users to opt in to the updated builds in order to maintain compatibility with all existing projects.

Productive error messaging

Lastly, while streaming build logs helps you to identify those easily addressable issues, the infamous “Internal error occurred” is sometimes a little more cryptic to decipher depending on the failure. While we recently published a “Debugging Cloudflare Pages” guide, in the future we’d like to provide the error feedback in a more productive manner, so you can easily identify the issue.

Have feedback?

As always, your feedback defines our roadmap. With all the updates we’ve made to our build experience, it’s important we hear from you! You can get in touch with our team directly through Discord. Navigate to our Pages specific section and check out our various channels specific to different parts of the product!

Join us at Cloudflare Connect!

Interested in learning more about building with Cloudflare Pages? If you’re based in the New York City area, join us on Thursday, May 12th for a series of workshops on how to build a full stack application on Pages! Follow along with a fully hands-on lab, featuring Pages in conjunction with other products like Workers, Images and Cloudflare Gateway, and hear directly from our product managers. Register now!

Introducing Direct Uploads for Cloudflare Pages

Post Syndicated from Nevi Shah original https://blog.cloudflare.com/cloudflare-pages-direct-uploads/

With Pages, we are constantly looking for ways to improve the developer experience. One of the areas we are keen to focus on is removing any barriers to entry for our users regardless of their use case or existing set up. Pages is an all-in-one solution with an automated Continuous Integration (CI) pipeline to help you build and deploy your site with one commit to your projects’ repositories hosted on GitHub or GitLab.

However, we realize that this excluded repositories that used a source control provider that Pages didn’t yet support, as well as projects with more complex build requirements. Even though Pages continues to build first-class integrations – for example, we added GitLab support in November 2021 – there are numerous providers to choose from, some of which use `git` alternatives like SVN or Mercurial for their version control systems. It’s also common for larger companies to self-host their project repositories, guarded by a mix of custom authentication and/or proxy protocols.

Pages needed a solution that worked regardless of where a repository is hosted and however complex a project’s build is. Today, we’re thrilled to announce that Pages now supports direct uploads to give you more power to build and iterate how you want and with the tools you want.

What are direct uploads?

Direct uploads enable you to push your build artifacts directly to Pages, side-stepping the automatic, done-for-you CI pipeline that Pages provides for GitHub and GitLab repositories. This means that connecting a Pages project to a git repository is optional. In fact, using git or any version control system is optional!

Today, you can bring your assets directly to Pages by dragging and dropping them into our dashboard or pushing them through the Wrangler CLI. You also have the power to use your own CI tool, whether that’s something like GitHub Actions or CircleCI, to handle your build. Taking your output directory, you can bring these files directly to Pages to create a new project and all subsequent deployments after that. Every deployment will be distributed right to the Cloudflare network within seconds.

How does it work?

After using your preferred CI tooling outside of Pages, there are two ways to bring your pre-built assets and create a project with the direct uploads feature:

  1. Use the Wrangler CLI
  2. Drag and drop them into the Pages interface

Wrangler CLI

With an estimated 43k weekly Wrangler downloads, you too can use it to iterate quickly on your Pages projects right through the command line. With Wrangler (now with brand-new updates!), you can both create your project and new deployments with a single command.

After Wrangler is installed and authenticated with your Cloudflare account, you can execute the following command to get your site up and running:

npx wrangler pages publish <directory>

Integration with Wrangler provides not only a great way to publish changes in a fast and consecutive manner, but also enables a seamless workflow between CI tooling for building right to Pages for deployment. Check out our tutorials on using CircleCI and GitHub Actions with Pages!

Drag and drop

However, we realize that sometimes you just want to get your site deployed instantaneously without any additional set up or installations. In fact, getting started with Pages shouldn’t have to require extensive configuration. The drag and drop feature allows you to take your pre-built assets and virtually drag them onto the Pages UI. With either a zip file or a single folder of assets, you can watch your project deploy in just a few short seconds straight to the 270+ cities in our network.

What can you build?

With this ease of deploying projects, the possibilities of what you can build are still endless. You can enjoy the fruits of Pages in a project created with direct uploads including but not limited to unique preview URLs, integration with Workers, Access and Web Analytics, and custom redirects/headers.

In thinking about your developer setup, direct uploads provide the flexibility to build the way you want such as:

  • Designing and building your own CI workflow
  • Utilizing the CI tooling of your choice
  • Accommodating complex monorepo structures
  • Implementing custom CI logic for your builds

Migrating from Workers Sites

We’ll have to admit, the idea of publishing assets directly to our network came from a sister product to Pages called Workers Sites – and the resemblance is striking! However, Pages delivers many enhancements to the developer experience in areas that were pain points on Workers Sites.

With Pages direct uploads, you can enjoy the freedom and flexibility of customizing your workflow that Workers Sites provides while including an interface to track and share changes and manage production/preview environments. Check out our tutorial on how to migrate over from Workers Sites.

This release immediately unlocks a broad range of use cases, allowing the most basic of projects to the most advanced to start deploying their websites to Pages today. Refer to our developer documentation for more technical details. As always, head over to the Cloudflare Developers Discord server and let us know what you think in the #direct-uploads-beta channel.

Join us at Cloudflare Connect!

Calling all New York City developers! If you’re interested in learning more about Cloudflare Pages, join us for a series of workshops on how to build a full stack application on Thursday, May 12th. Follow along with demonstrations of using Pages alongside other products like Workers, Images and Cloudflare Gateway, and hear directly from our product managers. Register now!

Come join us at Cloudflare Connect New York this Thursday!

Post Syndicated from Jen Taylor original https://blog.cloudflare.com/cloudflare-connect-nyc-2022/

We take a break from Platform Week to share big news – we’re going to New York this week for our Cloudflare Connect customer event.

We’re packing our bags, getting on planes and heading to New York to do our first live customer event since 2019, and we could not be more excited. It is time with you – the people building, delivering and securing the apps and networks we know and trust – that is the inspiration for the innovation we deliver. We can’t wait to spend time with you.

Our co-founder and CEO Matthew Prince will kick off the day with his view from the top.  We’ll then be breaking out into focused conversations to dig in on our latest product news and roadmaps.

Excited about what we’re talking about for Platform Week?  Come chat with the Workers team in person and hear more about the roadmap.

Intrigued by the latest DDoS stats we posted and want to learn more?  Meet with the team analyzing the attacks and learn about where we go from here.

Not sure where to start your Zero Trust journey?  We’ll talk you through what we’re seeing and introduce you to other customers who are in the process of rolling out Zero Trust solutions for their teams so you can learn from each other.

Don’t miss it!  Register now – use the code BetterInternet to join us in-person for free.  Not in New York?  No worries – we’re coming to London, Sydney and San Francisco later this year.

A Community Group for Web-interoperable JavaScript runtimes

Post Syndicated from James M Snell original https://blog.cloudflare.com/introducing-the-wintercg/

Today, Cloudflare – in partnership with Vercel, Shopify, and individual core contributors to both Node.js and Deno – is announcing the establishment of a new Community Group focused on the interoperable implementation of standardized web APIs in non-web browser, JavaScript-based development environments.

The W3C and the Web Hypertext Application Technology Working Group (or WHATWG) have long pioneered the efforts to develop standardized APIs and features for the web as a development environment. APIs such as fetch(), ReadableStream and WritableStream, URL, URLPattern, TextEncoder, and more have become ubiquitous and valuable components of modern web development. However, the charters of these existing groups have always been explicitly limited to considering only the specific needs of web browsers, resulting in the development of standards that are not readily optimized for any environment that does not look exactly like a web browser. A good example of this effect is that some non-browser implementations of the Streams standard are an order of magnitude slower than the equivalent Node.js streams and Deno reader implementations due largely to how the API is specified in the standard.

Serverless environments such as Cloudflare Workers, or runtimes like Node.js and Deno, have a broad range of requirements, issues, and concerns that are simply not relevant to web browsers, and vice versa. This disconnect, and the lack of clear consideration of these differences while the various specifications have been developed, has led to a situation where the non-browser runtimes have implemented their own bespoke, ad-hoc solutions for functionality that is actually common across the environments.

This new effort is changing that by providing a venue to discuss and advocate for the common requirements of all web environments, deployed anywhere throughout the stack.

What’s in it for developers?

Developers want their code to be portable. Once they write it, if they choose to move to a different environment (from Node.js to Deno, for instance) they don’t want to have to completely rewrite it just to make it keep doing the exact same thing it already was.

One of the more common questions we get from Cloudflare users is how they can make use of some arbitrary module published to npm that makes use of some set of Node.js-specific or Deno-specific APIs. The answer usually involves pulling in some arbitrary combination of polyfill implementations. The situation is similar with the Deno project, which has opted to integrate a polyfill of the full Node.js core API directly into their standard library. The more these environments implement the same common standards, the more the developer ecosystem can depend on the code they write just working, regardless of where it is being run.

Cloudflare Workers, Node.js, Deno, and web browsers are all very different from each other, but they share a good number of common functions. For instance, they all provide APIs for generating cryptographic hashes; they all deal in some way with streaming data; they all provide the ability to send an HTTP request somewhere. Where this overlap exists, and where the requirements and functionality are the same, the environments should all implement the same standardized mechanisms.

The Web-interoperable Runtimes Community Group

The new Web-interoperable Runtimes Community Group (or “WinterCG”) operates under the established processes of the W3C.

The naming of this group is something that took us a while to settle on because it is critical to understanding the goals the group is trying to achieve (and what it is not). The key element is the phrase “web-interoperable”.

We use “web” in exactly the same sense that the W3C and WHATWG communities use the term – precisely: web browsers. The term “web-interoperable”, then, means implementing features in a manner that is either identical or at least as consistent as possible with the way those features are implemented in web browsers. For instance, the way that the new URL() constructor works in browsers is exactly how the new URL() constructor should work in Node.js, in Deno, and in Cloudflare Workers.
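
For example, this snippet should behave identically wherever it runs, with no runtime-specific imports:

// Standardized WHATWG URL parsing, identical across runtimes.
const url = new URL('https://example.com/shop?item=pizza');
console.log(url.hostname);                 // "example.com"
console.log(url.pathname);                 // "/shop"
console.log(url.searchParams.get('item')); // "pizza"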

It is important, however, to acknowledge the fact that Node.js, Deno, and Cloudflare Workers are explicitly not web browsers. While this point should be obvious, it is important to call out because the differences between the various JavaScript environments can greatly impact the design decisions of standardized APIs. Node.js and Deno, for instance, each provide full access to the local file system. Cloudflare Workers, in contrast, has no local file system; and web browsers necessarily restrict applications from manipulating the local file system. Likewise, while web browsers inherently include a concept of a website’s “origin” and implement mechanisms such as CORS to protect users against a variety of security threats, there is no equivalent concept of “origins” on the server-side where Node.js, Deno, and Cloudflare Workers operate.

Up to now, the W3C and WHATWG have concerned themselves strictly with the needs of web browsers. The new Web-interoperable Runtimes Community Group is explicitly addressing and advocating for the needs of everyone else.

It is not intended that WinterCG will go off and publish its own set of independent standard APIs. Ideas for new specifications that emerge from WinterCG will first be submitted for consideration by existing work streams in the W3C and WHATWG with the goal of gaining the broadest possible consensus. However, should it become clear that web browsers have no particular need for, or interest in, a feature that the other environments (such as Cloudflare Workers) have need for, WinterCG will be empowered to move forward with a specification of its own – with the constraint that nothing will be introduced that intentionally conflicts with or is incompatible with the established web standards.

WinterCG will be open for anyone to participate; it will operate under the established W3C processes and policies; all work will be openly accessible via the “wintercg” GitHub organization; and everything it does will be centered on the goal of maximizing interoperability.

Work in Progress

WinterCG has already started work on a number of important work items.

The Minimum Common Web API

From the introduction in the current draft of the specification:

“The Minimum Common Web Platform API is a curated subset of standardized web platform APIs intended to define a minimum set of capabilities common to Browser and Non-Browser JavaScript-based runtime environments.”

Or put another way: It is a minimal set of existing web APIs that will be implemented consistently and correctly in Node.js, Deno, and Cloudflare Workers. Most of the APIs, with some exceptions and nuances, already exist in these environments, so the bulk of the work remaining is to ensure that those implementations are conformant to their relative specifications and portable across environments.

The table below lists all the APIs currently included in this subset (along with an indication of whether the API is currently or likely soon to be supported by Node.js, Deno, and Cloudflare Workers):

Node.js Deno Cloudflare Workers
AbortController ✔️ ✔️ ✔️
AbortSignal ✔️ ✔️ ✔️
ByteLengthQueueingStrategy ✔️ ✔️ ✔️
CompressionStream ✔️ ✔️ ✔️
CountQueueingStrategy ✔️ ✔️ ✔️
Crypto ✔️ ✔️ ✔️
CryptoKey ✔️ ✔️ ✔️
DecompressionStream ✔️ ✔️ ✔️
DOMException ✔️ ✔️ ✔️
Event ✔️ ✔️ ✔️
EventTarget ✔️ ✔️ ✔️
ReadableByteStreamController ✔️ ✔️ ✔️
ReadableStream ✔️ ✔️ ✔️
ReadableStreamBYOBReader ✔️ ✔️ ✔️
ReadableStreamBYOBRequest ✔️ ✔️ ✔️
ReadableStreamDefaultController ✔️ ✔️ ✔️
ReadableStreamDefaultReader ✔️ ✔️ ✔️
SubtleCrypto ✔️ ✔️ ✔️
TextDecoder ✔️ ✔️ ✔️
TextDecoderStream ✔️ ✔️ (soon)
TextEncoder ✔️ ✔️ ✔️
TextEncoderStream ✔️ ✔️ (soon)
TransformStream ✔️ ✔️ ✔️
TransformStreamDefaultController ✔️ ✔️ (soon)
URL ✔️ ✔️ ✔️
URLPattern ? ✔️ ✔️
URLSearchParams ✔️ ✔️ ✔️
WritableStream ✔️ ✔️ ✔️
WritableStreamDefaultController ✔️ ✔️ ✔️
globalThis.self ? ✔️ (soon)
globalThis.atob() ✔️ ✔️ ✔️
globalThis.btoa() ✔️ ✔️ ✔️
globalThis.console ✔️ ✔️ ✔️
globalThis.crypto ✔️ ✔️ ✔️
globalThis.navigator.userAgent ? ✔️ ✔️
globalThis.queueMicrotask() ✔️ ✔️ ✔️
globalThis.setTimeout() / globalThis.clearTimeout() ✔️ ✔️ ✔️
globalThis.setInterval() / globalThis.clearInterval() ✔️ ✔️ ✔️
globalThis.structuredClone() ✔️ ✔️ ✔️

Whenever one of the environments diverges from the standardized definition of an API (such as the Node.js implementation of setTimeout() and setInterval()), clear documentation describing the differences will be made available. Such differences should only exist for backwards compatibility with existing code.
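
One well-known example of such a divergence: in browsers (and in Workers), setTimeout() returns a numeric timer ID, while Node.js returns a Timeout object with extra methods like unref(). A small sketch:

const timer = setTimeout(() => console.log("tick"), 1000);
// Browsers and Workers: typeof timer === "number"
// Node.js: timer is a Timeout object (it also coerces to a number)
clearTimeout(timer); // works the same everywhere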

Web Cryptography Streams

The Web Cryptography API provides a minimal (and very limited) API for common cryptographic operations. One of its key limitations is that – unlike Node.js' built-in crypto module – it has no support for streaming inputs and outputs to symmetric cryptographic algorithms. All Web Cryptography features operate on chunks of data held in memory all at once. This strictly limits the performance and scalability of cryptographic operations. Using these APIs in any environment that is not a web browser, and trying to make them perform well, quickly becomes painful.
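
To make the limitation concrete, here is a sketch contrasting the two models (the stream source in the second half is hypothetical):

// Web Cryptography today: the whole input must sit in memory at once.
const data = new TextEncoder().encode("hello world");
const digest = await crypto.subtle.digest("SHA-256", data);

// Node.js' crypto module, by contrast, can consume input chunk by chunk:
// const { createHash } = await import("node:crypto");
// const hash = createHash("sha256");
// for await (const chunk of someReadableStream) hash.update(chunk);
// console.log(hash.digest("hex"));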

To address that issue, WinterCG has started drafting a new specification for Web Crypto Streams that will be submitted to the W3C for consideration as part of a larger effort currently being bootstrapped by the W3C to update the Web Cryptography specification. The goal is to bring streaming crypto operations to the whole of the web, including web browsers, in a way that conforms with existing standards.

A subset of fetch() for servers

With the recent release of version 18.0.0, Node.js has joined the collection of JavaScript environments that provide an implementation of the WHATWG standardized fetch() API. There are, however, a number of important differences between the way Node.js, Deno, and Cloudflare Workers implement fetch() versus the way it is implemented in web browsers.

For one, server environments do not have a concept of “origin” like a web browser does. Features such as CORS intended to protect against cross-site scripting vulnerabilities are simply irrelevant on the server. Likewise, where web browsers are generally used by one individual user at a time and have a concept of a globally-scoped cookie store, server and serverless applications can be used by millions of users simultaneously and a globally-scoped cookie store that potentially contains session and authentication details would be both impractical and dangerous.
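
For a sense of what server-side fetch() looks like in practice, here is a minimal Workers-style handler (a sketch only – the upstream URL is a placeholder). Note that no CORS preflight and no ambient cookie jar come into play:

export default {
  async fetch(request) {
    // A plain server-to-server request: no origin checks, no shared cookies.
    const upstream = await fetch("https://example.com/api/data");
    return new Response(await upstream.text(), {
      headers: { "content-type": "text/plain" },
    });
  },
};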

Because of the acute differences between the environments, it is often difficult to reason about, and gain consensus on, proposed changes to the fetch standard. A proposed new API, for instance, might be fantastically relevant to fetch users on a server but completely useless to fetch users in a web browser. A set of security concerns that is relevant to the browser might have no impact whatsoever on the server.

To address this issue, and to make it easier for non-web browser environments to implement fetch in a consistent way, WinterCG is working on documenting a subset of the fetch standard that deals specifically with those different requirements and constraints.

Critically, this subset will be fully compatible with the fetch standard; and is being cooperatively developed by the same folks who have worked on fetch in Node.js, Deno, and Cloudflare Workers. It is not intended that this will become a competing definition of the fetch standard, but rather a set of documented guidelines on how to implement fetch correctly in these other environments.

We’re just getting started

The Web-interoperable Runtimes Community Group is just getting started, and we have a number of ambitious goals. Participation is open to everyone, and all work will be done in the open via GitHub at https://github.com/wintercg. We are actively seeking collaboration with the W3C, the WHATWG, and the JavaScript community at large to ensure that web features are available, work consistently, and meet the requirements of all web developers working anywhere across the stack.

For more information on the WinterCG, refer to https://wintercg.org. For details on how to participate, refer to https://github.com/wintercg/admin.

Cloudflare and StackBlitz partner to deliver an instant and secure developer experience

Post Syndicated from Adam Janiš original https://blog.cloudflare.com/cloudflare-stackblitz-partnership/

We are starting our Platform Week focused on the most important aspect of a developer platform — developers. At the core of every announcement this week is developer experience. In other words, it doesn’t matter how groundbreaking the technology is if at the end of the day we’re not making your job as a developer easier.

Earlier today, we announced the general availability of a new Wrangler version, making it easier than ever to get started and develop with Workers. We’re also excited to announce that we’re partnering with StackBlitz. Together, we will bring the Wrangler experience closer to you – directly to your browser, with no dependencies required!

StackBlitz is a web-based code editor that provides a fresh, fast development environment on each page load. StackBlitz’s development environments are powered by WebContainers, the first WebAssembly-based operating system, which boots secure development environments entirely within your browser tab.

Introducing new Wrangler, running in your browser

One of the Wrangler improvements we announced today is the option to easily run Wrangler in any Node.js environment, including your browser – which is now powered by WebContainers!

StackBlitz’s WebContainers are optimized for starting any project within seconds, including the installation of all dependencies. Whenever you’re ready to start a fresh development environment, you can refresh the browser tab running StackBlitz’s editor and have everything instantly ready to go.

Don’t just take our word for it, you can test this out yourself by opening up a sample project on https://workers.new/typescript.
Note: currently, only Chromium based browsers are supported.

You can think of WebContainers as an in-browser operating system: they include features like a file system, support for multi-process and multi-threaded applications, and a TCP network stack virtualized with Service Workers.

Interested in learning more about WebContainers? Check out the introduction blog post or WebContainer working group GitHub repository.

Powering a better developer experience and documentation

We’re excited about all the possibilities that instant development environments running in the browser open up. For example, they enable us to embed or link full code projects directly from our documentation examples and tutorials, without waiting for a remote server to spin up a container with your environment.

Try out the following templates for a sneak peek at the developer experience we are working together to enable – running a new Workers application locally has never been easier!

https://workers.new/router
https://workers.new/durable-objects
https://workers.new/typescript

What’s next

StackBlitz supports running Wrangler in a local mode today, and we are working together to enable features that require authentication to bring the full developer lifecycle inside your browser – including development on the edge, publishing, and debugging or tailing logs of your published Workers.

Share what you have built with us and stay tuned for more updates! Make sure to follow us on Twitter or join our Discord Developers Community server.

10 things I love about Wrangler v2.0

Post Syndicated from Sunil Pai original https://blog.cloudflare.com/10-things-i-love-about-wrangler/

Last November, we announced the beta release of a full rewrite of Wrangler, our CLI for building Cloudflare Workers. Since then, we’ve been working around the clock to make sure it’s feature complete, bug-free, and easy to use. We are proud to announce that Wrangler is generally available today, and we can’t wait to see what people build with it!

Rewrites can be scary. Our goal for this version of Wrangler was backward compatibility with the original version, while significantly improving the developer experience. I’d like to take this opportunity to present 10 reasons why you should upgrade to the new Wrangler!

1. It’s simpler to install:

A simpler way to get started.

Previously, folks would have to install @cloudflare/wrangler globally on a system. This made it hard to use different versions of Wrangler across projects, and hard to install on some CI systems because of the lack of access to a user’s root folder. Sometimes, folks would forget to add the @cloudflare scope when installing, and would be left confused when a completely unrelated package was installed and didn’t work as expected.

Let’s fix that. We’ve simplified this by now publishing to the wrangler package, so you can run npm install wrangler and it works as expected. You can also install it locally to a project’s package.json, like any other regular npm package. It also works across a much broader range of CPU architectures and operating systems, so you can use it on more machines.

This makes it a lot more convenient when starting. But why stop there?

2. Zero config startup:

Get started with zero configuration

It’s now much simpler to get started with a new project. Previously, you would have to create a wrangler.toml configuration file and fill it in with details about your Cloudflare account and how the project was structured, set up a custom build process, and so on. We heard feedback from many of you who would get frustrated during this step, and how it would take many minutes before you could get to developing a simple Worker.

Let’s fix that. You no longer need to create a configuration file when starting, and none of the fields are mandatory. Wrangler infers details about your account and project as you start developing, and you can add configuration incrementally when you need to.

In fact, you don’t even need to install Wrangler to start! You can create a Worker (say, as index.js) and use npx (a utility that comes with Node.js) to fetch Wrangler from the npm registry and start developing immediately!
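
For instance, this is all it takes (a minimal sketch):

// index.js – a complete Worker, no configuration file required.
export default {
  async fetch() {
    return new Response("Hello from Workers!");
  },
};

// Then start a dev session without installing anything first:
//   npx wrangler dev index.js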

This is great for extremely simple Workers, but why stop there?

3. wrangler init my-worker -y, a one-liner to set up a full project:

We noticed users would struggle to set up a project with Wrangler, even after they’d installed Wrangler and configured wrangler.toml. Most users want to set up a package.json, commonly use TypeScript to write code, and set up git to track changes in the project. So, we expanded the wrangler init <project name> command to set up a production-grade project. You can optionally choose to use TypeScript, install the official type definitions for Workers, and use git to track changes.

My favorite trick here is to pass -y to accept all questions without asking. Try running npx wrangler init my-worker -y in your terminal today!

One line to set up a full Workers project

4. --local mode:

Wrangler typically runs a development server on our global network, setting up a local proxy when developing, so you can develop against a “real” environment in the cloud. This is great for making sure the code you develop will behave the same in development, and after you deploy it to production. The trade-off here is it’s harder to develop code when you have a bad Internet connection, or if you’re running tests on a CI machine. It’s also marginally slower to iterate while coding. Users have asked us for a long time to be able to ‘run’ their code locally on their machines, so that they can iterate quickly and run tests in environments like CI.

Wrangler now lets you develop on your own machine by simply calling wrangler dev --local, with no additional configuration. This is powered by Miniflare, a fully featured simulator of the Cloudflare Workers runtime. You can even toggle between ‘edge’ and ‘local’ modes by tapping the ‘L’ hotkey while developing – whichever you prefer!

Local mode, powered by Miniflare.

5. Tail any Worker, any time:

Tail your logs anywhere, anytime. 

It’s useful to be able to “tail” a Worker’s output to a terminal and see what’s going on in real time. While you can already view these logs in the Workers dashboard, some people are more comfortable seeing the logs in their terminal, and then slicing and dicing them to debug any issues that may be occurring. Previously, you would have to check out a Worker’s repository locally, install dependencies, and then call wrangler tail in the project folder. This was clearly cumbersome, and relied on developer expertise to see something as simple as a Worker’s logs.

Now you can simply call npx wrangler tail <worker name> in your terminal, without any configuration or setup, and immediately see the logs that you expect. We use this ourselves to quickly inspect our production Workers and see what’s going on inside them!

6. Better warnings and errors, everywhere:

One of the worst feelings a developer can face is being presented with an error when writing code, and not knowing how to fix it and proceed. We heard feedback from many of you who were frustrated with the lack of error messages, and how you would spend hours trying to figure out what went wrong. We’ve now added new error and warning messages, so you can easily spot the problems in your code. When possible, we also include steps you can follow to fix your Worker, including things that you can simply copy and paste! This makes Wrangler much more friendly to use, and we promise the experience will only get better.

7. On-demand developer tools for debugging:

We introduced initial support for debugging Workers in Wrangler in September, which enables debugging a Worker directly on our global network. However, getting started with debugging was still a bit cumbersome: you would have to start Wrangler with an --inspect flag, open a special page in your browser (chrome://inspect), configure it to detect Wrangler running on a special port, and then launch the debugger. This also meant you might lose any debugging messages logged before you opened the Chrome developer tools.

We fixed this. Now you don’t need to pass any special flags when starting up. You can simply hit the D hotkey when developing and a developer tools instance pops up in your browser. And by buffering the messages before you even start up the devtools, you don’t lose any logs or errors! You can also use VS Code developer tools to directly hook into your Worker’s debugging session!

8. A modern module system:

Modern JavaScript isn’t simply about the syntax that the language supports, but also writing code as modules, and leveraging the extremely broad ecosystem of community libraries and frameworks. Previously, Wrangler required that you set up webpack or a custom build with bundlers (like rollup, vite, or esbuild, to name a few) to consume libraries and modules. This introduces a lot of friction, especially when starting a new project and trying out new ideas.

Now, support for npm modules comes out of the box, with no extra configuration required! You can install any package from the npm registry, organize your own code with modules, and it all works as expected. We’re also introducing an experimental Node.js compatibility mode for using Node.js modules that wouldn’t previously work without setting up your own polyfills! This means you can use popular frameworks and libraries you’re already familiar with while focusing on delivering value to your users.
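
As a quick sketch (nanoid here is just an example dependency, installed with npm install nanoid):

import { nanoid } from "nanoid";

export default {
  async fetch() {
    // The bundler resolves the npm package automatically – no config needed.
    return new Response(`Request ID: ${nanoid()}`);
  },
};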

9. Closes many outstanding issues:

A rewrite should be judged not just by the new features that are implemented, but by how many existing issues are resolved. We went through hundreds of outstanding issues and bugs with Wrangler, and are happy to say that we solved almost all of them! Across the board, every command and feature got a facelift, bug fixes, and test coverage to make sure it doesn’t break in the future. Developers using Cloudflare Workers will be glad to hear that simply upgrading Wrangler will immediately fix previous concerns and problems. Which leads us to my absolute favorite feature…

10. A commitment to improve:

The effort that went into building Wrangler v2.0, visualised.

Wrangler has always been special software for us. It represents the primary interface that developers use to interact and use Cloudflare Workers, and we have major plans for the future. We have invested time, effort and resources to make sure Wrangler is the best tool for developers to use, and we’re excited to see what the future holds. This is a commitment to our users and community that we will only keep improving on this foundation, and folks can expect their feedback and concerns to be heard loud and clear.

Open source Managed Components for Cloudflare Zaraz

Post Syndicated from Yo'av Moshe original https://blog.cloudflare.com/zaraz-open-source-managed-components-and-webcm/

In early 2020, we sat down and tried to think of a way to load third-party tools on the Internet without slowing down websites, without making them less secure, and without sacrificing users’ privacy. In the evening, after scanning through thousands of websites, our answer was “well, sort of”. It seemed possible: many types of third-party tools are merely collecting information in the browser and then sending it to a remote server. We could theoretically figure out what it is that they’re collecting, and then instead just collect it once efficiently, and send it server-side to their servers, mimicking their data schema. If we do this, we can get rid of loading their JavaScript code inside websites completely. This means no more risk of malicious scripts, no more performance losses, and fewer privacy concerns.

But the answer wasn’t a definite “YES!” because we realized this is going to be very complicated. We looked into the network requests of major third-party scripts, and often it seemed cryptic. We set ourselves up for a lot of work, looking at the network requests made by tools and trying to figure out what they are doing – What is this parameter? When is this network request sent? How is this value hashed? How can we achieve the same result more securely, reliably and efficiently? Our team faced these questions on a daily basis.

When we joined Cloudflare, the scale of everything changed. Suddenly we were on thousands of websites, serving more than 10,000 requests per second. Users are writing to us every single day over our Discord channel, the community forum, and sometimes even directly on Twitter. More often than not, their messages would be along the lines of “Hi! Can you please add support for X?” Cloudflare Zaraz launched with around 30 tools in its library, but this market is vast and new tools are popping up all the time.

Changing our trust model

In my previous blog post on how Zaraz uses Cloudflare Workers, I included some examples of how tool integrations are written in Zaraz today. Usually, a “tool” in Zaraz would be a function that prepares a payload and sends it. This function could return one thing – clientJS, JavaScript code that the browser would later execute. We’ve done our best to ensure tools don’t use clientJS unless it’s really necessary, and in reality most Zaraz-built tool integrations don’t use clientJS at all.

This worked great, as long as we were the ones coding all tool integrations. Customers trusted us that we’d write code that is performant and safe, and they trusted the results they saw when trying Zaraz. Upon joining Cloudflare, many third-party tool vendors contacted us and asked to write a Zaraz integration. We quickly realized that our system wasn’t enforcing speed and safety – vendors could literally just dump their old browser-side JavaScript into our clientJS variable, and say “We have a Cloudflare Zaraz integration!”, and that wasn’t our vision at all.

We want third-party tool vendors to be able to write their own performant, safe server-side integrations. We want to make it possible for them to reimagine their tools in a better way. We also want website owners to have transparency into what is happening on their website, to be able to manage and control it, and to trust that if a tool is running through Zaraz, it must be a good tool — not because of who wrote it, but because of the technology it is constructed within. We realized that to achieve that we needed a new format for defining third-party tools.

Introducing Managed Components

We started rethinking how third-party code should be written. Today, it’s a black box – you usually add a script to your site, and you have zero clue what it does and when. You can’t properly read or analyze the minified code. You don’t know if the way it behaves for you is the same way it behaves for everyone else. You don’t know when it might change. If you’re a website owner, you’re completely in the dark.

Tools do many different things. The simple ones just collect information and send it somewhere. Often, they set some cookies. Sometimes, they install event listeners on the page. And widget-based tools can literally manipulate the page DOM, providing new functionality like a social media embed or a chatbot. Our new format needed to support all of this.

Managed Components is how we imagine the future of third-party tools online. It provides vendors with an API that allows them to do much more than a normal script can, including keeping code execution outside the browser. We designed this format together with vendors, for vendors, while having in mind that users’ best interest is everyone’s best interest long-term.

From the get-go, we built Managed Components to use a permission-based system. We want to provide even more transparency than Zaraz does today. As the new API allows tools to set cookies, change the DOM or collect IP addresses, all those abilities require being granted a permission. Installing a third-party tool on your site is similar to installing an app on your phone – you get an explanation of what the tool can and can’t do, and you can allow or disallow features to a granular level. We previously wrote about how you can use Zaraz to not send IP addresses to Google Analytics, and now we’re doubling down in this direction. It’s your website, and it’s your decision to make.

Every Managed Component is a JavaScript module at its core. Unlike today, this JavaScript code isn’t sent to the browser. Instead, it is executed by a Components Manager. This manager implements the APIs that are then used by the component. It dispatches server-side events that originate in the browser, providing the components with access to information while keeping them sandboxed and performant. It handles caching, storage and more – all so that Managed Components can implement their logic without worrying so much about their surroundings.

An example analytics Managed Component can look something like this:

export default function (manager) {
  manager.addEventListener("pageview", ({ context, client }) => {
    // Build a payload from the visitor's user agent and the page URL,
    // then forward it to the vendor's endpoint as a JSON POST request.
    // (The standard fetch() API takes a `body`, not a `data`, option.)
    fetch("https://example.com/collect", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({
        url: context.page.url.href,
        userAgent: client.device.userAgent,
      }),
    });
  });
}

The above component gets notified whenever a page view occurs. It then builds a payload with the visitor’s user agent and the page URL, and sends it as a POST request to the vendor’s server. This is very similar to how things are done today, except it doesn’t require running any code at all in the browser.

But Managed Components aren’t just doing what was previously possible but better, they also provide dramatic new functionality. See for example how we’re exposing server-side endpoints:

export default function (manager) {
  // Proxy requests under /api to the vendor's own API, on the website's domain.
  const api = manager.proxy("/api", "https://api.example.com");
  // Serve static files from the component's "assets" directory.
  const assets = manager.serve("/assets", "assets");
  // Expose a dynamic endpoint that answers with an empty 204 response.
  const ping = manager.route("/ping", (request) => new Response(null, { status: 204 }));
}

These three lines are a complete shift in what’s possible for third-parties. If granted the permissions, they can proxy some content, serve and expose their own endpoints – all under the same domain as the one running the website. If a tool needs to do some processing, it can now off-load that from the browser completely without forcing the browser to communicate with a third-party server.

Exciting new capabilities

Every third-party tool vendor should be able to use the Managed Components API to build a better version of their tool. The API we designed is comprehensive, and the benefits for vendors are huge:

  • Same domain: Managed Components can serve assets from the same domain as the website itself. This allows faster and more secure execution, as the browser needs to trust and communicate with only one server instead of many. This can also reduce costs for vendors, as their bandwidth usage will be lower.
  • Website-wide events system: Managed Components can hook to a pre-existing events system that is used by the website for tracking events. Not only is there no need to provide a browser-side API to your tool, it’s also easier for your users to send information to your tool because they don’t need to learn your methods.
  • Server logic: Managed Components can provide server-side logic on the same domain as the website. This includes proxying a different server, or adding endpoints that generate dynamic responses. The options are endless here, and this, too, can reduce the load on the vendor servers.
  • Server-side rendered widgets and embeds: Did you ever notice how, when you’re loading an article page online, the content jumps when some YouTube or Twitter embed suddenly appears between the paragraphs? Managed Components provide an API for registering widgets and embeds that render server-side. This means that when the page arrives in the browser, it already includes the widget in its code. The browser doesn’t need to communicate with another server to fetch some tweet information or styling. It’s part of the page now, so expect a better CLS score.
  • Reliable cross-platform events: Managed Components can subscribe to client-side events such as clicks, scroll and more, without needing to worry about browser or device support. Not only that – those same events will work outside the browser too – but we’ll get to that later.
  • Pre-Response Actions: Managed Components can execute server-side actions before the network response even arrives in the browser. Those actions can access the response object, reading it or altering it.
  • Integrated Consent Manager support: Managed Components are predictable and scoped. The Component Manager knows what they’ll need and can predict what kind of consent is needed to run them.

The right choice: open source

As we started working with vendors on creating Managed Components for their tools, we heard a recurring concern: “What Components Managers are there? Will this only be useful for Cloudflare Zaraz customers?” While Cloudflare Zaraz is indeed a Components Manager, and it has a generous free tier plan, we realized we needed to think much bigger. We want to make Managed Components available to everyone on the Internet, because we want the Internet as a whole to be better.

Today, we’re announcing much more than just a new format.

WebCM is a reference implementation of the Managed Components API. It is a complete Components Manager that we will soon release and maintain. You will be able to use it as an SDK when building your Managed Component, and you will also be able to use it in production to load Managed Components on your website, even if you’re not a Cloudflare customer. WebCM works as a proxy – you place it in front of your website, and it rewrites your pages when necessary and adds a couple of endpoints. This makes WebCM 100% framework-agnostic – it doesn’t matter if your website uses Node.js, Python or Ruby behind the scenes: as long as you’re sending out HTML, WebCM supports it.

That’s not all though! We’re also going to open source a few Managed Components of our own. We converted some of our classic Zaraz integrations to Managed Components, and they will soon be available for you to use and improve. You will be able to take our Google Analytics Managed Component, for example, and use WebCM to run Google Analytics on your website, 100% server-side, without Cloudflare.

Tech-leading vendors are already joining

Revolutionizing third-party tools on the Internet is something we could only do together with third-party vendors. We love third-party tools, and we want them to be even more popular. That’s why we worked very closely with a few leading companies on creating their own Managed Components. These new Managed Components extend Zaraz capabilities far beyond what’s possible now, and will provide a safe and secure onboarding experience for new users of these tools.

Drift helps businesses connect with customers in the moments that matter most. Drift’s integration will let customers use their fully-featured conversation solution while also keeping it completely sandboxed and without making third-party network connections, increasing privacy and security for our users.

Crazy Egg helps customers make their websites better through visual heatmaps, A/B testing, detailed recordings, surveys and more. Website owners, Cloudflare, and Crazy Egg all care deeply about performance, security and privacy. Managed Components have enabled Crazy Egg to do things that simply aren’t possible with third-party JavaScript, which means our mutual customers will get one of the most performant and secure website optimization tools created.

We also already have customers that are eager to implement Managed Components:

Hopin:

“I have been really impressed with Cloudflare’s Zaraz ability to move Drift’s JS library to an Edge Worker while loading it off the DOM. My work is much more effective due to the savings in page load time. It’s a pleasure to work with two companies that actively seek better ways to increase both page speed and load times with large MarTech stacks.”
– Sean Gowing, Front End Engineer, Hopin

If you’re a third-party vendor, and you want to join these tech-leading companies, do reach out to us, and we’d be happy to support you in writing your own Managed Component.

What’s next for Managed Components

We’re working on Managed Components on many fronts now. While we develop and maintain WebCM, work with vendors and integrate Managed Components into Cloudflare Zaraz, we’re already thinking about what’s possible in the future.

We see a future where many open source runtimes exist for Managed Components. Perhaps your infrastructure doesn’t allow you to use WebCM? We want to see Managed Components runtimes created as service workers, HTTP servers, proxies and framework plugins. We’re also working on making Managed Components available on mobile applications. We’re working on allowing unofficial Managed Components installs on Cloudflare Zaraz. We’re fixing a long-standing issue of the WWW, and there’s so much to do.

We will very soon publish the full specs of Managed Components. We will also open source WebCM, the reference implementation server, as well as many components you can use yourself. If this is interesting to you, reach out to us at [email protected], or join us on Discord.

The next chapter for Cloudflare Workers: open source

Post Syndicated from Rita Kozlov original https://blog.cloudflare.com/workers-open-source-announcement/

450,000 developers have used Cloudflare Workers since we launched.

When we announced Cloudflare Workers nearly five years ago, we had no idea if we’d ever be in this position. But a lot of care, hard work — not to mention dogfooding — later, we’ve been absolutely blown away by the use cases and applications built on our developer platform, not to mention the community that’s grown around the product.

My job isn’t just speaking to developers who are already using Cloudflare Workers, however. I spend a lot of time talking to developers who aren’t yet using Workers, too. Despite how cool the tech is — the performance, the ability to just code without worrying about anything else like containers, and the total cost advantages — there are two things that cause developers to hesitate in engaging with us on Workers.

The first: they worry about being locked in. No matter how bullish on the technology you are, if you’re betting the future of a company on a development platform, you don’t want the possibility of being held to ransom. And second: as a developer, you want a local development environment to quickly iterate and test your changes. These concerns might seem unrelated, but they always come up in the form of the same question: can Cloudflare please open source the runtime?

We’re excited to put these concerns to bed. As the first announcement of Platform Week, today Cloudflare is announcing the open sourcing of the Workers runtime under the Apache-2.0 license!

While the code itself will be the best answer to most of the questions you have (we still have some work to do before we’re ready to share it), the questions we did want to answer today were: why are we doing this, and why now?

Development on the web has always been done in the open. If you’re like me, maybe your very first experience writing and looking at code was clicking on “View Source” on a website, and inspecting the HTML to see what pieces you could borrow. So many of the foundational pieces you build on today are open source, from the site, to the browser, to the many frameworks and libraries that are now available to developers. The same is true for us, so much of what we’re able to build is standing on the shoulders of giants like V8.

It was never our intention to introduce opaqueness into the stack, but in reality, when we first announced Workers five years ago, we took a really huge bet.

We wanted to give developers the ability to program on our network, but couldn’t do it at the expense of performance or security. While building on a battle-tested technology like V8 seemed promising from a security standpoint, existing runtimes built on V8 couldn’t give us the security guarantees we needed to run a large multi-tenant environment without the added security layer of a container, which would introduce latency (read: cold starts). Not only were cold starts unacceptable, but in reality, our data centers are much smaller than the centralized monoliths of the traditional cloud. Even if we could run existing applications on the edge without cold starts, the code footprint would be far too large to give every single one of our customers access to compute on every node of our global network.

So, we had to get inventive, and the first place we looked was web standards, or the Service Workers API. While Service Workers were designed to run in the browser, the model of Requests and Responses fit our use case really well. And, we liked the idea of the code you write being portable to other environments (and hoped that new players that came up would support the same model).

And that’s exactly what happened.

This all might seem obvious in retrospect, but at the time, it was a huge bet. We didn’t know at the time whether this was going to work. Whether this approach would take off, whether this would all work at scale, whether developers would adopt this model, despite it diverging from what JavaScript looked like on the server-side at the time…

What we did know was that we had a lot to prove, that we didn’t want to lock anyone in, and that open sourcing something properly is not an effort we wanted to take lightly. We wanted to support our community the same way we felt supported by all the open source projects we ourselves were building upon.

Now, it feels like we’re finally there, and we believe the next step in our runtime’s evolution is to give it to the world, and let developers build with it wherever they want, however they want.

Of course, since we’re talking about open source, we already know what you’re going to ask next: what license are we going to use? We plan to use the Apache-2.0 license — we want developers to be able to build on our platform freely.

What’s next?

Open sourcing the runtime alone is not enough to allow developers to write code free of lock-in concerns, which is why we have another announcement coming up today.

And after that, well, if you’ve been following Cloudflare for a while, you know that there’s a certain time in the year, when we like to give back to the Internet. That might be a pretty good bet for the timing of what’s next! 🙂

Welcome to Platform Week

Post Syndicated from Rita Kozlov original https://blog.cloudflare.com/platform-week-2022/

Principled. It’s one of Cloudflare’s three core values (alongside curiosity and transparency).

It’s a word that we came back to quite a bit in thinking through a question that has been foundational in driving us for this year’s Platform Week: what makes a truly great developer platform?

Of course, when it comes to evaluating developer platforms, the temptation is to focus on the “feeds and speeds” part of the equation. Who is the fastest? Who has the coolest tech? Who lets you do stuff that previously you could not?

Undoubtedly, these are all important questions. But we realized that the fun and shiny things which are often answers to these questions can easily become distractions from the true promise of developing on the Internet — and even traps that the less principled developer platforms can use to lure you into their arms.

The promise being, of course: that you can pull together solutions from a variety of different providers, to build something greater than what you’d be able to do with any one of them alone. That you can build something based on whatever is best when you sit down to create your application. And of course, if something better subsequently comes along, then you can switch to it and take advantage of that, too. When you think about it, it makes sense: all the Internet really is a network based on a common set of standards that allows us all to talk to each other.

And yet, when it comes to the cloud platforms, it feels like we’re further away from that promise than ever before.

How did that happen?

When you start to think about why: well, many of the winners of the cloud have become too big for their (and our) own good. The same players that were underdogs have become incumbents — not just bending the world to their will, but sticking to their assumptions of what the world looked like a decade ago. We went from a highly competitive environment, with an even distribution of power, to something entirely unbalanced. Somewhere along the way, Hotel California became the theme song of the cloud: a friendly face welcomes you in… and then you can’t leave.

This manifests in many ways.

Sometimes it takes the form of egregious egress fees, where you are stuck with using in-ecosystem tooling instead of the best tool for the job. We don’t believe in that. We want an Internet that allows for specialization, where developers can use the best across several offerings, bringing together those services to build something incredible. But that requires giving developers freedom of choice: without hidden pricing considerations pushing you to stay with large, incumbent vendors. In fact, in many respects, freedom of choice is the promise of the Internet for developers.

We want to get back to that.

But it’s not just pricing. Other times, lock-in happens through the code or APIs needed to build with a service. Developers tie their applications to the services that power them, and eventually, without you even realizing it, it becomes incredibly cumbersome to switch off. We’ve watched the Internet become more proprietary, where vendors offer products as a service without the ability to run them anywhere else. Of course, that’s where standards come in, defining the same language and behavior across vendors.

Developers win when we open up the APIs we support and languages we speak, and rally several competing options around a common set. Continuously winning a developer’s business shouldn’t be because you’ve made someone dependent on you, and they can’t get out — it should be because what you’re offering is better than the alternatives.

When that happens, developers win.

This Platform Week, we don’t want to deliver on just new and shiny things (though there will be a few of those, too!). We want to deliver on principles. On letting the best solution win. On breaking developers out of lock-in: whether because of code, or because of economics.

To get this right, we must start at the very beginning — the foundation. Everything we do is built on the foundation of the open web and open standards. That’s not something we take lightly, and certainly not something we take for granted. We decided the right way to kick this week off would be by giving back, and doing what we can to push the web, and those open standards, forward.

So, that’s the foundation. But now you need the right blocks to build on it.

There’s one building block we know you’re excited about: data. And we are too, which is why we’ll be giving you an update on a certain something we’ve had in beta for the last little while. And that’s not all, either: there may even be a sequel.

Data is one thing, but applications need to share that data with services to extract value. This week we’ll make it easier and cheaper to connect the pieces of your stack together, enabling the sending of information where you need it, when you need it.

As we all know, the reason we all work so hard as developers is to enable that most critical of functionality: sharing pictures and videos of cats and babies. There are always better ways of doing it though, and we’re going to dedicate a whole day to new ways to upload, stream and share these gems.

And finally, we want to help the Internet become more programmable. Platforms offer real customizability to the developers they serve: enabling them to do things that the platform creator itself never envisioned. When you work with the application services component of Cloudflare, you can customize bot scores, load balancing rules, routing — all by programming our network. And we’re not just talking about relying on APIs to do things that we, the original developer, initially envisioned. We’re talking about true programmability. Whether you want to build a customized bot within an existing chat application, or a bespoke experience on an eCommerce website builder, we’re excited to move development beyond the era of the API into true programmability, beyond our walls, right across the web.

But back to it: principled.

Yes, we’re going to be delivering this week on all the innovation that you’ve come to expect from us. And you know what we can’t wait to see? All the amazing things you’re able to build — but it won’t just be on us. In fact, it might not be on us at all, and that’s completely ok. What we’re excited about is you building things on all the incredible providers out there, the ones that are equally dedicated to helping build a better Internet for all developers.

We can’t wait to show you what we have in store.

Announcing our Spring Developer Speaker Series

Post Syndicated from Kristian Freeman original https://blog.cloudflare.com/announcing-our-spring-developer-speaker-series/

We love developers.

Late last year, we hosted Full Stack Week, with a focus on new products, features, and partnerships to continue growing Cloudflare’s developer platform. As part of Full Stack Week, we also hosted the Developer Speaker Series, bringing 12 speakers from the web dev community to our 24/7 online TV channel, Cloudflare TV. The talks covered topics across the web development ecosystem, which you can rewatch at any time.

We loved organizing the Developer Speaker Series last year. But as developers know far too well, our ecosystem changes rapidly: what may have been cutting edge back in November 2021 can be old news just a few months later in 2022. That’s what makes conferences and live speaking events so valuable: they serve as an up-to-date reference of best practices and future-facing developments in the industry. With that in mind, we’re excited to announce a new edition of our Developer Speaker Series for 2022!

Check out the eleven expert web dev speakers, developers, and educators that we’ve invited to speak live on Cloudflare TV! Here are the talks you’ll be able to watch, starting tomorrow morning (May 9 at 09:00 PT):

The Bootcamper’s Companion – Caitlyn Greffly
In her recent book, The Bootcamper’s Companion, Caitlyn dives into the specifics of how to build connections in the tech field, understand confusing tech jargon, and make yourself a stand-out candidate when looking for your first job. She’ll talk about some top tips and share a bit about her experience as well as what she has learned from navigating tech as a career changer.

Engaging Ecommerce with the Visual Web – Colby Fayock
Experiences on the web have grown increasingly visual, from displaying product images to interactive NFTs. But not paying attention to how media is delivered can hurt Core Web Vitals, creating a bad UX with slow-loading pages, hurting your store’s conversion, and potentially losing you sales.

How can we effectively leverage media to showcase products creating engaging experiences for our store? We’ll talk about the media’s role in ecomm and how we can take advantage of it while optimizing delivery.

Testing Web Applications with Playwright – Debbie O’Brien
Testing is hard, testing takes time to learn and to write, and time is money. As developers, we want to test. We know we should, but we don’t have time. So how can we get more developers to do testing? We can create better tools.

Let me introduce you to Playwright, a reliable tool for end-to-end cross browser testing for modern web apps, by Microsoft and fully open source. Playwright’s codegen generates tests for you in JavaScript, TypeScript, Dot Net, Java or Python. Now you really have no excuses. It’s time to play your tests wright.

Building serverless APIs: how Fauna and Workers make it easy – Rob Sutter
Building APIs has always been tricky when it comes to setting up architecture. FaunaDB and Workers remove that burden by letting you write code and watch it run everywhere.

Business context is developer productivity – John Feminella
A major factor in developer productivity is whether they have the context to make decisions on their own, or if instead they can only execute someone else’s plan. But how do organizations give engineers the appropriate context to make those decisions when they weren’t there from the beginning?

On the edge of my server – Brian Rinaldi
Edge functions can be potentially game changing. You get the power of serverless functions but running at the CDN level – meaning the response is incredibly fast. With Cloudflare Workers, every worker is an edge function. In this talk, we’ll explore why edge functions can be powerful and explore examples of how to use them to do things a normal serverless function can’t do.

Ten things I love about Wrangler 2 – Sunil Pai
We spent the last six months rewriting wrangler, the CLI for building and deploying Cloudflare Workers. Almost every single feature has been upgraded to be more powerful and user-friendly, while still remaining backward compatible with the original version of wrangler. In this talk, we’ll go through some of the best parts about the rewrite, and how it provides the foundation for all the things we want to build in the future.

L is for Literacy – Henri Helvetica
It’s 2022, and web performance is now abundantly important, with an abundance of available metrics, used by — you guessed it — an abundance of developers, new and experienced. All quips aside, the complexities of the web have led to increased complexities in web performance. Understanding, or literacy, in web performance is as important as the four basic language skills. ‘L is for Literacy’ is a lively look at performance lexicon, backed by enlightening data all will enjoy.

Cloudflare Pages Updates – Greg Brimble
Greg Brimble, a Systems Engineer working on Pages, will showcase some of this week’s announcements live on Cloudflare TV. Tune in to see what is now possible for your Cloudflare Pages projects. We’re excited to show you what the team has been working on!

Migrating to Cloudflare Pages: A look into git control, performance, and scalability – James Ross
James Ross, CTO of Nodecraft, will discuss how moving to Pages brought an improved experience for both users and his team building the future of game servers.

If you want to see the full schedule for the Developer Speaker Series, go to our landing page. It shows each talk, including speaker info and timing, as well as time zones for international viewers. When a talk goes live, tuning in is simple – just visit cloudflare.tv to start watching.

New this year, we’ve also prepared a Discord channel to follow the live conversation with other viewers! If you haven’t joined Cloudflare’s Discord server, get your invite.