Tag Archives: JAMstack

Cloudflare Acquires Linc

Post Syndicated from Aly Cabral original https://blog.cloudflare.com/cloudflare-acquires-linc/


Cloudflare has always been about democratizing the Internet. For us, that means bringing the most powerful tools used by the largest of enterprises to the smallest development shops. Sometimes that looks like putting our global network to work defending against large-scale attacks. Other times it looks like giving Internet users simple and reliable privacy services like 1.1.1.1.  Last week, it looked like Cloudflare Pages — a fast, secure and free way to build and host your JAMstack sites.

We see a huge opportunity with Cloudflare Pages. It goes beyond making it as easy as possible to deploy static sites: we want to extend that same ease of use to building full dynamic applications. By creating a seamless integration between Pages and Cloudflare Workers, we will be able to host the frontend and backend together, at the edge of the Internet and close to your users. The Linc team is joining Cloudflare to help us do just that.

Today, we’re excited to announce the acquisition of Linc, an automation platform to help front-end developers collaborate and build powerful applications. Linc has done amazing work with Frontend Application Bundles (FABs), making dynamic backends more accessible to frontend developers. Their approach offers a straightforward path to building end-to-end applications on Pages, with both frontend logic and powerful backend logic in one bundle. With the addition of Linc, we will accelerate Pages to enable richer and more powerful full-stack applications.

Combining Cloudflare’s edge network with Linc’s approach to server-side rendering, we’re able to set a new standard for performance on the web by delivering the speed of powerful servers close to users. Now, I’ll hand it over to Glen Maddern, who was the CTO of Linc, to share why they joined Cloudflare.


Linc and the Frontend Application Bundle (FAB) specification were designed with a single goal in mind: to give frontend developers the best possible tools to build, review, refine, and deploy their applications. An important piece of that is making server-side logic and rendering much more accessible, regardless of what type of app you’re building.

Static vs Dynamic frontends

One of the biggest problems in frontend web development today is the dramatic difference in complexity when moving from generating static sites (e.g. building a directory full of HTML, JS, and CSS files) to hosting a full application (traditionally using NodeJS and a web server like Express). While you gain the flexibility of being able to render everything on-demand and customised for the current user, you increase your maintenance cost — you now have servers that you need to keep running. And unless you’re operating at a global scale already, you’ll often see worse end-user performance as your requests are only being served from one or maybe a couple of locations worldwide.

While serverless platforms have arisen to solve these problems for backend services and can be brought to bear on frontend apps, they’re much less cost-effective than using static hosting, especially if the bulk of your frontend assets are static. As such, we’ve seen a rise of technologies under the umbrella term of “JAMstack”; they aim at making static sites more powerful (like rebuilding based off CMS updates), or at making it possible to deploy small pieces of server-side APIs as “cloud functions”, along with each update of your app. But it’s still fundamentally a limited architecture — you always have a static layer between you and your users, so the more dynamic your needs, the more complex your build pipeline becomes, or the more you’re forced to rely on client-side logic.

FABs took a different approach: a deployment artefact that could support the full range of server-side needs, from entirely static sites, to apps with some API routes or cloud functions, all the way to full server-side streaming rendering. We also made it compatible with all the cloud hosting providers you might want, so that deploying becomes as easy as uploading a ZIP file. Then, as your needs change (as dynamic content becomes more important, as new frameworks arise that offer better performance, or as you consider moving to a different hosting provider), you never need to change your tooling and deployment processes.

The FAB approach

Regardless of what framework you’re working with, the FAB compiler generates a fab.zip file that has two components: a server.js file that acts as a server-side entry point, and an _assets directory that stores the HTML, CSS, JS, images, and fonts that are sent to the client.
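
To make that concrete, here's a rough sketch of what a compiled FAB's contents might look like (the file names are illustrative placeholders, not the output of any particular framework):

fab.zip
├── server.js              # server-side entry point
└── _assets/
    ├── index.html
    ├── app.3f9c1b.js      # hashed, immutable client-side bundles
    └── styles.8a2d4e.css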

This simple structure gives us enough flexibility to handle all kinds of apps. For example, a static site will have a server.js of only a few auto-generated lines of server-side code, just enough to add redirects for any files outside the _assets directory. On the other end of the spectrum, an app with full server rendering looks and works exactly the same. It just has a lot more code inside its server.js file.

On a server running NodeJS, serving a compiled FAB is as easy as fab serve fab.zip, but FABs are really designed with production-class hosting in mind. They make use of world-class CDNs and the best serverless hosting platforms around.

When a FAB is deployed, it’s often split into these component parts and deployed separately. Assets are sent to a low-cost object storage platform with a CDN in front of it, and the server component is sent to dedicated serverless hosting. It’s all deployed in an atomic, idempotent manner that feels as simple as uploading static files, but completely unlocks dynamic server-side code as part of your architecture.

That generic architecture works great and is compatible with virtually every hosting platform around, but it works slightly differently on Cloudflare Workers.

Workers, unlike other serverless platforms, truly runs at the edge: there is no CDN or load balancer in front of it to split off /_assets routes and send them directly to the Assets storage. This means that every request hits the worker, whether it’s triggering a full page render or piping through the bytes for an image file. It might feel like a downside, but with Workers’ performance and cost profile, it’s quite the opposite — it actually gives us much more flexibility in what we end up building, and gets us closer to the goal of fully unlocking server-side code.

To give just one example, we no longer need to store our asset files on a dedicated static file host — instead, we can use Cloudflare’s global key-value storage: Workers KV. Our server.js running inside a Worker can then map /_assets requests directly into the KV store and stream the result to the user. This results in significantly better performance than proxying to a third-party asset host.
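
As a rough sketch of that idea (not Linc's actual implementation), a Worker can check for /_assets requests, read the file out of a KV namespace, and stream it back, falling through to the app's server-side rendering for everything else. The ASSETS binding and the helper functions here are assumptions made for the example:

addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const url = new URL(request.url)
  if (url.pathname.startsWith("/_assets/")) {
    const key = url.pathname.slice("/_assets/".length)
    // Read the raw bytes from Workers KV as a stream and pass them through
    const body = await ASSETS.get(key, "stream")
    if (body === null) return new Response("Not found", { status: 404 })
    return new Response(body, { headers: { "Content-Type": guessType(key) } })
  }
  // Everything else falls through to the app's server-side rendering logic
  return renderApp(request)
}

// Hypothetical stand-in for the FAB's server-side rendering entry point
async function renderApp(request) {
  return new Response("<!doctype html><h1>Rendered at the edge</h1>", {
    headers: { "Content-Type": "text/html" },
  })
}

// Hypothetical helper: naive content-type lookup by file extension
function guessType(key) {
  if (key.endsWith(".js")) return "application/javascript"
  if (key.endsWith(".css")) return "text/css"
  if (key.endsWith(".html")) return "text/html"
  return "application/octet-stream"
}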

What we’ve found is that Cloudflare offered the most “FAB-native” hosting option, and so it’s very exciting to have the opportunity to further develop what they can do.

Linc + Cloudflare

As we stated above, Linc’s goal was to give frontend developers the best tooling to build and refine their apps, regardless of which hosting they were using. But we started to notice an important trend —  if a team had a free choice for where to host their frontend, they inevitably chose Cloudflare Workers. In some cases, for a period, teams even used Linc to deploy a FAB to Workers alongside their existing hosting to demonstrate the performance improvement before migrating permanently.

At the same time, we started to see more and more opportunities to fully embrace edge-rendering and make global serverless hosting more powerful and accessible. But the most exciting ideas required deep integration with the hosting providers themselves. Which is why, when we started talking to Cloudflare, everything fell into place.

We’re so excited to join the Cloudflare effort and work on expanding Cloudflare Pages to cover the full spectrum of applications. Not only do they share our goal of bringing sophisticated technology to every development team, but with innovations like Durable Objects starting to offer new storage paradigms, the potential for a truly next-generation deployment, review &  hosting platform is tantalisingly close.

Introducing Cloudflare Pages: the best way to build JAMstack websites

Post Syndicated from Rita Kozlov original https://blog.cloudflare.com/cloudflare-pages/

Across multiple cultures around the world, this time of year is a time of celebration and sharing of gifts with the people we care the most about. In that spirit, we thought we’d take this time to give back to the developer community that has been so supportive of Cloudflare for the last 10 years.

Today, we’re excited to announce Cloudflare Pages: a fast, secure and free way to build and host your JAMstack sites.

Today, the path from an idea to a website is paved with good intentions

Websites are the way we express ourselves on the web. It doesn’t matter if you’re a hobbyist with a blog, or the largest of corporations with millions of customers — if you want to reach people outside the confines of 280 characters, the web is the place to be.

As a frontend developer, it’s your responsibility to bring this expression to life. And make no mistake — with so many frontend frameworks, tooling, and static site generators at your disposal — it’s a great time to be in your line of work.

That is, of course, right up until the point when you’re ready to show your work off to the world. That’s when things can start to get a little hairy.

At this point, continuing to keep things local rather than committing to source starts to become… irresponsible. But then: how do you quickly iterate and maintain momentum? As you change things, you need to make sure those changes don’t get lost — saving them to source control — while keeping in sync with what’s currently deployed to production.

There are no great solutions.

If you’re in a larger organization, you might have a DevOps organization devoted to exactly that: automating deployments using Continuous Integration (CI) tooling.

Most CI tooling, however, is quite cumbersome, and for good reason — to allow organizations to customize their automation, regardless of their stack and setup. But for the purpose of developing a website, it can still feel like an unnecessary and frustrating diversion on the road to delivering your web project. Configuring a .yaml file, adding and removing commands, waiting minutes for each build to run, and praying to the CI gods at each one that these are the right commands. Hopelessly rerunning the same build over and over, and expecting a different result.  

Often, hours are lost. The process stands in the way of you and doing your best work.

Cloudflare Pages: letting frontend devs do what they do best

We think there’s a better way.

With Cloudflare Pages, we set out to simplify every step along the journey by tying deployment to your existing development workflow.

Seamless Git integration, with builds built-in

With Cloudflare Pages, all you have to do is select your repo, and tell us which framework you’re using. We’ll take care of chanting CI incantations on your behalf, while you keep doing what you were already doing: git commit and git push your changes — we’ll build and deploy them for you.

As the project grows, so do the stakes, and the number of collaborators.

For a site in production, changes need to be reviewed thoroughly. As the reviewer, looking at the code and skimming for red flags only gets you so far. To review thoroughly, you have to commit or git stash your own changes, pull the branch down locally, and get it running to make sure it actually works — looking at code alone won’t catch everything!

The other developers on the team are not the only stakeholders, either. There are designers, marketers, and PMs who want to provide feedback before the changes go out.

Unique preview URLs

With Cloudflare Pages, each commit gets its own unique URL. Preview URLs make it easier to get meaningful code reviews without the overhead of pulling down the branch. They also make it easier to get feedback from PMs, designers and marketers on the latest iteration, bridging the gap between mocks and code.

Infinite staging

“Does anyone mind if I take over staging?” might also sound like a familiar question. With Cloudflare Pages, each feature branch gets its own dedicated alias, giving you a consistent URL for its latest changes.

With Preview and Production environments, all feature branches and preview links will be built with preview variables, so you can experiment without impacting production data.

When you’re ready to deploy to production, we’ll redeploy to production for you with the updated production environment variables.

Collaboration for all

Collaboration is the key to building amazing websites and products — the more the merrier! As a security company, we definitely don’t want you sharing passwords and credentials. That’s why we provide multi-user access for free, for unlimited users — invite all your friends, on us!

Modern sites with modern standards

We all know premature optimization is a cardinal sin, but once your project is in front of customers you want to have the best performance possible. If it’s successful, you also want it to be available!

Today, that means time spent optimizing performance (chasing those 100 Lighthouse scores) and scaling from a handful of users to millions.

Luckily, we happen to know a thing or two about running a global network of 200 data centers, so we’ve got you covered.

With Pages, your site is deployed directly to our edge, milliseconds away from customers, and at global scale.

The latest web standards are fun to read about on Hacker News but not fun to implement yourself. With Cloudflare Pages, we’ll do the heavy lifting to keep you ahead of the curve: IPv6, HTTP/3, TLS 1.3, all the latest image formats.

Oh, and one more thing

We’re really excited for developers and their teams to use Cloudflare Pages to collaborate on the best static sites together. There’s just one thing that didn’t sit quite right with us: why stop at static sites?

What if we could make building full-blown, dynamic applications just as easy?

Although APIs are a core part of the JAMstack, today that refers primarily to the robust API economy developers have access to. And while that’s great, it’s not always enough. If you want to build your own APIs, and store user or application data, you need more than third party APIs. What to do, though?

Well, this is the point at which it’s mighty helpful we’ve already built a global serverless platform: Cloudflare Workers. Workers allows frontend developers to easily write scalable backends for their applications in the same language as the frontend: JavaScript.

Over the coming months, we’ll be working on integrating Workers and Pages into a seamless experience. It’ll work the exact same way Pages does: just write your code, git push, and we’ll deploy it for you. The only difference is, it won’t just be your frontend, it’ll be your backend, too. And just to be clear: this is not just for stateless functions. With Workers KV and Durable Objects, we see a huge opportunity to really enable any web application to be built on this platform.

We’re super excited about the future of Pages, and how with the power of Cloudflare Workers behind it, it represents a bold vision for how new applications are going to be built on the web.

But you know the thing about gifts? They’re no good without someone to receive them. We’d love for you to sign up for our beta and try out Cloudflare Pages!

PS: we’re hiring!

Want to help us shape the future of development on the web? Join our team.

Building Black Friday e-commerce experiences with JAMstack and Cloudflare Workers

Post Syndicated from Kristian Freeman original https://blog.cloudflare.com/building-black-friday-e-commerce-experiences-with-jamstack-and-cloudflare-workers/

The idea of serverless is to allow developers to focus on writing code rather than operations — the hardest of which is scaling applications. One of the most predictable surges of traffic across Cloudflare’s network every year comes on Black Friday. As John wrote at the end of last year, Black Friday is the Internet’s biggest online shopping day. In a past case study, we talked about how Cordial, a marketing automation platform, used Cloudflare Workers to reduce their API server latency and handle the busiest shopping day of the year without breaking a sweat.

The ability to handle immense scale is well-trodden territory for us on the Cloudflare blog, but scale is not always the first thing developers think about when building an application — developer experience is likely to come first. And developer experience is something Workers does just as well; through Wrangler and APIs like Workers KV, Workers is an awesome place to hack on new projects.

Over the past few weeks, I’ve been working on a sample open-source e-commerce app for selling software, educational products, and bundles. Inspired by Humble Bundle, it’s built entirely on Workers, and it integrates powerfully with all kinds of first-class modern tooling: Stripe, an API for accepting payments from customers (and, as we’ll see later, paying out to authors), and Sanity.io, a headless CMS for data management.

This kind of project is perfectly suited for Workers. We can lean into Workers as a static site hosting platform (via Workers Sites), API server, and webhook consumer, all within a single codebase, and deployed instantly around the world on Cloudflare’s network.

If you want to see a deployed version of this template, check out ecommerce-example.signalnerve.workers.dev.

[Image: The frontend of the e-commerce Workers template.]

In this blog post, I’ll dive deeper into the implementation details of the site, covering how Workers continues to excel as a JAMstack deployment platform. I’ll also cover some new territory in integrating Workers with Stripe. The project is open-source on GitHub, and I’m actively working on improving the documentation, so that you can take the codebase and build on it for your own e-commerce sites and use cases.

The frontend

As I wrote last year, Workers continues to be an amazing platform for JAMstack apps. When I started building this template, I wanted to use some things I already knew — Sanity.io for managing data, and of course, Workers Sites for deploying — but some new tools as well.

Workers Sites is incredibly simple to use: just point it at a directory of static assets, and you’re good to go. With this project, I decided to try out Nuxt.js, a Vue-based static site generator, to power the frontend for the application.

Using Sanity.io, the data representing the bundles (and the products inside of those bundles) is stored on Sanity.io’s own CDN, and retrieved client-side by the Nuxt.js application.

[Image: Managing data inside Sanity.io’s headless CMS interface.]

When a potential customer visits a bundle, they’ll see a list of products from Sanity.io, and a checkout button provided by Stripe.

Responding to new checkout sessions and purchases

Making API requests with Stripe’s Node SDK isn’t currently supported in Workers (check out the GitHub issue where we’re discussing a fix), but because Stripe’s API is just REST underneath, we can easily make the underlying REST requests ourselves.

When a user clicks the checkout button on a bundle page, it makes a request to the Cloudflare Workers API, and securely generates a new session for the user to checkout with Stripe.

import { json, stripe } from '../helpers'

export default async (request) => {
  const body = await request.json()
  const { price_id } = body

  const session = await stripe('/checkout/sessions', {
    payment_method_types: ['card'],
    line_items: [{
        price: price_id,
        quantity: 1,
      }],
    mode: 'payment'
  }, 'POST')

  return json({ session_id: session.id })
}
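
The stripe and json helpers imported above aren’t shown in this post. Here’s a hedged sketch of how they might be implemented as thin wrappers over fetch (an illustration of the REST-over-fetch approach, not the template’s actual helper code; STRIPE_SECRET_KEY is assumed to be configured as a Worker secret):

// helpers.js (illustrative sketch)

// Flatten nested params into Stripe's bracket-style form encoding,
// e.g. { line_items: [{ price: "x" }] } becomes "line_items[0][price]=x"
const encodeForm = (obj, prefix = "") =>
  Object.entries(obj)
    .map(([key, value]) => {
      const name = prefix ? `${prefix}[${key}]` : key
      return typeof value === "object" && value !== null
        ? encodeForm(value, name)
        : `${encodeURIComponent(name)}=${encodeURIComponent(value)}`
    })
    .join("&")

// Call Stripe's REST API directly with fetch
export const stripe = async (path, params = {}, method = "GET") => {
  const response = await fetch(`https://api.stripe.com/v1${path}`, {
    method,
    headers: {
      Authorization: `Bearer ${STRIPE_SECRET_KEY}`,
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body: method === "GET" ? undefined : encodeForm(params),
  })
  return response.json()
}

// Wrap data in a JSON Response
export const json = data =>
  new Response(JSON.stringify(data), {
    headers: { "Content-Type": "application/json" },
  })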

This is where Workers excels as a JAMstack platform. Yes, it can do static site hosting, but with just a few extra lines of routing code, I can deploy a highly scalable API right alongside my Nuxt.js application.

Webhooks and working with external services

This idea extends throughout the rest of the checkout process. When a customer is successfully charged for their purchase, Stripe sends a webhook back to Cloudflare Workers. In order to complete the transaction on our end, the Workers application:

  • Validates the incoming data from Stripe to ensure that it’s legitimate. This means that every incoming webhook request is explicitly validated using your Stripe account details, and can be confirmed to be valid before the function acts on it.
  • Distributes payments to the authors using Stripe Connect. When a customer buys a bundle for $20, that $20 (minus Stripe fees) gets distributed evenly between the authors in that bundle — all of this calculation and the associated transfer requests happen inside the Worker (see the sketch after this list).
  • Sends a unique download link to the customer. Using Workers KV, a unique token is set up that corresponds to the customer’s email, which can be used to retrieve the content the customer purchased. This integration uses Mailgun to construct an email and send it entirely over REST APIs.
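
To illustrate the payment-splitting step, here’s a hedged sketch of how it might look using Stripe Connect transfers over REST, reusing a stripe helper like the one sketched earlier. The connected account IDs, field names, and even-split handling here are simplified assumptions, not the template’s exact logic:

// Illustrative sketch of distributing a bundle purchase between authors
import { stripe } from '../helpers'

export const distributePayments = async ({ amountReceived, authors, chargeId }) => {
  // Split the captured amount (in cents) evenly between the bundle's authors
  const share = Math.floor(amountReceived / authors.length)

  return Promise.all(
    authors.map(author =>
      stripe('/transfers', {
        amount: share,
        currency: 'usd',
        destination: author.stripeAccountId, // assumed field on the author record
        source_transaction: chargeId,        // ties the transfer to the original charge
      }, 'POST')
    )
  )
}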

By the time the purchase is complete, the Workers serverless API will have interfaced with four distinct APIs, persisting records, sending emails, and handling and distributing payments to everyone involved in the e-commerce transaction. With Workers, this all happens in a single codebase, with low latency and a superb developer experience. The entire API is type-checked and validated before it ever gets shipped to production, thanks to our TypeScript template.

Each of these tasks involves a pretty serious level of complexity, but by using Workers, we can abstract each of them into smaller pieces of functionality, and compose powerful, on-demand, and infinitely scalable webhooks directly on the serverless edge.

Conclusion

I’m really excited about the launch of this template and, of course, it wouldn’t have been possible to ship something like this in just a few weeks without using Cloudflare Workers. If you’re interested in digging into how any of the above stuff works, check out the project on GitHub!

With the recent announcement of our Workers KV free tier, this project is perfect to fork and build your own e-commerce products with. Let me know what you build and say hi on Twitter!

Announcing Built with Workers

Post Syndicated from Adam Schwartz original https://blog.cloudflare.com/built-with-workers/

Ever since its initial release, Cloudflare Workers has given JavaScript developers a platform for building high-performance applications that scale automatically.

As with any new technology, we know it can be a bit intimidating to get started. For one thing, running code on the edge is a paradigm shift—forcing us to rethink classic web architecture problems, or removing them altogether. For another, since you can build just about anything, it can be challenging to figure out what to build first.


Today we’re launching Built with Workers, a new site designed to help get those creative juices flowing and unblock you, by answering that simple but important question: What can I build with Cloudflare Workers?

Some time in 1999, at age 11, I received my first graphing calculator. It was a TI-82 that my older sister no longer needed. It was on this very calculator that I learned to write code. Looking back, I’m not sure how exactly I had the patience or sanity to figure it all out.

It was a mess. Among the many difficulties were that I had to type the code out on the calculator’s non-QWERTY keyboard, the language I was writing in didn’t have functions, and oh yeah, the text editor would frequently bug out and I’d inexplicably lose half or all of my code.

But what was perhaps more challenging than all of that, was that I had absolutely zero code examples to draw inspiration from.

I remember when I stumbled on a design pattern to handle input. It was quite the eureka moment. With only labels and gotos, I would have to check if each key was pressed and then loop back around to do it all over again. Little did I know I’d be programming games just about the same way today using requestAnimationFrame.

Though I was able to make a few simple programs hunting around like this, I quickly hit my limits and stopped writing them.

A couple of years later, a friend of mine, with his fancy-pantsed TI-83 Silver Edition calculator with 4× the RAM of mine—I was a bit jealous—showed me a program that came with his fancy new calculator.

It was called Phoenix. If you ever played any graphing calculator games, it was probably this one. It was fast and action packed, flying a spaceship around shooting enemies. It was beautiful.

Seeing this game totally changed my perspective on the platform. I went on to create a couple of games involving similar mechanics and a similar animation style, all because I’d seen this game on a friend’s calculator.

It opened up my eyes to what was possible, gave me the confidence to try things I previously thought were impossible, and brought out the detective inside me to want to figure out how they were able to build each piece of functionality.

A few other friends of mine started writing programs too. We would trade our programs in the back of math class together using a cable to attach the two calculators together.

Importantly, when you’d receive a program from another friend’s calculator, you could run it and you could view and manipulate the source code. This allowed us to collaborate on games, by passing them back and forth.

Our apps and games became more complex and interesting. Non-coder friends of ours were becoming interested in our projects too, and we started sharing games at lunch. We would get great feedback from them, leading us to fix bugs, build more stylish graphics and intro sequences, and streamline the player experience.

I’m sure our teachers loved us…


A few years later, I got the Internet.

What made the Internet so exciting to me was that by design, the source code of any website was just sitting right there, one keyboard command away. Just like the calculators.

So I did what many of us did back then—and still do today: if I saw something cool, I stole it. Oh nice hover animation: I’ll take that thank you very much. Cool button design: yup, that’s mine now.

Back then we called them websites not apps, and ourselves web designers not frontend engineers, but in all of the time since then, the web platform hasn’t really changed.

Although the web today is more complex, often with many more layers of abstraction, it’s still the case that if you can see something in your browser, most likely you can quickly access, study, copy, manipulate, borrow and steal the source code that created it in just a few seconds.

This is why I’m personally so excited to bring Built with Workers to the community, and to learn from the projects on it myself.

Every project page has a section in which the creators get to describe how their project uses Cloudflare Workers. Many projects use Workers KV, our distributed key-value store, and Workers Sites, our edge-based static site hosting, and some projects are entirely written, built, tested, and deployed with Cloudflare Workers.

Even better than that, many projects on Built with Workers are open-source, featuring a direct link to the GitHub repo, allowing you to quickly get at the source. The Built with Workers site itself is one of the open-source projects featured on the site.

We hope that the projects on Built with Workers will help inspire you to build your next project, by seeing just what’s possible.

Visit Built with Workers

It’s been thrilling to see the incredible projects people are building. If you’ve got a project you’d like to share, please fill out this form.

JAMstack at the Edge: How we built Built with Workers… on Workers

Post Syndicated from Kristian Freeman original https://blog.cloudflare.com/jamstack-at-the-edge-how-we-built-built-with-workers-on-workers/

I’m extremely stoked to announce Built with Workers today – it’s an awesome resource for exploring what you can build with Cloudflare Workers. As Adam explained in our launch post, showcasing developers building incredible projects with tools like Workers KV or our streaming HTML rewriter is a great way to celebrate users of our platform. It also helps encourage developers to try building their dream app on top of Workers. In this post, I’ll explore some of the architectural and implementation designs we made while building the site.

When we first started planning Built with Workers, we wanted to use the site as an opportunity to build a new greenfield application, showcasing the strength of the Workers platform. The Workers Developer Experience team is cross-functional: while we might spend most of our time improving our docs, or developing features for our command-line interface Wrangler, most of us have spent years developing on the web. The prospect of starting a new application is always fun, but in this instance, it was a prime chance to ask (and answer) the question, “If I could build this site on Workers with whatever tools I want, what would I choose?”

A guiding principle for the Workers platform is ease-of-use. The programming model is simple: it’s just JavaScript (or, via WASM, Rust, C, and C++), and you have complete control over the requests coming in and the requests going out from your Workers script. In the same way, while building Built with Workers, it was crucial to find a set of tools that could enable something like this throughout the process of building the entire application. To enable this, we’ve embraced JAMstack – a software stack comprised of JavaScript, APIs, and markup – with Built with Workers, deploying always up-to-date static builds of the site directly to the edge, using Workers Sites. Our framework of choice, Gatsby.js, provides a set of sane defaults to build a modern web application. To manage content and the layout of the site, we’ve chosen Sanity.io, a powerful headless CMS that allows us to model the entire website without needing to deploy any databases or spin up any additional infrastructure.

Personally, I’m excited about JAMstack as a methodology for building web applications because of this emphasis on reducing infrastructure: it’s incredibly similar to the motivations behind deploying serverless applications using Cloudflare Workers, and as we developed Built with Workers, we discovered a number of these philosophical similarities in JAMstack and Cloudflare Workers – exciting! To help encourage developers to explore building their own JAMstack applications on Workers, I’m also announcing today that we’ve made the Built with Workers codebase open-source on GitHub – you can check out how the application is developed, built and deployed from start to finish.

In this post, we’ll dig into Built with Workers, exploring how it works, the technical decisions we’ve made, and some of the most fascinating aspects of what it means to build applications on the edge.

[Image: A screenshot of the Built with Workers homepage]

Exploring the JAMstack

My first encounter with tooling that would ultimately become part of “JAMstack” was in 2013. I noticed the huge proliferation of developers building personal “static” sites – taking blog posts written primarily in Markdown, and pushing them through frameworks like Jekyll to build full websites that could easily be deployed to a number of CDNs and file hosting platforms. These static sites were fast – they are just HTML, CSS, and JavaScript – and easy to update. The average developer spends their days maintaining large and complex software systems, so it was relaxing to just write Markdown, plug in some re-usable HTML and CSS, and deploy your website. The advent of static sites, of course, isn’t new – but after years of increasingly complex full-stack technology, the return to simplicity was a promising development for many kinds of websites that didn’t need databases, or any dynamic content.

In the last couple years, JAMstack has built upon that resurgence, and represents an approach to building complete, complex applications using the same tooling that has become the first choice for developers building their simple personal sites. JAMstack is comprised of three conceptual pieces – JavaScript, APIs, and Markup – each of which is a crucial aspect of simplifying our web applications and making them easy to write, build, and deploy.

J is for JavaScript

The JAMstack architecture relies heavily on the ubiquity of JavaScript as the language of the web. Many modern web applications use powerful, dynamic front-end frameworks like React and Vue to render user interfaces and process state on the client for users. On the backend, or in Workers’ case, on the edge, any dynamic functionality in your JAMstack application should be built on top of JavaScript, often working in the request-response model that full-stack developers are accustomed to.

The Workers platform is perfectly suited to this requirement! As a developer building on Workers, you have total control of incoming requests and outgoing responses, using the JavaScript Service Worker APIs you know and love. We built Workers Sites as an extension of the Workers platform (and Workers KV as a storage mechanism at the edge), making it possible to deploy your site assets using a single command in Wrangler: wrangler publish.

When your Workers Site receives a new request, we’ll execute JavaScript at the edge to look up a piece of content from Workers KV, and serve it back to the client at lightning speed. Remarkably, you can deploy JAMstack applications on Workers with no additional configuration besides generating your Workers Site: by design, Workers Sites is built to serve as an exceptional JAMstack deployment platform.

A is for APIs

The advent of static site tooling for personal sites makes sense: your site is a few pages: a few blog posts, for instance, and the classic “About” or “Contact” page. When it’s compiled to HTML, the footprint is quite small! This small footprint is what makes static sites easy to reason about: they’re trivial to host in terms of bandwidth and storage costs, and they rarely change, so they’re easily cacheable.

While that works for personal sites, complex applications actually have data requirements! We need to get at the user data in our databases, and the analytics information in our data warehouses. JAMstack apps tackle this by definitively stating that these data sources should be accessible via HTTPS APIs, consumable by the application as a way to provide dynamic information to clients.

Workers is a fascinating platform in regards to JAMstack APIs. It can serve as a gateway to your data, or as a place to persist and return data itself. I can, for instance, expose an API endpoint via my Workers script without giving clients access to my origin. I can also use tooling like Workers KV to persist data directly on the edge, and when a user requests that data, I can resolve the data by returning JSON directly from my application.
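
As a hedged sketch of that second case (returning JSON persisted at the edge), a Worker endpoint might read a value from Workers KV and return it directly. This isn’t the Built with Workers code; the PROJECTS binding and key name are assumptions for the example:

// Illustrative sketch: a JSON API endpoint served entirely from the edge
addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const url = new URL(request.url)
  if (url.pathname === "/api/projects") {
    // "PROJECTS" is an assumed KV namespace binding; "all-projects" an assumed key
    const projects = await PROJECTS.get("all-projects", "json")
    return new Response(JSON.stringify(projects || []), {
      headers: { "Content-Type": "application/json" },
    })
  }
  return new Response("Not found", { status: 404 })
}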

This flexibility has been an unexpected part of the experience of developing Built with Workers. In a later section of this post, I’ll talk about how we developed an integral feature of the site based on the unique strengths of Workers as a way to host static assets and as a dynamic JavaScript execution platform. This has remarkable implications that blur the lines between classic static sites and dynamic applications, and I’m really excited about it.

M is for Markup

A breakthrough moment in my understanding of JAMstack came at the beginning of this year. I was working on a job board for frontend developers, using the static site framework Gatsby.js and Sanity.io, a headless CMS tool that allows developers to model content without maintaining a database or any infrastructure. (As a reminder – this set of tools is identical to what we ultimately used to develop Built with Workers. It’s a very good stack!)

SEO is crucial to a job board, and as I began to explore how to drive more traffic to my job board, I landed on the idea of generating a huge amount of search-oriented content automatically from the job data I already had. For instance, if I had job posts with the keywords “React”, “Europe”, and “Senior” (as in “senior developer”), what if I created pages with titles like “Senior React developer jobs in Europe”, or “Remote Angular jobs”? This approach would allow the site to begin ranking for a variety of job positions, locations, and experience levels, and as more jobs were posted on the site, each of these pages would be enriched with more useful information and relevant content, helping it rank higher in search.

“But static sites are… static!”, I told myself. Would I need to build an entire dynamic API on top of my static site, just to be able to serve these search-engine optimized pages? This led me to a “eureka” moment with Gatsby – I could define markup (the “M” in JAMstack), and when I’m building my site, I could look at all the available job data I had, cycling through every available keyword combination and inserting it into my markup to generate thousands of these pages. As I later learned, this idea is not necessarily unique to Gatsby – it is possible, for instance, to automate getting data from your API and writing it to data files in earlier static site frameworks like Hugo – but it is a first-class citizen in Gatsby. There are a ton of data sources available via Gatsby plugins, and because they’re all exposed via HTTPS, the workflow is standardized inside of the framework.
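
The gist of that idea, sketched with Gatsby’s createPages API (this is an illustration of the keyword-combination concept, not the job board’s actual source; fetchJobs and the keyword fields are hypothetical):

// gatsby-node.js (illustrative sketch)
const fetchJobs = async () => [] // hypothetical stand-in for a CMS query

exports.createPages = async ({ actions: { createPage } }) => {
  const jobs = await fetchJobs()
  const levels = ["Senior", "Junior"]
  const skills = ["React", "Angular", "Vue"]
  const locations = ["Europe", "Remote"]

  // Generate a search-oriented page for every keyword combination
  for (const level of levels)
    for (const skill of skills)
      for (const location of locations) {
        const matches = jobs.filter(job =>
          [level, skill, location].every(k => job.keywords.includes(k))
        )
        createPage({
          path: `/${level}-${skill}-jobs-in-${location}`.toLowerCase(),
          component: require.resolve("./src/templates/keyword-page.js"),
          context: { level, skill, location, jobIds: matches.map(j => j.id) },
        })
      }
}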

In Built with Workers, we connect to the Sanity.io CMS instance at build-time: crucially, by the time that the site has been deployed to Workers, the application effectively has no idea what Sanity even is! Our Gatsby application connects to Sanity.io via an HTTPS API, and using GraphQL, we look at all the data that we have in our CMS, and make decisions about what pages to generate and how to render the site’s interface, ultimately resulting in a statically-built application that is derived from dynamic data.

This emphasis on the build step in JAMstack is quite different from the classic web application. In the past, a user requested data, a web server looked at what the user was requesting, and then the user waited while the server fetched that data and returned JSON, or interpolated it into templates written in tools like Pug or ERB. With JAMstack, the pages are already built, and the deployed application is just a collection of plain HTML, CSS, and JavaScript.

Why Cloudflare Workers?

Cloudflare’s network is a fascinating place to deploy JAMstack applications. Yes, Cloudflare’s edge network can act as a CDN for your static assets, like your CSS stylesheets, or your client-side JavaScript code. But with Workers, we now have the ability to run JavaScript side-by-side with our static assets. In most JAMstack applications, the CDN is simply a bucket where your application ends up. Usually, the CDN is the most boring part of the stack! With Cloudflare Workers, we don’t just have a CDN: we also have access to an extremely low-latency, fully-featured JavaScript runtime.

The implications of this on the standard JAMstack workflow are, frankly, mind-boggling, and as part of developing Built with Workers, we’ve been exploring what it means to have this runtime available side-by-side with our statically-built JAMstack application.

To demonstrate this, we’ve implemented a bookmarking feature, which allows users of Built with Workers to bookmark projects. If you look at a project’s usage of our streaming HTML rewriter and say “Wow, that’s cool!”, you can bookmark the project to show your support. This feature, rendered as a button tag, is deceptively simple: it’s a single piece of the user interface that makes use of the entirety of the Workers platform to provide user-specific dynamic functionality. We’ll explore this in greater detail later in the post — see “Enhancing static sites at the edge”.


A modern development and content workflow

In the announcement post for Workers Sites, Rita outlined the motivations behind launching Workers Sites as a modern way to deploy sites:

“Born on the edge, Workers Sites is what we think modern development on the web should look like, natively secure, fast, and massively scalable. Less of your time is spent on configuration, and more of your time is spent on your code, and content itself.”

A few months later, I can say definitively that Workers Sites has enabled us to develop Built with Workers and spend almost no time on configuration. Using our GitHub Action for deploying Workers applications with Wrangler, the site has been continuously deploying to a staging environment for the past couple weeks. The simplicity around this continuous deployment workflow has allowed us to focus on the important aspects of the project: development and content.

The static site framework ecosystem is fairly competitive, but as we considered our options for this site, I advocated strongly for Gatsby.js. It’s an incredible tool for building JAMstack applications, with a great set of defaults for performant applications. It’s common to see Gatsby sites with Lighthouse scores in the upper 90s, and the decision to use React for implementing the UI makes it straightforward to onboard new developers if they’re familiar with React.

As I mentioned in a previous section, Gatsby shines at build-time. Gatsby’s APIs for creating pages during the build process based on API data are incredibly powerful, allowing developers to concretely define every statically-generated page on their web application, as well as any relevant data that needs to be passed in.

With Gatsby decided upon as our static site framework, we needed to evaluate where our content would live. Built with Workers has two primary data models, used to generate the UI:

  • Projects: websites, applications, and APIs created by developers using Cloudflare Workers. For instance, Built with Workers!
  • Features: features available on the Workers platform used to build applications. For instance, Workers KV, or our streaming HTML rewriter/parser.

Given these requirements, there were a number of potential approaches to take to store this data, and make it accessible. Keeping in line with JAMstack, we know that we probably should expose it via an HTTPS API, but from where? In what format?

As a full-stack developer who’s comfortable with databases, it’s easy to envision a world where we spin up a PostgreSQL instance, write a REST API, and write all kinds of fetch('/api/projects') calls to get the information we need. This method works, but we can do better! In the same way we built Workers Sites to simplify the deployment process, it was worthwhile to explore the JAMstack ecosystem and see what solutions exist for modeling data without being on the hook for more infrastructure.

Of the different tools in the ecosystem – databases, whether SQL or NoSQL, key-value stores (such as our own, Workers KV), etc. – the growth of “headless CMS” tools has made the largest impact on my development workflow.

On CSS Tricks, Chris Coyier wrote about the rise of headless CMS tools back in March 2016, and summarizes their function well:

[Headless CMSes are] very related to The Big Conversation™ on the web the last many years. How are we going to handle bringing Our Stuff™ all these different devices/screens/inputs.
Responsive design says "let’s let our design and media accommodate as much variation in screens as possible."
Progressive enhancement says "let’s make the functionality of this site work no matter what."
Designing for accessibility says "let’s ensure everyone can use this regardless of their capabilities as a person."
A headless CMS says "let’s not tie our data to any one way of doing things."

Using our headless CMS, Sanity.io, we can get every project inside our dataset, and call Gatsby’s createPage function to create a new page for each project, using a pre-defined project template file:

// gatsby-node.js

exports.createPages = async ({ graphql, actions }) => {
  const { createPage } = actions;

  const result = await graphql(`
    {
      allSanityProject {
        edges {
          node {
            slug
          }
        }
      }
    }
  `);

  if (result.errors) {
    throw result.errors;
  }

  const {
    data: { allSanityProject }
  } = result;

  const projects = allSanityProject.edges.map(({ node }) => node);
  projects.forEach((node, _index) => {
    const path = `/built-with/projects/${node.slug}`;

    createPage({
      path,
      component: require.resolve("./src/templates/project.js"),
      context: { slug: node.slug }
    });
  });
};
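
The project template referenced above (./src/templates/project.js) isn’t shown in this post. Here’s a hedged sketch of what such a Gatsby template might look like (the Sanity field names are assumptions, not the actual Built with Workers schema):

// src/templates/project.js (illustrative sketch)
import React from "react"
import { graphql } from "gatsby"

const Project = ({ data: { sanityProject } }) => (
  <article>
    <h1>{sanityProject.name}</h1>
    <p>{sanityProject.description}</p>
  </article>
)

// Gatsby runs this query at build time with the { slug } context
// passed in from createPage in gatsby-node.js
export const query = graphql`
  query ProjectBySlug($slug: String!) {
    sanityProject(slug: { eq: $slug }) {
      name
      description
    }
  }
`

export default Project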

Using Sanity to drive the content for Built with Workers has been a huge win for our team. We’re no longer constrained by code deploys to make changes to content on the site – we don’t need to make a pull request to create a new project, and edits to a project’s name or description aren’t constrained by someone with the ability to deploy the project. Instead, we can empower members of our team to log in directly to the CMS and make changes, and be confident that once the corresponding deploy has completed (see “The CDN is the deployment platform” below), their changes will be live on the site.

Dynamic JAMstack layouts

As our team got up and running with Sanity.io, we found that the flexibility of a headless CMS platform was useful not just for creating our original data requirements – projects and features – but in rethinking and innovating on how we actually format the application itself.

Keeping in mind our earlier objective of empowering non-technical folks to make changes to the site without deploying any code, we’ve also taken the entire homepage of Built with Workers and defined it as an instance of the “layout” data model in Sanity.io. By doing this, we can define corresponding “collections”, which are sets of projects. When a layout has many collections defined inside of the CMS, we can rapidly re-order, re-arrange, and experiment with new collections on the homepage, seeing the updated version of the site reflected immediately, and live on the production site within only a few minutes, after our continuous deployment process has finished.

[Image: Updating the layout of Built with Workers live from Sanity’s studio]

With this work implemented, it’s easy to envision a world where our React code is purely concerned with rendering each individual aspect of the application’s interface – for instance, the project title component, or the “card” for an individual project – and the CMS drives the entire layout of the site. In the future, I’d like to continue exploring this idea in other pages in Built with Workers, including the project pages and any other future content we put on the site.

Enhancing static sites at the edge

Much of what we’ve discussed so far can be thought of as features and workflows that have great DX (developer experience), but not specific to Workers. Gatsby and Sanity.io are great, and although Workers Sites is a great platform for deploying JAMstack applications due to the Workers platform’s low-latency and performance characteristics, you could deploy the site to a number of different providers with no real differentiating features.

As we began building Built with Workers as a JAMstack application on top of Workers, we also wanted to explore how the Workers platform could allow developers to combine the simplicity of static site deployments with the dynamism of having a JavaScript runtime immediately available.

In particular, our recently-released streaming HTML rewriter seems like a perfect fit for “enhancing” our static sites. Our application is being served by Workers Sites, which itself is a Workers template that can be customized. By passing each HTML page through the HTML rewriter on its way to the client, we had an opportunity to customize the content without any negative performance implications.

As I mentioned previously, we landed on a first exploration of this platform advantage via the “bookmark” button. Users of Built with Workers can “bookmark” a project – this action sends a request back up to the Workers application, storing the bookmark data as JSON in Workers KV.

// User-specific data stored in Workers KV, representing
// per-project bookmark information

{
  "bytesized_scraper_bookmarked": false,
  "web_scraper_bookmarked": true
}

When a user returns to Built with Workers, we can make a request to Workers KV, looking for corresponding data for that user and the project they’re currently viewing. If that data exists, we can embed the “edge state” directly into the HTML using the streaming HTML rewriter.

// workers-site/index.js

import { getAssetFromKV } from "@cloudflare/kv-asset-handler"

addEventListener("fetch", event => { 
  event.respondWith(handleEvent(event)) 
})

class EdgeStateEmbed {
  constructor(state) {
    this._state = state
  }
  
  element(element) {
    const edgeStateElement = `
      <script id='edge_state' type='application/json'>
        ${JSON.stringify(this._state)}
      </script>
    `
    element.prepend(edgeStateElement, { html: true })
  }
}

const hydrateEdgeState = async ({ state, response }) => {
  const rewriter = new HTMLRewriter().on(
    "body",
    new EdgeStateEmbed(await state)
  )
  return rewriter.transform(await response)
}

async function handleEvent(event) {
  return hydrateEdgeState({
    response: getAssetFromKV(event),
    // Get associated state for a request, based on the user and URL
    state: transformBookmark(event.request),
  })
}

When the React application is rendered on the client, it can then check for that embedded edge state, which influences how the “bookmark” icon is rendered – either as “bookmarked” or “not bookmarked”. To support this, we’ve leaned on React’s useContext, which allows any component inside of the application component tree to pull out the edge state and use it inside of the component:

// edge_state.js

import React from "react"
import { useSSR } from "../utils"

const parseDocumentState = () => {
  const edgeStateElement = document.querySelector("#edge_state")
  return edgeStateElement ? JSON.parse(edgeStateElement.innerText) : {}
}

export const EdgeStateContext = React.createContext([{}, () => {}])
export const EdgeStateProvider = ({ children }) => {
  const { isBrowser } = useSSR()
  if (!isBrowser) {
    return <>{children}</>
  }
  
  const edgeState = parseDocumentState()
  const [state, setState] = React.useState(edgeState)
  const updateState = (newState, options = { immutable: true }) => options.immutable
    ? setState(Object.assign({}, state, newState))
    : setState(newState)
  
  return (
    <EdgeStateContext.Provider value={[state, updateState]}>
      {children}
    </EdgeStateContext.Provider>
  )
}

// Inside of a React component
const Bookmark = ({ bookmarked, project, setBookmarked, setLoaded }) => {
  const [state, setState] = React.useContext(EdgeStateContext)
  // `bookmarked` value is a simplification of actual code
  return <BookmarkButton bookmarked={state[project.id]} />
}

The combination of a straightforward JAMstack deployment platform with dynamic key-value object storage and a streaming HTML rewriter is really, really cool. This is an initial exploration into what I consider to be a platform-defining feature, and if you’re interested in this stuff and want to continue to explore how this will influence how we write web applications, get in touch with me on Twitter!

The CDN is the deployment platform

While it doesn’t appear in the acronym, an unsung hero of the JAMstack architecture is deployment. In my local terminal, when I run gatsby build inside of the Built with Workers project, the result is a folder of static HTML, CSS, and JavaScript. It should be easy to deploy!

The recent release of GitHub Actions has proven to be a great companion to building JAMstack applications with Cloudflare Workers – we’ve open-sourced our own wrangler-action, which allows developers to build their Workers applications and deploy directly from GitHub.

The standard workflows in the continuous deployment world – deploy every hour, deploy on new changes to the master branch, etc – are possible and already being used by many developers who make use of our wrangler-action workflow in their projects. Particular to JAMstack and to headless CMS tools is the idea of “build-on-change”: namely, when someone publishes a change in Sanity.io, we want to do a new deploy of the site to immediately reflect our new content in production.

The versatility of Workers as a place to deploy JavaScript code comes to the rescue, again! By telling Sanity.io to make a GET request to a deployed Workers webhook, we can trigger a repository_dispatch event on GitHub Actions for our repository, allowing new deploys to happen immediately after a change is detected in the CMS:

const headers = {
  Accept: 'application/vnd.github.everest-preview+json',
  Authorization: 'Bearer $token',
}

const body = JSON.stringify({ event_type: 'repository_dispatch' })

const url = `https://api.github.com/repos/cloudflare/built-with-workers/dispatches`

const handleRequest = async event => {
  // Dispatch the GitHub Actions workflow, then acknowledge the webhook
  await fetch(url, { method: 'POST', headers, body })
  return new Response('OK')
}

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event))
})

In doing this, we’ve made it possible to completely abstract away every deployment task around the Built with Workers project. Not only does the site deploy on a schedule, and on new commits to master, but it can also do additional deploys as the content changes, so that the site is always reflective of the current content in our CMS.

[Image: The GitHub Actions deployment workflow for Built with Workers]

Conclusion

We’re super excited about Built with Workers, not only because it will serve as a great place to showcase the incredible things people are building with the Cloudflare Workers platform, but because it also has allowed us to explore what the future of web development may look like. I’ve been advocating for what I’ve seen referred to as “full-stack serverless” development throughout 2019, and I couldn’t be happier to start 2020 with launching a project like Built with Workers. The full-stack serverless stack feels like the future, and it’s actually fun to build with on a daily basis!

If you’re building something awesome with Cloudflare Workers, we’re looking for submissions to the site! Get in touch with us via this form – we’re excited to speak with you!

Finally, if topics like JAMstack on Cloudflare Workers, “edge state” and dynamic static site hydration, or continuous deployment interest you, the Built with Workers repository is open-source! Check it out, and if you’re inspired to build something cool with Workers after checking out the code, make sure to let us know!

Introducing the HTMLRewriter API to Cloudflare Workers

Post Syndicated from Rita Kozlov original https://blog.cloudflare.com/introducing-htmlrewriter/

We are excited to announce that the HTMLRewriter API for Cloudflare Workers is now GA! You can get started today by checking out our documentation, or trying out our tutorial for localizing your site with the HTMLRewriter.

Want to know how it works under the hood? We are excited to tell you everything you wanted to know but were afraid to ask, about building a streaming HTML parser on the edge; read about it in part 1 (and stay tuned for part two coming tomorrow!).

Faster, more scalable applications at the edge

The HTMLRewriter can help solve two big problems web developers face today: making changes to HTML that are hard to make at the server level, and making it possible for HTML to live on the edge, closer to the user, without sacrificing dynamic functionality.

Since their introduction, Workers have helped customers regain control where it either wasn’t provided or was very hard to obtain at the origin level. Just like Workers can help you set CORS headers at the middleware layer, between your users and the origin, the HTMLRewriter can assist with things like URL rewrites (see the example below!).
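
To illustrate that middleware pattern, here is a minimal sketch (not taken from any particular Cloudflare example) of a Worker that fetches the origin response and adds permissive CORS headers on the way back to the browser; the wildcard origin and the method list are just placeholder values:

// A minimal sketch of CORS middleware in a Worker. The wildcard origin is
// only an illustrative value; a real deployment would likely restrict it.
async function addCorsHeaders(request) {
  const originResponse = await fetch(request)

  // Copy the response so its headers become mutable.
  const response = new Response(originResponse.body, originResponse)
  response.headers.set('Access-Control-Allow-Origin', '*')
  response.headers.set('Access-Control-Allow-Methods', 'GET, POST, OPTIONS')

  return response
}

addEventListener('fetch', event => {
  event.respondWith(addCorsHeaders(event.request))
})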

Back in January, we introduced the Cache API, giving you more control than ever over what you could store in our edge caches, and how. By giving users closer control of the cache, we made it possible to push experiences that previously could only live at the origin server out to the network edge. A great example of this is the ability to cache POST requests, where previously only GET requests were able to benefit from living on the edge.
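
As a rough sketch of that POST-caching pattern (one possible approach, not code lifted from our docs), a Worker can hash the POST body into a synthetic GET request and use that as the cache key; the /__post-cache/ path and hashing scheme here are purely illustrative:

// A rough sketch of caching POST requests with the Workers Cache API.
// The cache is keyed by GET requests, so we derive a synthetic GET request
// from the POST body and use it as the cache key.
async function handlePost(request) {
  const body = await request.clone().arrayBuffer()

  // Hash the body to build a stable, illustrative cache-key URL.
  const digest = await crypto.subtle.digest('SHA-256', body)
  const hash = [...new Uint8Array(digest)]
    .map(b => b.toString(16).padStart(2, '0'))
    .join('')
  const cacheKey = new Request(`${new URL(request.url).origin}/__post-cache/${hash}`)

  const cache = caches.default
  let response = await cache.match(cacheKey)
  if (!response) {
    // Cache miss: go to the origin once, then store the response at the edge.
    response = await fetch(request)
    await cache.put(cacheKey, response.clone())
  }
  return response
}

addEventListener('fetch', event => {
  if (event.request.method === 'POST') {
    event.respondWith(handlePost(event.request))
  }
})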

Later this year, during Birthday Week, we announced Workers Sites, taking the idea of content on the edge one step further, and allowing you to fully deploy your static sites to the edge.

We were not the first to realize that the web could benefit from more content being accessible from the CDN level. The motivation behind the resurgence of static sites is to have a canonical version of HTML that can easily be cached by a CDN.

This has been great progress for static content on the web, but what about all the content that’s almost static, but not quite? Imagine an e-commerce website: the items being offered and the promotions are the same for everyone, except for that pesky shopping cart in the top right. That small difference means the HTML can no longer be cached and sent to the edge. That tiny number of items in the cart requires you to travel all the way to your origin and make a call to the database for every user that visits your site. At this time of year, around Black Friday and Cyber Monday, that means making sure your origin and database can scale to the maximum load of all the excited shoppers eager to buy presents for their loved ones.

Edge-side JAMstack with HTMLRewriter

One way to reduce origin load while retaining dynamic functionality is to move more of the logic to the client. This approach of making the HTML content static, leveraging a CDN for caching, and relying on client-side API calls for dynamic content is known as the JAMstack (JavaScript, APIs, Markup). The JAMstack is certainly a move in the right direction, trying to maximize content on the edge. We’re excited to embrace that approach even further, by allowing the dynamic logic of the site to live on the edge as well.

In September, we introduced the HTMLRewriter API in beta to the Cloudflare Workers runtime — a streaming parser API that allows developers to build fully dynamic applications on the edge. HTMLRewriter is a jQuery-like experience inside your Workers application, allowing you to manipulate the DOM using selectors and handlers. The same way jQuery revolutionized front-end development, we’re excited for the HTMLRewriter to change how we think about architecting dynamic applications.

There are a few advantages to rewriting HTML on the edge rather than on the client. First, updating content client-side introduces a tradeoff between the performance and consistency of the application: if you update a page to its logged-in version client-side, the user will either be presented with the static version first and then watch the page update (an inconsistency known as a flicker), or rendering will be blocked until the customization is complete. Having the DOM modifications made edge-side provides the benefits of a scalable application without degrading the user experience. The second problem is the inevitable client-side bloat. Most of the world today connects to the Internet on a mobile phone with a significantly less powerful CPU, over shaky last-mile connections where each additional API call risks never making it back to the client. With the HTMLRewriter inside Cloudflare Workers, the API calls for dynamic content (or even calls directly to Workers KV for user-specific data) can be made from the Worker instead, on a much more powerful machine, over much more resilient backbone connections.
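
To make that concrete, here is a minimal sketch of the shopping-cart example from above. It assumes a hypothetical KV namespace binding named CARTS and a span#cart-count element in the page, neither of which comes from a real site; the Worker serves the otherwise-static HTML and injects the per-user value at the edge:

// A minimal sketch: serve static HTML, then inject a per-user cart count at
// the edge. CARTS is an assumed KV namespace binding and span#cart-count an
// assumed element in the page; both are illustrative.
class CartCountRewriter {
  constructor(count) {
    this.count = count
  }

  element(element) {
    element.setInnerContent(String(this.count))
  }
}

async function handleRequest(request) {
  // The static page itself can be cached at the edge as usual.
  const page = await fetch(request)

  // Look up user-specific data from the edge, keyed here (illustratively)
  // by the session cookie.
  const session = request.headers.get('Cookie') || 'anonymous'
  const count = (await CARTS.get(session)) || '0'

  return new HTMLRewriter()
    .on('span#cart-count', new CartCountRewriter(count))
    .transform(page)
}

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})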

Here’s what the developers at Happy Cog have to say about HTMLRewriter:

"At Happy Cog, we’re excited about the JAMstack and static site generators for our clients because of their speed and security, but in the past have often relied on additional browser-based JavaScript and internet-exposed backend services to customize user experiences based on a cookied logged-in state. With Cloudflare’s HTMLRewriter, that’s now changed. HTMLRewriter is a fundamentally new powerful capability for static sites. It lets us parse the request and customize each user’s experience without exposing any backend services, while continuing to use a JAMstack architecture. And it’s fast. Because HTMLRewriter streams responses directly to users, our code can make changes on-the-fly without incurring a giant TTFB speed penalty. HTMLRewriter is going to allow us to make architectural decisions that benefit our clients and their users — while avoiding a lot of the tradeoffs we’d usually have to consider."

Matt Weinberg, President of Development and Technology, Happy Cog

HTMLRewriter in Action

Since most web developers are familiar with jQuery, we chose to keep the HTMLRewriter very similar, using selectors and handlers to modify content.

Below, we have a simple example of how to use the HTMLRewriter to rewrite the URLs in your application from myolddomain.com to mynewdomain.com:

// An element handler: rewrites the given attribute on any element it
// receives, replacing the old domain with the new one.
class AttributeRewriter {
  constructor(attributeName) {
    this.attributeName = attributeName
  }

  element(element) {
    const attribute = element.getAttribute(this.attributeName)
    if (attribute) {
      element.setAttribute(
        this.attributeName,
        attribute.replace('myolddomain.com', 'mynewdomain.com')
      )
    }
  }
}

// Attach the handler to `a` and `img` elements. The class is declared above
// because class declarations are not hoisted.
const rewriter = new HTMLRewriter()
  .on('a', new AttributeRewriter('href'))
  .on('img', new AttributeRewriter('src'))

// Fetch the original response and stream it through the rewriter.
async function handleRequest(req) {
  const res = await fetch(req)
  return rewriter.transform(res)
}

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

The HTMLRewriter, which is instantiated once per Worker, is used here to select the a and img elements.

An element handler responds to any matching element once it is attached using the .on method of the HTMLRewriter instance.

Here, our AttributeRewriter will receive every matching element parsed by the HTMLRewriter instance, and we use it to rewrite the relevant attribute (for example, the href of each a element) to update the URLs in the HTML.

Getting started

You can get started with the HTMLRewriter by signing up for Workers today. For a full guide on how to use the HTMLRewriter API, we suggest checking out our documentation.

How does the HTMLRewriter work?

How does the HTMLRewriter work under the hood? We’re excited to tell you about it, and have spared no details. Check out the first part of our two-blog post series to learn all about the not-so-brief history of rewriting HTML at Cloudflare, and stay tuned for part two tomorrow!