Benefits of migrating to event-driven architecture

Post Syndicated from Talia Nassi original https://aws.amazon.com/blogs/compute/benefits-of-migrating-to-event-driven-architecture/

Two common options when building applications are request-response and event-driven architecture. In request-response architecture, an application’s components communicate via API calls. The client sends a request and expects a response before performing the next task. In event-driven architecture, the client generates an event and can immediately move on to its next task. Different parts of the application then respond to the event as needed.

In this post, you learn about reasons to consider moving from request-response architecture to an event-driven architecture.

Challenges with request-response architecture

When starting to build a new application, many developers default to a request-response architecture. A request-response architecture may tightly integrate components that communicate via synchronous calls. While a request-response approach is often easier to get started with, it can become challenging as your application grows in complexity.

In this post, I review an example request-response ecommerce application and demonstrate the challenges of tightly coupled integrations. Then I show you how building the same application with an event-driven architecture can give you increased scalability, fault tolerance, and developer velocity.

Close coordination between microservices

In a typical ecommerce application that uses a synchronous API, the client makes a request to place an order and the order service sends the request downstream to an invoice service. If successful, the order service responds with a success message or confirmation number.

In this initial stage, this is a straightforward connection between the two services. The challenge comes when you add more services that integrate with the order service.

If you add a fulfillment service and a forecasting service, the order service has more responsibilities and more complexity. The order service must know how to call each service’s API, from the API call structure to the API’s retry semantics. If there are any backwards-incompatible changes to the APIs, the order service team must update them. Heavy traffic spikes are forwarded to the order service’s dependencies, which may not have the same scaling capabilities. Also, dependent services may transmit errors back up the stack to the client.

Error handling and retries

Now, you add new downstream services for fulfillment and shipping orders to the ecommerce application.

In the happy path, everything works as expected: the order service triggers invoicing and payment, and updates forecasting. Once payment clears, this triggers the fulfillment and packing of the order, and then informs the shipping service to request tracking information.

However, if the fulfillment center cannot find the product because they are out of stock, then fulfillment might have to alert the invoice service, then reverse the payment or issue a refund. If fulfillment fails, then the system that triggers shipping might fail as well. Forecasting must also be updated to reflect the change. This remediation workflow is all just to address one of the many potential “unhappy paths” that can occur in this API-driven ecommerce application.

Close coordination between development teams

In a synchronously integrated application, teams must coordinate any new services that are added to the application. This can slow down each development team’s ability to release new features. Imagine your team works on the payment service, but you weren’t told that another team added a new rewards service. What happens now when the fulfillment service errors?

Fulfillment may orchestrate all the other services. Your payments team gets a message and undoes the payment, but you may not know who handles retries and error logic. And if the rewards service changes vendors and exposes a new API without telling your team, you may not even be aware of the change.

Ultimately, it can be hard to coordinate these orchestrations and workflows as systems become more complex and more services are added. This is one reason that it can be beneficial to migrate to an event-driven architecture.

Benefits of event-driven architecture

Event-driven architecture can help solve the problems of the close coordination of microservices, error handling and retries, and coordination between development teams.

Close coordination between microservices

In event-driven architecture, the publisher emits an event, which is acknowledged by the event bus. The event bus routes events to subscribers, which process events with self-contained business logic. There is no direct communication between publishers and subscribers.

Decoupled applications enable teams to act more independently, which can increase their velocity. For example, with an API-based integration, if your team wants to know about a change that happened in another team’s microservice, you might have to ask that team to make an API call to your service. Consequently, you may have to account for authentication and coordinate with the other team over the structure of the API call. This causes back and forth between teams, which slows down development time. With an event-driven application, you can subscribe to events sent from your microservice, and the event bus (for example, Amazon EventBridge) takes care of routing the event and handling authentication.
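To make this concrete, here is a minimal sketch of what publishing such an event could look like with the AWS SDK for JavaScript v3. The bus name, event source, and event shape are illustrative assumptions, not part of the original example:

import { EventBridgeClient, PutEventsCommand } from "@aws-sdk/client-eventbridge";

const eventBridge = new EventBridgeClient({});

// Publish an "OrderPlaced" event to an EventBridge bus. Subscribers attach
// their own rules to the bus; the publisher knows nothing about them.
export async function publishOrderPlaced(order) {
  await eventBridge.send(new PutEventsCommand({
    Entries: [{
      EventBusName: "orders",        // hypothetical bus name
      Source: "com.example.orders",  // hypothetical source identifier
      DetailType: "OrderPlaced",
      Detail: JSON.stringify(order), // e.g. { orderId, items, total }
    }],
  }));
}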

Error handling and retries

Another reason to migrate to event-driven architecture is to handle unpredictable traffic. Ecommerce websites like Amazon.com have variable amounts of traffic depending on the day. Once you place an order, several things happen.

First, Amazon checks your credit card to make sure that funds are available. Then, Amazon has to pack the merchandise and load it onto trucks; that all happens in an Amazon fulfillment center. There is no synchronous API call for the Amazon backend to package and ship products. After the system confirms your payment, the front end assembles an event describing the order (your account number, credit card info, and what you bought) and puts it onto a queue in the cloud. Later, another piece of software removes the event from the queue and starts the packaging and shipping.
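As a rough sketch (not from the original post), that queue step might look like the following with Amazon SQS and the AWS SDK for JavaScript v3. The queue URL and message shape are assumptions:

import {
  SQSClient,
  SendMessageCommand,
  ReceiveMessageCommand,
  DeleteMessageCommand,
} from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});
const QueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"; // hypothetical

// The front end enqueues the order event once payment is confirmed.
export async function enqueueOrder(orderEvent) {
  await sqs.send(new SendMessageCommand({
    QueueUrl,
    MessageBody: JSON.stringify(orderEvent),
  }));
}

// A separate worker drains the queue at its own pace to pack and ship.
export async function processNextOrder() {
  const { Messages } = await sqs.send(
    new ReceiveMessageCommand({ QueueUrl, MaxNumberOfMessages: 1 })
  );
  for (const message of Messages ?? []) {
    const orderEvent = JSON.parse(message.Body);
    // ...start packaging and shipping here...
    await sqs.send(new DeleteMessageCommand({
      QueueUrl,
      ReceiptHandle: message.ReceiptHandle,
    }));
  }
}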

The key point about this process is that these processes can all run at different rates. Normally, the rate at which customers place orders and the rate at which the warehouses can get the boxes packed are roughly equivalent. However, on busy days like Prime Day, customers place orders much more quickly than the warehouses can operate.

Ecommerce applications, like Amazon.com, must be able to scale up to handle unpredictable traffic. When a customer places an order, an event bus like Amazon EventBridge receives the event and all of the downstream microservices are able to select the order event for processing. Because each of the microservices can fail independently, there are no single points of failure.

Loose coordination between development teams

Event-driven architectures promote development team independence due to loose coupling between publishers and subscribers. Applications can subscribe to events with routing requirements and business logic that are separate from the publisher and other subscribers. This allows publishers and subscribers to change independently of each other, providing more flexibility to the overall architecture.

Decoupled applications also allow you to build new features faster. Adding new features or extending existing ones can be simpler with event-driven architectures because you either add new events, or modify existing ones. This process removes complexity in your application.

Conclusion

In this post, you learn about the challenges of developing applications with request-response architecture. In request-response architecture, the client must send a request and wait for a response before moving on to its next task. As applications grow in complexity, this tightly coupled architecture can cause issues. Event-driven architectures can increase scalability, fault tolerance, and developer velocity by decoupling components of your application.

For more serverless content, go to serverlessland.com.

AWS Week in Review – May 9, 2022

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/aws-week-in-review-may-9-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Another week starts, and here’s a collection of the most significant AWS news from the previous seven days. This week is also the one-year anniversary of CloudFront Functions. It’s exciting to see what customers have built during this first year.

Last Week’s Launches
Here are some launches that caught my attention last week:

Amazon RDS supports PostgreSQL 14 with three levels of cascaded read replicas – That’s 5 replicas per instance, supporting a maximum of 155 read replicas per source instance with up to 30X more read capacity. You can now build a more robust disaster recovery architecture with the capability to create Single-AZ or Multi-AZ cascaded read replica DB instances in the same Region or across Regions.

Amazon RDS on AWS Outposts storage auto scaling – AWS Outposts extends AWS infrastructure, services, APIs, and tools to virtually any data center. With Amazon RDS on AWS Outposts, you can deploy managed DB instances in your on-premises environments. Now, you can turn on storage auto scaling when you create or modify DB instances by selecting a checkbox and specifying the maximum database storage size.

Amazon CodeGuru Reviewer suppression of files and folders in code reviews – With CodeGuru Reviewer, you can use automated reasoning and machine learning to detect potential code defects that are difficult to find and get suggestions for improvements. Now, you can prevent CodeGuru Reviewer from generating unwanted findings on certain files like test files, autogenerated files, or files that have not been recently updated.

Amazon EKS console now supports all standard Kubernetes resources to simplify cluster management – To make it easy to visualize and troubleshoot your applications, you can now use the console to see all standard Kubernetes API resource types (such as service resources, configuration and storage resources, authorization resources, policy resources, and more) running on your Amazon EKS cluster. More info in the blog post Introducing Kubernetes Resource View in Amazon EKS console.

AWS AppConfig feature flag Lambda Extension support for Arm/Graviton2 processors – Using AWS AppConfig, you can create feature flags or other dynamic configuration and safely deploy updates. The AWS AppConfig Lambda Extension allows you to access this feature flag and dynamic configuration data in your Lambda functions. You can now use the AWS AppConfig Lambda Extension from Lambda functions using the Arm/Graviton2 architecture.

AWS Serverless Application Model (SAM) CLI now supports enabling AWS X-Ray tracing – With the AWS SAM CLI you can initialize, build, package, test locally and in the cloud, and deploy serverless applications. With AWS X-Ray, you have an end-to-end view of requests as they travel through your application, making it easier to monitor and troubleshoot. Now, you can enable tracing by simply adding a flag to the sam init command.

Amazon Kinesis Video Streams image extraction – With Amazon Kinesis Video Streams you can capture, process, and store media streams. Now, you can also request images via API calls or configure automatic image generation based on metadata tags in ingested video. For example, you can use this to generate thumbnails for playback applications or to have more data for your machine learning pipelines.

AWS GameKit supports Android, iOS, and MacOS games developed with Unreal Engine – With AWS GameKit, you can build AWS-powered game features directly from the Unreal Editor with just a few clicks. Now, the AWS GameKit plugin for Unreal Engine supports building games for the Win64, MacOS, Android, and iOS platforms.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Some other updates you might have missed:

🎂 One-year anniversary of CloudFront Functions – I can’t believe it’s been one year since we launched CloudFront Functions. Now, we have tens of thousands of developers actively using CloudFront Functions, with trillions of invocations per month. You can use CloudFront Functions for HTTP header manipulation, URL rewrites and redirects, cache key manipulations/normalization, access authorization, and more. See some examples in this repo. Let’s see what customers built with CloudFront Functions:

  • CloudFront Functions enables Formula 1 to authenticate users at more than 500K requests per second. The solution uses CloudFront Functions to evaluate whether users have access to view the race livestream by validating a token in the request (see the sketch after this list).
  • Cloudinary is a media management company that helps its customers deliver content such as videos and images to users worldwide. For them, Lambda@Edge remains an excellent solution for applications that require heavy compute operations, but lightweight operations that require high scalability can now be run using CloudFront Functions. With CloudFront Functions, Cloudinary and its customers are seeing significantly increased performance. For example, one of Cloudinary’s customers began using CloudFront Functions, and in about two weeks it was seeing 20–30 percent better response times. The customer also estimates that they will see 75 percent cost savings.
  • Based in Japan, DigitalCube is a web hosting provider for WordPress websites. Previously, DigitalCube spent several hours completing each of its update deployments. Now, they can deploy updates across thousands of distributions quickly. Using CloudFront Functions, they’ve reduced update deployment times from 4 hours to 2 minutes. In addition, faster updates and less maintenance work result in better quality throughout DigitalCube’s offerings. It’s now easier for them to test on AWS because they can run tests that affect thousands of distributions without having to scale internally or introduce downtime.
  • Amazon.com is using CloudFront Functions to change the way it delivers static assets to customers globally. CloudFront Functions allows them to experiment with hyper-personalization at scale and optimal latency performance. They have been working closely with the CloudFront team during product development, and they like how it is easy to create, test, and deploy custom code and implement business logic at the edge.
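For illustration only, the token-gating pattern described in the Formula 1 example might look like the CloudFront Function sketched below. The token check is a hypothetical stub, not Formula 1’s actual logic:

// CloudFront Functions use a lightweight JavaScript runtime; a handler
// receives the event and returns either the request (allow) or a response.
function isValid(token) {
  // Hypothetical stub: a real implementation would verify a signed token.
  return typeof token === "string" && token.length > 0;
}

function handler(event) {
  var request = event.request;
  var token = request.querystring.token && request.querystring.token.value;
  if (!isValid(token)) {
    return { statusCode: 403, statusDescription: "Forbidden" };
  }
  return request; // allow the request through to the livestream
}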

AWS open-source news and updates – A newsletter curated by my colleague Ricardo to bring you the latest open-source projects, posts, events, and more. Read the latest edition here.

Reduce log-storage costs by automating retention settings in Amazon CloudWatch – By default, CloudWatch Logs stores your log data indefinitely. This blog post shows how you can reduce log-storage costs by establishing a log-retention policy and applying it across all of your log groups.
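The gist of that approach, sketched here with the AWS SDK for JavaScript v3 (the 30-day retention value is just an example):

import {
  CloudWatchLogsClient,
  DescribeLogGroupsCommand,
  PutRetentionPolicyCommand,
} from "@aws-sdk/client-cloudwatch-logs";

const logs = new CloudWatchLogsClient({});

// Page through every log group and apply the same retention policy.
export async function setRetentionEverywhere(retentionInDays = 30) {
  let nextToken;
  do {
    const page = await logs.send(new DescribeLogGroupsCommand({ nextToken }));
    for (const group of page.logGroups ?? []) {
      await logs.send(new PutRetentionPolicyCommand({
        logGroupName: group.logGroupName,
        retentionInDays,
      }));
    }
    nextToken = page.nextToken;
  } while (nextToken);
}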

Observability for AWS App Runner VPC networking – With X-Ray support in App Runner, you can quickly deploy web applications and APIs at any scale and add tracing without having to manage sidecars or agents. Here’s an example of how you can instrument your applications with the AWS Distro for OpenTelemetry (ADOT).

Upcoming AWS Events
It’s AWS Summits season and here are some virtual and in-person events that might be close to you:

You can now register for re:MARS to get fresh ideas on topics such as machine learning, automation, robotics, and space. The conference will be in person in Las Vegas, June 21–24.

That’s all from me for this week. Come back next Monday for another Week in Review!

Danilo

Apple Mail Now Blocks Email Trackers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/05/apple-mail-now-blocks-email-trackers.html

Apple Mail now blocks email trackers by default.

Most email newsletters you get include an invisible “image,” typically a single white pixel, with a unique file name. The server keeps track of every time this “image” is opened and by which IP address. This quirk of internet history means that marketers can track exactly when you open an email and your IP address, which can be used to roughly work out your location.

So, how does Apple Mail stop this? By caching. Apple Mail downloads all images for all emails before you open them. Practically speaking, that means every message downloaded to Apple Mail is marked “read,” regardless of whether you open it. Apple also routes the download through two different proxies, meaning your precise location also can’t be tracked.

Crypto-Gram uses Mailchimp, which has these tracking pixels turned on by default. I turn them off. Normally, Mailchimp requires them to be left on for the first few mailings, presumably to prevent abuse. The company waived that requirement for me.

[Infographic] Cloud Misconfigurations: Don’t Become a Breach Statistic

Post Syndicated from Rapid7 original https://blog.rapid7.com/2022/05/09/infographic-cloud-misconfigurations-dont-become-a-breach-statistic/

No one wants their company to be named in the latest headline-grabbing data breach. Luckily, there are steps you can take to keep your organization from becoming another security incident statistic — chief among them, avoiding misconfigurations in the cloud.

Our 2022 Cloud Misconfigurations Report found some key commonalities across publicly reported data exposure incidents last year. Check out some of the highlights here, in our latest infographic.

Want to learn more about the cloud misconfigurations and breaches that happened last year? Check out the full 2022 Cloud Misconfigurations Report.

Throttling a tiered, multi-tenant REST API at scale using API Gateway: Part 2

Post Syndicated from Nick Choi original https://aws.amazon.com/blogs/architecture/throttling-a-tiered-multi-tenant-rest-api-at-scale-using-api-gateway-part-2/

In Part 1 of this blog series, we demonstrated why tiering and throttling become necessary at scale for multi-tenant REST APIs, and explored tiering strategy and throttling with Amazon API Gateway.

In this post, Part 2, we will examine tenant isolation strategies at scale with API Gateway and extend the sample code from Part 1.

Enhancing the sample code

To enable this functionality in the sample code (Figure 1), we will make manual changes. First, create one API key for the Free Tier and five API keys for the Basic Tier. Currently, these API keys are private keys for your Amazon Cognito login, but we will make a further change in the backend business logic that will promote them to pooled resources. Note that all of these modifications are specific to this sample code’s implementation; the implementation and deployment of production code may be completely different.

Figure 1. Cloud architecture of the sample code

Next, find the business logic for createKey() in the AWS Lambda function in lambda/create_key.js. It appears like this:

function createKey(tableName, key, plansTable, jwt, rand, callback) {
  // Look up a pool of pre-created API keys for this usage plan.
  const pool = getPoolForPlanId(key.planId);
  if (!pool) {
    // No pool configured: create a dedicated (siloed) API key.
    createSiloedKey(tableName, key, plansTable, jwt, rand, callback);
  } else {
    // A pool exists: hand out a reference to one of the pooled keys.
    createPooledKey(pool, tableName, key, jwt, callback);
  }
}

The getPoolForPlanId() function searches for a pool of keys associated with the usage plan. If there is a pool, we “create” a kind of reference to the pooled resource, rather than a completely new key created by the API Gateway service directly. The file lambda/api_key_pools.js should initially be empty:

exports.apiKeyPools = [];
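For reference, getPoolForPlanId() can be imagined as a simple lookup over this structure. This is a plausible sketch, not necessarily the repository’s exact implementation:

const { apiKeyPools } = require("./api_key_pools");

// Return the pool whose usage plan ID matches, or undefined if the plan
// has no pool (in which case a siloed key is created instead).
function getPoolForPlanId(planId) {
  return apiKeyPools.find((pool) => pool.planId === planId);
}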

In effect, all usage plans have been treated as siloed keys up to now. To change that, populate the data structure with values from the six API keys that were created manually. You will have to look up the IDs of the API keys and usage plans that were created in API Gateway (Figures 2 and 3). Using the AWS console to navigate to API Gateway is the most intuitive way to do this.

Figure 2. A view of the AWS console when inspecting the ID for the Basic usage plan

Figure 3. A view of the AWS Console when looking up the API key value (not the ID)

When done, your code in lambda/api_key_pools.js should be the following, but instead of the ellipses ("..."), the IDs for the usage plans and API keys specific to your environment will appear.

exports.apiKeyPools = [
  {
    planName: "FreePlan",
    planId: "...",
    apiKeys: ["..."]
  },
  {
    planName: "BasicPlan",
    planId: "...",
    apiKeys: ["...", "...", "...", "...", "..."]
  }
];

After making the code changes, run cdk deploy from the command line to update the Lambda functions. Because of how the system is implemented, this change only affects key creation and deletion; updates affect only the user’s specific reference to the key, not the underlying resource managed by API Gateway.

When the web application is run now, it will look similar to before: tenants should not be aware of which tiering strategy they have been assigned. The only way to notice the difference would be to create two Free Tier keys, test them, and note that the value of the X-API-KEY header is unchanged between the two.

Now, you have a virtually unlimited number of users who can have API keys in the Free or Basic Tier. By keeping the Premium Tier siloed, you are subject to the 10,000-API-key maximum (less any keys allocated for the lower tiers). You may consider additional techniques to continue to scale, such as replicating your service in another AWS account.

Other production considerations

The sample code is minimal, and it illustrates just one aspect of scaling a Software-as-a-Service (SaaS) application. There are many other aspects to be considered in a production setting, which we explore in this section.

The throttled endpoint, GET /api, relies only on an API key for authorization, for demonstration purposes. For any production implementation, consider the authentication options for your REST APIs. You may explore and extend the sample to require authentication with Cognito, similar to the /admin/* endpoints in the sample code.

One API key for Free Tier access and five API keys for Basic Tier access are illustrative in sample code but not representative of production deployments. Taking the number of API keys and the service quota into consideration, business and technical decisions may be made to minimize the noisy neighbor effect, such as setting a blast-radius upper threshold of 0.1% of all users. To satisfy that requirement, each tier would need to spread users across at least 1,000 API keys. The number of keys allocated to the Basic or Premium Tier would depend on market needs and pricing strategies. Additional allocations of keys could be held in reserve for troubleshooting, QA, tenant migrations, and key retirement.

In the planning phase of your solution, you will decide how many tiers to provide, how many usage plans are needed, and what throttle limits and quotas to apply. These decisions depend on your architecture and business.

To define API request limits, examine the system API Gateway is protecting and what load it can sustain. For example, if your service will scale up to 1,000 requests per second, it is possible to implement three tiers with a 10/50/40 split: the lowest tier shares one common API key with a 100 request per second limit; an intermediate tier has a pool of 25 API keys with a limit of 20 requests per second each; and the highest tier has a maximum of 10 API keys, each supporting 40 requests per second.
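As a hedged sketch (not part of the sample repository), such a split could be expressed as API Gateway usage plans with the AWS CDK in JavaScript; the construct names and the api variable are assumptions:

// `api` is assumed to be an apigateway.RestApi from aws-cdk-lib.
// Usage plan throttle limits apply per API key associated with the plan.
function addTieredPlans(api) {
  const free = api.addUsagePlan("FreePlan", {
    throttle: { rateLimit: 100, burstLimit: 100 }, // 1 shared key = 10% of capacity
  });
  const basic = api.addUsagePlan("BasicPlan", {
    throttle: { rateLimit: 20, burstLimit: 20 },   // 25 pooled keys x 20 rps = 50%
  });
  const premium = api.addUsagePlan("PremiumPlan", {
    throttle: { rateLimit: 40, burstLimit: 40 },   // up to 10 keys x 40 rps = 40%
  });
  return { free, basic, premium };
}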

Metrics play a large role in continuously evolving your SaaS-tiering strategy (Figure 4). They provide rich insights into how tenants are using the system. Tenant-aware and SaaS-wide metrics on throttling and quota limits can be used to assess the tiering in place, whether tenants’ requirements are being met, and whether the tenant usage profiles currently in use are valid (Figure 5).

Figure 4. Tiering strategy example with 3 tiers and requests allocation per tier

Figure 5. An example SaaS metrics dashboard

API Gateway provides options for the different levels of granularity required, including detailed metrics, and execution and access logging, to enable observability of your SaaS solution. Granular usage metrics, combined with data on underlying resource consumption, help you manage an optimal experience for your tenants, with throttling levels and policies per method and per client.

Cleanup

To avoid incurring future charges, delete the resources. This can be done on the command line by typing:

cd ${TOP}/cdk
cdk destroy

cd ${TOP}/react
amplify delete

${TOP} is the topmost directory of the sample code. For the most up-to-date information, see the README.md file.

Conclusion

In this two-part blog series, we have reviewed the best practices and challenges of effectively guarding a tiered multi-tenant REST API hosted in AWS API Gateway. We also explored how throttling policy and quota management can help you continuously evaluate the needs of your tenants and evolve your tiering strategy to protect your backend systems from being overwhelmed by inbound traffic.

[$] Ways to reclaim unused page-table pages

Post Syndicated from original https://lwn.net/Articles/893726/

One of the memory-management subsystem’s most important jobs is reclaiming unused (or little-used) memory so that it can be put to better use. When it comes to one of the core memory-management data structures — page tables — though, this subsystem often falls down on the job. At the 2022 Linux Storage, Filesystem, Memory-management and BPF Summit (LSFMM), David Hildenbrand led a session on the problems posed by the lack of page-table reclaim and explored options for improving the situation.

A Community Group for Web-interoperable JavaScript runtimes

Post Syndicated from James M Snell original https://blog.cloudflare.com/introducing-the-wintercg/

Today, Cloudflare – in partnership with Vercel, Shopify, and individual core contributors to both Node.js and Deno – is announcing the establishment of a new Community Group focused on the interoperable implementation of standardized web APIs in non-web browser, JavaScript-based development environments.

The W3C and the Web Hypertext Application Technology Working Group (or WHATWG) have long pioneered the efforts to develop standardized APIs and features for the web as a development environment. APIs such as fetch(), ReadableStream and WritableStream, URL, URLPattern, TextEncoder, and more have become ubiquitous and valuable components of modern web development. However, the charters of these existing groups have always been explicitly limited to considering only the specific needs of web browsers, resulting in the development of standards that are not readily optimized for any environment that does not look exactly like a web browser. A good example of this effect is that some non-browser implementations of the Streams standard are an order of magnitude slower than the equivalent Node.js streams and Deno reader implementations due largely to how the API is specified in the standard.

Serverless environments such as Cloudflare Workers, or runtimes like Node.js and Deno, have a broad range of requirements, issues, and concerns that are simply not relevant to web browsers, and vice versa. This disconnect, and the lack of clear consideration of these differences while the various specifications have been developed, has led to a situation where the non-browser runtimes have implemented their own bespoke, ad-hoc solutions for functionality that is actually common across the environments.

This new effort is changing that by providing a venue to discuss and advocate for the common requirements of all web environments, deployed anywhere throughout the stack.

What’s in it for developers?

Developers want their code to be portable. Once they write it, if they choose to move to a different environment (from Node.js to Deno, for instance) they don’t want to have to completely rewrite it just to make it keep doing the exact same thing it already was.

One of the more common questions we get from Cloudflare users is how they can make use of some arbitrary module published to npm that makes use of some set of Node.js-specific or Deno-specific APIs. The answer usually involves pulling in some arbitrary combination of polyfill implementations. The situation is similar with the Deno project, which has opted to integrate a polyfill of the full Node.js core API directly into their standard library. The more these environments implement the same common standards, the more the developer ecosystem can depend on the code they write just working, regardless of where it is being run.

Cloudflare Workers, Node.js, Deno, and web browsers are all very different from each other, but they share a good number of common functions. For instance, they all provide APIs for generating cryptographic hashes; they all deal in some way with streaming data; they all provide the ability to send an HTTP request somewhere. Where this overlap exists, and where the requirements and functionality are the same, the environments should all implement the same standardized mechanisms.
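For example, the following snippet uses only standardized APIs and produces the same result in Node.js (18+), Deno, and Cloudflare Workers (inside a request handler):

// Hash a string with the standard TextEncoder and Web Crypto APIs.
const data = new TextEncoder().encode("hello");
const digest = await crypto.subtle.digest("SHA-256", data);
const hex = [...new Uint8Array(digest)]
  .map((b) => b.toString(16).padStart(2, "0"))
  .join("");
console.log(hex); // identical output in every environment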

The Web-interoperable Runtimes Community Group

The new Web-interoperable Runtimes Community Group (or “WinterCG”) operates under the established processes of the W3C.

The naming of this group is something that took us a while to settle on because it is critical to understanding the goals the group is trying to achieve (and what it is not). The key element is the phrase “web-interoperable”.

We use “web” in exactly the same sense that the W3C and WHATWG communities use the term – precisely: web browsers. The term “web-interoperable”, then, means implementing features in a manner that is either identical or at least as consistent as possible with the way those features are implemented in web browsers. For instance, the way that the new URL() constructor works in browsers is exactly how the new URL() constructor should work in Node.js, in Deno, and in Cloudflare Workers.
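A quick illustration of that kind of interoperability:

// This exact code behaves identically in web browsers, Node.js, Deno,
// and Cloudflare Workers, because all follow the WHATWG URL standard.
const url = new URL("/race?lap=3", "https://example.com");
console.log(url.href);                    // "https://example.com/race?lap=3"
console.log(url.searchParams.get("lap")); // "3"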

It is important, however, to acknowledge the fact that Node.js, Deno, and Cloudflare Workers are explicitly not web browsers. While this point should be obvious, it is important to call out because the differences between the various JavaScript environments can greatly impact the design decisions of standardized APIs. Node.js and Deno, for instance, each provide full access to the local file system. Cloudflare Workers, in contrast, has no local file system; and web browsers necessarily restrict applications from manipulating the local file system. Likewise, while web browsers inherently include a concept of a website’s “origin” and implement mechanisms such as CORS to protect users against a variety of security threats, there is no equivalent concept of “origins” on the server-side where Node.js, Deno, and Cloudflare Workers operate.

Up to now, the W3C and WHATWG have concerned themselves strictly with the needs of web browsers. The new Web-interoperable Runtimes Community Group is explicitly addressing and advocating for the needs of everyone else.

It is not intended that WinterCG will go off and publish its own set of independent standard APIs. Ideas for new specifications that emerge from WinterCG will first be submitted for consideration by existing work streams in the W3C and WHATWG with the goal of gaining the broadest possible consensus. However, should it become clear that web browsers have no particular need for, or interest in, a feature that the other environments (such as Cloudflare Workers) have need for, WinterCG will be empowered to move forward with a specification of its own – with the constraint that nothing will be introduced that intentionally conflicts with or is incompatible with the established web standards.

WinterCG will be open for anyone to participate; it will operate under the established W3C processes and policies; all work will be openly accessible via the “wintercg” GitHub organization; and everything it does will be centered on the goal of maximizing interoperability.

Work in Progress

WinterCG has already started work on a number of important work items.

The Minimum Common Web API

From the introduction in the current draft of the specification:

“The Minimum Common Web Platform API is a curated subset of standardized web platform APIs intended to define a minimum set of capabilities common to Browser and Non-Browser JavaScript-based runtime environments.”

Or put another way: It is a minimal set of existing web APIs that will be implemented consistently and correctly in Node.js, Deno, and Cloudflare Workers. Most of the APIs, with some exceptions and nuances, already exist in these environments, so the bulk of the work remaining is to ensure that those implementations are conformant to their relative specifications and portable across environments.

The table below lists all the APIs currently included in this subset (along with an indication of whether the API is currently or likely soon to be supported by Node.js, Deno, and Cloudflare Workers):

API Node.js Deno Cloudflare Workers
AbortController ✔️ ✔️ ✔️
AbortSignal ✔️ ✔️ ✔️
ByteLengthQueuingStrategy ✔️ ✔️ ✔️
CompressionStream ✔️ ✔️ ✔️
CountQueuingStrategy ✔️ ✔️ ✔️
Crypto ✔️ ✔️ ✔️
CryptoKey ✔️ ✔️ ✔️
DecompressionStream ✔️ ✔️ ✔️
DOMException ✔️ ✔️ ✔️
Event ✔️ ✔️ ✔️
EventTarget ✔️ ✔️ ✔️
ReadableByteStreamController ✔️ ✔️ ✔️
ReadableStream ✔️ ✔️ ✔️
ReadableStreamBYOBReader ✔️ ✔️ ✔️
ReadableStreamBYOBRequest ✔️ ✔️ ✔️
ReadableStreamDefaultController ✔️ ✔️ ✔️
ReadableStreamDefaultReader ✔️ ✔️ ✔️
SubtleCrypto ✔️ ✔️ ✔️
TextDecoder ✔️ ✔️ ✔️
TextDecoderStream ✔️ ✔️ (soon)
TextEncoder ✔️ ✔️ ✔️
TextEncoderStream ✔️ ✔️
TransformStream ✔️ ✔️ ✔️
TransformStreamDefaultController ✔️ ✔️ (soon)
URL ✔️ ✔️ ✔️
URLPattern ? ✔️ ✔️
URLSearchParams ✔️ ✔️ ✔️
WritableStream ✔️ ✔️ ✔️
WritableStreamDefaultController ✔️ ✔️ ✔️
globalThis.self ? ✔️ (soon)
globalThis.atob() ✔️ ✔️ ✔️
globalThis.btoa() ✔️ ✔️ ✔️
globalThis.console ✔️ ✔️ ✔️
globalThis.crypto ✔️ ✔️ ✔️
globalThis.navigator.userAgent ? ✔️ ✔️
globalThis.queueMicrotask() ✔️ ✔️ ✔️
globalThis.setTimeout() / globalThis.clearTimeout() ✔️ ✔️ ✔️
globalThis.setInterval() / globalThis.clearInterval() ✔️ ✔️ ✔️
globalThis.structuredClone() ✔️ ✔️ ✔️

Whenever one of the environments diverges from the standardized definition of the API (such as Node.js’ implementation of setTimeout() and setInterval()), clear documentation describing the differences will be made available. Such differences should only exist for backwards compatibility with existing code.
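One concrete example of the kind of divergence that would be documented:

// Per the standard (and in browsers), setTimeout() returns a number;
// Node.js instead returns a Timeout object for backwards compatibility.
const t = setTimeout(() => {}, 1000);
console.log(typeof t); // "object" in Node.js, "number" in browsers
clearTimeout(t);       // clearing works the same way in both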

Web Cryptography Streams

The Web Cryptography API provides a minimal (and very limited) API for common cryptography operations. One of its key limitations is the fact that – unlike Node.js’ built-in crypto module – it does not have any support for streaming inputs and outputs to symmetric cryptographic algorithms. All Web Cryptography features operate on chunks of data held in memory, all at once. This strictly limits the performance and scalability of cryptographic operations. Using these APIs in any environment that is not a web browser, and trying to make them perform well, quickly becomes painful.
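The contrast is easy to see in Node.js, where the built-in crypto module hashes incrementally while Web Crypto requires the whole input up front (the file name below is illustrative):

import { createHash } from "node:crypto";
import { createReadStream } from "node:fs";
import { readFile } from "node:fs/promises";

// Streaming (Node.js crypto): constant memory, input of any size.
const hash = createHash("sha256");
for await (const chunk of createReadStream("big-file.bin")) hash.update(chunk);
console.log(hash.digest("hex"));

// One-shot (Web Crypto): the entire input must be held in memory at once.
const buf = await readFile("big-file.bin");
const digest = await crypto.subtle.digest("SHA-256", buf);
console.log(Buffer.from(digest).toString("hex"));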

To address that issue, WinterCG has started drafting a new specification for Web Crypto Streams that will be submitted to the W3C for consideration as part of a larger effort currently being bootstrapped by the W3C to update the Web Cryptography specification. The goal is to bring streaming crypto operations to the whole of the web, including web browsers, in a way that conforms with existing standards.

A subset of fetch() for servers

With the recent release of version 18.0.0, Node.js has joined the collection of JavaScript environments that provide an implementation of the WHATWG standardized fetch() API. There are, however, a number of important differences between the way Node.js, Deno, and Cloudflare Workers implement fetch() versus the way it is implemented in web browsers.

For one, server environments do not have a concept of “origin” like a web browser does. Features such as CORS intended to protect against cross-site scripting vulnerabilities are simply irrelevant on the server. Likewise, where web browsers are generally used by one individual user at a time and have a concept of a globally-scoped cookie store, server and serverless applications can be used by millions of users simultaneously and a globally-scoped cookie store that potentially contains session and authentication details would be both impractical and dangerous.

Because of the acute differences in the environments, it is often difficult to reason about, and gain consensus on, proposed changes in the fetch standard. Some proposed new API, for instance, might be fantastically relevant to fetch users on a server but completely useless to fetch users in a web browser. Some set of security concerns that are relevant to the Browser might have no impact whatsoever on the server.

To address this issue, and to make it easier for non-web browser environments to implement fetch in a consistent way, WinterCG is working on documenting a subset of the fetch standard that deals specifically with those different requirements and constraints.

Critically, this subset will be fully compatible with the fetch standard; and is being cooperatively developed by the same folks who have worked on fetch in Node.js, Deno, and Cloudflare Workers. It is not intended that this will become a competing definition of the fetch standard, but rather a set of documented guidelines on how to implement fetch correctly in these other environments.

We’re just getting started

The Web-interoperable Runtimes Community Group is just getting started, and we have a number of ambitious goals. Participation is open to everyone, and all work will be done in the open via GitHub at https://github.com/wintercg. We are actively seeking collaboration with the W3C, the WHATWG, and the JavaScript community at large to ensure that web features are available, work consistently, and meet the requirements of all web developers working anywhere across the stack.

For more information on the WinterCG, refer to https://wintercg.org. For details on how to participate, refer to https://github.com/wintercg/admin.

Cloudflare and StackBlitz partner to deliver an instant and secure developer experience

Post Syndicated from Adam Janiš original https://blog.cloudflare.com/cloudflare-stackblitz-partnership/

We are starting our Platform Week focused on the most important aspect of a developer platform — developers. At the core of every announcement this week is developer experience. In other words, it doesn’t matter how groundbreaking the technology is if at the end of the day we’re not making your job as a developer easier.

Earlier today, we announced the general availability of a new Wrangler version, making it easier than ever to get started and develop with Workers. We’re also excited to announce that we’re partnering with StackBlitz. Together, we will bring the Wrangler experience closer to you – directly to your browser, with no dependencies required!

StackBlitz is a web-based code editor that provides a fresh and fast development environment on each page load. StackBlitz’s development environments are powered by WebContainers, the first WebAssembly-based operating system, which boots secure development environments entirely within your browser tab.

Introducing new Wrangler, running in your browser

One of the Wrangler improvements we announced today is the option to easily run Wrangler in any Node.js environment, including your browser, where it is now powered by WebContainers!

StackBlitz’s WebContainers are optimized for starting any project within seconds, including the installation of all dependencies. Whenever you’re ready to start a fresh development environment, you can refresh the browser tab running StackBlitz’s editor and have everything instantly ready to go.

Don’t just take our word for it: you can test this out yourself by opening up a sample project at https://workers.new/typescript.
Note: currently, only Chromium-based browsers are supported.

You can think of WebContainers as an in-browser operating system: they include features like a file system, multi-process and multi-threading application support, and a virtualized TCP network stack with the use of Service Workers.

Interested in learning more about WebContainers? Check out the introduction blog post or WebContainer working group GitHub repository.

Powering a better developer experience and documentation

We’re excited about all the possibilities that instant development environments running in the browser open us up to. For example, they enable us to embed or link full code projects directly from our documentation examples and tutorials without waiting for a remote server to spin up a container with your environment.

Try out the following templates and get a little sneak peek of the developer experience we are working together to enable; running a new Workers application locally has never been easier!

https://workers.new/router
https://workers.new/durable-objects
https://workers.new/typescript

What’s next

StackBlitz supports running Wrangler in a local mode today, and we are working together to enable features that require authentication to bring the full developer lifecycle inside your browser – including development on the edge, publishing, and debugging or tailing logs of your published Workers.

Share what you have built with us and stay tuned for more updates! Make sure to follow us on Twitter or join our Discord Developers Community server.

10 things I love about Wrangler v2.0

Post Syndicated from Sunil Pai original https://blog.cloudflare.com/10-things-i-love-about-wrangler/

Last November, we announced the beta release of a full rewrite of Wrangler, our CLI for building Cloudflare Workers. Since then, we’ve been working round the clock to make sure it’s feature complete, bug-free, and easy to use. We are proud to announce that Wrangler goes public today for general usage, and can’t wait to see what people build with it!

Rewrites can be scary. Our goal for this version of Wrangler was backward compatibility with the original version, while significantly improving the developer experience. I’d like to take this opportunity to present 10 reasons why you should upgrade to the new Wrangler!

1. It’s simpler to install:

A simpler way to get started.

Previously, folks would have to install @cloudflare/wrangler globally on a system. This made it hard to use different versions of Wrangler across projects. Further, it was hard to install on some CI systems because of a lack of access to the user’s root folder. Sometimes, folks would forget to add the @cloudflare scope when installing, and were confused when a completely unrelated package was installed and didn’t work as expected.

Let’s fix that. We’ve simplified this by now publishing to the wrangler package, so you can run npm install wrangler and it works as expected. You can also install it locally to a project’s package.json, like any other regular npm package. It also works across a much broader range of CPU architectures and operating systems, so you can use it on more machines.

This makes it a lot more convenient when starting. But why stop there?

2. Zero config startup:

Get started with zero configuration

It’s now much simpler to get started with a new project. Previously, you would have to create a wrangler.toml configuration file, and fill it in with details about your cloudflare account, how the project was structured, setting up a custom build process and so on. We heard feedback from many of you who would get frustrated during this step, and how it would take many minutes before you could get to developing a simple Worker.

Let’s fix that. You no longer need to create a configuration file when starting, and none of the fields are mandatory. Wrangler infers details about your account and project as you start developing, and you can add configuration incrementally when you need to.

In fact, you don’t even need to install Wrangler to start! You can create a Worker (say, as index.js) and use npx (a utility that comes installed with node.js) to fetch Wrangler from the npm registry and start developing immediately!
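For instance, the file below is a complete Worker on its own; assuming it is saved as index.js, running npx wrangler dev index.js is enough to start developing against it:

// index.js: an entire Worker in module syntax; no wrangler.toml needed.
export default {
  async fetch(request) {
    return new Response("Hello from a zero-config Worker!");
  },
};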

This is great for extremely simple Workers, but why stop there?

3. wrangler init my-worker -y, one liner to set up a full project:

We noticed users would struggle to set up a project with Wrangler, even after they’d installed Wrangler and configured wrangler.toml. Most users want to set up a package.json, commonly use typescript to write code, and set up git to track changes in this project. So, we expanded the wrangler init <project name> command to set up a production grade project. You can optionally choose to use typescript, install the official type definitions for Workers, and use git to track changes.

My favorite trick here is to pass -y to accept all questions without asking. Try running npx wrangler init my-worker -y in your terminal today!

One line to set up a full Workers project

4. --local mode:

Wrangler typically runs a development server on our global network, setting up a local proxy when developing, so you can develop against a “real” environment in the cloud. This is great for making sure the code you develop will behave the same in development, and after you deploy it to production. The trade-off here is it’s harder to develop code when you have a bad Internet connection, or if you’re running tests on a CI machine. It’s also marginally slower to iterate while coding. Users have asked us for a long time to be able to ‘run’ their code locally on their machines, so that they can iterate quickly and run tests in environments like CI.

Wrangler now lets you develop on your machine by simply calling wrangler dev --local, with no additional configuration. This is powered by Miniflare, a fully featured simulator of the Cloudflare Workers runtime. You can even toggle between ‘edge’ and ‘local’ modes by tapping the ‘L’ hotkey when developing, however you prefer!

Local mode, powered by Miniflare.

5. Tail any Worker, any time:

Tail your logs anywhere, anytime. 

It’s useful to be able to “tail” a Worker’s output to a terminal, and see what’s going on in real time. While you can already view these logs in the Workers dashboard, some people are more comfortable seeing the logs in their terminal, and then slicing and dicing to debug any issues that may be occurring. Previously, you would have to check out a Worker’s repository locally, install dependencies, and then call wrangler tail in the project folder. This was clearly cumbersome, and relied on developer expertise just to see something as simple as a Worker’s logs.

Now you can simply call npx wrangler tail <worker name> in your terminal, without any configuration or setup, and immediately see the logs that you expect. We use this ourselves to quickly inspect our production Workers and see what’s going on inside them!

6. Better warnings and errors, everywhere:

One of the worst feelings a developer can face is being presented with an error when writing code, and not knowing how to fix it and proceed. We heard feedback from many of you who were frustrated with the lack of error messages, and how you would spend hours trying to figure out what went wrong. We’ve now added new error and warning messages, so you can easily spot the problems in your code. When possible, we also include steps you can follow to fix your Worker, including things that you can simply copy and paste! This makes Wrangler much more friendly to use, and we promise the experience will only get better.

7. On-demand developer tools for debugging:

We introduced initial support for debugging Workers in Wrangler in September, which enabled debugging a Worker directly on our global network. However, getting started with debugging was still a bit cumbersome: you would have to start Wrangler with an --inspect flag, open a special page in your browser (chrome://inspect), configure it to detect Wrangler running on a special port, and then launch the debugger. This also meant you might have lost any debugging messages that were logged before you opened the Chrome developer tools.

We fixed this. Now you don’t need to pass any special flags when starting up. You can simply hit the D hotkey when developing and a developer tools instance pops up in your browser. And by buffering the messages before you even start up the devtools, you don’t lose any logs or errors! You can also use VS Code developer tools to directly hook into your Worker’s debugging session!

8. A modern module system:

Modern JavaScript isn’t simply about the syntax that the language supports, but also writing code as modules, and leveraging the extremely broad ecosystem of community libraries and frameworks. Previously, Wrangler required that you set up webpack or a custom build with bundlers (like rollup, vite, or esbuild, to name a few) to consume libraries and modules. This introduces a lot of friction, especially when starting a new project and trying out new ideas.

Now, support for npm modules comes out of the box, with no extra configuration required! You can install any package from the npm registry, organize your own code with modules, and it all works as expected. We’re also introducing an experimental node.js compatibility mode for using node.js modules that wouldn’t previously work without setting up your own polyfills! This means you can use popular frameworks and libraries that you’re already familiar with while focusing on delivering value to your users.
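As a small sketch of what this unlocks (the cookie package is just an illustrative choice): after running npm install cookie, a Worker can import it directly, and Wrangler bundles it with no extra configuration:

import { parse } from "cookie"; // any npm package, bundled automatically

export default {
  async fetch(request) {
    const cookies = parse(request.headers.get("Cookie") || "");
    return new Response(`Hello, ${cookies.user ?? "stranger"}!`);
  },
};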

9. Closes many outstanding issues:

A rewrite should be judged not just by the new features that are implemented, but by how many existing issues are resolved. We went through hundreds of outstanding issues and bugs with Wrangler, and are happy to say that we solved almost all of them! Across the board, every command and feature got a facelift, bug fixes, and test coverage to make sure it doesn’t break in the future. Developers using Cloudflare Workers will be glad to hear that simply upgrading Wrangler will immediately fix previous concerns and problems. Which leads us to my absolute favorite feature…

10. A commitment to improve:

The effort that went into building Wrangler v2.0, visualised.

Wrangler has always been special software for us. It represents the primary interface that developers use to interact and use Cloudflare Workers, and we have major plans for the future. We have invested time, effort and resources to make sure Wrangler is the best tool for developers to use, and we’re excited to see what the future holds. This is a commitment to our users and community that we will only keep improving on this foundation, and folks can expect their feedback and concerns to be heard loud and clear.

Open source Managed Components for Cloudflare Zaraz

Post Syndicated from Yo'av Moshe original https://blog.cloudflare.com/zaraz-open-source-managed-components-and-webcm/

In early 2020, we sat down and tried thinking if there’s a way to load third-party tools on the Internet without slowing down websites, without making them less secure, and without sacrificing users’ privacy. In the evening, after scanning through thousands of websites, our answer was “well, sort of”. It seemed possible: many types of third-party tools are merely collecting information in the browser and then sending it to a remote server. We could theoretically figure out what it is that they’re collecting, and then instead just collect it once efficiently, and send it server-side to their servers, mimicking their data schema. If we do this, we can get rid of loading their JavaScript code inside websites completely. This means no more risk of malicious scripts, no more performance losses, and fewer privacy concerns.

But the answer wasn’t a definite “YES!” because we realized this is going to be very complicated. We looked into the network requests of major third-party scripts, and often it seemed cryptic. We set ourselves up for a lot of work, looking at the network requests made by tools and trying to figure out what they are doing – What is this parameter? When is this network request sent? How is this value hashed? How can we achieve the same result more securely, reliably and efficiently? Our team faced these questions on a daily basis.

When we joined Cloudflare, the scale of everything changed. Suddenly we were on thousands of websites, serving more than 10,000 requests per second. Users are writing to us every single day over our Discord channel, the community forum, and sometimes even directly on Twitter. More often than not, their messages would be along the lines of “Hi! Can you please add support for X?” Cloudflare Zaraz launched with around 30 tools in its library, but this market is vast and new tools are popping up all the time.

Changing our trust model

In my previous blog post on how Zaraz uses Cloudflare Workers, I included some examples of how tool integrations are written in Zaraz today. Usually, a “tool” in Zaraz would be a function that prepares a payload and sends it. This function could return one thing – clientJS, JavaScript code that the browser would later execute. We’ve done our best so that tools wouldn’t use clientJS, if it wasn’t really necessary, and in reality most Zaraz-built tool integrations are not using clientJS at all.

This worked great, as long as we were the ones coding all tool integrations. Customers trusted us that we’d write code that is performant and safe, and they trusted the results they saw when trying Zaraz. Upon joining Cloudflare, many third-party tool vendors contacted us and asked to write a Zaraz integration. We quickly realized that our system wasn’t enforcing speed and safety – vendors could literally just dump their old browser-side JavaScript into our clientJS variable, and say “We have a Cloudflare Zaraz integration!”, and that wasn’t our vision at all.

We want third-party tool vendors to be able to write their own performant, safe server-side integrations. We want to make it possible for them to reimagine their tools in a better way. We also want website owners to have transparency into what is happening on their website, to be able to manage and control it, and to trust that if a tool is running through Zaraz, it must be a good tool — not because of who wrote it, but because of the technology it is constructed within. We realized that to achieve that we needed a new format for defining third-party tools.

Introducing Managed Components

We started rethinking how third-party code should be written. Today, it’s a black box – you usually add a script to your site, and you have zero clue what it does and when. You can’t properly read or analyze the minified code. You don’t know if the way it behaves for you is the same way it behaves for everyone else. You don’t know when it might change. If you’re a website owner, you’re completely in the dark.

Tools do many different things. The simple ones just collect information and send it somewhere. Often, they set some cookies. Sometimes, they install event listeners on the page. And widget-based tools can literally manipulate the page DOM, providing new functionality like a social media embed or a chatbot. Our new format needed to support all of this.

Managed Components is how we imagine the future of third-party tools online. It provides vendors with an API that allows them to do much more than a normal script can, including keeping code execution outside the browser. We designed this format together with vendors, for vendors, keeping in mind that users’ best interest is everyone’s best interest in the long term.

From the get-go, we built Managed Components on a permission-based system. We want to provide even more transparency than Zaraz does today. The new API allows tools to set cookies, change the DOM, or collect IP addresses, and each of those abilities requires being granted a permission. Installing a third-party tool on your site becomes similar to installing an app on your phone: you get an explanation of what the tool can and can’t do, and you can allow or disallow features at a granular level. We previously wrote about how you can use Zaraz to avoid sending IP addresses to Google Analytics, and now we’re doubling down in this direction. It’s your website, and it’s your decision to make.
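
To make this concrete, here is a sketch of what declaring permissions could look like from the component’s side. The shape and the permission names below are illustrative assumptions, not the finalized Managed Components spec:

// Hypothetical permission declaration for a Managed Component –
// names and structure are assumptions for illustration only.
export const permissions = [
  "client_network_requests", // send payloads to the vendor's server
  "set_cookies",             // persist a visitor identifier
  // "read_client_ip" is deliberately omitted: without this permission,
  // the Components Manager never exposes the visitor's IP address.
];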

Every Managed Component is a JavaScript module at its core. Unlike today, this JavaScript code isn’t sent to the browser. Instead, it is executed by a Components Manager. This manager implements the APIs that components use: it dispatches server-side events that originate in the browser, giving components access to information while keeping them sandboxed and performant. It handles caching, storage and more – all so that Managed Components can implement their logic without worrying about their surroundings.

An example analytics Managed Component can look something like this:

export default function (manager) {
  manager.addEventListener("pageview", ({ context, client }) => {
    // Collect the page URL and the visitor's user agent, and send them
    // to the vendor's server – without running any code in the browser.
    fetch("https://example.com/collect", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        url: context.page.url.href,
        userAgent: client.device.userAgent,
      }),
    });
  });
}

The above component is notified whenever a page view occurs. It then builds a payload with the visitor’s user agent and the page URL, and sends it as a POST request to the vendor’s server. This is very similar to how things are done today, except that it doesn’t require running any code in the browser at all.

But Managed Components don’t just do what was previously possible, only better – they also provide dramatically new functionality. See, for example, how we’re exposing server-side endpoints:

export default function (manager) {
  // Proxy requests under /api to the vendor's own API server
  const api = manager.proxy("/api", "https://api.example.com");
  // Serve the component's static files under /assets
  const assets = manager.serve("/assets", "assets");
  // Expose a dynamic endpoint that responds entirely server-side
  const ping = manager.route("/ping", (request) => new Response(null, { status: 204 }));
}

These three lines are a complete shift in what’s possible for third parties. If granted the permissions, they can proxy content, serve assets, and expose their own endpoints – all under the same domain as the website itself. If a tool needs to do some processing, it can now offload that from the browser completely, without forcing the browser to communicate with a third-party server.
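
To illustrate what this enables on the page itself – assuming the manager mounts those paths on the website’s own origin, as in the snippet above – the browser can now reach the tool without ever leaving the site’s domain:

// On the website, these requests stay same-origin; the browser
// never opens a connection to a third-party server.
fetch("/api/collect", {
  method: "POST",
  body: JSON.stringify({ event: "signup" }),
});
// Static files are served from the site's own domain too:
// <script src="/assets/widget.js"></script>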

Exciting new capabilities

Every third-party tool vendor should be able to use the Managed Components API to build a better version of their tool. The API we designed is comprehensive, and the benefits for vendors are huge:

  • Same domain: Managed Components can serve assets from the same domain as the website itself. This allows faster and more secure execution, as the browser needs to trust and communicate with only one server instead of many. It can also reduce costs for vendors, as their bandwidth usage drops.
  • Website-wide events system: Managed Components can hook into a pre-existing events system that the website already uses for tracking events. Not only is there no need to provide a browser-side API for your tool, it’s also easier for your users to send information to your tool, because they don’t need to learn your methods.
  • Server logic: Managed Components can provide server-side logic on the same domain as the website. This includes proxying a different server, or adding endpoints that generate dynamic responses. The options here are endless, and this, too, can reduce the load on vendors’ servers.
  • Server-side rendered widgets and embeds: Did you ever notice how, when you load an article page online, the content jumps when some YouTube or Twitter embed suddenly appears between the paragraphs? Managed Components provide an API for registering widgets and embeds that render server-side, so when the page arrives in the browser, it already includes the widget in its code. The browser doesn’t need to communicate with another server to fetch some tweet information or styling. It’s part of the page now, so expect a better CLS score (see the sketch after this list).
  • Reliable cross-platform events: Managed Components can subscribe to client-side events such as clicks, scrolls and more, without needing to worry about browser or device support. Not only that – those same events will work outside the browser too – but we’ll get to that later.
  • Pre-Response Actions: Managed Components can execute server-side actions before the network response even arrives in the browser. Those actions can access the response object, reading or altering it.
  • Integrated Consent Manager support: Managed Components are predictable and scoped. The Components Manager knows what they’ll need and can predict what kind of consent is needed to run them.
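
As a taste of the widgets API mentioned above, here is a sketch of a server-side rendered embed. The registerEmbed name and the callback shape are assumptions for illustration; the actual Managed Components API may differ:

export default function (manager) {
  // Hypothetical API: register an embed that is rendered server-side,
  // before the page is sent to the browser.
  manager.registerEmbed("example-tweet", async ({ element }) => {
    // Fetch the embed's content once, on the server, and render it
    // straight into the HTML – the browser never contacts the
    // third-party server, and nothing jumps when the page loads.
    const response = await fetch("https://api.example.com/tweet/123");
    const tweet = await response.json();
    element.innerHTML = `<blockquote>${tweet.text}</blockquote>`;
  });
}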

The right choice: open source

As we started working with vendors on creating Managed Components for their tools, we heard a recurring concern: “What Components Managers are there? Will this only be useful for Cloudflare Zaraz customers?” While Cloudflare Zaraz is indeed a Components Manager, and it has a generous free tier, we realized we needed to think much bigger. We want to make Managed Components available to everyone on the Internet, because we want the Internet as a whole to be better.

Today, we’re announcing much more than just a new format.

WebCM is a reference implementation of the Managed Components API. It is a complete Components Manager that we will soon release and maintain. You will be able to use it as an SDK when building your Managed Component, and you will also be able to use it in production to load Managed Components on your website, even if you’re not a Cloudflare customer. WebCM works as a proxy: you place it in front of your website, and it rewrites your pages when necessary and adds a couple of endpoints. This makes WebCM 100% framework-agnostic – it doesn’t matter whether your website uses Node.js, Python or Ruby behind the scenes; as long as you’re sending out HTML, WebCM supports it.
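
Since WebCM isn’t released yet, the configuration below is purely a hypothetical sketch of what fronting a site with it could look like; the actual format may well differ:

// Hypothetical WebCM configuration – illustrative only.
export default {
  // The website WebCM proxies; WebCM rewrites its HTML as needed
  target: "http://localhost:3000",
  // WebCM itself listens here, in front of the site
  port: 8080,
  // Managed Components to load, with their granted permissions
  components: [
    { name: "google-analytics", permissions: ["client_network_requests"] },
  ],
};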

That’s not all though! We’re also going to open source a few Managed Components of our own. We converted some of our classic Zaraz integrations to Managed Components, and they will soon be available for you to use and improve. You will be able to take our Google Analytics Managed Component, for example, and use WebCM to run Google Analytics on your website, 100% server-side, without Cloudflare.

Tech-leading vendors are already joining

Revolutionizing third-party tools on the Internet is something we could only do together with third-party vendors. We love third-party tools, and we want them to be even more popular. That’s why we worked very closely with a few leading companies on creating their own Managed Components. These new Managed Components extend Zaraz’s capabilities far beyond what’s possible now, and will provide a safe and secure onboarding experience for new users of these tools.


Drift

Drift helps businesses connect with customers in moments that matter most. Drift’s integration will let customers use their fully-featured conversation solution while also keeping it completely sandboxed and without making third-party network connections, increasing privacy and security for our users.


Crazy Egg

Crazy Egg helps customers make their websites better through visual heatmaps, A/B testing, detailed recordings, surveys and more. Website owners, Cloudflare, and Crazy Egg all care deeply about performance, security and privacy. Managed Components have enabled Crazy Egg to do things that simply aren’t possible with third-party JavaScript, which means our mutual customers will get one of the most performant and secure website optimization tools created.

We also already have customers that are eager to implement Managed Components:


Hopin

“I have been really impressed with Cloudflare’s Zaraz ability to move Drift’s JS library to an Edge Worker while loading it off the DOM. My work is much more effective due to the savings in page load time. It’s a pleasure to work with two companies that actively seek better ways to increase both page speed and load times with large MarTech stacks.”
– Sean Gowing, Front End Engineer, Hopin

If you’re a third-party vendor and you want to join these tech-leading companies, do reach out to us – we’d be happy to support you in writing your own Managed Component.

What’s next for Managed Components

We’re working on Managed Components on many fronts now. While we develop and maintain WebCM, work with vendors and integrate Managed Components into Cloudflare Zaraz, we’re already thinking about what’s possible in the future.

We see a future where many open source runtimes for Managed Components exist. Perhaps your infrastructure doesn’t allow you to use WebCM? We want to see Managed Components runtimes created as service workers, HTTP servers, proxies and framework plugins. We’re also working on making Managed Components available in mobile applications, and on allowing unofficial Managed Components installs on Cloudflare Zaraz. We’re fixing a long-standing issue of the web, and there’s so much to do.

We will very soon publish the full specs of Managed Components. We will also open source WebCM, the reference implementation server, as well as many components you can use yourself. If this is interesting to you, reach out to us at [email protected], or join us on Discord.

The next chapter for Cloudflare Workers: open source

Post Syndicated from Rita Kozlov original https://blog.cloudflare.com/workers-open-source-announcement/

450,000 developers have used Cloudflare Workers since we launched.

When we announced Cloudflare Workers nearly five years ago, we had no idea if we’d ever be in this position. But a lot of care, hard work — not to mention dogfooding — later, we’ve been absolutely blown away by the use cases and applications built on our developer platform, not to mention the community that’s grown around the product.

My job isn’t just speaking to developers who are already using Cloudflare Workers, however. I spend a lot of time talking to developers who aren’t yet using Workers, too. Despite how cool the tech is — the performance, the ability to just code without worrying about anything else like containers, and the total cost advantages — there are two things that cause developers to hesitate in engaging with us on Workers.

The first: they worry about being locked in. No matter how bullish on the technology you are, if you’re betting the future of a company on a development platform, you don’t want the possibility of being held to ransom. And second: as a developer, you want a local development environment to quickly iterate and test your changes. These concerns might seem unrelated, but they always come up in the form of the same question: can Cloudflare please open source the runtime?

We’re excited to put these concerns to bed. As the first announcement of Platform Week, today Cloudflare is announcing the open sourcing of the Workers runtime under the Apache-2.0 license!

While the code itself will be the best answer to most of the questions you have (we still have some work to do before we’re ready to share it), the questions we did want to answer today were: why are we doing this, and why now?

Development on the web has always been done in the open. If you’re like me, maybe your very first experience writing and looking at code was clicking on “View Source” on a website, and inspecting the HTML to see what pieces you could borrow. So many of the foundational pieces you build on today are open source, from the site, to the browser, to the many frameworks and libraries that are now available to developers. The same is true for us: so much of what we’re able to build stands on the shoulders of giants like V8.

It was never our intention to introduce opaqueness into the stack, but in reality, when we first announced Workers five years ago, we took a really huge bet.

We wanted to give developers the ability to program on our network, but we couldn’t do it at the expense of performance or security. While building on a battle-tested technology like V8 seemed promising from a security standpoint, existing runtimes built on V8 couldn’t give us the security guarantees we needed to run a large multi-tenant environment without the added security layer of a container, which would introduce latency (read: cold starts). Not only were cold starts unacceptable, but our data centers are much smaller than the centralized monoliths of traditional cloud. Even if we could run existing applications on the edge without cold starts, their code footprint would be far too large to give every single one of our customers access to compute on every node of our global network.

So we had to get inventive, and the first place we looked was web standards – specifically, the Service Workers API. While Service Workers were designed to run in the browser, their model of Requests and Responses fit our use case really well. And we liked the idea that the code you write is portable to other environments (and hoped that new players coming up would support the same model).
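
That model will look familiar to anyone who has written a Worker. A minimal Worker using the Service Worker syntax is just a fetch handler that maps a Request to a Response:

// A minimal Cloudflare Worker written against the Service Workers API:
// the browser's Request/Response model, running on the server instead.
addEventListener("fetch", (event) => {
  event.respondWith(new Response("Hello from the edge!"));
});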

And that’s exactly what happened.

This all might seem obvious in retrospect, but at the time, it was a huge bet. We didn’t know at the time whether this was going to work. Whether this approach would take off, whether this would all work at scale, whether developers would adopt this model, despite it diverging from what JavaScript looked like on the server-side at the time…

What we did know was that we had a lot to prove, that we didn’t want to lock anyone in, and that open sourcing something properly is not an effort we wanted to take lightly. We wanted to support our community the same way we felt supported by all the open source projects we ourselves were building upon.

Now, it feels like we’re finally there, and we believe the next step in our runtime’s evolution is to give it to the world, and let developers build with it wherever they want, however they want.

Of course, since we’re talking about open source, we already know what you’re going to ask next: what license are we going to use? We plan to use the Apache-2.0 license — we want developers to be able to build on our platform freely.

What’s next?

Open sourcing the runtime alone is not enough to let developers write code free of lock-in concerns, which is why we have another announcement coming up today.

And after that, well, if you’ve been following Cloudflare for a while, you know that there’s a certain time in the year, when we like to give back to the Internet. That might be a pretty good bet for the timing of what’s next! 🙂

Security updates for Monday

Post Syndicated from original https://lwn.net/Articles/894353/

Security updates have been issued by CentOS (firefox and thunderbird), Debian (ecdsautils and libz-mingw-w64), Fedora (cifs-utils, firefox, galera, git, java-1.8.0-openjdk, java-11-openjdk, java-17-openjdk, java-latest-openjdk, mariadb, maven-shared-utils, mingw-freetype, redis, and seamonkey), Mageia (dcraw, firefox, lighttpd, rsyslog, ruby-nokogiri, and thunderbird), Scientific Linux (thunderbird), SUSE (giflib, kernel, and libwmf), and Ubuntu (dbus and rsyslog).

Hands off МОЧА!

Post Syndicated from original https://bivol.bg/%D0%B4%D0%BE%D0%BB%D1%83-%D1%80%D1%8A%D1%86%D0%B5%D1%82%D0%B5-%D0%BE%D1%82-%D0%BC%D0%BE%D1%87%D0%B0.html

Monday, May 9, 2022


МОЧА has reportedly been spray-painted using a drone. It was colored in the colors of a certain country. I won’t say which one. Personally, I wholeheartedly approve of this act. This is a civilizational stance,…
