Tag Archives: node.js

Getting started with Projen and AWS CDK

Post Syndicated from Michael Tran original https://aws.amazon.com/blogs/devops/getting-started-with-projen-and-aws-cdk/

In the modern world of cloud computing, Infrastructure as Code (IaC) has become a vital practice for deploying and managing cloud resources. AWS Cloud Development Kit (AWS CDK) is a popular open-source framework that allows developers to define cloud resources using familiar programming languages. A related open source tool called Projen is a powerful project generator that simplifies the management of complex software configurations. In this post, we’ll explore how to get started with Projen and AWS CDK, and discuss the pros and cons of using Projen.

What is Projen?

Building modern and high quality software requires a large number of tools and configuration files to handle tasks like linting, testing, and automating releases. Each tool has its own configuration interface, such as JSON or YAML, and a unique syntax, increasing maintenance complexity.

When starting a new project, you rarely start from scratch; more often, you use a scaffolding tool (for instance, create-react-app) to generate a new project structure. A large amount of configuration is created on your behalf, and you take ownership of those files. Moreover, there is a high number of project generation tools, with new ones created almost every day.

Projen is a project generator that helps developers to efficiently manage project configuration files and build high quality software. It allows you to define your project structure and configuration in code, making it easier to maintain and share across different environments and projects.

Out of the box, Projen supports multiple project types such as AWS CDK construct libraries, React applications, Java projects, and Python projects. New project types can be added by contributors, and projects can be developed in multiple languages. Projen uses the jsii library, which allows us to write APIs once and generate libraries in several languages. Moreover, Projen provides a single interface, the projenrc file, to manage the configuration of your entire project!

The diagram below provides an overview of the deployment process of AWS cloud resources using Projen:

Projen Overview of Deployment process of AWS Resources


  1. In this example, Projen can be used to generate a new project, for instance, a new CDK TypeScript application.
  2. Developers define their infrastructure and application code using AWS CDK resources. To modify the project configuration, developers use the projenrc file instead of directly editing files like package.json.
  3. The project is synthesized to produce an AWS CloudFormation template.
  4. The CloudFormation template is deployed in an AWS account, and provisions AWS cloud resources.

Diagram 1 – Projen packaged features: Projen helps get your project started and allows you to focus on coding instead of worrying about the other project variables. It comes out of the box with linting, unit tests and code coverage, and a number of GitHub Actions for release, versioning, and dependency management.

Pros and Cons of using Projen


Pros:

  1. Consistency: Projen ensures consistency across different projects by allowing you to define standard project templates. You don’t need to use different project generators, only Projen.
  2. Version Control: Since project configuration is defined in code, it can be version-controlled, making it easier to track changes and collaborate with others.
  3. Extensibility: Projen supports various plugins and extensions, allowing you to customize the project configuration to fit your specific needs.
  4. Integration with AWS CDK: Projen provides seamless integration with AWS CDK, simplifying the process of defining and deploying cloud resources.
  5. Polyglot CDK constructs library: Build once, run in multiple runtimes. Projen can convert and publish a CDK Construct developed in TypeScript to Java (Maven) and Python (PYPI) with JSII support.
  6. API documentation: Generate API documentation from code comments, if you are building a CDK construct.


Cons:

  1. Microsoft Windows support: there are a number of open issues about Projen not working fully in Windows environments (https://github.com/projen/projen/issues/2427 and https://github.com/projen/projen/issues/498).
  2. Projen is very opinionated, with many assumptions about architecture, best practices, and conventions.
  3. Projen has not yet reached general availability; at the time of this writing, the version is v0.77.5.


Step 1: Set up prerequisites

  • An AWS account
  • Download and install Node
  • Install yarn
  • AWS CLI : configure your credentials
  • Deploying stacks with the AWS CDK requires dedicated Amazon S3 buckets and other containers to be available to AWS CloudFormation during deployment (More information).

Note: Projen doesn’t need to be installed globally. You will be using npx to run Projen which takes care of all required setup steps. npx is a tool for running npm packages that:

  • live inside of a local node_modules folder
  • are not installed globally.

npx comes bundled with npm version 5.2+

Step 2: Create a New Projen Project

You can create a new Projen project using the following command:

mkdir test_project && cd test_project
npx projen new awscdk-app-ts

This command creates a new TypeScript project with AWS CDK support. The exhaustive list of supported project types is available through the official documentation: Projen.io, or by running the npx projen new command without a project type. It also supports npx projen new awscdk-construct to create a reusable construct which can then be published to other package managers.

The created project structure should be as follows:

| .github/
| .projen/
| src/
| test/
| .eslintrc
| .gitattributes
| .gitignore
| .mergify.yml
| .npmignore
| .projenrc.js
| cdk.json
| package.json
| tsconfig.dev.json
| yarn.lock

Projen generated a new project including:

  • Initialization of an empty git repository, with the associated GitHub workflow files to build and upgrade the project. The release workflow can be customized with projen tasks.
  • .projenrc.js, the main configuration file for the project
  • tasks.json file for integration with Visual Studio Code
  • src folder containing an empty CDK stack
  • License and README files
  • package.json containing functional metadata about the project, such as name, version, and dependencies
  • .gitignore and .gitattributes files to manage your files with git
  • .eslintrc for identifying and reporting patterns in JavaScript code
  • .npmignore to keep files out of the published npm package
  • .mergify.yml for managing pull requests
  • tsconfig.dev.json to configure the compiler options

Most of the generated files include a disclaimer:

# ~~ Generated by projen. To modify, edit .projenrc.js and run "npx projen".

Projen’s power lies in its single configuration file, .projenrc.js. By editing this file, you can manage your project’s lint rules, dependencies, .gitignore, and more. Projen will propagate your changes across all generated files, simplifying and unifying dependency management across your projects.

Projen generated files are considered implementation details and are not meant to be edited manually. If you do make manual changes, they will be overwritten the next time you run npx projen.

To edit your project configuration, simply edit .projenrc.js and then run npx projen to synthesize again. For more information on the Projen API, please see the documentation: http://projen.io/api/API.html.

Projen uses the .projenrc.js file’s configuration to instantiate a new AwsCdkTypeScriptApp with some basic metadata: the project name, CDK version, and the default release branch. Additional APIs are available for this project type to customize it (for instance, to add runtime dependencies).
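As an illustration, a .projenrc.js for this project type looks roughly like the following sketch. The option values and the deps/devDeps entries here are assumptions for illustration; consult the Projen API documentation for the exact options available in your version.

```javascript
// Hypothetical .projenrc.js sketch for an awscdk-app-ts project.
// Values are illustrative, not the exact output of `npx projen new`.
const { awscdk } = require('projen');

const project = new awscdk.AwsCdkTypeScriptApp({
  name: 'test_project',
  cdkVersion: '2.1.0',
  defaultReleaseBranch: 'main',
  // Add runtime dependencies here instead of editing package.json directly
  deps: ['aws-cdk-lib'],
  // Dev dependencies are declared the same way
  devDeps: ['esbuild'],
});

// Synthesize all managed configuration files from this single definition
project.synth();
```

Running `npx projen` after editing this file regenerates every managed file from this one definition.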

Let’s try to modify a property and see how Projen reacts. As an example, let’s update the project name in .projenrc.js:

name: 'test_project_2',

and then run the npx projen command:

npx projen

Once done, you can see that the project name was updated in the package.json file.

Step 3: Define AWS CDK Resources

Inside your Projen project, you can define AWS CDK resources using familiar programming languages like TypeScript. Here’s an example of defining an Amazon Simple Storage Service (Amazon S3) bucket:

1. Navigate to your main.ts file in the src/ directory
2. Modify the imports at the top of the file as follows:

import { App, CfnOutput, Stack, StackProps } from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import { Construct } from 'constructs';

3. Replace line 9 “// define resources here…” with the code below:

const bucket = new s3.Bucket(this, 'MyBucket', {
  versioned: true,
});

new CfnOutput(this, 'TestBucket', { value: bucket.bucketArn });

Step 4: Synthesize and Deploy

Next, we will bootstrap our application. Run the following in a terminal:

$ npx cdk bootstrap

Once you’ve defined your resources, you can synthesize a cloud assembly, which includes a CloudFormation template (or many depending on the application) using:

$ npx projen build

npx projen build will perform several actions:

  1. Build the application
  2. Synthesize the CloudFormation template
  3. Run tests and linter

The synth() method of Projen performs the actual synthesizing (and updating) of all configuration files managed by Projen. This is achieved by deleting all Projen-managed files (if there are any), and then re-synthesizing them based on the latest configuration specified by the user.

You can find an exhaustive list of the available npx projen commands in .projen/tasks.json. You can also use the Projen API project.addTask to add a new task to perform any custom action you need! Tasks are a project-level feature to define a project command system backed by shell scripts.
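For example, a custom task could be registered from .projenrc.js roughly like this (the task name and commands are illustrative, not part of the generated project):

```javascript
// Illustrative fragment of .projenrc.js: registering a custom task.
// Assumes `project` is the AwsCdkTypeScriptApp instance defined above.
const deployDocs = project.addTask('deploy:docs', {
  description: 'Build and publish the documentation',
  exec: 'npm run docs:build',
});

// Additional steps can be appended to an existing task
deployDocs.exec('npm run docs:publish');
```

After re-running `npx projen`, the new task appears in .projen/tasks.json and can be invoked with `npx projen deploy:docs`.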

Deploy the CDK application:

$ npx projen deploy

Projen will use the cdk deploy command to deploy the CloudFormation stack in the configured AWS account by creating and executing a change set based on the template generated by CDK synthesis. The output of the step above should look as follows:

deploy | cdk deploy

✨ Synthesis time: 3.28s

toto-dev: start: Building 387a3a724050aec67aa083b74c69485b08a876f038078ec7ea1018c7131f4605:263905523351-us-east-1
toto-dev: success: Built 387a3a724050aec67aa083b74c69485b08a876f038078ec7ea1018c7131f4605:263905523351-us-east-1
toto-dev: start: Publishing 387a3a724050aec67aa083b74c69485b08a876f038078ec7ea1018c7131f4605:263905523351-us-east-1
toto-dev: success: Published 387a3a724050aec67aa083b74c69485b08a876f038078ec7ea1018c7131f4605:263905523351-us-east-1
toto-dev: deploying... [1/1]
toto-dev: creating CloudFormation changeset...

✅ testproject-dev

✨ Deployment time: 33.48s

testproject-dev.TestBucket = arn:aws:s3:::testproject-dev-mybucketf68f3ff0-1xy2f0vk0ve4r
Stack ARN:

✨ Total time: 36.76s

The application was successfully deployed in the configured AWS account! Also, the Amazon Resource Name (ARN) of the S3 bucket created is available through the CloudFormation stack Outputs tab, and displayed in your terminal under the ‘Outputs’ section.

Clean up

Delete CloudFormation Stack

To clean up the resources created in this section, navigate to the CloudFormation console and delete the stack created. You can also perform the same task programmatically:

$ npx projen destroy

Which should produce the following output:

destroy | cdk destroy
Are you sure you want to delete: testproject-dev (y/n)? y
testproject-dev: destroying... [1/1]

✅ testproject-dev: destroyed

Delete S3 Buckets

The S3 bucket will not be deleted since its retention policy was set to RETAIN. Navigate to the S3 console and delete the created bucket. If you added files to that bucket, you will need to empty it before deletion. See the Deleting a bucket documentation for more information.


Projen and AWS CDK together provide a powerful combination for managing cloud resources and project configuration. By leveraging Projen, you can ensure consistency, version control, and extensibility across your projects. The integration with AWS CDK allows you to define and deploy cloud resources using familiar programming languages, making the entire process more developer-friendly.

Whether you’re a seasoned cloud developer or just getting started, Projen and AWS CDK offer a streamlined approach to cloud resource management. Give it a try and experience the benefits of Infrastructure as Code with the flexibility and power of modern development tools.

Alain Krok

Alain Krok is a Senior Solutions Architect with a passion for emerging technologies. His past experience includes designing and implementing IIoT solutions for the oil and gas industry and working on robotics projects. He enjoys pushing the limits and indulging in extreme sports when he is not designing software.


Dinesh Sajwan

Dinesh Sajwan is a Senior Solutions Architect. His passion for emerging technologies allows him to stay on the cutting edge and identify new ways to apply the latest advancements to solve even the most complex business problems. His diverse expertise and enthusiasm for both technology and adventure position him as a uniquely creative problem-solver.

Michael Tran

Michael Tran is a Sr. Solutions Architect with Prototyping Acceleration team at Amazon Web Services. He provides technical guidance and helps customers innovate by showing the art of the possible on AWS. He specializes in building prototypes in the AI/ML space. You can contact him @Mike_Trann on Twitter.

Node.js 20.x runtime now available in AWS Lambda

Post Syndicated from Pascal Vogel original https://aws.amazon.com/blogs/compute/node-js-20-x-runtime-now-available-in-aws-lambda/

This post is written by Pascal Vogel, Solutions Architect, and Andrea Amorosi, Senior Solutions Architect.

You can now develop AWS Lambda functions using the Node.js 20 runtime. This Node.js version is in active LTS status and ready for general use. To use this new version, specify a runtime parameter value of nodejs20.x when creating or updating functions or by using the appropriate container base image.

You can use Node.js 20 with Powertools for AWS Lambda (TypeScript), a developer toolkit to implement serverless best practices and increase developer velocity. Powertools for AWS Lambda includes proven libraries to support common patterns such as observability, Parameter Store integration, idempotency, batch processing, and more.

You can also use Node.js 20 with Lambda@Edge, allowing you to customize low-latency content delivered through Amazon CloudFront.

This blog post highlights important changes to the Node.js runtime, notable Node.js language updates, and how you can use the new Node.js 20 runtime in your serverless applications.

Node.js 20 runtime updates

Changes to Root CA certificate loading

By default, Node.js includes root certificate authority (CA) certificates from well-known certificate providers. Earlier Lambda Node.js runtimes up to Node.js 18 augmented these certificates with Amazon-specific CA certificates, making it easier to create functions accessing other AWS services. For example, it included the Amazon RDS certificates necessary for validating the server identity certificate installed on your Amazon RDS database.

However, loading these additional certificates has a performance impact during cold start. Starting with Node.js 20, Lambda no longer loads these additional CA certificates by default. The Node.js 20 runtime contains a certificate file with all Amazon CA certificates located at /var/runtime/ca-cert.pem. By setting the NODE_EXTRA_CA_CERTS environment variable to /var/runtime/ca-cert.pem, you can restore the behavior from Node.js 18 and earlier runtimes.

This causes Node.js to validate and load all Amazon CA certificates during a cold start. It can take longer compared to loading only specific certificates. For the best performance, we recommend bundling only the certificates that you need with your deployment package and loading them via NODE_EXTRA_CA_CERTS. The certificates file should consist of one or more trusted root or intermediate CA certificates in PEM format.

For example, for RDS, include the required certificates alongside your code as certificates/rds.pem and then load it as follows:
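One way to do this (a sketch; the function name below is a placeholder, and files bundled in a Node.js deployment package land under /var/task at runtime) is to set NODE_EXTRA_CA_CERTS on the function’s environment via the AWS CLI:

```shell
# Illustrative: point NODE_EXTRA_CA_CERTS at a certificate file bundled with
# your deployment package. "my-function" is a placeholder function name.
aws lambda update-function-configuration \
  --function-name my-function \
  --environment "Variables={NODE_EXTRA_CA_CERTS=/var/task/certificates/rds.pem}"
```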


See Using Lambda environment variables in the AWS Lambda Developer Guide for detailed instructions for setting environment variables.

Amazon Linux 2023

The Node.js 20 runtime is based on the provided.al2023 runtime. The provided.al2023 runtime in turn is based on the Amazon Linux 2023 minimal container image release and brings several improvements over Amazon Linux 2 (AL2).

provided.al2023 contains only the essential components necessary to install other packages and offers a smaller deployment footprint with a compressed image size of less than 40MB compared to the over 100MB AL2-based base image.

With glibc version 2.34, customers have access to a more recent version of glibc, updated from version 2.26 in AL2-based images.

The Amazon Linux 2023 minimal image uses microdnf as package manager, symlinked as dnf, replacing yum in AL2-based images. Additionally, curl and gnupg2 are also included as their minimal versions curl-minimal and gnupg2-minimal.

Learn more about the provided.al2023 runtime in the blog post Introducing the Amazon Linux 2023 runtime for AWS Lambda and the Amazon Linux 2023 launch blog post.

Runtime Interface Client

The Node.js 20 runtime uses the open source AWS Lambda NodeJS Runtime Interface Client (RIC). You can now use the same RIC version in your Open Container Initiative (OCI) Lambda container images as the one used by the managed Node.js 20 runtime.

The Node.js 20 runtime supports Lambda response streaming which enables you to send response payload data to callers as it becomes available. Response streaming can improve application performance by reducing time-to-first-byte, can indicate progress during long-running tasks, and allows you to build functions that return payloads larger than 6MB, which is the Lambda limit for buffered responses.
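As a sketch, a streaming handler wraps its logic with `awslambda.streamifyResponse`, a global provided by the Lambda Node.js runtime. The fallback below is only there so the sketch can run outside Lambda for illustration; it is not part of the real runtime API.

```javascript
// Sketch of a Lambda response-streaming handler. `awslambda.streamifyResponse`
// is a global injected by the Lambda Node.js runtime; the identity fallback
// lets this sketch run outside Lambda for illustration only.
const streamify =
  globalThis.awslambda?.streamifyResponse ?? ((handler) => handler);

const handler = streamify(async (event, responseStream) => {
  // Chunks are flushed to the caller as they are written, reducing
  // time-to-first-byte instead of buffering the whole response.
  responseStream.write('processing started\n');
  responseStream.write('processing finished\n');
  responseStream.end();
});
```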

Setting Node.js heap memory size

Node.js allows you to configure the heap size of the v8 engine via the --max-old-space-size and --max-semi-space-size options. By default, Lambda overrides the Node.js default values with values derived from the memory size configured for the function. If you need control over your runtime’s memory allocation, you can now set both of these options using the NODE_OPTIONS environment variable, without needing an exec wrapper script. See Using Lambda environment variables in the AWS Lambda Developer Guide for details.

Use the --max-old-space-size option to set the max memory size of V8’s old memory section, and the --max-semi-space-size option to set the maximum semispace size for V8’s garbage collector. See the Node.js documentation for more details on these options.
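For example, the function’s NODE_OPTIONS environment variable could be set as follows (the sizes are illustrative values in MB; tune them to your function’s configured memory):

```shell
# Illustrative heap settings; values are in MB and should be tuned to the
# memory size configured for the function.
NODE_OPTIONS="--max-old-space-size=1024 --max-semi-space-size=64"
```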

Node.js 20 language updates

Language features

With this release, Lambda customers can take advantage of new Node.js 20 language features, including:

  • HTTP(S)/1.1 default keepAlive: Node.js now sets keepAlive to true by default. Outgoing HTTP(S) connections use HTTP/1.1 keep-alive with a default waiting window of 5 seconds. This can deliver improved throughput, as connections are reused by default.
  • Fetch API is enabled by default: The global Node.js Fetch API is enabled by default. However, it is still an experimental module.
  • Faster URL parsing: Node.js 20 comes with the Ada 2.0 URL parser which brings performance improvements to URL parsing. This has also been back-ported to Node.js 18.7.0.
  • Web Crypto API now stable: The Node.js implementation of the standard Web Crypto API has been marked as stable. You can access the provided cryptographic primitives through globalThis.crypto.
  • WebAssembly support: Node.js 20 enables the experimental WebAssembly System Interface (WASI) API by default, without the need to set an experimental flag.

For a detailed overview of Node.js 20 language features, see the Node.js 20 release blog post and the Node.js 20 changelog.

Performance considerations

Node.js 19.3 introduced a change that impacts how non-essential modules are lazy-loaded during Node.js process startup. For Lambda functions, this reduces the work performed during the initialization of each execution environment; if these modules are used, they are instead loaded during the first function invocation. This change remains in Node.js 20.

Builders should continue to measure and test function performance and optimize function code and configuration for any impact. To learn more about how to optimize Node.js performance in Lambda, see Performance optimization in the Lambda Operator Guide, and our blog post Optimizing Node.js dependencies in AWS Lambda.

Migration from earlier Node.js runtimes

Migration from Node.js 16

Lambda occasionally delays deprecation of a Lambda runtime for a limited period beyond the end of support date of the language version that the runtime supports. During this period, Lambda only applies security patches to the runtime OS. Lambda doesn’t apply security patches to programming language runtimes after they reach their end of support date.

In the case of Node.js 16, we have delayed deprecation from the community end of support date on September 11, 2023, to June 12, 2024. This gives customers the opportunity to migrate directly from Node.js 16 to Node.js 20, skipping Node.js 18.

AWS SDK for JavaScript

Up until Node.js 16, Lambda’s Node.js runtimes included the AWS SDK for JavaScript version 2. This has since been superseded by the AWS SDK for JavaScript version 3, which was released in December 2020. Starting with Node.js 18, and continuing with Node.js 20, the Lambda Node.js runtimes have upgraded the version of the AWS SDK for JavaScript included in the runtime from v2 to v3. Customers upgrading from Node.js 16 or earlier runtimes who are using the included AWS SDK for JavaScript v2 should upgrade their code to use the v3 SDK.

For optimal performance, and to have full control over your code dependencies, we recommend bundling and minifying the AWS SDK in your deployment package, rather than using the SDK included in the runtime. For more information, see Optimizing Node.js dependencies in AWS Lambda.

Using the Node.js 20 runtime in AWS Lambda

AWS Management Console

To use the Node.js 20 runtime to develop your Lambda functions, specify a runtime parameter value of Node.js 20.x when creating or updating a function. The Node.js 20 runtime version is now available in the Runtime dropdown on the Create function page in the AWS Lambda console:

Select Node.js 20.x when creating a new AWS Lambda function in the AWS Management Console

To update an existing Lambda function to Node.js 20, navigate to the function in the Lambda console, then choose Edit in the Runtime settings panel. The new version of Node.js is available in the Runtime dropdown:

Select Node.js 20.x when updating an existing AWS Lambda function in the AWS Management Console

AWS Lambda – Container Image

Change the Node.js base image version by modifying the FROM statement in your Dockerfile:

FROM public.ecr.aws/lambda/nodejs:20
# Copy function code
COPY lambda_handler.xx ${LAMBDA_TASK_ROOT}

Customers running Node.js 20 Docker images locally, including customers using AWS SAM, will need to upgrade their Docker install to version 20.10.10 or later.

AWS Serverless Application Model (AWS SAM)

In AWS SAM, set the Runtime attribute to nodejs20.x to use this version:

AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: lambda_function.lambda_handler
      Runtime: nodejs20.x
      CodeUri: my_function/.
      Description: My Node.js Lambda Function

AWS Cloud Development Kit (AWS CDK)

In the AWS CDK, set the runtime attribute to Runtime.NODEJS_20_X to use this version:

import * as cdk from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as path from "path";
import { Construct } from "constructs";

export class CdkStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // The code that defines your stack goes here

    // The Node.js 20 enabled Lambda Function
    const lambdaFunction = new lambda.Function(this, "node20LambdaFunction", {
      runtime: lambda.Runtime.NODEJS_20_X,
      code: lambda.Code.fromAsset(path.join(__dirname, "/../lambda")),
      handler: "index.handler",
    });
  }
}


Lambda now supports Node.js 20. This release uses the Amazon Linux 2023 OS, supports configurable CA certificate loading for faster cold starts, as well as other improvements detailed in this blog post.

You can build and deploy functions using the Node.js 20 runtime using the AWS Management Console, AWS CLI, AWS SDK, AWS SAM, AWS CDK, or your choice of Infrastructure as Code (IaC). You can also use the Node.js 20 container base image if you prefer to build and deploy your functions using container images.

The Node.js 20 runtime empowers developers to build more efficient, powerful, and scalable serverless applications. Try the Node.js runtime in Lambda today and read about the Node.js programming model in the Lambda documentation to learn more about writing functions in Node.js 20.

For more serverless learning resources, visit Serverless Land.

A Socket API that works across JavaScript runtimes — announcing a WinterCG spec and Node.js implementation of connect()

Post Syndicated from Dominik Picheta original http://blog.cloudflare.com/socket-api-works-javascript-runtimes-wintercg-polyfill-connect/


Earlier this year, we announced connect(), a new API for creating outbound TCP sockets. From day one, we’ve been working with the Web-interoperable Runtimes Community Group (WinterCG) community to chart a course toward making this API a standard, available across all runtimes and platforms, including Node.js.

Today, we’re sharing that we’ve reached a new milestone in the path to making this API available across runtimes — engineers from Cloudflare and Vercel have published a draft specification of the connect() sockets API for review by the community, along with a Node.js compatible implementation of the connect() API that developers can start using today.

This implementation helps both application developers and maintainers of libraries and frameworks:

  1. Maintainers of existing libraries that use the node:net and node:tls APIs can use it to more easily add support for runtimes where node:net and node:tls are not available.
  2. JavaScript frameworks can use it to make connect() available in local development, making it easier for application developers to target runtimes that provide connect().

Why create a new standard? Why connect()?

As we described when we first announced connect(), to date there has not been a standard API across JavaScript runtimes for creating and working with TCP or UDP sockets. This makes it harder for maintainers of open-source libraries to ensure compatibility across runtimes, and ultimately creates friction for application developers who have to navigate which libraries work on which platforms.

While Node.js provides the node:net and node:tls APIs, these APIs were designed over 10 years ago in the very early days of the Node.js project and remain callback-based. As a result, they can be hard to work with, and expose configuration in ways that don’t fit serverless platforms or web browsers.

The connect() API fills this gap by incorporating the best parts of existing socket APIs and prior proposed standards, based on feedback from the JavaScript community — including contributors to Node.js. Libraries like pg (node-postgres on Github) are already using the connect() API.

The connect() specification

At time of writing, the draft specification of the Sockets API defines the following API:

dictionary SocketAddress {
  DOMString hostname;
  unsigned short port;
};

typedef (DOMString or SocketAddress) AnySocketAddress;

enum SecureTransportKind { "off", "on", "starttls" };

dictionary SocketOptions {
  SecureTransportKind secureTransport = "off";
  boolean allowHalfOpen = false;
};

interface Connect {
  Socket connect(AnySocketAddress address, optional SocketOptions opts);
};

interface Socket {
  readonly attribute ReadableStream readable;
  readonly attribute WritableStream writable;

  readonly attribute Promise<undefined> closed;
  Promise<undefined> close();

  Socket startTls();
};
The proposed API is Promise-based and reuses existing standards whenever possible. For example, ReadableStream and WritableStream are used for the read and write ends of the socket. This makes it easy to pipe data from a TCP socket to any other library or existing code that accepts a ReadableStream as input, or to write to a TCP socket via a WritableStream.

The entrypoint of the API is the connect() function, which takes a string containing both the hostname and port separated by a colon, or an object with discrete hostname and port fields. It returns a Socket object which represents a socket connection. An instance of this object exposes attributes and methods for working with the connection.

A connection can be established in plain-text or TLS mode, as well as a special “starttls” mode which allows the socket to be easily upgraded to TLS after some period of plain-text data transfer, by calling the startTls() method on the Socket object. No need to create a new socket or switch to using a separate set of APIs once the socket is upgraded to use TLS.

For example, to upgrade a socket using the startTLS pattern, you might do something like this:

import { connect } from "@arrowood.dev/socket"

const options = { secureTransport: "starttls" };
const socket = connect("address:port", options);
const secureSocket = socket.startTls();
// The socket is immediately writable
// Relies on web standard WritableStream
const writer = secureSocket.writable.getWriter();
const encoder = new TextEncoder();
const encoded = encoder.encode("hello");
await writer.write(encoded);

Equivalent code using the node:net and node:tls APIs:

import net from 'node:net'
import tls from 'node:tls'

const socket = net.connect(PORT, HOST);
socket.once('connect', () => {
  const options = { socket };
  const secureSocket = tls.connect(options, () => {
    // The socket can only be written to once the
    // connection is established.
    // Polymorphic API, uses Node.js streams
    secureSocket.write('hello');
  });
});
Use the Node.js implementation of connect() in your library

To make it easier for open-source library maintainers to adopt the connect() API, we’ve published an implementation of connect() in Node.js that allows you to publish your library such that it works across JavaScript runtimes, without having to maintain any runtime-specific code.

To get started, install it as a dependency:

npm install --save @arrowood.dev/socket

And import it in your library or application:

import { connect } from "@arrowood.dev/socket"

What’s next for connect()?

The wintercg/proposal-sockets-api is published as a draft, and the next step is to solicit and incorporate feedback. We’d love your feedback, particularly if you maintain an open-source library or make direct use of the node:net or node:tls APIs.

Once feedback has been incorporated, engineers from Cloudflare, Vercel and beyond will be continuing to work towards contributing an implementation of the API directly to Node.js as a built-in API.

Building resilient serverless applications using chaos engineering

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/compute/building-resilient-serverless-applications-using-chaos-engineering/

This post is written by Suranjan Choudhury (Head of TME and ITeS SA) and Anil Sharma (Sr PSA, Migration).

Chaos engineering is the process of stressing an application in testing or production environments by creating disruptive events, such as outages, observing how the system responds, and implementing improvements. Chaos engineering helps you create the real-world conditions needed to uncover hidden issues and performance bottlenecks that are challenging to find in distributed applications.

You can build resilient distributed serverless applications using AWS Lambda and test Lambda functions in real-world operating conditions using chaos engineering. This blog shows an approach to inject chaos in Lambda functions, making no change to the Lambda function code. This blog uses the AWS Fault Injection Simulator (FIS) service to create experiments that inject disruptions for Lambda-based serverless applications.

AWS FIS is a managed service that performs fault injection experiments on your AWS workloads. AWS FIS is used to set up and run fault experiments that simulate real-world conditions to discover application issues that are difficult to find otherwise. You can improve application resilience and performance using results from FIS experiments.

The sample code in this blog introduces random faults to existing Lambda functions, like an increase in response times (latency) or random failures. You can observe application behavior under introduced chaos and make improvements to the application.

Approaches to inject chaos in Lambda functions

AWS FIS currently does not support injecting faults in Lambda functions. However, there are two main approaches to inject chaos in Lambda functions: using external libraries or using Lambda layers.

Developers have created libraries to introduce failure conditions to Lambda functions, such as chaos_lambda and failure-lambda. These libraries allow developers to inject elements of chaos into Python and Node.js Lambda functions. To inject chaos using these libraries, developers must decorate the existing Lambda function’s code. Decorator functions wrap the existing Lambda function, adding chaos at runtime. This approach requires developers to change the existing Lambda functions.

You can also use Lambda layers to inject chaos, requiring no change to the function code, as the fault injection is separated. Since the Lambda layer is deployed separately, you can independently change the element of chaos, like latency in response or failure of the Lambda function. This blog post discusses this approach.

Injecting chaos in Lambda functions using Lambda layers

A Lambda layer is a .zip file archive that contains supplementary code or data. Layers usually contain library dependencies, a custom runtime, or configuration files. This blog creates an FIS experiment that uses Lambda layers to inject disruptions in existing Lambda functions for Java, Node.js, and Python runtimes.

The Lambda layer contains the fault injection code. It is invoked prior to invocation of the Lambda function and injects random latency or errors. Injecting random latency simulates real world unpredictable conditions. The Java, Node.js, and Python chaos injection layers provided are generic and reusable. You can use them to inject chaos in your Lambda functions.

The Chaos Injection Lambda Layers

Java Lambda Layer for Chaos Injection

The chaos injection layer for Java Lambda functions uses the JAVA_TOOL_OPTIONS environment variable. This environment variable allows specifying the initialization of tools, specifically the launching of native or Java programming language agents. The JAVA_TOOL_OPTIONS has a javaagent parameter that points to the chaos injection layer. This layer uses Java’s premain method and the Byte Buddy library for modifying the Lambda function’s Java class during runtime.

When the Lambda function is invoked, the JVM uses the class specified with the javaagent parameter and invokes its premain method before the Lambda function’s handler invocation. The Java premain method injects chaos before Lambda runs.

The FIS experiment adds the layer association and the JAVA_TOOL_OPTIONS environment variable to the Lambda function.

Python and Node.js Lambda Layer for Chaos Injection

When injecting chaos in Python and Node.js functions, the Lambda function’s handler is replaced with a function in the respective layers by the FIS aws:ssm:start-automation-execution action. The automation, which is an SSM document, saves the original Lambda function’s handler in AWS Systems Manager Parameter Store, so that the changes can be rolled back once the experiment is finished.

The layer function contains the logic to inject chaos. At runtime, the layer function is invoked, injecting chaos in the Lambda function. The layer function in turn invokes the Lambda function’s original handler, so that the functionality is fulfilled.
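The wrapper pattern the layer uses can be sketched in plain Node.js. The following is a hypothetical illustration, not code from the sample repository: the injectChaos helper, its parameter names, and the handlers are invented for this sketch.

```javascript
// Hypothetical sketch of how a layer-provided wrapper can inject chaos
// around an existing handler. All names here (injectChaos, maxLatencyMs,
// failureRate) are illustrative, not from the sample repository.
function injectChaos(handler, { maxLatencyMs = 2000, failureRate = 0.2, rng = Math.random } = {}) {
  return async (event, context) => {
    // Inject random latency before delegating to the original handler.
    const delayMs = Math.floor(rng() * maxLatencyMs);
    await new Promise((resolve) => setTimeout(resolve, delayMs));

    // Occasionally fail instead of invoking the original handler.
    if (rng() < failureRate) {
      throw new Error(`Chaos injection: simulated failure after ${delayMs}ms`);
    }
    return handler(event, context);
  };
}

// Example: wrap a trivial handler; rng is fixed so the demo is deterministic.
const original = async (event) => ({ statusCode: 200, body: JSON.stringify(event) });
const chaotic = injectChaos(original, { maxLatencyMs: 50, failureRate: 0, rng: () => 0.5 });

chaotic({ hello: 'world' }).then((res) => console.log(res.statusCode)); // 200
```

Because the wrapper delegates to the original handler, the function’s business logic stays untouched, which is what makes the layer approach attractive.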

The result in all runtimes (Java, Python, or Node.js), is invocation of the original Lambda function with latency or failure injected. The observed changes are random latency or failure injected by the layer.

Once the experiment is completed, an SSM document is provided. This rolls back the layer’s association to the Lambda function and removes the environment variable, in the case of the Java runtime.

Sample FIS experiments using SSM and Lambda layers

In the sample code provided, Lambda layers are provided for Python, Node.js and Java runtimes along with sample Lambda functions for each runtime.

The sample deploys the Lambda layers and the Lambda functions, FIS experiment template, AWS Identity and Access Management (IAM) roles needed to run the experiment, and the AWS Systems Manager (SSM) documents. An AWS CloudFormation template is provided for deployment.

Step 1: Complete the prerequisites

  • To deploy the sample code, clone the repository locally:
    git clone https://github.com/aws-samples/chaosinjection-lambda-samples.git
  • Complete the prerequisites documented here.

Step 2: Deploy using AWS CloudFormation

The CloudFormation template provided along with this blog deploys sample code. Execute runCfn.sh.

When this is complete, it returns the StackId that CloudFormation created.

Step 3: Run the chaos injection experiment

By default, the experiment is configured to inject chaos in the Java sample Lambda function. To change it to Python or Node.js Lambda functions, edit the experiment template and configure it to inject chaos using steps from here.

Step 4: Start the experiment

From the FIS Console, choose Start experiment.


Wait until the experiment state changes to “Completed”.

Step 5: Run your test

At this stage, you can inject chaos into your Lambda function. Run the Lambda functions and observe their behavior.

1. Invoke the Lambda function using the command below:

aws lambda invoke --function-name NodeChaosInjectionExampleFn out --log-type Tail --query 'LogResult' --output text | base64 -d

2. The CLI commands output displays the logs created by the Lambda layers showing latency introduced in this invocation.

In this example, the output shows that the Lambda layer injected 1799ms of random latency to the function.

The experiment injects random latency or failure in the Lambda function. Running the Lambda function again results in a different latency or failure. At this stage, you can test the application, and observe its behavior under conditions that may occur in the real world, like an increase in latency or Lambda function’s failure.

Step 6: Roll back the experiment

To roll back the experiment, run the SSM document for rollback. This rolls back the Lambda function to the state before chaos injection. Run this command:

aws ssm start-automation-execution \
--document-name "InjectLambdaChaos-Rollback" \
--document-version "\$DEFAULT" \
--parameters \
”]}’ \
--region eu-west-2

Cleaning up

To avoid incurring future charges, clean up the resources created by the CloudFormation template by running the following CLI command. Update the stack name to the one you provided when creating the stack.

aws cloudformation delete-stack --stack-name myChaosStack

Using FIS Experiments results

You can use FIS experiment results to validate expected system behavior. An example of expected behavior is: “If application latency increases by 10%, there is less than a 1% increase in sign in failures.” After the experiment is completed, evaluate whether the application resiliency aligns with your business and technical expectations.


This blog explains an approach for testing reliability and resilience in Lambda functions using chaos engineering. This approach allows you to inject chaos in Lambda functions without changing the Lambda function code, with clear segregation of chaos injection and business logic. It provides a way for developers to focus on building business functionality using Lambda functions.

The Lambda layers that inject chaos can be developed and managed separately. This approach uses AWS FIS to run experiments that inject chaos using Lambda layers and test serverless application’s performance and resiliency. Using the insights from the FIS experiment, you can find, fix, or document risks that surface in the application while testing.

For more serverless learning resources, visit Serverless Land.

More Node.js APIs in Cloudflare Workers — Streams, Path, StringDecoder

Post Syndicated from James M Snell original http://blog.cloudflare.com/workers-node-js-apis-stream-path/


Today we are announcing support for three additional APIs from Node.js in Cloudflare Workers. This increases compatibility with the existing ecosystem of open source npm packages, allowing you to use your preferred libraries in Workers, even if they depend on APIs from Node.js.

We recently added support for AsyncLocalStorage, EventEmitter, Buffer, assert and parts of util. Today, we are adding support for Streams, Path, and StringDecoder.

We are also sharing a preview of a new module type, available in the open-source Workers runtime, that mirrors a Node.js environment more closely by making some APIs available as globals, and allowing imports without the node: specifier prefix.

You can start using these APIs today, in the open-source runtime that powers Cloudflare Workers, in local development, and when you deploy your Worker. Get started by enabling the nodejs_compat compatibility flag for your Worker.


Streams

The Node.js streams API is the original API for working with streaming data in JavaScript that predates the WHATWG ReadableStream standard. Now, a full implementation of Node.js streams (based directly on the official implementation provided by the Node.js project) is available within the Workers runtime.

Let's start with a quick example:

import {
  Readable,
  Transform,
} from 'node:stream';

import {
  text,
} from 'node:stream/consumers';

import {
  pipeline,
} from 'node:stream/promises';

// A Node.js-style Transform that converts data to uppercase
// and appends a newline to the end of the output.
class MyTransform extends Transform {
  constructor() {
    super({ encoding: 'utf8' });
  }
  _transform(chunk, _, cb) {
    this.push(chunk.toString().toUpperCase());
    cb();
  }
  _flush(cb) {
    this.push('\n');
    cb();
  }
}

export default {
  async fetch() {
    const chunks = [
      "hello ",
      "from ",
      "the ",
      "wonderful ",
      "world ",
      "of ",
      "node.js ",
    ];

    function nextChunk(readable) {
      if (chunks.length === 0) readable.push(null);
      else queueMicrotask(() => readable.push(chunks.shift()));
    }

    // A Node.js-style Readable that emits chunks from the
    // array...
    const readable = new Readable({
      encoding: 'utf8',
      read() { nextChunk(readable); }
    });

    const transform = new MyTransform();
    await pipeline(readable, transform);
    return new Response(await text(transform));
  }
};

In this example, we create two Node.js stream objects: one stream.Readable and one stream.Transform. The stream.Readable simply emits a sequence of individual strings, piped through the stream.Transform which converts those to uppercase and appends a newline as a final chunk.

The example is straightforward and illustrates the basic operation of the Node.js API. For anyone already familiar with using standard WHATWG streams in Workers the pattern here should be recognizable.

The Node.js streams API is used by countless numbers of modules published on npm. Now that the Node.js streams API is available in Workers, many packages that depend on it can be used in your Workers. For example, the split2 module is a simple utility that can break a stream of data up and reassemble it so that every line is a distinct chunk. While simple, the module is downloaded over 13 million times each week and has over a thousand direct dependents on npm (and many more indirect dependents). Previously it was not possible to use split2 within Workers without also pulling in a large and complicated polyfill implementation of streams along with it. Now split2 can be used directly within Workers with no modifications and no additional polyfills. This reduces the size and complexity of your Worker by thousands of lines.

import {
  PassThrough,
} from 'node:stream';

import { default as split2 } from 'split2';

const enc = new TextEncoder();

export default {
  async fetch() {
    const pt = new PassThrough();
    const readable = pt.pipe(split2());

    pt.end(enc.encode('hello\nfrom\nthe\nwonderful\nworld\nof\nnode.js'));

    for await (const chunk of readable) {
      console.log(chunk);
    }

    return new Response("ok");
  }
};


Path

The Node.js Path API provides utilities for working with file and directory paths. For example:

import path from "node:path"
path.join('/foo', 'bar', 'baz/asdf', 'quux', '..');

// Returns: '/foo/bar/baz/asdf'

Note that in the Workers implementation of path, the path.win32 variants of the path API are not implemented, and will throw an exception.


StringDecoder

The Node.js StringDecoder API is a simple legacy utility that predates the WHATWG standard TextEncoder/TextDecoder API and serves roughly the same purpose. It is used by Node.js' stream API implementation as well as a number of popular npm modules for the purpose of decoding UTF-8, UTF-16, Latin1, Base64, and Hex encoded data.

import { StringDecoder } from 'node:string_decoder';
const decoder = new StringDecoder('utf8');

const cent = Buffer.from([0xC2, 0xA2]);
console.log(decoder.write(cent)); // Prints: ¢

const euro = Buffer.from([0xE2, 0x82, 0xAC]);
console.log(decoder.write(euro)); // Prints: €

In the vast majority of cases, your Worker should just keep on using the standard TextEncoder/TextDecoder APIs, but StringDecoder is now available directly for Workers to use without relying on polyfills.

Node.js Compat Modules

One Worker can already be a bundle of multiple assets. This allows a single Worker to be made up of multiple individual ESM modules, CommonJS modules, JSON, text, and binary data files.

Soon there will be a new type of module that can be included in a Worker bundle: the NodeJsCompatModule.

A NodeJsCompatModule is designed to emulate the Node.js environment as much as possible. Within these modules, common Node.js global variables such as process, Buffer, and even __filename will be available. More importantly, it is possible to require() our Node.js core API implementations without using the node: specifier prefix. This maximizes compatibility with existing NPM packages that depend on globals from Node.js being present, or don’t import Node.js APIs using the node: specifier prefix.

Support for this new module type has landed in the open source workerd runtime, with deeper integration with Wrangler coming soon.

What’s next

We’re adding support for more Node.js APIs each month, and as we introduce new APIs, they will be added under the nodejs_compat compatibility flag — no need to take any action or update your compatibility date.

Have an NPM package that you wish worked on Workers, or an API you’d like to be able to use? Join the Cloudflare Developers Discord and tell us what you’re building, and what you’d like to see next.

Introducing AWS Lambda response streaming

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/introducing-aws-lambda-response-streaming/

Today, AWS Lambda is announcing support for response payload streaming. Response streaming is a new invocation pattern that lets functions progressively stream response payloads back to clients.

You can use Lambda response payload streaming to send response data to callers as it becomes available. This can improve performance for web and mobile applications. Response streaming also allows you to build functions that return larger payloads and perform long-running operations while reporting incremental progress.

In traditional request-response models, the response needs to be fully generated and buffered before it is returned to the client. This can delay the time to first byte (TTFB) performance while the client waits for the response to be generated. Web applications are especially sensitive to TTFB and page load performance. Response streaming lets you send partial responses back to the client as they become ready, improving TTFB latency to within milliseconds. For web applications, this can improve visitor experience and search engine rankings.

Other applications may have large payloads, like images, videos, large documents, or database results. Response streaming lets you transfer these payloads back to the client without having to buffer the entire payload in memory. You can use response streaming to send responses larger than Lambda’s 6 MB response payload limit up to a soft limit of 20 MB.

Response streaming currently supports the Node.js 14.x and subsequent managed runtimes. You can also implement response streaming using custom runtimes. You can progressively stream response payloads through Lambda function URLs, including as an Amazon CloudFront origin, along with using the AWS SDK or using Lambda’s invoke API. You can also use Amazon API Gateway and Application Load Balancer to stream larger payloads.

Writing response streaming enabled functions

Writing the handler for response streaming functions differs from typical Node handler patterns. To indicate to the runtime that Lambda should stream your function’s responses, you must wrap your function handler with the streamifyResponse() decorator. This tells the runtime to use the correct stream logic path, allowing the function to stream responses.

This is an example handler with response streaming enabled:

exports.handler = awslambda.streamifyResponse(
    async (event, responseStream, context) => {
        responseStream.write("Hello, world!");
        responseStream.end();
    }
);

The streamifyResponse decorator accepts the following additional parameter, responseStream, besides the default Node.js handler parameters, event and context.

The new responseStream object provides a stream object that your function can write data to. Data written to this stream is sent immediately to the client. You can optionally set the Content-Type header of the response to pass additional metadata to your client about the contents of the stream.

Writing to the response stream

The responseStream object implements Node’s Writable Stream API. This offers a write() method to write information to the stream. However, we recommend that you use pipeline() wherever possible to write to the stream. This can improve performance, ensuring that a faster readable stream does not overwhelm the writable stream.

An example function using pipeline() showing how you can stream compressed data:

const pipeline = require("util").promisify(require("stream").pipeline);
const zlib = require('zlib');
const { Readable } = require('stream');

exports.gzip = awslambda.streamifyResponse(async (event, responseStream, _context) => {
    // As an example, convert event to a readable stream.
    const requestStream = Readable.from(Buffer.from(JSON.stringify(event)));
    await pipeline(requestStream, zlib.createGzip(), responseStream);
});

Ending the response stream

When using the write() method, you must end the stream before the handler returns. Use responseStream.end() to signal that you are not writing any more data to the stream. This is not required if you write to the stream with pipeline().

Reading streamed responses

Response streaming introduces a new InvokeWithResponseStream API. You can read a streamed response from your function via a Lambda function URL or use the AWS SDK to call the new API directly.

Neither API Gateway nor Lambda’s target integration with Application Load Balancer supports chunked transfer encoding, so neither provides faster TTFB for streamed responses. You can, however, use response streaming with API Gateway to return larger payload responses, up to API Gateway’s 10 MB limit. To implement this, you must configure an HTTP_PROXY integration between your API Gateway and a Lambda function URL, instead of using the LAMBDA_PROXY integration.

You can also configure CloudFront with a function URL as origin. When streaming responses through a function URL and CloudFront, you can have faster TTFB performance and return larger payload sizes.

Using Lambda response streaming with function URLs

You can configure a function URL to invoke your function and stream the raw bytes back to your HTTP client via chunked transfer encoding. You configure the Function URL to use the new InvokeWithResponseStream API by changing the invoke mode of your function URL from the default BUFFERED to RESPONSE_STREAM.

RESPONSE_STREAM enables your function to stream payload results as they become available if you wrap the function with the streamifyResponse() decorator. Lambda invokes your function using the InvokeWithResponseStream API. If InvokeWithResponseStream invokes a function that is not wrapped with streamifyResponse(), Lambda does not stream the response and instead returns a buffered response which is subject to the 6 MB size limit.

Using AWS Serverless Application Model (AWS SAM) or AWS CloudFormation, set the InvokeMode property:

    Type: AWS::Lambda::Url
    Properties:
      TargetFunctionArn: !Ref StreamingFunction
      AuthType: AWS_IAM
      InvokeMode: RESPONSE_STREAM

Using generic HTTP client libraries with function URLs

Each language or framework may use different methods to form an HTTP request and parse a streamed response. Some HTTP client libraries only return the response body after the server closes the connection. These clients do not work with functions that return a response stream. To get the benefit of response streams, use an HTTP client that returns response data incrementally. Many HTTP client libraries already support streamed responses, including the Apache HttpClient for Java, Node’s built-in http client, and Python’s requests and urllib3 packages. Consult the documentation for the HTTP library that you are using.

Example applications

There are a number of example Lambda streaming applications in the Serverless Patterns Collection. They use AWS SAM to build and deploy the resources in your AWS account.

Clone the repository and explore the examples. The README file in each pattern folder contains additional information.

git clone https://github.com/aws-samples/serverless-patterns/ 
cd serverless-patterns

Time to first byte using write()

  1. To show how streaming improves time to first byte, deploy the lambda-streaming-ttfb-write-sam pattern.
  2. cd lambda-streaming-ttfb-write-sam
  3. Use AWS SAM to deploy the resources to your AWS account. Run a guided deployment to set the default parameters for the first deployment.
  4. sam deploy -g --stack-name lambda-streaming-ttfb-write-sam

    For subsequent deployments you can use sam deploy.

  5. Enter a Stack Name and accept the initial defaults.
  6. AWS SAM deploys a Lambda function with streaming support and a function URL.

    AWS SAM deploy -g

    Once the deployment completes, AWS SAM provides details of the resources.

    AWS SAM resources

    The AWS SAM output returns a Lambda function URL.

  7. Use curl with your AWS credentials to view the streaming response as the URL uses AWS Identity and Access Management (IAM) for authorization. Replace the URL and Region parameters for your deployment.
curl --request GET https://<url>.lambda-url.<Region>.on.aws/ --user AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY --aws-sigv4 'aws:amz:<Region>:lambda'

You can see the gradual display of the streamed response.

Using curl to stream response from write() function

Time to first byte using pipeline()

  1. To try an example using pipeline(), deploy the lambda-streaming-ttfb-pipeline-sam pattern.
  2. cd ..
    cd lambda-streaming-ttfb-pipeline-sam
  3. Use AWS SAM to deploy the resources to your AWS account. Run a guided deployment to set the default parameters for the first deploy.
  4. sam deploy -g --stack-name lambda-streaming-ttfb-pipeline-sam
  5. Enter a Stack Name and accept the initial defaults.
  6. Use curl with your AWS credentials to view the streaming response. Replace the URL and Region parameters for your deployment.
curl --request GET https://<url>.lambda-url.<Region>.on.aws/ --user AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY --aws-sigv4 'aws:amz:<Region>:lambda'

You can see the pipelined response stream returned.

Using curl to stream response from function

Large payloads

  1. To show how streaming enables you to return larger payloads, deploy the lambda-streaming-large-sam application. AWS SAM deploys a Lambda function, which returns a 7 MB PDF file which is larger than Lambda’s non-stream 6 MB response payload limit.
  2. cd ..
    cd lambda-streaming-large-sam
    sam deploy -g --stack-name lambda-streaming-large-sam
  3. The AWS SAM output returns a Lambda function URL. Use curl with your AWS credentials to view the streaming response.
curl --request GET https://<url>.lambda-url.<Region>.on.aws/ --user AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY --aws-sigv4 'aws:amz:<Region>:lambda' -o SVS401-ri22.pdf -w '%{content_type}'

This downloads the PDF file SVS401-ri22.pdf to your current directory and displays the content type as application/pdf.

You can also use API Gateway to stream a large payload with an HTTP_PROXY integration with a Lambda function URL.

Invoking a function with response streaming using the AWS SDK

You can use the AWS SDK to stream responses directly from the new Lambda InvokeWithResponseStream API. This provides additional functionality such as handling midstream errors. This can be helpful when building, for example, internal microservices. Response streaming is supported with the AWS SDK for Java 2.x, AWS SDK for JavaScript v3, and AWS SDKs for Go version 1 and version 2.

The SDK response returns an event stream that you can read from. The event stream contains two event types. PayloadChunk contains a raw binary buffer with partial response data received by the client. InvokeComplete signals that the function has completed sending data. It also contains additional metadata, such as whether the function encountered an error in the middle of the stream. Errors can include unhandled exceptions thrown by your function code and function timeouts.
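The consumption pattern can be sketched locally with hand-built stand-ins for the two event types. The fake events below are for illustration only; a real stream comes from an InvokeWithResponseStream call via the SDK.

```javascript
// Hand-built stand-ins for the two event types the SDK's event stream yields:
// PayloadChunk events carry partial response bytes, and a final InvokeComplete
// reports success or a midstream error.
const decoder = new TextDecoder();

async function* fakeEventStream() {
  yield { PayloadChunk: { Payload: new TextEncoder().encode('partial ') } };
  yield { PayloadChunk: { Payload: new TextEncoder().encode('data') } };
  yield { InvokeComplete: {} }; // ErrorCode/ErrorDetails are set on failure
}

// Consume the stream: append chunks as they arrive, surface midstream errors.
async function collect(eventStream) {
  let body = '';
  for await (const event of eventStream) {
    if (event.PayloadChunk) {
      body += decoder.decode(event.PayloadChunk.Payload);
    }
    if (event.InvokeComplete && event.InvokeComplete.ErrorCode) {
      throw new Error(`midstream error: ${event.InvokeComplete.ErrorCode}`);
    }
  }
  return body;
}

collect(fakeEventStream()).then((body) => console.log(body)); // "partial data"
```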

Using the AWS SDK for Javascript v3

  1. To see how to use the AWS SDK to stream responses from a function, deploy the lambda-streaming-sdk-sam pattern.
  2. cd ..
    cd lambda-streaming-sdk-sam
    sam deploy -g --stack-name lambda-streaming-sdk-sam
  3. Enter a Stack Name and accept the initial defaults.
  4. AWS SAM deploys three Lambda functions with streaming support.

  • HappyPathFunction: Returns a full stream.
  • MidstreamErrorFunction: Simulates an error midstream.
  • TimeoutFunction: Function times out before stream completes.

  5. Run the SDK example application, which invokes each Lambda function and outputs the result.

    npm install @aws-sdk/client-lambda
    node index.mjs

You can see each function and how the midstream and timeout errors are returned back to the SDK client.

Streaming midstream error

Streaming timeout error

Quotas and pricing

Streaming responses incur an additional cost for network transfer of the response payload. You are billed based on the number of bytes generated and streamed out of your Lambda function over the first 6 MB. For more information, see Lambda pricing.

There is an initial maximum response size of 20 MB, which is a soft limit you can increase. There is a maximum bandwidth throughput limit of 16 Mbps (2 MB/s) for streaming functions.

Today, AWS Lambda is announcing support for response payload streaming to send partial responses to callers as the responses become available. This can improve performance for web and mobile applications. You can also use response streaming to build functions that return larger payloads and perform long-running operations while reporting incremental progress. Stream partial responses through Lambda function URLs, or using the AWS SDK. Response streaming currently supports the Node.js 14.x and subsequent runtimes, as well as custom runtimes.

There are a number of example Lambda streaming applications in the Serverless Patterns Collection to explore the functionality.

Lambda response streaming support is also available through many AWS Lambda Partners such as Datadog, Dynatrace, New Relic, Pulumi and Lumigo.

For more serverless learning resources, visit Serverless Land.

Node.js compatibility for Cloudflare Workers – starting with Async Context Tracking, EventEmitter, Buffer, assert, and util

Post Syndicated from James M Snell original https://blog.cloudflare.com/workers-node-js-asynclocalstorage/

Over the coming months, Cloudflare Workers will start to roll out built-in compatibility with Node.js core APIs as part of an effort to support increased compatibility across JavaScript runtimes.

We are happy to announce today that the first of these Node.js APIs – AsyncLocalStorage, EventEmitter, Buffer, assert, and parts of util – are now available for use. These APIs are provided directly by the open-source Cloudflare Workers runtime, with no need to bundle polyfill implementations into your own code.

These new APIs are available today — start using them by enabling the nodejs_compat compatibility flag in your Workers.

Async Context Tracking with the AsyncLocalStorage API

The AsyncLocalStorage API provides a way to track context across asynchronous operations. It allows you to pass a value through your program, even across multiple layers of asynchronous code, without having to pass a context value between operations.

Consider an example where we want to add debug logging that works through multiple layers of an application, where each log contains the ID of the current request. Without AsyncLocalStorage, it would be necessary to explicitly pass the request ID down through every function call that might invoke the logging function:

function logWithId(id, state) {
  console.log(`${id} - ${state}`);
}

function doSomething(id) {
  // We don't actually use id for anything in this function!
  // It's only here because logWithId needs it.
  logWithId(id, "doing something");
  setTimeout(() => doSomethingElse(id), 10);
}

function doSomethingElse(id) {
  logWithId(id, "doing something else");
}

let idSeq = 0;

export default {
  async fetch(req) {
    const id = idSeq++;
    doSomething(id);
    logWithId(id, 'complete');
    return new Response("ok");
  }
};

    While this approach works, it can be cumbersome to coordinate correctly, especially as the complexity of an application grows. Using AsyncLocalStorage this becomes significantly easier by eliminating the need to explicitly pass the context around. Our application functions (doSomething and doSomethingElse in this case) never need to know about the request ID at all while the logWithId function does exactly what we need it to:

import { AsyncLocalStorage } from 'node:async_hooks';

const requestId = new AsyncLocalStorage();

function logWithId(state) {
  console.log(`${requestId.getStore()} - ${state}`);
}

function doSomething() {
  logWithId("doing something");
  setTimeout(() => doSomethingElse(), 10);
}

function doSomethingElse() {
  logWithId("doing something else");
}

let idSeq = 0;
export default {
  async fetch(req) {
    return requestId.run(idSeq++, () => {
      doSomething();
      return new Response("ok");
    });
  }
}

With the nodejs_compat compatibility flag enabled, import statements are used to access specific APIs. The Workers implementation of these APIs requires the use of the node: specifier prefix that was recently introduced in Node.js (e.g. node:async_hooks, node:events, etc.).

We implement a subset of the AsyncLocalStorage API in order to keep things as simple as possible. Specifically, we’ve chosen not to support the enterWith() and disable() APIs found in the Node.js implementation, simply because they make async context tracking more brittle and error prone.

    Conceptually, at any given moment within a worker, there is a current “Asynchronous Context Frame”, which consists of a map of storage cells, each holding a store value for a specific AsyncLocalStorage instance. Calling asyncLocalStorage.run(...) causes a new frame to be created, inheriting the storage cells of the current frame, but using the newly provided store value for the cell associated with asyncLocalStorage.

const als1 = new AsyncLocalStorage();
const als2 = new AsyncLocalStorage();

// Code here runs in the root frame. There are two storage cells,
// one for als1, and one for als2. The store value for each is
// undefined.

als1.run(123, () => {
  // als1.run(...) creates a new frame (1). The store value for als1
  // is set to 123, the store value for als2 is still undefined.
  // This new frame is set to "current".

  als2.run(321, () => {
    // als2.run(...) creates another new frame (2). The store value
    // for als1 is still 123, the store value for als2 is set to 321.
    // This new frame is set to "current".
    console.log(als1.getStore(), als2.getStore());
  });

  // Frame (1) is restored as the current. The store value for als1
  // is still 123, but the store value for als2 is undefined again.
});

// The root frame is restored as the current. The store values for
// both als1 and als2 are both undefined again.

    Whenever an asynchronous operation is initiated in JavaScript, for example, creating a new JavaScript promise, scheduling a timer, etc, the current frame is captured and associated with that operation, allowing the store values at the moment the operation was initialized to be propagated and restored as needed.

const als = new AsyncLocalStorage();

const p1 = als.run(123, () => {
  return Promise.resolve(1).then(() => console.log(als.getStore())); // prints 123
});

const p2 = Promise.resolve(1);
const p3 = als.run(321, () => {
  return p2.then(() => console.log(als.getStore())); // prints 321
});

als.run('ABC', () => setInterval(() => {
  // prints "ABC" to the console once a second…
  console.log(als.getStore());
}, 1000));

als.run('XYZ', () => queueMicrotask(() => {
  console.log(als.getStore());  // prints "XYZ"
}));

    Note that for unhandled promise rejections, the “unhandledrejection” event will automatically propagate the context that is associated with the promise that was rejected. This behavior is different from other types of events emitted by EventTarget implementations, which will propagate whichever frame is current when the event is emitted.

const asyncLocalStorage = new AsyncLocalStorage();

asyncLocalStorage.run(123, () => Promise.reject('boom'));
asyncLocalStorage.run(321, () => Promise.reject('boom2'));

addEventListener('unhandledrejection', (event) => {
  // prints 123 for the first unhandled rejection ('boom'), and
  // 321 for the second unhandled rejection ('boom2')
  console.log(asyncLocalStorage.getStore());
});

    Workers can use the AsyncLocalStorage.snapshot() method to create their own objects that capture and propagate the context:

const asyncLocalStorage = new AsyncLocalStorage();

class MyResource {
  #runInAsyncFrame = AsyncLocalStorage.snapshot();

  doSomething(...args) {
    return this.#runInAsyncFrame((...args) => {
      console.log(asyncLocalStorage.getStore());
    }, ...args);
  }
}

const resource1 = asyncLocalStorage.run(123, () => new MyResource());
const resource2 = asyncLocalStorage.run(321, () => new MyResource());

resource1.doSomething();  // prints 123
resource2.doSomething();  // prints 321

    For more, refer to the Node.js documentation about the AsyncLocalStorage API.

There is currently an effort underway to add a new AsyncContext mechanism (inspired by AsyncLocalStorage) to the JavaScript language itself. While it is still early days for the TC39 proposal, there is good reason to expect it to progress through the committee. Once it does, we look forward to being able to make it available in the Cloudflare Workers platform. We expect our implementation of AsyncLocalStorage to be compatible with this new API.

    The proposal for AsyncContext provides an excellent set of examples and description of the motivation of why async context tracking is useful.

    Events with EventEmitter

    The EventEmitter API is one of the most fundamental Node.js APIs and is critical to supporting many other higher level APIs, including streams, crypto, net, and more. An EventEmitter is an object that emits named events that cause listeners to be called.

import { EventEmitter } from 'node:events';

const emitter = new EventEmitter();

emitter.on('hello', (...args) => {
  console.log('hello event emitted!', ...args);
});

emitter.emit('hello', 1, 2, 3);

    The implementation in the Workers runtime fully supports the entire Node.js EventEmitter API including the captureRejections option that allows improved handling of async functions as event handlers:

const emitter = new EventEmitter({ captureRejections: true });

emitter.on('hello', async (...args) => {
  throw new Error('boom');
});

emitter.on('error', (err) => {
  // the async promise rejection is emitted here!
  console.log(err.message);
});

    Please refer to the Node.js documentation for more details on the use of the EventEmitter API: https://nodejs.org/dist/latest-v19.x/docs/api/events.html#events.
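Beyond the EventEmitter class itself, the node:events module in Node.js also exposes a promise-based once() helper for awaiting a single occurrence of an event. Assuming it is available alongside the EventEmitter implementation described above, a minimal sketch:

```javascript
import { EventEmitter, once } from 'node:events';

// once() returns a promise for the next emission of the named event,
// resolving with the array of arguments passed to emit().
const emitter = new EventEmitter();
setTimeout(() => emitter.emit('ready', 42), 10);

const [value] = await once(emitter, 'ready');
console.log(value); // 42
```

This is convenient when a piece of async code needs exactly one event rather than an ongoing subscription.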


Buffer

The Buffer API in Node.js predates the introduction of the standard TypedArray and DataView APIs in JavaScript by many years and has persisted as one of the most commonly used Node.js APIs for manipulating binary data. Today, every Buffer instance extends from the standard Uint8Array class but adds a range of unique capabilities such as built-in base64 and hex encoding/decoding, byte-order manipulation, and encoding-aware substring searching.

import { Buffer } from 'node:buffer';

const buf = Buffer.from('hello world', 'utf8');

console.log(buf.toString('hex'));
// Prints: 68656c6c6f20776f726c64
console.log(buf.toString('base64'));
// Prints: aGVsbG8gd29ybGQ=

Because a Buffer extends from Uint8Array, it can be used in any Workers API that currently accepts Uint8Array, such as creating a new Response:

    const response = new Response(Buffer.from("hello world"));

    Or interacting with streams:

    const writable = getWritableStreamSomehow();
    const writer = writable.getWriter();
    writer.write(Buffer.from("hello world"));

    Please refer to the Node.js documentation for more details on the use of the Buffer API: https://nodejs.org/dist/latest-v19.x/docs/api/buffer.html.
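The byte-order manipulation and encoding-aware substring searching mentioned above can be sketched as follows (standard Buffer methods in Node.js; the Workers implementation is expected to behave the same way):

```javascript
import { Buffer } from 'node:buffer';

// Encoding-aware substring searching:
const buf = Buffer.from('hello world');
console.log(buf.indexOf('world'));          // 6
console.log(buf.subarray(0, 5).toString()); // hello

// Byte-order manipulation: write big-endian, read little-endian.
const num = Buffer.alloc(4);
num.writeUInt32BE(0xdeadbeef);
console.log(num.readUInt32LE(0).toString(16)); // efbeadde
```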


Assertions

The assert module in Node.js provides a number of assertion functions that are useful when building tests.

import {
  strictEqual,
  deepStrictEqual,
  ok,
  doesNotReject,
} from 'node:assert';

strictEqual(1, 1); // ok!
strictEqual(1, "1"); // fails! throws AssertionError

deepStrictEqual({ a: { b: 1 }}, { a: { b: 1 }}); // ok!
deepStrictEqual({ a: { b: 1 }}, { a: { b: 2 }}); // fails! throws AssertionError

ok(true); // ok!
ok(false); // fails! throws AssertionError

await doesNotReject(async () => {}); // ok!
await doesNotReject(async () => { throw new Error('boom') }); // fails! throws AssertionError

In the Workers implementation of assert, all assertions run in what Node.js calls “strict assertion mode”, which means that non-strict methods behave like their corresponding strict methods. For instance, deepEqual() will behave like deepStrictEqual().
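The same behavior can be reproduced in Node.js via the node:assert/strict entry point, which is a reasonable way to preview what the Workers implementation does (a sketch, not Workers code):

```javascript
import { deepEqual } from 'node:assert/strict';

// Under strict assertion mode, deepEqual behaves like deepStrictEqual:
// equal-looking values of different types are not considered equal.
let threw = false;
try {
  deepEqual({ a: 1 }, { a: '1' }); // passes in legacy mode, throws in strict mode
} catch (err) {
  threw = true;
}
console.log(threw); // true
```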

    Please refer to the Node.js documentation for more details on the use of the assertion API: https://nodejs.org/dist/latest-v19.x/docs/api/assert.html.


Promisify and callbackify

The promisify and callbackify APIs in Node.js provide a means of bridging between a Promise-based programming model and a callback-based model.

    The promisify method allows taking a Node.js-style callback function and converting it into a Promise-returning async function:

import { promisify } from 'node:util';

function foo(args, callback) {
  try {
    // do the work, then pass the result as the second argument
    callback(null, 1);
  } catch (err) {
    // Errors are emitted to the callback via the first argument.
    callback(err);
  }
}

const promisifiedFoo = promisify(foo);

await promisifiedFoo(args);

    Similarly, callbackify converts a Promise-returning async function into a Node.js-style callback function:

import { callbackify } from 'node:util';

async function foo(args) {
  throw new Error('boom');
}

const callbackifiedFoo = callbackify(foo);

callbackifiedFoo(args, (err, value) => {
  if (err) throw err;
});

    Together these utilities make it easy to properly handle all of the generally tricky nuances involved with properly bridging between callbacks and promises.
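A minimal round-trip sketch illustrates the bridging in both directions (the addOne function here is a hypothetical example, not part of any API):

```javascript
import { promisify, callbackify } from 'node:util';

// A classic Node.js-style callback function...
function addOne(n, callback) {
  callback(null, n + 1);
}

// ...promisified, then wrapped back into callback style.
const addOneAsync = promisify(addOne);
const addOneCb = callbackify(addOneAsync);

const result = await addOneAsync(1);
console.log(result); // 2

addOneCb(2, (err, value) => {
  if (err) throw err;
  console.log(value); // 3
});
```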

    Please refer to the Node.js documentation for more information on how to use these APIs: https://nodejs.org/dist/latest-v19.x/docs/api/util.html#utilcallbackifyoriginal, https://nodejs.org/dist/latest-v19.x/docs/api/util.html#utilpromisifyoriginal.

    Type brand-checking with util.types

    The util.types API provides a reliable and generally more efficient way of checking that values are instances of various built-in types.

import { types } from 'node:util';

types.isAnyArrayBuffer(new ArrayBuffer());  // Returns true
types.isAnyArrayBuffer(new SharedArrayBuffer());  // Returns true
types.isArrayBufferView(new Int8Array());  // true
types.isArrayBufferView(Buffer.from('hello world')); // true
types.isArrayBufferView(new DataView(new ArrayBuffer(16)));  // true
types.isArrayBufferView(new ArrayBuffer());  // false

function foo() {
  types.isArgumentsObject(arguments);  // Returns true
}

types.isAsyncFunction(function foo() {});  // Returns false
types.isAsyncFunction(async function foo() {});  // Returns true

// .. and so on

Please refer to the Node.js documentation for more information on how to use the type check APIs: https://nodejs.org/dist/latest-v19.x/docs/api/util.html#utiltypes. The Workers implementation currently does not provide implementations of the util.types.isExternal(), util.types.isProxy(), util.types.isKeyObject(), or util.types.isWebAssemblyCompiledModule() APIs.

    What’s next

Keep your eyes open for more Node.js core APIs coming to Cloudflare Workers soon! We currently have implementations of the string decoder, streams, and crypto APIs in active development. These will be introduced into the Workers runtime incrementally over time, and any Worker using the nodejs_compat compatibility flag will automatically pick up the new modules as they are added.

    Publishing private npm packages with AWS CodeArtifact

    Post Syndicated from Ryan Sonshine original https://aws.amazon.com/blogs/devops/publishing-private-npm-packages-aws-codeartifact/

    This post demonstrates how to create, publish, and download private npm packages using AWS CodeArtifact, allowing you to share code across your organization without exposing your packages to the public.

    The ability to control CodeArtifact repository access using AWS Identity and Access Management (IAM) removes the need to manage additional credentials for a private npm repository when developers already have IAM roles configured.

    You can use private npm packages for a variety of use cases, such as:

    • Reducing code duplication
    • Configuration such as code linting and styling
    • CLI tools for internal processes

    This post shows how to easily create a sample project in which we publish an npm package and install the package from CodeArtifact. For more information about pipeline integration, see AWS CodeArtifact and your package management flow – Best Practices for Integration.

    Solution overview

    The following diagram illustrates this solution.

    Diagram showing npm package publish and install with CodeArtifact

    In this post, you create a private scoped npm package containing a sample function that can be used across your organization. You create a second project to download the npm package. You also learn how to structure your npm package to make logging in to CodeArtifact automatic when you want to build or publish the package.

    The code covered in this post is available on GitHub:


    Before you begin, you need to complete the following:

    1. Create an AWS account.
2. Install the AWS Command Line Interface (AWS CLI). CodeArtifact is supported in these CLI versions:
  1. AWS CLI version 1: 1.18.83 or later
  2. AWS CLI version 2: 2.0.54 or later
    3. Create a CodeArtifact repository.
    4. Add required IAM permissions for CodeArtifact.

    Creating your npm package

    You can create your npm package in three easy steps: set up the project, create your npm script for authenticating with CodeArtifact, and publish the package.

    Setting up your project

    Create a directory for your new npm package. We name this directory my-package because it serves as the name of the package. We use an npm scope for this package, where @myorg represents the scope all of our organization’s packages are published under. This helps us distinguish our internal private package from external packages. See the following code:

npm init --scope=@myorg -y

{
  "name": "@myorg/my-package",
  "version": "1.0.0",
  "description": "A sample private scoped npm package",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  }
}

    The package.json file specifies that the main file of the package is called index.js. Next, we create that file and add our package function to it:

module.exports.helloWorld = function() {
  console.log('Hello world!');
};

    Creating an npm script

    To create your npm script, complete the following steps:

    1. On the CodeArtifact console, choose the repository you created as part of the prerequisites.

    If you haven’t created a repository, create one before proceeding.

    CodeArtifact repository details console

2. Select your CodeArtifact repository and choose Details to view the additional details for your repository.

We use two items from this page:

• Repository name (my-repo)
• Domain (my-domain)

3. Create a script named co:login in our package.json. The package.json contains the following code:

{
  "name": "@myorg/my-package",
  "version": "1.0.0",
  "description": "A sample private scoped npm package",
  "main": "index.js",
  "scripts": {
    "co:login": "aws codeartifact login --tool npm --repository my-repo --domain my-domain",
    "test": "echo \"Error: no test specified\" && exit 1"
  }
}

    Running this script updates your npm configuration to use your CodeArtifact repository and sets your authentication token, which expires after 12 hours.

4. To test our new script, enter the following command:

npm run co:login

The following code is the output:

> aws codeartifact login --tool npm --repository my-repo --domain my-domain
Successfully configured npm to use AWS CodeArtifact repository https://my-domain-<ACCOUNT ID>.d.codeartifact.us-east-1.amazonaws.com/npm/my-repo/
Login expires in 12 hours at 2020-09-04 02:16:17-04:00

5. Add a prepare script to our package.json to run our login command:

{
  "name": "@myorg/my-package",
  "version": "1.0.0",
  "description": "A sample private scoped npm package",
  "main": "index.js",
  "scripts": {
    "prepare": "npm run co:login",
    "co:login": "aws codeartifact login --tool npm --repository my-repo --domain my-domain",
    "test": "echo \"Error: no test specified\" && exit 1"
  }
}

    This configures our project to automatically authenticate and generate an access token anytime npm install or npm publish run on the project.

If you see an error containing Invalid choice, valid choices are:, you need to update the AWS CLI according to the versions listed in the prerequisites of this post.

    Publishing your package

    To publish our new package for the first time, run npm publish.

    The following screenshot shows the output.

    Terminal showing npm publish output

    If we navigate to our CodeArtifact repository on the CodeArtifact console, we now see our new private npm package ready to be downloaded.

    CodeArtifact console showing published npm package

    Installing your private npm package

    To install your private npm package, you first set up the project and add the CodeArtifact configs. After you install your package, it’s ready to use.

    Setting up your project

Create a directory for a new application and name it my-app. This is a sample project to download our private npm package published in the previous step. You can apply this pattern to all repositories in which you intend to install your organization’s npm packages.

    npm init -y

{
  "name": "my-app",
  "version": "1.0.0",
  "description": "A sample application consuming a private scoped npm package",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  }
}

    Adding CodeArtifact configs

    Copy the npm scripts prepare and co:login created earlier to your new project:

{
  "name": "my-app",
  "version": "1.0.0",
  "description": "A sample application consuming a private scoped npm package",
  "main": "index.js",
  "scripts": {
    "prepare": "npm run co:login",
    "co:login": "aws codeartifact login --tool npm --repository my-repo --domain my-domain",
    "test": "echo \"Error: no test specified\" && exit 1"
  }
}

    Installing your new private npm package

    Enter the following command:

    npm install @myorg/my-package

    Your package.json should now list @myorg/my-package in your dependencies:

{
  "name": "my-app",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "prepare": "npm run co:login",
    "co:login": "aws codeartifact login --tool npm --repository my-repo --domain my-domain",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "dependencies": {
    "@myorg/my-package": "^1.0.0"
  }
}

    Using your new npm package

    In our my-app application, create a file named index.js to run code from our package containing the following:

const { helloWorld } = require('@myorg/my-package');

helloWorld();

    Run node index.js in your terminal to see the console print the message from our @myorg/my-package helloWorld function.

    Cleaning Up

    If you created a CodeArtifact repository for the purposes of this post, use one of the following methods to delete the repository:

Remove the changes made to your user profile’s npm configuration by running npm config delete registry. This removes the CodeArtifact repository from being set as your default npm registry.


Conclusion

In this post, you successfully published a private scoped npm package stored in CodeArtifact, which you can reuse across multiple teams and projects within your organization. You can use npm scripts to streamline the authentication process and apply this pattern to save time.

    About the Author

    Ryan Sonshine

    Ryan Sonshine is a Cloud Application Architect at Amazon Web Services. He works with customers to drive digital transformations while helping them architect, automate, and re-engineer solutions to fully leverage the AWS Cloud.



    Building Serverless Land: Part 2 – An auto-building static site

    Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/building-serverless-land-part-2-an-auto-building-static-site/

    In this two-part blog series, I show how serverlessland.com is built. This is a static website that brings together all the latest blogs, videos, and training for AWS serverless. It automatically aggregates content from a number of sources. The content exists in a static JSON file, which generates a new static site each time it is updated. The result is a low-maintenance, low-latency serverless website, with almost limitless scalability.

    A companion blog post explains how to build an automated content aggregation workflow to create and update the site’s content. In this post, you learn how to build a static website with an automated deployment pipeline that re-builds on each GitHub commit. The site content is stored in JSON files in the same repository as the code base. The example code can be found in this GitHub repository.

    The growing adoption of serverless technologies generates increasing amounts of helpful and insightful content from the developer community. This content can be difficult to discover. Serverless Land helps channel this into a single searchable location. By collating this into a static website, users can enjoy a browsing experience with fast page load speeds.

    The serverless nature of the site means that developers don’t need to manage infrastructure or scalability. The use of AWS Amplify Console to automatically deploy directly from GitHub enables a regular release cadence with a fast transition from prototype to production.

    Static websites

A static site is served to the user’s web browser exactly as stored. This contrasts with dynamic webpages, which are generated by a web application. Static websites often provide improved performance for end users and have fewer or no dependent systems, such as databases or application servers. They may also be more cost-effective and secure than dynamic websites by using cloud storage, instead of a hosted environment.

    A static site generator is a tool that generates a static website from a website’s configuration and content. Content can come from a headless content management system, through a REST API, or from data referenced within the website’s file system. The output of a static site generator is a set of static files that form the website.
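The core idea can be sketched in a few lines of JavaScript. This is a toy illustration only (not Nuxt.js, and the record shape is invented): a generator maps content records plus a template to ready-to-serve HTML.

```javascript
// Toy static site generator: content records in, HTML pages out.
const posts = [
  { slug: 'hello', title: 'Hello Serverless' },
  { slug: 'lambda', title: 'All About Lambda' },
];

// The "template": a function from a content record to an HTML string.
const render = (post) =>
  `<html><body><h1>${post.title}</h1></body></html>`;

// One output file per content record; a real generator would write
// these to a dist directory for a CDN or an S3 bucket to serve.
const pages = Object.fromEntries(
  posts.map((p) => [`${p.slug}.html`, render(p)])
);

console.log(Object.keys(pages)); // [ 'hello.html', 'lambda.html' ]
```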

    Serverless Land uses a static site generator for Vue.js called Nuxt.js. Each time content is updated, Nuxt.js regenerates the static site, building the HTML for each page route and storing it in a file.

    The architecture

    Serverless Land static website architecture

    When the content.json file is committed to GitHub, a new build process is triggered in AWS Amplify Console.

    Deploying AWS Amplify

    AWS Amplify helps developers to build secure and scalable full stack cloud applications. AWS Amplify Console is a tool within Amplify that provides a user interface with a git-based workflow for hosting static sites. Deploy applications by connecting to an existing repository (GitHub, BitBucket Cloud, GitLab, and AWS CodeCommit) to set up a fully managed, nearly continuous deployment pipeline.

    This means that any changes committed to the repository trigger the pipeline to build, test, and deploy the changes to the target environment. It also provides instant content delivery network (CDN) cache invalidation, atomic deploys, password protection, and redirects without the need to manage any servers.

    Building the static website

1. To get started, use the Nuxt.js scaffolding tool to deploy a boilerplate application. Make sure you have npx installed (npx is shipped by default with npm version 5.2.0 and above).
      $ npx create-nuxt-app content-aggregator

  The scaffolding tool asks some questions; answer as follows:

  Nuxt.js scaffolding tool inputs

    2. Navigate to the project directory and launch it with:
      $ cd content-aggregator
      $ npm run dev

  The application is now running on http://localhost:3000.

  The pages directory contains your application views and routes. Nuxt.js reads the .vue files inside this directory and automatically creates the router configuration.

3. Create a new file in the /pages directory named blogs.vue:

  $ touch pages/blogs.vue
    4. Copy the contents of this file into pages/blogs.vue.
5. Create a new file in the /components directory named Post.vue:

  $ touch components/Post.vue
    6. Copy the contents of this file into components/Post.vue.
7. Create a new file in /assets named content.json and copy the contents of this file into it:

  $ touch assets/content.json

    The blogs Vue component

    The blogs page is a Vue component with some special attributes and functions added to make development of your application easier. The following code imports the content.json file into the variable blogPosts. This file stores the static website’s array of aggregated blog post content.

    import blogPosts from '../assets/content.json'

    An array named blogPosts is initialized:

          blogPosts: []

    The array is then loaded with the contents of content.json.

        this.blogPosts = blogPosts

    In the component template, the v-for directive renders a list of post items based on the blogPosts array. It requires a special syntax in the form of blog in blogPosts, where blogPosts is the source data array and blog is an alias for the array element being iterated on. The Post component is rendered for each iteration. Since components have isolated scopes of their own, a :post prop is used to pass the iterated data into the Post component:

<li v-for="blog in blogPosts" :key="blog">
   <Post :post="blog" />
</li>

    The post data is then displayed by the following template in components/Post.vue.

<div class="hello">
  <h3>{{ post.title }}</h3>
  <div class="img-holder">
    <img :src="post.image" />
  </div>
  <p>{{ post.intro }}</p>
  <p>Published on {{ post.date }}, by {{ post.author }}</p>
  <a :href="post.link">Read article</a>
</div>

    This forms the framework for the static website. The /blogs page displays content from /assets/content.json via the Post component. To view this, go to http://localhost:3000/blogs in your browser:

    The /blogs page

    Add a new item to the content.json file and rebuild the static website to display new posts on the blogs page. The previous content was generated using the aggregation workflow explained in this companion blog post.

    Connect to Amplify Console

    Clone the web application to a GitHub repository and connect it to Amplify Console to automate the rebuild and deployment process:

    1. Upload the code to a new GitHub repository named ‘content-aggregator’.
    2. In the AWS Management Console, go to the Amplify Console and choose Connect app.
    3. Choose GitHub then Continue.
    4. Authorize to your GitHub account, then in the Recently updated repositories drop-down select the ‘content-aggregator’ repository.
    5. In the Branch field, leave the default as master and choose Next.
    6. In the Build and test settings choose edit.
7. Replace - npm run build with - npm run generate.
8. Replace baseDirectory: / with baseDirectory: dist.

  This runs the nuxt generate command each time an application build process is triggered. The nuxt.config.js file has its target property set to static. This generates the web application into static files. Nuxt.js creates a dist directory with everything inside ready to be deployed on a static hosting service.
    9. Choose Save then Next.
    10. Review the Repository details and App settings are correct. Choose Save and deploy.

      Amplify Console deployment

    Once the deployment process has completed and is verified, choose the URL generated by Amplify Console. Append /blogs to the URL, to see the static website blogs page.

    Any edits pushed to the repository’s content.json file trigger a new deployment in Amplify Console that regenerates the static website. This companion blog post explains how to set up an automated content aggregator to add new items to the content.json file from an RSS feed.


This blog post shows how to create a static website with Vue.js using the Nuxt.js static site generator. The site’s content is generated from a single JSON file, stored in the site’s assets directory. It is automatically deployed and regenerated by Amplify Console each time a new commit is pushed to the GitHub repository. By automating updates to the content.json file, you can create low-maintenance, low-latency static websites with almost limitless scalability.

    This application framework is used together with this automated content aggregator to pull together articles for http://serverlessland.com. Serverless Land brings together all the latest blogs, videos, and training for AWS Serverless. Download the code from this GitHub repository to start building your own automated content aggregation platform.

    Building, bundling, and deploying applications with the AWS CDK

    Post Syndicated from Cory Hall original https://aws.amazon.com/blogs/devops/building-apps-with-aws-cdk/

    The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework to model and provision your cloud application resources using familiar programming languages.

    The post CDK Pipelines: Continuous delivery for AWS CDK applications showed how you can use CDK Pipelines to deploy a TypeScript-based AWS Lambda function. In that post, you learned how to add additional build commands to the pipeline to compile the TypeScript code to JavaScript, which is needed to create the Lambda deployment package.

    In this post, we dive deeper into how you can perform these build commands as part of your AWS CDK build process by using the native AWS CDK bundling functionality.

    If you’re working with Python, TypeScript, or JavaScript-based Lambda functions, you may already be familiar with the PythonFunction and NodejsFunction constructs, which use the bundling functionality. This post describes how to write your own bundling logic for instances where a higher-level construct either doesn’t already exist or doesn’t meet your needs. To illustrate this, I walk through two different examples: a Lambda function written in Golang and a static site created with Nuxt.js.


    A typical CI/CD pipeline contains steps to build and compile your source code, bundle it into a deployable artifact, push it to artifact stores, and deploy to an environment. In this post, we focus on the building, compiling, and bundling stages of the pipeline.

    The AWS CDK has the concept of bundling source code into a deployable artifact. As of this writing, this works for two main types of assets: Docker images published to Amazon Elastic Container Registry (Amazon ECR) and files published to Amazon Simple Storage Service (Amazon S3). For files published to Amazon S3, this can be as simple as pointing to a local file or directory, which the AWS CDK uploads to Amazon S3 for you.

    When you build an AWS CDK application (by running cdk synth), a cloud assembly is produced. The cloud assembly consists of a set of files and directories that define your deployable AWS CDK application. In the context of the AWS CDK, it might include the following:

    • AWS CloudFormation templates and instructions on where to deploy them
    • Dockerfiles, corresponding application source code, and information about where to build and push the images to
    • File assets and information about which S3 buckets to upload the files to

    Use case

    For this use case, our application consists of front-end and backend components. The example code is available in the GitHub repo. In the repository, I have split the example into two separate AWS CDK applications. The repo also contains the Golang Lambda example app and the Nuxt.js static site.

    Golang Lambda function

    To create a Golang-based Lambda function, you must first create a Lambda function deployment package. For Go, this consists of a .zip file containing a Go executable. Because we don’t commit the Go executable to our source repository, our CI/CD pipeline must perform the necessary steps to create it.

    In the context of the AWS CDK, when we create a Lambda function, we have to tell the AWS CDK where to find the deployment package. See the following code:

    new lambda.Function(this, 'MyGoFunction', {
      runtime: lambda.Runtime.GO_1_X,
      handler: 'main',
      code: lambda.Code.fromAsset(path.join(__dirname, 'folder-containing-go-executable')),
    });

    In the preceding code, the lambda.Code.fromAsset() method tells the AWS CDK where to find the Golang executable. When we run cdk synth, it stages this Go executable in the cloud assembly, which it zips and publishes to Amazon S3 as part of the PublishAssets stage.

    If we’re running the AWS CDK as part of a CI/CD pipeline, this executable doesn’t exist yet, so how do we create it? One method is CDK bundling. The lambda.Code.fromAsset() method takes a second optional argument, AssetOptions, which contains the bundling parameter. With this bundling parameter, we can tell the AWS CDK to perform steps prior to staging the files in the cloud assembly.

    Breaking down the BundlingOptions parameter further, we can perform the build inside a Docker container or locally.

    Building inside a Docker container

    For this to work, we need to make sure that we have Docker running on our build machine. In AWS CodeBuild, this means setting privileged: true. See the following code:

    new lambda.Function(this, 'MyGoFunction', {
      code: lambda.Code.fromAsset(path.join(__dirname, 'folder-containing-source-code'), {
        bundling: {
          image: lambda.Runtime.GO_1_X.bundlingDockerImage,
          command: [
            'bash', '-c', [
              'go test -v',
              'GOOS=linux go build -o /asset-output/main',
            ].join(' && '),
          ],
        },
      }),
    });

    We specify two parameters:

    • image (required) – The Docker image to perform the build commands in
    • command (optional) – The command to run within the container

    The AWS CDK mounts the folder specified as the first argument to fromAsset at /asset-input inside the container, and mounts the asset output directory (where the cloud assembly is staged) at /asset-output inside the container.

    After we perform the build commands, we need to make sure we copy the Golang executable to the /asset-output location (or specify it as the build output location like in the preceding example).

    This is the equivalent of running something like the following code:

    docker run \
      --rm \
      -v folder-containing-source-code:/asset-input \
      -v cdk.out/asset.1234a4b5/:/asset-output \
      lambci/lambda:build-go1.x \
      bash -c 'GOOS=linux go build -o /asset-output/main'

    Building locally

    To build locally (not in a Docker container), we have to provide the local parameter. See the following code:

    new lambda.Function(this, 'MyGoFunction', {
      code: lambda.Code.fromAsset(path.join(__dirname, 'folder-containing-source-code'), {
        bundling: {
          image: lambda.Runtime.GO_1_X.bundlingDockerImage,
          command: [],
          local: {
            tryBundle(outputDir: string) {
              try {
                spawnSync('go version');
              } catch {
                return false;
              }
              spawnSync(`GOOS=linux go build -o ${path.join(outputDir, 'main')}`);
              return true;
            },
          },
        },
      }),
    });

    The local parameter must implement the ILocalBundling interface. The tryBundle method is passed the asset output directory, and expects you to return a boolean (true or false). If you return true, the AWS CDK doesn’t try to perform Docker bundling. If you return false, it falls back to Docker bundling. Just like with Docker bundling, you must make sure that you place the Go executable in the outputDir.

    Typically, you should perform some validation steps to ensure that you have the required dependencies installed locally to perform the build. This could be checking to see if you have go installed, or checking a specific version of go. This can be useful if you don’t have control over what type of build environment this might run in (for example, if you’re building a construct to be consumed by others).

    If we run cdk synth on this, we see a new message telling us that the AWS CDK is bundling the asset. If we include additional commands like go test, we also see the output of those commands. This is especially useful if you want to fail the build when tests fail. See the following code:

    $ cdk synth
    Bundling asset GolangLambdaStack/MyGoFunction/Code/Stage...
    ✓  . (9ms)
    ✓  clients (5ms)
    DONE 8 tests in 11.476s
    ✓  clients (5ms) (coverage: 84.6% of statements)
    ✓  . (6ms) (coverage: 78.4% of statements)
    DONE 8 tests in 2.464s

    Cloud Assembly

    If we look at the cloud assembly that was generated (located at cdk.out), we see something like the following:

    $ ls cdk.out
    GolangLambdaStack.assets.json
    GolangLambdaStack.template.json
    asset.01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952

    It contains our GolangLambdaStack CloudFormation template that defines our Lambda function, as well as our Golang executable, bundled at asset.01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952/main.

    Let’s look at how the AWS CDK uses this information. The GolangLambdaStack.assets.json file contains all the information necessary for the AWS CDK to know where and how to publish our assets (in this use case, our Golang Lambda executable). See the following code:

      "version": "5.0.0",
      "files": {
        "01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952": {
          "source": {
            "path": "asset.01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952",
            "packaging": "zip"
          "destinations": {
            "current_account-current_region": {
              "bucketName": "cdk-hnb659fds-assets-${AWS::AccountId}-${AWS::Region}",
              "objectKey": "01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952.zip",
              "assumeRoleArn": "arn:${AWS::Partition}:iam::${AWS::AccountId}:role/cdk-hnb659fds-file-publishing-role-${AWS::AccountId}-${AWS::Region}"

    The file contains information about where to find the source files (source.path) and what type of packaging (source.packaging). It also tells the AWS CDK where to publish this .zip file (bucketName and objectKey) and what AWS Identity and Access Management (IAM) role to use (assumeRoleArn). In this use case, we only deploy to a single account and Region, but if you have multiple accounts or Regions, you see multiple destinations in this file.
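To illustrate how a manifest of this shape can be consumed, here is a hedged sketch (the truncated hash, the resolved bucket name, and the function are made up for the example; this is not the CDK's actual publisher code): it walks every file and collects each destination bucket/key pair.

```javascript
// Manifest with the same shape as GolangLambdaStack.assets.json above,
// with an abbreviated hash and a made-up resolved bucket name.
const manifest = {
  version: '5.0.0',
  files: {
    '01cf34ff': {
      source: { path: 'asset.01cf34ff', packaging: 'zip' },
      destinations: {
        'current_account-current_region': {
          bucketName: 'cdk-hnb659fds-assets-123456789012-us-east-1',
          objectKey: '01cf34ff.zip'
        }
      }
    }
  }
};

// For every file, collect each destination to publish to. With multiple
// accounts or Regions, each file would yield several targets.
function publishTargets(manifest) {
  const targets = [];
  for (const [id, file] of Object.entries(manifest.files)) {
    for (const dest of Object.values(file.destinations)) {
      targets.push({ id, source: file.source.path, bucket: dest.bucketName, key: dest.objectKey });
    }
  }
  return targets;
}

const targets = publishTargets(manifest);
console.log(targets.length); // 1 destination in this single-account setup
```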

    The GolangLambdaStack.template.json file that defines our Lambda resource looks something like the following code:

      "Resources": {
        "MyGoFunction0AB33E85": {
          "Type": "AWS::Lambda::Function",
          "Properties": {
            "Code": {
              "S3Bucket": {
                "Fn::Sub": "cdk-hnb659fds-assets-${AWS::AccountId}-${AWS::Region}"
              "S3Key": "01cf34ff646d380829dc4f2f6fc93995b13277bde7db81c24ac8500a83a06952.zip"
            "Handler": "main",

    The S3Bucket and S3Key match the bucketName and objectKey from the assets.json file. By default, the S3Key is generated by calculating a hash of the folder location that you pass to lambda.Code.fromAsset() (for this post, folder-containing-source-code). This means that any time we update our source code, this calculated hash changes and a new Lambda function deployment is triggered.

    Nuxt.js static site

    In this section, I walk through building a static site using the Nuxt.js framework. You can apply the same logic to any static site framework that requires you to run a build step prior to deploying.

    To deploy this static site, we use the BucketDeployment construct. This is a construct that allows you to populate an S3 bucket with the contents of .zip files from other S3 buckets or from a local disk.

    Typically, we simply tell the BucketDeployment construct where to find the files that it needs to deploy to the S3 bucket. See the following code:

    new s3_deployment.BucketDeployment(this, 'DeployMySite', {
      sources: [
        s3_deployment.Source.asset(path.join(__dirname, 'path-to-directory')),
      ],
      destinationBucket: myBucket,
    });

    To deploy a static site built with a framework like Nuxt.js, we need to first run a build step to compile the site into something that can be deployed. For Nuxt.js, we run the following two commands:

    • yarn install – Installs all our dependencies
    • yarn generate – Builds the application and generates every route as an HTML file (used for static hosting)

    This creates a dist directory, which you can deploy to Amazon S3.

    Just like with the Golang Lambda example, we can perform these steps as part of the AWS CDK through either local or Docker bundling.

    Building inside a Docker container

    To build inside a Docker container, use the following code:

    new s3_deployment.BucketDeployment(this, 'DeployMySite', {
      sources: [
        s3_deployment.Source.asset(path.join(__dirname, 'path-to-nuxtjs-project'), {
          bundling: {
            image: cdk.BundlingDockerImage.fromRegistry('node:lts'),
            command: [
              'bash', '-c', [
                'yarn install',
                'yarn generate',
                'cp -r /asset-input/dist/* /asset-output/',
              ].join(' && '),
            ],
          },
        }),
      ],
      destinationBucket: myBucket,
    });

    For this post, we build inside the publicly available node:lts image hosted on DockerHub. Inside the container, we run our build commands yarn install && yarn generate, and copy the generated dist directory to our output directory (the cloud assembly).

    The parameters are the same as described in the Golang example we walked through earlier.

    Building locally

    To build locally, use the following code:

    new s3_deployment.BucketDeployment(this, 'DeployMySite', {
      sources: [
        s3_deployment.Source.asset(path.join(__dirname, 'path-to-nuxtjs-project'), {
          bundling: {
            local: {
              tryBundle(outputDir: string) {
                try {
                  spawnSync('yarn --version');
                } catch {
                  return false;
                }
                spawnSync('yarn install && yarn generate');
                fs.copySync(path.join(__dirname, 'path-to-nuxtjs-project', 'dist'), outputDir);
                return true;
              },
            },
            image: cdk.BundlingDockerImage.fromRegistry('node:lts'),
            command: [],
          },
        }),
      ],
      destinationBucket: myBucket,
    });

    Building locally works the same as in the Golang example we walked through earlier, with one exception: we have one additional command that copies the generated dist folder to our output directory (the cloud assembly).


    This post showed how you can easily compile your backend and front-end applications using the AWS CDK. You can find the example code for this post in this GitHub repo. If you have any questions or comments, please comment on the GitHub repo. If you have any additional examples you want to add, we encourage you to create a Pull Request with your example!

    Our code also contains examples of deploying the applications using CDK Pipelines, so if you’re interested in deploying the example yourself, check out the example repo.


    About the author

    Cory Hall

    Cory is a Solutions Architect at Amazon Web Services with a passion for DevOps and is based in Charlotte, NC. Cory works with enterprise AWS customers to help them design, deploy, and scale applications to achieve their business goals.

    Yahoo Mail’s New Tech Stack, Built for Performance and Reliability

    Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/162320493306

    By Suhas Sadanandan, Director of Engineering 

    When it comes to performance and reliability, there is perhaps no application where this matters more than with email. Today, we announced a new Yahoo Mail experience for desktop based on a completely rewritten tech stack that embodies these fundamental considerations and more.

    We built the new Yahoo Mail experience using a best-in-class front-end tech stack with open source technologies including React, Redux, Node.js, react-intl (open-sourced by Yahoo), and others. A high-level architectural diagram of our stack is below.


    New Yahoo Mail Tech Stack

    In building our new tech stack, we made use of the most modern tools available in the industry to come up with the best experience for our users by optimizing the following fundamentals:


    A key feature of the new Yahoo Mail architecture is blazing-fast initial loading (aka, launch).

    We introduced new network routing which sends users to their nearest geo-located email servers (proximity-based routing). This has resulted in a significant reduction in time to first byte and should be immediately noticeable to our international users in particular.

    We now do server-side rendering to allow our users to see their mail sooner. This change will be immediately noticeable to our low-bandwidth users. Our application is isomorphic, meaning that the same code runs on the server (using Node.js) and the client. Prior versions of Yahoo Mail had programming logic duplicated on the server and the client because we used PHP on the server and JavaScript on the client.   

    Using efficient bundling strategies (JavaScript code is separated into application, vendor, and lazy loaded bundles) and pushing only the changed bundles during production pushes, we keep the cache hit ratio high. By using react-atomic-css, our homegrown solution for writing modular and scoped CSS in React, we get much better CSS reuse.  

    In prior versions of Yahoo Mail, the need to run various experiments in parallel resulted in additional branching and bloating of our JavaScript and CSS code. While rewriting all of our code, we solved this issue using Mendel, our homegrown solution for bucket testing isomorphic web apps, which we have open sourced.  

    Rather than using custom libraries, we use native HTML5 APIs and ES6 heavily and use PolyesterJS, our homegrown polyfill solution, to fill the gaps. These factors have further helped us to keep payload size minimal.

    With all the above optimizations, we have been able to reduce our JavaScript and CSS footprint by approximately 50% compared to the previous desktop version of Yahoo Mail, helping us achieve a blazing-fast launch.

    In addition to initial launch improvements, key features like search and message read (when a user opens an email to read it) have also benefited from the above optimizations and are considerably faster in the latest version of Yahoo Mail.

    We also significantly reduced the memory consumed by Yahoo Mail on the browser. This is especially noticeable during a long running session.


    With this new version of Yahoo Mail, we have a 99.99% success rate on core flows: launch, message read, compose, search, and actions that affect messages. Accomplishing this over several billion user actions a day is a significant feat. Client-side errors (JavaScript exceptions) are reduced significantly when compared to prior Yahoo Mail versions.

    Product agility and launch velocity

    We focused on independently deployable components. As part of the re-architecture of Yahoo Mail, we invested in a robust continuous integration and delivery flow. Our new pipeline allows for daily (or more) pushes to all Mail users, and we push only the bundles that are modified, which keeps the cache hit ratio high.

    Developer effectiveness and satisfaction

    In developing our tech stack for the new Yahoo Mail experience, we heavily leveraged open source technologies, which allowed us to ensure a shorter learning curve for new engineers. We were able to implement a consistent and intuitive onboarding program for 30+ developers and are now using our program for all new hires. During the development process, we emphasized predictable flows and easy debugging.


    The accessibility of this new version of Yahoo Mail is state of the art and delivers outstanding usability (efficiency) in addition to accessibility. It features six enhanced visual themes that can provide accommodation for people with low vision and has been optimized for use with Assistive Technology including alternate input devices, magnifiers, and popular screen readers such as NVDA and VoiceOver. These features have been rigorously evaluated and incorporate feedback from users with disabilities. It sets a new standard for the accessibility of web-based mail and is our most-accessible Mail experience yet.

    Open source 

    We have open sourced some key components of our new Mail stack, like Mendel, our solution for bucket testing isomorphic web applications. We invite the community to use and build upon our code. Going forward, we plan on also open sourcing additional components like react-atomic-css, our solution for writing modular and scoped CSS in React, and lazy-component, our solution for on-demand loading of resources.

    Many of our company’s best technical minds came together to write a brand new tech stack and enable a delightful new Yahoo Mail experience for our users.

    We encourage our users and engineering peers in the industry to test the limits of our application, and to provide feedback by clicking on the Give Feedback call out in the lower left corner of the new version of Yahoo Mail.

    UNIFICli – a CLI tool to manage Ubiquiti’s Unifi Controller

    Post Syndicated from Anonymous original http://deliantech.blogspot.com/2017/04/unificli-cli-tool-to-manage-ubiquitis.html

    As mentioned earlier, I made a nodejs library interface to the Ubiquiti Unifi Controller’s REST API which is available here – https://github.com/delian/node-unifiapi
    Now I am introducing a small demo CLI interface, which uses that same library to remotely connect to and configure a Ubiquiti Unifi Controller (or Ubiquiti UC-CK Cloud Key).
    This software is available on GitHub here – https://github.com/delian/unificli – and its main goal for me is to be able to test the node-unifiapi library. The calls and parameters are almost 1:1 with the library, and this small tool provides a good example of how such tools can be built.
    This tool is able to connect to a controller either via a direct HTTPS connection or via WebRTC through Ubiquiti's Unifi Cloud network (which they call SDN). There is also a command you can use to connect to a wireless access point via SSH over WebRTC.
    The tool is not complete, nor does it have a fixed goal. Feel free to fix bugs, extend it with features, or provide suggestions. Any help with the development will be appreciated.
    Commands can be executed via the cli too:

    npm start connectSSH 00:01:02:03:04:05 -l unifikeylocation

    node-unifiapi – NodeJS API to access Ubiquiti Unifi Controller API

    Post Syndicated from Anonymous original http://deliantech.blogspot.com/2017/03/node-unifiapi-nodejs-api-to-access.html

    I have completed the initial rewrite of the PHP based UniFi-API-Browser to JavaScript for NodeJS.
    The module and some auto-generated documentation with some basic examples are available here:
    The supported features include all of the UniFi-API-Browser calls via direct REST HTTP calls or via WebRTC (using Ubiquiti Unifi Cloud), as well as SSH access to Unifi Access Points via WebRTC.

    Passport-http Digest Authentication and Express 4+ bug fix

    Post Syndicated from Anonymous original http://deliantech.blogspot.com/2015/10/passport-http-digest-authentication-and.html

    Passport is a quite popular environment for implementing authentication under Node.js and the Express framework.
    Passport-http is a plug-in that provides Digest Authentication interfaces and is very popular with Passport.
    However, since Express 4, the default approach is for Express routes to be relative.
    Example –
    A default Express <=3 application assumed that every file that extends Express with routes should specify the full path to the route, in an approach similar to this:
    Where users.js does so as well:
    The default app of Express 4+ uses relative routes, which is better, as it allows full isolation between the modules of the application.
    Where rest.js has:
    router = require('express').Router();
    router.get('/data', function)… // where /data is a relative URL and the actual one will be /rest/data
    This improves readability. It also allows you to separate the authentication: you can have a different authentication approach or configuration for each module. And that works well with Passport.
    For example:
    var DigestStrategy = require('passport-http').DigestStrategy;
    … here there should be code for an authentication function using Digest …
    and then:
    router.get('/data', authentication, function) …
    This simplifies the code, makes it more readable, and isolates the code necessary for authentication.
    Personally, I write my own authentication functions in a separate module, then I include them in the Express route module where I want to use them, and it becomes even simpler:
    var auth = require('../libs/auth.js');
    router.get('/data', auth('admins'), function) …
    I can even apply different permissions and roles – if you have a pre-authenticated session, the interface will not ask you for authentication (saving one RTT), but if you don't, it will ask you for digest authentication. Quite simple and quite readable.
    However, all this does not work with Passport-http, because of a very small bug within.
    The bug:
    For security reasons, the passport-http module verifies that the authentication URI from the client request is the same as the URL that requested authentication. However, the authentication URI (creds.uri) is always a full path, while it is compared to req.url, which is always a relative path. The comparison has to be between creds.uri and req.baseUrl + req.url.
    And this is my fix proposed to the author of passport-http, which I hope will be merged with the code.
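The comparison fix described above can be illustrated with a minimal standalone sketch (the function name is mine, not passport-http's internals): the Digest "uri" field carries the full path, so it must be checked against req.baseUrl + req.url.

```javascript
// Sketch of the corrected URI check. The buggy version compared
// credsUri === req.url, which fails for mounted routers because
// req.url is relative (e.g. '/data' under the '/rest' mount point).
function uriMatches(credsUri, req) {
  return credsUri === (req.baseUrl || '') + req.url;
}

const req = { baseUrl: '/rest', url: '/data' };
console.log(uriMatches('/rest/data', req)); // true with the fix
console.log('/rest/data' === req.url);      // false: the buggy comparison
```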

    EmbeddedJS – async reimplementation

    Post Syndicated from Anonymous original http://deliantech.blogspot.com/2015/09/embeddedjs-async-reimplementation.html

    I like using Embedded JS together with RequireJS in very small projects. It is very small, lightning fast, and extremely simple, powerful, and extensible.
    However, there are hundreds of implementations, most of them out of date, and the particular implementation I like is not supported anymore. By that I mean not only that the author is no longer supporting it, but also that it works unpredictably in some browsers, because it relies on synchronous XMLHttpRequest, which is not allowed in the main thread anymore.
    So I decided to rewrite that EJS implementation myself, in a way I could use it in async mode.
    So allow me to introduce the new Async Embedded JS implementation, which is here at https://github.com/delian/embeddedjs
    It supports Node, AMD (RequireJS), and globals (like the original) and detects them automatically.
    A little documentation can be found here https://github.com/delian/embeddedjs/blob/master/README.md
    The new code is written in ES5 and uses the new Function method from ES6. That makes it work only in modern browsers (IE10+), but it also makes it really fast – by current estimates, about twice as fast as the original.
    It is still a work in progress (no avoidance of cached URLs, for example), but it works perfectly fine for me.
    If you use it and hit a bug, please report it on the Issues page on GitHub.
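The technique that makes this fast can be sketched in a few lines (a toy compiler, not the actual embeddedjs API): the template is translated once into a function body and compiled with new Function, so every render afterwards is just a plain function call.

```javascript
// Toy EJS-style compiler: translate the template into a string-concatenation
// function body and compile it once with new Function. Illustrative only.
function compile(template) {
  // Turns `Hello <%= name %>` into: return 'Hello ' + (data.name) + '';
  const body = "return '" +
    template
      .replace(/'/g, "\\'")
      .replace(/<%=\s*(.+?)\s*%>/g, "' + (data.$1) + '") +
    "';";
  return new Function('data', body);
}

const render = compile('Hello <%= name %>!');
console.log(render({ name: 'world' })); // Hello world!
```

Because compilation happens once, repeated renders avoid both eval overhead and the synchronous XHR problem mentioned above (templates can be fetched asynchronously before compiling).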

    Node.JS module to access Cisco IOS XR XML interface

    Post Syndicated from Anonymous original http://deliantech.blogspot.com/2015/03/nodejs-module-to-access-cisco-ios-xr.html

    Hello to all,
    This is the early version of my module for Node.JS that allows configuring routers and retrieving information over Cisco IOS XR’s XML interface.
    The module is in its early phases – it still does not read IOS XR schema files and therefore decodes the data (in JSON) in a somewhat ugly way (too many arrays). I am planning to fix that, so there may be changes in the responses.
    Please see below the first version of the documentation I've put up on GitHub:

    Module for Cisco XML API interface IOS XR

    This is a small module that implements interface to Cisco IOS XR XML Interface.
    This module opens and maintains a TCP session to the router, sends requests, and receives responses.


    To install the module, do something like this:
    npm install node-ciscoxml


    It is very easy to use this module. See the methods below:

    Load the module

    To load and use the module, you have to use a code similar to this:
    var cxml = require('node-ciscoxml');
    var c = cxml( { ...connect options.. });

    Module init and connect options

    • host – the hostname of the router we'll connect to
    • port (default 38751) – the port of the router where the XML API is listening
    • username (default guest) – the username used for authentication, if a username is requested by the remote side
    • password (default guest) – the password used for authentication, if a password is requested by the remote side
    • connectErrCnt (default 3) – how many times it will retry to connect in case of an error
    • autoConnect (default true) – whether it should try to auto connect to the remote side if a request is dispatched and there is no open session already
    • autoDisconnect (default 60000) – how many milliseconds to wait for another request before the TCP session to the remote side is closed. If the value is 0, it will wait forever (or until the remote side disconnects). Bear in mind that autoConnect set to false does not imply autoDisconnect set to 0/false as well.
    • userPromptRegex (default (Username|Login)) – the rule used to identify that the remote side is requesting a username
    • passPromptRegex (default Password) – the rule used to identify that the remote side is requesting a password
    • xmlPromptRegex (default XML>) – the rule used to identify a successful login/connection
    • noDelay (default true) – disables the Nagle algorithm (true)
    • keepAlive (default 30000) – enables or disables (value of 0) TCP keepalive for the socket
    • ssl (default false) – if it is set to true or an object, an SSL session will be opened. The Node.js TLS module is used for that, so if ssl points to an object, the TLS options are taken from it. Be careful – enabling SSL does not change the default port from 38751 to 38752. You have to set it explicitly!
    var cxml = require('node-ciscoxml');
    var c = cxml( {
        host: '',
        port: 5000,
        username: 'xmlapi',
        password: 'xmlpass'
    });
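As an illustration, applying the documented defaults could be sketched like this (a shallow merge with a made-up host name; this is not the module's actual internals): user-supplied options simply override the defaults listed above.

```javascript
// Defaults taken from the option list above; the merge strategy itself
// is an assumption for illustration, not node-ciscoxml's real code.
const defaults = {
  port: 38751,
  username: 'guest',
  password: 'guest',
  connectErrCnt: 3,
  autoConnect: true,
  autoDisconnect: 60000,
  noDelay: true,
  keepAlive: 30000,
  ssl: false
};

function withDefaults(options) {
  // Shallow merge: anything the caller passes wins over the default.
  return Object.assign({}, defaults, options || {});
}

const opts = withDefaults({ host: 'router.example.net', username: 'xmlapi' });
console.log(opts.port);     // 38751 (default kept)
console.log(opts.username); // xmlapi (overridden)
```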

    connect method

    This method explicitly forces a connection. It accepts any of the options above.
    var cxml = require('node-ciscoxml');
    var c = cxml();
    c.connect( {
        host: '',
        port: 5000,
        username: 'xmlapi',
        password: 'xmlpass'
    });
    The connect method is not necessary to be used. If autoConnect is enabled (default) the module will automatically open and close tcp connections when needed.
    Connect supports callback. Example:
    var cxml = require('node-ciscoxml');
    cxml().connect( {
        host: '',
        port: 5000,
        username: 'xmlapi',
        password: 'xmlpass'
    }, function(err) {
        if (!err)
            console.log('Successful connection');
    });
    The callback may be the only parameter as well. Example:
    var cxml = require('node-ciscoxml');
    cxml({
        host: '',
        port: 5000,
        username: 'xmlapi',
        password: 'xmlpass'
    }).connect(function(err) {
        if (!err)
            console.log('Successful connection');
    });
    Example with SSL:
    var cxml = require('node-ciscoxml');
    var fs = require('fs');
    cxml({
        host: '',
        port: 38752,
        username: 'xmlapi',
        password: 'xmlpass',
        ssl: {
            // These are necessary only if using client certificate authentication
            key: fs.readFileSync('client-key.pem'),
            cert: fs.readFileSync('client-cert.pem'),
            // This is necessary only if the server uses a self-signed certificate
            ca: [ fs.readFileSync('server-cert.pem') ]
        }
    }).connect(function(err) {
        if (!err)
            console.log('Successful connection');
    });

    disconnect method

    This method explicitly disconnects a connection.

    sendRaw method

    • data – a string containing a valid Cisco XML request to be sent
    • callback – a function that will be called when a valid Cisco XML response is received
    var cxml = require('node-ciscoxml');
    var c = cxml({
        host: '',
        port: 5000,
        username: 'xmlapi',
        password: 'xmlpass'
    });
    c.sendRaw('<Request><GetDataSpaceInfo/></Request>', function(err, data) {
        console.log(err, data);
    });

    sendRawObj method

    • data – a JavaScript object that will be converted to a Cisco XML request
    • callback – a function that will be called with the valid Cisco XML response converted to a JavaScript object
    var cxml = require('node-ciscoxml');
    var c = cxml({
        host: '',
        port: 5000,
        username: 'xmlapi',
        password: 'xmlpass'
    });
    c.sendRawObj({ GetDataSpaceInfo: '' }, function(err, data) {
        console.log(err, data);
    });

    rootGetDataSpaceInfo method

    Equivalent to .sendRawObj for the GetDataSpaceInfo command.


    getNext method

    Sends a getNext request with a specific ID, so we can retrieve the rest of the previous operation if it has been truncated.
    • id – the iterator ID
    • callback – the callback with the data (in JS object format)
    Keep in mind that the next response may be truncated as well, so you have to check for IteratorID every time.
    var cxml = require('node-ciscoxml');
    var c = cxml({
        host: '',
        port: 5000,
        username: 'xmlapi',
        password: 'xmlpass'
    });
    c.sendRawObj({ Get: { Configuration: {} } }, function(err, data) {
        if ((!err) && data && data.Response.$.IteratorID) {
            return c.getNext(data.Response.$.IteratorID, function(err, nextData) {
                // .. code to merge data with nextData
            });
        }
        // .. code
    });

    sendRequest method

    This method is equivalent to sendRawObj, but it can automatically detect the need for, and resupply, GetNext requests so that the response is absolutely complete. Therefore, this method should be the preferred way of sending requests that expect very large replies.
    var cxml = require('node-ciscoxml');
    var c = cxml({
        host: '',
        port: 5000,
        username: 'xmlapi',
        password: 'xmlpass'
    });
    c.sendRequest({ GetDataSpaceInfo: '' }, function(err, data) {
        console.log(err, data);
    });
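The follow-the-IteratorID behavior that sendRequest performs can be sketched like this (the names and the fake transport are illustrative, not the module's internals): keep issuing GetNext requests until a response arrives without an IteratorID, accumulating the pages as you go.

```javascript
// Sketch of "resupply GetNext until complete"; `send` stands in for a
// single raw request to the router.
function sendFull(send, request, callback, acc) {
  send(request, function (err, data) {
    if (err) return callback(err);
    const pages = (acc || []).concat([data]);
    const iteratorId = data.Response && data.Response.$ && data.Response.$.IteratorID;
    if (iteratorId) {
      // Truncated reply: request the next chunk and keep accumulating.
      return sendFull(send, { GetNext: { $: { IteratorID: iteratorId } } }, callback, pages);
    }
    callback(null, pages);
  });
}

// Fake transport that returns two pages, the first one truncated.
let calls = 0;
function fakeSend(req, cb) {
  calls += 1;
  if (calls === 1) cb(null, { Response: { $: { IteratorID: '42' } }, part: 1 });
  else cb(null, { Response: { $: {} }, part: 2 });
}

let result;
sendFull(fakeSend, { Get: { Configuration: {} } }, function (err, pages) {
  result = pages;
});
console.log(result.length); // 2 pages collected before the callback fires
```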

    requestPath method

    This method is equivalent to sendRequest, but instead of an object the request may be formatted as a simple path string. This method is not very useful for complex requests, but its value is in how much it simplifies the simple ones. The response is a JavaScript object.
    var cxml = require('node-ciscoxml');
    var c = cxml({
        host: '',
        port: 5000,
        username: 'xmlapi',
        password: 'xmlpass'
    });
    c.requestPath('Get.Configuration.Hostname', function(err, data) {
        console.log(err, data);
    });
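Internally, a path string like 'Get.Configuration.Hostname' corresponds to a nested request object. A tiny illustrative sketch of that conversion (`pathToRequest` is a hypothetical helper, not the module's actual code):

```javascript
// Convert a dotted path into a nested object, with an empty object at the leaf.
function pathToRequest(path) {
    var root = {};
    var cur = root;
    path.split('.').forEach(function (part) {
        cur[part] = {};
        cur = cur[part];
    });
    return root;
}

console.log(JSON.stringify(pathToRequest('Get.Configuration.Hostname')));
// {"Get":{"Configuration":{"Hostname":{}}}}
```

That nested object is exactly what you would otherwise pass to sendRequest by hand.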

    reqPathPath method

    This is the same method as requestPath, but the response is not an object – it is a path array. The method supports an optional filter, which has to be a RegExp object; all paths and values will be tested against it, and only those returning true will be included in the response array.
    var cxml = require('node-ciscoxml');
    var c = cxml({
        host: '',
        port: 5000,
        username: 'xmlapi',
        password: 'xmlpass'
    });
    c.reqPathPath('Get.Configuration.Hostname', /Hostname/, function(err, data) {
        // The output should be something like
        // [ 'Response("MajorVersion"="1","MinorVersion"="0").Get.Configuration.Hostname("MajorVersion"="1","MinorVersion"="0")',
        //   'asr9k-router' ]
    });
    This method could be very useful for getting simple responses and configurations.
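The RegExp filtering described above can be pictured in isolation like this (a pure-JS sketch of the idea; the actual path strings come from the device response):

```javascript
// Each entry of the response array is either a path string or a value.
var responsePaths = [
    'Response.Get.Configuration.Hostname',
    'asr9k-router',
    'Response.Get.Configuration.Domain',
    'example.com'
];

// Only entries matching the RegExp filter are kept in the result.
var filter = /Hostname|asr9k/;
var filtered = responsePaths.filter(function (entry) {
    return filter.test(entry);
});

console.log(filtered);
// [ 'Response.Get.Configuration.Hostname', 'asr9k-router' ]
```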

    getConfig method

    This method requests the whole configuration of the remote device and returns it as an object:
    c.getConfig(function(err, config) {
        console.log(err, config);
    });

    cliConfig method

    This method is quite simple – it executes a command (or commands) in CLI Configuration mode and returns the response as a JS object. Keep in mind that any configuration change in IOS XR is not effective unless it is committed!
    c.cliConfig('username testuser\ngroup operator\n', function(err, data) {
        console.log(err, data);
    });

    cliExec method

    Executes a command (or commands) in CLI Exec mode and returns the response as a JS object.
    c.cliExec('show interfaces', function(err, data) {
        console.log(err, data);
    });

    commit method

    Commits the current configuration.
    c.commit(function(err, data) {
        console.log(err, data);
    });

    lock method

    Locks the configuration mode.
    c.lock(function(err, data) {
        console.log(err, data);
    });

    unlock method

    Unlocks the configuration mode.
    c.unlock(function(err, data) {
        console.log(err, data);
    });

    Configure Cisco IOS XR for XML agent

    To configure IOS XR for remote XML configuration you have to:
    Ensure you have the *mgbl* package installed and activated! Without it you will have no xml agent commands!
    Enable the XML agent with a similar configuration:
    xml agent
      vrf default
        ipv4 access-list SECUREACCESS
      ipv6 enable
      session timeout 10
      iteration on size 100000
    You can enable tty and/or ssl agents as well!
    (Keep in mind – full filtering of the XML access has to be done by the control-plane management-plane command! The XML interface does not use VTYs!)
    You have to ensure you have correctly configured aaa, as the xml agent uses the default method for both authentication and authorization and that cannot be changed (last verified with IOS XR 5.3).
    You have to have both aaa authentication and authorization configured. If authorization is not set (aaa authorization default local or none), you may not be able to log in. You shall also ensure that both authentication and authorization share the same source (tacacs+ or local).
    The default agent port is 38751 for the default agent and 38752 for SSL.


    The module uses the “debug” module to log its output. You can enable the debugging by having in your code something like the following:
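Assuming the debug namespace is ciscoxml (as the surrounding text suggests), enabling it from code could look like this sketch:

```javascript
// Assumption: "ciscoxml" is the debug namespace used by the module.
// DEBUG must be set before the module (and the "debug" module) are loaded.
process.env.DEBUG = 'ciscoxml';
// ... then require('node-ciscoxml') as usual
```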
    Or by setting the DEBUG environment variable to ciscoxml before starting Node.JS.

    node.js module implementing EventEmitter interface using MongoDB tailable cursors as backend

    Post Syndicated from Anonymous original http://deliantech.blogspot.com/2015/03/nodejs-module-implementing-eventemitter.html

    I’ve published to npm a new module, which I’ve used privately for a long time, that implements the EventEmitter interface using MongoDB tailable cursors as a backend.
    This module can be used as a messaging bus between processes or even between node.js modules, as it allows implementing an EventEmitter without the need to share the object instance in advance.
    Please see the first version of the README.md below:

    Module for creating event bus interface based on MongoDB tailable cursors

    The idea behind this module is to create an EventEmitter-like interface which uses MongoDB capped collections and tailable cursors as an internal messaging bus. This model has a lot of advantages, especially if you already use MongoDB in your project.
    The advantages are:
    You don’t have to exchange the event emitter object between different pages or even different processes (forked, clustered, living on separate machines). As long as you use the same mongoUrl and capped collection name, you can exchange information. This way you can even create applications that run on different hardware and exchange events and data as if they were the same application! Also, your events are stored in a collection and could be used as a transaction log later (mongodb’s own transaction log is implemented with capped collections).
    It greatly simplifies application development.


    To install the module run the following command:
    npm install node-mongotailableevents


    It is easy to use this module. Look at the following example:
    var ev = require('node-mongotailableevents');
    var e = ev( { ...options ... }, callback );

    Initialization and options

    The following options can be used with the module
    • mongoUrl (default mongodb://) – the URL of the mongo database
    • mongoOptions (default none) – specific options to be used for the connection to the mongo database
    • name (default tailedEvents) – the name of the capped collection that will be created if it does not exist
    • size (default 1000000) – the maximum size of the capped collection (when reached, the oldest records will be automatically removed)
    • max (default 1000) – the maximum number of records for the capped collection
    You can call and create a new event emitter instance without options:
    var ev = require('node-mongotailableevents');
    var e = ev();
    Or you can call and create an event emitter instance with options:
    var ev = require('node-mongotailableevents');
    var e = ev({
       mongoUrl: 'mongodb://',
       name: 'myEventCollection'
    });
    Or you can call and create an event emitter instance with options and a callback, which will be called when the collection is created successfully:
    var ev = require('node-mongotailableevents');
    ev({
       mongoUrl: 'mongodb://',
       name: 'myEventCollection'
    }, function(err, e) {
        // e is the ready event emitter instance
    });
    Or you can call and create an event emitter with just a callback (and default options):
    ev(function(err, e) {
        // e is the ready event emitter instance
    });


    This module inherits EventEmitter, so you can use all of the EventEmitter methods. Example:
    ev(function(err, e) {
        if (err) throw err;
        e.on('myevent', function(data) {
            console.log('We have received', data);
        });
        e.emit('myevent', 'my data');
    });
    The best feature is that you can exchange events between different pages or processes without exchanging the eventEmitter object instance in advance and without any complex configuration, as long as both pages/processes use the same mongodb database (it could even be different replica servers) and the same “name” (the name of the capped collection). This way you can create massive clusters and a messaging bus distributed among multiple machines, without the need for any separate messaging system and its configuration.
    For a simple example – start two separate node processes with the following code, and see what the results are:
    var ev = require('node-mongotailableevents');
    ev(function(err, e) {
        if (err) throw err;
        e.on('myevent', function(data) {
            console.log('We have received', data);
        });
        setInterval(function() {
            e.emit('myevent', 'my data' + parseInt(Math.random() * 1000000));
        }, 1000); // emit once a second
    });
    You shall see both of the messages received on both of the outputs.

    Example how to use node-netflowv9 and define your own netflow type decoders

    Post Syndicated from Anonymous original http://deliantech.blogspot.com/2015/03/example-how-to-use-node-netflowv9-and.html

    This is an example of how you can use the node-netflowv9 library (version >= 0.2.5) to define your own proprietary NetFlow v9 type decoders if they are not supported.
    The example below adds decoding for types 33000, 33001, 33002 and 40000 for Cisco ASA/PIX netflow:

    var Collector = require('node-netflowv9');

    var colObj = Collector(function (flow) { console.log(flow) });
    colObj.listen(5000);

    var aclDecodeRule = {
        12: 'o["$name"] = { aclId: buf.readUInt32BE($pos), aclLineId: buf.readUInt32BE($pos+4), aclCnfId: buf.readUInt32BE($pos+8) };'
    };

    colObj.nfTypes[33000] = { name: 'nf_f_ingress_acl_id', compileRule: aclDecodeRule };
    colObj.nfTypes[33001] = { name: 'nf_f_egress_acl_id', compileRule: aclDecodeRule };
    colObj.nfTypes[33002] = { name: 'nf_f_fw_ext_event', compileRule: { 2: 'o["$name"]=buf.readUInt16BE($pos);' } };
    colObj.nfTypes[40000] = { name: 'nf_f_username', compileRule: { 0: 'o["$name"] = buf.toString("utf8",$pos,$pos+$len);' } };

    node-netflowv9 node.js module for processing of netflowv9 has been updated to 0.2.5

    Post Syndicated from Anonymous original http://deliantech.blogspot.com/2015/03/node-netflowv9-nodejs-module-for.html

    My node-netflowv9 library has been updated to version 0.2.5

    There are a few new things –

    • Almost all of the IETF netflow types are decoded now, which practically means we support IPFIX
    • An unknown NetFlow v9 type does not throw an error. It is decoded into a property named ‘unknown_type_XXX’ where XXX is the ID of the type
    • An unknown NetFlow v9 Option Template scope does not throw an error. It is decoded into ‘unknown_scope_XXX’ where XXX is the ID of the scope
    • The user can overwrite how the different NetFlow types are decoded and can define decoding for new types. The same goes for scopes. And this can happen “on the fly” – at any time
    • The library has good support for multiple netflow collectors running at the same time
    • A lot of new options and models for using the library have been introduced
    Below is the updated README.md file, describing how to use the library:


    The usage of the netflowv9 collector library is very simple. You just have to do something like this:
    var Collector = require('node-netflowv9');
    Collector(function(flow) {
        console.log(flow);
    }).listen(3000);
    or you can use it as an event provider:
    Collector({port: 3000}).on('data', function(flow) {
        console.log(flow);
    });
    The flow will be presented in a format very similar to this:
    { header: 
      { version: 9,
         count: 25,
         uptime: 2452864139,
         seconds: 1401951592,
         sequence: 254138992,
         sourceId: 2081 },
      rinfo: 
      { address: '',
         family: 'IPv4',
         port: 29471,
         size: 1452 },
      packet: Buffer <00 00 00 00 ....>,
      flow: [
      { in_pkts: 3,
         in_bytes: 144,
         ipv4_src_addr: '',
         ipv4_dst_addr: '',
         input_snmp: 27,
         output_snmp: 16,
         last_switched: 2452753808,
         first_switched: 2452744429,
         l4_src_port: 61538,
         l4_dst_port: 62348,
         out_as: 0,
         in_as: 0,
         bgp_ipv4_next_hop: '',
         src_mask: 32,
         dst_mask: 24,
         protocol: 17,
         tcp_flags: 0,
         src_tos: 0,
         direction: 1,
         fw_status: 64,
         flow_sampler_id: 2 } ] }
    There will be one callback for each packet, which may contain more than one flow.
    You can also access a NetFlow decode function directly. Do something like this:
    var netflowPktDecoder = require('node-netflowv9').nfPktDecode;
    // ... then pass a raw NetFlow packet buffer to netflowPktDecoder(buffer)
    Currently we support netflow version 1, 5, 7 and 9.


    You can initialize the collector with either callback function only or a group of options within an object.
    The following options are available during initialization:
    port – defines the port where our collector will listen.
    Collector({ port: 5000, cb: function (flow) { console.log(flow) } })
    If no port is provided, then the underlying socket will not be initialized (bound to a port) until you call the listen method with a port as a parameter:
    Collector(function (flow) { console.log(flow) }).listen(port)
    cb – defines a callback function to be executed for every flow. If no callback function is provided, then the collector fires a ‘data’ event for each received flow
    Collector({ cb: function (flow) { console.log(flow) } }).listen(5000)
    ipv4num – defines that we want to receive the IPv4 ip address as a number, instead of decoded in a readable dot format
    Collector({ ipv4num: true, cb: function (flow) { console.log(flow) } }).listen(5000)
    socketType – defines what socket type we will bind to. Default is udp4. You can change it to udp6 if you like.
    Collector({ socketType: 'udp6', cb: function (flow) { console.log(flow) } }).listen(5000)
    nfTypes – defines your own decoders to NetFlow v9+ types
    nfScope – defines your own decoders to NetFlow v9+ Option Template scopes

    Define your own decoders for NetFlow v9+ types

    NetFlow v9 can be extended with vendor-specific types, and many vendors define their own. There is probably no netflow collector in the world that decodes all the specific vendor types. By default this library decodes into a readable format all the types it recognises. All the unknown types are decoded as ‘unknown_type_XXX’ where XXX is the type ID, and the data is provided as a HEX string. But you can extend the library yourself. You can even replace how current types are decoded, and you can even do that on the fly (you can dynamically change how a type is decoded at different periods of time).
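As a small illustration of the unknown-type behaviour described above, the raw bytes of an unrecognised type would end up as a hex string (a sketch of the idea, not the library's actual code):

```javascript
// An unrecognised type keeps its raw bytes as hex under unknown_type_<ID>.
var buf = Buffer.from([0xde, 0xad, 0xbe, 0xef]);
var o = {};
var typeId = 9999, pos = 0, len = 4;
o['unknown_type_' + typeId] = buf.toString('hex', pos, pos + len);

console.log(o); // { unknown_type_9999: 'deadbeef' }
```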
    To understand how to do that, you have to learn a bit about the internals of how this module works.
    • When a new flowset template is received from the NetFlow Agent, this netflow module generates and compiles (with new Function()) a decoding function
    • When a netflow is received for a known flowset template (we have a compiled function for it) – the function is simply executed
    This approach is quite simple and provides enormous performance. The function code is as small as possible, and on first execution Node.JS compiles it with JIT, so the result is really fast.
    The function code is generated from templates that contain the javascript code to be added for each netflow type, identified by its ID.
    Each template consists of an object of the following form:
    { name: 'property-name', compileRule: compileRuleObject }
    compileRuleObject contains rules for how that netflow type is to be decoded, depending on its length. The reason is that some of the netflow types are variable length, and you may have to execute different code to decode them depending on that length. The compileRuleObject format is simple:
    {
       length: 'javascript code as a string that decodes this value',
       ...
    }
    There is a special length property of 0. That code will be used if there is no more specific decode defined for a given length. For example:
    {
       4: 'code used to decode this netflow type with length of 4',
       8: 'code used to decode this netflow type with length of 8',
       0: 'code used to decode ANY OTHER length'
    }
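To make the compile step concrete, here is a minimal, illustrative sketch of how a compileRule could be turned into a decoding function with new Function(). The `buildDecoder` helper is hypothetical; the library's real generator compiles whole templates at once:

```javascript
// Pick the rule for the given length (falling back to 0), substitute the
// $name/$pos/$len placeholders, and compile the result into a function.
function buildDecoder(name, compileRule, pos, len) {
    var code = (compileRule[len] || compileRule[0])
        .replace(/\$name/g, name)
        .replace(/\$pos/g, pos)
        .replace(/\$len/g, len);
    return new Function('buf', 'o', code);
}

var rule = {
    2: 'o["$name"] = buf.readUInt16BE($pos);',
    0: 'o["$name"] = buf.toString("hex", $pos, $pos + $len);'
};

var buf = Buffer.from([0x01, 0xf4]); // 500 as a 16-bit big-endian integer
var o = {};
buildDecoder('my_type', rule, 0, 2)(buf, o);
console.log(o.my_type); // 500
```

A length with no specific rule falls back to the 0 entry, here producing a hex string instead of a number.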

    decoding code

    The decoding code must be a string that contains javascript code. This code will be concatenated to the function body before compilation. If that code contains errors or simply does not work as expected, it could crash the collector. So be careful.
    There are a few variables you have to use:
    $pos – this string is replaced with a number containing the current position of the netflow type within the binary buffer.
    $len – this string is replaced with a number containing the length of the netflow type.
    $name – this string is replaced with a string containing the name property of the netflow type (defined by you above).
    buf – the Node.JS Buffer object containing the flow we want to decode.
    o – the object where the decoded flow is written to.
    Everything else is pure javascript. It is good to know the restrictions of javascript and of the Node.JS Function() method, but that is not necessary to write simple decoders yourself.
    If you want to decode a string of variable length, you could write a compileRuleObject of the form:
    {
       0: 'o["$name"] = buf.toString("utf8",$pos,$pos+$len)'
    }
    The example above says that for this netflow type, whatever length it has, we will decode the value as a utf8 string.


    Let’s assume you want to write your own code for decoding a NetFlow type, let’s say 4444, which could be of variable length and contains an integer number.
    You can write code like this:
    Collector({
       port: 5000,
       nfTypes: {
          4444: {   // 4444 is the NetFlow Type ID whose decoding we want to replace
             name: 'my_vendor_type4444', // This will be the property name that will contain the decoded value; it will also be the value of $name
             compileRule: {
                 1: "o['$name']=buf.readUInt8($pos);", // This is how we decode type of length 1 to a number
                 2: "o['$name']=buf.readUInt16BE($pos);", // This is how we decode type of length 2 to a number
                 3: "o['$name']=buf.readUInt8($pos)*65536+buf.readUInt16BE($pos+1);", // This is how we decode type of length 3 to a number
                 4: "o['$name']=buf.readUInt32BE($pos);", // This is how we decode type of length 4 to a number
                 5: "o['$name']=buf.readUInt8($pos)*4294967296+buf.readUInt32BE($pos+1);", // This is how we decode type of length 5 to a number
                 6: "o['$name']=buf.readUInt16BE($pos)*4294967296+buf.readUInt32BE($pos+2);", // This is how we decode type of length 6 to a number
                 8: "o['$name']=buf.readUInt32BE($pos)*4294967296+buf.readUInt32BE($pos+4);", // This is how we decode type of length 8 to a number
                 0: "o['$name']='Unsupported Length of $len'"
             }
          }
       },
       cb: function (flow) {
           console.log(flow);
       }
    });
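The multiplication trick used above for lengths of 5, 6 and 8 bytes (older Node Buffers have no readUInt48BE/readUInt64BE) can be checked in isolation:

```javascript
// A 6-byte big-endian integer read as (high 16 bits) * 2^32 + (low 32 bits).
var buf = Buffer.from([0x00, 0x01, 0x00, 0x00, 0x00, 0x02]);
var value = buf.readUInt16BE(0) * 4294967296 + buf.readUInt32BE(2);
console.log(value); // 4294967298
```

Note that numbers above 2^53 lose precision this way, since JavaScript numbers are IEEE 754 doubles.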
    It looks a bit complex, but actually it is not. In most cases, you don’t have to define a compile rule for each different length. The following example defines decoding for a netflow type 6789 that carries a string:
    var colObj = Collector(function (flow) {
        console.log(flow);
    });
    colObj.nfTypes[6789] = {
        name: 'vendor_string',
        compileRule: {
            0: 'o["$name"] = buf.toString("utf8",$pos,$pos+$len)'
        }
    };
    As you can see, we can also change the decoding on the fly, by defining a property for that netflow type within the nfTypes property of colObj (the Collector object). The next time the NetFlow Agent sends us a NetFlow Template definition containing this netflow type, the new rule will be used (the routers usually send templates from time to time, so even currently compiled templates are recompiled).
    You could also overwrite the default property names where the decoded data is written. For example:
    var colObj = Collector(function (flow) {
        console.log(flow);
    });
    colObj.nfTypes[14].name = 'outputInterface';
    colObj.nfTypes[10].name = 'inputInterface';

    Logging / Debugging the module

    You can use the debug module to turn on the logging, in order to debug how the library behaves. The following example shows you how:
    var Collector = require('node-netflowv9');
    // start node with the DEBUG environment variable set (for example DEBUG=*)
    // to see the module's debug output
    Collector(function(flow) {
        console.log(flow);
    }).listen(3000);

    Multiple collectors

    The module allows you to define multiple collectors at the same time. For example:
    var Collector = require('node-netflowv9');
    Collector(function(flow) { // Collector 1 listening on port 5555
        console.log('collector1', flow);
    }).listen(5555);
    Collector(function(flow) { // Collector 2 listening on port 6666
        console.log('collector2', flow);
    }).listen(6666);

    NetFlowV9 Options Template

    NetFlowV9 supports an Options template, where there can be an option Flow Set that contains data for predefined fields within a certain scope. This module supports the Options Template and provides its output like any other flow. The only difference is that there is a property isOption set to true, to remind your code that this data has come from an Option Template.
    Currently the following nfScope are supported – system, interface, line_card, netflow_cache. You can overwrite their decoding, or add more scopes, the same way (and using absolutely the same format) as you overwrite nfTypes.

    node-netflowv9 is updated to support netflow v1, v5, v7 and v9

    Post Syndicated from Anonymous original http://deliantech.blogspot.com/2014/11/node-netflowv9-is-updated-to-support.html

    My netflow module for Node.JS has been updated. Now it supports more NetFlow versions – NetFlow ver 1, ver 5, ver 7 and ver 9. It has also been modified so it can be used as an event generator (instead of doing callbacks). Now you can do the following as well (the old model is still supported):
    Collector({port: 3000}).on('data', function(flow) {
        console.log(flow);
    });
    Additionally, the module now supports and decode option templates and option data flows for NetFlow v9.