All posts by James Beswick

Creating a serverless Apache Kafka publisher using AWS Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/creating-a-serverless-apache-kafka-publisher-using-aws-lambda/

This post is written by Philipp Klose, Global Solution Architect, and Daniel Wessendorf, Global Solution Architect.

Streaming data and event-driven architectures are becoming more popular for many modern systems. The range of use cases includes web tracking and other logs, industrial IoT, in-game player activity, and the ingestion of data for modern analytics architecture.

One of the most popular technologies in this space is Apache Kafka. This is an open-source distributed event streaming platform used by many customers for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.

Kafka is based on a simple but powerful pattern. The Kafka cluster itself is a highly available broker that receives messages from various producers. The received messages are stored in topics, which are the primary storage abstraction.

Various consumers can subscribe to a Kafka topic and consume messages. In contrast to classic queuing systems, the consumers do not remove the message from the topic but store the individual reading position on the topic. This allows for multiple different patterns for consumption (for example, fan-out or consumer-groups).

Producer and consumer

Producer and consumer libraries for Kafka are available in various programming languages and technologies. This blog post focuses on using serverless and cloud-native technologies for the producer side.

Overview

This example walks you through how to build a serverless real-time stream producer application using Amazon API Gateway and AWS Lambda.

For testing, this blog includes a sample AWS Cloud Development Kit (CDK) application. This creates a demo environment, including an Amazon Managed Streaming for Apache Kafka (MSK) cluster and a bastion host for observing the produced messages on the cluster.

The following diagram shows the architecture of an application that pushes API requests to a Kafka topic in real time, which you build in this blog post:

Architecture overview

  1. An external application calls an Amazon API Gateway endpoint
  2. Amazon API Gateway forwards the request to a Lambda function
  3. AWS Lambda function behaves as a Kafka producer and pushes the message to a Kafka topic
  4. A Kafka “console consumer” on the bastion host then reads the message

The demo shows how to use Lambda Powertools for Java to streamline logging and tracing, and an IAM authenticator to simplify the cluster authentication process. The following sections take you through the steps to deploy, test, and observe the example application.

Prerequisites

The example has the following prerequisites:

Example walkthrough

  1. Clone the project GitHub repository. Change directory to subfolder serverless-kafka-iac:
    git clone https://github.com/aws-samples/serverless-kafka-producer
    cd serverless-kafka-iac
    
  2. Configure environment variables:
    export CDK_DEFAULT_ACCOUNT=$(aws sts get-caller-identity --query 'Account' --output text)
    export CDK_DEFAULT_REGION=$(aws configure get region)
    
  3. Prepare the virtual Python environment:
    python3 -m venv .venv
    source .venv/bin/activate
    pip3 install -r requirements.txt
    
  4. Bootstrap your account for CDK usage:
    cdk bootstrap aws://$CDK_DEFAULT_ACCOUNT/$CDK_DEFAULT_REGION
  5. Run ‘cdk synth’ to build the code and test the requirements:
    cdk synth
  6. Run ‘cdk deploy’ to deploy the code to your AWS account:
    cdk deploy --all

Testing the example

To test the example, log into the bastion host and start a consumer console to observe the messages being added to the topic. You generate messages for the Kafka topics by sending calls via API Gateway from your development machine or AWS Cloud9 environment.

  1. Use AWS Systems Manager to log into the bastion host. Use the KafkaDemoBackendStack.bastionhostbastion output parameter to connect, or connect through the Systems Manager console.
    aws ssm start-session --target <Bastion Host Instance Id> 
    sudo su ec2-user
    cd /home/ec2-user/kafka_2.13-2.6.3/bin/
    
  2. Create a topic named messages on the MSK cluster:
    ./kafka-topics.sh --bootstrap-server $ZK --command-config client.properties --create --replication-factor 3 --partitions 3 --topic messages
  3. Open a Kafka consumer console on the bastion host to observe incoming messages:
    ./kafka-console-consumer.sh --bootstrap-server $ZK --topic messages --consumer.config client.properties
    
  4. Open another terminal on your development machine to create test requests using the “ServerlessKafkaProducerStack.kafkaproxyapiEndpoint” output parameter of the CDK stack. Append “/event” for the final URL. Use curl to send the API request:
    curl -X POST -d "Hello World" <ServerlessKafkaProducerStack.messagesapiendpointEndpoint>
  5. For load testing the application, it is important to calibrate the parameters. You can use a tool like Artillery to simulate workloads. You can find a sample Artillery script in the /load-testing folder of the repository cloned in step 1.
  6. Observe the incoming request in the bastion host terminal.

All components in this example integrate with AWS X-Ray. With AWS X-Ray, you can trace the entire application, which is useful to identify bottlenecks when load testing. You can also trace method execution at the Java method level.

Lambda Powertools for Java allows you to accelerate this process by adding the @Tracing annotation to see traces at the method level in X-Ray.

To trace a request end to end:

  1. Navigate to the CloudWatch console.
  2. Open the Service map.
  3. Select a component to investigate (for example, the Lambda function where you deployed the Kafka producer). Choose View traces.
    X-Ray console
  4. Select a single Lambda method invocation and investigate further at the Java method level.
    X-Ray detail

Cleaning up

In the subdirectory “serverless-kafka-iac”, delete the test infrastructure:

cdk destroy --all

Implementation of a Kafka producer in Lambda

Kafka natively supports Java, and to remain open, cloud native, and free of third-party dependencies, the producer is written in that language. Currently, the IAM authenticator is only available for Java. In this example, the Lambda handler receives a message from an Amazon API Gateway source and pushes this message to an MSK topic called “messages”.

Typically, Kafka producers are long-lived, and pushing a message to a Kafka topic is an asynchronous process. As Lambda is ephemeral, you must ensure that a submitted message is fully flushed before the Lambda function ends by calling producer.flush().

    @Override
    @Tracing
    @Logging(logEvent = true)
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent input, Context context) {
        APIGatewayProxyResponseEvent response = createEmptyResponse();
        try {

            String message = getMessageBody(input);

            KafkaProducer<String, String> producer = createProducer();

            ProducerRecord<String, String> record = new ProducerRecord<String, String>(TOPIC_NAME, context.getAwsRequestId(), message);

            // Send asynchronously, then flush to ensure delivery before the handler returns
            Future<RecordMetadata> send = producer.send(record);
            producer.flush();

            RecordMetadata metadata = send.get();
            log.info(String.format("Message was sent to partition %s", metadata.partition()));

            return response.withStatusCode(200).withBody("Message successfully pushed to kafka");
        } catch (Exception e) {
            log.error(e.getMessage(), e);
            return response.withBody(e.getMessage()).withStatusCode(500);
        }
    }

    @Tracing
    private KafkaProducer<String, String> createProducer() {
        if (producer == null) {
            log.info("Connecting to kafka cluster");
            producer = new KafkaProducer<String, String>(kafkaProducerProperties.getProducerProperties());
        }
        return producer;
    }

Connect to Amazon MSK using IAM Auth

This example uses IAM authentication to connect to the respective Kafka cluster. See the Amazon MSK documentation on IAM access control, which shows how to configure the producer for connectivity.

Since you configure the cluster via IAM, grant “Connect” and “WriteData” permissions to the producer, so that it can push messages to Kafka.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kafka-cluster:Connect"
            ],
            "Resource": "arn:aws:kafka:region:account-id:cluster/cluster-name/cluster-uuid"
        }
    ]
}

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kafka-cluster:Connect",
                "kafka-cluster:DescribeTopic",
                "kafka-cluster:WriteData"
            ],
            "Resource": "arn:aws:kafka:region:account-id:topic/cluster-name/cluster-uuid/topic-name"
        }
    ]
}

This shows the Kafka excerpt of the IAM policy, which must be applied to the Kafka producer.

When using IAM authentication, be aware of the current limits of IAM Kafka authentication, which affect the number of concurrent connections and IAM requests for a producer. Read https://docs.aws.amazon.com/msk/latest/developerguide/limits.html and follow the recommendation for authentication backoff in the producer client:

        Map<String, String> configuration = Map.of(
                "key.serializer", "org.apache.kafka.common.serialization.StringSerializer",
                "value.serializer", "org.apache.kafka.common.serialization.StringSerializer",
                "bootstrap.servers", getBootstrapServer(),
                "security.protocol", "SASL_SSL",
                "sasl.mechanism", "AWS_MSK_IAM",
                "sasl.jaas.config", "software.amazon.msk.auth.iam.IAMLoginModule required;",
                "sasl.client.callback.handler.class", "software.amazon.msk.auth.iam.IAMClientCallbackHandler",
                "connections.max.idle.ms", "60",
                "reconnect.backoff.ms", "1000"
        );

Elaboration on implementation

Each Kafka broker node can handle a maximum of 20 IAM authentication requests per second. The demo setup has three brokers, which results in 60 requests per second. Therefore, the broker setup limits the number of concurrent Lambda functions to 60.

To reduce IAM authentication requests from the Kafka producer, place it outside of the handler. For frequent calls, there is a chance that Lambda reuses the previously created class instance and only re-executes the handler.

For bursting workloads with a high number of concurrent API Gateway requests, this can lead to dropped messages. While for some workloads, this might be tolerable, for others this might not be the case.

In these cases, you can extend the architecture with a buffering technology like Amazon SQS or Amazon Kinesis Data Streams between API Gateway and Lambda.

To reduce latency, you can reduce cold start times for Java by changing the tiered compilation level to “1” as described in this blog post. Provisioned Concurrency ensures that Lambda execution environments are initialized and ready before requests arrive.

Conclusion

In this post, you learned how to create a serverless integration Lambda function between API Gateway and Amazon Managed Streaming for Apache Kafka (Amazon MSK). We also showed how to deploy such an integration with the CDK.

The general pattern is suitable for many use cases that need an integration between API Gateway and Apache Kafka. It may have cost benefits over containerized implementations in use cases with sparse, low-volume input streams, and unpredictable or spiky workloads.

For more serverless learning resources, visit Serverless Land.

Optimizing Node.js dependencies in AWS Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/optimizing-node-js-dependencies-in-aws-lambda/

This post is written by Richard Davison, Senior Partner Solutions Architect.

AWS Lambda offers support for Node.js versions 12, 14, and the recently announced version 16. Since Node.js parses, optimizes, and runs JavaScript on-the-fly, it can provide fast startup and low overhead in a serverless environment.

Node.js reads and parses all dependencies and sources that are required or imported from the entry point. Consequently, it’s important to keep the dependencies to a minimum and optimize the ones in use.

This post shows how to bundle and minify Lambda function code to optimize performance and stay up to date with the latest version of your dependencies.

Understanding Node.js module resolution

When you require or import a resource in your code, Node.js tries to resolve that resource by either the file- or directory name, or in the node_modules directory. Once it finds the resource, it is loaded from disk, parsed and run.

If that file or dependency in turn contains other imports or require statements, the process repeats, which causes disk reads. The more dependencies and files that are imported in a function, the longer it takes to initialize.

This only impacts imported and used code. Including files in a project that are not imported or used has minimal effect on startup performance.

You should also evaluate what’s being imported. Even though modern JavaScript bundlers such as esbuild, Rollup, or webpack use tree shaking and dead code elimination, importing dependencies via wildcard, global, or top-level imports can result in larger bundles.

Use path imports if your library supports it:

//es6
import DynamoDB from "aws-sdk/clients/dynamodb"
//es5
const DynamoDB = require("aws-sdk/clients/dynamodb")

Avoid wildcard imports:

//es6
import * as AWS from "aws-sdk"
//es5
const AWS = require("aws-sdk")

Avoid top-level imports:

//es6
import AWS from "aws-sdk"
//es5
const AWS = require("aws-sdk")

AWS SDK for JavaScript v3

The documentation shows that all Node.js runtimes share the same AWS SDK for JavaScript version. To control the version of the SDK that you depend on, you must provide it yourself. Consider using AWS SDK V3, which uses a modular architecture with a separate package for each service.

This has many benefits, including faster installations and smaller deployment sizes. It also includes many frequently requested features, such as first-class TypeScript support and a new middleware stack. Since there is a separate package for each service, a top-level import of the entire SDK is not possible, which further improves startup performance.
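
As an illustration of the modular approach, the following sketch imports only the DynamoDB client from AWS SDK v3. The table name and handler shape are placeholders for illustration and are not part of the sample project:

import { DynamoDBClient, GetItemCommand } from "@aws-sdk/client-dynamodb";

// Only the DynamoDB client is imported, so a bundler ships this client
// rather than the entire AWS SDK.
const client = new DynamoDBClient({});

export const handler = async (): Promise<void> => {
  const result = await client.send(
    new GetItemCommand({
      TableName: "my-table", // placeholder table name
      Key: { pk: { S: "example" } },
    })
  );
  console.log(result.Item);
};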

By providing your own AWS SDK, it can also be bundled and minified during the build process, which can result in cold start reduction.

Bundle and minify Node.js Lambda functions

You can bundle and minify Lambda functions by using esbuild. This is one of the fastest JavaScript bundlers available, often 10-100x faster than alternatives like WebPack or Parcel.

To use esbuild:

1. Add esbuild to your dev dependencies using npm or yarn:

  • npm: npm i esbuild --save-dev
  • yarn: yarn add esbuild --dev

2. Create a “build” script in the script section of the package.json file:

 "scripts": {
    "build": "rm -rf dist && esbuild ./src/* --entry-names=[dir]/[name]/index --bundle --minify --sourcemap --platform=node --target=node16.14 --outdir=dist",
 }

This script first removes the dist directory and then runs esbuild with the following command-line arguments:

  • ./src/* First, specify the entry points of the application. esbuild creates one bundle (when the bundle option is enabled) for each entry point provided, containing only the dependencies it uses.
  • --entry-names=[dir]/[name]/index specifies that esbuild should create bundles in the same directory as its entry point and in a directory with the same name as the entry point. The bundle is then named index.js.
  • --bundle indicates that you want to bundle all dependencies and source code in a single file.
  • --minify is used to minify the code.
  • --sourcemap is used to create a source map file, which is essential for debugging minified code. Since the minified code is different from your source code, a source map enables a JavaScript debugger to map the minified code to the original source code. Generating source maps helps debugging but increases the size. Note that source maps must be registered to be applied. To register source maps in a Lambda function, use the NODE_OPTIONS environment variable with the following value: --enable-source-maps
  • --platform=node and --target=node16.14 are used to indicate the ECMAScript version to target. By using a bundler, you can often compile newer JavaScript features and syntaxes to earlier standards. Since Lambda now supports Node.js 16, set the target to node16.14. For reference, use https://node.green/ to compare Node.js versions with ECMAScript features.
  • --outdir=dist indicates that all files should be placed in the dist directory.

Build

Run the build script by running yarn build or npm run build.

Package and deploy

To package your Lambda functions, navigate to the dist directory and zip the contents of each respective directory. Note that one zip file per function should be created, only containing index.js and index.js.map. You may also clone the sample project.

If you are already using the AWS CDK, consider using the NodejsFunction construct. This construct abstracts away the bundle procedure and internally uses esbuild to bundle the code:

const nodeJsFunction = new lambdaNodejs.NodejsFunction(
  this,
  "NodeJsFunction",
  {
    runtime: lambda.Runtime.NODEJS_16_X,
    handler: "main",
    entry: "../path/to/your/entry.js_or_ts",
  }
);
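
The construct also exposes bundling and environment options that map to the esbuild flags described earlier. The following sketch assumes the bundling property names of the NodejsFunction construct and is meant as an illustration rather than a definitive configuration:

const bundledFunction = new lambdaNodejs.NodejsFunction(this, "BundledFunction", {
  runtime: lambda.Runtime.NODEJS_16_X,
  handler: "main",
  entry: "../path/to/your/entry.js_or_ts",
  bundling: {
    minify: true,        // equivalent to --minify
    sourceMap: true,     // equivalent to --sourcemap
    target: "node16.14", // equivalent to --target=node16.14
  },
  environment: {
    NODE_OPTIONS: "--enable-source-maps", // register source maps at runtime
  },
});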

Build and deploy sample project

Once all the sources are bundled, you may notice that they have small file sizes compared to zipping node_modules and the source files. Your package may be more than 100x smaller, and it also initializes faster.

  1. Clone the sample project, install the dependencies, build the project, and package the application by running the following commands:
    npm install
    npm run build
    npm run package
    npm run package:unbundled

    This produces zip artifacts in the dist directory as well as in the project root. Comparing the size difference between dist/ddbHandler.zip and unoptimized.zip, the unbundled artifact is more than ten times larger. When unpacked, the code size with dependencies is more than 19 MB compared to 2.1 MB for the bundled and minified example.

    This is significant in the ddbHandler example because of the AWS SDK DynamoDB dependencies, which contain multiple files and resources.

  2. To deploy the application, run:
    npm run deploy

Comparing and measuring the results

After deployment, you can also see a significant cold start performance improvement. You can load test the Lambda functions using Artillery. Replace the URL placeholder with the value from the deployment output:

Load test unbundled

artillery run -t "https://{YOUR_ID_HERE}.execute-api.eu-west-1.amazonaws.com" -v '{ "url": "/x86/v2-top-level-unbundled" }' loadtest.yml

Load test bundled

artillery run -t "https://{YOUR_ID_HERE}.execute-api.eu-west-1.amazonaws.com" -v '{ "url": "/x86/v3" }' loadtest.yml

View results in CloudWatch Insights by selecting the two functions’ log groups and running the following query:

Logs Insights

filter @type = "REPORT"
| parse @log /\d+:\/aws\/lambda\/[\w\d]+-(?<function>[\w\d]+)-[\w\d]+/
| stats
count(*) as invocations,
pct(@duration+greatest(@initDuration,0), 0) as p0,
pct(@duration+greatest(@initDuration,0), 25) as p25,
pct(@duration+greatest(@initDuration,0), 50) as p50,
pct(@duration+greatest(@initDuration,0), 75) as p75,
pct(@duration+greatest(@initDuration,0), 90) as p90,
pct(@duration+greatest(@initDuration,0), 95) as p95,
pct(@duration+greatest(@initDuration,0), 99) as p99,
pct(@duration+greatest(@initDuration,0), 100) as p100
group by function, ispresent(@initDuration) as coldstart
| sort by function, coldstart

The cold start invocations for DdbV3X86 complete in 551 ms versus DdbV2TopLevelX86Unbundled, which completes in 945 ms (p90). The minified and bundled v3 version has about 1.7x faster cold starts, while also providing faster performance during warm invocations.

Performance results

Conclusion

In this post, you learned how to improve Node.js cold start performance by up to 70% by bundling and minifying your code. You also learned how to provide a different version of the AWS SDK for JavaScript, and how dependencies and the way they are imported affect the performance of Node.js Lambda functions. To achieve the best performance, use AWS SDK V3, bundle and minify your code, and avoid top-level imports.

For more serverless learning resources, visit Serverless Land.

Sending Amazon EventBridge events to private endpoints in a VPC

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/sending-amazon-eventbridge-events-to-private-endpoints-in-a-vpc/

This post is written by Emily Shea, Senior GTM Specialist, Event-Driven Architectures.

Building with events can help you accelerate feature velocity and build scalable, fault tolerant applications. You can achieve loose coupling in your application using asynchronous communication via events. Loose coupling allows each development team to build and deploy independently and each component to scale and fail without impacting the others. This approach is referred to as event-driven architecture.

Amazon EventBridge helps you build event-driven architectures. You can publish events to the EventBridge event bus and EventBridge routes those events to targets. You can write rules to filter events and only send them to the interested targets. For example, an order fulfillment service may only be interested in events of type ‘new order created.’
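
As an illustration, a rule that only forwards ‘new order created’ events to a Lambda target could look like the following CDK sketch. The bus, source, and target names are hypothetical and not part of this post’s example application:

import * as events from "aws-cdk-lib/aws-events";
import * as targets from "aws-cdk-lib/aws-events-targets";

// Hypothetical rule: only 'new order created' events reach the fulfillment function.
const newOrderRule = new events.Rule(this, "NewOrderRule", {
  eventBus: orderEventBus, // an existing EventBus construct (assumed)
  eventPattern: {
    source: ["com.example.orders"],    // hypothetical event source
    detailType: ["new order created"], // matches only this detail-type
  },
});
newOrderRule.addTarget(new targets.LambdaFunction(orderFulfillmentFunction)); // assumed function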

EventBridge is serverless, so there is no infrastructure to manage and the service scales automatically. EventBridge has native integrations with over 100 AWS services and over 40 SaaS providers.

Amazon EventBridge has a native integration with AWS Lambda, and many AWS customers use events to trigger Lambda functions to process events. You may also want to send events to workloads running on Amazon EC2 or containerized workloads deployed with Amazon ECS or Amazon EKS. These services are deployed into an Amazon Virtual Private Cloud, or VPC.

For some use cases, you may be able to expose public endpoints for your VPC. You can use EventBridge API destinations to send events to any public HTTP endpoint. API destinations include features like OAuth support and rate limiting to control the number of events you are sending per second.

However, some customers are not able to expose public endpoints for security or compliance purposes. This tutorial shows you how to send EventBridge events to a private endpoint in a VPC using a Lambda function to relay events. This solution deploys the Lambda function connected to the VPC and uses IAM permissions to enable EventBridge to invoke the Lambda function. Learn more about Lambda VPC connectivity here.

In this blog post, you learn how to send EventBridge events to a private endpoint in a VPC. You set up an example application with an EventBridge event bus, a Lambda function to relay events, a Flask application running in an EKS cluster to receive events behind an Application Load Balancer (ALB), and a secret stored in Secrets Manager for authenticating requests. This application uses EKS and Secrets Manager to demonstrate sending and authenticating requests to a containerized workload, but the same pattern applies for other container orchestration services like ECS and your preferred secret management solution.

Continue reading for the full example application and walkthrough. If you have an existing application in a VPC, you can deploy just the event relay portion and input your VPC details as parameters.

Solution overview

Architecture

  1. An event is sent to the EventBridge bus.
  2. If the event matches a certain pattern (for example, if ‘detail-type’ is ‘inbound-event-sent’), an EventBridge rule uses EventBridge’s input transformer to format the event as an HTTP call.
  3. The EventBridge rule pushes the event to a Lambda function connected to the VPC and a CloudWatch Logs group for debugging.
  4. The Lambda function calls Secrets Manager and retrieves a secret key. It appends the secret key to the event headers and then makes an HTTP call to the ALB URL endpoint (see the sketch after this list).
  5. ALB routes this HTTP call to a node group in the EKS cluster. The Flask application running on the EKS cluster calls Secrets Manager, confirms that the secret key is valid, and then processes the event.
  6. The Lambda function receives a response from ALB.
    1. If the Flask application fails to process the event for any reason, the Lambda function raises an error. The function’s failure destination is configured to send the event and the error message to an SQS dead letter queue.
    2. If the Flask application successfully processes the event and the ‘return-response-event’ flag in the event was set to ‘true’, then the Lambda function publishes a new ‘outbound-event-sent’ event to the same EventBridge bus.
  7. Another EventBridge rule matches detail-type ‘outbound-event-sent’ events and routes these to the CloudWatch Logs group for debugging.
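
The event relay function itself is conceptually simple: it fetches the shared secret, adds it to the request headers, and forwards the event payload to the ALB. The following TypeScript sketch illustrates that flow; the sample repository may implement the function differently, and the environment variable names and header name used here are assumptions:

import { SecretsManagerClient, GetSecretValueCommand } from "@aws-sdk/client-secrets-manager";

const secrets = new SecretsManagerClient({});

// Assumed environment variables supplied by the deployment template.
const ALB_URL = process.env.ALB_URL!;
const SECRET_NAME = process.env.SECRET_NAME!;

export const handler = async (event: Record<string, unknown>): Promise<void> => {
  // Retrieve the shared secret used to authenticate with the Flask application.
  const secret = await secrets.send(new GetSecretValueCommand({ SecretId: SECRET_NAME }));

  // Relay the event to the private ALB endpoint, passing the secret in a header.
  const response = await fetch(ALB_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-secret": secret.SecretString ?? "", // hypothetical header name
    },
    body: JSON.stringify(event),
  });

  // Throwing routes the failed event to the function's failure destination (SQS DLQ).
  if (!response.ok) {
    throw new Error(`Relay failed with status ${response.status}`);
  }
};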

Prerequisites

To run the application, you must install the AWS CLI, Docker CLI, eksctl, kubectl, and AWS SAM CLI.

To clone the repository, run:

git clone https://github.com/aws-samples/eventbridge-events-to-vpc.git

Creating the EKS cluster

  1. In the example-vpc-application directory, use eksctl to create the EKS cluster using the config file.
    cd example-vpc-application
    eksctl create cluster --config-file eksctl_config.yaml

    This takes a few minutes. This step creates an EKS cluster with one node group in us-east-1. The EKS cluster has a service account with IAM permissions to access the Secrets Manager secret you create later.

  2. Use your AWS account’s default Amazon Elastic Container Registry (ECR) private registry to store the container image. First, follow these instructions to authenticate Docker to ECR. Next, run this command to create a new ECR repository. The create-repository command returns a repository URI (for example, 123456789.dkr.ecr.us-east-1.amazonaws.com/events-flask-app).
    aws ecr create-repository --repository-name events-flask-app 

    Use the repository URI in the following commands to build, tag, and push the container image to ECR.

    docker build --tag events-flask-app .
    docker tag events-flask-app:latest {repository-uri}:1
    docker push {repository-uri}:1
  3. In the Kubernetes deployment manifest file (/example-vpc-application/manifests/deployment.yaml), fill in your repository URI and container image version (for example, 123456789.dkr.ecr.us-east-1.amazonaws.com/events-flask-app:1)

Deploy the Flask application and Application Load Balancer

  1. Within the example-vpc-application directory, use kubectl to apply the Kubernetes manifest files. This step deploys the ALB, which takes time to create, and you may receive an error message during the deployment (‘no endpoints available for service “aws-load-balancer-webhook-service”‘). Rerun the same command until the ALB is deployed and you no longer receive the error message.
    kubectl apply --kustomize manifests/
  2. Once the deployment is completed, verify that the Flask application is running by retrieving the Kubernetes pod logs. The first command retrieves a pod name to fill in for the second command.
    kubectl get pod --namespace vpc-example-app
    kubectl logs --namespace vpc-example-app {pod-name} --follow

    You should see the Flask application outputting ‘Hello from my container!’ in response to GET request health checks.

    Hello message

Get VPC and ALB details

Next, you retrieve the security group ID, private subnet IDs, and ALB DNS Name to deploy the Lambda function connected to the same VPC and private subnet and send events to the ALB.

  1. In the AWS Management Console, go to the VPC dashboard and find Subnets. Copy the subnet IDs for the two private subnets (for example, subnet name ‘eksctl-events-cluster/SubnetPrivateUSEAST1A’).
    Subnets
  2. In the VPC dashboard, under Security, find the Security Groups tab. Copy the security group ID for ‘eksctl-events-cluster/ClusterSharedNodeSecurityGroup’.
    Security groups
  3. Go to the EC2 dashboard. Under Load Balancing, find the Load Balancer tab. There is a load balancer associated with your VPC ID. Copy the DNS name for the load balancer, adding ‘http://’ as a prefix (for example, http://internal-k8s-vpcexamp-vpcexamp-c005e07d1a-1074647274.us-east-1.elb.amazonaws.com).
    Load balancer

Create the Secrets Manager VPC endpoint

You need a VPC endpoint for your application to call Secrets Manager.

  1. In the VPC dashboard, find the Endpoints tab and choose Create Endpoint. Select Secrets Manager as the service, and then select the VPC, private subnets, and security group that you copied in the previous step. Choose Create.
    VPC endpoint

Deploy the event relay application

Deploy the event relay application using the AWS Serverless Application Model (AWS SAM) CLI:

  1. Open a new terminal window and navigate to the event-relay directory. Run the following AWS SAM CLI commands to build the application and step through a guided deployment.
    cd event-relay
    sam build
    sam deploy --guided

    The guided deployment process prompts for input parameters. Enter ‘event-relay-app’ as the stack name and accept the default Region. For other parameters, submit the ALB and VPC details you copied: Url (ALB DNS name), security group ID, and private subnet IDs. For the Secret parameter, pass any value. The AWS SAM template saves this value as a Secrets Manager secret to authenticate calls to the container application. This is an example of how to pass secrets in the event relay HTTP call. Replace this with your own authentication method in production environments.

  2. Accept the defaults for the remaining options. For ‘Deploy this changeset?’, select ‘y’. Here is an example of the deployment parameters.
    Parameters

Test the event relay application

Both the Flask application in a VPC and the event relay application are now deployed. To test the event relay application, keep the Kubernetes pod logs from a previous step open to monitor requests coming into the Flask application.

  1. You can open a new terminal window and run this AWS CLI command to put an event on the bus, or go to the EventBridge console, find your event bus, and use the Send events UI.
    aws events put-events \
    --entries '[{"EventBusName": "event-relay-bus" ,"Source": "eventProducerApp", "DetailType": "inbound-event-sent", "Detail": "{ \"event-id\": \"123\", \"return-response-event\": true }"}]'

    When the event is relayed to the Flask application, a POST request in the Kubernetes pod logs confirms that the application processed the event.

    Terminal response

  2. Navigate to the CloudWatch Logs groups in the AWS Management Console. The ‘/aws/events/event-bus-relay-logs’ group contains logs for the EventBridge events. The ‘/aws/lambda/EventRelayFunction’ logs show how the Lambda function relays the inbound event and puts a new outbound event on the EventBridge bus.
  3. You can test the SQS dead letter queue by creating an error. For example, you can manually change the Lambda function code in the console to pass an incorrect value for the secret. After sending a test event, navigate to the SQS queue in the console and poll for messages. The message shows the error message from the Flask application and the full event that failed to process.

Cleaning up

In the VPC dashboard in the AWS Management Console, find the Endpoints tab and delete the Secrets Manager VPC endpoint. Next, run the following commands to delete the rest of the example application. Be sure to run the commands in this order as some of the resources have dependencies on one another.

sam delete --stack-name event-relay-app
kubectl --namespace vpc-example-app delete ingress vpc-example-app-ingress

From the example-vpc-application directory, run this command.

eksctl delete cluster --config-file eksctl_config.yaml

Conclusion

Event-driven architectures and EventBridge can help you accelerate feature velocity and build scalable, fault tolerant applications. This post demonstrates how to send EventBridge events to a private endpoint in a VPC using a Lambda function to relay events and emit response events.

To learn more, read Getting started with event-driven architectures and visit EventBridge tutorials on Serverless Land.

Managing multi-tenant APIs using Amazon API Gateway

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/managing-multi-tenant-apis-using-amazon-api-gateway/

This post is written by Satish Mane, Solutions Architect.

Many ISVs provide platforms as a service in a multi-tenant environment. You can achieve multi-tenancy by partitioning the platform based on customer identifiers such as customer ID or account ID. The architecture for multi-tenant environments is often composed of authentication, authorization, a service layer, queues, and databases.

The primary focus of these architectures is to simplify the addition of more features. As microservice architectures gain popularity, the multi-tenant design pattern has opened up new challenges and opportunities for software vendors. The challenge in a multi-tenant environment is that excessive load from a single customer, caused by many requests to an API, can affect the entire platform.

This blog post looks at how to protect and monetize multi-tenant APIs using Amazon API Gateway. It describes a multi-tenant architecture design pattern based on a custom tenant ID to onboard customers. A tenant in a multi-tenant platform represents the customer having a group of users with common access, but individuals having specific permissions to the platform.

Overview

This example protects multi-tenant platform REST APIs using Amazon Cognito, Amazon API Gateway, and AWS Lambda.

In the following sections, you learn how to use the API Gateway’s usage plans to protect and productize multi-tenant platforms. Usage plans enable throttling of excessive API requests and apply an API usage quota policy. The user authenticates using Amazon Cognito to get a JSON Web Token (JWT) that is passed to API Gateway for authorization.

The multi-tenant platform that exposes REST APIs has clients such as a mobile app, a web application, and API clients that consume the REST APIs. This post focuses on protecting REST APIs with Amazon Cognito as the security layer for authenticating users and issuing tokens using OpenID Connect. The token contains the customer identity information, such as the tenant ID to which the users belong. API Gateway throttles the requests from a tenant only after the limit defined in the usage plan is exceeded.

Architecture

This architecture shows the flow of user requests:

  1. The client application sends a request to Amazon Cognito using the /oauth2/authorize or /login endpoint. Amazon Cognito authenticates the user credentials.
  2. Amazon Cognito redirects using an authorization code grant and prompts the user to enter credentials. After authentication, it returns the authorization code.
  3. The client application then exchanges the authorization code for a JWT from Amazon Cognito.
  4. Upon successful authentication, Amazon Cognito returns JWTs, such as an access_token, id_token, and refresh_token. The access/ID token stores information about the granted permissions, including the tenant ID to which this user belongs.
  5. The client application invokes the REST API that is deployed in API Gateway. The API request passes the JWT as a bearer token in the API request Authorization header.
  6. Since the tenant ID is encoded in the JWT, the Lambda authorizer function validates and decodes the token, and extracts the tenant ID from it.
  7. The Lambda token authorizer function returns an IAM policy along with the tenant ID extracted from the decoded token.
  8. The application’s REST API is configured with usage plans against a custom API key, which is the tenant ID in API Gateway. API Gateway evaluates the IAM policy and looks up the usage policy using the API key. It throttles API requests if the number of requests exceeds the throttle or quota limits in the usage policy.
  9. If the number of API requests is within the limit, then API Gateway sends requests to the downstream application REST API. This could be deployed using containers, Lambda, or an Amazon EC2 instance.

Customer (tenant) onboarding

There are multiple ways to set up multi-tenant applications. You can either create tenant-specific pools or add tenant ID as a custom attribute in each user profile. This blog uses the latter approach. The tenant ID is added to the JWT after successful authentication.

Since the tenant ID is an API key in API Gateway, it must be a minimum of 20 characters long. You can define the structure of the tenant ID, such as <customer id>-<random string>. As part of tenant onboarding, you can automate configuring the API key and usage plans in API Gateway using CDK APIs. Here, you configure the API key and usage plan as part of the solution deployment itself.
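
The following CDK sketch shows how such a tenant API key and usage plan could be configured. The construct names, tenant ID value, and limits are illustrative assumptions (the limits mirror the thresholds used later in this post), not the exact configuration of the sample stack, and api is assumed to be an existing RestApi construct:

import * as apigateway from "aws-cdk-lib/aws-apigateway";

// Hypothetical tenant ID: <customer id>-<random string>, at least 20 characters long.
const tenantId = "customer001-a1b2c3d4e5f6g7h8";

// Use the tenant ID as a custom API key value.
const tenantApiKey = new apigateway.ApiKey(this, "TenantApiKey", {
  value: tenantId,
});

// Attach throttle and quota limits to the tenant through a usage plan.
const usagePlan = api.addUsagePlan("TenantUsagePlan", {
  throttle: { rateLimit: 10, burstLimit: 2 },         // requests per second
  quota: { limit: 5, period: apigateway.Period.DAY }, // requests per day
});
usagePlan.addApiKey(tenantApiKey);
usagePlan.addApiStage({ stage: api.deploymentStage });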

Authentication and authorization

You need a user pool and application client enabled with the authorization code mechanism for authenticating users. API Gateway can verify JWT OAuth tokens against single Amazon Cognito user pools. To get tenant information (tenant ID), use a custom Lambda authorizer function in API Gateway to verify the token, extract the tenant ID, and return it to API Gateway.

API Gateway usage plans

API Gateway supports the usage plan feature for REST APIs only. This solution uses a MOCK integration type as the integration point. You can use the usage plan to set the throttle and quota limits that are associated with API keys. API keys can be generated, or you can use a custom key. To enforce usage plans for each tenant separately, use the tenant ID as a prefix to a uniquely generated value to prepare the custom API key.

Configure API Gateway to integrate API key and Usage plan

You need to enable the REST API to require an API key and set the API key source to AUTHORIZER. There are two ways to accept API keys for every incoming request: you can supply the key in the incoming request HEADER, or via a custom authorizer Lambda function. This example uses a custom authorizer Lambda function to retrieve the API key, which is extracted from the JWT received in the incoming API request. Clients only pass JWTs in the request Authorization header. These steps are automated using the AWS CDK.
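
To illustrate the mechanism, the following TypeScript sketch shows the general shape of such a token authorizer: it decodes the JWT, reads the custom:tenant_id claim, and returns it as the usageIdentifierKey so that API Gateway counts the request against the tenant’s usage plan. This is a simplified sketch that skips signature verification; the sample repository’s implementation may differ:

import { APIGatewayTokenAuthorizerEvent, APIGatewayAuthorizerResult } from "aws-lambda";

export const handler = async (
  event: APIGatewayTokenAuthorizerEvent
): Promise<APIGatewayAuthorizerResult> => {
  // Strip the "Bearer " prefix and decode the JWT payload.
  // NOTE: a real authorizer must verify the token signature against the
  // Cognito user pool's JWKS before trusting any claims.
  const token = event.authorizationToken.replace(/^Bearer\s+/i, "");
  const payload = JSON.parse(Buffer.from(token.split(".")[1], "base64").toString());
  const tenantId: string = payload["custom:tenant_id"];

  return {
    principalId: payload.sub,
    policyDocument: {
      Version: "2012-10-17",
      Statement: [
        { Action: "execute-api:Invoke", Effect: "Allow", Resource: event.methodArn },
      ],
    },
    // With the API key source set to AUTHORIZER, API Gateway uses this value as
    // the API key and matches it to the tenant's usage plan.
    usageIdentifierKey: tenantId,
  };
};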

Prerequisites

Deploying the example

The example source code is available on GitHub. To deploy and configure solution:

  1. Clone the repository to your local machine.
    git clone https://github.com/aws-samples/api-gateway-usage-policy-based-api-protection
  2. Prepare the deployment package.
    cdk synth
    npm run build
    npm install --prefix aws-usage-policy-stack/lambda/src
  3. Configure the user pool in Amazon Cognito.
    npx cdk deploy CognitoStack
  4. Open the AWS Management Console and navigate to Amazon Cognito. Choose Manage user pool and select your user pool. Note down the pool ID under general settings.
    User pool
  5. Create a user with a tenant ID.
    aws cognito-idp admin-create-user --user-pool-id <REPLACE WITH COGNITO POOL ID> --username <REPLACE WITH USERNAME> \
    --user-attributes Name="given_name",Value="<REPLACE WITH FIRST NAME>" Name="family_name",Value="<REPLACE WITH LAST NAME>" Name="custom:tenant_id",Value="<REPLACE WITH CUSTOMER ID>" \
    --temporary-password change1t
    
  6. To simplify testing the OAuth flow, use https://openidconnect.net/. In the configuration, set the JWKS well known URI.
    https://cognito-idp.<REPLACE WITH AWS REGION>.amazonaws.com/<REPLACE WITH COGNITO POOL ID>/.well-known/openid-configuration
  7. Test the OAuth flow with https://openidconnect.net/ to fetch the JWT ID token. Save the token in a text editor for later use.
  8. Open aws-usage-policy-stack/app.ts in an IDE and replace “NOT_DEFINED” with the 20-character long tenant ID from the previous section.
  9. Configure the user pool in API Gateway and create the Lambda function:
    npx cdk deploy ApigatewayStack
  10. After successfully deploying the API Gateway stack, open the AWS Management Console and select API Gateway. Locate ProductRestApi in the name column and note its ID.
    API Gateway console

Testing the example

Test the example using the following curl command. It throttles the requests to the deployed API based on defined limits and quotas. The following thresholds are preset: API quota limit of 5 requests/day, throttle limit of 10 requests/second, and a burst limit of 2 requests/second.

To simulate the scenario and start throttling requests:

  1. Open a terminal window.
  2. Install the curl utility if necessary.
  3. Run the following command six times after replacing placeholders with the correct values.
    curl -H "Authorization: Bearer <REPLACE WITH ID_TOKEN received in step 7 of Deploy Amazon Cognito Resources>" -X GET https://<REPLACE WITH REST API ID noted in step 10 of Deploy Amazon API Gateway resources>.execute-api.eu-west-1.amazonaws.com/dev/products

You receive the message {"message": "Limit Exceeded"} after you run the command for the sixth time. To repeat the tests, navigate to the API Gateway console. Change the quota limits in the usage plan and run the preceding command again. You can monitor HTTP 429 (Limit Exceeded) errors in the API Gateway dashboard.

API Gateway console

Any changes to usage plan limits do not need redeployment of the API in API Gateway. You can change limits dynamically. Changes take a few seconds to become effective.

Cleaning up

To avoid incurring future charges, clean up the resources created. To delete the CDK stacks, use the following command. Since there are multiple stacks, you must explicitly specify their names.

cdk destroy CognitoStack ApigatewayStack

Conclusion

This post covers how to use the API Gateway usage plan feature to protect multi-tenant APIs from excessive request loads, and to productize APIs by enforcing customer-specific usage quotas.

To learn more about Amazon API Gateway, refer to Amazon API Gateway documentation. For more serverless learning resources, visit Serverless Land.

Combining Amazon AppFlow with AWS Step Functions to maximize application integration benefits

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/combining-amazon-appflow-with-aws-step-functions-to-maximize-application-integration-benefits/

This post is written by Ahmad Aboushady, Senior Technical Account Manager and Kamen Sharlandjiev, Senior Specialist Solution Architect, Integration.

In this blog post, you learn how to orchestrate AWS service integrations to reduce the manual steps in your workflow. The example uses AWS Step Functions SDK integration to integrate Amazon AppFlow and the AWS Glue Data Catalog without writing custom code. It uses Amazon EventBridge to trigger Step Functions automatically every time an Amazon AppFlow flow run finishes.

Amazon AppFlow enables customers to transfer data securely between software as a service (SaaS) applications, like Salesforce, SAP, Zendesk, Slack, ServiceNow, and multiple AWS services.

An everyday use case of Amazon AppFlow is creating a customer-360 by integrating marketing, customer support, and sales data. For example, analyze the revenue impact of different marketing channels by synchronizing the revenue data from Salesforce with marketing data from Adobe Marketo.

This involves setting up flows to ingest data from different data sources and SaaS applications into a data lake based on Amazon S3. It uses AWS Glue to crawl and catalog this data. Customers use this catalog to access data quickly in several ways.

For example, they query the data using Amazon Athena or Amazon QuickSight for visualizations, business intelligence, and anomaly detection. You can create those data flows quickly with no code required. However, to complete the next set of requirements, customers often go through multiple manual steps of provisioning and configuring different AWS resources. One such step requires creating an AWS Glue crawler and running it with every Amazon AppFlow flow execution.

Step Functions can help us automate this process. This is a low-code workflow orchestration service that offers a visual workflow designer. You can quickly build workflows using the built-in drag-and-drop interface available in the AWS Management Console.

You can follow this blog and build your end-to-end state machine using the Step Functions Workflow Studio, or use the AWS Serverless Application Model (AWS SAM) template to deploy the example. The Step Functions state machine uses SDK integration with other AWS Services, so you don’t need to write any custom integration code.

Overview

The following diagram depicts the workflow with the different states in the state machine. You can group these into three phases: preparation, processing, and configuration.

  • The preparation phase captures all the configuration parameters and collects metadata about the data ingested by Amazon AppFlow.
  • The processing phase generates the AWS Glue table definition and sets the required parameters based on the destination file type. It iterates through the different columns and adds them as part of the table definition.
  • The last phase provisions the AWS Glue Data Catalog resources by creating a new AWS Glue table or updating an existing one. With each Amazon AppFlow flow execution, the state machine determines if a new Glue table partition is required.

Workflow architecture

Preparation phase

The first state, “SetDatabaseAndContext”, is a pass state where you set the configuration parameters used in later states. Set the AWS Glue database and table name and capture the details of the data flow. You can do this by using the parameters filter to build a new JSON payload using parts of the state input similar to:

"Parameters": {
        "Config": {
          "Database": "<Glue-Database-Name>",
          "TableName.$": "$.detail['flow-name']",
          "detail.$": "$.detail"
        }
}

The following state, “DatabaseExist?” is an AWS SDK integration using a “GetDatabase” call to AWS Glue to ensure that the database exists. Here, the state uses error handling to intercept exception messages from the SDK call. This feature splits the workflow and adds an extra step if needed.

In this case, the SDK call returns an exception if the database does not exist, and the workflow invokes the “CreateDatabase” state. It moves to the “CleanUpError” state to clean up any errors and set the configuration parameters accordingly. Afterwards, with the database in place, the workflow continues to the next state: “DescribeFlow”. This returns the metadata of the Amazon AppFlow flow. Part of this metadata is the list of the object fields, which you must create in the Glue table and partitions.

Here is an error handling state that catches exceptions and routes the flow to execute an extra step:

"Catch": [
  {
    "ErrorEquals": [
      "States.ALL"
    ],
    "Comment": "Create Glue Database",
    "Next": "CreateDatabase",
    "ResultPath": "$.error"
  }
]

In the next state, “DescribeFlow”, you use the AWS SDK integration to get the Amazon AppFlow flow configuration. This uses the Amazon AppFlow “DescribeFlow” API call. It moves to “S3AsDestination?”, which is a choice state to check if S3 is a destination for the flow. Amazon AppFlow allows you to bring data into different purpose-built data stores, such as S3, Amazon Redshift, or external SaaS or data warehouse applications. This automation can only continue if the configured destination is S3.

The choice definition is:

"Choices": [
  {
    "Variable": "$.FlowConfig.DestinationFlowConfigList[0].ConnectorType",
    "StringEquals": "S3",
    "Next": "GenerateTableDefinition"
  }
],
"Default": "S3NotDestination"

Processing phase

The following state generates the base AWS Glue table definition based on the destination file type. Then it uses a map state to iterate and transform the Amazon AppFlow schema output into what the AWS Glue Data Catalog expects as input.

Next, add the “GenerateTableDefinition” state and use the parameters filter to build a new JSON payload output. Finally, use the information from the “DescribeFlow” state similar to:

"Parameters": {
  "Config.$": "$.Config",
  "FlowConfig.$": "$.FlowConfig",
  "TableInput": {
    "Description": "Created by AmazonAppFlow",
    "Name.$": "$.Config.TableName",
    "PartitionKeys": [
      {
        "Name": "partition_0",
        "Type": "string"
      }
    ],
    "Retention": 0,
    "Parameters": {
      "compressionType": "none",
      "classification.$": "$.FlowConfig.DestinationFlowConfigList[0].DestinationConnectorProperties['S3'].S3OutputFormatConfig.FileType",
      "typeOfData": "file"
    },
    "StorageDescriptor": {
      "BucketColumns": [],
      "Columns.$": "$.FlowConfig.Tasks[?(@.TaskType == 'Map')]",
      "Compressed": false,
      "InputFormat": "org.apache.hadoop.mapred.TextInputFormat",
      "Location.$": "States.Format('{}/{}/', $.Config.detail['destination-object'], $.FlowConfig.FlowName)",
      "NumberOfBuckets": -1,
      "OutputFormat": "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat",
      "SortColumns": [],
      "StoredAsSubDirectories": false
    },
    "TableType": "EXTERNAL_TABLE"
  }
}

The following state, “DestinationFileFormatEvaluator”, is a choice state to change the JSON payload according to the destination file type. Amazon AppFlow supports different file type conversions when S3 is the destination of your data. These formats are CSV, Parquet, and JSON Lines. AWS Glue uses various serialization libraries according to the file type.

You iterate within a map state to transform the AWS Glue table schema and set the column type to a known AWS Glue format. If the file type is unrecognized or does not have an equivalent in Glue, default this field to string. The map state configuration is defined as:

"Iterator": {
        "StartAt": "KnownFIleFormat?",
        "States": {
          "KnownFIleFormat?": {
            "Type": "Choice",
            "Choices": [
              {
                "Or": [
                  {
                    "Variable": "$.TaskProperties.SOURCE_DATA_TYPE",
                    "StringEquals": "boolean"
                  },
                  {
                    "Variable": "$.TaskProperties.SOURCE_DATA_TYPE",
                    "StringEquals": "double"
                  },
                  .
                  .
                  .
                  .
                  {
                    "Variable": "$.TaskProperties.SOURCE_DATA_TYPE",
                    "StringEquals": "timestamp"
                  }
                ],
                "Next": "1:1 mapping"
              }
            ],
            "Default": "Cast to String"
          },
          "1:1 mapping": {
            "Type": "Pass",
            "End": true,
            "Parameters": {
              "Name.$": "$.DestinationField",
              "Type.$": "$.TaskProperties.SOURCE_DATA_TYPE"
            }
          },
          "Cast to String": {
            "Type": "Pass",
            "End": true,
            "Parameters": {
              "Name.$": "$.DestinationField",
              "Type": "string"
            }
          }
        }
      },
"ItemsPath": "$.TableInput.StorageDescriptor.Columns",
"ResultPath": "$.TableInput.StorageDescriptor.Columns",

Configuration phase

The next stage in the workflow is “TableExist?”, which checks if the Glue table exists. If the state machine detects any error because the table does not exist, it moves to the “CreateTable” state. Alternatively, it goes to the “UpdateTable” state.

Both states use the AWS SDK integration to create or update the AWS Glue table definition using the “TableInput” parameter. AWS Glue operates with partitions. Every time you have new data stored in a new S3 prefix, you must update the table and add a new partition showing where the data sits.

You need an extra step to check if Amazon AppFlow has stored the data into a new S3 prefix or an existing one. In the “AddPartition?” state, you must review and determine the next step of your workflow. For example, you must validate that the flow executed successfully and processed data.

A choice state helps with those checks:

"And": [
            {
              "Variable": "$.Config.detail['execution-id']",
              "IsPresent": true
            },
            {
              "Variable": "$.Config.detail['status']",
              "StringEquals": "Execution Successful"
            },
            {
              "Not": {
                "Variable": "$.Config.detail['num-of-records-processed']",
                "StringEquals": "0"
              }
            }
          ]

Amazon AppFlow supports different types of flow execution. With scheduled flows, you can regularly configure Amazon AppFlow to hydrate a data lake by bringing only new data since its last execution. Sometimes, after a successful flow execution, there is no new data to ingest. The workflow concludes and moves to the success state in such cases. However, if there is new data, the state machine continues to the next state, “SingleFileAggregation?”.

Amazon AppFlow supports different file aggregation strategies and allows you to aggregate all ingested records into a single or multiple files. Depending on your flow configuration, it may store your data in a different S3 prefix with each flow execution.

In this state, you check this configuration to decide if you need a new partition for your AWS Glue table.

"Variable": "$.FlowConfig.DestinationFlowConfigList[0].DestinationConnectorProperties.S3.S3OutputFormatConfig.AggregationConfig.AggregationType",
"StringEquals": "SingleFile"

If the data flow aggregates all records into a single file per flow execution, it stores all data into a single S3 prefix. In this case, there is a single partition in your AWS Glue table. You must create that single partition the first time this state machine executes for a specific flow.

Use the AWS SDK integration to get the table partition from the AWS Glue in the “IsPartitionExist?” state. Conclude the workflow and move to the “Success” state if the partition exists. Otherwise, create that single partition in another state, “CreateMainPartition”.

If the flow run does not aggregate files, every flow run generates multiple files into a new S3 prefix. In this case, you add a new partition to the AWS Glue table. A pass state, “ConfigureDestination”, configures the required parameters for the partition creation:

"Parameters": {
        "InputFormat.$": "$.TableInput.StorageDescriptor.InputFormat",
        "OutputFormat.$": "$.TableInput.StorageDescriptor.OutputFormat",
        "Columns.$": "$.TableInput.StorageDescriptor.Columns",
        "Compressed.$": "$.TableInput.StorageDescriptor.Compressed",
        "SerdeInfo.$": "$.TableInput.StorageDescriptor.SerdeInfo",
        "Location.$": "States.Format('{}{}', $.TableInput.StorageDescriptor.Location, $.Config.detail['execution-id'])"
      },
 "ResultPath": "$.TableInput.StorageDescriptor"

Next, move to the “CreateNewPartition” state to use the AWS SDK integration to create a new partition to the Glue table similar to:

"Parameters": {
        "DatabaseName.$": "$.Config.Database",
        "TableName.$": "$.Config.TableName",
        "PartitionInput": {
          "Values.$": "States.Array($.Config.detail['execution-id'])",
          "StorageDescriptor.$": "$.TableInput.StorageDescriptor"
        }
      },
"Resource": "arn:aws:states:::aws-sdk:glue:createPartition"

This concludes the workflow with a “Succeed” state after configuring the AWS Glue table in response to the new Amazon AppFlow flow run.

Conclusion

This blog post explores how to integrate Amazon AppFlow and AWS Glue using Step Functions to automate your business requirements. You can use AWS Lambda to simplify the configuration phase and reduce state transitions or create complex checks, filters, or even data cleansing and preparation.

This approach allows you to tailor the schema conversion to your business requirements. Use this AWS SAM template to deploy this example. This provides the Step Functions workflow described in this post and the EventBridge rule to trigger the state machine after each Amazon AppFlow flow run. The template also includes all required IAM roles and permissions.

For more serverless learning resources, visit Serverless Land.

Optimizing your AWS Lambda costs – Part 2

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/optimizing-your-aws-lambda-costs-part-2/

This post is written by Chris Williams, Solutions Architect and Thomas Moore, Solutions Architect, Serverless.

Part 1 of this blog series looks at optimizing AWS Lambda costs through right-sizing a function’s memory, and code tuning. We also explore how using Graviton2, Provisioned Concurrency and Compute Savings Plans can offer a reduction in per-millisecond billing.

Part 2 continues to explore cost optimization techniques for Lambda with a focus on architectural improvements and cost-effective logging.

Event filtering

A common serverless architecture pattern is Lambda reading events from a queue or a stream, such as Amazon SQS or Amazon Kinesis Data Streams. This uses an event source mapping, which defines how the Lambda service handles incoming messages or records from the event source.

Event filtering

Sometimes you don’t want to process every message in the queue or stream because the data is not relevant. For example, if IoT vehicle data is sent to a Kinesis Stream and you only want to process events where tire_pressure is < 32, then the Lambda code may look like this:

def lambda_handler(event, context):
    if event["tire_pressure"] >= 32:
        return

    # business logic goes here

This is inefficient as you are paying for Lambda invocations and execution time when there is no business value beyond filtering.

Lambda now supports the ability to filter messages before invocation, simplifying your code and reducing costs. You only pay for Lambda when the event matches the filter criteria and triggers an invocation.

Filtering is supported for Amazon Kinesis Data Streams, Amazon DynamoDB Streams, and Amazon SQS by specifying filter criteria when setting up the event source mapping. For example, using the following AWS CLI command:

aws lambda create-event-source-mapping \
--function-name fleet-tire-pressure-evaluator \
--batch-size 100 \
--starting-position LATEST \
--event-source-arn arn:aws:kinesis:us-east-1:123456789012:stream/fleet-telemetry \
--filter-criteria '{"Filters": [{"Pattern": "{\"tire_pressure\": [{\"numeric\": [\"<\", 32]}]}"}]}'

After applying the filter, Lambda is only invoked when tire_pressure is less than 32 in messages received from the Kinesis Stream. In this example, it may indicate a problem with the vehicle and require attention.

For more information on how to create filters, refer to examples of event pattern rules in EventBridge, as Lambda filters messages in the same way. Event filtering is explored in greater detail in the Lambda event filtering launch blog.

Avoid idle wait time

Lambda function duration is one dimension used for calculating billing. When function code makes a blocking call, you are billed for the time that it waits to receive a response.

This idle wait time can grow when Lambda functions are chained together, or a function is acting as an orchestrator for other functions. For customers who have workflows such as batch operations or order delivery systems, this adds management overhead. Additionally, it may not be possible to complete all workflow logic and error handling within the maximum Lambda timeout of 15 minutes.
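
For example, a function that polls a long-running job in a loop, as in the following minimal Python sketch (assuming an AWS Batch job is being tracked), accrues duration charges for every second it spends waiting:

import time
import boto3

batch = boto3.client("batch")

def lambda_handler(event, context):
    # Anti-pattern: the function is billed for the entire time it waits
    while True:
        job = batch.describe_jobs(jobs=[event["jobId"]])["jobs"][0]
        if job["status"] in ("SUCCEEDED", "FAILED"):
            return job["status"]
        time.sleep(10)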

Instead of handling this logic in function code, re-architect your solution to use AWS Step Functions as an orchestrator of the workflow. When using a standard workflow, you are billed for each state transition within the workflow rather than the total duration of the workflow. In addition, you can move support for retries, wait conditions, error workflows, and callbacks into the state machine, allowing your Lambda functions to focus on business logic.

The following example shows a Step Functions state machine where a single Lambda function is split into multiple states. During the wait period, there is no charge; you are only billed on state transitions.

State machine

Direct integrations

If a Lambda function is not performing custom logic when it integrates with other AWS services, it may be unnecessary and could be replaced by a lower-cost direct integration.

For example, you may be using API Gateway together with a Lambda function to read from a DynamoDB table:

With Lambda

This could be replaced using a direct integration, removing the Lambda function:

Without Lambda

API Gateway supports transformations to return the output response in a format the client expects. This avoids having to use a Lambda function to do the transformation. You can find more detailed instructions on creating an API Gateway with an AWS service integration in the documentation.

You can also benefit from direct integrations when using Step Functions. Today, Step Functions supports over 200 AWS services and 9,000 API actions. This gives greater flexibility for direct service integration and in many cases removes the need for a proxy Lambda function. This can simplify Step Functions workflows and may reduce compute costs.

Reduce logging output

Lambda automatically stores the logs that your function code generates in Amazon CloudWatch Logs. This may be useful for understanding what is happening within your application in near real-time. CloudWatch Logs charges for the total data ingested throughout the month. Therefore, reducing output to include only necessary information can help reduce costs.

When you deploy workloads into production, review the logging level of your application. For example, in a pre-production environment, debug logs can be beneficial in providing additional information to tune the function. Within your production workloads, you can disable debug-level logs and use a logging library (such as the Lambda Powertools Python Logger) to define the minimum logging level to output by using an environment variable, which allows configuration outside of the function code.

Structuring your log format enforces a standard set of information through a defined schema, instead of allowing variable formats or large volumes of text. Defining structures such as error codes and adding accompanying metrics leads to a reduction in the volume of text that repeats throughout your logs. This also improves the ability to filter logs for specific error types and reduces the risk of a mistyped character in a log message.
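
The following minimal Python sketch shows both ideas together. It assumes that a LOG_LEVEL environment variable controls the minimum level and uses the Powertools Logger to emit structured JSON:

import os
from aws_lambda_powertools import Logger

# Minimum level comes from configuration, not code: for example LOG_LEVEL=DEBUG
# in pre-production and LOG_LEVEL=INFO in production (assumed variable name)
logger = Logger(service="orders", level=os.getenv("LOG_LEVEL", "INFO"))

def lambda_handler(event, context):
    logger.debug({"full_event": event})  # only emitted when DEBUG is enabled
    logger.info({"message": "order received", "order_id": event.get("orderId")})
    return {"status": "accepted"}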

Use cost-effective storage for logs

Once CloudWatch Logs ingests data, by default it is persisted forever with a per-GB monthly storage fee. As log data ages, it typically becomes less valuable in the immediate time frame, and is instead reviewed historically on an ad-hoc basis. However, the storage pricing within CloudWatch Logs remains the same.

To avoid this, set retention policies on your CloudWatch Logs log groups to delete old log data automatically. This retention policy applies to both your existing and future log data.
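
A retention policy can be applied with a single API call, as in this minimal Python sketch. The log group name and the 30-day retention period are illustrative assumptions:

import boto3

logs = boto3.client("logs")

# Keep 30 days of logs for this function's log group, then delete them automatically
logs.put_retention_policy(
    logGroupName="/aws/lambda/my-function",
    retentionInDays=30,
)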

Some application logs may need to persist for months or years for compliance or regulatory requirements. Instead of keeping the logs in CloudWatch Logs, export them to Amazon S3. By doing this, you can take advantage of lower-cost S3 storage classes while factoring in any expected usage patterns for how or when the data is accessed.

Conclusion

Cost optimization is an important part of creating well-architected solutions and this is no different when using serverless. This blog series explores some best practice techniques to help reduce your Lambda bill.

If you are already running AWS Lambda applications in production today, some techniques are easier to implement than others. For example, you can purchase Savings Plans with zero code or architecture changes, whereas avoiding idle wait time will require new services and code changes. Evaluate which technique is right for your workload in a development environment before applying changes to production.

If you are still in the design and development stages, use this blog series as a reference to incorporate these cost optimization techniques at an early stage. This ensures that your solutions are optimized from day one.

To get hands-on implementing some of the techniques discussed, take the Serverless Optimization Workshop.

For more serverless learning resources, visit Serverless Land.

Optimizing your AWS Lambda costs – Part 1

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/optimizing-your-aws-lambda-costs-part-1/

This post is written by Chris Williams, Solutions Architect and Thomas Moore, Solutions Architect, Serverless.

When you develop and architect solutions, cost-optimization should always be part of the process. This is no different for serverless applications that have been developed using AWS Lambda.

Your workloads may vary in terms of complexity, usage patterns, and technology. However, the following advice is applicable to all customers when deciding how to prioritize cost optimization with the tradeoffs of developing the application:

  • Efficient code makes better use of resources.
  • Consider the downstream services in architectural decisions.
  • Optimization should be a continuous cycle of improvement.
  • Prioritize changes that make the greatest improvements first.

The Optimizing your AWS Lambda costs blog series reviews operational and architectural guidance. It can be applied to existing Lambda functions and those that you develop in the future.

Introduction to Lambda pricing

Lambda pricing is calculated as a combination of:

  • Total number of requests
  • Total duration of invocations
  • Configured memory allocated

When you optimize Lambda functions, each of these components impacts the total monthly cost. This pricing applies after you exceed the AWS Free Tier that is offered for Lambda.
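
The following Python sketch shows how these components combine into a monthly estimate. The rates are illustrative placeholders only; check the Lambda pricing page for the current prices in your Region and architecture.

# Illustrative rates only; confirm current pricing for your Region and architecture
PRICE_PER_REQUEST = 0.20 / 1_000_000   # USD per request
PRICE_PER_GB_SECOND = 0.0000166667     # USD per GB-second

def estimated_monthly_cost(requests, avg_duration_ms, memory_mb):
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return requests * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# 10 million requests per month, 120 ms average duration, 512 MB memory
print(f"${estimated_monthly_cost(10_000_000, 120, 512):.2f}")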

Right-sizing

Right-sizing is a good starting point in the process of cost optimization. This exercise helps to identify the lowest cost for applications without affecting performance or requiring code changes.

For Lambda, this is accomplished by configuring the memory for a function, ranging anywhere from 128 MB up to 10,240 MB (10 GB). By adjusting the memory configuration, you also adjust the amount of vCPU that is available to the function during invocation. Tuning these settings provides memory- or CPU-bound applications with access to additional resources during the execution, which may lead to an overall reduced duration of invocation.

Identifying the optimal configuration for your Lambda functions may be manually intensive, especially if changes are made frequently. The AWS Lambda Power Tuning tool provides a solution powered by AWS Step Functions that can help identify the appropriate configuration. It analyzes a set of memory configurations against an example payload.

In the following example, as the memory allocated to this Lambda function increases, the total invocation time improves. This reduces the cost of the total execution without affecting the function’s performance. Here, the optimal memory configuration is 512 MB, as this is where resource utilization is most efficient relative to the total cost of each invocation. This varies per function, and running the tool on your Lambda functions can identify whether they benefit from right-sizing.

Power Tuning results

This exercise should be completed regularly, especially if code is released frequently. Once you have identified the appropriate memory setting for your Lambda functions, you should add right-sizing to your processes. The AWS Lambda Power Tuning tool generates programmatic output that can be used by your CI/CD workflows during the release of new code, allowing the automation of memory configuration.
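
A minimal Python sketch of that automation step might apply the recommended value after each deployment. The function name and memory value are hypothetical placeholders taken from the tuning output:

import boto3

lambda_client = boto3.client("lambda")

def apply_recommended_memory(function_name, memory_mb):
    """Apply the memory size recommended by a Power Tuning run,
    for example as a post-deployment step in a CI/CD pipeline."""
    lambda_client.update_function_configuration(
        FunctionName=function_name,
        MemorySize=memory_mb,
    )

# Hypothetical values read from the tuning tool's output
apply_recommended_memory("my-function", 512)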

Performance efficiency

A key component to Lambda pricing is the total duration of the invocation. The longer the function takes to run, the more it costs and the higher the latency in your application. For this reason, it is important to ensure the code you write is as efficient as possible and follows the Lambda best practices.

At a high level, to optimize code:

  • Minimize deployment package size to its runtime necessities. This reduces the amount of time it takes for the package to be downloaded and unpacked.
  • Minimize the complexity of dependencies. Simpler frameworks often load faster.
  • Take advantage of execution reuse. Initialize SDK clients and database connections outside the function handler, and cache static assets locally in the /tmp directory. Subsequent invocations can reuse open connections and resources in memory and in /tmp (see the sketch after this list).
  • Follow general coding performance best practices for your chosen language and runtime.
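
The following minimal Python sketch illustrates the execution reuse point: the SDK client and a cached object are created once per execution environment and reused on later invocations. The bucket variable and object key are hypothetical:

import os
import boto3

# Created once per execution environment, reused across invocations
s3 = boto3.client("s3")
CACHE = {}

def lambda_handler(event, context):
    if "settings" not in CACHE:
        # Hypothetical configuration object fetched on the first invocation only
        obj = s3.get_object(Bucket=os.environ["CONFIG_BUCKET"], Key="settings.json")
        CACHE["settings"] = obj["Body"].read()
    # Business logic using CACHE["settings"] goes here
    return {"status": "ok"}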

To help visualize the components of your application and identify performance bottlenecks, use AWS X-Ray with Lambda. You can enable X-Ray active tracing on new and existing functions by editing the function configuration. For example, with the AWS CLI:

aws lambda update-function-configuration --function-name my-function \
--tracing-config Mode=Active

The AWS X-Ray SDK can be used to trace all AWS SDK calls inside Lambda functions. This helps to identify any bottlenecks in the application performance. The X-Ray SDK for Python can be used to capture data for other libraries such as requests, sqlite3, and httplib, as shown in the following example:
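
A minimal Python sketch of that instrumentation patches the supported libraries so their calls appear as subsegments on the trace (the downstream URL is a hypothetical example):

import boto3
import requests
from aws_xray_sdk.core import patch_all

# Patch supported libraries (boto3, requests, sqlite3, and others) so their
# calls are recorded as subsegments on the function's trace
patch_all()

s3 = boto3.client("s3")

def lambda_handler(event, context):
    requests.get("https://example.com/health")  # hypothetical downstream call
    return [bucket["Name"] for bucket in s3.list_buckets()["Buckets"]]

A function instrumented this way produces a trace similar to the following segments timeline: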

Segments timeline

Amazon CodeGuru Profiler is another tool that can help with code optimization. It uses machine learning algorithms to find the most expensive lines of code and suggests ways to improve efficiency. It can be enabled on existing Lambda functions by enabling code profiling in the function configuration.

The CodeGuru console shows the results as a series of visualizations and recommendations.

Code Guru recommendations

Use these tools together with the documented best practices to evaluate your code’s performance when developing your serverless applications. Efficient code often means faster applications and reduced costs.

AWS Graviton2

In September 2021, Lambda functions powered by Arm-based AWS Graviton2 processors became generally available. Graviton2 functions are designed to deliver up to 19% better performance at 20% lower cost than x86. In addition to the lower billing cost when using Arm, you could also see a reduction in the function duration due to the CPU performance improvement, reducing costs even further.

You can configure both new and existing functions to target the AWS Graviton2 processor. Functions are invoked in the same way and integrations with services, applications and tools are not affected by the architecture change. Many functions only need the configuration change to take advantage of the price/performance of Graviton2. Others may require repackaging to use Arm-specific dependencies.

It’s always recommended to test your workloads before making the change. To see how much your code benefits from using Graviton2, use the Lambda Power Tuning tool to compare against x86. The tool allows you to compare two results on the same chart:

AWS Lambda Power Tuning Results

Provisioned concurrency

Where customers are looking to reduce their Lambda function cold starts or avoid burst throttling, provisioned concurrency provides execution environments that are ready to be invoked. It can also reduce total Lambda cost when there is a consistent volume of traffic. This is because the provisioned concurrency pricing model offers a lower total price when it is fully used.

Similar to the standard Lambda function pricing, there are price components for total requests, total duration, and memory configuration. In addition, there is the cost of each provisioned concurrency environment (based on its memory configuration). When this execution environment is fully utilized, the combined cost of the invocation and the execution environment can offer up to 16% savings on duration cost when compared to the regular on-demand pricing.

If you cannot maximize usage in an execution environment, provisioned concurrency can still offer a lower total price per invocation. In the following example, once it’s consumed for more than 60% of the available time, it becomes cheaper than using the on-demand pricing model. The savings increase in line with capacity usage.

PC vs on-demand comparison

To identify the invocation baseline of a Lambda function, look at the average concurrent execution metrics per hour over the previous 24 hours. This helps you to find a consistent baseline throughout the day where you are consistently using multiple execution environments.

For Lambda functions where peak invocation levels are only expected during particular windows of time, take advantage of a scheduled scaling action. Where traffic patterns are not easy to determine, you can implement Application Auto Scaling to adjust based on the current level of utilization.
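
As a minimal sketch, a scheduled scaling action for provisioned concurrency can be configured through Application Auto Scaling. The function name, alias, schedule, and capacity values below are assumptions for illustration:

import boto3

autoscaling = boto3.client("application-autoscaling")

resource_id = "function:order-processor:prod"  # hypothetical function and alias

autoscaling.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=5,
    MaxCapacity=100,
)

# Raise provisioned concurrency ahead of a known weekday peak
autoscaling.put_scheduled_action(
    ServiceNamespace="lambda",
    ScheduledActionName="weekday-morning-peak",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    Schedule="cron(0 8 ? * MON-FRI *)",
    ScalableTargetAction={"MinCapacity": 50, "MaxCapacity": 50},
)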

Compute savings plans

AWS Savings Plans is a flexible pricing model offering lower prices compared to on-demand pricing, in exchange for a specific usage commitment (measured in $/hour) for a one- or three-year period.

Compute Savings Plans include Amazon EC2, AWS Fargate, and Lambda. Lambda usage for duration and provisioned concurrency is charged at a discounted Savings Plans rate of up to 17% for a 1- or 3-year term.

You can implement Savings Plans without any function code or configuration changes. They can be a simpler way to save money for Lambda-based workloads. Before deciding to use a Savings Plan, analyze previous patterns to understand any variations in your month-to-month usage.

Conclusion

This blog post explains how Lambda pricing works and how right-sizing applications and tuning them for performance efficiency offers a more cost-efficient utilization model. The results can also reduce latency, creating a better experience for your end users.

The post explores how architectural changes such as moving to Graviton2 and configuring provisioned concurrency can provide cost reductions for the same operations. Finally, you can use Compute Savings Plans to add an additional cost reduction once you establish a baseline of usage per month.

Part 2 introduces further optimization opportunities for reducing Lambda invocations, moving to an asynchronous model, and reducing logging costs.

For more serverless learning resources, visit Serverless Land.

Testing Amazon EventBridge events using AWS Step Functions

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/testing-amazon-eventbridge-events-using-aws-step-functions/

This post is written by Siarhei Kazhura, Solutions Architect and Riaz Panjwani, Solutions Architect.

Amazon EventBridge is a serverless event bus that can be used to ingest and process events from a variety of sources, such as AWS services and SaaS applications. With EventBridge, developers can build loosely coupled and independently scalable event-driven applications.

With EventBridge, it can be useful to know when events fail to reach their intended destination. This can be caused by multiple factors, such as:

  1. Event pattern does not match the event rule
  2. Event transformer failure
  3. Event destination expects a different payload (for example, API destinations) and returns an error

EventBridge sends metrics to Amazon CloudWatch, which allows for the detection of failed invocations on a given event rule. You can also use EventBridge rules with a dead-letter queue (DLQ) to identify any failed event deliveries. The messages delivered to the queue contain additional metadata such as error codes, error messages, and the target ARN for debugging.

However, understanding why events fail to deliver is still a manual process. Checking CloudWatch metrics for failures and then inspecting the DLQ takes time. This is evident when developing new functionality, when you must constantly update the event matching patterns and event transformers, and run tests to see if they produce the desired effect. The EventBridge sandbox functionality can help with manual testing, but this approach does not scale or help with automated event testing.

This post demonstrates how to automate testing for EventBridge events. It uses AWS Step Functions for orchestration, along with Amazon DynamoDB and Amazon S3 to capture the results of your events, Amazon SQS for the DLQ, and AWS Lambda to invoke the workflows and processing.

Overview

Using the solution provided in this post, users can track an event from its inception to delivery and identify where any issues or errors occur. This solution is also customizable, and can incorporate integration tests against events to test pattern matching and transformations.

Reference architecture

At a high level:

  1. The event testing workflow is exposed via an API Gateway endpoint, and users can send a request.
  2. This request is validated and routed to a Step Functions EventTester workflow, which performs the event test.
  3. The EventTester workflow creates a sample event based on the received payload, and performs multiple tests on the sample event.
  4. The sample event is matched against the rule that is being tested. The results are stored in an Amazon DynamoDB EventTracking table, and the transformed event payload is stored in the TransformedEventPayload Amazon S3 bucket.
  5. The EventTester workflow has an embedded AWS Step Functions workflow called EventStatusPoller. The EventStatusPoller workflow polls the EventTracking table.
  6. The EventStatusPoller workflow has a customizable 10-second timeout. If the timeout is reached, this may indicate that the event pattern does not match. The workflow then tests whether the event matches the given pattern, using the AWS SDK for EventBridge.
  7. After completing the tests, the response is formatted and sent back to the API Gateway. By default, the timeout is set to 15 seconds.
  8. API Gateway processes the response, strips the unnecessary elements, and sends the response back to the issuer. You can use this response to verify if the test event delivery is successful, or identify the reason a failure occurred.

EventTester workflow

After an API call, the test event is sent to the EventTester Express Workflow. This orchestrates the automated testing and returns the results of the test.

EventTester workflow

In this workflow:

1. The test event is sent to EventBridge to see if the event matches the rule and can be transformed. The result is stored in a DynamoDB table.
2. The PollEventStatus synchronous Express Workflow is invoked. It polls the DynamoDB table until a record with the event ID is found or it reaches the timeout. The configurable timeout is 15 seconds by default.
3. If a record is found, it checks the event status.

From here, there are three possible states. In the first state, if the event status has succeeded:

4. The response from the PollEventStatus workflow is parsed and the payload is formatted.
5. The payload is stored in an S3 bucket.
6. The final response is created, which includes the payload, the event ID, and the event status.
7. The execution is successful, and the final response is returned to the user.

In the second state, if no record is found in the table and the PollEventStatus workflow reaches the timeout:

8. The most likely explanation for reaching the timeout is that the event pattern does not match the rule, so the event is not processed. You can build a test to verify if this is the issue.
9. From the EventBridge SDK, the TestEventPattern call is made to see if the event pattern matches the rule (a sketch of this call follows these steps).
10. The results of the TestEventPattern call are checked.
11. If the event pattern does not match the rule, then the issue has been successfully identified and the response is created to be sent back to the user. If the event pattern matches the rule, then the issue has not been identified.
12. The response shows that this is an unexpected error.

In the third state, this acts as a catch-all to any other errors that may occur:

13. The response is created with the details of the unexpected error.
14. The execution has failed, and the final response is sent back to the user.
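
For reference, the TestEventPattern call made in step 9 looks like the following minimal Python sketch. The pattern and sample event shown here are hypothetical; in the workflow they come from the tested rule and the submitted test event:

import json
import boto3

events = boto3.client("events")

# Hypothetical rule pattern and sample event
pattern = {"source": ["com.example.orders"], "detail-type": ["OrderCreated"]}
event = {
    "id": "7bf73129-1428-4cd3-a780-95db273d1602",
    "detail-type": "OrderCreated",
    "source": "com.example.orders",
    "account": "123456789012",
    "time": "2022-05-03T17:31:00Z",
    "region": "us-east-1",
    "resources": [],
    "detail": {"orderId": "order1"},
}

response = events.test_event_pattern(
    EventPattern=json.dumps(pattern),
    Event=json.dumps(event),
)
print(response["Result"])  # False indicates the event pattern does not match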

Event testing process

The following diagram shows how events are sent to EventBridge and their results are captured in S3 and DynamoDB. This is the first step of the EventTester workflow:

Event testing process

When the event is tested:

  1. The sample event is received and sent to the EventBridge custom event bus.
  2. A CatchAll rule is triggered, which captures all events on the custom event bus.
  3. All events from the CatchAll rule are sent to a CloudWatch log group, which allows for an original payload inspection.
  4. The event is also propagated to the EventTesting rule. The event is matched against the rule pattern, and if successful the event is transformed based on the transformer provided.
  5. If the event is matched and transformed successfully, the Lambda function EventProcessor is invoked to process the transformed event payload. You can add additional custom code to this function for further testing of the event (for example, API integration with the transformed payload).
  6. The event status is updated to SUCCESS and the event metadata is saved to the EventTracking DynamoDB table.
  7. The transformed event payload is saved to the TransformedEventPayload S3 bucket.
  8. If there’s an error, EventBridge sends the event to the SQS DLQ.
  9. The Lambda function ErrorHandler polls the DLQ and processes the errors in batches.
  10. The event status is updated to ERROR and the event metadata is saved to the EventTracking DynamoDB table.
  11. The event payload is saved to the TransformedEventPayload S3 bucket.

EventStatusPoller workflow

EventStatusPoller workflow

When the poller runs:

  1. It checks the DynamoDB table to see if the event has been processed.
  2. The result of the poll is checked.
  3. If the event has not been processed, the workflow loops and polls the DynamoDB table again.
  4. If the event has been processed, the results of the event are passed to next step in the Event Testing workflow.

Visit Composing AWS Step Functions to abstract polling of asynchronous services for additional details.

Testing at scale

Testing at scale

The EventTester workflow uses Express Workflows, which can handle testing high volume event workloads. For example, you can run the solution against large volumes of historical events stored in S3 or CloudWatch.

This can be achieved by using services such as Lambda or AWS Fargate to read the events in batches and run tests simultaneously. To achieve optimal performance, some performance tuning may be required depending on the scale and events that are being tested.

To minimize the cost of the demo, the DynamoDB table is provisioned with 5 read capacity units and 5 write capacity units. For a production system, consider using on-demand capacity, or update the provisioned table capacity.

Event sampling

Event sampling

In this implementation, the EventBridge EventTester can be used to periodically sample events from your system for testing:

  1. Any existing rules that must be tested are provisioned via the AWS CDK.
  2. The sampling rule is added to an existing event bus, and has the same pattern as the rule that is tested. This filters out events that are not processed by the tested rule.
  3. An SQS queue is used for buffering.
  4. A Lambda function processes events in batches, and can optionally implement sampling. For example, a 10% sampling rate takes one random message out of every 10 messages in a given batch.
  5. The event is tested against the endpoint provided. Note that the EventTesting rule is also provisioned via AWS CDK from the same code base as the tested rule. The tested rule is replicated into the EventTesting workflow.
  6. The result is returned to a Lambda function, and is then sent to CloudWatch Logs.
  7. A metric is set based on the number of ERROR responses in the logs.
  8. An alarm is configured when the ERROR metric crosses a provided threshold.

This sampling can complement existing metrics exposed for EventBridge via CloudWatch.

Solution walkthrough

To follow the solution walkthrough, visit the solution repository. The walkthrough explains:

  1. Prerequisites required.
  2. Detailed solution deployment walkthrough.
  3. Solution customization and testing.
  4. Cleanup process.
  5. Cost considerations.

Conclusion

This blog post outlines how to use Step Functions, Lambda, SQS, DynamoDB, and S3 to create a workflow that automates the testing of EventBridge events. With this example, you can send events to the EventBridge Event Tester endpoint to verify that event delivery is successful or identify the root cause for event delivery failures.

For more serverless learning resources, visit Serverless Land.

Node.js 16.x runtime now available in AWS Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/node-js-16-x-runtime-now-available-in-aws-lambda/

This post is written by Dan Fox, Principal Specialist Solutions Architect, Serverless.

You can now develop AWS Lambda functions using the Node.js 16 runtime. This version is in active LTS status and considered ready for general use. To use this new version, specify a runtime parameter value of nodejs16.x when creating or updating functions or by using the appropriate container base image.

The Node.js 16 runtime includes support for ES modules and top-level await that was added to the Node.js 14 runtime in January 2022. This is especially useful when used with Provisioned Concurrency to reduce cold start times.

This runtime version is supported by functions running on either Arm-based AWS Graviton2 processors or x86-based processors. Using the Graviton2 processor architecture option allows you to get up to 34% better price performance.

We recognize that customers have been waiting for some time for this runtime release. We hear your feedback and plan to release the next Node.js runtime version in a timelier manner.

AWS SDK for JavaScript

The Node.js 16 managed runtimes and container base images bundle the AWS JavaScript SDK version 2. Using the bundled SDK is convenient for a few use cases. For example, developers writing short functions via the Lambda console or inline functions via CloudFormation templates may find it useful to reference the bundled SDK.

In general, however, including the SDK in the function’s deployment package is good practice. Including the SDK pins a function to a specific minor version of the package, insulating code from SDK API or behavior changes. To include the SDK, refer to the SDK package and version in the dependency object of the package.json file. Use a package manager like npm or yarn to download and install the library locally before building and deploying your deployment package to the AWS Cloud.

Customers who take this route should consider using the JavaScript SDK, version 3. This version of the SDK contains modular packages. This can reduce the size of your deployment package, improving your function’s performance. Additionally, version 3 contains improved TypeScript compatibility, and using it will maximize compatibility with future runtime releases.

Language updates

With this release, Lambda customers can take advantage of new Node.js 16 language features, including the following:

Prebuilt binaries for Apple Silicon

Node.js 16 is the first runtime release to ship prebuilt binaries for Apple Silicon. Customers using M1 processors in Apple computers may now develop Lambda functions locally using this runtime version.

Stable timers promises API

The timers promises API offers timer functions that return promise objects, improving the functionality for managing timers. This feature is available for both ES modules and CommonJS.

You may designate your function as an ES module by changing the file name extension of your handler file to .mjs, or by specifying “type” as “module” in the function’s package.json file. Learn more about using Node.js ES modules in AWS Lambda.


  // index.mjs
  
  import { setTimeout } from 'timers/promises';

  export async function handler() {
    await setTimeout(2000);
    return;
  }

RegExp match indices

The RegExp match indices feature allows developers to get an array of the start and end indices of the captured string in a regular expression. Use the “d” flag in your regular expression to access this feature.

  // handler.js
  
  exports.lambdaHandler = async () => {
    const matcher = /(AWS )(Lambda)/d.exec('AWS Lambda');
    console.log("match: " + matcher.indices[0]) // 0,10
    console.log("first capture group: " + matcher.indices[1]) // 0,4
    console.log("second capture group: " + matcher.indices[2]) // 4,10
  }

Working with TypeScript

Many developers using Node.js runtimes in Lambda develop their code using TypeScript. To better support TypeScript developers, we have recently published new documentation on using TypeScript with Lambda, and added beta TypeScript support to the AWS SAM CLI.

We are also working on a TypeScript version of Lambda PowerTools. This is a suite of utilities for Lambda developers to simplify the adoption of best practices, such as tracing, structured logging, custom metrics, and more. Currently, AWS Lambda Powertools for TypeScript is in beta developer preview.

Runtime updates

To help keep Lambda functions secure, AWS will update Node.js 16 with all minor updates released by the Node.js community when using the zip archive format. For Lambda functions packaged as a container image, pull, rebuild, and deploy the latest base image from DockerHub or the Amazon ECR Public Gallery.

Amazon Linux 2

The Node.js 16 managed runtime, like Node.js 14, Java 11, and Python 3.9, is based on an Amazon Linux 2 execution environment. Amazon Linux 2 provides a secure, stable, and high-performance execution environment to develop and run cloud and enterprise applications.

Conclusion

Lambda now supports Node.js 16. Get started building with Node.js 16 by specifying a runtime parameter value of nodejs16.x when creating your Lambda functions using the zip archive packaging format.

You can also build Lambda functions in Node.js 16 by deploying your function code as a container image using the Node.js 16 AWS base image for Lambda. You can read about the Node.js programming model in the AWS Lambda documentation to learn more about writing functions in Node.js 16.

For existing Node.js functions, review your code for compatibility with Node.js 16 including deprecations, then migrate to the new runtime by changing the function’s runtime configuration to nodejs16.x.

For more serverless learning resources, visit Serverless Land.

Handling Lambda functions idempotency with AWS Lambda Powertools

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/handling-lambda-functions-idempotency-with-aws-lambda-powertools/

This post is written by Jerome Van Der Linden, Solutions Architect Builder and Dariusz Osiennik, Sr Cloud Application Architect.

One of the advantages of using AWS Lambda is its integration with messaging services like Amazon SQS or Amazon EventBridge. The integration is managed and can also handle the retrying of failed messages. If there’s an error within the Lambda function, the failed message is sent again and the function is re-invoked.

This feature increases the resilience of the application but also means that a message can be processed multiple times by the function. This is important when managing orders, payments, or any kind of transaction that must be handled only once.

As mentioned in the design principles of Lambda, “since the same event may be received more than once, functions should be designed to be idempotent”. This article explains what idempotency is and how to implement it more easily with Lambda Powertools.

Understanding idempotency

Idempotency is the property of an operation whereby it can be applied multiple times without changing the result beyond the initial application. You can run an idempotent operation safely multiple times without any side effects like duplicates or inconsistent data. For example, this is a key principle for infrastructure as code, where you don’t want to double the number of resources each time you apply a template.

Applied to Lambda, a function is idempotent when it can be invoked multiple times with the same event with no risk of side effects. To make a function idempotent, it must first identify that an event has already been processed. Therefore, it must extract a unique identifier, called an “idempotency key”.

This identifier may be in the event payload (for example, orderId), a combination of multiple fields in the payload (for example, customerId, and orderId), or even a hash of the full payload. If using the full payload, fields such as dates, timestamps, or random elements may affect the hash and lead to changing values.

The function then checks in a persistence layer (for example, Amazon DynamoDB or Amazon ElastiCache):

  • If the key is not there, then the Lambda function can proceed normally, perform the transaction, and save the idempotency key in the persistence layer. You can potentially add the result of the function in the persistence layer too, so that subsequent calls can retrieve this result directly.
  • If the key is there, then the function can return and avoid applying the transaction again.

The following diagram shows the sequence of events with this idempotency scenario:

Sequence diagram

There are edge cases in this example:

  • You can invoke the function twice with the same event within a few milliseconds. In that case, each function acts as if it’s the first time this event is received and processes it, resulting in inconsistencies.
  • The function may perform several operations that are not idempotent. If the first operation is successful and then an error happens, the idempotency key won’t be saved. Subsequent calls redo the first operation, resulting in inconsistencies.

You can guard against these edge cases by inserting a lock as soon as the event is received:

Second sequence diagram
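
A minimal Python sketch of that lock, implemented by hand with a conditional write to DynamoDB (the table name is a hypothetical placeholder), shows the idea that the library automates for you:

import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("idempotency")  # hypothetical table

def acquire_lock(idempotency_key):
    """Return True only the first time this idempotency key is seen."""
    try:
        table.put_item(
            Item={"id": idempotency_key, "status": "INPROGRESS"},
            ConditionExpression="attribute_not_exists(id)",
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # duplicate event: skip or return the stored result
        raise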

There are other questions and edge cases that you must consider when implementing idempotency on your Lambda functions. Read Making retries safe with idempotent APIs from the Builder’s Library to dive into the details. You can choose to implement idempotency by yourself or you can use a library that handles it and takes care of these edge cases for you. This is what Lambda Powertools (for Python and Java) proposes.

Idempotency with Lambda Powertools

Lambda Powertools is a library, available in Python, Java, and TypeScript. It provides utilities for Lambda functions to ease the adoption of best practices and to reduce the amount of code to perform recurring tasks. In particular, it provides a module to handle idempotency (in the Java and Python versions).

This post shows examples using the Java version. To get started with the Lambda Powertools idempotency module, you must install the library and configure it within your build process. For more details, follow AWS Lambda Powertools documentation.

Next, you must configure a persistence storage layer where the idempotency feature can store its state. You can use the built-in support for DynamoDB or you can create your own implementation for the database of your choice. This example creates a new table in DynamoDB.

The following AWS Serverless Application Model (AWS SAM) template creates a suitable table to store the state:

Resources:
  IdempotencyTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      TimeToLiveSpecification:
        AttributeName: expiration
        Enabled: true
      BillingMode: PAY_PER_REQUEST

In this definition:

  • The table is multi-tenant and can be reused by multiple Lambda functions that use the Powertools idempotency module.
  • The DynamoDB time-to-live configuration helps keep idempotency limited in time. You can configure the duration, which is 1 hour by default.

Configure the idempotency module’s behavior in the init phase of the function’s lifecycle, before the handleRequest method gets called:

public class SubscriptionHandler implements RequestHandler<Subscription, SubscriptionResult> {

  public SubscriptionHandler() {
    Idempotency.config().withPersistenceStore(
      DynamoDBPersistenceStore.builder()
        .withTableName(System.getenv("TABLE_NAME"))
        .build()
      ).configure();
  }
}

Lambda Powertools follows the paradigm of convention over configuration and provides default values for many parameters. The persistence store is the only required element. To use the DynamoDB implementation, you must specify a table name. In the previous sample, the name is provided by the environment variable TABLE_NAME.

Adding the @Idempotent annotation to the handleRequest method enables the idempotency functionality. It uses a hash of the Subscription event as the idempotency key.

  @Idempotent
  public SubscriptionResult handleRequest(final Subscription event, final Context context) {
    SubscriptionPayment payment = createSubscriptionPayment(
      event.getUsername(),
      event.getProductId()
    );

    return new SubscriptionResult(payment.getId(), "success", 200);
  }

Creating orders

The example is about creating an order for a user for a list of products. Orders should not be duplicated if the client repeats the request. API consumers can safely retry a create order request in case of issues (such as a timeout or networking disruption). The application should also allow the user to buy the same products in a short period of time if that is the user’s intention.

The following architecture diagram consists of an Amazon API Gateway REST API, the idempotent Lambda function, and a DynamoDB table for storing orders.

Architecture diagram

The Orders API allows creating a new order by calling its POST /orders endpoint with the following sample payload:

{
  "requestToken": "260d2efe-af84-11ec-b909-0242ac120002",
  "userId": "user1",  
  "items": [
    {
      "productId": "product1",      
      "price": 6.50,      
      "quantity": 5    
    },    
    {
      "productId": "product2",      
      "price": 13.50,      
      "quantity": 2    
    }
  ],  
  "comment": "AWSome Order"
}

Lambda Powertools uses JMESPath to extract the important fields from the request that uniquely identify it. It then calculates a hash of these fields to constitute the idempotency key.

In the example, the important fields are the userId and the items, to avoid duplicated orders. But the user can also buy the same list of products in a short period of time. To allow this, the API consumer can generate a client-side token and assign its value to the requestToken field. For each unique order, the token has a different value. If a request is retried by the client, it uses the same token.

This leads to the following configuration for the idempotency key:

Idempotency.config()
  .withConfig( 
    IdempotencyConfig.builder()
    .withEventKeyJMESPath("powertools_json(body).[requestToken,userId,items]")
    .build())

If the same request is sent more than once, only the first call results in a new order created in the DynamoDB table. The same order identifier is returned by the endpoint for all the subsequent calls. In this way, the API consumer can safely retry the requests without worrying about duplicating the order.

You can find the source code of the example on GitHub.

Processing payments

This example shows asynchronous batch processing of payment messages from a queue. Messages must not be processed more than once to avoid charging users multiple times for the same order. You must consider edge cases like at-least-once message delivery, an error response returned by the third-party payment API, or retrying the batch of messages.

The following architecture diagram shows an Amazon SQS queue, the idempotent Lambda function, and a third-party API that the function calls for payment.

Architecture diagram

This is the body of a single payment SQS message:

{
    "orderId": "order1",
    "userId": "user1",
    "amount": "50.25"
}

In this example, the process method is annotated as idempotent, not the handleRequest method. The process method is responsible for processing a single payment record from the SQS batch. It uses the @IdempotencyKey annotation to specify which parameter to use as the idempotency key.

@Override
public List<String> handleRequest(SQSEvent sqsEvent, Context context) {
  return sqsEvent.getRecords()
    .stream()
    .map(record -> process(record.getMessageId(), record.getBody()))
    .collect(Collectors.toList());
}

@Idempotent
private String process(String messageId, @IdempotencyKey String messageBody) {
  logger.info("Processing messageId: {}", messageId);
  PaymentRequest request = extractDataFrom(messageBody).as(PaymentRequest.class);
  return paymentService.process(request);
}

If an SQS record with the same payload is received more than once, the third-party API is not called multiple times. All the subsequent calls return before calling the process method.

If an exception is thrown from the process method, the idempotency feature does not store the idempotency state in the persistence layer. The payment is treated as unprocessed and can be retried safely. This may happen if the third-party API returns a server-side error.

By default, if one message from a batch fails, all the messages in the batch are retried. Lambda Powertools also offers the SQS Batch Processing module which can help in handling partial failures.

You can find the source code of this example in the GitHub repo.

Conclusion

Idempotency is a critical piece of serverless architectures and can be difficult to implement. If not done correctly, it can lead to inconsistent data and other issues. This post shows how you can use Lambda Powertools to make Lambda functions idempotent and ensure that critical transactions are handled only once.

For more details about the Lambda Powertools idempotency feature and its configuration options, refer to the full documentation.

For more serverless learning resources, visit Serverless Land.

Orchestrating high performance computing with AWS Step Functions and AWS Batch

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/orchestrating-high-performance-computing-with-aws-step-functions-and-aws-batch/

This post is written by Dan Fox, Principal Specialist Solutions Architect; Sabha Parameswaran, Senior Solutions Architect.

High performance computing (HPC) workloads address challenges in a wide variety of industries, including genomics, financial services, oil and gas, weather modeling, and semiconductor design. These workloads frequently have complex orchestration requirements that may include large datasets and hundreds or thousands of compute tasks.

AWS Step Functions is a workflow automation service that can simplify orchestrating other AWS services. AWS Batch is a fully managed batch processing service that can dynamically scale to address computationally intensive workloads. Together, these services can orchestrate and run demanding HPC workloads.

This blog post identifies three common challenges when creating HPC workloads. It describes some features with Step Functions and AWS Batch that can help to solve these challenges. It then shows a sample project that performs complex task orchestration using Step Functions and AWS Batch.

Reaching service quotas with polling iterations

It’s common for HPC workflows to require that a step comprising multiple jobs completes before advancing to the next step. In these cases, it’s typical for developers to implement iterative polling patterns to track job completions.

To handle task orchestration for a workload like this, you may choose to use Step Functions. Step Functions has a hard limit of 25,000 events in the execution history. A single state transition may produce multiple history events; for example, an entry to a state and an exit from that state count as two events. A workflow that iteratively polls many long-running processes may run into limits with this service quota.

Step Functions addresses this by providing synchronization capabilities with several integrated services, including AWS Batch. For integrated services, Step Functions can wait for a task to complete before progressing to the next state. This feature, called “Run a Job (.sync)” in the documentation, returns a Success or Failure message only when the compute service task is complete, reducing the number of entries in the event history log. View the Step Functions documentation for a complete list of supported service integrations.

“Run a Job (.sync)” task pattern

Supporting parallel and dynamic tasks

HPC workloads may require a changing number of compute tasks from execution to execution. For example, the number of compute tasks required in a workflow may depend on the complexity or length of an input dataset. For performance reasons, you may desire for these tasks to run in parallel.

Step Functions supports faster data processing with a fixed number of parallel executions through the parallel state type. If a workload has an unknown number of branches, the map state type can run a set of parallel steps for each element of an input array. We refer to this pattern as dynamic parallelism.

Dynamic parallelism using the map state

Limits and flow control with dynamic tasks

The map state may limit concurrent iterations. When this occurs, some iterations do not begin until previous iterations have completed. The likelihood of this occurring increases when an input array has over 40 items. If your HPC workload benefits from increased concurrency, you may use nested workflows. Step Functions allows you to orchestrate more complex processes by composing modular, reusable workflows.

For example, a map state may invoke secondary, nested workflows, which also contain map states. By nesting Step Functions workflows, you can build larger, more complex workflows out of smaller, simpler workflows.

As your workflow grows in complexity, you may use the callback task, which is an additional flow control feature. Callback tasks provide a way to pause a workflow pending the return of a unique task token. A callback task passes this token to an integrated service and then stops. Once the integrated service has completed, it may return the task token to the Step Functions workflow with a SendTaskSuccess or SendTaskFailure call. Once the callback task receives the task token, the workflow proceeds to the next state.

View the documentation for a list of integrated services that support this pattern.
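
For example, a worker invoked with the task token (perhaps a Lambda function triggered when an AWS Batch job completes) can report back to the waiting workflow with a call like the following minimal Python sketch; the event shape and how the token is delivered are assumptions:

import json
import boto3

sfn = boto3.client("stepfunctions")

def lambda_handler(event, context):
    # The callback task passes the token to the worker, for example via the
    # AWS Batch job parameters or the Lambda invocation payload
    task_token = event["taskToken"]
    try:
        result = {"status": "COMPLETE"}  # outcome of the long-running work
        sfn.send_task_success(taskToken=task_token, output=json.dumps(result))
    except Exception as err:
        sfn.send_task_failure(taskToken=task_token, error="JobFailed", cause=str(err))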

Nested workflows using callback with task token

A sample project with several orchestration patterns

This sample project demonstrates orchestration of HPC workloads using Step Functions, AWS Batch, and Lambda. The nesting in this project is three layers deep. The primary state machine runs the second layer nested state machines, which in turn run the third layer nested state machines.

The primary state machine demonstrates dynamic parallelism. It receives an input payload that contains an array of items used as input for its map step. Each dynamic map branch runs a nested secondary state machine.

The secondary state machine demonstrates several workflow patterns including a Lambda function with callback, a third layer nested state machine in sync mode, an AWS Batch job in sync mode, and a Lambda function calling AWS Batch with a callback. The tertiary state machine only notifies its parent with Success when called in sync mode.

Explore the ASL in this project to review the code for these patterns.

Sample project architecture

Deploy the sample application

The README of the GitHub project contains more detailed instructions, but you may also follow these steps.

Prerequisites

  1. AWS Account: If you don’t have an AWS account, navigate to https://aws.amazon.com/ and choose Create an AWS Account.
  2. VPC: A valid existing VPC with subnets (for execution of AWS Batch jobs). Refer to https://docs.aws.amazon.com/vpc/latest/userguide/vpc-getting-started.html for creating a VPC.
  3. AWS CLI: This project requires a local install of AWS CLI. Refer to https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html for installing AWS CLI.
  4. Git CLI: This project requires a local install of Git CLI. Refer to https://git-scm.com/book/en/v2/Getting-Started-Installing-Git
  5. AWS SAM CLI: This project requires a local install of AWS SAM CLI to build and deploy the sample application. Refer to https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html for instructions to install and use AWS SAM CLI.
  6. Python 3.8 and Docker: Required only if you plan for local development and testing with AWS SAM CLI.

Build and deploy in your account

Follow these steps to build this application locally and deploy it in your AWS account:

  1. Git clone the repository into a local folder.
    git clone https://github.com/aws-samples/aws-stepfunction-complex-orchestrator-app
    cd aws-stepfunction-complex-orchestrator-app
    
  2. Build the application using AWS SAM CLI.
    sam build
  3. Use AWS SAM deploy with the --guided flag.
    sam deploy --guided
    1. Parameter BatchScriptName (accept default: batch-notify-step-function.sh)
    2. Parameter VPCID (enter the id of your VPC)
    3. Parameter FargateSubnetAccessibility (choose Private or Public depending on subnet configuration)
    4. Parameter Subnets (enter ID for a single subnet)
    5. Parameter batchSleepDuration (accept default: 15)
    6. Accept all other defaults
  4. Note the name of the BucketName parameter in the Outputs of the deploy process. Save this S3 bucket name for the next step.
  5. Copy the batch script into the S3 bucket created in the prior step.
    aws s3 cp ./batch-script/batch-notify-step-function.sh s3://<S3-bucket-name>

Testing the sample application

Once the SAM CLI deploy is complete, navigate to Step Functions in the AWS Console.

  1. Note the three new state machines deployed to your account. Each state machine has a random suffix generated as part of the AWS SAM deploy process:
    1. ComplexOrchestratorStateMachine1-…
    2. SyncNestedStateMachine2-…
    3. CallbackNotifyStateMachine3-…
  2. Follow the link for the primary state machine: ComplexOrchestratorStateMachine1-…
  3. Choose Start execution.
  4. The sample payload for this state machine is located here: orchestrator-step-function-payload.json. This JSON document has 2 input elements within the entries array. Select and copy this JSON and paste it into the Input field in the console, overwriting the default value. This causes the map state to create and execute two nested state machines in parallel. You may modify this JSON to contain more elements to increase the number of parallel executions.
  5. View the “Execution event history” in the console for this execution. Under the Resource column, locate the links to the nested state machines. Select these links to follow their individual executions. You may also explore the links to Lambda, CloudWatch, and AWS Batch job.
  6. Navigate to AWS Batch in the console. Step Functions workflows are available to AWS Batch users within the AWS Batch console. Find the integration from the AWS Batch side navigation under “Related AWS services”. You can also read this blog post to learn more.
  7. Common troubleshooting. If the batch job fails or if the Step Functions workflow times out, make sure that you have correctly copied over the batch script into the S3 bucket as described in step 5 of the Build and Deploy in your account section of this post. Also make sure that the FargateSubnetAccessibility Parameter matches the configuration of your subnet (Public or Private).
  8. The state machine may take several minutes to complete. When successful, the Graph Inspector displays the completed workflow:
    Graph Inspector with successful completion

Cleaning up

To clean up the deployment, run the following commands to delete the stack associated with the AWS SAM deployment:

aws cloudformation delete-stack --stack-name <stack-name>

Conclusion

This blog post describes several challenges common to orchestrating HPC workloads. I describe how Step Functions with AWS Batch can solve many of these challenges. I provide a project that contains several sample patterns and show how to deploy and test this in your account.

For more information on serverless patterns, visit Serverless Land.

Introducing global endpoints for Amazon EventBridge

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/introducing-global-endpoints-for-amazon-eventbridge/

This post is written by Stephen Liedig, Sr Serverless Specialist SA.

Last year, AWS announced two new features for Amazon EventBridge that allow you to route events from any commercial AWS Region, and across your AWS accounts. This supported a wide range of use cases, allowing you to easily implement global event delivery and replication scenarios.

From today, EventBridge extends this capability with global endpoints. Global endpoints provide a simpler and more reliable way for you to improve the availability and reliability of event-driven applications. The feature allows you to fail over event ingestion automatically to a secondary Region during service disruptions. Global endpoints also provide optional managed event replication, simplifying your event bus configuration and reducing the risk of event loss during any service disruption.

This blog post explains how to configure global endpoints in your AWS account, update your applications to publish events to the endpoint, and how to test endpoint failover.

How global endpoints work

Customers building multi-Region architectures today are building more resilience by using self-managed replication via EventBridge’s cross-Region capabilities. With this architecture, events are sent directly to an event bus in a primary Region and replicated to another event bus in a secondary Region.

Architecture

This event flow can be interrupted if there is a service disruption. In this scenario, event producers in the primary Region cannot PutEvents to their event bus, and event replication to the secondary Region is impacted.

To put more resiliency around multi-Region architectures, you can now use global endpoints. Global endpoints solve these issues by introducing two core service capabilities:

  1. A global endpoint is a managed Amazon Route 53 DNS endpoint. It routes events to the event buses in either Region, depending on the health of the service in the primary Region.
  2. There is a new EventBridge metric called IngestionToInvocationStartLatency. This exposes the time to process events from the point at which they are ingested by EventBridge to the point the first invocation of a target in your rules is made. This is a service-level metric measured across all of your rules and provides an indication of the health of the EventBridge service. Any extended periods of high latency over 30 seconds may indicate a service disruption.

These two features provide you with the ability to failover event ingestion automatically to the event bus in the secondary Region. The failover is triggered via a Route 53 health check that monitors a CloudWatch alarm that observes the IngestionToInvocationStartLatency in the primary Region.

If the metric exceeds the configured threshold of 30 seconds consecutively for 5 minutes, the alarm state changes to “ALARM”. This causes the Route 53 health check state to become unhealthy, and updates the routing of the global endpoint. All events from that point on are delivered to the event bus in the secondary Region.
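
For reference, the following boto3 sketch creates an alarm along the lines of those recommended defaults. The namespace, millisecond unit, and missing-data handling shown here are assumptions; the CloudFormation template that EventBridge generates during setup (described later in this post) is the authoritative configuration.

import boto3

cloudwatch = boto3.client('cloudwatch')

# Assumed namespace and unit: the metric is reported by EventBridge in milliseconds
cloudwatch.put_metric_alarm(
    AlarmName='LatencyFailuresHealthCheck-Alarm',
    Namespace='AWS/Events',
    MetricName='IngestionToInvocationStartLatency',
    Statistic='Average',
    Period=60,                  # evaluate one-minute data points
    EvaluationPeriods=5,        # five consecutive periods = 5 minutes
    Threshold=30000,            # 30 seconds, expressed in milliseconds
    ComparisonOperator='GreaterThanThreshold',
    TreatMissingData='breaching'  # assumption: treat missing data as unhealthy
)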

The diagram below illustrates how global endpoints reroute events from the event bus in the primary Region to the event bus in the secondary Region when the CloudWatch alarm triggers the failover of the Route 53 health check.

Rerouting events with global endpoints

Once events are routed to the secondary Region, you have a couple of options:

  1. Continue processing events by deploying the same solution that processes events in the primary Region to the secondary Region.
  2. Create an EventBridge archive to persist all events coming through the secondary event bus. EventBridge archives provide you with a type of “Active/Archive” architecture allowing you to replay events to the event bus once the primary Region is healthy again.

When the global endpoint alarm returns to a healthy state, the health check updates the endpoint configuration, and begins routing events back to the primary Region.

Global endpoints can be optionally configured to replicate events across Regions. When enabled, managed rules are created on your primary and secondary event buses that define the event bus in the other Region as the rule target.

Under normal operating conditions, events sent to the primary event bus are also sent to the secondary event bus in near-real time to keep both Regions synchronized. When a failover occurs, consumers in the secondary Region have an up-to-date view of processed events, or the ability to replay messages delivered to the secondary Region before the failover happened.

When the secondary Region is active, the replication rule attempts to send events back to the event bus in the primary Region. If the event bus in the primary Region is not available, EventBridge attempts to redeliver the events for up to 24 hours, per its default event retry policy. As this is a managed rule, you cannot change this. To manage for a longer period, you can archive events being ingested by the secondary event bus.
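
As a sketch of that archiving option, you can create an archive on the secondary event bus with a single API call. The archive name, retention period, account ID, and bus ARN below are placeholders.

import boto3

events = boto3.client('events')

# Archive everything ingested by the secondary bus so it can be replayed later
events.create_archive(
    ArchiveName='orders-bus-secondary-archive',
    EventSourceArn='arn:aws:events:us-west-2:123456789012:event-bus/orders-bus',
    RetentionDays=30
)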

How do you identify if the event has been replicated from another Region? Events routed via global endpoints have identical resource fields, which contain the Amazon Resource Name (ARN) of the global endpoint that routed the event to the event bus. The region field shows the origin of the event. In the following example, the event is sent to the event bus in the primary Region and replicated to the event bus in the secondary Region. For the event in the secondary Region, the region field is us-east-1, showing that the source of the event was the event bus in the primary Region.

If there is a failover, events are routed to the secondary Region and replicated to the primary Region. Inspecting these events, you would expect to see us-west-2 as the source Region.

Replicated events

The two preceding events are identical except for the id. Event IDs can change across API calls, so correlating events across Regions requires an immutable, unique identifier. Consumers should also be designed with idempotency in mind. If you are replicating events, or replaying them from archives, this ensures that there are no side effects from duplicate processing.
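
One way to implement this, sketched below under the assumption that the event detail carries an immutable business key such as order_id, is a conditional write to a DynamoDB deduplication table. The table name processed-events and the handle_order helper are hypothetical.

import boto3

table = boto3.resource('dynamodb').Table('processed-events')

def process_once(event_detail):
    # Deduplicate on the immutable business key, not the EventBridge event id
    try:
        table.put_item(
            Item={'pk': event_detail['order_id']},
            ConditionExpression='attribute_not_exists(pk)'
        )
    except table.meta.client.exceptions.ConditionalCheckFailedException:
        return  # already processed in the other Region or by an earlier replay
    handle_order(event_detail)  # placeholder for your business logic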

Setting up a global endpoint

To configure a global endpoint, define two event buses: one in the “primary” Region (the same Region you configure the endpoint in) and one in a “secondary” Region. To ensure that events are routed correctly, the event bus in the secondary Region must have the same name, in the same account, as the primary event bus.

  1. Create two event buses in different Regions with the same name. This is quickly set up using the AWS Command Line Interface (AWS CLI). Primary event bus:
    aws events create-event-bus --name orders-bus --region us-east-1
    Secondary event bus:
    aws events create-event-bus --name orders-bus --region us-west-2
  2. Open the Amazon EventBridge console in the Region where you want to create the global endpoint. This aligns with your primary Region. Navigate to the new global endpoints page and create a new endpoint.
    EventBridge console
  3. In the Endpoint details panel, specify a name for your global endpoint (for example, OrdersGlobalEndpoint) and enter a description.
  4. Select the event bus in the primary Region, orders-bus.
  5. Select the secondary event bus by choosing the Region it was created in from the dropdown. Create global endpoint
  6. Select the Route 53 health check for triggering failover and recovery. If you have not created one before, choose New health check. This opens an AWS CloudFormation console to create the “LatencyFailuresHealthCheck” health check and CloudWatch alarm in your account. EventBridge provides a template with recommended defaults for a CloudWatch alarm that is triggered when the average latency exceeds 30 seconds for 5 minutes. Endpoint configuration
  7. Once the CloudFormation stack is deployed, return to the EventBridge console and refresh the dropdown list of health checks. Select the physical ID of the health check you created. Failover and recovery
  8. Ensure that event replication is enabled, and create the endpoint.
    Event replication
  9. Once the endpoint is created, it appears in the console. The global endpoint URL contains the EndpointId, which you must specify in PutEvents API calls to publish events to the endpoint.
    Endpoint listed in console

Testing failover

Once you have created an endpoint, you can test the configuration by creating “catch all” rules on the primary and secondary event buses. The simplest way to see events being processed is to create rules with a CloudWatch log group as the target, as sketched below.
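
If you script this instead of using the console, the API calls might look like the following sketch. The account ID, log group ARN, and rule name are placeholders, and you may also need to allow EventBridge to write to the log group via a log group resource policy (the console configures this for you).

import json
import boto3

events = boto3.client('events')

# Match every event from this account on the custom bus ("catch all")
events.put_rule(
    Name='catch-all',
    EventBusName='orders-bus',
    EventPattern=json.dumps({'account': ['123456789012']})
)

# Deliver matched events to a CloudWatch Logs log group
events.put_targets(
    Rule='catch-all',
    EventBusName='orders-bus',
    Targets=[{
        'Id': 'log-group-target',
        'Arn': 'arn:aws:logs:us-east-1:123456789012:log-group:/aws/events/orders-bus'
    }]
)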

Test global endpoint failover by inverting the Route 53 health check. This can be accomplished using any of the Route 53 APIs. Using the console, open the Route 53 health checks landing page and edit the “LatencyFailuresHealthCheck” associated with your global endpoint. Check “Invert health check status” and save to update the health check.

Within a few minutes, the health check changes state from “Healthy” to “Unhealthy” and you see events flowing to the event bus in the secondary Region.

Configure health check

Using the PutEvents API with global endpoints

To use global endpoints in your applications, you must update your current PutEvents API call. All AWS SDKs have been updated to include an optional EndpointId parameter that you must set when publishing events to a global endpoint. Even though you are no longer putting events directly on the event bus, the EventBusName must be defined to validate the endpoint configuration.

PutEvents SDK support for global endpoints requires the AWS Common Runtime (CRT) library, which is available for multiple programming languages, including Python, Node.js, and Java:

https://github.com/awslabs/aws-crt-python
https://github.com/awslabs/aws-crt-nodejs
https://github.com/awslabs/aws-crt-java

To install the awscrt module for Python using pip, run:

python3 -m pip install boto3 awscrt

This example shows how to send an event to a global endpoint using the Python SDK:

import json
import boto3
from datetime import datetime
import uuid
import random

# The awscrt package must be installed so the SDK can sign requests to a global endpoint
client = boto3.client('events')

detail = {
    "order_date": datetime.now().isoformat(),
    "customer_id": str(uuid.uuid4()),
    "order_id": str(uuid.uuid4()),
    "order_total": round(random.uniform(1.0, 1000.0), 2)
}

put_response = client.put_events(
    EndpointId=" y6gho8g4kc.veo",
    Entries=[
        {
            'Source': 'com.aws.Orders',
            'DetailType': 'OrderCreated',
            'Detail': json.dumps(detail),
            'EventBusName': 'orders-bus'
        }
    ]
)

Event producers can suffer data loss if the PutEvents API call fails, even if you are using global endpoints. Global endpoints allow you to automate the rerouting of events to another event bus in another Region, but the health checks triggering the failover won’t be invoked for at least 5 minutes. It’s possible that your applications experience increased error rates for PutEvents operations before the failover occurs and events are routed to a healthy Region. To safeguard against message loss during this time, it’s best practice to use exponential retry and back-off patterns and a durable store-and-forward capability at the producer level, as sketched below.
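
The following is a minimal sketch of that pattern. The fallback queue URL is a placeholder, and SQS is only one of several possible durable stores.

import json
import random
import time
import boto3

events = boto3.client('events')
sqs = boto3.client('sqs')

FALLBACK_QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/failed-events'  # placeholder

def publish_with_retry(entries, endpoint_id, max_attempts=5):
    # Retry PutEvents with exponential backoff and jitter
    for attempt in range(1, max_attempts + 1):
        try:
            response = events.put_events(EndpointId=endpoint_id, Entries=entries)
            if response.get('FailedEntryCount', 0) == 0:
                return response
        except Exception as error:  # for example, throttling, timeouts, or endpoint errors
            print(f'PutEvents attempt {attempt} failed: {error}')
        time.sleep(min(2 ** attempt + random.random(), 30))
    # Store-and-forward: park the events durably so they can be replayed later
    sqs.send_message(QueueUrl=FALLBACK_QUEUE_URL, MessageBody=json.dumps(entries))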

Conclusion

This blog shows how to create an EventBridge global endpoint to improve the availability and reliability of event ingestion for event-driven applications. The example shows how to use the PutEvents API in the Python AWS SDK to publish events to a global endpoint.

To create a global endpoint using the API, see CreateEndpoint in the Amazon EventBridge API Reference. You can also create a global endpoint with AWS CloudFormation by using an AWS::Events::Endpoint resource.

To learn more about EventBridge global endpoints, see the EventBridge Developer Guide. For more serverless learning resources, visit Serverless Land.

ICYMI: Serverless Q1 2022

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/icymi-serverless-q1-2022/

Welcome to the 16th edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

Calendar

In case you missed our last ICYMI, check out what happened last quarter here.

AWS Lambda

Lambda now offers larger ephemeral storage for functions, up to 10 GB. Previously, the storage was set to 512 MB. There are several common use-cases that can benefit from expanded temporary storage, including extract-transform load (ETL) jobs, machine learning inference, and data processing workloads. To see how to configure the amount of /tmp storage in AWS SAM, deploy this Serverless Land Pattern.

Ephemeral storage settings

For Node.js developers, Lambda now supports ES Modules and top-level await for Node.js 14. This enables developers to use a wider range of JavaScript packages in functions. When used with Provisioned Concurrency, top-level await can improve cold-start performance for functions that use asynchronous initialization.

For .NET developers, Lambda now supports .NET 6 as both a managed runtime and container base image. You can now use new features of the runtime such as improved logging, simplified function definitions using top-level statements, and improved performance using source generators.

The Lambda console now allows you to share test events with other developers in your team, using granular IAM permissions. Previously, test events were only visible to the builder who created them. To learn about creating sharable test events, read this documentation.

Amazon EventBridge

Amazon EventBridge Schema Registry helps you create code bindings from event schemas for use directly in your preferred IDE. You can generate these code bindings for a schema by using the EventBridge console, APIs, or AWS SDK toolkits for Jetbrains (Intellij, PyCharm, Webstorm, Rider) and VS Code. This feature now supports Go, in addition to Java, Python, and TypeScript, and is available at no additional cost.

AWS Step Functions

Developers can test state machines locally using Step Functions Local, and the service recently announced mocked service integrations for local testing. This allows you to define sample output from AWS service integrations and combine them into test cases to validate workflow control. This new feature introduces a robust way to test state machines in isolation.

Amazon DynamoDB

Amazon DynamoDB now supports limiting the number of items processed in a PartiQL operation, using an optional parameter on each request, as shown in the sketch below. The service also increased default Service Quotas, which can help simplify the use of large numbers of tables. The per-account, per-Region quota increased from 256 to 2,500 tables.
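
The following is an illustrative sketch against a hypothetical orders table, processing at most 10 items per request and paging with the returned NextToken.

import boto3

dynamodb = boto3.client('dynamodb')

response = dynamodb.execute_statement(
    Statement='SELECT * FROM "orders" WHERE "status" = ?',
    Parameters=[{'S': 'PENDING'}],
    Limit=10  # process at most 10 items in this request
)
items = response['Items']
next_token = response.get('NextToken')  # pass back in to continue where this request stopped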

AWS AppSync

AWS AppSync added support for custom response headers, allowing you to define additional headers to send to clients in response to an API call. You can now use the new resolver utility $util.http.addResponseHeaders() to configure additional headers in the response for a GraphQL API operation.

Serverless blog posts

January

Jan 6 – Using Node.js ES modules and top-level await in AWS Lambda

Jan 6 – Validating addresses with AWS Lambda and the Amazon Location Service

Jan 20 – Introducing AWS Lambda batching controls for message broker services

Jan 24 – Migrating AWS Lambda functions to Arm-based AWS Graviton2 processors

Jan 31 – Using the circuit breaker pattern with AWS Step Functions and Amazon DynamoDB

Jan 31 – Mocking service integrations with AWS Step Functions Local

February

Feb 8 – Capturing client events using Amazon API Gateway and Amazon EventBridge

Feb 10 – Introducing AWS Virtual Waiting Room

Feb 14 – Building custom connectors using the Amazon AppFlow Custom Connector SDK

Feb 22 – Building TypeScript projects with AWS SAM CLI

Feb 24 – Introducing the .NET 6 runtime for AWS Lambda

March

Mar 6 – Migrating a monolithic .NET REST API to AWS Lambda

Mar 7 – Decoding protobuf messages using AWS Lambda

Mar 8 – Building a serverless image catalog with AWS Step Functions Workflow Studio

Mar 9 – Composing AWS Step Functions to abstract polling of asynchronous services

Mar 10 – Building serverless multi-Region WebSocket APIs

Mar 15 – Using organization IDs as principals in Lambda resource policies

Mar 16 – Implementing mutual TLS for Java-based AWS Lambda functions

Mar 21 – Running cross-account workflows with AWS Step Functions and Amazon API Gateway

Mar 22 – Sending events to Amazon EventBridge from AWS Organizations accounts

Mar 23 – Choosing the right solution for AWS Lambda external parameters

Mar 28 – Using larger ephemeral storage for AWS Lambda

Mar 29 – Using AWS Step Functions and Amazon DynamoDB for business rules orchestration

Mar 31 – Optimizing AWS Lambda function performance for Java

First anniversary of Serverless Land Patterns

Serverless Patterns Collection

The DA team launched the Serverless Patterns Collection in March 2021 as a repository of serverless examples that demonstrate integrating two or more AWS services. Each pattern uses an infrastructure as code (IaC) framework to automate the deployment. These can simplify the creation and configuration of the services used in your applications.

The Serverless Patterns Collection is both an educational resource to help developers understand how to join different services, and an aid for developers that are getting started with building serverless applications.

The collection has just celebrated its first anniversary. It now contains 239 patterns for CDK, AWS SAM, Serverless Framework, and Terraform, covering 30 AWS services. We have expanded example runtimes to include .NET, Java, Rust, Python, Node.js and TypeScript. We’ve served tens of thousands of developers in the first year and we’re just getting started.

Many thanks to our contributors and community. You can also contribute your own patterns.

Videos

YouTube: youtube.com/serverlessland

Serverless Office Hours – Tues 10 AM PT

Weekly live virtual office hours. In each session we talk about a specific topic or technology related to serverless and open it up to helping you with your real serverless challenges and issues. Ask us anything you want about serverless technologies and applications.

YouTube: youtube.com/serverlessland
Twitch: twitch.tv/aws

January

February

March

FooBar Serverless YouTube channel

The Developer Advocate team is delighted to welcome Marcia Villalba onboard. Marcia was an AWS Serverless Hero before joining AWS over two years ago, and she has created one of the most popular serverless YouTube channels. You can view all of Marcia’s videos at https://www.youtube.com/c/FooBar_codes.

January

February

March

AWS Summits

AWS Global Summits are free events that bring the cloud computing community together to connect, collaborate, and learn about AWS. This year, we have restarted in-person Summits at major cities around the world.

The next 4 Summits planned are Paris (April 12), San Francisco (April 20-21), London (April 27), and Madrid (May 4-5). To find and register for your nearest AWS Summit, visit the AWS Summits homepage.

Still looking for more?

The Serverless landing page has more information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials.

You can also follow the Serverless Developer Advocacy team on Twitter to see the latest news, follow conversations, and interact with the team.

Using AWS Step Functions and Amazon DynamoDB for business rules orchestration

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/using-aws-step-functions-and-amazon-dynamodb-for-business-rules-orchestration/

This post is written by Vijaykumar Pannirselvam, Cloud Consultant, Sushant Patil, Cloud Consultant, and Kishore Dhamodaran, Senior Solution Architect.

A Business Rules Engine (BRE) is used in enterprises to manage business-critical decisions. The logic or rules used to make such decisions can vary in complexity. A finance department may have a basic rule requiring director approval for any purchase over a certain dollar amount. A mortgage company may need to run complex rules based on inputs (for example, credit score, debt-to-income ratio, down payment) to make an approval decision for a loan.

Decoupling these rules from application logic provides agility to your rules management, since business rules may often change while your application may not. It can also provide standardization across your enterprise, so every department can communicate with the same taxonomy.

As part of migrating their workloads, some enterprises consider replacing their commercial rules engine with cloud native and open-source alternatives. The motivation for such a move stems from several factors, such as simplifying the architecture, cost, security considerations, or vendor support.

Many of these commercial rules engines come as part of a BPMS offering that provides orchestration capabilities for rules execution. For a successful migration to the cloud using an open-source rules engine management system, you need an orchestration capability to manage incoming rule requests, audit the rules, and track exceptions.

This post showcases an orchestration framework that allows you to use an open-source rules engine. It uses the Drools rules engine to build a set of rules for calculating insurance premiums based on the properties of Car and Driver objects. The solution uses AWS Step Functions, AWS Lambda, Amazon API Gateway, Amazon DynamoDB, and the open-source Drools rules engine. You can swap the rules engine, provided you can manage it in the AWS Cloud environment and expose it as an API.

Solution overview

The following diagram shows the solution architecture.

Solution architecture

The solution comprises:

  1. API Gateway – a fully managed service that makes it easier to create, publish, maintain, monitor, and secure APIs at any scale for API consumers. API Gateway helps you manage traffic to backend systems, in this case Step Functions, which orchestrates the execution of tasks. For the REST API use-case, you can also set up a cache with customizable keys and time-to-live in seconds for your API data to avoid hitting your backend services for each request.
  2. Step Functions – a low-code service to orchestrate the multiple steps involved in accomplishing tasks. Step Functions uses the finite-state machine (FSM) model, which uses given states and transitions to complete the tasks. The diagram depicts three states: Audit Request, Execute Ruleset, and Audit Response, which are executed sequentially. You can add additional states and transitions, such as validating incoming payloads, and branching out parallel execution of the states.
  3. Drools rules engine Spring Boot application – runtime component of the rule execution. You set the Drools rule engine Spring Boot application as an Apache Maven Docker project with Drools Maven dependencies. You then deploy the Drools rule engine Docker image to an Amazon Elastic Container Registry (Amazon ECR), create an AWS Fargate cluster, and an Amazon Elastic Container Service (Amazon ECS) service. The service launches Amazon ECS tasks and maintains the desired count. An Application Load Balancer distributes the traffic evenly to all running containers.
  4. Lambda – a serverless execution environment giving you an ability to interact with the Drools Engine and a persistence layer for rule execution audit functions. The Lambda component provides the audit function required to persist the incoming requests and outgoing responses in DynamoDB. Apart from the audit function, Lambda is also used to invoke the service exposed by the Drools Spring Boot application.
  5. DynamoDB – a fully managed and highly scalable key/value store, used to persist the rule execution information, such as the request and response payloads. DynamoDB provides the persistence layer for the incoming request JSON payload and for the outgoing response JSON payload. The audit Lambda function invokes the DynamoDB put_item() method when it receives the request or response event from Step Functions. The DynamoDB table rule_execution_audit has an entry for every request and response associated with the incoming request-id originating from the upstream application. A minimal sketch of this audit function follows this list.
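
The sketch below illustrates the audit function's put_item call. The attribute names, environment variable, and pass-through return value are assumptions for illustration; the implementation in the sample repository may differ.

import json
import os
import uuid
from datetime import datetime, timezone

import boto3

table = boto3.resource('dynamodb').Table(os.environ.get('AUDIT_TABLE', 'rule_execution_audit'))

def handler(event, context):
    # Persist the request or response payload passed in by Step Functions
    table.put_item(Item={
        'audit_id': str(uuid.uuid4()),
        'request_id': event.get('context', {}).get('request_id', 'UNKNOWN'),
        'recorded_at': datetime.now(timezone.utc).isoformat(),
        'payload': json.dumps(event)
    })
    return event  # pass the payload through to the next state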

Drools rules engine implementation

The Drools rules engine separates the business rules from the business processes. You use DRL (Drools Rule Language) by defining business rules as .drl text files. You define model objects to build the rules.

The model objects are POJOs (Plain Old Java Objects) defined using Eclipse, with the Drools plugin installed. You should have some level of knowledge about building rules and executing them using the Drools rules engine. The diagram below describes the functions of this component.

Drools process

You define the following rules in the .drl file as part of the GitHub repo. The purpose of these rules is to evaluate the driver premium based on the model objects provided as input. The inputs are Car and Driver objects and the output is the Policy object, which has the premium calculated based on the criteria defined in the rules:

rule "High Risk"
     when     
         $car : Car(style == "SPORTS", color == "RED") 
         $policy : Policy() 
         and $driver : Driver ( age < 21 )                             
     then
         System.out.println(drools.getRule().getName() +": rule fired");          
         modify ($policy) { setPremium(increasePremiumRate($policy, 20)) };
 end
 
 rule "Med Risk"
     when     
         $car : Car(style == "SPORTS", color == "RED") 
         $policy : Policy() 
         and $driver : Driver ( age > 21 )                             
     then
         System.out.println(drools.getRule().getName() +": rule fired");          
         modify ($policy) { setPremium(increasePremiumRate($policy, 10)) };
 end
 
 
 function double increasePremiumRate(Policy pol, double percentage) {
     return (pol.getPremium() + pol.getPremium() * percentage / 100);
 }
 

Once the rules are defined, you define a RestController that takes input parameters and evaluates the above rules. The code snippet below is a POST method defined in the controller, which handles the request and sends the response to the caller.

@PostMapping(value ="/policy/premium", consumes = {MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE }, produces = {MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE})
    public ResponseEntity<Policy> getPremium(@RequestBody InsuranceRequest requestObj) {
        
        System.out.println("handling request...");
        
        Car carObj = requestObj.getCar();        
        Car carObj1 = new Car(carObj.getMake(),carObj.getModel(),carObj.getYear(), carObj.getStyle(), carObj.getColor());
        System.out.println("###########CAR##########");
        System.out.println(carObj1.toString());
        
        System.out.println("###########POLICY##########");        
        Policy policyObj = requestObj.getPolicy();
        Policy policyObj1 = new Policy(policyObj.getId(), policyObj.getPremium());
        System.out.println(policyObj1.toString());
            
        System.out.println("###########DRIVER##########");    
        Driver driverObj = requestObj.getDriver();
        Driver driverObj1 = new Driver( driverObj.getAge(), driverObj.getName());
        System.out.println(driverObj1.toString());
        
        KieSession kieSession = kieContainer.newKieSession();
        kieSession.insert(carObj1);      
        kieSession.insert(policyObj1); 
        kieSession.insert(driverObj1);         
        kieSession.fireAllRules(); 
        printFactsMessage(kieSession);
        kieSession.dispose();
    
        
        return ResponseEntity.ok(policyObj1);
    }    

Prerequisites

Solution walkthrough

  1. Clone the project GitHub repository to your local machine, do a Maven build, and create a Docker image. The project contains Drools related folders needed to build the Java application.
    git clone https://github.com/aws-samples/aws-step-functions-business-rules-orchestration
    cd drools-spring-boot
    mvn clean install
    mvn docker:build
    
  2. Create an Amazon ECR private repository to host your Docker image.
    aws ecr create-repository --repository-name drools_private_repo --image-tag-mutability MUTABLE --image-scanning-configuration scanOnPush=false
  3. Tag the Docker image and push it to the Amazon ECR repository.
    aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <<INSERT ACCOUNT NUMBER>>.dkr.ecr.us-east-1.amazonaws.com
    docker tag drools-rule-app:latest <<INSERT ACCOUNT NUMBER>>.dkr.ecr.us-east-1.amazonaws.com/drools_private_repo:latest
    docker push <<INSERT ACCOUNT NUMBER>>.dkr.ecr.us-east-1.amazonaws.com/drools_private_repo:latest
    
  4. Deploy resources using AWS SAM:
    cd ..
    sam build
    sam deploy --guided

    SAM deployment output

Verifying the deployment

Verify the business rules execution and the orchestration components:

  1. Navigate to the API Gateway console, and choose the rules-stack API.
    API Gateway console
  2. Under Resources, choose POST, followed by TEST.
    Resource configuration
  3. Enter the following JSON under the Request Body section, and choose Test.

    {
      "context": {
        "request_id": "REQ-99999",
        "timestamp": "2021-03-17 03:31:51:40"
      },
      "request": {
        "driver": {
          "age": "18",
          "name": "Brian"
        },
        "car": {
          "make": "honda",
          "model": "civic",
          "year": "2015",
          "style": "SPORTS",
          "color": "RED"
        },
        "policy": {
          "id": "1231231",
          "premium": "300"
        }
      }
    }
    
  4. The response received shows results from the evaluation of the business rule “High Risk“, with the premium representing the percentage calculation in the rule definition. Try changing the request input to evaluate a “Medium Risk” rule by modifying the age of the driver to 22 or higher:
    Sample response
  5. Optionally, you can verify the API using Postman. Get the endpoint information by navigating to the rule-stack API, followed by Stages in the navigation pane, then choosing either Dev or Stage.
  6. Enter the payload in the request body and choose Send:
    Postman UI
  7. The response received is results from the evaluation of business rule “High Risk“, with the premium representing the percentage calculation in the rule definition. Try changing the request input to evaluate a “Medium Risk” rule by modifying the age of the driver to 22 or higher.
    Body JSON
  8. Observe the request and response audit logs. Navigate to the DynamoDB console. Under the navigation pane, choose Tables, then choose rule_execution_audit.
    DynamoDB console
  9. Under the Tables section in the navigation pane, choose Explore Items. Observe the individual audit logs by choosing the audit_id.
    Table audit item

Cleaning up

To avoid incurring ongoing charges, clean up the infrastructure by deleting the stack using the following command:

sam delete

SAM confirmations

Delete the Amazon ECR repository, and any other resources you created as a prerequisite for this exercise.

Conclusion

In this post, you learned how to leverage an orchestration framework using Step Functions, Lambda, DynamoDB, and API Gateway to build an API backed by an open-source Drools rules engine, running on a container. Try this solution for your cloud native business rules orchestration use-case.

For more serverless learning resources, visit Serverless Land.

Using larger ephemeral storage for AWS Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/using-larger-ephemeral-storage-for-aws-lambda/

AWS Lambda functions have always had ephemeral storage available at /tmp in the file system. This was set at 512 MB for every function, regardless of runtime or memory configuration. With this new feature, you can now configure ephemeral storage for up to 10 GB per function instance.

You can set this in the AWS Management Console, AWS CLI, AWS SDKs, AWS Serverless Application Model (AWS SAM), AWS Cloud Development Kit (AWS CDK), AWS Lambda API, and AWS CloudFormation. This blog post explains how this works and how to use this new setting in your Lambda functions.

How ephemeral storage works in Lambda

All functions have ephemeral storage available at the fixed file system location /tmp. This provides a fast file system-based scratch area that is scoped to a specific instance of a Lambda function. This storage is not shared between instances of Lambda functions and the space is guaranteed to be empty when a new instance starts.

This means that you can use the same execution environment to cache static assets in /tmp between invocations. This is a common use case that can help reduce function duration for subsequent invocations. The contents are deleted when the Lambda service eventually terminates the execution environment.
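
For example, a function can check /tmp before downloading an asset so that warm invocations skip the download entirely. This is an illustrative sketch; the bucket and key names are placeholders.

import os
import boto3

s3 = boto3.client('s3')
CACHED_ASSET = '/tmp/model.bin'

def handler(event, context):
    # Download only on cold start; warm invocations reuse the cached file
    if not os.path.exists(CACHED_ASSET):
        s3.download_file('my-assets-bucket', 'models/model.bin', CACHED_ASSET)
    return {'cached_bytes': os.path.getsize(CACHED_ASSET)}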

With this new configurable setting, ephemeral storage works in the same way. The behavior is identical whether you use zip or container images to deploy your functions. It’s also available for Provisioned Concurrency. All data stored in /tmp is encrypted at rest with a key managed by AWS.

Common use cases for ephemeral storage

There are several common customer use cases that can benefit from the expanded ephemeral storage.

Extract-transform-load (ETL) jobs: Your code may perform intermediate computation or download other resources to complete processing. More temporary space enables more complex ETL jobs to run in Lambda functions.

Machine learning (ML) inference: Many inference tasks rely on large reference data files, including libraries and models. More ephemeral storage allows you to download larger models from Amazon S3 to /tmp and use these in your processing. To learn more about using Lambda for ML inference, read Building deep learning inference with AWS Lambda and Amazon EFS and Pay as you go machine learning inference with AWS Lambda.

Data processing: For workloads that download objects from S3 in response to S3 events, the larger /tmp space makes it possible to handle larger objects without using in-memory processing. Workloads that create PDFs, use headless Chromium, or process media also benefit from more ephemeral storage.

Zip processing: Some workloads use large zip files from data providers to initialize local databases. These can now unzip to the local file system without the need for in-memory processing. Similarly, applications that generate zip files also benefit from more /tmp space.

Graphics processing: Image processing is a common use-case for Lambda-based applications. For workloads processing large tiff files or satellite images, this makes it easier to use libraries like ImageMagick to perform all the computation in Lambda. Customers using geospatial libraries also gain significant flexibility from writing large satellite images to /tmp.

Deploying the example application

The example application shows how to resize an MP4 file from Amazon S3, using the temporary space for intermediate processing. In this example, you can process video files much larger than the standard 512 MB temporary storage:

Example application architecture

Before deploying the example, you need:

This example uses the AWS Serverless Application Model (AWS SAM). To deploy:

  1. From a terminal window, clone the GitHub repo:
    git clone https://github.com/aws-samples/s3-to-lambda-patterns
  2. Change directory to this example:
    cd ./resize-video
  3. Follow the installation instructions in the README file.

To test the application, upload an MP4 file into the source S3 bucket. After processing, the destination bucket contains the resized video file.

How the example works

The resize function downloads the original video from S3 and saves the result in Lambda’s temporary storage directory:

	// Get signed URL for source object
	const Key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '))

	const data = await s3.getObject({
		Bucket: record.s3.bucket.name,
		Key
	}).promise()

	// Save original to tmp directory
	const tempFile = `${ffTmp}/${Key}`
	console.log('Saving downloaded file to ', tempFile)
	fs.writeFileSync(tempFile, data.Body)

The application uses FFmpeg to resize the video and store the output in the temporary storage space:

// Save resized video to /tmp
	const outputFilename = `${Key.split('.')[0]}-smaller.mp4`
	console.log(`Resizing and saving to ${outputFilename}`)
	await execPromise(`${ffmpegPath} -i "${tempFile}" -loglevel error -vf scale=160:-1 -sws_flags fast_bilinear ${ffTmp}/${outputFilename}`)

After processing, the function reads the file from the temporary directory and then uploads to the destination bucket in S3:

	const tmpData = fs.readFileSync(`${ffTmp}/${outputFilename}`)
	console.log(`tmpData size: ${tmpData.length}`)

	// Upload to S3
	console.log(`Uploading ${outputFilename} to ${process.env.OutputBucketName}`)
	await s3.putObject({
		Bucket: process.env.OutputBucketName,
		Key: outputFilename,
		Body: tmpData
	}).promise()
	console.log(`Object written to ${process.env.OutputBucketName}`)

Since temporary storage is not deleted between warm Lambda invocations, you may also choose to remove unneeded files. This example uses a tmpCleanup function to delete the contents of /tmp:

const fs = require('fs')
const path = require('path')
const directory = '/tmp/'

// Deletes all files in a directory
const tmpCleanup = async () => {
	console.log('Starting tmpCleanup')
	return new Promise((resolve, reject) => {
		fs.readdir(directory, (err, files) => {
			if (err) return reject(err)

			console.log('Deleting: ', files)
			for (const file of files) {
				const fullPath = path.join(directory, file)
				// Remove each file; reject on the first unlink error
				fs.unlink(fullPath, err => {
					if (err) return reject(err)
				})
			}
			resolve()
		})
	})
}

Setting ephemeral storage with the AWS Management Console or AWS CLI

In the Lambda console, you can view the ephemeral storage allocated to a function under General configuration in the Configuration tab:

Lambda function configuration

To make changes to this setting, choose Edit. In the Edit basic settings page, adjust the Ephemeral Storage to any value between 512 MB and 10240 MB. Choose Save to update the function’s settings.

Basic settings

You can also define the ephemeral storage setting in the create-function and update-function-configuration CLI commands. In both cases, use the ephemeral-storage switch to set the value:

aws lambda create-function --function-name testFunction --runtime python3.9 --handler lambda_function.lambda_handler --code S3Bucket=myBucket,S3Key=function.zip --role arn:aws:iam::123456789012:role/testFunctionRole --ephemeral-storage '{"Size": 10240}' 

To modify this setting for testFunction, run:

aws lambda update-function-configuration --function-name testFunction --ephemeral-storage '{"Size": 5000}'

Setting ephemeral storage with AWS CloudFormation or AWS SAM

You can define the size of ephemeral storage in both AWS CloudFormation and AWS SAM templates by using the new EphemeralStorage attribute, as shown in the example’s template.yaml:

  ResizeFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: resizeFunction/
      Handler: app.handler
      Runtime: nodejs14.x
      Timeout: 900
      MemorySize: 10240
      EphemeralStorage:
        Size: 10240

You define this on a per-function basis. If the attribute is missing, the function is allocated 512 MB of temporary storage.

Using Lambda Insights to monitor temporary storage usage

You can use Lambda Insights to query the metrics emitted by the Lambda function relating to temporary storage usage. First, enable Lambda Insights on a function by following these steps in the documentation.

After running the function, the Lambda service writes ephemeral storage metrics to Amazon CloudWatch Logs. With Lambda Insights enabled, you can now query these from the CloudWatch console. From the Logs Insights feature, you can query to determine the maximum, used, and free space:

fields @timestamp,
tmp_max/(1024*1024),
tmp_used/(1024*1024),
tmp_free/(1024*1024)

Calculating the cost of more temporary storage

Ephemeral storage is free up to 512 MB, as it always has been. You are charged for the amount you select between 512 MB and 10,240 MB. For example, if you select 1,024 MB, you only pay for 512 MB. Expanded ephemeral storage costs $0.0000000308 per GB/second in the us-east-1 Region (see the pricing page for other Regions).

In us-east-1, for a workload invoking a Lambda function 500,000 times with a 10 second duration, using the maximum temporary storage, the cost is $0.63:

Invocations: 500,000
Duration (ms): 10,000
Ephemeral storage (over 512 MB): 9,728
Storage price per GB/s: $0.0000000308
GB/s total: 20,480,000
Price of storage: $0.63

Choosing between ephemeral storage and Amazon EFS

Generally, ephemeral storage is designed for intermediary processing of a function. You can download reference data, machine learning models, or database metadata from other sources such as Amazon S3, and store these in /tmp for further processing. Ephemeral storage can provide a cache for data for repeat usage across invocations and offers fast I/O throughput.

Alternatively, EFS is primarily intended for customers that need to:

  • Share data or state across function invocations.
  • Process files larger than the 10,240 MB storage allows.
  • Use file-system type functionality, such as appending to or modifying files.

Conclusion

Serverless developers can now configure the amount of temporary storage available in AWS Lambda functions. This blog post discusses common use cases and walks through an example application that uses larger temporary storage. It also shows how to configure this in CloudFormation and AWS SAM and explains the cost if you use more than the free, provisioned 512 MB that’s automatically provisioned for every function.

For more serverless learning resources, visit Serverless Land.

Sending events to Amazon EventBridge from AWS Organizations accounts

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/sending-events-to-amazon-eventbridge-from-aws-organizations-accounts/

This post is written by Elinisa Canameti, Associate Cloud Architect, and Iris Kraja, Associate Cloud Architect.

AWS Organizations provides a hierarchical grouping of multiple accounts. This helps larger projects establish a central managing system that governs the infrastructure. This can help with meeting budgetary and security requirements.

Amazon EventBridge is a serverless event-driven service that delivers events from custom applications, AWS services, or software as a service (SaaS) applications to targets.

This post shows how to send events from multiple accounts in AWS Organizations to the management account using AWS CDK. This uses AWS CloudFormation StackSets to deploy the infrastructure in the member’s accounts.

Solution overview

This blog post describes a centralized event management strategy in a multi-account setup. The AWS accounts are organized by using the AWS Organizations service into one management account and many member accounts. You can explore and deploy the reference solution from this GitHub repository.

In the example, there are events from three different AWS services: Amazon CloudWatch, AWS Config, and Amazon GuardDuty. These are sent from the member accounts to the management account.

The management account publishes to an Amazon SNS topic, which sends emails when these events occur. AWS CloudFormation StackSets are created in the management account to deploy the infrastructure in the member’s accounts.

Overview

To fully automate the process of adding and removing accounts to the organization unit, an EventBridge rule is triggered:

  • When an account is moved into or out of the organization unit (MoveAccount event).
  • When an account is removed from the organization (RemoveAccountFromOrganization event).

These events invoke an AWS Lambda function, which updates the management EventBridge rule with the additional source accounts.

Prerequisites

  1. At least one AWS account, which represents the member’s account.
  2. An AWS account, which is the management account.
  3. Install the AWS Command Line Interface (AWS CLI).
  4. Install AWS CDK.

Set up the environment with AWS Organizations

1. Login into the main account used to manage your organization.

2. Select the AWS Organizations service. Choose Create Organization. Once you create the organization, you receive a verification email.

3. After verification is completed, you can add other customer accounts into the organization.

4. AWS sends an invitation to each account added under the root account.

5. To see the invitation, log in to each of the accounts and search for AWS Organization service. You can find the invitation option listed on the side.

6. Accept the invitation from the root account.

7. Create an Organization Unit (OU) and place in this OU all the member accounts that should propagate events to the management account. The OU identifier is used later to deploy the StackSet.

8. Finally, enable trusted access in the organization to be able to create StackSets and deploy resources from the management account to the member accounts.

Management account

After the initial deployment of the solution in the management account, a StackSet is created. It’s configured to deploy the member account infrastructure in all the members of the organization unit.

When you add a new account to the organization unit, this StackSet automatically deploys the specified resources. Read the Member account section for more information about resources in this stack.

All events coming from the member account pass through the custom event bus in the management account. To allow other accounts to put events in the management account, the resource policy of the event bus grants permission to every account in the organization:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowAllAccountsInOrganizationToPutEvents",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "events:PutEvents",
    "Resource": "arn:aws:events:us-east-1:xxxxxxxxxxxx:event-bus/CentralEventBus ",
    "Condition": {
      "StringEquals": {
        "aws:PrincipalOrgID": "o-xxxxxxxx"
      }
    }
  }]
}

You must configure EventBridge in the management account to handle the events coming from the member accounts. In this example, you send an email with the event information using SNS. Create a new rule with the following event pattern:

Event pattern

When you add or remove accounts in the organization unit, an EventBridge rule invokes a Lambda function to update the rule in the management account. This rule reacts to two events from the organization’s source: MoveAccount and RemoveAccountFromOrganization.

Event pattern

// EventBridge to trigger updateRuleFunction Lambda whenever a new account is added or removed from the organization.
new Rule(this, 'AWSOrganizationAccountMemberChangesRule', {
   ruleName: 'AWSOrganizationAccountMemberChangesRule',
   eventPattern: {
      source: ['aws.organizations'],
      detailType: ['AWS API Call via CloudTrail'],
      detail: {
        eventSource: ['organizations.amazonaws.com'],
        eventName: [
          'RemoveAccountFromOrganization',
          'MoveAccount'
        ]
      }
    },
    targets: [
      new LambdaFunction(updateRuleFunction)
    ]
});

Custom resource Lambda function

The custom resource Lambda function is executed to run custom logic whenever the CloudFormation stack is created, updated or deleted.

// Cloudformation Custom resource event
switch (event.RequestType) {
  case "Create":
    await putRule(accountIds);
    break;
  case "Update":
    await putRule(accountIds);
    break;
  case "Delete":
    await deleteRule()
}

async function putRule(accounts) {
  await eventBridgeClient.putRule({
    Name: rule.name,
    Description: rule.description,
    EventBusName: eventBusName,
    EventPattern: JSON.stringify({
      account: accounts,
      source: rule.sources
    })
  }).promise();
  await eventBridgeClient.putTargets({
    Rule: rule.name,
    Targets: [
      {
        Arn: snsTopicArn,
        Id: `snsTarget-${rule.name}`
      }
    ]
  }).promise();
}

Amazon EventBridge triggered Lambda function code

// A new AWS account is moved into or out of the organization unit.
if (eventName === 'MoveAccount' || eventName === 'RemoveAccountFromOrganization') {
   await putRule(accountIds);
}

All events generated from the member accounts are then sent to an SNS topic, which has an email address as an endpoint. More sophisticated targets can be configured depending on the application’s needs, including a Step Functions state machine, a Kinesis stream, or an SQS queue.

Member account

In the member account, we use an Amazon EventBridge rule to route all the events coming from Amazon CloudWatch, AWS Config, and Amazon GuardDuty to the event bus created in the management account.

    const rule = {
      name: 'MemberEventBridgeRule',
      sources: ['aws.cloudwatch', 'aws.config', 'aws.guardduty'],
      description: 'The Rule propagates all Amazon CloudWatch Events, AWS Config Events, AWS Guardduty Events to the management account'
    }

    const cdkRule = new Rule(this, rule.name, {
      description: rule.description,
      ruleName: rule.name,
      eventPattern: {
        source: rule.sources,
      }
    });
    cdkRule.addTarget({
      bind(_rule: IRule, generatedTargetId: string): RuleTargetConfig {
        return {
          arn: `arn:aws:events:${process.env.REGION}:${process.env.CDK_MANAGEMENT_ACCOUNT}:event-bus/${eventBusName.valueAsString}`,
          id: generatedTargetId,
          role: publishingRole
        };
      }
    });

Deploying the solution

Bootstrap the management account:

npx cdk bootstrap  \ 
    --profile <MANAGEMENT ACCOUNT AWS PROFILE>  \ 
    --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess  \
    aws://<MANAGEMENT ACCOUNT ID>/<REGION>

To deploy the stack, use the cdk-deploy-to.sh script and pass as argument the management account ID, Region, AWS Organization ID, AWS Organization Unit ID and an accessible email address.

sh ./cdk-deploy-to.sh <MANAGEMENT ACCOUNT ID> <REGION> <AWS ORGANIZATION ID> <MEMBER ORGANIZATION UNIT ID> <EMAIL ADDRESS> AwsOrganizationsEventBridgeSetupManagementStack 

Make sure to confirm the SNS topic subscription when you receive an email after the management stack is deployed.

After the deployment is completed, a StackSet starts deploying the infrastructure to the member accounts. You can view this process in the AWS Management Console, under the AWS CloudFormation service in the management account as shown in the following image, or by logging in to the member account, under CloudFormation stacks.

Stack output

Testing the environment

This project deploys an Amazon CloudWatch billing alarm to the member accounts to test that we’re retrieving email notifications when this metric is in alarm. Using the member’s account credentials, run the following AWS CLI Command to change the alarm status to “In Alarm”:

aws cloudwatch set-alarm-state --alarm-name 'BillingAlarm' --state-value ALARM --state-reason "testing only" 

You receive emails at the email address configured as an endpoint on the Amazon SNS topic.

Conclusion

This blog post shows how to use AWS Organizations to organize your application’s accounts by using organization units and how to centralize event management using Amazon EventBridge across accounts in the organization. A fully automated solution is provided to ensure that adding new accounts to the organization unit is efficient.

For more serverless learning resources, visit https://serverlessland.com.

Running cross-account workflows with AWS Step Functions and Amazon API Gateway

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/running-cross-account-workflows-with-aws-step-functions-and-amazon-api-gateway/

This post is written by Hardik Vasa, Senior Solutions Architect, and Pratik Jain, Cloud Infrastructure Architect.

AWS Step Functions allows you to build scalable, distributed applications using state machines. With the launch of Step Functions nested workflows, you can start a Step Functions workflow from another workflow. However, this requires both workflows to be in the same account. There are many use cases that require you to orchestrate workflows across different AWS accounts from one central AWS account.

This blog post covers a solution to invoke Step Functions workflows cross account using Amazon API Gateway. With this, you can perform cross-account orchestration for scheduling, ETL automation, resource deployments, security audits, and log aggregations all from a central account.

Overview

The following architecture shows a Step Functions workflow in account A invoking an API Gateway endpoint in account B, and passing the payload in the API request. The API then invokes another Step Functions workflow in account B asynchronously.  The resource policy on the API allows you to restrict access to a specific Step Functions workflow to prevent anonymous access.

Cross-account workflows

You can extend this architecture to run workflows across multiple Regions or accounts. This blog post shows running cross-account workflows with two AWS accounts.

To invoke an API Gateway endpoint, you can use Step Functions AWS SDK service integrations. This approach allows users to build solutions and integrate services within a workflow without writing code.

The example demonstrates how to use the cross-account capability using two AWS example accounts:

  • Step Functions state machine A: Account ID #111111111111
  • API Gateway API and Step Functions state machine B: Account ID #222222222222

Setting up

Start by creating state machine A in the account #111111111111. Next, create the state machine in target account #222222222222, followed by the API Gateway REST API integrated to the state machine in the target account.

Account A: #111111111111

In this account, create a state machine, which includes a state that invokes an API hosted in a different account.

Create an IAM role for Step Functions

  1. Sign in to the IAM console in account #111111111111, and then choose Roles from left navigation pane
  2. Choose Create role.
  3. On the Select trusted entity page, under AWS service, select Step Functions from the list, and then choose Next.
  4. On the Add permissions page, choose Next.
  5. On the Review page, enter StepFunctionsAPIGatewayRole for Role name, and then choose Create role.
  6. Create inline policies to allow Step Functions to access the API actions of the services you need to control. Navigate to the role that you created and select Add Permissions and then Create inline policy.
  7. Use the Visual editor or the JSON tab to create policies for your role. Enter the following:
    Service: Execute-API
    Action: Invoke
    Resource: All Resources
  8. Choose Review policy.
  9. Enter APIExecutePolicy for name and choose Create Policy.

Creating a state machine in source account

  1. Navigate to the Step Functions console in account #111111111111 and choose Create state machine
  2. Select Design your workflow visually, then choose Standard, and then choose Next.
  3. On the design page, search for APIGateway:Invoke state, then drag and drop the block on the page:
    Step Functions designer console
  4. In the API Gateway Invoke section on the right panel, update the API Parameters with the following JSON policy:
     {
      "ApiEndpoint.$": "$.ApiUrl",
      "Method": "POST",
      "Stage": "dev",
      "Path": "/execution",
      "Headers": {},
      "RequestBody": {
       "input.$": "$.body",
       "stateMachineArn.$": "$.stateMachineArn"
      },
      "AuthType": "RESOURCE_POLICY"
    }

    These parameters indicate that the ApiEndpoint, payload (body), and stateMachineArn are dynamically assigned values based on the input provided during workflow execution (see the sketch after this procedure). You can also choose to assign these values statically, based on your use case.

  5. [Optional] You can also configure the API Gateway Invoke state to retry upon task failure by configuring the retries setting.
    Configuring State Machine
  6. Choose Next and then choose Next again. On the Specify state machine settings page:
    1. Enter a name for your state machine.
    2. Select Choose an existing role under Permissions and choose StepFunctionsAPIGatewayRole.
    3. Select Log Level ERROR.
  7. Choose Create State Machine.

After creating this state machine, copy the state machine ARN for later use.

Account B: #222222222222

In this account, create an API Gateway REST API that integrates with the target state machine and enables access to this state machine by means of a resource policy.

Creating a state machine in the target account

  1. Navigate to the Step Functions Console in account #222222222222 and choose Create State Machine.
  2. Under Choose authoring method, select Design your workflow visually and choose the Standard type.
  3. Choose Next.
  4. On the design page, search for Pass state. Drag and drop the state.
    State machine
  5. Choose Next.
  6. On the Review generated code page, choose Next and:
    1. Enter a name for the state machine.
    2. Select Create new role under the Permissions section.
    3. Select Log Level ERROR.
  7. Choose Create State Machine.

Once the state machine is created, copy the state machine ARN for later use.

Next, set up the API Gateway REST API, which acts as a gateway to accept requests from the state machine in account A. This integrates with the state machine you just created.

Create an IAM Role for API Gateway

Before creating the API Gateway API endpoint, you must give API Gateway permission to call Step Functions API actions:

  1. Sign in to the IAM console in account #222222222222 and choose Roles. Choose Create role.
  2. On the Select trusted entity page, under AWS service, select API Gateway from the list, and then choose Next.
  3. On the Add permissions page, choose Next.
  4. On the Name, review, and create page, enter APIGatewayToStepFunctions for Role name, and then choose Create role.
  5. Choose the name of your role and note the Role ARN:
    arn:aws:iam::222222222222:role/APIGatewayToStepFunctions
  6. Select the IAM role (APIGatewayToStepFunctions) you created.
  7. On the Permissions tab, choose Add permission and choose Attach Policies.
  8. Search for AWSStepFunctionsFullAccess, choose the policy, and then choose Attach policies.

Creating the API Gateway API endpoint

After creating the IAM role, create a custom API Gateway API:

  1. Open the Amazon API Gateway console in account #222222222222.
  2. Choose Create API. Under REST API, choose Build.
  3. Enter StartExecutionAPI for the API name, and then choose Create API.
  4. On the Resources page of StartExecutionAPI, choose Actions, Create Resource.
  5. Enter execution for Resource Name, and then choose Create Resource.
  6. On the /execution Methods page, choose Actions, Create Method.
  7. From the list, choose POST, and then select the check mark.

Configure the integration for your API method

  1. On the /execution – POST – Setup page, for Integration Type, choose AWS Service. For AWS Region, choose a Region from the list. For Regions that currently support Step Functions, see Supported Regions.
  2. For AWS Service, choose Step Functions from the list.
  3. For HTTP Method, choose POST from the list. All Step Functions API actions use the HTTP POST method.
  4. For Action Type, choose Use action name.
  5. For Action, enter StartExecution.
  6. For Execution Role, enter the role ARN of the IAM role that you created earlier, as shown in the following example. The Integration Request configuration can be seen in the image below.
    arn:aws:iam::222222222222:role/APIGatewayToStepFunctions
    API Gateway integration request configuration
  7. Choose Save. The visual mapping between API Gateway and Step Functions is displayed on the /execution – POST – Method Execution page.
    API Gateway method configuration

After you configure your API, you can configure the resource policy to allow the invoke action from the cross-account Step Functions State Machine. For the resource policy to function in cross-account scenarios, you must also enable AWS IAM authorization on the API method.

Configure IAM authorization for your method

  1. On the /execution – POST method, navigate to the Method Request, and under the Authorization option, select AWS_IAM and save.
  2. In the left navigation pane, choose Resource Policy.
  3. Use this policy template to define and enter the resource policy for your API.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": "states.amazonaws.com"
                },
                "Action": "execute-api:Invoke",
                "Resource": "execute-api:/*/*/*",
                "Condition": {
                    "StringEquals": {
                        "aws:SourceArn": [
                            "<SourceAccountStateMachineARN>"
                         ]
                    }
                }
            }
        ]
    }

    Note: You must replace <SourceAccountStateMachineARN> with the state machine ARN from account #111111111111 (account A).

  4. Choose Save.

Once the resource policy is configured, deploy the API to a stage.

Deploy the API

  1. In the left navigation pane, choose Resources, and then choose Actions.
  2. From the Actions drop-down menu, choose Deploy API.
  3. In the Deploy API dialog box, choose [New Stage] and enter dev for the Stage name.
  4. Choose Deploy to deploy the API.

After deployment, capture the API ID, API Region, and the stage name. These are used as inputs during the execution phase.

Starting the workflow

To run the Step Functions workflow in account A, provide the following input:

{
   "ApiUrl": "<api_id>.execute-api.<region>.amazonaws.com",
   "stateMachineArn": "<stateMachineArn>",
   "body": "{\"someKey\":\"someValue\"}"
}

Start execution

Paste in the values of ApiUrl and stateMachineArn from account B into the preceding input. Make sure the ApiUrl matches the format shown.
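
If you prefer to start the execution programmatically instead of through the console, the following is a minimal sketch using the AWS SDK for JavaScript v3. The Region, state machine ARNs, and API ID are placeholders that you replace with your own values.

import { SFNClient, StartExecutionCommand } from '@aws-sdk/client-sfn';

// Run this with credentials for account A (#111111111111)
const client = new SFNClient({ region: '<region>' });

async function startCrossAccountWorkflow(): Promise<void> {
  await client.send(new StartExecutionCommand({
    // State machine A in account #111111111111
    stateMachineArn: 'arn:aws:states:<region>:111111111111:stateMachine:<StateMachineA>',
    input: JSON.stringify({
      ApiUrl: '<api_id>.execute-api.<region>.amazonaws.com', // API in account B
      stateMachineArn: 'arn:aws:states:<region>:222222222222:stateMachine:<StateMachineB>',
      body: JSON.stringify({ someKey: 'someValue' }),
    }),
  }));
}

startCrossAccountWorkflow().catch(console.error);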

AWS Serverless Application Model deployment

You can deploy the preceding solution architecture with the AWS Serverless Application Model (AWS SAM), which is an open-source framework for building serverless applications. During deployment, AWS SAM transforms and expands the syntax into AWS CloudFormation syntax, enabling you to build serverless applications faster.

Logging and monitoring

Logging and monitoring are vital for observability, performance measurement, and auditing. Step Functions supports logging to CloudWatch Logs and automatically sends execution metrics to CloudWatch. You can learn more about monitoring Step Functions using CloudWatch.
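
If you manage the state machines with infrastructure as code, you can enable the same ERROR-level logging selected in the console steps earlier. The following CDK snippet is a sketch under that assumption; the Pass state stands in for your actual workflow definition.

import * as cdk from 'aws-cdk-lib';
import * as logs from 'aws-cdk-lib/aws-logs';
import * as sfn from 'aws-cdk-lib/aws-stepfunctions';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'CrossAccountWorkflowStack');

const logGroup = new logs.LogGroup(stack, 'StateMachineLogGroup');

new sfn.StateMachine(stack, 'StateMachineA', {
  definition: new sfn.Pass(stack, 'Placeholder'), // replace with your workflow definition
  logs: {
    destination: logGroup,
    level: sfn.LogLevel.ERROR, // matches the "Log Level ERROR" console setting
    includeExecutionData: false,
  },
});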

Cleaning up

To avoid incurring any charges, delete all the resources that you have created in both accounts. This includes deleting the Step Functions state machines and the API Gateway API.

Conclusion

This blog post provides a step-by-step guide on securely invoking a cross-account Step Functions workflow from a central account, using API Gateway as a front end. This pattern can be extended to scale workflow executions across different Regions and accounts.

By using a centralized account to orchestrate workflows across AWS accounts, you can avoid duplicating work in each account.

To learn more about serverless and AWS Step Functions, visit the Step Functions Developer Guide.

For more serverless learning resources, visit Serverless Land.

Building serverless multi-Region WebSocket APIs

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-serverless-multi-region-websocket-apis/

This post is written by Ben Freiberg, Senior Solutions Architect, and Marcus Ziller, Senior Solutions Architect.

Many modern web applications use the WebSocket protocol for bidirectional communication between frontend clients and backends. The fastest way to get started with WebSockets on AWS is to use WebSocket APIs powered by Amazon API Gateway.

This serverless solution allows customers to get started with WebSockets without the complexity of running and scaling a WebSocket server themselves. However, WebSocket APIs are a Regional service bound to a single Region, which may affect latency and resilience for some workloads.

This post shows how to build a multi-regional WebSocket API for a global real-time chat application.

Overview of the solution

This solution uses AWS Cloud Development Kit (CDK). This is an open source software development framework to model and provision cloud application resources. Using the CDK can reduce the complexity and amount of code needed to automate the deployment of resources.

This solution uses AWS Lambda, Amazon API Gateway, Amazon DynamoDB, and Amazon EventBridge.

This diagram outlines the workflow implemented in this blog:

Solution architecture

  1. Users across different Regions establish WebSocket connections to an API endpoint in a Region. For every connection, the respective API Gateway invokes the ConnectionHandler Lambda function, which stores the connection details in a Regional DynamoDB table.
  2. User A sends a chat message via the established WebSocket connection. The API Gateway invokes the ClientMessageHandler Lambda function with the received message. The Lambda function publishes an event to an EventBridge event bus that contains the message and the connectionId of the message sender.
  3. The event bus invokes the EventBusMessageHandler Lambda function, which pushes the received message to all other clients connected in the Region. It also replicates the event into us-west-1.
  4. EventBusMessageHandler in us-west-1 receives the replicated event and sends it out to all connected clients in that Region via the same mechanism.

Walkthrough

The following walkthrough explains the required components, their interactions, and how the provisioning can be automated via CDK.

For this walkthrough, you need:

Check out and deploy the sample stack:

  1. After completing the prerequisites, clone the associated GitHub repository by running the following command in a local directory:
    git clone git@github.com:aws-samples/multi-region-websocket-api
  2. Open the repository in your preferred editor and review the contents of the src and cdk folder.
  3. Follow the instructions in the README.md to deploy the stack.

The following components are deployed in your account for every specified Region. If you didn’t change the default, the Regions are eu-west-1 and us-west-1.

API Gateway for WebSocket connectivity

API Gateway is a fully managed service that makes it easier for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the “front door” for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications.

WebSocket APIs serve as a stateful frontend for an AWS service, in this case AWS Lambda. The WebSocket API maintains a persistent connection with each client and handles message transfer between the backend service and clients. The WebSocket API invokes the backend based on the content of the messages that it receives from client apps.

There are three predefined routes that can be used: $connect, $disconnect, and $default.

const connectionLambda = new lambda.Function(..);
const requestHandlerLambda = new lambda.Function(..);

const webSocketApi = new apigwv2.WebSocketApi(this, 'WebsocketApi', {
      apiName: 'WebSocketApi',
      description: 'A regional Websocket API for the multi-region chat application sample',
      connectRouteOptions: {
        integration: new WebSocketLambdaIntegration('connectionIntegration', connectionLambda),
      },
      disconnectRouteOptions: {
        integration: new WebSocketLambdaIntegration('disconnectIntegration', connectionLambda),
      },
      defaultRouteOptions: {
        integration: new WebSocketLambdaIntegration('defaultIntegration', requestHandlerLambda),
      },
});

const websocketStage = new apigwv2.WebSocketStage(this, 'WebsocketStage', {
      webSocketApi,
      stageName: 'dev',
      autoDeploy: true,
});

$connect and $disconnect are used by clients to initiate or end a connection with the API Gateway. Each route has a backend integration that is invoked for the respective event. In this example, a Lambda function gets invoked with details of the event. The following code snippet shows how you can track each of the connected clients in an Amazon DynamoDB table. Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale.

// Simplified example for brevity
// Visit GitHub repository for complete code

async function connectionHandler(event: APIGatewayEvent) {
  const { eventType, connectionId } = event.requestContext;

  if (eventType === 'CONNECT') {
    await dynamoDbClient.put({
      TableName: process.env.TABLE_NAME!,
      Item: {
        connectionId,
        chatId: 'DEFAULT',
        ttl: Math.round(Date.now() / 1000 + 3600), // TTL of one hour
      },
    });
  }

  if (eventType === 'DISCONNECT') {
    await dynamoDbClient.delete({
      TableName: process.env.TABLE_NAME!,
      Key: {
        connectionId,
        chatId: 'DEFAULT',
      },
    });
  }

  return { statusCode: 200, body: '' };
}

The $default route is used when the route selection expression produces a value that does not match any of the other route keys in your API routes. For this post, we use it as a default route for all messages sent to the API Gateway by a client. For each message, a Lambda function is invoked with an event of the following format.

{
    "requestContext": {
        "routeKey": "$default",
        "messageId": "GXLKJfX4FiACG1w=",
        "eventType": "MESSAGE",
        "messageDirection": "IN",
        "connectionId": "GXLKAfX1FiACG1w=",
        "apiId": "3m4dnp0wy4",
        "requestTimeEpoch": 1632812813588
        // some fields omitted for brevity
    },
    "body": "{ .. }",
    "isBase64Encoded": false
}

EventBridge for cross-Region message distribution

The Lambda function uses the AWS SDK to publish the message data in event.body to EventBridge. EventBridge is a serverless event bus that makes it easier to build event-driven applications at scale. It delivers a stream of real-time data from event sources to targets. You can set up routing rules to determine where to send your data to build application architectures that react in real time to your data sources with event publishers and consumers decoupled.
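
The snippet below is a simplified sketch of this publishing step (the full handler is in the repository); the BUS_NAME environment variable and the message field in the request body are assumptions based on the event fields used later in this post.

import { EventBridgeClient, PutEventsCommand } from '@aws-sdk/client-eventbridge';

const eventBridge = new EventBridgeClient({});

export async function handler(event: { body: string; requestContext: { connectionId: string } }) {
  await eventBridge.send(new PutEventsCommand({
    Entries: [{
      EventBusName: process.env.BUS_NAME, // assumed environment variable
      Source: 'ChatApplication',
      DetailType: 'ChatMessageReceived',
      Detail: JSON.stringify({
        chatId: 'DEFAULT',
        message: JSON.parse(event.body).message, // chat message sent by the client
        senderConnectionId: event.requestContext.connectionId,
      }),
    }],
  }));

  return { statusCode: 200, body: '' };
}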

The following CDK code defines a routing rule on the event bus that applies to every event with source ChatApplication and detail type ChatMessageReceived.

    new events.Rule(this, 'ProcessRequest', {
      eventBus,
      enabled: true,
      ruleName: 'ProcessChatMessage',
      description: 'Invokes a Lambda function for each chat message to push the event via websocket and replicates the event to event buses in other regions.',
      eventPattern: {
        detailType: ['ChatMessageReceived'],
        source: ['ChatApplication'],
      },
      targets: [
        new LambdaFunction(processLambda.fn),
        ...additionalEventBuses,
      ],
    });

Intra-Region message delivery

The first target is a Lambda function that sends the message out to clients connected to the API Gateway endpoint in the same Region where the message was received.

To that end, the function first uses the AWS SDK to query DynamoDB for active connections for a given chatId in its AWS Region. It then removes the connectionId of the message sender from the list and calls postToConnection(..) for the remaining connection ids to push the message to the respective clients.

export async function handler(event: EventBridgeEvent<'EventResponse', ResponseEventDetails>): Promise<any> {
  const connections = await getConnections(event.detail.chatId);

  // Push the message to every connection except the original sender
  await Promise.all(connections
    .filter((cId: string) => cId !== event.detail.senderConnectionId)
    .map((connectionId: string) => gatewayClient.postToConnection({
      ConnectionId: connectionId,
      Data: JSON.stringify({ data: event.detail.message }),
    })));
}

Inter-Region message delivery

To send messages across Regions, this solution uses EventBridge’s cross-Region event routing capability. Cross-Region event routing allows you to replicate events across Regions by adding an event bus in another Region as the target of a rule. In this case, the architecture is a mesh of event buses in all Regions so that events in every event bus are replicated to all other Regions.

A message sent to an event bus in a Region is replicated to the event buses in the other Regions and triggers the intra-Region workflow described earlier. To avoid infinite loops, the EventBridge service implements circuit breaker logic that prevents event buses from sending messages back and forth indefinitely. Thus, only ProcessRequestLambda is invoked as a rule target. The function receives the message via its invocation event and looks up the active WebSocket connections in its Region. It then pushes the message to all relevant clients.

This process happens in every Region so that the initial message is delivered to every connected client with at-least-once semantics.
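
The additionalEventBuses targets in the rule shown earlier can be built by referencing the default event bus of another Region by its ARN. The following is a sketch with a placeholder account ID; CDK creates the IAM role that EventBridge needs to put events onto the remote bus if you don't pass one explicitly.

import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';

// Reference the default event bus in the other Region by ARN (placeholder account ID)
const remoteBus = events.EventBus.fromEventBusArn(
  this,
  'UsWest1DefaultBus',
  'arn:aws:events:us-west-1:123456789012:event-bus/default',
);

// Used as ...additionalEventBuses in the rule's target list
const additionalEventBuses = [new targets.EventBus(remoteBus)];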

Improving resilience

The architecture of this solution is resilient to service disruptions in a Region. In such an event, all clients connected to the affected Region reconnect to an unaffected Region and continue to receive events. Although this isn’t covered in the CDK code, you can also set up Amazon Route 53 health checks to automate DNS failover to a healthy Region.

Testing the workflow

You can use any WebSocket client to test the application. Here you can see three clients, one connected to the us-west-1 API Gateway endpoint and two connected to the eu-west-1 endpoint. Each one sends a message to the application and every other client receives it, regardless of the Region it is connected to.
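
For example, a minimal Node.js test client could look like the following sketch, which uses the open-source ws package. The endpoint URL is a placeholder for the WebSocket stage created by the CDK stack, and the JSON message shape is an assumption for illustration.

import WebSocket from 'ws';

// Placeholder: use the WebSocket stage URL from your deployed stack
const endpoint = 'wss://<api-id>.execute-api.eu-west-1.amazonaws.com/dev';
const socket = new WebSocket(endpoint);

socket.on('open', () => {
  socket.send(JSON.stringify({ message: 'Hello from eu-west-1!' }));
});

socket.on('message', (data) => {
  console.log('Received:', data.toString());
});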

Testing

Testing

Testing

Cleaning up

Most services used in this blog post have an allowance in the AWS Free Tier. Be sure to check potential costs of this solution and delete the stack if you don’t need it anymore. Instructions on how to do this are included inside the README in the repository.

Conclusion

This blog post shows how to use the AWS serverless platform to build a multi-regional chat application over WebSockets. With the cross-Region event routing of EventBridge the architecture is resilient as well as extensible.

For more resources on how to get the most out of the AWS serverless platform, visit Serverless Land.

Composing AWS Step Functions to abstract polling of asynchronous services

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/composing-aws-step-functions-to-abstract-polling-of-asynchronous-services/

This post is written by Nicolas Jacob Baer, Senior Cloud Application Architect, Goeksel Sarikaya, Senior Cloud Application Architect, and Shukhrat Khodjaev, Engagement Manager, AWS ProServe.

AWS Step Functions workflows can use the three main integration patterns when calling AWS services. A common integration pattern is to call a service and wait for a response before proceeding to the next step.

While this works well with AWS services, there is no built-in support for third-party, on-premises, or custom-built services running on AWS. When a workflow integrates with such a service in an asynchronous fashion, it requires a primary step to invoke the service. There are additional steps to wait and check the result, and handle possible error scenarios, retries, and fallbacks.

Although this approach may work well for small workflows, it does not scale to multiple interactions in different steps of the workflow and becomes repetitive. This may result in a complex workflow, which makes it difficult to build, test, and modify. In addition, large workflows with many repeated steps are more difficult to troubleshoot and understand.

This post demonstrates how to decompose the custom service integration by nesting Step Functions workflows. One basic workflow is dedicated to handling the asynchronous communication, which offers modularity and can be re-used as a building block. Another workflow handles the main process by invoking the nested workflow for each service interaction, so that all the repeated steps are hidden in multiple executions of the nested workflow.

Overview

Consider a custom service that provides an asynchronous interface, where an action is initially triggered by an API call. After a few minutes, the result is available to be polled by the caller. The following diagram shows a basic workflow that interacts with this custom service and encapsulates the service communication; a code sketch of the same pattern follows the numbered steps below:

Workflow diagram

  1. Call Custom Service API – calls a custom service, in this case through an API. Potentially, this could use a service integration or use AWS Lambda if there is custom code.
  2. Wait – waits for the service to prepare a result. This depends on the service that the workflow is interacting with, and could vary from seconds to days to months.
  3. Poll Result – attempts to poll the result from the custom service.
  4. Choice – repeats the polling if the result is not yet available, or moves on to the Fail or Success state once the result is retrieved. In addition, a timeout should be in place here in case the result is not available within the expected time range. Otherwise, this might lead to an infinite loop.
  5. Fail – fails the workflow if a timeout or a threshold for the number of retries with error conditions is reached.
  6. Transform Result – transforms the result or adds additional meta information to provide further information to the caller (for example, runtime or retries).
  7. Success – finishes the workflow successfully.
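
To make the pattern concrete, here is a minimal CDK sketch of such a polling workflow. It is an illustration rather than the sample project's code: callServiceFn and pollResultFn are hypothetical Lambda functions, the $.status field is an assumed output of the poll step, and retries and an overall timeout are omitted for brevity.

import { Duration } from 'aws-cdk-lib';
import * as sfn from 'aws-cdk-lib/aws-stepfunctions';
import * as tasks from 'aws-cdk-lib/aws-stepfunctions-tasks';

// callServiceFn and pollResultFn are hypothetical Lambda functions
const callService = new tasks.LambdaInvoke(this, 'Call Custom Service API', {
  lambdaFunction: callServiceFn,
  outputPath: '$.Payload',
});

const wait = new sfn.Wait(this, 'Wait', {
  time: sfn.WaitTime.duration(Duration.seconds(30)),
});

const pollResult = new tasks.LambdaInvoke(this, 'Poll Result', {
  lambdaFunction: pollResultFn,
  outputPath: '$.Payload',
});

const transformResult = new sfn.Pass(this, 'Transform Result');
const success = new sfn.Succeed(this, 'Success');
const fail = new sfn.Fail(this, 'Fail');

// Assumes the poll Lambda returns a status field in its payload
const choice = new sfn.Choice(this, 'Result available?')
  .when(sfn.Condition.stringEquals('$.status', 'SUCCEEDED'), transformResult.next(success))
  .when(sfn.Condition.stringEquals('$.status', 'FAILED'), fail)
  .otherwise(wait); // not ready yet: wait and poll again

const definition = callService.next(wait.next(pollResult).next(choice));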

If you build a larger workflow that interacts with this custom service in multiple steps in a workflow, you can reduce the complexity by using the Step Functions integration to call the nested workflow with a Wait state.

An illustration of this can be found in the following diagram, where the nested stack is called three times sequentially. Likewise, you can build a more complex workflow that adds additional logic through more steps or interacts with a custom service in parallel. The polling logic is hidden in the nested workflow.

Nested workflow
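
When defined in code rather than through the console, the nested invocation can use the Step Functions optimized integration with the RUN_JOB pattern so that the parent waits for the nested execution to complete. This is a sketch under the assumption that nestedStateMachine is defined elsewhere; Parameter1 and Parameter2 mirror the parameters mentioned in the walkthrough below.

import * as sfn from 'aws-cdk-lib/aws-stepfunctions';
import * as tasks from 'aws-cdk-lib/aws-stepfunctions-tasks';

// nestedStateMachine is assumed to be defined elsewhere in the stack
const invokeNested = new tasks.StepFunctionsStartExecution(this, 'Invoke service workflow', {
  stateMachine: nestedStateMachine,
  // RUN_JOB makes the parent workflow wait until the nested execution finishes
  integrationPattern: sfn.IntegrationPattern.RUN_JOB,
  input: sfn.TaskInput.fromObject({
    Parameter1: sfn.JsonPath.stringAt('$.parameter1'),
    Parameter2: sfn.JsonPath.stringAt('$.parameter2'),
  }),
});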

Walkthrough

To get started with AWS Step Functions and Amazon API Gateway using the AWS Management Console:

  1. Go to AWS Step Functions in the AWS Management Console.
  2. Choose Run a sample project and choose Start a workflow within a workflow.
    Defining state machine
  3. Scroll down to the sample projects, which are defined using Amazon States Language (ASL).
    Definition
  4. Review the example definition, then choose Next.
  5. Choose Deploy resources. The deployment can take up to 10 minutes. After deploying the resources, you can edit the sample ASL code to define steps in the state machine.
    Deploy resources
  6. The deployment creates two state machines: NestingPatternMainStateMachine and NestingPatternAnotherStateMachine. NestingPatternMainStateMachine orchestrates the other nested state machines sequentially or in parallel.
    Two state machines
  7. Select a state machine, then choose Edit to edit the workflow. In the NestingPatternMainStateMachine, the first state triggers the nested workflow NestingPatternAnotherStateMachine. You can pass necessary parameters to the nested workflow by using Parameters and Input as shown in the example below with Parameter1 and Parameter2. Once the first nested workflow completes successfully, the second nested workflow is triggered. If the result of the first nested workflow is not successful, the NestingPatternMainStateMachine fails with the Fail state.
    Editing state machine
  8. Select nested workflow NestingPatternAnotherStateMachine, and then select Edit to add AWS Lambda functions to start a job and poll the state of the jobs. This can be any asynchronous job that needs to be polled to query its state. Based on expected job duration, the Wait state can be configured for 10-20 seconds. If the workflow is successful, the main workflow returns a successful result.
    Edited next state machine

Use cases and limitations

This approach allows encapsulation of workflows consisting of multiple sequential or parallel services. Therefore, it provides flexibility that can be used for different use cases. Services can be part of distributed applications, automated business processes, or big data and machine learning pipelines built on AWS services.

Each nested workflow is responsible for an individual step in the main workflow, providing flexibility and scalability. Hundreds of nested workflows can run and be monitored in parallel with the main workflow (see AWS Step Functions Service Quotas).

The approach described here is not applicable for custom service interactions faster than 1 second, since it is the minimum configurable value for a wait step.

Nested workflow encapsulation

Similar to the principle of encapsulation in object-oriented programming, you can use a nested workflow for different interactions with a custom service. You can dynamically pass input parameters to the nested workflow during workflow execution and receive return values. This way, you can define a clear interface between the nested workflow and the parent workflow with different actions and integrations. Depending on the use-case, a custom service may offer a variety of different actions that must be integrated into workflows that run in Step Functions, but can all be combined into a single workflow.

Debugging and tracing

Additionally, debugging and tracing can be done through Execution Event History in the State Machine Management Console. In the Resource column, you can find a link to the executed nested step function. It can be debugged in case of any error in the nested Step Functions workflow.

Execution event history

However, debugging can be challenging in case of multiple parallel nested workflows. In such cases, AWS X-Ray can be enabled to visualize the components of a state machine, identify performance bottlenecks, and troubleshoot requests that have led to an error.

To enable AWS X-Ray in AWS Step Functions:

  1. Open the Step Functions console and choose Edit state machine.
  2. Scroll down to Tracing settings, and choose Enable X-Ray tracing.

Tracing
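
If you define the state machine with the CDK instead of the console, the equivalent setting is a single property (a sketch, assuming definition holds the chained states of your workflow):

import * as sfn from 'aws-cdk-lib/aws-stepfunctions';

new sfn.StateMachine(this, 'NestingPatternMainStateMachine', {
  definition,           // assumed: the chained states of the main workflow
  tracingEnabled: true, // send trace data to AWS X-Ray
});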

For detailed information regarding AWS X-Ray and AWS Step Functions please refer to the following documentation: https://docs.aws.amazon.com/step-functions/latest/dg/concepts-xray-tracing.html

Conclusion

This blog post describes how to compose a nested Step Functions workflow, which asynchronously manages a custom service using the polling mechanism.

To learn more about how to use AWS Step Functions workflows for serverless microservices orchestration, visit Serverless Land.

Building a serverless image catalog with AWS Step Functions Workflow Studio

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-a-serverless-image-catalog-with-aws-step-functions-workflow-studio/

This post is written by Pascal Vogel, Associate Solutions Architect, and Benjamin Meyer, Sr. Solutions Architect.

Workflow Studio is a low-code visual workflow designer for AWS Step Functions that enables the orchestration of serverless workflows through a guided interactive interface. With the integration of Step Functions and the AWS SDK, you can now access more than 200 AWS services and over 9,000 API actions in your state machines.

This walkthrough uses Workflow Studio to implement a serverless image cataloging pipeline. It includes content moderation, automated tagging, and parallel image processing. Workflow Studio allows you to set up API integrations to other AWS services quickly with drag and drop actions, without writing custom application code.

Solution overview

Photo sharing websites often allow users to publish user-generated content such as text, images, or videos. Manual content review and categorization can be challenging. This solution enables the automation of these tasks.

Workflow overview

In this workflow:

  1. An image stored in Amazon S3 is checked for inappropriate content using the Amazon Rekognition DetectModerationLabels API.
  2. Based on the result of (1), appropriate images are forwarded to image processing while inappropriate ones trigger an email notification.
  3. Appropriate images undergo two processing steps in parallel: the detection of objects and text in the image via Amazon Rekognition’s DetectLabels and DetectText APIs. The results of both processing steps are saved in an Amazon DynamoDB table.
  4. An inappropriate image triggers an email notification for manual content moderation via the Amazon Simple Notification Service (SNS).

Prerequisites

To follow this walkthrough, you need:

  1. An AWS account.
  2. An AWS user with AdministratorAccess (see the instructions on the AWS Identity and Access Management (IAM) console).
  3. AWS CLI using the instructions here.
  4. AWS Serverless Application Model (AWS SAM) CLI using the instructions here.

Initial project setup

Get started by cloning the project repository from GitHub:

git clone https://github.com/aws-samples/aws-step-functions-image-catalog-blog.git

The cloned repository contains two AWS SAM templates.

  1. The starter directory contains a template. It deploys AWS resources and permissions that you use later for building the image cataloging workflow.
  2. The solution directory contains a template that deploys the finished image cataloging pipeline. Use this template if you want to skip ahead to the finished solution.

Both templates deploy the following resources to your AWS account:

  • An Amazon S3 bucket that holds the image files for the catalog.
  • A DynamoDB table as the data store of the image catalog.
  • An SNS topic and subscription that allow you to send an email notification.
  • A Step Functions state machine that defines the processing steps in the cataloging pipeline.

To follow the walkthrough, deploy the AWS SAM template in the starter directory using the AWS SAM CLI:

cd aws-step-functions-image-catalog-blog/starter
sam build
sam deploy --guided

Configure the AWS SAM deployment as follows. Input your email address for the parameter ModeratorEmailAddress:

Configuring SAM deploy

During deployment, you receive an email asking you to confirm the subscription to notifications generated by the Step Functions workflow. In the email, choose Confirm subscription to receive these notifications.

Subscription message

Confirm successful resource creation by going to the AWS CloudFormation console. Open the serverless-image-catalog-starter stack and choose the Stack info tab:

CloudFormation stack

View the Outputs tab of the CloudFormation stack. You reference these items later in the walkthrough:

Outputs tab

Implementing the image cataloging pipeline

Accessing Step Functions Workflow Studio

To access Step Functions in Workflow Studio:

  1. Access the Step Functions console.
  2. In the list of State machines, select image-catalog-workflow-starter.
  3. Choose the Edit button.
  4. Choose Workflow Studio.

Workflow Studio

Workflow Studio consists of three main areas:

  1. The Canvas lets you modify the state machine graph via drag and drop.
  2. The States Browser lets you browse and search more than 9,000 API Actions from over 200 AWS services.
  3. The Inspector panel lets you configure the properties of state machine states and displays the Step Functions definition in the Amazon States Language (ASL).

For the purpose of this walkthrough, you can delete the Pass state present in the state machine graph. Right click on it and choose Delete state.

Auto-moderating content with Amazon Rekognition and the Choice State

Use Amazon Rekognition’s DetectModerationLabels API to detect inappropriate content in the images processed by the workflow:

  1. In the States browser, search for the DetectModerationLabels API action.
  2. Drag and drop the API action on the state machine graph on the canvas.

Drag and drop

In the Inspector panel, select the Configuration tab and add the following API Parameters:

{
  "Image": {
    "S3Object": {
      "Bucket.$": "$.bucket",
      "Name.$": "$.key"
    }
  }
}

Switch to the Output tab and check the box next to Add original input to output using ResultPath. This allows you to pass both the original input and the task’s output on to the next state on the state machine graph.

Input the following ResultPath:

$.moderationResult

Step Functions enables you to make decisions based on the output of previous task states via the choice state. Use the result of the DetectModerationLabels API action to decide how to proceed with the image:

  1. In the States browser, choose the Flow tab.
  2. Drag and drop a Choice state onto the state machine graph below the DetectModerationLabels API action.
  3. Select the added Choice state.
  4. In the Inspector panel, choose Rule #1 and select Edit.
  5. Choose Add conditions.
  6. For Variable, enter $.moderationResult.ModerationLabels[0].
  7. For Operator, choose is present.
  8. Choose Save conditions.
    Conditions for rule #1

If Amazon Rekognition detects inappropriate content, the workflow notifies content moderators to inspect the image manually:

  1. In the States browser, find the SNS Publish API Action.
  2. Drag the Action into the Rule #1 branch of the Choice state.
  3. For API Parameters, select the SNS topic that is visible in the Outputs of the serverless-image-catalog-starter stack in the CloudFormation console.

SNS topic in Workflow Studio

Speeding up image cataloging with the Parallel state

Appropriate images should be processed and included in the image catalog. In this example, processing includes the automated generation of tags based on objects and text identified in the image.

To accelerate this, instruct Step Functions to perform these tasks concurrently via a Parallel state (a code sketch of the equivalent state follows these steps):

  1. In the States browser, select the Flow tab.
  2. Drag and drop a Parallel state onto the Default branch of the previously added Choice state.
  3. Search for the Amazon Rekognition DetectLabels API action in the States browser.
  4. Drag and drop it inside the parallel state.
  5. Configure the following API parameters:
    {
      "Image": {
        "S3Object": {
          "Bucket.$": "$.bucket",
          "Name.$": "$.key"
        }
      }
    }
    
  6. Switch to the Output tab and check the box next to Add original input to output using ResultPath. Set the ResultPath to $.output.
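
As referenced above, here is a hedged CDK sketch of an equivalent Parallel state. The walkthrough itself uses Workflow Studio rather than code; this sketch reaches the two Rekognition actions through the generic CallAwsService task and omits the DynamoDB update steps that the next section adds.

import * as sfn from 'aws-cdk-lib/aws-stepfunctions';
import * as tasks from 'aws-cdk-lib/aws-stepfunctions-tasks';

const imageParameters = {
  Image: {
    S3Object: {
      Bucket: sfn.JsonPath.stringAt('$.bucket'),
      Name: sfn.JsonPath.stringAt('$.key'),
    },
  },
};

const detectLabels = new tasks.CallAwsService(this, 'DetectLabels', {
  service: 'rekognition',
  action: 'detectLabels',
  parameters: imageParameters,
  iamResources: ['*'],
  resultPath: '$.output',
});

const detectText = new tasks.CallAwsService(this, 'DetectText', {
  service: 'rekognition',
  action: 'detectText',
  parameters: imageParameters,
  iamResources: ['*'],
  resultPath: '$.output',
});

// Both branches run concurrently; each would be followed by its DynamoDB update step
new sfn.Parallel(this, 'ProcessImage')
  .branch(detectLabels)
  .branch(detectText);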

Record the results of the Amazon Rekognition DetectLabels API Action to the DynamoDB database:

  1. Place a DynamoDB UpdateItem API Action inside the Parallel state below the Amazon Rekognition DetectLabels API action.
  2. Configure the following API Parameters to save the tags to the DynamoDB table. Input the name of the DynamoDB table visible in the Outputs of the serverless-image-catalog-starter stack in the CloudFormation console:
{
  "TableName": "<DynamoDB table name>",
  "Key": {
    "Id": {
      "S.$": "$.key"
    }
  },
  "UpdateExpression": "set detectedObjects=:o",
  "ExpressionAttributeValues": {
    ":o": {
      "S.$": "States.JsonToString($.output.Labels)"
    }
  }
}

This API parameter definition makes use of an intrinsic function to convert the list of objects identified by Amazon Rekognition from JSON to String.

Intrinsic functions

In addition to objects, you also want to identify text in images and store it in the database. To do so:

  1. Drag and drop an Amazon Rekognition DetectText API action into the Parallel state next to the DetectLabels Action.
  2. Configure the API Parameters and ResultPath identical to the DetectLabels API Action.
  3. Place another DynamoDB UpdateItem API Action inside the Parallel state below the Amazon Rekognition DetectText API Action. Set the following API Parameters and input the same DynamoDB table name as before.
{
  "TableName": "<DynamoDB table name>",
  "Key": {
    "Id": {
      "S.$": "$.key"
    }
  },
  "UpdateExpression": "set detectedText=:t",
  "ExpressionAttributeValues": {
    ":t": {
      "S.$": "States.JsonToString($.output.TextDetections)"
    }
  }
}

To save the state machine:

  1. Choose Apply and exit.
  2. Choose Save.
  3. Choose Save anyway.

Finishing up and testing the image cataloging workflow

To test the image cataloging workflow, upload an image to the S3 bucket created as part of the initial project setup. Find the name of the bucket in the Outputs of the serverless-image-catalog-starter stack in the CloudFormation console.

  1. Select the image-catalog-workflow-starter state machine in the Step Functions console.
  2. Choose Start execution.
  3. Paste the following test event (use your S3 bucket name):
    {
        "bucket": "<S3-bucket-name>",
        "key": "<Image-name>.jpeg"
    }
    
  4. Choose Start execution.

Once the execution has started, you can follow the state of the state machine live in the Graph inspector. For an appropriate image, the result will look as follows:

Graph inspector

Next, repeat the test process with an image that Amazon Rekognition classifies as inappropriate. Find out more about inappropriate content categories here. This produces the following result:

Graph inspector

You receive an email notifying you regarding the inappropriate image and its properties.

Cleaning up

To clean up the resources provisioned as part of the solution, run the following command in the aws-step-functions-image-catalog-blog/starter directory:

sam delete

Conclusion

This blog post demonstrates how to implement a serverless image cataloging pipeline using Step Functions Workflow Studio. By orchestrating AWS API actions and flow states via drag and drop, you can process user-generated images. This example checks images for appropriateness and generates tags based on their content without custom application code.

You can now expand and improve this workflow by triggering it automatically each time an image is uploaded to the Amazon S3 bucket or by adding a manual approval step for submitted content. To find out more about Workflow Studio, visit the AWS Step Functions Developer Guide.
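
As one possible starting point for the automatic trigger, the following CDK sketch routes S3 "Object Created" events to the state machine through EventBridge. It assumes catalogBucket (created with EventBridge notifications enabled) and catalogStateMachine are defined elsewhere, and it reshapes the event into the { bucket, key } input that the workflow expects.

import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';

// Assumes catalogBucket has eventBridgeEnabled: true and catalogStateMachine exists
new events.Rule(this, 'ImageUploadedRule', {
  eventPattern: {
    source: ['aws.s3'],
    detailType: ['Object Created'],
    detail: { bucket: { name: [catalogBucket.bucketName] } },
  },
  targets: [
    new targets.SfnStateMachine(catalogStateMachine, {
      // Reshape the S3 event into the { bucket, key } input used in this walkthrough
      input: events.RuleTargetInput.fromObject({
        bucket: events.EventField.fromPath('$.detail.bucket.name'),
        key: events.EventField.fromPath('$.detail.object.key'),
      }),
    }),
  ],
});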

For more serverless learning resources, visit Serverless Land.