Tag Archives: serverless

Building well-architected serverless applications: Optimizing application costs

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-optimizing-application-costs/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the introduction post for a table of contents and explanation of the example application.

COST 1. How do you optimize your serverless application costs?

Design, implement, and optimize your application to maximize value. Asynchronous design patterns and performance practices ensure efficient resource use and directly impact the value per business transaction. By optimizing your serverless application performance and its code patterns, you can directly impact the value it provides, while making more efficient use of resources.

Serverless architectures are easier to manage in terms of correct resource allocation compared to traditional architectures. Due to its pay-per-value pricing model and scaling based on demand, a serverless approach effectively reduces the capacity planning effort. As covered in the operational excellence and performance pillars, optimizing your serverless application has a direct impact on the value it produces and its cost. For general serverless optimization guidance, see the AWS re:Invent talks, “Optimizing your Serverless applications” Part 1 and Part 2, and “Serverless architectural patterns and best practices”.

Required practice: Minimize external calls and function code initialization

AWS Lambda functions may call other managed services and third-party APIs. Functions may also use application dependencies that may not be suitable for ephemeral environments. Understanding and controlling what your function accesses while it runs can have a direct impact on value provided per invocation.

Review code initialization

I explain the Lambda initialization process with cold and warm starts in “Optimizing application performance – part 1”. Lambda reports the time it takes to initialize application code in Amazon CloudWatch Logs. As Lambda functions are billed by request and duration, you can use this to track costs and performance. Consider reviewing your application code and its dependencies to improve the overall execution time to maximize value.

You can take advantage of Lambda execution environment reuse to make external calls once and reuse the results across subsequent invocations. Use a time-to-live (TTL) mechanism inside your function handler code so you avoid unnecessary external calls, which add execution time, while still refreshing data before it becomes stale.
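
For example, here is a minimal sketch of caching a Parameter Store value outside the handler with a TTL; the CONFIG_PARAMETER environment variable and the 300-second TTL are illustrative assumptions. Warm invocations reuse the cached value instead of calling the service again:

import os
import time
import boto3

ssm = boto3.client("ssm")

# Cached outside the handler so that warm invocations can reuse the value
_cache = {"value": None, "expires": 0}
CACHE_TTL_SECONDS = 300

def get_config(parameter_name):
    # Only call Parameter Store again once the TTL has expired
    if time.time() > _cache["expires"]:
        response = ssm.get_parameter(Name=parameter_name)
        _cache["value"] = response["Parameter"]["Value"]
        _cache["expires"] = time.time() + CACHE_TTL_SECONDS
    return _cache["value"]

def lambda_handler(event, context):
    config = get_config(os.environ["CONFIG_PARAMETER"])
    # ... use config in the business logic ...
    return {"statusCode": 200}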

Review third-party application deployments and permissions

When using Lambda layers or applications provisioned by AWS Serverless Application Repository, be sure to understand any associated charges that these may incur. When deploying functions packaged as container images, understand the charges for storing images in Amazon Elastic Container Registry (ECR).

Ensure that your Lambda function only has access to what its application code needs. Regularly review your function’s usage patterns so you can factor in the cost of the other services it calls, such as Amazon S3 and Amazon DynamoDB.
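
As an illustration, AWS SAM policy templates can scope a function’s permissions to a single resource. This is a minimal sketch that assumes a hypothetical BookingTable resource defined elsewhere in the template:

Resources:
    ConfirmBooking:
        Type: AWS::Serverless::Function
        Properties:
            Handler: confirm.lambda_handler
            CodeUri: src/confirm-booking
            Policies:
                - DynamoDBWritePolicy:
                    TableName: !Ref BookingTable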

Required practice: Optimize logging output and its retention

Consider reviewing your application logging level. Ensure that logging output and log retention are appropriately set to your operational needs to prevent unnecessary logging and data retention. This gives you the minimum log retention needed to investigate operational and performance questions when necessary.

Emit and capture only what is necessary to understand and operate your component as intended.

With Lambda, any standard output statements are sent to CloudWatch Logs. Capture and emit business and operational events that are necessary to help you understand your function, its integration, and its interactions. Use a logging framework and environment variables to dynamically set a logging level. When applicable, sample debugging logs for a percentage of invocations.

In the serverless airline example used in this series, the booking service Lambda functions use Lambda Powertools as a logging framework with output structured as JSON.

Lambda Powertools is added to the Lambda functions as a shared Lambda layer in the AWS Serverless Application Model (AWS SAM) template. The layer ARN is stored in Systems Manager Parameter Store.

Parameters:
  SharedLibsLayer:
    Type: AWS::SSM::Parameter::Value<String>
    Description: Project shared libraries Lambda Layer ARN
Resources:
    ConfirmBooking:
        Type: AWS::Serverless::Function
        Properties:
            FunctionName: !Sub ServerlessAirline-ConfirmBooking-${Stage}
            Handler: confirm.lambda_handler
            CodeUri: src/confirm-booking
            Layers:
                - !Ref SharedLibsLayer
            Runtime: python3.7
…

The LOG_LEVEL and other Powertools settings are configured in the Globals section as Lambda environment variables for all functions.

Globals:
    Function:
        Environment:
            Variables:
                POWERTOOLS_SERVICE_NAME: booking
                POWERTOOLS_METRICS_NAMESPACE: ServerlessAirline
                LOG_LEVEL: INFO 
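
With those variables set, a handler might use the Powertools logger as in the following sketch; the booking details and return value are illustrative only:

from aws_lambda_powertools import Logger

# Service name and level are read from the POWERTOOLS_SERVICE_NAME and
# LOG_LEVEL environment variables set in the Globals section
logger = Logger()

@logger.inject_lambda_context
def lambda_handler(event, context):
    # Only emitted when LOG_LEVEL is set to DEBUG
    logger.debug("Received event", extra={"event": event})
    logger.info("Confirming booking")
    # ... business logic ...
    return {"status": "CONFIRMED"}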

For Amazon API Gateway, there are two types of logging in CloudWatch: execution logging and access logging. Execution logs contain information that you can use to identify and troubleshoot API errors. API Gateway manages the CloudWatch Logs, creating the log groups and log streams. Access logs contain details about who accessed your API and how they accessed it. You can create your own log group or choose an existing log group that could be managed by API Gateway.

Enable access logs, and selectively review the output format and request fields that might be necessary. For more information, see “Setting up CloudWatch logging for a REST API in API Gateway”.

API Gateway logging

Enable AWS AppSync logging, which uses CloudWatch to monitor and debug requests. You can configure two types of logging: request-level and field-level. For more information, see “Monitoring and Logging”.

AWS AppSync logging

Define and set a log retention strategy

Define a log retention strategy to satisfy your operational and business needs. Set log expiration for each CloudWatch log group as they are kept indefinitely by default.

For example, in the booking service AWS SAM template, log groups are explicitly created for each Lambda function with a parameter specifying the retention period.

Parameters:
    LogRetentionInDays:
        Type: Number
        Default: 14
        Description: CloudWatch Logs retention period
Resources:
    ConfirmBookingLogGroup:
        Type: AWS::Logs::LogGroup
        Properties:
            LogGroupName: !Sub "/aws/lambda/${ConfirmBooking}"
            RetentionInDays: !Ref LogRetentionInDays

The Serverless Application Repository application auto-set-log-group-retention can update the retention policy for new and existing CloudWatch log groups to a specified number of days.
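
If you prefer to apply a retention policy yourself, a short boto3 sketch such as the following can sweep existing log groups; the 14-day default here is an arbitrary example:

import boto3

logs = boto3.client("logs")

def set_retention(retention_in_days=14):
    # Apply a retention policy to every log group that doesn't have one yet
    paginator = logs.get_paginator("describe_log_groups")
    for page in paginator.paginate():
        for group in page["logGroups"]:
            if "retentionInDays" not in group:
                logs.put_retention_policy(
                    logGroupName=group["logGroupName"],
                    retentionInDays=retention_in_days,
                )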

For log archival, you can export CloudWatch Logs to S3 and store them in Amazon S3 Glacier for more cost-effective retention. You can use CloudWatch Logs subscriptions for custom processing, analysis, or loading to other systems. Lambda extensions allow you to process, filter, and route logs directly from Lambda to a destination of your choice.

Good practice: Optimize function configuration to reduce cost

Benchmark your function using different memory sizes

For Lambda functions, memory is the capacity unit for controlling the performance and cost of a function. You can configure the amount of memory allocated to a Lambda function, between 128 MB and 10,240 MB. The amount of memory also determines the amount of virtual CPU available to a function. Benchmark your AWS Lambda functions with differing amounts of memory allocated. Adding more memory and proportional CPU may lower the duration and reduce the cost of each invocation.
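
The following is a rough boto3 sketch of such a benchmark; the function name, payload, and memory sizes are placeholders, and the client-side round-trip time is only a proxy for the billed duration reported in CloudWatch Logs:

import json
import time
import boto3

lambda_client = boto3.client("lambda")

def benchmark(function_name, payload, memory_sizes=(128, 512, 1024, 2048)):
    results = {}
    for memory in memory_sizes:
        # Change the memory setting and wait for the update to complete
        lambda_client.update_function_configuration(
            FunctionName=function_name, MemorySize=memory
        )
        lambda_client.get_waiter("function_updated").wait(FunctionName=function_name)

        # Time a test invocation at this memory setting
        start = time.time()
        lambda_client.invoke(FunctionName=function_name, Payload=json.dumps(payload))
        results[memory] = round((time.time() - start) * 1000)
    return results

print(benchmark("my-test-function", {"test": True}))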

In “Optimizing application performance – part 2”, I cover using AWS Lambda Power Tuning to automate the memory testing process and balance performance and cost.

Best practice: Use cost-aware usage patterns in code

Reduce the time your function runs by reducing job-polling or task coordination. This avoids overpaying for unnecessary compute time.

Decide whether your application can fit an asynchronous pattern

Avoid scenarios where your Lambda functions wait for external activities to complete. I explain the difference between synchronous and asynchronous processing in “Optimizing application performance – part 1”. You can use asynchronous processing to aggregate queues, streams, or events for more efficient processing time per invocation. This reduces wait times and latency from requesting apps and functions.
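
For example, a function subscribed to an SQS queue receives a batch of messages per invocation, so records are aggregated and the per-invocation overhead is shared across them. A minimal handler sketch, where the booking payload and process function are illustrative:

import json

def lambda_handler(event, context):
    # Lambda delivers a batch of SQS messages in a single invocation
    for record in event["Records"]:
        booking = json.loads(record["body"])
        process(booking)

def process(booking):
    # ... business logic for a single booking ...
    pass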

Long polling or waiting increases the costs of Lambda functions and also reduces overall account concurrency. This can impact the ability of other functions to run.

Consider using other services such as AWS Step Functions to help reduce code and coordinate asynchronous workloads. You can build workflows using state machines that handle long-running waits and failure handling. Step Functions also supports direct service integrations, such as DynamoDB, without having to use Lambda functions.
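
As an illustration, a state machine task can write directly to DynamoDB through a service integration, with no Lambda function in the path. The table and attribute names in this Amazon States Language sketch are hypothetical:

"RecordBooking": {
    "Type": "Task",
    "Resource": "arn:aws:states:::dynamodb:putItem",
    "Parameters": {
        "TableName": "Booking",
        "Item": {
            "id": { "S.$": "$.bookingId" },
            "status": { "S": "CONFIRMED" }
        }
    },
    "End": true
}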

In the serverless airline example used in this series, Step Functions is used to orchestrate the Booking microservice. The ProcessBooking state machine handles all the necessary steps to create bookings, including payment.

Booking service state machine

To reduce costs and improve performance with CloudWatch, create custom metrics asynchronously. You can use the embedded metrics format to write logs, rather than calling the PutMetricData API. I cover using the embedded metrics format in “Understanding application health” – part 1 and part 2.
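
A structured log entry in the embedded metrics format only needs a JSON document with an _aws metadata object. This Python sketch uses the namespace and metric name from the example; the helper function and service dimension are illustrative:

import json
import time

def emit_booking_metric(count=1):
    # Writing this JSON document to standard output is enough for CloudWatch
    # to create the metric asynchronously from the log entry
    print(json.dumps({
        "_aws": {
            "Timestamp": int(time.time() * 1000),
            "CloudWatchMetrics": [{
                "Namespace": "ServerlessAirline",
                "Dimensions": [["service"]],
                "Metrics": [{"Name": "BookingSuccessful", "Unit": "Count"}],
            }],
        },
        "service": "booking",
        "BookingSuccessful": count,
    }))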

For example, once a booking is made, the logs are visible in the CloudWatch console. You can select a log stream and find the custom metric as part of the structured log entry.

Custom metric structured log entry

CloudWatch automatically creates metrics from these structured logs. You can create graphs and alarms based on them. For example, here is a graph based on a BookingSuccessful custom metric.

CloudWatch metrics custom graph

Consider asynchronous invocations and review runaway functions where applicable

Take advantage of Lambda’s event-based model. Lambda functions can be triggered based on events ingested into Amazon Simple Queue Service (SQS) queues, S3 buckets, and Amazon Kinesis Data Streams. AWS manages the polling infrastructure on your behalf with no additional cost. Avoid code that polls third-party software as a service (SaaS) providers. Use Amazon EventBridge to integrate with SaaS providers instead when possible.

Carefully consider and review recursion, and establish timeouts to prevent runaway functions.

Conclusion

Design, implement, and optimize your application to maximize value. Asynchronous design patterns and performance practices ensure efficient resource use and directly impact the value per business transaction. By optimizing your serverless application performance and its code patterns, you can reduce costs while making more efficient use of resources.

In this post, I cover minimizing external calls and function code initialization. I show how to optimize logging output with the embedded metrics format, and log retention. I recap optimizing function configuration to reduce cost and highlight the benefits of asynchronous event-driven patterns.

This post wraps up the series, building well-architected serverless applications, where I cover the AWS Well-Architected Tool with the Serverless Lens. See the introduction post for links to all the blog posts.

For more serverless learning resources, visit Serverless Land.

 

Building a serverless distributed application using a saga orchestration pattern

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-a-serverless-distributed-application-using-a-saga-orchestration-pattern/

This post is written by Anitha Deenadayalan, Developer Specialist SA, DevAx (Developer Acceleration).

This post shows how to use the saga design pattern to preserve data integrity in distributed transactions across multiple services. In a distributed transaction, multiple services can be called before a transaction is completed. When the services store data in different data stores, it can be challenging to maintain data consistency across these data stores.

To maintain consistency in a transaction, relational databases provide two-phase commit (2PC). This consists of a prepare phase and a commit phase. In the prepare phase, the coordinating process requests the transaction’s participating processes (participants) to promise to commit or rollback the transaction. In the commit phase, the coordinating process requests the participants to commit the transaction. If the participants cannot agree to commit in the prepare phase, then the transaction is rolled back.

In distributed systems architected with microservices, two-phase commit is not an option as the transaction is distributed across various databases. In this case, one solution is to use the saga pattern.

A saga consists of a sequence of local transactions. Each local transaction in the saga updates the database and triggers the next local transaction. If a transaction fails, then the saga runs compensating transactions to revert the database changes made by the preceding transactions.

There are two types of implementations for the saga pattern: choreography and orchestration.

Saga choreography

The saga choreography pattern depends on the events emitted by the microservices. The saga participants (microservices) subscribe to the events and act based on the event triggers. For example, the order service in the following diagram emits an OrderPlaced event. The inventory service subscribes to that event and updates the inventory when the OrderPlaced event is emitted. Similarly, the other participant services act based on the context of the emitted event.

Solution architecture

Saga orchestration

The saga orchestration pattern has a central coordinator called the orchestrator. The saga orchestrator manages and coordinates the entire transaction lifecycle. It is aware of the series of steps to be performed to complete the transaction. To run a step, it sends a message to the participant microservice to perform the operation. The participant microservice completes the operation and sends a message back to the orchestrator. Based on the received message, the orchestrator decides which microservice to run next in the transaction:

Saga orchestrator flow

You can use AWS Step Functions to implement the saga orchestration when the transaction is distributed across multiple databases.

Overview

This example uses a Step Functions workflow to implement the saga orchestration pattern, using the following architecture:

API Gateway to Lambda to Step Functions

When a customer calls the API, API Gateway invokes a Lambda function that performs any required pre-processing. The function then starts the Step Functions workflow to process the distributed transaction.

The Step Functions workflow calls the individual services for order placement, inventory update, and payment processing to complete the transaction. It sends an event notification for further processing. The Step Functions workflow acts as the orchestrator to coordinate the transactions. If there is any error in the workflow, the orchestrator runs the compensatory transactions to ensure that the data integrity is maintained across various services.

When pre-processing is not required, you can also trigger the Step Functions workflow directly from API Gateway without the Lambda function.

The Step Functions workflow

The following diagram shows the steps that are run inside the Step Functions workflow. The green boxes show the steps that are run successfully. The order is placed, inventory is updated, and payment is processed before a Success state is returned to the caller.

The orange boxes indicate the compensatory transactions that are run when any one of the steps in the workflow fails. If the workflow fails at the Update inventory step, then the orchestrator calls the Revert inventory and Remove order steps before returning a Fail state to the caller. These compensatory transactions ensure that data integrity is maintained: the inventory reverts to its original level and the order is removed.

Step Functions workflow

The preceding workflow is an example of a distributed transaction. The transaction data is stored across different databases and each service writes to its own database.

Prerequisites

For this walkthrough, you need:

Setting up the environment

For this walkthrough, use the AWS CDK code in the GitHub Repository to create the AWS resources. These include IAM roles, REST API using API Gateway, DynamoDB tables, the Step Functions workflow and Lambda functions.

  1. You need an AWS access key ID and secret access key for configuring the AWS Command Line Interface (AWS CLI). To learn more about configuring the AWS CLI, follow these instructions.
  2. Clone the repo:
    git clone https://github.com/aws-samples/saga-orchestration-netcore-blog
  3. After cloning, this is the directory structure:
    Directory structure
  4. The Lambda functions in the saga-orchestration directory must be packaged and copied to the cdk-saga-orchestration\lambdas directory before deployment. Run these commands to process the PlaceOrderLambda function:
    cd PlaceOrderLambda/src/PlaceOrderLambda 
    dotnet lambda package
    cp bin/Release/netcoreapp3.1/PlaceOrderLambda.zip ../../../../cdk-saga-orchestration/lambdas
    
  5. Repeat the same commands for all the Lambda functions in the saga-orchestration directory.
  6. Build the CDK code before deploying to the console:
    cd cdk-saga-orchestration/src/CdkSagaOrchestration
    dotnet build
    
  7. Install the aws-cdk package:
    npm install -g aws-cdk 
  8. The cdk synth command causes the resources defined in the application to be translated into an AWS CloudFormation template. The cdk deploy command deploys the stacks into your AWS account. Run:
    cd cdk-saga-orchestration
    cdk synth 
    cdk deploy
    
  9. CDK deploys the environment to AWS. You can monitor the progress using the CloudFormation console. The stack name is CdkSagaOrchestrationStack:
    CloudFormation console

The Step Functions configuration

The CDK creates the Step Functions workflow, DistributedTransactionOrchestrator. The following snippet defines the workflow with AWS CDK for .NET:

var stepDefinition = placeOrderTask
    .Next(new Choice(this, "Is order placed")
        .When(Condition.StringEquals("$.Status", "ORDER_PLACED"), updateInventoryTask
            .Next(new Choice(this, "Is inventory updated")
                .When(Condition.StringEquals("$.Status", "INVENTORY_UPDATED"),
                    makePaymentTask.Next(new Choice(this, "Is payment success")
                        .When(Condition.StringEquals("$.Status", "PAYMENT_COMPLETED"), successState)
                        .When(Condition.StringEquals("$.Status", "ERROR"), revertPaymentTask)))
                .When(Condition.StringEquals("$.Status", "ERROR"), waitState)))
        .When(Condition.StringEquals("$.Status", "ERROR"), failState));

Step Functions workflow

Compare the Amazon States Language definition for the state machine with the CDK definition above. Also observe the inputs and outputs for each step and how the conditions have been configured. The steps with type Task call a Lambda function for the processing. The steps with type Choice are decision-making steps that define the workflow.

Setting up the DynamoDB table

The Orders and Inventory DynamoDB tables are created using AWS CDK. The following snippet creates a DynamoDB table with AWS CDK for .NET:

var inventoryTable = new Table(this, "Inventory", new TableProps
{
    TableName = "Inventory",
    PartitionKey = new Attribute
    {
        Name = "ItemId",
        Type = AttributeType.STRING
    },
    RemovalPolicy = RemovalPolicy.DESTROY
});
  1. Open the DynamoDB console and select the Inventory table.
  2. Choose Create Item.
  3. Select Text, paste the following contents, then choose Save.
    {
      "ItemId": "ITEM001",
      "ItemName": "Soap",
      "ItemsInStock": 1000,
      "ItemStatus": ""
    }
    

    Create Item dialog

  4. Create two more items in the Inventory table:
    {
      "ItemId": "ITEM002",
      "ItemName": "Shampoo",
      "ItemsInStock": 500,
      "ItemStatus": ""
    }
    
    {
      "ItemId": "ITEM003",
      "ItemName": "Toothpaste",
      "ItemsInStock": 2000,
      "ItemStatus": ""
    }
    

The Lambda functions UpdateInventoryLambda and RevertInventoryLambda increment and decrement the ItemsInStock attribute value. The Lambda functions PlaceOrderLambda and UpdateOrderLambda insert and delete items in the Orders table. These are invoked by the saga orchestration workflow.

Triggering the saga orchestration workflow

The API Gateway endpoint, SagaOrchestratorAPI, is created using AWS CDK. To invoke the endpoint:

  1. From the API Gateway service page, select the SagaOrchestratorAPI:
    List of APIs
  2. Select Stages in the left menu panel:
    Stages menu
  3. Select the prod stage and copy the Invoke URL:
    Invoke URL
  4. From Postman, open a new tab. Select POST in the dropdown and enter the copied URL in the textbox. Move to the Headers tab and add a new header with the key ‘Content-Type’ and value as ‘application/json’:
    Postman configuration
  5. In the Body tab, enter the following input and choose Send.
    {
      "ItemId": "ITEM001",
      "CustomerId": "ABC/002",
      "MessageId": "",
      "FailAtStage": "None"
    }
    
  6. You see the output:
    Output
  7. Open the Step Functions console and view the execution. The graph inspector shows that the execution has completed successfully.
    Successful workflow execution
  8. Check the items in the DynamoDB tables, Orders & Inventory. You can see an item in the Orders table indicating that an order is placed. The ItemsInStock in the Inventory table has been deducted.
    Changes in DynamoDB tables
  9. To simulate the failure workflow in the saga orchestrator, send the following JSON as body in the Postman call. The FailAtStage parameter injects the failure in the workflow. Select Send in Postman after updating the Body:
    {
      "ItemId": "ITEM002",
      "CustomerId": "DEF/002",
      "MessageId": "",
      "FailAtStage": "UpdateInventory"
    }
    
  10. Open the Step Functions console to see the execution.
  11. While the function waits in the wait state, look at the items in the DynamoDB tables. A new item is added to the Orders table and the stock for Shampoo is deducted in the Inventory table.
    Changes in DynamoDB table
  12. Once the wait completes, the compensatory transaction steps are run:
    Workflow execution result
  13. In the graph inspector, select the Update Inventory step. On the right pane, click on the Step output tab. The status is ERROR, which changes the control flow to run the compensatory transactions.
    Step output
  14. Look at the items in the DynamoDB table again. The data is now back to a consistent state, as the compensatory transactions have run to preserve data integrity:
    DynamoDB table items

The Step Functions workflow implements the saga orchestration pattern. It performs the coordination across distributed services and runs the transactions. It also performs compensatory transactions to preserve the data integrity.

Cleaning up

To avoid incurring additional charges, clean up all the resources that have been created. Run the following command from a terminal window. This deletes all the resources that were created as part of this example.

cdk destroy

Conclusion

This post showed how to implement the saga orchestration pattern using API Gateway, Step Functions, Lambda, DynamoDB, and .NET Core 3.1. This can help maintain data integrity in distributed transactions across multiple services. Step Functions makes it easier to implement the orchestration in the saga pattern.

To learn more about developing microservices on AWS, refer to the whitepaper on microservices. To learn more about the features, refer to the AWS CDK Features page.

Building well-architected serverless applications: Optimizing application performance – part 3

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-optimizing-application-performance-part-3/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the introduction post for a table of contents and explanation of the example application.

PERF 1. Optimizing your serverless application’s performance

This post continues part 2 of this performance question. Previously, I looked at designing your function to take advantage of concurrency via asynchronous and stream-based invocations, and covered measuring, evaluating, and selecting optimal capacity units.

Best practice: Integrate with managed services directly over functions when possible

Consider using native integrations between managed services as opposed to AWS Lambda functions when no custom logic or data transformation is required. This can enable optimal performance, requires fewer resources to manage, and increases security. There are also a number of AWS application integration services that enable communication between decoupled components in microservice architectures.

Use native cloud services integration

When using Amazon API Gateway APIs, you can use the AWS integration type to connect to other AWS services natively. With this integration type, API Gateway uses Apache Velocity Template Language (VTL) and HTTPS to directly integrate with other AWS services.

Timeouts and errors must be managed by the API consumer. For more information on using VTL, see “Amazon API Gateway Apache Velocity Template Reference”. For an example application that uses API Gateway to read and write directly to/from Amazon DynamoDB, see “Building a serverless URL shortener app without AWS Lambda”.

API Gateway direct service integration

There is also a tutorial available, Build an API Gateway REST API with AWS integration.
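
As an illustration, a request mapping template for an API Gateway integration that calls DynamoDB GetItem could look like the following; the table and key names are hypothetical:

{
    "TableName": "Bookings",
    "Key": {
        "id": { "S": "$input.params('id')" }
    }
}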

When using AWS AppSync, you can use VTL and direct integrations with Amazon Aurora, Amazon Elasticsearch Service, and any publicly available HTTP endpoint. AWS AppSync can use multiple integration types and can maximize throughput at the data field level. For example, you can run full-text searches on the orderDescription field against Elasticsearch while fetching the remaining data from DynamoDB. For more information, see the AWS AppSync resolver tutorials.

In the serverless airline example used in this series, the catalog service uses AWS AppSync to provide a GraphQL API for searching flights. AWS AppSync uses DynamoDB as a database, and all compute logic is contained in the Apache Velocity Template (VTL).

Serverless airline catalog service using VTL

AWS Step Functions integrates with multiple AWS services using service integrations. For example, this allows you to fetch data from and put data into DynamoDB, or run an AWS Batch job. You can also publish messages to Amazon Simple Notification Service (SNS) topics, and send messages to Amazon Simple Queue Service (SQS) queues. For more details on the available integrations, see “Using AWS Step Functions with other services”.

Using Amazon EventBridge, you can connect your applications with data from a variety of sources. You can connect to various AWS services natively, and act as an event bus across multiple AWS accounts to ease integration. You can also use the API destination feature to route events to services outside of AWS. EventBridge handles the authentication, retries, and throughput. For more details on available EventBridge targets, see the documentation.

Amazon EventBridge

Good practice: Optimize access patterns and apply caching where applicable

Consider caching when clients may not require up-to-date data. Optimize access patterns to only fetch data that is necessary to end users. This improves the overall responsiveness of your workload and makes more efficient use of compute and data resources across components.

Implement caching for suitable access patterns

For REST APIs, you can use API Gateway caching to reduce the number of calls made to your endpoint and also improve the latency of requests to your API. When you enable caching for a stage or method, API Gateway caches responses for a specified time-to-live (TTL) period. API Gateway then responds to the request by looking up the endpoint response from the cache, instead of making a request to your endpoint.

API Gateway caching

For more information, see “Enabling API caching to enhance responsiveness”.

For geographically distributed clients, Amazon CloudFront or your third-party CDN can cache results at the edge, further reducing network round-trip latency.

For GraphQL APIs, AWS AppSync provides built-in server-side caching at the API level. This reduces the need to access data sources directly by making data available in a high-speed in-memory cache. This improves performance and decreases latency. For queries with common arguments or a restricted set of arguments, you can also enable caching at the resolver level to improve overall responsiveness. For more information, see “Improving GraphQL API performance and consistency with AWS AppSync Caching”.

When using databases, cache results and only connect to and fetch data when needed. This reduces the load on the downstream database and improves performance. Include a caching expiration mechanism to prevent serving stale records. For more information on caching implementation patterns and considerations, see “Caching Best Practices”.

For DynamoDB, you can enable caching with Amazon DynamoDB Accelerator (DAX). DAX enables you to benefit from fast in-memory read performance in microseconds, rather than milliseconds. DAX is suitable for use cases that may not require strongly consistent reads. Some examples include real-time bidding, social gaming, and trading applications. For more information, read “Use cases for DAX“.

For general caching purposes, Amazon ElastiCache provides a distributed in-memory data store or cache environment. ElastiCache supports a variety of caching patterns through key-value stores using the Redis and Memcached engines. Define what is safe to cache, even when using popular caching patterns like lazy caching or write-through. Set a TTL and eviction policy that fits your baseline performance and access patterns. This ensures that you don’t serve stale records or cache data that should have a strongly consistent read. For more information on ElastiCache caching and time-to-live strategies, see the documentation.

For additional serverless caching suggestions, see the AWS Serverless Hero blog post “All you need to know about caching for serverless applications”.

Reduce overfetching and underfetching

Overfetching is when a client downloads more data than it needs from a database or endpoint, leaving data in the response that you don’t use. Underfetching is not having enough data in the response, so the client must make additional requests to receive the data. Both overfetching and underfetching can affect performance.

To fetch a collection of items from a DynamoDB table, you can perform a query or a scan. A scan operation always scans the entire table or secondary index. It then filters out values to provide the result you want, essentially adding the extra step of removing data from the result set. A query operation finds items directly based on primary key values.

For faster response times, design your tables and indexes so that your applications can use query instead of scan. Use global secondary indexes (GSIs) in addition to composite sort keys to help you query hierarchical relationships in your data. For more information, see “Best Practices for Querying and Scanning Data”.
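
Here is a boto3 sketch of a query against a hypothetical CustomerBookings global secondary index; the table and attribute names are illustrative:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Bookings")

def get_bookings_for_customer(customer_id):
    # A query reads only the matching items, unlike a scan which reads
    # the whole table and filters afterwards
    response = table.query(
        IndexName="CustomerBookings",
        KeyConditionExpression=Key("customerId").eq(customer_id)
        & Key("bookingDate").begins_with("2021-"),
    )
    return response["Items"]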

Consider GraphQL and AWS AppSync for interactive web applications, mobile, real-time, or for use cases where data drives the user interface. AWS AppSync provides data fetching flexibility, which allows your client to query only for the data it needs, in the format it needs it. Ensure you do not make too many nested queries where a long response may result in timeouts. GraphQL helps you adapt access patterns as your workload evolves. This makes it more flexible as it allows you to move to purpose-built databases if necessary.

Compress payload and data storage

Some AWS services allow you to compress the payload or compress data storage. This can improve performance by sending and receiving less data, and can save on data storage, which can also reduce costs.

If your content supports deflate, gzip, or identity content encoding, API Gateway allows your client to call your API with compressed payloads. By default, API Gateway supports decompression of the method request payload. However, you must configure your API to enable compression of the method response payload. Compression in API Gateway and decompression in the client might increase overall latency and require more computing time. Run test cases against your API to determine an optimal value. For more information, see “Enabling payload compression for an API”.

Amazon Kinesis Data Firehose supports compressing streaming data using gzip, snappy, or zip. This minimizes the amount of storage used at the destination. The Amazon Kinesis Data Firehose FAQs has more information on compression. Kinesis Data Firehose also supports converting your streaming data from JSON to Apache Parquet or Apache ORC before storing the data in Amazon S3. Parquet and ORC are columnar data formats that save space and enable faster queries compared to row-oriented formats like JSON.

Conclusion

Evaluate and optimize your serverless application’s performance based on access patterns, scaling mechanisms, and native integrations. You can improve your overall experience and make more efficient use of the platform in terms of both value and resources.

In part 1, I cover measuring and optimizing function startup time. I explain cold and warm starts and how to reuse the Lambda execution environment to improve performance. I explain how only importing necessary libraries and dependencies increases application performance.

In part 2, I look at designing your function to take advantage of concurrency via asynchronous and stream-based invocations. I cover measuring, evaluating, and selecting optimal capacity units.

In this post, I look at integrating with managed services directly over functions when possible. I cover optimizing access patterns and applying caching where applicable.

In the next post in the series, I cover the cost optimization pillar from the Well-Architected Serverless Lens.

For more serverless learning resources, visit Serverless Land.

Increasing performance of Java AWS Lambda functions using tiered compilation

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/increasing-performance-of-java-aws-lambda-functions-using-tiered-compilation/

This post is written by Mark Sailes, Specialist Solutions Architect, Serverless and Richard Davison, Senior Partner Solutions Architect.

The Operating Lambda: Performance optimization blog series covers important topics for developers, architects, and system administrators who are managing applications using AWS Lambda functions. This post explains how you can reduce the initialization time to start a new execution environment when using the Java-managed runtimes.

Lambda lifecycle

Many Lambda workloads are designed to deliver fast responses to synchronous or asynchronous workloads in a fraction of a second. Examples of these could be public APIs to deliver dynamic content to websites or a near-real time data pipeline doing small batch processing.

As usage of these systems increases, Lambda creates new execution environments. When a new environment is created and used for the first time, additional work is done to make it ready to process an event. This creates two different performance profiles: one with and one without the additional work.

To improve the response time, you can minimize the effect of this additional work. One way you can minimize the time taken to create a new managed Java execution environment is to tune the JVM. It can be optimized specifically for these workloads that do not have long execution durations.

One example of this is configuring a feature of the JVM called tiered compilation. From version 8 of the Java Development Kit (JDK), the two just-in-time compilers C1 and C2 have been used in combination. C1 is designed for use on the client side and to enable short feedback loops for developers. C2 is designed for use on the server side and to achieve higher performance after profiling.

Tiering is used to determine which compiler to use to achieve better performance. These are represented as five levels:

Tiering levels

Profiling has an overhead, and performance improvements are only achieved after a method has been invoked a number of times, the default being 10,000. Lambda customers wanting to achieve faster startup times can use level 1 with little risk of reducing warm start performance. The article “Startup, containers & Tiered Compilation” explains tiered compilation further.

For customers who are doing highly repetitive processing, this configuration might not be suitable. Applications that repeat the same code paths many times benefit from letting the JVM profile and optimize those paths. Concrete examples would be using Lambda to run Monte Carlo simulations or hash calculations. You can run the same simulations thousands of times and the JVM profiling can reduce the total execution time significantly.

Performance improvements

The example project is a Java 11-based application used to analyze the impact of this change. The application is triggered by Amazon API Gateway and then puts an item into Amazon DynamoDB. To compare the performance difference caused by this change, there is one Lambda function with the additional changes and one without. There are no other differences in the code.

Download the code for this example project from the GitHub repo: https://github.com/aws-samples/aws-lambda-java-tiered-compilation-example.

To install prerequisite software:

  1. Install the AWS CDK.
  2. Install Apache Maven, or use your preferred IDE.
  3. Build and package the Java application in the software folder:
    cd software/ExampleFunction/
    mvn package
  4. Zip the execution wrapper script:
    cd ../OptimizationLayer/
    ./build-layer.sh
    cd ../../
  5. Synthesize CDK. This previews changes to your AWS account before it makes them:
    cd infrastructure
    cdk synth
  6. Deploy the Lambda functions:
    cdk deploy --outputs-file outputs.json

The API Gateway endpoint URL is displayed in the output and saved in a file named outputs.json. The contents are similar to:

InfrastructureStack.apiendpoint = https://{YOUR_UNIQUE_ID_HERE}.execute-api.eu-west-1.amazonaws.com

Using Artillery to load test the changes

First, install prerequisites:

  1. Install jq and Artillery Core.
  2. Run the following two scripts from the /infrastructure directory:
    artillery run -t $(cat outputs.json | jq -r '.InfrastructureStack.apiendpoint') -v '{ "url": "/without" }' loadtest.yml
    
    artillery run -t $(cat outputs.json | jq -r '.InfrastructureStack.apiendpoint') -v '{ "url": "/with" }' loadtest.yml

Check the results using Amazon CloudWatch Insights

  1. Navigate to Amazon CloudWatch.
  2. Select Logs then Logs Insights.
  3. Select the following two log groups from the drop-down list:
    /aws/lambda/example-with-layer
    /aws/lambda/example-without-layer
  4. Copy the following query and choose Run query:
        filter @type = "REPORT"
        | parse @log /\d+:\/aws\/lambda\/example-(?<function>\w+)-\w+/
        | stats
        count(*) as invocations,
        pct(@duration, 0) as p0,
        pct(@duration, 25) as p25,
        pct(@duration, 50) as p50,
        pct(@duration, 75) as p75,
        pct(@duration, 90) as p90,
        pct(@duration, 95) as p95,
        pct(@duration, 99) as p99,
        pct(@duration, 100) as p100
        group by function, ispresent(@initDuration) as coldstart
        | sort by function, coldstart
    

    Query window

You see results similar to:

Query results

Here is a simplified table of the results:

Settings                                 Type         # of invocations   p90 (ms)   p95 (ms)   p99 (ms)
Default settings                         Cold start   754                5,212      5,338      5,517
Default settings                         Warm start   35,247             58         93         255
Tiered compilation stopping at level 1   Cold start   383                2,071      2,086      2,221
Tiered compilation stopping at level 1   Warm start   35,618             23         32         86

The results are from testing 120 concurrent requests over 5 minutes using an open-source software project called Artillery. You can find instructions on how to run these tests in the GitHub repo. The results show that for this application, cold starts for 90% of invocations improve by 3141 ms (60%). These numbers are specific for this application and your application may behave differently.

Using wrapper scripts for Lambda functions

Wrapper scripts are a feature available in Java 8 and Java 11 on Amazon Linux 2 managed runtimes. They are not available for the Java 8 on Amazon Linux 1 managed runtime.

To apply this optimization flag to Java Lambda functions, create a wrapper script and add it to a Lambda layer zip file. This script alters the JVM flags that Java is started with inside the execution environment.

#!/bin/sh
# Drop the first argument, the path to the original runtime executable
shift
# The JVM reads additional startup flags from _JAVA_OPTIONS
export _JAVA_OPTIONS="-XX:+TieredCompilation -XX:TieredStopAtLevel=1"
# Start Java with the remaining arguments passed by Lambda
java "$@"

Read the documentation to learn how to create and share a Lambda layer.

Console walkthrough

This change can be configured using AWS Serverless Application Model (AWS SAM), the AWS Command Line Interface (AWS CLI), AWS CloudFormation, or from within the AWS Management Console.

Using the AWS Management Console:

  1. Navigate to the AWS Lambda console.
  2. Select Functions and choose the Lambda function to add the layer to.
    Lambda functions
  3. The Code tab is selected by default. Scroll down to the Layers panel.
  4. Select Add a layer.
    Add a layer
  5. Select Custom layers and choose your layer.
    Add layer
  6. Select the Version. Choose Add.
  7. From the menu, select the Configuration tab and Environment variables. Choose Edit.
    Configuration tab
  8. Choose Add environment variable. Add the following:
    – Key: AWS_LAMBDA_EXEC_WRAPPER
    – Value: /opt/java-exec-wrapper
    Edit environment variables
  9. Choose Save. You can verify that the changes are applied by invoking the function and viewing the log events. The log line Picked up _JAVA_OPTIONS: -XX:+TieredCompilation -XX:TieredStopAtLevel=1 is added.
    Log events

Conclusion

Tiered compilation stopping at level 1 reduces the time the JVM spends optimizing and profiling your code. This can help reduce startup times for Java applications that require fast responses, where the workload doesn’t meet the requirements to benefit from profiling.

You can make further reductions in startup time using GraalVM. Read more about GraalVM and the Quarkus framework in the architecture blog. View the code example at https://github.com/aws-samples/aws-lambda-java-tiered-compilation-example to see how you can apply this to your Lambda functions.

For more serverless learning resources, visit Serverless Land.

Building a serverless GIF generator with AWS Lambda: Part 1

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-a-serverless-gif-generator-with-aws-lambda-part-1/

Many video streaming services show GIF animations in the frontend when users fast forward and rewind throughout a video. This helps customers see a preview and makes the user interface more intuitive.

Generating these GIF files is a compute-intensive operation that becomes more challenging to scale when there are many videos. Over a typical 2-hour movie with GIF previews every 30 seconds, you need 240 separate GIF animations to support this functionality.

In a server-based solution, you can use a library like FFmpeg to load the underlying MP4 file and create the GIF exports. However, this may be a serial operation that slows down for longer videos or when there are many different videos to process. For a service providing thousands of videos to customers, the encoding process must keep pace with the number of videos.

In this two-part blog post, I show how you can use a serverless approach to create a scalable GIF generation service. I explain how you can use parallelization in AWS Lambda-based workloads to reduce processing time significantly.

The example application uses the AWS Serverless Application Model (AWS SAM), enabling you to deploy the application more easily in your own AWS account. This walkthrough creates some resources covered in the AWS Free Tier but others incur cost. To set up the example, visit the GitHub repo and follow the instructions in the README.md file.

Overview

To show the video player functionality, visit the demo front end. This loads the assets for an example video that is already processed. In the example, there are GIF animations every 30 seconds and freeze frames for every second of video. Move the slider to load the frame and GIF animation at that point in the video:

Serverless GIF generator demo

  1. The demo site defaults to an existing video. After deploying the backend application, you can test your own videos here.
  2. Move the slider to select a second in the video playback. This simulates a typical playback bar used in video application frontends.
  3. This displays the frame at the chosen second, which is a JPG file created by the backend application.
  4. The GIF animation for the selected playback point is a separate GIF file, created by the backend application.

Comparing server-based and serverless solutions

The example application uses FFmpeg, an open source application to record and process video. FFmpeg creates the GIF animations and individual frames for each second of video. You can use FFmpeg directly from a terminal or in scripts and applications. In comparing the performance, I use the AWS re:Invent 2019 keynote video, which is almost 3 hours long.

To show the server-based approach, I use a Node.js application in the GitHub repo. This loops through the video length in 30-second increments, calling FFmpeg with the necessary command line parameters:

const main = async () => {
	const length = 10323
	const inputVideo = 'test.mp4'
	const ffTmp = './output'
	const snippetSize = 30
 	const baseFilename = inputVideo.split('.')[0]

	console.time('task')
	for (let start = 0; start < length; start += snippetSize) {
		const gifName = `${baseFilename}-${start}.gif`
		const end = start + snippetSize -1

		console.log('Now creating: ', gifName)
		// Generates gif in local tmp
		await execPromise(`${ffmpegPath} -loglevel error -ss ${start} -to ${end} -y -i "${inputVideo}" -vf "fps=10,scale=240:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse" -loop 0 ${ffTmp}/${gifName}`)
		await execPromise(`${ffmpegPath} -loglevel error -ss ${start} -to ${end} -i "${inputVideo}" -vf fps=1 ${ffTmp}/${baseFilename}-${start}-frame-%d.jpg`)
	}
	console.timeEnd('task')
}

Running this script from my local development machine, it takes almost 21 minutes to complete the operation and creates over 10,000 output files.

Script output

After deploying the example serverless application in the GitHub repo, I upload the video to the source Amazon S3 bucket using the AWS CLI:

aws s3 cp ./reinvent.mp4 s3://source-bucket-name --acl public-read

The completed S3 PutObject operation starts the GIF generation process. The function that processes the GIF files emits the following metrics on the Monitor tab in the Lambda console:

Lambda function metrics

  1. There are 345 invocations to process the source file into 30-second GIF animations.
  2. The average duration for each invocation is 4,311 ms. The longest is 9,021 ms.
  3. No errors occurred in processing the video.
  4. Lambda scaled up to 344 concurrent execution environments.

After approximately 10 seconds, the conversion is complete and there are over 10,000 objects in the application’s output S3 bucket:

Test results

The main reason that the task duration is reduced from nearly 21 minutes to around 10 seconds is parallelization. In the server-based approach, each 30-second GIF is processed sequentially. In the Lambda-based solution, all of the 30-second clips are generated in parallel, at around the same time:

Server-based vs Lambda-based

Solution architecture

The example application uses the following serverless architecture:

Solution architecture

  1. When the original MP4 video is put into the source S3 bucket, it invokes the first Lambda function.
  2. The Snippets Lambda function detects the length of the video. It determines the number of 30-second snippets and then puts events onto the default event bus. There is one event for each snippet.
  3. An Amazon EventBridge rule matches for events created by the first Lambda function. It invokes the second Lambda function.
  4. The Process MP4 Lambda function receives the event as a payload. It loads the original video using FFmpeg to generate the GIF and per-second frames.
  5. The resulting files are stored in the output S3 bucket.

The first Lambda function uses the following code to detect the length of the video and create events for EventBridge:

const createSnippets = async (record) => {
	// Get signed URL for source object
	const params = {
		Bucket: record.s3.bucket.name, 
		Key: record.s3.object.key, 
		Expires: 300
	}
	const url = s3.getSignedUrl('getObject', params)

	// Get length of source video
	const metadata = await ffProbe(url)
	const length = metadata.format.duration
	console.log('Length (seconds): ', length)

	// Build the events array to send to EventBridge
	const items = []
	const snippetSize = parseInt(process.env.SnippetSize)

	for (let start = 0; start < length; start += snippetSize) {
		items.push({
			key: record.s3.object.key,
			start,
			end: (start + snippetSize - 1),
			length,
			tsCreated: Date.now()
		})
	}
	// Send events to EventBridge
	await writeBatch(items)
}

The eventbridge.js file contains a function that sends the event array to the default bus in EventBridge. It uses the putEvents method in the EventBridge JavaScript SDK to send events in batches of 10:

const writeBatch = async (items) => {

    console.log('writeBatch items: ', items.length)

    for (let i = 0; i < items.length; i += BATCH_SIZE ) {
        const tempArray = items.slice(i, i + BATCH_SIZE)

        // Create new params array
        const paramsArray = tempArray.map((item) => {
            return {
                DetailType: 'newVideoCreated',
                Source: 'custom.gifGenerator',
                Detail: JSON.stringify ({
                    ...item
                })
            }
        })

        // Create params object for the EventBridge putEvents call
        const params = {
            Entries: paramsArray
        }

        // Send the batch of events to EventBridge
        const result = await eventbridge.putEvents(params).promise()
        console.log('Result: ', result)
    }
}

The second Lambda function is invoked by an EventBridge rule, matching on the Source and DetailType values. Both the Lambda function and the rule are defined in the AWS SAM template:

  GifsFunction:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: gifsFunction/
      Handler: app.handler
      Runtime: nodejs14.x
      Timeout: 30
      MemorySize: 4096
      Layers:
        - !Ref LayerARN
      Environment:
        Variables:
          GenerateFrames: !Ref GenerateFrames
          GifsBucketName: !Ref GifsBucketName
          SourceBucketName: !Ref SourceBucketName
      Policies:
        - S3ReadPolicy:
            BucketName: !Ref SourceBucketName
        - S3CrudPolicy:
            BucketName: !Ref GifsBucketName
      Events:
        Trigger:
          Type: EventBridgeRule 
          Properties:
            Pattern:        
              source:
                - custom.gifGenerator
              detail-type:
                - newVideoCreated        

This Lambda function receives an event payload specifying the start and end time for the video clip it should process. The function then calls the FFmpeg application to generate the output files. It stores these in the local /tmp storage before uploading to the output S3 bucket:

const processMP4 = async (event) => {
    // Get settings from the incoming event
    const originalMP4 = event.detail.Key 
    const start =  event.detail.start
    const end =  event.detail.end

    // Get signed URL for source object
    const params = {
        Bucket: process.env.SourceBucketName, 
        Key: originalMP4, 
        Expires
    }
    const url = s3.getSignedUrl('getObject', params)
    console.log('processMP4: ', { url, originalMP4, start, end })

    console.log('Create GIF')
    const baseFilename = params.Key.split('.')[0]
    
    // Create GIF
    const gifName = `${baseFilename}-${start}.gif`
    // Generates gif in local tmp
    await execPromise(`${ffmpegPath} -loglevel error -ss ${start} -to ${end} -y -i "${url}" -vf "fps=10,scale=240:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse" -loop 0 ${ffTmp}/${gifName}`)
    // Generate frames
    if (process.env.GenerateFrames === 'true') {    
        console.log('Capturing frames')
        await execPromise(`${ffmpegPath} -loglevel error -ss ${start} -to ${end} -i "${url}" -vf fps=1 ${ffTmp}/${baseFilename}-${start}-frame-%d.jpg`)
    }

    // Upload all generated files
    await uploadFiles(`${baseFilename}/`)

    // Cleanup temp files
    await tmpCleanup()
}

Creating and using the FFmpeg Lambda layer

FFmpeg uses operating system-specific binaries that may be different on your development machine from the Lambda execution environment. The easiest way to test the code on a local machine and deploy to Lambda with the appropriate binaries is to use a Lambda layer.

As described in the example application’s README file, you can create the FFmpeg Lambda layer by deploying the ffmpeg-lambda-layer application in the AWS Serverless Application Repository. After deployment, the Layers menu in the Lambda console shows the new layer. Copy the version ARN and use this as a parameter in the AWS SAM deployment:

ffmpeg Lambda layer

On your local machine, download and install the FFmpeg binaries for your operating system. The package.json file for the Lambda functions uses two npm installer packages to help ensure the Node.js code uses the correct binaries when tested locally:

{
  "name": "gifs",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "James Beswick",
  "license": "MIT-0",
  "dependencies": {
  },
  "devDependencies": {
    "@ffmpeg-installer/ffmpeg": "^1.0.20",
    "@ffprobe-installer/ffprobe": "^1.1.0",
    "aws-sdk": "^2.927.0"
  }
}

Any npm packages specified in the devDependencies section of package.json are not deployed to Lambda by AWS SAM. As a result, local testing uses local ffmpeg binaries and Lambda uses the previously deployed Lambda layer. This approach also helps to reduce the size of deployment uploads and can help make the testing and deployment cycle faster in development.

Using FFmpeg from a Lambda function

FFmpeg is an application that’s generally used via a terminal. It takes command line parameters as options to determine input files, output locations, and the type of processing needed. To use terminal commands from Lambda, both functions use the asynchronous child_process module in Node.js. The execPromise function wraps this module in a Promise so the main function can use async/await syntax:

const { exec } = require('child_process')

const execPromise = async (command) => {
    console.log(command)
    return new Promise((resolve, reject) => {
        const ls = exec(command, function (error, stdout, stderr) {
          if (error) {
            console.error('Error: ', error)
            reject(error)
          }
          if (stdout) console.log('stdout: ', stdout)
          if (stderr) console.error('stderr: ', stderr)
        })
        
        ls.on('exit', (code) => {
          console.log('execPromise finished with code ', code)
          resolve()
        })
    })
}

As a result, you can call FFmpeg by constructing a command line with the required option parameters and passing it to execPromise:

await execPromise(`${ffmpegPath} -loglevel error -ss ${start} -to ${end} -i "${url}" -vf fps=1 ${ffTmp}/${baseFilename}-${start}-frame-%d.jpg`)

Alternatively, you can also use the fluent-ffmpeg npm library, which exposes the command line options as methods. The example application uses this in the ffmpeg-promisify.js file:

const ffmpeg = require("fluent-ffmpeg")

ffmpeg(source)
  .noAudio()
  .size(`${IMG_WIDTH}x${IMG_HEIGHT}`)
  .setStartTime(startFormatted)
  .setDuration(snippetSize - 1)
  .output(outputFile)
  .on("end", async (err) => {
    // do work
  })
  .on("error", function (err) {
    console.error('FFMPEG error: ', err)
  })
  .run()

Deploying the application

In the GitHub repository, there are detailed deployment instructions for the example application. The repo contains separate directories for the demo frontend application, the server-based script, and the two versions of the backend service.

After deployment, you can test the application by uploading an MP4 video to the source S3 bucket. The output GIF and JPG files are written to the application’s destination S3 bucket. The files from each MP4 file are grouped in a folder in the bucket:

S3 bucket contents

Frontend application

The frontend application allows you to visualize the outputs of the backend application. There is also a hosted version of this application. This accepts custom parameters to load graphics resources from S3 buckets in your AWS account. Alternatively, you can run the frontend application locally.

To launch the frontend application:

  1. After cloning the repo, change to the frontend directory.
  2. Run npm install to install Vue.js and all the required npm modules from package.json.
  3. Run npm run serve to start the development server. After building the modules in the project, the terminal shows the local URL where the application is running:
    Terminal output
  4. Open a web browser and navigate to http://localhost:8080 to see the application:
    Localhost browser application

Conclusion

In part 1 of this blog post, I explain how a GIF generation service can support a front-end application for video streaming. I compare the performance of a server-based and serverless approach and show how parallelization can significantly improve processing time. I walk through the solution architecture used in the example application and show how you can use FFmpeg in Lambda functions.

Part 2 covers advanced topics around this implementation. It explains the scaling behavior and considers alternative approaches, and looks at the cost of using this service.

For more serverless learning resources, visit Serverless Land.

Toyota Connected and AWS Design and Deliver Collision Assistance Application

Post Syndicated from Srikanth Kodali original https://aws.amazon.com/blogs/architecture/toyota-connected-and-aws-design-and-deliver-collision-assistance-application/

This post was cowritten by Srikanth Kodali, Sr. IoT Data Architect at AWS, and Will Dombrowski, Sr. Data Engineer at Toyota Connected

Toyota Connected North America (TC) is a technology/big data company that partners with Toyota Motor Corporation and Toyota Motor North America to develop products that aim to improve the driving experience for Toyota and Lexus owners.

TC’s Mobility group provides backend cloud services that are built and hosted in AWS. Together, TC and AWS engineers designed, built, and delivered their new Collision Assistance product, which debuted in early August 2021.

In the aftermath of an accident, Collision Assistance offers Toyota and Lexus drivers instructions to help them navigate a post-collision situation. This includes documenting the accident, filing an insurance claim, and transitioning to the repair process.

In this blog post, we’ll talk about how our team designed, built, refined, and deployed the Collision Assistance product with Serverless on AWS services. We’ll discuss our goals in developing this product and the architecture we developed based on those goals. We’ll also present issues we encountered when testing our initial architecture and how we resolved them to create the final product.

Building a scalable, affordable, secure, and high performing product

We used a serverless architecture because it is often less complex than other architecture types. Our goals in developing this initial architecture were to achieve scalability, affordability, security, and high performance, as described in the following sections.

Scalability and affordability

In our initial architecture, Amazon Simple Queue Service (Amazon SQS) queues, Amazon Kinesis streams, and AWS Lambda functions allow data pipelines to run servers only when they’re needed, which introduces cost savings. They also process data in smaller units and run them in parallel, which allows data pipelines to scale up efficiently to handle peak traffic loads. These services allow for an architecture that can handle non-uniform traffic without needing additional application logic.

Security

Collision Assistance can deliver information to customers via push notifications. This data must be encrypted because many data points the application collects are sensitive, like geolocation.

To secure this data outside our private network, we use Amazon Simple Notification Service (Amazon SNS) as our delivery mechanism. Amazon SNS provides HTTPS endpoint delivery of messages coming to topics and subscriptions. AWS allows us to enable at-rest and/or in-transit encryption for all of our other architectural components as well.

Performance

To quantify our product’s performance, we review the “notification delay.” This metric evaluates the time between the initial collision and when the customer receives a push notification from Collision Assistance. Our ultimate goal is to have the push notification sent within minutes of a crash, so drivers have this information in near real time.

Initial architecture

Figure 1 presents our initial architecture implementation that aims to predict whether a crash has occurred and reduce false positives through the following data pipeline:

  1. The Kinesis stream receives vehicle data from an upstream ingestion service, as discussed in the Enhancing customer safety by leveraging the scalable, secure, and cost-optimized Toyota Connected Data Lake blog.
  2. A Lambda function writes lookup data to Amazon DynamoDB for every Kinesis record.
  3. This Lambda function filters out obvious non-crash data. If the current record (X) exceeds a certain threshold, it remains a crash candidate, and the function sends it to Amazon SQS.
  4. Amazon SQS sets a delivery delay so that there will be more Kinesis/DynamoDB records available when X is processed later in the pipeline.
  5. A second Lambda function reads the data from the SQS message. It queries DynamoDB to find the Kinesis lookup data for the message before (X-1) and after (X+1) the crash candidate.
  6. Kinesis GetRecords retrieves X-1 and X+1, because X+1 will exist after the SQS delivery delay times out.
  7. The X-1, X, and X+1 messages are sent to the data science (DS) engine.
  8. When a crash is accurately predicted, these results are stored in a DynamoDB table.
  9. The push notification is sent to the vehicle owner. (Note: the push notification is still in ‘select testing phase’)
Diagram and description of our initial architecture implementation

Figure 1. Diagram and description of our initial architecture implementation

To be consistent with privacy best practices and reduce server uptime, this architecture uses the minimum amount of data the DS engine needs.

We filter out records that are lower than extremely low thresholds. Once these records are filtered out, around 40% of the data fits the criteria to be evaluated further. This reduces the server capacity needed by the DS engine by 60%.

To reduce false positives, we gather data before and after the timestamps where the extremely low thresholds are exceeded. We then evaluate the sensor data across this timespan and discard any sets with patterns of abnormal sensor readings or other false positive conditions. Figure 2 shows the time window we initially used.

Longitudinal acceleration versus time

Figure 2. Longitudinal acceleration versus time

Adjusting our initial architecture for better performance

Our initial design worked well for processing a few sample messages and achieved the desired near real-time delivery of the push notification. However, when the pipeline was enabled for over 1 million vehicles, certain limits were exceeded, particularly for Kinesis and Lambda integrations:

  • Our Kinesis GetRecords API exceeded the allowed five requests per shard per second. With each crash candidate retrieving an X-1 and X+1 message, we could only evaluate two per shard per second, which isn’t cost effective.
  • Additionally, the downstream SQS-reading Lambda function was limited to processing 10 records per invocation. This meant any slowdown that occurs downstream, such as during DS engine processing, could cause the queue to back up significantly.

To improve cost and performance for the Kinesis-related functionality, we abandoned the DynamoDB lookup table and the GetRecord calls in favor of using a Redis cache cluster on Amazon ElastiCache. This allows us to avoid all throughput exceptions from Kinesis and focus on scaling the stream based on the incoming throughput alone. The ElastiCache cluster scales capacity by adding or removing shards, which improves performance and cost efficiency.

To solve the Amazon SQS/Lambda integration issue, we funneled messages directly to an additional Kinesis stream. This allows the final Lambda function to use some of the better scaling options provided to Kinesis-Lambda event source integrations, like larger batch sizes and max-parallelism.

After making these adjustments, our tests proved we could scale to millions of vehicles as needed. Figure 3 shows a diagram of this final architecture.

Final architecture

Figure 3. Final architecture

Conclusion

Engineers across many functions worked closely to deliver the Collision Assistance product.

Our team of backend Java developers, infrastructure experts, and data scientists from TC and AWS built and deployed a near real-time product that helps Toyota and Lexus drivers document crash damage, file an insurance claim, and get updates on the actual repair process.

The managed services and serverless components available on AWS provided TC with many options to test and refine our team’s architecture. This helped us find the best fit for our use case. Having this flexibility in design was a key factor in designing and delivering the best architecture for our product.

 

Sending mobile push notifications and managing device tokens with serverless applications

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/sending-mobile-push-notifications-and-managing-device-tokens-with-serverless-application/

This post is written by Rafa Xu, Cloud Architect, Serverless and Joely Huang, Cloud Architect, Serverless.

Amazon Simple Notification Service (SNS) is a fast, flexible, fully managed push messaging service in the cloud. SNS can send mobile push notifications directly to applications on mobile devices such as message alerts and badge updates. SNS sends push notifications to a mobile endpoint created by supplying a mobile token and platform application.

When publishing mobile push notifications, a device token is used to generate an endpoint. This identifies where the push notification is sent (target destination). To push notifications successfully, the token must be up to date and the endpoint must be validated and enabled.

A common challenge when pushing notifications is keeping the token up to date. Tokens can automatically change due to reasons such as mobile operating system (OS) updates and application store updates.

This post provides a serverless solution to this challenge. It also provides a way to publish push notifications to specific end users by maintaining a mapping between users, endpoints, and tokens.

Overview

To publish mobile push notifications using SNS, generate an SNS endpoint to use as a destination target for the push notification. To create the endpoint, you must supply:

  1. A mobile application token: The mobile operating system (OS) issues the token to the application. It is a unique identifier for the application and mobile device pair.
  2. Platform Application Amazon Resource Name (ARN): SNS provides this ARN when you create a platform application object. The platform application object requires a valid set of credentials issued by the mobile platform, which you provide to SNS.

Once the endpoint is generated, you can store and reuse it for subsequent notifications. This prevents the application from creating endpoints indefinitely, which could exhaust the SNS endpoint limit.

To reuse the endpoints and successfully push notifications, there are a number of challenges:

  • Mobile application tokens can change due to a number of reasons, such as application updates. As a result, the publisher must update the platform endpoint to ensure it uses an up-to-date token.
  • Mobile application tokens can become invalid. When this happens, messages won’t be published, and SNS disables the endpoint with the invalid token. To resolve this, publishers must retrieve a valid token and re-enable the platform endpoint.
  • Mobile applications can have many users, each user could have multiple devices, or one device could have multiple users. To send a push notification to a specific user, a mapping between the user, device, and platform endpoints should be maintained.

For more information on best practices for managing mobile tokens, refer to this post.

Follow along with this blog post to learn how to implement a serverless workflow for managing and maintaining valid endpoints and user mappings.

Solution overview

The solution uses the following AWS services:

  • Amazon API Gateway: Provides a token registration endpoint URL used by the mobile application. Once called, it invokes an AWS Lambda function via the Lambda integration.
  • Amazon SNS: Generates and maintains the target endpoint and manages platform application objects.
  • Amazon DynamoDB: Serverless database for storing endpoints that also maintains a mapping between the user, endpoint, and mobile operating system.
  • AWS Lambda: Retrieves endpoints from DynamoDB, validates and generates endpoints, and publishes notifications by making requests to SNS.

The following diagram represents a simplified interaction flow between the AWS services:

Solution architecture

To register the token, the mobile app invokes the registration token endpoint URL generated by Amazon API Gateway. The token registration happens every time a user logs in or opens the application. This ensures that the token and endpoints are always valid during the application usage.

The mobile application passes the token, user, and mobileOS as parameters to API Gateway, which forwards the request to the Lambda function.

The Lambda function validates the token and endpoint for the user by making API calls to DynamoDB and SNS:

  1. The Lambda function checks DynamoDB to see if the endpoint has been previously created.
    1. If the endpoint does not exist, it creates a platform endpoint via SNS.
  2. Obtain the endpoint attributes from SNS:
    1. Check the “enabled” endpoint attribute and set to “true” to enable the platform endpoint, if necessary.
    2. Validate the “token” endpoint attribute with the token provided in the API Gateway request. If it does not match, update the “token” attribute.
    3. Send a request to SNS to update the endpoint attributes.
  3. If a new endpoint is created, update DynamoDB with the new endpoint.
  4. Return a successful response to API Gateway.

Deploying the AWS Serverless Application Model (AWS SAM) template

Use the AWS SAM template to deploy the infrastructure for this workflow. Before deploying the template, first create a platform application in SNS.

  1. Navigate to the SNS console. Select Push Notifications on the left-hand menu to create a platform application:
    Mobile push notifications
  2. This shows the creation of a platform application for iOS applications:
    Create platform application
  3. To install AWS SAM, visit the installation page.
  4. To deploy the AWS SAM template, navigate to the directory where the template is located. Run the commands in the terminal:
    git clone https://github.com/aws-samples/serverless-mobile-push-notification
    cd serverless-mobile-push-notification
    sam build
    sam deploy --guided

Lambda function code snippets

The following section explains code from the Lambda function for the workflow.

Create the platform endpoint

If the endpoint exists, store it as a variable in the code. If the platform endpoint does not exist in the DynamoDB database, create a new endpoint:

        need_update_ddb = False
        response = table.get_item(Key={'username': username, 'appos': appos})
        if 'Item' not in response:
            # create endpoint
            response = snsClient.create_platform_endpoint(
                PlatformApplicationArn=SUPPORTED_PLATFORM[appos],
                Token=token,
            )
            devicePushEndpoint = response['EndpointArn']
            need_update_ddb = True
        else:
            # reuse the previously stored endpoint
            devicePushEndpoint = response['Item']['endpoint']

Check and update endpoint attributes

Check that the token attribute for the platform endpoint matches the token received from the mobile application through the request. This also checks for the endpoint “enabled” attribute and re-enables the endpoint if necessary:

        response = snsClient.get_endpoint_attributes(
            EndpointArn=devicePushEndpoint
        )
        endpointAttributes = response['Attributes']

        previousToken = endpointAttributes['Token']
        previousStatus = endpointAttributes['Enabled']
        if previousStatus.lower() != 'true' or previousToken != token:
            snsClient.set_endpoint_attributes(
                EndpointArn=devicePushEndpoint,
                Attributes={
                    'Token': token,
                    'Enabled': 'true'
                }
            )

Update the DynamoDB table with the newly generated endpoint

If a platform endpoint is newly created, meaning there is no item in the DynamoDB table, create a new item in the table:

        if need_update_ddb:
            table.update_item(
                Key={
                    'username': username,
                    'appos': appos
                },
                UpdateExpression="set endpoint=:e",
                ExpressionAttributeValues={
                    ':e': devicePushEndpoint
                },
                ReturnValues="UPDATED_NEW"
            )

As a best practice, the code cleans up the table in case there are multiple entries for the same endpoint mapped to different users. This can happen when the mobile application is used by multiple users on the same device. When one user logs out and a different user logs in, this creates a new entry in the DynamoDB table to map the endpoint with the new user.

As a result, you must remove the entry that maps the same endpoint to the previously logged in user. This way, you only keep the endpoint that matches the user provided by the mobile application through the request.

        # Key is imported from boto3.dynamodb.conditions
        result = table.query(
            # Add the name of the index you want to use in your query.
            IndexName="endpoint-index",
            KeyConditionExpression=Key('endpoint').eq(devicePushEndpoint),
        )
        for item in result['Items']:
            if item['username'] != username and item['appos'] == appos:
                print(f"deleting orphan item: username {item['username']}, os {appos}")
                table.delete_item(
                    Key={
                        'username': item['username'],
                        'appos': appos
                    },
                )

Conclusion

This blog shows how to deploy a serverless solution for validating and managing SNS platform endpoints and tokens. To publish push notifications successfully, use SNS to check the endpoint attribute and ensure it is mapped to the correct token and the endpoint is enabled.

This approach uses DynamoDB to store the device token and platform endpoints for each user. This allows you to send push notifications to specific users, retrieve, and reuse previously created endpoints. You create a Lambda function to facilitate the workflow, including validating the DynamoDB item for storing an enabled and up-to-date token.

Visit this link to learn more about Amazon SNS mobile push notifications: http://docs.aws.amazon.com/sns/latest/dg/SNSMobilePush.html

For more serverless learning resources, visit Serverless Land.

Building well-architected serverless applications: Optimizing application performance – part 2

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-optimizing-application-performance-part-2/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the introduction post for a table of contents and explanation of the example application.

PERF 1. Optimizing your serverless application’s performance

This post continues part 1 of this performance question. Previously, I cover measuring and optimizing function startup time. I explain cold and warm starts and how to reuse the Lambda execution environment to improve performance. I show a number of ways to analyze and optimize the initialization startup time. I explain how only importing necessary libraries and dependencies increases application performance.

Good practice: Design your function to take advantage of concurrency via asynchronous and stream-based invocations

AWS Lambda functions can be invoked synchronously and asynchronously.

Favor asynchronous over synchronous request-response processing.

Consider using asynchronous event processing rather than synchronous request-response processing. You can use asynchronous processing to aggregate queues, streams, or events for more efficient processing time per invocation. This reduces wait times and latency from requesting apps and functions.

When you invoke a Lambda function with a synchronous invocation, you wait for the function to process the event and return a response.

Synchronous invocation

Synchronous invocation

As synchronous processing involves a request-response pattern, the client caller also needs to wait for a response from a downstream service. If the downstream service then needs to call another service, you end up chaining calls that can impact service reliability, in addition to response times. For example, this POST /order request must wait for the response to the POST /invoice request before responding to the client caller.

Example synchronous processing

Example synchronous processing

The more services you integrate, the longer the overall response time, and complex workflows become harder to sustain using synchronous transactions.

Asynchronous processing allows you to decouple the request-response using events without waiting for a response from the function code. This allows you to perform background processing without requiring the client to wait for a response, improving client performance. You pass the event to an internal Lambda queue for processing and Lambda handles the rest. An external process, separate from the function, manages polling and retries. Using this asynchronous approach can also make it easier to handle unpredictable traffic with significant volumes.

Asynchronous invocation

Asynchronous invocation

For example, the client makes a POST /order request to the order service. The order service accepts the request and returns that it has been received, without waiting for the invoice service. The order service then makes an asynchronous POST /invoice request to the invoice service, which can then process independently of the order service. If the client must receive data from the invoice service, it can handle this separately via a GET /invoice request.

Example asynchronous processing

Example asynchronous processing
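
To make the asynchronous hand-off concrete, the following minimal Python sketch invokes a downstream function asynchronously with the AWS SDK. The function name and payload are placeholder values for illustration, not the example application’s actual resources.

import json

import boto3

lambda_client = boto3.client('lambda')

def submit_invoice(order):
    # InvocationType='Event' queues the event for asynchronous processing.
    # Lambda returns immediately with a 202 status code; polling and retries
    # are handled by the service rather than the caller.
    response = lambda_client.invoke(
        FunctionName='invoice-service',  # placeholder function name
        InvocationType='Event',
        Payload=json.dumps(order).encode()
    )
    return response['StatusCode']  # 202 indicates the event was accepted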

You can configure Lambda to send records of asynchronous invocations to another destination service. This helps you to troubleshoot your invocations. You can also send messages or events that can’t be processed correctly into a dedicated Amazon Simple Queue Service (SQS) dead-letter queue for investigation.
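
As a hedged sketch of this configuration with the AWS SDK for Python, the following call routes failed asynchronous invocations to an SQS queue. The function name and queue ARN are placeholders.

import boto3

lambda_client = boto3.client('lambda')

# Send events that fail all retry attempts to a dedicated SQS queue for
# later investigation. Both identifiers below are placeholders.
lambda_client.put_function_event_invoke_config(
    FunctionName='invoice-service',
    MaximumRetryAttempts=2,
    DestinationConfig={
        'OnFailure': {
            'Destination': 'arn:aws:sqs:us-east-1:123456789012:invoice-dlq'
        }
    }
)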

You can add triggers to a function to process data automatically. For more information on which processing model Lambda uses for triggers, see “Using AWS Lambda with other services”.

Asynchronous workflows handle a variety of use cases including data ingestion, ETL operations, and order/request fulfillment. In these use cases, data is processed as it arrives and is retrieved as it changes. For examples of asynchronous patterns, see “Serverless Data Processing” and “Serverless Event Submission with Status Updates”.

For more information on Lambda synchronous and asynchronous invocations, see the AWS re:Invent presentation “Optimizing your serverless applications”.

Tune batch size, batch window, and compress payloads for high throughput

When using Lambda to process records using Amazon Kinesis Data Streams or SQS, there are a number of tuning parameters to consider for performance.

You can configure a batch window to buffer messages or records for up to 5 minutes. You can limit the maximum number of records Lambda processes in a single invocation by setting a batch size. Your Lambda function is invoked when either the batch window expires or the batch size is reached, whichever comes first.

For high volume SQS standard queue throughput, Lambda can process up to 1000 concurrent batches of records per second. For more information, see “Using AWS Lambda with Amazon SQS”.

For high volume Kinesis Data Streams throughput, there are a number of options. Configure the ParallelizationFactor setting to process one shard of a Kinesis Data Stream with more than one Lambda invocation simultaneously. Lambda can process up to 10 batches in each shard. For more information, see “New AWS Lambda scaling controls for Kinesis and DynamoDB event sources.” You can also add more shards to your data stream to increase the speed at which your function can process records. This increases the function concurrency at the expense of ordering per shard. For more details on using Kinesis and Lambda, see “Monitoring and troubleshooting serverless data analytics applications”.
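
As a sketch of how these parameters fit together, the following Python snippet updates an existing Kinesis event source mapping with a larger batch size, a batching window, and a higher parallelization factor. The mapping UUID and the values themselves are placeholders; choose values based on your own load testing.

import boto3

lambda_client = boto3.client('lambda')

# UUID of an existing event source mapping (placeholder value)
mapping_uuid = '11111111-2222-3333-4444-555555555555'

lambda_client.update_event_source_mapping(
    UUID=mapping_uuid,
    BatchSize=500,                      # maximum records per invocation
    MaximumBatchingWindowInSeconds=30,  # buffer records for up to 30 seconds
    ParallelizationFactor=5             # up to 5 concurrent batches per shard
)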

Kinesis enhanced fan-out can maximize throughput by dedicating a 2 MB/second throughput channel to each consumer per shard, instead of sharing 2 MB/second per shard across all consumers. For more information, see “Increasing stream processing performance with Enhanced Fan-Out and Lambda”.

Kinesis stream producers can also compress records. This is at the expense of additional CPU cycles for decompressing the records in your Lambda function code.
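
To illustrate the trade-off, this sketch gzips a record on the producer side and decompresses it inside the Lambda handler. The stream name and record shape are assumptions; compression reduces shard throughput usage at the cost of extra CPU time in the function.

import base64
import gzip
import json

import boto3

kinesis = boto3.client('kinesis')

def put_compressed_record(stream_name, payload):
    # Producer side: compress the JSON payload before writing to the stream
    data = gzip.compress(json.dumps(payload).encode())
    kinesis.put_record(StreamName=stream_name, Data=data, PartitionKey=payload['id'])

def handler(event, context):
    # Consumer side (Lambda): Kinesis record data arrives base64 encoded,
    # so decode it first, then decompress
    for record in event['Records']:
        raw = base64.b64decode(record['kinesis']['data'])
        payload = json.loads(gzip.decompress(raw))
        print(payload)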

Required practice: Measure, evaluate, and select optimal capacity units

Capacity units are a unit of consumption for a service. They can include function memory size, number of stream shards, number of database reads/writes, request units, or type of API endpoint. Measure, evaluate and select capacity units to enable optimal configuration of performance, throughput, and cost.

Identify and implement optimal capacity units.

For Lambda functions, memory is the capacity unit for controlling the performance of a function. You can configure the amount of memory allocated to a Lambda function, between 128 MB and 10,240 MB. The amount of memory also determines the amount of virtual CPU available to a function. Adding more memory proportionally increases the amount of CPU, increasing the overall computational power available. If a function is CPU-, network- or memory-bound, then changing the memory setting can dramatically improve its performance.

Choosing the memory allocated to Lambda functions is an optimization process that balances performance (duration) and cost. You can manually run tests on functions by selecting different memory allocations and measuring the time taken to complete. Alternatively, use the AWS Lambda Power Tuning tool to automate the process.

The tool allows you to systematically test different memory size configurations. Depending on your performance strategy (cost, performance, or balanced), it identifies the optimal memory size to use. For more information, see “Operating Lambda: Performance optimization – Part 2”.

AWS Lambda Power Tuning report

AWS Lambda Power Tuning report
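
If you prefer a manual first pass before running the tuning tool, a rough sketch is to step through memory settings and compare the billed duration reported in the invocation logs. The function name is a placeholder and the log parsing is intentionally simple.

import base64

import boto3

lambda_client = boto3.client('lambda')
function_name = 'confirm-booking'  # placeholder function name

for memory in (128, 256, 512, 1024, 2048):
    lambda_client.update_function_configuration(
        FunctionName=function_name, MemorySize=memory
    )
    # Wait for the configuration change to finish before invoking
    lambda_client.get_waiter('function_updated').wait(FunctionName=function_name)

    response = lambda_client.invoke(
        FunctionName=function_name,
        LogType='Tail',  # include the last 4 KB of the execution log
        Payload=b'{}'
    )
    log_tail = base64.b64decode(response['LogResult']).decode()
    report = [line for line in log_tail.splitlines() if line.startswith('REPORT')]
    print(memory, 'MB:', report[-1] if report else 'no REPORT line found')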

Amazon DynamoDB manages table processing throughput using read and write capacity units. There are two different capacity modes, on-demand and provisioned.

On-demand capacity mode supports up to 40K read/write request units per second. This is recommended for unpredictable application traffic and new tables with unknown workloads. For higher and predictable throughputs, provisioned capacity mode along with DynamoDB auto scaling is recommended. For more information, see “Read/Write Capacity Mode”.
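
The following sketch creates a table in on-demand mode and notes the provisioned alternative in a comment. The table name and key schema are illustrative only.

import boto3

dynamodb = boto3.client('dynamodb')

# On-demand mode: no capacity planning, pay per request
dynamodb.create_table(
    TableName='booking',  # placeholder table name
    AttributeDefinitions=[{'AttributeName': 'bookingId', 'AttributeType': 'S'}],
    KeySchema=[{'AttributeName': 'bookingId', 'KeyType': 'HASH'}],
    BillingMode='PAY_PER_REQUEST'
)

# For predictable traffic, use provisioned mode with auto scaling instead:
# BillingMode='PROVISIONED',
# ProvisionedThroughput={'ReadCapacityUnits': 100, 'WriteCapacityUnits': 50}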

For high throughput Amazon Kinesis Data Streams with multiple consumers, consider using enhanced fan-out for dedicated 2 MB/second throughput per consumer. When possible, use the Kinesis Producer Library and Kinesis Client Library for effective record aggregation and de-aggregation.

Amazon API Gateway supports multiple endpoint types. Edge-optimized APIs provide a fully managed Amazon CloudFront distribution. These are better for geographically distributed clients. API requests are routed to the nearest CloudFront Point of Presence (POP), which typically improves connection time.

Edge-optimized API Gateway deployment

Edge-optimized API Gateway deployment

Regional API endpoints are intended for clients in the same Region. This helps you to reduce request latency and allows you to add your own content delivery network if necessary.

Regional endpoint API Gateway deployment

Regional endpoint API Gateway deployment

Private API endpoints are API endpoints that can only be accessed from your Amazon Virtual Private Cloud (VPC) using an interface VPC endpoint. For more information, see “Creating a private API in Amazon API Gateway”.

For more information on endpoint types, see “Choose an endpoint type to set up for an API Gateway API”. For more general information on API Gateway, see the AWS re:Invent presentation “I didn’t know Amazon API Gateway could do that”.

AWS Step Functions has two workflow types, standard and express. Standard Workflows have exactly once workflow execution and can run for up to one year. Express Workflows have at-least-once workflow execution and can run for up to five minutes. Consider the per-second rates you require for both execution start rate and the state transition rate. For more information, see “Standard vs. Express Workflows”.

Performance load testing is recommended at both sustained and burst rates to evaluate the effect of tuning capacity units. Use Amazon CloudWatch service dashboards to analyze key performance metrics including load testing results. I cover performance testing in more detail in “Regulating inbound request rates – part 1”.

For general serverless optimization information, see the AWS re:Invent presentation “Serverless at scale: Design patterns and optimizations”.

Conclusion

Evaluate and optimize your serverless application’s performance based on access patterns, scaling mechanisms, and native integrations. You can improve your overall experience and make more efficient use of the platform in terms of both value and resources.

This post continues from part 1 and looks at designing your function to take advantage of concurrency via asynchronous and stream-based invocations. I cover measuring, evaluating, and selecting optimal capacity units.

This well-architected question will continue in part 3 where I look at integrating with managed services directly over functions when possible. I cover optimizing access patterns and applying caching where applicable.

For more serverless learning resources, visit Serverless Land.

Adding resiliency to AWS CloudFormation custom resource deployments

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/adding-resiliency-to-aws-cloudformation-custom-resource-deployments/

This post is written by Dathu Patil, Solutions Architect and Naomi Joshi, Cloud Application Architect.

AWS CloudFormation custom resources allow you to write custom provisioning logic in templates. These run anytime you create, update, or delete stacks. Using AWS Lambda-backed custom resources, you can associate a Lambda function with a CloudFormation custom resource. The function is invoked whenever the custom resource is created, updated, or deleted.

When CloudFormation asynchronously invokes the function, it passes the request data, such as the request type and resource properties, to the function. The customizability of Lambda functions in combination with CloudFormation allows a wide range of scenarios. For example, you can dynamically look up Amazon Machine Image (AMI) IDs during stack creation or use utilities such as string reversal functions.

Unhandled exceptions or transient errors in the custom resource Lambda function can cause your code to exit without sending a response. CloudFormation requires an HTTPS response to confirm if the operation is successful or not. An unreported exception causes CloudFormation to wait until the operation times out before starting a stack rollback.

If the exception occurs again on rollback, CloudFormation waits for a timeout exception before ending in a rollback failure. During this time, your stack is unusable. You can learn more about this and best practices by reviewing Best Practices for CloudFormation Custom Resources.

In this blog, you learn how you can use Amazon SQS and Lambda to add resiliency to your Lambda-backed CloudFormation custom resource deployments. The example shows how to use CloudFormation custom resource to look up an AMI ID dynamically during Amazon EC2 creation.

Overview

CloudFormation templates that declare an EC2 instance must also specify an AMI ID. The AMI includes the operating system and other software and configuration information used to launch the instance. The correct AMI ID depends on the instance type and Region in which you’re launching your stack. AMI IDs can change regularly, such as when an AMI is updated with software updates.

Customers often implement a CloudFormation custom resource to look up an AMI ID while creating an EC2 instance. In this example, the lookup Lambda function calls the EC2 API. It fetches the available AMI IDs, uses the latest AMI ID, and checks for a compliance tag. This implementation assumes that there are separate processes for creating AMI and running compliance checks. The process that performs compliance and security checks creates a compliance tag on a successful scan.

This solution shows how you can use SQS and Lambda to add resiliency to handle an exception. In this case, the exception occurs in the AMI lookup custom resource due to a missing compliance tag. When the AMI lookup function fails processing, it uses the Lambda destination configuration to send the request to an SQS queue. The message is reprocessed using the SQS queue and Lambda function.

Solution architecture

  1. The CloudFormation custom resource asynchronously invokes the AMI lookup Lambda function to perform appropriate actions.
  2. The AMI lookup Lambda function calls the EC2 API to fetch the list of AMIs and checks for a compliance tag. If the tag is missing, it throws an unhandled exception.
  3. On failure, the Lambda destination configuration sends the request to the retry queue that is configured as a dead-letter queue (DLQ). SQS adds a custom delay between retry processing to support more than two retries.
  4. The retry Lambda function processes messages in the retry queue using Lambda with SQS. Lambda polls the queue and invokes the retry Lambda function synchronously with an event that contains queue messages.
  5. The retry function then synchronously invokes the AMI lookup function using the information from the request SQS message.

The AMI Lookup Lambda function

An AWS Serverless Application Model (AWS SAM) template is used to create the AMI lookup Lambda function. You can configure async event options such as number of retries on the Lambda function. The maximum retries allowed is 2 and there is no option to set a delay between the invocation attempts.

When a transient failure or unhandled error occurs, the request is forwarded to the retry queue. This part of the AWS SAM template creates AMI lookup Lambda function:

  AMILookupLambda:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: amilookup/
      Handler: app.lambda_handler
      Runtime: python3.8
      Timeout: 300
      EventInvokeConfig:
          MaximumEventAgeInSeconds: 60
          MaximumRetryAttempts: 2
          DestinationConfig:
            OnFailure:
              Type: SQS
              Destination: !GetAtt RetryQueue.Arn
      Policies:
        - AMIDescribePolicy: {}

This function calls the EC2 API using the boto3 AWS SDK for Python. It calls the describe_images method to get a list of images with given filter conditions. The Lambda function iterates through the AMI list and checks for compliance tags. If the tag is not present, it raises an exception:

ec2_client = boto3.client('ec2', region_name=region)
# Get AMI IDs with the specified name pattern and owner
describe_response = ec2_client.describe_images(
    Filters=[{'Name': "name", 'Values': architectures},
             {'Name': "tag-key", 'Values': ['ami-compliance-check']}],
    Owners=["amazon"]
)
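
The snippet above ends after the describe_images call. A minimal sketch of the iteration and compliance check the post describes might look like the following; the exception type and the use of CreationDate to pick the latest AMI are assumptions, not the exact repository code.

# Sort the returned images so the most recent AMI is evaluated first
# (assumption: CreationDate ordering reflects the "latest AMI" behavior)
images = sorted(
    describe_response['Images'],
    key=lambda image: image['CreationDate'],
    reverse=True
)

if not images:
    # No AMI carries the compliance tag yet. Raising an unhandled exception
    # routes the failed invocation to the retry queue via the destination.
    raise Exception('No compliant AMI found with tag ami-compliance-check')

latest_ami_id = images[0]['ImageId']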

The queue and the retry Lambda function

The retry queue adds a 60-second delay before a message is available for the processing. The time delay between retry processing attempts provides time for transient errors to be corrected. This is the AWS SAM template for creating these resources:

RetryQueue:
  Type: AWS::SQS::Queue
  Properties:
    VisibilityTimeout: 60
    DelaySeconds: 60
    MessageRetentionPeriod: 600

RetryFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: retry/
      Handler: app.lambda_handler
      Runtime: python3.8
      Timeout: 60
      Events:
        MySQSEvent:
          Type: SQS
          Properties:
            Queue: !GetAtt RetryQueue.Arn
            BatchSize: 1
      Policies:
        - LambdaInvokePolicy:
            FunctionName: !Ref AMILookupLambda

The retry Lambda function periodically polls for new messages in the retry queue. The function synchronously invokes the AMI lookup Lambda function. On success, a response is sent to CloudFormation. This process runs until the AMI lookup function returns a successful response or the message is deleted from the SQS queue. The deletion is based on the MessageRetentionPeriod, which is set to 600 seconds in this case.

# 'client' is a boto3 Lambda client created during function initialization
for record in event['Records']:
    body = json.loads(record['body'])
    response = client.invoke(
        FunctionName=body['requestContext']['functionArn'],
        InvocationType='RequestResponse',
        Payload=json.dumps(body['requestPayload']).encode()
    )

Deployment walkthrough

Prerequisites

To get started with this solution, you need:

  • AWS CLI and AWS SAM CLI installed to deploy the solution.
  • An existing Amazon EC2 public image. You can choose any of the AMIs from the AWS Management Console with Architecture = X86_64 and Owner = amazon for test purposes. Note the AMI ID.

Download the source code from the resilient-cfn-custom-resource GitHub repository. The template.yaml file is an AWS SAM template. It deploys the Lambda functions, SQS, and IAM roles required for the Lambda function. It uses Python 3.8 as the runtime and assigns 128 MB of memory for the Lambda functions.

  1. To build and deploy this application using the AWS SAM CLI build and guided deploy:
    sam build --use-container
    sam deploy --guided

The custom resource stack creation invokes the AMI lookup Lambda function. This fetches the AMI ID from all public EC2 images available in your account with the tag ami-compliance-check. Typically, the compliance tags are created by a process that performs security scans.

In this example, the security scan process is not running and the tag is not yet added to any AMIs. As a result, the custom resource throws an exception, which goes to the retry queue. This is retried by the retry function until it is successfully processed.

  1. Use the console or AWS CLI to add the tag to the chosen EC2 AMI. In this example, this is analogous to a separate governance process that checks for AMI compliance and adds the compliance tag if passed. Replace the $AMI-ID with the AMI ID captured in the prerequisites:
    aws ec2 create-tags --resources $AMI-ID --tags Key=ami-compliance-check,Value=True
  2. After the tags are added, a response is sent successfully from the custom resource Lambda function to the CloudFormation stack. It includes your $AMI-ID and a test EC2 instance is created using that image. The stack creation completes successfully with all resources deployed.

Conclusion

This blog post demonstrates how to use SQS and Lambda to add resiliency to CloudFormation custom resources deployments. This solution can be customized for use cases where CloudFormation stacks have a dependency on a custom resource.

CloudFormation custom resource failures can happen due to unhandled exceptions. These are caused by issues with a dependent component, internal service, or transient system errors. Using this solution, you can handle the failures automatically without the need for manual intervention. To get started, download the code from the GitHub repo and start customizing.

For more serverless learning resources, visit Serverless Land.

Building well-architected serverless applications: Optimizing application performance – part 1

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-optimizing-application-performance-part-1/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the introduction post for a table of contents and explanation of the example application.

PERF 1. Optimizing your serverless application’s performance

Evaluate and optimize your serverless application’s performance based on access patterns, scaling mechanisms, and native integrations. This allows you to continuously gain more value per transaction. You can improve your overall experience and make more efficient use of the platform in terms of both value and resources.

Good practice: Measure and optimize function startup time

Evaluate your AWS Lambda function startup time for both performance and cost.

Take advantage of execution environment reuse to improve the performance of your function.

Lambda invokes your function in a secure and isolated runtime environment, and manages the resources required to run your function. When a function is first invoked, the Lambda service creates an instance of the function to process the event. This is called a cold start. After completion, the function remains available for a period of time to process subsequent events. These are called warm starts.

Lambda functions must contain a handler method in your code that processes events. During a cold start, Lambda runs the function initialization code, which is the code outside the handler, and then runs the handler code. During a warm start, Lambda runs the handler code.

Lambda function cold and warm starts

Lambda function cold and warm starts

Initialize SDK clients, objects, and database connections outside of the function handler so that they are started during the cold start process. These connections then remain during subsequent warm starts, which improves function performance and cost.

Lambda provides a writable local file system available at /tmp. This is local to each function but shared between subsequent invocations within the same execution environment. You can download and cache assets locally in the /tmp folder during the cold start. This data is then available locally by all subsequent warm start invocations, improving performance.
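
As a minimal sketch of this pattern, the following Python example downloads a reference file from Amazon S3 into /tmp during initialization and reuses it on warm starts. The bucket and object key are placeholders.

import os

import boto3

s3 = boto3.client('s3')
CACHE_PATH = '/tmp/reference-data.json'  # shared across warm invocations

def load_reference_data():
    # Download only when the file is not already cached in this execution
    # environment, which is the case on a cold start
    if not os.path.exists(CACHE_PATH):
        s3.download_file('example-assets-bucket', 'reference-data.json', CACHE_PATH)
    with open(CACHE_PATH) as f:
        return f.read()

# Runs during initialization (cold start), outside the handler
REFERENCE_DATA = load_reference_data()

def handler(event, context):
    # Warm starts reuse REFERENCE_DATA and the cached file in /tmp
    return {'referenceBytes': len(REFERENCE_DATA)}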

In the serverless airline example used in this series, the confirm booking Lambda function initializes a number of components during the cold start. These include the Lambda Powertools utilities and a boto3 session and connection to the Amazon DynamoDB table specified in the BOOKING_TABLE_NAME environment variable.

import os

import boto3
from aws_lambda_powertools import Logger, Metrics, Tracer
from aws_lambda_powertools.metrics import MetricUnit
from botocore.exceptions import ClientError

logger = Logger()
tracer = Tracer()
metrics = Metrics()

session = boto3.Session()
dynamodb = session.resource("dynamodb")
table_name = os.getenv("BOOKING_TABLE_NAME", "undefined")
table = dynamodb.Table(table_name)

Analyze and improve startup time

There are a number of steps you can take to measure and optimize Lambda function initialization time.

You can view the function cold start initialization time using Amazon CloudWatch Logs and AWS X-Ray. A log REPORT line for a cold start includes the Init Duration value. This is the time the initialization code takes to run before the handler.

CloudWatch Logs cold start report line

CloudWatch Logs cold start report line

When X-Ray tracing is enabled for a function, the trace includes the Initialization segment.

X-Ray trace cold start showing initialization segment

X-Ray trace cold start showing initialization segment

A subsequent warm start REPORT line does not include the Init Duration value, and is not present in the X-Ray trace:

CloudWatch Logs warm start report line

CloudWatch Logs warm start report line

X-Ray trace warm start without showing initialization segment

X-Ray trace warm start without showing initialization segment

CloudWatch Logs Insights allows you to search and analyze CloudWatch Logs data over multiple log groups. There are some useful searches to understand cold starts.

Understand cold start percentage over time:

filter @type = "REPORT"
| stats
  sum(strcontains(
    @message,
    "Init Duration"))
  / count(*)
  * 100
  as coldStartPercentage,
  avg(@duration)
  by bin(5m)
Cold start percentage over time

Cold start percentage over time

Cold start count and InitDuration:

filter @type="REPORT" 
| fields @memorySize / 1000000 as memorySize
| filter @message like /(?i)(Init Duration)/
| parse @message /^REPORT.*Init Duration: (?<initDuration>.*) ms.*/
| parse @log /^.*\/aws\/lambda\/(?<functionName>.*)/
| stats count() as coldStarts, median(initDuration) as avgInitDuration, max(initDuration) as maxInitDuration by functionName, memorySize
Cold start count and InitDuration

Cold start count and InitDuration

Once you have measured cold start performance, there are a number of ways to optimize startup time. For Python, you can use the PYTHONPROFILEIMPORTTIME=1 environment variable.

PYTHONPROFILEIMPORTTIME environment variable

PYTHONPROFILEIMPORTTIME environment variable

This shows how long each package import takes to help you understand how packages impact startup time.

Python import time

Python import time
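
If you want to toggle this variable on a deployed function without redeploying it, a hedged sketch using the AWS SDK for Python is shown below. The function name is a placeholder.

import boto3

lambda_client = boto3.client('lambda')

# Caution: update_function_configuration replaces all environment variables,
# so merge in any existing variables before using this on a real function.
lambda_client.update_function_configuration(
    FunctionName='confirm-booking',  # placeholder function name
    Environment={'Variables': {'PYTHONPROFILEIMPORTTIME': '1'}}
)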

Previously, for the AWS Node.js SDK, you enabled HTTP keep-alive in your code to maintain TCP connections. Enabling keep-alive allows you to avoid setting up a new TCP connection for every request. Since AWS SDK version 2.463.0, you can also set the Lambda function environment variable AWS_NODEJS_CONNECTION_REUSE_ENABLED=1 to make the SDK reuse connections by default.

You can configure Lambda’s provisioned concurrency feature to pre-initialize a requested number of execution environments. This runs the cold start initialization code so that they are prepared to respond immediately to your function’s invocations.
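
A hedged example of enabling this with the AWS SDK for Python: the call below provisions ten pre-initialized execution environments for a published alias. The function name and alias are placeholders.

import boto3

lambda_client = boto3.client('lambda')

# Provisioned concurrency applies to a published version or alias, not $LATEST
lambda_client.put_provisioned_concurrency_config(
    FunctionName='confirm-booking',  # placeholder function name
    Qualifier='live',                # placeholder alias
    ProvisionedConcurrentExecutions=10
)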

Use Amazon RDS Proxy to pool and share database connections to improve function performance. For additional options for using RDS with Lambda, see the AWS Serverless Hero blog post “How To: Manage RDS Connections from AWS Lambda Serverless Functions”.

Choose frameworks that load quickly on function initialization. For example, prefer simpler Java dependency injection frameworks like Dagger or Guice over more complex frameworks such as Spring. When using the AWS SDK for Java, there are some cold start performance optimization suggestions in the documentation. For further Java performance optimization tips, see the AWS re:Invent session, “Best practices for AWS Lambda and Java”.

To minimize deployment packages, choose lightweight web frameworks optimized for Lambda. For example, use MiddyJS, Lambda API JS, and Python Chalice over Node.js Express, Python Django or Flask.

If your function has many objects and connections, consider splitting the function into multiple, specialized functions. These are individually smaller and have less initialization code. I cover designing smaller, single purpose functions from a security perspective in “Managing application security boundaries – part 2”.

Minimize your deployment package size to only its runtime necessities

Smaller functions also allow you to separate functionality. Only import the libraries and dependencies that are necessary for your application processing. Use code bundling when you can to reduce the impact of file system lookup calls. Bundling also reduces the deployment package size.

For example, if you only use Amazon DynamoDB in the AWS SDK, instead of importing the entire SDK, you can import an individual service. Compare the following three examples as shown in the Lambda Operator Guide:

// Instead of const AWS = require('aws-sdk'), use:
const DynamoDB = require('aws-sdk/clients/dynamodb')

// Instead of const AWSXRay = require('aws-xray-sdk'), use:
const AWSXRay = require('aws-xray-sdk-core')

// Instead of const AWS = AWSXRay.captureAWS(require('aws-sdk')), use:
const dynamodb = new DynamoDB.DocumentClient()
AWSXRay.captureAWSClient(dynamodb.service)

In testing, importing the DynamoDB library instead of the entire AWS SDK was 125 ms faster. Importing the X-Ray core library was 5 ms faster than the X-Ray SDK. Similarly, when wrapping a service initialization, preparing a DocumentClient before wrapping showed a 140-ms gain. Version 3 of the AWS SDK for JavaScript supports modular imports, which can further help reduce unused dependencies.

For additional options for optimizing AWS Node.js SDK imports, see the AWS Serverless Hero blog post.

Conclusion

Evaluate and optimize your serverless application’s performance based on access patterns, scaling mechanisms, and native integrations. You can improve your overall experience and make more efficient use of the platform in terms of both value and resources.

In this post, I cover measuring and optimizing function startup time. I explain cold and warm starts and how to reuse the Lambda execution environment to improve performance. I show a number of ways to analyze and optimize the initialization startup time. I explain how only importing necessary libraries and dependencies increases application performance.

This well-architected question continues in part 2, where I look at designing your function to take advantage of concurrency via asynchronous and stream-based invocations. I cover measuring, evaluating, and selecting optimal capacity units.

For more serverless learning resources, visit Serverless Land.

Configuring CORS on Amazon API Gateway APIs

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/configuring-cors-on-amazon-api-gateway-apis/

Configuring cross-origin resource sharing (CORS) settings for a backend server is a typical challenge that developers face when building web applications. CORS is a layer of security enforced by modern browsers and is required when the client domain does not match the server domain. The complexity of CORS often leads developers to abandon it entirely by allowing all-access with the proverbial “*” permissions setting. However, CORS is an essential part of your application’s security posture and should be correctly configured.

This post explains how to configure CORS on Amazon API Gateway resources to enforce the least privileged access to an endpoint using the AWS Serverless Application Model (AWS SAM). I cover the notable CORS differences between REST APIs and HTTP APIs. Finally, I introduce you to the Amazon API Gateway CORS Configurator. This is a tool built by the AWS Serverless Developer Advocacy team to help you configure CORS settings properly.

Overview

CORS is a mechanism by which a server limits access through the use of headers. In requests that are not considered simple, the server relies on the browser to make a CORS preflight or OPTIONS request. A full request looks like this:

CORS request flow

CORS request flow

  1. Client application initiates a request
  2. Browser sends a preflight request
  3. Server sends a preflight response
  4. Browser sends the actual request
  5. Server sends the actual response
  6. Client receives the actual response

The preflight request verifies the requirements of the server by indicating the origin, method, and headers to come in the actual request.

OPTIONS preflight request

OPTIONS preflight request

The response from the server differs based on the backend you are using. Some servers respond with the allowed origin, methods, and headers for the endpoint.

OPTIONS preflight response

OPTIONS preflight response

Others only return CORS headers if the requested origin, method, and headers meet the requirements of the server. If the requirements are not met, then the response does not contain any CORS access control headers. The browser verifies the request’s origin, method, and headers against the data returned in the preflight response. If validation fails, the browser throws a CORS error and halts the request. If the validation is successful, the browser continues with the actual request.

Actual request

Actual request

During the actual request, the browser sends the origin header so the server can verify the requesting origin. The server then responds with the requested data and the access-control-allow-origin header.

Actual response

Actual response

This step is where many developers run into issues. Notice the endpoint of the actual request returns the access-control-allow-origin header. The browser once again verifies this before taking action.

Both the preflight and the actual response require CORS configuration, and it looks different depending on whether you select REST API or HTTP API.

Configuring API Gateway for CORS

While Amazon API Gateway offers several API endpoint types, this post focuses on REST API (v1) and HTTP API (v2). Both types create a representational state transfer (REST) endpoint that proxies an AWS Lambda function and other AWS services or third-party endpoints. Both types process preflight requests. However, there are differences in both the configuration, and the format of the integration response.

Terminology

Before walking through the configuration examples, it is important to understand some terminology:

  • Resource: A unique identifier for the API path (/customer/reports/{region}). Resources can have subresources that combine to make a unique path.
  • Method: The REST methods (for example, GET, POST, PUT, PATCH) that the resource supports. The method is not part of the path; it is sent as part of the request itself.
  • Endpoint: A combination of resources and methods to create a unique API URL.

REST APIs

A popular use of API Gateway REST APIs is to proxy one or more Lambda functions to build a serverless backend. In this pattern, API Gateway does not modify the request or response payload. Therefore, REST API manages CORS through a combination of preflight configuration and a properly formed response from the Lambda function.

Preflight requests

With AWS SAM, CORS on REST APIs is generally configured in four lines of code:

Cors:
  AllowMethods: "'GET, POST, OPTIONS'"
  AllowOrigin: "'http://localhost:3000'"
  AllowHeaders: "'Content-type, x-api-key'"

This code snippet creates a MOCK API resource that processes all preflight requests for that resource. This configuration is an example of the least privileged access to the server. It only allows GET, POST, and OPTIONS methods from a localhost endpoint on port 3000. Additionally, it only allows the Content-type and x-api-key CORS headers.

Notice that the preflight response only allows one origin to call this API. To enable multiple origins with REST APIs, use ‘*’ for the access-control-allow-origin header. Alternatively, use a Lambda function integration instead of a MOCK integration to set the header dynamically based on the origin of the caller.
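
As a rough sketch of that dynamic approach, a Lambda proxy integration can reflect back an allowed origin based on the caller’s origin header. The allowed-origin set and handler below are illustrative assumptions, not part of the example application:

import json

# Assumed list of origins you trust; adjust for your application
ALLOWED_ORIGINS = {"http://localhost:3000", "https://myproddomain.com"}

def lambda_handler(event, context):
    # Proxy integrations receive the request headers on the event
    headers = event.get("headers") or {}
    origin = headers.get("origin") or headers.get("Origin", "")

    response_headers = {"Content-Type": "application/json"}
    if origin in ALLOWED_ORIGINS:
        # Echo back only origins you explicitly trust
        response_headers["access-control-allow-origin"] = origin

    return {
        "statusCode": 200,
        "headers": response_headers,
        "body": json.dumps({"message": "hello world"}),
    }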

Authorization

When configuring CORS for REST APIs that require authentication, it is important to configure the preflight endpoint without authorization required. The preflight request is generated by the browser and does not include credentials by default. To remove the authorizer from the OPTIONS method, add the AddDefaultAuthorizerToCorsPreflight: false setting to the authorization configuration.

Auth:
  AddDefaultAuthorizerToCorsPreflight: false
  Authorizers:
    MyCognitoAuth:
  
  …

Response

In REST API proxy configurations, CORS settings only apply to the OPTIONS endpoint and cover only the preflight check by the browser. The Lambda function backing the method must respond with the appropriate CORS information to handle CORS properly in the actual response. The following is an example of a proper response:

{
  "statusCode": 200,
  "headers": {
    "access-control-allow-origin": "http://localhost:3000"
  },
  "body": {"message": "hello world"}
}

In this response, the critical parts are the statusCode returned to the user as the response status and the access-control-allow-origin header required by the browser’s CORS validation.

HTTP APIs

Like REST APIs, Amazon API Gateway HTTP APIs are commonly used to proxy Lambda functions and are configured to handle preflight requests. However, unlike REST APIs, HTTP APIs handle CORS for the actual API response as well.

Preflight requests

The following example shows how to configure CORS on HTTP APIs with AWS SAM:

CorsConfiguration:
  AllowMethods:
    - GET
    - POST
    - OPTIONS
  AllowOrigin:
    - http://localhost:3000
    - https://myproddomain.com
  AllowHeaders:
    - Content-type
    - x-api-key

This template configures HTTP APIs to manage CORS for the preflight requests and the actual requests. Note that the AllowOrigin section allows more than one domain. When the browser makes a request, HTTP APIs checks the list for the incoming origin. If it exists, HTTP APIs adds it to the access-control-allow-origin header in the response.

Authorization

When configuring CORS for HTTP APIs with authorization configured, HTTP APIs automatically configures the preflight endpoint without authorization required. The only caveat to this is the use of the $default route. When configuring a $default route, all methods and resources are handled by the default route and the integration behind it. This includes the preflight OPTIONS method.

There are two options to handle preflight requests. The first, and recommended, option is to break out the routes individually: create a route specifically for each method and resource as needed. The second is to create an OPTIONS /{proxy+} route to override the $default route for preflight requests.

Response

Unlike REST APIs, by default, HTTP APIs modify the response for the actual request by adding the appropriate CORS headers based upon the CORS configuration. The following is an example of a simple response:

"hello world"

HTTP APIs then constructs the complete response with your data, status code, and any required CORS headers:

{
  "statusCode": 200,
  "headers": {
    "access-control-allow-origin": "[appropriate origin]"
  },
  "body": "hello world"
}

To set the status code manually, configure your response as follows:

{
  "statusCode": 201,
  "body": "hello world"
}

To manage the complete response like in REST APIs, set the payload format to version one. The payload format for HTTP API changes the structure of the payload sent to the Lambda function and the expected response from the Lambda function. By default, HTTP API uses version two, which includes the dynamic CORS settings. For more information, read how the payload version affects the response format in the documentation.

The Amazon API Gateway CORS Configurator

The AWS serverless developer advocacy team built the Amazon API Gateway CORS Configurator to help you configure CORS for your serverless applications.

Amazon API Gateway CORS Configurator

Start by entering the information on the left. The CORS Configurator builds the proper snippets to add the CORS settings to your AWS SAM template as you add more information. The utility demonstrates adding the configuration to all APIs in the template by using the Globals section. You can also add the configuration to a specific API resource to affect only that API.

Additionally, the CORS Configurator constructs an example response based on the API type you are using.

This utility is currently in preview, and we welcome your feedback on how we can improve it. Feel free to open an issue on GitHub at https://github.com/aws-samples/amazon-api-gateway-cors-configurator.

Conclusion

CORS can be challenging. For API Gateway, CORS configuration is the number one question developers ask. In this post, I give an overview of CORS with a link to an in-depth explanation. I then show how to configure API Gateway to create the least privileged access to your server using CORS. I also discuss the differences in how REST APIs and HTTP APIs handle CORS. Finally, I introduce you to the API Gateway CORS Configurator to help you configure CORS using AWS SAM.

I hope to provide you with enough information that you can avoid opening up your servers with the “*” setting for CORS. Take the time to understand your application, and limit requests to only the methods you support and only the originating hosts you intend to serve.

For more serverless content, go to Serverless Land.

Python 3.9 runtime now available in AWS Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/python-3-9-runtime-now-available-in-aws-lambda/

You can now use the Python 3.9 runtime to develop your AWS Lambda functions. You can do this in the AWS Management Console, AWS CLI, AWS SDK, AWS Serverless Application Model (AWS SAM), or AWS Cloud Development Kit (AWS CDK). This post outlines some of the improvements to the Python runtime in version 3.9 and how to use this version in your Lambda functions.

New features and improvements to the Python language

Python 3.9 introduces new features for strings and dictionaries. There are new methods to remove prefixes and suffixes in strings. To remove a prefix, use str.removeprefix(prefix). To remove a suffix, use str.removesuffix(suffix). To learn more, read about PEP 616.

Dictionaries now offer two new operators (| and |=). The first is a union operator for merging dictionaries and the second allows developers to update the contents of a dictionary with another dictionary. To learn more, read about PEP 584.
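
For example, the new string and dictionary behaviors look like this; the variable names are illustrative only:

# String prefix/suffix removal (PEP 616)
bucket_key = "uploads/report.csv"
print(bucket_key.removeprefix("uploads/"))   # report.csv
print(bucket_key.removesuffix(".csv"))       # uploads/report

# Dictionary union and update operators (PEP 584)
defaults = {"region": "us-east-1", "retries": 3}
overrides = {"retries": 5}
merged = defaults | overrides    # {'region': 'us-east-1', 'retries': 5}
defaults |= overrides            # updates defaults in place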

You can alter the behavior of Python functions by using decorators. Previously, these could only consist of the @ symbol, a name, a dotted name, and an optional single call. Decorators can now consist of any valid expression, as explained in PEP 614.

There are also improvements for time zone handling. The zoneinfo module now supports the IANA time zone database. This can help remove boilerplate and brings improvements for code handling multiple timezones.
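
For instance, attaching an IANA time zone no longer requires a third-party library; the date and zone names below are arbitrary:

from datetime import datetime
from zoneinfo import ZoneInfo

meeting = datetime(2021, 11, 1, 9, 0, tzinfo=ZoneInfo("America/New_York"))
print(meeting.astimezone(ZoneInfo("Europe/London")))  # convert between zones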

While existing Python 3 versions support TLS1.2, Python 3.9 now provides support for TLS1.3. This helps improve the performance of encrypted connections with features such as False Start and Zero Round Trip Time (0-RTT).

For a complete list of updates in Python 3.9, read the launch documentation on the Python website.

Performance improvements in Python 3.9

There are two important performance improvements in Python 3.9 that you can benefit from without making any code changes.

The first impacts code that uses the built-in Python data structures tuple, list, dict, set, or frozenset. In Python 3.9, these internally use the vectorcall protocol, which can make function calls faster by reducing the number of temporary objects used. Second, Python 3.9 uses a new parser that is more performant than previous versions. To learn more, read about PEP 617.

Changes to how Lambda works with the Python runtime

In Python, the presence of an __init__.py file in a directory causes it to be treated as a package. Frequently, __init__.py is an empty file that’s used to ensure that Python identifies the directory as a package. However, it can also contain initialization code for the package. Before Python 3.9, where you provided your Lambda function in a package, Lambda did not run the __init__.py code in the handler’s directory and parent directories during function initialization. From Python 3.9, Lambda now runs this code during the initialization phase. This ensures that imported packages are properly initialized if they make use of __init__.py. Note that __init__.py code is only run when the execution environment is first initialized.
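
The following is a hedged sketch of what this change means in practice; the package name, setting, and print statement are purely illustrative:

# my_package/__init__.py  (hypothetical package deployed with the function)
# From the Python 3.9 runtime onward, this code runs once during the
# initialization phase, before the first handler invocation.
print("initializing my_package")
SETTINGS = {"table_name": "example-table"}

# lambda_function.py  (the handler module in the same deployment package)
# Importing the package is what triggers my_package/__init__.py at init time.
from my_package import SETTINGS

def lambda_handler(event, context):
    return {"table": SETTINGS["table_name"]}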

Finally, there is a change to the error response in this new version. When previous Python versions threw errors, the formatting appeared as:

{"errorMessage": "name 'x' is not defined", "errorType": "NameError", "stackTrace": [" File \"/var/task/error_function.py\", line 2, in lambda_handler\n return x + 10\n"]}

From Python 3.9, the error response includes a RequestId:

{"errorMessage": "name 'x' is not defined", "errorType": "NameError", **"requestId"**: "<request id of function invoke>" "stackTrace": [" File \"/var/task/error_function.py\", line 2, in lambda_handler\n return x + 10\n"]}

Using Python 3.9 in Lambda

You can now use the Python 3.9 runtime to develop your AWS Lambda functions. To use this version, specify a runtime parameter value of python3.9 when creating or updating Lambda functions. You can see the new version in the Runtime dropdown on the Create function page.

Create function page

To update an existing Lambda function to Python 3.9, navigate to the function in the Lambda console, then choose Edit in the Runtime settings panel. You see the new version in the Runtime dropdown:

Edit runtime settings

In the AWS Serverless Application Model (AWS SAM), set the Runtime attribute to python3.9 to use this version in your application deployments:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Simple Lambda Function
  
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Description: My Python Lambda Function
    Properties:
      CodeUri: my_function/
      Handler: lambda_function.lambda_handler
      Runtime: python3.9
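
If you prefer to update functions programmatically, here is a minimal sketch using the AWS SDK for Python (boto3); the function name is a placeholder:

import boto3

lambda_client = boto3.client("lambda")

# Switch an existing function to the Python 3.9 runtime
lambda_client.update_function_configuration(
    FunctionName="my-existing-function",  # placeholder name
    Runtime="python3.9",
)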

Conclusion

You can now create new functions or upgrade existing Python functions to Python 3.9. Lambda’s support of the Python 3.9 runtime enables you to take advantage of improved performance and new features in this version. Additionally, the Lambda service now runs the __init__.py code before the handler, supports TLS 1.3, and provides enhanced logging for errors.

For more serverless learning resources, visit Serverless Land.

Cloudflare Developer Summer Challenge

Post Syndicated from Jenanne Vaccaro original https://blog.cloudflare.com/developer-summer-challenge/

Cloudflare Developer Summer Challenge

There are a lot of experiences we have all grown to miss over the last year and a half. After hearing from our community, two of the top experiences they miss are collaborating with peers, and getting Cloudflare swag. Perhaps even in the reverse order! In-person events like conferences were once a key channel to satisfy both these interests, however today’s remote world makes that much harder. But does it have to?

Today, we are excited to introduce the Cloudflare Developer Summer Challenge. We will be rewarding 300 participants with boxes of our most popular swag, while enabling collaboration with other participants through our Workers Discord channel.

To participate, you have to build a project that uses Cloudflare Workers and at least one other product in our rapidly expanding developer platform. We will judge submissions and award swag boxes to those with the most innovative projects, limited to one box per person. The Challenge is open for submissions from today until November 1, 2021. See a full list of terms and conditions here.

Cloudflare Developer Summer Challenge

What are the details?

Cloudflare’s developer platform offers all the building blocks to create end-to-end applications with products across compute, storage, and frontend services. To successfully participate in the Cloudflare Developer Summer Challenge, you need to build a project with at least two of the following products in the Cloudflare developer platform. Bonus points for using more! These products include:

  • Cloudflare Workers: an edge-based serverless computing platform where you can deploy code automatically worldwide, with speed, security, and scale baked in.
  • Workers KV: a global low-latency key value data store for exceptionally high read volumes, making it possible to build highly dynamic APIs and websites which respond as quickly as a cached static file would.
  • Durable Objects: a storage platform providing low-latency coordination and consistent storage at the edge, enabling stateful serverless use cases.
  • Cloudflare Pages: a Jamstack web development platform to collaborate on and deploy high performance sites quickly across the world.

How will submissions be judged?

Submissions will be based on three criteria:

  • The first criterion is to meet the basic requirements of the challenge. This includes using at least two products in the platform, and submitting the live link of your project and your code repository. You must submit before November 1, 2021.
  • The second criterion is innovation. Projects that are unique and useful to users will be more likely to win the swag boxes.
  • The third criterion is the breadth of the Cloudflare platform you use. The more products you integrate, the better!

How do I participate?

Step 1: Get Started: You can get started with the developer platform by visiting the Cloudflare Workers Quickstart Guide, and reviewing the Workers KV and/or Durable Objects documentation. To host your frontend site on Cloudflare Pages, you can visit the Pages Quickstart Guide.

Step 2: Build your Project: If you would like inspiration, you can view our tutorials, and examples. You can also view our Built With Workers page. If you want to try and build something more advanced, you can see some additional examples below.

Step 3: Share your Project: Successful submission includes a link to your live project, and a link to your code repository. We also encourage you to share your work, or pictures of you unboxing or using your swag, by tagging @CloudflareDev on Twitter with the hashtag #CloudflareSummerChallenge.

Cloudflare Developer Summer Challenge

Optional: Engage with the Community

To promote collaboration and interaction with peers, we will have a dedicated channel in the Cloudflare Workers Discord server. For those who want to, you can learn what others are building, share your project, or join Q&A sessions if you have questions or need help getting started. You can also meet developers participating around the world.

What are examples of advanced projects I can build?

The Cloudflare developer platform is ideal for a wide range of use cases – from augmenting existing applications to building entirely new ones without needing to maintain underlying infrastructure. Essentially, you write the code, and we handle the rest. Running on the Cloudflare edge network, applications scale automatically and run close to users worldwide.

Build a Serverless API for your Frontend

Cloudflare Workers’ high performance and edge network make it well-suited for building APIs. It is also a great companion to your frontend applications on Cloudflare Pages. You can use Workers as the backend, and build your frontend with frameworks such as React, Gatsby, Hugo, Svelte, and more — and then deploy your site onto Pages.

You can easily begin building a serverless API for your frontend by creating a new Workers project with the Wrangler CLI:

wrangler generate serverless-api https://github.com/cloudflare/worker-typescript-template

To complete this type of project, you can follow the rest of the steps outlined in our Pages documentation.

Build an Interactive Game

Combining Cloudflare Workers, Durable Objects, and WebSockets makes a powerful platform for managing state at the edge. Running on Cloudflare’s global network enables exceptionally low latency, so users can interact instantly worldwide. Examples of interactive applications can range from chat rooms to multiplayer video games.

To build interactive video games, you can integrate with popular tools such as Unity and WebGL using an authoritative client model (we have an example of how to achieve this). You do not even need expertise in building video games. Following the example above, the client can run a compiled game directly in the browser with WebAssembly. The server, running on Cloudflare Workers, can be interacted with via WebSockets, and uses Durable Objects to manage game state.

Cloudflare Developer Summer Challenge

Build an E-Commerce Experience

Whether you are building a small e-commerce app or an online ordering site with millions of requests per month, your users can experience exceptional performance and reliability with Cloudflare’s developer platform.

At a small scale, you can instantly personalize a user’s e-commerce experience by setting up A/B testing, performing localization, or providing geo-specific targeting, such as local currency or exchange rates. You can see the linked code samples to quickly get started.

Integration with Workers KV and other third party tools can also streamline the development process. Instead of having to build out an entire database for your application, you can use Workers KV to store product information such as product ID, name, description, price, etc. Outside the Workers platform, you can integrate with popular tools such as Stripe or Shopify. To learn more about how to build an e-commerce application with Workers, Workers KV, and Stripe, you can read this blog post on building an e-commerce experience.

At a larger scale, we have customers building their entire online ordering site on Workers and Pages. Dig, a popular American restaurant chain with nearly 50 locations nationwide, decided to run their ordering site on Cloudflare’s developer platform. They needed high performance and reliability as traffic spiked during the pandemic. The team used a headless React application that is entirely hosted on Cloudflare Pages. JavaScript calls an underlying API to get and handle dynamic ordering logic. You can learn more about their story in this case study.

Conclusion

We have heard from our community, and we want to help bring back some of their favorite pre-pandemic experiences: collaboration and swag. The Cloudflare Developer Summer Challenge is meant to do just that. While the last year and a half has transformed the way we all live, it has also been a period of significant expansion of the Cloudflare developer platform. With new products across compute, storage, and frontend services, you now have all the building blocks to quickly create powerful applications end-to-end on our edge network. We hope you enjoy the experience (and the swag!). We cannot wait to see what you build!

Building the Cloudflare Summer Challenge Application

Post Syndicated from Luke Edwards original https://blog.cloudflare.com/building-the-cloudflare-summer-challenge-application/

Building the Cloudflare Summer Challenge Application

If you haven’t already heard, we’re hosting the Cloudflare Summer Developer Challenge, a contest for the Cloudflare community at large. Anybody – yes, including you – can sign up for free and compete for a chance to win one of 300 available prizes. To submit, you need to use at least two products from the Cloudflare developer platform — which makes this contest a great opportunity to give them a try if you haven’t already! The top 300 submissions will receive a box of our most popular swag, so you should give it a go!

Coincidentally, the Cloudflare Summer Developer Challenge’s landing page and signup workflow qualifies as a valid project submission (so meta), so if you’re looking for some inspiration, this walkthrough will shed some light on how it was built.

Overview

At its core, the application is a series of static HTML pages, most of which have a form to submit, with a backend API to handle those submissions, and a storage layer to persist the data. In a Cloudflare lens, this would point towards using Pages, a Worker, and Workers KV. And while this should be the preferred stack for a project like this, truthfully, this “application” was originally intended to be a single HTML page with a single form, but its list of requirements grew over time, as things tend to do. So instead, this project began as–and remains–a Workers Site project, comprised of a single Worker and a single Workers KV namespace.

Workers Sites, the precursor to our Pages product, is a pattern where your Worker handles all the requests for your site’s assets. While doing this, your Worker Site can still include backend-y things, like offering a collection of JSON API endpoints. Basically, Workers Sites is a coined term for building monoliths within a Worker, but without the negative associations that the word “monolith” can bring. Given that a Workers Site is still a Worker, this means your monolith is deployed globally – tough to beat!

As with all Workers Sites, routing is the primary concern. For this, I used the worktop web framework, which includes a router among many other utilities. (Disclosure: I am also the author of worktop.) This allowed me to quickly structure the layout of the entire application:

import { Router } from 'worktop';
import * as Cache from 'worktop/cache';

const API = new Router;

API.add('GET', '/', (req, res) => {
  res.send(200, 'TODO: send HTML for landing page');
});

API.add('GET', '/rules', (req, res) => {
  res.send(200, 'TODO: send HTML for terms & conditions');
});

API.add('POST', '/signup', (req, res) => {
  res.send(201, 'TODO: parse & save initial registration');
});

API.add('GET', '/submit', (req, res) => {
  res.send(200, 'TODO: render the unique submission form');
});

API.add('POST', '/submit', (req, res) => {
  res.send(201, 'TODO: parse, validate, save submission data');
});

// init; w/ Cache API
Cache.listen(API.run);

At this point, nothing useful is happening, but having an application skeleton laid out like this is my preferred format for a TODO list. It’s very satisfying to go through and fill out the handler bodies as development progresses. Additionally, the Cache.listen helper at the bottom of the file integrates the entire application with the Cache API, which I know I’ll want since most of the requests will be for the static HTML pages anyway.

Building and Optimizing the Client pages

Historically, deploying a Workers Site meant uploading all of your assets into a KV namespace. Then you would include something like @cloudflare/kv-asset-handler into your Worker so that incoming requests would seamlessly route to keys within the namespace. However, I chose to go a different route.

Knowing that each of my static pages would – at most – have one CSS stylesheet and sometimes only one JavaScript file, I thought it would be pretty nifty to include a build system that would inline these assets into the built HTML page. This would mean that my static HTML pages would have absolutely zero network requests for additional resources, which is generally good news for performance.

And while I would love to say that I did this purely for performance reasons, I must also admit that the lazy-me appreciated that I didn’t have to set up additional URL routing, deal with KV asset uploading, or deal with additional Cache lifespans. A win-win in this case!

The trouble is: avoiding any external assets is not a common goal. In fact, this is very much a side quest I bestowed upon myself. And since no frameworks (that I know of, at least) can do this, I had to assemble my own miniature toolkit to accommodate my needs.

In the end, it proved to be a fun detour and didn’t take very long at all to put together. I incorporated Stylus, my preferred CSS preprocessor, and came up with a rather simple convention to inline CSS and/or JS files where needed. Instead of fancy AST parsers and transformers, I opted to simply read the HTML file contents as strings and search for HTML comments that matched the <!-- inject:(path) --> format:

<!-- submit/index.html -->
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8"/>
    <title>Submit Project | Cloudflare Developer Summer Challenge</title>
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <link rel="icon" type="image/png" href="https://www.cloudflare.com/favicon-128.png">
    <!-- inject:submit/index.styl -->
    <!-- inject:index.js -->
  </head>
  <body>
    <!-- ... -->
  </body>
</html>

In this example, the submit/index.html file is injecting the submit/index.styl, which is its own stylesheet, and the index.js script, which does not live within the `submit` directory because it’s used by other pages. The toolkit looks at both asset paths, converts the Stylus to plain CSS, and then embeds the contents into the appropriate <script> or <style> HTML tags.

Finally, for production builds, the setup will pass the final HTML source through a minifier, which compresses the entire document, including any CSS or JavaScript that was injected. This step is optional, but it never hurts to send fewer bytes down the wire.

Once these pages were built, I was satisfied with the Network Activity panel when loading the main page:

Building the Cloudflare Summer Challenge Application

You can see how the localhost document loads, only dispatching a single request for the favicon-128.png file, which is hosted externally. The three data:image/* requests are Blob URLs and don’t actually transfer network packets. All in all, this means that the HTML document is fully self-contained.

Including HTML into the Worker

Workers can send anything in a Response. Of course, this includes a HTML string. If I wanted to make things incredibly difficult for myself, I could have skipped the /src directory with its own build system, and instead written the HTML, CSS, and JS entirely within a JS string. This would certainly work, but it would be a nightmare to maintain and (for me, at least) be extremely error prone:

API.add('GET', '/', (req, res) => {
  // Note: Worktop APIs
  res.setHeader('Content-Type', 'text/html;charset=utf-8');
  res.send(200, `
    <!doctype html>
    <html lang="en">
      <head>
        <title>Demo | Insanity</title>
        <style>
          body {
            background: #fff;
            color: #424242;
          }
          /* more */
        </style>
        <script>
          $('form').onsubmit = function (ev) {
            ev.preventDefault();
            // ...
          };
        </script>
      </head>
      <body>
        <!-- my entire page content -->
      </body>
    </html>
  `);
});

Thankfully, I planned ahead and already have a build system that produces better HTML files anyway. So now I just needed a way to load those built outputs into my Worker code.

Now for the second half of this project’s toolkit; I find it perfectly acceptable to have a two-step build pipeline. Here, this means that the static site should be built first, followed by building the Worker. I was planning to use TypeScript to author my Worker anyway, which meant I was already going to need a build step – the only change here is that these build steps would now have to be sequential and ordered.

The Worker is built using esbuild, which is an extremely quick JavaScript bundler and compiler that is capable of translating TypeScript, too. It also has its own plugin system, which allowed me the opportunity to add the “inline my HTML files” behavior I needed. The Worker’s build script actually isn’t too intimidating and allows the Worker to `import` HTML files directly. This allows the insanity from above to be safely replaced with this pattern:

import { Router } from 'worktop';
import * as Cache from 'worktop/cache';

// loaded via esbuild plugin
import LANDING from 'index.html';
import RULES from 'rules/index.html';

const API = new Router;

API.add('GET', '/', (req, res) => {
  res.setHeader('Content-Type', 'text/html;charset=utf-8');
  res.setHeader('Cache-Control', 'public,max-age=60');
  res.send(200, LANDING);
});

API.add('GET', '/rules', (req, res) => {
  res.setHeader('Content-Type', 'text/html;charset=utf-8');
  res.setHeader('Cache-Control', 'public,max-age=1800');
  res.send(200, RULES);
});

// ...

// init; w/ Cache API
Cache.listen(API.run);

Of course, this is much cleaner and more sensible in the long run. Clarity makes it easier to identify and extract common patterns into utility functions. I took the opportunity to introduce a render function, the first of many reusable helpers this project would encounter:

// worker/utils.ts
import type { ServerResponse } from 'worktop/response';

export function render(res: ServerResponse, template: string) {
  res.setHeader('Content-Type', 'text/html;charset=UTF-8');
  res.send(200, template);
}

// worker/index.ts
import * as utils from './utils';

API.add('GET', '/', (req, res) => {
  res.setHeader('Cache-Control', 'public,max-age=60');
  return utils.render(res, LANDING);
});

API.add('GET', '/rules', (req, res) => {
  res.setHeader('Cache-Control', 'public,max-age=1800');
  return utils.render(res, RULES);
});

Finally, most of the pages need to dynamically insert values into the HTML markup. For example, the submission form should render with the participant’s name and email address and the landing page is required to reflect the current value of remaining prizes. Much like any other monolithic application, the Worker Site is fully aware – and capable – of injecting these values where needed.

To do this, I standardized the {{ variable }} syntax in my project’s HTML. Each of these variables would be replaced during the Worker request with the appropriate value. Of course, it also requires that each endpoint actually provide the correct information to make the substitutions. With this in mind, I modified the `render` utility and updated the landing page’s route handler:

// worker/utils.ts
import type { KV } from 'worktop/kv';
import type { ServerResponse } from 'worktop/response';

// TypeScript placeholder
// Defines the `DATA` KV binding
declare const DATA: KV.Namespace;

export function render(res: ServerResponse, template: string, values: Record<string, string> = {}) {
  for (let key in values) {
    template = template.replace('{{ ' + key + ' }}', values[key]);
  }
  res.setHeader('Content-Type', 'text/html;charset=UTF-8');
  res.send(200, template);
}
  
export function toCount(): Promise<string> {
  return DATA.get('::remain', 'text').then(v => v || '300+');
}
  
// worker/index.ts
import * as utils from './utils';

API.add('GET', '/', async (req, res) => {
  // Get the "::remain" count from KV
  const count = await utils.toCount();
  
  // Short-term TTL for remaining swag updates
  res.setHeader('Cache-Control', 'public,max-age=60');
  
  // Render the HTML, passing in `count` variable
  return utils.render(res, LANDING, { count });
});

With these changes, the landing page will always check the KV namespace for the latest ::remain value and inject it into the correct location. If you’re interested in checking out the project’s source code, you’ll find that this pattern is used in nearly every HTML response.

Accepting Form Submissions

As expected, this application made heavy use of form submissions. Luckily, the Fetch API offers a variety of built-in body parsers to make retrieval of the data trivial. Additionally, worktop offers a convenience function that will automatically invoke the correct parser based on the request’s Content-Type header. It’s aptly named req.body().

It’s easy to parse and retrieve user data, but it still has to be validated. There are a number of ways to do this, all of which boil down to an input object, a group of rules, and a loop through those rules, collecting any error messages into an errors object. This is precisely what my utils.validate helper does, allowing me to clearly define and manage my rules inline.

Let’s see how this looks within the POST /signup handler, which accepts the initial registration form:

// worker/index.ts
import * as utils from './utils';

API.add('POST', '/signup', async (req, res) => {
  try {
    var input = await req.body<Entry>();
  } catch (err) {
    return toError(res, 400, 'Error parsing input');
  }

  let { email, firstname, lastname } = input || {};
  firstname = String(firstname||'').trim();
  lastname = String(lastname||'').trim();
  email = String(email||'').trim();

  let { errors, invalid } = utils.validate({
    email, firstname, lastname
  }, {
    email(val: string) {
      if (val.length < 1) return 'Required';
      return utils.isEmail(val) || 'Invalid email address';
    },
    firstname(val: string) {
      return val.length > 1 || 'Required';
    },
    lastname(val: string) {
      return val.length > 1 || 'Required';
    }
  });

  if (invalid) {
    return res.send(422, errors);
  }
      
  // The `input` is valid!
  
  return res.send(200, 'TODO: finish me');
});

Only after the data is considered valid can it be stored in KV for future use. For the initial registration, a number of things need to happen:

  1. Ensure that the input.email hasn’t already been registered;
  2. Persist the new registration using the `input` values, identifying each document with the user:<email> key;
  3. Generate and save a unique code for the registration, which will be used later to ensure (a) that unregistered persons cannot submit projects and (b) that a registrant can only submit once;
  4. Send the user an email, containing their unique submission link; and
  5. Render a confirmation page, reminding the user to check their inbox for their link.

It can seem like a lot, but after piecing together a few utility helpers and abstractions, it can actually feel quite approachable:

// worker/index.ts
import * as utils from './utils';
import * as Sparkpost from './emails';
import * as Signup from './signup';
import * as Code from './code';

function toError(res: ServerResponse, status: number, reason: string) {
  return res.send(status, { status, reason });
}

API.add('POST', '/signup', async (req, res) => {
  try {
    var input = await req.body<Entry>();
  } catch (err) {
    return toError(res, 400, 'Error parsing input');
  }
  
  let { email, firstname, lastname } = input || {};
  firstname = String(firstname||'').trim();
  lastname = String(lastname||'').trim();
  email = String(email||'').trim();
  
  // truncated: validation
  
  // Ensure email is not already in use
  let exists = await Signup.find(email);
  if (exists) return toError(res, 400, 'You have already signed up');

  // Generate new `Entry` record
  let entry = Signup.prepare({ email, firstname, lastname });

  // create "user:<unique email>" document
  let isOK = await Signup.save(entry);
  if (!isOK) return toError(res, 500, 'Error persisting entry');

  // create "code:<unique value>" document
  isOK = await Code.save(entry);
  if (!isOK) return toError(res, 500, 'Error saving unique code');

  // dispatch "We received your registration" email
  let sent = await Sparkpost.confirm(entry);
  if (!sent) return toError(res, 500, 'Error sending confirmation email');

  // render "Thank you, check your {{ email }} for next steps" page
  return utils.render(res, CONFIRM, { email: entry.email });
});

A full HTML response is returned, which means that the client-side form handler should be able to see this content and render it directly in the browser window. This can be seen in the following index.js snippet, which was referenced earlier in the submit/index.html as an injected asset:

// (client) index.js

$('form').onsubmit = async function (ev) {
  ev.preventDefault();

  var form = ev.target;
  var res = await fetch(form.action, {
    method: form.method || 'POST',
    body: new FormData(form),
  });

  // truncate: clear existing errors

  if (res.ok) {
    form.reset();
    // Receive HTML response
    let html = await res.text();
    // Force-write the new HTML into this window
    document.documentElement.innerHTML = html;
  } else {
    // truncate: render errors
  }
};

BONUS: Because a full HTML response is returned, and all the client-side <form> elements are semantically correct, the form submission workflow will work with JavaScript disabled! The client-side validation will remain functional, but be a degraded experience – the error dialog won’t pop up and any error messages will not appear beneath their respective form inputs.

Sending Transactional Emails

It should (hopefully) come as no surprise that programmatically sending an email is pretty straightforward these days. We chose to use SparkPost, but practically every service has the same API mechanics:

  • Obtain an API Token
  • Send a POST request to an endpoint with:
    • your API Token as an Authorization header
    • your recipient, sender identity, and text and/or HTML content as the POST body
  • Wait for a 200-level response, or deal with any API errors

Most email-as-a-service providers allow you to define templates, which allow you to replace variables with unique values per email – essentially the same thing our utils.render function is doing with our HTML contents. The benefit of this is that you only have to worry about writing your emails once; then you’re just POST’ing new values to the API endpoint.

SparkPost allows templates to be referenced by a custom name rather than a randomly generated identifier, which makes it easy to track and debug templates over time.

// worker/emails.ts
import type { Entry } from './signup';

// wrangler secret
// @see https://developers.sparkpost.com/api/#header-authentication
declare const SPARKPOST_KEY: string;

/**
 * Assemble the POST request for all SparkPost email triggers
 * @see https://developers.sparkpost.com/api/transmissions/#transmissions-post-send-a-template
 */
async function send(
  templateid: string,
  recipient: Entry,
  values?: Record<string, string>
): Promise<boolean> {
  const res = await fetch('https://api.sparkpost.com/api/v1/transmissions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': SPARKPOST_KEY,
    },
    body: JSON.stringify({
      content: {
        template_id: templateid,
      },
      recipients: [{
        address: {
          email: recipient.email,
          name: recipient.firstname + ' ' + recipient.lastname,
        },
        substitution_data: values || {},
      }]
    })
  });

  let data = await res.json() as {
    results: {
      id: string;
      total_rejected_recipients: number;
      total_accepted_recipients: number;
    }
  };

  return res.ok && data.results.total_accepted_recipients === 1;
}
    
/**
 * Confirming user's signup
 * Sending unique submission form
 */
export function confirm(entry: Entry): Promise<boolean> {
  return send('devchallenge-confirm', entry, {
    firstname: entry.firstname,
    code: entry.code,
  });
}

The above snippet includes the entire POST request formatter – there’s nearly more type-hinting than there is code! Also shown is an example confirm method, which is responsible for sending the unique submission link to the newly-registered user. You’ll notice that firstname and code are the injected variables, required by the “devchallenge-confirm” template.

Overall Performance

I’d call this a success!

Even though this certainly wasn’t my first Worker project – and definitely won’t be my last – I’m consistently amazed how much the Workers runtime lets me get away with. I mean, if you could only take away two points from this article, they should be:

  1. I was able to build a moderately complex application, from scratch, while incorporating a Cache layer, a globally-replicated storage layer, and a super-performant JS runtime, all of which live under the same roof.
  2. I (probably) spent more time fussing with a custom client-side build pipeline than I did piecing together the mission-critical API form handlers.

The cherry on top: Should this contest go viral and lure in millions of visitors, I’d only be paying a couple of dollars at the end of the month. Obviously I have a bias here, but it’s pretty amazing really.

Finally, performance-wise, this may justify the time spent fiddling with the HTML build output:

Building the Cloudflare Summer Challenge Application

Lessons Learned

As I alluded to earlier, if I were to rebuild this application, or if I were to add more to it down the road, I would replace the Workers Site architecture with a Pages project and deploy a Worker in front of it for my API requirements and dynamic KV injections.

Since the static assets would no longer be embedded into the Worker’s source, I would need to replace the `utils.render` approach with another utility that fetches the URL from Pages (which becomes my “origin server”) and then uses HTMLRewriter to inject the variables. Also, not that I was anywhere near the 1MB size limit, but the largest contributor to my Worker’s bytesize would disappear.

But, more significantly, this refactor would also reduce my total tooling since the majority of the project’s complexity lies in the custom build system for the frontend assets. In other words, the entire /src directory could have been built and deployed like a normal static website, which would allow me to make use of existing frameworks and/or toolkits instead of taking my self-imposed detour. There would have been no need to create a custom frontend toolkit and its bridge to get the static assets loaded into my Worker.

However, none of this is to say that Workers Sites was a bad approach for this application. It’s quite the contrary! This is all to highlight the flexibility of Workers Sites – and the Workers platform at large. Cloudflare Pages exists so that I, the developer, can lean into existing, well-traveled paths and let the experts worry about toolkits, build pipelines, and deployments… But that doesn’t prevent you, the resident expert, from customizing every aspect if that’s your desire.

Authenticating and authorizing Amazon MQ users with Lightweight Directory Access Protocol

Post Syndicated from Talia Nassi original https://aws.amazon.com/blogs/compute/authenticating-and-authorizing-amazon-mq-users-with-lightweight-directory-access-protocol/

This post is written by Dominic Gagné and Mithun Mallick.

Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that simplifies setting up and operating message brokers in the AWS Cloud. Integrating an Amazon MQ broker with a Lightweight Directory Access Protocol (LDAP) server allows you to manage credentials and permissions for users in a single location. There is also the added benefit of not requiring a message broker reboot for new authorization rules to take effect.

This post explores concepts around Amazon MQ’s authentication and authorization model. It covers the steps to set up Amazon MQ access for a Microsoft Active Directory user.

Authentication and authorization

Amazon MQ for ActiveMQ uses native ActiveMQ authentication to manage user permissions by default. Users are created within Amazon MQ to allow broker access, and are mapped to read, write, and admin operations on various destinations. This local user model is referred to as the simple authentication type.

As an alternative to simple authentication, you can maintain broker access control authorization rules within an LDAP server on a per-destination or destination set basis. Wildcards are also supported for rules that apply to multiple destinations.

The LDAP integration feature uses the ActiveMQ standard Java Authentication and Authorization Service (JAAS) plugin. Additional details on the plugin can be found within ActiveMQ security documentation. Authentication details are defined as part of the ldapServerMetadata attribute. Authorization settings are configured as part of the cachedLDAPAuthorizationMap node in the broker’s activemq.xml configuration.

Here is an overview of the integration:

overview graph

  1. Client requests access to a queue or topic.
  2. Authenticate and authorize the client via JAAS.
  3. Grant or deny access to the specified queue or topic.
  4. If access is granted, allow the client to read, write, or create.

Integration with LDAP

ActiveMQ integration with LDAP sets up a secure LDAP access connection between an Amazon MQ for ActiveMQ broker and a Microsoft Active Directory server. You can also use other implementations of LDAP as the directory server, such as OpenLDAP.

Amazon MQ encrypts all data between a broker and LDAP server, and enforces secure LDAP (LDAPS) via public certificates. Unsecured LDAP on port 389 is not supported; traffic must communicate via the secure LDAP port 636. In this example, a Microsoft Active Directory server has LDAPS configured with a public certificate. To set up a Simple AD server with LDAPS and a public certificate, read this blog post.

To integrate with a Microsoft Active Directory server:

  1. Configure users in the Microsoft Active Directory directory information tree (DIT) structure for client authentication to the broker.
  2. Configure destinations in the Microsoft Active Directory DIT structure to allow destination-level authorization for individual users or entire groups.
  3. Create an ActiveMQ configuration to allow authorization via LDAP.
  4. Create a broker and perform a basic test to validate authentication and authorization access for a test user.

Configuring Microsoft Active Directory for client authentication

Create the hierarchy structure within the Microsoft Active Directory DIT to provision users. The server must be part of the domain and have a domain admin user. The domain admin user is needed in the broker configuration.

In this DIT, the domain corp.example.com is used, though you can use any domain name. An organizational unit (OU) named corp exists under the root. ActiveMQ related entities are defined under the corp OU.

This OU is the user base that the broker uses to search for users when performing authentication operations. Represented as LDIF, the user base is:

OU=Users,OU=corp,DC=corp,DC=example,DC=com

To create this OU and user:

  1. Log on to the Windows Server using a domain admin user.
  2. Open Active Directory Users and Computers by running dsa.msc from the command line.
  3. Choose corp and create an OU named Users, located within corp.
  4. In the Users OU, create a new user named mquser.
  5. Deselect the option to change password on next logon.
  6. Finally, choose Next to create the user.

Because the ActiveMQ source code hardcodes the attribute name for users to uid, make sure that each user has this attribute set. For simplicity, use the user’s connection user name. For more information, see the ActiveMQ source code and knowledgebase article.

Users must belong to the amazonmq-console-admins group to enable console access. Members of this group can create and delete any destinations via the console, regardless of other authorization rules in place. Access to this group should be granted sparingly.

Configuring Microsoft Active Directory for authorization

Now that our broker knows where to search for users, configure the DIT such that the broker can search for user permissions relating to authorization.

Back in the root OU corp where the Users OU was previously created:

  1. Create a new OU named Destination.
  2. Within the Destination OU, create an OU for each type of destination that ActiveMQ offers. These are Queue, Topic, and Temp.

For each destination that you want to allow authorization:

  1. Add an OU under the type of destination.
  2. Provide the name of the destination as the name of the OU. Wildcards are also supported, as found in ActiveMQ documentation.

This example shows three OUs that require authorization. These are DEMO.MYQUEUE, DEMO.MYSECONDQUEUE, and DEMO.EVENTS.$. The queue search base, which provides authorization information for destinations of type Queue, has the following location in the DIT:

OU=Queue,OU=Destination,OU=corp,DC=corp,DC=example,DC=com

Note the DEMO.EVENTS.$ wildcard queue name. Permissions in that OU apply to all queue names matching that wildcard.

Within each OU representing a destination or wildcard destination set, create three security groups. These groups relate to specific permissions on the relevant destination, using the same admin, read, and write permissions rules as ActiveMQ documentation describes.

There is a conflict with the group name “admin”. Legacy “pre-Windows 2000” rules do not allow groups to share the same name, even in different locations of the DIT. The value in the “pre-Windows 2000” text box does not impact the setup but it must be globally unique. In the following screenshot, a uuid suffix is appended to each admin group name.

Adding a user to the admin security group for a particular destination enables the user to create and delete that topic. Adding them to the read security group enables them to read from the destination, and adding them to the write group enables them to write to the destination.

In this example, mquser is added to the admin and write groups for the queue DEMO.MYQUEUE. Later, you test this user’s authorization permissions to confirm that the integration works as expected.

In addition to adding individual users to security group permissions, you can add entire groups. Because ActiveMQ hardcodes attribute names for groups, ensure that the group has the object class groupOfNames, as shown in the ActiveMQ source code.

To do this, follow the same process as with the UID for users. See the knowledgebase article for additional information.

The LDAP server is now compatible with ActiveMQ. Next, create a broker and configure LDAP values based on the LDAP deployment.

Creating a configuration to enable authorization via LDAP

Authorization rules in ActiveMQ are sourced from the broker’s activemq.xml configuration file.

  1. Begin by navigating to the Amazon MQ console to create a configuration with the Authentication Type set as LDAP.
  2. Edit this configuration to include the cachedLDAPAuthorizationMap, which is the node used to configure the locations in the LDAP DIT where authorization rules are stored. For more information on this topic, visit ActiveMQ documentation.
  3. Within the cachedLDAPAuthorizationMap in the broker’s configuration, add the location of the OUs related to authorization.
  4. Under the authorizationPlugin tag, enter a cachedLDAPAuthorizationMap node.
  5. Do not specify connectionUrl, connectionUsername, or connectionPassword. These values are filled in using the LDAP Server Metadata specified when creating the broker. If you specify these values, they are ignored. An example cachedLDAPAuthorizationMap is presented in the following image:

Creating a broker and testing Active Directory integration

Start by creating a broker using the default durability optimized storage.

  1. Select a Single-instance broker. You can use Active/standby broker or Network of Brokers if required.
  2. Choose Next.
  3. In the next page, under Configure Settings, set a name for the broker.
  4. Select an instance type.
  5. In the ActiveMQ Access section, select LDAP Authentication & Authorization. The input fields display parameters for connecting with the LDAP server. The service account must be associated with a user that can bind to your LDAP server. The server does not need to be public but the domain name must be publicly resolvable.
  6. The next section of the page includes the search configuration for Active Directory users who are authorized to access the queues and topics. The values depend on the org structure created in the Active Directory setup. These values are based on your DIT.
  7. Once users and role search metadata are provided, configure the broker to launch with the configuration created in the previous section (named my-ldap-authorization-conf). Do this by selecting the Additional Settings drop-down and choosing the correct configuration file.
  8. Use the configuration where you defined cachedLDAPAuthorizationMap. This enables the broker to enforce read/write/admin permissions for client connections to the broker. These are defined in the LDAP server’s Destination OU.

Once the broker is running, authentication and authorization rules are enforced using the users and authorization rules defined in the configured LDAP server. During the Microsoft Active Directory setup, mquser is added to the admin and write groups for the queue DEMO.MYQUEUE. This means mquser can create and write to the queue DEMO.MYQUEUE but cannot perform any actions on other queues.

Test this by writing to the queue:

The client can connect to the broker and send messages to the queue DEMO.MYQUEUE using the credentials for mquser.
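
One way to run this test from Python is with the stomp.py library against the broker’s STOMP+SSL endpoint on port 61614. This is a hedged sketch; the broker endpoint and password are placeholders, and your broker’s security group must allow the connection:

import stomp

broker_host = "b-1234abcd-example.mq.us-east-1.amazonaws.com"  # placeholder endpoint

conn = stomp.Connection(host_and_ports=[(broker_host, 61614)])
conn.set_ssl(for_hosts=[(broker_host, 61614)])

# Authenticate as the Active Directory user created earlier
conn.connect("mquser", "<mquser-password>", wait=True)

# mquser has write permission on DEMO.MYQUEUE, so this send succeeds;
# writing to a queue without that permission would be rejected
conn.send(destination="/queue/DEMO.MYQUEUE", body="hello from mquser")
conn.disconnect()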

Conclusion

This post shows the steps to integrate an LDAP server with an Amazon MQ broker. After the integration, you can manage authentication and authorization rules for your users, without rebooting the broker.

For more serverless learning resources, visit Serverless Land.

 

Understanding VPC links in Amazon API Gateway private integrations

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/understanding-vpc-links-in-amazon-api-gateway-private-integrations/

This post is written by Jose Eduardo Montilla Lugo, Security Consultant, AWS.

A VPC link is a resource in Amazon API Gateway that allows for connecting API routes to private resources inside a VPC. A VPC link acts like any other integration endpoint for an API and is an abstraction layer on top of other networking resources. This helps simplify configuring private integrations.

This post looks at the underlying technologies that make VPC links possible. I further describe what happens under the hood when a VPC link is created for both REST APIs and HTTP APIs. Understanding these details can help you better assess the features and benefits provided by each type. This also helps you make better architectural decisions when designing API Gateway APIs.

This article assumes you have experience in creating APIs in API Gateway. The main purpose is to provide a deeper explanation of the technologies that make private integrations possible. For more information on creating API Gateway APIs with private integrations, refer to the Amazon API Gateway documentation.

Overview

AWS Hyperplane and AWS PrivateLink

There are two types of VPC links: VPC links for REST APIs and VPC links for HTTP APIs. Both provide access to resources inside a VPC. They are built on top of an internal AWS service called AWS Hyperplane. This is an internal network virtualization platform, which supports inter-VPC connectivity and routing between VPCs. Internally, Hyperplane supports multiple network constructs that AWS services use to connect with the resources in customers’ VPCs. One of those constructs is AWS PrivateLink, which is used by API Gateway to support private APIs and private integrations.

AWS PrivateLink allows access to AWS services and services hosted by other AWS customers, while maintaining network traffic within the AWS network. Since the service is exposed via a private IP address, all communication is virtually local and private. This reduces the exposure of data to the public internet.

In AWS PrivateLink, a VPC endpoint service is a networking resource in the service provider side that enables other AWS accounts to access the exposed service from their own VPCs. VPC endpoint services allow for sharing a specific service located inside the provider’s VPC by extending a virtual connection via an elastic network interface in the consumer’s VPC.

An interface VPC endpoint is a networking resource in the service consumer side, which represents a collection of one or more elastic network interfaces. This is the entry point that allows for connecting to services powered by AWS PrivateLink.

Comparing private APIs and private integrations

Private APIs are different from private integrations. Both use AWS PrivateLink, but they are used in different ways.

A private API means that the API endpoint is reachable only through the VPC. Private APIs are accessible only from clients within the VPC or from clients that have network connectivity to the VPC, for example, on-premises clients connected via AWS Direct Connect. To enable private APIs, an AWS PrivateLink connection is established between the customer’s VPC and API Gateway’s VPC.

Clients connect to private APIs via an interface VPC endpoint, which routes requests privately to the API Gateway service. The traffic is initiated from the customer’s VPC and flows through the AWS PrivateLink to the API Gateway’s AWS account:

Consumer connected to provider through VPC Link

When the VPC endpoint for API Gateway is enabled, all requests to API Gateway APIs made from inside the VPC go through the VPC endpoint. This is true for private APIs and public APIs. Public APIs are still accessible from the internet and private APIs are accessible only from the interface VPC endpoint. Currently, you can only configure REST APIs as private.

A private integration means that the backend endpoint resides within a VPC and it’s not publicly accessible. With a private integration, the API Gateway service can access the backend endpoint in the VPC without exposing the resources to the public internet.

A private integration uses a VPC link to encapsulate connections between API Gateway and targeted VPC resources. VPC links allow access to HTTP/HTTPS resources within a VPC without having to deal with advanced network configurations. Both REST APIs and HTTP APIs offer private integrations but only VPC links for REST APIs use AWS PrivateLink internally.

VPC links for REST APIs

When you create a VPC link for a REST API, a VPC endpoint service is also created, making the AWS account a service provider. The service consumer in this case is API Gateway’s account. The API Gateway service creates an interface VPC endpoint in its account for the Region where the VPC link is being created. This establishes an AWS PrivateLink connection from the API Gateway VPC to your VPC. The target of the VPC endpoint service and the VPC link is a Network Load Balancer, which forwards requests to the target endpoints:
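
As a sketch, you can create this type of VPC link with the AWS SDK for JavaScript (v2) by passing the ARN of the Network Load Balancer; the name and ARN below are placeholders:

const AWS = require('aws-sdk');
const apigateway = new AWS.APIGateway({ region: 'us-east-1' });

async function createRestApiVpcLink() {
  // The target of a REST API VPC link must be a Network Load Balancer
  const result = await apigateway.createVpcLink({
    name: 'my-rest-vpc-link',                     // placeholder name
    description: 'VPC link to the private NLB',
    targetArns: ['arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/1234567890abcdef'] // placeholder ARN
  }).promise();

  console.log('VPC link ID:', result.id);
}

createRestApiVpcLink().catch(console.error);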

VPC Link for REST APIs

Before establishing any AWS PrivateLink connection, the service provider must approve the connection request. Requests from the API Gateway accounts are automatically approved in the VPC link creation process. This is because the AWS accounts that serve API Gateway for each Region are allow-listed in the VPC endpoint service.

When a Network Load Balancer is associated with an endpoint service, the traffic to the targets is sourced from the NLB. The targets receive the private IP addresses of the NLB, not the IP addresses of the service consumers.

This is helpful when configuring the security groups of the instances behind the NLB for two reasons. First, you do not know the IP address range of the VPC that’s connecting to the service. Second, NLB’s elastic network interfaces do not have any security groups attached. This means that they cannot be used as a source in the security groups of the targets. To learn more, read how to find the internal IP addresses assigned to an NLB.

To create a private API with a private integration, two AWS PrivateLink connections are established. The first is from a customer VPC to API Gateway’s VPC so that clients in the VPC can reach the API Gateway service endpoint. The other is from API Gateway’s VPC to the customer VPC so that API Gateway can reach the backend endpoint. Here is an example architecture:

Private API with private integrations

VPC links for HTTP APIs

HTTP APIs are the latest type of API Gateway API, and are cheaper and faster than REST APIs. VPC links for HTTP APIs do not require the creation of VPC endpoint services, so a Network Load Balancer is not necessary. With VPC links for HTTP APIs, you can now use an ALB or an AWS Cloud Map service to target private resources. This allows for more flexibility and scalability in the configuration required on both sides.

Configuring multiple integration targets is also easier with VPC links for HTTP APIs. For example, VPC links for REST APIs can be associated only with a single NLB. Configuring multiple backend endpoints requires some workarounds such as using multiple listeners on the NLB, associated with different target groups.

In contrast, a single VPC link for HTTP APIs can be associated with multiple backend endpoints without additional configuration. Also, with the new VPC link, customers with containerized applications can use ALBs instead of NLBs and take advantage of layer-7 load-balancing capabilities and other features such as authentication and authorization.
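
As a sketch, creating a VPC link for an HTTP API with the AWS SDK for JavaScript (v2) only requires a name, subnets, and security groups; the IDs below are placeholders:

const AWS = require('aws-sdk');
const apigatewayv2 = new AWS.ApiGatewayV2({ region: 'us-east-1' });

async function createHttpApiVpcLink() {
  // HTTP API VPC links reference subnets and security groups, not a load balancer
  const result = await apigatewayv2.createVpcLink({
    Name: 'my-http-vpc-link',                          // placeholder name
    SubnetIds: ['subnet-0abc1234', 'subnet-0def5678'], // placeholder subnets
    SecurityGroupIds: ['sg-0123456789abcdef0']         // placeholder security group
  }).promise();

  console.log('VPC link ID:', result.VpcLinkId);
}

createHttpApiVpcLink().catch(console.error);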

AWS Hyperplane supports multiple types of network virtualization constructs, including AWS PrivateLink. VPC links for REST APIs rely on AWS PrivateLink. However, VPC links for HTTP APIs use VPC-to-VPC NAT, which provides a higher level of abstraction.

The new construct is conceptually similar to a tunnel between both VPCs. These are created via elastic network interface attachments on the provider and consumer ends, which are both managed by AWS Hyperplane. This tunnel allows a service hosted in the provider’s VPC (API Gateway) to initiate communications to resources in a consumer’s VPC. API Gateway has direct connectivity to these elastic network interfaces and can reach the resources in the VPC directly from its own VPC. Connections are permitted according to the configuration of the security groups attached to the elastic network interfaces on the customer side.

Although VPC-to-VPC NAT seems to provide the same functionality as AWS PrivateLink, the two constructs differ in implementation details. A service endpoint in AWS PrivateLink allows for multiple connections to a single endpoint (the NLB), whereas the new approach allows a source VPC to connect to multiple destination endpoints. As a result, a single VPC link can integrate with multiple Application Load Balancers, Network Load Balancers, or resources registered with an AWS Cloud Map service on the customer side:

VPC Link for HTTP APIs

This approach is similar to the way that other services such as Lambda access resources inside customer VPCs.

Conclusion

This post explores how VPC links can set up API Gateway APIs with private integrations. VPC links for REST APIs encapsulate AWS PrivateLink resources such as interface VPC endpoints and VPC endpoint services to configure connections from API Gateway’s VPC to the customer’s VPC to access private backend endpoints.

VPC links for HTTP APIs use a different construct in the AWS Hyperplane service to provide API Gateway with direct network access to VPC private resources. Understanding the differences between the two is important when adding private integrations as part of your API architecture design.

For more serverless learning resources, visit Serverless Land.

Building well-architected serverless applications: Building in resiliency – part 2

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-building-in-resiliency-part-2/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the introduction post for a table of contents and explanation of the example application.

Reliability question REL2: How do you build resiliency into your serverless application?

This post continues part 1 of this reliability question. In part 1, I cover managing failures using retries, exponential backoff, and jitter. I explain how DLQs can isolate failed messages. I show how to use state machines to orchestrate long-running transactions rather than handling these in application code.

Required practice: Manage duplicate and unwanted events

Duplicate events can occur when a request is retried or multiple consumers process the same message from a queue or stream. A duplicate can also happen when a request is sent twice at different time intervals with the same parameters. Design your applications to process multiple identical requests to have the same effect as making a single request.

Idempotency refers to the capacity of an application or component to identify repeated events and prevent duplicated, inconsistent, or lost data. This means that receiving the same event multiple times does not change the result beyond the first time the event was received. An idempotent application can, for example, handle multiple identical refund operations. The first refund operation is processed. Any further refund requests to the same customer with the same payment reference should not be processed again.

When using AWS Lambda, you can make your function idempotent. The function’s code must properly validate input events and identify if the events were processed before. For more information, see “How do I make my Lambda function idempotent?”

When processing streaming data, your application must anticipate and appropriately handle processing individual records multiple times. There are two primary reasons why records may be delivered more than once to your Amazon Kinesis Data Streams application: producer retries and consumer retries. For more information, see “Handling Duplicate Records”.

Generate unique attributes to manage duplicate events at the beginning of the transaction

Create, or use an existing, unique identifier at the beginning of a transaction to ensure idempotency. These identifiers are also known as idempotency tokens. A number of Lambda triggers include a unique identifier as part of the event, such as the messageId on an SQS message or the request ID in the API Gateway request context.

You can also create your own identifiers. These can be business-specific, such as transaction ID, payment ID, or booking ID. You can use an opaque random alphanumeric string, unique correlation identifiers, or the hash of the content.

A Lambda function, for example, can use these identifiers to check whether the event has been previously processed.

Depending on the final destination, duplicate events might write to the same record with the same content instead of generating a duplicate entry. This may therefore not require additional safeguards.

Use an external system to store unique transaction attributes and verify for duplicates

Lambda functions can use Amazon DynamoDB to store and track transactions and idempotency tokens to determine if the transaction has been handled previously. DynamoDB Time to Live (TTL) allows you to define a per-item timestamp to determine when an item is no longer needed. This helps to limit the storage space used. Base the TTL on the event source, for example, the message retention period for SQS.

Using DynamoDB to store idempotent tokens

You can also use DynamoDB conditional writes to ensure a write operation only succeeds if an item attribute meets one or more expected conditions. For example, you can use this to fail a refund operation if a payment reference has already been refunded. This signals to the application that it is a duplicate transaction. The application can then catch this exception and return the same result to the customer as if the refund was processed successfully.
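
As an illustration, the following sketch records an idempotency token with a DynamoDB conditional write before processing an event; the table name, key, and TTL attribute are hypothetical:

const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB.DocumentClient();

async function processOnce(idempotencyToken, processFn) {
  const ttlInSeconds = 24 * 60 * 60; // keep tokens for one day (hypothetical value)

  try {
    // The conditional write fails if the token has already been stored
    await dynamodb.put({
      TableName: 'IdempotencyTable',              // hypothetical table
      Item: {
        id: idempotencyToken,
        expiresAt: Math.floor(Date.now() / 1000) + ttlInSeconds // TTL attribute
      },
      ConditionExpression: 'attribute_not_exists(id)'
    }).promise();
  } catch (err) {
    if (err.code === 'ConditionalCheckFailedException') {
      // Duplicate event: return the previous outcome instead of processing again
      console.log(`Duplicate event detected for token ${idempotencyToken}`);
      return { duplicate: true };
    }
    throw err;
  }

  return processFn();
}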

Third-party APIs can also support idempotency directly. For example, Stripe allows you to add an Idempotency-Key: <key> header to the request. Stripe saves the resulting status code and body of the first request made for any given idempotency key, regardless of whether it succeeded or failed. Subsequent requests with the same key return the same result.

Validate events using a pre-defined and agreed upon schema

Implicitly trusting data from clients, external sources, or machines could lead to malformed data being processed. Use a schema to validate that your event conforms to what you are expecting. Validate the event against the schema within your application code or at the event source when applicable. Events not adhering to your schema should be discarded.

For API Gateway, I cover validating incoming HTTP requests against a schema in “Implementing application workload security – part 1”.

Amazon EventBridge rules match event patterns. EventBridge provides schemas for all events that are generated by AWS services. You can create or upload custom schemas or infer schemas directly from events on an event bus. You can also generate code bindings for event schemas.

SNS supports message filtering. This allows a subscriber to receive a subset of the messages sent to the topic using a filter policy. For more information, see the documentation.

JSON Schema is a tool for validating the structure of JSON documents. There are a number of implementations available.
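
For example, a Lambda function could validate incoming events with the ajv JSON Schema implementation; the schema below is a hypothetical example:

const Ajv = require('ajv');
const ajv = new Ajv();

// Hypothetical schema describing the expected event shape
const refundEventSchema = {
  type: 'object',
  properties: {
    paymentReference: { type: 'string' },
    amount: { type: 'number', minimum: 0 }
  },
  required: ['paymentReference', 'amount'],
  additionalProperties: false
};

const validate = ajv.compile(refundEventSchema);

exports.handler = async (event) => {
  if (!validate(event)) {
    // Discard events that do not conform to the agreed schema
    console.error('Invalid event:', validate.errors);
    throw new Error('Event failed schema validation');
  }

  // Continue processing the validated event
  return { status: 'accepted' };
};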

Best practice: Consider scaling patterns at burst rates

Load testing your serverless application allows you to monitor the performance of an application before it is deployed to production. Serverless applications can be simpler to load test, thanks to the automatic scaling built into many of the services. For more information, see “How to design Serverless Applications for massive scale”.

In addition to your baseline performance, consider evaluating how your workload handles initial burst rates. This ensures that your workload can sustain burst rates while scaling to meet possibly unexpected demand.

Perform load tests using a burst strategy with random intervals of idleness

Perform load tests using a burst of requests for a short period of time. Also introduce burst delays to allow your components to recover from unexpected load. This allows you to future-proof the workload for key events when you do not know peak traffic levels.

There are a number of AWS Marketplace and AWS Partner Network (APN) solutions available for performance testing, including Gatling FrontLine, BlazeMeter, and Apica.

In “Regulating inbound request rates – part 1”, I cover running a performance test suite using Gatling, an open source tool.

Gatling performance results

AWS has a network stress testing policy that defines which high-volume network tests are allowed. Tests that purposefully attempt to overwhelm the target and/or infrastructure are considered distributed denial of service (DDoS) tests and are prohibited. For more information, see “Amazon EC2 Testing Policy”.

Review service account limits with combined utilization across resources

AWS accounts have default quotas, also referred to as limits, for each AWS service. These are generally Region-specific. You can request increases for some limits while other limits cannot be increased. Service Quotas is an AWS service that helps you manage your limits for many AWS services. Along with looking up the values, you can also request a limit increase from the Service Quotas console.

Service Quotas dashboard

As these limits are shared within an account, review the combined utilization across resources including the following:

  • Amazon API Gateway: number of requests per second across all APIs. (link)
  • AWS AppSync: throttle rate limits. (link)
  • AWS Lambda: function concurrency reservations and pool capacity to allow other functions to scale. (link)
  • Amazon CloudFront: requests per second per distribution. (link)
  • AWS IoT Core message broker: concurrent requests per second. (link)
  • Amazon EventBridge: API requests and target invocations limit. (link)
  • Amazon Cognito: API limits. (link)
  • Amazon DynamoDB: throughput, indexes, and request rates limits. (link)

Evaluate key metrics to understand how workloads recover from bursts

There are a number of key Amazon CloudWatch metrics to evaluate and alert on to understand whether your workload recovers from bursts.

  • AWS Lambda: Duration, Errors, Throttles, ConcurrentExecutions, UnreservedConcurrentExecutions. (link)
  • Amazon API Gateway: Latency, IntegrationLatency, 5xxError, 4xxError. (link)
  • Application Load Balancer: HTTPCode_ELB_5XX_Count, RejectedConnectionCount, HTTPCode_Target_5XX_Count, UnHealthyHostCount, LambdaInternalError, LambdaUserError. (link)
  • AWS AppSync: 5XX, Latency. (link)
  • Amazon SQS: ApproximateAgeOfOldestMessage. (link)
  • Amazon Kinesis Data Streams: ReadProvisionedThroughputExceeded, WriteProvisionedThroughputExceeded, GetRecords.IteratorAgeMilliseconds, PutRecord.Success, PutRecords.Success (if using Kinesis Producer Library), GetRecords.Success. (link)
  • Amazon SNS: NumberOfNotificationsFailed, NumberOfNotificationsFilteredOut-InvalidAttributes. (link)
  • Amazon Simple Email Service (SES): Rejects, Bounces, Complaints, Rendering Failures. (link)
  • AWS Step Functions: ExecutionThrottled, ExecutionsFailed, ExecutionsTimedOut. (link)
  • Amazon EventBridge: FailedInvocations, ThrottledRules. (link)
  • Amazon S3: 5xxErrors, TotalRequestLatency. (link)
  • Amazon DynamoDB: ReadThrottleEvents, WriteThrottleEvents, SystemErrors, ThrottledRequests, UserErrors. (link)
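
As an example, the following sketch uses the AWS SDK for JavaScript (v2) to create an alarm on Lambda throttling for a single function; the function name, threshold, and SNS topic are placeholders:

const AWS = require('aws-sdk');
const cloudwatch = new AWS.CloudWatch();

async function createThrottleAlarm() {
  await cloudwatch.putMetricAlarm({
    AlarmName: 'my-function-throttles',            // placeholder alarm name
    Namespace: 'AWS/Lambda',
    MetricName: 'Throttles',
    Dimensions: [{ Name: 'FunctionName', Value: 'my-function' }], // placeholder function
    Statistic: 'Sum',
    Period: 60,                                    // one-minute periods
    EvaluationPeriods: 1,
    Threshold: 1,                                  // alarm on any throttling
    ComparisonOperator: 'GreaterThanOrEqualToThreshold',
    AlarmActions: ['arn:aws:sns:us-east-1:123456789012:ops-alerts'] // placeholder topic
  }).promise();
}

createThrottleAlarm().catch(console.error);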

Conclusion

This post continues from part 1 and looks at managing duplicate and unwanted events with idempotency and an event schema. I cover how to consider scaling patterns at burst rates by managing account limits and show relevant metrics to evaluate.

Build resiliency into your workloads. Ensure that applications can withstand partial and intermittent failures across components that may only surface in production. In the next post in the series, I cover the performance efficiency pillar from the Well-Architected Serverless Lens.

For more serverless learning resources, visit Serverless Land.

Choosing between AWS services for streaming data workloads

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/choosing-between-aws-services-for-streaming-data-workloads/

Streaming data has traditionally been complex to manage due to the large amounts of data arriving from many separate sources. Managing fluctuations in traffic and durably persisting messages as they arrive is a non-trivial task. Using a serverless approach, AWS provides a number of services that help manage large numbers of messages, alleviating much of the infrastructure burden.

In this blog post, I compare several AWS services and how to choose between these options in streaming data workloads.

Comparing Amazon Kinesis Data Streams with Amazon SQS queues

While you can use both services to decouple data producers and consumers, each is suited to different types of workload. Amazon SQS is primarily used as a message queue to store messages durably between distributed services. Amazon Kinesis is primarily intended to manage streaming big data.

Kinesis supports ordering of records and the ability for multiple consumers to read messages from the same stream concurrently. It also allows consumers to replay messages from up to 7 days in the past. Scaling in Kinesis is based upon shards and you must reshard to scale a data stream up or down.

With SQS, consumers pull data from a queue and it’s hidden from other consumers until processed successfully (known as a visibility timeout). Once a message is processed, it’s deleted from the queue. A queue may have multiple consumers but they all receive separate batches of messages. Standard queues do not provide an ordering guarantee but scaling in SQS is automatic.

Here is how the two services compare:

  • Ordering guarantee – Kinesis Data Streams: yes, by shard. SQS: no for standard queues; FIFO queues support ordering by group ID.
  • Scaling – Kinesis Data Streams: resharding required to provision throughput. SQS: automatic for standard queues; up to 30,000 messages per second for FIFO queues (more details).
  • Exactly-once delivery – Kinesis Data Streams: no. SQS: no for standard queues; yes for FIFO queues.
  • Consumer model – Kinesis Data Streams: multiple concurrent consumers. SQS: single consumer.
  • Configurable message delay – Kinesis Data Streams: no. SQS: up to 15 minutes.
  • Ability to replay messages – Kinesis Data Streams: yes. SQS: no.
  • Encryption – Kinesis Data Streams: yes. SQS: yes.
  • Payload maximum – Kinesis Data Streams: 1 MB per record. SQS: 256 KB per message.
  • Message retention period – Kinesis Data Streams: 24 hours (default) to 365 days (additional charges apply). SQS: 1 minute to 14 days; 4 days is the default.
  • Pricing model – Kinesis Data Streams: per shard hour plus PUT payload unit per million; additional charges for some features. SQS: no minimum; $0.24-$0.595 per million messages, depending on Region and queue type.
  • AWS Free Tier included – Kinesis Data Streams: no. SQS: yes, 1 million messages per month – see details.
  • Typical use cases – Kinesis Data Streams: real-time metrics/reporting, real-time data analytics, log and data feed processing, stream processing. SQS: application integration, asynchronous processing, batching messages/smoothing throughput.
  • Integration with Kinesis Data Analytics – Kinesis Data Streams: yes. SQS: no.
  • Integration with Kinesis Data Firehose – Kinesis Data Streams: yes. SQS: no.

While some functionality of both services is similar, Kinesis is often a better fit for many streaming workloads. Kinesis has a broader range of options for ingesting large amounts of data, such as the Kinesis Producer Library and Kinesis Aggregation Library. You can also use the PutRecords API to send up to 500 records (up to a maximum 5 MiB) per request.
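
As a sketch, a PutRecords call with the AWS SDK for JavaScript (v2) batches records in a single request; the stream name and partition key are placeholders:

const AWS = require('aws-sdk');
const kinesis = new AWS.Kinesis();

async function sendBatch(events) {
  // Each PutRecords request can carry up to 500 records (5 MiB total)
  const params = {
    StreamName: 'my-data-stream',                  // placeholder stream
    Records: events.map((evt) => ({
      Data: JSON.stringify(evt),
      PartitionKey: evt.deviceId                   // placeholder partition key
    }))
  };

  const result = await kinesis.putRecords(params).promise();
  console.log(`Failed records: ${result.FailedRecordCount}`);
  return result;
}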

Additionally, it has powerful integrations not available to SQS. Amazon Kinesis Data Analytics allows you to transform and analyze streaming data with Apache Flink. You can also use streaming SQL to run windowed queries on live data in near-real time. You can also use Amazon Kinesis Data Firehose as a consumer for Amazon Kinesis Data Streams, which is also not available to SQS queues.

Choosing between Kinesis Data Streams and Kinesis Data Firehose

Both of these services are part of Kinesis but they have different capabilities and target use-cases. Kinesis Data Firehose is a fully managed service that can ingest gigabytes of data from a variety of producers. When Kinesis Data Streams is the source, it automatically scales up or down to match the volume of data. It can optionally process records as they arrive with AWS Lambda and deliver batches of records to services like Amazon S3 or Amazon Redshift. Here’s how the service compares with Kinesis Data Streams:

  • Scaling – Kinesis Data Streams: resharding required. Kinesis Data Firehose: automatic.
  • Supports compression – Kinesis Data Streams: no. Kinesis Data Firehose: yes (GZIP, ZIP, and SNAPPY).
  • Latency – Kinesis Data Streams: ~200 ms per consumer (~70 ms if using enhanced fan-out). Kinesis Data Firehose: seconds (depends on buffer size configuration); minimum buffer window is 60 seconds.
  • Retention – Kinesis Data Streams: 1–365 days. Kinesis Data Firehose: none.
  • Message replay – Kinesis Data Streams: yes. Kinesis Data Firehose: no.
  • Quotas – Kinesis Data Streams: see quotas. Kinesis Data Firehose: see quotas.
  • Ingestion capacity – Kinesis Data Streams: determined by number of shards (1,000 records or 1 MB/s per shard). Kinesis Data Firehose: no limit if the source is Kinesis Data Streams; otherwise see the quota page.
  • Producer types – Kinesis Data Streams: AWS SDK or AWS CLI, Kinesis Producer Library, Kinesis Agent, Amazon CloudWatch, Amazon EventBridge, AWS IoT Core. Kinesis Data Firehose: AWS SDK or AWS CLI, Kinesis Producer Library, Kinesis Agent, Amazon CloudWatch, Amazon EventBridge, AWS IoT Core, Kinesis Data Streams.
  • Number of consumers – Kinesis Data Streams: multiple, sharing 2 MB per second per shard throughput. Kinesis Data Firehose: one per delivery stream.
  • Consumers – Kinesis Data Streams: AWS Lambda, Kinesis Data Analytics, Kinesis Data Firehose, Kinesis Client Library. Kinesis Data Firehose: Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, third-party providers, HTTP endpoints.
  • Pricing – Kinesis Data Streams: hourly charge plus data volume; some features have additional charges – see pricing. Kinesis Data Firehose: based on data volume, format conversion and VPC delivery – see pricing.

The needs of your workload determine the choice between the two services. To prepare and load data into a data lake or data store, Kinesis Data Firehose is usually the better choice. If you need low latency delivery of records and the ability to replay data, choose Kinesis Data Streams.

Using Kinesis Data Firehose to prepare and load data

Kinesis Data Firehose buffers data based on two buffer hints. You can configure a time-based buffer from 1-15 minutes and a volume-based buffer from 1-128 MB. Whichever limit is reached first causes the service to flush the buffer. These are called hints because the service can adjust the settings if data delivery falls behind writing to the stream. The service raises the buffer settings dynamically so that it can catch up.
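
As an illustration, the buffering hints are part of the destination configuration when you create a delivery stream; this sketch uses the AWS SDK for JavaScript (v2) with placeholder names and ARNs:

const AWS = require('aws-sdk');
const firehose = new AWS.Firehose();

async function createDeliveryStream() {
  await firehose.createDeliveryStream({
    DeliveryStreamName: 'my-delivery-stream',      // placeholder name
    DeliveryStreamType: 'DirectPut',
    ExtendedS3DestinationConfiguration: {
      RoleARN: 'arn:aws:iam::123456789012:role/firehose-delivery-role', // placeholder
      BucketARN: 'arn:aws:s3:::my-destination-bucket',                  // placeholder
      BufferingHints: {
        IntervalInSeconds: 300,                    // time-based hint: 5 minutes
        SizeInMBs: 64                              // volume-based hint: 64 MB
      }
    }
  }).promise();
}

createDeliveryStream().catch(console.error);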

This is the flow of data in Kinesis Data Firehose from a data source through to a destination, including optional settings for a delivery stream:

Kinesis Data Firehose flow

  1. The service continuously loads from the data source as it arrives.
  2. The data transformation Lambda function processes individual records and returns these to the service.
  3. Transformed records are delivered to the destination once the buffer size or buffer window is reached.
  4. Any records that could not be delivered to the destination are written to an intermediate S3 bucket.
  5. Any records that cannot be transformed by the Lambda function are written to an intermediate S3 bucket.
  6. Optionally, the original, untransformed records are written to an S3 bucket.

Data transformation using a Lambda function

The data transformation process enables you to modify the contents of individual records. Kinesis Data Firehose synchronously invokes the Lambda function with a batch of records. Your custom code modifies the records and then returns an array of transformed records.

Transformed records

The incoming payload provides the data attribute in base64 encoded format. Once the transformation is complete, the returned array must include the following attributes per record:

  • recordId: This must match the incoming recordId to enable the service to map the new data to the record.
  • result: “Ok”, “Dropped”, or “ProcessingFailed”. Dropped means that your logic has intentionally removed the record whereas ProcessingFailed indicates that an error has occurred.
  • data: The transformed data must be base64 encoded.

The returned array must be the same length as the incoming array. The Alleycat example application uses the following code in the data transformation function to add a calculated field to the record:

exports.handler = async (event) => {
    const output = event.records.map((record) => {
        // Extract JSON record from base64 data
        const buffer = Buffer.from(record.data, 'base64').toString()
        const jsonRecord = JSON.parse(buffer)

        // Add the calculated field
        jsonRecord.output = ((jsonRecord.cadence + 35) * (jsonRecord.resistance + 65)) / 100

        // Convert back to base64 + add a newline
        const dataBuffer = Buffer.from(JSON.stringify(jsonRecord) + '\n', 'utf8').toString('base64')

        return {
            recordId: record.recordId,
            result: 'Ok',
            data: dataBuffer
        }
    })

    console.log(`Output records: ${output.length}`)
    return { records: output }
}

Comparing scaling and throughput with Kinesis Data Streams and Kinesis Data Firehose

Kinesis Data Firehose manages scaling automatically. If the data source is a Kinesis Data Stream, there is no limit to the amount of data the service can ingest. If the data source is a direct put using the PutRecordBatch API, there are soft limits of up to 500,000 records per second, depending upon the Region. See the Kinesis Data Firehose quota page for more information.

Kinesis Data Firehose invokes a Lambda transformation function synchronously and scales up the function as the number of records in the stream grows. When the destination is S3, Amazon Redshift, or the Amazon Elasticsearch Service, Kinesis Data Firehose allows up to five outstanding Lambda invocations per shard. When the destination is Splunk, the quota is 10 outstanding Lambda invocations per shard.

With Kinesis Data Firehose, the buffer hints are the main controls for influencing the rate of data delivery. You can decide between more frequent delivery of small batches of messages or less frequent delivery of larger batches. This can impact the PUT cost when using a destination like S3. However, this service is not intended to provide real-time data delivery due to the transformation and batching processes.

With Kinesis Data Streams, the number of shards in a stream determines the ingestion capacity. Each shard supports ingesting up to 1,000 messages or 1 MB per second of data. Unlike Kinesis Data Firehose, this service does not allow you to transform records before delivery to a consumer.

Data Streams has additional capabilities for increasing throughput and reducing the latency of data delivery. The service invokes Lambda consumers every second with a configurable batch size of messages. If the consumers are falling behind data production in the stream, you can increase the parallelization factor. By default, this is set to 1, meaning that each shard has a single instance of a Lambda function it invokes. You can increase this up to 10 so that multiple instances of the consumer function process additional batches of messages.
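
As a sketch, you can set the parallelization factor on the event source mapping with the AWS SDK for JavaScript (v2); the stream ARN and function name are placeholders:

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

async function createMapping() {
  await lambda.createEventSourceMapping({
    FunctionName: 'my-consumer-function',          // placeholder function
    EventSourceArn: 'arn:aws:kinesis:us-east-1:123456789012:stream/my-data-stream', // placeholder
    StartingPosition: 'LATEST',
    BatchSize: 100,
    ParallelizationFactor: 10                      // up to 10 concurrent batches per shard
  }).promise();
}

createMapping().catch(console.error);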

Increase the parallelization factor

Data Streams consumers use a pull model over HTTP to fetch batches of records, operating in serial. A stream with five standard consumers averages 200 ms of latency each, taking up to 1 second in total. You can improve the overall latency by using enhanced fan-out (EFO). EFO consumers use a push model over HTTP/2 and are independent of each other.

With EFO, all five consumers in the previous example receive batches of messages in parallel using dedicated throughput. The overall latency averages 70 ms and typically data delivery speed is improved by up to 65%. Note that there is an additional charge for this feature.

Kinesis Data Streams EFO consumers

Conclusion

This blog post compares different AWS services for handling streaming data. I compare the features of SQS and Kinesis Data Streams, showing how ordering, ingestion throughput, and multiple consumers often make Kinesis the better choice for streaming workloads.

I compare Data Streams and Kinesis Data Firehose and show how Kinesis Data Firehose is the better option for many data loading operations. I show how the data transformation process works and the overall workflow of a Kinesis Data Firehose stream. Finally, I compare the scaling and throughput options for these two services.

For more serverless learning resources, visit Serverless Land.

Building well-architected serverless applications: Building in resiliency – part 1

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-building-in-resiliency-part-1/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the introduction post for a table of contents and explanation of the example application.

Reliability question REL2: How do you build resiliency into your serverless application?

Evaluate scaling mechanisms for serverless and non-serverless resources to meet customer demand. Build resiliency into your workload so that your serverless application can withstand partial and intermittent failures across components that may only surface in production.

Required practice: Manage transaction, partial, and intermittent failures

Whenever one service or system calls another, there is a chance that failures can happen. Services or systems often don’t fail as a single unit, but rather suffer partial or transient failures. Applications should be designed to handle component failures as part of the architecture. The system should be designed to detect failure and, ideally, automatically heal itself.

Transaction failures can occur when a component is unavailable or under high load. Partial failures can occur when a percentage of requests succeeds, including during batch processing. Intermittent failures might occur when a request fails for a short period of time due to network or other transient issues.

AWS serverless services, including AWS Lambda, are fault-tolerant and designed to handle failures. If a service invokes a Lambda function and there is a service disruption, Lambda invokes the function in a different Availability Zone.

When you invoke a function directly, you determine the strategy for handling errors. You can retry, send the event to a destination or queue for debugging, or ignore the error. Clients such as the AWS Command Line Interface (CLI) and the AWS SDK retry on client timeouts, throttling errors (429), and other errors that are not caused by a bad request.

When you invoke a function indirectly, you must be aware of the retry behavior of the invoker and any service that the request encounters along the way. For more information, see “Error handling and automatic retries in AWS Lambda”. You can configure Maximum Retry Attempts and Maximum Event Age for asynchronous invocations.
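
For example, this sketch configures retry attempts, event age, and an on-failure destination for asynchronous invocations with the AWS SDK for JavaScript (v2); the function name and queue ARN are placeholders:

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

async function configureAsyncInvocation() {
  await lambda.putFunctionEventInvokeConfig({
    FunctionName: 'my-async-function',             // placeholder function
    MaximumRetryAttempts: 1,                       // 0-2 retries for asynchronous invocations
    MaximumEventAgeInSeconds: 3600,                // discard events older than one hour
    DestinationConfig: {
      OnFailure: {
        Destination: 'arn:aws:sqs:us-east-1:123456789012:failed-events' // placeholder queue
      }
    }
  }).promise();
}

configureAsyncInvocation().catch(console.error);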

When reading from Amazon Kinesis Data Streams and Amazon DynamoDB Streams, Lambda retries the entire batch of items. Retries continue until the records expire or exceed the maximum age that you configure on the event source mapping. You can also configure the event source mapping to split a failed batch into two batches. Retrying with smaller batches isolates bad records and works around timeout issues.

Partial failures can occur in non-atomic operations. PutRecords for Kinesis and BatchWriteItem for DynamoDB return a successful response if at least one record is ingested successfully. Always inspect the response when using such operations and programmatically deal with partial failures.
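
As an illustration, this sketch inspects the PutRecords response and retries only the failed records; the stream name and retry count are assumptions:

const AWS = require('aws-sdk');
const kinesis = new AWS.Kinesis();

// records is an array of { Data, PartitionKey } entries
async function putRecordsWithRetry(records, attempts = 3) {
  let pending = records;

  for (let attempt = 0; attempt < attempts && pending.length > 0; attempt++) {
    const response = await kinesis.putRecords({
      StreamName: 'my-data-stream',                // placeholder stream
      Records: pending
    }).promise();

    if (response.FailedRecordCount === 0) {
      return;
    }

    // Keep only the records that were rejected (for example, throttled) and retry them
    pending = pending.filter((record, index) => response.Records[index].ErrorCode);
  }

  if (pending.length > 0) {
    throw new Error(`${pending.length} records could not be ingested`);
  }
}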

Use exponential backoff with jitter

The simplest technique for dealing with failures in a networked environment is to retry calls until they succeed. This technique increases the reliability of the application and reduces operational costs for the developer.

However, it is not always safe to retry. A retry can further increase the load on the system being called if the system is already failing due to an overload. To avoid this problem, use backoff. Instead of retrying immediately and aggressively, the client waits some amount of time between tries. The most common pattern is an exponential backoff, which uses exponentially longer wait times between retries. This is typically capped to a maximum delay and number of retries.

If all backoff retries are still happening at the same time, this can still overload a system or cause contention. To avoid this problem, use jitter. Jitter adds some amount of randomness to the backoff to spread the retries around in time. This can help prevent large bursts by spreading out the rate when clients connect. For more information see the Amazon Builders’ Library article “Timeouts, retries, and backoff with jitter” and AWS Architecture blog post “Exponential Backoff And Jitter”.

Exponential backoff and jitter
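
A minimal sketch of capped exponential backoff with full jitter in Node.js; the base delay, cap, and retry count are assumptions:

// Retry an async operation with capped exponential backoff and full jitter
async function retryWithJitter(operation, { retries = 5, baseMs = 100, capMs = 5000 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt >= retries) {
        throw err;
      }
      // Full jitter: sleep a random time between 0 and the capped exponential delay
      const delay = Math.random() * Math.min(capMs, baseMs * 2 ** attempt);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}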

When your application responds to callers in fail-fast scenarios and when performance is degraded, inform the caller via headers or metadata when they can retry.

Each AWS SDK implements automatic retry logic including exponential backoff. For downstream calls, you can adjust AWS and third-party SDK retries, backoffs, TCP, and HTTP timeouts. This helps you decide when to stop retrying. For more information, see the documentation and troubleshooting steps for Lambda and the AWS SDK.
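
For example, this sketch tunes retries, backoff, and timeouts on an AWS SDK for JavaScript (v2) client; the values are illustrative rather than recommendations:

const AWS = require('aws-sdk');

// Fail fast from a Lambda function: fewer retries and shorter timeouts than the defaults
const dynamodb = new AWS.DynamoDB({
  maxRetries: 2,                                   // limit automatic retries
  retryDelayOptions: { base: 200 },                // base delay (ms) for exponential backoff
  httpOptions: {
    connectTimeout: 1000,                          // TCP connection timeout (ms)
    timeout: 3000                                  // per-request socket timeout (ms)
  }
});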

Use a dead-letter queue mechanism to retain, investigate and retry failed transactions

There are a number of ways to handle message failures including destinations and dead-letter queues.

You can configure Lambda to send records of asynchronous invocations to another destination service. These include Amazon Simple Queue Service (SQS), Amazon Simple Notification Service (SNS), Lambda, and Amazon EventBridge. You can configure separate destinations for events that fail processing and events that are successfully processed. The invocation record contains details about the event, the response, and the reason that the record was sent.

The following example shows a function that sends a record of a successful invocation to an EventBridge event bus. When an event fails all processing attempts, Lambda sends an invocation record to an SQS queue. It includes the function’s response in the invocation record.

AWS Lambda destinations for asynchronous invocation

SNS, SQS, Lambda, and EventBridge support dead-letter queues (DLQs). DLQs make your applications more resilient and durable by storing messages or events that can’t be processed correctly into a dedicated SQS queue. This helps you debug your application by isolating the problematic messages to determine why their processing failed. Once you have resolved the issue, re-process the failed message. For more information, see “When should I use a dead-letter queue?” There is an example serverless application to redrive the messages from an SQS DLQ back to its source SQS queue.

For Lambda, DLQs provide an alternative to a failure destination. Lambda destinations are preferable for asynchronous invocations.

Good practice: Orchestrate long-running transactions

Long-running transactions can be processed by one or multiple components. Consider implementing the saga pattern using state machines for these types of transactions.

The saga pattern coordinates transactions between multiple microservices as part of a state machine. Each service that performs a transaction publishes an event to trigger the next transaction in the saga. This continues until the transaction chain is complete. If a transaction fails, saga orchestrates a series of compensating transactions that undo the changes that were made by the preceding transactions.

This is preferable to handling complex or long-running transactions within application code. State machines prevent cascading failures and avoid tightly coupling components with orchestrating logic and business logic.

Use a state machine to visualize distributed transactions, and to separate business logic from orchestration logic.

AWS Step Functions lets you coordinate multiple AWS services into serverless workflows via state machines. Within Step Functions, you can set separate retries, backoff rates, max attempts, intervals, and timeouts. These are set for every step of your state machine using a declarative language.

In the serverless airline example used in this series, Step Functions is used to orchestrate the Booking microservice. The ProcessBooking state machine handles all the necessary steps to create bookings, including payment.

Booking service Step Functions state machine

The state machine uses a combination of service integrations using DynamoDB, SQS, and Lambda functions to coordinate transactions and handle failures.

For example, the Reserve Booking task invokes a Lambda function. The task has retry and error handling configured as part of the task definition.

"Reserve Booking": {
	"Type": "Task",
	"Resource": "${ReserveBooking.Arn}",
	"TimeoutSeconds": 5,
	"Retry": [
		{
			"ErrorEquals": [
				"BookingReservationException"
			],
			"IntervalSeconds": 1,
			"BackoffRate": 2,
			"MaxAttempts": 2
		}
	],
	"Catch": [
		{
			"ErrorEquals": [
				"States.ALL"
			],
			"ResultPath": "$.bookingError",
			"Next": "Cancel Booking"
		}
	],
	"ResultPath": "$.bookingId",
	"Next": "Collect Payment"
},

Step Functions supports direct service integrations, including DynamoDB. The Reserve Flight task directly updates the flightTable without requiring a Lambda function.

"Reserve Flight": {
	"Type": "Task",
	"Resource": "arn:aws:states:::dynamodb:updateItem",
	"Parameters": {
		"TableName.$": "$.flightTable",
		"Key": {
			"id": {
				"S.$": "$.outboundFlightId"
			}
		},
		"UpdateExpression": "SET seatCapacity = seatCapacity - :dec",
		"ExpressionAttributeValues": {
			":dec": {
				"N": "1"
			},
			":noSeat": {
				"N": "0"
			}
		},
		"ConditionExpression": "seatCapacity > :noSeat"
	},

By default, when a state reports an error, Step Functions causes the execution to fail entirely.

Utilize dead-letter queues in response to failed state machine executions

Any state within the Step Functions workflow can encounter runtime errors. These include state machine definition issues, task failures such as Lambda function exceptions, or transient issues such as network connectivity issues. For more information, see “Error handling in Step Functions”.

Use the Step Functions service integration with SQS to send failed transactions to a DLQ as the final step. This adds a higher level of durability within your state machines.

For example, the airline Notify Failed Booking final task catches failed states from four previous steps. It sends the results to the Booking DLQ.

Booking service Step Functions DLQ

The message includes the output of the previous failed states for further troubleshooting.

"Booking DLQ": {
	"Type": "Task",
	"Resource": "arn:aws:states:::sqs:sendMessage",
	"Parameters": {
		"QueueUrl": "${BookingsDLQ}",
		"MessageBody.$": "$"
	},
	"ResultPath": "$.deadLetterQueue",
	"Next": "Booking Failed"
},

The Step Functions documentation has more information on calling SQS.

Conclusion

Build resiliency into your workloads. This makes sure that your application can withstand partial and intermittent failures across components that may only surface in production.

In this post, I cover managing failures using retries, exponential backoff, and jitter. I explain how DLQs can isolate failed messages. I show how to use state machines to orchestrate long running transactions rather than handling these in application code.

This well-architected question continues in part 2 where I look at managing duplicate and unwanted events with idempotency and an event schema. I cover how to consider scaling patterns at burst rates by managing account limits and show relevant metrics to evaluate.

For more serverless learning resources, visit Serverless Land.

Building a serverless multiplayer game that scales: Part 2

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-a-serverless-multiplayer-game-that-scales-part-2/

This post is written by Vito De Giosa, Sr. Solutions Architect and Tim Bruce, Sr. Solutions Architect, Developer Acceleration.

This series discusses solutions for scaling serverless games, using the Simple Trivia Service, a game that relies on user-generated content. Part 1 describes the overall architecture, how to deploy to your AWS account, and different communications methods.

This post discusses how to scale via automation and asynchronous processes. You can use automation to minimize the need to scale personnel to review player-generated content for acceptability. It also introduces asynchronous processing, which allows you to run non-critical processes in the background and batch data together. This helps to improve resource usage and game performance. Both scaling techniques can also reduce overall spend.

To set up the example, see the instructions in the GitHub repo and the README.md file. This example uses services beyond the AWS Free Tier and incurs charges. Instructions to remove the example application from your account are also in the README.md file.

Technical implementation

Games require a mechanism to support auto-moderated avatars. Specifically, this is an upload process to allow the player to send the content to the game. There is a content moderation process to remove unacceptable content and a messaging process to provide players with a status regarding their content.

Here is the architecture for this feature in Simple Trivia Service, which is combined within the avatar workflow:

Architecture diagram

This architecture processes images uploaded to Amazon S3 and notifies the user of the processing result via HTTP WebPush. This solution uses AWS Serverless services and the Amazon Rekognition moderation API.

Uploading avatars

Players start the process by uploading avatars via the game client. Using presigned URLs, the client allows players to upload images directly to S3 without sharing AWS credentials or exposing the bucket publicly.

The URL embeds all the parameters of the S3 request. It includes a Signature Version 4 signature generated with AWS credentials from the backend, allowing S3 to authorize the request.

S3 upload process

  1. The front end retrieves the presigned URL invoking an AWS Lambda function through an Amazon API Gateway HTTP API endpoint.
  2. The front end uses the URL to send a PUT request to S3 with the image.
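
A minimal sketch of the Lambda function behind step 1, returning a presigned PUT URL to the client; the bucket name, key prefix, content type, and expiry are assumptions:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const playerId = event.queryStringParameters.playerId; // assumed request parameter

  // Presigned URL that lets the client PUT the avatar directly to S3
  const uploadUrl = await s3.getSignedUrlPromise('putObject', {
    Bucket: 'avatar-uploads-bucket',               // placeholder bucket
    Key: `uploads/${playerId}.png`,                // placeholder key prefix
    ContentType: 'image/png',
    Expires: 300                                   // URL valid for 5 minutes
  });

  return {
    statusCode: 200,
    body: JSON.stringify({ uploadUrl })
  };
};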

Processing avatars

After the upload completes, the backend performs a set of activities. These include content moderation, generating the thumbnail variant, and saving the image URL to the player profile. AWS Step Functions orchestrates the workflow by coordinating tasks and integrating with AWS services, such as Lambda and Amazon DynamoDB. Step Functions enables creating workflows without writing code and handles errors, retries, and state management. This enables traffic control to avoid overloading single components when traffic surges.

The avatar processing workflow runs asynchronously. This allows players to play the game without being blocked and enables you to batch the requests. The Step Functions workflow is triggered from an Amazon EventBridge event. When the user uploads an image to S3, an event is published to EventBridge. The event is routed to the avatar processing Step Functions workflow.

The single avatar feature runs in seconds and uses Step Functions Express Workflows, which are ideal for high-volume event-processing use cases. Step Functions can also support longer running processes and manual steps, depending on your requirements.

To keep performance at scale, the solution adopts four strategies. First, it moderates content automatically, requiring no human intervention. This is done via Amazon Rekognition moderation API, which can discover inappropriate content in uploaded avatars. Developers do not need machine learning expertise to use this API. If it identifies unacceptable content, the Step Functions workflow deletes the uploaded picture.
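
For example, here is a sketch of a workflow task that calls the moderation API on the uploaded object; the input shape and confidence threshold are assumptions:

const AWS = require('aws-sdk');
const rekognition = new AWS.Rekognition();

exports.handler = async (event) => {
  // event.bucket and event.key are assumed inputs from the Step Functions workflow
  const result = await rekognition.detectModerationLabels({
    Image: {
      S3Object: { Bucket: event.bucket, Name: event.key }
    },
    MinConfidence: 80                              // assumed confidence threshold
  }).promise();

  // Any returned label means the image contains potentially unacceptable content
  return { acceptable: result.ModerationLabels.length === 0 };
};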

Second, it uses avatar thumbnails on the top navigation bar and on leaderboards. This speeds up page loading and uses less network bandwidth. Image-editing software runs in a Lambda function to modify the uploaded file and store the result in S3 with the original.

Third, it uses Amazon CloudFront as a content delivery network (CDN) with the S3 bucket hosting images. This improves performance by implementing caching and serving static content from locations closer to the player. Additionally, using CloudFront allows you to keep the bucket private and provide greater security for the content stored within S3.

Finally, it stores profile picture URLs in DynamoDB and replicates the thumbnail URL in an Amazon Cognito user attribute named picture. This allows the game to retrieve the avatar URL as part of the login process, saving an HTTP GET request for the player profile.

The last step of the workflow publishes the result via an event to EventBridge for downstream systems to consume. The service routes the event to the notification component to inform the player about the moderation status.

Notifying users of the processing result

The result of the avatar workflow is important to the player but not urgent. Players want to know the result, but it should not impact their gameplay experience. A solution for this challenge is to use HTTP web push. It uses the HTTP protocol and does not require a constant communication channel between backend and front end. This allows players to keep playing without being blocked and without introducing latency to the game communications channel.

Applications requiring low latency fully bidirectional communication, such as highly interactive multi-player games, typically use WebSockets. This creates a persistent two-way channel for front end and backend to exchange information. The web push mechanism can provide non-urgent data and messages to the player without interrupting the WebSockets channel.

The web push protocol describes how to use a consolidated push service as a broker between the web-client and the backend. It accepts subscriptions from the client and receives push message delivery requests from the backend. Each browser vendor provides a push service implementation that is compliant with the W3C Push API specification and is external to both client and backend.

The web client is typically a browser where a JavaScript application interacts with the push service to subscribe and listen for incoming notifications. The backend is the application that notifies the front end. Here is an overview of the protocol with all the parties involved.

Notification process

  1. A component on the client subscribes to the configured push service by sending an HTTP POST request. The client keeps a background connection waiting for messages.
  2. The push service returns a URL identifying a push resource that the client distributes to backend applications that are allowed to send notifications.
  3. Backend applications request a message delivery by sending an HTTP POST request to the previously distributed URL.
  4. The push service forwards the information to the client.

This approach has four advantages. First, it reduces the effort to manage the reliability of the delivery process by off-loading it to an external and standardized component. Second, it minimizes cost and resource consumption. This is because it doesn’t require the backend to keep a persistent communication channel or compute resources to be constantly available. Third, it keeps complexity to a minimum because it relies on HTTP only without requiring additional technologies. Finally, HTTP web push addresses concepts such as message urgency and time-to-live (TTL) by using a standard.

Serverless HTTP web push

The implementation of the web push protocol requires the following components, per the Push API specification. First, the front end is required to create a push subscription. This is implemented through a service worker, a script running in the origin of the application. The service worker exposes operations to access the push service either creating subscriptions or listening for push events.

Serverless HTTP web push

  1. The client uses the service worker to subscribe to the push service via the Push API.
  2. The push service responds with a payload including a URL, which is the client’s push endpoint. The URL is used to create notification delivery requests.
  3. The browser enriches the subscription with public cryptographic keys, which are used to encrypt messages ensuring confidentiality.
  4. The backend must receive and store the subscription for when a delivery request is made to the push service. This is provided by API Gateway, Lambda, and DynamoDB. API Gateway exposes an HTTP API endpoint that accepts POST requests with the push service subscription as payload. The payload is stored in DynamoDB alongside the player identifier.

This front end code implements the process:

//Once service worker is ready
navigator.serviceWorker.ready
  .then(function (registration) {
    //Retrieve existing subscription or subscribe
    return registration.pushManager.getSubscription()
      .then(async function (subscription) {
        if (subscription) {
          console.log('got subscription!', subscription)
          return subscription;
        }
        /*
         * Using Public key of our backend to make sure only our
         * application backend can send notifications to the returned
         * endpoint
         */
        const convertedVapidKey = self.vapidKey;
        return registration.pushManager.subscribe({
          userVisibleOnly: true,
          applicationServerKey: convertedVapidKey
        });
      });
  }).then(function (subscription) {
    //Distributing the subscription to the application backend
    console.log('register!', subscription);
    const body = JSON.stringify(subscription);
    const parms = {jwt: jwt, playerName: playerName, subscription: body};
    //Call to the API endpoint to save the subscription
    const res = DataService.postPlayerSubscription(parms);
    console.log(res);
  });

 

Next, the backend reacts to the avatar workflow completed custom event to create a delivery request. This is accomplished with EventBridge and Lambda.

Backend process after avatar workflow completed

  1. EventBridge routes the event to a Lambda function.
  2. The function retrieves the player’s agent subscriptions, including push endpoint and encryption keys, from DynamoDB.
  3. The function sends an HTTP POST to the push endpoint with the encrypted message as payload.
  4. When the push service delivers the message, the browser activates the service worker updating local state and displaying the notification.

The push service accepts delivery requests from any party that knows the endpoint, and the front end enables the backend to deliver messages by distributing that endpoint. HTTPS provides encryption for data in transit, while DynamoDB encrypts all your data at rest to provide confidentiality and security for the endpoint.

Security of WebPush can be further improved by using Voluntary Application Server Identification (VAPID). With WebPush, the clients authenticate messages at delivery time. VAPID allows the push service to perform message authentication on behalf of the web client avoiding denial-of-service risk. Without the additional security of VAPID, any application knowing the push service endpoint might successfully create delivery requests with an invalid payload. This can cause the player’s agent to accept messages from unauthorized services and, possibly, cause a denial-of-service to the client by overloading its capabilities.

VAPID requires backend applications to own a key pair. In Simple Trivia Service, a Lambda function, which is an AWS CloudFormation custom resource, generates the key pair when deploying the stack. It securely saves the values in AWS Systems Manager (SSM) Parameter Store.

Here is a representation of VAPID in action:

VAPID process architecture

  1. The front end specifies which backend the push service can accept messages from. It does this by including the public key from VAPID in the subscription request.
  2. When requesting a message delivery, the backend self-identifies by including the public key and a token signed with the private key in the HTTP Authorization header. If the keys match and the client uses the public key at subscription, the message is sent. If not, the message is blocked by the push service.

The Lambda function that sends delivery requests to the push service reads the key values from SSM. It uses them to generate the Authorization header to include in the request, allowing for successful delivery to the client endpoint.
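
As a sketch, the delivery function could combine the stored VAPID keys with the web-push library; the parameter names, subject, and payload are assumptions:

const AWS = require('aws-sdk');
const webpush = require('web-push');
const ssm = new AWS.SSM();

exports.handler = async (event) => {
  // Retrieve the VAPID key pair generated at deployment time (parameter names assumed)
  const { Parameters } = await ssm.getParameters({
    Names: ['/simple-trivia/vapid-public-key', '/simple-trivia/vapid-private-key'],
    WithDecryption: true
  }).promise();

  const publicKey = Parameters.find((p) => p.Name.endsWith('public-key')).Value;
  const privateKey = Parameters.find((p) => p.Name.endsWith('private-key')).Value;

  // The subject identifies the backend application to the push service
  webpush.setVapidDetails('mailto:ops@example.com', publicKey, privateKey);

  // event.subscription is the stored Push API subscription retrieved from DynamoDB (assumed)
  await webpush.sendNotification(event.subscription, JSON.stringify({
    title: 'Avatar processed',
    status: event.status
  }));
};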

Conclusion

This post shows how you can add scaling support for a game via automation. The example uses Amazon Rekognition to check images for unacceptable content and uses asynchronous architecture patterns with Step Functions and HTTP WebPush. These scaling approaches can help you to maximize your technical and personnel investments.

For more serverless learning resources, visit Serverless Land.