All posts by James Beswick

Building event-driven architectures with Amazon SNS FIFO

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-event-driven-architectures-with-amazon-sns-fifo/

This post is courtesy of Christian Mueller, Principal Solutions Architect.

Developers increasingly adopt event-driven architectures to decouple their distributed applications. Often, these events must be propagated in a strictly ordered manner to all subscribed applications. Using Amazon SNS FIFO topics and Amazon SQS FIFO queues, you can address use cases that require end-to-end message ordering, deduplication, filtering, and encryption.

In this blog post, I introduce a sample event-driven architecture. I walk through an implementation based on Amazon SNS FIFO topics and Amazon SQS FIFO queues.

Common requirements in event-driven architectures

In event-driven architectures, data consistency is a common business requirement. This is often translated into technical requirements such as zero message loss and strict message ordering. For example, if you update your domain object rapidly, you want to be sure that all events are received by each subscriber in exactly the order they occurred. This way, the current domain object state is what each subscriber received as the latest update event. Similarly, all update events should be received after the initial create event.

Before Amazon SNS FIFO, architects had to design applications to check if messages are received out of order before processing.

Comparing SNS and SNS FIFO

Another common challenge is preventing duplicate messages when sending events to the messaging service. If an event publisher receives an error, such as a network timeout, the publisher does not know whether the messaging service received and successfully processed the message.

The client may retry, as this is the default behavior for some HTTP response codes in AWS SDKs. This can cause duplicate messages.

Before Amazon SNS FIFO, developers had to design receivers to be idempotent. In cases where the processing logic cannot be made naturally idempotent, this requires the receiver to track which events it has already handled. Often, this is done by adding a key-value store like Amazon DynamoDB or Amazon ElastiCache for Redis to the service. Using this approach, the receiver can check whether an event has been seen before.
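A minimal sketch of that tracking approach in Python, assuming a hypothetical DynamoDB table named "ProcessedEvents" with an "eventId" partition key. A conditional write records each event ID exactly once, so duplicate deliveries are skipped:

import boto3
from botocore.exceptions import ClientError

table = boto3.resource('dynamodb').Table('ProcessedEvents')  # hypothetical table

def process(event):
    print('processing event', event['id'])  # placeholder for business logic

def handle_event(event):
    try:
        # The write fails if this event ID was recorded before
        table.put_item(
            Item={'eventId': event['id']},
            ConditionExpression='attribute_not_exists(eventId)'
        )
    except ClientError as e:
        if e.response['Error']['Code'] == 'ConditionalCheckFailedException':
            return  # duplicate delivery: skip processing
        raise
    process(event)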

Exactly once processing and message deduplication

Exploring the recruiting agency example

This sample application models a recruitment agency with a job listings website. The application is composed of multiple services. I explain three of them in more detail.

Sample application architecture

A custom service, the anti-corruption service, receives a change data capture (CDC) event stream of changes from a relational database. This service translates the low-level technical database events into meaningful business events for the domain services for easy consumption. These business events are sent to the SNS FIFO “JobEvents.fifo” topic. Here, interested services subscribe to these events and process them asynchronously.

In this domain, the analytics service is interested in all events. It has an SQS FIFO “AnalyticsJobEvents.fifo” queue subscribed to the SNS FIFO “JobEvents.fifo” topic. It uses SQS FIFO as an event source for AWS Lambda, which processes these events and stores them in Amazon S3. S3 is an object storage service with high scalability, data availability, durability, security, and performance. This allows you to use services like Amazon EMR, AWS Glue, or Amazon Athena to get insights into your data and extract value.

The inventory service owns an SQS FIFO “InventoryJobEvents.fifo” queue, which is subscribed to the SNS FIFO “JobEvents.fifo” topic. It is only interested in “JobCreated” and “JobDeleted” events, as it only tracks which jobs are currently available and stores this information in a DynamoDB table. Therefore, it uses an SNS filter policy to only receive these events, instead of receiving all events.

This sample application focuses on the SNS FIFO capabilities, so I do not explore other services subscribed to the SNS FIFO topic. This sample follows the SQS best practices and SNS redrive policy recommendations and configures dead-letter queues (DLQ). This is useful in case SNS cannot deliver an event to the subscribed SQS queue. It also helps if the function fails to process an event from the corresponding SQS FIFO queue multiple times. As a requirement in both cases, the attached SQS DLQ must be an SQS FIFO queue.

Deploying the application

The application is deployed using the AWS Serverless Application Model (SAM) to define its infrastructure as code. SAM provides shorthand syntax to express functions, APIs, databases, and event source mappings, and is expanded into AWS CloudFormation syntax during deployment.

To get started, clone the “event-driven-architecture-with-amazon-sns-fifo” repository from GitHub, as shown in the commands below. Alternatively, download the repository as a ZIP file and extract it to a directory of your choice.

As a prerequisite, you must have the SAM CLI, Python 3, and pip installed. You must also have the AWS CLI configured properly.

Navigate to the root directory of this project and build the application with SAM. SAM downloads required dependencies and stores them locally. Execute the following commands in your terminal:

git clone https://github.com/aws-samples/event-driven-architecture-with-amazon-sns-fifo.git
cd event-driven-architecture-with-amazon-sns-fifo
sam build

You see the following output:

Deployment output

Now, deploy the application:

sam deploy --guided

Provide arguments for the deployments, such as the stack name and preferred AWS Region:

SAM guided deployment

After a successful deployment, you see the following output:

Successful deployment message

Learning more about the implementation

I explore the three services forming this sample application, and how they use the features of SNS FIFO.

Anti-corruption service

The anti-corruption service owns the SNS FIFO “JobEvents.fifo” topic, where it publishes business events related to job postings. It uses an SNS FIFO topic, as end-to-end ordering per job ID is required. SNS FIFO is configured not to perform content-based deduplication, as each event provides its own unique message deduplication ID. The corresponding definition in the SAM template looks like this:

  JobEventsTopic:
    Type: AWS::SNS::Topic
    Properties:
      TopicName: JobEvents.fifo
      FifoTopic: true
      ContentBasedDeduplication: false

For simplicity, the anti-corruption function in the sample application doesn’t consume an external database CDC stream. It uses Amazon CloudWatch Events as an event source to trigger the function every minute.

I provide the SNS FIFO topic Amazon Resource Name (ARN) as an environment variable in the function. This makes this function more portable to deploy in different environments and stages. The function’s AWS Identity and Access Management (IAM) policy grants permissions to publish messages to only this SNS topic:

  AntiCorruptionFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: anti-corruption-service/
      Handler: app.lambda_handler
      Runtime: python3.7
      MemorySize: 256
      Environment:
        Variables:
          TOPIC_ARN: !Ref JobEventsTopic
      Policies:
        - SNSPublishMessagePolicy:
            TopicName: !GetAtt JobEventsTopic.TopicName
      Events:
        Trigger:
          Type: Schedule
          Properties:
            Schedule: 'rate(1 minute)'

The anti-corruption function uses features in the SNS publish API that allow you to define a “MessageDeduplicationId” and a “MessageGroupId”. The “MessageDeduplicationId” is used to filter out duplicate messages sent to SNS FIFO within a 5-minute deduplication interval. The “MessageGroupId” is required, as SNS FIFO processes all job events for the same message group in a strictly ordered manner, isolated from other message groups processed through the same topic.

Another important aspect in this implementation is the use of “MessageAttributes”. We define a message attribute with the name “eventType” and values like “JobCreated”, “JobSalaryUpdated”, and “JobDeleted”. This allows subscribers to define SNS filter policies to only receive certain events they are interested in:

import boto3
from datetime import datetime
import json
import os
import random
import uuid

TOPIC_ARN = os.environ['TOPIC_ARN']

sns = boto3.client('sns')

def lambda_handler(event, context):
    jobId = str(random.randrange(0, 1000))

    send_job_created_event(jobId)
    send_job_updated_event(jobId)
    send_job_deleted_event(jobId)
    return

def send_job_created_event(jobId):
    messageId = str(uuid.uuid4())

    response = sns.publish(
        TopicArn=TOPIC_ARN,
        Subject=f'Job {jobId} created',
        MessageDeduplicationId=messageId,
        MessageGroupId=f'JOB-{jobId}',
        Message={...},
        MessageAttributes = {
            'eventType': {
                'DataType': 'String',
                'StringValue': 'JobCreated'
            }
        }
    )
    print('sent message and received response: {}'.format(response))
    return

def send_job_updated_event(jobId):
    messageId = str(uuid.uuid4())

    response = sns.publish(...)
    print('sent message and received response: {}'.format(response))
    return

def send_job_deleted_event(jobId):
    messageId = str(uuid.uuid4())

    response = sns.publish(...)
    print('sent message and received response: {}'.format(response))
    return

Analytics service

The analytics service owns an SQS FIFO “AnalyticsJobEvents.fifo” queue which is subscribed to the SNS FIFO “JobEvents.fifo” topic. Following best practices, I define redrive policies for the SQS FIFO queue and the SNS FIFO subscription in the template:

  AnalyticsJobEventsQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: AnalyticsJobEvents.fifo
      FifoQueue: true
      RedrivePolicy:
        deadLetterTargetArn: !GetAtt AnalyticsJobEventsQueueDLQ.Arn
        maxReceiveCount: 3

  AnalyticsJobEventsQueueToJobEventsTopicSubscription:
    Type: AWS::SNS::Subscription
    Properties:
      Endpoint: !GetAtt AnalyticsJobEventsQueue.Arn
      Protocol: sqs
      RawMessageDelivery: true
      TopicArn: !Ref JobEventsTopic
      RedrivePolicy: !Sub '{"deadLetterTargetArn": "${AnalyticsJobEventsSubscriptionDLQ.Arn}"}'

The analytics function uses SQS FIFO as an event source for Lambda. The S3 bucket name is an environment variable for the function, which increases the code portability across environments and stages. The IAM policy for this function grants permissions only to write objects to this S3 bucket:

  AnalyticsFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: analytics-service/
      Handler: app.lambda_handler
      Runtime: python3.7
      MemorySize: 256
      Environment:
        Variables:
          BUCKET_NAME: !Ref AnalyticsBucket
      Policies:
        - S3WritePolicy:
            BucketName: !Ref AnalyticsBucket
      Events:
        Trigger:
          Type: SQS
          Properties:
            Queue: !GetAtt AnalyticsJobEventsQueue.Arn
            BatchSize: 10

View the function implementation at the GitHub repo.

Inventory service

The inventory service also owns an SQS FIFO “InventoryJobEvents.fifo” queue which is subscribed to the SNS FIFO “JobEvents.fifo” topic. It uses redrive policies for the SQS FIFO queue and the SNS FIFO subscription as well. This service is only interested in certain events, so it uses an SNS filter policy to specify these events:

  InventoryJobEventsQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: InventoryJobEvents.fifo
      FifoQueue: true
      RedrivePolicy:
        deadLetterTargetArn: !GetAtt InventoryJobEventsQueueDLQ.Arn
        maxReceiveCount: 3

  InventoryJobEventsQueueToJobEventsTopicSubscription:
    Type: AWS::SNS::Subscription
    Properties:
      Endpoint: !GetAtt InventoryJobEventsQueue.Arn
      Protocol: sqs
      RawMessageDelivery: true
      TopicArn: !Ref JobEventsTopic
      FilterPolicy: '{"eventType":["JobCreated", "JobDeleted"]}'
      RedrivePolicy: !Sub '{"deadLetterTargetArn": "${InventoryJobEventsQueueSubscriptionDLQ.Arn}"}'

The inventory function also uses SQS FIFO as an event source for Lambda. The DynamoDB table name is set as an environment variable, so the function can look up the name during initialization. The IAM policy grants read/write permissions for only this table:

  InventoryFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: inventory-service/
      Handler: app.lambda_handler
      Runtime: python3.7
      MemorySize: 256
      Environment:
        Variables:
          TABLE_NAME: !Ref InventoryTable
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref InventoryTable
      Events:
        Trigger:
          Type: SQS
          Properties:
            Queue: !GetAtt InventoryJobEventsQueue.Arn
            BatchSize: 10

View the function implementation at the GitHub repo.

Conclusion

Amazon SNS FIFO topics can simplify the design of event-driven architectures and reduce custom code in building such applications.

By using the native integration with Amazon SQS FIFO queues, you can also build architectures that fan out to thousands of subscribers. This pattern helps achieve data consistency, deduplication, filtering, and encryption in near real time, using managed services.

For information on regional availability and service quotas, see SNS endpoints and quotas and SQS endpoints and quotas. For more information on the FIFO functionality, see SNS FIFO and SQS FIFO in their Developer Guides.

Optimizing the cost of serverless web applications

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/optimizing-the-cost-of-serverless-web-applications/

Web application backends are one of the most frequent types of serverless use-case for customers. The pay-for-value model can make it cost-efficient to build web applications using serverless tools.

While serverless cost is generally correlated with level of usage, there are architectural decisions that impact cost efficiency. The impact of these choices is more significant as your traffic grows, so it’s important to consider the cost-effectiveness of different designs and patterns.

This blog post reviews some common areas in web applications where you may be able to optimize cost. It uses the Happy Path web application as a reference example, which you can read about in the introductory blog post.

Serverless web applications generally use a combination of the services in the following diagram. I cover each of these areas to highlight common areas for cost optimization.

Serverless architecture by AWS service

The API management layer: Selecting the right API type

Most serverless web applications use an API between the frontend client and the backend architecture. Amazon API Gateway is a common choice since it is a fully managed service that scales automatically. There are three types of API offered by the service – REST APIs, WebSocket APIs, and the more recent HTTP APIs.

HTTP APIs offer many of the features of REST APIs, but the cost is often around 70% less. They support Lambda service integration, JWT authorization, CORS, and custom domain names, and have a simpler deployment model than REST APIs. This feature set tends to work well for web applications, many of which mainly use these capabilities. Additionally, HTTP APIs will gain feature parity with REST APIs over time.

The Happy Path application is designed for 100,000 monthly active users. It uses HTTP APIs, and you can inspect the backend/template.yaml to see how to define these in the AWS Serverless Application Model (AWS SAM). If you have existing AWS SAM templates that are using REST APIs, in many cases you can change these easily:

REST to HTTP API

Content distribution layer: Optimizing assets

Amazon CloudFront is a content delivery network (CDN). It enables you to distribute content globally across 216 Points of Presence without deploying or managing any infrastructure. It reduces latency for users who are geographically dispersed and can also reduce load on other parts of your service.

A typical web application uses CDNs in a couple of different ways. First, there is the distribution of the application itself. For single-page application frameworks like React or Vue.js, the build processes create static assets that are ideal for serving over a CDN.

However, these builds may not be optimized and can be larger than necessary. Many frameworks offer optimization plugins, and the JavaScript community frequently uses Webpack to bundle modules and shrink deployment packages. Similarly, any media assets used in the application build should be optimized. You can use tools like Lighthouse to analyze your web apps to find images that can be resized or compressed.

Optimizing images

The second common CDN use-case for web apps is for user-generated content (UGC). Many apps allow users to upload images, which are then shared with other users. A typical photo from a 12-megapixel smartphone is 3–9 MB in size. This high resolution is not necessary when photos are rendered within web apps. Displaying the high-resolution asset results in slower download performance and higher data transfer costs.

The Happy Path application uses a Resizer Lambda function to optimize these uploaded assets. This process creates two different optimized images depending upon which component loads the asset.
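The actual Resizer implementation is in the Happy Path repository. As an illustration only, a minimal sketch of the idea in Python using Pillow might look like the following; the output bucket name, target sizes, and compression quality are assumptions:

import io
import boto3
from PIL import Image

s3 = boto3.client('s3')
SIZES = {'thumbnail': (300, 300), 'detail': (1024, 1024)}  # assumed target sizes

def lambda_handler(event, context):
    # Triggered by an S3 put event in the upload bucket
    record = event['Records'][0]['s3']
    bucket, key = record['bucket']['name'], record['object']['key']
    original = s3.get_object(Bucket=bucket, Key=key)['Body'].read()

    for label, size in SIZES.items():
        img = Image.open(io.BytesIO(original)).convert('RGB')
        img.thumbnail(size)                       # resize, preserving aspect ratio
        buf = io.BytesIO()
        img.save(buf, format='JPEG', quality=80)  # recompress
        s3.put_object(Bucket='distribution-bucket',  # hypothetical bucket
                      Key=f'{label}/{key}', Body=buf.getvalue())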

Image sizes in front-end applications

The upload S3 bucket shows the original size of the upload from the smartphone:

The distribution S3 bucket contains the two optimized images at different sizes:

Optimized images in the distribution S3 bucket

The distribution file sizes are 98–99% smaller. For a busy web application, using optimized image assets can make a significant difference to data transfer and CloudFront costs.

Additionally, you can convert to highly optimized file formats such as WebP to reduce file size even further. Not all browsers support this format, but you can fall back to other formats on the frontend if needed:

<img src="myImage.webp" onerror="this.onerror=null; this.src='myImage.jpg'">

The data layer

AWS offers many different database and storage options that can be useful for web applications. Billing models vary by service and Region. By understanding the data access and storage requirements of your app, you can make informed decisions about the right service to use.

Generally, it’s more cost-effective to store binary data in S3 than a database. First, when the data is uploaded, you can upload directly to S3 with presigned URLs instead of proxying data via API Gateway or another service.

If you are using Amazon DynamoDB, it’s best practice to store larger items in S3 and include a reference token in a table item. Part of DynamoDB pricing is based on read capacity units (RCUs). For binary items such as images, it is usually more cost-efficient to use S3 for storage.
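A minimal sketch of both ideas together: the backend hands the client a presigned URL for a direct S3 upload, and stores only a reference to the object in DynamoDB. The bucket, table, and key names are hypothetical:

import boto3

s3 = boto3.client('s3')
table = boto3.resource('dynamodb').Table('Items')  # hypothetical table

def get_upload_url(item_id):
    key = f'uploads/{item_id}.jpg'
    # The client uploads directly to S3 with this URL, bypassing API Gateway
    url = s3.generate_presigned_url(
        'put_object',
        Params={'Bucket': 'upload-bucket', 'Key': key},  # hypothetical bucket
        ExpiresIn=300
    )
    # Store a reference token, not the binary data, in the table item
    table.put_item(Item={'id': item_id, 's3Key': key})
    return url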

Many web developers who are new to serverless are familiar with relational databases, and so choose Amazon RDS for their database needs. Depending upon your use-case and data access patterns, it may be more cost-effective to use DynamoDB instead. RDS is not a serverless service, so there are monthly charges for the underlying compute instance. DynamoDB pricing is based upon usage and storage, so it may be a lower-cost choice for many web apps.

Integration layer

This layer includes services like Amazon SQS, Amazon SNS, and Amazon EventBridge, which are essential for decoupling serverless applications. Each of these has a request-based pricing component, where 64 KB of a payload is billed as one request. For example, a single SQS message with a 256 KB payload is billed as four requests. There are several common optimization methods for web applications.

1. Combine messages

Many messages sent to these services are much smaller than 64 KB. In some applications, the publishing service can combine multiple messages to reduce the total number of publish actions to SNS. Additionally, by either eliminating unused attributes in the message or compressing the message, you can store more data in a single request; a sketch of this approach follows the example below.

For example, a publishing service may be able to combine multiple messages together in a single publish action to an SNS topic:

  • Before optimization, a publishing service sends 100,000,000 1-KB messages to an SNS topic. This is charged as 100 million messages for a total cost of $50.00.
  • After optimization, the publishing service combines messages to send 1,562,500 64-KB messages to an SNS topic. This is charged as 1,562,500 messages for a total cost of $0.78.
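A minimal sketch of the combining approach, assuming a hypothetical topic ARN and subscribers that know to unpack the JSON array. Messages are packed up to the 64-KB billing boundary, so each publish is billed as a single request:

import json
import boto3

sns = boto3.client('sns')
TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:example-topic'  # hypothetical

def publish_combined(messages, max_bytes=64 * 1024):
    batch, size = [], 0
    for msg in messages:
        encoded = len(json.dumps(msg))
        if batch and size + encoded > max_bytes:
            # Flush the current batch as one publish action
            sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(batch))
            batch, size = [], 0
        batch.append(msg)
        size += encoded
    if batch:
        sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(batch))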

2. Filter messages

In many applications, not every message is useful for a consuming service. For example, an SNS topic may publish to a Lambda function, which checks the content and discards the message based on some criteria. In this case, it’s more cost effective to use the native filtering capabilities of SNS. The service can filter messages and only invoke the Lambda function if the criteria is met. This lowers the compute cost by only invoking Lambda when necessary.

For example, an SNS topic receives messages about customer orders and forwards these to a Lambda function subscriber. The function is only interested in canceled orders and discards all other messages (a configuration sketch follows this example):

  • Before optimization, the SNS topic sends all messages to a Lambda function. It evaluates the message for the presence of an order canceled attribute. On average, only 25% of the messages are processed further. While SNS does not charge for delivery to Lambda functions, you are charged each time the Lambda service is invoked, for 100% of the messages.
  • After optimization, using an SNS subscription filter policy, the SNS subscription filters for canceled orders and only forwards matching messages. Since the Lambda function is only invoked for 25% of the messages, this may reduce the total compute cost by up to 75%.
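You can apply a filter policy to an existing subscription with a single API call. A sketch, assuming a hypothetical subscription ARN and a hypothetical “orderStatus” message attribute set by the publisher:

import boto3

sns = boto3.client('sns')

sns.set_subscription_attributes(
    SubscriptionArn='arn:aws:sns:us-east-1:123456789012:orders:11112222-3333-4444-5555-666677778888',
    AttributeName='FilterPolicy',
    AttributeValue='{"orderStatus": ["cancelled"]}'
)

With this policy in place, SNS only invokes the Lambda subscriber when the message attribute matches, and drops the rest without invoking the function.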

3. Choose a different messaging service

For complex filtering options based upon matching patterns, you can use EventBridge. The service can filter messages based upon prefix matching, numeric matching, and other patterns, combining several rules into a single filter. You can create branching logic within the EventBridge rule to invoke downstream targets.

EventBridge offers a broader range of targets than SNS destinations. In cases where you publish from an SNS topic to a Lambda function to invoke an EventBridge target, you could use EventBridge instead and eliminate the Lambda invocation. For example, instead of routing from SNS to Lambda to AWS Step Functions, instead create an EventBridge rule that routes events directly to a state machine.
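A sketch of this direct routing, assuming hypothetical names, a hypothetical event source, and an existing IAM role that allows EventBridge to start the state machine:

import boto3

events = boto3.client('events')

RULE_NAME = 'order-cancelled-to-workflow'  # hypothetical

# Match events by pattern on the default event bus
events.put_rule(
    Name=RULE_NAME,
    EventPattern='{"source": ["com.example.orders"], "detail-type": ["OrderCancelled"]}'
)

# Route matching events straight to a Step Functions state machine
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{
        'Id': 'order-workflow',
        'Arn': 'arn:aws:states:us-east-1:123456789012:stateMachine:OrderWorkflow',
        'RoleArn': 'arn:aws:iam::123456789012:role/EventBridgeInvokeStepFunctions'
    }]
)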

Business logic layer

Step Functions allows you to orchestrate complex workflows in serverless applications while eliminating common boilerplate code. The Standard Workflow service charges per state transition. Express Workflows were introduced in December 2019, with pricing based on requests and duration, instead of transitions.

For workloads that are processing large numbers of events in shorter durations, Express Workflows can be more cost-effective. This is designed for high-volume event workloads, such as streaming data processing or IoT data ingestion. For these cases, compare the cost of the two workflow types to see if you can reduce cost by switching across.

Lambda is the on-demand compute layer in serverless applications, which is billed by requests and GB-seconds. GB-seconds is calculated by multiplying duration in seconds by memory allocated to the function. For a function with a 1-second duration, invoked 1 million times, here is how memory allocation affects the total cost in the US East (N. Virginia) Region:

Memory (MB) | GB-seconds | Compute cost | Total cost
128         | 125,000    | $2.08        | $2.28
512         | 500,000    | $8.34        | $8.54
1024        | 1,000,000  | $16.67       | $16.87
1536        | 1,500,000  | $25.01       | $25.21
2048        | 2,000,000  | $33.34       | $33.54
3008        | 2,937,500  | $48.97       | $49.17

There are many ways to optimize Lambda functions, but one of the most important choices is memory allocation. You can choose between 128 MB and 3008 MB, but this also impacts the amount of virtual CPU as memory increases. Since total cost is a combination of memory and duration, choosing more memory can often reduce duration and lower overall cost.
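You can reproduce the table with a short calculation. This sketch uses the public US East (N. Virginia) prices at the time of writing, $0.20 per million requests and $0.0000166667 per GB-second; small differences from the table are due to rounding:

REQUESTS = 1_000_000           # invocations
DURATION_S = 1                 # seconds per invocation
PRICE_PER_GB_S = 0.0000166667  # compute price per GB-second
REQUEST_PRICE = 0.20 / 1_000_000

for memory_mb in (128, 512, 1024, 1536, 2048, 3008):
    gb_seconds = REQUESTS * DURATION_S * (memory_mb / 1024)
    compute = gb_seconds * PRICE_PER_GB_S
    total = compute + REQUESTS * REQUEST_PRICE
    print(f'{memory_mb} MB: {gb_seconds:,.0f} GB-s, compute ${compute:.2f}, total ${total:.2f}')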

Instead of manually setting the memory for a Lambda function and running executions to compare duration, you can use the AWS Lambda Power Tuning tool. This uses Step Functions to run your function against varying memory configurations. It can produce a visualization to find the optimal memory setting, based upon cost or execution time.

Optimizing costs with the AWS Lambda Power Tuning tool

Conclusion

Web application backends are one of the most popular workload types for serverless applications. The pay-per-value model works well for this type of workload. As traffic grows, it’s important to consider the design choices and service configurations used to optimize your cost.

Serverless web applications generally use a common range of services, which you can logically split into different layers. This post examines each layer and suggests common cost optimizations helpful for web app developers.

To learn more about building web apps with serverless, see the Happy Path series. For more serverless learning resources, visit https://serverlessland.com.

ICYMI: Serverless Q3 2020

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/icymi-serverless-q3-2020/

Welcome to the 11th edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

Q3 Calendar

In case you missed our last ICYMI, check out what happened last quarter.

AWS Lambda

MSK trigger in Lambda

In August, we launched support for using Amazon Managed Streaming for Apache Kafka (Amazon MSK) as an event source for Lambda functions. Lambda has existing support for processing streams from Kinesis and DynamoDB. Now you can process data streams from Amazon MSK and easily integrate with downstream serverless workflows. This integration allows you to process batches of records, one per partition at a time, and scale concurrency by increasing the number of partitions in a topic.

We also announced support for Java 8 (Corretto) in Lambda, and you can now use Amazon Linux 2 for custom runtimes. Amazon Linux 2 is the latest generation of Amazon Linux and provides an application environment with access to the latest innovations in the Linux ecosystem.

Amazon API Gateway

API integrations

API Gateway continued to launch new features for HTTP APIs, including new integrations for five AWS services. HTTP APIs can now route requests to AWS AppConfig, Amazon EventBridge, Amazon Kinesis Data Streams, Amazon SQS, and AWS Step Functions. This makes it easy to create webhooks for business logic hosted in these services. The service also expanded the authorization capabilities, adding Lambda and IAM authorizers, and enabled wildcards in custom domain names. Over time, we will continue to improve and migrate features from REST APIs to HTTP APIs.

In September, we launched mutual TLS for both regional REST APIs and HTTP APIs. This is a new method for client-to-server authentication to enhance the security of your API. It can protect your data from exploits such as client spoofing or man-in-the-middle attacks. This enforces two-way TLS (mTLS), which enables certificate-based authentication in both directions, from client to server and from server to client.

Enhanced observability variables now make it easier to troubleshoot each phase of an API request. Each phase from AWS WAF through to integration adds latency to a request, returns a status code, or raises an error. Developers can use these variables to identify the cause of latency within the API request. You can configure these variables in AWS SAM templates; see the demo application for an example of using them in your own application.

AWS Step Functions

X-Ray tracing in Step Functions

We added X-Ray tracing support for Step Functions workflows, giving you full visibility across state machine executions and making it easier to analyze and debug distributed applications. Using the service map view, you can visually identify errors in resources and view error rates across workflow executions. You can then drill into the root cause of an error. You can enable X-Ray in existing workflows with a single click in the console. Additionally, you can now also visualize Step Functions workflows directly in the Lambda console. To see this new feature, open the Step Functions state machines page in the Lambda console.

Step Functions also increased the payload size to 256 KB and added support for string manipulation, new comparison operators, and improved output processing. These updates were made to the Amazon States Language (ASL), which is a JSON-based language for defining state machines. The new operators include detecting the existence of a field, wildcard matching, and comparing two input fields.

AWS Serverless Application Model (AWS SAM)

AWS SAM goes GA

AWS SAM is an open source framework for building serverless applications that converts a shorthand syntax into CloudFormation resources.

In July, the AWS SAM CLI became generally available (GA). This tool operates on SAM templates and provides developers with local tooling for building serverless applications. The AWS SAM CLI offers a rich set of tools that enable developers to build serverless applications quickly.

AWS X-Ray

X-Ray Insights

X-Ray launched a public preview of X-Ray Insights, which can help produce actionable insights for anomalies within your applications. Designed to make it easier to analyze and debug distributed applications, it can proactively identify issues caused by increases in faults. Using the incident timeline, you can visualize when the issue started and how it developed. The service identifies a probable root cause along with any anomalous services. There is no additional instrumentation needed to use X-Ray Insights – you can enable this feature within X-Ray Groups.

Amazon Kinesis

In July, Kinesis announced support for data delivery to generic HTTP endpoints, and service providers like Datadog, New Relic, MongoDB, and Splunk. Use the Amazon Kinesis console to configure your data producers to send data to Amazon Kinesis Data Firehose and specify one of these new delivery targets. Additionally, Amazon Kinesis Data Firehose is now available in the Europe (Milan) and Africa (Cape Town) AWS Regions.

Serverless Posts

Our team is always working to build and write content to help our customers better understand all our serverless offerings. Here is a list of the latest posts published to the AWS Compute Blog this quarter.

July

August

September

Tech Talks & Events

We hold several AWS Online Tech Talks covering serverless topics throughout the year, so look out for them in the Serverless section of the AWS Online Tech Talks page. We also regularly deliver talks at conferences and events around the globe, join podcasts, and record short videos so you can learn in quick, byte-sized chunks.

Here are some from Q3:

Learning Paths

Ask Around Me

Learn How to Build and Deploy a Web App Backend that Supports Authentication, Geohashing, and Real-Time Messaging

Ask Around Me is an example web app that shows how to build authentication, geohashing, and real-time messaging into your serverless applications. This learning path includes videos and learning resources to help walk you through the application.

Build a Serverless Web App for a Theme Park

This five-video learning path walks you through the Innovator Island workshop, and provides learning resources for building real-time serverless web applications.

Live streams

July

August

September

There are also a number of other helpful video series covering serverless available on the Serverless Land YouTube channel.

New AWS Serverless Heroes

Serverless Heroes Q3 2020

We’re pleased to welcome Angela Timofte, Luca Bianchi, Matthieu Napoli, Peter Hanssens, Sheen Brisals, and Tom McLaughlin to the growing list of AWS Serverless Heroes.

The AWS Hero program is a selection of worldwide experts that have been recognized for their positive impact within the community. They share helpful knowledge and organize events and user groups. They’re also contributors to numerous open-source projects in and around serverless technologies.

New! The Serverless Land website

Serverless Land

To help developers find serverless learning resources, we have curated a list of serverless blogs, videos, events and training programs at a new site, Serverless Land. This is regularly updated with new information – you can subscribe to the RSS feed for automatic updates, follow the LinkedIn page or subscribe to the YouTube channel.

Still looking for more?

The Serverless landing page has lots of information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials.

You can also follow all of us on Twitter to see the latest news, follow conversations, and interact with the team.

Building resilient serverless patterns by combining messaging services

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-resilient-no-code-serverless-patterns-by-combining-messaging-services/

In “Choosing between messaging services for serverless applications”, I explain the features and differences between the core AWS messaging services. Amazon SQS, Amazon SNS, and Amazon EventBridge provide queues, publish/subscribe, and event bus functionality for your applications. Individually, these are robust, scalable services that are fundamental building blocks of serverless architectures.

However, you can also combine these services to solve specific challenges in distributed architectures. By doing this, you can use specific features of each service to build sophisticated patterns with little code. These combinations can make your applications more resilient and scalable, and reduce the amount of custom logic and architecture in your workload.

In this blog post, I highlight several important patterns for serverless developers. I also show how you use and deploy these integrations with the AWS Serverless Application Model (AWS SAM).

Examples in this post refer to code that can be downloaded from this GitHub repo. The README.md file explains how to deploy and run each example.

SNS to SQS: Adding resilience and throttling to message throughput

SNS has a robust retry policy that results in up to 100,010 delivery attempts over 23 days. If a downstream service is unavailable, it may be overwhelmed by retries when it comes back online. You can solve this issue by adding an SQS queue.

Adding an SQS queue between the SNS topic and its subscriber has two benefits. First, it adds resilience to message delivery, since the messages are durably stored in a queue. Second, it throttles the rate of messages to the consumer, helping smooth out traffic bursts caused by the service catching up with missed messages.

To build this in an AWS SAM template, you first define the two resources, and the SNS subscription:

  MySqsQueue:
    Type: AWS::SQS::Queue

  MySnsTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Protocol: sqs
          Endpoint: !GetAtt MySqsQueue.Arn

Finally, you provide permission to the SNS topic to publish to the queue, using the AWS::SQS::QueuePolicy resource:

  SnsToSqsPolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: "Allow SNS publish to SQS"
            Effect: Allow
            Principal: "*"
            Resource: !GetAtt MySqsQueue.Arn
            Action: SQS:SendMessage
            Condition:
              ArnEquals:
                aws:SourceArn: !Ref MySnsTopic
      Queues:
        - Ref: MySqsQueue

To test this, you can publish a message to the SNS topic and then inspect the SQS queue length using the AWS CLI:

aws sns publish --topic-arn "arn:aws:sns:us-east-1:123456789012:sns-sqs-MySnsTopic-ABC123ABC" --message "Test message"
aws sqs get-queue-attributes --queue-url "https://sqs.us-east-1.amazonaws.com/123456789012/sns-sqs-MySqsQueue-ABC123ABC" --attribute-names ApproximateNumberOfMessages

This results in the following output:

CLI output

Another usage of this pattern is when you want to filter messages in architectures using an SQS queue. By placing the SNS topic in front of the queue, you can use the message filtering capabilities of SNS. This ensures that only the messages you need are published to the queue. To use message filtering in AWS SAM, use the AWS::SNS::Subscription resource:

  QueueSubscription:
    Type: 'AWS::SNS::Subscription'
    Properties:
      TopicArn: !Ref MySnsTopic
      Endpoint: !GetAtt MySqsQueue.Arn
      Protocol: sqs
      FilterPolicy:
        type:
        - orders
        - payments 
      RawMessageDelivery: 'true'

EventBridge to SNS: combining features of both services

SNS and EventBridge differ in the targets they support and in their integration with broader features. This table compares the major differences between the two services:

Feature                             | Amazon SNS                                                              | Amazon EventBridge
Number of targets                   | 10 million (soft)                                                       | 5
Limits                              | 100,000 topics; 12,500,000 subscriptions per topic                      | 100 event buses; 300 rules per event bus
Input transformation                | No                                                                      | Yes
Message filtering                   | Yes                                                                     | Yes, including IP address matching
Format                              | Raw or JSON                                                             | JSON
Receive events from AWS CloudTrail  | No                                                                      | Yes
Targets                             | HTTP(S), SMS, SNS mobile push, email/email-JSON, SQS, Lambda functions  | 15 targets, including AWS Lambda, Amazon SQS, Amazon SNS, AWS Step Functions, Amazon Kinesis Data Streams, and Amazon Kinesis Data Firehose
SaaS integration                    | No                                                                      | Yes (see integration partners)
Schema Registry integration         | No                                                                      | Yes
Dead-letter queues supported        | Yes                                                                     | No
Public visibility                   | Can create public topics                                                | Cannot create public buses
Cross-Region                        | You can subscribe Lambda functions to an SNS topic in any Region        | Targets must be in the same Region, but you can publish across Regions to another event bus

In this pattern, you configure an SNS topic as a target of an EventBridge rule:

SNS topic as a target for an EventBridge rule

In the AWS SAM template, you declare the resources in the preceding diagram as follows:

Resources:
  MySnsTopic:
    Type: AWS::SNS::Topic

  EventRule: 
    Type: AWS::Events::Rule
    Properties: 
      Description: "EventRule"
      EventPattern: 
        account: 
          - !Sub '${AWS::AccountId}'
        source:
          - "demo.cli"
      Targets: 
        - Arn: !Ref MySnsTopic
          Id: "SNStopic"

The default bus already exists in every AWS account, so there is no need to declare it. For the event bus to publish matching events to the SNS topic, you define permissions using the AWS::SNS::TopicPolicy resource:

  EventBridgeToSnsPolicy:
    Type: AWS::SNS::TopicPolicy
    Properties: 
      PolicyDocument:
        Statement:
        - Effect: Allow
          Principal:
            Service: events.amazonaws.com
          Action: sns:Publish
          Resource: !Ref MySnsTopic
      Topics: 
        - !Ref MySnsTopic       

EventBridge has a limit of five targets per rule. In cases where you must send events to hundreds or thousands of targets, publishing to SNS first and then subscribing those targets to the topic works around this limit. The two services also support different targets, and this pattern allows you to deliver EventBridge events to SMS, HTTP(S), email, and SNS mobile push.

You can transform and filter the message using these services, often without needing an AWS Lambda function. SNS does not support input transformation but you can do this in an EventBridge rule. Message filtering is possible in both services but EventBridge provides richer content filtering capabilities.

AWS CloudTrail can log and monitor activity across services in your AWS account. It can be a useful source for events, allowing you to respond dynamically to objects in Amazon S3 or react to changes in your environment, for example. This natively integrates with EventBridge, allowing you to ingest events at scale from dozens of services.

Using EventBridge enables you to source events from outside your AWS account, offering integrations with a list of software as a service (SaaS) providers. This capability allows you to receive events from your accounts with SaaS providers like Zendesk, PagerDuty, and Auth0. These events are delivered to a partner event bus in your account, and can then be filtered and routed to an SNS topic.

Additionally, this pattern allows you to deliver events to Lambda functions in other AWS accounts and in other AWS Regions. You can invoke Lambda from SNS topics in other Regions and accounts. It’s also possible to make SNS topics publicly read-only, making them extensible endpoints that other third parties can consume from. SNS has comprehensive access control, which you can incorporate into this pattern.

Cross-account publishing

EventBridge to SQS: Building fault-tolerant microservices

EventBridge can route events to targets such as microservices. In the case of downstream failures, the service retries events for up to 24 hours. For workloads where you need a longer period of time to store and retry messages, you can deliver the events to an SQS queue in each microservice. This durably stores those events until the downstream service recovers. Additionally, this pattern protects the microservice from large bursts of traffic by throttling the delivery of messages.

Fault-tolerant microservices architecture

The resources declared in the AWS SAM template are similar to the previous examples, but it uses the AWS::SQS::QueuePolicy resource to grant the appropriate permission to EventBridge:

  EventBridgeToSqsPolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      PolicyDocument:
        Statement:
        - Effect: Allow
          Principal:
            Service: events.amazonaws.com
          Action: SQS:SendMessage
          Resource:  !GetAtt MySqsQueue.Arn
      Queues:
        - Ref: MySqsQueue

Conclusion

You can combine these services in your architectures to implement patterns that solve complex challenges, often with little code required. This blog post shows three examples that implement message throttling and queuing, integrate SNS and EventBridge, and build fault-tolerant microservices.

To learn more about building decoupled architectures, see this Learning Path series on EventBridge. For more serverless learning resources, visit https://serverlessland.com.

Pay as you go machine learning inference with AWS Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/pay-as-you-go-machine-learning-inference-with-aws-lambda/

This post is courtesy of Eitan Sela, Senior Startup Solutions Architect.

Many customers want to deploy machine learning models for real-time inference, and pay only for what they use. Using Amazon EC2 instances for real-time inference may not be cost effective to support sporadic inference requests throughout the day.

AWS Lambda is a serverless compute service with pay-per-use billing. However, ML frameworks like XGBoost are too large to fit into the 250 MB application artifact size limit, or the 512 MB /tmp space limit. While you can store the packages in Amazon S3 and download to Lambda (up to 3 GB), this can increase the cost.

To address this, Lambda functions can now mount an Amazon Elastic File System (EFS). This is a scalable and elastic NFS file system storing data within and across multiple Availability Zones (AZ) for high availability and durability.

With this new capability, it’s now easier to use Python packages in Lambda that require storage space to load models and other dependencies.

In this blog post, I walk through how to:

  • Create an EFS file system and an Access Point as an application-specific entry point.
  • Provision an EC2 instance, mount EFS using the Access Point, and train a breast cancer XGBoost ML model. XGBoost, Python packages, and the model are saved on the EFS file system.
  • Create a Lambda function that loads the Python packages and model from EFS, and performs the prediction based on a test event.

Create an Amazon EFS file system with an Access Point

Configuring EFS for Lambda is straightforward. I show how to do this using AWS CloudFormation, but you can also use the AWS CLI, AWS SDK, and AWS Serverless Application Model (AWS SAM).

EFS file systems are created within a customer VPC, so Lambda functions using the EFS file system must have access to the same VPC.

You can deploy the AWS CloudFormation stack located on this GitHub repository.

The stack includes the following:

  • A VPC with a public subnet
  • An EFS file system
  • An EFS Access Point
  • An EC2 instance in the VPC

It can take up to 10 minutes for the CloudFormation stack to create the resources. After the resource creation is complete, navigate to the EFS console to see the new file system.

EFS console

Navigate to the Access Points panel to see a new Access Point with the File system ID from the previous page.

Access Points panel

Note the Access Point ID and File System ID for the following sections.

Launch an Amazon EC2 instance to train a breast cancer model

In this section, you install Python packages on the EFS file system, after mounting it to EC2. You then train the breast cancer model, and save the model in the EFS file system used by the Lambda function.

The machine learning framework you use for this function is XGBoost. This is an optimized distributed gradient boosting library designed to be highly efficient, flexible, and portable. XGBoost is one of the most popular machine learning algorithms.

Navigate to the EC2 console to see the new EC2 instance created from the CloudFormation stack. This is an Amazon Linux 2 c5.large EC2 instance named ‘xgboost-for-serverless-inference-cfn-ec2’. In the instance details, you see that the security group is configured to allow inbound SSH access (for connecting to the instance).

Security Groups on instances page

Mount the EFS file system on the EC2

Connect to the instance using SSH and mount the EFS file system previously created by using the Access Point:

  1. Install amazon-efs-utils tools:
    sudo yum -y install amazon-efs-utils
  2. Create a directory to mount EFS into:
    mkdir efs
  3. Mount the EFS file system using the Access Point:
    sudo mount -t efs -o tls,accesspoint=<Access point ID> <File system ID>:/ efs

Console output

Install Python, pip and required packages

  1. Install Python and pip:
    sudo yum -y install python37
    curl -O https://bootstrap.pypa.io/get-pip.py
    python3 get-pip.py --user
  2. Verify the installation:
    python3 --version
    pip3 --version
  3. Create a requirements.txt file containing the dependencies:
    xgboost==1.1.1
    pandas
    sklearn
    joblib
  4. Install the Python packages using the requirements file:
    pip3 install -t efs/lib/ -r requirements.txt

    Note: when using bursting throughput mode with an EFS file system, this action can take up to 10 minutes.
  5. Set the Python path to refer to the installed packages directory of EFS file system:
    export PYTHONPATH=/home/ec2-user/efs/lib/

Train the breast cancer model

The breast cancer model predicts whether the breast mass is a malignant tumor or benign by looking at features computed from a digitized image of a fine needle aspirate of a breast mass.

The data used to train the model consists of the diagnosis in addition to the 10 real-valued features that are computed for each cell nucleus. Such features include radius, texture, perimeter, area, smoothness, compactness, concavity, concave points, symmetry, and fractal dimension. The prediction returned by the model is either “B” for benign or “M” for malignant. This sample project uses the public Breast Cancer Wisconsin (Diagnostic) dataset.

After installing the required Python packages, train an XGBoost model on the breast cancer dataset:

  1. Create a bc_xgboost_train.py file containing the Python code needed to train a breast cancer XGBoost model. Download the code here.
  2. Start the training of the model:
    python3 bc_xgboost_train.py

    You see a console message confirming the training, and the model file bc-xgboost-model is created in the root directory.
  3. Create a new directory on the EFS file system and copy the XGBoost breast cancer model:
    mkdir efs/model
    cp bc-xgboost-model efs/model/
  4. Check you have the required Python packages and the model on the EFS file system:
    ls efs/model/ efs/lib/

    You see all the Python packages installed previously in the lib directory, and the model file in the model directory.
  5. Review the total size of lib Python packages directory:
    du -sh efs/lib/

You can see that the total size of the lib directory is 534 MB. This is a larger package size than was allowed before EFS for Lambda.

Building a serverless machine learning inference using Lambda

In this section, you use the EFS file system previously configured for the Lambda function to import the required libraries and load the model.

Using EFS with Lambda

The AWS SAM template creates the Lambda function, mounts the EFS Access Point created earlier, and creates both required IAM roles.

It takes several minutes for the AWS SAM CLI to create the Lambda function. Afterwards, navigate to the Lambda console to see the created Lambda function.

Lambda console

In the Lambda function configuration, you see the environment variables, and basic settings, such as runtime, memory, and timeout.

Lambda function configuration

Further down, you see that the Lambda function has the VPC access configured, and the file system is mounted.

Lambda VPC configuration
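For orientation, here is a simplified sketch of what such a handler can look like; the actual function code is in the linked repo. It assumes the Access Point is mounted at /mnt/ml and the model was saved with joblib, as the training dependencies above suggest:

import sys
import json

# Make the Python packages installed on EFS importable
sys.path.insert(0, '/mnt/ml/lib')  # assumed Lambda mount path

import joblib  # resolved from EFS, not from the deployment package

# Load the model once, at cold start; xgboost is also resolved from EFS
model = joblib.load('/mnt/ml/model/bc-xgboost-model')

def lambda_handler(event, context):
    # Assumes the test event carries a list of feature values
    prediction = model.predict([event['features']])[0]
    return {'statusCode': 200, 'body': json.dumps({'prediction': str(prediction)})}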

Test your Lambda function

  1. In the Lambda console, select Configure test events from the Test events dropdown.
  2. For Event Name, enter InferenceTestEvent.
  3. Copy the event JSON from here and paste it in the dialog box.
  4. Choose Create. After saving, you see InferenceTestEvent in the Test list. Now choose Test.

You see the Lambda function inference result, log output, and duration:

Lambda function result

Conclusion

In this blog post, you train an XGBoost breast cancer model using Python packages installed on an Amazon EFS file system. You create an AWS Lambda function that loads the Python packages and the model from the EFS file system, and performs the predictions.

Now you know how to call a machine learning model inference using a Lambda function. To learn more about other real-world examples, see:

Using AWS Lambda as a consumer for Amazon Kinesis

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/using-aws-lambda-as-a-consumer-for-amazon-kinesis/

This post is courtesy of Prateek Mehrotra, Software Development Engineer.

AWS Lambda integrates natively with Amazon Kinesis as a consumer to process data ingested through a data stream. The polling, checkpointing, and error handling complexities are abstracted when you use this native integration. This allows the Lambda function code to focus on business logic processing.

This blog post describes how to operate and optimize this integration at high throughput with low system overhead time and processing latencies.

To learn more about Kinesis concepts and terminology, visit the documentation page.

Overview

You can attach a Lambda function to a Kinesis stream to process data. Multiple Lambda functions can consume from a single Kinesis stream for different kinds of processing independently. These can be used alongside other consumers such as Amazon Kinesis Data Firehose.

If a Kinesis stream has ‘n’ shards, then at least ‘n’ concurrency is required for a consuming Lambda function to process data without any induced delay. Less than ‘n’ available concurrency results in elevated iterator age in the Kinesis stream and elevated iterator age in the Lambda consumer. In a multi-consumer paradigm, if the Kinesis iterator age spikes then at least one of the stream consumers also reports a corresponding iterator age spike.

Stream poller

When the parallelization factor is greater than 1 for a Lambda consumer, the record processor polls up to ‘parallelization-factor’ partition keys at a time while processing from a single shard. To learn more, read about handling traffic with a parallelization factor.

Kinesis shard level metrics

When using Kinesis streams, it’s best practice to enable enhanced shard level metrics. These metrics can help detect whether data is distributed uniformly across the shards of the stream.
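You can enable these metrics with a single API call. A sketch, with a hypothetical stream name; 'ALL' turns on every shard-level metric:

import boto3

kinesis = boto3.client('kinesis')

kinesis.enable_enhanced_monitoring(
    StreamName='my-data-stream',   # hypothetical stream
    ShardLevelMetrics=['ALL']
)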

In a single-source, multiple-consumer use case, enhanced shard level metrics can help identify the cause of elevated iterator age. This could be due to a single shard receiving data too quickly, or at least one of the consumers failing to process the data.

To learn more about Kinesis monitoring, visit the documentation page. If per-partition processing is not a requirement, distribute data uniformly across shards. To learn more about Kinesis partition keys, visit the documentation page.

Processing delay caused by consumer misconfiguration

Kinesis reports an iterator age metric. If this value spikes, data processing from the stream is delayed. The metric value is the age of the oldest record read from the stream, measured over the specified time period.

This delay slows the data processing of the pipeline. This happens when a single shard is receiving data faster than the consumer can process it or the consumer is failing to complete processing due to errors.

Graph of records iterator age

In a single-source, multiple-consumer use case, at least one of the consumers shows a corresponding iterator age spike. If there are multiple Lambda consumers of the same data stream, then each Lambda consumer will report its own iterator age metric. This helps identify the problematic consumer for further analysis.
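To compare consumers, you can query each function's IteratorAge metric from CloudWatch. A sketch, with a hypothetical function name; the metric is reported in milliseconds:

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client('cloudwatch')

stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/Lambda',
    MetricName='IteratorAge',
    Dimensions=[{'Name': 'FunctionName', 'Value': 'my-stream-consumer'}],  # hypothetical
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=60,
    Statistics=['Maximum']
)
for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Maximum'])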

Tuning the configuration to optimize for iterator age

There are several tuning options available when the iterator age is increasing for the consumer Lambda function.

1. Increase the batch size

If the Lambda function operates at a low maximum duration, a single invocation may process less than a maximum batch size. Increase the batch size (up to a maximum of 10,000) to read more records from a shard in a single batch. This can help normalize the iterator age.

2. Change the parallelization factor

Increasing the parallelization factor in the Lambda function allows concurrent invocations to read a single shard. Multiple batches of records are created in the shard based on partition keys, resulting in faster data consumption.

Iterator age can spike when the batch size is set to 10,000 and the parallelization factor is set to 10. This can happen when data is produced faster than the consumer can process it, backing up the per-shard/per-partition queues. To mitigate this, subdivide the partition into multiple keys. This helps distribute the data for that partition key more evenly across shards.

Partition keys

3. Reduce the batch window

If data is distributed unequally across shards, or there is low write volume from producers, the Lambda poller may wait for an entire batch. You can reduce this wait time by reducing the batch window, which results in faster processing.

To learn more about Lambda poller batch window for Kinesis, visit the documentation page.
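The settings in sections 1–3 can all be adjusted on an existing event source mapping. A minimal sketch, assuming a hypothetical mapping UUID (you can find yours with list_event_source_mappings):

import boto3

lambda_client = boto3.client('lambda')

lambda_client.update_event_source_mapping(
    UUID='11112222-3333-4444-5555-666677778888',  # hypothetical mapping ID
    BatchSize=10000,                     # section 1: read more records per batch
    ParallelizationFactor=10,            # section 2: up to 10 concurrent batches per shard
    MaximumBatchingWindowInSeconds=1     # section 3: wait less for a full batch
)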

4. De-scale the Kinesis stream if overprovisioned

If the Kinesis stream metrics indicate that the stream is over-provisioned, de-scaling the stream helps increase data compaction within shards. This results in better throughput per Lambda invocation.

After reducing stream size, reduce the Lambda concurrency to maintain a 1:1 ratio of shard count to Lambda concurrency mapping. As load increases, increase the parallelization factor to keep the shard size constant. With this increase, the Lambda concurrency should be at least shard count * parallelization factor.

To learn more, read about handling traffic with a parallelization factor.

5. Enable enhanced fan-out for consumers

Enhanced fan-out allows developers to scale up the number of stream consumers by offering each stream consumer its own read throughput.

To learn more about Kinesis enhanced fan-out, visit the documentation page.

Conclusion

This blog post shows some of the best practices when using Lambda with Kinesis. It covers operational levers for high-throughput, low-latency, single-source data processing pipelines.

The enhanced Amazon Kinesis shard-level metrics help monitor the maximum overhead processing delay per shard. When correlated with the Lambda consumer’s iterator age metrics, this shows each consumer’s performance. The effective combination of batch size, parallelization factor, batch window, and partition key can lead to more efficient stream processing.

To learn more about Amazon Kinesis, visit the Getting Started page.

Choosing between messaging services for serverless applications

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/choosing-between-messaging-services-for-serverless-applications/

Most serverless application architectures use a combination of different AWS services, microservices, and AWS Lambda functions. Messaging services are important in allowing distributed applications to communicate with each other, and are fundamental to most production serverless workloads.

Messaging services can improve the resilience, availability, and scalability of applications, when used appropriately. They can also enable your applications to communicate beyond your workload or even the AWS Cloud, and provide extensibility for future service features and versions.

In this blog post, I compare the primary messaging services offered by AWS and how you can use these in your serverless application architectures. I also show how you use and deploy these integrations with the AWS Serverless Application Model (AWS SAM).

Examples in this post refer to code that can be downloaded from this GitHub repository. The README.md file explains how to deploy and run each example.

Overview

Three of the most useful messaging patterns for serverless developers are queues, publish/subscribe, and event buses. In AWS, these are provided by Amazon SQS, Amazon SNS, and Amazon EventBridge respectively. All of these services are fully managed and highly available, so there is no infrastructure to manage. All three integrate with Lambda, allowing you to publish messages via the AWS SDK and invoke functions as targets. Each of these services has an important role to play in serverless architectures.

SNS enables you to send messages reliably between parts of your infrastructure. It uses a robust retry mechanism when downstream targets are unavailable. When the delivery policy is exhausted, it can optionally send those messages to a dead-letter queue for further processing. SNS uses topics to logically separate messages into channels, and your Lambda functions interact with these topics.

SQS provides queues for your serverless applications. You can use a queue to send, store, and receive messages between different services in your workload. Queues are an important mechanism for providing fault tolerance in distributed systems, and help decouple different parts of your application. SQS scales elastically, and there is no limit to the number of messages per queue. The service durably persists messages until they are processed by a downstream consumer.

EventBridge is a serverless event bus service, simplifying routing events between AWS services, software as a service (SaaS) providers, and your own applications. It logically separates routing using event buses, and you implement the routing logic using rules. You can filter and transform incoming messages at the service level, and route events to multiple targets, including Lambda functions.

Integrating an SQS queue with AWS SAM

The first example shows an AWS SAM template defining a serverless application with two Lambda functions and an SQS queue:

Producer-consumer example

You can declare an SQS queue in an AWS SAM template with the AWS::SQS::Queue resource:

  MySqsQueue:
    Type: AWS::SQS::Queue

To publish to the queue, the publisher function must have permission to send messages. Using an AWS SAM policy template, you can apply a policy that enables the function to send messages to one specific queue:

      Policies:
        - SQSSendMessagePolicy:
            QueueName: !GetAtt MySqsQueue.QueueName

The AWS SAM template passes the queue URL into the Lambda function as an environment variable. The function uses the sendMessage method of the AWS.SQS class to publish the message:

const AWS = require('aws-sdk')
AWS.config.region = process.env.AWS_REGION 
const sqs = new AWS.SQS({apiVersion: '2012-11-05'})

// The Lambda handler
exports.handler = async (event) => {
  // Params object for SQS
  const params = {
    MessageBody: `Message at ${Date()}`,
    QueueUrl: process.env.SQSqueueName // this environment variable contains the queue URL
  }
  
  // Send to SQS
  const result = await sqs.sendMessage(params).promise()
  console.log(result)
}

When the SQS queue receives the message, the Lambda service polls the queue and invokes the consuming Lambda function. To configure this integration in AWS SAM, the consumer function is granted the SQSPollerPolicy policy. The function’s event source is set to receive messages from the queue in batches of 10:

  QueueConsumerFunction:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: code/
      Handler: consumer.handler
      Runtime: nodejs12.x
      Timeout: 3
      MemorySize: 128
      Policies:  
        - SQSPollerPolicy:
            QueueName: !GetAtt MySqsQueue.QueueName
      Events:
        MySQSEvent:
          Type: SQS
          Properties:
            Queue: !GetAtt MySqsQueue.Arn
            BatchSize: 10

The payload for the consumer function is the event from SQS. This contains an array of up to 10 messages, matching the batch size, each with a body attribute containing the publishing function’s MessageBody. You can see this in the CloudWatch log for the function:

CloudWatch log result
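A minimal sketch of a consumer handler for this event shape looks like the following:

// Sketch of a minimal consumer handler for the SQS event source
exports.handler = async (event) => {
  // event.Records contains up to BatchSize messages
  event.Records.forEach(record => {
    // record.body holds the MessageBody set by the publishing function
    console.log('Received:', record.body)
  })
}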

Integrating an SNS topic with AWS SAM

The second example shows an AWS SAM template defining a serverless application with three Lambda functions and an SNS topic:

SNS fanout to Lambda functions

You declare an SNS topic and the subscribing Lambda functions with the AWS::SNS::Topic resource:

  MySnsTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Protocol: lambda
          Endpoint: !GetAtt TopicConsumerFunction1.Arn    
        - Protocol: lambda
          Endpoint: !GetAtt TopicConsumerFunction2.Arn

You provide the SNS service with permission to invoke the Lambda functions by defining an AWS::Lambda::Permission for each:

  TopicConsumerFunction1Permission:
    Type: 'AWS::Lambda::Permission'
    Properties:
      Action: 'lambda:InvokeFunction'
      FunctionName: !Ref TopicConsumerFunction1
      Principal: sns.amazonaws.com

The SNSPublishMessagePolicy policy template grants permission to the publishing function to send messages to the topic. In the function, the publish method of the AWS.SNS class handles publishing:

const AWS = require('aws-sdk')
AWS.config.region = process.env.AWS_REGION 
const sns = new AWS.SNS({apiVersion: '2010-03-31'})

// The Lambda handler
exports.handler = async (event) => {
  // Params object for SNS
  const params = {
    Message: `Message at ${Date()}`,
    Subject: 'New message from publisher',
    TopicArn: process.env.SNStopic
  }
  
  // Send to SNS
  const result = await sns.publish(params).promise()
  console.log(result)
}

The payload for the consumer functions is the message from SNS. This is an array of records, each containing the Subject and Message attributes set by the publishing function. You can see this in the CloudWatch log for the function:

CloudWatch log result
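A minimal sketch of a consumer handler for this event shape looks like the following:

// Sketch of a minimal consumer handler for the SNS subscription
exports.handler = async (event) => {
  event.Records.forEach(record => {
    // record.Sns holds the Subject and Message set by the publishing function
    console.log(`${record.Sns.Subject}: ${record.Sns.Message}`)
  })
}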

Differences between SQS and SNS configurations

SQS queues and SNS topics offer different functionality, though both can publish to downstream Lambda functions.

An SQS message is stored on the queue for up to 14 days until it is successfully processed by a subscriber. SNS does not retain messages, so if there are no subscribers for a topic, the message is discarded.

SNS topics may broadcast to multiple targets. This behavior is called fan-out. It can be used to parallelize work across Lambda functions or send messages to multiple environments (such as test or development). An SNS topic can have up to 12,500,000 subscribers, providing highly scalable fan-out capabilities. The targets may include HTTP/S endpoints, SMS text messaging, SNS mobile push, email, SQS, and Lambda functions.

In AWS SAM templates, you can retrieve properties such as ARNs and names of queues and topics, using the following intrinsic functions:

                Amazon SQS                      Amazon SNS
Channel type    Queue                           Topic
Get ARN         !GetAtt MySqsQueue.Arn          !Ref MySnsTopic
Get name        !GetAtt MySqsQueue.QueueName    !GetAtt MySnsTopic.TopicName

Integrating with EventBridge in AWS SAM

The third example shows the AWS SAM template defining a serverless application with two Lambda functions and an EventBridge rule:

EventBridge integration with AWS SAM

The default event bus already exists in every AWS account. You declare a rule that filters events in the event bus using the AWS::Events::Rule resource:

  EventRule: 
    Type: AWS::Events::Rule
    Properties: 
      Description: "EventRule"
      EventPattern: 
        source: 
          - "demo.event"
        detail: 
          state: 
            - "new"
      State: "ENABLED"
      Targets: 
        - Arn: !GetAtt EventConsumerFunction.Arn
          Id: "ConsumerTarget"

The rule describes an event pattern specifying matching JSON attributes. Events that match this pattern are routed to the list of targets. You provide the EventBridge service with permission to invoke the Lambda functions in the target list:

  PermissionForEventsToInvokeLambda: 
    Type: AWS::Lambda::Permission
    Properties: 
      FunctionName: 
        Ref: "EventConsumerFunction"
      Action: "lambda:InvokeFunction"
      Principal: "events.amazonaws.com"
      SourceArn: !GetAtt EventRule.Arn

The AWS SAM template uses an IAM policy statement to grant permission to the publishing function to put events on the event bus:

  EventPublisherFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: code/
      Handler: publisher.handler
      Timeout: 3
      Runtime: nodejs12.x
      Policies:
        - Statement:
          - Effect: Allow
            Resource: '*'
            Action:
              - events:PutEvents      

The publishing function then uses the putEvents method of the AWS.EventBridge class, which returns after the events have been durably stored in EventBridge:

const AWS = require('aws-sdk')
AWS.config.update({region: process.env.AWS_REGION})
const eventbridge = new AWS.EventBridge()

exports.handler = async (event) => {
  const params = {
    Entries: [ 
      {
        Detail: JSON.stringify({
          "message": "Hello from publisher",
          "state": "new"
        }),
        DetailType: 'Message',
        EventBusName: 'default',
        Source: 'demo.event',
        Time: new Date 
      }
    ]
  }
  const result = await eventbridge.putEvents(params).promise()
  console.log(result)
}

The payload for the consumer function is the event from EventBridge. This is a single event object, containing the source, detail-type, and detail attributes from the publishing function. You can see this in the CloudWatch log for the function:

CloudWatch log result
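A minimal sketch of a consumer handler for this event shape looks like the following:

// Sketch of a minimal consumer handler for the EventBridge rule target
exports.handler = async (event) => {
  // EventBridge delivers a single event object rather than a Records array
  console.log(`source: ${event.source}, detail-type: ${event['detail-type']}`)
  console.log('detail:', JSON.stringify(event.detail))
}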

Comparing SNS with EventBridge

SNS and EventBridge have many similarities. Both can be used to decouple publishers and subscribers, filter messages or events, and provide fan-in or fan-out capabilities. However, there are differences in the list of targets and features for each service, and your choice of service depends on the needs of your use case.

EventBridge offers two newer capabilities that are not available in SNS. The first is software as a service (SaaS) integration. This enables you to authorize supported SaaS providers to send events directly to a partner event bus in your account. This replaces the need for polling or webhook configuration, and creates a highly scalable way to ingest SaaS events directly into your AWS account.

The second feature is the Schema Registry, which makes it easier to discover and manage OpenAPI schemas for events. EventBridge can infer schemas based on events routed through an event bus by using schema discovery. This can be used to generate code bindings directly to your IDE for type-safe languages like Python, Java, and TypeScript. This can help accelerate development by automating the generation of classes and code directly from events.

This table compares the major features of both services:

  • Number of targets: SNS: 10 million (soft). EventBridge: 5.
  • Availability SLA: SNS: 99.9%. EventBridge: 99.99%.
  • Limits: SNS: 100,000 topics, 12,500,000 subscriptions per topic. EventBridge: 100 event buses, 300 rules per event bus.
  • Publish throughput: varies by Region for both services, with soft limits.
  • Input transformation: SNS: No. EventBridge: Yes – see details.
  • Message filtering: SNS: Yes – see details. EventBridge: Yes, including IP address matching – see details.
  • Message size maximum: 256 KB for both services.
  • Format: SNS: raw or JSON. EventBridge: JSON.
  • Receive events from AWS CloudTrail: SNS: No. EventBridge: Yes.
  • Targets: SNS: HTTP(S), SMS, SNS mobile push, email/email-JSON, SQS, and Lambda functions. EventBridge: 15 targets, including AWS Lambda, Amazon SQS, Amazon SNS, AWS Step Functions, Amazon Kinesis Data Streams, and Amazon Kinesis Data Firehose.
  • SaaS integration: SNS: No. EventBridge: Yes – see integrations.
  • Schema Registry integration: SNS: No. EventBridge: Yes – see details.
  • Dead-letter queues supported: SNS: Yes. EventBridge: No.
  • FIFO ordering available: No for both services.
  • Public visibility: SNS: can create public topics. EventBridge: cannot create public buses.
  • Pricing: SNS: $0.50/million requests, plus variable delivery cost and data transfer out cost; SMS pricing varies. EventBridge: $1.00/million events; free for AWS events, with no charge for delivery.
  • Billable request size: SNS: 1 request = 64 KB. EventBridge: 1 event = 64 KB.
  • AWS Free Tier eligible: SNS: Yes. EventBridge: No.
  • Cross-Region: SNS: you can subscribe Lambda functions to a topic in any Region. EventBridge: targets must be in the same Region, but you can publish across Regions to another event bus.
  • Retry policy: SNS: for SQS/Lambda, exponential backoff over 23 days; for SMTP, SMS, and mobile push, exponential backoff over 6 hours. EventBridge: at-least-once event delivery to targets, including retry with exponential backoff for up to 24 hours.

Conclusion

Messaging is an important part of serverless applications and AWS services provide queues, publish/subscribe, and event routing capabilities. This post reviews the main features of SNS, SQS, and EventBridge and how they provide different capabilities for your workloads.

I show three example applications that publish and consume events from the three services. I walk through AWS SAM syntax for deploying these resources in your applications. Finally, I compare differences between the services.

To learn more about building decoupled architectures, see this Learning Path series on EventBridge. For more serverless learning resources, visit https://serverlessland.com.

Introducing mutual TLS authentication for Amazon API Gateway

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/introducing-mutual-tls-authentication-for-amazon-api-gateway/

This post is courtesy of Justin Pirtle, Principal Serverless Solutions Architect.

Today, AWS is introducing certificate-based mutual Transport Layer Security (TLS) authentication for Amazon API Gateway. This is a new method for client-to-server authentication that can be used with API Gateway’s existing authorization options.

By default, the TLS protocol only requires a server to authenticate itself to the client. The authentication of the client to the server is managed by the application layer. The TLS protocol also offers the ability for the server to request that the client send an X.509 certificate to prove its identity. This is called mutual TLS (mTLS) as both parties are authenticated via certificates with TLS.

Mutual TLS is commonly used for business-to-business (B2B) applications. It’s used in standards such as Open Banking, which enables secure open API integrations for financial institutions across the United Kingdom and Australia. It’s common for Internet of Things (IoT) applications to authenticate devices using digital certificates. Also, when used with a private certificate authority (CA), many companies authenticate their employees before granting access to data and services.

API Gateway now provides integrated mutual TLS authentication at no additional cost. You can enable mutual TLS authentication on your custom domains to authenticate regional REST and HTTP APIs. You can still authorize requests with bearer or JSON Web Tokens (JWTs) or sign requests with IAM-based authorization.

To use mutual TLS with API Gateway, you upload a CA public key certificate bundle as an object containing public or private/self-signed CA certs. This is used for validation of client certificates. All existing API authorization options are available for use with mTLS authentication.

Getting started

To complete the following sample setup, you must first create an HTTP API with a valid custom domain name using the AWS Management Console. Mutual TLS is now available for both regional REST APIs and the newer HTTP APIs. You use HTTP APIs for the examples depicted in this post. More details on the prerequisites to configure a custom domain name are available in the documentation.

Securing your API with mutual TLS

To configure mutual TLS, you first create the private certificate authority and client certificates. You need the public keys of the root certificate authority and any intermediate certificate authorities. These must be uploaded to API Gateway to authenticate certificates properly using mutual TLS. This example uses OpenSSL to create the certificate authority and client certificate. You can alternatively use a managed service such as AWS Certificate Manager Private Certificate Authority (ACM Private CA).

You first create a new certificate authority with signed client certificate using OpenSSL:

  1. Create the private certificate authority (CA) private and public keys:
    openssl genrsa -out RootCA.key 4096
    openssl req -new -x509 -days 36500 -key RootCA.key -out RootCA.pem
    openssl request prompts
  2. Provide the requested inputs for the root certificate authority’s subject name, locality, organization, and organizational unit properties. Choose your own values for these prompts to customize your root CA.
    Configuration options
  3. You can optionally create any intermediary certificate authorities (CAs) using the previously issued root CA. The certificate chain length for certificates authenticated with mutual TLS in API Gateway can be up to four levels.
  4. Once the CA certificates are created, you create the client certificate for use with authentication.
  5. Create the client certificate private key and certificate signing request (CSR):
    openssl genrsa -out my_client.key 2048
    openssl req -new -key my_client.key -out my_client.csr
  6. Enter the client’s subject name, locality, organization, and organizational unit properties for the client certificate. Leave the optional challenge password empty (the default).
    OpenSSL options
  7. Sign the newly created client certificate using the certificate authority you previously created:
    openssl x509 -req -in my_client.csr -CA RootCA.pem -CAkey RootCA.key -set_serial 01 -out my_client.pem -days 36500 -sha256
    Sign the newly created certificate
  8. You now have a minimum of five files in your directory (there are additional files if you are also using an intermediate CA):
    • RootCA.key (root CA private key)
    • RootCA.pem (root CA public key)
    • my_client.csr (client certificate signing request)
    • my_client.key (client certificate private key)
    • my_client.pem (client certificate public key)
  9. Prepare a PEM-encoded trust store file for all certificate authority public keys you want to use with mutual TLS:
    1. If only using a single root CA (with no intermediary CAs), only the RootCA.pem file is required. Copy the existing root CA public key to a new truststore.pem file name for further clarity on which file is being used by API Gateway as the trust store:
      cp RootCA.pem truststore.pem
    2. If using one or more intermediary CAs to sign certificates with a root of trust to your root CA previously created, you must bundle the respective PEM files of each CA into a single trust store PEM file. Use the cat command to build the bundle file:
      cat IntermediateCA_1.pem IntermediateCA_2.pem RootCA.pem > truststore.pem

      Note: The trust store CA bundle can contain up to 1,000 PEM-encoded certificate authority public key certificates, with a total object size of up to 1 MB.
  10. Upload the trust store file to an Amazon S3 bucket in the same AWS account as your API Gateway API. It is also recommended to enable object versioning for the bucket you choose. You can perform these actions using the AWS Management Console, SDKs, or AWS CLI. Using the AWS CLI, create an S3 bucket, enable object versioning on the bucket, and upload the CA bundle file:
    aws s3 mb s3://your-name-ca-truststore --region us-east-1 #creates a new S3 bucket – skip if using existing bucket
    aws s3api put-bucket-versioning --bucket your-name-ca-truststore --versioning-configuration Status=Enabled #enables versioning on S3 bucket
    aws s3 cp truststore.pem s3://your-name-ca-truststore/truststore.pem #uploads object to S3 bucket

Uploading to S3

After uploading the new truststore CA bundle file, enable mutual TLS on the API Gateway custom domain name.

Enabling mutual TLS on a custom domain name

To configure mutual TLS within API Gateway:

  1. Browse to the API Gateway console and choose Custom domain names.
  2. Before changing settings, test a custom domain name with an API mapping to ensure that the API works without mutual TLS using curl. If your custom domain name and API configuration are correct, you receive a well-formed response and HTTP status code of 200.
  3. After validation, enable mutual TLS for additional protection. Choose Edit to update the custom domain name configuration:
    Edit custom domain name configuration
  4. Enable the Mutual TLS authentication option and enter the path of the truststore PEM file, stored in an S3 bucket. You can optionally provide an S3 object version identifier to reference a specific version of the truststore CA bundle object:
    Enable mutual TLS option
  5. Choose Save to enable mutual TLS for all APIs that the custom domain name maps to.
  6. Wait for the custom domain status to show “Available”, indicating that the mutual TLS change is successfully deployed.
  7. Test the HTTP request again using curl with the same custom domain name and without modifying the request. The request is now forbidden as the call cannot be properly authenticated with mutual TLS.
  8. Test again with additional parameters in the curl command to include the local client certificate and negotiate the mutual TLS session for authentication. You can use curl with the --key and --cert parameters to send the client certificate as part of the request:
    curl --key my_client.key --cert my_client.pem https://api.yourdomain.com

The request is now properly authenticated and returns successfully.

Hardening the configuration

After setting up mutual TLS authentication for the API, harden the configuration with several additional capabilities.

Disabling access to the default API endpoint

Mutual TLS is successfully enabled on the custom domain name but the default API endpoint URL is still active. This default endpoint has the format https://{apiId}.execute-api.{region}.amazonaws.com. Since the default endpoint does not require mutual TLS, you may want to disable it. This helps to ensure that mutual TLS authentication is enforced for all traffic to the API.

To disable the endpoint:

  1. Browse to the HTTP API in the API Gateway console.
  2. Choose the API name in the menu:
    Select API name from menu
  3. In the API, choose Edit:
    Select the Edit API option
  4. Disable the default endpoint toggle to force traffic to the custom domain name and use mutual TLS authentication. Choose Save.
    Disable the default endpoint toggle
    Note: Disabling the default endpoint is currently only available for HTTP APIs.
  5. Test invoking the default endpoint again. It is no longer active. The custom domain name continues to serve requests when authenticated using your client certificate.
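Alternatively, you can disable the default endpoint programmatically. This sketch uses the AWS SDK for JavaScript, assuming an SDK version recent enough to include the DisableExecuteApiEndpoint property; the API ID is a placeholder:

const AWS = require('aws-sdk')
const apigateway = new AWS.ApiGatewayV2({ region: 'us-east-1' })

// Placeholder API ID - use your HTTP API's identifier
apigateway.updateApi({
  ApiId: 'a1b2c3d4e5',
  DisableExecuteApiEndpoint: true
}).promise()
  .then(result => console.log(result))
  .catch(err => console.error(err))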

Additional authorization capabilities

In addition to the initial mutual TLS authentication via client certificate, you can use all existing API Gateway authorizer options. This includes JSON Web Tokens (JWT)/Cognito user pool authorizers, Lambda authorizers, and IAM-based authorization.

For Lambda authorizers, the event payload is expanded to include additional certificate properties from the client’s authenticated certificate. These properties are found at requestContext.identity.clientCert with the Lambda authorizer v1 payload version or at requestContext.authentication.clientCert with the v2 payload version. These additional attributes include the PEM-encoded public key of the client cert and also the certificate subject distinguished name (DN), its issuer’s CA distinguished name, and the certificate’s valid from and to timestamps.

These additional context properties enable any custom validation of the calling certificate with any other request properties, such as bearer tokens in authorization headers, all with a unified authorizer response:

"requestContext": {
    "authentication": {
        "clientCert": {
            "clientCertPem": "-----BEGIN CERTIFICATE-----\nMIIEZTCCAk0CAQEwDQ...",
            "issuerDN": "C=US,ST=Washington,L=Seattle,O=Amazon Web Services,OU=Security,CN=My Private CA",
            "serialNumber": "1",
            "subjectDN": "C=US,ST=Washington,L=Seattle,O=Amazon Web Services,OU=Security,CN=My Client",
            "validity": {
                "notAfter": "Aug  5 00:28:21 2120 GMT",
                "notBefore": "Aug 29 00:28:21 2020 GMT"
            }
        }
    },
    ...

For Lambda authorizer blueprint samples, refer to https://github.com/awslabs/aws-apigateway-lambda-authorizer-blueprints.

Certificate revocation validation

You can validate certificates against any certificate revocation list (CRL) or by using the Online Certificate Status Protocol (OCSP) directly from a Lambda custom authorizer. A Lambda authorizer can locally cache a CRL for re-use across API authorization requests without downloading it each time.

For OCSP requests, the authorizer can make an API call to the OCSP server requesting validation that the certificate is still valid before returning the authorization response to API Gateway. Further enhancements supporting native certificate revocation verification capabilities are planned for future API Gateway releases.
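As an illustrative sketch (not the blueprint code), a Lambda authorizer using the v2 payload and simple response format could combine a validity check with a stubbed revocation lookup. The isRevoked helper here is hypothetical and must be implemented against your CA’s CRL or OCSP endpoint:

// Sketch of a Lambda authorizer - assumes HTTP API payload format 2.0
// with simple responses enabled
exports.handler = async (event) => {
  const cert = event.requestContext.authentication.clientCert

  // Reject certificates outside their validity window
  // Note: depending on the timestamp format, you may need a date library here
  const now = new Date()
  if (now < new Date(cert.validity.notBefore) || now > new Date(cert.validity.notAfter)) {
    return { isAuthorized: false }
  }

  // Hypothetical helper: check the serial number against a cached CRL
  // or query an OCSP responder
  const revoked = await isRevoked(cert.serialNumber)
  return { isAuthorized: !revoked }
}

async function isRevoked(serialNumber) {
  // Placeholder implementation - replace with a CRL or OCSP check
  return false
}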

Conclusion

Mutual TLS (mTLS) for API Gateway is generally available today at no additional cost. It’s available in all AWS commercial Regions, AWS GovCloud (US) Regions, and China Regions. It supports configuration via the API Gateway console, AWS CLI, SDKs, and AWS CloudFormation.

This post shows how to configure mutual TLS on a custom domain name and disable the default execute-api API endpoint. It also covers how to use Lambda authorizer extensions to further authorize client invocations or verify certificate revocation.

To learn more about Amazon API Gateway, visit the API Gateway developer guide documentation.

Uploading to Amazon S3 directly from a web or mobile application

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/uploading-to-amazon-s3-directly-from-a-web-or-mobile-application/

In web and mobile applications, it’s common to provide users with the ability to upload data. Your application may allow users to upload PDFs and documents, or media such as photos or videos. Every modern web server technology has mechanisms to allow this functionality. Typically, in the server-based environment, the process follows this flow:

Application server upload process

  1. The user uploads the file to the application server.
  2. The application server saves the upload to a temporary space for processing.
  3. The application transfers the file to a database, file server, or object store for persistent storage.

While the process is simple, it can have significant side-effects on the performance of the web-server in busier applications. Media uploads are typically large, so transferring these can represent a large share of network I/O and server CPU time. You must also manage the state of the transfer to ensure that the entire object is successfully uploaded, and manage retries and errors.

This is challenging for applications with spiky traffic patterns. For example, in a web application that specializes in sending holiday greetings, it may experience most traffic only around holidays. If thousands of users attempt to upload media around the same time, this requires you to scale out the application server and ensure that there is sufficient network bandwidth available.

By directly uploading these files to Amazon S3, you can avoid proxying these requests through your application server. This can significantly reduce network traffic and server CPU usage, and enable your application server to handle other requests during busy periods. S3 also is highly available and durable, making it an ideal persistent store for user uploads.

In this blog post, I walk through how to implement serverless uploads and show the benefits of this approach. This pattern is used in the Happy Path web application. You can download the code from this blog post in this GitHub repo.

Overview of serverless uploading to S3

When you upload directly to an S3 bucket, you must first request a signed URL from the Amazon S3 service. You can then upload directly using the signed URL. This is a two-step process for your application front end:

Serverless uploading to S3

  1. Call an Amazon API Gateway endpoint, which invokes the getSignedURL Lambda function. This gets a signed URL from the S3 bucket.
  2. Directly upload the file from the application to the S3 bucket.

To deploy the S3 uploader example in your AWS account:

  1. Navigate to the S3 uploader repo and install the prerequisites listed in the README.md.
  2. In a terminal window, run:
    git clone https://github.com/aws-samples/amazon-s3-presigned-urls-aws-sam
    cd amazon-s3-presigned-urls-aws-sam
    sam deploy --guided
  3. At the prompts, enter s3uploader for Stack Name and select your preferred Region. Once the deployment is complete, note the APIendpoint output.

CloudFormation stack outputs

Testing the application

I show two ways to test this application. The first is with Postman, which allows you to directly call the API and upload a binary file with the signed URL. The second is with a basic frontend application that demonstrates how to integrate the API.

To test using Postman:

  1. First, copy the API endpoint from the output of the deployment.
  2. In the Postman interface, paste the API endpoint into the box labeled Enter request URL.
  3. Choose Send.
    Postman test
  4. After the request is complete, the Body section shows a JSON response. The uploadURL attribute contains the signed URL. Copy this attribute to the clipboard.
  5. Select the + icon next to the tabs to create a new request.
  6. Using the dropdown, change the method from GET to PUT. Paste the URL into the Enter request URL box.
  7. Choose the Body tab, then the binary radio button.
    Select the binary radio button in Postman
  8. Choose Select file and choose a JPG file to upload. Choose Send. You see a 200 OK response after the file is uploaded.
    200 response code in Postman
  9. Navigate to the S3 console, and open the S3 bucket created by the deployment. In the bucket, you see the JPG file uploaded via Postman.
    Uploaded object in S3 bucket

To test with the sample frontend application:

  1. Copy index.html from the example’s repo to an S3 bucket.
  2. Update the object’s permissions to make it publicly readable.
  3. In a browser, navigate to the public URL of the index.html file.
    Frontend testing app at index.html
  4. Select Choose file and then select a JPG file to upload in the file picker. Choose Upload image. When the upload completes, a confirmation message is displayed.
    Upload in the test app
  5. Navigate to the S3 console, and open the S3 bucket created by the deployment. In the bucket, you see the second JPG file you uploaded from the browser.
    Second uploaded file in S3 bucket

Understanding the S3 uploading process

When uploading objects to S3 from a web application, you must configure S3 for Cross-Origin Resource Sharing (CORS). CORS rules are defined as an XML document on the bucket. Using AWS SAM, you can configure CORS as part of the resource definition in the AWS SAM template:

   S3UploadBucket:
    Type: AWS::S3::Bucket
    Properties:
      CorsConfiguration:
        CorsRules:
        - AllowedHeaders:
            - "*"
          AllowedMethods:
            - GET
            - PUT
            - HEAD
          AllowedOrigins:
            - "*"

The preceding policy allows all headers and origins – it’s recommended that you use a more restrictive policy for production workloads.

In the first step of the process, the API endpoint invokes the Lambda function to make the signed URL request. The Lambda function contains the following code:

const AWS = require('aws-sdk')
AWS.config.update({ region: process.env.AWS_REGION })
const s3 = new AWS.S3()
const URL_EXPIRATION_SECONDS = 300

// Main Lambda entry point
exports.handler = async (event) => {
  return await getUploadURL(event)
}

const getUploadURL = async function(event) {
  const randomID = parseInt(Math.random() * 10000000)
  const Key = `${randomID}.jpg`

  // Get signed URL from S3
  const s3Params = {
    Bucket: process.env.UploadBucket,
    Key,
    Expires: URL_EXPIRATION_SECONDS,
    ContentType: 'image/jpeg'
  }
  const uploadURL = await s3.getSignedUrlPromise('putObject', s3Params)
  return JSON.stringify({
    uploadURL: uploadURL,
    Key
  })
}

This function determines the name, or key, of the uploaded object, using a random number. The s3Params object defines the accepted content type and also specifies the expiration of the signed URL. In this case, the URL is valid for 300 seconds. The signed URL is returned as part of a JSON object including the key for the calling application.

The signed URL contains a security token with permissions to upload this single object to this bucket. To successfully generate this token, the code calling getSignedUrlPromise must have s3:putObject permissions for the bucket. This Lambda function is granted the S3WritePolicy policy to the bucket by the AWS SAM template.

The uploaded object must match the same file name and content type as defined in the parameters. An object matching the parameters may be uploaded multiple times, providing that the upload process starts before the token expires. The default expiration is 15 minutes but you may want to specify shorter expirations depending upon your use case.

Once the frontend application receives the API endpoint response, it has the signed URL. The frontend application then uses the PUT method to upload binary data directly to the signed URL:

let blobData = new Blob([new Uint8Array(array)], {type: 'image/jpeg'})
const result = await fetch(signedURL, {
  method: 'PUT',
  body: blobData
})

At this point, the caller application is interacting directly with the S3 service and not with your API endpoint or Lambda function. S3 returns a 200 HTTP status code once the upload is complete.

For applications expecting a large number of user uploads, this provides a simple way to offload a large amount of network traffic to S3, away from your backend infrastructure.
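Putting both steps together, a browser-side flow might look like this sketch, where API_ENDPOINT_URL is a placeholder for your deployed API endpoint:

// Hypothetical end-to-end flow: request a signed URL, then upload the file
const API_ENDPOINT_URL = 'https://abc123.execute-api.us-east-1.amazonaws.com/uploads'

async function uploadFile(blobData) {
  // Step 1: request the signed URL and object key from the API
  const response = await fetch(API_ENDPOINT_URL)
  const { uploadURL, Key } = await response.json()

  // Step 2: upload the binary data directly to S3 with the signed URL
  await fetch(uploadURL, { method: 'PUT', body: blobData })
  return Key
}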

Adding authentication to the upload process

The current API endpoint is open, available to any service on the internet. This means that anyone can upload a JPG file once they receive the signed URL. In most production systems, developers want to use authentication to control who has access to the API, and who can upload files to your S3 buckets.

You can restrict access to this API by using an authorizer. This sample uses HTTP APIs, which support JWT authorizers. This allows you to control access to the API via an identity provider, which could be a service such as Amazon Cognito or Auth0.

The Happy Path application only allows signed-in users to upload files, using Auth0 as the identity provider. The sample repo contains a second AWS SAM template, templateWithAuth.yaml, which shows how you can add an authorizer to the API:

  MyApi:
    Type: AWS::Serverless::HttpApi
    Properties:
      Auth:
        Authorizers:
          MyAuthorizer:
            JwtConfiguration:
              issuer: !Ref Auth0issuer
              audience:
                - https://auth0-jwt-authorizer
            IdentitySource: "$request.header.Authorization"
        DefaultAuthorizer: MyAuthorizer

Both the issuer and audience attributes are provided by the Auth0 configuration. By specifying this authorizer as the default authorizer, it is used automatically for all routes using this API. Read part 1 of the Ask Around Me series to learn more about configuring Auth0 and authorizers with HTTP APIs.

After authentication is added, the calling web application provides a JWT token in the headers of the request:

const response = await axios.get(API_ENDPOINT_URL, {
  headers: {
    Authorization: `Bearer ${token}`
  }
})

API Gateway evaluates this token before invoking the getUploadURL Lambda function. This ensures that only authenticated users can upload objects to the S3 bucket.

Modifying ACLs and creating publicly readable objects

In the current implementation, the uploaded object is not publicly accessible. To make an uploaded object publicly readable, you must set its access control list (ACL). There are preconfigured ACLs available in S3, including a public-read option, which makes an object readable by anyone on the internet. Set the appropriate ACL in the params object before calling getSignedUrlPromise:

const s3Params = {
  Bucket: process.env.UploadBucket,
  Key,
  Expires: URL_EXPIRATION_SECONDS,
  ContentType: 'image/jpeg',
  ACL: 'public-read'
}

Since the Lambda function must have the appropriate bucket permissions to sign the request, you must also ensure that the function has PutObjectAcl permission. In AWS SAM, you can add the permission to the Lambda function with this policy:

        - Statement:
          - Effect: Allow
            Resource: !Sub 'arn:aws:s3:::${S3UploadBucket}/*'
            Action:
              - s3:PutObjectAcl

Conclusion

Many web and mobile applications allow users to upload data, including large media files like images and videos. In a traditional server-based application, this can create heavy load on the application server, and also use a considerable amount of network bandwidth.

By enabling users to upload files to Amazon S3, this serverless pattern moves the network load away from your service. This can make your application much more scalable, and capable of handling spiky traffic.

This blog post walks through a sample application repo and explains the process for retrieving a signed URL from S3. It explains how to test the URLs in both Postman and in a web application. Finally, I explain how to add authentication and make uploaded objects publicly accessible.

To learn more, see this video walkthrough that shows how to upload directly to S3 from a frontend web application. For more serverless learning resources, visit https://serverlessland.com.

Using Lambda layers to simplify your development process

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/using-lambda-layers-to-simplify-your-development-process/

Serverless developers frequently import libraries and dependencies into their AWS Lambda functions. While you can zip these dependencies as part of the build and deployment process, in many cases it’s easier to use layers instead. In this post, I explain how layers work, and how you can build and include layers in your own applications.

This blog post references the Happy Path application, which shows how to build a flexible backend to a photo-processing web application. To learn more, refer to Using serverless backends to iterate quickly on web apps – part 1. The code in this post is available at this GitHub repo.

Overview of Lambda layers

A Lambda layer is an archive containing additional code, such as libraries, dependencies, or even custom runtimes. When you include a layer in a function, the contents are extracted to the /opt directory in the execution environment. You can include up to five layers per function, which count towards the standard Lambda deployment size limits.

Layers are deployed as immutable versions, and the version number increments each time you publish a new layer. When you include a layer in a function, you specify the layer version you want to use. Layers are automatically set as private, but they can be shared with other AWS accounts, or shared publicly. Permissions only apply to a single version of a layer.

Using layers can make it faster to deploy applications with the AWS Serverless Application Model (AWS SAM) or the Serverless framework. By moving runtime dependencies from your function code to a layer, this can help reduce the overall size of the archive uploaded during a deployment.

Creating a layer containing the AWS SDK

The AWS SDK allows you to interact programmatically with AWS services using one of the supported runtimes. The Lambda service includes the AWS SDK so you can use it without explicitly importing in your deployment package.

However, there is no guarantee of the version provided in the execution environment. The SDK is upgraded frequently to support new AWS services and features. As a result, the version may change at any time. You can see the current version used by Lambda by declaring an instance of the SDK and logging out the version method:

Logging out the version method
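As a minimal sketch, a function like this logs the SDK version available in the execution environment:

// Logs the AWS SDK version in use - with a layer attached, this shows
// the version packaged in the layer rather than the Lambda-provided SDK
const AWS = require('aws-sdk')

exports.handler = async () => {
  console.log('AWS SDK version:', AWS.VERSION)
}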

For production workloads, it’s best practice to lock the version of the AWS SDK used in your functions. You can achieve this by including the SDK with your code package. Once you include this library, your code always uses the version in the deployment package and not the version included in the Lambda service.

A serverless application may consist of many functions, which all use a common SDK version. Instead of bundling the SDK with each function deployment, you can create a layer containing the SDK. The effect of this is to reduce the size of the uploaded archive, which makes your deployments faster.

To create an AWS SDK layer:

  1. First, clone this blog post’s GitHub repo. From a terminal window, execute:
    git clone https://github.com/aws-samples/aws-lambda-layers-aws-sam-examples
    cd ./aws-sdk-layer
  2. This directory contains an AWS SAM template and Node.js package.json file. Install the package.json contents:
    npm install
  3. Create the layer directory defined in the AWS SAM template and the nodejs directory required by Lambda. Next, move the node_modules directory:
    mkdir -p ./layer/nodejs
    mv ./node_modules ./layer/nodejs
  4. Next, deploy the AWS SAM template to create the layer:
    sam deploy --guided
  5. For the Stack name, enter “aws-sdk-layer”. Enter your preferred AWS Region and accept the other defaults.
  6. After the deployment completes, the new Lambda layer is available to use. Run this command to see the available layers:
    aws lambda list-layers
    aws lambda list-layers output

After adding a layer to a function, you can use console.log to log out the AWS SDK version. This shows that the function is now using the SDK version in the layer instead of the version provided by the Lambda service:

Use the SDK layer instead of the bundled layer

Creating layers with OS-specific binaries

Many code libraries include binaries that are operating-system specific. When you build packages on your local development machine, by default the binaries for that operating system are used. These may not be the right binaries for Lambda, which runs on Amazon Linux. If you are not using a compatible operating system, you must ensure you include Linux binaries in the layer.

The simplest way to package these libraries correctly is to use AWS Cloud9. This is an IDE in the AWS Cloud, which runs on Amazon EC2. After creating an environment, you can clone a git repository directly to the local storage of the instance, and run the necessary build scripts.

The Happy Path application resizes images using the Sharp npm library. This library uses libvips, which is written in C, so the compilation is operating system-specific. By creating a layer containing this library, it simplifies the packaging and deployment of the consuming Lambda function.

To create a Sharp layer using AWS Cloud9:

  1. Navigate to the AWS Cloud9 console.
  2. Choose Create environment.
  3. Enter the name “My IDE” and choose Next step.
  4. Accept all the defaults and choose Next step.
  5. Review the settings and choose Create environment.
  6. In the terminal panel, enter:
    git clone https://github.com/aws-samples/aws-lambda-layers-aws-sam-examples
    cd ./aws-lambda-layers-aws-sam-examples/sharp-layer
    npm install
    Creating a layer in Cloud9
  7. From a terminal window, ensure you are in the directory where you cloned this post’s GitHub repo. Execute the following commands:
    cd ./sharp-layer
    npm install
    mkdir -p ./layer/nodejs
    mv ./node_modules ./layer/nodejs
    Creating the layer in Cloud9
  8. Next, deploy the AWS SAM template to create the layer:
    sam deploy --guided
  9. For the Stack name, enter “sharp-layer”. Enter your preferred AWS Region and accept the other defaults. After the deployment completes, the new Lambda layer is available to use.

In some runtimes, you can specify a local set of packages for development, and another set for production. For example, in Node.js, the package.json file allows you to specify two sections for dependencies. If your development machine uses a different operating system to Lambda, and therefore uses different binaries, you can use package.json to resolve this. In the Happy Path Resizer function, which uses the Sharp layer, the package.json refers to a local binary for development.

Adding development dependencies to package.json

AWS SAM defines Lambda functions with the AWS::Serverless::Function resource. Layers are defined as a property of functions, as a list of layer ARNs including the version:

  MyLambdaFunction:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: myFunction/
      Handler: app.handler
      MemorySize: 128
      Layers:
        - !Ref SharpLayerARN

Sharing a layer

Layers are private to your account by default but you can optionally share with other AWS accounts or make a layer public. You cannot share layers via the AWS Management Console but instead use the AWS CLI.

To share a layer, use add-layer-version-permission, specifying the layer name, version, AWS Region, and principal:

aws lambda add-layer-version-permission \
  --layer-name node-sharp \
  --principal '*' \
  --action lambda:GetLayerVersion \
  --version-number 3 \
  --statement-id public \
  --region us-east-1

In the principal parameter, specify an individual account ID or use an asterisk to make the layer public. The CLI responds with a RevisionId containing the current revision of the policy:

add-layer-version output

You can check the permissions associated with a layer version by calling get-layer-version-policy with the layer name and version:

aws lambda get-layer-version-policy \
  --layer-name node-sharp \
  --version-number 3 \
  --region us-east-1

get-layer-version-policy output

Similarly, you can delete permissions associated with a layer version by calling remove-layer-version-permission with the layer name, statement ID, and version:

aws lambda remove-layer-version-permission \
  --layer-name node-sharp \
  --statement-id public \
  --version-number 3

Once the permissions are removed, calling get-layer-version-policy results in an error:

Error invoking after removal

Conclusion

Lambda layers provide a convenient and effective way to package code libraries for sharing with Lambda functions in your account. Using layers can help reduce the size of uploaded archives and make it faster to deploy your code.

Layers can contain packages using OS-specific binaries, providing a convenient way to distribute these to developers. While layers are private by default, you can share with other accounts or make a layer public. Layers are published as immutable versions, and deleting a layer has no effect on deployed Lambda functions already using that layer.

To learn more about using Lambda layers, visit the documentation, or see how layers are used in the Happy Path web application.

Using serverless backends to iterate quickly on web apps – part 3

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/using-serverless-backends-to-iterate-quickly-on-web-apps-part-3/

This series is about building flexible backends for web applications. The example is Happy Path, a web app that allows park visitors to upload and share maps and photos, to help replace printed materials.

In part 1, I show how the application works and walk through the backend architecture. In part 2, you deploy a basic image-processing workflow. To do this, you use an AWS Serverless Application Model (AWS SAM) template to create an AWS Step Functions state machine.

In this post, I show how to deploy progressively more complex workflow functionality while using minimal code. The solution in this web app is designed for 100,000 monthly active users. Using this approach, you can introduce more sophisticated workflows without affecting the existing backend application.

The code and instructions for this application are available in the GitHub repo.

Introducing image moderation

In the first version, users could upload any images to parks on the map. One of the most important feature requests is to enable moderation, to prevent inappropriate images from appearing on the app. To handle a large number of uploaded images, using human moderation would be slow and labor-intensive.

In this section, you use Amazon ML services to automate analyzing the images for unsafe content. Amazon Rekognition provides an API to detect if an image contains moderation labels. These labels are categorized into different types of unsafe content that would not be appropriate for this app.

Version 2 of the workflow uses this API to automate the process of checking images. To install version 2:

  1. From a terminal window, delete the v1 workflow stack:
    aws cloudformation delete-stack --stack-name happy-path-workflow-v1
  2. Change directory to the version 2 AWS SAM template in the repo:
    cd .\workflows\templates\v2
  3. Build and deploy the solution:
    sam build
    sam deploy --guided
  4. The deploy process prompts you for several parameters. Enter happy-path-workflow-v2 as the Stack Name. The other values are the outputs from the backend deployment process, detailed in the repo’s README. Enter these to complete the deployment.

From VS Code, open the v2 state machine in the repo from workflows/statemachines/v2.asl.json. Choose the Render graph option in the CodeLens to see the workflow visualization.

Serverless workflow visualization

This new workflow introduces a Moderator step. This invokes a Moderator Lambda function that uses the Amazon Rekognition API. If this API identifies any unsafe content labels, it returns these as part of the function output.
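While the repo contains the full implementation, a simplified sketch of the Moderator function might look like the following; the event’s bucket and key properties are assumptions about the state input shape:

// Sketch of the Moderator function using the Amazon Rekognition API
const AWS = require('aws-sdk')
const rekognition = new AWS.Rekognition()

exports.handler = async (event) => {
  const result = await rekognition.detectModerationLabels({
    Image: {
      // Assumes the state input carries the uploaded object's location
      S3Object: { Bucket: event.bucket, Name: event.key }
    }
  }).promise()

  // Any returned labels indicate unsafe content for this application
  return { ...event, moderationLabels: result.ModerationLabels }
}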

The next step in the workflow is a Moderation result choice state. This evaluates the output of the previous function – if the image passes moderation, the process continues to the Resizer function. If it fails, execution moves to the RecordFailState step.

Step Functions integrates directly with some AWS services so that you can call and pass parameters into the APIs of those services. The RecordFailState uses an Amazon DynamoDB service integration to write the workflow failure to the application table, using the arn:aws:states:::dynamodb:updateItem resource.

Testing the workflow

To test moderation, I use an unsafe image with suggestive content. This is an image that is not considered appropriate for this application. To test the deployed v2 workflow:

  1. Open the frontend application at http://localhost:8080 in your browser.
  2. Select a park location, choose Show Details, and then choose Upload images.
  3. Select an unsafe image to upload.
  4. Navigate to the Step Functions console. This shows the v2StateMachine with one successful execution:
    State machine result
  5. Select the state machine, and choose the execution to display more information. Scroll down to the Visual workflow panel.
    Visual workflow panel

This shows that the moderation failed and the path continued to log the failed state in the database. If you open the Output resource, this displays more details about why the image is considered unsafe.

Checking the image size and file type

The upload component in the frontend application limits file selection to JPG images but there is no check to reject images that are too small. It’s prudent to check and enforce image types and sizes on the backend API in addition to the frontend. This is because it’s possible to upload images via the API without using the frontend.

The next version of the workflow enforces image sizes and file types. To install version 3:

  1. From a terminal window, delete the v2 workflow stack:
    aws cloudformation delete-stack --stack-name happy-path-workflow-v2
  2. Change directory to the version 3 AWS SAM template in the repo:
    cd .\workflows\templates\v3
  3. Build and deploy the solution:
    sam build
    sam deploy --guided
  4. The deploy process prompts you for several parameters. Enter happy-path-workflow-v3 as the Stack Name. The other values are the outputs from the backend deployment process, detailed in the repo’s README. Enter these to complete the deployment.

From VS Code, open the v3 state machine in the repo from workflows/statemachines/v3.asl.json. Choose the Render graph option in the CodeLens to see the workflow visualization.

v3 state machine

This workflow changes the starting point of the execution, introducing a Check Dimensions step. This invokes a Lambda function that checks the size and type of the Amazon S3 object using the image-size npm package. This function uses environment variables provided by the AWS SAM template to compare against a minimum size and an allowed type array.
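A simplified sketch of such a function follows; the environment variable names and the event’s bucket and key properties are illustrative assumptions rather than the repo’s exact code:

// Sketch of the Check Dimensions function
const AWS = require('aws-sdk')
const sizeOf = require('image-size')
const s3 = new AWS.S3()

// Hypothetical environment variables set by the AWS SAM template, e.g. '800' and 'jpg'
const MIN_PIXELS = parseInt(process.env.minPixels, 10)
const ALLOWED_TYPES = (process.env.allowedTypes || 'jpg').split(',')

exports.handler = async (event) => {
  // Assumes the state input carries the uploaded object's location
  const object = await s3.getObject({ Bucket: event.bucket, Key: event.key }).promise()
  const { width, height, type } = sizeOf(object.Body)

  return {
    ...event,
    dimensions: { width, height, type },
    sizeOk: width >= MIN_PIXELS && ALLOWED_TYPES.includes(type)
  }
}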

The output is evaluated by the Dimension Result choice state. If the image is larger than the minimum size allowed, execution continues to the Moderator function as before. If not, it passes to the RecordFailState step to log the result in the database.

Testing the workflow

To test, I use an image that’s narrower than the minPixels value. To test the deployed v3 workflow:

  1. Open the frontend application at http://localhost:8080 in your browser.
  2. Select a park location, choose Show Details, and then choose Upload images.
  3. Select an image with a width smaller than 800 pixels. After a few seconds, a rejection message appears:
    "Image is too small" message
  4. Navigate to the Step Functions console. This shows the v3StateMachine with one successful execution. Choose the execution to show more detail.
    Execution output

The execution shows that the Check Dimension step added the image dimensions to the event object. Next, the Dimensions Result choice state rejected the image, and logged the result at the RecordFailState step. The application’s DynamoDB table now contains details about the failed upload:

DynamoDB item details

Pivoting the application to a new concept

Until this point, the Happy Path web application is designed to help park visitors share maps and photos. This is the development team’s original idea behind the app. During the product-market fit stage of development, it’s common for applications to pivot substantially from the original idea. For startups, it can be critical to have the agility to modify solutions quickly to meet the needs of customers.

In this scenario, the original idea has been less successful than predicted, and park visitors are not adopting the app as expected. However, the business development team has identified a new opportunity. Restaurants would like an app that allows customers to upload menus and food photos. How can the development team create a new proof-of-concept app for restaurant customers to test this idea?

In this version, you modify the application to work for restaurants. While features continue to be added to the parks workflow, it now supports business logic specifically for the restaurant app.

To create the v4 workflow and update the frontend:

  1. From a terminal window, delete the v3 workflow stack:
    aws cloudformation delete-stack --stack-name happy-path-workflow-v3
  2. Change directory to the version 4 AWS SAM template in the repo:
    cd ./workflows/templates/v4
  3. Build and deploy the solution:
    sam build
    sam deploy --guided
  4. The deploy process prompts you for several parameters. Enter happy-path-workflow-v4 as the Stack Name. The other values are the outputs from the backend deployment process, detailed in the repo’s README. Enter these to complete the deployment.
  5. Open frontend/src/main.js and update the businessType variable on line 63. Set this value to ‘restaurant’.
     Change config to restaurants
  6. Start the local development server:
    npm run serve
  7. Open the application at http://localhost:8080. This now shows restaurants in the local area instead of parks.

In the Step Functions console, select the v4StateMachine to see the latest workflow, then open the Definition tab to see the visualization:

Workflow definition

This workflow starts with a step that applies to both parks and restaurants: checking the image dimensions. Next, it determines the place type from the placeId record in DynamoDB. Depending on the place type, execution now follows a different path:

  • Parks continue through the automated moderator process, then the resizer and publish steps, as before.
  • Restaurants now use Amazon Rekognition to determine the labels in the image. Any photos containing people are rejected (see the sketch after this list). Otherwise, the workflow continues to the resizer and publish steps.
  • Other business types pass to the RecordFailState step, since they are not supported.
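As a sketch, the people-detection step can be implemented with the Rekognition DetectLabels API. The event shape and thresholds here are assumptions for illustration:

const AWS = require('aws-sdk')
const rekognition = new AWS.Rekognition()

exports.handler = async (event) => {
  // Detect labels directly on the S3 object (bucket and key locations are assumed)
  const result = await rekognition.detectLabels({
    Image: {
      S3Object: { Bucket: event.detail.bucket, Name: event.detail.key }
    },
    MaxLabels: 10,
    MinConfidence: 80
  }).promise()

  // Reject the photo if any label identifies a person
  const containsPeople = result.Labels.some((label) => label.Name === 'Person')
  return { ...event, containsPeople }
}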

Testing the workflow

To test the deployed v4 workflow:

  1. Open the frontend application at http://localhost:8080 in your browser.
  2. Select a restaurant, choose Show Details, and then choose Upload images.
  3. Select an image from the test photos dataset. After a few seconds, you see a message confirming the photo has been added.
  4. Next, select an image that contains one or more people. The new restaurant workflow rejects this type of photo:
     "Image rejected" message
  5. In the Step Functions console, select the last execution for the v4StateMachine to see how the Check for people step rejected the image:
     v4 workflow

If other business types are added to the application later, you can extend the Step Functions workflow accordingly. Step Functions pricing is based on the number of state transitions in an execution, not the total number of steps defined in a workflow. This means you can branch by business type in the Happy Path application without affecting the cost of running an execution, provided the total number of transitions per execution stays the same.

Conclusion

Earlier in this series, you deployed a simple workflow for processing image uploads in the Happy Path web application. In this post, you add progressively more complex functionality by deploying new versions of the workflow.

The first iteration introduces image moderation using Amazon Rekognition, providing the ability to automate the evaluation of unsafe content. Next, the workflow is modified to check image size and file type. This allows you to reject any images that are too small or do not meet the type requirements. Finally, I show how to expand the logic further to accept other business types with their own custom workflows.

To learn more about building serverless web applications, see the Ask Around Me series.

Building Salesforce integrations with Amazon EventBridge and Amazon AppFlow

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-salesforce-integrations-with-amazon-eventbridge/

This post is courtesy of Den Delimarsky, Senior Product Manager, and Vinay Kondapi, Senior Product Manager.

The integration between Amazon EventBridge and Amazon AppFlow enables customers to receive and react to events from Salesforce in their event-driven applications. In this blog post, I show you how to set up the integration, and route Salesforce events to an AWS Lambda function for processing.

Amazon AppFlow is a fully managed integration service that enables you to securely transfer data between software as a service (SaaS) applications like Salesforce, Marketo, Slack, and ServiceNow, and AWS services like Amazon S3 and Amazon Redshift.

EventBridge SaaS integrations make it easier for customers to receive events from over 30 different SaaS providers. Salesforce is a popular SaaS provider among AWS customers, so it has been one of the most anticipated event sources for EventBridge. Customers want to build rich applications that can react to events that track campaigns, contracts, opportunities, and order changes.

The ability to receive these events allows you to build workflows where you can start a variety of processes. For example, you could notify a broad range of subscribers about the changes, or enrich the data with information from another service. Or you could route the event to an order delivery system.

Previously, to connect Salesforce to your application, you had to write custom API polling code that routed events either directly to an application or to an event bus. With the Salesforce integration with EventBridge and Amazon AppFlow, you can build the integration in minutes directly through the AWS Management Console, with no code required.

The solution outlined in this blog post is structured as follows:

Architecture overview

Setting up the event source

To set up the event source:

  1. Open the Amazon AppFlow console, and create a new flow. Choose Create flow on the service landing page. Give your flow a unique name, and choose Next.
     Specify flow details
  2. In the Source name list, select Salesforce, and then choose Connect. Select the Salesforce environment you are using, and provide a unique connection name.
     Connect to Salesforce
  3. Choose Continue. When prompted, provide your Salesforce credentials. These are the credentials that are associated with the specific Salesforce environment selected in the previous step.
  4. Select Salesforce events from the list of available options for the flow, and choose the event that you want to route to EventBridge. This ensures that Amazon AppFlow can route specific events coming from Salesforce to an EventBridge event bus.
     Source details
  5. With the source set up, you can now specify the destination. In the Destination name list, select EventBridge.
     Destination name

To send Salesforce events to EventBridge, Amazon AppFlow creates a new partner event source that is associated with a partner event bus.

To create a partner event source:

  1. Select an existing partner event source, or create a new one from the list of partner event sources.
     Destination details
  2. When creating a new event source, you can optionally customize the name to make it easier to identify later.
     Generate partner event source
  3. Choose an Amazon S3 bucket for large events. For events larger than 256 KB, Amazon AppFlow sends a URL for the S3 object to the event bus instead of the event payload.
     Large event handling
  4. Define a flow trigger, which determines when the flow is started. Because we are tracking events, we want to react to them as they arrive. The default Run flow on event enables this scenario as changes occur in Salesforce.
     Flow trigger

With Amazon AppFlow, you can also configure data field mapping, validation rules, and filters. These enable you to enrich and modify event data before it is sent to the event bus.

Once you create the flow, you must activate the event source that you created. To complete this step:

  1. Open the EventBridge console.
  2. Associate the partner event source with an event bus by following the link in the Amazon AppFlow integration dialog box, or by navigating directly to the partner event sources view. The partner event source appears with a Pending state.
     Partner event source
  3. Select the event source and choose Associate with event bus.
  4. Confirm the settings and choose Associate.
     Associate with an event bus
  5. Return to the Amazon AppFlow console, and open the flow you were creating. Choose Activate flow.
     Activate flow

Your integration is now complete, and EventBridge can start receiving Salesforce events from the configured flow.

Routing Salesforce events to a Lambda function

The associated partner event bus receives all events of the configured type from the connected Salesforce accounts. Now your application can react to these events with the help of rules in EventBridge. Rules allow you to set conditions for event routing that determine what targets receive event payloads. You can learn more about this functionality in the EventBridge documentation.

To create a new rule:

  1. Go to the rules view in the EventBridge console, and choose Create rule.
     EventBridge Rules
  2. Provide a unique name and an optional description for your rule.
  3. Select the Event pattern option in the Define pattern section. With event pattern configuration, you can define the parts of the event payload that EventBridge inspects to determine where to route the event.
     Define pattern
     For this exercise, start by capturing every Salesforce event that goes through the partner event bus. The only events routed through this bus are from the partner event source. In this case, it is Amazon AppFlow connected to Salesforce.
  4. Set the event matching pattern to Pre-defined pattern by service, with the service provider being All Events. The default setting allows you to receive all events coming through the partner event bus.
     Event matching pattern
  5. Select the event bus that the rule should be associated with. Choose Custom or partner event bus, and select the event bus that you associated with the Amazon AppFlow event source. Every rule in EventBridge is associated with an event bus.
     Select event bus

When rules are triggered, the event can be routed to other AWS services. Additionally, every rule can have up to five different AWS targets. You can read more about available targets in the EventBridge documentation. For this blog post, we use an AWS Lambda function as a target for Salesforce events received from Amazon AppFlow.

To configure targets for your rule:

  1. From the list of targets, select Lambda function, and select an existing function. If you do not yet have a function available, you can create one in the AWS Lambda console.
     Select targets
  2. Choose Create. You have now completed the rule setup.

Now, Salesforce events that match the configured type are routed directly to a Lambda function in your account.

Testing the integration

To test the integration:

  1. Open the Lambda view in the AWS Management Console.
  2. Choose the function that is handling the events from EventBridge.
  3. In the Function code section, update the code to:
    exports.handler = async (event) => {
        console.log(event);
        const response = {
            statusCode: 200,
            body: JSON.stringify('Hello from Lambda!'),
        };
        return response;
    };
    

    Function code

  4. Choose Save.
  5. Open your Salesforce instance, and take an action that is associated with the event you configured earlier. For example, you could update a contract or create an order.
  6. Go back to your function in the AWS Management Console, and choose the Monitoring tab.
     Lambda function monitoring tab
  7. Scroll to the CloudWatch Logs Insights section.
     CloudWatch Logs Insights
  8. Choose the latest log stream, making sure that the timestamp approximately matches the time when you triggered the action in Salesforce.
  9. Observe the log events that contain the Salesforce event data.

You have completed your first Salesforce integration with EventBridge and Amazon AppFlow. You are now able to build decoupled and highly scalable applications that integrate with Salesforce.

Conclusion

Building decoupled and scalable cross-service applications is more relevant than ever with requirements for high availability, consistency, and reliability. This blog post demonstrates a solution that connects Salesforce to an event-driven application that uses EventBridge and Amazon AppFlow to route events. The application uses events from Salesforce as a starting point for a custom processing workflow in a Lambda function.

To learn more about EventBridge, visit the EventBridge documentation or EventBridge Learning Path.

To learn more about Amazon AppFlow, visit the Amazon AppFlow documentation.

Using serverless backends to iterate quickly on web apps – part 2

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/using-serverless-backends-to-iterate-quickly-on-web-apps-part-2/

This series is about building flexible solutions that can adapt as user requirements change. One of the challenges of building modern web applications is that requirements can change quickly. This is especially true for new applications that are finding their product-market fit. Many development teams start building a product with one set of requirements, and quickly find they must build a product with different features.

For both start-ups and enterprises, it’s often important to find a development methodology and architecture that allows flexibility. This is the surest way to keep up with feature requests in evolving products and innovate to delight your end-users. In this post, I show how to build sophisticated workflows using minimal custom code.

Part 1 introduces the Happy Path application that allows park visitors to share maps and photos with other users. In that post, I explain the functionality, how to deploy the application, and walk through the backend architecture.

The Happy Path application accepts photo uploads from users’ smartphones. The application architecture must support 100,000 monthly active users. These binary uploads are typically 3–9 MB in size and must be resized and optimized for efficient distribution.

Using a serverless approach, you can develop a robust low-code solution that can scale to handle millions of images. Additionally, the solution shown here is designed to handle complex changes that are introduced in subsequent versions of the software. The code and instructions for this application are available in the GitHub repo.

Architecture overview

After installing the backend in the previous post, the architecture looks like this:

In this design, the API, storage, and notification layers exist as one application, and the business logic layer is a separate application. These two applications are deployed using AWS Serverless Application Model (AWS SAM) templates. This architecture uses Amazon EventBridge to pass events between the two applications.

In the business logic layer:

  1. The workflow starts when events are received from EventBridge. Each time a new object is uploaded by an end-user, the PUT event in the Amazon S3 Upload bucket triggers this process.
  2. After the workflow is completed successfully, processed images are stored in the Distribution bucket. Related metadata for the object is also stored in the application’s Amazon DynamoDB table.

By separating the architecture into two independent applications, you can replace the business logic layer as needed. Provided that the workflow accepts incoming events, stores processed images in the S3 bucket, and writes metadata to the DynamoDB table, the workflow logic is interchangeable. Using this pattern, the workflow can be upgraded to handle new functionality.

Introducing AWS Step Functions for workflow management

One of the challenges in building distributed applications is coordinating components. These systems are composed of separate services, which makes orchestrating workflows more difficult than working with a single monolithic application. As business logic grows more complex, it can quickly become convoluted if you attempt to manage it in custom code. This is especially true if it handles retries and error handling logic, which can make it hard to test and maintain.

AWS Step Functions is designed to coordinate and manage these workflows in distributed serverless applications. To do this, you create state machine diagrams using Amazon States Language (ASL). Step Functions renders a visualization of your state machine, which makes it simpler to see the flow of data from one service to another.

Each state machine consists of a series of steps. Each step takes an input and produces an output. Using ASL, you define how this data progresses through the state machine. The flow from step to step is called a transition. All state machines transition from a Start state towards an End state.

The Step Functions service manages the state of individual executions. The service also supports versioning, which makes it easier to modify state machines in production systems. Executions continue to use the version of a state machine when they were started, so it’s possible to have active executions on multiple versions.

For developers using VS Code, the AWS Toolkit extension provides support for writing state machines using ASL. It also renders visualizations of those workflows. Combined with AWS Serverless Application Model (AWS SAM) templates, this provides a powerful way to deploy and maintain applications based on Step Functions. I refer to this IDE and AWS SAM in this walkthrough.

Version 1: Image resizing

The Happy Path application uses Step Functions to manage the image-processing part of the backend. The first version of this workflow resizes the uploaded image.

To see this workflow:

  1. In VS Code, open the workflows/statemachines folder in the Explorer panel.
  2. Choose the v1.asl.json file.
     v1 state machine
  3. Choose the Render graph option in the CodeLens. This opens the workflow visualization.
     CodeLens - Render graph

In this basic workflow, the state machine starts at the Resizer step, then progresses to the Publish step before ending:

  • In the top-level attributes in the definition, StartAt sets the Resizer step as the first action.
  • The Resizer step is defined as a task with an ARN of a Lambda function. The Next attribute determines that the Publish step is next.
  • In the Publish step, this task defines a Lambda function using an ARN reference. It sets the input payload as the entire JSON payload. This step is set as the End of the workflow (see the sketch after this list).
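Putting these attributes together, a minimal sketch of this definition looks like the following. The ${...} tokens are placeholders resolved by the AWS SAM template, as described later in this post (the Publish token name is an assumption):

{
  "StartAt": "Resizer",
  "States": {
    "Resizer": {
      "Type": "Task",
      "Resource": "${ResizerFunctionArn}",
      "Next": "Publish"
    },
    "Publish": {
      "Type": "Task",
      "Resource": "${PublishFunctionArn}",
      "InputPath": "$",
      "End": true
    }
  }
}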

Deploying the Step Functions workflow

To deploy the state machine:

  1. In the terminal window, change directory to the workflows/templates/v1 folder in the repo.
  2. Execute these commands to build and deploy the AWS SAM template:
    sam build
    sam deploy --guided
  3. The deploy process prompts you for several parameters. Enter happy-path-workflow-v1 as the Stack Name. The other values are the outputs from the backend deployment process, detailed in the repo’s README. Enter these to complete the deployment.
     SAM deployment output

Testing and inspecting the deployed workflow

Now that the workflow is deployed, you can perform an integration test directly from the frontend application.

To test the deployed v1 workflow:

  1. Open the frontend application at http://localhost:8080 in your browser.
  2. Select a park location, choose Show Details, and then choose Upload images.
  3. Select an image from the sample photo dataset.
  4. After a few seconds, you see a pop-up message confirming that the image has been added:
     Upload confirmation message
  5. Select the same park location again, and the information window now shows the uploaded image:
     Happy Path - park with image data

To see how the workflow processed this image:

  1. Navigate to the Step Functions console.
  2. Here you see the v1StateMachine with one execution in the Succeeded column.
     Successful execution view
  3. Choose the state machine to display more information about the start and end time.
     State machine detail
  4. Select the execution ID in the Executions panel to open details of this single instance of the workflow.

This view shows important information that’s useful for understanding and debugging an execution. Under Input, you see the event passed into Step Functions by EventBridge:

Event detail from EventBridge

This contains detail about the S3 object event, such as the bucket name and key, together with the placeId, which identifies the location on the map. Under Output, you see the final result from the state machine, which shows a successful StatusCode (200) and other metadata:

Event output from the state machine

Using AWS SAM to define and deploy Step Functions state machines

The AWS SAM template defines the state machine, the trigger for executions, and the permissions needed for Step Functions to execute. The AWS SAM resource for a Step Functions definition is AWS::Serverless::StateMachine.

Definition permissions in state machines

In this example (a template sketch follows this list):

  • DefinitionUri refers to an external ASL definition, instead of embedding the JSON in the AWS SAM template directly.
  • DefinitionSubstitutions allow you to use tokens in the ASL definition that refer to resources created in the AWS SAM template. For example, the token ${ResizerFunctionArn} refers to the ARN of the resizer Lambda function.
  • Events define how the state machine is invoked. Here it defines an EventBridge rule. If an event matches this source and detail-type, it triggers an execution.
  • Policies: the Step Functions service must have permission to invoke the services that perform tasks in the state machine. AWS SAM policy templates provide a convenient shorthand for common execution policies, such as invoking a Lambda function.
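As a sketch, these elements fit together in a template shaped like the following. The resource names and event pattern values are assumptions for illustration, not the repo’s exact template:

Resources:
  ProcessStateMachine:
    Type: AWS::Serverless::StateMachine
    Properties:
      DefinitionUri: ../../statemachines/v1.asl.json
      DefinitionSubstitutions:
        ResizerFunctionArn: !GetAtt ResizerFunction.Arn
        PublishFunctionArn: !GetAtt PublishFunction.Arn
      Events:
        UploadEvent:
          Type: EventBridgeRule
          Properties:
            Pattern:
              source:
                - custom.happyPath      # assumed event source name
              detail-type:
                - uploadComplete        # assumed detail-type
      Policies:
        - LambdaInvokePolicy:
            FunctionName: !Ref ResizerFunction
        - LambdaInvokePolicy:
            FunctionName: !Ref PublishFunction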

This workflow application is separate from the main backend template. As more functionality is added to the workflow, you deploy the subsequent AWS SAM templates in the same way.

Conclusion

Using AWS SAM, you can specify serverless resources, configure permissions, and define substitutions for the ASL template. You can deploy a standalone Step Functions-based application using the AWS SAM CLI, separately from other parts of your application. This makes it easier to decouple and maintain larger applications. You can visualize these workflows directly in the VS Code IDE in addition to the AWS Management Console.

In part 3, I show how to build progressively more complex workflows and how to deploy these in-place without affecting the other parts of the application.

To learn more about building serverless web applications, see the Ask Around Me series.

Using serverless backends to iterate quickly on web apps – part 1

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/using-serverless-backends-to-iterate-quickly-on-web-apps-part-1/

For many organizations, building applications is an iterative process where requirements change quickly. Traditional software architectures can be challenging to adapt to these changes. Often, early architectural decisions may limit the developers’ ability to deliver new features. Serverless architectural patterns are often much more adaptable, and can help developers keep pace with an evolving list of end-user requirements.

This blog series explores how to structure and build a serverless web app backend to enable the most flexibility for changing product requirements. It covers how to use serverless services in your architecture, and how to separate parts of the backend to make maintenance easier. I also show how you can use AWS Step Functions to encapsulate complex workflows and minimize the amount of custom code in your applications.

In this series:

  • Part 1: Deploy the application, test the upload process, and review the architecture.
  • Part 2: Understand how to use Step Functions, and deploy a custom workflow.
  • Part 3: Advanced workflows with custom branching and image moderation.

The code uses the AWS Serverless Application Model (AWS SAM), enabling you to deploy the application easily in your own AWS account. This walkthrough creates resources covered in the AWS Free Tier but you may incur cost for usage beyond development and testing.

To set up the example, visit the GitHub repo and follow the instructions in the README.md file.

Introducing the “Happy Path” web application

In this scenario, a startup creates a web application called Happy Path. This app is designed to help state parks and nonprofit organizations replace printed materials, such as flyers and maps, with user-generated content. It allows visitors to capture images of park notices and photos of hiking trails. They can share these with other users to reduce printed waste.

The frontend displays and captures images of different locations, and the backend processes this data according to a set of business rules. This web application is designed for smartphones so it’s used while visitors are at the locations. Here is the typical user flow:

Happy Path user interface

  1. When park visitors first navigate to the site’s URL, it shows their current location with parks highlighted in the vicinity.
  2. The visitor selects a park. It shows thumbnails of any maps, photos, and images already uploaded by other users.
  3. If the visitor is logged in, they can upload their own images directly from their smartphone.

The first production version of this application provides a simple way for users to upload photos. It does little more than provide an uploading and sharing process.

However, the developer team quickly realizes that they must make some improvements. The developers need a way to implement complex, changing workflows on the backend without refactoring the code that is running in production. The architecture must also scale for an expected 100,000 monthly active users.

First, they want to optimize the large uploaded images to improve the speed of downloads. Next, they must also determine the suitability of images to ensure that the app only shows appropriate photos. There is also a rapidly growing list of feature requirements from organizations using the app.

In this series, I show how the development team can design the app to provide this level of flexibility. This way, they can implement new features and even pivot the core application if needed.

Deploying the application

In the GitHub repo, there are detailed deployment instructions in the README. The repo contains separate directories for the frontend, backend, and workflows. You must deploy the backend first. Once you have completed the deployment, you can run the frontend code on your local machine.

To launch the frontend application:

  1. Change to the frontend directory.
  2. Run npm run serve to start the development server. After building the modules in the project, the terminal shows the local URL where the application is running:
    Vue build completed
  3. Open a web browser and navigate to http://localhost:8080 to see the application.
  4. Open the developer console in your browser (for Google Chrome, Mozilla Firefox and Microsoft Edge, press F12 on the keyboard). This displays the application in a responsive layout and shows console logging. This can help you understand the flow of execution in the application.

Happy Path browser developer console

Testing the application

Now that you have deployed the backend to your AWS account and are running the frontend locally, you can test the application.

To upload an image for a location:

  1. Choose Log In and sign into the application, creating a new account if necessary.
  2. Select a location on the map to open the information window.
    Select a location on the map
  3. Choose Show Details, then choose Upload Images.
    Uploading images in Happy Path
  4. In the file picker dialog, select any one of the images from the sample photos dataset.

At this stage, the image is now uploaded to the S3 Uploads bucket on the backend. To verify this:

  1. Navigate to the Amazon S3 console.
  2. Choose the application’s upload bucket, then choose the folder name to open its contents. This shows the uploaded image.
    S3 bucket contents
  3. Navigate to the Amazon DynamoDB console.
  4. Select the hp-application table, then select the Items tab.
    DynamoDB table contents

There are two records shown:

  • The place listing: this item contains details about the selected park, such as the name and address.
  • The file metadata: this stores information about who uploaded the file, the timestamp, and the state of the upload.

At this stage, you have successfully tested that the frontend can upload images to the backend.

Architecture overview

After deploying the application using the repo’s README instructions, the backend architecture looks like this:

Happy Path backend architecture

There are five distinct functional areas for the backend application:

  1. API layer: when users interact with one of the API endpoints, this is processed by the API layer. Each API route invokes a Lambda function to complete its task, storing and fetching data from the storage layer.
  2. Storage layer: information about user uploads is persisted durably here. The application uses Amazon S3 buckets to store the binary objects, and a DynamoDB table for associated metadata.
  3. Notification layer: when images are uploaded, the PUT event triggers a Lambda function. This publishes the event to the Amazon EventBridge default event bus.
  4. Business logic layer: the customized business logic is encapsulated in AWS Step Functions workflows.
  5. Content distribution: the processed images are served via an Amazon CloudFront distribution to reduce latency and optimize delivery cost.

For future requirements, you can implement increasingly complex customized logic entirely within the business logic layer. All new workflow features are implemented here, without needing to modify other parts of the application.

Conclusion

This series is about using serverless backends to allow you to iterate quickly on web application functionality.

In this post, I introduce the Happy Path example web application. I show the main features of the application, enabling end-users to upload maps and photos to the backend application. I walk through the deployment of the backend and frontend applications. Finally, you test with a sample image upload.

In part 2, you will deploy the image processing and workflow part of the application. This series explores progressively more complicated workflows, and how to manage their deployment. I will discuss some architectural choices that help to build in flexibility and scalability when designing backend applications.

To learn more about building serverless web applications, see the Ask Around Me series.

Using Amazon MSK as an event source for AWS Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/using-amazon-msk-as-an-event-source-for-aws-lambda/

Amazon Managed Streaming for Apache Kafka (Amazon MSK) is a fully managed, highly available service that uses Apache Kafka to process real-time streaming data. Many producers can send messages to Kafka, which can then be routed to and processed by multiple consumers. Lambda now supports Amazon MSK as an event source, so it can consume messages and integrate with downstream serverless workflows.

Apache Kafka is a distributed streaming platform that is similar to Amazon Kinesis. Amazon MSK simplifies the setup, scaling, and management of clusters running Kafka. It makes it easier to configure the application for multiple Availability Zones and to secure it with IAM. It’s fully compatible with Kafka and supports familiar community-built tools such as MirrorMaker, Apache Flink, and Prometheus.

In this blog post, I explain how to set up a test Amazon MSK cluster and configure key elements in the networking configuration. I also show how to create a Lambda function that is invoked by messages in Amazon MSK topics.

Overview

Using Amazon MSK as an event source operates in a similar way to using Amazon SQS or Amazon Kinesis. In all cases, the Lambda service internally polls for new records or messages from the event source, and then synchronously invokes the target Lambda function. Lambda reads the messages in batches and provides these to your function as an event payload.

Lambda is a consumer application for your Kafka topic. It processes records from one or more partitions and sends the payload to the target function. Lambda continues to process batches until there are no more messages in the topic.

The Lambda function’s event payload contains an array of records. Each array item contains details of the topic and Kafka partition identifier, together with a timestamp and base64 encoded message:

MSK Lambda event payload
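A payload resembles the following sketch, with illustrative values. Here the value attribute is the base64 encoding of “Message #1”:

{
  "eventSource": "aws:kafka",
  "eventSourceArn": "arn:aws:kafka:us-east-1:123456789012:cluster/my-cluster/12abcd12-1234-1234-1234-1234abcd1234-1",
  "records": {
    "mytopic-0": [
      {
        "topic": "mytopic",
        "partition": 0,
        "offset": 15,
        "timestamp": 1596480920837,
        "timestampType": "CREATE_TIME",
        "value": "TWVzc2FnZSAjMQ=="
      }
    ]
  }
}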

Network configuration overview

Amazon MSK is a highly available service, so it must be configured to run in a minimum of two Availability Zones in your preferred Region. To comply with security best practices, the brokers are usually configured in private subnets in each Availability Zone.

For Amazon MSK to invoke Lambda, you must ensure that there is a NAT Gateway running in the public subnet of each Availability Zone. It’s possible to route the traffic through a single NAT Gateway in one AZ for test and development workloads. For redundancy in production workloads, it’s recommended to have one NAT Gateway available in each Availability Zone.

Amazon MSK network architecture

The Lambda function target in the event source mapping does not need to be running in a VPC to receive messages from Amazon MSK. To learn more about configuring the private subnet route table to use a NAT Gateway, see this Premium Support response. For an example of how to configure the infrastructure with AWS CloudFormation, see this template.

Required Lambda function permissions

The Lambda function must have permission to describe VPCs and security groups, and to manage elastic network interfaces. Its execution role needs these permissions:

  • ec2:CreateNetworkInterface
  • ec2:DescribeNetworkInterfaces
  • ec2:DescribeVpcs
  • ec2:DeleteNetworkInterface
  • ec2:DescribeSubnets
  • ec2:DescribeSecurityGroups

To access the Amazon MSK data stream, the Lambda function also needs two Kafka permissions: kafka:DescribeCluster and kafka:GetBootstrapBrokers. The policy template AWSLambdaMSKExecutionRole includes these permissions.

In AWS SAM templates, you can configure a Lambda function with an Amazon MSK event source mapping and the necessary permissions. For example:

Resources:
  ProcessMSKfunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: code/
      Timeout: 3
      Handler: app.lambdaHandler
      Runtime: nodejs12.x
      Events:
        MSKEvent:
          Type: MSK
          Properties:
            StartingPosition: LATEST
            Stream: arn:aws:kafka:us-east-1:123456789012:cluster/my-cluster/12abcd12-1234-1234-1234-1234abcd1234-1
            Topics:
              - MyTopic
      VpcConfig:
        SecurityGroupIds:
          - sg-ab1234ab
        SubnetIds:
          - subnet-12345678
          - subnet-abcd1234
      Policies:
        - AWSLambdaMSKExecutionRole: {}

Setting up an Amazon MSK cluster

Before starting, configure a new VPC with public and private subnets in two Availability Zones using this AWS CloudFormation template. In this configuration, the private subnets are set up to use a NAT Gateway.

To create the Amazon MSK cluster:

  1. From the Amazon MSK console, choose Create Cluster.
  2. Select Create cluster with custom settings, apply the name “my-MSK-cluster”, and keep the recommended Apache Kafka version.

    Create MSK cluster

  3. In the Configuration panel, keep the “Use the MSK default configuration” option selected.

    MSK configuration dialog box

  4. In Networking, select the new VPC and choose “2” for Number of Availability Zones. From the dropdowns, select the two Availability Zones in the VPC, and choose the private subnets for each.

    MSK networking configuration dialog box

  5. In the Brokers panel, choose kafka.t3.small for Broker instance type, and enter “1” for Number of brokers per Availability Zone.

    Brokers dialog box

  6. For Storage, enter “1” for EBS storage volume per broker.
  7. Keep the existing defaults and choose Create cluster. It takes up to 15 minutes to create the cluster and the status is displayed in the Cluster Summary panel.

    Creating MSK cluster summary

Configuring the Lambda event source mapping

You can create the Lambda event source mapping using the AWS CLI or AWS SDK, which provide the CreateEventSourceMapping API. In this walkthrough, you use the AWS Management Console to create the event source mapping.

First, you must create an Amazon MSK topic:

  1. Launch an EC2 instance, selecting t2.micro as the instance size. Connect to the instance once it is available.
  2. Follow these instructions in the Amazon MSK documentation to create a topic. Use the name “mytopic” and a replication-factor of 2, since there are only two Availability Zones in this test.

Now create a Lambda function that uses the Amazon MSK cluster and topic as an event source:

  1. From the Lambda console, select Create function.
  2. Enter a function name, and select Node.js 12.x as the runtime.
  3. Select the Permissions tab, and select the role name in the Execution role panel to open the IAM console.
  4. Choose Attach policies.
     Attach policies to IAM role
  5. In the filter box, enter “MSK”, then check the AWSLambdaMSKExecutionRole policy name.

    AWSLambdaMSKExecutionRole role

  6. Choose Attach policy.

    IAM role summary

  7. Back in the Lambda function, select the Configuration tab. In the Designer panel, choose Add trigger.
  8. In the dropdown, select MSK. In the MSK cluster box, select the cluster you configured earlier. Enter “mytopic” for Topic name and choose “Latest” for Starting Position. Choose Add.

    Add trigger
    Note: it takes several minutes for the trigger status to update from Creating to Enabled.

  9. In the Function code panel, replace the contents of index.js with:
    exports.handler = async (event) => {
        // Iterate through keys
        for (let key in event.records) {
          console.log('Key: ', key)
          // Iterate through records
          event.records[key].map((record) => {
            console.log('Record: ', record)
            // Decode base64
            const msg = Buffer.from(record.value, 'base64').toString()
            console.log('Message:', msg)
          }) 
        }
    }
  10. Choose Save.

Testing the event source mapping

At this point, you have created a VPC with two private and public subnets and a NAT Gateway. You have placed a new Amazon MSK cluster in the two private subnets. You set up a target Lambda function in the same VPC and private subnets, with the necessary IAM permissions. Next, you publish messages to the Amazon MSK topic and see the resulting invocation in the Lambda function’s logs.

  1. From the EC2 instance you created earlier, follow these instructions to produce messages. In the Kafka console producer script running in the terminal, enter “Message #1”:

    Produce Kafka messages from terminal

  2. Back in the Lambda function, select the Monitoring tab and choose View logs in CloudWatch. Select the first log stream.

    CloudWatch log streams

  3. In the Log events panel, expand the entries to see the message sent from the Amazon MSK topic. Note that the value attribute containing the message is base64 encoded.

    MSK event log in Lambda

Conclusion

Amazon MSK provides a fully managed, highly available service that uses Kafka to process real-time streaming data. Now that Lambda supports Amazon MSK as an event source, you can invoke Lambda functions from messages in Kafka topics to integrate with your downstream serverless workflows.

In this post, I give an overview of how the integration compares with other event source mappings. I show how to create a test Amazon MSK cluster, configure the networking, and create the event source mapping with Lambda. I also show how to set up the Lambda function in the AWS Management Console, and refer to the equivalent AWS SAM syntax to simplify deployment.

To learn more about how to use this feature, read the documentation.

 

Understanding database options for your serverless web applications

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/understanding-database-options-for-your-serverless-web-applications/

Many web developers use relational databases to store and manage data in their web applications. As you migrate to a serverless approach for web development, there are also other options available. These can help improve the scale, performance, and cost-effectiveness of your workloads. In this blog post, I highlight use-cases for different serverless database services, and common patterns useful for web applications.

Using Amazon RDS with serverless web applications

You can access Amazon RDS directly from your AWS Lambda functions. The RDS database, such as Amazon Aurora, is configured within the customer VPC. The Lambda function must be configured with access to the same VPC:

Lambda connecting to RDS

There are special considerations for this design in busy serverless applications. It’s common for popular web applications to experience “spiky” usage, where traffic volumes shift rapidly and unpredictably. Serverless services such as AWS Lambda and Amazon API Gateway are designed to automatically scale to meet these traffic increases.

However, relational databases are connection-based, so they are intended to work with a few long-lived clients, such as web servers. By contrast, Lambda functions are ephemeral and short-lived, so their database connections are numerous and brief. If Lambda scales up to hundreds or thousands of instances, you may overwhelm downstream relational databases with connection requests. This is typically only an issue for moderately busy applications. If you are using a Lambda function for low-volume tasks, such as running daily SQL reports, you do not experience this behavior.

The Amazon RDS Proxy service is built to solve the high-volume use-case. It pools the connections between the Lambda service and the downstream RDS database. This means that a scaling Lambda function is able to reuse connections via the proxy. As a result, the relational database is not overwhelmed with connection requests from individual Lambda functions. This does not require code changes in many cases. You only need to replace the database endpoint with the proxy endpoint in your Lambda function.

Lambda to RDS Proxy to RDS diagram
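As a minimal sketch, a Node.js function using the mysql2 client only needs its host setting changed to point at the proxy. The environment variable names are assumptions:

const mysql = require('mysql2/promise')

exports.handler = async (event) => {
  // The host is the RDS Proxy endpoint instead of the database endpoint
  const connection = await mysql.createConnection({
    host: process.env.PROXY_ENDPOINT,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME
  })

  const [rows] = await connection.execute('SELECT id, name FROM users LIMIT 10')
  await connection.end()
  return rows
}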

As a result, if you need to use a relational database in a high-volume web application, you can use RDS Proxy with minimal changes required.

Using Amazon DynamoDB as a high-performance operational database

Amazon DynamoDB is a high-performance key-value and document database that operates with single-digit millisecond response times at any scale. This is a NoSQL database that is a natural fit for many serverless workloads, especially web applications. It can operate equally well for low and high usage workloads. Unlike relational databases, the performance of a well-architected DynamoDB table is not adversely affected by heavy usage or large amounts of data storage.

For web applications, DynamoDB tables are ideal for storing common user configuration and application data. When integrated with Amazon Cognito, you can restrict row-level access to the current user context. This makes it a frequent choice for multi-tenant web applications that host data for many users.

DynamoDB tables can be useful for lookups of key-based information, in addition to geo-spatial queries in many cases. DynamoDB is not connection-based, so this integration works even if a Lambda function scales up to hundreds or thousands of concurrent executions. You can query directly from Lambda with minimal code:

const AWS = require('aws-sdk')
AWS.config.region = process.env.AWS_REGION
const documentClient = new AWS.DynamoDB.DocumentClient()

exports.handler = async (event) => {
  // Construct params
  const params = {
    TableName: 'myDDBtable',
    Item: {
      partitionKey: 'user-123',
      sortKey: Date.now(),
      name: 'Alice',
      cartItems: 3
    }
  }

  // Store in DynamoDB
  const result = await documentClient.put(params).promise()
  return result
}

Using advanced patterns in DynamoDB, it’s possible to build equivalent features frequently found in relational schemas. For example, one-to-many tables, many-to-many tables, and ACID transactions can all be modeled in a single DynamoDB table.

Combining DynamoDB with RDS

While DynamoDB remains highly performant for high volumes of traffic, you need to understand data access patterns for your application before designing the schema. There are times where you need to perform ad hoc queries, or where downstream application users must use SQL-based tools to interact with databases.

In this case, combining both DynamoDB and RDS in your architecture can provide a resilient and flexible solution. For example, for a high-volume transactional web application, you can use DynamoDB to ingest data from your frontend application. For ad hoc SQL-based analytics, you could also use Amazon Aurora.

By using DynamoDB streams, you can process updates to a DynamoDB table using a Lambda function. In a simple case, this function can update tables in RDS, keeping the two databases synchronized. For example, when sales transactions are saved in DynamoDB, a Lambda function can post the sales information to transaction tables in Aurora.

DynamoDB to RDS architecture

Both the Lambda function and the RDS database operate within the customer’s VPC, while DynamoDB is outside the VPC. DynamoDB Streams can invoke Lambda functions configured to access the VPC. In this model, RDS users can then run ad hoc SQL queries without impacting operational data managed by DynamoDB.
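A stream processing function of this kind might look like the following sketch, assuming a MySQL-compatible Aurora database and illustrative table, column, and environment variable names:

const mysql = require('mysql2/promise')

exports.handler = async (event) => {
  const connection = await mysql.createConnection({
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME
  })

  for (const record of event.Records) {
    // Only mirror newly inserted items in this simple example
    if (record.eventName !== 'INSERT') continue

    // DynamoDB Streams delivers typed attributes, e.g. { S: 'abc' } or { N: '42' }
    const item = record.dynamodb.NewImage
    await connection.execute(
      'INSERT INTO transactions (transaction_id, amount) VALUES (?, ?)',
      [item.transactionId.S, Number(item.amount.N)]
    )
  }

  await connection.end()
}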

High-volume ETL processes between DynamoDB and RDS

For high-volume workloads capturing large numbers of transactions in DynamoDB, Lambda may still scale rapidly and exhaust the RDS connection pool. To process these flows, you may introduce Amazon Kinesis Data Firehose to help with data replication between DynamoDB and RDS.

ETL processing with DynamoDB and RDS

  1. New and updated items in DynamoDB are sent to a DynamoDB stream. The stream invokes a stream processing Lambda function, sending batches of records to Kinesis Data Firehose.
  2. Kinesis buffers incoming messages and performs data transformations using a Lambda function. It then writes the output to Amazon S3, buffering by size (1–128 MB) or interval (60–900 seconds).
  3. The Kinesis Data Firehose transformation uses a custom Lambda function for processing records as needed.
  4. Amazon S3 is a durable store for these batches of transformed records. As objects are written, S3 invokes a Lambda function.
  5. The Lambda function loads the objects from S3, then connects to RDS and imports the data.

This approach supports high transaction volumes, enabling table item transformation before loading into RDS. The RDS concurrent connection pool is optimized by upstream batching and buffering, which reduces the number of concurrent Lambda functions and RDS connections.
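The transformation function in step 3 must follow the Kinesis Data Firehose record transformation contract: it receives base64-encoded records and returns each one with a recordId, a result, and the transformed data. A sketch, with the transformation logic itself assumed:

exports.handler = async (event) => {
  const records = event.records.map((record) => {
    // Incoming data is base64 encoded
    const payload = JSON.parse(Buffer.from(record.data, 'base64').toString())

    // Apply any transformation here, such as flattening DynamoDB attribute types
    const transformed = JSON.stringify(payload) + '\n'

    return {
      recordId: record.recordId,
      result: 'Ok',
      data: Buffer.from(transformed).toString('base64')
    }
  })
  return { records }
}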

Conclusion

Web developers commonly use relational databases in building their applications. When migrating to serverless architectures, a web developer can continue to use databases like RDS, or take advantage of other options available. RDS Proxy enables developers to pool database connections and use connection-based databases with ephemeral functions.

DynamoDB provides high-performance, low-latency NoSQL support, which is ideal for many busy web applications with spiky traffic volumes. However, it’s also possible to use both services to take advantage of the throughput of DynamoDB, together with the flexibility of ad hoc SQL queries in RDS.

For extremely high traffic volumes, you can introduce Kinesis Data Firehose to batch and transform data between DynamoDB and RDS. In this case, you separate the operational database from the analytics database. This solution uses multiple serverless services to handle scaling automatically.

To learn more about AWS serverless database solutions for web developers, visit https://aws.amazon.com/products/databases/.

Building a serverless tokenization solution to mask sensitive data

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-a-serverless-tokenization-solution-to-mask-sensitive-data/

This post is courtesy of Anuj Gupta, Senior Solutions Architect, and Steven David, Senior Solutions Architect.

Customers tell us that security and compliance are top priorities regardless of industry or location. Government and industry regulations are regularly updated and companies must move quickly to remain compliant. Organizations must balance the need to generate value from data and to ensure data privacy. There are many situations where it is prudent to obfuscate data to reduce the risk of exposure, while also improving the ability to innovate.

This blog discusses data obfuscation and how it can be used to reduce the risk of unauthorized access. It can also simplify PCI DSS compliance by reducing the number of components to which this compliance applies.

Comparing tokenization and encryption

There is a difference between encryption and tokenization. Encryption is the process of using an algorithm to transform plaintext into ciphertext. An algorithm and an encryption key are required to decrypt the original plaintext.

Tokenization is the process of transforming a piece of data into a random string of characters called a token. It does not have direct meaningful value in relation to the original data. Tokens serve as a reference to the original data, but cannot be used to derive that data.

Unlike encryption, tokenization does not use a mathematical process to transform the sensitive information into the token. Instead, tokenization uses a database, often called a token vault, which stores the relationship between the sensitive value and the token. The real data in the vault is then secured, often via encryption. The token value can be used in various applications as a substitute for the original data.

For example, when processing a recurring credit card payment, the token is submitted to the vault. The vault uses it as an index to fetch the original data for use in the authorization process. Recently, tokens are also being used to secure other types of sensitive or personally identifiable information, such as social security numbers (SSNs), telephone numbers, and email addresses.
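The relationship is easy to see in a few lines of code. This in-memory sketch is illustrative only; a real token vault persists the encrypted values durably, as the DynamoDB-based solution in this post does (crypto.randomUUID requires Node.js 14.17 or later):

const crypto = require('crypto')
const vault = new Map()

function tokenize(sensitiveValue) {
  const token = crypto.randomUUID()   // random token with no mathematical link to the input
  vault.set(token, sensitiveValue)    // the vault stores the relationship
  return token
}

function detokenize(token) {
  return vault.get(token)             // only the vault can resolve a token
}

const token = tokenize('4111-1111-1111-1111')
console.log(token)                    // safe to store in application databases
console.log(detokenize(token))        // the original value, fetched from the vault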

Overview

In this blog, we show how to design a secure, reliable, scalable, and cost-optimized tokenization solution. It can be integrated with applications to generate tokens, store ciphertext in an encrypted token vault, and exchange tokens for the original text.

In an example use-case, a data analyst needs access to a customer database. The database includes the customer’s name, SSN, credit card, order history, and preferences. Some of the customer information qualifies as sensitive data. To enforce the required information security policy, you must enforce methods such as column level access, role-based control, column level encryption, and protection from unauthorized access.

Providing access to the customer database increases the complexity of managing fine-grained access policies. Tokenization replaces the sensitive data with random unique tokens, which are stored in an application database. This lowers the complexity and the cost of managing access, while helping with data protection.

Walkthrough

This serverless application uses Amazon API Gateway, AWS Lambda, Amazon Cognito, Amazon DynamoDB, and AWS Key Management Service (AWS KMS).

Serverless architecture diagram

The client authenticates with Amazon Cognito and receives an authorization token. This token is used to validate calls to the Customer Order Lambda function. The function calls the tokenization layer, providing sensitive information in the request. This layer includes the logic to generate unique random tokens and store encrypted text in a cipher database.

Lambda calls KMS to obtain an encryption key. It then uses the DynamoDB client-side encryption library to encrypt the original text and store the ciphertext in the cipher database. The Lambda function retrieves the generated token in the response from the tokenization layer. This token is then stored in the application database for future reference.

AWS KMS makes it easy to create and manage cryptographic keys. It provides logs of all key usage to help you meet regulatory and compliance needs.

One of the most important decisions when using the DynamoDB Encryption Client is selecting a cryptographic materials provider (CMP). The CMP determines how encryption and signing keys are generated, whether new key materials are generated for each item or are reused. It also sets the encryption and signing algorithms that are used. To identify a CMP for your workload, refer to this documentation.

The current solution selects the Direct KMS Provider as the CMP. This cryptographic materials provider returns a unique encryption key and signing key for every table item. To do this, it calls KMS every time you encrypt or decrypt an item.

The KMS process

  • To generate encryption materials, the Direct KMS Provider asks AWS KMS to generate a unique data key for each item using a customer master key (CMK) that you specify. It derives encryption and signing keys for the item from the plaintext copy of the data key, and then returns the encryption and signing keys, along with the encrypted data key, which is stored in the material description attribute of the item.
  • The item encryptor uses the encryption and signing keys and removes them from memory as soon as possible. Only the encrypted copy of the data key from which they were derived is saved in the encrypted item.
  • To generate decryption materials, the Direct KMS Provider asks AWS KMS to decrypt the encrypted data key. Then, it derives verification and signing keys from the plaintext data key, and returns them to the item encryptor.

The item encryptor verifies the item and, if verification succeeds, decrypts the encrypted values. Finally, it removes the keys from memory as soon as possible.

For enhanced security, the example creates the Lambda function inside a VPC with a security group attached to allow incoming HTTPS traffic from only private IPs. The Lambda function connects to DynamoDB and KMS via VPC endpoints instead of going through the public internet. It connects to DynamoDB using a service gateway endpoint and to KMS using an interface endpoint providing a highly available and secure connection.

Additionally, VPC endpoints can use endpoint policies to enforce allowing only permitted operations for KMS and DynamoDB over this connection. To further control the management of encryption keys, the KMS master key has a resource-based policy. It allows the Lambda layer to generate data keys for encryption and decryption, and restricts any administrative activity on the master key.

To deploy this solution, follow the instructions in the aws-serverless-tokenization GitHub repo. The AWS Serverless Application Model (AWS SAM) template allows you to quickly deploy this solution into your AWS account.

Understanding the code

The solution uses the tokenizer package, deployed as a Lambda layer. It uses Python UUID4 to generate random values. You can optionally update the logic in hash_gen.py to use your own tokenization technique. For example, you could generate tokens with the same length as the original text, preserving its format in the generated token.

The ddb_encrypt_item.py file contains the logic for encrypting DynamoDB items and uses a DynamoDB client-side encryption library. To learn more about how this library works, refer to this documentation.

There are three methods used in the application logic:

  • Encrypt_item encrypts the plaintext using the KMS customer managed key. In AttributeActions, you can specify attributes that should not be encrypted, for example to exclude certain keys in the JSON input. The method also requires a partition key to index the encrypted text in the DynamoDB table. The hash key is used as the name of the partition key in the DynamoDB table, and its value is the UUID token generated in the previous step.
import boto3
from dynamodb_encryption_sdk.encrypted.table import EncryptedTable
from dynamodb_encryption_sdk.identifiers import CryptoAction
from dynamodb_encryption_sdk.material_providers.aws_kms import AwsKmsCryptographicMaterialsProvider
from dynamodb_encryption_sdk.structures import AttributeActions

def encrypt_item(plaintext_item, table_name):
    table = boto3.resource('dynamodb').Table(table_name)

    # Direct KMS Provider, using the CMK ID configured elsewhere in the application
    aws_kms_cmp = AwsKmsCryptographicMaterialsProvider(key_id=aws_cmk_id)

    # Encrypt and sign all attributes except the Account_Id partition key
    actions = AttributeActions(
        default_action=CryptoAction.ENCRYPT_AND_SIGN,
        attribute_actions={'Account_Id': CryptoAction.DO_NOTHING}
    )

    encrypted_table = EncryptedTable(
        table=table,
        materials_provider=aws_kms_cmp,
        attribute_actions=actions
    )
    response = encrypted_table.put_item(Item=plaintext_item)
    return response
  • get_decrypted_item returns the plaintext for a given partition key, such as the UUID token, decrypting it with the KMS customer managed key (see the sketch after this list).
  • get_item returns the obfuscated text, that is, the ciphertext stored in the DynamoDB table for the provided partition key.

The dynamodb-encryption-sdk requires cryptography libraries as a dependency. Both of these libraries are platform-dependent and must be installed for a specific operating system. Since Lambda functions use Amazon Linux, you must install these libraries for Amazon Linux even if you are developing your application code on a different operating system. To do this, use the get_AMI_packages_cryptography.sh script to download the Docker image, install the dependencies within the image, and export the files used by the Lambda layer.

If you are processing DynamoDB items at a high frequency and large scale, you might exceed the AWS KMS requests-per-second limit, causing processing delays. You can use tools such as JMeter to test the required throughput based on the expected traffic for this serverless application. If you need to exceed a quota, you can request a quota increase in Service Quotas. Use the Service Quotas console or the RequestServiceQuotaIncrease operation. For details, see Requesting a quota increase in the Service Quotas User Guide. If Service Quotas for AWS KMS are not available in the AWS Region, create a case in the AWS Support Center.

After following this walkthrough, delete the resources by following step 7 of the README file to avoid incurring future charges.

Conclusion

This post shows how to use AWS serverless services to design a secure, reliable, and cost-optimized tokenization solution. You can integrate it with applications to protect sensitive information and manage access using strict controls, with less operational overhead.

Replacing web server functionality with serverless services

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/replacing-web-server-functionality-with-serverless-services/

Web servers bring together many useful services in traditional web development. Developers use servers like Apache and NGINX for many common tasks. Linux, Apache, MySQL, and PHP formed the LAMP stack to power a large percentage of the world’s websites. Other variants, like the MEAN stack (MongoDB, Express.js, AngularJS, Node.js), have also been popular.

In the migration to serverless, it’s important to understand where this functionality moves to. There are significant benefits in taking a serverless approach to developing web apps but there are differences in where developers spend their efforts. This blog post provides a guide to serverless development for traditional web developers to help with this transition.

Comparing a “Hello World” example

To run a “Hello World” example in a highly available configuration using a traditional web server approach, you need more than one server in more than one Availability Zone. Each server contains an operating system, runtime, and web server software, together with your code. You might build an Amazon Machine Image (AMI) to help with creating more servers.

Scalable "Hello World"

With a web framework like Express, the following code starts a server and listens on port 3000 for connections. For requests at the root URL, it responds with the “Hello World” greeting:

Hello World output

There is a reasonable amount of configuration and infrastructure needed to make this example work. Even enabling TLS requires you to obtain, install, and renew a certificate, for example by using a service like Let’s Encrypt. Additionally, you must patch and maintain the underlying EC2 instance to keep this service running once it’s deployed.

The serverless equivalent is simpler. I can define the Hello World example using an AWS Serverless Application Model (SAM) template:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: hello-world

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function 
    Properties:
      Handler: index.lambdaHandler
      Runtime: nodejs12.x
      InlineCode: | 
        exports.lambdaHandler = async (event, context) => {
          return { 'statusCode': 200, 'body': 'Hello World!' }
        }
      Events:
        HelloWorld:
          Type: Api 
          Properties:
            Path: /hello
            Method: get

The SAM deployment creates an AWS Lambda function with an Amazon API Gateway endpoint:

Serverless Hello World

This is a highly available, scalable endpoint. The developer does not need to define VPCs, subnets or security groups, or install and manage a web server stack. A considerable part of the underlying infrastructure is managed for you, letting you focus primarily on the business logic of the application.

Additionally, using the default Service Quotas, this endpoint can handle millions of requests a day. To handle an equivalent load with a traditional web server, you may need EC2 Auto Scaling. Lambda manages the scaling automatically, and also scales down as needed, without any intervention from the developer.

Implementing authentication in serverless web apps

Many traditional web servers use web frameworks like Python Flask or Express and implement session-based authentication. This allows the server to authenticate users, often with a user name and password validation scheme. The server is responsible for storing user lists, and hashing and salting passwords securely. There are also user administration flows required for tasks such as creating accounts and resetting passwords.

While you can implement all of this within a Lambda function, there is another approach that can be more secure and reduce boilerplate code. You can implement authorization and authentication in serverless development by using open standard JSON Web Tokens (JWTs). API Gateway then authenticates the user at the service level using Amazon Cognito, a Lambda authorizer, or a JWT authorizer with HTTP APIs.

You use an identity provider such as Amazon Cognito or Auth0 to generate the user token. You pass the token in the API request in the Authorization header. The API Gateway service then validates the token before the request is sent downstream to your application.

While you can use JWTs in server-based web applications, there are benefits to separating out this functionality using serverless services:

  • Failed requests do not put any additional load on your infrastructure. API Gateway also does not charge for requests to authenticated routes when authorization headers are missing.
  • You eliminate custom code for handling and processing logins since this happens before reaching your business logic.
  • You can add support for social logins, multi-factor authentication (MFA) and OAuth without changing your code.

Additionally, as your application grows to more functions or across Regions, you are not relying on a single authentication point in your architecture. Each microservice validates a JWT independently and can verify the authorization claims that can be securely embedded in the token’s payload.

For web developers, one of the most common questions is how to handle the user interface elements related to authorization within the application. Auth0 offers a number of customizable components that you can integrate into any JavaScript application. Amplify Framework provides the Authenticator component that provides a wrapper for common flows for signing in users.

Amplify signin UI

Using either approach eliminates boilerplate user management code and helps provide a consistent and professional login experience for your users. To learn more about using Auth0’s integrated sign-in, see the Ask Around Me application code repo.

Generating HTML, CSS and front-end templates

Many web frameworks use templating languages like Jinja or Mustache to help developers inject dynamic content into static HTML and CSS layouts. Typically, the web server creates the entire page layout for each request. You can use the same approach with Lambda if preferred, having the function build the HTML response for the browser.

However, single-page application (SPA) frameworks such as React, Vue.js, and AngularJS offer a different paradigm that works well for serverless development. The build process for SPA applications generates static HTML, CSS, and JavaScript files. When downloaded to the browser, they use JavaScript to fetch dynamic data and interact with the backend application:

SPA backend architecture

  1. The user visits the web application’s URL. The browser downloads the application’s HTML, CSS, and JavaScript files from Amazon S3 via Amazon CloudFront.
  2. The browser executes the application’s JavaScript.
  3. The application calls API Gateway endpoints to fetch and store dynamic data.

This architecture offers a number of benefits. First, serving the application’s assets is offloaded from your infrastructure to a global CDN. This reduces latency and increases scalability. Second, the HTML page building and rendering is managed entirely by the client browser, improving responsiveness and reducing network traffic to the application backend.

Uploading, processing, and saving binary files

Many web applications handle large binary files, such as user uploads. Processing these on web servers can be compute and network-intensive. You must also manage the amount of temporary space in use on the web server, and scale the fleet of servers appropriately during busy periods.

You can handle uploads serverlessly by using Amazon S3 directly. In this process, the client requests a presigned URL and uploads the binary data directly to that endpoint. This reduces the load on your infrastructure and increases scalability. The code is also simple to adapt for non-serverless applications that use S3. Watch this video to see how you can build an S3 uploader solution.

For processing binaries, you can use the S3 PutObject event to trigger serverless workflows. For example, you can process images, translate documents, or transcribe audio. For complex business workflows, the event can trigger AWS Step Functions workflows. This is a highly scalable way to bring automation and custom processing to binary uploads in your web applications.

When processing binary data, Lambda provides a 512 MB temporary file system (located at /tmp). You use this space for intermediate processing, not permanent storage, since the storage is ephemeral. For example, this can be useful for unzipping files or creating PDFs.

When saving files permanently in serverless applications, S3 buckets are the most common storage choice. S3 is highly durable and highly available, provides robust encryption options, and is a scalable, cost-effective solution for many workloads.

Storing application state

In many traditional applications, the web server stores temporary, context-specific application state, and a relational database stores data permanently.

Serverless tools have a range of different options available for managing state. Lambda functions are ephemeral and stateless, and there is no guarantee of reusing the same instance of a Lambda function multiple times.

For functions that need a durable store of user data that can be rehydrated between invocations, Amazon DynamoDB tables provide a low-latency, cost-effective solution. For example, this is ideal for recalling shopping cart contents or user profiles.

For more complex state, such as tracking long-lived business workflows, the best practice is to use AWS Step Functions. You can model workflows in JSON that use parallel tasks, require human interaction, or take up to one year to complete.

Conclusion

In this post, I show how traditional web server applications compare with their serverless counterparts. I show how the infrastructure is managed for you in serverless, and how serverless developers’ code is primarily focused on business logic.

I look at how common web server tasks, such as authentication and authorization, are managed by scalable services. In single-page applications, front-end layouts are generated on the client-side, and the distribution is managed by a global CDN.

To learn more about how to build web applications with serverless, see the Ask Around Me application repo.

Building deep learning inference with AWS Lambda and Amazon EFS

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-deep-learning-inference-with-aws-lambda-and-amazon-efs/

This post is courtesy of Giuseppe Angelo Porcelli, Principal ML Specialist SA, and Diego Natali, Solutions Architect.

Amazon EFS for AWS Lambda makes it easier to build serverless applications that require persistent file storage or access to large amounts of reference data. Previously, applications had to download data from an object store or database into local ephemeral storage, limited to 512 MB, for processing. This requires more code, and causes slower startup and slower data processing. Customers also faced challenges when loading large code packages and models for ML inference.

Recently, AWS announced Amazon EFS support for AWS Lambda. It enables customers to easily share data across function invocations. It also allows you to read large reference data files, and write function output to a persistent and shared data store. Customers can now use Lambda to build data-intensive applications, and load larger libraries and models. They can process larger amounts of data in a highly distributed manner, and share data across functions, containers, and instances.

In this blog post, we show how you can use EFS to store deep learning (DL) framework libraries and models to load from Lambda to execute inferences. We provide a code example on executing serverless inferences with TensorFlow 2.

Using EFS and Lambda for deep learning inference requires two steps:

  1. Storing the deep learning libraries and model on EFS
  2. Creating a Lambda function for inference, which loads the libraries and model from the EFS file system

In the next sections, we share some best practices to implement these steps, and then discuss a full, working example.

Prerequisites

This post assumes experience with Lambda, EFS, plus general knowledge of Python programming, DL, and DL frameworks. To help you get started, read the blog post and documentation.

1. Storing the deep learning libraries and model on Amazon EFS

To populate EFS with DL framework Python libraries and the DL model, there are different options. You can use EC2 instances, third-party tools like lambci, or AWS CodeBuild. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages for deployment.

This blog post uses an AWS CodeBuild project, configured as follows:

  • The build environment is a Docker container replicating the Lambda runtime environment. To make sure that the packages work in Lambda, it uses the lambci/lambda build container images on Docker Hub.
  • The EFS file system is mounted to the CodeBuild environment.
  • Build commands are used to install the DL framework and download the model to specific paths of the file system.

After the build completes, the EFS file system contains the Python libraries and the model at the specified paths. The file system is then attached to the Lambda function, which loads the libraries at runtime and executes the inference.

For this example, these are the CodeBuild commands to install the TensorFlow 2 framework and an SSD (Single Shot MultiBox Detector) pre-trained object detection model from TensorFlow Hub:

'echo "Downloading and copying model..."',
'mkdir -p $CODEBUILD_EFS1/lambda/model',
'curl https://storage.googleapis.com/tfhub-modules/google/openimages_v4/ssd/mobilenet_v2/1.tar.gz --output /tmp/1.tar.gz',
'tar zxf /tmp/1.tar.gz -C $CODEBUILD_EFS1/lambda/model',

'echo "Installing virtual environment..."',
'mkdir -p $CODEBUILD_EFS1/lambda',
'python3 -m venv $CODEBUILD_EFS1/lambda/tensorflow',
'echo "Installing Tensorflow..."',
'source $CODEBUILD_EFS1/lambda/tensorflow/bin/activate && pip3 install ' +
              (props.installPackages ? props.installPackages : "tensorflow"),

'echo "Changing folder permissions..."',
'chown -R 1000:1000 $CODEBUILD_EFS1/lambda/'

Considerations

  • The approach described can also work for other ML/DL frameworks
  • The EFS file system can be attached to multiple Lambda functions. This means it can share the DL framework libraries with multiple inference functions (up to 25,000 connections for each file system).
  • There are alternatives to using EFS for model storage. If the model fits in the Lambda deployment package, you could optimize the first invocation since the function doesn’t need to download the model. You can also use the function’s initializer to load the model, since the first mount of EFS only takes a few hundred milliseconds.

2. Creating a Lambda function for inference

After attaching the EFS file system, you may structure the Lambda code as follows:

Lambda code structure

The code outside the handler method first adds the local mount path to the Python path. It then imports the frameworks, and loads the model into memory. Executing those operations outside of the function’s handler ensures that those objects remain initialized and reused in subsequent invocations of the same Lambda function instance. The code inside the handler runs the inference flow by reading inputs, executing the actual inference, and returning the results to the caller.

For hosting the TensorFlow 2 object detection model in the example, this is the function code:

import sys
import os

# Setting library paths.
efs_path = "/mnt/python"
python_pkg_path = os.path.join(efs_path, "tensorflow/lib/python3.8/site-packages")
sys.path.append(python_pkg_path)

import json
import string
import time
import io
import requests

# Importing TensorFlow
import tensorflow as tf

# Loading model
model_path = os.path.join(efs_path, 'model/')
loaded_model = tf.saved_model.load(model_path)
detector = loaded_model.signatures['default']

def lambda_handler(event, context):
    r = requests.get(event['url'])
    img = tf.image.decode_jpeg(r.content, channels=3)

    # Executing inference.
    converted_img  = tf.image.convert_image_dtype(img, tf.float32)[tf.newaxis, ...]
    start_time = time.time()
    result = detector(converted_img)
    end_time = time.time()

    obj = {
        'detection_boxes' : result['detection_boxes'].numpy().tolist(),
        'detection_scores': result['detection_scores'].numpy().tolist(),
        'detection_class_entities': [el.decode('UTF-8') for el in result['detection_class_entities'].numpy()] 
    }    

    return {
        'statusCode': 200,
        'body': json.dumps(obj)
    }

When invoked, the response is similar to the following:

{
    "statusCode": 200,
    "body": "{
    \"detection_boxes\": This field contains the relative position of the bounding boxes,
    \"detection_class_entities\": This field returns the class labels,
    \"detection_scores\": This field returns the detection confidences
    }"
}

Running the example

This working example is provided to set up and run ML/AI inference on Lambda using EFS. To run it, you must have the AWS CDK installed. Execute the following commands:

# clone repository
$ git clone https://github.com/aws-samples/lambda-efs-deep-learning-inference.git
$ cd lambda-efs-deep-learning-inference

# Install the CDK and bootstrap the target account (if this was never done before)
$ npm install -g aws-cdk
$ cdk bootstrap aws://{account_id}/{region}

# Install packages for the project, build and deploy
$ cd cdk/
$ npm install
$ npm run build
$ cdk deploy

After deployment, note the output:

Outputs:
LambdaEFSMLDemo.LambdaFunctionName = LambdaEFSMLDemo-LambdaEFSMLExecuteInference17332C2-0546aa45dfXXXXXX

It takes a few minutes for AWS CodeBuild to deploy the libraries and framework to EFS. To test the Lambda function, run this command, replacing the function name:

$ aws lambda invoke \
    --function-name LambdaEFSMLDemo-LambdaEFSMLExecuteInference17332C2-0546aa45dfXXXXXX \
    --region us-east-1 \
    --cli-binary-format raw-in-base64-out \
    --payload '{"url": "https://images.pexels.com/photos/310983/pexels-photo-310983.jpeg?auto=compress&cs=tinysrgb&dpr=2&h=650&w=940"}' \
    /tmp/return.json    

This is the output:

{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}

Here you can check the inference’s result:
$ tail /tmp/return.json

Inference result

The following image shows the bounding boxes created from the inference output.

Image with bounding boxes

To generate this image with the bounding boxes, use the Jupyter notebook from the repository. We reduce the number of bounding boxes to the most relevant classes:

  • Bicycle: 91%
  • Wheel: 48%
  • Person: 45%
  • Wheel: 44%
  • Man: 40%
  • Bicycle wheel: 37%
  • Bicycle wheel: 30%

To clean up the deployment, run:

$ cdk destroy

Performance considerations

When planning for ML inference, you must keep three main aspects in mind: the type of compute resources required for inference, the model size and memory footprint, and function initialization and cold start latency.

Lambda is best suited for CPU-based inferencing, which meets the needs for most ML/DL inference use cases. Lambda’s memory can be set between 128 MB and 3008 MB. This means that large models (for example, FasterRCNN models) that may require more memory or dedicated GPUs are not a good fit.

It’s important to understand how Lambda invocations affect performance. The first request to a function instance is called a “cold start”. This is when the execution environment is provisioned, the code is downloaded, and the initializer is executed to load the libraries and the model. In this example, it takes about 40 seconds to load the full TensorFlow 2 libraries from EFS, and another 8 seconds to load the model into memory.

Subsequent calls to the same Lambda function instance don’t incur cold start latency if the request is handled by an existing execution environment. Customers who want to reduce this one-time cold start can use Provisioned Concurrency. This feature provides customers with greater control over performance of their serverless applications at any scale.

The EFS mount operation takes only a few hundred milliseconds and happens only once, during function provisioning. EFS supports up to 25,000 connections, so it is ideal for functions that scale up. We recommend using EFS provisioned throughput with Provisioned Concurrency for better performance. To learn more, read the documentation about Amazon EFS performance and monitoring Amazon EFS.

Conclusion

This post shows how you can use EFS for Lambda to deploy large DL libraries and models into a function for synchronous invocations. The same approach can be applied to asynchronous invocations. For example, you could perform object detection on images stored in Amazon S3, or run inference on streaming data in Amazon Kinesis and Amazon DynamoDB.

EFS for Lambda enables many new use cases. To learn more about how to use EFS for Lambda, see the AWS News Blog post and read the documentation.

Modeling business logic flows in serverless applications

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/modeling-business-logic-flows-in-serverless-applications/

Serverless applications can help you develop more agile applications that scale automatically. Using serverless services in your architecture reduces the amount of boilerplate code and helps offload complex tasks to specialized services. As a result, a well-designed serverless application can be modified easily to deliver new feature requests, while maintaining high availability for existing production users.

Serverless applications often combine serverless services, using AWS Lambda to integrate those services and transform data. This blog post shows how to model business logic in serverless architectures, combining the services available. I also discuss how to handle state across invocations, and manage complex workflows.

Developing architecture around evolving features

Many modern applications evolve quickly to reflect the needs of their users. Developers iterate on features, releasing new versions daily or weekly, and the feature set is guided by feedback. This user-centric approach can make early architectural plans hard to develop, since developers have limited knowledge about future requirements.

Well-designed serverless applications are inherently flexible, making it faster to add new functionality as user requirements change. This is because individual parts of the workflow are specialized and loosely coupled. This can help support iterative development and also help reduce the amount of rewritten code when the design changes.

In the following example, I show how a serverless architecture can evolve due to changing requirements. In this scenario, a custom serverless application processes customer reviews in an e-commerce website.

Storing customer reviews

The serverless application saves customer reviews submitted from a webpage. The initial design of the architecture only commits the review to a database.

Version 1 architecture

The user submits the review on the webpage, which calls Amazon API Gateway. This API invokes a Lambda function to store the review in an Amazon DynamoDB table.

Version 1: Translating reviews to a common language

After gathering feedback, there is a new requirement. Incoming reviews are currently submitted in multiple languages, and must be translated into a single language to help customer support teams.

Version 2 architecture

To solve this problem, the developer adds a new Lambda function invoked from a DynamoDB stream, using Amazon Translate to process the translation work. It stores the result back in DynamoDB.

Version 2: Responding to negative reviews

Customer support needs assistance in identifying negative reviews more quickly. In this version, the application analyzes the sentiment of the review. Comments that score negatively are emailed to a manager to take further action.

Responding to negative reviews

The developer adds a second Lambda function, invoked by the same DynamoDB stream. This sends the comment text to Amazon Comprehend to run a sentiment analysis. If the resulting score is under a threshold, the function uses Amazon SNS to email a manager.

Version 3: Enabling users to upload photos with the reviews

In addition to text comments, users need to upload photos. To support this feature, there is a new API Gateway endpoint that invokes a Lambda function for an Amazon S3 presigned URL. The browser uploads the media object directly to the S3 bucket.

Uploading photos

By the latest version, the application is considerably different from the original, but the developer has been able to keep pace with new customer requests. As users create more requirements in each iteration, this serverless architecture evolves.

By using services effectively, the developer is able to assemble serverless building blocks to complete tasks quickly. By creating specialized units of custom code, the developer adds new features without needing to change existing code from previous versions. This makes it faster to test and deploy new functionality.

Managing local state in a stateless environment

In serverless applications, Lambda functions are ephemeral and should be designed to operate without storing state. This means that each time a function is invoked, it has no knowledge of previous interactions. Each request is handled independently.

Yet there are many tasks in an application that rely on existing state. For example, in an e-commerce application, a user may add items to a shopping cart. In a traditional web server environment, a load balancer may use sticky sessions. This setting ensures that a single user’s requests are routed to the same server. The state of the cart is kept in memory on that web server, making the process stateful.

Since Lambda functions can scale up automatically and can run in multiple Availability Zones, each invocation may happen in a different execution environment. There is no guarantee that subsequent invocations are served by the same instance of a function. With a stateless approach, your application is unaffected when this happens, and there is no conceptual equivalent of sticky sessions.

For single Lambda functions that need to retain state, the function should fetch the state each time the function is invoked. In the e-commerce example, if a Lambda function adds an item to a shopping cart, it can fetch relevant state information from a DynamoDB table before adding the item.

Local function state with DynamoDB

Due to the low latency performance of DynamoDB, this step adds minimal overhead but means that the application has significantly better scaling capability. This can also make it easier to unit test and debug, because there is less variability caused by locally caching between invocations.

Orchestrating stateful workflows in a serverless environment

Beyond managing the state of shopping carts, it’s common for business applications to model complex workflows reflecting a custom business process. These workflows may persist for long periods of time, require human intervention, or call out to third-party systems.

For example, when a customer places an order in an e-commerce application, this triggers a workflow. There are many different potential steps and paths involved across different systems. From processing a payment to scheduling a delivery, managing the lifecycle of an order is a complex task. This gets even more difficult when you consider error states such as a failed payment or a rescheduled delivery.

It’s possible to write custom code to orchestrate these flows, using a database to save the state of each process. Your code must manage retries for downstream failures and undo previous steps if a process fails. While it’s possible to engineer this for your solution, it can result in significant amounts of convoluted code that quickly becomes fragile and difficult to modify as processes change.

AWS Step Functions is designed to manage workflows, and is usually a better approach. You can model the workflow in JSON, replacing complex custom logic in your code. Step Functions can manage individual executions in a workflow that last up to one year. It can also handle different versions of workflows, making it easier to change processes without impacting in-flight executions.

Moving workflow logic into Step Functions can help create more resilient applications. You can explicitly define parallel processes, error handling, and retry logic. The service scales to handle almost any volume of individual executions, letting you focus on the custom logic specific to your application.

Workflows across different systems and beyond AWS

It’s common to build applications that need to interact with other existing applications within your organization. Increasingly, customers also use software as a service (SaaS) providers for significant parts of their systems. In traditional approaches, interoperability with these systems can be complicated.

Amazon EventBridge is a serverless event bus service designed to make it easier to communicate across systems. It integrates with third-party SaaS providers like Auth0, PagerDuty and Datadog. It also receives and routes events from AWS services and your own custom applications.

In a traditional architecture, you typically use a polling mechanism or develop a webhook to retrieve data from a third-party service. This API is publicly accessible and must be secured. You must also scale the API if the third-party service increases the volume of data sent to your endpoint.

Using an EventBridge integration, this data arrives directly in your AWS account as an event. You configure rules in EventBridge to define targets where events are routed to. The data from the third party does not use the public internet and the delivery, security, and error handling is managed by the service.

Conclusion

Serverless applications are distributed applications, using a combination of managed services and custom code. Understanding how to assemble and use these services can help improve agility and your team’s ability to deliver software features quickly.

Writing less code and creating more specialized functions also helps you layer additional features and functionality into subsequent versions. It can improve testing, reduce rewritten code, and reduce deployment complexity.

While serverless applications should be stateless at the function level, you can use DynamoDB to manage state locally. This offers low-latency, durable state management while enabling your functions to scale.

AWS provides a number of services for building and managing workflows. EventBridge enables you to connect data between different applications and from SaaS providers. Step Functions makes it faster to develop and orchestrate complex workflows that keep your application resilient.

Using these services effectively can reduce custom code and help you deliver solutions more quickly. To see this working in a sample serverless project, see the Ask Around Me blog series.