Tag Archives: AWS Lambda

Best practices for organizing larger serverless applications

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/best-practices-for-organizing-larger-serverless-applications/

Well-designed serverless applications are decoupled, stateless, and use minimal code. As projects grow, a goal for development managers is to maintain the simplicity of design and low-code implementation. This blog post provides recommendations for designing and managing code repositories in larger serverless projects, and best practices for deploying releases of production systems.

Organizing your code repositories

Many serverless applications begin as monolithic applications. This can occur either because a simple application has grown more complex over time, or because developers are following existing development practices. A monolithic application is represented by a single AWS Lambda function performing multiple tasks, and a mono-repo is a single repository containing the entire application logic.

Monoliths work well for the simplest serverless applications that perform single-purpose functions. These are small applications such as cron jobs, data processing tasks, and some asynchronous processes. As those applications evolve into workflows or develop new features, it becomes important to refactor the code into smaller services.

Using frameworks such as the AWS Serverless Application Model (SAM) or the Serverless Framework can make it easier to group common pieces of functionality into smaller services. Each of these can have a separate code repository. For SAM, the template.yaml file contains all the resources and function definitions needed for an application. Consequently, breaking an application into microservices with separate templates is a simple way to split repos and resource groups.

Separate templates for microservices

In the smallest unit of a serverless application, it’s also possible to create one repository per function. If these functions are independent and do not share other AWS resources, this may be appropriate. Helper functions and simple event processing code are examples of candidates for this kind of repo structure.

In most cases, it makes sense to create repos around groups of functions and resources that define a microservice. In an ecommerce example, “Payment processing” is a microservice with multiple smaller related functions that share common resources.

As with any software, the repo design depends upon the use-case and the structure of your development teams. One large repo makes it harder for developer teams to work on different features, and to test and deploy independently. Having too many repos can create duplicate code and make it difficult to share resources across repos. Finding the balance for your project is an important step in designing your application architecture.

Using AWS services instead of code libraries

AWS services are important building blocks for your serverless applications. These can frequently provide greater scale, performance, and reliability than bundled code packages with similar functionality.

For example, many web applications that are migrated to Lambda use web frameworks like Flask (for Python) or Express (for Node.js). Both packages support routing and separate user contexts that are well suited if the application is running on a web server. Using these packages in Lambda functions results in architectures like this:

Web servers in Lambda functions

In this case, Amazon API Gateway proxies all requests to the Lambda function to handle routing. As the application develops more routes, the Lambda function grows in size and deployments of new versions replace the entire function. It becomes harder for multiple developers to work on the same project in this context.

This approach is generally unnecessary, and it’s often better to take advantage of the native routing functionality available in API Gateway. In many cases, the web framework is not needed in the Lambda function at all, and only increases the size of the deployment package. API Gateway is also capable of validating parameters, reducing the need for checking parameters with custom code. It can also provide protection against unauthorized access, and a range of other features more suited to be handled at the service level. When using API Gateway this way, the new architecture looks like this:

Using API Gateway for routing

Additionally, the Lambda functions consist of less code and fewer package dependencies. This makes testing easier and reduces the need to maintain code library versions. Different developers in a team can work on separate routing functions independently, and it becomes simpler to reuse code in future projects. You can configure routes in API Gateway in the application’s SAM template:

Resources:
  GetProducts:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: getProducts/
      Handler: app.handler
      Runtime: nodejs12.x
      Events:
        GetProductsAPI:
          Type: Api 
          Properties:
            Path: /getProducts
            Method: get

Similarly, you should usually avoid performing workflow orchestrations within Lambda functions. These are sections of code that call out to other services and functions, and perform subsequent actions based on successful execution or failure.

Lambda functions with embedded workflow orchestrations

These workflows quickly become fragile and difficult to modify for new requirements. They can also cause idling in the Lambda function, meaning that the function is waiting for return values from external sources, increasing the cost of execution.
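
To make this anti-pattern concrete, the following is a minimal Python sketch of an embedded orchestration; the function names and payload fields are hypothetical, not part of any sample in this post:

import json
import boto3

lambda_client = boto3.client('lambda')

def handler(event, context):
    # Embedded orchestration: call a downstream function and wait for its result
    response = lambda_client.invoke(
        FunctionName='ProcessPayment',         # hypothetical function name
        InvocationType='RequestResponse',      # synchronous call - this function idles while it waits
        Payload=json.dumps({'orderId': event['orderId']})
    )
    result = json.loads(response['Payload'].read())

    # Branch on the outcome and call the next step, again waiting for it to finish
    next_function = 'FulfilOrder' if result.get('status') == 'APPROVED' else 'NotifyFailure'
    lambda_client.invoke(
        FunctionName=next_function,
        InvocationType='RequestResponse',
        Payload=json.dumps(result)
    )
    return result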

Often, a better approach is to use AWS Step Functions, which can represent complex workflows as JSON definitions in the application’s SAM template. This service reduces the amount of custom code required, and enables long-lived workflows that minimize idling in Lambda functions. It also manages in-flight executions as workflows are upgraded. The example above, rearchitected with a Step Functions workflow, looks like this:

Using Step Functions for orchestration
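
With Step Functions owning the workflow, the remaining Lambda code only starts an execution and returns, so it no longer idles while waiting on other services. A minimal sketch with boto3, using a placeholder state machine ARN:

import json
import boto3

sfn = boto3.client('stepfunctions')

def handler(event, context):
    # Hand the workflow to Step Functions and return immediately - no idling in Lambda
    execution = sfn.start_execution(
        stateMachineArn='arn:aws:states:us-east-1:123456789012:stateMachine:OrderWorkflow',  # placeholder ARN
        input=json.dumps({'orderId': event.get('orderId')})
    )
    return {'executionArn': execution['executionArn']}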

Using multiple AWS accounts for development teams

There are many ways to deploy serverless applications to production. As applications grow and become more important to your business, development managers generally want to improve the robustness of the deployment process. You have a number of options within AWS for managing the development and deployment of serverless applications.

First, it is highly recommended to use more than one AWS account. Using AWS Organizations, you can centrally manage the billing, compliance, and security of these accounts. You can attach policies to groups of accounts to avoid custom scripts and manual processes. One simple approach is to provide each developer with an AWS account, and then use separate accounts for a beta deployment stage and production:

Multiple AWS accounts in a deployment pipeline

The developer accounts can contain copies of production resources and provide the developer with admin-level permissions to these resources. Each developer has their own set of limits for the account, so their usage does not impact your production environment. Individual developers can deploy CloudFormation stacks and SAM templates into these accounts with minimal risk to production assets.

This approach allows developers to test Lambda functions locally on their development machines against live cloud resources in their individual accounts. It can help create a robust unit testing process, and developers can then push code to a repository like AWS CodeCommit when ready.

By integrating with AWS Secrets Manager, you can store different sets of secrets in each environment and eliminate any need for credentials stored in code. As code is promoted from developer account through to the beta and production accounts, the correct set of credentials is automatically used. You do not need to share environment-level credentials with individual developers.
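
For example, a function can resolve the current environment’s database credentials at runtime by secret name, so the same code runs unchanged in every account. A minimal boto3 sketch, assuming a hypothetical secret name:

import json
import boto3

secrets_client = boto3.client('secretsmanager')

def get_db_credentials():
    # The same secret name exists in every account; each account stores its own values
    response = secrets_client.get_secret_value(SecretId='my-app/db-credentials')  # assumed secret name
    return json.loads(response['SecretString'])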

It’s also possible to implement a CI/CD process to start build pipelines when code is deployed. To deploy a sample application using a multi-account deployment flow, follow this serverless CI/CD tutorial.

Managing feature releases in serverless applications

As you implement CI/CD pipelines for your production serverless applications, it is best practice to favor safe deployments over entire application upgrades. Unlike traditional software deployments, serverless applications are a combination of custom code in Lambda functions and AWS service configurations.

A feature release may consist of a version change in a Lambda function. It may have a different endpoint in API Gateway, or use a new resource such as a DynamoDB table. Access to the deployed feature may be controlled via user configuration and feature toggles, depending upon the application. AWS SAM has AWS CodeDeploy built-in, which allows you to configure canary deployments in the YAML configuration:

Resources:
 GetProducts:
   Type: AWS::Serverless::Function
   Properties:
     CodeUri: getProducts/
     Handler: app.handler
     Runtime: nodejs12.x

     AutoPublishAlias: live

     DeploymentPreference:
       Type: Canary10Percent10Minutes 
       Alarms:
         # A list of alarms that you want to monitor
         - !Ref AliasErrorMetricGreaterThanZeroAlarm
         - !Ref LatestVersionErrorMetricGreaterThanZeroAlarm
       Hooks:
         # Validation Lambda functions run before/after traffic shifting
         PreTraffic: !Ref PreTrafficLambdaFunction
         PostTraffic: !Ref PostTrafficLambdaFunction

CodeDeploy automatically creates aliases pointing to the old and new versions of a function. The canary deployment enables you to gradually shift traffic from the old to the new alias, as you become confident that the new version is working as expected, or to roll back the update if needed. You can also set PreTraffic and PostTraffic hooks to invoke Lambda functions before and after traffic shifting.
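
The PreTraffic and PostTraffic hooks referenced above are themselves Lambda functions that run checks and report the outcome back to CodeDeploy. A minimal sketch of a validation hook in Python; the test itself is a placeholder:

import boto3

codedeploy = boto3.client('codedeploy')

def handler(event, context):
    # CodeDeploy passes these identifiers to the hook so it can report a result
    deployment_id = event['DeploymentId']
    execution_id = event['LifecycleEventHookExecutionId']

    # Placeholder validation - for example, invoke the new version and check the response
    tests_passed = True

    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=deployment_id,
        lifecycleEventHookExecutionId=execution_id,
        status='Succeeded' if tests_passed else 'Failed'
    )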

Conclusion

As any software application grows in size, it’s important for development managers to organize code repositories and manage releases. There are established patterns in serverless to help manage larger applications. Generally, it’s best to avoid monolithic functions and mono-repos, and you should scope repositories to either the microservice or function level.

Well-designed serverless applications use custom code in Lambda functions to connect with managed services. It’s important to identify libraries and packages that can be replaced with services to minimize the deployment size and simplify the code base. This is especially true in applications that have been migrated from server-based environments.

Using AWS Organizations, you manage groups of accounts to enable your developers to have their own AWS accounts for development. This enables engineers to clone production assets and test against the AWS Cloud when writing and debugging code. You can use a CI/CD pipeline to push code through a beta environment to production, while safeguarding secrets using Secrets Manager. You can also use CodeDeploy to manage canary deployments easily.

To learn more about deploying Lambda functions with SAM and CodeDeploy, follow the steps in this tutorial.

Deploying a serverless application using AWS CDK

Post Syndicated from Georges Leschener original https://aws.amazon.com/blogs/devops/deploying-a-serverless-application-using-aws-cdk/

There are multiple ways to deploy API endpoints, such as this example, which uses an application running on Amazon EC2 to demonstrate how to integrate Amazon ElastiCache with Amazon DocumentDB (with MongoDB compatibility). While that approach achieves strong performance and reliability through elasticity and the ability to scale the number of EC2 instances up or down to accommodate load, it still carries operational overhead: you have to manage the EC2 instances yourself. One way to address this overhead, and the related costs, is to transform the application into a serverless architecture.

The example in this blog post uses an application with a similar use case, built on a serverless architecture that showcases some of the tools customers use when moving from lift-and-shift to building cloud-native applications. It uses Amazon API Gateway to provide the REST API endpoint, connected to an AWS Lambda function that provides the business logic to read from and write to an Amazon Aurora Serverless database. It also showcases the deployment of most of the infrastructure with the AWS Cloud Development Kit (CDK). By moving your applications to a cloud-native architecture like the example in this blog post, you can realize a number of benefits, including:

  • Fast, clean deployment of your application, achieving a faster time to market
  • Reduced operational costs through serverless and managed services

Architecture Diagram

By the end of this blog post, you have an AWS Cloud9 environment containing a CDK project that deploys an API Gateway endpoint and a Lambda function. This Lambda function uses a secret stored in AWS Secrets Manager to read from and write to your Aurora Serverless database through the Data API, as shown in the following diagram.

 

Architecture diagram for deploying a serverless application using AWS CDK

The architecture diagram above shows the resources to be deployed in your AWS account.

Throughout the blog post, you create the following resources:

  1. Deploy an Amazon Aurora Serverless database cluster
  2. Secure the cluster credentials in AWS Secrets Manager
  3. Create and populate your database in the AWS Console
  4. Deploy an AWS Cloud9 instance used as a development environment
  5. Initialize and configure an AWS Cloud Development Kit project including the definition of your Amazon API Gateway endpoint and AWS Lambda function
  6. Deploy an AWS CloudFormation template through the AWS Cloud Development Kit

Prerequisites

In order to deploy the CDK application, there are a few prerequisites that need to be met:

  1. Create an AWS account or use an existing account.
  2. Install Postman for testing purposes

Amazon Aurora serverless cluster creation

To begin, navigate to the AWS console to create a new Amazon RDS database.

  1. Select Create Database from the Amazon RDS service.
  2. Select Standard Create under Choose a database creation method.
  3. Select Serverless under Database features.
  4. Select Amazon Aurora as the engine type under Engine options.
  5. Enter db-blog for your DB Cluster Identifier.
  6. Expand the Additional Connectivity section and select the Data API option. This functionality enables you to access Aurora Serverless with web services-based applications. It also allows you to use the query editor feature for Aurora Serverless in order to run SQL queries against your database instance.
  7. Leave the default selection for everything else and choose Create Database.

Your database instance is created in a single availability zone (AZ), but an Aurora Serverless database cluster has a capability known as automatic multi-AZ failover, which enables Aurora to recreate the database instance in a different AZ should the current database instance or the AZ become unavailable. The storage volume for the cluster is spread across multiple AZs, since Aurora separates computation capacity and storage. This allows for data to remain available even if the database instance or the associated AZ is affected by an outage.

Securing database credentials with AWS Secrets Manager

After creating the database instance, the next step is to store your secrets for your database in AWS Secrets Manager.

  • Navigate to AWS Secrets Manager, and select Store a New Secret.
  • Leave the default selection (Credentials for RDS database) for the secret type. Enter your database username and password and then select the radio button for the database you created in the previous step (in this example, db-blog), as shown in the following screenshot.

database search in aws secrets manager

  •  Choose Next.
  • Enter a name and optionally a description. For the name, make sure to add the prefix rds-db-credentials/ as shown in the following screenshot.

AWS Secrets Manager Store a new secret window

  • Choose Next and leave the default selection.
  • Review your settings on the last page and choose Store to have your secrets created and stored in AWS Secrets Manager, which you can now use to connect to your database.

Creating and populating your Amazon Aurora Serverless database

After creating the DB cluster, create the database; create and populate your tables; and finally, test a connection to ensure that you can query your database.

  • Navigate to the Amazon RDS service from the AWS console, and select your db-blog database cluster.
  • Select Query under Actions to open the Connect to database window, as shown in the screenshot below. Enter your database connection details. You can copy your Secrets Manager ARN from the Secrets Manager service and paste it into the corresponding field in the database connection window.

Amazon RDS connect to database window

  • To create the database, run the following SQL query from the Query editor shown in the screenshot below:
CREATE DATABASE recordstore;

 

Amazon RDS Query editor

  • Before you can run the following commands, make sure you are using the recordstore database you just created by running the command:
USE recordstore;
  • Create a records table using the following command:
CREATE TABLE IF NOT EXISTS records (recordid INT PRIMARY KEY, title VARCHAR(255) NOT NULL, release_date DATE);
  • Create a singers table using the following command:
CREATE TABLE IF NOT EXISTS singers (id INT PRIMARY KEY, name VARCHAR(255) NOT NULL, nationality VARCHAR(255) NOT NULL, recordid INT NOT NULL, FOREIGN KEY (recordid) REFERENCES records (recordid) ON UPDATE RESTRICT ON DELETE CASCADE);
  • Add a record to your records table and a singer to your singers table.
INSERT INTO records(recordid,title,release_date) VALUES(001,'Liberian Girl','2012-05-03');
INSERT INTO singers(id,name,nationality,recordid) VALUES(100,'Michael Jackson','American',001);

If you have the AWS CLI set up on your computer, you can connect to your database and retrieve records.

To test it, use the rds-data execute-statement API within the AWS CLI to connect to your database via the data API web service and query the singers table, as shown below:

aws rds-data execute-statement --secret-arn "arn:aws:secretsmanager:REGION:xxxxxxxxxxx:secret:rds-db-credentials/xxxxxxxxxxxxxxx" --resource-arn "arn:aws:rds:us-east-1:xxxxxxxxxx:cluster:db-blog" --database recordstore --sql "select * from singers" --output json

You should see the following result:

    "numberOfRecordsUpdated": 0,
    "records": [
        [
            {
                "longValue": 100
            },
            {
                "stringValue": "Michael Jackson"
            },
            {
                "stringValue": "American"
            },
            {
                "longValue": 1
            }
        ]
    ]
}
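
If you prefer to call the Data API from code rather than the CLI, the equivalent boto3 call is only a few lines. The following is a minimal sketch that reuses the same placeholder ARNs:

import boto3

rds_data = boto3.client('rds-data')

# Query the singers table through the Data API, using the cluster and secret ARNs
response = rds_data.execute_statement(
    resourceArn='arn:aws:rds:us-east-1:xxxxxxxxxx:cluster:db-blog',
    secretArn='arn:aws:secretsmanager:REGION:xxxxxxxxxxx:secret:rds-db-credentials/xxxxxxxxxxxxxxx',
    database='recordstore',
    sql='select * from singers'
)
print(response['records'])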

Creating a Cloud9 instance

To create a Cloud9 instance:

  1. Navigate to the Cloud9 console and select Create Environment.
  2. Name your environment AuroraServerlessBlog.
  3. Keep the default values under the Environment Settings.

Once your instance is launched, you see the screen shown in the following screenshot:

AWS Cloud9

 

You can now install the CDK in your environment. Run the following command inside your bash terminal on the blue section at the bottom of your screen:

npm install -g aws-cdk

For the next section of this example, you mostly work on the command line of your Cloud9 terminal and on your file explorer.

Creating the CDK deployment

The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework to model and provision your cloud application resources using familiar programming languages. If you would like to familiarize yourself with it, the CDK Workshop is a great place to start.

First, create a working directory called RecordsApp and initialize a CDK project from a template.

Run the following commands:

mkdir RecordsApp
cd RecordsApp
cdk init app --language typescript
mkdir resources
npm install @aws-cdk/aws-apigateway @aws-cdk/aws-lambda @aws-cdk/aws-iam

Now your instance should look like the example shown in the following screenshot:

AWS Cloud9 shell

 

You are mainly working in two directories:

  • resources
  • lib

Your initial set up is ready, and you can move into creating specific services and deploying them to your account.

Creating AWS resources using the CDK

Follow these steps to create AWS resources using the CDK:

  1. Under the /lib folder, create a new file called records_service.ts.
  2. Inside your new file, paste the following code, with these changes:
    • Replace dbARN with the ARN of your Aurora Serverless DB cluster from the previous steps.
    • Replace dbSecretARN with the ARN of your Secrets Manager secret from the previous steps.

import core = require("@aws-cdk/core");
import apigateway = require("@aws-cdk/aws-apigateway");
import lambda = require("@aws-cdk/aws-lambda");
import iam = require("@aws-cdk/aws-iam");

//REPLACE THIS
const dbARN = "arn:aws:rds:XXXX:XXXX:cluster:aurora-serverless-blog";
//REPLACE THIS
const dbSecretARN = "arn:aws:secretsmanager:XXXXX:XXXXX:secret:rds-db-credentials/XXXXX";

export class RecordsService extends core.Construct {
  constructor(scope: core.Construct, id: string) {
    super(scope, id);

    const lambdaRole = new iam.Role(this, 'AuroraServerlessBlogLambdaRole', {
      assumedBy: new iam.ServicePrincipal('lambda.amazonaws.com'),
      managedPolicies: [
            iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonRDSDataFullAccess'),
            iam.ManagedPolicy.fromAwsManagedPolicyName('service-role/AWSLambdaBasicExecutionRole')
        ]
    });

    const handler = new lambda.Function(this, "RecordsHandler", {
     role: lambdaRole,
     runtime: lambda.Runtime.NODEJS_12_X, // Node.js 12.x runtime, so the handler can use async/await
     code: lambda.Code.asset("resources"),
     handler: "records.main",
     environment: {
       TABLE: dbARN,
       TABLESECRET: dbSecretARN,
       DATABASE: "recordstore"
     }
   });

    const api = new apigateway.RestApi(this, "records-api", {
      restApiName: "Records Service",
      description: "This service serves records."
   });

    const getRecordsIntegration = new apigateway.LambdaIntegration(handler, {
      requestTemplates: { "application/json": '{ "statusCode": 200 }' }
    });

    api.root.addMethod("GET", getRecordsIntegration); // GET /

    const record = api.root.addResource("{id}");
    const postRecordIntegration = new apigateway.LambdaIntegration(handler);
    const getRecordIntegration = new apigateway.LambdaIntegration(handler);

    record.addMethod("POST", postRecordIntegration); // POST /{id}
    record.addMethod("GET", getRecordIntegration); // GET/{id}
  }
}

This snippet of code will instruct the AWS CDK to create the following resources:

  • IAM role: AuroraServerlessBlogLambdaRole containing the following managed policies:
    • AmazonRDSDataFullAccess
    • service-role/AWSLambdaBasicExecutionRole
  • Lambda function: RecordsHandler, which uses the Node.js 12.x runtime and has three environment variables
  • API Gateway: Records Service, which has the following characteristics:
    • GET Method
      • GET /
    • { id } Resource
      • GET method
        • GET /{id}
      • POST method
        • POST /{id}

Now that you have a service, you need to add it to your stack under the /lib directory.

  1. Open the records_app-stack.ts file.
  2. Replace the contents of this file with the following:
import cdk = require('@aws-cdk/core');
import records_service = require('../lib/records_service');

export class RecordsAppStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    new records_service.RecordsService(this, 'Records');
  }
}
  3. Create the Lambda code that is invoked from the API Gateway endpoint. Under the /resources directory, create a file called records.js and paste the following code into this file:
const AWS = require('aws-sdk');
var rdsdataservice = new AWS.RDSDataService();

exports.main = async function(event, context) {
  try {
    var method = event.httpMethod;
    var recordName = event.path.startsWith('/') ? event.path.substring(1) : event.path;
// Defining parameters for rdsdataservice
    var params = {
      resourceArn: process.env.TABLE,
      secretArn: process.env.TABLESECRET,
      database: process.env.DATABASE,
   }
   if (method === "GET") {
      if (event.path === "/") {
       //Here is where we are defining the SQL query that will be run at the DATA API
       params['sql'] = 'select * from records';
       const data = await rdsdataservice.executeStatement(params).promise();
       var body = {
           records: data
       };
       return {
         statusCode: 200,
         headers: {},
         body: JSON.stringify(body)
       };
     }
     else if (recordName) {
       params['sql'] = `SELECT singers.id, singers.name, singers.nationality, records.title FROM singers INNER JOIN records on records.recordid = singers.recordid WHERE records.title LIKE '${recordName}%';`
       const data = await rdsdataservice.executeStatement(params).promise();
       var body = {
           singer: data
       };
       return {
         statusCode: 200,
         headers: {},
         body: JSON.stringify(body)
       };
     }
   }
   else if (method === "POST") {
     var payload = JSON.parse(event.body);
     if (!payload) {
       return {
         statusCode: 400,
         headers: {},
         body: "The body is missing"
       };
     }

     //Generating random IDs
     var recordId = uuidv4();
     var singerId = uuidv4();

     //Parsing the payload from body
     var recordTitle = `${payload.recordTitle}`;
     var recordReleaseDate = `${payload.recordReleaseDate}`;
     var singerName = `${payload.singerName}`;
     var singerNationality = `${payload.singerNationality}`;

      //Making 2 calls to the data API to insert the new record and singer
      params['sql'] = `INSERT INTO records(recordid,title,release_date) VALUES(${recordId},"${recordTitle}","${recordReleaseDate}");`;
      const recordsWrite = await rdsdataservice.executeStatement(params).promise();
      params['sql'] = `INSERT INTO singers(recordid,id,name,nationality) VALUES(${recordId},${singerId},"${singerName}","${singerNationality}");`;
      const singersWrite = await rdsdataservice.executeStatement(params).promise();

      return {
        statusCode: 200,
        headers: {},
        body: JSON.stringify("Your record has been saved")
      };

    }
    // We got something besides a GET, POST, or DELETE
    return {
      statusCode: 400,
      headers: {},
      body: "We only accept GET, POST, and DELETE, not " + method
    };
  } catch(error) {
    var body = error.stack || JSON.stringify(error, null, 2);
    return {
      statusCode: 400,
      headers: {},
      body: body
    }
  }
}
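// Helper that builds a short pseudo-random numeric string (not a full UUID) to use as a row ID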
function uuidv4() {
  return 'xxxx'.replace(/[xy]/g, function(c) {
    var r = Math.random() * 16 | 0, v = c == 'x' ? r : (r & 0x3 | 0x8);
    return v;
  });
}

Take a look at what this Lambda function is doing. You have two functions inside of your Lambda function. The first is the exported handler, which is defined as an asynchronous function. The second is a helper that generates short random numeric identifiers to use as IDs for your database records. In your handler function, you handle the following actions based on the event you get from API Gateway:

  • Method GET with empty path /:
    • This calls the data API executeStatement method with the following SQL query:
SELECT * from records
  • Method GET with a record name in the path /{recordName}:
    • This calls the data API executeStatement method with the following SQL query:
SELECT singers.id, singers.name, singers.nationality, records.title FROM singers INNER JOIN records on records.recordid = singers.recordid WHERE records.title LIKE '${recordName}%';
  • Method POST with a payload in the body:
    • This makes two calls to the data API executeStatement with the following SQL queries:
INSERT INTO records(recordid,title,release_date) VALUES(${recordId},"${recordTitle}","${recordReleaseDate}");
INSERT INTO singers(recordid,id,name,nationality) VALUES(${recordId},${singerId},"${singerName}","${singerNationality}");

Now you have all the pieces you need to deploy your endpoint and Lambda function by running the following commands:

npm run build
cdk synth
cdk bootstrap
cdk deploy

If you change the Lambda code or add additional AWS resources to your CDK deployment, you can redeploy the application by running all four commands in a single line:

npm run build; cdk synth; cdk bootstrap; cdk deploy

Testing with Postman

Once it’s done, you can test it using Postman:

GET = ‘RecordName’ in the path

  • example:
    • ENDPOINT/RecordName

POST = Payload in the body

  • example:
{
   "recordTitle" : "BlogTest",
   "recordReleaseDate" : "2020-01-01",
   "singerName" : "BlogSinger",
   "singerNationality" : "AWS"
}

Clean up

To clean up the resources created by the CDK, run the following command in your Cloud9 instance:

cdk destroy

To clean up the resources created manually, run the following commands:

aws rds delete-db-cluster --db-cluster-identifier db-blog --skip-final-snapshot
aws secretsmanager delete-secret --secret-id XXXXX --recovery-window-in-days 7

Conclusion

This blog post demonstrated how to transform an application running on Amazon EC2 from a previous blog post into a serverless architecture by leveraging services such as Amazon API Gateway, Lambda, AWS Cloud9, the AWS CDK, and Aurora Serverless. The benefit of a serverless architecture is that it removes the overhead of managing servers and helps reduce costs, as you only pay for the time in which your code executes.

This example used a record-store application written in Node.js that allows users to find their favorite singer’s record titles, as well as the dates when they were released. This example could be expanded, for instance, by adding a payment gateway and a shopping cart to allow users to shop and pay for their favorite records. You could then incorporate some machine learning into the application to predict user choice based on previous visits, purchases, or information provided through registration profiles.

 


 

About the Authors

Luis Lopez Soria is an AI/ML specialist solutions architect working with the AWS machine learning team. He works with AWS customers to help them with the adoption of Machine Learning on a large scale. He enjoys doing sports in addition to traveling around the world, exploring new foods and cultures.

 

 

 

Georges Leschener is a Partner Solutions Architect in the Global System Integrator (GSI) team at Amazon Web Services. He works with our GSI partners to help migrate customers’ workloads to the AWS Cloud, and to design and architect innovative solutions on AWS by applying AWS recommended best practices.

 

Using dynamic Amazon S3 event handling with Amazon EventBridge

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/using-dynamic-amazon-s3-event-handling-with-amazon-eventbridge/

A common pattern in serverless applications is to invoke a Lambda function in response to an event from Amazon S3. For example, you could use this pattern for automating document translation, transcribing audio files, or staging data imports. You can configure this integration in many places, including the AWS Management Console, the AWS CLI, or the AWS Serverless Application Model (SAM).

If you need to fan out notifications, or hold messages in queue, you are also able to route S3 events to Amazon SNS or Amazon SQS. These standard notification mechanisms work well for most applications, and are simple to implement. However, for more complex notification patterns, you can use Amazon EventBridge to route events dynamically. This blog post explores advanced use-cases and how to implement these in your serverless applications.

S3 to EventBridge, using CloudTrail.

To set up the example applications, visit the GitHub repo and follow the instructions in the README.md file. The code uses SAM templates, enabling you to deploy the applications in your own AWS account. This walkthrough creates resources covered in the AWS Free Tier but you may incur cost if you test with large amounts of data.

Integrating S3 events with Lambda via EventBridge

EventBridge consumes S3 events via AWS CloudTrail. A single trail can log events for one or more S3 buckets, and you can configure which data events are recorded. It’s best practice to store CloudTrail log files in a separate S3 bucket. Once this is configured, EventBridge can then receive any event logged in the trail.

The first example in the GitHub repo shows how this can be configured in a SAM template. The application comprises an S3 bucket, a Lambda EventConsumer function, and other required resources. First, the template defines the two buckets:

Resources: 
  SourceBucket: 
    Type: AWS::S3::Bucket
    Properties:
      BucketName: "TheSourceBucket"

  LoggingBucket: 
    Type: AWS::S3::Bucket
    Properties:
      BucketName: "TheLoggingBucket"

Next, an S3 bucket policy grants permissions for CloudTrail to write files to the logging bucket:

  BucketPolicy: 
    Type: AWS::S3::BucketPolicy
    Properties: 
      Bucket: 
        Ref: LoggingBucket
      PolicyDocument: 
        Version: "2012-10-17"
        Statement: 
          - 
            Sid: "AWSCloudTrailAclCheck"
            Effect: "Allow"
            Principal: 
              Service: "cloudtrail.amazonaws.com"
            Action: "s3:GetBucketAcl"
            Resource: 
              !Sub |-
                arn:aws:s3:::${LoggingBucket}
          - 
            Sid: "AWSCloudTrailWrite"
            Effect: "Allow"
            Principal: 
              Service: "cloudtrail.amazonaws.com"
            Action: "s3:PutObject"
            Resource:
              !Sub |-
                arn:aws:s3:::${LoggingBucket}/AWSLogs/${AWS::AccountId}/*
            Condition: 
              StringEquals:
                s3:x-amz-acl: "bucket-owner-full-control"

The template configures the trail and sets the logging bucket. It defines event selectors, which identify the specific events for logging:

  myTrail: 
    Type: AWS::CloudTrail::Trail
    DependsOn: 
      - BucketPolicy
    Properties: 
      TrailName: "MyTrailName"
      S3BucketName: 
        Ref: LoggingBucket
      IsLogging: true
      IsMultiRegionTrail: false
      EventSelectors:
        - DataResources:
          - Type: AWS::S3::Object
            Values:
              - !Sub |-
                arn:aws:s3:::${SourceBucket}/
      IncludeGlobalServiceEvents: false

The SAM template configures a target Lambda function for receiving the events:

  EventConsumerFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: eventConsumer/
      Handler: app.handler
      Runtime: nodejs12.x

Finally, it defines a rule that sets the event pattern and targets. It also grants permission to EventBridge to invoke the Lambda function:

  EventRule: 
    Type: AWS::Events::Rule
    Properties: 
      Description: "EventRule"
      State: "ENABLED"
      EventPattern: 
        source: 
          - "aws.s3"
        detail: 
          eventName: 
            - "PutObject"
          requestParameters:
            bucketName: !Ref SourceBucket

      Targets: 
        - 
          Arn: 
            Fn::GetAtt: 
              - "EventConsumerFunction"
              - "Arn"
          Id: "EventConsumerFunctionTarget"

  PermissionForEventsToInvokeLambda: 
    Type: AWS::Lambda::Permission
    Properties: 
      FunctionName: 
        Ref: "EventConsumerFunction"
      Action: "lambda:InvokeFunction"
      Principal: "events.amazonaws.com"
      SourceArn: 
        Fn::GetAtt: 
          - "EventRule"
          - "Arn"

To deploy this application, follow the instructions in the GitHub repo’s README.md file. To test, upload any file to the source bucket. This invokes the Lambda function via the EventBridge event, and logs out the event details. Open the CloudWatch Logs console for the deployed Lambda function to view the output.
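
The deployed eventConsumer function is written in Node.js; for illustration, an equivalent minimal handler in Python that simply logs the incoming event would look like this:

import json

def handler(event, context):
    # The EventBridge event contains the full CloudTrail record for the S3 PutObject call
    print(json.dumps(event))
    return {'statusCode': 200}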

The event pattern in this example matches on any PutObject event in the Source Bucket. You can also match on any attribute, or combination of attributes, in an S3 event. This makes it possible to identify events by source IP address, object size, time range, or principalId (the user causing the event). With access to the entire S3 event, this enables more granularity on matching events before invoking the target Lambda function.
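
For example, the rule from the template above could be registered with a pattern that also filters on the caller’s source IP address. The following boto3 sketch shows the idea; the rule name, bucket name, and IP prefix are assumptions:

import json
import boto3

events = boto3.client('events')

pattern = {
    "source": ["aws.s3"],
    "detail": {
        "eventName": ["PutObject"],
        "requestParameters": {"bucketName": ["the-source-bucket"]},  # assumed bucket name
        "sourceIPAddress": [{"prefix": "203.0.113."}]                # only match events from this IP range
    }
}

# Create or update a rule whose pattern matches on both bucket name and source IP
events.put_rule(
    Name='S3PutFromKnownRange',  # assumed rule name
    EventPattern=json.dumps(pattern),
    State='ENABLED'
)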

Consuming events from existing S3 buckets

When deploying S3 and Lambda integrations in SAM templates, you cannot use existing buckets managed outside of the CloudFormation stack. Frequently, it’s useful to deploy serverless applications that integrate with existing S3 buckets. Using the S3-to-EventBridge integration, you can create new applications that receive events from existing buckets.

Consuming events from existing S3 buckets

The second example in the GitHub repo shows how to configure a new application for an existing bucket. This template takes the existing S3 bucket name as a parameter, and generates the CloudTrail trail, EventBridge rule, and required permissions.

Follow this example’s README.md file to deploy the application. To test, upload any file into the existing S3 bucket you selected. This invokes the eventConsumer logging function deployed in the template.

Invoking a single Lambda function from multiple S3 buckets

With EventBridge decoupling the producer and consumer of the events, this also makes it easier to introduce multiple producers. In the third example, the SAM template creates three buckets that invoke the same EventConsumer Lambda function:

Invoking Lambda from multiple S3 buckets

The MultiBucketName parameter is used to create the three buckets with a number appended to the name. First, the CloudTrail EventSelector includes the three buckets in the trail:

  # The CloudTrail trail 
  myTrail: 
    Type: AWS::CloudTrail::Trail
    DependsOn: 
      - BucketPolicy
    Properties: 
      TrailName: "myTrail"
      S3BucketName: 
        Ref: LoggingBucket
      IsLogging: true
      IsMultiRegionTrail: false
      EventSelectors:
        - DataResources:
          - Type: AWS::S3::Object
            Values:
              - !Sub 'arn:aws:s3:::${MultiBucketName}-1/'
              - !Sub 'arn:aws:s3:::${MultiBucketName}-2/'
              - !Sub 'arn:aws:s3:::${MultiBucketName}-3/'
      IncludeGlobalServiceEvents: false

Next, the EventRule includes the three bucket names in the event pattern, so events from any of these buckets can now trigger the rule:

  # EventBridge rule - invokes EventConsumerFunction 
  EventRule: 
    Type: AWS::Events::Rule
    Properties: 
      Description: "EventRule"
      State: "ENABLED"
      EventPattern: 
        source: 
          - "aws.s3"
        detail: 
          eventName: 
            - "PutObject"
          requestParameters:
            bucketName:
              - !Sub '${MultiBucketName}-1'
              - !Sub '${MultiBucketName}-2'
              - !Sub '${MultiBucketName}-3'

It’s also possible to use content-based filtering in event patterns to match dynamically on bucket names. For example, if you have multiple buckets with the prefix myCompanySales, you can create an event pattern to match all of these buckets:

      EventPattern: 
        source: 
          - "aws.s3"
        detail: 
          eventName: 
            - "PutObject"
          requestParameters:
            bucketName:
              - "prefix": "myCompanySales" 

This enables your application to consume events from new buckets created after the application is deployed. With content-based filtering, you can create search patterns that allow greater flexibility in matching events.

Multiple buckets with multiple Lambda functions

In the standard S3 and Lambda integration, a single Lambda function can only be invoked by distinct prefix and suffix patterns in the S3 trigger. This means that the same Lambda function cannot be set as the trigger for PutObject events for the same filetype or prefix. When you need to invoke multiple functions with the same or overlapping prefixes or suffixes, the EventBridge integration can handle this.

EventBridge allows up to five targets per rule, so you can specify up to five separate Lambda functions to receive the event. All five functions are invoked in parallel when the event pattern matches. To use this, add the targets in the rule – no change to the event pattern is required.

In the fourth example, the SAM template configures three buckets and three Lambda functions, all subscribing to the same event pattern.

Multiple buckets with multiple Lambda subscribers

This template takes the existing S3 bucket name as a parameter, and generates the CloudTrail trail, EventBridge rule, and required permissions. The key change to the template is in the EventRule, where now more than one target is defined:

      Targets: 
        - Arn: 
            Fn::GetAtt: 
              - "EventConsumerFunction1"
              - "Arn"
          Id: "EventConsumerFunctionTarget1"
        - Arn: 
            Fn::GetAtt: 
              - "EventConsumerFunction2"
              - "Arn"
          Id: "EventConsumerFunctionTarget2"
        - Arn: 
            Fn::GetAtt: 
              - "EventConsumerFunction3"
              - "Arn"
          Id: "EventConsumerFunctionTarget3"

This approach enables more complex routing of S3 events to Lambda targets. It allows events from multiple S3 buckets with overlapping prefixes and suffixes in object names. It also enables you to route those events to multiple Lambda functions simultaneously.

Conclusion

The standard S3 to Lambda integration enables developers to deploy code that responds to bucket- or object-based events. You can also use SNS or SQS as targets for fanning out or buffering messages from S3. Using Amazon EventBridge, you can employ even more sophisticated routing and filtering of events between S3 and Lambda.

In this blog post, I show how to deploy a basic integration using a SAM template with a single bucket and single Lambda function. I cover how to use existing S3 buckets in your new application deployments, and use EventBridge content filtering in rules to dynamically match bucket events.

Finally, in complex serverless applications, I show how EventBridge completely decouples the producers and consumers. This makes it easy to route events from multiple S3 buckets to multiple Lambda functions. When combined with attribute matching across the entire S3 event object, this allows much more granularity in identifying events before invoking Lambda functions.

To learn more about using decoupled, event-driven architectures in your serverless applications, visit the Amazon EventBridge Learning Path.

Register now for the first virtual AWS Serverless-First Function

Post Syndicated from Rachel Richardson original https://aws.amazon.com/blogs/compute/register-now-for-the-first-virtual-aws-serverless-first-function/

This post is courtesy of Rebecca Marshburn, Sr. Product Marketing Manager, Serverless

AWS is hosting the first-ever completely serverless-focused, free virtual event: the AWS Serverless-First Function. The event takes place across two Thursdays – May 21 and May 28 – and focuses on two important themes of building on serverless:

  • May 21: Serverless for your Organization sessions focus on using, growing, promoting, and managing the use of serverless best practices across organizations.
  • May 28: Serverless for your Application sessions highlight best practices in building serverless applications, with an emphasis on hands-on, technical education.

Dr. Werner Vogels, CTO at Amazon, will introduce the event. Following him, AWS serverless leaders, builders, customers, and market analysts deliver topical sessions complemented by live, interactive chat.

The event is designed to offer educational takeaways for everyone, in every role, across their serverless journey. Business leaders, engineering managers, technical practitioners, and IT operators will all find something of value across the event, whether topically by day or technically by session. You can find highlighted sessions from the agenda below, to help decide which are best for you.

If you’re ready to attend the event and live chat with session presenters, AWS experts, partners, customers, and Serverless Heroes, register here.

Register for the Serverless-First Function

AGENDA HIGHLIGHTS

May 21: Serverless for your Organization

With the goal of giving you real-world tactics for transforming your organization, today’s sessions bring you insights, ideas, and learnings from AWS leaders, customers, market analysts, and more. The following sessions offer you the opportunity to hear from voices inside and outside of AWS.

  • 8:30 AM PST | 11:30 AM EST with AWS VP of Serverless, David Richardson

How operations change as your organization embraces event-driven architectures

In this session, David Richardson discusses serverless security best practices and benefits. Richardson will cover how operations teams can implement governance with APIs and events.

  • 10:10 AM PST | 1:10 PM EST with Workgrid Software Head of Cloud Engineering, Gillian McCann

Built Serverless-First: How Workgrid Software transformed from a Liberty Mutual project to its own global startup

Connected through a central IT team, Liberty Mutual has embraced serverless since AWS Lambda’s inception in 2014. In this session, Gillian McCann discusses Workgrid’s serverless journey—from internal microservices project within Liberty Mutual to independent business entity, all built serverless-first. Introduction by AWS Principal Serverless SA, Sam Dengler.

  • 11:30 AM PST | 2:30 PM EST featuring Forrester VP and Principal Analyst Jeffrey Hammond

Market insights: A conversation with Forrester analyst Jeffrey Hammond & Director of Product for Lambda, Ajay Nair

In this session, guest speaker Jeffrey Hammond and Director of Product for AWS Lambda, Ajay Nair, discuss the state of serverless, Lambda-based architectural approaches, Functions-as-a-Service platforms, and more. You’ll learn about the high-level and enduring enterprise patterns and advancements that analysts see driving the market today and determining the market in the future.

May 28: Serverless for your Application

Join these sessions to learn end-to-end best practices for building serverless applications. Kicked off by a conversation with AWS SVP Charlie Bell, today’s sessions will offer a complete builder’s guide for serverless applications, starting with the basics and culminating in ways to fine-tune your serverless applications to ensure maximum performance and your highest ROI.

  • 8:00 AM PST | 11:00 AM EST with AWS SVP, Charlie Bell

Navigating the new world of serverless: VP of Application Integration Jesse Dougherty interviews Charlie Bell, SVP at AWS

In this session, AWS VP of Application Integration Jesse Dougherty interviews AWS SVP Charlie Bell about serverless operations today. They’ll discuss how AWS uses serverless and the benefits AWS teams get from adopting a serverless-first approach.

  • 8:30 AM PST | 11:30 AM EST with Sr. Developer Advocate, Ben Smith

Building serverless web applications

In this session, follow along as Ben Smith shows you how to build and deploy a completely serverless web application from scratch. The application will span from a mobile friendly front end to complex business logic on the back end.

  • 11:30 AM PST | 2:30 PM EST with Sr. Developer Advocate, James Beswick

Performance tuning for serverless web applications

In this session, James Beswick shows you how to get the most from your serverless backend. He will cover how to reduce and eliminate cold starts, how to identify bottlenecks and areas for improvement, and how to measure application performance.

To join us for any of the above sessions and more, register here. The event is free and open to anyone interested in building on serverless. We encourage you to invite your teams, your leaders, your customers, and your fellow builders.

We look forward to seeing you online!

Build a serverless Martian weather display with CircuitPython and AWS Lambda

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/build-a-serverless-martian-weather-display-with-circuitpython-and-aws-lambda/

Build a standalone digital weather display of Mars showing the latest images from the Mars Curiosity Rover.

This project uses an Adafruit PyPortal, an open-source IoT touch display. Traditionally, a microcontroller is programmed with firmware compiled using various specific toolchains. Fortunately, the PyPortal is programmed using CircuitPython, a lightweight version of Python that works on embedded hardware. You just copy your code to the PyPortal like you would to a thumb drive and it runs.

I deploy the backend, the part in the cloud that does all the heavy lifting, using the AWS Serverless Application Repository (SAR). The code on the PyPortal makes a REST call to the backend to handle the requests to the NASA Mars Rover Photos API and InSight: Mars Weather Service API. It then converts and resizes the image before returning the information to the PyPortal for display.

An Adafruit PyPortal displaying the latest images from the Mars Curiosity Rover and weather data from InSight Mars Lander.

Prerequisites

You need the following to complete the project:

Deploy the backend application

An architecture diagram of the serverless backend.

Using a serverless backend reduces the load on the PyPortal. The PyPortal makes a call to the backend API and receives a small JSON object with the relevant data. This allows you to change the logic of where and how to get the image and weather data without needing physical access to the device.

The backend API consists of an AWS Lambda function, written in Python, behind an Amazon API Gateway endpoint. When invoked, the FetchMarsData function makes requests to two separate NASA APIs. First it fetches the latest images from the Mars Curiosity Rover, typically from the previous day, and picks one at random. It resizes and converts the image to bitmap format before uploading to Amazon S3 with public read permissions. The PyPortal downloads the image from S3 later.

The function then calls the InSight: Mars Weather Service API. It retrieves the average air temperature, wind speed, pressure, season, solar day (sol), as well as the first and last timestamp of daily sampling. The API returns these values and the S3 image URL as a JSON object.
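
For instance, the image upload step described above can be a single boto3 call. The following is a minimal sketch; the bucket name, object key, and returned URL format are assumptions:

import boto3

s3 = boto3.client('s3')

def upload_image(image_bytes):
    # Store the converted bitmap with public read access so the PyPortal can download it later
    s3.put_object(
        Bucket='pyportal-mars-images',  # assumed bucket name
        Key='latest.bmp',               # assumed object key
        Body=image_bytes,
        ContentType='image/bmp',
        ACL='public-read'
    )
    return 'https://pyportal-mars-images.s3.amazonaws.com/latest.bmp'  # assumed URL format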

I use the AWS Serverless Application Model (SAM) to create the backend. While it can be deployed using the AWS SAM CLI, you can also deploy from the AWS Management Console:

  1. Generate a free NASA API key at api.nasa.gov. This is required to gain access to the NASA data APIs.
  2. Navigate to the aws-serverless-pyportal-mars-weather-display application in the Serverless Application Repository.
  3. Choose Deploy.
  4. On the next page, under Application Settings, enter the parameter, NasaApiKey.

  5. Once complete, choose View CloudFormation Stack.

  6. Select the Outputs tab and make a note of the MarsApiUrl. This is required for configuring the PyPortal.

  7. Navigate to the MarsApiKey URL listed in the Outputs tab.

  8. Click Show to reveal the API key. Make a note of this. This is required for authenticating requests from the PyPortal to the MarsApiUrl.

PyPortal setup

  1. Follow these instructions from Adafruit to install the latest version of the CircuitPython bootloader. At the time of writing, the latest version is 5.2.0.
  2. Follow these instructions to install the latest Adafruit CircuitPython library bundle. I use bundle version 5.x.
  3. Insert the microSD card in the slot located on the back of the device.
  4. Optionally install the Mu Editor, a multi-platform code editor and serial debugger compatible with Adafruit CircuitPython boards. This can help if you need to troubleshoot issues.
  5. Optionally if you have a 3D printer at home, you can print a case for your PyPortal. This can protect your project while also being a great way to display it on a desk.

Code PyPortal

As with regular Python, CircuitPython does not need to be compiled to execute. Flashing new firmware on the PyPortal is as simple as copying a Python file and necessary assets over to a mounted volume. The bootloader runs code.py anytime the device starts or any files are updated.

  1. Use a USB cable to plug the PyPortal into your computer and wait until a new mounted volume CIRCUITPY is available.
  2. Download the project from GitHub. Inside the project, copy the contents of /circuit-python on to the CIRCUITPY volume.
  3. Inside the volume, open and edit the secrets.py file. Include your Wi-Fi credentials along with the MarsApiKey and MarsApiUrl API Gateway endpoint, which can be found under Outputs in the AWS CloudFormation stack created by the Serverless Application Repository.
  4. Save the file, and the device restarts. It takes a moment to connect to Wi-Fi and make the first request.
    Optionally, if you installed the Mu Editor, you can click on “Serial” to follow along with the device log.

Animated gif of the PyPortal device displaying a Mars rover image and Mars weather data.

Understanding how CircuitPython calls API Gateway

The main CircuitPython file is code.py. At the end of the file, the while loop periodically performs the operations necessary to display the photos from the Curiosity Rover and the InSight Mars lander weather data.

while True:
    data = callAPIEndpoint(secrets['mars_api_url'])
    downloadImage(data['image_url'])
    showDisplay(data['insight'],
                displayTime=60*interval_minutes)

First, it calls the API Gateway endpoint using the URL from the secrets.py file, and passes the returned JSON to helper functions. The callAPIEndpoint(url) function passes the MarsApiKey in the header and a timeout of 30 seconds to the wifi.get() method. The timeout is required for integrations with services like Lambda and API Gateway. Remember, the CircuitPython code is running on a microcontroller and sometimes must wait longer when making requests.

def callAPIEndpoint(mars_api_url):
    headers = {"x-api-key": secrets['mars_api_key']}
    response = wifi.get(mars_api_url, headers=headers, timeout=30)
    data = response.json()
    print("JSON Response: ", data)
    response.close()
    return data

The JSON object that is received by the PyPortal is defined in the handler of the Lambda function. In the GitHub project downloaded earlier, see src/app.py.

def lambda_handler(event, context):
    url = fetchRoverImage()
    imgData = fetchImageData(url)
    image_s3_url = resize_image(imgData)
    weatherData = getMarsInsightWeather()

    return {
        "statusCode": 200,
        "body": json.dumps({
            "image_url": image_s3_url,
            "insight": weatherData
        })
    }

Similar to the CircuitPython code, this uses helper functions to perform all the various operations needed to retrieve and craft the data. At completion, the returned JSON is passed as the response to the PyPortal.

A quick way to add a new property is to edit the Lambda function directly through the AWS Lambda Console. Here, a key “hello” is added with a value “world”:

In the CircuitPython code.py file, the key is now available in the JSON response from API Gateway. The following prints the key value, which can be seen using the Mu Editor Serial debugger.

data = callAPIEndpoint(secrets['mars_api_url'])

print(data['hello'])

The Lambda function is packaged with the AWS Python SDK, boto3, which provides methods for interacting with a variety of AWS services. The Python Requests library is also included to make calls to the NASA APIs. Try exploring how to incorporate other services or APIs into your project. To understand how to modify the visual display on the PyPortal itself, see the displayio guide from Adafruit.
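
As an example, a sketch of how such a function might call the Mars Rover Photos API with the requests library is shown below; the exact endpoint path and query parameters are assumptions based on the public NASA API documentation, and the API key is read from an environment variable:

import os
import requests

def fetch_rover_photos(earth_date):
    # Query the Mars Rover Photos API for Curiosity images taken on a given Earth date
    response = requests.get(
        'https://api.nasa.gov/mars-photos/api/v1/rovers/curiosity/photos',  # assumed endpoint
        params={'earth_date': earth_date, 'api_key': os.environ['NASA_API_KEY']},
        timeout=10
    )
    response.raise_for_status()
    return response.json().get('photos', [])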

Conclusion

I show how to build a “live” Martian weather display using an Adafruit PyPortal, CircuitPython, and AWS Serverless technologies. Whether this is your first time using hardware or a serverless backend in the AWS Cloud, this project is simplified by the use of CircuitPython and the Serverless Application Model.

I also show how to make a request to API Gateway from the PyPortal. I then craft a response in Lambda for the PyPortal. Since both use variants of the Python programming language, much of the syntax stays the same.

To learn more, explore other devices supported by CircuitPython and the variety of community contributed libraries. Combined with the breadth of AWS services, you can push the boundaries of creativity.

Building a Scalable Document Pre-Processing Pipeline

Post Syndicated from Joel Knight original https://aws.amazon.com/blogs/architecture/building-a-scalable-document-pre-processing-pipeline/

In a recent customer engagement, Quantiphi, Inc., a member of the Amazon Web Services Partner Network, built a solution capable of pre-processing tens of millions of PDF documents before sending them for inference by a machine learning (ML) model. While the customer’s use case—and hence the ML model—was very specific to their needs, the pipeline that does the pre-processing of documents is reusable for a wide array of document processing workloads. This post will walk you through the pre-processing pipeline architecture.

Pre-processing pipeline architecture

Architectural goals

Quantiphi established the following goals prior to starting:

  • Loose coupling to enable independent scaling of compute components, flexible selection of compute services, and agility as the customer’s requirements evolved.
  • Work backwards from business requirements when making decisions that affect scale and throughput, rather than assuming “fastest is best.” Scale components only where it makes sense and for maximum impact.
  • Log everything at every stage to enable troubleshooting when something goes wrong, provide a detailed audit trail, and facilitate cost optimization exercises by identifying usage and load of every compute component in the architecture.

Document ingestion

The documents are initially stored in a staging bucket in Amazon Simple Storage Service (Amazon S3). The processing pipeline is kicked off when the “trigger” AWS Lambda function is called. This Lambda function passes parameters, such as the name of the staging S3 bucket and the path(s) within the bucket to be processed, to the “ingestion app.”

The ingestion app is a simple application that exposes a web service for triggering a batch and lists the documents in the S3 bucket path(s) received via that web service. As the app processes the list of documents, it feeds the document path, S3 bucket name, and some additional metadata to the “ingest” Amazon Simple Queue Service (Amazon SQS) queue. The ingestion app also starts the audit trail for the document by writing a record to the Amazon Aurora database. As the document moves downstream, additional records are added to the database. Records are joined together by a unique ID that the ingestion app assigns to each document and that is passed along throughout the pipeline.
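
A minimal sketch of that ingestion loop with Python and boto3 follows. The queue URL is a placeholder and the Aurora write is only indicated by a comment; Quantiphi’s actual implementation may differ.

import json
import uuid

import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

INGEST_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest"  # placeholder

def ingest_batch(bucket, prefix):
    """List documents under a prefix and queue each one for chunking."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            document_id = str(uuid.uuid4())  # unique ID carried through the pipeline
            # write_audit_record(document_id, bucket, obj["Key"])  # Aurora insert, not shown
            sqs.send_message(
                QueueUrl=INGEST_QUEUE_URL,
                MessageBody=json.dumps({
                    "document_id": document_id,
                    "bucket": bucket,
                    "key": obj["Key"],
                }),
            )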

Chunking the documents

In order to maximize grip and control, the architecture is built to submit single-page files to the ML model. This enables correlating an inference failure to a specific page instead of a whole document (which may be many pages long). It also makes identifying the location of features within the inference results an easier task. Since the documents being processed can have varied sizes, resolutions, and page count, a big part of the pre-processing pipeline is to chunk a document up into its component pages prior to sending it for inference.

The “chunking orchestrator” app repeatedly pulls a message from the ingest queue and retrieves the document named therein from the S3 bucket. The PDF document is then classified along two metrics:

  • File size
  • Number of pages

We use these metrics to determine which chunking queue the document is sent to:

  • Large: Greater than 10 MB in size or greater than 10 pages
  • Small: Less than or equal to 10 MB and between two and 10 pages
  • Single page: Less than or equal to 10 MB and exactly one page

Each of these queues is serviced by an appropriately sized compute service that breaks the document down into smaller pieces, and ultimately, into individual pages.

  • Amazon Elastic Compute Cloud (Amazon EC2) instances process large documents, primarily because of the high memory footprint needed to read large, multi-gigabyte PDF files into memory. The output from these workers is smaller PDF documents that are stored in Amazon S3. The name and location of these smaller documents are submitted to the “small documents” queue.
  • Small documents are processed by a Lambda function that decomposes the document into single pages that are stored in Amazon S3. The name and location of these single-page files are sent to the “single page” queue.
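
Putting the size and page-count thresholds above into code, the orchestrator’s routing decision might look something like this minimal sketch (queue names are placeholders):

LARGE_QUEUE = "chunk-large"          # placeholder queue names
SMALL_QUEUE = "chunk-small"
SINGLE_PAGE_QUEUE = "single-page"

TEN_MB = 10 * 1024 * 1024

def choose_queue(size_bytes, page_count):
    """Route a PDF to a chunking queue based on its size and page count."""
    if size_bytes > TEN_MB or page_count > 10:
        return LARGE_QUEUE
    if page_count == 1:
        return SINGLE_PAGE_QUEUE
    return SMALL_QUEUE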

The Dead Letter Queues (DLQs) are used to hold messages from their respective size queues that are not successfully processed. If messages start landing in the DLQs, it’s an indication that there is a problem in the pipeline. For example, if messages start landing in the “small” or “single page” DLQ, it could indicate that the Lambda function processing those respective queues has reached its maximum run time.

An Amazon CloudWatch Alarm monitors the depth of each DLQ. Upon seeing DLQ activity, a notification is sent via Amazon Simple Notification Service (Amazon SNS) so an administrator can then investigate and make adjustments such as tuning the sizing thresholds to ensure the Lambda functions can finish before reaching their maximum run time.

In order to ensure no documents are left behind in the active run, there is a failsafe in the form of an Amazon EC2 worker that retrieves and processes messages from the DLQs. This failsafe app breaks a PDF all the way down into individual pages and then does image conversion.

Documents that don’t fall into a DLQ make it to the “single page” queue. This queue drives each page through the “image conversion” Lambda function, which converts the single-page file from PDF to PNG format. These PNG files are stored in Amazon S3.

Sending for inference

At this point, the documents have been chunked up and are ready for inference.

When the single-page image files land in Amazon S3, an S3 Event Notification is fired which places a message in a “converted image” SQS queue which in turn triggers the “model endpoint” Lambda function. This function calls an API endpoint on an Amazon API Gateway that is fronting the Amazon SageMaker inference endpoint. Using API Gateway with SageMaker endpoints avoided throttling during Lambda function execution due to high volumes of concurrent calls to the Amazon SageMaker API. This pattern also resulted in a 2x inference throughput speedup. The Lambda function passes the document’s S3 bucket name and path to the API which in turn passes it to the auto scaling SageMaker endpoint. The function reads the inference results that are passed back from API Gateway and stores them in Amazon Aurora.
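
A sketch of the core of that “model endpoint” function follows. The API URL, message fields, and the database write are placeholders, and the actual implementation likely includes error handling that is omitted here.

import json
import os
import urllib.request

# Placeholder: the API Gateway endpoint fronting the SageMaker inference endpoint.
API_URL = os.environ.get("INFERENCE_API_URL", "https://example.execute-api.us-east-1.amazonaws.com/prod/infer")

def handler(event, context):
    for record in event["Records"]:  # one SQS message per converted page image
        body = json.loads(record["body"])
        payload = json.dumps({"bucket": body["bucket"], "key": body["key"]}).encode()
        request = urllib.request.Request(
            API_URL, data=payload, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(request) as response:
            inference = json.loads(response.read())
        # save_results_to_aurora(body["document_id"], inference)  # not shown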

The inference results as well as all the telemetry collected as the document was processed can be queried from the Amazon Aurora database to build reports showing number of documents processed, number of documents with failures, and number of documents with or without whatever feature(s) the ML model is trained to look for.

Summary

This architecture is able to take PDF documents that range in size from single page up to thousands of pages or gigabytes in size, pre-process them into single page image files, and then send them for inference by a machine learning model. Once triggered, the pipeline is completely automated and is able to scale to tens of millions of pages per batch.

In keeping with the architectural goals of the project, Amazon SQS is used throughout in order to build a loosely coupled system that promotes agility, scalability, and resiliency. Loose coupling also enables a high degree of grip and control over the system, making it easier to respond to changes in business needs and to focus tuning efforts for maximum impact. And with every compute component logging everything it does, the system provides a high degree of auditability and introspection, which facilitates performance monitoring and detailed cost optimization.

Using AWS ParallelCluster with a serverless API

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/using-aws-parallelcluster-with-a-serverless-api/

This post is contributed by Dario La Porta, AWS Senior Consultant – HPC

AWS ParallelCluster simplifies the creation and the deployment of HPC clusters. Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. AWS Lambda automatically runs your code without requiring you to provision or manage servers.

In this post, I create a serverless API of the AWS ParallelCluster command line interface using these services. With this API, you can create, monitor, and destroy your clusters. This makes it possible to integrate AWS ParallelCluster programmatically with other applications you may have running on-premises or in the AWS Cloud.

The serverless integration of AWS ParallelCluster can enable a cleaner and more reproducible infrastructure as code paradigm to legacy HPC environments.

Taking this serverless, infrastructure as code approach enables several new types of functionality for HPC environments. For example, you can build on-demand clusters from an API when on-premises resources cannot handle the workload. AWS ParallelCluster can extend on-premises resources for running elastic and large-scale HPC on AWS’ virtually unlimited infrastructure.

You can also create an event-driven workflow in which new clusters are created when new data is stored in an S3 bucket. With event-driven workflows, you can be creative in finding new ways to build HPC infrastructure easily. It also helps optimize time for researchers.

Security is paramount in HPC environments because customers are performing scientific analyses that are central to their businesses. By using a serverless API, this solution can improve security by removing the need to run the AWS ParallelCluster CLI in a user environment. This helps keep customer environments secure and makes it easier to control the IAM roles and security groups that researchers have access to.

Additionally, the Amazon API Gateway for HPC job submission post explains how to submit a job in the cluster using the API. You can use this instead of connecting to the master node via SSH.

This diagram shows the components required to create the cluster and interact with the solution.

Cluster Architecture

Cost of the solution

You can deploy the solution in this blog post within the AWS Free Tier. Make sure that your AWS ParallelCluster configuration uses the t2.micro instance type for the cluster’s master and compute instances. This is the default instance type for AWS ParallelCluster configuration.

For real-world HPC use cases, you most likely want to use a different instance type, such as C5 or C5n. C5n in particular can work well for HPC workloads because it includes the option to use the Elastic Fabric Adapter (EFA) network interface. This makes it possible to scale tightly coupled workloads to more compute instances and reduce communications latency when using protocols such as MPI.

To stay within the AWS Free Tier allowance, be sure to destroy the created resources as described in the teardown section of this post.

VPC configuration

Choose Launch Stack to create the VPC used for this configuration in your account:

The stack creates the VPC, the public subnets, and the private subnet required for the cluster in the eu-west-1 Region.

Stack Outputs

You can also use an existing VPC that complies with the AWS ParallelCluster network requirements.

Deploy the API with AWS SAM

The AWS Serverless Application Model (AWS SAM) is an open-source framework that you can use to build serverless applications on AWS. You use AWS SAM to simplify the setup of the serverless architecture.

In this case, the framework automates the manual configuration of setting up the API Gateway and Lambda function. Instead you can focus more on how the API works with AWS ParallelCluster. It improves security and provides a simple, alternative method for cluster lifecycle management.

You can install the AWS SAM CLI by following the Installing the AWS SAM CLI documentation. You can download the code used in this example from this repo. Inside the repo:

  • the sam-app folder in the aws-sample repository contains the code required to build the AWS ParallelCluster serverless API.
  • sam-app/template.yml contains the policy required for the Lambda function for the creation of the cluster. Be sure to modify <AWS ACCOUNT ID> to match the value for your account.

The AWS Identity and Access Management Roles in AWS ParallelCluster document contains the latest version of the policy, in the ParallelClusterUserPolicy section.
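
Conceptually, the Lambda function behind the API decodes the configuration file from the request body and shells out to the pcluster CLI with the supplied command and cluster name. The following is only a sketch of that idea; the actual handler in the sample repository may differ.

import base64
import subprocess

def handler(event, context):
    params = event.get("queryStringParameters") or {}
    command = params["command"]            # for example: create, status, delete
    cluster_name = params["cluster_name"]
    extra = (event.get("headers") or {}).get("additional_parameters", "")

    # The request body carries the base64-encoded AWS ParallelCluster configuration file.
    with open("/tmp/pcluster.conf", "wb") as f:
        f.write(base64.b64decode(event["body"]))

    result = subprocess.run(
        ["pcluster", command, cluster_name, "--config", "/tmp/pcluster.conf"] + extra.split(),
        capture_output=True,
        text=True,
    )
    return {"statusCode": 200, "body": result.stdout + result.stderr}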

To deploy the application, run the following commands:

cd sam-app
sam build
sam deploy --guided

From here, provide parameter values for the SAM deployment wizard that are appropriate for your Region and AWS account. After the deployment, take a note of the Outputs:

SAM deploying

SAM Stack Outputs

The API Gateway endpoint URL is used to interact with the API, and has the following format:

https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pcluster

AWS ParallelCluster configuration file

AWS ParallelCluster is an open source cluster management tool to deploy and manage HPC clusters in the AWS Cloud. AWS ParallelCluster uses a configuration file to build the cluster and its syntax is explained in the documentation guide. The pcluster.conf configuration file can be created in a directory of your local file system.

The configuration file has been tested with AWS ParallelCluster v2.6.0. The master_subnet_id parameter contains the ID of the created public subnet and compute_subnet_id contains the ID of the private one.

Deploy the cluster with the pcluster API

The pcluster API created in the previous steps requires some parameters:

  • command – the pcluster command to execute. A detailed list of available commands is in the AWS ParallelCluster CLI commands page.
  • cluster_name – the name of the cluster.
  • --data-binary "$(base64 /path/to/pcluster/config)" – parameter used to pass the local AWS ParallelCluster configuration file to the API.
  • -H “additional_parameters: <param1> <param2> <…>” – used to pass additional parameters to the pcluster cli.

The following command creates a cluster named “cluster1”:

$ curl --request POST -H "additional_parameters: --nowait"  --data-binary "$(base64 /tmp/pcluster.conf)" "https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pcluster?command=create&cluster_name=cluster1"

Beginning cluster creation for cluster: cluster1
Creating stack named: parallelcluster-cluster1
Status: CREATE_IN_PROGRESS

The cluster creation status can be queried with the following:

$ curl --request POST -H "additional_parameters: --nowait"  --data-binary "$(base64 /tmp/pcluster.conf)" "https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pcluster?command=status&cluster_name=cluster1"

Status: CREATE_IN_PROGRESS

When the cluster is in the “CREATE_COMPLETE” state, you can retrieve the master node IP address using the following API call:

$ curl --request POST -H "additional_parameters: --nowait"  --data-binary "$(base64 /tmp/pcluster.conf)" "https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pcluster?command=status&cluster_name=cluster1"

Status: CREATE_COMPLETE

$ curl --request POST -H "additional_parameters: "  --data-binary "$(base64 /tmp/pcluster.conf)" "https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pcluster?command=status&cluster_name=cluster1"

Status: CREATE_COMPLETE
MasterServer: RUNNING
MasterPublicIP: 34.253.102.227
ClusterUser: ec2-user
MasterPrivateIP: 10.0.0.134

When the cluster is not needed anymore, destroy it with the following API call:

$ curl --request POST -H "additional_parameters: --nowait"  --data-binary "$(base64 /tmp/pcluster.conf)" "https://<ServerlessRestApi>.execute-api.eu-west-1.amazonaws.com/Prod/pcluster?command=delete&cluster_name=cluster1"

Deleting: cluster1

The additional_parameters: --nowait header prevents waiting for stack events after executing a stack command and avoids triggering the Lambda function timeout. The Amazon API Gateway for HPC job submission post explains how you can submit a job in the cluster using the API, instead of connecting to the master node via SSH.

The authentication to the API can be managed by following the Controlling and Managing Access to a REST API in API Gateway Documentation.

Teardown

You can destroy the resources by deleting the CloudFormation stacks created during installation. Deleting a Stack on the AWS CloudFormation Console explains the required steps.

Conclusion

In this post, I show how to integrate AWS ParallelCluster with Amazon API Gateway and manage the lifecycle of an HPC cluster using this API. Using Amazon API Gateway and AWS Lambda, you can run a serverless implementation of the AWS ParallelCluster CLI. This makes it possible to integrate AWS ParallelCluster programmatically with other applications you run on-premises or in the AWS Cloud.

This solution can help you improve the security of your HPC environment by simplifying the IAM roles and security groups that must be granted to individual users to successfully create HPC clusters. With this implementation, researchers no longer need to run the AWS ParallelCluster CLI in their own user environment. As a result, by simplifying the security management of your HPC clusters’ lifecycle, you can better ensure that important research is safe and secure.

To learn more, read more about how to use AWS ParallelCluster.

Building an automated knowledge repo with Amazon EventBridge and Zendesk

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/building-an-automated-knowledge-repo-with-amazon-eventbridge-and-zendesk/

Zendesk Guide is a smart knowledge base that helps customers harness the power of institutional knowledge. It enables users to build a customizable help center and customer portal.

This post shows how to implement a bidirectional event orchestration pattern between AWS services and an Amazon EventBridge third-party integration partner. This example uses support ticket events to build a customer self-service knowledge repository. It uses the EventBridge partner integration with Zendesk to accelerate the growth of a customer help center.

The examples in this post are part of a serverless application called FreshTracks. This is built in Vue.js and demonstrates SaaS integrations with Amazon EventBridge. To test this example, ask a question on the Fresh Tracks application.

The backend components for this EventBridge integration with Zendesk have been extracted into a separate example application in this GitHub repo.

How the application works

Routing Zendesk events with Amazon EventBridge.

  1. A user searches the knowledge repository via a widget embedded in the web application.
  2. If there is no answer, the user submits the question via the web widget.
  3. Zendesk receives the question as a support ticket.
  4. Zendesk emits events when the support ticket is resolved.
  5. These events are streamed into a custom SaaS event bus in EventBridge.
  6. Event rules match events and send them downstream to an AWS Step Functions Express Workflow.
  7. The Express Workflow orchestrates Lambda functions to retrieve additional information about the event with the Zendesk API.
  8. A Lambda function uses the Zendesk API to publish a new help article from the support ticket data.
  9. The new article is searchable on the website widget for other users to read.

Before deploying this application, you must generate an API key from within Zendesk.

Creating the Zendesk API resource

The application uses the Zendesk API to execute actions on your Zendesk account from AWS. Follow these steps to generate a Zendesk API token, which the application uses to authenticate Zendesk API calls.

To generate an API token

  1. Log in to the Zendesk dashboard.
  2. Click the Admin icon in the sidebar, then select Channels > API.
  3. Click the Settings tab, and make sure that Token Access is enabled.
  4. Click the + button to the right of Active API Tokens.

    Creating a Zendesk API token.

  5. Copy the token, and store it securely. Once you close this window, the full token is not displayed again.
  6. Click Save to return to the API page, which shows a truncated version of the token.

    Zendesk API token.

Configuring Zendesk with Amazon EventBridge

Step 1. Configuring your Zendesk event source.

  1. Go to your Zendesk Admin Center and select Admin Center > Integrations.
  2. Choose Connect in the Events Connector for Amazon EventBridge integration to open the page to configure your Zendesk event source.

    Zendesk integrations

  3. Enter your AWS account ID in the Amazon Web Services account ID field, and select the Region to receive events.
  4. Choose Save.

    Zendesk Amazon EventBridge configuration.

Step 2. Associate the Zendesk event source with a new event bus: 

  1. Log into the AWS Management Console and navigate to services > Amazon EventBridge > Partner event sources
    New event source

  2. Select the radio button next to the new event source and choose Associate with event bus.
    Associating event source with event bus.

  3. Choose Associate.

Deploying the backend application

After associating the Event source with a new partner event bus, you can deploy backend services to receive events.

To set up the example application, visit the GitHub repo and follow the instructions in the README.md file.

When deploying the application stack, make sure to provide the custom event bus name, and Zendesk API credentials with --parameter-overrides.

sam deploy --parameter-overrides ZendeskEventBusName=aws.partner/zendesk.com/123456789/default ZenDeskDomain=MyZendeskDomain ZenDeskPassword=myAPIToken ZenDeskUsername=myZendeskAgentUsername

You can find the name of the new Zendesk custom event bus in the custom event bus section of the EventBridge console.

Routing events with rules

When a support ticket is updated in Zendesk, a number of individual events are streamed to EventBridge. These include an event for each of the following:

  • Agent Assignment Changed
  • Comment Created
  • Status Changed
  • Brand Changed
  • Subject Changed

An EventBridge rule is used to filter events. The AWS Serverless Application Model (SAM) template defines the rule with the `AWS::Events::Rule` resource type. This routes the event downstream to an AWS Step Functions Express Workflow. The EventPattern is shown below:

  ZendeskNewWebQueryClosed: 
    Type: AWS::Events::Rule
    Properties: 
      Description: "New Web Query"
      EventBusName: 
         Ref: ZendeskEventBusName
      EventPattern: 
        account:
        - !Sub '${AWS::AccountId}'
        detail-type: 
        - "Support Ticket: Comment Created"
        detail:
          ticket_event:
            ticket:
              status: 
              - solved
              tags:
              - web_widget
              tags: 
              - guide
      Targets: 
        - RoleArn: !GetAtt [ MyStatesExecutionRole, Arn ]
          Arn: !Ref FreshTracksZenDeskQueryMachine
          Id: NewQuery

The tickets must have two specific tags (web_widget and guide) for this pattern to match. These are defined as separate fields to create an AND matching rule, instead of being declared within the same array field, which would create an OR rule. A new comment on a support ticket triggers the event.

The Step Functions Express Workflow

The application routes events to a Step Functions Express Workflow that is defined in the application’s SAM template:

FreshTracksZenDeskQueryMachine:
    Type: "AWS::StepFunctions::StateMachine"
    Properties:
      StateMachineType: EXPRESS
      DefinitionString: !Sub |
               {
                    "Comment": "Create a new article from a zendeskTicket",
                    "StartAt": "GetFullZendeskTicket",
                    "States": {
                      "GetFullZendeskTicket": {
                      "Comment": "Get Full Ticket Details",
                      "Type": "Task",
                      "ResultPath": "$.FullTicket",
                      "Resource": "${GetFullZendeskTicket.Arn}",
                      "Next": "GetFullZendeskUser"
                      },
                      "GetFullZendeskUser": {
                      "Comment": "Get Full User Details",
                      "Type": "Task",
                      "ResultPath": "$.FullUser",
                      "Resource": "${GetFullZendeskUser.Arn}",
                      "Next": "PublishArticle"
                      },
                      "PublishArticle": {
                      "Comment": "Publish as an article",
                      "Type": "Task",
                      "Resource": "${CreateZendeskArticle.Arn}",
                      "End": true
                      }
                    }
                }
      RoleArn: !GetAtt [ MyStatesExecutionRole, Arn ]

This application is suited to a Step Functions Express Workflow because it orchestrates short-duration, high-volume, event-based workloads. Each workflow task is idempotent and stateless. The Express Workflow carries the workload’s state by passing the output of one task to the input of the next. The Amazon States Language ResultPath definition is used to control where each task’s output is appended to the workflow’s state before it is passed to the next task.

AWS Step Functions Express Workflow

Lambda functions

Each task in this Express Workflow invokes a Lambda function defined within the example application’s SAM template. The Lambda functions use the Node.js Axios package to make requests to Zendesk’s API. The Zendesk API credentials are stored in the Lambda function’s environment variables and are accessible via process.env.

The first two Lambda functions in the workflow make a GET request to Zendesk. This retrieves additional data about the support ticket, the author, and the agent’s response.

The final Lambda function makes a POST request to Zendesk API. This creates and publishes a new article using this data.  The permission_group and section defined in this function must be set to your Zendesk account’s default permission group ID and FAQ section ID.
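
The example application implements these calls in Node.js with Axios. Purely as an illustration of this final step, a Python sketch of the article-publishing request could look like the following. The environment variable names mirror the deployment parameters used earlier, and the endpoint path and required article fields are assumptions to confirm against the Zendesk Help Center API documentation for your account.

import os

import requests

ZENDESK_DOMAIN = os.environ["ZenDeskDomain"]      # Zendesk subdomain
ZENDESK_USER = os.environ["ZenDeskUsername"]      # agent email address
ZENDESK_TOKEN = os.environ["ZenDeskPassword"]     # API token created earlier

def publish_article(section_id, title, body_html, permission_group_id, user_segment_id=None):
    # Assumed Help Center endpoint; verify the path and required fields for your account.
    url = (
        f"https://{ZENDESK_DOMAIN}.zendesk.com"
        f"/api/v2/help_center/sections/{section_id}/articles.json"
    )
    payload = {
        "article": {
            "title": title,
            "body": body_html,
            "permission_group_id": permission_group_id,
            "user_segment_id": user_segment_id,
        }
    }
    response = requests.post(
        url,
        json=payload,
        auth=(f"{ZENDESK_USER}/token", ZENDESK_TOKEN),  # token-based basic auth
        timeout=10,
    )
    response.raise_for_status()
    return response.json()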

AWS Lambda function code

Integrating with your front-end application

Follow the instructions in the Fresh Tracks repo on GitHub to deploy the front-end application. This application includes Zendesk’s web widget script in the index.html page. The widget has been customized using Zendesk’s JavaScript API. This is implemented in the navigation component to insert custom forms into the widget and prefill the email address field for authenticated users. The backend application starts receiving Zendesk-emitted events immediately.

The video below demonstrates the implementation from end to end.

Conclusion

This post explains how to set up EventBridge’s third-party integration with Zendesk to capture events. The example backend application demonstrates how to filter these events, and send downstream to a Step Functions Express Workflow. The Express Workflow orchestrates a series of stateless Lambda functions to gather additional data about the event. Zendesk’s API is then used to publish a new help guide article from this data.

This pattern provides a framework for bidirectional event orchestration between AWS services, custom web applications and third party integration partners. This can be replicated and applied to any number of third party integration partners.

This is implemented with minimal code to provide near real-time streaming of events and without adding latency to your application.

The possibilities are vast. I am excited to see how builders use this bidirectional serverless pattern to add even more value to their third party services.

Start here to learn about other SaaS integrations with Amazon EventBridge.

Decoupling larger applications with Amazon EventBridge

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/decoupling-larger-applications-with-amazon-eventbridge/

Many applications start to grow in complexity as they mature, making it harder for developers to maintain code or add new features. This can lead to monolithic applications, where developers must know more about the entire architecture to make changes. Typically, this causes code to become more fragile, and the rate of development slows down.

This blog post shows how you can use an event-based architecture to decouple services and functional areas of applications. It uses the document repository solution as an example, comparing the architecture before and after shifting to an event-based approach. The new architecture offers both greater extensibility and simplicity for developers adding new functionality in the future. It can help alleviate the problems associated with monolithic applications.

The original version of this application uses Amazon S3 event notifications to invoke AWS Lambda functions to index content in the Amazon Elasticsearch Service:

Original document repository application architecture

There are some limitations with this design. First, there is a single source bucket for documents, which may not reflect production usage. Also, while it could be modified to allow new file types for indexing, adding new functionality such as translating documents would require refactoring. And despite having multiple Lambda functions, it’s packaged as a single application, which makes it harder to deploy changes.

The new design uses events to decouple each service used to process incoming S3 objects. It can also use one or more buckets as event sources, which you can change dynamically as needed. Most importantly, it can be easier to introduce changes and new functionality, since the application is no longer deployed as a mono-repo. The new architecture uses this design:

Decoupled architecture

  1. Setup and configuration of AWS resources.
  2. Parser function to filter and reformat S3 events for the application.
  3. Converter functions to operate on distinct file types.
  4. Analyzer functions for interpreting the content of the files.
  5. The Loader function imports the metadata into the Amazon Elasticsearch Service.

The code uses the AWS Serverless Application Model (SAM), enabling you to deploy the application easily in your own AWS account. This walkthrough creates resources covered in the AWS Free Tier but you may incur cost for significant data usage. Additionally, it requires an Amazon Elasticsearch Service domain, which may incur cost on your AWS bill.

The resulting solution is five separate applications, which you deploy in stages. To set up the application, visit the GitHub repo and follow the instructions in the README.md file.

Setup and configuration

The SAM template in the setup directory creates the S3 buckets, and configures AWS CloudTrail to capture put events in these buckets. This is required as EventBridge consumes S3 events via CloudTrail. Now, when any object is stored in any of these S3 buckets, EventBridge receives an event.

This template also creates a customer managed IAM policy that grants read-only access to the source S3 buckets:

  MyManagedPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: docrepo-s3-read-policy
      PolicyDocument: 
        Version: 2012-10-17
        Statement: 
          - Effect: Allow
            Action:
              - s3:GetObject
              - s3:ListBucket
              - s3:GetBucketLocation
              - s3:GetObjectVersion
              - s3:GetLifecycleConfiguration
            Resource:
              - !Sub 'arn:aws:s3:::${Dept1Bucket}/*'
              - !Sub 'arn:aws:s3:::${Dept2Bucket}/*'
              - !Sub 'arn:aws:s3:::${Dept3Bucket}/*'

This policy can be attached to any Lambda function that must read the contents from one of the S3 buckets. If the pool of source buckets changes in the future, you only need to modify this policy. Any downstream Lambda functions using the policy automatically gain access to the added buckets.

In the second setup application, the Parser service receives those S3 events and reformats the event for downstream services. Specifically, it creates a new attribute for the file type of the S3 object. After you deploy these two templates, creating any objects in the source S3 buckets generates the following event in the default event bus:

Parsing events from Amazon S3
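
As an illustration of that reformatting step, a parser function might pull the bucket, key, and file type out of the CloudTrail-based event and publish a reshaped event back to the default bus. This is a sketch only; the detail-type name is an assumption and the repo’s implementation may differ.

import json

import boto3

events = boto3.client("events")

def handler(event, context):
    detail = event["detail"]
    bucket = detail["requestParameters"]["bucketName"]
    key = detail["requestParameters"]["key"]
    file_type = key.rsplit(".", 1)[-1].lower() if "." in key else ""

    events.put_events(Entries=[{
        "Source": "docRepo.s3",
        "DetailType": "NewS3Object",   # illustrative detail-type
        "Detail": json.dumps({"bucket": bucket, "key": key, "fileType": file_type}),
    }])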

Building the converter processes

This application uses converters to process incoming objects in the S3 buckets. One converter handles one file type. There are two converters required to replicate the original application’s functionality, for pdf and docx files. An EventBridge rule matches incoming events and triggers the appropriate Lambda function to convert the object. This diagram shows abridged input and output events for these functions:

  1. A matching EventBridge rule invokes the relevant converter function. The function converts the source file into raw text.
  2. The text is split into batches of 5,000 characters.
  3. The functions publish the text batches back to EventBridge, using new detail-type and source attributes.

The SAM template specifies the EventBridge rules, the permissions for EventBridge to invoke the Lambda functions, and the processing Lambda functions. The Lambda functions use the customer managed IAM policy created during the setup for read-only access to the originating S3 bucket. Each converter has its own logic for processing file types differently, and can produce different types of events if needed.
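
As an example of the batching step, a converter could split the extracted text and publish each batch back to EventBridge like this. The docRepo.converters source and NewTextBatch detail-type follow the analyzer description below; everything else is illustrative.

import json

import boto3

events = boto3.client("events")
BATCH_SIZE = 5000  # characters per batch, per the description above

def publish_text_batches(text, bucket, key):
    """Split raw text into 5,000-character batches and publish each to EventBridge."""
    batches = [text[i:i + BATCH_SIZE] for i in range(0, len(text), BATCH_SIZE)]
    entries = [{
        "Source": "docRepo.converters",
        "DetailType": "NewTextBatch",
        "Detail": json.dumps({"bucket": bucket, "key": key, "text": batch}),
    } for batch in batches]

    # PutEvents accepts at most 10 entries per call.
    for i in range(0, len(entries), 10):
        events.put_events(Entries=entries[i:i + 10])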

The analyzer functions

In this workflow, any file type containing text is analyzed by Amazon Comprehend to detect entities. The AnalyzeText function is invoked by an EventBridge rule. The rule is filtering for the NewTextBatch attribute in an event from docRepo.converters.

Another EventBridge rule triggers the AnalyzeImage function. This is filtering for jpg and jpeg file types where the event source is docRepo.s3. This function uses Amazon Rekognition to identify labels in the images.

Both functions produce new events containing the entities and labels, using new detail-type and source attributes. These events are published back to the default bus on EventBridge:

Analyzers processing events

  1. A matching EventBridge rule invokes the relevant analyzer function. The function uses Amazon ML services to detect labels in images and entities in text.
  2. The functions publish the metadata back to EventBridge, using new detail-type and source attributes.
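
A minimal sketch of the two analysis calls themselves, using boto3; the confidence threshold and the returned fields are illustrative:

import boto3

comprehend = boto3.client("comprehend")
rekognition = boto3.client("rekognition")

def analyze_text(text):
    """Detect entities in a text batch with Amazon Comprehend."""
    response = comprehend.detect_entities(Text=text, LanguageCode="en")
    return [entity["Text"] for entity in response["Entities"]]

def analyze_image(bucket, key):
    """Detect labels in an image with Amazon Rekognition."""
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=70,
    )
    return [label["Name"] for label in response["Labels"]]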

Loader function

The Loader function is invoked by an EventBridge rule that is filtering for events from the Analyzers functions. This final function receives those events and loads the labels and entities metadata into the Amazon Elasticsearch Service:

Loader function processing events

Choosing between AWS Step Functions and Amazon EventBridge

In this application, there is a sequence of steps to the workflow that could also be handled by AWS Step Functions. Both services can simplify workflows in distributed applications and make it easier to maintain and modify serverless applications. In many cases, it makes sense to use both services for larger enterprise applications with complex business logic.

However, EventBridge enables you to separate processes into independent applications. It also allows other consumers to build custom logic using your events without impacting your application design or performance. In enterprise applications, this makes it much easier to innovate and develop new application features.

Benefits for developers

With the original monolithic application divided into five separate applications, it’s now easier for different teams to work on this project. It’s also easier and safer to deploy changes to a single microservice without needing to deploy the entire application. Developers must only understand their own service rather than the complete architecture of the application.

For example, to add more S3 buckets to the source list, you only need to modify the SAM template in the setup part of the application. The Parser function consumes put events from any number of buckets, and downstream functions consume events via EventBridge. To add a new file type, you only need to add a new converter function. Or to change the indexing provider, you create a new loader function to route the metadata to another service. The services of this application are independent, decoupled by EventBridge, and you can add more producers and consumers as required.

Traditionally, one of the challenges with event-based applications is tracking the format of events. Event schemas are typically hard to manage because any service can produce an event. The schema may also change as developers release new versions of a service. To help solve these issues, EventBridge has a feature called schema discovery that can automate the tracking and management of events in your application.

All the microservices in this application publish with a source attribute of docRepo. If you enable schema discovery, EventBridge quickly identifies these custom event schemas:

Schema discovery in Amazon EventBridge

The schemas are defined in JSON using the OpenAPI Specification. As you develop new features, you can download code bindings directly from these schemas. For type-safe languages, this allows you to use events as objects directly in your applications, helping to accelerate development. To learn more about how to use code bindings and schema discovery, watch this video:

Conclusion

Larger applications can quickly become monoliths. You can use event-based architectures to decouple services within applications, and maintain flexibility as your application grows. Amazon EventBridge is a serverless event bus that can help simplify your architecture, allowing each service to operate independently with no dependence on event consumers.

In this post, I show how to rearchitect the Serverless Document Repository example into five smaller applications orchestrated using events. I explore the benefits of developing applications using this approach, including the ability to make changes more easily. I also show how EventBridge schema discovery can help automate event schema management.

To learn more about how to use Amazon EventBridge to decouple large applications, visit the Amazon EventBridge learning path.

Serving Billions of Ads in Just 100 ms Using Amazon ElastiCache for Redis

Post Syndicated from Rodrigo Asensio original https://aws.amazon.com/blogs/architecture/serving-billions-of-ads-with-amazon-elasticache-for-redis/

This post was co-written with Lucas Ceballos, CTO of Smadex

Introduction

Showing ads may seem to be a simple task, but it’s not. Showing the right ad to the right user is an incredibly complex challenge that involves multiple disciplines such as artificial intelligence, data science, and software engineering. Doing it one million times per second with a 100-ms constraint is even harder.

In the ad-tech business, speed and infrastructure costs are the keys to success. The less the final user waits for an ad, the higher the probability of that user clicking on the ad. Doing that while keeping infrastructure costs under control is crucial for business profitability.

About Smadex

Smadex is the leading mobile-first programmatic advertising platform, specifically built to deliver the best user acquisition performance and complete transparency.

Its state-of-the-art demand-side platform (DSP) technology provides advertisers with the tools they need to achieve their goals and ROI, with measurable results from web forms, post-app install events, store visits, and sales.

Smadex advertising architecture

What does showing ads look like under the hood? At Smadex, our technology works based on the OpenRTB (Real-Time Bidding) protocol.

RTB is a means by which advertising inventory is bought and sold on a per-impression basis, via programmatic instantaneous auction, which is similar to financial markets.

To show ads, we participate in auctions, deciding in real time which ad to show and how much to bid, trying to optimize the cost of every impression.

High level diagram

  1. The final user browses the publisher’s website or app.
  2. Ad-exchange is called to start a new auction.
  3. Smadex receives the bid request and has to decide which ad to show and how much to offer in just 100 ms (and this is happening one million times per second).
  4. If Smadex won the auction, the ad must be sent and rendered on the publisher’s website or app.
  5. In the end, the user interacts with the ad sending new requests and events to Smadex platform.

Flow of data

As you can see in the previous diagram, showing ads is just one part of the challenge. After the ad is shown, the final user interacts with it in multiple ways, such as clicking it, installing an application, subscribing to a service, etc. This happens during a determined period that we call the “attribution window.” All of those interactions must be tracked and linked to the original bid transaction (using the request_id parameter).

Doing this is complicated: billions of bid transactions must be stored and available so that they can be quickly accessed every time the user interacts with the ad. The longer we store the transactions, the longer we can “wait” for an interaction to take place, and the better for our business and our clients, too.

Detailed diagram

Challenge #1: Cost

The challenge is: What kind of database can store billions of records per day, with at least 30 days of retention (the attribution window), be accessed by key-value lookups, and cost as little as possible?

The answer is…none! Based on our research, all the available options that met the technical requirements were way out of our budget.

So…how to solve it? Here is where creativity and the combination of different AWS services come into play.

We started to analyze the time dispersion of the events trying to find some clues. The interesting thing we spotted was that 90% of what we call “post-bid events” (impression, click, install, etc.) happened within one hour after the auction took place.

That means that we can process 90% of post-bid events by storing just one hour of bids.

Under our current workload, in one hour we participate in approximately 3.7 billion auctions generating 100 million bid records of an average 600 bytes each. This adds up to 55 gigabytes per hour, an easier amount of data to process.

Instead of thinking about one single database to store all the bid requests, we decided to split bids into two different categories:

  • Hot Bid: A request that took place within the last hour (small amount and frequently accessed)
  • Cold Bid: A request that took place more than one hour ago (huge amount and infrequently accessed)

Amazon ElastiCache for Redis is the best option to store 55 GB of data in memory, which gives us the ability to query in a key-value way with the lowest possible latency.
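
A minimal sketch of the hot-bid write and read paths with the redis-py client, assuming bids are keyed by their request UUID and expire after the one-hour hot window. The endpoint is a placeholder.

import json

import redis

HOT_WINDOW_SECONDS = 3600  # one hour, per the analysis above

# Placeholder ElastiCache for Redis endpoint.
r = redis.Redis(host="my-cluster.xxxxxx.0001.euw1.cache.amazonaws.com", port=6379)

def store_hot_bid(request_id, bid):
    # Write the bid and let it expire once it falls out of the hot window.
    r.setex(request_id, HOT_WINDOW_SECONDS, json.dumps(bid))

def get_hot_bid(request_id):
    value = r.get(request_id)
    return json.loads(value) if value else None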

Hot Bids flow

Hot Bids flow diagram

  1. Every new bid is a hot bid by definition so it’s going to be stored in the hot bids Redis cluster.
  2. At the moment of the user interaction with the ad, the Smadex tracker component receives an HTTPS notification, including the bid request UUID that originated it.
  3. Based on the date of occurrence extracted from the received UUID, the tracker component can determine if it’s looking for a hot bid or not. If it’s a hot bid, the tracker reads it directly from Redis performing a key-value lookup query.

It’s been easy so far but what to do with the other 29 days and 23 hours we need to store?

Challenge #2: Performance

As we previously mentioned, cold bids are a huge number of infrequently accessed records, with only 10% of post-bid events pointing to them. That sounds like a good use case for an inexpensive and slower data store like Amazon S3.

Thanks to S3’s low-cost storage prices, combined with the ability to query S3 objects directly using Amazon Athena, we were able to optimize our costs by storing and querying cold bids with a serverless architecture.

Cold Bids Flow

Cold Bids flow diagram

  1. Incoming bids are buffered by Fluentd and flushed to S3 every minute in JSON format. Every single file flushed to S3 contains all the bids processed by a specific EC2 instance for one minute.
  2. An AWS Lambda function is automatically triggered on every new PutObject event from S3. This function transforms the JSON records to Parquet format and saves them back to the S3 bucket, this time into a specific partition folder based on the file creation timestamp.
  3. As with the hot bids flow, the tracker component determines whether it’s looking for a hot or a cold bid based on the timestamp extracted from the request UUID. In this case, the cold bid is retrieved by running an Amazon Athena lookup query that leverages partitions and the Parquet format to minimize the latency and the amount of data that must be scanned, as sketched below.
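
A sketch of that cold-bid lookup with boto3 and Athena; the database, table, partition column, and output location are all placeholders.

import time

import boto3

athena = boto3.client("athena")

def get_cold_bid(request_id, bid_date):
    query = (
        "SELECT * FROM bids "                 # placeholder table
        f"WHERE dt = '{bid_date}' "           # partition pruning keeps the scan small
        f"AND request_id = '{request_id}' LIMIT 1"
    )
    execution = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": "adtech"},                       # placeholder
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder
    )
    query_id = execution["QueryExecutionId"]

    # Poll until the query finishes, then read the result rows.
    state = "RUNNING"
    while state in ("QUEUED", "RUNNING"):
        time.sleep(0.2)
        state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    return athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]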

Conclusion

Thanks to this combined approach using different technologies and a variety of AWS services, we were able to extend our attribution window from 30 to 90 days while reducing infrastructure costs by 45%.

Automating scalable business workflows using minimal code

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/automating-scalable-business-workflows-using-minimal-code/

Organizations frequently have complex workflows embedded in their processes. When a customer places an order, it triggers a workflow. Or when an employee requests vacation time, this starts another set of processes. Managing these at scale can be challenging in traditional applications, which must often manage thousands of separate tasks.

In this blog post, I show how to use a serverless application to build and manage enterprise workflows at scale. This minimal-code solution is highly scalable and flexible, and can be modified easily to meet your needs. This application uses Amazon S3, AWS Lambda, and AWS Step Functions:

Using S3-to-Lambda to trigger Step Functions workflows

AWS Step Functions allows you to represent workflows as a JSON state machine. This service can help remove custom code and convoluted logic from distributed systems, and make them easier to maintain and modify. S3 is a highly scalable service that stores trillions of objects, and Lambda runs custom code in response to events. By combining these services, it’s simple to build resilient workflows with high throughput, triggered by putting objects in S3 buckets.

There are many business use-cases for this approach. For example, you could automatically pay invoices from approved vendors under a threshold amount by reading the invoices stored in S3 using Amazon Textract. Or your application could automatically book consultations for patients emailing their completed authorization forms. Almost any action that is triggered by a document or form is a potential candidate for an automated workflow solution.

To set up the example application, visit the GitHub repo and follow the instructions in the README.md file. The code uses the AWS Serverless Application Model (SAM), enabling you to deploy the application in your own AWS account. This walkthrough creates resources covered in the AWS Free Tier but you may incur cost if you test with large amounts of data.

How the application works

The starting point for this serverless solution is S3. When new objects are stored, this triggers a Lambda function that starts an execution in the Step Functions workflow. Lambda scales to keep pace as more objects are written to the S3 bucket, and Step Functions creates a separate execution for each S3 object. It also manages the state of all the distinct workflows.
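
The example application implements this trigger in Node.js. Purely as an illustration of the logic, a Python sketch of the start-execution step could look like this; the stateMachineArn environment variable matches the SAM template shown later in this post.

import json
import os

import boto3

sfn = boto3.client("stepfunctions")

def handler(event, context):
    # One S3 event can contain several records; start one execution per object.
    for record in event["Records"]:
        s3_object = {
            "bucket": record["s3"]["bucket"]["name"],
            "key": record["s3"]["object"]["key"],
        }
        sfn.start_execution(
            stateMachineArn=os.environ["stateMachineArn"],
            input=json.dumps(s3_object),
        )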

Simple Step Functions workflow.

  1. A downstream process stores data in the S3 bucket.
  2. This invokes the Start Execution Lambda function. The function creates a new execution in Step Functions using the S3 object as event data.
  3. The workflow invokes the Decider function. This uses Amazon Rekognition to detect the contents of objects stored in S3.
  4. This function uses environment variables to determine the matching attributes. If the S3 object matches the criteria, it triggers the Match function. Otherwise, the No Match function is invoked.

The application’s SAM template configures the Step Functions state machine as JSON. It also defines an IAM role allowing Step Functions to invoke the Lambda functions. The initial function invoked by S3 is defined to accept the state machine ARN as an environment variable. The template also defines the permissions needed and the S3 trigger:

  StartExecutionFunction:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: StartExecutionFunction/
      Handler: app.handler
      Runtime: nodejs12.x
      MemorySize: 128
      Environment:
        Variables:
          stateMachineArn: !Ref 'MatcherStateMachine'
      Policies:
        - S3CrudPolicy:
            BucketName: !Ref InputBucketName
        - Statement:
          - Effect: Allow
            Resource: !Ref 'MatcherStateMachine'
            Action:
              - states:*
      Events:
        FileUpload:
          Type: S3
          Properties:
            Bucket: !Ref InputBucket
            Events: s3:ObjectCreated:*

This uses SAM policy templates to provide read access to the S3 bucket. It also defines the event that causes the function invocation from S3 whenever a new object is created in the bucket.

The Decider function is the first step of the Step Functions workflow. It uses Amazon Rekognition to detect labels and words in the images provided. The SAM template passes the required labels and words to the function, together with an optional confidence score:

  DeciderFunction:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: deciderFunction/
      Handler: app.handler
      Environment:
        Variables:
          requiredWords: "NEW YORK"
          requiredLabels: "Driving License,Person"
          minConfidence: 70

If the requiredLabels environment variable is present, the function’s code calls Amazon Rekognition’s detectLabels method. It then calls the detectText method if the requiredWords environment variable is used:

// The standard Lambda handler
exports.handler = async (event) => {
  return await processDocument(event)
}

// Detect words/labels on document or image
const processDocument = async (event) => {

  // If using a required labels
  if (process.env.requiredLabels) {
    // If no match, return immediately
    if (!await checkRequiredLabels(event)) return 'NoMatch'
  }  

  // If using a required words test
  if (process.env.requiredWords) {
    // If no match, return immediately
    if (!await checkRequiredWords(event)) return 'NoMatch'
  }

  return 'Match'
}

The Decider function returns “Match” or “NoMatch” to the Step Functions workflow. This invokes downstream functions depending on the result. The Match and No Match functions are stubs where you can build the intended functionality in the workflow. This Step Functions workflow is designed generically so you can extend the functionality easily.

Testing the application

Deploy the first application by following the README.md in the GitHub repo, and note the application’s S3 bucket name. There are three test cases:

  • Create a workflow for a matched subject in an image. From photos uploaded to S3, identify which images contain one or more subjects, and invoke the Match path of the workflow.
  • Create a workflow for invoices from a specific vendor. From multiple uploaded invoices, match those from a specific vendor and trigger the Match path of the workflow.
  • Create a workflow for driver’s licenses issued by a single state. From a collection of driver’s licenses, trigger the Match workflow for only a single state.

1. Create a workflow for matched subject in an image

In this example, the application identifies cats in images uploaded to the S3 bucket. The default configuration in the SAM template in the GitHub repo contains the environment variables set for this example:

Environment variables in SAM template.

First, I upload over 20 images of various animals to the S3 bucket:

Uploading files to the S3 bucket.

After navigating to the Step Function console, and selecting the application’s state machine, it shows 24 separate executions, one per image:

Step Functions execution detail.

I select one of these executions, for cat3.jpg. This has followed the MatchFound execution path of the workflow:

MatchFound execution path.

2. Create a workflow for invoices for a specified vendor.

For this example, the application looks for a customer account number and vendor name in invoices uploaded to the S3 bucket. The Decider function uses environment variables to determine the matching keywords. These can be updated by either deploying the SAM template or editing the Lambda function directly.

I modify the SAM template to match the vendor name and account number as follows:

SAM template with vendor information.

Next I upload several different invoices from the local machine to the S3 bucket:

Uploading different files to the S3 bucket.

In the Step Functions console, I select the execution for utility-bill.png. This execution matches the criteria and follows the MatchFound path in the workflow.

MatchFound path in visual workflow.

3. Matching a driver’s license by state

In this example, the application routes based upon the state where a driver’s license is issued. For this test, I use a range of sample images of licenses from DMVs in multiple states.

I modify the SAM template so that the Decider function uses both label and word detection. I set “Driving License” and “Person” as required labels. This ensures that Amazon Rekognition identifies that a person is in the photo in addition to the document type.

Environment variables in the SAM template.

Next, I upload the driver’s license images to the S3 bucket:

Uploading files to the S3 bucket.

In the Step Functions console, I open the execution for the driver-license-ny.png file, and it has followed the MatchFound path in the workflow:

Execution path for driver's license test.

When I select the execution for the Texas driver’s license, this did not match and has followed the NoMatchFound execution path:

Execution path for NoMatchFound.

Extending the functionality

By triggering Step Functions workflows from S3 PutObject events, this application is highly scalable. As more objects are stored in the S3 bucket, it creates as many executions as needed in the state machine. The custom code only handles the specific logic requirements for a single object and the Lambda service scales up to meet demand.

In these examples, the application uses Amazon Rekognition to analyze specific document types or image contents. You could extend this logic to include value ranges, multiple alternative workflow paths, or include steps to enable human intervention.

Using Step Functions also makes it easy to modify workflows as requirements change. Any incomplete workflows continue on the existing version of the state machine used when they started. As a result, you can add steps without impacting existing code, making it faster to adapt applications to users’ needs.

Conclusion

You can use Step Functions to model many common business workflows with JSON. Combining this powerful workflow management service with the scalability of S3 and Lambda, you can quickly build nuanced solutions that operate at scale.

In this post, I show how you can deploy a simple Step Functions workflow where executions are created by objects stored in an S3 bucket. Using minimal code, it can perform complex workflow routing tasks based on document types and contents. This provides a highly flexible and scalable way to manage common organizational workflow needs.

Building well-architected serverless applications: Understanding application health – part 2

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-understanding-application-health-part-2/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the nine serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the Introduction post for a table of contents and an explanation of the example application.

Question OPS1: How do you evaluate your serverless application’s health?

This post continues part 1 of this Operational Excellence question. Previously, I covered using Amazon CloudWatch out-of-the-box standard metrics and alerts, and structured and centralized logging with the Embedded Metrics Format.

Best practice: Use application, business, and operations metrics

Identifying key performance indicators (KPIs), including business, customer, and operations outcomes, in addition to application metrics, helps to show a higher-level view of how the application and the business are performing.

Business KPIs measure your application performance against business goals. For example, if fewer flight reservations are flowing through the system, the business would like to know.

Customer experience KPIs highlight the overall effectiveness of how customers use the application over time. Examples are perceived latency, time to find a booking, make a payment, etc.

Operational metrics help to see how operationally stable the application is over time. Examples are continuous integration/delivery/deployment feedback time, mean-time-between-failure/recovery, number of on-call pages and time to resolution, etc.

Custom Metrics

The Embedded Metrics Format can also emit custom metrics to help you understand how your workload’s health affects the business.

The airline booking service uses AWS Step Functions, AWS Lambda, Amazon SNS, and Amazon DynamoDB.

In the confirm booking module function handler, I add a new namespace and dimension to associate this set of logs with this application and service.

metrics.set_namespace("ServerlessAirlineEMF")
metrics.put_dimensions({"service":"confirm_booking"})

Within the try block of the try/except statement, I emit a metric for a successful booking:

metrics.put_metric("BookingSuccessful", 1, "Count")

Within the except block, I emit a metric for a failed booking:

metrics.put_metric("BookingFailed", 1, "Count")

Once I make a booking, within the CloudWatch console, I navigate to Logs | Log groups and select the Airline-ConfirmBooking-develop group. I select a log stream and find the custom metric as part of the structured log entry.

structured-log-entry

Custom metric structured log entry

I can also create a custom metric graph. Within the CloudWatch console, I navigate to Metrics. I see the ServerlessAirlineEMF Custom Namespace is available.

custom-namespace

Custom metric namespace

I select the Namespace and the available metric.

available-namespace

Available metric

I select a Line graph, add a name, and see the successful booking now plotted on the graph.

custom-metric-plotted

Plotted CloudWatch metric

I can also visualize and analyze this metric using CloudWatch Insights.

Once a booking is made, within the CloudWatch console, I navigate to Logs | Insights. I select the /aws/lambda/Airline-ConfirmBooking-develop log group. I choose Run query which shows a list of discovered fields on the right of the console.

I can search for discovered booking related fields.

cloudwatch-insights-discovered-fields

I then enter the following in the query pane to search the logs and plot the sum of BookingReference, and choose Run Query:

fields @timestamp, @message
| stats sum(@BookingReference)

cloudwatch-insights-displayed-bookingreference

CloudWatch Insights query

There are a number of other component metrics that are important to measure. Track interactions between upstream and downstream components such as message queue length, integration latency, and throttling.

Improvement plan summary:

  1. Identify user journeys and metrics from each customer transaction.
  2. Create custom metrics asynchronously instead of synchronously for improved performance, cost, and reliability outcomes.
  3. Emit business metrics from within your workload to measure application performance against business goals.
  4. Create and analyze component metrics to measure interactions with upstream and downstream components.
  5. Create and analyze operational metrics to assess the health of your continuous delivery pipeline and operational processes.

Good practice: Use distributed tracing and instrument code with additional context

Logging provides information on the individual point in time events the application generates. Tracing provides a wider continuous view of an application. Tracing helps to follow a user journey or transaction through the application.

AWS X-Ray is one example of a distributed tracing solution; there are a number of third-party options as well. X-Ray collects data about the requests that your application serves and builds a service graph, which shows all the components of an application. This provides visibility to understand how upstream and downstream services may affect workload health.

For the most comprehensive view, enable X-Ray across as many services as possible and include X-Ray tracing instrumentation in code. This is the list of AWS Services integrated with X-Ray.

X-Ray receives data from services as segments. Segments for a common request are then grouped into traces. Segments can be further broken down into more granular subsegments. Custom data key-value pairs are added to segments and subsegments with annotations and metadata. Traces can also be grouped which helps with filter expressions.

AWS Lambda instruments incoming requests for all supported languages. Lambda application code can be further instrumented to emit information about its status, correlation identifiers, and business outcomes to determine transaction flows across your workload.

X-Ray tracing for a Lambda function is enabled in the Lambda console. I select the Airline-ReserveBooking-develop function and, in the Configuration pane, enable X-Ray active tracing.

x-ray-enable

X-Ray tracing enabled

X-Ray can also be enabled via CloudFormation with the following code:

TracingConfig:
  Mode: Active

Lambda IAM permissions to write to X-Ray are added automatically when active tracing is enabled via the console. When using CloudFormation, allow the actions xray:PutTraceSegments and xray:PutTelemetryRecords in the function’s execution role.

It is important to understand which invocations X-Ray traces. X-Ray applies a sampling algorithm. If an upstream service, such as API Gateway with X-Ray tracing enabled, has already sampled a request, the Lambda function request is also sampled. Without an upstream request, X-Ray traces data for the first Lambda invocation each second, and then 5% of additional invocations.

For the airline application, X-Ray tracing is initiated within the shared library with the code:

from aws_xray_sdk.core import models, patch_all, xray_recorder

Segments, subsegments, annotations, and metadata are added to functions with the following example code:

segment = xray_recorder.begin_segment('segment_name')
# Start a subsegment
subsegment = xray_recorder.begin_subsegment('subsegment_name')
# Add metadata and annotations
segment.put_metadata('key', dict, 'namespace')
subsegment.put_annotation('key', 'value')
# Close the subsegment and segment
xray_recorder.end_subsegment()
xray_recorder.end_segment()

For example, within the collect payment module, an annotation is added for a successful payment with:

tracer.put_annotation("PaymentStatus", "SUCCESS")

CloudWatch ServiceLens

Once a booking is made and payment is successful, the tracing is available in the X-Ray console.

I explore how Amazon CloudWatch ServiceLens connects metrics, logs, and X-Ray traces. Within the CloudWatch console, I navigate to ServiceLens | Service Map.

I can visualize all application resources and dependencies where X-Ray is enabled, and trace performance or availability issues. If there were an issue connecting to SNS, for example, it would be shown here.

I select the Airline-CollectPayment-develop node and can view the out-of-the-box standard Lambda metrics.

I can select View Logs to jump to the CloudWatch Logs Insights console.

cloudwatch-insights-service-map-view

CloudWatch Insights Service map

I select View dashboard to see the function metrics, node map, and function details.

cloudwatch-insights-service-map-dashboard

CloudWatch Insights Service Map dashboard

I select View traces and can filter by the custom annotation PaymentStatus. I select SUCCESS and choose Add to filter. I then select a trace.

cloudwatch-insights-service-map-select-trace

CloudWatch Insights Filtered traces

I see the full trace details, which show the full application transaction of a payment collection.

cloudwatch-insights-service-map-view-trace

Segments timeline

Selecting the Lambda handler subsegment – ## lambda_handler, I can view the trace Annotations and Metadata, which include the business transaction details such as Customer and PaymentStatus.

cloudwatch-insights-service-map-view-trace-annotations

X-Ray annotations

Trace groups are another feature of X-Ray and ServiceLens. Trace groups use filter expressions, such as Annotation.PaymentStatus = "FAILED", to view traces that match the particular group. Service graphs can also be viewed, and CloudWatch alarms created, based on the group.

CloudWatch ServiceLens provides powerful capabilities to understand application performance bottlenecks and issues, helping determine how users are impacted.

Improvement plan summary:

  1. Identify common business context and system data that are commonly present across multiple transactions.
  2. Instrument SDKs and requests to upstream/downstream services to understand the flow of a transaction across system components.

Recent announcements

There have been a number of recent announcements for X-Ray and CloudWatch to improve how to evaluate serverless application health.

Conclusion

Evaluating application health helps you identify which services should be optimized to improve your customer’s experience. In part 1, I cover out-of-the-box standard metrics and alerts, as well as structured and centralized logging. In this post, I explore custom metrics and distributed tracing and show how to use ServiceLens to view logs, metrics, and traces together.

In an upcoming post, I will cover the next Operational Excellence question from the Well-Architected Serverless Lens – Approaching application lifecycle management.

ICYMI: Serverless Q1 2020

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/icymi-serverless-q1-2020/

Welcome to the ninth edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

A calendar of January, February, and March.

In case you missed our last ICYMI, check out what happened last quarter here.

Launches/New products

In 2018, we launched the AWS Well-Architected Tool. This allows you to review workloads in a structured way based on the AWS Well-Architected Framework. Until now, we’ve provided workload-specific advice using the concept of a “lens.”

As of February, this tool now lets you apply those lenses to provide greater visibility into specific technology domains, to assess risks and find areas for improvement. Serverless is the first available lens.

You can apply a lens when defining a workload in the Well-Architected Tool console.

A screenshot of applying a lens.

HTTP APIs beta was announced at AWS re:Invent 2019. Now HTTP APIs is generally available (GA) with more features to help developers build APIs better, faster, and at lower cost. HTTP APIs for Amazon API Gateway is built from the ground up based on lessons learned from building REST and WebSocket APIs, and looking closely at customer feedback.

For the majority of use cases, HTTP APIs offers up to a 60% reduction in latency.

HTTP APIs cost at least 71% less than API Gateway REST APIs.

A bar chart showing the cost comparison between HTTP APIs and API Gateway.

HTTP APIs also offers a more intuitive experience and powerful features, like easily configuring cross-origin resource sharing (CORS), JWT authorizers, auto-deploying stages, and simplified route integrations.

AWS Lambda

You can now view and monitor the number of concurrent executions of your AWS Lambda functions by version and alias. Previously, the ConcurrentExecutions metric measured and emitted the sum of concurrent executions for all functions in the account, including those with a reserved concurrency limit specified.

Now, the ConcurrentExecutions metric is emitted for all functions, versions, and aliases. This can be used to see which functions consume your concurrency limits and to estimate peak traffic based on consumption averages. Fine-grained visibility in these areas can help you plan appropriate configuration for Provisioned Concurrency.
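
As a rough illustration of consuming this metric, the sketch below queries ConcurrentExecutions for a specific alias with the AWS SDK for Python (boto3). The function name and alias are placeholders, and using the Resource dimension in the form function-name:alias is an assumption based on how Lambda publishes per-resource metrics.

from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

def peak_concurrency(function_name, alias, hours=1):
    """Return the maximum concurrent executions for a function alias over recent hours."""
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/Lambda",
        MetricName="ConcurrentExecutions",
        Dimensions=[
            {"Name": "FunctionName", "Value": function_name},
            {"Name": "Resource", "Value": f"{function_name}:{alias}"},  # assumed dimension format
        ],
        StartTime=datetime.utcnow() - timedelta(hours=hours),
        EndTime=datetime.utcnow(),
        Period=60,
        Statistics=["Maximum"],
    )
    return max((point["Maximum"] for point in stats["Datapoints"]), default=0)

# Example usage with placeholder names
print(peak_concurrency("my-function", "prod"))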

A Lambda function written in Ruby 2.7.

A Lambda function written in Ruby 2.7.

AWS Lambda now supports Ruby 2.7. Developers can take advantage of new features in this latest release of Ruby, like pattern matching, argument forwarding and numbered arguments. Lambda functions written in Ruby 2.7 run on Amazon Linux 2.

Updated AWS Mock .NET Lambda Test Tool

Updated AWS Mock .NET Lambda Test Tool

.NET Core 3.1 is now a supported runtime in AWS Lambda. You can deploy to Lambda by setting the runtime parameter value to dotnetcore3.1. Updates have also been released for the AWS Toolkit for Visual Studio and the .NET Core Global Tool Amazon.Lambda.Tools. These make it easier to build and deploy your .NET Core 3.1 Lambda functions.

With .NET Core 3.1, you can take advantage of all the new features it brings to Lambda, including C# 8.0 and F# 4.7 support, .NET Standard 2.1 support, a new JSON serializer, and ReadyToRun for ahead-of-time compilation. The AWS Mock .NET Lambda Test Tool has also been updated to support .NET Core 3.1, with new features to help debug and improve your workloads.

Cost Savings

Last year we announced Savings Plans for AWS Compute Services. This is a flexible discount model provided in exchange for a commitment of compute usage over a period of one or three years. AWS Lambda now participates in Compute Savings Plans, allowing customers to save money. Visit the AWS Cost Explorer to get started.

Amazon API Gateway

With HTTP APIs now generally available, customers can build APIs for services behind private ALBs, private NLBs, and IP-based services registered in AWS Cloud Map, such as ECS tasks. To make it easier to work across API Gateway REST APIs and HTTP APIs, customers can now use the same custom domain for both. In addition, this release enables granular throttling for routes, improves usability when using Lambda as a backend, and provides better error logging.

AWS Step Functions

AWS Step Functions VS Code plugin.

We launched the AWS Toolkit for Visual Studio Code back in 2019 and last month we added toolkit support for AWS Step Functions. This enables you to define, visualize, and create workflows without leaving VS Code. As you craft your state machine, it is continuously rendered with helpful tools for debugging. The toolkit also allows you to update state machines in the AWS Cloud with ease.

To further help with debugging, we’ve added AWS Step Functions support for CloudWatch Logs. For standard workflows, you can select different levels of logging and can exclude logging of a workflow’s payload. This makes it easier to monitor event-driven serverless workflows and create metrics and alerts.

AWS Amplify

AWS Amplify is a framework for building modern applications, with a toolchain for easily adding services like authentication, storage, APIs, hosting, and more, all via command line interface.

Customers can now use the Amplify CLI to take advantage of AWS Amplify console features like continuous deployment, instant cache invalidation, custom redirects, and simple configuration of custom domains. This means you can do end-to-end development and deployment of a web application entirely from the command line.

Amazon DynamoDB

You can now easily extend your existing Amazon DynamoDB tables into additional AWS Regions, without table rebuilds, by updating to the latest version of global tables. You benefit from improved replicated write efficiencies without any additional cost.

On-demand capacity mode is now available in the Asia Pacific (Osaka-Local) Region. This is a flexible capacity mode for DynamoDB that can serve thousands of requests per second without requiring capacity planning. DynamoDB on-demand offers simple pay-per-request pricing for read and write requests so that you only pay for what you use, making it easy to balance cost and performance.

AWS Serverless Application Repository

The AWS Serverless Application Repository (SAR) is a service for packaging and sharing serverless application templates using the AWS Serverless Application Model (SAM). Applications can be customized with parameters and deployed with ease. Previously, applications could only be shared publicly or with specific AWS account IDs. Now, SAR has added sharing for AWS Organizations. These new granular permissions can be added to existing SAR applications. Learn how to take advantage of this feature today to help improve your organization’s productivity.

Amazon Cognito

Amazon Cognito, a service for managing identity providers and users, now supports CloudWatch Usage Metrics. This allows you to monitor events in near-real time, such as sign-in and sign-out. These can be turned into metrics or CloudWatch alarms at no additional cost.

Cognito User Pools now supports logging for all API calls with AWS CloudTrail. The enhanced CloudTrail logging improves governance, compliance, and operational and risk auditing capabilities. Additionally, Cognito User Pools now enables customers to configure case sensitivity settings for user aliases, including native user name, email alias, and preferred user name alias.

Serverless posts

Our team is always working to build and write content to help our customers better understand all our serverless offerings. Here is a list of the latest posts published to the AWS Compute Blog this quarter.

January

February

March

Tech Talks and events

We hold AWS Online Tech Talks covering serverless topics throughout the year. You can find these in the serverless section of the AWS Online Tech Talks page. We also deliver talks at conferences and events around the globe, regularly join in on podcasts, and record short videos so you can learn in quick, byte-sized chunks.

Here are the highlights from Q1.

January

February

March

Live streams

Rob Sutter, a Senior Developer Advocate on AWS Serverless, has started hosting Serverless Office Hours every Tuesday at 14:00 ET on Twitch. He’ll be imparting his wisdom on Step Functions, Lambda, Golang, and taking questions on all things serverless.

Check out some past sessions:

Happy Little APIs Season 2 is airing every other Tuesday on the AWS Twitch Channel. Check out the first episode, where Eric Johnson and Ran Ribenzaft, Serverless Hero and CTO of Epsagon, talk about private integrations with HTTP APIs.

Eric Johnson is also streaming “Sessions with SAM” every Thursday at 10AM PST. Each week Eric shows how to use SAM to solve different problems with serverless and how to leverage SAM templates to build out powerful serverless applications. Catch up on the last few episodes on our Twitch channel.

Relax with a cup of your favorite morning beverage every Friday at 12PM EST with a Serverless Coffee Break with James Beswick. These are chats about all things serverless with special guests. You can catch these live on Twitter or on your own time with these recordings.

AWS Serverless Heroes

This year, we’ve added some new faces to the list of AWS Serverless Heroes. The AWS Hero program is a selection of worldwide experts that have been recognized for their positive impact within the community. They share helpful knowledge and organize events and user groups. They’re also contributors to numerous open-source projects in and around serverless technologies.

Still looking for more?

The Serverless landing page has even more information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and Getting Started tutorials.

Serverless Stream-Based Processing for Real-Time Insights

Post Syndicated from Justin Pirtle original https://aws.amazon.com/blogs/architecture/serverless-stream-based-processing-for-real-time-insights/

Building on our previous posts regarding messaging patterns and queue-based processing, we now explore stream-based processing and how it helps you achieve low-latency, near real-time data processing in your applications. AWS offers two managed services for streaming, Amazon Kinesis and Amazon Managed Streaming for Apache Kafka (Amazon MSK).

What is streaming data?

At AWS, we define streaming data as data that is emitted at high volume in a continuous, incremental manner with the goal of low-latency processing. Whereas traditional batch-oriented business intelligence offers insights in retrospect, after months, days, or hours have passed, stream-based processing can offer actionable insights in real time. Stream-based processing is commonly used to respond to clickstream events, rapidly ingest various types of logs, and extract, transform, and load (ETL) data in real time into data lakes and data warehouses.

Amazon Kinesis is the AWS service that makes it easy to collect, process, and analyze such real-time, streaming data with four different capabilities:

  • Kinesis Data Streams
  • Kinesis Data Firehose
  • Kinesis Data Analytics
  • Kinesis Video Streams

For this blog post, we focus on Kinesis Data Streams and Kinesis Data Firehose, since both of these services are foundational for streaming, ingestion, buffering, and processing in your streaming data pipeline.

Kinesis Data Streams

Amazon Kinesis Data Streams is a massively scalable service that can continuously capture gigabytes of data per second from hundreds of thousands of sources. Like many distributed systems, Kinesis Data Streams achieves this level of scalability by partitioning or sharding your data where records are simultaneously written to and read from different shards in parallel. All Kinesis Data Streams require allocation of at least one shard and you choose how many shards you want to allocate to a given stream.

When writing to a shard in a Kinesis Data Stream, each shard supports ingestion of up to 1 MB of data per second or 1,000 records written per second. When reading from a shard, each shard supports output of 2 MB of data per second. You choose an initial number of shards to allocate for your Kinesis Data Stream, then can update your shard allocation over time. Increasing your shard allocation enables your application to easily scale from thousands of records to millions of records written per second.

Producing streaming data

Streaming data producers are processes that put records onto a Kinesis stream by calling the putRecord API to write a single record, or the putRecords API to write multiple records in a single invocation. Common approaches for producing messages include direct use of AWS tools:

  • AWS SDK, which simplifies authentication and other semantics of invoking AWS service APIs
  • Amazon Kinesis Agent, which enables local file/log monitoring and rotation sending in real time
  • Amazon Kinesis Producer Library, which simplifies aggregating records into larger payloads to improve throughput.

Additionally, several AWS services natively integrate with Amazon Kinesis as a data producer:

There are also several third-party services that offer native integration as data producers, including:

Regardless of the producer service or tool of choice, all data producers put records onto a stream by providing a partition key, stream name, and the data itself, which altogether must not exceed 1 MB in size. The partition key provided is used to determine which shard the data should be written to on the stream. Amazon Kinesis Data Streams offers ordering guarantees and maintains message ordering within a given shard in a stream, using sequence numbers to track the unique position of each message.
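
As a simple illustration, the sketch below uses the AWS SDK for Python (boto3) to write a single record with a partition key. The stream name and payload shape are placeholders for this sketch.

import json
import boto3

kinesis = boto3.client("kinesis")

def put_clickstream_event(event):
    """Write one record to a hypothetical 'clickstream' Kinesis data stream."""
    response = kinesis.put_record(
        StreamName="clickstream",                 # placeholder stream name
        Data=json.dumps(event).encode("utf-8"),   # record payload, up to 1 MB
        PartitionKey=str(event["user_id"]),       # determines the target shard
    )
    return response["ShardId"], response["SequenceNumber"]

# Example usage
put_clickstream_event({"user_id": 42, "action": "page_view", "page": "/pricing"})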

Consuming streaming data

Once records are written to a Kinesis Data Stream, they are buffered in their respective shards for consumption. Unlike queue-based processing, the records are buffered until the data retention period set on the stream elapses, enabling one or more consumers to replay all the messages in the shards of the stream. If your application must deliver your records to a data lake, data warehouse, Elasticsearch Service cluster, or Splunk, Kinesis Data Firehose can natively deliver your records to the following destinations without needing to write any custom code:

  • Amazon S3
  • Amazon Redshift
  • Amazon Elasticsearch Service
  • Splunk

You simply indicate the desired delivery destination and configure how to batch and deliver the messages. Kinesis Data Firehose can also use your desired S3 object naming, Amazon Redshift table name, Amazon Elasticsearch index name, and more.

For custom processing or destinations outside of the Amazon Kinesis Data Firehose supported services above, you will need to write and execute custom code to consume data from the stream. Though you can use the Kinesis Client Library (KCL) to run your own custom processing application on persistent virtual machines or container instances, AWS Lambda offers serverless computing with native event source integration with Amazon Kinesis Data Streams. AWS Lambda as a stream consumer takes care of the operational overhead of reading shards, maintaining record order, checkpointing as records are processed, and parallelizing processing.

Serverless stream processing with AWS Lambda

When configured with a Kinesis Stream as its event source, AWS Lambda continuously polls every shard in your stream at no extra charge and only invokes your Lambda code if and when there are messages in the stream. It additionally scales up the number of concurrent executions to parallelize reading all shards of a stream at the same time (and can have multiple executions reading the same shard simultaneously for a higher parallelization factor, if desired). AWS Lambda automatically checkpoints which records were successfully processed and handles retries and any failures automatically according to your desired configuration.

Best of all, there is no additional cost for the Lambda service to handle these operational needs for you. You only pay for compute time when your function is invoked and messages are available on the stream for processing. You’re able to focus on processing your data with your business logic directly in your code, since your records are sent as an array to your Lambda function. There is no additional code to author or manage for checkpointing, shard splits/merges, or other complexities.
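
A minimal consumer sketch is shown below. When Lambda is configured with the stream as an event source, it delivers batches of records whose data field is base64-encoded; the processing step here is a placeholder for your business logic.

import base64
import json

def lambda_handler(event, context):
    """Process a batch of Kinesis records delivered by the Lambda event source mapping."""
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        item = json.loads(payload)
        # Placeholder for your business logic
        print(f"Partition key: {record['kinesis']['partitionKey']}, item: {item}")
    return {"batchItemCount": len(event["Records"])}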

Conclusion

In this blog, we defined streaming data and explored the Amazon Kinesis service and its various capabilities. We then reviewed the various options available for producing and consuming real-time streaming data with Amazon Kinesis, including using AWS Lambda for serverless streaming data processing. Please refer to the following resources for further learning on AWS streaming data processing:

Building a Raspberry Pi telepresence robot using serverless: Part 2

Post Syndicated from Moheeb Zara original https://aws.amazon.com/blogs/compute/building-a-raspberry-pi-telepresence-robot-using-serverless-part-2/

The deployed web frontend and the robot it controls.

The deployed web frontend and the robot it controls.

In a previous post, I show how to build a telepresence robot using serverless technologies and a Raspberry Pi. The result is a robot that transmits live video using Amazon Kinesis Video Streams with WebRTC. It can be driven remotely via an AWS Lambda function using an Amazon API Gateway REST endpoint.

This post walks through deploying a web interface to view the live stream and control the robot. The application is built using AWS Amplify and Vue.js. Amplify is a development framework that makes it easy to add authentication, hosting, and other AWS resources. It also provides a pipeline for deploying web applications.

I use the Amplify Command Line Interface (CLI) to create an authentication flow for user sign-in using Amazon Cognito. I then show how to set up an authorizer in API Gateway so that only authenticated users can drive the robot. An AWS Identity and Access Management (IAM) role sets permissions so users can assume access to Kinesis Video Streams to view the live video feed. The web application is then configured and run locally for testing. Finally, using the Amplify CLI, I show how to add hosting and publish a production-ready web application.

Prerequisites

You need the following to complete the project:

Amplify CLI and project setup

An architecture diagram showing the client relationship between the AWS resources deployed by Amplify.

The Amplify CLI allows you to create and manage resources on AWS. With the libraries and UI components provided by the Amplify Framework, you can build powerful applications using a variety of cloud services.

The web interface for the telepresence robot is built using Amplify Vue.js components for user registration and sign-in. Download the application and use the Amplify CLI to configure resources for the web application.

To install and configure Amplify on the frontend web application, refer to the project set-up instructions on the GitHub project.

Creating an API Gateway authorizer

In the first guide, API Gateway is used to create a REST endpoint to send commands to the robot. Currently, the endpoint accepts requests without any authentication. To ensure that only authenticated users can control the robot, you must create an authorizer for the API.

The backend resources deployed by the Amplify web application include a Cognito User Pool. This is a user directory that provides sign-up and sign-in services, user profiles, and identity providers. The following instructions demonstrate how to configure an authorizer on API Gateway that verifies access using a user pool.

  1. Navigate to the Amazon API Gateway console.
  2. Choose the API created in the first guide for driving the robot.
  3. Choose Authorizers from the menu.
  4. Choose Create New Authorizer. Choose Cognito for Type and select the user pool created by the Amplify CLI. Set Token Source to Authorization.
  5. Choose Create.
  6. Choose Resources from the menu.
  7. Choose POST, Method Request.
  8. Set Authorization to the newly created authorizer.

Adding permissions

The web application loads a component for viewing video from the robot over a WebRTC connection. WebRTC is a protocol for negotiating peer-to-peer data connections by using a signaling channel.

The previous guide configured the robot to use a Kinesis Video Signaling Channel. Users signed into the web application must assume some permissions for Kinesis Video Streams to access the signaling channel.

When the Amplify CLI deploys an authentication flow, it creates a role in IAM. Cognito uses this role to assume permissions for a user pool based on matching conditions.

This Trust Relationship on the authRole controls when the role’s permissions are assumed: in this case, for an “authenticated” user from the identity pool.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "cognito-identity.amazonaws.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "cognito-identity.amazonaws.com:aud": "us-west-2:12345e-9548-4a5a-b44c-12345677"
        },
        "ForAnyValue:StringLike": {
          "cognito-identity.amazonaws.com:amr": "authenticated"
        }
      }
    }
  ]
}

Follow these steps to attach Kinesis Video Streams permissions to the authRole.

  1. Navigate to the IAM console.
  2. Choose Roles from the menu.
  3. Use the search bar to find “authRole”. It is prefixed by the stack name associated with the Amplify deployment. Choose it from the list.
  4. Choose Add inline policy.
  5. Select the JSON tab and paste in the following. In the Resource property, replace <RobotName> with the name of the robot created in the first guide.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "kinesisvideo:GetSignalingChannelEndpoint",
                    "kinesisvideo:ConnectAsMaster",
                    "kinesisvideo:GetIceServerConfig",
                    "kinesisvideo:ConnectAsViewer",
                    "kinesisvideo:DescribeSignalingChannel"
                ],
                "Resource": "arn:aws:kinesisvideo:*:*:channel/<RobotName>/*"
            }
        ]
    }
    
  6. Choose Review Policy.
  7. Choose Create Policy.

Configuring the application

The authorizer allows authenticated users to invoke the Lambda function through API Gateway. The permissions set on the authRole control access to the live video. The web application must know the endpoint for sending commands and the Kinesis Video Signaling Channel to use for the robot.

This information is configured in web-app/src/main.js. It requires a file named config.json to let the application know which endpoint and signaling channel to use.

  1. Inside the application folder aws-serverless-telepresence-robot/web-app/src, create a new file named config.json.
    {
      "endpoint": "",
      "channelARN": ""
    }
  2. Replace endpoint with the Invoke URL of the robot API. This can be found in API Gateway console under Stages, Prod. It can also be found under Outputs in the AWS CloudFormation stack created by the aws-serverless-telepresence-robot serverless application from the first guide.
  3. Replace channelARN with the ARN of your robot’s signaling channel. This can be found in the Amazon Kinesis Video Streams console under Signaling channels.

Running the application

You can build and run the application locally for testing purposes. It still uses the backend deployed in the cloud. Do this before publishing to production:

  1. Inside the web-app directory, run the following command:
    npm run serve
  2. Navigate to the locally hosted application at http://localhost:8080
  3. Follow the onscreen steps to create a new account.
  4. Choose Start Video. If the robot is active, a WebRTC connection is made and live video is displayed.
  5. Use the onscreen arrow buttons to drive the robot.

Deploying a hosted application

Amplify makes it easy to deploy a hosted application. The following commands configure and deploy hosting resources in Amazon S3 and Amazon CloudFront. This allows you to securely and quickly deploy your application for production use.

  1. Inside aws-serverless-telepresence-robot/web-app, run the following. When prompted, select PROD. This configures the application to deploy using S3 and CloudFront.
    amplify add hosting
  2. Finally, this command builds and publishes all the backend and frontend resources for your Amplify project. On completion, it provides a URL to the hosted web application. Note, it can take a while for the CloudFront distribution to deploy.
    amplify publish

Conclusion

In this post, I show how to build a web interface for remotely viewing and controlling the robot. This is done using AWS Amplify, Vue.js, and a previously deployed serverless application.

With a few commands, the Amplify CLI is used to configure backend resources for a web frontend. Cognito is used as an identity provider. An Authorizer is created for an API Gateway endpoint, allowing authenticated users to send commands to the robot from the frontend. An IAM Role with a trusted relationship with the Cognito User Pool is given permissions to use Kinesis Video Signaling Channels, which are passed to the authenticated users. This allows the web frontend to open a live video connection to the telepresence robot using WebRTC.

After running and testing the application locally, I showed how the Amplify CLI can streamline configuring hosting and deployment of a production web application using S3 and CloudFront. The result is a custom-built telepresence robot with a web application for viewing and operating it securely, all without managing servers.

The principles used in this project can be applied towards a variety of use cases. Use this to build out a fleet of remote vehicles to monitor factories or for personal home security. You can create a community for users to experience environments remotely. The interface Vue component can also easily be modified for custom commands sent to the application running on the robot.

Translating documents at enterprise scale with serverless

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/translating-documents-at-enterprise-scale-with-serverless/

For organizations operating in multiple countries, helping customers in different languages is an everyday reality. But in many IT systems, data remains static in a single language, making it difficult or impossible for international customers to use. In this blog post, I show how you can automate language translation at scale to solve a number of common enterprise problems.

Many types of data are good targets for translation. For example, product catalog information can be translated for sharing with a geographically broad customer base. Customer emails and interactions in multiple languages can be translated back into a single language for analytics. Even resource files for mobile applications can be translated to automate string processing into different languages during the build process.

Building the machine learning models for language translation is extraordinarily complex, so fortunately you have a pay-as-you-go service available in Amazon Translate. It can accurately translate text between 54 languages and automatically detect the source language.

Developing a scalable translation solution for thousands of documents can be challenging using traditional, server-based architecture. With a serverless approach, this becomes much easier because you can use storage and compute services that scale for you: Amazon S3 and AWS Lambda.

Integrating S3 with Translate via Lambda.

In this post, I show two solutions that provide an event-based architecture for automated translation. In the first, S3 invokes a Lambda function when objects are stored. It immediately reads the file and requests a translation from Amazon Translate. The second solution explores a more advanced method for scaling to large numbers of documents, queuing the requests and tracking their state. The walkthrough creates resources covered in the AWS Free Tier but you may incur costs for usage.

To set up both example applications, visit the GitHub repo and follow the instructions in the README.md file. Both applications use the AWS Serverless Application Model (SAM) to make it easy to deploy in your AWS account.

Translating in near real time

In the first application, the workflow is straightforward. The source text is sent immediately to Translate for processing, and the result is saved back into the S3 bucket. It provides near real-time translation whenever an object is saved. This uses the following architecture:

Architecture for the first example application.

  1. The source text is saved in the Batching S3 bucket.
  2. The S3 put event invokes the Batching Lambda function. Since Translate has a limit of 5,000 characters per request, it slices the contents of the input into parts small enough for processing (see the sketch after this list).
  3. The resulting parts are saved in the Translation S3 bucket.
  4. The S3 put events invoke the Translation function, which scales up concurrently depending on the number of parts.
  5. Amazon Translate returns the translations back to the Lambda function, which saves the results in the Translation bucket.
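
The following sketch shows the general shape of this flow, assuming the 5,000-character limit mentioned above. The chunking helper and the Amazon Translate call are illustrative only; the sample application’s actual functions may be structured differently.

import boto3

translate = boto3.client("translate")
MAX_CHARS = 5000  # per-request limit referenced above

def split_text(text, size=MAX_CHARS):
    """Slice the source text into parts small enough for a single Translate request.
    A real implementation would split on sentence boundaries rather than raw offsets."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def translate_part(text, target_language):
    """Translate one part, letting the service detect the source language."""
    response = translate.translate_text(
        Text=text,
        SourceLanguageCode="auto",
        TargetLanguageCode=target_language,
    )
    return response["TranslatedText"]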

The repo’s SAM template allows you to specify a list of target languages, as a space-delimited list of supported language codes. In this case, any text uploaded to S3 is translated into French, Spanish, and Italian:

Parameters:
  TargetLanguage:
    Type: String
    Default: 'fr es it'

Testing the application

  1. Deploy the first application by following the README.md in the GitHub repo. Note the application’s S3 Translation and Batching bucket names shown in the output:

    Output values after SAM deployment.
  2. The testdata directory contains several sample text files. Change into this directory, then upload coffee.txt to the S3 bucket, replacing your-bucket below with your Translation bucket name:
    cd ./testdata/
    aws s3 cp ./coffee.txt s3://your-bucket
  3. The application invokes the translation workflow, and within a couple of seconds you can list the output files in the translations folder:
    aws s3 ls s3://your-bucket/translations/

    Translations output.

  4. Create an output directory, then download the translations to your local machine to view the contents:
    mkdir output
    aws s3 cp s3://your-bucket/translations/ ./output/ --recursive
    more ./output/coffee-fr.txt
    more ./output/coffee-es.txt
    more ./output/coffee-it.txt
  5. For the next step, translate several text files containing test data. Copy these to the Translation bucket, replacing your-bucket below with your bucket name:
    aws s3 cp ./ s3://your-bucket --include "*.txt" --exclude "*/*" --recursive
  6. After a few seconds, list the files in the translations folder to see your translated files:
    aws s3 ls s3://your-bucket/translations/

    Listing the translated files.

  7. Finally, translate a larger file using the batching process. Copy this file to the Batching S3 bucket (replacing your-bucket with this bucket name):
    cd ../testdata-batching/
    aws s3 cp ./your-filename.txt s3://your-bucket
  8. Since this is a larger file, the batching Lambda function breaks it apart into smaller text files in the Translation bucket. List these files in the terminal, together with their translations:
    aws s3 ls s3://your-bucket
    aws s3 ls s3://your-bucket/translations/

    Listing the output files.

In this example, you can translate a reasonable number of text files for a trivial use-case. However, in an enterprise environment where there could be thousands of files in a single bucket, you need a more robust architecture. The second application introduces a more resilient approach.

Scaling up the translation solution

In an enterprise environment, the application must handle long documents and large quantities of documents. Amazon Translate has service limits in place per account – you can request an increase via an AWS Support Center ticket if needed. However, S3 can ingest a large number of objects quickly, so the application should decouple these two services.

The next example uses Amazon SQS for decoupling, and also introduces Amazon DynamoDB for tracking the status of each translation. The new architecture looks like this:

Decoupled translation architecture.

  1. A downstream process saves text objects in the Batching S3 bucket.
  2. The Batching function breaks these files into smaller parts, saving these in the Translation S3 bucket.
  3. When an object is saved in this bucket, this invokes the Add to Queue function. This writes a message to an SQS queue, and logs the item in a DynamoDB table.
  4. The Translation function receives messages from the SQS queue, and requests translations from the Amazon Translate service.
  5. The function updates the item as completed in the DynamoDB table, and stores the output translation in the Results S3 bucket.

Testing the application

This test uses a much larger text document – the text version of the novel War and Peace, which is over 3 million characters long. It’s recommended that you use a shorter piece of text for the walkthrough, between 20-50 kilobytes, to minimize cost on your AWS bill.

  1. Deploy the second application by following the README.md in the GitHub repo, and note the application’s S3 bucket name and DynamoDB table name.
    The output values from the SAM deployment.
  2. Download your text sample and then upload it to the Batching bucket. Replace your-bucket with your bucket name and your-text.txt with your text file name:
    aws s3 cp ./your-text.txt s3://your-bucket/
  3. The batching process creates smaller files in the Translation bucket. After a few seconds, list the files in the Translation bucket (replacing your-bucket with your bucket name):
    aws s3 ls s3://your-bucket/ --recursive --summarize

    Listing the translated files.

  4. To see the status of the translations, navigate to the DynamoDB console. Select Tables in the left-side menu and then choose the application’s DynamoDB table. Select the Items tab:

    Listing items in the DynamoDB table.

    This shows each translation file and a status of Queue or Translated.

  5. As translations complete, these appear in the Results bucket:
    aws s3 ls s3://patterns-results-v2/ --summarize

    Listing the output translations.

How this works

In the second application, the SQS queue acts as a buffer between the Batching process and the Translation process. The Translation Lambda function fetches messages from the SQS queue when they are available, and submits the source file to Amazon Translate. This throttles the overall speed of processing.
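
As a rough sketch of this pattern, the function below consumes one SQS message per invocation, requests a translation, stores the result in the Results bucket, and marks the item complete in DynamoDB. The environment variable names, table key schema, and message format are assumptions for illustration, not the sample application’s exact code.

import json
import os
import boto3

translate = boto3.client("translate")
s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])  # hypothetical variable
RESULTS_BUCKET = os.environ["RESULTS_BUCKET"]                       # hypothetical variable

def lambda_handler(event, context):
    for message in event["Records"]:          # BatchSize controls how many arrive per invocation
        job = json.loads(message["body"])     # assumed message shape
        translated = translate.translate_text(
            Text=job["text"],
            SourceLanguageCode="auto",
            TargetLanguageCode=job["targetLanguage"],
        )["TranslatedText"]

        # Store the translated output
        s3.put_object(
            Bucket=RESULTS_BUCKET,
            Key=f"translations/{job['objectKey']}-{job['targetLanguage']}.txt",
            Body=translated.encode("utf-8"),
        )

        # Mark the item as complete in the tracking table
        table.update_item(
            Key={"id": job["objectKey"]},     # assumed partition key name
            UpdateExpression="SET #s = :done",
            ExpressionAttributeNames={"#s": "status"},
            ExpressionAttributeValues={":done": "Translated"},
        )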

There are configuration settings you can change in the SAM template to vary the speed of throughput:

  • Translator function: this consumes messages from the SQS queue. The BatchSize configured in the SAM template is set to one message per invocation. This is equivalent to processing one source file at a time. You can set a BatchSize value from 1 to 10, so you could increase this from the application’s default.
  • Function concurrency: the SAM template sets the Loader function’s concurrency to 1, using the ReservedConcurrentExecutions attribute. In effect, this means Lambda runs only one invocation of this function at a time. As a result, it keeps fetching the next batch from SQS as soon as processing finishes. The concurrency is a multiplier: as this value is increased, the translation throughput increases proportionately, if there are messages available in SQS.
  • Amazon Translate limits: the service limits in place are designed to protect you from higher-than-intended usage. If you need higher soft limits, open an AWS Support Center ticket.

Combining these settings, you have considerable control over the speed of processing. The defaults in the sample application are set at the lowest values possible so you can observe the queueing mechanism.

Conclusion

Automated translation using deep learning enables you to make documents available at scale to an international audience. For organizations operating globally, this can improve your user experience and increase customer access to your company’s products and services.

In this post, I show how you can create a serverless application to process large numbers of files stored in S3. The write operation in the S3 bucket triggers the process, and you use SQS to buffer the workload between S3 and the Amazon Translate service. This solution also uses DynamoDB to help track the state of the translated files.

Visualize user behavior with Auth0 and Amazon EventBridge

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/visualize-user-behavior-with-auth0-and-amazon-eventbridge/

In this post, I show how to capture user events and monitor user behavior by using the Amazon EventBridge partner integration with Auth0. This enables you to gain insights to help deliver a more customized application experience for your users.

Auth0 is a flexible, drop-in solution that adds authentication and authorization services to your applications. The EventBridge integration automatically and continuously pushes Auth0 log events to your AWS account via a custom SaaS event bus.

The examples used in this post are implemented in a custom-built serverless application called FreshTracks. This is a demo application built in Vue.js, which I will use to demonstrate multiple SaaS integrations into AWS with EventBridge in this and future blog posts.

FreshTracks – a demo serverless web application with multiple SaaS Integrations.

FreshTracks – a demo serverless web application with multiple SaaS Integrations.

The components for this EventBridge integration with Auth0 have been extracted into a separate example application in this GitHub repo.

How the application works

Routing Auth0 Events with Amazon EventBridge.

Routing Auth0 Events with Amazon EventBridge.

  1. Events are emitted from Auth0 when a user interacts with the login service on the front-end application.
  2. These events are streamed into a custom SaaS event bus in EventBridge.
  3. Event rules match events and send them downstream to a Lambda function target.
  4. The receiving Lambda function performs some data transformation before writing an object to S3.
  5. These objects are made available by a QuickSight data source manifest file and used as datapoints for QuickSight visuals.

Configuring the Auth0 EventBridge integration

To capture Auth0 emitted events in EventBridge, you must first configure Auth0 for use as the Event Source on your Auth0 Dashboard.

  1. Log in to the Auth0 Dashboard.
  2. Choose Logs > Streams.
  3. Choose + Create Stream.
  4. Choose Amazon EventBridge and enter a unique name for the new Amazon EventBridge Event Stream.
  5. Create the Event Source by providing your AWS Account ID and AWS Region. The Region you select must match the Region of the Amazon EventBridge bus.
  6. Choose Save.
Event Source Configuration on Auth0 dashboard

Event Source Configuration on Auth0 dashboard

Auth0 provides you with an Event Source Name. Make sure to save your Event Source Name value since you need this at a later point to complete the integration.

Creating a custom event bus

  1. Go to the EventBridge partners tab in your AWS Management Console. Ensure the AWS Region matches where the Event Source was created.
  2. Paste the Event Source Name in the Partner event sources search box to find and choose the new Auth0 event source.

    Note: The Event Source remains in a pending state until it is associated with an event bus.

    Partner event source

  3. Choose the event source, then choose Associate with Event Bus.
  4. Choose Associate.

Deploying the application

Once you have associated the Event Source with a new partner event bus, you are ready to deploy backend services to receive and respond to these events.

To set up the example application, visit the GitHub repo and follow the instructions in the README.md file.

When deploying the application stack, make sure to provide the custom event bus name with --parameter-overrides.

sam deploy --parameter-overrides Auth0EventBusName=aws.partner/auth0.com/auth0username-0123344567-e5d2-4514-84f2-97dd4ff8aad0/auth0.logs

You can find the name of the new Auth0 custom event bus in the custom event bus section of the EventBridge console:

Custom event bus name

Custom event bus name

Routing events with rules

The AWS Serverless Application Model (SAM) template in the example application creates four event rules:

  1. Successful sign-in
  2. Successful signup
  3. Successful log-out
  4. Unsuccessful signup

These are defined with the `AWS::Events::Rule` resource type. Each of these rules is routed to a single target Lambda function. For a successful sign-in, the rule’s event pattern matches on detail:data:type:s. This refers to the Auth0 event type code for a successful sign-in. Every Auth0 event code is listed here.

SuccessfullSignIn: 
    Type: AWS::Events::Rule
    Properties: 
      Description: "Auth0 User Successfully signed in"
      EventBusName: 
         Ref: Auth0EventBusName
      EventPattern: 
        account:
        - !Sub '${AWS::AccountId}'
        detail:
          data:
            type:
            - s
      Targets: 
        - 
          Arn:
            Fn::GetAtt:
              - "SaveAuth0EventToS3" 
              - "Arn"
          Id: "SignInSuccessV1"

To respond to additional events, copy this event rule pattern and change the event code string for the event you want to match.

Writing events to S3 with Lambda

The application routes events to a Lambda function, which performs some data transformation before writing an object to S3. The function code uses an environment variable named AuthLogBucket to store the S3 bucket name. The permissions to write to S3 are granted by a policy defined within the SAM template:

  SaveAuth0EventToS3:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: src/
      Handler: saveAuth0EventToS3.handler
      Runtime: nodejs12.x
      MemorySize: 128
      Environment:
        Variables:
          AuthLogBucket: !Ref AuthZeroToEventBridgeUserActivitylogs
      Policies:
        - S3CrudPolicy:
            BucketName: !Ref AuthZeroToEventBridgeUserActivitylogs

The S3 object is a CSV file with context about each event. Each of the Auth0 event schemas is different. To maintain a consistent CSV file structure across different event types, this Lambda function explicitly defines each of the header and row values. An output string is constructed from the Auth0 event:

Lambda function output string

Lambda function output string

This string is placed into a new buffer and written to S3 with the AWS SDK for JavaScript, as referenced in GitHub here.

Sending events to the application

There is a test event in the /events directory of the example application. This contains an example of a successful sign-in event emitted from Auth0.

Sending a test Auth0 event to Lambda

Send a test event to the Lambda function using the AWS Command Line Interface.

Run the following command in the root directory of the example application, replacing {function-name} with the full name of your Lambda function.

aws lambda invoke --function-name {function-name} --invocation-type Event --payload file://events/event.json  events/response.json --log-type Tail

Response:

{  
 "StatusCode": 202
}

The response output appears in the output terminal window. To confirm that an object is stored in S3, navigate to the S3 Console.  Choose the AuthZeroToEventBridgeUserActivityLogs bucket. You see a new auth0 directory and can open the CSV file that holds context about the event.

Object written to S3

Object written to S3

Sending real Auth0 events from a front-end application

Follow the instructions in the Fresh Tracks repo on GitHub to deploy the front-end application. This application includes Auth0’s authentication flow. You can connect to your Auth0 application by entering your credentials in the `auth0_config.json` file:

{
  "domain": "<YOUR AUTH0 DOMAIN>",
  "clientId": "<YOUR AUTH0 CLIENT ID>"
}

The example backend application starts receiving Auth0 emitted events immediately.

To see the full Fresh Tracks application continue to the backend deployment instructions. This is not required for the examples in this blog post.

Building a QuickSight dashboard

You can visualize these Auth0 user events with an Amazon QuickSight dashboard. This provides a snapshot analysis that you can share with other QuickSight users for reporting purposes.

To use Auth0 events as metrics, create a separate calculated field for each event (for example, successful signup and successful login). An analysis could include multiple visuals, custom fields, conditional formatting, and events. This gives a snapshot of user interaction with the front-end application at any given time.

FreshTracks final dashboard example

FreshTracks final dashboard example

The example application in the GitHub repo provides instructions on how to create a dashboard.

Conclusion

This post explains how to set up EventBridge’s third-party integration with Auth0 to capture events. The example backend application demonstrates how to filter these events, perform computations on them, save as S3 objects, and send to a downstream service.

The ability to build QuickSight storyboards from these events and share visuals with key business stakeholders can provide a narrative about the analysis data. This is implemented with minimal code to provide near real-time streaming of events, without adding latency to your application.

The possibilities are vast. I am excited to see how builders use this serverless pattern to create their own visuals to build a better, more customized application experience for their users.

Start here to learn about other SaaS integrations with Amazon EventBridge.

Creating a searchable enterprise document repository

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/creating-a-searchable-enterprise-document-repository/

Enterprise customers frequently have repositories with thousands of documents, images and other media. While these contain valuable information for users, it’s often hard to index and search this content. One challenge is interpreting the data intelligently, and another issue is processing these files at scale.

In this blog post, I show how you can deploy a serverless application that uses machine learning to interpret your documents. The architecture includes a queueing mechanism for handling large volumes, and posts the indexing metadata to an Amazon Elasticsearch domain. This solution is scalable and cost effective, and you can modify the functionality to meet your organization’s requirements.

The overall architecture for a searchable document repository solution.

The application takes unstructured data and applies machine learning to extract context for a search service, in this case Elasticsearch. There are many business uses for this design. For example, this could help Human Resources departments in searching for skills in PDF resumes. It can also be used by Marketing departments or creative groups with a large number of images, providing a simple way to search for items within the images.

As documents and images are added to the S3 bucket, these events invoke AWS Lambda functions. This runs custom code to extract data from the files, and also calls Amazon ML services for interpretation. For example, when adding a resume, the Lambda function extracts text from the PDF file while Amazon Comprehend determines the key phrases and topics in the document. For images, it uses Amazon Rekognition to determine the contents. In both cases, once it identifies the indexing attributes, it saves the data to Elasticsearch.
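
As a rough sketch of those service calls (not the repo's actual parser code; the text, bucket, and key are placeholders), the Lambda functions might call the ML services like this:

import boto3

comprehend = boto3.client("comprehend")
rekognition = boto3.client("rekognition")

def analyze_text(text):
    # The application splits long documents into smaller chunks before analysis
    response = comprehend.detect_key_phrases(Text=text, LanguageCode="en")
    return [phrase["Text"] for phrase in response["KeyPhrases"]]

def analyze_image(bucket, key):
    # Rekognition reads the image directly from S3
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MaxLabels=10
    )
    return [label["Name"] for label in response["Labels"]]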

The code uses the AWS Serverless Application Model (SAM), enabling you to deploy the application easily in your own AWS Account. This walkthrough creates resources covered in the AWS Free Tier but you may incur cost for large data imports. Additionally, it requires an Elasticsearch domain, which may incur cost on your AWS bill.

To set up the example application, visit the GitHub repo and follow the instructions in the README.md file.

Creating an Elasticsearch domain

This application requires an Elasticsearch development domain for testing purposes. To learn more about production configurations, see the Elasticsearch documentation.

To create the test domain:

  1. Navigate to the Amazon Elasticsearch console. Choose Create a new domain.
  2. For Deployment type, choose Development and testing. Choose Next.
  3. In the Configure Domain page:
    1. For Elasticsearch domain name, enter serverless-docrepo.
    2. Change Instance Type to t2.small.elasticsearch.
    3. Leave all the other defaults. Choose Next at the end of the page.
  4. In Network Configuration, choose Public access. This is adequate for a tutorial but in a production use-case, it’s recommended to use VPC access.
  5. Under Access Policy, in the Domain access policy dropdown, choose Custom access policy.
  6. Select IAM ARN and in the Enter principal field, enter your AWS account ID (learn how to find your AWS account ID). In the Select Action dropdown, select Allow.
  7. Under Encryption, leave HTTPS checked. Choose Next.
  8. On the Review page, review your domain configuration, and then choose Confirm.

Your domain is now being configured, and the Domain status shows Loading in the Overview tab.

After creating the Elasticsearch domain, the domain status shows ‘Loading’.

It takes 10-15 minutes to fully configure the domain. Wait until the Domain status shows Active before continuing. When the domain is ready, note the Endpoint address since you need this in the application deployment.

The Endpoint for an Elasticsearch domain.

Deploying the application

After cloning the repo to your local development machine, open a terminal window and change to the cloned directory.

  1. Run the SAM build process to create an installation package, and then deploy the application:
    sam build
    sam deploy --guided
  2. In the guided deployment process, enter unique names for the S3 buckets when prompted. For the ESdomain parameter, enter the Elasticsearch domain Endpoint from the previous section.
  3. After the deployment completes, note the Lambda function ARN shown in the output:
    Lambda function ARN output.
  4. Back in the Elasticsearch console, select the Actions dropdown and choose Modify Access Policy. Paste the Lambda function ARN as an AWS Principal in the JSON, in addition to the root user, as follows:
    Modify access policy for Elasticsearch.
  5. Choose Submit. This grants the Lambda function access to the Elasticsearch domain.

Testing the application

To test the application, you need a few test documents and images with the file types DOCX (Microsoft Word), PDF, and JPG. This walkthrough uses multiple files to illustrate the queuing process.

  1. Navigate to the S3 console and select the Documents bucket from your deployment.
  2. Choose Upload and select your sample PDF or DOCX files:
    Upload files to S3.
  3. Choose Next on the following three pages to complete the upload process. The application now analyzes these documents and adds the indexing information to Elasticsearch.

To query Elasticsearch, first you must generate an Access Key ID and Secret Access Key. For detailed steps, see this documentation on creating security credentials. Next, use Postman to create an HTTP request for the Elasticsearch domain (a scripted alternative is sketched after these steps):

  1. Download and install Postman.
  2. From Postman, enter the Elasticsearch endpoint, adding /_search?q=keyword. Replace keyword with a search term.
  3. On the Authorization tab, complete the Access Key ID and Secret Access Key fields with the credentials you created. For Service Name, enter es.
    Postman query with AWS authorization.
  4. Choose Send. Elasticsearch responds with document results matching your search term.
    REST response from Elasticsearch.
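
As a scripted alternative to Postman, a minimal Python sketch that signs the same search request with SigV4 might look like the following. The endpoint and Region are placeholders, and the requests library is assumed to be available.

import boto3
import requests  # third-party HTTP client, assumed to be installed
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

# Placeholder values: use your domain Endpoint and Region
ENDPOINT = "https://search-serverless-docrepo-abc123.us-east-1.es.amazonaws.com"
REGION = "us-east-1"

def search(keyword):
    url = f"{ENDPOINT}/_search?q={keyword}"
    # Sign the request with SigV4 using the 'es' service name, as in the Postman setup
    request = AWSRequest(method="GET", url=url)
    credentials = boto3.Session().get_credentials()
    SigV4Auth(credentials, "es", REGION).add_auth(request)
    response = requests.get(url, headers=dict(request.headers))
    return response.json()

print(search("serverless"))

This uses the same es service name and credentials as the Postman configuration.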

How this works

This application creates a processing pipeline between the originating Documents bucket and the Elasticsearch domain. Each document type has a custom parser, which prepares the content in a Queued bucket. An Amazon SQS queue buffers the work, which is fetched by a Lambda function and analyzed with Amazon Comprehend. Finally, the indexing metadata is saved in Elasticsearch.

Serverless architecture for text extraction and classification.

  1. Documents and images are saved in the Documents bucket.
  2. Depending upon the file type, this triggers a custom parser Lambda function.
    1. For PDFs and DOCX files, the extracted text is stored in a Staging bucket. If the content is longer than 5,000 characters, it is broken into smaller files by a Lambda function, then saved in the Queued bucket.
    2. For JPG files, the parser uses Amazon Rekognition to detect the contents of the image. The labeling metadata is stored in the Queued bucket.
  3. When items are stored in the Queued bucket, this triggers a Lambda function to add the job to an SQS queue.
  4. The Analyze function is invoked when there are messages in the SQS queue. It uses Amazon Comprehend to find entities in the text. This function then stores the metadata in Elasticsearch.

S3 and Lambda both scale to handle the traffic. The Elasticsearch domain is not serverless, however, so it’s possible to overwhelm this instance with requests. There may be a large number of objects stored in the Documents bucket triggering the workflow, so the application uses an SQS queue to smooth out the traffic. When large numbers of objects are processed, you see the Messages Available count increase in the SQS queue:

SQS queue buffers messages to smooth out traffic.

For the Lambda function consuming messages from the SQS queue, the BatchSize configured in the SAM template controls the rate of processing. The function continues to fetch messages from the SQS queue while Messages Available is greater than zero. This can be a useful mechanism for protecting downstream services that are not serverless, or simply cannot scale to match the processing rates of the upstream application. In this case, it provides a more consistent flow of indexing data to the Elasticsearch domain.
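
As a rough sketch of this pattern (not the repo's actual Analyze function; the SQS message fields and the indexing helper are assumptions), a batch-consuming handler might look like this:

import json
import boto3

comprehend = boto3.client("comprehend")

def handler(event, context):
    # Lambda delivers up to BatchSize SQS messages per invocation
    for record in event["Records"]:
        message = json.loads(record["body"])
        text = message["text"]          # assumed message field
        doc_id = message["documentId"]  # assumed message field

        # Find entities in the extracted text
        result = comprehend.detect_entities(Text=text, LanguageCode="en")

        # Index the metadata into the Elasticsearch domain, for example with a
        # SigV4-signed HTTP request as shown in the earlier query sketch
        index_document(doc_id, result["Entities"])

def index_document(doc_id, entities):
    ...  # placeholder for the signed request to the Elasticsearch domain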

In a production use-case, you would scale the Elasticsearch domain depending upon the load of search requests, not just the indexing traffic from this application. This tutorial uses a minimal Elasticsearch configuration, but this service is capable of supporting enterprise-scale use-cases.

Conclusion

Enterprise document repositories can be a rich source of information but can be difficult to search and index. In this blog post, I show how you can use a serverless approach to build a scalable solution easily. With minimal code, we can use Amazon ML services to create the indexing metadata. By using powerful image recognition and language comprehension capabilities, this makes the metadata more useful and the search solution more accurate.

This also shows how serverless solutions can be used with existing non-serverless infrastructure, like Elasticsearch. By decoupling scalable serverless applications, you can protect downstream services from heavy traffic loads, even as Lambda scales up. Elasticsearch provides a fast, easy way to query your document repository once the serverless application has completed the indexing process.

To learn more about how to use Elasticsearch for production workloads, see the documentation on managing domains.

Monitoring and management with Amazon QuickSight and Athena in your CI/CD pipeline

Post Syndicated from Umair Nawaz original https://aws.amazon.com/blogs/devops/monitoring-and-management-with-amazon-quicksight-and-athena-in-your-ci-cd-pipeline/

One of the many ways to monitor and manage required CI/CD metrics is to use Amazon QuickSight to build customized visualizations. Additionally, by applying Lean management to software delivery processes, organizations can deliver features faster, pivot when needed, respond to compliance and security changes, and take advantage of instant feedback to improve the customer delivery experience. This blog post demonstrates how AWS resources and tools can provide monitoring of, and information about, your CI/CD pipelines.

There are three principles in Lean management that this solution enables and contributes to:

  • Limiting work in progress by establishing constraints that drive process improvement and increase throughput.
  • Creating and maintaining dashboards displaying key quality information, productivity metrics, and current status of work (including defects).
  • Using data from development performance and operations monitoring tools to enable business decisions more frequently.

Overview

The following architectural diagram shows how to use AWS services to collect metrics from a CI/CD pipeline and deliver insights through Amazon QuickSight dashboards.

Architecture diagram showing an overview of how CI/CD metrics are extracted and transformed to create a dynamic QuickSight dashboard

In this example, the orchestrator for the CI/CD pipeline is AWS CodePipeline with the entry point as an AWS CodeCommit Git repository for source control. When a developer pushes a code change into the CodeCommit repository, the change goes through a series of phases in CodePipeline. AWS CodeBuild is responsible for performing build actions and, upon successful completion of this phase, AWS CodeDeploy kicks off the actions to execute the deployment.

For each action in CodePipeline, the following series of events occurs:

  • CodePipeline emits a CloudWatch event containing the action’s metadata, which matches an Amazon CloudWatch Events rule.
  • The CloudWatch event triggers an AWS Lambda function.
  • The Lambda function extracts relevant reporting data and writes it to a CSV file in an Amazon S3 bucket.
  • Amazon Athena queries the Amazon S3 bucket and loads the query results into SPICE (an in-memory engine for Amazon QuickSight).
  • Amazon QuickSight obtains data from SPICE to build dashboard displays for the management team.

Note: This solution is for an AWS account with one or more existing CodePipeline pipelines. If you do not have any pipelines, no metrics are collected.

Getting started

To get started, follow these steps:

  • Create a Lambda function and copy the following code snippet. Be sure to replace the bucket name with the one used to store your event data. This Lambda function takes the payload from a CloudWatch event and extracts the pipeline, time, state, execution, stage, and action fields to transform into a CSV file.

Note: Athena’s performance can be improved by compressing, partitioning, or converting data into columnar formats such as Apache Parquet. In this use-case, the dataset size is negligible; therefore, a transformation from CSV to Parquet is not required.


import boto3
import datetime

# Analyze payload from CloudWatch Event
def pipeline_execution(data):
    print(data)
    # Specify data fields to deliver to S3 (first entry is the CSV header)
    row = ['pipeline,time,state,execution,stage,action']

    # Read the stage and action names if present in the event detail
    if "stage" in data['detail'].keys():
        stage = data['detail']['stage']
    else:
        stage = 'NA'

    if "action" in data['detail'].keys():
        action = data['detail']['action']
    else:
        action = 'NA'

    row.append(data['detail']['pipeline']+','+data['time']+','+data['detail']['state']+','+data['detail']['execution']+','+stage+','+action)
    values = '\n'.join(str(v) for v in row)
    return values

# Upload CSV file to S3 bucket
def upload_data_to_s3(data):
    s3 = boto3.client('s3')
    runDate = datetime.datetime.now().strftime("%Y-%m-%d_%H:%M:%S:%f")
    csv_key = runDate + '.csv'
    s3.put_object(
        Body=data,
        Bucket='*<example-bucket>*',
        Key=csv_key
    )

def lambda_handler(event, context):
    upload_data_to_s3(pipeline_execution(event))
  • Create an Athena table to query the data stored in the Amazon S3 bucket. Execute the following SQL in the Athena query console and provide the bucket name that will hold the data.
CREATE EXTERNAL TABLE `devops`(
   `pipeline` string, 
   `time` string, 
   `state` string, 
   `execution` string, 
   `stage` string, 
   `action` string)
 ROW FORMAT DELIMITED 
   FIELDS TERMINATED BY ',' 
 STORED AS INPUTFORMAT 
   'org.apache.hadoop.mapred.TextInputFormat' 
 OUTPUTFORMAT 
   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
 LOCATION
   's3://**<example-bucket>**/'
 TBLPROPERTIES (
   'areColumnsQuoted'='false', 
   'classification'='csv', 
   'columnsOrdered'='true', 
   'compressionType'='none', 
   'delimiter'=',', 
   'skip.header.line.count'='1',  
   'typeOfData'='file')  
  • Create a CloudWatch event rule that passes events to the Lambda function created in Step 1. In the event rule configuration, set the Service Name as CodePipeline and, for Event Type, select All Events. A scripted equivalent is sketched below.
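
As a sketch of that configuration in code, the equivalent rule, target, and invoke permission could be created with boto3. The rule name and Lambda function ARN below are placeholders.

import json
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

RULE_NAME = "codepipeline-metrics-rule"  # placeholder
FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:pipeline-metrics"  # placeholder

# Match all events emitted by CodePipeline (Service Name: CodePipeline, Event Type: All Events)
rule_arn = events.put_rule(
    Name=RULE_NAME,
    EventPattern=json.dumps({"source": ["aws.codepipeline"]})
)["RuleArn"]

# Send matching events to the Lambda function created in Step 1
events.put_targets(Rule=RULE_NAME, Targets=[{"Id": "1", "Arn": FUNCTION_ARN}])

# Allow CloudWatch Events to invoke the function
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="AllowEventRuleInvoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn
)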

Sample Dataset view from Athena.

Sample Athena query and the results

Amazon QuickSight visuals

After the initial setup is done, you are ready to create your QuickSight dashboard. Be sure to check that the Athena permissions are properly set before creating an analysis to be published as an Amazon QuickSight dashboard.

Below are diagrams and figures from Amazon QuickSight that can be generated using the event data queried from Athena. In this example, you can see how many executions happened in the account and how many were successful.

The following screenshot shows that most pipeline executions are failing. A manager might be concerned that this points to a significant issue and prompt an investigation in which they can allocate resources to improve delivery and efficiency.

QuickSight Dashboard showing total execution successes and failures

The visual for this solution is dynamic in nature. If the pipeline has more or fewer actions, the visual adjusts automatically to reflect all actions. After looking at the success and failure rates for each CodePipeline action in Amazon QuickSight, as shown in the following screenshot, users can quickly take targeted action. For example, if the team sees a lot of failures due to vulnerability scanning, they can work on improving that problem area to drive value for future code releases.

QuickSight Dashboard showing the successes and failures of pipeline actions

Day-over-day visuals reflect date-specific activity and enable teams to see their progress over a period of time.

QuickSight Dashboard showing day over day results of successful CI/CD executions and failures

Amazon QuickSight offers controls that can be configured to apply filters to visuals. For example, the following screenshot demonstrates how users can toggle between visuals for different applications.

QuickSight's control function to switch between different visualization options

Cleanup (optional)

In order to avoid unintended charges, delete the following resources:

  • Amazon CloudWatch event rule
  • Lambda function
  • Amazon S3 Bucket (the location in which CSV files generated by the Lambda function are stored)
  • Athena external table
  • Amazon QuickSight data sets
  • Analysis and dashboard

Conclusion

In this blog, we showed how metrics can be derived from a CI/CD pipeline. Utilizing Amazon QuickSight to create visuals from these metrics allows teams to continuously deliver updates on the deployment process to management. The aggregation of the captured data over time allows individual developers and teams to improve their processes. That is the goal of creating a Lean DevOps process: to oversee the meta-delivery pipeline and optimize all future releases by identifying weak spots and points of risk during the entire release process.

___________________________________________________________

About the Authors

Umair Nawaz is a DevOps Engineer at Amazon Web Services in New York City. He works on building secure architectures and advises enterprises on agile software delivery. He is motivated to solve problems strategically by utilizing modern technologies.
Christopher Flores is an Engagement Manager at Amazon Web Services in New York City. He leads AWS developers, partners, and client teams in using the customer engagement accelerator framework. Christopher expedites stakeholder alignment, enterprise cohesion and risk mitigation while ensuring feedback loops to close the engagement lifecycle.
Carol Liao is a Cloud Infrastructure Architect at Amazon Web Services in New York City. She enjoys designing and developing modern IT solutions in the cloud where there is always more to learn, more problems to solve, and more to build.

 

Building well-architected serverless applications: Understanding application health – part 1

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-well-architected-serverless-applications-understanding-application-health-part-1/

This series of blog posts uses the AWS Well-Architected Tool with the Serverless Lens to help customers build and operate applications using best practices. In each post, I address the nine serverless-specific questions identified by the Serverless Lens along with the recommended best practices. See the Introduction post for a table of contents and an explanation of the example application.

Question OPS1: How do you evaluate your serverless application’s health?

Evaluating your metrics, distributed tracing, and logging gives you insight into business and operational events, and helps you understand which services should be optimized to improve your customer’s experience. By understanding the health of your serverless application, you will know whether it is functioning as expected, and can proactively react to any signals that indicate it is becoming unhealthy.

Required practice: Understand, analyze, and alert on metrics provided out of the box

It is important to understand metrics for every AWS service used in your application so you can decide how to measure its behavior. AWS services provide a number of out-of-the-box standard metrics to help monitor the operational health of your application.

As these metrics are generated automatically, it is a simple way to start monitoring your application and can also be augmented with custom metrics.

The first stage is to identify which services the application uses. The airline booking component uses AWS Step Functions, AWS Lambda, Amazon SNS, and Amazon DynamoDB.

When I make a booking, as shown in the Introduction post, AWS services emit metrics to Amazon CloudWatch. These are processed asynchronously without impacting the application’s performance.

There are two default CloudWatch dashboards to visualize key metrics quickly: per service and cross service.

Per service

To view the per service metrics dashboard, I open the CloudWatch console.

Per-service metrics dashboard

I select a service where Overview is shown, such as Lambda. Now I can view the metrics for all Lambda functions in the account.

Per-service metrics for Lambda

Cross service

To see an overview of key metrics across all AWS services, open the CloudWatch console and choose View cross service dashboard.

Cross-service metrics dashboard

I see a list of all services with one or two key metrics displayed. This provides a good overview of all services your application uses.

Alerting

The next stage is to identify the key metrics for comparison and set up alerts for under- and over-performing services. Here are some recommended metrics to alarm on for a number of AWS services.

Alerts can be configured manually or via infrastructure as code tools such as the AWS Serverless Application Model, AWS CloudFormation, or third-party tools.

To configure a manual alert for Lambda function errors using CloudWatch Alarms:

  1. I open the CloudWatch console, choose Alarms, and then choose Create alarm.
  2. I choose Select metric. From AWS namespaces, I select Lambda, Across All Functions, select the Errors metric, and then choose Select metric.
  3. I change the Statistic to Sum and the Period to 1 minute.
  4. Under Conditions, I select a Static threshold of Greater than 1 and choose Next.

Alarms can also be created using anomaly detection rather than static values if there is a discernible pattern or trend. Anomaly detection looks at past metric data and uses machine learning to create a model of expected values. Alerts can then be configured to fire if metrics fall outside this band of “normal” values. I use a Static threshold for this alarm.

  5. For the notification, I set the alarm state trigger to send a notification to an existing SNS topic with my email address, then choose Next.
  6. I enter a descriptive alarm name such as serverlessairline-lambda-prod-errors > 1, choose Next, and then choose Create alarm.

I have now manually set up an alarm.

Use CloudWatch composite alarms to combine multiple alarms to reduce noise and focus on critical issues. For example, a single alarm could trigger if there are both Lambda function errors as well as high Lambda concurrent executions.

It is simpler and more scalable to include alerting within infrastructure as code. Here is an example of alerting programmatically using CloudFormation.
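
As an illustrative alternative to the CloudFormation resource, a minimal boto3 sketch that creates a similar alarm could look like this. The alarm name and SNS topic ARN are assumptions.

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="serverlessairline-lambda-prod-errors",  # assumed name
    Namespace="AWS/Lambda",
    MetricName="Errors",                               # across all functions (no dimensions)
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"]  # assumed SNS topic ARN
)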

In this example, I view the out-of-the-box standard metrics and manually create an alarm for Lambda function errors.

Improvement plan summary:

  1. Understand what metrics and dimensions each managed service used provides.
  2. Configure alerts on relevant metrics for when services are unhealthy.

Good practice: Use structured and centralized logging

Central logging provides a single place to search and analyze logs. Structured logging means selecting a consistent log format and content structure to simplify querying across multiple components.

To identify a business transaction across components, such as a particular flight booking, log operational information from upstream and downstream services. Add information such as customer_id along with business outcomes such as order=accepted or order=confirmed. Make sure you are not logging any sensitive or personal identifying data in any logs.

Use JSON as your logging output format. Log multiple fields in a single object or dictionary rather than many one line messages for simpler searching.

Here is an example of a structured logging format.
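
As a minimal sketch (the field names below are illustrative, not the airline application's exact schema), a structured JSON entry produced from Python could look like this:

import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def log_business_event(service, outcome, **fields):
    # One JSON object per log line keeps the entry easy to query centrally
    entry = {"service": service, "outcome": outcome, **fields}
    logger.info(json.dumps(entry))

# Example: a confirmed booking, tagged with identifiers for cross-component searching
log_business_event(
    "confirm-booking",
    "order=confirmed",
    customer_id="cust-1234",
    booking_id="bk-5678"
)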

The airline booking component, which is written in Python, currently uses a shared library with a separate log processing stack.

Embedded Metrics Format is a simpler mechanism to replace the shared library and use structured logging. CloudWatch Embedded Metrics adds environmental metadata such as Lambda Function version and also automatically extracts custom metrics so you can visualize and alarm on them. There are open-source client libraries available for Node.js and Python.

I then add embedded metrics to the individual confirm booking module with the following steps:

  1. I install the aws-embedded-metrics library using the instructions.
  2. In the function init code, I import the module and create a metric_scope with the following code:

from aws_embedded_metrics import metric_scope
@metric_scope

  3. In the function handler, I log the generated bookingReference with the following code.

metrics.set_property("BookingReference", ret["bookingReference"])

In this example, I also log the entire incoming event details (the fragments are combined in the sketch below).

metrics.set_property("event", event)
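
Putting these fragments together, a minimal sketch of the decorated handler might look like the following. The handler and the confirm_booking helper are stand-ins, not the airline application's actual module.

from aws_embedded_metrics import metric_scope

def confirm_booking(event):
    # Stand-in for the module's existing business logic
    return {"bookingReference": "ABC123"}

@metric_scope
def lambda_handler(event, context, metrics):
    ret = confirm_booking(event)

    # Properties are searchable in CloudWatch Logs Insights but are not emitted as metrics
    metrics.set_property("BookingReference", ret["bookingReference"])
    metrics.set_property("event", event)  # avoid if the event contains sensitive data

    return ret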

It is best practice to only log what is required to avoid unnecessary costs. Ensure the event does not have any sensitive or personal identifying data which is available to anyone who has access to the logs.

To avoid the duplicate logging in this example airline application which adds cost, I remove the existing shared library logger.*() lines.

When I make a booking, the CloudWatch log message is in structured JSON format. It contains the properties I set (event and BookingReference) as well as function metadata.

I can then search for all log activity related to a specific booking across multiple functions with booking_id. I can track customer activity across multiple bookings using customer_id.

Logging is often created as a shared library resource which all functions reference. Another option is using Lambda Layers, which lets functions import additional code such as external libraries. Multiple functions can share this code.

Improvement plan summary:

  1. Log request identifiers from downstream services, component name, component runtime information, unique correlation identifiers, and information that helps identify a business transaction.
  2. Use JSON as the logging output. Prefer logging entire objects/dictionaries rather than many one line messages. Mask or remove sensitive data when logging.
  3. Keep debug logging to a minimum, as it can both incur costs and increase the noise-to-signal ratio.

Conclusion

Evaluating serverless application health helps understand which services should be optimized to improve your customer’s experience. I cover out of the box metrics and alerts, as well as structured and centralized logging.

This well-architected question will be continued in an upcoming post where I look at custom metrics and distributed tracing.