All posts by Benjamin Smith

Best practices for working with the Apache Velocity Template Language in Amazon API Gateway

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/best-practices-for-working-with-the-apache-velocity-template-language-in-amazon-api-gateway/

This post is written by Ben Freiberg, Senior Solutions Architect, and Marcus Ziller, Senior Solutions Architect.

One of the most common serverless patterns is an API built with Amazon API Gateway and AWS Lambda. This approach is supported by many frameworks across many languages. However, direct integration with AWS services can enable customers to increase the cost-efficiency and resiliency of their serverless architecture. This blog post discusses best practices for using Apache Velocity Templates for direct service integration in API Gateway.

Deciding between integration via Velocity templates and Lambda functions

Many use cases of Velocity templates in API Gateway can also be solved with Lambda. With Lambda, the complexity of integrating with different backends is moved from the Velocity templating language (VTL) to the programming language. This allows customers to use existing frameworks and methodologies from the ecosystem of their preferred programming language.

However, many customers choose serverless on AWS to build lean architectures and using additional services such as Lambda functions can add complexity to your application. There are different considerations that customers can use to assess the trade-offs between the two approaches.

Developer experience

Apache Velocity has a number of operators that can be used when an expression is evaluated, most prominently in #if and #set directives. These operators allow you to implement complex transformations and business logic in your Velocity templates.
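
As a small illustration, the following sketch renders a template containing #set and #if directives locally with the velocityjs library (introduced later in this post). The template and context values are illustrative only, not an API Gateway integration.

// Minimal sketch: #set and #if rendered locally with velocityjs (illustrative values)
const Velocity = require('velocityjs');

const template = `
#set($greeting = "Hello")
#if($name)
{"message": "$greeting, $name"}
#else
{"message": "$greeting, anonymous user"}
#end
`;

// velocityjs maps properties of the context object to $references in the template
console.log(Velocity.render(template, { name: 'Ben' }).trim());
// {"message": "Hello, Ben"}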

However, this adds complexity to multiple aspects of the development workflow:

  • Testing: Testing Velocity templates is possible but the tools and methodologies are less mature than for traditional programming languages used in Lambda functions.
  • Libraries: API Gateway offers utility functions for VTL that simplify common use cases such as data transformation. Other functionality commonly offered by programming language libraries (for example, Python Standard Library) might not be available in your template.
  • Logging: It is not possible to log information to Amazon CloudWatch from a Velocity template, so there is no option to retain this information.
  • Tracing: API Gateway supports request tracing via AWS X-Ray for native integrations with services such as Amazon DynamoDB.

You should use VTL for data mapping and transformations rather than complex business logic. There are exceptions but the drawbacks of using VTL for other use cases often outweigh the benefits.

API lifecycle

The API lifecycle is an important aspect to consider when deciding on Velocity or Lambda. In early stages, requirements are typically not well defined and can change rapidly while exploring the solution space. This often happens when integrating with databases such as Amazon DynamoDB and finding out the best way to organize data on the persistence layer.

For DynamoDB, this often means changes to attributes, data types, or primary keys. In such cases, it is a sensible decision to start with Lambda. Writing code in a programming language can give developers more leeway and flexibility when incorporating changes. This shortens the feedback loop for changes and can improve the developer experience.

When an API matures and is run in production, changes typically become less frequent and stability increases. At this point, it can make sense to evaluate if the Lambda function can be replaced by moving logic into Velocity templates. Especially for busy APIs, the one-time effort of moving Lambda logic to Velocity templates can pay off in the long run as it removes the cost of Lambda invocations.

Latency

In web applications, a major factor of user perceived performance is the time it takes for a page to load. In modern single page applications, this often means multiple requests to backend APIs. API Gateway offers features to minimize the latency for calls on the API layer. With Lambda for service integration, an additional component is added into the execution flow of the request, which inevitably introduces additional latency.

The amount of additional latency depends on the specifics of the workload and is often as low as a few milliseconds.

For a basic CRUD API with a Node.js Lambda function that queries DynamoDB, measurements show no meaningful difference in latency other than cold starts of the execution environments. Results are similar for Go and Python.

Concurrency and scalability

The concurrency and scalability of an API change when an additional Lambda function is added to the execution path of the request. This is due to different Service Quotas and scaling behaviors across services.

For API Gateway, the current default quota is 10,000 requests per second (RPS) with an additional burst capacity provided by the token bucket algorithm, using a maximum bucket capacity of 5,000 requests. API Gateway quotas are independent of Region, while Lambda default concurrency limits depend on the Region.

After the initial burst, your functions’ concurrency can scale by an additional 500 instances each minute. This continues until there are enough instances to serve all requests, or until a concurrency limit is reached. For more details on this topic, refer to Understanding AWS Lambda scaling and throughput.

If your workload experiences sharp spikes of traffic, a direct integration with your persistence layer can lead to a better ability to handle such spikes without throttling user requests. Especially for Regions with an initial burst capacity of 1000 or 500, this can help avoid throttling and provide a more consistent user experience.
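
As a rough illustration of this ramp-up behavior, the following sketch estimates the Lambda concurrency available during the first minutes of a spike, using the burst-plus-500-per-minute model described above. The numbers are examples only, not a sizing tool.

// Illustrative only: estimate Lambda concurrency available t minutes into a spike,
// given a Regional initial burst capacity and the additional 500 instances per minute
// described above.
function estimateAvailableConcurrency(minutesSinceSpike, initialBurst, accountConcurrencyLimit) {
  const rampedUp = initialBurst + 500 * minutesSinceSpike;
  return Math.min(rampedUp, accountConcurrencyLimit);
}

// Example: a Region with a 500 initial burst capacity and a 1,000 account concurrency limit
for (const minute of [0, 1, 2]) {
  console.log(`minute ${minute}: ~${estimateAvailableConcurrency(minute, 500, 1000)} concurrent executions`);
}
// Requests above the available concurrency are throttled until the ramp-up catches up,
// which a direct service integration avoids.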

Best practices

Organize your project for tooling support

When VTL is used in Infrastructure as Code (IaC) artifacts such as AWS CloudFormation templates, it must be embedded into the IaC document as a string.

This approach has three main disadvantages:

  • Especially with multi-line Velocity templates, this leads to IaC definitions that are difficult to read or write.
  • Tools such as IDEs or Linters do not work with string representations of Velocity templates.
  • The templates cannot be easily used outside of the IaC definition, such as for local testing.

Each aspect impacts developer productivity and makes the implementation more prone to errors.

You should decouple the definition of Velocity templates from the definition of IaC templates wherever possible. For the CDK, the implementation requires only a few lines of code.

// The following code is licensed under MIT-0 
import { readFileSync } from 'fs';
import * as path from 'path';

const getUserIntegrationWithVTL = new AwsIntegration({
      service: 'dynamodb',
      integrationHttpMethod: HttpMethods.POST,
      action: 'GetItem',
      options: {
        // Omitted for brevity
        requestTemplates: {
          'application/json': readFileSync(path.join('path', 'to', 'vtl', 'request.vm'), 'utf8').toString(),
        },
        integrationResponses: [
          {
            statusCode: '200',
            responseParameters: {
              'method.response.header.access-control-allow-origin': "'*'",
            },
            responseTemplates: {
              'application/json': readFileSync(path.join('path', 'to', 'vtl', 'response.vm'), 'utf8').toString(),
            },
          },
        ],
      },
    });

Another advantage of this approach is that it forces you to externalize variables in your templates. When defining Velocity templates inside IaC documents, it is possible to refer to other resources in the same IaC document and set the value in the Velocity template through string concatenation. However, this hardcodes the value into the template, as opposed to the recommended approach of using stage variables.
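
For example, with the CDK you can define stage variables on the API's deployment stage and reference them from the template as $stageVariables.<name>, instead of concatenating resource identifiers into the template string. The variable name and the table reference below are illustrative assumptions.

// Sketch: externalizing a DynamoDB table name as a stage variable (illustrative names)
import * as apigateway from 'aws-cdk-lib/aws-apigateway';

const api = new apigateway.RestApi(this, 'VtlApi', {
  deployOptions: {
    variables: {
      // Referenced inside request.vm as "$stageVariables.TABLE_NAME"
      TABLE_NAME: table.tableName, // 'table' is assumed to be a DynamoDB Table construct defined elsewhere
    },
  },
});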

Test Velocity templates locally

A frequent challenge that customers face with Velocity templates is how to shorten the feedback loop when implementing a template. A common workflow to test changes to templates is:

  1. Make changes to the template.
  2. Deploy the stack.
  3. Test the API endpoint.
  4. Evaluate the results or check logs for errors.
  5. Complete or return to step 1.

Depending on the duration of the stack deployment, this can often lead to feedback loops of several minutes. Although the test ecosystem for Velocity is far from being as extensive as it is for mainstream programming languages, there are still ways to improve the developer experience when writing VTL.

Local Velocity rendering engine with AWS SDK

When API Gateway receives a request that has an AWS integration target, the following things happen:

  1. Retrieve request context: API Gateway retrieves request parameters and stage variables.
  2. Make request body: API Gateway uses the template and the variables from step 1 to render a JSON document.
  3. Send request: API Gateway makes an API call to the respective AWS service. It abstracts away authorization (via its IAM role), encoding, and other aspects of the request, so that only the request body needs to be provided.
  4. Retrieve response: API Gateway retrieves a JSON response from the API call.
  5. Make response body: If the call was successful, the JSON response is used as input to render the response template. The result is then sent back to the client that initiated the request to API Gateway.

To simplify the development workflow, you can replicate this flow locally with the AWS SDK and a Velocity rendering engine of your choice.

I recommend using Node.js for two reasons:

  • The velocityjs library is a lightweight but powerful Velocity render engine.
  • The client methods (for example, dynamoDbClient.query(jsonBody)) of the AWS SDK for JavaScript generally expect the same JSON body as the AWS REST API does. For most use cases, no transformation (for example, camel case to Pascal case) is needed.

The following snippet shows how to test Velocity templates for request and response of a DynamoDB Service Integration. It loads templates from files and renders them with context and parameters. Refer to the git repository for more details.

// The following code is licensed under MIT-0 
const fs = require('fs')
const Velocity = require('velocityjs');
const AWS = require('@aws-sdk/client-dynamodb');
const ddb = new AWS.DynamoDB()

const requestTemplate = fs.readFileSync('path/to/vtl/request.vm', 'utf8')
const responseTemplate = fs.readFileSync('path/to/vtl/response.vm', 'utf8')

async function testDynamoDbIntegration() {
  const requestTemplateAsString = Velocity.render(requestTemplate, {
    // Mocks the variables provided by API Gateway
    context: {
      arguments: {
        tableName: 'MyTable'
      }
    },
    input: {
      params: function() {
        return 'someId123'
      },
    },
  });

  console.log(requestTemplateAsString)

  const sdkJsonRequestBody = JSON.parse(requestTemplateAsString)
  const item = await ddb.query(sdkJsonRequestBody)

  const response = Velocity.render(responseTemplate, {
    input: {
      path: function() {
        return {
          Items: item.Items
        }
      },
    },
  })

  const jsonResponse = JSON.parse(response)
  console.log(jsonResponse)
}

testDynamoDbIntegration()

This approach does not cover all use cases and ultimately must be validated by a deployment of the template. However, it helps to reduce the length of one feedback loop from minutes to a few seconds and allows for faster iterations in the development of Velocity templates.

Conclusion

This blog post discusses considerations and best practices for working with Velocity Templates in API Gateway. Developer experience, latency, API lifecycle, cost, and scalability are key factors when choosing between Lambda and VTL. For most use cases, we recommend Lambda as a starting point and VTL as an optimization step.

Setting up a local test environment for VTL helps shorten the feedback loop significantly and increase developer productivity. The AWS CDK is the recommended IaC framework for working with VTL projects, since it enables you to efficiently organize your infrastructure as code project for tooling support.

For more serverless learning resources, visit Serverless Land.

Introducing Serverlesspresso Extensions

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/introducing-serverlesspresso-extensions/

Today the Serverless DA team is launching Serverlesspresso Extensions, a new program that lets you contribute to Serverlesspresso. The best extensions will be added to the Serverlesspresso application running in production and featured on the AWS Compute Blog.

What is Serverlesspresso?

Serverlesspresso is a multi-tenant event-driven serverless application for a pop-up coffee bar that allows you to order from your phone. In 2022, Serverlesspresso processed over 20,000 orders at technology events around the world. At this year’s re:Invent, it featured in the keynote of Amazon CTO, Dr Werner Vogels. It was showcased as an example of an event-driven application that can be easily evolved.

The architecture comprises several serverless apps and has been open-source and freely available since it was launched at re:Invent 2021.

What is extensibility?

Extensibility is the ability to add new functionality to an existing piece of software without modifying the core code already in place. Extensions for web browsers are an example of how useful extensibility can be. The core web browser code is not changed or affected when third parties write extensions, but end users can gain new, rich functionality not envisioned or intended by the original browser authors.

In many production business applications, extensibility can help you keep up with the pace of your users' requests. It allows you to create new and useful functionality without having to rearchitect the core, original part of your code. Choosing an architectural style that supports this concept helps you retain flexibility as your users' needs change.

How EDA supports extensibility

Serverlesspresso is built on an event-driven architecture (EDA). This is an architecture style that uses events to decouple an application’s components. Event-driven architecture offers an effective way to create loosely coupled communication between microservices. This makes it a good architectural choice when you are designing workloads that will require extensibility.

Loosely coupled microservices are able to scale and fail independently, increasing the resilience of the application. Development teams can build and release features for their team’s microservice quickly, without needing to worry about the behavior of other microservices in the application. In addition, new features can be added on top of existing events without making changes to the rest of the application.

Choreography and orchestration are two different models for how distributed services can communicate with one another. In orchestration, communication is more tightly controlled. A central service coordinates the interaction and order in which services are invoked.

Choreography achieves communication without tight control. Events flow between services without any centralized coordination. Many applications, including Serverlesspresso, use both choreography and orchestration for different use cases. Event buses such as Amazon EventBridge can be used for choreography, and workflow orchestration services such as AWS Step Functions can be used for orchestration.

New functional requirements come up all the time in production applications. You can address new requirements for an event-driven application by creating new rules for events on the event bus. These rules can add new functionality to the application without any impact on the existing application stack.
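
As a sketch, a new rule on the existing bus might look like the following in the CDK. The event source and detail type shown are assumptions (check the Serverlesspresso events catalog for the actual values), and extensionHandler stands in for your extension's Lambda function.

// Sketch: routing existing Serverlesspresso events to an extension's own Lambda function
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';

const bus = events.EventBus.fromEventBusName(this, 'ServerlesspressoBus', 'Serverlesspresso');

new events.Rule(this, 'ExtensionOrderRule', {
  eventBus: bus,
  eventPattern: {
    source: ['awsserverlessda.serverlesspresso'],  // assumed event source - verify in the events catalog
    detailType: ['OrderProcessor.OrderCompleted'], // assumed detail type - verify in the events catalog
  },
  targets: [new targets.LambdaFunction(extensionHandler)],
});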

Characteristics of a Serverlesspresso EDA extension

  1. Extension resources do not have permission to interact with resources outside the extension definition (including core app resources).
  2. Extensions must contain at least one new EventBridge rule that routes existing Serverlesspresso events.
  3. Extensions can be deployed and deleted independently of other extensions and the core application.

Building a Serverlesspresso extension

This section shows how to build an extension for Serverlesspresso that adds new functionality while remaining decoupled from the core application. Anyone can contribute an extension to Serverlesspresso. Use the Serverlesspresso extensions GitHub repository to host your extension:

  1. Complete the GitHub issue template.
  2. Clone the repository. Duplicate, and rename the example _extension_model directory.
  3. Add the associated extension template and source files.
  4. Add the required meta information to `README.md`.
  5. Make a pull request to the repository with the new extension files.

Additional guidance can be found in the repository’s PUBLISHING.md file.

Tools and resources to help you build

Event decoupling introduces a new set of challenges. Finding events and their schema can be a difficult process. Developers must coordinate with the team responsible for publishing an event, or look through documentation to find its schema, and then manually create an object for the event in order to use it in their code.

The Amazon EventBridge schema registry helps solve this challenge. It automatically finds events and their structure, or schema, and stores them in a shared central location. For Serverlesspresso Extensions, we have created the Serverlesspresso events catalog and filled it with events from the EventBridge schema registry. Here, all Serverlesspresso events are documented to help you understand how to use them in your extensions. This includes the services that produce and consume each event, as well as example schemas for each event.

The event player

The event player is a Step Functions workflow that simulates 15 minutes of operation at the Serverlesspresso bar. It does this by replaying an array of realistic events. Use the event player to generate Serverlesspresso events, when building and testing your extensions. Each event is emitted onto an event bus named Serverlesspresso.

  1. Clone this repository: git clone https://github.com/aws-samples/serverless-coffee.git
  2. Change directory to the event player: cd extensibility/EventPlayer
  3. Deploy the EventPlayer using the AWS SAM CLI:
    sam build && sam deploy --guided

This deploys a Step Functions workflow and a custom event bus named "Serverlesspresso".

Running the events player

  1. Open the event player from the AWS Management Console.
  2. Choose Start execution, leave the default input payload and choose Start execution.

The player takes approximately 15 minutes to complete.

About your extension submission

Extensions are reviewed by the Serverless DA team within 14 days of submission. When you submit your extension, it becomes part of the open source offering and is covered by the existing license in the repo. It may be used by any customer under the same license. For additional guidance and ideas to help build your Serverlesspresso extensions, see the resources listed in the Serverlesspresso extensions GitHub repository.

Conclusion

You can now build extensions for Serverlesspresso, and potentially be featured on the AWS Compute Blog by submitting a Serverlesspresso extension. The best extensions will be added to Serverlesspresso in production.

Some demo extensions have been built and documented at https://github.com/aws-samples/serverless-coffee/tree/main/extensions. You can download and install these extensions to see how they are constructed before creating your own.

Visit the Serverless Workflows Collection to browse the many deployable workflows to help build your serverless applications.

ICYMI: Serverless pre:Invent 2022

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/icymi-serverless-preinvent-2022/

During the last few weeks, the AWS serverless team has been releasing a wave of new features in the build-up to AWS re:Invent 2022. This post recaps some of the most important releases for serverless developers building event-driven applications.

AWS Lambda

Lambda Support for Node.js 18

You can now develop Lambda functions using the Node.js 18 runtime. This version is in active LTS status and considered ready for general use. When creating or updating functions, specify a runtime parameter value of nodejs18.x or use the appropriate container base image to use this new runtime. Lambda’s Node.js runtimes include the AWS SDK for JavaScript.

This enables customers to use the AWS SDK to connect to other AWS services from their function code, without having to include the AWS SDK in their function deployment. This is especially useful when creating functions in the AWS Management Console. It’s also useful for Lambda functions deployed as inline code in CloudFormation templates. This blog post explains the major changes available with the Node.js 18 runtime in Lambda.
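
For example, a minimal handler on the Node.js 18 runtime can import the bundled AWS SDK for JavaScript v3 clients directly, without packaging them in the deployment artifact. The table and key names below are placeholders.

// Sketch: using the AWS SDK for JavaScript v3 that ships with the Node.js 18 runtime.
// Nothing SDK-related needs to be included in the deployment package.
import { DynamoDBClient, GetItemCommand } from '@aws-sdk/client-dynamodb';

const client = new DynamoDBClient({});

export const handler = async (event) => {
  const result = await client.send(new GetItemCommand({
    TableName: process.env.TABLE_NAME,            // placeholder environment variable
    Key: { pk: { S: event.pathParameters.id } },  // assumes an API Gateway proxy event
  }));
  return { statusCode: 200, body: JSON.stringify(result.Item ?? {}) };
};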

Lambda Telemetry API

The AWS Lambda team launched Lambda Telemetry API to provide an easier way to receive enhanced function telemetry directly from the Lambda service and send it to custom destinations. This makes it easier for developers and operators using third-party observability extensions to monitor and observe their Lambda functions.

The Lambda Telemetry API is an enhanced version of the Logs API, which enables extensions to receive platform events, traces, and metrics directly from Lambda, in addition to logs. This enables tooling vendors to collect enriched telemetry from their extensions and send it to any destination.

To see how the Telemetry API works, try the demos in the GitHub repository. Build your own extensions using the Telemetry API today, or use extensions provided by the Lambda observability partners.

.NET tooling

Lambda launched tooling support to enable applications running .NET 7 to be built and deployed on AWS Lambda. This includes applications compiled using .NET 7 native AOT. .NET 7 is the latest version of .NET and brings many performance improvements and optimizations. Customers can use .NET 7 with Lambda in two ways. First, Lambda has released a base container image for .NET 7, enabling customers to build and deploy .NET 7 functions as container images. Second, you can use Lambda’s custom runtime support to run functions compiled to native code using .NET 7 native AOT.

The new AWS Parameters and Secrets Lambda Extension provides a convenient method for Lambda users to retrieve parameters from AWS Systems Manager Parameter Store and secrets from AWS Secrets Manager. Use the extension to improve application performance by reducing latency and cost of retrieving parameters and secrets. The extension caches parameters and secrets, and persists them throughout the lifecycle of the Lambda function.
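
A sketch of what this can look like from a Node.js function is shown below. The local endpoint, default port 2773, and request header follow the extension's documentation, but verify them against the current docs; the parameter name is a placeholder.

// Sketch: reading a parameter through the Parameters and Secrets Lambda Extension's
// local HTTP endpoint (default port 2773) instead of calling Parameter Store directly.
export const handler = async () => {
  const name = encodeURIComponent('/my-app/feature-flag'); // placeholder parameter name
  const response = await fetch(
    `http://localhost:2773/systemsmanager/parameters/get?name=${name}`,
    { headers: { 'X-Aws-Parameters-Secrets-Token': process.env.AWS_SESSION_TOKEN } },
  );
  const { Parameter } = await response.json();
  return Parameter.Value; // repeat reads are served from the extension's cache
};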

Amazon EventBridge

Amazon EventBridge Scheduler

Amazon EventBridge announced Amazon EventBridge Scheduler, a new capability that allows you to create, run, and manage scheduled tasks at scale. With EventBridge Scheduler, you can schedule tens of millions of one-time or recurring tasks across many AWS services without provisioning or managing the underlying infrastructure.

With EventBridge Scheduler, you can create schedules that trigger over 200 services with more than 6,000 APIs. EventBridge Scheduler allows you to configure schedules with a minimum granularity of one minute. It is priced per one million invocations, and the service is included in the AWS Free Tier. See the pricing page for more information. Visit the launch blog post to get started with EventBridge Scheduler.
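
As a sketch, creating a one-time schedule with the AWS SDK for JavaScript might look like the following; the schedule name, timestamp, target ARN, and role ARN are placeholders.

// Sketch: a one-time EventBridge Scheduler schedule that invokes a Lambda function (placeholder values)
import { SchedulerClient, CreateScheduleCommand } from '@aws-sdk/client-scheduler';

const scheduler = new SchedulerClient({});

await scheduler.send(new CreateScheduleCommand({
  Name: 'order-reminder-123',                    // placeholder schedule name
  ScheduleExpression: 'at(2022-12-01T09:00:00)', // one-time schedule; rate() and cron() are also supported
  FlexibleTimeWindow: { Mode: 'OFF' },
  Target: {
    Arn: 'arn:aws:lambda:us-east-1:123456789012:function:SendReminder', // placeholder target
    RoleArn: 'arn:aws:iam::123456789012:role/SchedulerInvocationRole',  // placeholder execution role
    Input: JSON.stringify({ orderId: '123' }),
  },
}));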

EventBridge now supports enhanced filtering capabilities, including the ability to match against characters at the end of a value (suffix filtering), to ignore case (equals-ignore-case), and to have a single EventBridge rule match if any conditions across multiple separate fields are true (OR matching). The supported range for numeric values has also been increased from -1e9 through 1e9 to -5e9 through 5e9. The new filtering capabilities further reduce the need to write and manage custom filtering code in downstream services.
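
The following sketch shows what such a pattern could look like when creating a rule with the AWS SDK for JavaScript; the rule name, source, and detail fields are illustrative.

// Sketch: an event pattern combining suffix matching, equals-ignore-case, and $or (illustrative fields)
import { EventBridgeClient, PutRuleCommand } from '@aws-sdk/client-eventbridge';

const eventBridge = new EventBridgeClient({});

await eventBridge.send(new PutRuleCommand({
  Name: 'image-upload-rule', // placeholder rule name
  EventPattern: JSON.stringify({
    source: ['my.app'],      // placeholder event source
    detail: {
      $or: [
        { fileName: [{ suffix: '.png' }] },                        // suffix matching
        { contentType: [{ 'equals-ignore-case': 'image/jpeg' }] }, // case-insensitive matching
      ],
    },
  }),
}));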

AWS Step Functions

Intrinsic Functions

We have added 14 new intrinsic functions to AWS Step Functions. These are Amazon States Language (ASL) functions that perform basic data transformations. Intrinsic functions allow you to reduce the use of other services, such as AWS Lambda or AWS Fargate to perform basic data manipulation. This helps to reduce the amount of code and maintenance in your application. Intrinsics can also help reduce the cost of running your workflows by decreasing the number of states, number of transitions, and total workflow duration.

Standard Workflows, Express Workflows, and synchronous Express Workflows all support the new intrinsic functions, which can be grouped into six categories.

The intrinsic functions documentation contains the complete list of intrinsics.

Cross-account access capabilities

Customers can now take advantage of identity-based policies in Step Functions, so a workflow can directly invoke resources in other AWS accounts, allowing cross-account service API integrations. The compute blog post demonstrates how to use this cross-account capability with two AWS accounts.

New executions experience for Express Workflows

Step Functions now provides a new console experience for viewing and debugging your Express Workflow executions that makes it easier to trace and root cause issues in your executions.

You can opt in to the new console experience for Step Functions, which allows you to inspect executions using three different views (graph, table, and event view) and adds many new features to enhance the navigation and analysis of executions. You can search and filter your executions and the events in your executions using unique attributes such as state name and error type. Errors are now easier to root cause, as the experience highlights the reason for failure in a workflow execution.

The new execution experience for Express Workflows is now available in all Regions where AWS Step Functions is available. For a complete list of Regions and service offerings, see AWS Regions.

Step Functions Workflows Collection

The AWS Serverless Developer Advocate team created the Step Functions Workflows Collection, a fresh experience that makes it easier to discover, deploy, and share Step Functions workflows. Use the Step Functions workflows collection to find simple “building blocks”, reusable patterns, and example applications to help build your serverless applications with Step Functions. All Step Functions builders are invited to contribute to the collection. This is done by submitting a pull request to the Step Functions Workflows Collection GitHub repository. Each submission is reviewed by the Serverless Developer advocate team for quality and relevancy before publishing.

AWS Serverless Application Model (AWS SAM)

AWS SAM Connector

Speed up serverless development while maintaining security best practices using the new AWS SAM connector. AWS SAM connectors allow builders to focus on the relationships between components without expert knowledge of AWS Identity and Access Management (IAM) or direct creation of custom policies. AWS SAM connectors support AWS Step Functions, Amazon DynamoDB, AWS Lambda, Amazon SQS, Amazon SNS, Amazon API Gateway, Amazon EventBridge, and Amazon S3, with more resources planned in the future.

Connectors are best for builders who are getting started and want to focus on modeling the flow of data and events within their applications. Connectors take the desired relationship model and create the permissions for the relationship to exist and function as intended.
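
A minimal sketch of a connector in an AWS SAM template is shown below; the logical IDs MyFunction and MyTable are placeholders for resources defined elsewhere in the same template.

# Sketch: grant MyFunction read and write access to MyTable using an AWS SAM connector
MyFunctionToTableConnector:
  Type: AWS::Serverless::Connector
  Properties:
    Source:
      Id: MyFunction
    Destination:
      Id: MyTable
    Permissions:
      - Read
      - Write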

View the Developer Guide to find out more about AWS SAM connectors.

SAM CLI Pipelines now supports Open ID Connect Protocol

SAM Pipelines make it easier to create continuous integration and deployment (CI/CD) pipelines for serverless applications with Jenkins, GitLab, GitHub Actions, Atlassian Bitbucket Pipelines, and AWS CodePipeline. With this launch, SAM Pipelines can be configured to support OIDC authentication from providers supporting OIDC, such as GitHub Actions, GitLab, and Bitbucket. SAM Pipelines use the OIDC tokens to configure the AWS Identity and Access Management (IAM) identity providers, simplifying the setup process.

AWS SAM CLI Terraform support

You can now use the AWS SAM CLI to test and debug serverless applications defined using Terraform configurations. This public preview allows you to locally build, test, and debug Lambda functions defined in Terraform. Support for Terraform configurations is currently in preview, and the team is asking for feedback and feature request submissions. The goal is for both communities to help improve the local development process using the AWS SAM CLI. Submit your feedback by creating a GitHub issue.

Still looking for more?

Get your free online pass to watch all the biggest AWS news and updates from this year’s re:Invent.

For more learning resources, visit Serverless Land.

Implementing a UML state machine using AWS Step Functions

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/implementing-a-uml-state-machine-using-aws-step-functions/

This post is written by Michael Havey, Senior Specialist Solutions Architect, AWS.

This post shows how to model a Unified Modeling Language (UML) state machine as an AWS Step Functions workflow. A UML state machine models the behavior of an object, naming each of its possible resting states and specifying how it moves from one state to another. A Step Functions state machine implements the object behavior. This post shows how the UML diagram guides the state machine implementation and traces back to it.

State machines are often used to model real-time systems and devices. This post uses a stock order as an example. What drives a business object from one state to another is its interaction with applications through services. When the object enters a new state, it typically responds by calling the application through a service. It is typically an event arising from that application that transitions the business object to the next state. The UML model declares each state and the service interactions. Step Functions, which is designed for service orchestration, implements the behavior.

Overview

This is the approach discussed in this post:

  1. A developer implements a Step Functions state machine guided by a UML state machine designed in a third-party UML modeling tool. The implementation explicitly traces back to the UML model by logging execution of state machine activities.
  2. To invoke the target application, the Step Functions state machine invokes an AWS Lambda handler function. This invocation implements a UML state machine activity.
  3. The handler function, in turn, invokes the application. The implementation of the call is application-specific.
  4. If a callback from the application is expected, the application sends an event to a Lambda event dispatcher function. The implementation of this message exchange is application-specific.
  5. If a callback is expected, the Lambda event dispatcher function calls back the Step Functions state machine with the application event. This enables the Step Functions state machine to implement a UML state transition to the next state.

Traceability is the best way to link the Step Functions implementation to the UML model. This is because it ensures that the implementation is doing what the model intends.

An alternative is to generate Step Functions code based on the UML model using a standard XML format known as XML Metadata Interchange (XMI). A code generator tool can introspect the XMI to generate code from it. While technically feasible, UML state machines are highly expressive with many patterns and idioms. A generator often can’t produce code as lean and readable as that of a developer.

Walkthrough

This example shows a UML state machine in MagicDraw, a UML design tool. This diagram is the basis for the Step Functions implementation. This Git repository includes the XMI file for the UML diagram and the code to set up the Step Functions implementation.

The walkthrough has the following steps:

  1. Deploy Step Functions and AWS Lambda resources.
  2. Run the Step Functions state machine. Check the execution results to see how they trace back to the UML state machine.
  3. Clean up AWS resources.

Provision resources

To run this example, you need an AWS account with permission to use Step Functions and Lambda. On your machine, install the AWS Command Line Interface (CLI) and the AWS Serverless Application Model (AWS SAM) CLI.

Complete the following steps on your machine:

  1. Clone the Git repository.
  2. In a command shell, navigate to the sam folder of the clone.
  3. Run sam build to build the application.
  4. Run sam deploy --guided to deploy the application to your AWS account.
  5. In the output, find names of Step Functions state machines and Lambda functions created.

The application creates several state machines, but in this post we consider the simplest: Test Buy Sell. The example models the behavior of a buy/sell stock order, which is based on an example from the Step Functions documentation: https://docs.aws.amazon.com/step-functions/latest/dg/sample-lambda-orchestration.html.

Explore UML model for Test BuySell

Begin with the following UML model (also available in the GitHub repository).

In the model:

  1. The black dot on the upper left is the initial state. It has an arrow (a transition) to CheckingStockPrice (a state).
  2. CheckingStockPrice has an activity, called checkStockPrice, of type do. When that state is visited, the activity is automatically run. When the activity finishes, the machine transitions automatically (a completion transition) to the next state.
  3. That state, GeneratingBuySellRecommendation, has its own do activity generateBuySellRecommendation. Completion of that activity moves to the next state.
  4. The next state is Approving, whose activity routeForApproval is of type entry. That activity is run when the state is entered. It waits for an event to move the machine forward. There are three transitions from Approving. Each has a trigger, indicating the type of event expected, called approvalComplete. Each has a guard that distinguishes the outcome of the approval.
  5. If the guard is sell, transition to the state SellingStock.
  6. If it’s buy, transition to the state BuyingStock.
  7. If it’s reject, transition to the terminate state (denoted by an X) and run a transition activity called logReject.
  8. BuyingStock and SellingStock each have a do activity – buyStock and sellStock – and transition on completion to the state ReportingResult. That state has do activity reportResult.
  9. Transition to the final state (the black dot enclosed in a circle).

Explore Step Functions implementation

Find the Step Functions implementation in the AWS Management Console. Under the list of state machines, select the state machine whose name starts with BlogBuySell. Choose Edit to view the definition of the machine. From there, open it in Workflow Studio to show the state machine workflow visualization:

The Step Function state machine implements all the activities from the UML state machine. There are Lambda tasks to implement the major state do activities: Check Stock Price, Generate Buy/Sell Recommendation, Buy Stock, Sell Stock, Report Result. There is also a Lambda function for the transition activity: Log Reject. Each Lambda function traces back to the UML state machine and uses the following format to log trace records:

{
  "sourceState": S,
  "activityType": stateEntry|stateExit|stateDo|transition,
  "activityName": N,
  "trigger": T, // if transition activity
  "guard": G    // if transition activity and the transition has a guard
}

The control flow in the Step Functions state machine intuitively matches the UML state machine. The UML model has mostly completion transitions, so the Step Functions state machine largely flows from one Lambda task to another. However, I must consider the Approving state, where the machine waits for an event and then transitions in one of three directions from the choice state Buy or Sell. For this, use the Step Functions callback capability. Route For Approval is a Lambda task with the Wait For Callback option enabled. The Lambda task has three responsibilities:

  • Executes the UML state entry activity routeForApproval by calling the application.
  • Logs a tracing record that it has executed that activity.
  • Passes the task token provided by the Step Functions state machine to the application.

When the application has an approval decision, it sends an event through messaging. A separate Lambda event dispatcher function receives the message and, using the Step Functions API, calls back the Step Functions state machine with key details from the message: task token, trigger, guard.
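
A sketch of that callback in a Node.js dispatcher is shown below. The shape of the incoming application message (taskToken, trigger, guard, appData) mirrors the fields described above and is an assumption about your messaging integration, not the repository's exact implementation.

// Sketch: the event dispatcher resuming the waiting Step Functions execution
import { SFNClient, SendTaskSuccessCommand } from '@aws-sdk/client-sfn';

const sfn = new SFNClient({});

export const handler = async (message) => {
  await sfn.send(new SendTaskSuccessCommand({
    taskToken: message.taskToken, // token handed to the application by Route For Approval
    output: JSON.stringify({
      trigger: message.trigger,   // for example "approvalComplete"
      guard: message.guard,       // "buy", "sell", or "reject"
      appData: message.appData,
    }),
  }));
};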

Finally, notice the fail step after Log Reject. This implements the terminate state in the UML model.

Execute the Step Functions state machine

Execute the state machine by choosing Start Execution for the BlogBuySell state machine in the Step Functions console. Use this input:

{"appData": "Insert your JSON here"}

The console shows a graph view of the progress of the state machine. It should pause at the Route For Approval task.

Confirm traceability

Check the event view to confirm the tracing back to the UML model. The Task Scheduled event for Check Stock Price shows:

      "sourceState": "CheckingStockPrice",
      "activityType": "stateDo",
      "activityName": "checkStockPrice",

The Task Scheduled event for Generate buy/sell Recommendation shows:

      "sourceState": "GeneratingBuySellRecommendation",
      "activityType": "stateDo",
      "activityName": "generateBuySellRecommendation",

The Task Scheduled event for Route For Approval shows output resembling the following. Your taskToken will be different.

      "sourceState": "Approving",
      "activityType": "stateEntry",
      "activityName": "routeForApproval",
   "taskToken": "AAAAK . . . 99es="

Approve for buy

The state machine is waiting at Route For Approval. Simulate an application event to continue it forward. First, copy the task token value from above, excluding the quotes.

In a separate browser tab, open the Lambda console and find the function whose name contains BlogDummyUMLEventDispatcher. In the Test tab, create a new event:

{
    "taskToken": "<paste the task token here>",
    "trigger": "approvalComplete",
    "guard": "buy",
    "appData": {"x": "y"}
}
 

Choose Test to call the Lambda function with this input, which calls back the state machine.

Confirm execution of approval

In the Step Functions console, confirm that the workflow has completed and has taken the Buy Stock path.

More examples and patterns

The AWS SAM application deploys two additional examples, which show important patterns:

  • Hierarchical or composite states
  • Parallel or orthogonal states
  • Cancellation events
  • Internal transitions
  • Transition to history
  • Using an event loop for complex flow

You can find a discussion of these examples in the Git repo.

Comparing UML and Step Functions state machines

Step Functions transitions tasks in sequence with the ability to conditionally branch, loop, or parallelize tasks. These tasks aren’t quite the same as states in a UML model. In this approach, tasks map to UML states or transition activities.

A UML state machine spends most of its time waiting in its current state for the next event to happen. A standard workflow in Step Functions can wait too. It can run for up to one year because some activities can pause until they are called back by an external trigger. I used that capability to implement a pattern to trigger the next transition by calling back the Step Functions state machine.

Cleaning up

To avoid incurring future charges, navigate to the directory where you deployed the application and run sam delete to undeploy it.

Conclusion

This post shows code recipes for implementing UML state machines using Step Functions. If your organization already uses modeling tools, this discussion helps you understand the Step Functions implementation path. If you are a Step Functions designer, this discussion shows UML’s expressive power as the model for your implementation.

Learn more about Step Functions implementations on the Sample projects for Step Functions page.

Integrating Amazon MemoryDB for Redis with Java-based AWS Lambda

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/integrating-amazon-memorydb-for-redis-with-java-based-aws-lambda/

This post is written by Mansi Y Doshi, Consultant and Aditya Goteti, Sr. Lead Consultant.

Enterprises are modernizing and migrating their applications to the AWS Cloud to improve scalability, reduce cost, innovate, and reduce time to market new features. Legacy applications are often built with RDBMS as the only backend solution.

Modernizing legacy Java applications with microservices requires breaking down a single monolithic application into multiple independent services. Each microservice does a specific job and requires its own database to persist data, but one database does not fit all use cases. Modern applications require purpose-built databases catering to their specific needs and data models.

This post discusses some of the common use cases for one such data store, Amazon MemoryDB for Redis, which is built to provide durability and faster reads and writes.

Use cases

Modern tech stacks often begin with a backend that interacts with a durable database like MongoDB, Amazon Aurora, or Amazon DynamoDB for their data persistence needs.

But, as traffic volume increases, it often makes sense to introduce a caching layer like ElastiCache. This is populated with data by service logic each time a database read happens, such that the subsequent reads of the same data become faster. While ElastiCache is effective, you must manage and pay for two separate data sources for the same data. You must also write custom logic to handle the cache reads/writes besides the existing read/write logic used for durable databases.

While traditional databases like MySQL, Postgres and DynamoDB provide data durability at the cost of speed, transient data stores like ElastiCache trade durability for faster reads/writes (usually within microseconds). ElastiCache provides writes and strongly consistent reads on the primary node of each shard and eventually consistent reads from read replicas. There is a possibility that the latest data written to the primary node is lost during a failover, which makes ElastiCache fast but not durable.

MemoryDB addresses both these issues. It provides strong consistency on the primary node and eventually consistent reads on replica nodes. The consistency model of MemoryDB is similar to that of ElastiCache for Redis. However, in MemoryDB, data is not lost across failovers, allowing clients to read their writes from primaries regardless of node failures. Only data that is successfully persisted in the Multi-AZ transaction log is visible. Replica nodes are still eventually consistent. Because of its distributed transaction model, MemoryDB can provide both durability and microsecond response times.

MemoryDB is ideal for services that are read-heavy and latency-sensitive, such as configuration, search, authentication, and leaderboard services. These must operate at microsecond read latency and still be able to persist the data for high availability and durability. Services like leaderboards, which hold millions of records, often break the data down into smaller chunks or batches and process them in parallel. This requires a data store that can perform calculations on the fly and also store results temporarily. Redis can process millions of operations per second, store temporary calculations for fast retrieval, and run other operations such as aggregations. Because Redis executes commands on a single thread, it also helps to avoid dirty reads and writes.

Another use case is a configuration service, where users store, change, and retrieve their configuration data. In large distributed systems, there are often hundreds of independent services interacting with each other using well-defined REST APIs. These services depend on the configuration data to perform specific actions. The configuration service must serve the required information at a low latency to avoid being a bottleneck for the other dependent services.

MemoryDB can serve reads at microsecond latencies durably. It also persists data across multiple Availability Zones. It uses multi-Availability Zone transaction logs to enable fast failover, database recovery, and node restarts. You can use it as a primary database without the need to maintain another cache to lower data access latency. This also removes the need to maintain an additional caching service, which further reduces cost.

These use cases are a good fit for using MemoryDB. Next, you see how to access, store, and retrieve data in MemoryDB from your Java-based AWS Lambda function.

Overview

This blog shows how to build an Amazon MemoryDB cluster and integrate it with AWS Lambda. Amazon API Gateway and Lambda can be paired together to create a client-facing application, which can be easier to maintain, highly scalable, and secure. Both are fully managed services with no need to provision or manage servers. They can be cost effective when compared to running the application on servers for workloads with long idle periods. Using Lambda authorizers you can also write custom code to control access to your API.

Walkthrough

The following steps show how to provision an Amazon MemoryDB cluster along with Amazon VPC, subnets, security groups and integrate it with a Lambda function using Redis/Jedis Java client. Here, the Lambda function is configured to connect to the same VPC where MemoryDB is provisioned. The steps include provisioning through an AWS SAM template.

Prerequisites

  1. Create an AWS account if you do not already have one and log in.
  2. Configure your account and set up permissions to access MemoryDB.
  3. Install Java 8 or above.
  4. Install Maven.
  5. Install the Java client for Redis (Jedis).
  6. Install the AWS SAM CLI if you do not already have it.

Creating the MemoryDB cluster

Refer to the serverless pattern for a quick setup and customize as required. The AWS SAM template creates VPC, subnets, security groups, the MemoryDB cluster, API Gateway, and Lambda.

To access the MemoryDB cluster from the Lambda function, the security group of the Lambda function is added to the security group of the cluster. The MemoryDB cluster is always launched in a VPC. If the subnet is not specified, the cluster is launched into your default Amazon VPC.

You can also use your existing VPC and subnets and customize the template accordingly. If you are creating a new VPC, you can change the CIDR block and other configuration values as needed. Make sure the DNS hostname and DNS Support of the VPC is enabled. Use the optional parameters section to customize your templates. Parameters enable you to input custom values to your template each time you create or update a stack.

Recommendations

As your workload requirements change, you might want to increase the performance of your cluster or reduce costs by scaling the cluster in or out. To improve read/write performance, you can scale your cluster horizontally by increasing the number of read replicas or shards, for read and write throughput respectively.

To reduce cost when instances are over-provisioned, you can scale in vertically by reducing the node size of your cluster, or scale out by increasing it to overcome CPU bottlenecks or memory pressure. Both vertical and horizontal scaling are applied with no downtime, and cluster restarts are not required. You can customize the following parameters in the memoryDBCluster resource as required.

NodeType: db.t4g.small
NumReplicasPerShard: 2
NumShards: 2

In MemoryDB, all writes are carried out on the primary node of a shard, while reads can be served from the replica nodes. Identifying the right number of read replicas, type of nodes, and number of shards in a cluster is crucial to get optimal performance and to avoid additional cost from over-provisioned resources. It's recommended to always start with the minimum required resources and scale out as needed.

Replicas improve read scalability, and it is recommended to have at least two read replicas per shard. Depending on the payload size and how read-heavy the workload is, more than two might be needed. Adding more read replicas than required does not improve performance and incurs additional cost. The following benchmarking is performed using the redis-benchmark tool, issuing only GET requests to simulate a read-heavy workload.

The metrics for both clusters are almost the same at 10 million requests with a 1 KB payload per request. After increasing the payload size to 5 KB and the number of GET requests to 20 million, the cluster with two primaries and two replicas could not process the load, whereas the second cluster processed it successfully. To achieve the right sizing, load testing is recommended on a staging or pre-production environment with a load similar to production.

Creating a Lambda function and allow access to the MemoryDB cluster

In the lambda-redis/HelloWorldFunction/pom.xml file, add the following dependency. This adds the Jedis Java client used to connect to the MemoryDB cluster:

<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>4.2.0</version>
</dependency>

The simplest way to connect the Lambda function to the MemoryDB cluster is by configuring it within the same VPC where the MemoryDB cluster was launched.

To create a Lambda function, add the following code in the template.yaml file in the Resources section:

HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: HelloWorldFunction
      Handler: helloworld.App::handleRequest
      Runtime: java8
      MemorySize: 512
      Timeout: 900 #seconds
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get
      VpcConfig:
        SecurityGroupIds:
          - !GetAtt lambdaSG.GroupId
        SubnetIds:
          - !GetAtt privateSubnetA.SubnetId
          - !GetAtt privateSubnetB.SubnetId
      Environment:
        Variables:
          ClusterAddress: !GetAtt memoryDBCluster.ClusterEndpoint.Address

Java code to access MemoryDB

  1. In your Java class, connect to Redis using Jedis client:
    HostAndPort hostAndPort = new HostAndPort(System.getenv("ClusterAddress"), 6379);
    JedisCluster jedisCluster = new JedisCluster(Collections.singleton(hostAndPort), 5000, 5000, 2, null, null, new GenericObjectPoolConfig(), true);
  2. You can now perform set and get operations on Redis as follows:
    jedisCluster.set("test", "value");
    jedisCluster.get("test");

JedisCluster maintains its own pool of connections and takes care of connection teardown. But you can also customize the configuration for closing idle connections using the GenericObjectPoolConfig object.
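
For example, the pool can be tuned before constructing the JedisCluster; the values below are illustrative starting points to adjust for your workload.

// Sketch: customizing the connection pool passed to JedisCluster (illustrative values)
GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
poolConfig.setMaxTotal(32);        // maximum connections per cluster node
poolConfig.setMaxIdle(16);         // maximum idle connections kept in the pool
poolConfig.setMinIdle(2);          // idle connections kept ready for new requests
poolConfig.setTestWhileIdle(true); // validate idle connections in the background

JedisCluster jedisCluster = new JedisCluster(
        Collections.singleton(hostAndPort), 5000, 5000, 2, null, null, poolConfig, true);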

Clean Up

To delete the entire stack, run the command “sam delete”.

Conclusion

In this post, you learn how to provision a MemoryDB cluster and access it from Lambda. MemoryDB is suitable for applications requiring microsecond reads and single-digit millisecond writes along with durable storage. Accessing MemoryDB through Lambda and API Gateway further reduces the need to provision and maintain servers.

For more serverless learning resources, visit Serverless Land.

Introducing new intrinsic functions for AWS Step Functions

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/introducing-new-intrinsic-functions-for-aws-step-functions/

Developers use AWS Step Functions, a low-code visual workflow service to build distributed applications, automate IT and business processes, and orchestrate AWS services with minimal code. Step Functions Amazon States Language (ASL) provides a set of functions known as intrinsics that perform basic data transformations.

Customers have asked for additional intrinsics to perform more data transformation tasks, such as formatting JSON strings, creating arrays, generating UUIDs, and encoding data. We have added 14 new intrinsic functions to Step Functions. This blog post examines how to use intrinsic functions to optimize and simplify your workflows.

Why use intrinsic functions?

Intrinsic functions can allow you to reduce the use of other services, such as AWS Lambda or AWS Fargate to perform basic data manipulation. This helps to reduce the amount of code and maintenance in your application.

Intrinsics can also help reduce the cost of running your workflows by decreasing the number of states, number of transitions, and total workflow duration. This allows you to focus on delivering business value, using the time spent on writing custom code for more complex processing operations rather than basic transformations.

Using intrinsic functions

Amazon States Language is a JSON-based, structured language used to define Step Functions workflows. Each state within a workflow receives a JSON input and passes a JSON output to the next state.

ASL enables developers to filter and manipulate data at various stages of a workflow state’s execution using paths. A path is a string beginning with $ that lets you identify and filter subsets of JSON text. Learn how to apply these filters to build efficient workflows with minimal state transitions.

Apply intrinsics using ASL in Task states within the ResultSelector field, or in a Pass state in either the Parameters or Result field. All intrinsic functions have the prefix "States." followed by the function name, as shown in the following example, which uses the new UUID intrinsic to generate a universally unique identifier (UUID):

{
  "Type": "Pass",
  "End": true,
  "Result": {
    "ticketId.$": "States.UUID()"
  }
}

Reducing execution duration with intrinsic functions to lower cost

The following example shows the cost and simplicity benefits of intrinsic functions. The same payload is input to both examples. One uses intrinsic functions, the other uses a Lambda function with custom code. This is an extract from a workflow that is used in production for Serverlesspresso, a serverless ordering system for a pop-up coffee bar. It sanitizes new customer orders against menu options stored in an Amazon DynamoDB table.

This example uses a Lambda function to unmarshal data from a DynamoDB table and iterates through each item, checking if the order is present and therefore valid. This Lambda function has 18 lines of code and depends on an SDK library for DynamoDB operations.

The improved workflow uses a Map state to iterate through and unmarshal the DynamoDB data, and then an intrinsic function within a Pass state to sanitize new customer orders against the menu options. The intrinsic used here is the new States.ArrayContains(), which searches an array for a value.
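
A sketch of how the intrinsic can be applied in a Pass state is shown below; the state name and JSON paths are assumptions, not the production Serverlesspresso definition.

"Sanitize Order": {
  "Type": "Pass",
  "Parameters": {
    "orderIsValid.$": "States.ArrayContains($.menu.drinks, $.order.drink)"
  },
  "ResultPath": "$.validation",
  "Next": "Is Order Valid?"
}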

I run both workflows 1000 times. The following image from an Amazon CloudWatch dashboard shows their average execution time and billed execution time.

The billed execution time for the workflow using intrinsics is half that of the workflow using a Lambda function (100ms vs. 200ms).

These are Express Workflows, so the total workflow cost is calculated as execution cost + duration cost x number of requests. This means the workflow that uses intrinsics costs approximately half that of the one using Lambda. This doesn’t consider the additional cost associated with running Lambda functions. Read more about building cost efficient workflows from this blog post.

Cost saving: Reducing state transitions with intrinsic functions

The previous example shows how a single intrinsic function can have a large impact on workflow duration, which directly affects the cost of running an Express Workflow. Intrinsics can also help to reduce the number of states in a workflow. This directly affects the cost of running a Standard Workflow, which is billed on the number of state transitions.

The following example runs a sentiment analysis on a text input. If it detects negative sentiment, it invokes a Lambda function to generate a UUID, saves the information to a DynamoDB table, and notifies an administrator. The workflow then pauses using the .waitForTaskToken pattern. The workflow resumes when an administrator takes action to either allow or deny a refund. The most common path through this workflow comprises 9 state transitions.

In the following example, I remove the Lambda function, which generates a UUID. It contained the following code:

// Returns a short random string used as the ticket identifier
var AWS = require('aws-sdk');
exports.handler = async (event, context) => {
    let r = Math.random().toString(36).substring(7);
    return r;
};

Instead, I use the new States.UUID() intrinsic in the ResultSelector of the DetectSentiment state.

 "DetectSentiment": {
      "Type": "Task",
      "Next": "Record Transaction",
      "Parameters": {
        "LanguageCode": "en",
        "Text. $": "$. message"
      },
      "Resource": "arn:aws:states:::aws-sdk:comprehend:detectSentiment",
      "ResultSelector": {
        "ticketId.$": "States.UUID()"
      },
      "ResultPath": "$.Sentiment"
    },

This has reduced code, resources, and states. The reduction in states from 9 to 8 means that there is one less state transition in the workflow. This has a positive effect on the cost of my Standard Workflow, which is billed by the number of state transitions. It also means that there are no longer any costs incurred for running a Lambda function.

The new intrinsic functions

Standard Workflows, Express Workflows, and synchronous Express Workflows all support the new intrinsic functions. The new intrinsics can be grouped into six categories.

The intrinsic functions documentation contains the complete list of intrinsics.

Doing more with workflows

With the new intrinsic functions, you can do more with workflows. The following example shows how I apply the States.ArrayLength intrinsic function in the Serverlesspresso workflow to check how many instances of the workflow are currently running, and branch accordingly.

The Step Functions ListExecutions SDK task is first used to retrieve a list of executions for the given state machine. I use States.ArrayLength in the ResultSelector path to retrieve the length of the response array (the total number of executions). It passes the result to a Choice state as a numeric value, allowing the workflow to branch accordingly. Serverlesspresso uses this as a graceful denial of service mechanism, preventing a new customer order when there are too many orders currently in flight.
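A simplified sketch of that task might look like the following. The state names and ARN are invented for illustration, and the response field casing should be verified against the actual ListExecutions response:

"CountRunningExecutions": {
  "Type": "Task",
  "Resource": "arn:aws:states:::aws-sdk:sfn:listExecutions",
  "Parameters": {
    "StateMachineArn": "arn:aws:states:us-east-1:123456789012:stateMachine:OrderProcessor",
    "StatusFilter": "RUNNING"
  },
  "ResultSelector": {
    "executionCount.$": "States.ArrayLength($.Executions)"
  },
  "Next": "IsCapacityAvailable"
}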

Conclusion

AWS has added an additional 14 intrinsic functions to Step Functions. These allow you to reduce the use of other services to perform basic data manipulations. This can help reduce workflow duration, state transitions, code, and additional resource management and configuration.

Apply intrinsics using ASL in Task states within the ResultSelector field, or in a Pass state in either the Parameters or Result field. Check the AWS intrinsic functions documentation for the complete list of intrinsics.

Visit the Serverless Workflows Collection to browse the many deployable workflows to help build your serverless applications.

Building cost-effective AWS Step Functions workflows

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/building-cost-effective-aws-step-functions-workflows/

Builders create AWS Step Functions workflows to orchestrate multiple services into business-critical applications with minimal code. Customers are looking for best practices and guidelines to build cost-effective workflows with Step Functions.

This blog post explains the difference between Standard and Express Workflows. It shows the cost of running the same workload as Express or Standard Workflows. Then it covers how to migrate from Standard to Express, how to combine workflow types to optimize for cost, and how to modularize and nest one workflow inside another.

Step Functions Express Workflows

Express Workflows orchestrate AWS services at a higher throughput of up to 100,000 state transitions per second. They also provide a lower cost of $1.00 per million invocations, versus $25 per million state transitions for Standard Workflows.

Express Workflows can run for a maximum duration of 5 minutes and do not support the .waitForTaskToken or .sync integration patterns. Most Step Functions workflows that do not use these integration patterns and complete within the 5-minute duration limit see both cost and throughput optimizations by converting the workflow type from Standard to Express.

Consider the following example, a naïve implementation of an ecommerce workflow:

When started, it emits a message onto an Amazon SQS queue. An AWS Lambda function processes and approves this asynchronously (not shown). Once processed, the Lambda function persists the state to an Amazon DynamoDB table. The workflow polls the table to check when the action is completed. It then moves on to process the payment, where it repeats the pattern. Finally, the workflow runs a series of update tasks in sequence before completing.
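A minimal sketch of the polling section of this pattern is shown below; the state names, table name, and key are illustrative assumptions rather than the actual workflow definition:

"WaitBeforePolling": {
  "Type": "Wait",
  "Seconds": 10,
  "Next": "GetOrderStatus"
},
"GetOrderStatus": {
  "Type": "Task",
  "Resource": "arn:aws:states:::dynamodb:getItem",
  "Parameters": {
    "TableName": "OrdersTable",
    "Key": {
      "orderId": {"S.$": "$.orderId"}
    }
  },
  "ResultPath": "$.orderStatus",
  "Next": "IsOrderProcessed"
},
"IsOrderProcessed": {
  "Type": "Choice",
  "Choices": [
    {
      "Variable": "$.orderStatus.Item.status.S",
      "StringEquals": "PROCESSED",
      "Next": "ProcessPayment"
    }
  ],
  "Default": "WaitBeforePolling"
}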

I run this workflow 1,000 times as a Standard workflow. I then convert this to an Express Workflow and run another 1,000 times. I create an Amazon CloudWatch dashboard to display the average execution times. The Express Workflow runs on average 0.5 seconds faster than the Standard Workflow and also shows improvements in cost:

Workflow Execution times

Running the Standard Workflow 1,000 times costs approximately $0.42. This excludes the 4,000 state transitions included in the AWS Free Tier every month, and the additional services that are being used. In contrast, running the Express Workflow 1,000 times costs $0.01. How is this calculated?

Standard Workflow cost calculation formula:

Standard Workflows are charged based on the number of state transitions required to run a workload. Step Functions count a state transition each time a step of your workflow runs. You are charged for the total number of state transitions across all your state machines, including retries. The cost is $0.025 per 1,000 state transitions.

A happy path through the workflow comprises 17 transitions (including start and finish).

Total cost = (number of transitions per execution x number of executions) x $0.000025
Total cost = (17 x 1,000) x $0.000025 = $0.42*

*Excluding the 4,000 state transitions included in the AWS Free Tier every month.

Express Workflow cost calculation formula:

Express Workflows are charged based on the number of requests and their duration. Duration is calculated from the time that your workflow begins running until it completes or otherwise finishes, rounded up to the nearest 100 ms. You are also charged for the amount of memory used while running your workflow, billed in 64-MB chunks.

Total cost = (Execution cost + Duration cost) x Number of requests
Execution cost = $0.000001 per request
Duration cost = (Avg billed duration ms / 100) x price per 100 ms

Duration cost = (11,300 ms / 100) x $0.0000001042 = $0.0000117746
Total cost = ($0.000001 + $0.0000117746) x 1,000 = $0.01

This cost changes depending on the amount of memory and GB-hours used. The memory usage for this state machine is less than 64 MB.
See the Step Functions pricing page for more information.

Converting a Standard Workflow to an Express Workflow

Given the cost benefits shown in the previous section, converting existing Standard Workflows to Express Workflows is often a good idea. However, some considerations should be made before doing this. The workflow must finish in less than 5 minutes and not use the .waitForTaskToken or .sync integration patterns. Express Workflows send logging history to CloudWatch Logs at an additional cost.

An additional consideration is idempotency, and exactly-once versus at-least-once execution requirements. If a workload requires an exactly-once execution model, then a Standard Workflow is preferred. Here, tasks and states are never run more than once unless you have specified retry behavior in Amazon States Language (ASL). This makes them suited to orchestrating non-idempotent actions, such as starting an Amazon EMR cluster or processing payments. Express Workflows use an at-least-once model, where there is a possibility that an execution might be run more than once. This makes them ideal for orchestrating idempotent actions. Idempotence refers to an operation that produces the same result (for a given input) irrespective of how many times it is applied.
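As an illustration of that retry behavior, the following ASL fragment (a sketch with assumed state names, function name, and values) retries a non-idempotent payment task on transient errors before failing the workflow:

"ProcessPayment": {
  "Type": "Task",
  "Resource": "arn:aws:states:::lambda:invoke",
  "Parameters": {
    "FunctionName": "ProcessPaymentFunction",
    "Payload.$": "$"
  },
  "Retry": [
    {
      "ErrorEquals": ["Lambda.ServiceException", "States.Timeout"],
      "IntervalSeconds": 2,
      "MaxAttempts": 3,
      "BackoffRate": 2.0
    }
  ],
  "Next": "RecordTransaction"
}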

To convert a Standard Workflow to an Express Workflow directly from within the Step Functions console:

  1. Go to the Step Functions workflow you want to convert, and choose Actions, Copy to new.

  2. Choose Design your workflow visually.
  3. Choose Express then choose Next.
  4. The next two steps allow you to make changes to your workflow design. Choose Next twice.
  5. Name the workflow, assign permissions, logging and tracing configurations, then choose Create state machine.

If converting a Standard Workflow defined in a templating language such as AWS CDK or AWS SAM, you must change both the Type value and the Resource name. The following example shows how to do this in AWS SAM:

StateMachinetoDDBStandard:
    Type: AWS::Serverless::StateMachine
    Properties:
      Type: STANDARD

Becomes:

StateMachinetoDDBExpress:
    Type: AWS::Serverless::StateMachine
    Properties:
      Type: EXPRESS

This does not overwrite the existing workflow, but creates a new workflow with a new name and type.

Better together

Some workloads may require a combination of both long-running and high-event-rate workflows. By nesting Step Functions workflows, you can build larger, more complex workflows out of smaller, simpler workflows.

For example, the initial step in the previous workflow may require a pause for human interaction that takes more than 5 minutes, followed by running a series of idempotent actions. These types of workloads can be ideal for using both Standard and Express workflow types together. This can be achieved by nesting a “child” Express Workflow within a “parent” Standard Workflow. The previous workflow example has been refactored as a parent-child nested workflow.

Deploy this nested workflow solution from the Serverless Workflows Collection.

Nesting workflows

Parent Standard Workflow

Child Express Workflow


Nested workflow metrics

This new blended workflow has a number of advantages. First, the polling pattern is replaced by .waitForTaskToken. This pauses the workflow until a response is received indicating success or failure. In this case, the response is sent by a Lambda function (not shown). This pause can last for up to 1 year, and the wait time is not billable.

This not only simplifies the workflow but also reduces the number of state transitions. Next, the idempotent steps are moved into an Express Workflow. This reduces the number of state transitions in the Standard Workflow and benefits from the high throughput provided by Express Workflows. The child workflow is invoked by using the StartExecution Step Functions API call from the parent workflow.
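A minimal sketch of the parent task that starts the child workflow might look like the following; the state name, ARN, and input fields are placeholders rather than the actual Serverlesspresso definition:

"StartChildExpressWorkflow": {
  "Type": "Task",
  "Resource": "arn:aws:states:::states:startExecution",
  "Parameters": {
    "StateMachineArn": "arn:aws:states:us-east-1:123456789012:stateMachine:ChildExpressWorkflow",
    "Input": {
      "order.$": "$.order",
      "AWS_STEP_FUNCTIONS_STARTED_BY_EXECUTION_ID.$": "$$.Execution.Id"
    }
  },
  "End": true
}

Passing the parent execution ID in the input is a convention that helps associate the nested execution with the execution that started it.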

This new workflow combination runs 1,000 times, costing a total of 20 cents. There is no additional charge for starting a nested workflow: it is treated as another state transition. The nested workflow itself is billed the same way as all Step Functions workflows.

Here’s how the cost is calculated:

Parent Standard Workflow:

Total cost = (number of transitions per execution x number of executions) x $0.000025
Total cost = (8 x 1,000) x $0.000025 = $0.20

Child Express Workflow:

Total cost = (Execution cost + Duration cost) x Number of requests
Execution cost = $0.000001 per request
Duration cost = (Avg billed duration ms / 100) x price per 100 ms

Duration cost = (1,300 ms / 100) x $0.0000001042 = $0.0000013546
Total cost = ($0.000001 + $0.0000013546) x 1,000 = $0.0024

Total cost for nested workflow = (cost of Parent Standard Workflow) + (cost of Child Express Workflow)
Total cost for nested workflow = $0.20 + $0.0024 ≈ $0.20 (20 cents) per 1,000 executions.

Conclusion

This blog post explains the difference between Standard and Express Workflows. It describes the exactly-once and at-least-once execution models and how these relate to idempotency. It compares the cost of running the same workload as an Express and Standard Workflow, showing how to migrate from one to the other and the considerations to make before doing so.

Finally, it explains how to combine workflow types to optimize for cost. Nesting state machines between types enables teams to work on individual workflows, turning them into modular reusable building blocks.

Visit the Serverless Workflows Collection to browse the many deployable workflows to help build your serverless applications.

Introducing the new AWS Step Functions Workflows Collection

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/introducing-the-new-aws-step-functions-workflows-collection/

Today, the AWS Serverless Developer Advocate team introduces the Step Functions Workflows Collection, a fresh experience that makes it easier to discover, deploy, and share Step Functions workflows.

Builders create Step Functions workflows to orchestrate multiple services into business-critical applications with minimal code. Customers were looking for opinionated templates that implement best practices for building serverless applications with Step Functions.

This blog post explains what Step Functions workflows are and what challenges they help solve. It shows how to use the new Step Functions workflows collection to find simple “building blocks”, reusable patterns, and example applications to help build your serverless applications with Step Functions.

Overview

Large serverless applications often comprise multiple decoupled resources, which can make it challenging to observe the application and discover errors. Step Functions is a low-code visual workflow service that helps solve this challenge. It provides instant visual understanding of an application, the services it integrates with, and any errors that might occur during execution.

Step Functions workflows comprise a sequence of steps where the output of one step passes on as input to the next. Step Functions can integrate with over 220 AWS services by using an AWS SDK integration task. This allows users to call AWS SDK actions directly without the need to write additional code.
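For example, a single SDK integration task can publish a message to Amazon SNS without any intermediate code. The topic ARN and state name below are placeholder assumptions for illustration:

"NotifySubscribers": {
  "Type": "Task",
  "Resource": "arn:aws:states:::aws-sdk:sns:publish",
  "Parameters": {
    "TopicArn": "arn:aws:sns:us-east-1:123456789012:order-events",
    "Message.$": "States.JsonToString($.order)"
  },
  "End": true
}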

Getting started with the Step Functions workflows collection

Explore the Step Functions workflows collection to discover new workflows. The collection has three levels of workflows:

  1. Fundamental: A simple, reusable building block.
  2. Pattern: A common reusable component of an application.
  3. Application: A complete serverless application or microservice.

Workflows are also categorized by multiple use cases, including data processing, SaaS integration, and security automation. Once you find a workflow that you want to use in your application:

  1. Choose View to go to the workflow details page.
  2. Choose Template from the workflow details page to view the infrastructure as code (IaC) deployment template. Here, you can see how to configure resources with AWS best practices.
    The workflows collection currently supports deployable workflow templates defined with the AWS Serverless Application Model (AWS SAM) or the AWS Cloud Development Kit (AWS CDK).

    Structure of an AWS SAM template

    AWS SAM is an open-source framework for building serverless applications. It provides shorthand syntax that makes it easier to build and deploy serverless applications. With only a few lines, you can define each resource using YAML or JSON.

    An AWS SAM template can have serverless-specific resources or standard AWS CloudFormation resources. When you run sam deploy, AWS SAM transforms serverless resources into CloudFormation syntax.

    Structure of an AWS CDK template

    The AWS CDK provides another way to define your application resources using common programming languages. The CDK is an open-source framework that you can use to model your applications. As with AWS SAM, when you run npx cdk deploy --app 'ts-node .', the CDK transforms the template into AWS CloudFormation syntax and creates the specified resources for you.

  3. Choose Workflow Definition to see the Amazon States Language (ASL) definition that defines the workflow. ASL is a JSON-based, structured language for authoring Step Functions workflows. It enables developers to filter and manipulate data at various stages of a workflow state’s execution using paths. A path is a string beginning with $ that lets you identify and filter subsets of JSON text. Learning how to apply these filters helps to build efficient workflows with minimal state transitions.

    The more advanced workflows in the collection show how to use intrinsic functions to manipulate payload data. Intrinsic functions are ASL constructs that help build and convert payloads without creating additional task state transitions. Use intrinsic functions in Task states within the ResultSelector field, or in a Pass state in either the Parameters or Result field. The Step Functions documentation shows examples of how to:

    1. Construct strings from interpolated values.
    2. Convert a JSON object to a string.
    3. Convert arguments to an array.

    Use the workflow definition to see how to configure each workflow state. This is helpful to understand how to define task types you are unfamiliar with and how to apply intrinsic functions to help reduce state transitions. Use the data flow simulator to model and refine your input and output path processing.
  4. Follow the Download and Deployment commands to deploy the workflow into your AWS account. Use the Additional resources to read more about the workflow.
  5. Once you have deployed the workflow into your AWS account, continue building in the AWS Management Console with Workflow Studio, or locally by editing the downloaded files.

    Continue building with Workflow Studio

    To edit the workflow in Workflow Studio, select the workflow from the Step Functions console and choose Edit > Workflow Studio.

    From here, you can drag-and-drop Flow and Task states onto the canvas, then configure states and data transformations using built-in forms. Workflow Studio composes your workflow definition in real time. If you are new to Step Functions, Workflow Studio provides an easy way to continue building your first workflow that delivers business value.

    Continue building in your local IDE
    For developers who prefer to build locally, the AWS Toolkit for VS Code enables you to define, visualize, and create your Step Functions workflows without leaving VS Code. The toolkit also provides code snippets for seven different ASL state types and additional service integrations to speed up workflow development. To continue building locally with VS Code:

    1. Download the AWS Toolkit for VS Code.
    2. Open the statemachine.asl.json definition file, and choose Render graph to visualize the workflow as you build.

Contributing to the Step Functions Workflows collection

Anyone can contribute a workflow to the Step Functions workflows collection. New workflow files can be hosted on GitHub, either in the AWS workflows-collection repository or in a pre-existing repository of your own.

To submit a workflow:

  1. Choose Submit a workflow from the navigation section.
  2. Fill out the GitHub issue template.
  3. Clone the repository, and duplicate and rename the example _workflow_model directory.
  4. Add the associated workflow template files, ASL, and workflow image.
  5. Add the required meta information to `example-workflow.json`.
  6. Make a Pull Request to the repository with the new workflow files.

Additional guidance can be found in the repository’s PUBLISHING.md file.

Conclusion

Today, the AWS Serverless Developer Advocate team is launching a new Serverless Land experience called “The Step Functions workflows collection”. This helps builders search, deploy, and contribute example Step Functions workflows.

The workflows collection simplifies the Step Functions getting started experience, and also shows more advanced users how to apply best practices to their workflows. These examples consist of fundamental building blocks for workflows, common application patterns implemented as workflows, and end-to-end applications.

All Step Functions builders are invited to contribute to the collection. This is done by submitting a pull request to the Step Functions Workflows Collection GitHub repository. Each submission is reviewed by the Serverless Developer Advocate team for quality and relevance before publishing.

You can now learn to use Step Functions with a new workshop called the AWS Step Functions Workshop. This self-paced tutorial teaches you how to use the primary features of Step Functions through a series of interactive modules.

For more information on building applications with Step Functions visit Serverlessland.com.

Orchestrating AWS Glue crawlers using AWS Step Functions

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/orchestrating-aws-glue-crawlers-using-aws-step-functions/

This blog post is written by Justin Callison, General Manager, AWS Workflow.

Organizations generate terabytes of data every day in a variety of semistructured formats. AWS Glue and Amazon Athena can give you a simpler and more cost-effective way to analyze this data with no infrastructure to manage. AWS Glue crawlers identify the schema of your data and manage the metadata required to analyze the data in place, without the need to transform this data and load into a data warehouse.

The timing of when your crawlers run and complete is important. You must ensure the crawler runs after your data has updated and before you query it with Athena or analyze with an AWS Glue job. If not, your analysis may experience errors or return incomplete results.

In this blog, you learn how to use AWS Step Functions, a low-code visual workflow service that integrates with over 220 AWS services. The service orchestrates your crawlers to control when they start, confirm completion, and combine them into end-to-end, serverless data processing workflows.

Using Step Functions to orchestrate multiple AWS Glue crawlers provides a number of benefits when compared to implementing a solution directly with code. Firstly, the workflow provides an instant visual understanding of the application, and any errors that might occur during execution. Step Functions’ ability to run nested workflows inside a Map state helps to decouple and reuse application components with native array iteration. Finally, the Step Functions Wait state lets the workflow periodically poll the status of the crawl job, without incurring additional cost for idle wait time.

Deploying the example

With this example, you create three datasets in Amazon S3, then use Step Functions to orchestrate AWS Glue crawlers to analyze the datasets and make them available to query using Athena.

You deploy the example with AWS CloudFormation using the following steps:

  1. Download the template.yaml file from here.
  2. Log in to the AWS Management Console and go to AWS CloudFormation.
  3. Navigate to Stacks -> Create stack and select With new resources (standard).
  4. Select Template is ready and Upload a template file, then Choose File and select the template.yaml file that you downloaded in Step 1 and choose Next.
  5. Enter a stack name, such as glue-stepfunctions-demo, and choose Next.
  6. Choose Next, check the acknowledgement boxes in the Capabilities and transforms section, then choose Create stack.
  7. After deployment, the status updates to CREATE_COMPLETE.

Create your datasets

Navigate to Step Functions in the AWS Management Console and select the create-dataset state machine from the list. This state machine uses Express Workflows and the Parallel state to build three datasets concurrently in S3. The first two datasets include information by user and by location respectively, with files per day over the 5-year period from 2016 to 2020. The third dataset is a simpler, all-time summary of data by location.

To create the datasets, you choose Start execution from the toolbar for the create-dataset state machine, then choose Start execution again in the dialog box. This runs the state machine and creates the datasets in S3.

Navigate to the S3 console and view the glue-demo-databucket created for this example. In this bucket, in a folder named data, there are three subfolders, each containing a dataset.

The all-time-location-summaries folder contains a set of JSON files, one for each location.

The daily-user-summaries and daily-location-summaries contain a folder structure with nested folders for each year, month, and date. In addition to making this data easier to navigate via the console, this folder structure provides hints to AWS Glue that it can use to partition this dataset and make it more efficient to query.

Crawling

You now use AWS Glue crawlers to analyze these datasets and make them available to query. Navigate to the AWS Glue console and select Crawlers to see the list of crawlers that you created when you deployed this example. Select the daily-user-summaries crawler to view its details and note that it has tags assigned to indicate metadata, such as the datatype of the data and whether the dataset is partitioned.

Now, return to the Step Functions console and view the run-crawlers-with-tags state machine. This state machine uses AWS SDK service integrations to get a list of all crawlers matching the tag criteria you enter. It then uses the Map state and the optimized service integration for Step Functions to execute the run-crawler state machine for each of the matching crawlers concurrently. The run-crawler state machine starts each crawler and monitors its status until the crawler completes. Once each of the individual crawlers has completed, the run-crawlers-with-tags state machine also completes.
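The polling loop inside run-crawler can be sketched roughly as follows. The Glue SDK actions are real, but the state names, wait time, and result paths are assumptions to verify against the deployed example:

"StartCrawler": {
  "Type": "Task",
  "Resource": "arn:aws:states:::aws-sdk:glue:startCrawler",
  "Parameters": {
    "Name.$": "$.crawlerName"
  },
  "ResultPath": null,
  "Next": "WaitForCrawler"
},
"WaitForCrawler": {
  "Type": "Wait",
  "Seconds": 30,
  "Next": "GetCrawlerStatus"
},
"GetCrawlerStatus": {
  "Type": "Task",
  "Resource": "arn:aws:states:::aws-sdk:glue:getCrawler",
  "Parameters": {
    "Name.$": "$.crawlerName"
  },
  "ResultPath": "$.crawlerInfo",
  "Next": "IsCrawlerComplete"
},
"IsCrawlerComplete": {
  "Type": "Choice",
  "Choices": [
    {
      "Variable": "$.crawlerInfo.Crawler.State",
      "StringEquals": "READY",
      "Next": "CrawlComplete"
    }
  ],
  "Default": "WaitForCrawler"
},
"CrawlComplete": {
  "Type": "Succeed"
}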

To initiate the crawlers:

  1. Choose Start execution from the top of the page when viewing the run-crawlers-with-tags state machine
  2. Provide the following as Input
    {"tags": {"datatype": "json"}}
  3. Choose Start execution.

After 2-3 minutes, the execution finishes with a Succeeded status once all three crawlers have completed. During this time, you can navigate to the run-crawler state machine to view the individual, nested executions per crawler or to the AWS Glue console to see the status of the crawlers.

Querying the data using Amazon Athena

Now, navigate to the Athena console where you can see the database and tables created by your crawlers. Note that AWS Glue recognized the partitioning scheme and included fields for year, month, and date in addition to user and usage fields for the data contained in the JSON files.

If you have not used Athena in this account before, you see a message instructing you to set a query result location. Choose View settings -> Manage -> Browse S3 and select the athena-results bucket that you created when you deployed the example. Choose Save then return to the Editor to continue.

You can now run queries such as the following to calculate the total usage for all users over 5 years.

SELECT SUM(usage) all_time_usage FROM "daily_user_summaries"

You can also add filters, as shown in the following example, which limit results to those from 2016.

SELECT SUM(usage) all_time_usage FROM "daily_user_summaries" WHERE year = '2016'

Note this second query scanned only 17% as much data (133 KB vs 797 KB) and completed faster. This is because Athena used the partitioning information to avoid querying the full dataset. While the differences in this example are small, for real-world datasets with terabytes of data, your cost and latency savings from partitioning data can be substantial.

The disadvantage of a partitioning scheme is that new folders are not included in query results until you add new partitions. Re-running your crawler identifies and adds the new partitions and using Step Functions to orchestrate these crawlers makes that task simpler.

Extending the example

You can use these example state machines as they are in your AWS accounts to manage your existing crawlers. You can use Amazon S3 event notifications with Amazon EventBridge to trigger crawlers based on data changes. With the Optimized service integration for Amazon Athena, you can extend your workflows to execute queries against these crawled datasets. And you can use these examples to integrate crawler execution into your end-to-end data processing workflows, creating reliable, auditable workflows from ingestion through to analysis.

Conclusion

In this blog post, you learn how to use Step Functions to orchestrate AWS Glue crawlers. You deploy an example that generates three datasets, then uses Step Functions to start and coordinate crawler runs that analyze this data and make it available to query using Athena.

To learn more about Step Functions, visit Serverless Land.

Debugging AWS Step Functions executions with the new console experience

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/debugging-aws-step-functions-executions-with-the-new-console-experience/

Today, AWS Step Functions introduces a new opt-in console experience that makes it easier to analyze, debug, and optimize Standard Workflows.

Builders create Step Functions workflows to orchestrate multiple services into business-critical applications with minimal code. Customers wanted better ways to debug workflow executions and analyze the payload as it passes through each state.

This blog post explains the new capabilities of the enhanced Step Functions executions page. It shows how to debug workflows quickly, sort and filter on state events, and view the input and output path processing for each state.

Overview

The new Execution Details page allows you to inspect executions using three different view types: Graph view, Table view, and Event view. It has multi-level navigation enhancements for analyzing the Map state, the ability to search execution history based on unique attributes, and improved event and table navigation with filtering, sorting, and pagination. For a full list of all the feature enhancements, see the Step Functions documentation.

Getting started

To get started, go to the Step Functions state machines page in the AWS Management Console:

  1. Choose any standard workflow from the list.
  2. From the workflows executions list page, choose an execution to analyze.
  3. Choose the New executions page button:

An enhanced execution summary section at the top of the page contains some new information:

  1. You can copy the execution ARN to the clipboard by choosing the copy icon.
  2. This section now shows the number of state transitions for the execution. This is helpful for optimizing the cost of your workflows. With Step Functions, you pay per state transition each month, beyond those included in the AWS Free Tier.
  3. You can view the total execution duration.

Below the execution summary is a new section that displays execution errors. This helps to find the root cause of any workflow faults or failures:

Choosing between the execution views

The following examples show how to use the new execution views to inspect a workflow execution. This example focuses on the Order Processing Workflow that powers Serverlesspresso. Serverlesspresso is an interactive serverless application showcased at AWS re:Invent and AWS Summits. It allows attendees to order coffee from their smartphones. Each order starts a workflow execution.

Graph view

The graph view provides a visual representation of the workflow execution path. It shows which states succeeded, failed, or are currently in progress, and any errors caught. The legend at the bottom of the graph helps to decode each color.

To access the Graph view, choose the Graph view from the view navigation, as shown in the following screenshot. There is a new option to render the graph vertically or horizontally, which you can choose from the Layout option.


The graph view shows that the workflow caught an error at the Emit–Workflow Started TT state. I choose this state from the graph to view more details about it.

The Events tab

Each state in a workflow execution moves through a sequence of events, from TaskStateEntered to TaskStateExited. The Events tab, shown in the following image, displays all the events for the selected state, with their corresponding timestamp.

You can drill down into an event to see its output. In this case, the TaskTimedOut event happens to the selected state with the error message “States.Timeout”. This corresponds with the Caught error shown in the Graph view.

The Input & Output tab

Amazon States Language (ASL) enables you to filter and manipulate data at various stages of a workflow state’s execution using paths. A path is a string beginning with $ that lets you identify and filter subsets of JSON text. Learning how to apply these filters helps to build efficient workflows with minimal state transitions.

Select the Input & Output tab, and then toggle the Advanced view option to display the payload after each path process is applied.

This is useful for checking what each JSON path evaluates to. For example, the following images show how I configure the state Parameters, along with the Task input. This is what the parameters evaluate to after Step Functions applies the JSON path processing:
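As a simplified illustration of what the Advanced view shows, assume a task input of {"order": {"drink": "Latte"}} (an invented example) and the following Parameters block:

"Parameters": {
  "LanguageCode": "en",
  "Text.$": "$.order.drink"
}

After JSON path processing, the effective task input becomes:

{
  "LanguageCode": "en",
  "Text": "Latte"
}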

State Details and Definition tabs

The new execution page lets you view a state’s definition and execution details in isolation from the other states. The task details section contains additional information such as the Duration, Heartbeat, Started After and Timeout values.

Table view

The Table view provides a tabular representation of each state. Use this view to access information quickly about a state’s duration, resources, or status. A new timeline column shows the relative duration that each state took to complete. You can configure which columns are displayed.

To search and filter the table based on unique attributes such as state name or error type, start typing into the search input. Use the predictive autocomplete to define the search criteria. You can also choose a relative or absolute time range to filter by.

Event view

Step Functions stores all changes to state as a sequence of events to give you more visibility into the execution. The event view allows you to find and investigate a particular state event quickly by using the search and sort options:

  1. Choose the Timestamp column header to list events in reverse order of occurrence.
  2. Choose the arrow in the left-most column to reveal more details about each event.
  3. Use the search input to search by keyword, state name, event type, or attribute.
  4. The date button lets you filter events by date or time.

Map State

The Map state ("Type": "Map") allows you to run a set of steps for each element of an input array. The Step Functions execution page now helps you investigate and debug workflows using the Map state with the following enhancements:

  1. A hierarchical table view of steps for each iteration.
  2. An iterations overview, showing the execution summary at a glance.
  3. A paginated list of every event across all iterations.

The following Serverlesspresso workflow processes orders in batches. The Map state iterates over each order, sanitizes each item, and checks that it is currently in stock. If the item is not in stock, the Map state throws a failure. If it is in stock, the order is recorded to a database, and a new event is emitted onto the serverless event bus.
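A reduced sketch of such a Map state is shown below. It is not the exact Serverlesspresso definition; the field names, paths, and the single iterator state are assumptions to keep the example short:

"SanitizeOrderItems": {
  "Type": "Map",
  "ItemsPath": "$.order.items",
  "MaxConcurrency": 5,
  "Parameters": {
    "item.$": "$$.Map.Item.Value",
    "menu.$": "$.menu.available"
  },
  "Iterator": {
    "StartAt": "CheckItemInStock",
    "States": {
      "CheckItemInStock": {
        "Type": "Pass",
        "Parameters": {
          "item.$": "$.item",
          "inStock.$": "States.ArrayContains($.menu, $.item)"
        },
        "End": true
      }
    }
  },
  "ResultPath": "$.sanitizedItems",
  "Next": "RecordOrder"
}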

The workflow runs for a new batch of orders and produces the following execution results page:

Using the Graph view:

  1. You can step through each iteration using the Map iteration viewer
  2. See the summary status in the iterations overview section

The Table view shows a hierarchical list of each iteration, and I can drill down into the failed execution to investigate further.

Use the Event view to search for failed executions to quickly filter the results:

Conclusion

Today, Step Functions is launching a new opt-in console experience to help builders analyze, debug, and optimize Step Functions Standard Workflows. These enhancements include three different ways to view your workflow executions, better visibility into workflows that use the Map state, fast access to workflow faults and failures, and new tools to search and sort workflow executions. This blog post shows how to use the new views to debug workflows, sort and filter on state events, and view the input and output path processing for each state.

The new console executions experience is generally available in AWS Regions where Step Functions is available.

For more information on building applications with Step Functions, visit serverlessland.com.

Optimizing AWS Lambda function performance for Java

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/optimizing-aws-lambda-function-performance-for-java/

This post is written by Mark Sailes, Senior Specialist Solutions Architect.

This blog post shows how to optimize the performance of AWS Lambda functions written in Java, without altering any of the function code. It shows how Java virtual machine (JVM) settings affect the startup time and performance. You also learn how you can benchmark your applications to test these changes.

When a Lambda function is invoked for the first time, or when Lambda is horizontally scaling to handle additional requests, an execution environment is created. The first phase in the execution environment’s lifecycle is initialization (Init).

For Java managed runtimes, a new JVM is started and your application code is loaded. This is called a cold start. Subsequent requests then reuse this execution environment. This means that the Init phase does not need to run again. The JVM will already be started. This is called a warm start.

In latency-sensitive applications such as customer facing APIs, it’s important to reduce latency where possible to give the best possible experience. Cold starts can increase the latency for APIs when they occur.

How can you improve cold start latency?

Changing the tiered compilation level can help you to reduce cold start latency. By setting the tiered compilation level to 1, the JVM uses the C1 compiler. This compiler quickly produces optimized native code but it does not generate any profiling data and never uses the C2 compiler.

Tiered compilation is a feature of the Java virtual machine (JVM). It allows the JVM to make best use of both of the just-in-time (JIT) compilers. The C1 compiler is optimized for fast start-up time. The C2 compiler is optimized for the best overall performance but uses more memory and takes a longer time to achieve it.

There are five different levels of tiered compilation. Level 0 is where Java bytecode is interpreted. Level 4 is where the C2 compiler analyzes profiling data collected during application startup. It observes code usage over a period of time to find the best optimizations. Choosing the correct level can help you optimize your performance.

Changing the tiered compilation level to 1 can reduce cold start times by up to 60%. Thanks to changes in the Lambda execution environment, you can do this in one step with an environment variable for all Java managed runtimes.

Language-specific environment variables

Lambda supports the customization of the Java runtime via language-specific environment variables. The environment variable JAVA_TOOL_OPTIONS allows you to specify additional command line arguments to be used when Java is launched. Using this environment variable, you can change various aspects of the JVM configuration, including garbage collection functionality, memory settings, and the configuration for tiered compilation. To change the tiered compilation level to 1, set the value of JAVA_TOOL_OPTIONS to "-XX:+TieredCompilation -XX:TieredStopAtLevel=1". When the Java managed runtime starts, any value set is included in the program arguments. For more information on how you can collect and analyze garbage collection data, read our Field Notes: Monitoring the Java Virtual Machine Garbage Collection on AWS Lambda.

Customer facing APIs

The following diagram is an example architecture that might be used to create a customer-facing API. Amazon API Gateway is used to manage a REST API and is integrated with Lambda to handle requests. The Lambda function reads and writes data to Amazon DynamoDB to serve the requests.

This is an example use case, which would benefit from optimization. The shorter the duration of each request made to the API, the better the customer experience will be.

You can explore the code for this example in the GitHub repo: https://github.com/aws-samples/aws-lambda-java-tiered-compilation-example. The project includes the Lambda function source code, infrastructure as code template, and instructions to deploy it to your own AWS account.

Measuring cold starts

Before you add the environment variable to your Lambda function, measure the current duration for a request. One way to do this is by using the test functionality in the Lambda console.

The following screenshot is a summary from a test invoke, run from the console. You can see that it is a cold start because it includes an Init duration value. If the summary doesn’t include an Init duration, it is a warm start. In this case, the duration is 5,313ms.

Applying the optimization

This change can be configured using AWS Serverless Application Model (AWS SAM), AWS Cloud Development Kit (CDK), AWS CloudFormation, or from within the AWS Management Console.

Using the AWS Management Console:

  1. Navigate to the AWS Lambda console.
  2. Choose Functions and choose the Lambda function to update.
  3. From the menu, choose the Configuration tab and Environment variables. Choose Edit.
  4. Choose Add environment variable. Add the following:
    – Key: JAVA_TOOL_OPTIONS
    – Value: -XX:+TieredCompilation -XX:TieredStopAtLevel=1

  5. Choose Save. You can verify that the changes are applied by invoking the function and viewing the log events in Amazon CloudWatch. The log line Picked up JAVA_TOOL_OPTIONS: -XX:+TieredCompilation -XX:TieredStopAtLevel=1 is added by the JVM during startup.

Checking if performance has improved

Invoke the Lambda function again to see if performance has improved.

The following screenshot shows the results of a test for a function with tiered compilation set to level 1. The duration is 2,169 ms. The cold start duration has decreased by 3,144 ms (59%).

Other use cases

This optimization can be applied to other use cases. Examples could include image resizing, document generation, and near real-time ETL pipelines. The common trait is that they do a small number of discrete pieces of work in each execution.

The function code doesn’t have as many candidates for further optimization with the C2 compiler. Even if the C2 compiler did make further optimizations there wouldn’t be enough usage of those optimizations to decrease the total execution time. Instead of allowing this extra compilation to happen, you can tell the JVM not to use the C2 compiler and only use C1.

This optimization may not be suitable if a Lambda function is running for minutes or is repeating the same piece of code thousands of times within the same execution. Frequently executed sections of code are called hot spots, and are prime candidates for further optimization with the C2 compiler.

The C2 compiler analyzes profiling data collected as the application runs, and produces a more efficient way to execute that piece of code. After the optimization by the C2 compiler, that section of code executes more quickly. Because it is repeated thousands of times in a single Lambda invocation, the overhead of the optimization is worth it overall. An example use case where this would happen is in Monte Carlo simulations. Simulations of random events are calculated thousands, millions, or even billions of times to analyze the most likely outcomes.

Conclusion

In this post, you learn how to improve Lambda cold start performance by up to 60% for functions running the Java runtime. Thanks to the recent changes in the Java execution environment, you can implement these optimizations by adding a single environment variable.

This optimization is suitable for Java workloads such as customer-facing APIs, just-in-time image resizing, near real-time data processing pipelines, and other short-running processes. For more information on tiered compilation, read about Tiered Compilation in JVM.

For more serverless learning resources, visit Serverless Land.

Mocking service integrations with AWS Step Functions Local

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/mocking-service-integrations-with-aws-step-functions-local/

This post is written by Sam Dengler, Principal Specialist Solutions Architect, and Dhiraj Mahapatro, Senior Specialist Solutions Architect.

AWS Step Functions now supports over 200 AWS service integrations via AWS SDK integration. Developers want to build and test control flow logic for workflows using branching logic, error handling, and retries. This allows for precise workflow execution with deterministic results. Additionally, developers use Step Functions’ input and output processing features to transform data as it enters and exits tasks.

Developers can test their state machines locally using Step Functions Local before deploying them to an AWS account. However, state machines that use service integrations like AWS Lambda, Amazon SQS, or Amazon SNS require Step Functions Local to perform calls to AWS service endpoints. Often, developers want to test the control and data flows of their state machine executions in isolation, without any dependency on service integration availability.

Today, AWS is releasing Mocked Service Integrations for Step Functions Local. This allows developers to define sample outputs from AWS service integrations. You can combine them into test case scenarios to validate workflow control and data flow definitions. You can find the code used in this post in the Step Functions examples GitHub repository.

Sales lead generation sample workflow

In this example, new sales leads are created in a customer relationship management system. This triggers the sample workflow execution using input data, which provides information about the contact.

Using the sales lead data, the workflow first validates the contact’s identity and address. If valid, it uses Step Functions’ AWS SDK integration for Amazon Comprehend to call the DetectSentiment API. It uses the sales lead’s comments as input for sentiment analysis.

If the comments have a positive sentiment, it adds the sales leads information to a DynamoDB table for follow-up. The event is published to Amazon EventBridge to notify subscribers.

If the sales lead data is invalid or a negative sentiment is detected, it publishes events to EventBridge for notification. No record is added to the Amazon DynamoDB table. The following Step Functions Workflow Studio diagram shows the control logic:

The full workflow definition is available in the code repository. Note the workflow task names in the diagram, such as DetectSentiment, which are important when defining the mocked responses.

Sentiment analysis test case

In this example, you test a scenario in which:

  1. The identity and address are successfully validated using a Lambda function.
  2. A positive sentiment is detected using the Comprehend.DetectSentiment API after three retries.
  3. A contact item is written to a DynamoDB table successfully
  4. An event is published to an EventBridge event bus successfully

The execution path for this test scenario is shown in the following diagram (the red and green numbers have been added). 0 represents the first execution; 1, 2, and 3 represent the max retry attempts (MaxAttempts), in case of an InternalServerException.

Mocked response configuration

To use service integration mocking, create a mock configuration file with sections specifying mock AWS service responses. These are grouped into test cases that can be activated when executing state machines locally. The following example provides code snippets and the full mock configuration is available in the code repository.

To mock a successful Lambda function invocation, define a mock response that conforms to the Lambda.Invoke API response elements. Associate it to the first request attempt:

"CheckIdentityLambdaMockedSuccess": {
  "0": {
    "Return": {
      "StatusCode": 200,
      "Payload": {
        "statusCode": 200,
        "body": "{\"approved\":true,\"message\":\"identity validation passed\"
}"
      }
    }
  }
}

To mock the DetectSentiment retry behavior, define failure and successful mock responses that conform to the Comprehend.DetectSentiment API call. Associate the failure mocks to three request attempts, and associate the successful mock to the fourth attempt:

"DetectSentimentRetryOnErrorWithSuccess": {
  "0-2": {
    "Throw": {
      "Error": "InternalServerException",
      "Cause": "Server Exception while calling DetectSentiment API in Comprehend Service"
    }
  },
  "3": {
    "Return": {
      "Sentiment": "POSITIVE",
      "SentimentScore": {
        "Mixed": 0.00012647535,
        "Negative": 0.00008031699,
        "Neutral": 0.0051454515,
        "Positive": 0.9946478
      }
    }
  }
}

Note that Step Functions Local does not validate the structure of the mocked responses. Ensure that your mocked responses conform to actual responses before testing. To review the structure of service responses, either perform the actual service calls using Step Functions or view the documentation for those services.

Next, associate the mocked responses to a test case identifier:

"RetryOnServiceExceptionTest": {
  "Check Identity": "CheckIdentityLambdaMockedSuccess",
  "Check Address": "CheckAddressLambdaMockedSuccess",
  "DetectSentiment": "DetectSentimentRetryOnErrorWithSuccess",
  "Add to FollowUp": "AddToFollowUpSuccess",
  "CustomerAddedToFollowup": "CustomerAddedToFollowupSuccess"
}

With the test case and mock responses configured, you can use them for testing with Step Functions Local.

Test case execution using Step Functions Local

The Step Functions Developer Guide describes the steps used to set up Step Functions Local on your workstation and create a state machine.

After these steps are complete, you can run a workflow locally using the start-execution AWS CLI command. Activate the mocked responses by appending a pound sign and the test case identifier to the state machine ARN:

aws stepfunctions start-execution \
  --endpoint-url http://localhost:8083 \
  --state-machine-arn arn:aws:states:us-east-1:123456789012:stateMachine:LeadGenerationStateMachine#RetryOnServiceExceptionTest \
  --input file://events/sfn_valid_input.json

Test case validation

To validate that the workflow executed correctly in the test case, examine the state machine execution events using the StepFunctions.GetExecutionHistory API. This ensures that the correct states are used. There are a variety of validation tools available. This post shows how to achieve this using the AWS CLI filtering feature with JMESPath syntax.

In this test case, you validate the TaskFailed and TaskSucceeded events match the retry definition for the DetectSentiment task, which specifies three retries. Use the following AWS CLI command to get the execution history and filter on the execution events:

aws stepfunctions get-execution-history \
  --endpoint-url http://localhost:8083 \
  --execution-arn <ExecutionArn> \
  --query 'events[?(type==`TaskFailed` && contains(taskFailedEventDetails.cause, `Server Exception while calling DetectSentiment API in Comprehend Service`)) || (type==`TaskSucceeded` && taskSucceededEventDetails.resource==`comprehend:detectSentiment`)]'

The results include matching events:

{
  "timestamp": "2022-01-13T17:24:32.276000-05:00",
  "type": "TaskFailed",
  "id": 19,
  "previousEventId": 18,
  "taskFailedEventDetails": {
    "error": "InternalServerException",
    "cause": "Server Exception while calling DetectSentiment API in Comprehend Service"
  }
}

These results should be compared to the test acceptance criteria to verify the execution behavior. Test cases, acceptance criteria, and validation expressions vary by customer and use case. These techniques are flexible to accommodate various happy path and error scenarios. To explore additional sample test cases and examples, visit the example code repository.

Conclusion

This post introduces a new, robust way to test AWS Step Functions state machines in isolation. With mocking, developers get more control over the type of scenarios that a state machine can handle, leading to assertions on multiple behaviors. Testing a state machine with mocks can also be part of the software release. Asserting on behaviors like error handling, branching, parallel, and dynamic parallel (Map state) helps test the entire state machine’s behavior. For any new behavior in the state machine, such as a new type of exception from a state, you can mock and add it as a test.

See the Step Functions Developer Guide for more information on service mocking with Step Functions Local. The sample application covers basic scenarios of testing a state machine. You can use a similar approach for complex scenarios including other Step Functions flows, like map and wait.

For more serverless learning resources, visit Serverless Land.

Filtering event sources for AWS Lambda functions

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/filtering-event-sources-for-aws-lambda-functions/

This post is written by Heeki Park, Principal Specialist Solutions Architect – Serverless.

When an AWS Lambda function is configured with an event source, the Lambda service triggers a Lambda function for each message or record. The exact behavior depends on the choice of event source and the configuration of the event source mapping. The event source mapping defines how the Lambda service handles incoming messages or records from the event source.

Today, AWS announces the ability to filter messages before the invocation of a Lambda function. Filtering is supported for the following event sources: Amazon Kinesis Data Streams, Amazon DynamoDB Streams, and Amazon SQS. This helps reduce requests made to your Lambda functions, may simplify code, and can reduce overall cost.

Overview

Consider a logistics company with a fleet of vehicles in the field. Each vehicle is enabled with sensors and 4G/5G connectivity to emit telemetry data into Kinesis Data Streams:

  • In one scenario, they use machine learning models to infer the health of vehicles based on each payload of telemetry data, which is outlined in example 2 on the Lambda pricing page.
  • In another scenario, they want to invoke a function, but only when tire pressure is low on any of the tires.

If tire pressure is low, the company notifies the maintenance team to check the tires when the vehicle returns. The process checks if the warehouse has enough spare replacements. Optionally, it notifies the purchasing team to buy additional tires.

The application responds to the stream of incoming messages and runs business logic if tire pressure is below 32 psi. Each vehicle in the field emits telemetry as follows:

{
    "time": "2021-11-09 13:32:04",
    "fleet_id": "fleet-452",
    "vehicle_id": "a42bb15c-43eb-11ec-81d3-0242ac130003",
    "lat": 47.616226213162406,
    "lon": -122.33989110734133,
    "speed": 43,
    "odometer": 43519,
    "tire_pressure": [41, 40, 31, 41],
    "weather_temp": 76,
    "weather_pressure": 1013,
    "weather_humidity": 66,
    "weather_wind_speed": 8,
    "weather_wind_dir": "ne"
}

To process all messages from a fleet of vehicles, you configure a filter matching the fleet id in the following example. The Lambda service applies the filter pattern against the full payload that it receives.

The schema of the payload for Kinesis and DynamoDB Streams is shown under the “kinesis” attribute in the example Kinesis record event. When building filters for Kinesis or DynamoDB Streams, you filter the payload under the “data” attribute. The schema of the payload for SQS is shown in the array of records in the example SQS message event. When working with SQS, you filter the payload under the “body” attribute:

{
    "data": {
        "fleet_id": ["fleet-452"]
    }
}
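If the same telemetry were consumed from an SQS queue instead, an equivalent fleet filter would be nested under the body attribute (a sketch, assuming the message body contains the same JSON payload):

{
    "body": {
        "fleet_id": ["fleet-452"]
    }
}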

To process all messages associated with a specific vehicle, configure a filter on only that vehicle id. The fleet id is kept in the example to show that it matches on both of those filter criteria:

{
    "data": {
        "fleet_id": ["fleet-452"],
        "vehicle_id": ["a42bb15c-43eb-11ec-81d3-0242ac130003"]
    }
}

To process all messages associated with that fleet but only if tire pressure is below 32 psi, you configure the following rule pattern. This pattern searches the array under tire_pressure to match values less than 32:

{
    "data": {
        "fleet_id": ["fleet-452"],
        "tire_pressure": [{"numeric": ["<", 32]}]
    }
}

To create the event source mapping with this filter criteria using the AWS CLI, run the following command.

aws lambda create-event-source-mapping \
--function-name fleet-tire-pressure-evaluator \
--batch-size 100 \
--starting-position LATEST \
--event-source-arn arn:aws:kinesis:us-east-1:0123456789012:stream/fleet-telemetry \
--filter-criteria '{"Filters": [{"Pattern": "{\"data\": {\"tire_pressure\": [{\"numeric\": [\"<\", 32]}]}}"}]}'

For the CLI, the value for Pattern in the filter criteria requires the double quotes to be escaped in order to be properly captured.
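
If you manage event source mappings programmatically, you can also attach the filter with the AWS SDK, where the pattern is built as an object and serialized, avoiding the shell escaping entirely. The following Node.js sketch uses the AWS SDK for JavaScript v3 with the same placeholder function name and stream ARN as the CLI example:

const { LambdaClient, CreateEventSourceMappingCommand } = require('@aws-sdk/client-lambda');

const client = new LambdaClient({ region: 'us-east-1' });

// Same pattern as above: match only records where any tire pressure is below 32 psi
const pattern = { data: { tire_pressure: [{ numeric: ['<', 32] }] } };

client.send(new CreateEventSourceMappingCommand({
  FunctionName: 'fleet-tire-pressure-evaluator',
  EventSourceArn: 'arn:aws:kinesis:us-east-1:0123456789012:stream/fleet-telemetry',
  BatchSize: 100,
  StartingPosition: 'LATEST',
  // The Pattern field is a JSON string, so the object is serialized here
  FilterCriteria: { Filters: [{ Pattern: JSON.stringify(pattern) }] },
})).then(console.log).catch(console.error);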

Alternatively, to create the event source mapping with this filter criteria using an AWS Serverless Application Model (AWS SAM) template, use the following snippet.

Events:
  TirePressureEvent:
    Type: Kinesis
    Properties:
      BatchSize: 100
      StartingPosition: LATEST
      Stream: "arn:aws:kinesis:us-east-1:0123456789012:stream/fleet-telemetry"
      FilterCriteria:
        Filters:
          - Pattern: '{"data": {"tire_pressure": [{"numeric": ["<", 32]}]}}'

For the AWS SAM template, the value for Pattern in the filter criteria does not require escaped double quotes.

For more information on how to create filters, refer to examples of event pattern rules in EventBridge, as Lambda filters messages in the same way.

Reducing costs with event filtering

By configuring the event source with this filter criteria, you can reduce the number of messages that are used to invoke your Lambda function.

Using the example from the Lambda pricing page, with a fleet of 10,000 vehicles in the field, each emitting telemetry once an hour, the vehicles emit 10,000 * 24 * 31 = 7,440,000 messages per month, which trigger the same number of Lambda invocations. You configure the function with 256 MB of memory and the average duration of the function is 100 ms. In this example, one in every 31 messages reports low tire pressure.

Without filtering, the cost of the application is:

  • Monthly request charges → 7.44M * $0.20/million = $1.49
  • Monthly compute duration (seconds) → 7.44M * 0.1 seconds = 0.744M seconds
  • Monthly compute (GB-s) → 256MB/1024MB * 0.744M seconds = 0.186M GB-s
  • Monthly compute charges → 0.186M GB-s * $0.0000166667 = $3.10
  • Monthly total charges = $1.49 + $3.10 = $4.59

With filtering, the cost of the application is:

  • Monthly request charges → (7.44M / 31)* $0.20/million = $0.05
  • Monthly compute duration (seconds) → (7.44M / 31) * 0.1 seconds = 0.024M seconds
  • Monthly compute (GB-s) → 256MB/1024MB * 0.024M seconds = 0.006M GB-s
  • Monthly compute charges → 0.006M GB-s * $0.0000166667 = $0.10
  • Monthly total charges = $0.05 + $0.10 = $0.15

By using filtering, the cost is reduced from $4.59 to $0.15, a 96.7% cost reduction.

Designing and implementing event filtering

In addition to reducing cost, the functions now operate more efficiently. This is because they no longer iterate through arrays of messages to filter out messages. The Lambda service filters the messages that it receives from the source before batching and sending them as the payload for the function invocation. This is the order of operations:

Event flow with filtering

As you design filter criteria, keep in mind a few additional properties. The event source mapping allows up to five patterns. Each pattern can be up to 2048 characters. As the Lambda service receives messages and filters them with the pattern, it fills the batch per the normal event source behavior.

For example, if the maximum batch size is set to 100 records and the maximum batching window is set to 10 seconds, the Lambda service filters and accumulates records in a batch until one of those two conditions is satisfied. In the case where 100 records that meet the filter criteria come during the batching window, the Lambda service triggers a function with those filtered 100 records in the payload.

If fewer than 100 records meeting the filter criteria arrive during the batch window, Lambda triggers a function with the filtered records that came during the batch window at the end of the 10-second batch window. Be sure to configure the batch window to match your latency requirements.
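
Because the Lambda service applies the filter before invoking the function, the handler can assume that every record in the batch already matches the pattern. A minimal Node.js handler for the tire pressure example might look like the following sketch, where the notification step is a placeholder:

exports.handler = async (event) => {
  for (const record of event.Records) {
    // Kinesis record payloads arrive base64-encoded
    const payload = JSON.parse(Buffer.from(record.kinesis.data, 'base64').toString('utf8'));

    // Every record here already matched the filter, so no re-filtering is required
    console.log(`Low tire pressure on vehicle ${payload.vehicle_id}: ${payload.tire_pressure}`);

    // Placeholder: notify the maintenance team here
  }
};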

The Lambda service ignores filtered messages and treats them as successfully processed. For Kinesis Data Streams and DynamoDB Streams, the iterator advances past the filtered records as if they had been processed.

For SQS, the messages are deleted from the queue without any additional processing. With SQS, be sure that the messages that are filtered out are not required. For example, you have an Amazon SNS topic with multiple SQS queues subscribed. The Lambda functions consuming each of those SQS queues process different subsets of messages. You could use subscription filters on SNS, but that would require the message publisher to add attributes to the messages that it sends. You could instead use filters on the event source mapping for SQS. The publisher then does not need to make any changes, as the filter is applied to the message body directly.
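
To filter on fields inside an SQS message, the message body must be valid JSON. The consuming function can then parse the body directly, as in this minimal sketch (the field name is illustrative):

exports.handler = async (event) => {
  for (const record of event.Records) {
    // The body is a JSON string published by the upstream producer
    const message = JSON.parse(record.body);

    // Only messages that matched the event source mapping filter arrive here
    console.log('Processing message for fleet:', message.fleet_id);
  }
};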

Conclusion

Lambda now supports the ability to filter messages based on criteria that you define. This can reduce the number of messages that your functions process, may reduce cost, and can simplify code.

You can now build applications for specific use cases that use only a subset of the messages that flow through your event-driven architectures. This can help optimize the compute efficiency of your functions.

Learn more about this capability in our AWS Lambda Developer Guide.

Introducing AWS SAM Pipelines: Automatically generate deployment pipelines for serverless applications

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/introducing-aws-sam-pipelines-automatically-generate-deployment-pipelines-for-serverless-applications/

Today, AWS announces the public preview of AWS SAM Pipelines, a new capability of the AWS Serverless Application Model (AWS SAM) CLI. AWS SAM Pipelines makes it easier to create secure continuous integration and continuous deployment (CI/CD) pipelines for your organization's preferred CI/CD system.

This blog post shows how to use AWS SAM Pipelines to create a CI/CD deployment pipeline configuration file that integrates with GitLab CI/CD.

AWS SAM Pipelines

A deployment pipeline is an automated sequence of steps that are performed to release a new version of an application. They are defined by a pipeline template file. AWS SAM Pipelines provides templates for popular CI/CD systems such as AWS CodePipeline, Jenkins, GitHub Actions, and GitLab CI/CD. Pipeline templates include AWS deployment best practices to help with multi-account and multi-Region deployments. AWS environments such as dev and production typically exist in different AWS accounts. This allows development teams to configure safe deployment pipelines, without making unintended changes to infrastructure. You can also supply your own custom pipeline templates to help to standardize pipelines across development teams.

AWS SAM Pipelines is composed of two commands:

  1. sam pipeline bootstrap, a configuration command that creates the AWS resources required to create a pipeline.
  2. sam pipeline init, an initialization command that creates a pipeline file for your preferred CI/CD system. For example, a Jenkinsfile for Jenkins or a .gitlab-ci.yml file for GitLab CI/CD.

Having two separate commands allows you to manage the credentials for operators and developers separately. Operators can use sam pipeline bootstrap to provision AWS pipeline resources. This can reduce the risk of production errors and operational costs. Developers can then focus on building without having to set up the pipeline infrastructure by running the sam pipeline init command.

You can also combine these two commands by running sam pipeline init --bootstrap. This takes you through the entire guided bootstrap and initialization process.

Getting started

The following steps show how to use AWS SAM Pipelines to create a deployment pipeline for GitLab CI/CD. GitLab is an AWS Partner Network (APN) member whose platform is used to build, review, and deploy code. AWS SAM Pipelines creates two deployment pipelines, one for a feature branch, and one for a main branch. Each pipeline deploys to a separate environment in a separate AWS account. Each time you make a commit to the repository’s feature branch, the pipeline builds, tests, and deploys a serverless application in the development account. For each commit to the main branch, the pipeline builds, tests, and deploys to a production account.

Prerequisites

  • An AWS account with permissions to create the necessary resources.
  • Install AWS Command Line Interface (CLI) and AWS SAM CLI.
  • A verified GitLab account: This post assumes you have the required permissions to configure GitLab projects, create pipelines, and configure GitLab variables.
  • Create a new GitLab project and clone it to your local environment

Create a serverless application

Use the AWS SAM CLI to create a new serverless application from a Quick Start Template.

Run the following AWS SAM CLI command in the root directory of the repository and follow the prompts. For this example, you can select any of the application templates:

sam init

Creating pipeline deployment resources

The sam pipeline bootstrap command creates the AWS resources and permissions required to deploy application artifacts from your code repository into your AWS environments.

For this reason, AWS SAM Pipelines creates IAM users and roles to allow you to deploy applications across multiple accounts. AWS SAM Pipelines creates these deployment resources following the principle of least privilege.

Run the following command in a terminal window:

sam pipeline init --bootstrap

This guides you through a series of questions to help create a .gitlab-ci.yml file. The --bootstrap option enables you to set up AWS pipeline stage resources before the template file is initialized:

  1. Enter 1, to choose AWS Quick Start Pipeline Templates
  2. Enter 2 to choose to create a GitLab CI/CD template file, which includes a two stage pipeline.
  3. Next, AWS SAM reports “No bootstrapped resources were detected.” and asks if you want to set up a new CI/CD stage. Enter Y to set up a new stage:

Set up the dev stage by answering the following questions:

  1. Enter “dev” for the Stage name.
  2. AWS SAM CLI detects your AWS CLI credentials file. It uses a named profile to create the required resources for this stage. If you have a development profile, select that, otherwise select the default profile.
  3. Enter a Region for the stage (for example, “eu-west-2”).
  4. Keep the pipeline IAM user ARN and pipeline and CloudFormation execution role ARNs blank to generate these resources automatically.
  5. An Amazon S3 bucket is required to store application build artifacts during the deployment process. Keep this option blank for AWS SAM Pipelines to generate a new S3 bucket. If your serverless application uses AWS Lambda functions packaged as a container image, you must create or specify an Amazon ECR image repository. The bootstrap command configures the required permissions to access this ECR repository.
  6. Enter N to specify you are not using Lambda functions packaged as container images.
  7. Press “Enter” to confirm the resources to be created.

AWS SAM Pipelines creates a PipelineUser with an associated ACCESS_KEY_ID and SECRET_ACCESS_KEY which GitLab uses to deploy artifacts to your AWS accounts. An S3 bucket is created along with two roles PipelineExecutionRole and CloudFormationExecutionRole.

Make a note of these values. You use these in the following steps to configure the production deployment environment and CI/CD provider.

Creating the production stage

The AWS SAM Pipelines command automatically detects that a second stage is required to complete the GitLab template, and prompts you to go through the set-up process for this:

  1. Enter “Y” to continue to build the next pipeline stage resources.
  2. When prompted for Stage Name, enter “prod”.
  3. When asked to Select a credential source, choose a suitable named profile from your AWS config file. The following example shows that a named profile called “prod” is selected.
  4. Enter a Region to deploy the resources to. The example uses the eu-west-1 Region.
  5. Press enter to use the same Pipeline IAM user ARN created in the previous step.
  6. When prompted for the pipeline execution role ARN and the CloudFormation execution role ARN, leave blank to allow the bootstrap process to create them.
  7. Provide the same answers as in the previous steps 5-7.

The AWS resources and permissions are now created to run the deployment pipeline for a dev and prod stage. The definition of these two stages is saved in .aws-sam/pipeline/pipelineconfig.toml.

AWS SAM Pipelines now automatically continues the walkthrough to create a GitLab deployment pipeline file.

Creating a deployment pipeline file

The following questions help create a .gitlab-ci.yml file. GitLab uses this file to run the CI/CD pipeline to build and deploy the application. When prompted, enter the name for both the dev and prod stages. Use the following example to help answer the questions:

Deployment pipeline file

A .gitlab-ci.yml pipeline file is generated. The file contains a number of environment variables, which reference the details from AWS SAM pipeline bootstrap command. This includes using the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY securely stored in the GitLab CI/CD repository.

The pipeline file contains a build and deploy stage for a branch that follows the naming pattern ‘feature-*’. The build process assumes the TESTING_PIPELINE_EXECUTION_ROLE in the testing account to deploy the application. sam build uses the AWS SAM template file previously created. It builds the application artifacts using the default AWS SAM build images. You can further customize the sam build --use-container command if necessary.

By default the Docker image used to create the build artifact is pulled from Amazon ECR Public. The default Node.js 14 image in this example is based on the language specified during sam init. To pull a different container image, use the --build-image option as specified in the documentation.

sam deploy deploys the application to a new stack in the dev stage using the TESTING_CLOUDFORMATION_EXECUTION_ROLE. The following code shows how this is configured in the .gitlab-ci.yml file.

build-and-deploy-feature:
  stage: build
  only:
    - /^feature-.*$/
  script:
    - . assume-role.sh ${TESTING_PIPELINE_EXECUTION_ROLE} feature-deployment
    - sam build --template ${SAM_TEMPLATE} --use-container
    - sam deploy --stack-name features-${CI_COMMIT_REF_NAME}-cfn-stack
        --capabilities CAPABILITY_IAM
        --region ${TESTING_REGION}
        --s3-bucket ${TESTING_ARTIFACTS_BUCKET}
        --no-fail-on-empty-changeset
        --role-arn ${TESTING_CLOUDFORMATION_EXECUTION_ROLE}

The file also contains separate build and deployment stages for the main branch. sam package prepares the application artifacts. The build process then assumes the role in the production stage and prepares the application artifacts for production. You can customize the file to include testing phases, and manual approval steps, if necessary.

Configure GitLab CI/CD credentials

GitLab CI/CD uses the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to authenticate to your AWS account. The values are associated with a new user generated in the previous sam pipeline init --bootstrap step. Save these values securely in GitLab’s CI/CD variables section:

  1. Navigate to Settings > CI/CD > Variables and choose expand.
  2. Choose Add variable, and enter in the key name and value for the AWS_SECRET_ACCESS_KEY noted down in the previous steps:
  3. Repeat this process for the AWS_ACCESS_KEY_ID:

Creating a feature branch

Create a new branch in your GitLab CI/CD project named feature-1:

  1. In the GitLab CI/CD menu, choose Branches from the Repository section. Choose New branch.
  2. For Branch name, enter branch “feature-1” and in the Create from field choose main.
  3. Choose Create branch.

Configure the new feature-1 branch to be protected so it can access the protected GitLab CI/CD variables.

  1. From the GitLab CI/CD main menu, choose Settings, then choose Repository.
  2. Choose Expand in the Protected Branches section
  3. Select the feature-1 branch in the Branch field and set Allowed to merge and Allowed to push to Maintainers.
  4. Choose Protect

Trigger a deployment pipeline run

1. Add the AWS SAM application files to the repository and push the branch changes to GitLab CI/CD:

git checkout -b feature-1
git add .
git commit -am "added sam application"
git push --set-upstream origin feature-1 

This triggers a new pipeline run that deploys the application to the dev environment. The following screenshot shows GitLab’s CI/CD page.

AWS CloudFormation shows that a new stack has been created in the dev stage account. It is named after the feature branch:

To deploy the feature to production, make a pull request to merge the feature-1 branch into the main branch:

  1. In GitLab CI/CD, navigate to Merge requests and choose New merge request.
  2. On the following screen, choose feature-1 as the Source branch and main as the Target branch.
  3. Choose Compare branches and continue, and then choose Create merge request.
  4. Choose Merge

This merges the feature-1 branch to the main branch, triggering the pipeline to run the production build, testing, and deployment steps:

Conclusion

AWS SAM Pipelines is a new feature of the AWS SAM CLI that helps organizations quickly create pipeline files for their preferred CI/CD system. AWS provides a default set of pipeline templates that follow best practices for popular CI/CD systems such as AWS CodePipeline, Jenkins, GitHub Actions, and GitLab CI/CD. Organizations can also supply their custom pipeline templates via Git repositories to standardize custom pipelines across hundreds of application development teams. This post shows how to use AWS SAM Pipelines to create a CI/CD deployment pipeline for GitLab.

Watch guided video tutorials to learn how to create deployment pipelines for GitHub Actions, GitLab CI/CD, and Jenkins.

For more learning resources, visit https://serverlessland.com/explore/sam-pipelines.

Prototyping at speed with AWS Step Functions new Workflow Studio

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/prototyping-at-speed-with-aws-step-functions-new-workflow-studio/

AWS recently introduced Workflow Studio for AWS Step Functions. This is a new visual builder for creating Step Functions workflows in the AWS Management Console. This post shows how to use the Workflow Studio for rapid workflow prototyping. It also explains how to transition to local development, integrating the prototype with your infrastructure as code templates.

Since its release in December 2016, developers have been building Step Functions workflows with Amazon States Language (ASL) to orchestrate multiple services into business-critical applications. Developers wanted faster ways to prototype and build orchestration workflows without writing custom code or using additional services.

What’s new?

The new Step Functions Workflow Studio provides an additional workflow building experience. Developers and business users can now build prototype workflows quickly with a graphical user interface in the Step Functions console.

These workflows can include all the same workflow states, patterns, and service integrations available when building with ASL. Each state is configured using editable forms. The workflow ASL definition can be exported for further editing in the console or in your local integrated development environment (IDE). Workflow Studio can build new workflows or edit a pre-existing workflow. To get started with Workflow Studio, see this introduction video.

Business users

Workflow Studio provides new opportunities for a more diverse range of users to build Step Functions workflows. Business users and those in non-technical roles can quickly create workflow prototypes. This can help to reason about and understand business processes before passing to a developer to add business logic and configure service integrations.

Rapid workflow prototyping

Workflow Studio allows you to create placeholders for AWS Lambda functions and other service integrations using the ‘drag-and-drop’ interface. This means that resources do not need to exist before designing the workflow. Once a workflow is prototyped you can save and continue to edit in the console or copy the ASL definition to your local IDE. You can then incorporate the workflow with application resources and infrastructure as code templates.

In the following steps, I use Workflow Studio to build the workflow described in this post. The full application template is found in this GitHub repository. The workflow analyzes web form submissions for negative sentiment. It generates a case reference number and saves the data in an Amazon DynamoDB table. The workflow returns the case reference number and message sentiment score.

To start fast prototyping for this workflow with Workflow Studio:

  1. Log into the Step Functions console and choose Create state machine.
  2. Choose Design your workflow visually from the authoring method section. This opens up Workflow Studio.
  3. Choose AWS Lambda Invoke from the Actions menu and drag it into the workflow.
  4. Choose the Configuration tab from the Form panel and enter the name Detect Sentiment in the State name field.
  5. In the function name field, choose Enter Function Name.
  6. Enter ${DetectSentiment} into the function name parameters field. This is a dynamic reference to a value that is provided by an Infrastructure-as-code template.

    The Workflow Studio provides an interface to add input and output path processing configurations to the workflow.
  7. Choose the Output tab and select Combine input and result with ResultPath. Selecting this option uses the ResultPath filter to add the result into the original state input. The specified path indicates where to add the result.
  8. Enter $.SentimentResults into the ResultPath text input.
  9. View the workflow ASL definition by choosing Definition from the top menu. This shows:
    1. The state is named Detect Sentiment.
    2. The Lambda function name uses a dynamic reference to ${DetectSentiment}. This is provided by the infrastructure-as-code template, explained in the following steps.
    3. A default retry configuration is defined.
    4. The ResultPath is configured.

Continue building the workflow this way, adding more Task and Flow states. A completed workflow looks as follows:

Transitioning to local development

Once the workflow is created in the Workflow Studio, you can export the ASL definition to a local IDE to incorporate into an infrastructure as code template. The template describes all the AWS resources that make up the application:

  1. To copy the ASL definition, choose the Definition button in the top navigation, and copy the entire ASL workflow definition to the clipboard.
  2. Create a new directory in your local filesystem named statemachine and save the definition to a file in this directory named sfn-template.asl.json. The following screenshot shows how the workflow appears in your IDE when rendered with the AWS Toolkit for Visual Studio Code.

  3. AWS Serverless Application Model (AWS SAM) is an open-source infrastructure as code framework for building serverless applications.
  4. Create an AWS SAM template named template.yaml to describe the application resources. A completed version of this file is found in this GitHub repository.
  5. Create a directory for each Lambda function. Within each directory, save the function code to a file called app.js. The function code can be found in this GitHub repository. The final application file directory looks as follows:
    root
    ┣ LambdaFunctions/
    ┃ ┣ GenerateReferenceNumber/
    ┃ ┃ ┗ app.js
    ┃ ┣ detectSentiment/
    ┃ ┃ ┗ app.js
    ┃ ┗ sendEmailConfirmation/
    ┃   ┗ app.js
    ┣ statemachine/
    ┃ ┗ sfn-template.asl.json
    ┗ template.yaml

The full application can be found in this GitHub repository.

The AWS SAM template describes the Step Functions workflow’s security permissions and allows for dynamic referencing of the resources described within the template such as the Lambda functions and DynamoDB table:

##########################################################################
#   STEP FUNCTION                                                        #
##########################################################################

  ProcessFormStateMachineExpressSync:
    Type: AWS::Serverless::StateMachine # More info about State Machine Resource: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-statemachine.html
    Properties:
      DefinitionUri: statemachine/sfn-template.asl.json
      DefinitionSubstitutions:
        NotifyAdminWithSES: !Ref NotifyAdminWithSES
        GenerateRefernceNumber: !Ref GenerateRefernceNumber
        DetectSentiment: !Ref DetectSentiment
        DDBTable: !Ref FormDataTable
      Policies: # Find out more about SAM policy templates: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-policy-templates.html
        - LambdaInvokePolicy:
            FunctionName: !Ref NotifyAdminWithSES
        - LambdaInvokePolicy:
            FunctionName: !Ref DetectSentiment
        - LambdaInvokePolicy:
            FunctionName: !Ref GenerateRefernceNumber
        - DynamoDBWritePolicy:
            TableName: !Ref FormDataTable
      Type: EXPRESS

  • The DefinitionURI value provides the location of the ASL definition that is exported from the Workflow Studio, in statemachine/sfn-template.asl.json.
  • The DefinitionSubstitutions values provide the names of the resources used within the workflow. Here you see the ${DetectSentiment} Lambda function name passed to the workflow definition. This was entered into the Workflow Studio in the previous steps.

The application is deployed using the AWS SAM CLI. Follow these steps in the GitHub repository to deploy the application.

Once the application is deployed, the workflow can be edited by updating the ASL definition in the Step Functions console or the local template file. It can also be edited from the drag-and-drop interface in the Workflow Studio. Any edits made in the AWS Management Console should be copied back to the local template file.

Conclusion

The AWS Step Functions Workflow Studio is a new visual builder for creating Step Functions workflows in the AWS Management Console. The drag-and-drop interface can be used to build new or edit existing workflows quickly. Each state is configured using editable forms, with the ASL definition visible and available for export as you build.

This post shows how to use the Workflow Studio for rapid workflow prototyping. It explains how to export the ASL definition to your local IDE and integrate it with your infrastructure as code application templates.

The Workflow Studio is included in Step Functions pricing at no additional fee and is available in all regions where Step Functions is available. To get started, visit https://aws.amazon.com/stepfunctions.

Getting started with serverless for developers part 5: Sandbox developer account

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/getting-started-with-serverless-for-developers-part-5-sandbox-developer-account/

This is part 5 of the Getting started with serverless series. In part 4, you learn how the developer workflow for building serverless applications differs from a traditional developer workflow. You see how to test business logic locally before deploying to an AWS account.

In this post, you learn how to secure and manage access to your AWS Lambda functions. I show how to invoke Lambda functions in a sandbox developer account directly from an integrated developer environment (IDE) and view output logs in near-real-time. Finally, I show how this helps to test for infrastructure and security configurations before committing changes to the main branch.

A sandbox developer account

Serverless services like Lambda and Amazon API Gateway are pay-per-use, so developers no longer need to share multiple environments (for example, dev, staging, and production). Instead, every developer can have their own sandboxed AWS developer account. Rather than replicating everything to their local environment, developers can test with real resources in the cloud.

You can still run code locally during the development of a feature. In post 4, I show how I run Lambda function code locally using a test harness. This allows me to maintain a fast inner loop, iteratively updating and locally testing code. If my Lambda function interacts with other AWS infrastructure, I deploy those resources to a sandboxed AWS developer account. This allows me to test my Lambda function code locally while still being able to access managed services in the cloud.

However, it is useful to deploy your function code to a Lambda function in a sandboxed developer account. A sandbox developer account is an AWS account allocated to a developer on a 1:1 basis. It should give developers as much freedom as possible while still protecting resources and budget.

This allows you to test for security configurations and ensure that your Lambda function code behaves as expected when run in the Lambda execution environment:

Creating a sandboxed developer account

The following best practices can help to minimize costs and prevent unauthorized usage.

After creating a sandbox account, it can be useful to associate a named profile with it. A named profile is a collection of credentials that you can apply to an AWS Command Line Interface (AWS CLI) command. When you specify a profile to run a command, the settings and credentials are used to run that command. The AWS CLI supports multiple named profiles that are stored in the config and credentials files.

Configure profiles by adding entries to the config and credentials files. To learn more about named profiles refer to the AWS CLI documentation.

In the following example, I configure my credentials file with multiple named profiles.

The profile named prod is my production account, and the profile named default is my sandbox developer account. The CLI automatically uses the profile named default, if no --profile option is specified in a CLI command.

[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[dev]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalBBUtnFEMI/&7MDENG/bPxRfiCYEXAMPLEKEY

[prod]
aws_access_key_id=AKIAI44QH8DHBEXAMPLE
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY

AWS Lambda security permissions

AWS Identity and Access Management (IAM) is the service used to manage access to AWS services. Lambda is fully integrated with IAM, allowing you to control precisely what each Lambda function can do within the AWS Cloud. There are two important things that define the scope of permissions in Lambda functions:

The resource policy: Defines which events are authorized to invoke the function.

The execution role policy: Limits what the Lambda function is authorized to do.

Using IAM roles to describe a Lambda function’s permissions decouples its security configuration from the code. This helps reduce the complexity of a Lambda function, making it easier to maintain.

A Lambda function’s resource and execution policy should be granted the minimum required permissions for the function to perform its task effectively. This is sometimes referred to as the principle of least privilege. As you develop a Lambda function, you expand the scope of this policy to allow access to other resources as required.

When building Lambda-based applications with frameworks such as AWS SAM, you describe both policies in the application’s template.

The following steps show how I deploy and test a Lambda function in a sandbox developer account from within my IDE.

Before you start

All the code relating to this example application can be found in this GitHub repository. To deploy this stage of the application, follow the steps from post 1 to clone the sample application.

  1. Run the following command from the root directory of the cloned repository:
    cd ./part_5
  2. After creating a sandbox developer account, deploy the example application into it by specifying the corresponding profile name in the AWS SAM CLI command. You can omit this if you named the profile default:
    sam deploy --config-file ../samconfig.toml --guided --profile default

    This produces the following output:

    Make a note of the StarWebhookLambdaFunctionName, you will use this in the following steps.

Logging with serverless applications

After deploying your serverless application to the sandboxed developer account, you need to verify that it’s operating properly. Lambda automatically monitors functions on your behalf, reporting metrics through Amazon CloudWatch. It collects data in the form of logs, metrics, and events and provides a unified view of AWS resources, applications, and services.

To help simplify troubleshooting, the AWS Serverless Application Model CLI (AWS SAM CLI) has a command called sam logs. This command lets you fetch CloudWatch Logs generated by your Lambda function from the command line.

Run the following command in a terminal window to view a live tail of logs generated by the StarWebhookHandler Lambda function. Replace StarWebhookLambdaFunctionName with the Lambda function name generated by your deployment:

sam logs -n StarWebhookLambdaFunctionName --tail

Checking Lambda function permissions in a sandbox developer account

I open a new terminal window and invoke the StarWebhookHandler Lambda function directly from my IDE by running the following AWS SAM CLI command. To invoke the function I pass an example payload located in events/testEvent.json.

aws lambda invoke --function-name <<replace-with-function-name>> \
--payload fileb://events/testEvent.json  \
out.txt

The following screenshot shows my two terminal windows side by side.

The response returned by the CLI command is on the right. The left window shows the tail of logs generated by the Lambda function. I observe that the CLI invocation shows a status 200 response, but the Lambda function logs report an ‘AccessDenied’ error. The function does not have the required permissions to write to Amazon S3.

I edit the Lambda function policy definition, adding permission for my Lambda function to write to an S3 bucket. I run sam build and sam deploy to re-deploy the application to the sandbox developer account. I invoke the Lambda function again. The logs show the following:

  1. The Lambda function responds with “StatusCode 200”.
  2. The logs report the Lambda function’s billed duration, memory size, and running duration.
  3. The Lambda function has successfully copied the file to S3.

IAM permission errors such as these may not be detected when running the function code locally. This is one of the advantages of deploying and running Lambda functions in a sandboxed developer account while developing an application.
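
You can run the same invocation from a Node.js script using the AWS SDK for JavaScript v3, which keeps the whole loop inside the IDE. The Region and function name below are placeholders, and credentials are resolved from the same named profile configuration described earlier:

const { LambdaClient, InvokeCommand } = require('@aws-sdk/client-lambda');
const { readFileSync } = require('fs');

const client = new LambdaClient({ region: 'eu-west-2' });

async function invoke(functionName) {
  const response = await client.send(new InvokeCommand({
    FunctionName: functionName,
    Payload: readFileSync('events/testEvent.json'),
  }));

  // The response payload is returned as bytes
  console.log(response.StatusCode, Buffer.from(response.Payload).toString());
}

invoke('replace-with-function-name').catch(console.error);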

Conclusion

This post explains the advantages of using a sandbox developer account. It shows how to deploy your business logic to a Lambda function in a sandboxed developer account. You are introduced to IAM policies, which control precisely what each Lambda function can do within the AWS Cloud. You learn that CloudWatch provides a unified view of logs for all AWS resources.

Finally, I show how to use the AWS SAM CLI and AWS CLI to invoke a Lambda function in the cloud and view its log output directly from the IDE. This helps to test for security configurations and to ensure that your business logic behaves as expected when run in the Lambda service. Invoking functions and observing their log output directly from your IDE helps to reduce context switching as you build.

Getting Started with serverless for developers: Part 4 – Local developer workflow

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/getting-started-with-serverless-for-developers-part-4-local-developer-workflow/

This blog is part 4 of the “Getting started with serverless for developers” series, helping developers start building serverless applications from their IDE.

Many “getting started” guides demonstrate how to build serverless applications from within the AWS Management Console. However, most developers spend the majority of their time building from within their local integrated development environment (IDE).

The next two blog posts in this series focus on the serverless developer workflow. They describe how to check logs, test, and iterate on business logic while building locally, and how this differs from traditional applications.

This blog post explains how the developer workflow for building serverless applications differs from a traditional developer workflow. It shows the methods that I use to test business logic locally before deploying to a sandbox AWS developer account, and how to test against cloud services as you build. This eliminates the need to deploy to the AWS Cloud each time you want to test a code change.

Traditional developer workflow

Developers typically use the following workflow cycle before committing code to the main branch:

  1. Write code.
  2. Save code.
  3. Run code.
  4. Check results.

This is sometimes referred to as the inner loop, shown as follows:

With traditional (non-serverless) applications, developers commonly create a development environment on their local machine. This local development environment keeps parity with the staging and production environments. It allows developers to test their application locally end-to-end before committing code to the main branch.

Serverless developer workflow

Part 2: the business logic, explains how serverless applications use managed services that abstract away the need for developers to patch and scale their application infrastructure. This means that the code base in a serverless application is focused purely on business logic, allowing managed services to handle other important layers such as:

  • Authorization
  • Presentation
  • Database
  • Application integration
  • Notification

A good serverless developer workflow enables developers to test and iterate on business logic quickly. It allows them to check that the business logic runs correctly with the managed services that compose an application.

To achieve this, the best approach is not to try and emulate managed services on your local development machine. Instead, your local code should interact directly with real cloud services in a sandboxed AWS account.

This approach lets you rapidly test and iterate on business logic locally, deploying to the development environment to test for infrastructure, security, and environment configuration changes. Once the business logic is ready in the development environment, it can be deployed to other environments (test, staging, production) via CI/CD automation or manually triggered commands.

The following sections explain how I test business logic locally using a custom-written test harness. Each time I create a Lambda function I create a directory to hold:

  1. The function code.
  2. A relative package.json.
  3. A file called testharness.js.

The test harness file is used to run the Lambda function code on my local development machine. It is configured to mock environment variables loaded from a file named `env.json` and loads a JSON test event payload located in the events directory.

Testing Lambda function code locally with a test harness

Part 2: the business logic, explains the anatomy of a Lambda function and its handler:

A small Lambda function

Note that the function handler receives an input payload called an event object. To test the local function code effectively, use a test event object that represents the production event object.

The serverless application introduced in getting started with serverless part 1 shows that Amazon API Gateway invokes the Lambda function.

This invocation contains an event object with a JSON representation of the HTTP request. The event object has a defined structure. Follow the steps in this GitHub repository to see how to create a test event object.
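
If you prefer to see the shape of such an event before generating one, the following is a trimmed-down stand-in expressed as a JavaScript object. The field values are placeholders; the actual events/testEvent.json in the repository contains the full structure of the API Gateway proxy event:

// A simplified API Gateway proxy event for local testing; values are illustrative
const mockEvent = {
  resource: '/',
  path: '/',
  httpMethod: 'POST',
  headers: { 'content-type': 'application/json' },
  queryStringParameters: null,
  body: JSON.stringify({ placeholder: 'webhook payload goes here' }),
  isBase64Encoded: false
};

module.exports = mockEvent;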

Before you start

To deploy this stage of the application, follow the steps from post 1 to clone and deploy the sample application.

  1. Run the following commands from the root directory of the cloned repository:
    cd ./part_4/src_starred
    npm install

The following steps show how I configure my testharness.js file to run my function code:

// Mock event
const event = require('../events/testEvent.json')
// Mock environment variables
const environmentVars = require('../env.json')
process.env.AWS_REGION = environmentVars.AWS_REGION
process.env.localTest = true
process.env.slackEndpoint= environmentVars.slackEndpoint
process.env.bucket = environmentVars.bucket
// Lambda handler
const { handler } = require('./app')
const main = async () => {
  console.time('localTest')
  console.dir(await handler(event))
  console.timeEnd('localTest')
}
main().catch(error => console.error(error))

  1. A test event object is loaded into a variable called event.
  2. The environment variables required by the Lambda function are defined in env.json and loaded.
  3. The Lambda function code is loaded into a variable called handler.
  4. The Lambda function code is run synchronously.
  5. The console shows output from the Lambda function code, along with any errors that occur.

I run my test harness file by entering the following command in a terminal window:

$ node testharness.js

This produces the following output:

The output indicates that the function code ran without error, it returned a 200 status code and completed in 30.999 ms.

To generate an error in the function code,  I change the app.js file by commenting out the following line:

//const axios = require('axios');

I save the file and run the test harness again. I see the following response in the terminal window:

This indicates an error in my function code that I must resolve before deploying to my AWS account.

By iteratively updating and locally testing my code, I maintain a fast inner loop. This eliminates the need to deploy to the AWS Cloud each time I want to test a code change.

Testing against cloud resources

In many instances, a Lambda function interacts with other cloud resources. This could be via an SDK or some other native integration. In this case, you should deploy those resources to a sandboxed AWS developer account.

In the following example, I update a Lambda function to log each inbound HTTP request to a bucket in Amazon S3, a highly scalable object storage service running in the AWS cloud. To do this, I use the JavaScript SDK:

s3.putObject(params, function(err, data) { if (err) console.error(err); });
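
A trimmed-down version of the updated function might look like the following sketch. The bucket name comes from the same bucket environment variable used by the test harness, and the object key naming is illustrative:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const params = {
    Bucket: process.env.bucket,
    Key: `request-${Date.now()}.json`,
    Body: JSON.stringify(event),
  };

  // Log the inbound HTTP request to S3
  await s3.putObject(params).promise();

  return { statusCode: 200 };
};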

I update the AWS Serverless Application Model (AWS SAM) template to define a new S3 bucket:

  SrcBucket:
    Type: AWS::S3::Bucket

I change into the part_4 directory then build and deploy the application with the following commands:

cd ../part_4
sam build
sam deploy --guided --config-file ../samconfig.toml

After deploying, the output shows:

I copy the new bucket value to the environment variable defined in /part_4/env.json.

"slackEndpoint": "Insert_Slack_Endpoint",
"bucket" : "Insert_S3_Bucket_Name"
}

Now I am ready to test the local Lambda function code against cloud resources in my sandbox AWS development account. I run a new local test with the following command:

node testharness.js

The terminal returns:

This indicates that the Lambda function code completed without error. I can verify this by checking the contents of the S3 bucket using the AWS Command Line Interface (AWS CLI):

aws s3 ls s3://githubtoslackapp-srcbucket-ge4wkt9dljwa

This returns the following, confirming that the request object has been saved to S3 and the application code is running correctly:

Conclusion

Using managed services to build serverless applications helps developers focus on business logic. It also changes the developer workflow compared with building traditional (non-serverless) applications.

A good serverless developer workflow enables developers to test and iterate on business logic quickly while still being able to interact with cloud services. This blog post shows how I achieve this by using a test harness to run function code locally and deploying application resources to a sandboxed developer account.

In the next blog post, I show how to invoke Lambda functions deployed to a sandboxed developer account, without leaving your IDE. This lets you test for infrastructure, security, and environment configuration changes while building.

Analyzing Freshdesk data using Amazon EventBridge and Amazon Athena

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/analyzing-freshdesk-data-using-amazon-eventbridge-and-amazon-athena/

This post is written by Shashi Shankar, Application Architect, Shared Delivery Teams

Freshdesk is an omnichannel customer service platform by Freshworks. It provides automation services to help speed up customer support processes.

The Freshworks connector to Amazon EventBridge allows real time streaming of Freshdesk events with minimal configuration and setup. This integration provides real-time insights into customer support operations without the operational overhead of provisioning and maintaining any servers.

In this blog post, I walk through a serverless approach to ingest and analyze Freshdesk data. This solution uses EventBridge, Amazon Kinesis Data Firehose, Amazon S3, and Amazon Athena. I also look at examples of customer service questions that can be answered using this approach.

The following diagram shows a high-level architecture of the proposed solution:

  1. When a Freshdesk ticket is updated or created, the Freshworks connector pushes event data to the Amazon EventBridge partner event bus.
  2. A rule on the partner event bus pushes the event data to Kinesis Data Firehose.
  3. Kinesis Data Firehose batches data before sending to S3. An AWS Lambda function transforms the data by adding a newline to each record before delivery (a sketch of this function follows this list).
  4. Kinesis Data Firehose delivers the batch of records to S3.
  5. Athena is used to query relevant data from S3 using standard SQL.
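
The transformation function from step 3 only needs to append a newline to each record so that each event lands on its own line in S3 and can be read as a separate row by Athena. A minimal Node.js version might look like the following sketch; the function in the sample repository may differ:

exports.handler = async (event) => {
  const records = event.records.map((record) => {
    // Kinesis Data Firehose delivers each record base64-encoded
    const payload = Buffer.from(record.data, 'base64').toString('utf8');

    return {
      recordId: record.recordId,
      result: 'Ok',
      // Re-encode with a trailing newline so records are separated in the S3 object
      data: Buffer.from(payload + '\n').toString('base64'),
    };
  });

  return { records };
};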

The walkthrough shows you how to:

  1. Add the EventBridge app to Freshdesk account.
  2. Configure a Freshworks partner event bus in EventBridge.
  3. Deploy a Kinesis Data Firehose stream, a Lambda function, and an S3 bucket.
  4. Set up a custom rule on the event bus to push data to Kinesis Data Firehose.
  5. Generate sample Freshdesk data to validate the ingestion process.
  6. Set up a table in Athena to query the S3 bucket.
  7. Query and analyze data

Pre-requisites

  • A Freshdesk account (which can be created here).
  • An AWS account.
  • AWS Serverless Application Model (AWS SAM CLI), installed and configured.

Adding the Amazon EventBridge app to a Freshdesk account

  1. Log in to your Freshdesk account and navigate to Admin > Helpdesk Productivity > Apps. Search for EventBridge:
  2. Choose the Amazon EventBridge icon and choose Install.
  • Enter your AWS account number in the AWS Account ID field.
  • Enter “OnTicketCreate”, “OnTicketUpdate” in the Events field.
  • Enter the AWS Region to send the Freshdesk events in the Region field. This walkthrough uses the us-east-1 Region.

Configuring a Freshworks partner event bus in EventBridge

Once the previous step is completed, a partner event source is automatically created in the EventBridge console. Copy the partner event source name to a clipboard.

  1. Clone the GitHub repo and deploy the AWS SAM template:
    git clone https://github.com/aws-samples/amazon-eventbridge-freshdesk-example.git
    cd ./amazon-eventbridge-freshdesk-example
    sam deploy --guided
  2. PartnerEventSource – Enter partner event source name copied from the previous step.
  3. S3BucketName – Enter an S3 bucket name to store Freshdesk ticket event data.

The AWS SAM template creates an association between the partner event source and event bus:

    Type: AWS::Events::EventBus
    Properties:
      EventSourceName: !Ref PartnerEventSource
      Name: !Ref PartnerEventSource

The template creates a Kinesis Data Firehose delivery stream, Lambda function, and S3 bucket to process and store the events from Freshdesk tickets. It also adds a rule to the custom event bus with the Kinesis Data Firehose stream as the target:

  PushToFirehoseRule:
    Type: "AWS::Events::Rule"
    Properties:
      Description: Test Freshdesk Events Rule
      EventBusName: !Ref PartnerEventSource
      EventPattern:
        account: [!Ref AWS::AccountId]
      Name: freshdeskeventrule
      State: ENABLED
      Targets:
        - Arn:
            Fn::GetAtt:
              - "FirehoseDeliveryStream"
              - "Arn"
          Id: "idfreshdeskeventrule"
          RoleArn: !GetAtt EventRuleTargetIamRole.Arn

  EventRuleTargetIamRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: ""
            Effect: "Allow"
            Principal:
              Service:
                - "events.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      Path: "/"
      Policies:
        - PolicyName: Invoke_Firehose
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: "Allow"
                Action:
                  - "firehose:PutRecord"
                  - "firehose:PutRecordBatch"
                Resource:
                  - !GetAtt FirehoseDeliveryStream.Arn

Generating sample Freshdesk data to validate the ingestion process

To generate sample Freshdesk data, log in to the Freshdesk account and browse to the “Tickets” screen as shown:

Follow the steps to simulate two customer service operations:

  1. Create a ticket of type “Refund”. Choose the New button and enter the details:
  2. Update an existing ticket and change the priority to “Urgent”.
  3. Within a few minutes of updating the ticket, the data is pushed via the Freshworks connector to the S3 bucket created using the AWS SAM template. To verify this, browse to the S3 bucket and see that a new object with the ticket data is created:

You can also use the S3 Select option under object actions to view the raw JSON data that is sent from the partner system. You are now ready to analyze the data using Athena.

Setting up a table in Athena to query the S3 bucket

If you are familiar with Apache Hive, creating tables in Athena will feel familiar. You can create tables by writing the DDL statement in the query editor or by using the wizard or JDBC driver. To create a table in Athena:

  1. Copy and paste the following DDL statement in the Athena query editor to create a Freshdesk events table. For this example, the table is created in the default database.
  2. Replace S3_Bucket_Name in the following query with the name of the S3 bucket created by deploying the previous AWS SAM template:
CREATE EXTERNAL TABLE `freshdeskevents`(
  `id` string COMMENT 'from deserializer', 
  `detail-type` string COMMENT 'from deserializer', 
  `source` string COMMENT 'from deserializer', 
  `account` string COMMENT 'from deserializer', 
  `time` string COMMENT 'from deserializer', 
  `region` string COMMENT 'from deserializer', 
  `detail` struct<ticket:struct<subject:string,description:string,is_description_truncated:boolean,description_text:string,is_description_text_truncated:boolean,due_by:string,fr_due_by:string,fr_escalated:boolean,is_escalated:boolean,fwd_emails:array<string>,reply_cc_emails:array<string>,email_config_id:string,id:int,group_id:bigint,product_id:string,company_id:string,requester_id:bigint,responder_id:bigint,status:int,priority:int,type:string,tags:array<string>,spam:boolean,source:int,tweet_id:string,cc_emails:array<string>,to_emails:string,created_at:string,updated_at:string,attachments:array<string>,custom_fields:string,changes:struct<responder_id:array<bigint>,ticket_type:array<string>,status:array<int>,status_details:array<struct<id:int,name:string>>,group_id:array<bigint>>>,requester:struct<id:bigint,name:string,email:string,mobile:string,phone:string,language:string,created_at:string>> COMMENT 'from deserializer')
ROW FORMAT SERDE 
  'org.openx.data.jsonserde.JsonSerDe' 
WITH SERDEPROPERTIES ( 
  'paths'='account,detail,detail-type,id,region,resources,source,time,version') 
STORED AS INPUTFORMAT 
  'org.apache.hadoop.mapred.TextInputFormat' 
OUTPUTFORMAT 
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION  's3://S3_Bucket_Name/'

The table is created on the data stored in S3 and is ready to be queried. Note that table freshdeskevents points at the bucket s3://S3_Bucket_Name/. As more data is added to the bucket, the table automatically grows, providing a near-real-time data analysis experience.

Querying and analyzing data

You can use the following examples to get started with querying the Athena table.

  1. To get all the events data, run:
SELECT * FROM default.freshdeskevents  limit 10

The preceding output has a detail column containing the details related to the ticket. Tickets can be filtered on nested notations to build more insightful queries. Also, the detail-type column provides classification of tickets as new (onTicketCreate) vs updated (onTicketUpdate).

  2. To show new tickets created today with the type “Refund”:
SELECT detail.ticket.subject,detail.ticket.description_text, detail.ticket.type  FROM default.freshdeskevents
where detail.ticket.type = 'Refund' and "detail-type" = 'onTicketCreate' and date(from_iso8601_timestamp(time)) = date(current_date)
  3. All tickets with an “Urgent” priority but not assigned to an agent:
SELECT "detail-type", detail.ticket.responder_id,detail.ticket.priority, detail.ticket.subject, detail.ticket.type  FROM default.freshdeskevents
where detail.ticket.responder_id is null and detail.ticket.priority = 4

Conclusion

In this blog post, you learn how to configure the Freshworks partner event source from the Freshdesk console. Once a partner event source is configured, an AWS SAM template is deployed that creates a custom event bus by attaching the partner event source. A Kinesis Data Firehose delivery stream, Lambda function, and S3 bucket are used to ingest Freshdesk’s ticket events data for analysis. An EventBridge rule is configured to route the event data to the S3 bucket.

Once event data starts flowing into the S3 bucket, an Amazon Athena table is created to run queries and analyze the ticket events data. Alternative customer service data analysis use cases can be built on the architecture shown in this blog.

To learn more about other partner integrations and the native capabilities of EventBridge, visit the AWS Compute Blog.

Node.js 14.x runtime now available in AWS Lambda

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/node-js-14-x-runtime-now-available-in-aws-lambda/

You can now develop AWS Lambda functions using the Node.js 14.x runtime. This is the current Long Term Support (LTS) version of Node.js. Start using this new version today by specifying a runtime parameter value of nodejs14.x when creating or updating functions or by using the appropriate managed runtime base image.

Language Updates

Node.js 14 is a stable release and brings several new features, including:

  • Updated V8 engine
  • Diagnostic reporting
  • Updated Node streams

V8 engine updated to v8.1

Node.js 14.x is powered by V8 version 8.1, which is a significant upgrade from the V8 7.4 engine powering the previous Node.js 12.x. This upgrade brings performance enhancements and some notable new features:

  • Nullish Coalescing ?? – A logical operator that returns its right-hand side operand when its left-hand side operand is null or undefined.
    const newVersion = null ?? 'this works great';
    console.log(newVersion);
    // expected output: "this works great"
    
    const nullishTest = 0 ?? 36;
    console.log(nullishTest);
    // expected output: 0 because 0 is not the same as null or undefined

This new operator is useful for debugging and error handling in your Lambda functions when values unexpectedly return null or undefined.
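
For example, a Lambda handler can use ?? to fall back to defaults without accidentally overriding legitimate falsy values. The property and variable names here are illustrative:

exports.handler = async (event) => {
  // 0 and "" are kept; only null or undefined fall through to the default
  const pageSize = event.pageSize ?? 25;
  const tableName = process.env.TABLE_NAME ?? 'example-table';

  return { pageSize, tableName };
};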

  • Intl.DateTimeFormat – This feature enables numberingSystem and calendar options.
    const date = new Date(Date.UTC(2021, 01, 20, 3, 23, 16, 738));
    // Results below assume UTC timezone - your results may vary
    
    // Specify date formatting for language
    console.log(new Intl.DateTimeFormat('en-US').format(date));
    // expected output: "2/20/2021"
  • Intl.DisplayNames – Offers the consistent translation of region, language, and script display names. A short example follows this list.
  • Optional Chaining ?. – Use this operator to access a property’s value within a chain without needing to validate each reference. This removes the requirement of checking for the existence of a deeply nested property using the && operator or lodash.get:
    const player = {
      name: 'Roxie',
      superpower: {
        value: 'flight',
      }
    };
    
    // Using the && operator
    if (player && player.superpower && player.superpower.value) {
      // do something with player.superpower.value
    }
    
    // Using the ?. operator
    if (player?.superpower?.value) {
      // do something with player.superpower.value
    }
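
As an illustration of Intl.DisplayNames (this example is not from the original post), the API takes a locale and a type option and returns localized display names:

const regionNames = new Intl.DisplayNames(['en'], { type: 'region' });
console.log(regionNames.of('DE'));
// expected output: "Germany"

const languageNames = new Intl.DisplayNames(['en'], { type: 'language' });
console.log(languageNames.of('fr'));
// expected output: "French"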
    

Diagnostic reporting

Diagnostic reporting is now a stable feature in Node.js 14. This option allows you to generate a JSON-formatted report on demand or when certain events occur. This helps to diagnose problems such as slow performance, memory leaks, unexpected errors, and more.

The following example generates a report from within a Lambda function and outputs the results to Amazon CloudWatch for further inspection.

const report = process.report.getReport();
console.log(typeof report === 'object'); // true

// Similar to process.report.writeReport() output
console.log(JSON.stringify(report, null, 2));
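
Reports can also be written automatically when specific events occur. The following sketch (an illustrative addition, not from the original post) enables reports for uncaught exceptions and fatal errors and points the output at /tmp, the only writable path in the Lambda execution environment:

// Write reports to /tmp, the only writable location in Lambda
process.report.directory = '/tmp';

// Generate a report automatically for these events
process.report.reportOnUncaughtException = true;
process.report.reportOnFatalError = true;

// A report can also be written explicitly at any point:
// process.report.writeReport('/tmp/manual-report.json');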

See the official documentation on diagnostic reporting in Node.js to learn about other ways to configure and trigger reports.

Updated node streams

The streams APIs have been updated to remove ambiguity and streamline behavior across the various parts of Node.js core.

Runtime Updates

To help keep Lambda functions secure, AWS updates Node.js 14 with all minor updates released by the Node.js community when using the zip archive format. For Lambda functions packaged as a container image, pull, rebuild, and deploy the latest base image from Docker Hub or Amazon ECR Public.
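
As an illustration (the image name, tag, and local tag shown here are assumptions based on the public Lambda base images), the refresh can look like this:

# Pull the latest Node.js 14 base image
docker pull public.ecr.aws/lambda/nodejs:14

# Rebuild the function image against the refreshed base image
docker build -t my-function:latest .

# Then push the rebuilt image to Amazon ECR and update the function to use it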

Deprecation schedule

AWS will be deprecating Node.js 10 according to the end of life schedule provided by the community. Node.js 10 reaches end of life on April 30, 2021. After March 30, 2021, you can no longer create a Node.js 10 Lambda function. The ability to update a function will be disabled after May 28, 2021. More information can be found in the runtime support policy.

You can migrate existing Node.js 12 functions to the new runtime by making any necessary changes to code for compatibility with Node.js 14, and changing the function’s runtime configuration to nodejs14.x. Lambda functions running on Node.js 14 will have two full years of support.

Amazon Linux 2

The Node.js 14 managed runtime, like Node.js 12, Java 11, and Python 3.8, is based on an Amazon Linux 2 execution environment. Amazon Linux 2 provides a secure, stable, and high-performance execution environment to develop and run cloud and enterprise applications.

Next steps

Get started building with Node.js 14 today by specifying a runtime parameter value of nodejs14.x when creating your Lambda functions using the zip archive packaging format. You can also build Lambda functions in Node.js 14 by deploying your function code as a container image using the Node.js 14 AWS base image for Lambda. You can read about the Node.js programming model in the AWS Lambda documentation to learn more about writing functions in Node.js 14.

For existing Node.js functions, migrate to the new runtime by changing the function’s runtime configuration to nodejs14.x.
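
As a minimal sketch (the function name is a placeholder), this runtime change can be made with the AWS CLI:

aws lambda update-function-configuration \
  --function-name my-node-function \
  --runtime nodejs14.x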

Happy coding with Node.js 14!

Building PHP Lambda functions with Docker container images

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/building-php-lambda-functions-with-docker-container-images/

At re:Invent 2020, AWS announced that you can package and deploy AWS Lambda functions as container images. Packaging AWS Lambda functions as container images brings some notable benefits for developers running custom runtimes, such as PHP. This blog post explains those benefits and shows how to use the new container image support for Lambda functions to build serverless PHP applications.

Overview

Many PHP developers are familiar with building applications as containers to create a portable artifact for easier deployment. Packaging applications as containers helps to maintain consistent PHP versions, package versions, and configuration settings across multiple environments.

The new container image support for Lambda allows you to use familiar container tooling to build your applications. It also allows you to transition your applications into a serverless, event-driven model. This brings the benefits of having no infrastructure to manage, automated scalability, and pay-per-use billing.

The advantages of an event-driven model for PHP applications are explained across the blog series “The serverless LAMP stack”. It explores the concepts, methods, and reasons for creating serverless applications with PHP. The architectural patterns and service limits in this blog series apply to functions packaged using both container image and zip archive formats, with some key exceptions:

                       Zip archive    Container image
Maximum package size   250 MB         10 GB
Lambda layers          Supported      Include in image
Lambda Extensions      Supported      Include in image

Custom runtimes with container images

For custom runtimes such as PHP, Lambda provides base images containing the required Amazon Linux or Amazon Linux 2 operating system. Extend this to include your own runtime by implementing the Lambda Runtime API in a bootstrap file.

Before container image support for Lambda, a custom runtime was packaged using the .zip format. This required the developer to:

  1. Set up an Amazon Linux environment compatible with the Lambda execution environment.
  2. Install compilation dependencies and compile a version of PHP.
  3. Save the compiled PHP binary together with a bootstrap file and package as a .zip.
  4. Publish the .zip as a runtime layer.
  5. Add the runtime layer to a Lambda function.

Any edits to the custom runtime such as new packages, PHP versions, modules, or dependencies require the process to be repeated. This process can be time-consuming and prone to error.

Creating a custom PHP runtime using the new container image support for Lambda can simplify changing the runtime environment. Dockerfiles allow you to have a fully scripted, faster, and portable build process without setting up an Amazon Linux environment.

This GitHub repository contains a custom PHP runtime for Lambda functions packaged as a container image. The following Dockerfile uses the base image for Amazon Linux provided by AWS. The instructions perform the following:

  • Install system-wide Linux packages (zip, curl, tar).
  • Download and compile PHP.
  • Download and install composer dependency manager and dependencies.
  • Move PHP binaries, bootstrap, and vendor dependencies into a directory that Lambda can read from.
  • Set the container entrypoint.
#Lambda base image Amazon Linux
FROM public.ecr.aws/lambda/provided as builder 
# Set desired PHP Version
ARG php_version="7.3.6"
RUN yum clean all && \
    yum install -y autoconf \
                bison \
                bzip2-devel \
                gcc \
                gcc-c++ \
                git \
                gzip \
                libcurl-devel \
                libxml2-devel \
                make \
                openssl-devel \
                tar \
                unzip \
                zip

# Download the PHP source, compile, and install both PHP and Composer
RUN curl -sL https://github.com/php/php-src/archive/php-${php_version}.tar.gz | tar -xvz && \
    cd php-src-php-${php_version} && \
    ./buildconf --force && \
    ./configure --prefix=/opt/php-7-bin/ --with-openssl --with-curl --with-zlib --without-pear --enable-bcmath --with-bz2 --enable-mbstring --with-mysqli && \
    make -j 5 && \
    make install && \
    /opt/php-7-bin/bin/php -v && \
    curl -sS https://getcomposer.org/installer | /opt/php-7-bin/bin/php -- --install-dir=/opt/php-7-bin/bin/ --filename=composer

# Prepare runtime files
# RUN mkdir -p /lambda-php-runtime/bin && \
    # cp /opt/php-7-bin/bin/php /lambda-php-runtime/bin/php
COPY runtime/bootstrap /lambda-php-runtime/
RUN chmod 0755 /lambda-php-runtime/bootstrap

# Install Guzzle, prepare vendor files
RUN mkdir /lambda-php-vendor && \
    cd /lambda-php-vendor && \
    /opt/php-7-bin/bin/php /opt/php-7-bin/bin/composer require guzzlehttp/guzzle

###### Create runtime image ######
FROM public.ecr.aws/lambda/provided as runtime
# Layer 1: PHP Binaries
COPY --from=builder /opt/php-7-bin /var/lang
# Layer 2: Runtime Interface Client
COPY --from=builder /lambda-php-runtime /var/runtime
# Layer 3: Vendor
COPY --from=builder /lambda-php-vendor/vendor /opt/vendor

COPY src/ /var/task/

CMD [ "index" ]

To deploy this Lambda function, follow the instructions in the GitHub repository.

All runtime-related instructions are saved in the Dockerfile, which makes the custom runtime simpler to manage, update, and test. You can add additional Linux packages by appending to the yum install command. To install alternative PHP versions, change the php_version argument. Import additional PHP modules by adding to the compile command.
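
For example, an alternative version can be selected at build time by overriding the build argument without editing the Dockerfile. This is an illustrative command (the image tag is a placeholder, and the chosen version must remain compatible with the configure flags in the Dockerfile):

docker build --build-arg php_version="7.3.28" -t lambda-php-custom .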

View the complete application in the following file tree:

project/
┣ runtime/
┃ ┗ bootstrap
┣ src/
┃ ┗ index.php
┗ Dockerfile

The Lambda function code is stored in the src directory in a file named index.php. This contains the Lambda function handler “index()”.
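
As a hypothetical sketch only (the actual index.php in the repository may differ), a handler that works with the curl test event shown later could look like this:

<?php
// index.php - hypothetical handler invoked by the custom bootstrap
function index($data)
{
    // Read a query string parameter from the test event, with a default
    $name = $data['queryStringParameters']['name'] ?? 'World';

    return [
        'statusCode' => 200,
        'body' => "Hello, {$name}"
    ];
}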

A bootstrap file is in the ‘runtime’ directory. This uses the Lambda Runtime API to communicate with the Lambda execution environment.

The shebang hash sequence at the beginning of the bootstrap script instructs Lambda to run the file with the PHP executable, set by the Dockerfile:

#!/var/lang/bin/php

All environment variables used in the bootstrap are set by the Lambda execution environment when running in the AWS Cloud. When running locally, the Lambda Runtime Interface Emulator (RIE) sets these values.
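
To illustrate what such a bootstrap does, the following is a simplified, hypothetical sketch of the Lambda Runtime API loop (the actual bootstrap in the repository differs and includes more error handling):

#!/var/lang/bin/php
<?php
// Simplified bootstrap sketch: poll the Runtime API, invoke the handler, post the result.
// AWS_LAMBDA_RUNTIME_API and LAMBDA_TASK_ROOT are set by the execution environment (or the RIE locally).
require getenv('LAMBDA_TASK_ROOT') . '/index.php';

$runtimeApi = getenv('AWS_LAMBDA_RUNTIME_API');

while (true) {
    // Fetch the next invocation event
    $event = file_get_contents("http://{$runtimeApi}/2018-06-01/runtime/invocation/next");

    // Extract the request ID from the response headers
    $requestId = '';
    foreach ($http_response_header as $header) {
        if (stripos($header, 'Lambda-Runtime-Aws-Request-Id:') === 0) {
            $requestId = trim(substr($header, strlen('Lambda-Runtime-Aws-Request-Id:')));
        }
    }

    // Invoke the handler defined in index.php
    $response = index(json_decode($event, true));

    // Post the handler response back to the Runtime API
    $context = stream_context_create(['http' => [
        'method'  => 'POST',
        'header'  => 'Content-Type: application/json',
        'content' => json_encode($response),
    ]]);
    file_get_contents("http://{$runtimeApi}/2018-06-01/runtime/invocation/{$requestId}/response", false, $context);
}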

Testing locally with the Lambda RIE

Using container image support for Lambda makes it easier for PHP developers to test Lambda functions locally. The previous container image example builds from the Lambda base image provided by AWS. This base image contains the Lambda RIE.

This is a proxy for Lambda’s Runtime and Extensions APIs. It acts as a lightweight web server that converts HTTP requests to JSON events and maintains functional parity with the Lambda Runtime API in the AWS Cloud. This allows developers to test functions locally using familiar tools such as cURL and the Docker CLI.

  1. Build the previous custom runtime image using the Docker build command:
    docker build -t phpmyfunction .
  2. Run the function locally using the Docker run command, bound to port 9000:
    docker run -p 9000:8080 phpmyfunction:latest
  3. This command starts up a local endpoint at:
    localhost:9000/2015-03-31/functions/function/invocations
  4. Post an event to this endpoint using a curl command. The Lambda function payload is provided by using the -d flag. This is a valid JSON object required by the Runtime Interface Emulator:
    curl "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"queryStringParameters": {"name":"Ben"}}'
  5. A 200 status response is returned.

Building web applications with Bref container images

Bref is an open source runtime Lambda layer for PHP. Using the bref-fpm layer, you can build applications with traditional PHP frameworks such as Symfony and Laravel. Bref’s implementation of the FastCGI protocol returns an HTTP response instead of a JSON response. When using the zip archive format to package Lambda functions, Bref’s custom runtime is provided to the function as a Lambda layer. Functions packaged as container images do not support adding Lambda layers to the function configuration. In addition to runtime layers, Bref also provides a number of Docker images. These images use the Lambda runtime API to form a runtime interface client that communicates with the Lambda execution environment.

The following example shows how to compose a Dockerfile that uses the bref php-74-fpm container image:

# Uses the bref/php-74-fpm image (PHP 7.4 with PHP-FPM) as the base image
FROM bref/php-74-fpm
# Download Composer for dependency management
RUN curl -s https://getcomposer.org/installer | php
# Install bref using Composer
RUN php composer.phar require bref/bref
# Copy the project files into a location that the Lambda service can read from
COPY . /var/task
# Set the function handler entry point
CMD _HANDLER=index.php /opt/bootstrap
  1. The first line sets the base image to use bref/php-74-fpm.
  2. Composer, a dependency manager for PHP, is installed.
  3. Composer’s require command is used to add the bref package to the composer.json file.
  4. The project files are then copied into the /var/task directory, where the function code runs from.
  5. The function handler is set along with Bref’s bootstrap file.

The steps to build and deploy this image to the Amazon Elastic Container Registry are the same for any runtime and are explained in this announcement blog post.
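
As a minimal sketch under assumed values (the account ID, Region, repository name, and role name are placeholders), the push-and-create flow typically looks like this:

# Create a repository and authenticate Docker with Amazon ECR
aws ecr create-repository --repository-name php-fpm-demo
aws ecr get-login-password --region eu-west-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com

# Build, tag, and push the image
docker build -t php-fpm-demo .
docker tag php-fpm-demo:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/php-fpm-demo:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/php-fpm-demo:latest

# Create the Lambda function from the pushed image
aws lambda create-function \
  --function-name php-fpm-demo \
  --package-type Image \
  --code ImageUri=123456789012.dkr.ecr.eu-west-1.amazonaws.com/php-fpm-demo:latest \
  --role arn:aws:iam::123456789012:role/lambda-execution-role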

Conclusion

The new container image support for Lambda functions allows developers to package Lambda functions of up to 10 GB in size. Using the container image format and a Dockerfile can make it easier to build and update functions with custom runtimes such as PHP.

Developers can include specific language versions, modules, and package dependencies. The Amazon Linux and Amazon Linux 2 base images give developers a starting point to customize the runtime. With the Lambda Runtime Interface Emulator, it’s simpler for developers to test Lambda functions locally. PHP developers can use existing third-party images, such as bref-fpm, to create web applications in a single Lambda function.

Visit serverlessland.com for more information on building serverless PHP applications.