Tag Archives: AWS Lambda

Operating Lambda: Anti-patterns in event-driven architectures – Part 3

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/operating-lambda-anti-patterns-in-event-driven-architectures-part-3/

In the Operating Lambda series, I cover important topics for developers, architects, and systems administrators who are managing AWS Lambda-based applications. This three-part section discusses event-driven architectures and how these relate to Lambda-based applications.

Part 1 covers the benefits of the event-driven paradigm and how it can improve throughput, scale, and extensibility. Part 2 explains some of the design principles and best practices that can help developers gain the benefits of building Lambda-based applications. This post explores anti-patterns in event-driven architectures.

Lambda is not a prescriptive service and provides broad functionality for you to build applications as needed. While this flexibility is important to customers, there are some designs that are technically functional but suboptimal from an architecture standpoint.

The Lambda monolith

In many applications migrated from traditional servers, Amazon EC2 instances or AWS Elastic Beanstalk applications, developers “lift and shift” existing code. Frequently, this results in a single Lambda function that contains all of the application logic that is triggered for all events. For a basic web application, for example, a monolithic Lambda function handles all Amazon API Gateway routes and integrates with all necessary downstream resources:

Monolithic Lambda application

This approach has several drawbacks:

  • Package size: The Lambda function may be much larger because it contains all possible code for all paths, which makes it slower for the Lambda service to download and run.
  • Harder to enforce least privilege: The function’s IAM role must grant permissions for all resources needed for all paths, making the permissions very broad. Many paths in the functional monolith do not need all the permissions that have been granted.
  • Harder to upgrade: In a production system, any upgrades to the single function are more risky and could cause the entire application to stop working. Upgrading a single path in the Lambda function is an upgrade to the entire function.
  • Harder to maintain: It’s more difficult to have multiple developers working on the service since it’s a monolithic code repository. It also increases the cognitive burden on developers and makes it harder to create appropriate test coverage for code.
  • Harder to reuse code: Typically, it can be harder to separate libraries from monoliths, making code reuse more difficult. As you develop and support more projects, this can make it harder to support the code and scale your team’s velocity.
  • Harder to test: As the lines of code increase, it becomes harder to unit test all the possible combinations of inputs and entry points in the code base. It’s generally easier to implement unit testing for smaller services with less code.

The preferred alternative is to decompose the monolithic Lambda function into individual microservices, mapping a single Lambda function to a single, well-defined task. In this example web application with a few API endpoints, the resulting microservice-based architecture is based on the API routes.

Microservice architecture
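
As an illustration of how small each microservice can become, the following is a minimal Python sketch of a single-purpose function that serves only one API Gateway route and reads from one DynamoDB table. The table name, key schema, and environment variable are hypothetical placeholders rather than part of the original example.

import json
import os

import boto3

# Hypothetical table, supplied through an environment variable.
TABLE_NAME = os.environ.get("ORDERS_TABLE", "orders")
table = boto3.resource("dynamodb").Table(TABLE_NAME)

def handler(event, context):
    """Handles only the GET /orders/{orderId} route."""
    order_id = event["pathParameters"]["orderId"]
    item = table.get_item(Key={"orderId": order_id}).get("Item")
    if not item:
        return {"statusCode": 404, "body": json.dumps({"message": "Order not found"})}
    return {"statusCode": 200, "body": json.dumps(item, default=str)}

Because the function handles a single route, its IAM role only needs read access to that one table, which keeps permissions narrow.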

The process of decomposing a monolith depends upon the complexity of your workload. Using strategies like the strangler pattern, you can migrate code from larger code bases to microservices. There are many potential benefits to running a Lambda-based application this way:

  • Package sizes can be optimized for only the code needed for a single task, which helps make the function more performant, and may reduce running cost.
  • IAM roles can be scoped to precisely the access needed by the microservice, making it easier to enforce the principles of least privilege. In controlling the blast radius, using IAM roles this way can give your application a stronger security posture.
  • Easier to upgrade: you can apply upgrades at a microservice level without impacting the entire workload. Upgrades occur at the functional level, not at the application level, and you can implement canary releases to control the rollout.
  • Easier to maintain: adding new features is usually easier when working with a single small service than with a monolith that has significant coupling. Frequently, you implement features by adding new Lambda functions without modifying existing code.
  • Easier to reuse code: when you have specialized functions that perform a single task, it’s often easier to copy these across multiple projects. Building a library of generic specialized functions can help accelerate development in future projects.
  • Easier to test: unit testing is easier when there are few lines of code and the range of potential inputs for a function is smaller.
  • Lower cognitive load for developers since each development team has a smaller surface area of the application to understand. This can help accelerate onboarding for new developers.

To learn more, read “Decomposing the Monolith with Event Storming”.

Lambda as orchestrator

Many business workflows result in complex workflow logic, where the flow of operations depends on multiple factors. In an ecommerce example, a payments service is an example of a complex workflow:

  • A payment type may be cash, check, or credit card, all of which have different processes.
  • A credit card payment has many possible states, from successful to declined.
  • The service may need to issue refunds or credits for a portion or the entire amount.
  • A third-party service that processes credit cards may be unavailable due to an outage.
  • Some payments may take multiple days to process.

Implementing this logic in a Lambda function can result in ‘spaghetti code’ that’s difficult to read, understand, and maintain. It can also become fragile in production systems. The complexity is compounded if you must handle error handling, retry logic, and inputs and outputs processing. These types of orchestration functions are an anti-pattern in Lambda-based applications.

Instead, use AWS Step Functions to orchestrate these workflows using a versionable, JSON-defined state machine. State machines can handle nested workflow logic, errors, and retries. A workflow can also run for up to 1 year, and the service can maintain different versions of workflows, allowing you to upgrade production systems in place. Using this approach also results in less custom code, making an application easier to test and maintain.
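
As a rough sketch of what this looks like in practice, the following Python snippet registers a simplified payment state machine with Step Functions using the AWS SDK. The state names, branching logic, account ID, function ARNs, and IAM role are illustrative assumptions, not a complete payments workflow.

import json

import boto3

sfn = boto3.client("stepfunctions")

# Simplified payment workflow: branch on payment type and retry the
# card-processing task if the third-party processor is unavailable.
definition = {
    "StartAt": "CheckPaymentType",
    "States": {
        "CheckPaymentType": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.paymentType", "StringEquals": "card", "Next": "ChargeCard"}
            ],
            "Default": "RecordCashPayment",
        },
        "ChargeCard": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:charge-card",
            "Retry": [
                {"ErrorEquals": ["States.TaskFailed"], "IntervalSeconds": 30, "MaxAttempts": 3}
            ],
            "End": True,
        },
        "RecordCashPayment": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:record-cash-payment",
            "End": True,
        },
    },
}

response = sfn.create_state_machine(
    name="payments-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/payments-workflow-role",  # assumed role
)
print(response["stateMachineArn"])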

While Step Functions is generally best suited for workflows within a bounded context or microservice, use Amazon EventBridge to coordinate state changes across multiple services. EventBridge is a serverless event bus that routes events based upon rules, and simplifies orchestration between microservices.

Recursive patterns that cause invocation loops

AWS services generate events that invoke Lambda functions, and Lambda functions can send messages to AWS services. Generally, the service or resource that invokes a Lambda function should be different to the service or resource that the function outputs to. Failure to manage this can result in invocation loops.

For example, a Lambda function writes an object to an Amazon S3 bucket, which in turn invokes the same Lambda function via a put event. The invocation causes a second object to be written to the bucket, which invokes the same Lambda function:

Event loops in Lambda-based applications

While the potential for infinite loops exists in most programming languages, this anti-pattern has the potential to consume more resources in serverless applications. Both Lambda and S3 automatically scale based upon traffic, so the loop may cause Lambda to scale to consume all available concurrency and S3 to continue to write objects and generate more events for Lambda. In this situation, you can press the “Throttle” button in the Lambda console to scale the function concurrency down to zero and break the recursion cycle.

This example uses S3 but the risk of recursive loops also exists in Amazon SNS, Amazon SQS, Amazon DynamoDB, and other services. In most cases, it is safer to separate the resources that produce and consume events from Lambda. However, if you need a Lambda function to write data back to the same resource that invoked the function, ensure that you:

  • Use a positive trigger: For example, an S3 object trigger may use a naming convention or meta tag that is only triggered on the first invocation. This prevents objects written from the Lambda function from invoking the same Lambda function again (see the sketch after this list). See the S3-to-Lambda translation application for an example of this mechanism.
  • Use reserved concurrency: Setting the function’s reserved concurrency to a lower limit prevents the function from scaling concurrently beyond that limit. It does not prevent the recursion, but limits the resources consumed as a safety mechanism. This can be useful during the development and test phases.
  • Use Amazon CloudWatch monitoring and alarming: By setting an alarm on a function’s concurrency metric, you can receive alerts if the concurrency suddenly spikes and take appropriate action.
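
The following is a minimal Python sketch of the positive trigger described above. It assumes the function writes its output under a dedicated prefix in the same bucket and simply skips any object that already carries that prefix; scoping the S3 event notification to exclude the prefix provides an additional safeguard.

import urllib.parse

import boto3

s3 = boto3.client("s3")
OUTPUT_PREFIX = "processed/"  # assumed naming convention for objects this function writes

def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Positive trigger: ignore objects written by this function so that a
        # second invocation does not start an infinite loop.
        if key.startswith(OUTPUT_PREFIX):
            continue

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        transformed = body.upper()  # placeholder transformation

        s3.put_object(Bucket=bucket, Key=f"{OUTPUT_PREFIX}{key}", Body=transformed)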

Lambda functions calling Lambda functions

Functions enable encapsulation and code reuse. Most programming languages support the concept of code synchronously calling functions within a code base. In this case, the caller waits until the function returns a response. This model does not generally adapt well to serverless development.

For example, consider a simple ecommerce application consisting of three Lambda functions that process an order:

Ecommerce example with three functions

In this case, the Create order function calls the Process payment function, which in turn calls the Create invoice function. While this synchronous flow may work within a single application on a server, it introduces several avoidable problems in a distributed serverless architecture:

  • Cost: With Lambda, you pay for the duration of an invocation. In this example, while the Create invoice function runs, two other functions are also running in a wait state, shown in red on the diagram.
  • Error handling: In nested invocations, error handling can become more complex. Either errors are thrown to parent functions to handle at the top-level function, or functions require custom handling. For example, an error in Create invoice might require the Process payment function to reverse the charge, or it may instead retry the Create invoice process.
  • Tight coupling: Processing a payment typically takes longer than creating an invoice. In this model, the availability of the entire workflow is limited by the slowest function.
  • Scaling: The concurrency of all three functions must be equal. In a busy system, this uses more concurrency than would otherwise be needed.

In serverless applications, there are two common approaches to avoid this pattern. First, use an SQS queue between Lambda functions. If a downstream process is slower than an upstream process, the queue durably persists messages and decouples the two functions. In this example, the Create order function publishes a message to an SQS queue, and the Process payment function consumes messages from the queue.
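
A minimal sketch of the queue-based approach, assuming a hypothetical queue URL passed in as an environment variable: the first handler publishes the order to SQS and returns immediately, and the second handler is invoked by Lambda’s SQS event source mapping.

import json
import os

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["PAYMENTS_QUEUE_URL"]  # assumed environment variable

def create_order_handler(event, context):
    """'Create order' function: stores the order, then hands off via the queue."""
    order = json.loads(event["body"])
    # ... persist the order to a durable store here ...
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(order))
    return {"statusCode": 202, "body": json.dumps({"orderId": order.get("orderId")})}

def process_payment_handler(event, context):
    """'Process payment' function: receives batches of messages from SQS."""
    for record in event["Records"]:
        order = json.loads(record["body"])
        # ... charge the payment for this order ...
        print(f"processing payment for order {order.get('orderId')}")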

The second approach is to use AWS Step Functions. For complex processes with multiple types of failure and retry logic, Step Functions can help reduce the amount of custom code needed to orchestrate the workflow. As a result, Step Functions orchestrates the work and robustly handles errors and retries, and the Lambda functions contain only business logic.

Synchronous waiting within a single Lambda function

Within a single Lambda function, ensure that any potentially concurrent activities are not scheduled synchronously. For example, a Lambda function might write to an S3 bucket and then write to a DynamoDB table:

The wait states, shown in red in the diagram, are compounded because the activities are sequential. If the tasks are independent, they can be run in parallel, which results in the total wait time being set by the longest-running task.

Parallel tasks in Lambda functions
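
A sketch of the parallel approach in Python, assuming the S3 write and the DynamoDB write are independent; the bucket and table names are placeholders. A thread pool lets both calls proceed concurrently, so the function waits roughly as long as the slower of the two.

import json
import os
from concurrent.futures import ThreadPoolExecutor

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table(os.environ.get("RESULTS_TABLE", "results"))  # assumed table
BUCKET = os.environ.get("RESULTS_BUCKET", "results-bucket")                           # assumed bucket

def handler(event, context):
    payload = json.dumps(event)

    # The two writes do not depend on each other, so run them concurrently
    # instead of sequentially.
    with ThreadPoolExecutor(max_workers=2) as pool:
        s3_write = pool.submit(
            s3.put_object, Bucket=BUCKET, Key=f"{context.aws_request_id}.json", Body=payload
        )
        ddb_write = pool.submit(
            table.put_item, Item={"id": context.aws_request_id, "payload": payload}
        )
        s3_write.result()
        ddb_write.result()

    return {"statusCode": 200}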

In cases where the second task depends on the completion of the first task, you may be able to reduce the total waiting time and the cost of execution by splitting the Lambda functions:

Splitting tasks over two functions

In this design, the first Lambda function responds immediately after putting the object to the S3 bucket. The S3 service invokes the second Lambda function, which then writes data to the DynamoDB table. This approach minimizes the total wait time in the Lambda function executions.

To learn more, read the “Serverless Applications Lens” from the AWS Well-Architected Framework.

Conclusion

This post discusses anti-patterns in event-driven architectures using Lambda. I show some of the issues when using monolithic Lambda functions or custom code to orchestrate workflows. I explain how to avoid recursive architectures that may cause invocation loops and why you should avoid calling functions from functions. I also explain different approaches to handling waiting in functions to minimize cost.

For more serverless learning resources, visit Serverless Land.

Building PHP Lambda functions with Docker container images

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/building-php-lambda-functions-with-docker-container-images/

At re:Invent 2020, AWS announced that you can package and deploy AWS Lambda functions as container images. Packaging AWS Lambda functions as container images brings some notable benefits for developers running custom runtimes, such as PHP. This blog post explains those benefits and shows how to use the new container image support for Lambda functions to build serverless PHP applications.

Overview

Many PHP developers are familiar with building applications as containers to create a portable artifact for easier deployment. Packaging applications as containers helps to maintain consistent PHP versions, package versions, and configuration settings across multiple environments.

The new container image support for Lambda allows you to use familiar container tooling to build your applications. It also allows you to transition your applications into a serverless event-driven model. This brings the benefits of having no infrastructure to manage, automated scalability, and pay-per-use billing.

The advantages of an event-driven model for PHP applications are explained across the blog series “The serverless LAMP stack”. It explores the concepts, methods, and reasons for creating serverless applications with PHP. The architectural patterns and service limits in this blog series apply to functions packaged using both container image and zip archive formats, with some key exceptions:

                       Zip archive    Container image
Maximum package size   250 MB         10 GB
Lambda layers          Supported      Include in image
Lambda Extensions      Supported      Include in image

Custom runtimes with container images

For custom runtimes such as PHP, Lambda provides base images containing the required Amazon Linux or Amazon Linux 2 operating system. Extend this to include your own runtime by implementing the Lambda Runtime API in a bootstrap file.

Before container image support for Lambda, a custom runtime was packaged using the .zip format. This required the developer to:

  1. Set up an Amazon Linux environment compatible with the Lambda execution environment.
  2. Install compilation dependencies and compile a version of PHP.
  3. Save the compiled PHP binary together with a bootstrap file and package as a .zip.
  4. Publish the .zip as a runtime layer.
  5. Add the runtime layer to a Lambda function.

Any edits to the custom runtime such as new packages, PHP versions, modules, or dependencies require the process to be repeated. This process can be time consuming and prone to error.

Creating a custom PHP runtime using the new container image support for Lambda can simplify changing the runtime environment. Dockerfiles allow you to have a fully scripted, faster, and portable build process without setting up an Amazon Linux environment.

This GitHub repository contains a custom PHP runtime for Lambda functions packaged as a container image. The following Dockerfile uses the base image for Amazon Linux provided by AWS. The instructions perform the following:

  • Install system-wide Linux packages (zip, curl, tar).
  • Download and compile PHP.
  • Download and install composer dependency manager and dependencies.
  • Move PHP binaries, bootstrap, and vendor dependencies into a directory that Lambda can read from.
  • Set the container entrypoint.
#Lambda base image Amazon Linux
FROM public.ecr.aws/lambda/provided as builder 
# Set desired PHP Version
ARG php_version="7.3.6"
RUN yum clean all && \
    yum install -y autoconf \
                bison \
                bzip2-devel \
                gcc \
                gcc-c++ \
                git \
                gzip \
                libcurl-devel \
                libxml2-devel \
                make \
                openssl-devel \
                tar \
                unzip \
                zip

# Download the PHP source, compile, and install both PHP and Composer
RUN curl -sL https://github.com/php/php-src/archive/php-${php_version}.tar.gz | tar -xvz && \
    cd php-src-php-${php_version} && \
    ./buildconf --force && \
    ./configure --prefix=/opt/php-7-bin/ --with-openssl --with-curl --with-zlib --without-pear --enable-bcmath --with-bz2 --enable-mbstring --with-mysqli && \
    make -j 5 && \
    make install && \
    /opt/php-7-bin/bin/php -v && \
    curl -sS https://getcomposer.org/installer | /opt/php-7-bin/bin/php -- --install-dir=/opt/php-7-bin/bin/ --filename=composer

# Prepare runtime files
# RUN mkdir -p /lambda-php-runtime/bin && \
    # cp /opt/php-7-bin/bin/php /lambda-php-runtime/bin/php
COPY runtime/bootstrap /lambda-php-runtime/
RUN chmod 0755 /lambda-php-runtime/bootstrap

# Install Guzzle, prepare vendor files
RUN mkdir /lambda-php-vendor && \
    cd /lambda-php-vendor && \
    /opt/php-7-bin/bin/php /opt/php-7-bin/bin/composer require guzzlehttp/guzzle

###### Create runtime image ######
FROM public.ecr.aws/lambda/provided as runtime
# Layer 1: PHP Binaries
COPY --from=builder /opt/php-7-bin /var/lang
# Layer 2: Runtime Interface Client
COPY --from=builder /lambda-php-runtime /var/runtime
# Layer 3: Vendor
COPY --from=builder /lambda-php-vendor/vendor /opt/vendor

COPY src/ /var/task/

CMD [ "index" ]

To deploy this Lambda function, follow the instructions in the GitHub repository.

All runtime-related instructions are saved in the Dockerfile, which makes the custom runtime simpler to manage, update, and test. You can add additional Linux packages by appending to the yum install command. To install alternative PHP versions, change the php_version argument. Import additional PHP modules by adding to the compile command.

View the complete application in the following file tree:

project/
┣ runtime/
┃ ┗ bootstrap
┣ src/
┃ ┗ index.php
┗ Dockerfile

The Lambda function code is stored in the src directory in a file named index.php. This contains the Lambda function handler “index()”.

A bootstrap file is in the ‘runtime’ directory. This uses the Lambda runtime API to communicate with the Lambda execution environment.

The shebang hash sequence at the beginning of the bootstrap script instructs Lambda to run the file with the PHP executable, set by the Dockerfile.

All environment variables used in the bootstrap are set by the Lambda execution environment when running in the AWS Cloud. When running locally, the Lambda Runtime Interface Emulator (RIE) sets these values.

#!/var/lang/bin/php

Testing locally with the Lambda RIE

Using container image support for Lambda makes it easier for PHP developers to test Lambda functions locally. The previous container image example builds from the Lambda base image provided by AWS. This base image contains the Lambda RIE.

This is a proxy for Lambda’s Runtime and Extensions APIs. It acts as a lightweight web server that converts HTTP requests to JSON events and maintains functional parity with the Lambda Runtime API in the AWS Cloud. This allows developers to test functions locally using familiar tools such as cURL and the Docker CLI.

  1. Build the previous custom runtime image using the Docker build command:
    docker build -t phpmyfunction .
  2. Run the function locally using the Docker run command, bound to port 9000:
    docker run -p 9000:8080 phpmyfunction:latest
  3. This command starts up a local endpoint at:
    localhost:9000/2015-03-31/functions/function/invocations
  4. Post an event to this endpoint using a curl command. The Lambda function payload is provided by using the -d flag. This is a valid JSON object required by the Runtime Interface Emulator:
    curl "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"queryStringParameters": {"name":"Ben"}}'
  5. A 200 status response is returned.

Building web applications with Bref container images

Bref is an open source runtime Lambda layer for PHP. Using the bref-fpm layer, you can build applications with traditional PHP frameworks such as Symfony and Laravel. Bref’s implementation of the FastCGI protocol returns an HTTP response instead of a JSON response. When using the zip archive format to package Lambda functions, Bref’s custom runtime is provided to the function as a Lambda layer. Functions packaged as container images do not support adding Lambda layers to the function configuration. In addition to runtime layers, Bref also provides a number of Docker images. These images use the Lambda runtime API to form a runtime interface client that communicates with the Lambda execution environment.

The following example shows how to compose a Dockerfile that uses the bref php-74-fpm container image:

# Use the bref/php-74-fpm base image
FROM bref/php-74-fpm
# Download Composer for dependency management
RUN curl -s https://getcomposer.org/installer | php
# Install Bref using Composer
RUN php composer.phar require bref/bref
# Copy the project files into a location that the Lambda service can read from
COPY . /var/task
# Set the function handler entry point
CMD _HANDLER=index.php /opt/bootstrap
  1. The first line sets the base image to use bref/php-74-fpm.
  2. Composer, a dependency manager for PHP, is installed.
  3. Composer’s require command is used to add the bref package to the composer.json file.
  4. The project files are then copied into the /var/task directory, where the function code runs from.
  5. The function handler is set along with Bref’s bootstrap file.

The steps to build and deploy this image to the Amazon Elastic Container Registry are the same for any runtime, and explained in this announcement blog post.

Conclusion

The new container image support for Lambda functions allows developers to package Lambda functions of up to 10 GB in size. Using the container image format and a Dockerfile can make it easier to build and update functions with custom runtimes such as PHP.

Developers can include specific language versions, modules, and package dependencies. The Amazon Linux and Amazon Linux 2 base images give developers a starting point to customize the runtime. With the Lambda Runtime Interface Emulator, it’s simpler for developers to test Lambda functions locally. PHP developers can use existing third-party images, such as bref-fpm, to create web applications in a single Lambda function.

Visit serverlessland.com for more information on building serverless PHP applications.

Operating Lambda: Design principles in event-driven architectures – Part 2

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/operating-lambda-design-principles-in-event-driven-architectures-part-2/

In the Operating Lambda series, I cover important topics for developers, architects, and systems administrators who are managing AWS Lambda-based applications. This three-part section discusses event-driven architectures and how these relate to Lambda-based applications.

Part 1 covers the benefits of the event-driven paradigm and how it can improve throughput, scale and extensibility. This post explains some of the design principles and best practices that can help developers gain the benefits of building Lambda-based applications.

Overview

Many of the best practices that apply to software development and distributed systems also apply to serverless application development. The broad principles are consistent with the Well-Architected Framework. The overall goal is to develop workloads that are:

  • Reliable: offering your end users a high level of availability. AWS serverless services are reliable because they are also designed for failure.
  • Durable: providing storage options that meet the durability needs of your workload.
  • Secure: following best practices and using the tools provided to secure access to workloads and limit the blast radius, if any issues occur.
  • Performant: using computing resources efficiently and meeting the performance needs of your end users.
  • Cost-efficient: designing architectures that avoid unnecessary cost, can scale without overspending, and can be decommissioned without significant overhead if necessary.

When you develop Lambda-based applications, there are several important design principles that can help you build workloads that meet these goals. You may not apply every principle to every architecture and you have considerable flexibility in how you approach building with Lambda. However, they should guide you in general architecture decisions.

Use services instead of custom code

Serverless applications usually comprise several AWS services, integrated with custom code run in Lambda functions. While Lambda can be integrated with most AWS services, the services most commonly used in serverless applications are:

Category                      AWS service
Compute                       AWS Lambda
Data storage                  Amazon S3, Amazon DynamoDB, Amazon RDS
API                           Amazon API Gateway
Application integration       Amazon EventBridge, Amazon SNS, Amazon SQS
Orchestration                 AWS Step Functions
Streaming data and analytics  Amazon Kinesis Data Firehose

There are many well-established, common patterns in distributed architectures that you can build yourself or implement using AWS services. For most customers, there is little commercial value in investing time to develop these patterns from scratch. When your application needs one of these patterns, use the corresponding AWS service:

Pattern                      AWS service
Queue                        Amazon SQS
Event bus                    Amazon EventBridge
Publish/subscribe (fan-out)  Amazon SNS
Orchestration                AWS Step Functions
API                          Amazon API Gateway
Event streams                Amazon Kinesis

These services are designed to integrate with Lambda and you can use infrastructure as code (IaC) to create and discard resources in the services. You can use any of these services via the AWS SDK without needing to install applications or configure servers. Becoming proficient with using these services via code in your Lambda functions is an important step to producing well-designed serverless applications.
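
For example, instead of building a custom publish/subscribe mechanism, a function can emit a domain event to EventBridge with a few lines of SDK code. This is a minimal sketch; the event source and detail-type values are assumptions.

import json

import boto3

events = boto3.client("events")

def handler(event, context):
    # Publish a domain event to the default event bus; rules on the bus decide
    # which downstream consumers receive it.
    events.put_events(
        Entries=[
            {
                "Source": "com.example.orders",   # assumed source name
                "DetailType": "OrderCreated",     # assumed detail type
                "Detail": json.dumps({"orderId": event.get("orderId")}),
            }
        ]
    )
    return {"statusCode": 202}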

Understanding the level of abstraction

The Lambda service limits your access to the underlying operating systems, hypervisors, and hardware running your Lambda functions. The service continuously improves and changes infrastructure to add features, reduce cost and make the service more performant. Your code should assume no knowledge of how Lambda is architected and assume no hardware affinity.

Similarly, the integration of other services with Lambda is managed by AWS with only a small number of configuration options exposed. For example, when API Gateway and Lambda interact, there is no concept of load balancing available since it is entirely managed by the services. You also have no direct control over which Availability Zones the services use when invoking functions at any point in time, or how and when Lambda execution environments are scaled up or destroyed.

This abstraction allows you to focus on the integration aspects of your application, the flow of data, and the business logic where your workload provides value to your end users. Allowing the services to manage the underlying mechanics helps you develop applications more quickly with less custom code to maintain.

Implementing statelessness in functions

When building Lambda functions, you should assume that the environment exists only for a single invocation. The function should initialize any required state when it is first started – for example, fetching a shopping cart from a DynamoDB table. It should commit any permanent data changes to a durable store such as S3, DynamoDB, or SQS before exiting. It should not rely on any existing data structures or temporary files, or any internal state that would be managed by multiple invocations (such as counters or other calculated, aggregate values).

Lambda provides an initializer before the handler where you can initialize database connections, libraries, and other resources. Since execution environments are reused where possible to improve performance, you can amortize the time taken to initialize these resources over multiple invocations. However, you should not store any variables or data used in the function within this global scope.
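
A minimal sketch of this pattern, assuming a hypothetical shopping-cart table: clients and configuration are created once in the initialization scope and reused across invocations, while all per-invocation state is read from and written back to DynamoDB rather than kept in global variables.

import os

import boto3

# Initialized once per execution environment and reused across invocations.
cart_table = boto3.resource("dynamodb").Table(os.environ.get("CART_TABLE", "shopping-carts"))

def handler(event, context):
    # Fetch the required state from durable storage at the start of the invocation.
    cart = cart_table.get_item(Key={"cartId": event["cartId"]}).get("Item", {})
    items = cart.get("items", []) + [event["newItem"]]

    # Commit the change to the durable store before returning; nothing is kept
    # in global scope between invocations.
    cart_table.put_item(Item={"cartId": event["cartId"], "items": items})
    return {"itemCount": len(items)}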

Lambda function design

Most architectures should prefer many, shorter functions over fewer, larger ones. Making Lambda functions highly specialized for your workload means that they are concise and generally result in shorter executions. The purpose of each function should be to handle the event passed into the function, with no knowledge or expectations of the overall workflow or volume of transactions. This makes the function agnostic to the source of the event with minimal coupling to other services.

Any global-scope constants that change infrequently should be implemented as environment variables to allow updates without deployments. Any secrets or sensitive information should be stored in AWS Systems Manager Parameter Store or AWS Secrets Manager and loaded by the function. Since these resources are account-specific, this allows you to create build pipelines across multiple accounts. The pipelines load the appropriate secrets per environment, without exposing these to developers or requiring any code changes.
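
A sketch combining both approaches, with placeholder environment variable names and secret structure: plain configuration comes from environment variables, while the secret is loaded from AWS Secrets Manager during initialization so it never appears in code or deployment artifacts.

import json
import os

import boto3

# Infrequently changing configuration is read from environment variables.
API_BASE_URL = os.environ.get("API_BASE_URL", "https://api.example.com")  # assumed variable

# Secrets are loaded from AWS Secrets Manager at initialization time.
secrets = boto3.client("secretsmanager")
_secret = secrets.get_secret_value(SecretId=os.environ["API_KEY_SECRET_ID"])  # assumed secret ID
API_KEY = json.loads(_secret["SecretString"])["apiKey"]                       # assumed secret shape

def handler(event, context):
    # Use API_BASE_URL and API_KEY to call the downstream service here.
    return {"configured": True}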

Building for on-demand data instead of batches

Many traditional systems are designed to run periodically and process batches of transactions that have built up over time. For example, a banking application may run every hour to process ATM transactions into central ledgers. In Lambda-based applications, the custom processing should be triggered by every event, allowing the service to scale up concurrency as needed, to provide near-real time processing of transactions.

While you can run cron tasks in serverless applications by using scheduled expressions for rules in Amazon EventBridge, these should be used sparingly or as a last resort. In any scheduled task that processes a batch, there is the potential for the volume of transactions to grow beyond what can be processed within the 15-minute Lambda timeout. If the limitations of external systems force you to use a scheduler, you should generally schedule for the shortest reasonable recurring time period.

For example, it’s not best practice to use a batch process that triggers a Lambda function to fetch a list of new S3 objects. This is because the service may receive more new objects between batches than can be processed within a single 15-minute Lambda invocation.

S3 fetch anti-pattern

Instead, the Lambda function should be invoked by the S3 service each time a new object is put into the S3 bucket. This approach is significantly more scalable and also invokes processing in near-real time.

S3 to Lambda events
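
A sketch of the event-driven alternative in Python: the handler below assumes it is configured as the target of an S3 event notification, so each new object is handed to it directly in the event payload and there is no scheduled batch or list operation.

import urllib.parse

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # One invocation per S3 event; the event already identifies the new object,
    # so no list_objects call or schedule is needed.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        data = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        process(data)

def process(data: bytes) -> None:
    # Placeholder for the workload-specific processing of each object.
    print(f"processed {len(data)} bytes")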

Orchestrating workflows

Workflows that involve branching logic, different types of failure models and retry logic typically use an orchestrator to keep track of the state of the overall execution. Avoid using Lambda functions for this purpose, since it results in tightly coupled groups of functions and services and complex code handling routing and exceptions.

With AWS Step Functions, you use state machines to manage orchestration. This extracts the error handling, routing, and branching logic from your code, replacing it with state machines declared using JSON. Apart from making workflows more robust and observable, it allows you to add versioning to workflows and make the state machine a codified resource that you can add to a code repository.

It’s common for simpler workflows in Lambda functions to become more complex over time, and for developers to use a Lambda function to orchestrate the flow. When operating a production serverless application, it’s important to identify when this is happening, so you can migrate this logic to a state machine.

Developing for retries and failures

AWS serverless services, including Lambda, are fault-tolerant and designed to handle failures. In the case of Lambda, if a service invokes a Lambda function and there is a service disruption, Lambda invokes your function in a different Availability Zone. If your function throws an error, the Lambda service retries your function.

Since the same event may be received more than once, functions should be designed to be idempotent. This means that receiving the same event multiple times does not change the result beyond the first time the event was received.

For example, if a credit card transaction is attempted twice due to a retry, the Lambda function should process the payment on the first receipt. On the second retry, either the Lambda function should discard the event or the downstream service it uses should be idempotent.

A Lambda function typically implements idempotency by using a DynamoDB table to track recently processed identifiers and determine whether the transaction has been handled previously. The DynamoDB table usually implements a Time To Live (TTL) value to expire items and limit the storage space used.

Idempotent microservice
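
One possible sketch of this pattern, assuming a hypothetical tracking table with DynamoDB TTL enabled on the ttl attribute: a conditional write records the transaction identifier, and a failed condition means the event is a duplicate and can be skipped.

import os
import time

import boto3
from botocore.exceptions import ClientError

processed_table = boto3.resource("dynamodb").Table(
    os.environ.get("IDEMPOTENCY_TABLE", "processed-transactions")  # assumed table
)

def handler(event, context):
    transaction_id = event["transactionId"]
    try:
        # Record the identifier only if it has not been seen before; the ttl
        # attribute lets DynamoDB expire old items and bound storage usage.
        processed_table.put_item(
            Item={"transactionId": transaction_id, "ttl": int(time.time()) + 24 * 3600},
            ConditionExpression="attribute_not_exists(transactionId)",
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            # Duplicate delivery: the payment was already processed.
            return {"status": "duplicate"}
        raise

    # ... process the payment exactly once here ...
    return {"status": "processed"}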

For failures within the custom code of a Lambda function, the service offers a number of features to help preserve and retry the event, and provide monitoring to capture that the failure has occurred. Using these approaches can help you develop workloads that are resilient to failure and improve the durability of events as they are processed by Lambda functions.

Conclusion

This post discusses the design principles that can help you develop well-architected serverless applications. I explain why using services instead of code can help improve your application’s agility and scalability. I also show how statelessness and function design also contribute to good application architecture. I cover how using events instead of batches helps serverless development, and how to plan for retries and failures in your Lambda-based applications.

Part 3 of this series will look at common anti-patterns in event-driven architectures and how to avoid building these into your microservices.

For more serverless learning resources, visit Serverless Land.

Best practices and advanced patterns for Lambda code signing

Post Syndicated from Cassia Martin original https://aws.amazon.com/blogs/security/best-practices-and-advanced-patterns-for-lambda-code-signing/

Amazon Web Services (AWS) recently released Code Signing for AWS Lambda. By using this feature, you can help enforce the integrity of your code artifacts and make sure that only trusted developers can deploy code to your AWS Lambda functions. Today, let’s review a basic use case along with best practices for Lambda code signing. Then, let’s dive deep and talk about two advanced patterns: one for centralized signing and one for cross-account layer validation. You can use these advanced patterns to apply code signing in a distributed ownership model, where separate groups write code, enforce specific signing profiles, or publish layers.

Secure software development lifecycle

For context on what this capability gives you, let’s look at the secure software development lifecycle (SDLC). You need different kinds of security controls for each of your development phases. Figure 1 shows an overview of the secure SDLC stages (code, build, test, deploy, and monitor) along with applicable security controls. You can use code signing for Lambda to protect the deployment stage and provide a cryptographically strong hash verification.

Figure 1: Code signing provides hash verification in the deployment phase of a secure SDLC

Adding Security into DevOps and Implementing DevSecOps Using AWS CodePipeline provide additional information on building a secure SDLC, with a particular focus on the code analysis controls.

Basic pattern

Figure 2 shows the basic pattern described in Code signing for AWS Lambda and in the documentation. The basic code signing pattern uses AWS Signer on a ZIP file and calls a create API to install the signed artifact in Lambda.

Figure 2: The basic code signing pattern

The basic pattern illustrated in Figure 2 is as follows:

  1. An administrator creates a signing profile in AWS Signer. A signing profile is analogous to a code signing certificate and represents a publisher identity. Administrators can provide access via AWS Identity and Access Management (IAM) for developers to use the signing profile to sign their artifacts.
  2. Administrators create a code signing configuration (CSC)—a new resource in Lambda that specifies the signing profiles that are allowed to sign code and the signature validation policy that defines whether to warn or reject deployments that fail the signature checks. CSC can be attached to existing or new Lambda functions to enable signature validations on deployment.
  3. Developers use one of the allowed signing profiles to sign the deployment artifact—a ZIP file—in AWS Signer.
  4. Developers deploy the signed deployment artifact to a function using either the CreateFunction API or the UpdateFunctionCode API.

Lambda performs signature checks before accepting the deployment. The deployment fails if the signature checks fail and you have set the signature validation policy in the CSC to reject deployments using ENFORCE mode.

Code signing checks

Code signing for Lambda provides four signature checks. First, the integrity check confirms that the deployment artifact hasn’t been modified after it was signed using AWS Signer. Lambda performs this check by matching the hash of the artifact with the hash from the signature. The second check is the source mismatch check, which detects if a signature isn’t present or if the artifact is signed by a signing profile that isn’t specified in the CSC. The third, expiry check, will fail if a signature is past its point of expiration. The fourth is the revocation check, which is used to see if anyone has explicitly marked the signing profile used for signing or the signing job as invalid by revoking it.

The integrity check must succeed or Lambda will not run the artifact. The other three checks can be configured to either block invocation or generate a warning. These checks are performed in order until one check fails or all checks succeed. As a security leader concerned about the security of code deployments, you can use the Lambda code signing checks to satisfy different security assurances:

  • Integrity – Provides assurance that code has not been tampered with, by ensuring that the signature on the build artifact is cryptographically valid.
  • Source mismatch – Provides assurance that only trusted entities or developers can deploy code.
  • Expiry – Provides assurance that code running in your environment is not stale, by making sure that signatures were created within a certain date and time.
  • Revocation – Allows security administrators to remove trust by invalidating signatures after the fact so that they cannot be used for code deployment if they have been exposed or are otherwise no longer trusted.

The last three checks are enforced only if you have set the signature validation policy—UntrustedArtifactOnDeployment parameter—in the CSC to ENFORCE. If the policy is set to WARN, then failures in any of the mismatch, expiry, and revocation checks will log a metric called a signature validation error in Amazon CloudWatch. The best practice for this setting is to initially set the policy to WARN. Then, you can monitor the warnings, if any, and update the policy to enforce when you’re confident in the findings in CloudWatch.

Centralized signing enforcement

In this scenario, you have a security administrators team that centrally manages and approves signing profiles. The team centralizes signing profiles in order to enforce that all code running on Lambda is authored by a trusted developer and isn’t tampered with after it’s signed. To do this, the security administrators team wants to enforce that developers—in the same account—can only create Lambda functions with signing profiles that the team has approved. By owning the signing profiles used by developer teams, the security team controls the lifecycle of the signatures and the ability to revoke the signatures. Here are instructions for creating a signing profile and CSC, and then enforcing their use.

Create a signing profile

To create a signing profile, you’ll use the AWS Command Line Interface (AWS CLI). Start by logging in to your account as the central security role. This is an administrative role that is scoped with permissions needed for setting up code signing. You’ll create a signing profile to use for an application named ABC. These example commands are written with prepopulated values for things like profile names, IDs, and descriptions. Change those as appropriate for your application.

To create a signing profile

  1. Run this command:
    aws signer put-signing-profile --platform-id "AWSLambda-SHA384-ECDSA" --profile-name profile_for_application_ABC
    

    Running this command will give you a signing profile version ARN. It will look something like arn:aws:signer:sa-east-1:XXXXXXXXXXXX:/signing-profiles/profile_for_application_ABC/XXXXXXXXXX. Make a note of this value to use in later commands.

    As the security administrator, you must grant the developers access to use the profile for signing. You do that by using the add-profile-permission command. Note that in this example, you are explicitly only granting permission for the signer:StartSigningJob action. You might want to grant permissions to other actions, such as signer:GetSigningProfile or signer:RevokeSignature, by making additional calls to add-profile-permission.

  2. Run this command, replacing <role-name> with the principal you’re using:
    aws signer add-profile-permission \
    --profile-name profile_for_application_ABC \
    --action signer:StartSigningJob \
    --principal <role-name> \
    --statement-id testStatementId
    

Create a CSC

You also want to make a CSC with the signing profile that you, as the security administrator, want all your developers to use.

To create a CSC

Run this command, replacing <signing-profile-version-arn> with the output from Step 1 of the preceding procedure—Create a signing profile:

aws lambda create-code-signing-config \
--description "Application ABC CSC" \
--allowed-publishers SigningProfileVersionArns=<signing-profile-version-arn> \
--code-signing-policies "UntrustedArtifactOnDeployment"="Enforce"

Running this command will give you a CSC ARN that will look something like arn:aws:lambda:sa-east-1:XXXXXXXXXXXX:code-signing-config:approved-csc-XXXXXXXXXXXXXXXXX. Make a note of this value to use later.

Write an IAM policy using the new CSC

Now that the security administrators team has created this CSC, how do they ensure that all the developers use it? Administrators can use IAM to grant access to the CreateFunction API, while using the new lambda:CodeSigningConfig condition key with the CSC ARN you created. This will ensure that developers can create functions only if code signing is enabled.

This IAM policy will allow the developer roles to create Lambda functions, but only when they are using the approved CSC. The additional Deny statement prevents the developers from creating their own signing profiles or CSCs, so that they must use the ones provided by the central team.

To write an IAM policy

Use the following policy document, replacing <code-signing-config-arn> with the CSC ARN you created previously.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:CreateFunction",
        "lambda:PutFunctionCodeSigningConfig"
      ],
      "Resource": "*",
      "Condition": {
        "ForAnyValue:StringEquals": {
          "lambda:CodeSigningConfig": ["<code-signing-config-arn>"]
        }
      }
    },
    {
      "Effect": "Deny",
      "Action": [
        "signer:PutSigningProfile",
        "lambda:DeleteFunctionCodeSigningConfig",
        "lambda:UpdateCodeSigningConfig",
        "lambda:DeleteCodeSigningConfig",
        "lambda:CreateCodeSigningConfig"
      ],
      "Resource": "*"
    }
  ]
}

Create a signed Lambda function

Now, the developers have permission to create new Lambda functions, but only if the functions are configured with the approved CSC. The approved CSC can specify the settings for Lambda signing policies, and lists exactly what profiles are approved for signing the function code with. This means that developers in that account will only be able to create functions if the functions are signed with a profile approved by the central team and the developer permissions have been added to the signing profile used.

To create a signed Lambda function

  1. Upload any Lambda code file to an Amazon Simple Storage Service (Amazon S3) bucket with the name main-function.zip. Note that your S3 bucket must have versioning enabled.
  2. Sign the zipped Lambda function using AWS Signer and the following command, replacing <lambda-bucket> and <version-string> with the correct details from your uploaded main-function.zip.
    aws signer start-signing-job \ 
    --source 's3={bucketName=<lambda-bucket>, version=<version-string>, key=main-function.zip}' \
    --destination 's3={bucketName=<lambda-bucket>, prefix=signed-}' \
    --profile-name profile_for_application_ABC
    

  3. Download the newly created ZIP file from your Lambda bucket. It will be called something like signed-XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX.zip.
  4. For convenience, rename it to signed-main-function.zip.
  5. Run the following command, replacing <lambda-role> with the ARN of your Lambda execution role, and replacing <code-signing-config-arn> with the result of the earlier procedure Create a CSC.
    aws lambda create-function \
        --function-name "signed-main-function" \
        --runtime "python3.8" \
        --role <lambda-role> \
        --zip-file "fileb://signed-main-function.zip" \
        --handler lambda_function.lambda_handler \ 
        --code-signing-config-arn <code-signing-config-arn>
    

Cross-account centralization

This pattern supports the use case where the security administrators and the developers are working in the same account. You might want to implement this across different accounts, which requires creating CSCs in specific accounts where developers need to deploy and update Lambda functions. To do this, you can use AWS CloudFormation StackSets to deploy CSCs. Stack sets allow you to roll out CloudFormation stacks across multiple AWS accounts. Use AWS CloudFormation StackSets for Multiple Accounts in an AWS Organization illustrates how to use an AWS CloudFormation template for deployment to multiple accounts.

The security administrators can detect and react to any changes to the stack set deployed CSCs by using drift detection. Drift detection is an AWS CloudFormation feature that detects unmanaged changes to the resources deployed using StackSets. To complete the solution, Implement automatic drift remediation for AWS CloudFormation using Amazon CloudWatch and AWS Lambda shares a solution for taking automated remediation when drift is detected in a CloudFormation stack.

Cross-account validation for Lambda layers

So far, you have the tools to sign your own Lambda code so that no one can tamper with it, and you’ve reviewed a pattern where one team creates and owns the signing profiles to be used by different developers. Let’s look at one more advanced pattern where you publish code as a signed Lambda layer in one account, and you then use it in a Lambda function in a separate account. A Lambda layer is an archive containing additional code that you can include in a function.

For this, let’s consider how to set up code signing when you’re using layers across two accounts. Layers allow you to use libraries in your function without needing to include them in your deployment package. It’s also possible to publish a layer in one account, and have a different account consume that layer. Let’s act as a publisher of a layer. In this use case, you want to use code signing so that consumers of your layer can have the security assurance that no one has tampered with the layer. Note that if you enable code signing to verify signatures on a layer, Lambda will also verify the signatures on the function code. Therefore, all of your deployment artifacts must be signed, using a profile listed in the CSC attached to the function.

Figure 3 illustrates the cross-account layer pattern, where you sign a layer in a publishing account and a function uses that layer in another consuming account.

Figure 3: This advanced pattern supports cross-account layers

Here are the steps to build this setup. You’ll be logging in to two different accounts, your publishing account and your consuming account.

Make a publisher signing profile

Running this command will give you a profile version ARN. Make a note of the value returned to use in a later step.

To make a publisher signing profile

  1. In the AWS CLI, log in to your publishing account.
  2. Run this command to make a signing profile for your publisher:
    aws signer put-signing-profile --platform-id "AWSLambda-SHA384-ECDSA" --profile-name publisher_approved_profile1
    

Sign your layer code using signing profile

Next, you want to sign your layer code with this signing profile. For this example, use the blank layer code from this GitHub project. You can make your own layer by creating a ZIP file with all your code files included in a directory supported by your Lambda runtime. AWS Lambda layers has instructions for creating your own layer.

You can then sign your layer code using the signing profile.

To sign your layer code

  1. Name your Lambda layer code file blank-python.zip and upload it to your S3 bucket.
  2. Sign the zipped Lambda function using AWS Signer with the following command. Replace <lambda-bucket> and <version-string> with the details from your uploaded blank-python.zip.
    aws signer start-signing-job \ 
    --source 's3={bucketName=<lambda-bucket>, version=<version-string>, key=blank-python.zip}' \
    --destination 's3={bucketName=<lambda-bucket>, prefix=signed-}' \
    --profile-name publisher_approved_profile1
    

Publish your signed layer

Now publish the resulting, signed layer. Note that the layers themselves don’t have signature validation on deployment. However, the signatures will be checked when they’re added to a function.

To publish your signed layer

  1. Download your new signed ZIP file from your S3 bucket, and rename it signed-layer.zip.
  2. Run the following command to publish your layer:
    aws lambda publish-layer-version \
    --layer-name lambda_signing \
    --zip-file "fileb://signed-layer.zip" \
    --compatible-runtimes python3.8 python3.7        
    

This command will return information about your newly published layer. Search for the LayerVersionArn and make a note of it for use later.

Grant read access

For the last step in the publisher account, you must grant read access to the layer using the add-layer-version-permission command. In the following command, you’re granting access to an individual account using the principal parameter.

(Optional) You could instead choose to grant access to all accounts in your organization by using “*” as the principal and adding the organization-id parameter.

To grant read access

  • Run the following command to grant read access to your layer, replacing <consuming-account-id> with the account ID of your second account:
    aws lambda add-layer-version-permission \
    --layer-name lambda_signing \
    --version-number 1 \
    --statement-id for-consuming-account \
    --action lambda:GetLayerVersion \
    --principal <consuming-account-id> 	
    

Create a CSC

It’s time to switch your AWS CLI to work with the consuming account. This consuming account can create a CSC for their Lambda functions that specifies what signing profiles are allowed.

To create a CSC

  1. In the AWS CLI, log out from your publishing account and into your consuming account.
  2. The consuming account will need a signing profile of its own to sign the main Lambda code. Run the following command to create one:
    aws signer put-signing-profile --platform-id "AWSLambda-SHA384-ECDSA" --profile-name consumer_approved_profile1
    

  3. Run the following command to create a CSC that allows code to be signed either by the publisher or the consumer. Replace <consumer-signing-profile-version-arn> with the profile version ARN you created in the preceding step. Replace <publisher-signing-profile-version-arn> with the signing profile from the Make a publisher signing profile procedure. Make a note of the CSC returned by this command to use in later steps.
    aws lambda create-code-signing-config \
    --description "Allow layers from publisher" \
    --allowed-publishers SigningProfileVersionArns="<publisher-signing-profile-version-arn>,<consumer-signing-profile-version-arn>" \
    --code-signing-policies "UntrustedArtifactOnDeployment"="Enforce"
    

Create a Lambda function using the CSC

When creating the function that uses the signed layer, you can pass in the CSC that you created. Lambda will check the signature on the function code in this step.

To create a Lambda function

  1. Use your own Lambda function code, or make a copy of blank-python.zip and rename it consumer-main-function.zip. Upload consumer-main-function.zip to a versioned S3 bucket in your consumer account.

    Note: If the S3 bucket doesn’t have versioning enabled, the procedure will fail.

  2. Sign the function with the signing profile of the consumer account. Replace <consumers-lambda-bucket> and <version-string> in the following command with the name of the S3 bucket you uploaded the consumer-main-function.zip to and the version.
    aws signer start-signing-job \ 
    --source 's3={bucketName=<consumers-lambda-bucket>, version=<version-string>, key=consumer-main-function.zip}' \
    --destination 's3={bucketName=<consumers-lambda-bucket>, prefix=signed-}' \
    --profile-name consumer_approved_profile1
    

  3. Download your new file and rename it to signed-consumer-main-function.zip.
  4. Run the following command to create a new Lambda function, replacing <lambda-role> with a valid Lambda execution role and <code-signing-config-arn> with the value returned from the preceding Create a CSC procedure.
    aws lambda create-function \
        --function-name "signed-consumer-main-function" \
        --runtime "python3.8" \
        --role <lambda-role> \
        --zip-file "fileb://signed-consumer-main-function.zip" \
        --handler lambda_function.lambda_handler \ 
        --code-signing-config-arn <code-signing-config-arn>
    

  5. Finally, add the signed layer from the publishing account into the configuration of that function. Run the following command, replacing <lambda-layer-arn> with the result from the preceding step Publish your signed layer.
    aws lambda update-function-configuration \
    --function-name "signed-consumer-main-function" \
    --layers "<lambda-layer-arn>"   
    

Lambda will check the signature on the layer code in this step. If the signature of any deployed layer artifact is corrupt, the Lambda function stops you from attaching the layer and deploying your code. This is true regardless of the mode you choose—WARN or ENFORCE. If you have multiple layers to add to your function, you must sign all layers invoked in a Lambda function.

This capability allows layer publishers to share signed layers. A publisher can sign all layers using a specific signing profile and ask all the layer consumers to use that signing profile as one of the allowed profiles in their CSCs. When someone uses the layer, they can trust that the layer comes from that publisher and hasn’t been tampered with.
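
If a layer consumer wants to confirm which signing profile produced a shared layer version before adding that profile to a CSC, the signing metadata is available from the Lambda API. The following is a minimal sketch using the AWS SDK for Python (Boto3); the layer version ARN is a placeholder for the ARN shared by the publisher.

import boto3

lambda_client = boto3.client("lambda")

# Placeholder: the layer version ARN shared by the publisher.
LAYER_VERSION_ARN = "arn:aws:lambda:us-east-1:111111111111:layer:signed-layer:1"

# GetLayerVersionByArn returns the signing metadata recorded when the layer was published.
response = lambda_client.get_layer_version_by_arn(Arn=LAYER_VERSION_ARN)
content = response["Content"]

print("Signing profile version ARN:", content.get("SigningProfileVersionArn"))
print("Signing job ARN:", content.get("SigningJobArn"))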

Conclusion

You’ve learned some best practices and patterns for using code signing for AWS Lambda. You know how code signing fits into the secure SDLC, and what value you get from each of the code signing checks. You also learned two patterns for using code signing for distributed ownership: one for centralized signing and one for cross-account layer validation. Whatever your role, whether you are a developer, part of a central security team, or a layer publisher, you can use these tools to help enforce the integrity of code artifacts in your organization.

You can learn more about Lambda code signing in Configure code signing for AWS Lambda.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Lambda forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Cassia Martin

Cassia is a Security Solutions Architect in New York City. She works with large financial institutions to solve security architecture problems and to teach them cloud tools and patterns. Cassia has worked in security for over 10 years, and she has a strong background in application security.

Optimizing Lambda functions packaged as container images

Post Syndicated from Rob Sutter original https://aws.amazon.com/blogs/compute/optimizing-lambda-functions-packaged-as-container-images/

AWS Lambda launched support for packaging and deploying functions as container images at re:Invent 2020. In this post you learn how to build container images that reduce image size as well as build, deployment, and update time. Lambda container images have unique characteristics to consider for optimization. This means that the techniques you use to optimize container images for Lambda functions are slightly different from those you use for other environments.

To understand how to optimize container images, it helps to understand how container images are packaged, as well as how the Lambda service retrieves, caches, deploys, and retires container images.

Pre-requisites and assumptions

This post assumes you have access to an IAM user or role in an AWS account and a version of the tar utility on your machine. You must also install Docker and the AWS SAM CLI and start Docker.

Lambda container image packaging

Lambda container images are packaged according to the Open Container Initiative (OCI) Image Format specification. The specification defines how programs build and package individual layers into a single container image. To explore an example of the OCI Image Format, open a terminal and perform the following steps:

  1. Create an AWS SAM application.
    sam init --name container-images
  2. Choose 1 to select an AWS quick start template, then choose 2 to select container image as the packaging format, and finally choose 9 to use the amazon-go1.x-base image.
    Image showing the suggested choices for a sam init command
  3. After the AWS SAM CLI generates the application, enter the following commands to change into the new directory and build the Lambda container image
    cd container-images
    sam build
  4. AWS SAM builds your function and packages it as helloworldfunction:go1.x-v1. Export this container image to a tar archive and extract the filesystem into a new directory to explore the image format.
    docker save helloworldfunction:go1.x-v1 > oci-image.tar
    mkdir -p image
    tar xf oci-image.tar -C image

The image directory contains several subdirectories, a container metadata JSON file, a manifest JSON file, and a repositories JSON file. Each subdirectory represents a single layer, and contains a version file, its own metadata JSON file, and a tar archive of the files that make up the layer.

Image of the result of running the tree command in a terminal window

The manifest.json file contains a single JSON object with the name of the container metadata file, a list of repository tags, and a list of included layers. The list of included layers is ordered according to the build order in your Dockerfile. The metadata JSON file in each subfolder also contains a mapping from each layer to its parent layer or final container.
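
If you want to inspect the layer ordering yourself, you can read the manifest directly from the extracted image directory. The following is a minimal sketch in Python, assuming the image was exported and extracted into the image directory as shown in the preceding steps.

import json
from pathlib import Path

# Assumes the OCI image was extracted into ./image as in the previous steps.
manifest_path = Path("image") / "manifest.json"
manifest = json.loads(manifest_path.read_text())[0]  # docker save writes a one-element list

print("Container metadata file:", manifest["Config"])
print("Repository tags:", manifest.get("RepoTags"))

# Layers are listed in build order, from the base image up to your function code.
for index, layer in enumerate(manifest["Layers"], start=1):
    print(f"Layer {index}: {layer}")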

Your function should have layers similar to the following. A separate layer is created any time files are added to the container image. This includes FROM, RUN, ADD, and COPY statements in your Dockerfile and base image Dockerfiles. Note that the specific layer IDs, layer sizes, number, and composition of layers may change over time.

ID          Size     Description               Your function’s Dockerfile step
5fc256be…   641 MB   Amazon Linux
c73e7f67…   320 KB   Third-party licenses
de5f5100…   12 KB    Lambda entrypoint script
2bd3c722…   7.8 MB   AWS Lambda RIE
5d9d381b…   10.0 MB  AWS Lambda runtime
cb832ffc…   12 KB    Bootstrap link
1fcc74e8…   560 KB   Lambda runtime library    FROM public.ecr.aws/lambda/go:1
acb8da11…   9.6 MB   Function code             COPY --from=build-image /go/bin/ /var/task/

Runtimes generate a filesystem image by destructively overlaying each image layer over its parent. This means that any changes to one layer require all child layers to be recreated. In the following example, if you change the layer cb832ffc… then the layers 1fcc74e8… and acb8da11… are also considered “dirty” and must be recreated from the new parent image. This results in a new container image with eight layers, the first five the same as the original image, and the last three newly built, each with new IDs and parents.

Representation of a container image with eight layers, one of which is updated requiring two additional child layers to be updated also.

The layered structure of container images informs several decisions you make when optimizing your container images.

Strategies for optimizing container images

There are four main strategies for optimizing your container images. First, wherever possible, use the AWS-provided base images as a starting point for your container images. Second, use multi-stage builds to avoid adding unnecessary layers and files to your final image. Third, order the operations in your Dockerfile from most stable to most frequently changing. Fourth, if your application uses one or more large layers across all of your functions, store all of your functions in a single repository.

Use AWS-provided base images

If you have experience packaging traditional applications for container runtimes, using AWS-provided base images may seem counterintuitive. The AWS-provided base images are typically larger than other minimal container base images. For example, the AWS-provided base image for the Go runtime public.ecr.aws/lambda/go:1 is 670 MB, while alpine:latest, a popular starting point for building minimal container images, is only 5.58 MB. However, using the AWS-provided base images offers three advantages.

First, the AWS-provided base images are cached proactively by the Lambda service. This means that the base image is either nearby in another upstream cache or already in the worker instance cache. Despite being much larger, the deployment time may still be shorter when compared to third-party base images, which may not be cached. For additional details on how the Lambda service caches container images, see the re:Invent 2021 talk Deep dive into AWS Lambda security: Function isolation.

Second, the AWS-provided base images are stable. As the base image is at the bottom layer of the container image, any changes require every other layer to be rebuilt and redeployed. Fewer changes to your base image mean fewer rebuilds and redeployments, which can reduce build cost.

Finally, the AWS-provided base images are built on Amazon Linux and Amazon Linux 2. Depending on your chosen runtime, they may already contain a number of utilities and libraries that your functions need. This means that you do not need to add them later, avoiding additional layers, extra build steps, and the increased costs that come with them.

Use multi-stage builds

Multi-stage builds allow you to build your code in larger preliminary images, copy only the artifacts you need into your final container image, and discard the preliminary build steps. This means you can run any arbitrarily large number of commands and add or copy files into the intermediate image, but still only create one additional layer in your container image for the artifact. This reduces both the final size and the attack surface of your container image by excluding build-time dependencies from your runtime image.

AWS SAM CLI generates Dockerfiles that use multi-stage builds.

FROM golang:1.14 as build-image
WORKDIR /go/src
COPY go.mod main.go ./
RUN go build -o ../bin

FROM public.ecr.aws/lambda/go:1
COPY --from=build-image /go/bin/ /var/task/

# Command can be overwritten by providing a different command in the template directly.
CMD ["hello-world"]

This Dockerfile defines a two-stage build. First, it pulls the golang:1.14 container image and names it build-image. Naming intermediate stages is optional, but it makes it easier to refer to previous stages when packaging your final container image. Note that the golang:1.14 image is 810 MB, is not likely to be cached by the Lambda service, and contains a number of build tools that you should not include in your production images. The build-image stage then builds your function and saves it in /go/bin.

The second and final stage begins from the public.ecr.aws/lambda/go:1 base image. This image is 670 MB, but because it is an AWS-provided image, it is more likely to be cached on worker instances. The COPY command copies the contents of /go/bin from the build-image stage into /var/task in the container image, and discards the intermediate stage.

Build from stable to frequently changing

Any time a layer in an image changes, all layers that follow must be rebuilt, repackaged, redeployed, and recached by the Lambda service. In practice, this means that you should make your most frequently occurring changes as late in your Dockerfile as possible.

For example, if you have a stable Lambda function that uses a frequently updated machine learning model to make predictions, add your function to the container image before adding the machine learning model. However, if you have a function that changes frequently but relies on a stable Lambda extension, copy the extension into the image first.

If you put the frequently changing component early in your Dockerfile, all the build steps that follow must be re-run every time that component changes. If one of those actions is costly, for example, compiling a large library or running a complex simulation, these repetitions add unnecessary time and cost to your deployment pipeline.

Use a single repository for functions with large layers

When you create an application with multiple Lambda functions, you either store the container images in a single Amazon ECR repository or in multiple repositories, one for each function. If your application uses one or more large layers across all of your functions, store all of your functions in a single repository.

ECR repositories compare each layer of a container image when it is pushed to avoid uploading and storing duplicates. If each function in your application uses the same large layer, such as a custom runtime or machine learning model, that layer is stored exactly once in a shared repository. If you use a separate repository for each function, that layer is duplicated across repositories and must be uploaded separately to each one. This costs you time and network bandwidth.
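
To see this deduplication in practice, you can check whether a layer digest already exists in a repository before pushing. The following is a minimal sketch using the AWS SDK for Python (Boto3); the repository name and layer digest are placeholders that you would take from your own image manifest.

import boto3

ecr = boto3.client("ecr")

# Placeholder values: use your repository name and a layer digest from your image manifest.
REPOSITORY_NAME = "my-lambda-functions"
LAYER_DIGEST = "sha256:0000000000000000000000000000000000000000000000000000000000000000"

# BatchCheckLayerAvailability reports whether the layer blob is already stored in the repository.
response = ecr.batch_check_layer_availability(
    repositoryName=REPOSITORY_NAME,
    layerDigests=[LAYER_DIGEST],
)

for layer in response.get("layers", []):
    print(layer["layerDigest"], "->", layer["layerAvailability"])  # AVAILABLE or UNAVAILABLE
for failure in response.get("failures", []):
    print("Check failed:", failure)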

Conclusion

Packaging your Lambda functions as container images enables you to use familiar tooling and take advantage of larger deployment limits. In this post you learn how to build container images that reduce image size as well as build, deployment, and update time. You learn some of the unique characteristics of Lambda container images that impact optimization. Finally, you learn how to think differently about image optimization for Lambda functions when compared to packaging traditional applications for container runtimes.

For more information on how to build serverless applications, including source code, blogs, videos, and more, visit the Serverless Land website.

Operating Lambda: Understanding event-driven architecture – Part 1

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/operating-lambda-understanding-event-driven-architecture-part-1/

In the Operating Lambda series, I cover important topics for developers, architects, and systems administrators who are managing AWS Lambda-based applications. This three-part series discusses event-driven architectures and how these relate to serverless applications.

Part 1 covers the benefits of the event-driven paradigm and how it can improve throughput, scale and extensibility, while also reducing complexity and the overall amount of code in an application.

Event-driven architectures have grown in popularity because they help address some of the inherent challenges in building the complex systems commonly used in modern organizations. This approach promotes the use of microservices, which are small, specialized services performing a narrow set of functions. A well-designed, Lambda-based application is compatible with the principles of microservice architectures.

How Lambda fits into the event-driven paradigm

Lambda is an on-demand compute service that runs custom code in response to events. Most AWS services generate events, and many can act as an event source for Lambda. Within Lambda, your code is stored in a code deployment package and contains an event handler. All interaction with the code occurs through the Lambda API and there is no direct invocation of functions from outside of the service. The main purpose of Lambda functions is to process events.

Lambda API triggers function code

Unlike traditional servers, Lambda functions do not run constantly. When a function is triggered by an event, this is called an invocation. Lambda functions are purposefully limited to 15 minutes in duration, but across all AWS customers most invocations last for less than a second. In some intensive compute operations, it may take several minutes to process a single event, but in the majority of cases the duration is brief.

An event triggering a Lambda function could be almost anything: an HTTP request via Amazon API Gateway, a schedule managed by an Amazon EventBridge rule, or an Amazon S3 notification. Even the simplest Lambda-based application uses at least one event.

Different Lambda event sources

The event itself is a JSON object that contains information about what happened. Events are facts about a change in the system state, they are immutable, and the time when they happen is significant. The first parameter of every Lambda handler contains the event. An event could be custom-generated from another microservice, such as a new order generated in an ecommerce application:

Defining a console test event

The event may also be generated by an AWS service, such as Amazon SQS when a new message is available in a queue:

SQS test event

In either case, the event is passed to the Lambda function as the first parameter in the Lambda handler:

INIT code and event handler

  1. The code outside of the handler, also known as “INIT” code, is run before the handler. This is used for tasks like importing libraries or declaring and initializing global objects.
  2. The handler itself is a function that takes the event object. Regardless of the runtime used in the Lambda function, the event is a JSON object (see the sketch after this list).
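
To make the distinction concrete, here is a minimal Python sketch of a handler. The S3 client and the field names in the event are illustrative only; the key point is that the INIT code runs once per execution environment, while the handler runs for every invocation.

import json

import boto3

# INIT code: runs once when the execution environment is created,
# so clients and configuration can be reused across invocations.
s3_client = boto3.client("s3")
CONFIG = {"greeting": "Hello"}


def lambda_handler(event, context):
    # The event is passed to the handler as a JSON-derived object (a dict in Python).
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps(f"{CONFIG['greeting']}, {name}!"),
    }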

For smaller applications, the difference between event-driven and request-driven applications may not be clear. As your applications develop more functionality and handle more traffic, this becomes more apparent. Request-driven applications typically use directed commands to coordinate downstream functions to complete an activity and are often tightly coupled. Event-driven applications create events that are observable by other services and systems, but the event producer is unaware of which consumers, if any, are listening. Typically, these are loosely coupled.

Most Lambda-based applications use a combination of AWS services for durably storing data and integrating with other systems and services. In these applications, Lambda acts as the glue between the services, providing business logic to transform data as it moves between services.

Grouping AWS serverless services into layers

Building Lambda-based applications follows many of the best practices of building any event-based architecture. A number of development approaches have emerged to help developers create event-driven systems. Event storming, which is an interactive approach to domain-driven design (DDD), is one popular methodology. As you explore the events in your workload, you can group these as bounded contexts to develop the boundaries of the microservices in your application.

To learn more about event-driven architectures, read “What is an Event-Driven Architecture?” and “What do you mean by Event-Driven?”

The benefits of event-driven architectures

Replacing polling and webhooks with events

Many traditional architectures frequently use polling and webhook mechanisms to communicate state between different components. Polling can be highly inefficient for fetching updates since there is a lag between new data becoming available and synchronization with downstream services. Webhooks are not always supported by other microservices that you want to integrate with. They may also require custom authorization and authentication configurations. In both cases, these integration methods are challenging to scale on-demand without additional work by development teams.

Polling and webhooks

Both of these mechanisms can be replaced by events, which can be filtered, routed, and pushed downstream to consuming microservices. This approach can result in lower bandwidth consumption, lower CPU utilization, and potentially lower cost. These architectures can reduce complexity, since each functional unit is smaller and there is often less code.

Event communication

Event-driven architectures can also make it easier to design near-real-time systems, helping organizations move away from batch-based processing. Events are generated at the time when state in the application changes, so the custom code of a microservice should be designed to handle the processing of a single event. Since scaling is handled by the Lambda service, this architecture can handle significant increases in traffic without changing custom code. As events scale up, so does the compute layer that processes events.

Reducing complexity

Microservices enable developers and architects to decompose complex workflows. For example, an ecommerce monolith may be broken down into order acceptance and payment processes with separate inventory, fulfillment and accounting services. What may be complex to manage and orchestrate in a monolith becomes a series of decoupled services that communicate asynchronously with event messages.

Ecommerce microservices example

This approach also makes it possible to assemble services that process data at different rates. In this case, an order acceptance microservice can store high volumes of incoming orders by buffering the messages in an Amazon SQS queue.

A payment processing service, which is typically slower due to the complexity of handling payments, can take a steady stream of messages from the SQS queue. It can orchestrate complex retry and error handling logic using AWS Step Functions, and coordinate active payment workflows for hundreds of thousands of orders.

Improving scalability and extensibility

Microservices generate events that are typically published to messaging services like Amazon SNS and SQS. These behave like an elastic buffer between microservices and help handle scaling when traffic increases. Services like EventBridge can then filter and route messages depending upon the content of the event, as defined in rules. As a result, event-based applications can be more scalable and offer greater redundancy than monolithic applications.

This system is also highly extensible, allowing other teams to extend features and add functionality without impacting the order processing and payment processing microservices. By publishing events using EventBridge, this application integrates with existing systems, such as the inventory microservice, but also enables any future application to integrate as an event consumer. Producers of events have no knowledge of event consumers, which can help simplify the microservice logic.
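
As an illustration of how a producer publishes an event without knowing its consumers, the following is a minimal sketch using the AWS SDK for Python (Boto3). The event bus name, source, and detail fields are placeholders for this ecommerce example.

import json

import boto3

events = boto3.client("events")

# Publish a placeholder order event; consumers subscribe independently via EventBridge rules.
response = events.put_events(
    Entries=[
        {
            "EventBusName": "orders-bus",        # assumed custom event bus
            "Source": "com.example.orders",      # illustrative source name
            "DetailType": "OrderCreated",
            "Detail": json.dumps({"orderId": "12345", "total": 25.00}),
        }
    ]
)

print("Failed entries:", response["FailedEntryCount"])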

To learn more, read “How event-driven architecture solves modern web app problems” and “How to Use Amazon EventBridge to Build Decoupled, Event-Driven Architectures”.

Trade-offs of event-driven architectures

Variable latency

Unlike monolithic applications, which may process everything within the same memory space on a single device, event-driven applications communicate across networks. This design introduces variable latency. While it’s possible to engineer applications to minimize latency, monolithic applications can almost always be optimized for lower latency at the expense of scalability and availability.

The serverless services in AWS are highly available, meaning that they operate in more than one Availability Zone in a Region. In the event of a service disruption, services automatically fail over to alternative Availability Zones and retry transactions. As a result, instead of a transaction failing, it may be completed successfully but with higher latency.

Workloads that require consistent low-latency performance, such as high-frequency trading applications in banks or submillisecond robotics automation in warehouses, are not good candidates for event-driven architecture.

Eventual consistency

An event represents a change in state. With many events flowing through different services in an architecture at any given point of time, such workloads are often eventually consistent. This makes it more complex to process transactions, handle duplicates, or determine the exact overall state of a system.

Some workloads are not well suited for event-driven architecture, due to the need for ACID properties. However, many workloads contain a combination of requirements that are eventually consistent (for example, total orders in the current hour) or strongly consistent (for example, current inventory). For those features needing strong data consistency, there are architecture patterns to support this.

Event-based architectures are designed around individual events instead of large batches of data. Generally, workflows are designed to manage the steps of an individual event or execution flow instead of operating on multiple events simultaneously. Real-time event processing is preferred to batch processing in event-driven systems, replacing a batch with many small incremental updates. While this can make workloads more available and scalable, it also makes it more challenging for events to have awareness of other events.

Returning values to callers

In many cases, event-based applications are asynchronous, meaning that caller services do not wait for responses from other services before continuing with other work. This is a fundamental characteristic of event-driven architectures that enables scalability and flexibility. However, it also means that passing return values or the results of a workflow back to the caller is often more complex than in synchronous execution flows.

Most Lambda invocations in production systems are asynchronous, responding to events from services like S3 or SQS. In these cases, the success or failure of processing an event is often more important than returning a value. Features such as dead letter queues (DLQs) in Lambda are provided to ensure you can identify and retry failed events, without needing to notify the caller.
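
For example, you can attach a DLQ for asynchronous invocations to the function itself. The following is a minimal sketch using the AWS SDK for Python (Boto3); the function name and queue ARN are placeholders, and the function's execution role needs permission to send messages to the queue.

import boto3

lambda_client = boto3.client("lambda")

# Placeholder values: an existing function and an SQS queue to receive failed async events.
FUNCTION_NAME = "order-processor"
DLQ_ARN = "arn:aws:sqs:us-east-1:123456789012:order-processor-dlq"

# Attach a dead letter queue so failed asynchronous invocations can be inspected and retried.
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    DeadLetterConfig={"TargetArn": DLQ_ARN},
)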

For interactive workloads, such as web and mobile applications, the end user usually expects to receive a return value or a current status of a transaction. For these workloads, there are several design patterns that can provide rich eventing back to the caller. However, these implementations are more complex than using a traditional asynchronous return value.

Debugging across services and functions

Debugging event-driven systems is also different from solving problems with a monolithic application. With different systems and services passing events, it is often not possible to record and reproduce the exact state of multiple services when an error occurs. Since each service and function invocation has separate log files, it can be more complicated to determine what happened to a specific event that caused an error.

To learn more, read “Challenges with distributed systems” and “Implementing Microservices on AWS”.

Conclusion

Event-driven architectures have grown in popularity in modern organizations. This approach promotes the use of microservices, which can be designed as Lambda-based applications. This post discusses the benefits of the event-driven approach, along with the trade-offs involved.

Part 2 of this series will discuss design principles and the best practices for developing Lambda-based applications.

Discovering sensitive data in AWS CodeCommit with AWS Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/discovering-sensitive-data-in-aws-codecommit-with-aws-lambda-2/

This post is courtesy of Markus Ziller, Solutions Architect.

Today, git is a de facto standard for version control in modern software engineering. The workflows enabled by git’s branching capabilities are a major reason for this. However, with git’s distributed nature, it can be difficult to reliably remove changes that have been committed from all copies of the repository. This is problematic when secrets such as API keys have been accidentally committed into version control. The longer it takes to identify and remove secrets from git, the more likely that the secret has been checked out by another user.

This post shows a solution that automatically identifies credentials pushed to AWS CodeCommit in near-real-time. I also show three remediation measures that you can use to reduce the impact of secrets pushed into CodeCommit:

  • Notify users about the leaked credentials.
  • Lock the repository for non-admins.
  • Hard reset the CodeCommit repository to a healthy state.

I use the AWS Cloud Development Kit (CDK). This is an open source software development framework to model and provision cloud application resources. Using the CDK can reduce the complexity and amount of code needed to automate the deployment of resources.

Overview of solution

The services in this solution are AWS Lambda, AWS CodeCommit, Amazon EventBridge, and Amazon SNS. These services are part of the AWS serverless platform. They help reduce undifferentiated work around managing servers, infrastructure, and the parts of the application that add less value to your customers. With serverless, the solution scales automatically, has built-in high availability, and you only pay for the resources you use.

Solution architecture

This diagram outlines the workflow implemented in this blog:

  1. After a developer pushes changes to CodeCommit, it emits an event to an event bus.
  2. A rule defined on the event bus routes this event to a Lambda function.
  3. The Lambda function uses the AWS SDK for JavaScript to get the changes introduced by commits pushed to the repository.
  4. It analyzes the changes for secrets. If secrets are found, it publishes another event to the event bus.
  5. Rules associated with this event type then trigger invocations of three Lambda functions A, B, and C with information about the problematic changes.
  6. Each of the Lambda functions runs a remediation measure:
    • Function A sends out a notification to an SNS topic that informs users about the situation (A1).
    • Function B locks the repository by setting a tag with the AWS SDK (B2). It sends out a notification about this action (B1).
    • Function C runs git commands that remove the problematic commit from the CodeCommit repository (C2). It also sends out a notification (C1).

Walkthrough

The following walkthrough explains the required components, their interactions and how the provisioning can be automated via CDK.

For this walkthrough, you need:

Check out and deploy the sample stack:

  1. After completing the prerequisites, clone the associated GitHub repository by running the following command in a local directory:
    git clone git@github.com:aws-samples/discover-sensitive-data-in-aws-codecommit-with-aws-lambda.git
  2. Open the repository in a local editor and review the contents of cdk/lib/resources.ts, src/handlers/commits.ts, and src/handlers/remediations.ts.
  3. Follow the instructions in the README.md to deploy the stack.

The CDK will deploy resources for the following services in your account.

Using CodeCommit to manage your git repositories

The CDK creates a new empty repository called TestRepository and adds a tag RepoState with an initial value of ok. You later use this tag in the LockRepo remediation strategy to restrict access.

It also creates two IAM groups with one user in each. Members of the CodeCommitSuperUsers group are always able to access the repository, while members of the CodeCommitUsers group can only access the repository when the value of the tag RepoState is not locked.

I also import the CodeCommitSystemUser into the CDK. Since the user requires git credentials in a downloaded CSV file, it cannot be created by the CDK. Instead it must be created as described in the README file.

The following CDK code sets up all the described resources:

const TAG_NAME = "RepoState";

const superUsers = new Group(this, "CodeCommitSuperUsers", { groupName: "CodeCommitSuperUsers" });
superUsers.addUser(new User(this, "CodeCommitSuperUserA", {
    password: new Secret(this, "CodeCommitSuperUserPassword").secretValue,
    userName: "CodeCommitSuperUserA"
}));

const users = new Group(this, "CodeCommitUsers", { groupName: "CodeCommitUsers" });
users.addUser(new User(this, "User", {
    password: new Secret(this, "CodeCommitUserPassword").secretValue,
    userName: "CodeCommitUserA"
}));

const systemUser = User.fromUserName(this, "CodeCommitSystemUser", props.codeCommitSystemUserName);

const repo = new Repository(this, "Repository", {
    repositoryName: "TestRepository",
    description: "The repository to test this project out",
});
Tags.of(repo).add(TAG_NAME, "ok");

users.addToPolicy(new PolicyStatement({
    effect: Effect.ALLOW,
    actions: ["*"],
    resources: [repo.repositoryArn],
    conditions: {
        StringNotEquals: {
            [`aws:ResourceTag/${TAG_NAME}`]: "locked"
        }
    }
}));

superUsers.addToPolicy(new PolicyStatement({
    effect: Effect.ALLOW,
    actions: ["*"],
    resources: [repo.repositoryArn]
}));

Using EventBridge to pass events between components

I use EventBridge, a serverless event bus, to connect the Lambda functions together. Many AWS services like CodeCommit are natively integrated into EventBridge and publish events about changes in their environment.

repo.onCommit is a higher-level CDK construct. It creates the required resources to invoke a Lambda function for every commit to a given repository. The created events rule looks like this:

EventBridge rule definition

Note that this event rule only matches commit events in TestRepository. To send commits of all repositories in that account to the inspecting Lambda function, remove the resources filter in the event pattern.

CodeCommit Repository State Change is a default event that is published by CodeCommit if changes are made to a repository. In addition, I define CodeCommit Security Event, a custom event, which Lambda publishes to the same event bus if secrets are discovered in the inspected code.

The sample below shows how you can set up Lambda functions as targets for both types of events.

const DETAIL_TYPE = "CodeCommit Security Event";
const eventBus = new EventBus(this, "CodeCommitEventBus", {
    eventBusName: "CodeCommitSecurityEvents"
});

repo.onCommit("AnyCommitEvent", {
    ruleName: "CallLambdaOnAnyCodeCommitEvent",
    target: new targets.LambdaFunction(commitInspectLambda)
});


new Rule(this, "CodeCommitSecurityEvent", {
    eventBus,
    enabled: true,
    ruleName: "CodeCommitSecurityEventRule",
    eventPattern: {
        detailType: [DETAIL_TYPE]
    },
    targets: [
        new targets.LambdaFunction(lockRepositoryLambda),
        new targets.LambdaFunction(raiseAlertLambda),
        new targets.LambdaFunction(forcefulRevertLambda)
    ]
});

Using Lambda functions to run remediation measures

AWS Lambda functions allow you to run code in response to events. The example defines four Lambda functions.

By comparing each commit with its predecessor, the commitInspectLambda function analyzes whether a commit introduces secrets. With the CDK, you can create a Lambda function with:

const myLambdaInCDK = new Function(this, "UniqueIdentifierRequiredByCDK", {
    runtime: Runtime.NODEJS_12_X,
    handler: "<handlerfile>.<function name>",
    code: Code.fromAsset(path.join(__dirname, "..", "..", "src", "handlers")),
    // See git repository for complete code
});

The code for this Lambda function uses the AWS SDK for JavaScript to fetch the details of the commit, the differences introduced, and the new content.

The code checks each modified file line by line with a regular expression that matches typical secret formats. In src/handlers/regex.json, I provide a few regular expressions that match common secrets. You can extend this with your own patterns.
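
The sample project implements this scan with the AWS SDK for JavaScript; the following is an illustrative Python sketch of the same idea, using a single assumed pattern rather than the full set in src/handlers/regex.json.

import re

# Illustrative pattern only: matches strings that look like AWS secret access key assignments.
SECRET_PATTERNS = [
    re.compile(r"(?i)aws_secret_access_key\s*[=:]\s*['\"]?[A-Za-z0-9/+=]{40}"),
]


def find_secrets(file_name, content):
    """Return (file, line number, line) tuples for lines matching any secret pattern."""
    findings = []
    for line_number, line in enumerate(content.splitlines(), start=1):
        if any(pattern.search(line) for pattern in SECRET_PATTERNS):
            findings.append((file_name, line_number, line.strip()))
    return findings


# Example usage with an in-memory file diff:
sample = "aws_secret_access_key = 'abcdABCD1234abcdABCD1234abcdABCD1234abcd'"
print(find_secrets("problematic_file.txt", sample))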

If a secret is discovered, a CodeCommit Security Event is published to the event bus. EventBridge then invokes all Lambda functions that are registered as targets with this event. This demo triggers three remediation measures.

The raiseAlertLambda function uses the AWS SDK for JavaScript to send out a notification to all subscribers (that is, CodeCommit administrators) on an SNS topic. It takes no further action.

SNS.publish({
    TopicArn: <TOPIC_ARN>,
    Subject: `[ACTION REQUIRED] Secrets discovered in <repo>`,
    Message: `<Your message>`
})

Notification about secrets discovered in a commit in TestRepository

The lockRepositoryLambda function uses the AWS SDK for JavaScript to change the RepoState tag from ok to locked. This restricts access to members of the CodeCommitSuperUsers IAM group.

CodeCommit.tagResource({
    resourceArn: event.detail.repositoryArn,
    tags: {
        RepoState: "locked"
    }
})

In addition, the Lambda function uses SNS to send out a notification. The forcefulRevertLambda function runs the following git commands:

git clone <repository>
git checkout <branch>
git reset --hard <previousCommitId>
git push origin <branch> --force

These commands reset the repository to the last accepted commit, by forcefully removing the respective commit from the git history of your CodeCommit repo. I advise you to handle this with care and only activate it on a real project if you fully understand the consequences of rewriting git history.

The Node.js v12 runtime for Lambda does not have a git runtime installed by default. You can add one by using the git-lambda2 Lambda layer. This allows you to run git commands from within the Lambda function.

Logs for the remediation measure Hard Reset

Finally, this Lambda function also sends out a notification. The complete code is available in the GitHub repo.

Using SNS to notify users

To notify users about secrets discovered and actions taken, you create an SNS topic and subscribe to it via email.

const topic = new Topic(this, "CodeCommitSecurityEventNotification", {
    displayName: "CodeCommitSecurityEventNotification",
});

topic.addSubscription(new subs.EmailSubscription(/* your email address */));

Testing the solution

You can test the deployed solution by running these two sets of commands. First, add a file with no credentials:

echo "Clean file - no credentials here" > clean_file.txt
git add clean_file.txt
git commit clean_file.txt -m "Adds clean_file.txt"
git push

Then add a file containing credentials:

SECRET_LIKE_STRING=$(cat /dev/urandom | env LC_CTYPE=C tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
echo "secret=$SECRET_LIKE_STRING" > problematic_file.txt
git add problematic_file.txt
git commit problematic_file.txt -m "Adds secret-like string to problematic_file.txt"
git push

The first set of commands creates, commits, and pushes an unproblematic file, clean_file.txt, that passes the checks of commitInspectLambda. The second set creates, commits, and pushes problematic_file.txt, which matches the regular expressions and triggers the remediation measures.

If you check your email, you soon receive notifications about actions taken by the Lambda functions.

Cleaning up

To avoid incurring charges, delete the resources by running cdk destroy and confirming the deletion.

Conclusion

This post demonstrates how you can implement a solution to discover secrets in commits to AWS CodeCommit repositories. It also defines different strategies to remediate this.

The CDK code to set up all components is minimal and can be extended for remediation measures. The template is portable between Regions and uses serverless technologies to minimize cost and complexity.

For more serverless learning resources, visit Serverless Land.

ICYMI: Serverless Q4 2020

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/icymi-serverless-q4-2020/

Welcome to the 12th edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

ICYMI Q4 calendar

In case you missed our last ICYMI, check out what happened last quarter here.

AWS re:Invent

re:Invent 2020 banner

re:Invent was entirely virtual in 2020 and free to all attendees. The conference had a record number of registrants and featured over 700 sessions. The serverless developer advocacy team presented a number of talks to help developers build their skills. These are now available on-demand:

AWS Lambda

There were three major Lambda announcements at re:Invent. Lambda duration billing changed granularity from 100 ms to 1 ms, which is shown in the December billing statement. All functions benefit from this change automatically, and it’s especially beneficial for sub-100ms Lambda functions.

Lambda has also increased the maximum memory available to 10 GB. Since memory also controls CPU allocation in Lambda, this means that functions now have up to 6 vCPU cores available for processing. Finally, Lambda now supports container images as a packaging format, enabling teams to use familiar container tooling, such as Docker CLI. Container images are stored in Amazon ECR.

There were three feature releases that make it easier for developers working on data processing workloads. Lambda now supports self-hosted Kafka as an event source, allowing you to source events from on-premises or instance-based Kafka clusters. You can also process streaming analytics with tumbling windows and use custom checkpoints for processing batches with failed messages.

We launched Lambda Extensions in preview, enabling you to more easily integrate monitoring, security, and governance tools into Lambda functions. You can also build your own extensions that run code during Lambda lifecycle events. See this example extensions repo for starting development.

You can now send logs from Lambda functions to custom destinations by using Lambda Extensions and the new Lambda Logs API. Previously, you could only forward logs after they were written to Amazon CloudWatch Logs. Now, logging tools can receive log streams directly from the Lambda execution environment. This makes it easier to use your preferred tools for log management and analysis, including Datadog, Lumigo, New Relic, Coralogix, Honeycomb, or Sumo Logic.

Lambda Logs API architecture

Lambda launched support for Amazon MQ as an event source. Amazon MQ is a managed broker service for Apache ActiveMQ that simplifies deploying and scaling queues. The event source operates in a similar way to using Amazon SQS or Amazon Kinesis. In all cases, the Lambda service manages an internal poller to invoke the target Lambda function.

Lambda announced support for AWS PrivateLink. This allows you to invoke Lambda functions from a VPC without traversing the public internet. It provides private connectivity between your VPCs and AWS services. By using VPC endpoints to access the Lambda API from your VPC, this can replace the need for an Internet Gateway or NAT Gateway.

For developers building machine learning inferencing, media processing, high performance computing (HPC), scientific simulations, and financial modeling in Lambda, you can now use AVX2 support to help reduce duration and lower cost. In this blog post’s example, enabling AVX2 for an image-processing function increased performance by 32-43%.

Lambda now supports batch windows of up to 5 minutes when using SQS as an event source. This is useful for workloads that are not time-sensitive, allowing developers to reduce the number of Lambda invocations from queues. Additionally, the batch size has been increased from 10 to 10,000. This is now the same batch size as Kinesis as an event source, helping Lambda-based applications process more data per invocation.
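
As a sketch of what this configuration looks like with the AWS SDK for Python (Boto3), the following sets the maximum batch size and a five-minute batching window for an SQS event source. The queue ARN and function name are placeholders.

import boto3

lambda_client = boto3.client("lambda")

# Placeholder values for an existing queue and function.
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:orders-queue"
FUNCTION_NAME = "order-processor"

# Batch up to 10,000 messages or wait up to 5 minutes (300 seconds), whichever comes first.
lambda_client.create_event_source_mapping(
    EventSourceArn=QUEUE_ARN,
    FunctionName=FUNCTION_NAME,
    BatchSize=10000,
    MaximumBatchingWindowInSeconds=300,
)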

Code signing is now available for Lambda, using AWS Signer. This allows account administrators to ensure that Lambda functions only accept signed code for deployment. You can learn more about using this new feature in the developer documentation.

AWS Step Functions

Synchronous Express Workflows have been launched for AWS Step Functions, providing a new way to run high-throughput Express Workflows. This feature allows developers to receive workflow responses without needing to poll services or build custom solutions. This is useful for high-volume microservice orchestration and fast compute tasks communicating via HTTPS.

The Step Functions service recently added support for other AWS services in workflows. You can now integrate API Gateway REST and HTTP APIs. This enables you to call API Gateway directly from a state machine as an asynchronous service integration.

Step Functions now also supports Amazon EKS service integration. This allows you to build workflows with steps that synchronously launch tasks in EKS and wait for a response. The service also announced support for Amazon Athena, so workflows can now query data in your S3 data lakes.

Amazon API Gateway

API Gateway now supports mutual TLS authentication, which is commonly used for business-to-business applications and standards such as Open Banking. This is provided at no additional cost. You can now also disable the default REST API endpoint when deploying APIs using custom domain names.

HTTP APIs now supports service integrations with Step Functions Synchronous Express Workflows. This is a result of the service team’s work to add the most popular features of REST APIs to HTTP APIs.

AWS X-Ray

X-Ray now integrates with Amazon S3 to trace upstream requests. If a Lambda function uses the X-Ray SDK, S3 sends tracing headers to downstream event subscribers. This allows you to use the X-Ray service map to view connections between S3 and other services used to process an application request.

X-Ray announced support for end-to-end tracing in Step Functions to make it easier to trace requests across multiple AWS services. It also launched X-Ray Insights in preview, which generates actionable insights based on anomalies detected in an application. For Java developers, the services released an auto-instrumentation agent, for collecting instrumentation without modifying existing code.

Additionally, the AWS Distro for Open Telemetry is now in preview. OpenTelemetry is a collaborative effort by tracing solution providers to create common approaches to instrumentation.

Amazon EventBridge

You can now use event replay to archive and replay events with Amazon EventBridge. After configuring an archive, EventBridge automatically stores all events or filtered events, based upon event pattern matching logic. Event replay can help with testing new features or changes in your code, or hydrating development or test environments.

EventBridge archive and replay

EventBridge also launched resource policies that simplify managing access to events across multiple AWS accounts. Resource policies provide a powerful mechanism for modeling event buses across multiple accounts and providing fine-grained access control to EventBridge API actions.

EventBridge resource policies

EventBridge announced support for Server-Side Encryption (SSE). Events are encrypted using AES-256 at no additional cost for customers. EventBridge also increased PutEvent quotas to 10,000 transactions per second in US East (N. Virginia), US West (Oregon), and Europe (Ireland). This helps support workloads with high throughput.

Developer tools

The AWS SDK for JavaScript v3 was launched and includes first-class TypeScript support and a modular architecture. This makes it easier to import only the services needed to minimize deployment package sizes.

The AWS Serverless Application Model (AWS SAM) is an AWS CloudFormation extension that makes it easier to build, manage, and maintain serverless applications. The latest versions include support for cached and parallel builds, together with container image support for Lambda functions.

You can use AWS SAM in the new AWS CloudShell, which provides a browser-based shell in the AWS Management Console. This can help run a subset of AWS SAM CLI commands as an alternative to using a dedicated instance or AWS Cloud9 terminal.

AWS CloudShell

Amazon SNS

Amazon SNS announced support for First-In-First-Out (FIFO) topics. These are used with SQS FIFO queues for applications that require strict message ordering with exactly once processing and message deduplication.

Amazon DynamoDB

Developers can now use PartiQL, an SQL-compatible query language, with DynamoDB tables, bringing familiar SQL syntax to NoSQL data. You can also choose to use Kinesis Data Streams to capture changes to tables.

For customers using DynamoDB global tables, you can now use your own encryption keys. While all data in DynamoDB is encrypted by default, this feature enables you to use customer managed keys (CMKs). DynamoDB also announced the ability to export table data to data lakes in Amazon S3. This enables you to use services like Amazon Athena and AWS Lake Formation to analyze DynamoDB data with no custom code required.

AWS Amplify and AWS AppSync

You can now use existing Amazon Cognito user pools and identity pools for Amplify projects, making it easier to build new applications for an existing user base. With the new AWS Amplify Admin UI, you can configure application backends without using the AWS Management Console.

AWS AppSync enabled AWS WAF integration, making it easier to protect GraphQL APIs against common web exploits. You can also implement rate-based rules to help slow down brute force attacks. Using AWS Managed Rules for AWS WAF provides a faster way to configure application protection without creating the rules directly.

Serverless Posts

October

November

December

Tech Talks & Events

We hold AWS Online Tech Talks covering serverless topics throughout the year. These are listed in the Serverless section of the AWS Online Tech Talks page. We also regularly deliver talks at conferences and events around the world, speak on podcasts, and record videos you can find to learn in bite-sized chunks.

Here are some from Q4:

Videos

October:

November:

December:

There are also other helpful videos covering Serverless available on the Serverless Land YouTube channel.

The Serverless Land website

Serverless Land website

To help developers find serverless learning resources, we have curated a list of serverless blogs, videos, events, and training programs at a new site, Serverless Land. This is regularly updated with new information – you can subscribe to the RSS feed for automatic updates or follow the LinkedIn page.

Still looking for more?

The Serverless landing page has lots of information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials.

You can also follow all of us on Twitter to see latest news, follow conversations, and interact with the team.

Optimizing AWS Lambda cost and performance using AWS Compute Optimizer

Post Syndicated from Chad Schmutzer original https://aws.amazon.com/blogs/compute/optimizing-aws-lambda-cost-and-performance-using-aws-compute-optimizer/

This post is authored by Brooke Chen, Senior Product Manager for AWS Compute Optimizer, Letian Feng, Principal Product Manager for AWS Compute Optimizer, and Chad Schmutzer, Principal Developer Advocate for Amazon EC2

Optimizing compute resources is a critical component of any application architecture. Over-provisioning compute can lead to unnecessary infrastructure costs, while under-provisioning compute can lead to poor application performance.

Launched in December 2019, AWS Compute Optimizer is a recommendation service for optimizing the cost and performance of AWS compute resources. It generates actionable optimization recommendations tailored to your specific workloads. Over the last year, thousands of AWS customers reduced compute costs up to 25% by using Compute Optimizer to help choose the optimal Amazon EC2 instance types for their workloads.

One of the most frequent requests from customers is for AWS Lambda recommendations in Compute Optimizer. Today, we announce that Compute Optimizer now supports memory size recommendations for Lambda functions. This allows you to reduce costs and increase performance for your Lambda-based serverless workloads. To get started, opt in for Compute Optimizer to start finding recommendations.

Overview

With Lambda, there are no servers to manage, it scales automatically, and you only pay for what you use. However, choosing the right memory size setting for a Lambda function is still an important task. Compute Optimizer uses machine learning-based memory recommendations to help with this task.

These recommendations are available through the Compute Optimizer console, AWS CLI, AWS SDK, and the Lambda console. Compute Optimizer continuously monitors Lambda functions, using historical performance metrics to improve recommendations over time. In this blog post, we walk through an example to show how to use this feature.

Using Compute Optimizer for Lambda

This tutorial uses the AWS CLI v2 and the AWS Management Console.

In this tutorial, we set up two compute jobs that run every minute in the AWS Region US East (N. Virginia). One job is more CPU intensive than the other. Initial tests show that invocations of both jobs typically last for less than 60 seconds. The goal is to either reduce cost without much increase in duration, or reduce the duration in a cost-efficient manner.

Based on these requirements, a serverless solution can help with this task. Amazon EventBridge can schedule the Lambda functions using rules. To ensure that the functions are optimized for cost and performance, you can use the memory recommendation support in Compute Optimizer.
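
The post does not walk through creating the schedule itself, but a minimal sketch of the 1-minute rule with the AWS SDK for Python (Boto3) might look like the following. The rule name, statement ID, and function ARN are placeholders.

import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Placeholder values for the scheduled job.
RULE_NAME = "lambda-recommendation-test-schedule"
FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:lambda-recommendation-test-sleep"

# Create a rule that fires every minute.
rule_arn = events.put_rule(
    Name=RULE_NAME,
    ScheduleExpression="rate(1 minute)",
    State="ENABLED",
)["RuleArn"]

# Point the rule at the Lambda function.
events.put_targets(
    Rule=RULE_NAME,
    Targets=[{"Id": "lambda-target", "Arn": FUNCTION_ARN}],
)

# Allow EventBridge to invoke the function.
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="AllowEventBridgeInvoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)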

In your AWS account, opt in to Compute Optimizer to start analyzing AWS resources. Ensure you have the appropriate IAM permissions configured – follow these steps for guidance. If you prefer to use the console to opt in, follow these steps. To opt in, enter the following command in a terminal window:

$ aws compute-optimizer update-enrollment-status --status Active

Once you enable Compute Optimizer, it starts to scan for functions that have been invoked at least 50 times over the trailing 14 days. The next section shows two example scheduled Lambda functions for analysis.

Example Lambda functions

The code for the non-CPU intensive job is below. A Lambda function named lambda-recommendation-test-sleep is created with memory size configured as 1024 MB. An EventBridge rule is created to trigger the function on a recurring 1-minute schedule:

import json
import time

def lambda_handler(event, context):
  time.sleep(30)
  x=[0]*100000000
  return {
    'statusCode': 200,
    'body': json.dumps('Hello World!')
  }

The code for the CPU intensive job is below. A Lambda function named lambda-recommendation-test-busy is created with memory size configured as 128 MB. An EventBridge rule is created to trigger the function on a recurring 1-minute schedule:

import json
import random

def lambda_handler(event, context):
  random.seed(1)
  x=0
  for i in range(0, 20000000):
    x+=random.random()

  return {
    'statusCode': 200,
    'body': json.dumps('Sum:' + str(x))
  }

Understanding the Compute Optimizer recommendations

Compute Optimizer needs a history of at least 50 invocations of a Lambda function over the trailing 14 days to deliver recommendations. Recommendations are created by analyzing function metadata such as memory size, timeout, and runtime, in addition to CloudWatch metrics such as number of invocations, duration, error count, and success rate.

Compute Optimizer will gather the necessary information to provide memory recommendations for Lambda functions, and make them available within 48 hours. Afterwards, these recommendations will be refreshed daily.

These are recent invocations for the non-CPU intensive function:

Recent invocations for the non-CPU intensive function

Function duration is approximately 31.3 seconds with a memory setting of 1024 MB, resulting in a duration cost of about $0.00052 per invocation. Here are the recommendations for this function in the Compute Optimizer console:

Recommendations for this function in the Compute Optimizer console

The function is Not optimized with a reason of Memory over-provisioned. You can also fetch the same recommendation information via the CLI:

$ aws compute-optimizer \
  get-lambda-function-recommendations \
  --function-arns arn:aws:lambda:us-east-1:123456789012:function:lambda-recommendation-test-sleep
{
    "lambdaFunctionRecommendations": [
        {
            "utilizationMetrics": [
                {
                    "name": "Duration",
                    "value": 31333.63587049883,
                    "statistic": "Average"
                },
                {
                    "name": "Duration",
                    "value": 32522.04,
                    "statistic": "Maximum"
                },
                {
                    "name": "Memory",
                    "value": 817.67049838188,
                    "statistic": "Average"
                },
                {
                    "name": "Memory",
                    "value": 819.0,
                    "statistic": "Maximum"
                }
            ],
            "currentMemorySize": 1024,
            "lastRefreshTimestamp": 1608735952.385,
            "numberOfInvocations": 3090,
            "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:lambda-recommendation-test-sleep:$LATEST",
            "memorySizeRecommendationOptions": [
                {
                    "projectedUtilizationMetrics": [
                        {
                            "name": "Duration",
                            "value": 30015.113193697029,
                            "statistic": "LowerBound"
                        },
                        {
                            "name": "Duration",
                            "value": 31515.86878891883,
                            "statistic": "Expected"
                        },
                        {
                            "name": "Duration",
                            "value": 33091.662123300975,
                            "statistic": "UpperBound"
                        }
                    ],
                    "memorySize": 900,
                    "rank": 1
                }
            ],
            "functionVersion": "$LATEST",
            "finding": "NotOptimized",
            "findingReasonCodes": [
                "MemoryOverprovisioned"
            ],
            "lookbackPeriodInDays": 14.0,
            "accountId": "123456789012"
        }
    ]
}

The Compute Optimizer recommendation contains useful information about the function. Most importantly, it has determined that the function is over-provisioned for memory. The attribute findingReasonCodes shows the value MemoryOverprovisioned. In memorySizeRecommendationOptions, Compute Optimizer has found that using a memory size of 900 MB results in an expected invocation duration of approximately 31.5 seconds.

For non-CPU-intensive jobs, reducing the memory setting of the function often doesn’t have a negative impact on function duration. The recommendation confirms that you can reduce the memory size from 1024 MB to 900 MB, saving cost without significantly impacting duration. The new memory setting reduces the duration cost per invocation by approximately 12%.
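
To see where the approximately 12% figure comes from, you can reproduce the duration-cost arithmetic yourself. The following sketch assumes the us-east-1 x86 duration price of $0.0000166667 per GB-second and the average durations reported above; it ignores the per-request charge, which is unchanged.

# Duration-cost check for the memory recommendation. Pricing assumed:
# $0.0000166667 per GB-second (x86, us-east-1); per-request charges excluded.
PRICE_PER_GB_SECOND = 0.0000166667

def duration_cost(memory_mb, duration_seconds):
    # Lambda bills duration in GB-seconds: (configured memory in GB) x (billed duration)
    return (memory_mb / 1024) * duration_seconds * PRICE_PER_GB_SECOND

current = duration_cost(1024, 31.3)    # ~$0.00052 per invocation
proposed = duration_cost(900, 31.5)    # ~$0.00046 per invocation
print(f"Savings: {(current - proposed) / current:.0%}")   # ~12%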

The Compute Optimizer console validates these calculations:

Compute Optimizer console validates these calculations

These are recent invocations for the second function, which is CPU-intensive:

Recent invocations for the second function which is CPU-intensive

The function duration is about 37.5 seconds with a memory setting of 128 MB, resulting in a duration cost of about $0.000078 per invocation. The recommendations for this function appear in the Compute Optimizer console:

recommendations for this function appear in the Compute Optimizer console

The function is also Not optimized with a reason of Memory under-provisioned. The same recommendation information is available via the CLI:

$ aws compute-optimizer \
  get-lambda-function-recommendations \
  --function-arns arn:aws:lambda:us-east-1:123456789012:function:lambda-recommendation-test-busy
{
    "lambdaFunctionRecommendations": [
        {
            "utilizationMetrics": [
                {
                    "name": "Duration",
                    "value": 36006.85851551957,
                    "statistic": "Average"
                },
                {
                    "name": "Duration",
                    "value": 38540.43,
                    "statistic": "Maximum"
                },
                {
                    "name": "Memory",
                    "value": 53.75978407557355,
                    "statistic": "Average"
                },
                {
                    "name": "Memory",
                    "value": 55.0,
                    "statistic": "Maximum"
                }
            ],
            "currentMemorySize": 128,
            "lastRefreshTimestamp": 1608725151.752,
            "numberOfInvocations": 741,
            "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:lambda-recommendation-test-busy:$LATEST",
            "memorySizeRecommendationOptions": [
                {
                    "projectedUtilizationMetrics": [
                        {
                            "name": "Duration",
                            "value": 27340.37604781184,
                            "statistic": "LowerBound"
                        },
                        {
                            "name": "Duration",
                            "value": 28707.394850202432,
                            "statistic": "Expected"
                        },
                        {
                            "name": "Duration",
                            "value": 30142.764592712556,
                            "statistic": "UpperBound"
                        }
                    ],
                    "memorySize": 160,
                    "rank": 1
                }
            ],
            "functionVersion": "$LATEST",
            "finding": "NotOptimized",
            "findingReasonCodes": [
                "MemoryUnderprovisioned"
            ],
            "lookbackPeriodInDays": 14.0,
            "accountId": "123456789012"
        }
    ]
}

For this function, Compute Optimizer has determined that the function’s memory is under-provisioned. The value of findingReasonCodes is MemoryUnderprovisioned. The recommendation is to increase the memory from 128 MB to 160 MB.

This recommendation may seem counter-intuitive, since the function only uses 55 MB of memory per invocation. However, Lambda allocates CPU and other resources linearly in proportion to the amount of memory configured. This means that increasing the memory allocation to 160 MB also reduces the expected duration to around 28.7 seconds. This is because a CPU-intensive task also benefits from the increased CPU performance that comes with the additional memory.

After applying this recommendation, the new expected duration cost per invocation is approximately $0.000075. This means that for almost no change in duration cost, the job latency is reduced from 37.5 seconds to 28.7 seconds.

The Compute Optimizer console validates these calculations:

Compute Optimizer console validates these calculations

Applying the Compute Optimizer recommendations

To optimize the Lambda functions using Compute Optimizer recommendations, use the following CLI command:

$ aws lambda update-function-configuration \
  --function-name lambda-recommendation-test-sleep \
  --memory-size 900

After invoking the function multiple times, we can see metrics of these invocations in the console. This shows that the function duration has not changed significantly after reducing the memory size from 1024 MB to 900 MB. The Lambda function has been successfully cost-optimized without increasing job duration:

Console shows the metrics from recent invocations
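
To generate a batch of test invocations like these without waiting for a schedule, you can invoke the function asynchronously in a loop. This is a minimal sketch using boto3, with the sleep function from this walkthrough; swap in the other function name to test the CPU-intensive job.

import boto3

lambda_client = boto3.client('lambda')

# Asynchronously invoke the function a number of times to generate fresh metrics
for _ in range(20):
    lambda_client.invoke(
        FunctionName='lambda-recommendation-test-sleep',
        InvocationType='Event'   # queue the invocation and return immediately
    )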

To apply the recommendation to the CPU-intensive function, use the following CLI command:

$ aws lambda update-function-configuration \
  --function-name lambda-recommendation-test-busy \
  --memory-size 160

After invoking the function multiple times, the console shows that the invocation duration is reduced to about 28 seconds. This matches the recommendation’s expected duration. This shows that the function is now performance-optimized without a significant cost increase:

Console shows that the invocation duration is reduced to about 28 seconds

Final notes

A couple of final notes:

  • Not every function will receive a recommendation. Compute Optimizer only delivers recommendations when it has high confidence that they may help reduce cost or execution duration.
  • As with any changes you make to an environment, we strongly advise that you test recommended memory size configurations before applying them in production.

Conclusion

You can now use Compute Optimizer for serverless workloads using Lambda functions. This can help identify the optimal Lambda function configuration options for your workloads. Compute Optimizer supports memory size recommendations for Lambda functions in all AWS Regions where Compute Optimizer is available. These recommendations are available to you at no additional cost. You can get started with Compute Optimizer from the console.

To learn more, visit Getting started with AWS Compute Optimizer.

 

Ingesting MongoDB Atlas data using Amazon EventBridge

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/ingesting-mongodb-atlas-data-using-amazon-eventbridge/

This post is courtesy of Gopalakrishnan Ramaswamy, Solutions Architect

Amazon EventBridge is a serverless event bus that makes it easier to connect applications together using data from your own applications, integrated software as a service (SaaS) applications, and AWS services. It does so by delivering a stream of real-time data from various event sources. You can set up routing rules to send data to targets like AWS Lambda and build loosely coupled application architectures that react in near-real time to data sources.

MongoDB is a document database, which means it stores data in JSON-like documents. It provides a query language and has support for multi-document ACID transactions. MongoDB Atlas is a fully managed MongoDB database service hosted on the cloud. It can be used as a globally distributed database that automates administrative tasks such as database configuration, infrastructure provisioning, patching, scaling, and backups.

With EventBridge, you can use data from MongoDB to trigger workflows for customer support, business operations and more. In this post, I walk through the process of connecting MongoDB Atlas with the AWS Cloud and triggering events from changes in the MongoDB collections data.

Overview

The following diagram shows the high-level architecture of an example scenario to ingest MongoDB data into the AWS Cloud using Amazon EventBridge.

Solution architecture

MongoDB stores data records as BSON documents, which are gathered together in collections. A database stores one or more collections of documents.

This walkthrough shows you how to:

  1. Create a MongoDB cluster and load sample data.
  2. Create a database trigger associated to a collection.
  3. Create an event bus in AWS, linked to the partner event source.
  4. Create a Lambda function and the associated role with permissions.
  5. Create an EventBridge rule and associate it to the Lambda function.
  6. Verify the process.

Steps 3–5 create and configure the AWS resources using the AWS Serverless Application Model (AWS SAM). To set up the sample application, visit the GitHub repo and follow the instructions in the README.md file.

Prerequisites

This walkthrough requires:

  • An AWS account.
  • A MongoDB account.
  • The AWS SAM CLI installed and configured on your machine.

Creating a MongoDB Atlas cluster and loading sample data

For detailed steps to create a cluster and load data, see MongoDB Atlas documentation. To create the test cluster:

  1. Create a MongoDB Atlas account.
  2. Deploy a free tier cluster using these instructions, selecting your preferred cloud provider and Region.
  3. Add your trusted connection IP address to the IP access list. This allows you to connect to the cluster and access the data.
  4. After connecting to your cluster, load sample data into your cluster:
    • Navigate to the clusters view by choosing Clusters in the left navigation pane.
    • Select the cluster, choose the ellipses (…) button, and Load Sample Dataset.

MongoDB clusters UI

Create MongoDB database trigger

MongoDB database triggers allow you to run server-side logic when a document is added, updated, or removed in a linked cluster. Use database triggers to implement complex data interactions, including updating information in one document when a related document changes or interacting with an external service when a new document is inserted.

  1. Sign in to your account and choose Triggers in the left-hand panel.
  2. Choose Add Trigger to open the trigger configuration page.
  3. Select Database for Trigger Type.
  4. Enter a name for the trigger.
  5. In the Trigger Source Details section:
    • Select the cluster with sample data loaded (for example, Cluster0) for Cluster Name.
    • For Database Name select sample_analytics.
    • Select customers for Collection Name.
    • Check Insert, Update, Delete, and Replace for Operation Type.
  6. In the Function section:
    • For Select An Event Type, select EventBridge.
    • Enter your AWS Account ID. Learn how to find your account ID in this documentation.
    • Select an AWS Region where the event bus will be created.
  7. Choose Save.

Once a MongoDB Atlas trigger is created, it creates a corresponding partner event source in the Amazon EventBridge console. Initially, these event sources show as Pending with no event bus associated to them.

Partner event source
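
Before deploying, you can optionally confirm that the partner event source exists and note its exact name, because the event bus you create must use the same name. The following is a sketch using boto3; the name prefix shown is an assumption based on how MongoDB Atlas partner event sources are typically named.

import boto3

events = boto3.client('events')

# List partner event sources shared with this account
# (the name prefix shown is an assumption)
response = events.list_event_sources(NamePrefix='aws.partner/mongodb.com')
for source in response['EventSources']:
    # State remains PENDING until an event bus is associated with the source
    print(source['Name'], source['State'])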

Next, use the AWS SAM template in the GitHub repo to create the event bus, Lambda function, and event rule.

  1. Clone the GitHub repo and deploy the AWS SAM template:
    git clone https://github.com/aws-samples/amazon-eventbridge-partnerevent-example
    cd ./amazon-eventbridge-partnerevent-example
    sam deploy --guided
  2. Choose a stack name and enter the partner event source name.

The next section explains the steps that are performed by the AWS SAM template.

Creating the event bus

To receive events from SaaS partners, an event bus must be created that is associated to the partner event source:

  PartnerEventBus: 
    Type: AWS::Events::EventBus
    Properties: 
      EventSourceName: !Ref PartnerEventSource
      Name: !Ref PartnerEventSource

The partner event source name and the name of the event bus are derived from the parameter entered when running the template.

Once you create an event bus associated with a partner event source, the status of the partner event source changes to Active. A new event bus with the same name as the partner event source is created. You can see this in the EventBridge console, in Event buses in the left-hand panel.

Partner event sources

Creating the Lambda function

The following section of the template creates a Lambda function that is invoked by an event rule:

  myeventfunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: eventLambda/
      Handler: index.handler
      Runtime: nodejs12.x
      FunctionName: myeventfunction

Creating an event bus rule

The following section in the template creates an event rule that triggers the preceding Lambda function. The event pattern used by the rule selects and routes matching events to targets.

  myeventrule:
    Type: 'AWS::Events::Rule'
    Properties:
      Description: Test Events Rule
      EventBusName: !Ref PartnerEventSource
      EventPattern: 
        account: [!Ref AWS::AccountId]
      Name: myeventrule
      State: ENABLED
      Targets:
       - 
         Arn: 
           Fn::GetAtt:
             - "myeventfunction"
             - "Arn"
         Id: "idmyeventrule"

Permission is also granted to EventBridge to invoke the Lambda function. This allows the rule to trigger the associated Lambda function:

  PermissionForEventsToInvokeLambda: 
    Type: AWS::Lambda::Permission
    Properties: 
      FunctionName: 
        Ref: "myeventfunction"
      Action: "lambda:InvokeFunction"
      Principal: "events.amazonaws.com"
      SourceArn: 
        Fn::GetAtt: 
          - "myeventrule"
          - "Arn"         

Verifying the integration

After deploying the AWS SAM template, verify that the EventBridge integration works by inserting test data into the source MongoDB collection. After adding this data, the event is sent to the event bus, which invokes the Lambda function. You can then see the event payload in the function’s Amazon CloudWatch logs.

To verify the deployment:

  1. Download and install the MongoDB shell.
  2. Connect to MongoDB shell using:
    mongo "mongodb+srv://cluster0.xvo4o.mongodb.net/sample_analytics" --username yourusername

    Replace the cluster name with the cluster you created. Connect to the sample_analytics database, which has the sample data and collections.

  3. Next, insert a record into the customers collection that is associated with the database trigger. In the MongoDB shell, run the following command:
    db.customers.insertOne(
    {
      username:"myuser99",
      name:"Eventbridge Mongo",
      address:"My Address XYZ",
      birthdate:{"$date":"1975-03-02T02:20:31.000Z"},
      email:"[email protected]",
      active:true,
      accounts:[371138,324287,276528,332179,422649,387979],
      tier_and_details:{
         "0df078f33aa74a2e9696e0520c1a828a":{
         tier:"Bronze",
         id:"0df078f33aa74a2e9696e0520c1a828a",
         active:true,
         benefits:["sports tickets"]
        },
       "699456451cc24f028d2aa99d7534c219":{
       tier:"Bronze",
       benefits:["24 hour dedicated line","concierge services"],
       active:true,
       id:"699456451cc24f028d2aa99d7534c219"
      }
      }
      }
    )
    
  4. Once the record is successfully inserted:
    • Navigate to CloudWatch in the AWS console and choose Log groups in the left-hand panel.
    • Search for the log group /aws/lambda/myeventfunction and choose the event stream.
    • Expand the log items to reveal the event. This contains the payload that was sent from MongoDB Atlas to EventBridge.

Conclusion

This post demonstrates how to connect MongoDB Atlas data with the AWS Cloud using Amazon EventBridge. EventBridge helps you connect data from a range of SaaS applications using minimal code. It can help reduce operational overhead and build powerful event-driven architectures more easily. For more information about integrating data between SaaS applications, see Amazon EventBridge.

For more serverless learning resources, visit Serverless Land.

Building a Controlled Environment Agriculture Platform

Post Syndicated from Ashu Joshi original https://aws.amazon.com/blogs/architecture/building-a-controlled-environment-agriculture-platform/

This post was co-written by Michael Wirig, Software Engineering Manager at Grōv Technologies.

A substantial percentage of the world’s habitable land is used for livestock farming for dairy and meat production. The dairy industry has leveraged technology to gain insights that have led to drastic improvements and are continuing to accelerate. A gallon of milk in 2017 involved 30% less water, 21% less land, a 19% smaller carbon footprint, and 20% less manure than it did in 2007 (US Dairy, 2019). By focusing on smarter water usage and sustainable land usage, livestock farming can grow to provide sustainable and nutrient-dense food for consumers and livestock alike.

Grōv Technologies (Grōv) has pioneered the Olympus Tower Farm, a fully automated Controlled Environment Agriculture (CEA) system. Unique amongst vertical farming startups, Grōv is growing cattle feed to improve the sustainable use of land for livestock farming while increasing the economic margins for dairy and beef producers.

The challenges of CEA

The set of growing conditions for a CEA is called a “recipe,” which is a combination of ingredients like temperature, humidity, light, carbon dioxide levels, and water. The optimal recipe is dynamic and is sensitive to its ingredients. Crops must be monitored in near-real time, and CEAs should be able to self-correct in order to maintain the recipe. To build a system with these capabilities requires answers to the following questions:

  • Which parameters need to be measured for indoor cattle feed production?
  • Which sensors provide the right accuracy and cost trade-offs at scale?
  • Where do you place the sensors to ensure a consistent crop?
  • How do you correlate the data from sensors to the nutrient value?

To progress from a passively monitored system to a self-correcting, autonomous one, the CEA platform also needs to address:

  • How to maintain optimum crop conditions
  • How the system can learn and adapt to new seed varieties
  • How to communicate key business drivers such as yield and dry matter percentage

Grōv partnered with AWS Professional Services (AWS ProServe) to build a digital CEA platform addressing the challenges posed above.

Olympus Tower - Grov Technologies

Tower automation and edge platform

The Olympus Tower is instrumented for measuring recipe ingredients by combining the mechanical, electrical, and domain expertise of the Grōv team with the IoT edge and sensor expertise of the AWS ProServe team. The teams identified a primary set of features such as height, weight, and evenness of the growth to be measured at multiple stages within the Tower. Sensors were also added to measure secondary features such as water level, water pH, temperature, humidity, and carbon dioxide.

The teams designed and developed a purpose-built modular and industrial sensor station. Each sensor station has sensors for direct measurement of the features identified. The sensor stations are extended to support indirect measurement of features using a combination of Computer Vision and Machine Learning (CV/ML).

The trays with the growing cattle feed circulate through the Olympus Tower. A growth cycle starts on a tray with seeding, circulates through the tower over the cycle, and returns to the starting position to be harvested. The sensor station at the seeding location on the Olympus Tower tags each new growth cycle in a tray with a unique “Grow ID.” As trays pass by, each sensor station in the Tower collects the feature data. The firmware, jointly developed for the sensor station, uses AWS IoT SDK to stream the sensor data along with the Grow ID and metadata that’s specific to the sensor station. This information is sent every five minutes to an on-site edge gateway powered by AWS IoT Greengrass. Dedicated AWS Lambda functions manage the lifecycle of the Grow IDs and the sensor data processing on the edge.
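
The exact message schema is specific to Grōv’s firmware, but conceptually each message pairs a Grow ID with sensor readings and station metadata. The following is a hypothetical Python sketch of publishing such a message from a test script using boto3 (the on-device firmware uses the AWS IoT SDK instead); the topic name and all field names are assumptions.

import json
import boto3

iot_data = boto3.client('iot-data')

# Hypothetical sensor station message: a Grow ID plus readings and station metadata
message = {
    'growId': 'grow-000123',                 # assumed identifier format
    'stationId': 'tower1-station-04',        # assumed station naming
    'timestamp': '2021-01-15T10:05:00Z',
    'readings': {'height_cm': 11.2, 'humidity_pct': 68.5, 'co2_ppm': 840}
}

# Publish to an assumed topic; IoT rules or Greengrass subscriptions route it onward
iot_data.publish(
    topic='grov/tower1/station4/telemetry',
    qos=1,
    payload=json.dumps(message)
)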

The Grōv team developed AWS Greengrass Lambda functions running at the edge to ingest critical metrics from the operation automation software running the Olympus Towers. This information provides the ability to not just monitor the operational efficiency, but to provide the hooks to control the feedback loop.

The two sources of data were augmented with site-level data by installing sensor stations at the building level or site level to capture environmental data such as weather and energy consumption of the Towers.

All three sources of data are streamed to AWS IoT Greengrass and are processed by AWS Lambda functions. The edge software also fuses the data and correlates all categories of data together. This enables two major actions for the Grōv team – operational capability in real-time at the edge and enhanced data streamed into the cloud.

Grov Technologies - Architecture

Cloud pipeline/platform: analytics and visualization

As the data is streamed to AWS IoT Core via AWS IoT Greengrass, AWS IoT rules route the ingested data to Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB for storage. The data pipeline also includes Amazon Kinesis Data Streams for batching and additional processing on the incoming data.

A ReactJS-based dashboard application is powered using Amazon API Gateway and AWS Lambda functions to report relevant metrics such as daily yield and machine uptime.

A data pipeline is deployed to analyze data using Amazon QuickSight. AWS Glue is used to create a dataset from the data stored in Amazon S3. Amazon Athena is used to query the dataset and make it available to Amazon QuickSight. This gives the extended Grōv team of research scientists the ability to perform a series of what-if analyses on the data coming in from the Tower systems, beyond what is available in the ReactJS-based dashboard.

Data pipeline - Grov Technologies

Completing the data-driven loop

Now that the data has been collected from all sources and stored in a data lake architecture, the Grōv CEA platform has a strong foundation for harnessing insights and delivering customer outcomes using machine learning.

The integrated and fused data from the edge (sourced from the Olympus Tower instrumentation, Olympus automation software data, and site-level data) is correlated with the lab analysis performed by the Grōv Research Center (GRC). Harvest samples are routinely collected and sent to the lab, which performs wet chemistry and microbiological analysis. The analysis results for each sampled tray are associated with the corresponding sensor data by Grow ID. This serves as a mechanism for labeling and correlating the recipe data with the parameters used by dairy and beef producers – dry matter percentage, micro and macronutrients, and the presence of mycotoxins.

Grōv has chosen Amazon SageMaker to build a machine learning pipeline on its comprehensive data set, which will enable fine tuning the growing protocols in near real-time. Historical data collection unlocks machine learning use cases for future detection of anomalous sensors readings and sensor health monitoring, as well.

Because the solution is flexible, the Grōv team plans to integrate data from animal studies on their health and feed efficiency into the CEA platform. Machine learning on the data from animal studies will enhance the tuning of recipe ingredients that impact the animals’ health. This will give the farmer an unprecedented view of the impact of feed nutrition on the end product and consumer.

Conclusion

Grōv Technologies and AWS ProServe have built an extensible and scalable foundation for a CEA platform that will nourish animals for better health and yield, produce healthier foods, and enable continued research into dairy production, rumination, and animal health to empower sustainable farming practices.

Automating mutual TLS setup for Amazon API Gateway

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/automating-mutual-tls-setup-for-amazon-api-gateway/

This post is courtesy of Pankaj Agrawal, Solutions Architect.

In September 2020, Amazon API Gateway announced support for mutual Transport Layer Security (TLS) authentication. This is a new method for client-to-server authentication that can be used with API Gateway’s existing authorization options. Mutual TLS (mTLS) is an extension of Transport Layer Security (TLS), requiring both the server and the client to verify each other.

Mutual TLS is commonly used for business-to-business (B2B) applications. It’s used in standards such as Open Banking, which enables secure open API integrations for financial institutions. It’s also common for Internet of Things (IoT) applications to authenticate devices using digital certificates.

This post covers automating the mTLS setup for API Gateway HTTP APIs, but the same steps can also be used for REST APIs. Download the code used in this walkthrough from the project’s GitHub repo.

Overview

To enable mutual TLS, you must create an API with a valid custom domain name. Mutual TLS is available for both regional REST APIs and the newer HTTP APIs. To set up mutual TLS with API Gateway, you must upload a certificate authority (CA) public key certificate to Amazon S3. This is called a truststore and is used for validating client certificates.

Reference architecture

The AWS Certificate Manager Private Certificate Authority (ACM Private CA) is a highly available private CA service. I am using the ACM Private CA as a certificate authority to configure HTTP APIs and to distribute certificates to clients.

Deploying the solution

To deploy the application, the solution uses the AWS Serverless Application Model (AWS SAM). AWS SAM provides shorthand syntax to define functions, APIs, databases, and event source mappings. As a prerequisite, you must have AWS SAM CLI and Java 8 installed. You must also have the AWS CLI configured.

To deploy the solution:

  1. Clone the GitHub repository and build the application with the AWS SAM CLI. Run the following commands in a terminal:
    git clone https://github.com/aws-samples/api-gateway-auth.git
    cd api-gateway-auth
    sam build

    Console output

  2. Deploy the application:
    sam deploy --guided

Provide a stack name and preferred AWS Region for the deployment process. The template requires three parameters:

  1. HostedZoneId: The template uses an Amazon Route 53 public hosted zone to configure the custom domain. Provide the hosted zone ID where the record set must be created.
  2. DomainName: The custom domain name for the API Gateway HTTP API.
  3. TruststoreKey: The name of the truststore file in the S3 bucket, which is used by API Gateway for mTLS. By default, it is truststore.pem.

SAM deployment configuration

After deployment, the stack outputs the ARN of a test client certificate (ClientOneCertArn). This is used to validate the setup later. The API Gateway HTTP API endpoint is also provided as output.

SAM deployment output

You have now created an API Gateway HTTP APIs endpoint using mTLS.

Setting up the ACM Private CA

The AWS SAM template starts with setting up the ACM Private CA. This enables you to create a hierarchy of certificate authorities with up to five levels. A well-designed CA hierarchy offers benefits such as granular security controls and division of administrative tasks. To learn more about the CA hierarchy, visit designing a CA hierarchy. The ACM Private CA is used to configure HTTP APIs and to distribute certificates to clients.

First, a root CA is created and activated, followed by a subordinate CA following best practices. The subordinate CA is used to configure mTLS for the API and distribute the client certificates.

  PrivateCA:
    Type: AWS::ACMPCA::CertificateAuthority
    Properties:
      KeyAlgorithm: RSA_2048
      SigningAlgorithm: SHA256WITHRSA
      Subject:
        CommonName: !Sub "${AWS::StackName}-rootca"
      Type: ROOT

  PrivateCACertificate:
    Type: AWS::ACMPCA::Certificate
    Properties:
      CertificateAuthorityArn: !Ref PrivateCA
      CertificateSigningRequest: !GetAtt PrivateCA.CertificateSigningRequest
      SigningAlgorithm: SHA256WITHRSA
      TemplateArn: 'arn:aws:acm-pca:::template/RootCACertificate/V1'
      Validity:
        Type: YEARS
        Value: 10

  PrivateCAActivation:
    Type: AWS::ACMPCA::CertificateAuthorityActivation
    Properties:
      Certificate: !GetAtt
        - PrivateCACertificate
        - Certificate
      CertificateAuthorityArn: !Ref PrivateCA
      Status: ACTIVE

  MtlsCA:
    Type: AWS::ACMPCA::CertificateAuthority
    Properties:
      Type: SUBORDINATE
      KeyAlgorithm: RSA_2048
      SigningAlgorithm: SHA256WITHRSA
      Subject:
        CommonName: !Sub "${AWS::StackName}-mtlsca"

  MtlsCertificate:
    DependsOn: PrivateCAActivation
    Type: AWS::ACMPCA::Certificate
    Properties:
      CertificateAuthorityArn: !Ref PrivateCA
      CertificateSigningRequest: !GetAtt
        - MtlsCA
        - CertificateSigningRequest
      SigningAlgorithm: SHA256WITHRSA
      TemplateArn: 'arn:aws:acm-pca:::template/SubordinateCACertificate_PathLen3/V1'
      Validity:
        Type: YEARS
        Value: 3

  MtlsActivation:
    Type: AWS::ACMPCA::CertificateAuthorityActivation
    Properties:
      CertificateAuthorityArn: !Ref MtlsCA
      Certificate: !GetAtt
        - MtlsCertificate
        - Certificate
      CertificateChain: !GetAtt
        - PrivateCAActivation
        - CompleteCertificateChain
      Status: ACTIVE

Issuing client certificate from ACM Private CA

Create a client certificate, which is used as a test certificate to validate the mTLS setup:

ClientOneCert:
    DependsOn: MtlsActivation
    Type: AWS::CertificateManager::Certificate
    Properties:
      CertificateAuthorityArn: !Ref MtlsCA
      CertificateTransparencyLoggingPreference: ENABLED
      DomainName: !Ref DomainName
      Tags:
        - Key: Name
          Value: ClientOneCert

Setting up a truststore in Amazon S3

The ACM Private CA is ready for configuring mTLS on the API. The configuration uses an S3 object as its truststore to validate client certificates. To automate this, an AWS Lambda backed custom resource copies the public certificate chain of the ACM Private CA to the S3 bucket:

  TrustStoreBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled

  TrustedStoreCustomResourceFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: TrustedStoreCustomResourceFunction
      Handler: com.auth.TrustedStoreCustomResourceHandler::handleRequest
      Timeout: 120
      Policies:
        - S3CrudPolicy:
            BucketName: !Ref TrustStoreBucket

The example custom resource is written in Java, but it could also be written in another language runtime. The custom resource is invoked with the public certificate details of the private root CA, subordinate CAs, and the target S3 bucket. The Lambda function then concatenates the certificate chain and stores the object in the S3 bucket.

TrustedStoreCustomResource:
    Type: Custom::TrustedStore
    Properties:
      ServiceToken: !GetAtt TrustedStoreCustomResourceFunction.Arn
      TrustStoreBucket: !Ref TrustStoreBucket
      TrustStoreKey: !Ref TruststoreKey
      Certs:
        - !GetAtt MtlsCertificate.Certificate
        - !GetAtt PrivateCACertificate.Certificate

You can view and download the handler code for the Lambda-backed custom resource from the repo.
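
As an illustration of that core logic, here is a minimal Python sketch of the same idea: concatenate the CA certificates passed to the custom resource and write them to the truststore object in S3. It deliberately omits the CloudFormation response signaling that a real custom resource handler must implement.

import boto3

s3 = boto3.client('s3')

def handle_create_or_update(resource_properties):
    bucket = resource_properties['TrustStoreBucket']
    key = resource_properties['TrustStoreKey']
    certs = resource_properties['Certs']

    # Concatenate the PEM certificates into a single truststore object
    truststore = '\n'.join(cert.strip() for cert in certs) + '\n'
    response = s3.put_object(Bucket=bucket, Key=key, Body=truststore.encode('utf-8'))

    # Values the template reads back via GetAtt (TrustStoreUri, ObjectVersion);
    # VersionId is returned because versioning is enabled on the bucket
    return {
        'TrustStoreUri': f's3://{bucket}/{key}',
        'ObjectVersion': response['VersionId']
    }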

Configuring Amazon API Gateway HTTP APIs with mTLS

With a valid truststore object in the S3 bucket, you can set up the API. A valid custom domain must be configured for API Gateway to enable mTLS. The following code creates and sets up a custom domain for HTTP APIs. See template.yaml for a complete example.

CustomDomainCert:
    Type: AWS::CertificateManager::Certificate
    Properties:
      CertificateTransparencyLoggingPreference: ENABLED
      DomainName: !Ref DomainName
      DomainValidationOptions:
        - DomainName: !Ref DomainName
          HostedZoneId: !Ref HostedZoneId
      ValidationMethod: DNS

  SampleHttpApi:
    Type: AWS::Serverless::HttpApi
    DependsOn: TrustedStoreCustomResource
    Properties:
      CorsConfiguration:
        AllowMethods:
          - GET
        AllowOrigins:
          - http://localhost:8080
      Domain:
        CertificateArn: !Ref CustomDomainCert
        DomainName: !Ref DomainName
        EndpointConfiguration: REGIONAL
        SecurityPolicy: TLS_1_2
        MutualTlsAuthentication:
          TruststoreUri: !GetAtt TrustedStoreCustomResource.TrustStoreUri
          TruststoreVersion: !GetAtt TrustedStoreCustomResource.ObjectVersion
        Route53:
          EvaluateTargetHealth: False
          HostedZoneId: !Ref HostedZoneId
        DisableExecuteApiEndpoint: true

An Amazon Route 53 public hosted zone is used to configure the custom domain. This must be set up in your AWS account separately and you must provide the hosted zone ID as a parameter to the template.

Since the HTTP API’s default endpoint does not require mutual TLS, it is disabled via DisableExecuteApiEndpoint. This helps to ensure that mTLS authentication is enforced for all traffic to the API.

The sample API invokes a Lambda function and returns the request payload as the response.

Testing and validating the setup

To validate the setup, first export the client certificate created earlier. You can export the certificate by using the AWS Management Console or AWS CLI. This example uses the AWS CLI to export the certificate. To learn how to do this via the console, see exporting a private certificate using the console.

  1. Export the base64 PEM-encoded certificate to a local file, client.pem:
    aws acm export-certificate --certificate-arn <<Certificate ARN from stack output>> \
      --passphrase $(echo -n 'your passphrase' | base64) --region us-east-2 | jq -r '"\(.Certificate)"' > client.pem
  2. Export the encrypted private key associated with the public key in the certificate and save it to a local file, client.encrypted.key. You must provide a passphrase to associate with the encrypted private key. This is used to decrypt the exported private key:
    aws acm export-certificate --certificate-arn <<Certificate ARN from stack output>> \
      --passphrase $(echo -n 'your passphrase' | base64) --region us-east-2 | jq -r '"\(.PrivateKey)"' > client.encrypted.key
  3. Decrypt the exported private key using the passphrase and OpenSSL:
    openssl rsa -in client.encrypted.key -out client.decrypted.key
  4. Access the API using mutual TLS:
    curl -v --cert client.pem --key client.decrypted.key https://demo-api.example.com

Adding a certificate revocation list

AWS Certificate Manager Private Certificate Authority (ACM Private CA) can be natively configured with an optional certificate revocation list (CRL).

A CRL is a way for a certificate authority (CA) to make it known that one or more of its digital certificates is no longer trustworthy. When the CA revokes a certificate, it invalidates the certificate ahead of its expiration date. A certificate authority can revoke an issued certificate for several reasons, the most common being that the certificate’s private key has been compromised.

API Gateway HTTP APIs mTLS setup can be used along with all existing API Gateway authorizer options. You can further extend validation to AWS Lambda authorizers, which can be configured to validate the client certificates against this certificate revocation list (CRL). For example:

Certificate revocation architecture
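
As an example of what such an authorizer could look like, the following Python sketch denies requests whose client certificate serial number appears in a revocation list. It assumes a simplified setup in which the revoked serial numbers are kept as a JSON array in S3 rather than parsing a DER-encoded CRL, and that the client certificate details are available in the authorizer event, as they are for HTTP APIs with mTLS enabled; the bucket and key names are placeholders.

import json
import boto3

s3 = boto3.client('s3')

# Placeholder location of a JSON array of revoked certificate serial numbers
REVOCATION_BUCKET = 'my-crl-bucket'
REVOCATION_KEY = 'revoked-serials.json'

def lambda_handler(event, context):
    # For HTTP APIs with mTLS, client certificate details are passed in the request context
    client_cert = event['requestContext']['authentication']['clientCert']
    serial_number = client_cert['serialNumber']

    obj = s3.get_object(Bucket=REVOCATION_BUCKET, Key=REVOCATION_KEY)
    revoked_serials = set(json.loads(obj['Body'].read()))

    # Simple response format for HTTP API Lambda authorizers
    return {'isAuthorized': serial_number not in revoked_serials}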

For Lambda authorizer blueprint examples, refer to aws-apigateway-lambda-authorizer-blueprints.

Conclusion

Mutual TLS (mTLS) for API Gateway is now generally available at no additional cost. This post shows how to automate mutual TLS for Amazon API Gateway HTTP APIs using the AWS Certificate Manager Private Certificate Authority as a private CA. Using infrastructure as code (IaC) enables you to develop, deploy, and scale cloud applications, often with greater speed, less risk, and reduced cost.

Download the complete working example for deploying mTLS with API Gateway at this GitHub repo. To learn more about Amazon API Gateway, visit the API Gateway developer guide documentation.

For more serverless learning resources, visit Serverless Land.

Setting up automated data quality workflows and alerts using AWS Glue DataBrew and AWS Lambda

Post Syndicated from Romi Boimer original https://aws.amazon.com/blogs/big-data/setting-up-automated-data-quality-workflows-and-alerts-using-aws-glue-databrew-and-aws-lambda/

Proper data management is critical to successful, data-driven decision-making. An increasingly large number of customers are adopting data lakes to realize deeper insights from big data. As part of this, you need clean and trusted data in order to gain insights that lead to improvements in your business. As the saying goes, garbage in is garbage out—the analysis is only as good as the data that drives it.

Organizations today have continuously incoming data that may develop slight changes in schema, quality, or profile over a period of time. To ensure data is always of high quality, we need to consistently profile new data, evaluate that it meets our business rules, alert for problems in the data, and fix any issues. In this post, we leverage AWS Glue DataBrew, a visual data preparation tool that makes it easy to profile and prepare data for analytics and machine learning (ML). We demonstrate how to use DataBrew to publish data quality statistics and build a solution around it to automate data quality alerts.

Overview of solution

In this post, we walk through a solution that sets up a recurring profile job to determine data quality metrics and, using your defined business rules, report on the validity of the data. The following diagram illustrates the architecture.

Architecture of the recurring profile job and data quality alerting workflow

The steps in this solution are as follows:

  1. Periodically send raw data to Amazon Simple Storage Service (Amazon S3) for storage.
  2. Read the raw data in Amazon S3 and generate a scheduled DataBrew profile job to determine data quality.
  3. Write the DataBrew profile job output to Amazon S3.
  4. Trigger an Amazon EventBridge event after job completion.
  5. Invoke an AWS Lambda function based on the event, which reads the profile output from Amazon S3 and determines whether the output meets data quality business rules.
  6. Publish the results to an Amazon Simple Notification Service (Amazon SNS) topic.
  7. Subscribe email addresses to the SNS topic to inform members of your organization.

Prerequisites

For this walkthrough, you should have the following prerequisites:

Deploying the solution

For a quick start of this solution, you can deploy the provided AWS CloudFormation stack. This creates all the required resources in your account (us-east-1 Region). Follow the rest of this post for a deeper dive into the resources.

  1. Choose Launch Stack:

  1. In Parameters, for Email, enter an email address that can receive notifications.
  2. Scroll to the end of the form and select I acknowledge that AWS CloudFormation might create IAM resources.
  3. Choose Create stack.

It takes a few minutes for the stack creation to complete; you can follow progress on the Events tab.

  1. Check your email inbox and choose Confirm subscription in the email from AWS Notifications.

The default behavior of the deployed stack runs the profile on Sundays. You can start a one-time run from the DataBrew console to try out the end-to-end solution.

Setting up your source data in Amazon S3

In this post, we use an open dataset of New York City Taxi trip record data from The Registry of Open Data on AWS. This dataset represents a collection of CSV files defining trips taken by taxis and for-hire vehicles in New York City. Each record contains the pick-up and drop-off IDs and timestamps, distance, passenger count, tip amount, fare amount, and total amount. For the purpose of illustration, we use a static dataset; in a real-world use case, we would use a dataset that is refreshed at a defined interval.

You can download the sample dataset (us-east-1 Region) and follow the instructions for this solution, or use your own data that gets dumped into your data lake on a recurring basis. We recommend creating all your resources in the same account and Region. If you use the sample dataset, choose us-east-1.

Creating a DataBrew profile job

To get insights into the quality of our data, we run a DataBrew profile job on a recurring basis. This profile provides us with a statistical summary of our dataset, including value distributions, sparseness, cardinality, and type determination.

Connecting a DataBrew dataset

To connect your dataset, complete the following steps:

  1. On the DataBrew console, in the navigation pane, choose Datasets.
  2. Choose Connect new dataset.
  3. Enter a name for the dataset.
  4. For Enter your source from S3, enter the S3 path of your data source. In our case, this is s3://nyc-tlc/misc/.
  5. Select your dataset (for this post, we choose the medallions trips dataset FOIL_medallion_trips_june17.csv).

  1. Scroll to the end of the form and choose Create dataset.

Creating the profile job

You’re now ready to create your profile job.

  1. In the navigation pane, choose Datasets.
  2. On the Datasets page, select the dataset that you created in the previous step. The row in the table should be highlighted.
  3. Choose Run data profile.
  4. Select Create profile job.
  5. For Job output settings, enter an S3 path as the destination for the profile results. Make sure to note down the S3 bucket and key, because you use them later in this tutorial.
  6. For Permissions, choose a role that has access to your input and output S3 paths. For details on required permissions, see DataBrew permission documentation.
  7. On the Associate schedule drop-down menu, choose Create new schedule.
  8. For Schedule name, enter a name for the schedule.
  9. For Run frequency, choose a frequency based on the time and rate at which your data is refreshed.
  10. Choose Add.

  1. Choose Create and run job.
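
If you prefer to script this step, the same profile job can also be created with boto3. This is a sketch; the job name, dataset name, output location, and role ARN are placeholders.

import boto3

databrew = boto3.client('databrew')

# Create the profile job programmatically (all names and paths are placeholders)
databrew.create_profile_job(
    Name='taxi-data-profile-job',
    DatasetName='taxi-trips-dataset',
    OutputLocation={'Bucket': 'taxi-data', 'Key': 'profile-out/'},
    RoleArn='arn:aws:iam::123456789012:role/databrew-profile-role'
)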

The job run on sample data typically takes 2 minutes to complete.

Exploring the data profile

Now that we’ve run our profile job, we can expose insightful characteristics about our dataset. We can also review the results of the profile through the visualizations of the DataBrew console or by reading the raw JSON results in our S3 bucket.

The profile analyzes both at a dataset level and column level granularity. Looking at our column analytics for String columns, we have the following statistics:

  • MissingCount – The number of missing values in the dataset
  • UniqueCount – The number of unique values in the dataset
  • Datatype – The data type of the column
  • CommonValues – The top 100 most common strings and their occurrences
  • Min – The length of the shortest String value
  • Max – The length of the longest String value
  • Mean – The average length of the values
  • Median – The middle value in terms of character count
  • Mode – The most common String value length
  • StandardDeviation – The standard deviation for the lengths of the String values

For numerical columns, we have the following:

  • Min – The minimum value
  • FifthPercentile – The value that represents 5th percentile (5% of values fall below this and 95% fall above)
  • Q1 – The value that represents 25th percentile (25% of values fall below this and 75% fall above)
  • Median – The value that represents 50th percentile (50% of values fall below this and 50% fall above)
  • Q3 – The value that represents 75th percentile (75% of values fall below this and 25% fall above)
  • NinetyFifthPercentile – The value that represents 95th percentile (95% of values fall below this and 5% fall above)
  • Max – The highest value
  • Range – The difference between the highest and lowest values
  • InterquartileRange – The range between the 25th percentile and 75th percentile values
  • StandardDeviation – The standard deviation of the values (measures the variation of values)
  • Kurtosis – The kurtosis of the values (measures the heaviness of the tails in the distribution)
  • Skewness – The skewness of the values (measures symmetry in the distribution)
  • Sum – The sum of the values
  • Mean – The average of the values
  • Variance – The variance of the values (measures divergence from the mean)
  • CommonValues – A list of the most common values in the column and their occurrence count
  • MinimumValues – A list of the 5 minimum values in the list and their occurrence count
  • MaximumValues – A list of the 5 maximum values in the list and their occurrence count
  • MissingCount – The number of missing values
  • UniqueCount – The number of unique values
  • ZerosCount – The number of zeros
  • Datatype – The datatype of the column
  • Min – The minimum value
  • Max – The maximum value
  • Median – The middle value
  • Mean – The average value
  • Mode – The most common value 

Finally, at a dataset level, we have an overview of the profile as well as cross-column analytics:

  • DatasetName – The name of the dataset the profile was run on
  • Size – The size of the data source in KB
  • Source – The source of the dataset (for example, Amazon S3)
  • Location – The location of the data source
  • CreatedBy – The ARN of the user that created the profile job
  • SampleSize – The number of rows used in the profile
  • MissingCount – The total number of missing cells
  • DuplicateRowCount – The number of duplicate rows in the dataset
  • StringColumnsCount – The number of columns that are of String type
  • NumberColumnsCount – The number of columns that are of numeric type
  • BooleanColumnsCount – The number of columns that are of Boolean type
  • MissingWarningCount – The number of warnings on columns due to missing values
  • DuplicateWarningCount – The number of warnings on columns due to duplicate values
  • JobStarted – A timestamp indicating when the job started
  • JobEnded – A timestamp indicating when the job ended
  • Correlations – The statistical relationship between columns

By default, the DataBrew profile is run on a 20,000-row First-N sample of your dataset. If you want to increase the limit and run the profile on your entire dataset, send a request to [email protected].

Creating an SNS topic and subscription

Amazon SNS allows us to deliver messages regarding the quality of our data reliably and at scale. For this post, we create an SNS topic and subscription. The topic provides us with a central communication channel that we can broadcast to when the job completes, and the subscription is then used to receive the messages published to our topic. For our solution, we use an email protocol in the subscription in order to send the profile results to the stakeholders in our organization.

Creating the SNS topic

To create your topic, complete the following steps:

  1. On the Amazon SNS console, in the navigation pane, choose Topics.
  2. Choose Create topic.
  3. For Type, select Standard.
  4. For Name, enter a name for the topic.


  1. Choose Create topic.
  2. Take note of the ARN in the topic details to use later.

Creating the SNS subscription

To create your subscription, complete the following steps:

  1. In the navigation pane, choose Subscriptions.
  2. Choose Create subscription.
  3. For Topic ARN, choose the topic that you created in the previous step.
  4. For Protocol, choose Email.
  5. For Endpoint, enter an email address that can receive notifications.


  1. Choose Create subscription.
  2. Check your email inbox and choose Confirm subscription in the email from AWS Notifications.
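
If you prefer to script this setup instead of using the console, the topic and subscription can be created with a few boto3 calls. This is a sketch; the topic name and email address are placeholders.

import boto3

sns = boto3.client('sns')

# Create the topic and subscribe an email endpoint (placeholder values)
topic = sns.create_topic(Name='databrew-profile-topic')
sns.subscribe(
    TopicArn=topic['TopicArn'],
    Protocol='email',
    Endpoint='data-team@example.com'
)
print(topic['TopicArn'])   # note this ARN for the Lambda function's topicArn parameter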

Creating a Lambda function for business rule validation

The profile has provided us with an understanding of the characteristics of our data. Now we can create business rules that ensure we’re consistently monitoring the quality our data.

For our sample taxi dataset, we will validate the following:

  • Making sure the pu_loc_id and do_loc_id columns meet a completeness rate of 90%.
  • If more than 10% of the data in those columns is missing, we’ll notify our team that the data needs to be reviewed.

Creating the Lambda function

To create your function, complete the following steps:

  1. On the Lambda console, in the navigation pane, choose Functions.
  2. Choose Create function.
  3. For Function name¸ enter a name for the function.
  4. For Runtime, choose the language you want to write the function in. If you want to use the code sample provided in this tutorial, choose Python 3.8.


  1. Choose Create function.

Adding a destination to the Lambda function

You now add a destination to your function.

  1. On the Designer page, choose Add destination.
  2. For Condition, select On success.
  3. For Destination type, choose SNS topic.
  4. For Destination, choose the SNS topic from the previous step.


  1. Choose Save.

Authoring the Lambda function

For the function code, enter the following sample code or author your own function that parses the DataBrew profile job JSON and verifies it meets your organization’s business rules.

If you use the sample code, make sure to fill in the values of the required parameters to match your configuration:

  • topicArn – The resource identifier for the SNS topic. You find this on the Amazon SNS console’s topic details page (for example, topicArn = 'arn:aws:sns:us-east-1:012345678901:databrew-profile-topic').
  • profileOutputBucket – The S3 bucket the profile job is set to output to. You can find this on the DataBrew console’s job details page (for example, profileOutputBucket = 'taxi-data').
  • profileOutputPrefix – The S3 key prefix the profile job is set to output to. You can find this on the DataBrew console’s job details page (for example, profileOutputPrefix = 'profile-out/'). If you’re writing directly to the root of an S3 bucket, keep this as an empty String (profileOutputPrefix = '').
    import json
    import boto3
    
    sns = boto3.client('sns')
    s3 = boto3.client('s3')
    s3Resource = boto3.resource('s3')
    
    # === required parameters ===
    topicArn = 'arn:aws:sns:<YOUR REGION>:<YOUR ACCOUNT ID>:<YOUR TOPIC NAME>'
    profileOutputBucket = '<YOUR S3 BUCKET NAME>'
    profileOutputPrefix = '<YOUR S3 KEY>'
    
    def verify_completeness_rule(bucket, key):
        # completeness threshold set to 10%
        threshold = 0.1
        
        # parse the DataBrew profile
        profileObject = s3.get_object(Bucket = bucket, Key = key)
        profileContent = json.loads(profileObject['Body'].read().decode('utf-8'))
        
        # verify the completeness rule is met on the pu_loc_id and do_loc_id columns
        for column in profileContent['columns']:
            if (column['name'] == 'pu_loc_id' or column['name'] == 'do_loc_id'):
                if ((column['missingValuesCount'] / profileContent['sampleSize']) > threshold):
                    # failed the completeness check
                    return False
    
        # passed the completeness check
        return True
    
    def lambda_handler(event, context):
        jobRunState = event['detail']['state']
        jobName = event['detail']['jobName'] 
        jobRunId = event['detail']['jobRunId'] 
        profileOutputKey = ''
    
        if (jobRunState == 'SUCCEEDED'):
            profileOutputPostfix = jobRunId[3:] + '.json'
    
            bucket = s3Resource.Bucket(profileOutputBucket)
            for object in bucket.objects.filter(Prefix = profileOutputPrefix):
                if (profileOutputPostfix in object.key):
                    profileOutputKey = object.key
            
            if (verify_completeness_rule(profileOutputBucket, profileOutputKey)):
                message = 'Nice! Your profile job ' + jobName + ' met business rules. Head to https://console.aws.amazon.com/databrew/ to view your profile.' 
                subject = 'Profile job ' + jobName + ' met business rules' 
            else:
                message = 'Uh oh! Your profile job ' + jobName + ' did not meet business rules. Head to https://console.aws.amazon.com/databrew to clean your data.'
                subject = 'Profile job ' + jobName + ' did not meet business rules'
        
        else:
            # State is FAILED, STOPPED, or TIMEOUT - intervention required
            message = 'Uh oh! Your profile job ' + jobName + ' is in state ' + jobRunState + '. Check the job details at https://console.aws.amazon.com/databrew#job-details?job=' + jobName
            subject = 'Profile job ' + jobName + ' in state ' + jobRunState
            
        response = sns.publish (
            TargetArn = topicArn,
            Message = message,
            Subject = subject
        )
    
        return {
            'statusCode': 200,
            'body': json.dumps(response)
        }

Updating the Lambda function’s permissions

In this final step of configuring your Lambda function, you update your function’s permissions.

  1. In the Lambda function editor, choose the Permissions tab.
  2. For Execution role, choose the role name to navigate to the AWS Identity and Access Management (IAM) console.
  3. In the Role summary, choose Add inline policy.
  4. For Service, choose S3.
  5. For Actions, under List, choose ListBucket.
  6. For Actions, under Read, choose GetObject.
  7. In the Resources section, for bucket, choose Add ARN.
  8. Enter the bucket name you used for your output data in the create profile job step.
  9. In the modal, choose Add.
  10. For object, choose Add ARN.
  11. For bucket name, enter the bucket name you used for your output data in the create profile job step and append the key (for example, taxi-data/profile-out).
  12. For object name, choose Any. This provides read access to all objects in the chosen path.
  13. In the modal, choose Add.
  14. Choose Review policy.
  15. On the Review policy page, enter a name.
  16. Choose Create policy. 

We return to the Lambda function to add a trigger later, so keep the Lambda service page open in a tab as you continue to the next step, adding an EventBridge rule.

Creating an EventBridge rule for job run completion

EventBridge is a serverless event bus service that we can configure to connect applications. For this post, we configure an EventBridge rule to route DataBrew job completion events to our Lambda function. When our profile job is complete, the event triggers the function to process the results.

Creating the EventBridge rule

To create our rule in EventBridge, complete the following steps:

  1. On the EventBridge console, in the navigation pane, choose Rules.
  2. Choose Create rule.
  3. Enter a name and description for the rule.
  4. In the Define pattern section, select Event pattern.
  5. For Event matching pattern, select Pre-defined pattern by service.
  6. For Service provider, choose AWS.
  7. For Service name, choose AWS Glue DataBrew.
  8. For Event type, choose DataBrew Job State Change.
  9. For Target, choose Lambda function.
  10. For Function, choose the name of the Lambda function you created in the previous step.


  1. Choose Create.

Adding the EventBridge rule as the Lambda function trigger

To add your rule as the function trigger, complete the following steps:

  1. Navigate back to your Lambda function configuration page from the previous step.
  2. In the Designer, choose Add trigger.
  3. For Trigger configuration, choose EventBridge (CloudWatch Events).
  4. For Rule, choose the EventBridge rule you created in the previous step.

  5. Choose Add.

Testing your system

That’s it! We’ve completed all the steps required for this solution to run periodically. To give it an end-to-end test, we can run our profile job once and wait for the resulting email to get our results.

  1. On the DataBrew console, in the navigation pane, choose Jobs.
  2. On the Profile jobs tab, select the job that you created. The row in the table should be highlighted.
  3. Choose Run job.
  4. In the Run job modal, choose Run job.

A few minutes after the job is complete, you should receive an email notifying you of the results of your business rule validation logic.

Cleaning up

To avoid incurring future charges, delete the resources created during this walkthrough.

Conclusion

In this post, we walked through how to use DataBrew alongside Amazon S3, Lambda, EventBridge, and Amazon SNS to automatically send data quality alerts. We encourage you to extend this solution by customizing the business rule validation to meet your unique business needs.


About the Authors

Romi Boimer is a Sr. Software Development Engineer at AWS and a technical lead for AWS Glue DataBrew. She designs and builds solutions that enable customers to efficiently prepare and manage their data. Romi has a passion for aerial arts; in her spare time, she enjoys fighting gravity and hanging from fabric.

 

 

Shilpa Mohan is a Sr. UX designer at AWS and leads the design of AWS Glue DataBrew. With over 13 years of experience across multiple enterprise domains, she is currently crafting products for Database, Analytics, and AI services at AWS. Shilpa is a passionate creator; she spends her time creating anything from content and photographs to crafts.

Using container image support for AWS Lambda with AWS SAM

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/using-container-image-support-for-aws-lambda-with-aws-sam/

At AWS re:Invent 2020, AWS Lambda released Container Image Support for Lambda functions. This new feature allows developers to package and deploy Lambda functions as container images of up to 10 GB in size. With this release, AWS SAM also added support to manage, build, and deploy Lambda functions using container images.

In this blog post, I walk through building a simple serverless application that uses Lambda functions packaged as container images with AWS SAM. I demonstrate creating a new application and highlight changes to the AWS SAM template specific to container image support. I then cover building the image locally for debugging in addition to eventual deployment. Finally, I show using AWS SAM to handle packaging and deploying Lambda functions from a developer’s machine or a CI/CD pipeline.

Push to invoke lifecycle

The process for creating a Lambda function packaged as a container requires only a few steps. A developer first creates the container image and tags that image with the appropriate label. The image is then uploaded to an Amazon Elastic Container Registry (ECR) repository using docker push.

During the Lambda create or update process, the Lambda service pulls the image from ECR, optimizes the image for use, and deploys it to the Lambda service. Once this and any other configuration processes are complete, the Lambda function is in Active status and ready to be invoked. The AWS SAM CLI manages most of these steps for you.
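To make the lifecycle concrete, the create step that the AWS SAM CLI performs on your behalf corresponds roughly to a CreateFunction call referencing the image in ECR. This is only a sketch using the AWS SDK for JavaScript; the function name, role ARN, and image URI are placeholders.

const AWS = require('aws-sdk')
const lambda = new AWS.Lambda()

const createImageFunction = async () => {
  await lambda.createFunction({
    FunctionName: 'demo-container-function',
    PackageType: 'Image',   // package as a container image instead of a zip archive
    Code: {
      // The image must already be pushed to ECR before this call
      ImageUri: '111122223333.dkr.ecr.us-east-1.amazonaws.com/demo-app-hello-world:nodejs12.x-v1'
    },
    Role: 'arn:aws:iam::111122223333:role/demo-lambda-execution-role'
  }).promise()
}

createImageFunction().catch(console.error)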

Prerequisites

The following tools are required in this walkthrough:

Create the application

Use the terminal and follow these steps to create a serverless application:

  1. Enter sam init.
  2. For Template source, select option one for AWS Quick Start Templates.
  3. For Package type, choose option two for Image.
  4. For Base image, select option one for amazon/nodejs12.x-base.
  5. Name the application demo-app.
Demonstration of sam init

Exploring the application

Open the template.yaml file in the root of the project to see the new options available for container image support. The AWS SAM template has two new values that are required when working with container images. PackageType: Image tells AWS SAM that this function is using container images for packaging.

AWS SAM template

The second set of required data is in the Metadata section that helps AWS SAM manage the container images. When a container is created, a new tag is added to help identify that image. By default, Docker uses the tag, latest. However, AWS SAM passes an explicit tag name to help differentiate between functions. That tag name is a combination of the Lambda function resource name, and the DockerTag value found in the Metadata. Additionally, the DockerContext points to the folder containing the function code and Dockerfile identifies the name of the Dockerfile used in building the container image.

In addition to changes in the template.yaml file, AWS SAM also uses the Docker CLI to build container images. Each Lambda function has a Dockerfile that instructs Docker how to construct the container image for that function. The Dockerfile for the HelloWorldFunction is at hello-world/Dockerfile.

Local development of the application

AWS SAM provides local development support for zip-based and container-based Lambda functions. When using container-based images, as you modify your code, update the local container image using sam build. AWS SAM then calls docker build using the Dockerfile for instructions.

Dockerfile for Lambda function

In the case of the HelloWorldFunction that uses Node.js, the Docker command:

  1. Pulls the latest container base image for nodejs12.x from the Amazon Elastic Container Registry Public.
  2. Copies the app.js code and package.json files to the container image.
  3. Installs the dependencies inside the container image.
  4. Sets the invocation handler.
  5. Creates and tags a new version of the local container image.

To build your application locally on your machine, enter:

sam build

The results are:

Results for sam build

Now test the code by locally invoking the HelloWorldFunction using the following command:

sam local invoke HelloWorldFunction

The results are:

Results for sam local invoke

You can also combine these commands and add flags for cached and parallel builds:

sam build --cached --parallel && sam local invoke HelloWorldFunction

Deploying the application

There are two ways to deploy container-based Lambda functions with AWS SAM. The first option is to deploy from AWS SAM using the sam deploy command. The deploy command tags the local container image, uploads it to ECR, and then creates or updates your Lambda function. The second method is the sam package command used in continuous integration and continuous delivery or deployment (CI/CD) pipelines, where the deployment process is separate from the artifact creation process.

The sam package command tags and uploads the container image to ECR but does not deploy the application. Instead, it creates a modified version of the template.yaml file with the newly created container image location. This modified template is later used to deploy the serverless application using AWS CloudFormation.

Deploying from AWS SAM with the guided flag

Before you can deploy the application, use the AWS CLI to create a new ECR repository to store the container image for the HelloWorldFunction.

Run the following command from a terminal:

aws ecr create-repository --repository-name demo-app-hello-world \
--image-tag-mutability IMMUTABLE --image-scanning-configuration scanOnPush=true

This command creates a new ECR repository called demo-app-hello-world. The --image-tag-mutability IMMUTABLE option prevents overwriting tags. The --image-scanning-configuration scanOnPush=true option enables automated vulnerability scanning whenever a new image is pushed to the repository. The output is:

Amazon ECR creation output

Make a note of the repositoryUri as you need it in the next step.

Before you can push your images to this new repository, ensure that you have logged in to the managed Docker service that ECR provides. Update the bracketed tokens with your information and run the following command in the terminal:

aws ecr get-login-password --region <region> | docker login --username AWS \
--password-stdin <account id>.dkr.ecr.<region>.amazonaws.com

You can also install the Amazon ECR credentials helper to help facilitate Docker authentication with Amazon ECR.

After building the application locally and creating a repository for the container image, you can deploy the application. The first time you deploy an application, use the guided version of the sam deploy command and follow these steps:

  1. Type sam deploy --guided, or sam deploy -g.
  2. For Stack Name, enter demo-app.
  3. Choose the same Region that you created the ECR repository in.
  4. Enter the Image Repository for the HelloWorldFunction (this is the repositoryUri of the ECR repository).
  5. For Confirm changes before deploy and Allow SAM CLI IAM role creation, keep the defaults.
  6. For HelloWorldFunction may not have authorization defined, Is this okay? Select Y.
  7. Keep the defaults for the remaining prompts.
Results of sam deploy --guided

AWS SAM uploads the container images to the ECR repo and deploys the application. During this process, you see a changeset along with the status of the deployment. When the deployment is complete, the stack outputs are then displayed. Use the HelloWorldApi endpoint to test your application in production.

Deploy outputs

When you use the guided version, AWS SAM saves the entered data to the samconfig.toml file. For subsequent deployments with the same parameters, use sam deploy. If you want to make a change, use the guided deployment again.

This example demonstrates deploying a serverless application with a single, container-based Lambda function in it. However, most serverless applications contain more than one Lambda function. To work with an application that has more than one Lambda function, follow these steps to add a second Lambda function to your application:

  1. Copy the hello-world directory using the terminal command cp -R hello-world hola-world
  2. Replace the contents of the template.yaml file with the following
    AWSTemplateFormatVersion: '2010-09-09'
    Transform: AWS::Serverless-2016-10-31
    Description: demo app
      
    Globals:
      Function:
        Timeout: 3
    
    Resources:
      HelloWorldFunction:
        Type: AWS::Serverless::Function
        Properties:
          PackageType: Image
          Events:
            HelloWorld:
              Type: Api
              Properties:
                Path: /hello
                Method: get
        Metadata:
          DockerTag: nodejs12.x-v1
          DockerContext: ./hello-world
          Dockerfile: Dockerfile
          
      HolaWorldFunction:
        Type: AWS::Serverless::Function
        Properties:
          PackageType: Image
          Events:
            HolaWorld:
              Type: Api
              Properties:
                Path: /hola
                Method: get
        Metadata:
          DockerTag: nodejs12.x-v1
          DockerContext: ./hola-world
          Dockerfile: Dockerfile
    
    Outputs:
      HelloWorldApi:
        Description: "API Gateway endpoint URL for Prod stage for Hello World function"
        Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"
      HolaWorldApi:
        Description: "API Gateway endpoint URL for Prod stage for Hola World function"
        Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hola/"
  3. Replace the contents of hola-world/app.js with the following
    let response;
    exports.lambdaHandler = async(event, context) => {
        try {
            response = {
                'statusCode': 200,
                'body': JSON.stringify({
                    message: 'hola world',
                })
            }
        }
        catch (err) {
            console.log(err);
            return err;
        }
        return response
    };
  4. Create an ECR repository for the HolaWorldFunction
    aws ecr create-repository --repository-name demo-app-hola-world \
    --image-tag-mutability IMMUTABLE --image-scanning-configuration scanOnPush=true
  5. Run the guided deploy to add the second repository:
    sam deploy -g

The AWS SAM guided deploy process allows you to provide the information again but prepopulates the defaults with previous values. Update the following:

  1. Keep the same stack name, Region, and Image Repository for HelloWorldFunction.
  2. Use the new repository for HolaWorldFunction.
  3. For the remaining steps, use the same values from before. When prompted that the Lambda functions may not have authorization defined, enter Y.
Results of sam deploy --guided

Deploying in a CI/CD pipeline

Companies use continuous integration and continuous delivery (CI/CD) pipelines to automate application deployment. Because the process is automated, using an interactive process like a guided AWS SAM deployment is not possible.

Developers can use the packaging process in AWS SAM to prepare the artifacts for deployment and produce a separate template usable by AWS CloudFormation. The package command is:

sam package --output-template-file packaged-template.yaml \
--image-repository 5555555555.dkr.ecr.us-west-2.amazonaws.com/demo-app

For multiple repositories:

sam package --output-template-file packaged-template.yaml \ 
--image-repositories HelloWorldFunction=5555555555.dkr.ecr.us-west-2.amazonaws.com/demo-app-hello-world \
--image-repositories HolaWorldFunction=5555555555.dkr.ecr.us-west-2.amazonaws.com/demo-app-hola-world

Both cases create a file called packaged-template.yaml. The Lambda functions in this template have an added ImageUri value that points to the container image in the ECR repository, including the image tag for the Lambda function.

Packaged template

Using sam package to generate a separate CloudFormation template enables developers to separate artifact creation from application deployment. The deployment process can then be placed in an isolated stage allowing for greater customization and observability of the pipeline.

Conclusion

Container image support for Lambda enables larger application artifacts and the ability to use container tooling to manage Lambda images. AWS SAM simplifies application management by bringing these tools into the serverless development workflow.

In this post, you create a container-based serverless application using the command line in the terminal. You create ECR repositories and associate them with functions in the application. You deploy the application from your local machine and package the artifacts for separate deployment in a CI/CD pipeline.

To learn more about serverless and AWS SAM, visit the Sessions with SAM series at s12d.com/sws and find more resources at serverlessland.com.

#ServerlessForEveryone

Optimizing batch processing with custom checkpoints in AWS Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/optimizing-batch-processing-with-custom-checkpoints-in-aws-lambda/

AWS Lambda can process batches of messages from sources like Amazon Kinesis Data Streams or Amazon DynamoDB Streams. In normal operation, the processing function moves from one batch to the next to consume messages from the stream.

However, when an error occurs in one of the items in the batch, this can result in reprocessing some of the same messages in that batch. With the new custom checkpoint feature, there is now much greater control over how you choose to process batches containing failed messages.

This blog post explains the default behavior of batch failures and options available to developers to handle this error state. I also cover how to use this new checkpoint capability and show the benefits of using this feature in your stream processing functions.

Overview

When using a Lambda function to consume messages from a stream, the batch size property controls the maximum number of messages passed in each event.

The stream manages two internal pointers: a checkpoint and a current iterator. The checkpoint is the last known item position that was successfully processed. The current iterator is the position in the stream for the next read operation. In a successful operation, here are two batches processed from a stream with a batch size of 10:

Checkpoints and current iterators

  1. The first batch delivered to the Lambda function contains items 1–10. The function processes these items without error.
  2. The checkpoint moves to item 11. The next batch delivered to the Lambda function contains items 11–20.

In default operation, the processing of the entire batch must succeed or fail. If a single item fails processing and the function returns an error, the batch fails. The entire batch is then retried until the maximum number of retries is reached. This can result in the same failure occurring multiple times and unnecessary processing of individual messages.

You can also enable the BisectBatchOnFunctionError property in the event source mapping. If there is a batch failure, the calling service splits the failed batch into two and retries the half-batches separately. The process continues recursively until there is a single item in a batch or messages are processed successfully. For example, in a batch of 10 messages, where item number 5 is failing, the processing occurs as follows:

Bisect batch on error processing

  1. Batch 1 fails. It’s split into batches 2 and 3.
  2. Batch 2 fails, and batch 3 succeeds. Batch 2 is split into batches 4 and 5.
  3. Batch 4 fails and batch 5 succeeds. Batch 4 is split into batches 6 and 7.
  4. Batch 6 fails and batch 7 succeeds.

While this provides a way to process messages in a batch with one failing message, it results in multiple invocations of the function. In this example, message number 4 is processed four times before succeeding.
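Bisecting is a property of the event source mapping rather than of the function code. As a hedged sketch with the AWS SDK for JavaScript, assuming you already know the mapping's UUID, it can be enabled like this:

const AWS = require('aws-sdk')
const lambda = new AWS.Lambda()

const enableBisectOnError = async () => {
  await lambda.updateEventSourceMapping({
    UUID: 'your-event-source-mapping-uuid',   // placeholder; find it with listEventSourceMappings
    BisectBatchOnFunctionError: true
  }).promise()
}

enableBisectOnError().catch(console.error)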

With the new custom checkpoint feature, you can return the sequence identifier for the failed messages. This provides more precise control over how to choose to continue processing the stream. For example, in a batch of 10 messages where the sixth message fails:

Custom checkpoint behavior

  1. Lambda processes the batch of messages, items 1–10. The sixth message fails and the function returns the failed sequence identifier.
  2. The checkpoint in the stream is moved to the position of the failed message. The batch is retried for only messages 6–10.

Existing stream processing behaviors

In the following examples, I use a DynamoDB table with a Lambda function that is invoked by the stream for the table. You can also use a Kinesis data stream if preferred, as the behavior is the same. The event source mapping is set to a batch size of 10 items so all the stream messages are passed in the event to a single Lambda invocation.

Architecture diagram

I use the following Node.js script to generate batches of 10 items in the table.

const AWS = require('aws-sdk')
AWS.config.update({ region: 'us-east-1' })
const docClient = new AWS.DynamoDB.DocumentClient()

const ddbTable = 'ddbTableName'
const BATCH_SIZE = 10

const createRecords = async () => {
  // Create envelope
  const params = {
    RequestItems: {}
  }
  params.RequestItems[ddbTable] = []

  // Add items to batch and write to DDB
  for (let i = 0; i < BATCH_SIZE; i++) {
    params.RequestItems[ddbTable].push({
      PutRequest: {
        Item: {
          ID: Date.now() + i
        }
      }
    })
  }
  await docClient.batchWrite(params).promise()
}

const main = async() => await createRecords()
main()

After running this script, there are 10 items in the DynamoDB table, which are then put into the DynamoDB stream for processing.

10 items in DynamoDB table

The processing Lambda function uses the following code. This contains a constant called FAILED_MESSAGE_NUM to force an error on the message with the corresponding index in the event batch:

exports.handler = async (event) => {
  console.log(JSON.stringify(event, null, 2))
  console.log('Records: ', event.Records.length)
  const FAILED_MESSAGE_NUM = 6
  
  let recordNum = 1
  let batchItemFailures = []

  event.Records.map((record) => {
    const sequenceNumber = record.dynamodb.SequenceNumber
    
    if ( recordNum === FAILED_MESSAGE_NUM ) {
      console.log('Error! ', sequenceNumber)
      throw new Error('kaboom')
    }
    console.log('Success: ', sequenceNumber)
    recordNum++
  })
}

The code uses the DynamoDB item’s sequence number, which is provided in each record of the stream event:

Item sequence number in event
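For orientation, the relevant part of each stream record looks roughly like the following trimmed, illustrative shape; the values are placeholders and fields not used by the example are omitted.

const exampleRecord = {
  eventID: '1',
  eventName: 'INSERT',
  dynamodb: {
    Keys: { ID: { N: '1606159320000' } },
    SequenceNumber: '700000000013395897450',
    StreamViewType: 'NEW_AND_OLD_IMAGES'
  }
}

// The processing function reads record.dynamodb.SequenceNumber
console.log(exampleRecord.dynamodb.SequenceNumber)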

In the default configuration of the event source mapping, the failure of message 6 causes the whole batch to fail. The entire batch is then retried multiple times. This appears in the CloudWatch Logs for the function:

Logs with retried batches

Next, I enable the bisect-on-error feature in the function’s event trigger. The first invocation fails as before but this causes two subsequent invocations with batches of five messages. The original batch is bisected. These batches complete processing successfully.

Logs with bisected batches

Configuring a custom checkpoint

Finally, I enable the custom checkpoint feature. This is configured in the Lambda function console by selecting the “Report batch item failures” check box in the DynamoDB trigger:

Add trigger settings
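If you prefer to configure this outside the console, the same check box maps to the FunctionResponseTypes value on the event source mapping. A minimal sketch with the AWS SDK for JavaScript, assuming the mapping UUID is known:

const AWS = require('aws-sdk')
const lambda = new AWS.Lambda()

const enableBatchItemFailures = async () => {
  await lambda.updateEventSourceMapping({
    UUID: 'your-event-source-mapping-uuid',        // placeholder for your mapping's UUID
    FunctionResponseTypes: ['ReportBatchItemFailures']
  }).promise()
}

enableBatchItemFailures().catch(console.error)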

I update the processing Lambda function with the following code:

exports.handler = async (event) => {
  console.log(JSON.stringify(event, null, 2))
  console.log('Records: ', event.Records.length)
  const FAILED_MESSAGE_NUM = 4
  
  let recordNum = 1
  let sequenceNumber = 0
    
  try {
    event.Records.map((record) => {
      sequenceNumber = record.dynamodb.SequenceNumber
  
      if ( recordNum === FAILED_MESSAGE_NUM ) {
        throw new Error('kaboom')
      }
      console.log('Success: ', sequenceNumber)
      recordNum++
    })
  } catch (err) {
    // Return failed sequence number to the caller
    console.log('Failure: ', sequenceNumber)
    return { "batchItemFailures": [ {"itemIdentifier": sequenceNumber} ]  }
  }
}

In this version of the code, the processing of each message is wrapped in a try…catch block. When processing fails, the function stops processing any remaining messages. It returns the sequence number of the failed message in a JSON object:

{ 
  "batchItemFailures": [ 
    {
      "itemIdentifier": sequenceNumber
    }
  ]
}

The calling service then updates the checkpoint value with the sequence number provided. If the batchItemFailures array is empty, the caller assumes all messages have been processed correctly. If the batchItemFailures array contains multiple items, the lowest sequence number is used as the checkpoint.
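Because the caller checkpoints at the lowest sequence number returned, a variation on the function above is to attempt every record and report each failure rather than stopping at the first one. The following is a sketch only; processRecord is a hypothetical placeholder for your own per-record logic.

exports.handler = async (event) => {
  const batchItemFailures = []

  for (const record of event.Records) {
    const sequenceNumber = record.dynamodb.SequenceNumber
    try {
      await processRecord(record)   // hypothetical per-record processing
    } catch (err) {
      console.log('Failure: ', sequenceNumber)
      batchItemFailures.push({ itemIdentifier: sequenceNumber })
    }
  }

  // An empty array tells the calling service that every record succeeded
  return { batchItemFailures }
}

// Placeholder for illustration only
async function processRecord (record) { /* ... */ }

Note that with this approach the checkpoint still moves to the lowest reported sequence number, so records after the first failure may be delivered again in the next batch.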

In this example, I also modify the FAILED_MESSAGE_NUM constant to 4 in the Lambda function. This causes the fourth message in every batch to throw an error. After adding 10 items to the DynamoDB table, the CloudWatch log for the processing function shows:

Lambda function logs

This is how the stream of 10 messages has been processed using the custom checkpoint:

Custom checkpointing walkthrough

  1. In the first invocation, all 10 messages are in the batch. The fourth message throws an error. The function returns this position as the checkpoint.
  2. In the second invocation, messages 4–10 are in the batch. Message 7 throws an error and its sequence number is returned as the checkpoint.
  3. In the third invocation, the batch contains messages 7–10. Message 10 throws an error and its sequence number is now the returned checkpoint.
  4. The final invocation contains only message 10, which is successfully processed.

Using this approach, subsequent invocations do not receive messages that have been successfully processed previously.

Conclusion

The default behavior for stream processing in Lambda functions enables entire batches of messages to succeed or fail. You can also use batch bisecting functionality to retry batches iteratively if a single message fails. Now with custom checkpoints, you have more control over handling failed messages.

This post explains the three different processing modes and shows example code for handling failed messages. Depending upon your use-case, you can choose the appropriate mode for your workload. This can help reduce unnecessary Lambda invocations and prevent reprocessing of the same messages in batches containing failures.

To learn more about how to use this feature, read the developer documentation. To learn more about building with serverless technology, visit Serverless Land.

Using AWS Lambda for streaming analytics

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/using-aws-lambda-for-streaming-analytics/

AWS Lambda now supports streaming analytics calculations for Amazon Kinesis and Amazon DynamoDB. This allows developers to calculate aggregates in near-real time and pass state across multiple Lambda invocations. This feature provides an alternative way to build analytics in addition to services like Amazon Kinesis Data Analytics.

In this blog post, I explain how this feature works with Kinesis Data Streams and DynamoDB Streams, together with example use-cases.

Overview

For workloads using streaming data, data arrives continuously, often from different sources, and is processed incrementally. Discrete data processing tasks, such as operating on files, have a known beginning and end boundary for the data. For applications with streaming data, the processing function does not know when the data stream starts or ends. Consequently, this type of data is commonly processed in batches or windows.

Before this feature, Lambda-based stream processing was limited to working on the incoming batch of data. For example, in Amazon Kinesis Data Firehose, a Lambda function transforms the current batch of records with no information or state from previous batches. The same applies to processing DynamoDB streams using Lambda functions. This existing approach works well for MapReduce or tasks focused exclusively on the data in the current batch.

Comparing DynamoDB and Kinesis streams

  1. DynamoDB streams invoke a processing Lambda function asynchronously. After processing, the function may then store the results in a downstream service, such as Amazon S3.
  2. Kinesis Data Firehose invokes a transformation Lambda function synchronously, which returns the transformed data back to the service.

This new feature introduces the concept of a tumbling window, which is a fixed-size, non-overlapping time interval of up to 15 minutes. To use this, you specify a tumbling window duration in the event-source mapping between the stream and the Lambda function. When you apply a tumbling window to a stream, items in the stream are grouped by window and sent to the processing Lambda function. The function returns a state value that is passed to the next tumbling window.
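Outside the console, the window duration corresponds to the TumblingWindowInSeconds parameter on the event source mapping. The following is a hedged sketch using the AWS SDK for JavaScript; the function name and stream ARN are placeholders, and a recent SDK version is assumed.

const AWS = require('aws-sdk')
const lambda = new AWS.Lambda()

const createWindowedMapping = async () => {
  await lambda.createEventSourceMapping({
    FunctionName: 'tumbling-window-aggregator',
    EventSourceArn: 'arn:aws:dynamodb:us-east-1:111122223333:table/tumblingWindows/stream/2021-01-01T00:00:00.000',
    StartingPosition: 'LATEST',
    BatchSize: 100,
    TumblingWindowInSeconds: 30   // fixed, non-overlapping 30-second windows
  }).promise()
}

createWindowedMapping().catch(console.error)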

You can use this to calculate aggregates over multiple windows. For example, you can calculate the total value of a data item in a stream using 30-second tumbling windows:

Tumbling windows

  1. Integer data arrives in the stream at irregular time intervals.
  2. The first tumbling window consists of data in the 0–30 second range, passed to the Lambda function. It adds the items and returns the total of 6 as a state value.
  3. The second tumbling window invokes the Lambda function with the state value of 6 and the 30–60 second batch of stream data. This adds the items to the existing total, returning 18.
  4. The third tumbling window invokes the Lambda function with a state value of 18 and the next window of values. The running total is now 28 and returned as the state value.
  5. The fourth tumbling window invokes the Lambda function with a state value of 28 and the 90–120 second batch of data. The final total is 32.

This feature is useful in workloads where you need to calculate aggregates continuously. For example, for a retailer streaming order information from point-of-sale systems, it can generate near-live sales data for downstream reporting. Using Lambda to generate aggregates only requires minimal code, and the function can access other AWS services as needed.

Using tumbling windows with Lambda functions

When you configure an event source mapping between Kinesis or DynamoDB and a Lambda function, use the new setting, Tumbling window duration. This appears in the trigger configuration in the Lambda console:

Trigger configuration

You can also set this value in AWS CloudFormation and AWS SAM templates. After the event source mapping is created, events delivered to the Lambda function have several new attributes:

New attributes in events

These include:

  • Window start and end: the beginning and ending timestamps for the current tumbling window.
  • State: an object containing the state returned from the previous window, which is initially empty. The state object can contain up to 1 MB of data.
  • isFinalInvokeForWindow: indicates if this is the last invocation for the tumbling window. This only occurs once per window period.
  • isWindowTerminatedEarly: a window ends early only if the state exceeds the maximum allowed size of 1 MB.

In any tumbling window, there is a series of Lambda invocations following this pattern:

Tumbling window process in Lambda

  1. The first invocation contains an empty state object in the event. The function returns a state object containing custom attributes that are specific to the custom logic in the aggregation.
  2. The second invocation contains the state object provided by the first Lambda invocation. This function returns an updated state object with new aggregated values. Subsequent invocations follow this same sequence.
  3. The final invocation in the tumbling window has the isFinalInvokeForWindow flag set to true. This contains the state returned by the most recent Lambda invocation. This invocation is responsible for storing the result in S3 or in another data store, such as a DynamoDB table. There is no state returned in this final invocation.

Using tumbling windows with DynamoDB

DynamoDB streams can invoke a Lambda function using tumbling windows, enabling you to generate aggregates per shard. In this example, an ecommerce workload saves orders in a DynamoDB table and uses a tumbling window to calculate the near-real time sales total.

First, I create a DynamoDB table to capture the order data and a second DynamoDB table to store the aggregate calculation. I create a Lambda function with a trigger from the first orders table. The event source mapping is created with a Tumbling window duration of 30 seconds:

DynamoDB trigger configuration

I use the following code in the Lambda function:

const AWS = require('aws-sdk')
AWS.config.update({ region: process.env.AWS_REGION })
const docClient = new AWS.DynamoDB.DocumentClient()
const TableName = 'tumblingWindowsAggregation'

function isEmpty(obj) { return Object.keys(obj).length === 0 }

exports.handler = async (event) => {
    // Save aggregation result in the final invocation
    if (event.isFinalInvokeForWindow) {
        console.log('Final: ', event)
        
        const params = {
          TableName,
          Item: {
            windowEnd: event.window.end,
            windowStart: event.window.start,
            sales: event.state.sales,
            shardId: event.shardId
          }
        }
        return await docClient.put(params).promise()
    }
    console.log(event)
    
    // Create the state object on first invocation or use state passed in
    let state = event.state

    if (isEmpty (state)) {
        state = {
            sales: 0
        }
    }
    console.log('Existing: ', state)

    // Process records with custom aggregation logic

    event.Records.map((item) => {
        // Only processing INSERTs
        if (item.eventName != "INSERT") return
        
        // Add sales to total
        let value = parseFloat(item.dynamodb.NewImage.sales.N)
        console.log('Adding: ', value)
        state.sales += value
    })

    // Return the state for the next invocation
    console.log('Returning state: ', state)
    return { state: state }
}

This function code processes the incoming event to aggregate a sales attribute, and return this aggregated result in a state object. In the final invocation, it stores the aggregated value in another DynamoDB table.

I then use this Node.js script to generate random sample order data:

const AWS = require('aws-sdk')
AWS.config.update({ region: 'us-east-1' })
const docClient = new AWS.DynamoDB.DocumentClient()

const TableName = 'tumblingWindows'
const ITERATIONS = 100
const SLEEP_MS = 100

let totalSales = 0

function sleep(ms) { 
  return new Promise(resolve => setTimeout(resolve, ms));
}

const createSales = async () => {
  for (let i = 0; i < ITERATIONS; i++) {

    let sales = Math.round (parseFloat(100 * Math.random()))
    totalSales += sales
    console.log ({i, sales, totalSales})

    await docClient.put ({
      TableName,
      Item: {
        ID: Date.now().toString(),
        sales,
        timeStamp: new Date().toString()
      }
    }).promise()
    await sleep(SLEEP_MS)
  }
}

const main = async() => {
  await createSales()
  console.log('Total Sales: ', totalSales)
}

main()

Once the script is complete, the console shows the individual order transactions and the total sales:

Script output

After the tumbling window duration is finished, the second DynamoDB table shows the aggregate values calculated and stored by the Lambda function:

Aggregate values in second DynamoDB table

Since aggregation for each shard is independent, the totals are stored by shardId. If I continue to run the test data script, the aggregation function continues to calculate and store more totals per tumbling window period.

Using tumbling windows with Kinesis

Kinesis data streams can also invoke a Lambda function using a tumbling window in a similar way. The biggest difference is that you control how many shards are used in the data stream. Since aggregation occurs per shard, this controls the total number of aggregate results per tumbling window.

Using the same sales example, first I create a Kinesis data stream with one shard. I use the same DynamoDB tables from the previous example, then create a Lambda function with a trigger from the first orders table. The event source mapping is created with a Tumbling window duration of 30 seconds:

Kinesis trigger configuration

I use the following code in the Lambda function, modified to process the incoming Kinesis data event:

const AWS = require('aws-sdk')
AWS.config.update({ region: process.env.AWS_REGION })
const docClient = new AWS.DynamoDB.DocumentClient()
const TableName = 'tumblingWindowsAggregation'

function isEmpty(obj) {
    return Object.keys(obj).length === 0
}

exports.handler = async (event) => {

    // Save aggregation result in the final invocation
    if (event.isFinalInvokeForWindow) {
        console.log('Final: ', event)
        
        const params = {
          TableName,
          Item: {
            windowEnd: event.window.end,
            windowStart: event.window.start,
            sales: event.state.sales,
            shardId: event.shardId
          }
        }
        console.log({ params })
        await docClient.put(params).promise()

    }
    console.log(JSON.stringify(event, null, 2))
    
    // Create the state object on first invocation or use state passed in
    let state = event.state

    if (isEmpty (state)) {
        state = {
            sales: 0
        }
    }
    console.log('Existing: ', state)

    // Process records with custom aggregation logic

    event.Records.map((record) => {
        const payload = Buffer.from(record.kinesis.data, 'base64').toString('ascii')
        const item = JSON.parse(payload).Item

        // Add sales to total
        let value = parseFloat(item.sales)
        console.log('Adding: ', value)
        state.sales += value
    })

    // Return the state for the next invocation
    console.log('Returning state: ', state)
    return { state: state }
}

This function code processes the incoming event in the same way as the previous example. I then use this Node.js script to generate random sample order data, modified to put the data on the Kinesis stream:

const AWS = require('aws-sdk')
AWS.config.update({ region: 'us-east-1' })
const kinesis = new AWS.Kinesis()

const StreamName = 'testStream'
const ITERATIONS = 100
const SLEEP_MS = 10

let totalSales = 0

function sleep(ms) { 
  return new Promise(resolve => setTimeout(resolve, ms));
}

const createSales = async() => {

  for (let i = 0; i < ITERATIONS; i++) {

    let sales = Math.round (parseFloat(100 * Math.random()))
    totalSales += sales
    console.log ({i, sales, totalSales})

    const data = {
      Item: {
        ID: Date.now().toString(),
        sales,
        timeStamp: new Date().toString()
      }
    }

    await kinesis.putRecord({
      Data: Buffer.from(JSON.stringify(data)),
      PartitionKey: 'PK1',
      StreamName
    }).promise()
    await sleep(SLEEP_MS)
  }
}

const main = async() => {
  await createSales()
}

main()

Once the script is complete, the console shows the individual order transactions and the total sales:

Console output

After the tumbling window duration is finished, the second DynamoDB table shows the aggregate values calculated and stored by the Lambda function:

Aggregate values in second DynamoDB table

As there is only one shard in this Kinesis stream, there is only one aggregation value for all the data items in the test.

Conclusion

With tumbling windows, you can calculate aggregate values in near-real time for Kinesis data streams and DynamoDB streams. Unlike existing stream-based invocations, state can now be passed forward between Lambda invocations. This makes it easier to calculate sums, averages, and counts on values across multiple batches of data.

In this post, I walk through an example that aggregates sales data stored in Kinesis and DynamoDB. In each case, I create an aggregation function with an event source mapping that uses the new tumbling window duration attribute. I show how state is passed between invocations and how to persist the aggregated value at the end of the tumbling window.

To learn more about how to use this feature, read the developer documentation. To learn more about building with serverless technology, visit Serverless Land.

Using self-hosted Apache Kafka as an event source for AWS Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/using-self-hosted-apache-kafka-as-an-event-source-for-aws-lambda/

Apache Kafka is an open source event streaming platform used to support workloads such as data pipelines and streaming analytics. It is a distributed streaming platform that is conceptually similar to Amazon Kinesis.

With the launch of Kafka as an event source for Lambda, you can now consume messages from a topic in a Lambda function. This makes it easier to integrate your self-hosted Kafka clusters with downstream serverless workflows.

In this blog post, I explain how to set up an Apache Kafka cluster on Amazon EC2 and configure key elements in the networking configuration. I also show how to create a Lambda function to consume messages from a Kafka topic. Although the process is similar to using Amazon Managed Streaming for Apache Kafka (Amazon MSK) as an event source, there are also some important differences.

Overview

Using Kafka as an event source operates in a similar way to using Amazon SQS or Amazon Kinesis. In all cases, the Lambda service internally polls for new records or messages from the event source, and then synchronously invokes the target Lambda function. Lambda reads the messages in batches and provides the message batches to your function in the event payload.

Lambda is a consumer application for your Kafka topic. It processes records from one or more partitions and sends the payload to the target function. Lambda continues to process batches until there are no more messages in the topic.

Configuring networking for self-hosted Kafka

It’s best practice to deploy the Amazon EC2 instances running Kafka in private subnets. For the Lambda function to poll the Kafka instances, you must ensure that there is a NAT Gateway running in a public subnet of the VPC.

It’s possible to route the traffic to a single NAT Gateway in one AZ for test and development workloads. For redundancy in production workloads, it’s recommended that there is one NAT Gateway available in each Availability Zone. This walkthrough creates the following architecture:

Self-hosted Kafka architecture

  1. Deploy a VPC with public and private subnets and a NAT Gateway that enables internet access. To configure this infrastructure with AWS CloudFormation, deploy this template.
  2. From the VPC console, edit the default security group created by this template to provide inbound access to the following ports:
    • Custom TCP: ports 2888–3888 from all sources.
    • SSH (port 22), restricted to your own IP address.
    • Custom TCP: port 2181 from all sources.
    • Custom TCP: port 9092 from all sources.
    • All traffic from the same security group identifier.

Security Group configuration

Deploying the EC2 instances and installing Kafka

Next, you deploy the EC2 instances using this network configuration and install the Kafka application:

  1. From the EC2 console, deploy an instance running Ubuntu Server 18.04 LTS. Ensure that there is one instance in each private subnet, in different Availability Zones. Assign the default security group configured by the template.
  2. Next, deploy another EC2 instance in either of the public subnets. This is a bastion host used to access the private instances. Assign the default security group configured by the template.
  3. Connect to the bastion host, then SSH to the first private EC2 instance using the method for your preferred operating system. This post explains different methods. Repeat the process in another terminal for the second private instance.
  4. On each instance, install Java:
    sudo add-apt-repository ppa:webupd8team/java
    sudo apt update
    sudo apt install openjdk-8-jdk
    java -version
  5. On each instance, install Kafka:
    wget http://www-us.apache.org/dist/kafka/2.3.1/kafka_2.12-2.3.1.tgz
    tar xzf kafka_2.12-2.3.1.tgz
    ln -s kafka_2.12-2.3.1 kafka

Configure and start Zookeeper

Configure and start the Zookeeper service that manages the Kafka brokers:

  1. On the first instance, configure the Zookeeper ID:
    cd kafka
    mkdir /tmp/zookeeper
    touch /tmp/zookeeper/myid
    echo "1" >> /tmp/zookeeper/myid
  2. Repeat the process on the second instance, using a different ID value:
    cd kafka
    mkdir /tmp/zookeeper
    touch /tmp/zookeeper/myid
    echo "2" >> /tmp/zookeeper/myid
  3. On the first instance, edit the config/zookeeper.properties file, adding the private IP address of the second instance:
    initLimit=5
    syncLimit=2
    tickTime=2000
    # list of servers: <ip>:2888:3888
    server.1=0.0.0.0:2888:3888 
    server.2=<<IP address of second instance>>:2888:3888
    
  4. On the second instance, edit the config/zookeeper.properties file, adding the private IP address of the first instance:
    initLimit=5
    syncLimit=2
    tickTime=2000
    # list of servers: <ip>:2888:3888
    server.1=<<IP address of first instance>>:2888:3888 
    server.2=0.0.0.0:2888:3888
  5. On each instance, start Zookeeper:
    bin/zookeeper-server-start.sh config/zookeeper.properties

Configure and start Kafka

Configure and start the Kafka broker:

  1. On the first instance, edit the config/server.properties file:
    broker.id=1
    zookeeper.connect=0.0.0.0:2181,<<IP address of second instance>>:2181
  2. On the second instance, edit the config/server.properties file:
    broker.id=2
    zookeeper.connect=0.0.0.0:2181,<<IP address of first instance>>:2181
  3. Start Kafka on each instance:
    bin/kafka-server-start.sh config/server.properties

At the end of this process, Zookeeper and Kafka are running on both instances. If you use separate terminals, it looks like this:

Zookeeper and Kafka terminals

Configuring and publishing to a topic

Kafka organizes channels of messages around topics, which are virtual groups of one or many partitions across Kafka brokers in a cluster. Multiple producers can send messages to Kafka topics, which can then be routed to and processed by multiple consumers. Producers publish to the tail of a topic and consumers read the topic at their own pace.

From either of the two instances:

  1. Create a new topic called test:
    bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 2 --partitions 2 --topic test
  2. Start a producer:
    bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
  3. Enter test messages to check for successful publication.

At this point, you can successfully publish messages to your self-hosted Kafka cluster. Next, you configure a Lambda function as a consumer for the test topic on this cluster.

Configuring the Lambda function and event source mapping

You can create the Lambda event source mapping using the AWS CLI or AWS SDK, which provide the CreateEventSourceMapping API. In this walkthrough, you use the AWS Management Console to create the event source mapping.
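For completeness, here is a hedged sketch of an equivalent mapping created with the AWS SDK for JavaScript instead of the console. The broker addresses, subnet IDs, and security group ID are placeholders, and a recent SDK version that includes the self-managed Kafka parameters is assumed.

const AWS = require('aws-sdk')
const lambda = new AWS.Lambda()

const createKafkaMapping = async () => {
  await lambda.createEventSourceMapping({
    FunctionName: 'kafka-consumer-function',
    Topics: ['test'],
    StartingPosition: 'LATEST',
    SelfManagedEventSource: {
      Endpoints: {
        // Private IPv4 DNS names of the two broker instances, port 9092
        KAFKA_BOOTSTRAP_SERVERS: [
          'ip-10-0-1-10.ec2.internal:9092',
          'ip-10-0-2-10.ec2.internal:9092'
        ]
      }
    },
    SourceAccessConfigurations: [
      { Type: 'VPC_SUBNET', URI: 'subnet:subnet-0123456789abcdef0' },
      { Type: 'VPC_SUBNET', URI: 'subnet:subnet-0123456789abcdef1' },
      { Type: 'VPC_SECURITY_GROUP', URI: 'security_group:sg-0123456789abcdef0' }
    ]
  }).promise()
}

createKafkaMapping().catch(console.error)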

Create a Lambda function that uses the self-hosted cluster and topic as an event source:

  1. From the Lambda console, select Create function.
  2. Enter a function name, and select Node.js 12.x as the runtime.
  3. Select the Permissions tab, and select the role name in the Execution role panel to open the IAM console.
  4. Choose Add inline policy and create a new policy called SelfHostedKafkaPolicy with the following permissions. Replace the resource example with the ARNs of your instances:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:CreateNetworkInterface",
                    "ec2:DescribeNetworkInterfaces",
                    "ec2:DescribeVpcs",
                    "ec2:DeleteNetworkInterface",
                    "ec2:DescribeSubnets",
                    "ec2:DescribeSecurityGroups",
                    "logs:CreateLogGroup",
                    "logs:CreateLogStream",
                    "logs:PutLogEvents"
                ],
                "Resource": " arn:aws:ec2:<REGION>:<ACCOUNT_ID>:instance/<instance-id>"
            }
        ]
    }
    

    Create policy

  5. Choose Create policy and ensure that the policy appears in Permissions policies.
  6. Back in the Lambda function, select the Configuration tab. In the Designer panel, choose Add trigger.
  7. In the dropdown, select Apache Kafka:
    • For Bootstrap servers, add each of the two instances' private IPv4 DNS addresses with port 9092 appended.
    • For Topic name, enter ‘test’.
    • Enter your preferred batch size and starting position values (see this documentation for more information).
    • For VPC, select the VPC created by the template.
    • For VPC subnets, select the two private subnets.
    • For VPC security groups, select the default security group.
    • Choose Add.

Add trigger configuration

The trigger’s status changes to Enabled in the Lambda console after a few seconds. It then takes several minutes for the trigger to receive messages from the Kafka cluster.

Testing the Lambda function

At this point, you have created a VPC with two private and public subnets and a NAT Gateway. You have created a Kafka cluster on two EC2 instances in private subnets. You set up a target Lambda function with the necessary IAM permissions. Next, you publish messages to the test topic in Kafka and see the resulting invocation in the logs for the Lambda function.

  1. In the Function code panel, replace the contents of index.js with the following code and choose Deploy:
    exports.handler = async (event) => {
        // Iterate through keys
        for (let key in event.records) {
          console.log('Key: ', key)
          // Iterate through records
          event.records[key].map((record) => {
            console.log('Record: ', record)
            // Decode base64
            const msg = Buffer.from(record.value, 'base64').toString()
            console.log('Message:', msg)
          }) 
        }
    }
  2. Back in the terminal with the producer script running, enter a test message.
  3. In the Lambda function console, select the Monitoring tab then choose View logs in CloudWatch. In the latest log stream, you see the original event and the decoded message.

Using Lambda as event source

The Lambda function target in the event source mapping does not need to be connected to a VPC to receive messages from the private instance hosting Kafka. However, you must provide details of the VPC, subnets, and security groups in the event source mapping for the Kafka cluster.

The Lambda function must have permission to describe VPCs and security groups, and manage elastic network interfaces. These execution role permissions are:

  • ec2:CreateNetworkInterface
  • ec2:DescribeNetworkInterfaces
  • ec2:DescribeVpcs
  • ec2:DeleteNetworkInterface
  • ec2:DescribeSubnets
  • ec2:DescribeSecurityGroups

The event payload for the Lambda function contains an array of records. Each array item contains details of the topic and Kafka partition identifier, together with a timestamp and base64 encoded message:

Event payload example

There is an important difference in the way the Lambda service connects to the self-hosted Kafka cluster compared with Amazon MSK. MSK encrypts data in transit by default so the broker connection defaults to using TLS. With a self-hosted cluster, TLS authentication is not supported when using the Apache Kafka event source. Instead, if accessing brokers over the internet, the event source uses SASL/SCRAM authentication, which can be configured in the event source mapping:

SASL/SCRAM configuration

To learn how to configure SASL/SCRAM authentication for your self-hosted Kafka cluster, see this documentation.

Conclusion

Lambda now supports self-hosted Kafka as an event source so you can invoke Lambda functions from messages in Kafka topics to integrate into other downstream serverless workflows.

This post shows how to configure a self-hosted Kafka cluster on EC2 and set up the network configuration. I also cover how to set up the event source mapping in Lambda and test a function to decode the messages sent from Kafka.

To learn more about how to use this feature, read the documentation. For more serverless learning resource, visit Serverless Land.

Use Macie to discover sensitive data as part of automated data pipelines

Post Syndicated from Brandon Wu original https://aws.amazon.com/blogs/security/use-macie-to-discover-sensitive-data-as-part-of-automated-data-pipelines/

Data is a crucial part of every business and is used for strategic decision making at all levels of an organization. To extract value from their data more quickly, Amazon Web Services (AWS) customers are building automated data pipelines—from data ingestion to transformation and analytics. As part of this process, my customers often ask how to prevent sensitive data, such as personally identifiable information, from being ingested into data lakes when it’s not needed. They highlight that this challenge is compounded when ingesting unstructured data—such as files from process reporting, text files from chat transcripts, and emails. They also mention that identifying sensitive data inadvertently stored in structured data fields—such as in a comment field stored in a database—is also a challenge.

In this post, I show you how to integrate Amazon Macie as part of the data ingestion step in your data pipeline. This solution provides an additional checkpoint that sensitive data has been appropriately redacted or tokenized prior to ingestion. Macie is a fully managed data security and privacy service that uses machine learning and pattern matching to discover sensitive data in AWS.

When Macie discovers sensitive data, the solution notifies an administrator to review the data and decide whether to allow the data pipeline to continue ingesting the objects. If allowed, the objects will be tagged with an Amazon Simple Storage Service (Amazon S3) object tag to identify that sensitive data was found in the object before progressing to the next stage of the pipeline.
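As a sketch of that tagging step, the solution's Lambda code might call the S3 PutObjectTagging API along these lines; the bucket name, object key, and tag key shown here are hypothetical and are not taken from the application template.

const AWS = require('aws-sdk')
const s3 = new AWS.S3()

const tagObject = async () => {
  await s3.putObjectTagging({
    Bucket: 'raw-data-bucket',                         // placeholder bucket name
    Key: 'ingest/chat-transcript-001.txt',             // placeholder object key
    Tagging: {
      TagSet: [
        { Key: 'SensitiveDataFound', Value: 'true' }   // hypothetical tag key and value
      ]
    }
  }).promise()
}

tagObject().catch(console.error)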

This combination of automation and manual review helps reduce the risk that sensitive data—such as personally identifiable information—will be ingested into a data lake. This solution can be extended to fit your use case and workflows. For example, you can define custom data identifiers as part of your scans, add additional validation steps, create Macie suppression rules to archive findings automatically, or only request manual approvals for findings that meet certain criteria (such as high severity findings).

Solution overview

Many of my customers are building serverless data lakes with Amazon S3 as the primary data store. Their data pipelines commonly use different S3 buckets at each stage of the pipeline. I refer to the S3 bucket for the first stage of ingestion as the raw data bucket. A typical pipeline might have separate buckets for raw, curated, and processed data representing different stages as part of their data analytics pipeline.

Typically, customers will perform validation and clean their data before moving it to a raw data zone. This solution adds validation steps to that pipeline after preliminary quality checks and data cleaning is performed, noted in blue (in layer 3) of Figure 1. The layers outlined in the pipeline are:

  1. Ingestion – Brings data into the data lake.
  2. Storage – Provides durable, scalable, and secure components to store the data—typically using S3 buckets.
  3. Processing – Transforms data into a consumable state through data validation, cleanup, normalization, transformation, and enrichment. This processing layer is where the additional validation steps are added to identify instances of sensitive data that haven’t been appropriately redacted or tokenized prior to consumption.
  4. Consumption – Provides tools to gain insights from the data in the data lake.

 

Figure 1: Data pipeline with sensitive data scan

Figure 1: Data pipeline with sensitive data scan

The application runs on a scheduled basis (four times a day, every 6 hours by default) to process data that is added to the raw data S3 bucket. You can customize the application to perform a sensitive data discovery scan during any stage of the pipeline. Because most customers do their extract, transform, and load (ETL) daily, the application scans for sensitive data on a scheduled basis before any crawler jobs run to catalog the data and after typical validation and data redaction or tokenization processes complete.
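The schedule itself is defined in the application's deployment template, but as an illustration, a six-hour cadence corresponds to an EventBridge rule with a rate expression. The rule name, state machine ARN, and role ARN below are placeholders rather than values from the application.

const AWS = require('aws-sdk')
const eventbridge = new AWS.EventBridge()

const scheduleScan = async () => {
  await eventbridge.putRule({
    Name: 'sensitive-data-scan-schedule',
    ScheduleExpression: 'rate(6 hours)',   // matches the default cadence described above
    State: 'ENABLED'
  }).promise()

  await eventbridge.putTargets({
    Rule: 'sensitive-data-scan-schedule',
    Targets: [{
      Id: 'sensitive-data-scan-workflow',
      Arn: 'arn:aws:states:us-east-1:111122223333:stateMachine:SensitiveDataScan',
      RoleArn: 'arn:aws:iam::111122223333:role/eventbridge-invoke-stepfunctions'
    }]
  }).promise()
}

scheduleScan().catch(console.error)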

You can expect that this additional validation will add 5–10 minutes to your pipeline execution at a minimum. The validation processing time will scale linearly based on object size, but there is a start-up time per job that is constant.

If sensitive data is found in the objects, an email is sent to the designated administrator requesting an approval decision, which they indicate by selecting the link corresponding to their decision to approve or deny the next step. In most cases, the reviewer will choose to adjust the sensitive data cleanup processes to remove the sensitive data, deny the progression of the files, and re-ingest the files in the pipeline.

Additional considerations for deploying this application for regular use are discussed at the end of the blog post.

Application components

The following resources are created as part of the application:

Note: the application uses various AWS services, and there are costs associated with these resources after the Free Tier usage. See AWS Pricing for details. The primary drivers of the solution cost will be the amount of data ingested through the pipeline, both for Amazon S3 storage and data processed for sensitive data discovery with Macie.

The architecture of the application is shown in Figure 2 and described in the text that follows.
 

Figure 2: Application architecture and logic

Application logic

  1. Objects are uploaded to the raw data S3 bucket as part of the data ingestion process.
  2. A scheduled EventBridge rule runs the sensitive data scan Step Functions workflow.
  3. The triggerMacieScan Lambda function moves objects from the raw data S3 bucket to the scan stage S3 bucket.
  4. The triggerMacieScan Lambda function creates a Macie sensitive data discovery job on the scan stage S3 bucket.
  5. The checkMacieStatus Lambda function checks the status of the Macie sensitive data discovery job (a minimal code sketch of steps 4 and 5 follows this list).
  6. The isMacieStatusCompleteChoice Step Functions Choice state checks whether the Macie sensitive data discovery job is complete.
    1. If yes, the getMacieFindingsCount Lambda function runs.
    2. If no, the Step Functions Wait state waits 60 seconds and then restarts Step 5.
  7. The getMacieFindingsCount Lambda function counts all of the findings from the Macie sensitive data discovery job.
  8. The isSensitiveDataFound Step Functions Choice state checks whether sensitive data was found in the Macie sensitive data discovery job.
    1. If sensitive data was discovered, run the triggerManualApproval Lambda function.
    2. If no sensitive data was discovered, run the moveAllScanStageS3Files Lambda function.
  9. The moveAllScanStageS3Files Lambda function moves all of the objects from the scan stage S3 bucket to the scanned data S3 bucket.
  10. The triggerManualApproval Lambda function tags and moves objects with sensitive data discovered to the manual review S3 bucket, and moves objects with no sensitive data discovered to the scanned data S3 bucket. The function then publishes a message to the ApprovalRequestNotification Amazon SNS topic to signal that manual review is required.
  11. An email is sent to the address subscribed to the ApprovalRequestNotification Amazon SNS topic (specified during the application deployment), giving the manual review user the option to Approve or Deny pipeline ingestion for these objects.
  12. The manual review user assesses the objects with sensitive data in the manual review S3 bucket and selects the Approve or Deny link in the email.
  13. The decision request is sent from Amazon API Gateway to the receiveApprovalDecision Lambda function.
  14. The manualApprovalChoice Step Functions Choice state checks the decision from the manual review user.
    1. If denied, run the deleteManualReviewS3Files Lambda function.
    2. If approved, run the moveToScannedDataS3Files Lambda function.
  15. The deleteManualReviewS3Files Lambda function deletes the objects from the manual review S3 bucket.
  16. The moveToScannedDataS3Files Lambda function moves the objects from the manual review S3 bucket to the scanned data S3 bucket.
  17. The next step of the automated data pipeline will begin with the objects in the scanned data S3 bucket.
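For illustration, here is a minimal sketch of how the triggerMacieScan and checkMacieStatus steps (steps 4 and 5 above) might call the Macie API with boto3. This is not the exact code from the sample project; the function names, variable names, and job name are illustrative.

    import uuid
    import boto3

    macie = boto3.client("macie2")
    sts = boto3.client("sts")

    def start_discovery_job(scan_stage_bucket):
        # Create a one-time Macie sensitive data discovery job scoped to the scan stage bucket.
        account_id = sts.get_caller_identity()["Account"]
        response = macie.create_classification_job(
            clientToken=str(uuid.uuid4()),  # idempotency token in case the step is retried
            jobType="ONE_TIME",
            name="data-pipeline-scan-" + str(uuid.uuid4()),
            s3JobDefinition={
                "bucketDefinitions": [
                    {"accountId": account_id, "buckets": [scan_stage_bucket]}
                ]
            },
        )
        return response["jobId"]

    def is_job_complete(job_id):
        # Report whether the discovery job has finished; the Wait/Choice loop polls this.
        status = macie.describe_classification_job(jobId=job_id)["jobStatus"]
        return status == "COMPLETE"

A status value like the one returned by is_job_complete is what the isMacieStatusCompleteChoice state would evaluate before either continuing or waiting for another polling interval.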

Prerequisites

For this application, you need an AWS account and a development environment with the AWS CLI and AWS SAM CLI installed.

You can use AWS Cloud9 to deploy the application. AWS Cloud9 includes the AWS CLI and AWS SAM CLI to simplify setting up your development environment.

Deploy the application with AWS SAM CLI

You can deploy this application using the AWS SAM CLI. AWS SAM is an open-source framework that you can use to build serverless applications on AWS, and it uses AWS CloudFormation as the underlying deployment mechanism.

To deploy the application

  1. Initialize the serverless application using the AWS SAM CLI from the GitHub project in the aws-samples repository. This clones the project locally, including the source code for the Lambda functions, the Step Functions state machine definition file, and the AWS SAM template. On the command line, run the following:
    sam init --location gh:aws-samples/amazonmacie-datapipeline-scan
    

    Alternatively, you can clone the GitHub project directly.

  2. Deploy your application to your AWS account. On the command line, run the following:
    sam deploy --guided
    

    Complete the prompts during the guided interactive deployment. The first deployment prompt is shown in the following example.

    Configuring SAM deploy
    ======================
    
            Looking for config file [samconfig.toml] :  Found
            Reading default arguments  :  Success
    
            Setting default arguments for 'sam deploy'
            =========================================
            Stack Name [maciepipelinescan]:
    

  3. Settings:
    • Stack Name – Name of the CloudFormation stack to be created.
    • AWS Region – Region to deploy the application to—for example, us-west-2, eu-west-1, or ap-southeast-1. This application was tested in the us-west-2 and ap-southeast-1 Regions. Before selecting a Region, verify that the services you need (for example, Macie and Step Functions) are available there.
    • Parameter StepFunctionName – Name of the Step Functions state machine to be created—for example, maciepipelinescanstatemachine.
    • Parameter BucketNamePrefix – Prefix to apply to the S3 buckets to be created (S3 bucket names are globally unique, so choosing a random prefix helps ensure uniqueness).
    • Parameter ApprovalEmailDestination – Email address to receive the manual review notification.
    • Parameter EnableMacie – Whether Macie needs to be enabled in your account or Region. Select yes if you need Macie to be enabled for you as part of this template; select no if Macie is already enabled.
  4. Confirm changes and provide approval for AWS SAM CLI to deploy the resources to your AWS account by responding y to prompts, as shown in the following example. You can accept the defaults for the SAM configuration file and SAM configuration environment prompts.
    #Shows you resources changes to be deployed and require a 'Y' to initiate deploy
    Confirm changes before deploy [y/N]: y
    #SAM needs permission to be able to create roles to connect to the resources in your template
    Allow SAM CLI IAM role creation [Y/n]: y
    ReceiveApprovalDecisionAPI may not have authorization defined, Is this okay? [y/N]: y
    ReceiveApprovalDecisionAPI may not have authorization defined, Is this okay? [y/N]: y
    Save arguments to configuration file [Y/n]: y
    SAM configuration file [samconfig.toml]: 
    SAM configuration environment [default]:
    

    Note: This application deploys an Amazon API Gateway with two REST API resources without authorization defined to receive the decision from the manual review step. You will be prompted to accept each resource without authorization. A token (Step Functions taskToken) is used to authenticate the requests.

  5. This creates an AWS CloudFormation changeset. When changeset creation is complete, you must provide a final confirmation of y at the Deploy this changeset? [y/N] prompt, as shown in the following example.
    Changeset created successfully. arn:aws:cloudformation:ap-southeast-1:XXXXXXXXXXXX:changeSet/samcli-deploy1605213119/db681961-3635-4305-b1c7-dcc754c7XXXX
    
    
    Previewing CloudFormation changeset before deployment
    ======================================================
    Deploy this changeset? [y/N]:
    

Your application is deployed to your account using AWS CloudFormation. You can track the deployment events in the command prompt or via the AWS CloudFormation console.

After the application deployment is complete, you must confirm the subscription to the Amazon SNS topic. An email will be sent to the email address entered in Step 3 with a link that you need to select to confirm the subscription. This confirmation provides opt-in consent for AWS to send emails to you via the specified Amazon SNS topic. The emails are notifications that potentially sensitive data was found and needs an approval decision. If you don’t see the verification email, be sure to check your spam folder.

Test the application

The application uses an EventBridge scheduled rule to start the sensitive data scan workflow, which runs every 6 hours. You can manually start an execution of the workflow to verify that it’s working. To test the function, you will need a file that contains data that matches your rules for sensitive data. For example, it is easy to create a spreadsheet, document, or text file that contains names, addresses, and numbers formatted like credit card numbers. You can also use this generated sample data to test Macie.

I will test by uploading a file to the S3 bucket using the AWS Management Console. Copying objects from the command line also works; an example command is shown after the upload step.

Upload test objects to the S3 bucket

  1. Navigate to the Amazon S3 console and upload one or more test objects to the <BucketNamePrefix>-data-pipeline-raw bucket. <BucketNamePrefix> is the prefix you entered when deploying the application in the AWS SAM CLI prompts. You can use any objects as long as they’re a supported file type for Amazon Macie. I suggest uploading multiple objects, some with and some without sensitive data, in order to see how the workflow processes each.
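If you prefer the command line, a copy similar to the following uploads a folder of test files (the local path is a placeholder, and <BucketNamePrefix> is the prefix you chose at deployment):

    aws s3 cp ./sample-data/ s3://<BucketNamePrefix>-data-pipeline-raw/ --recursive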

Start the Scan State Machine

  1. Navigate to the Step Functions state machines console. If you don’t see your state machine, make sure you’re connected to the same Region that you deployed your application to.
  2. Choose the state machine you created using the AWS SAM CLI as seen in Figure 3. The example state machine is maciepipelinescanstatemachine, but you might have used a different name in your deployment.
     
    Figure 3: AWS Step Functions state machines console

  3. Select the Start execution button and copy the value from the Enter an execution name – optional box. Change the Input – optional value, replacing <execution id> with the value you just copied, as follows:
    {
        "id": "<execution id>"
    }
    

    In my example, the <execution id> is fa985a4f-866b-b58b-d91b-8a47d068aa0c, taken from the Enter an execution name – optional box as shown in Figure 4. You can choose a different ID value if you prefer. This ID is used by the workflow to tag the objects being processed so that only objects that have been scanned continue through the pipeline. When the EventBridge scheduled rule starts the workflow, an ID is included in the input to the Step Functions workflow. Then select Start execution again. (A command line alternative is shown after this procedure.)
     

    Figure 4: New execution dialog box

  4. You can see the status of your workflow execution in the Graph inspector as shown in Figure 5. In the figure, the workflow is at the pollForCompletionWait step.
     
    Figure 5: AWS Step Functions graph inspector
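As an alternative to the console, you can start an execution from the AWS CLI. The state machine ARN and ID shown here are illustrative; substitute your own values:

    aws stepfunctions start-execution \
      --state-machine-arn arn:aws:states:us-west-2:111122223333:stateMachine:maciepipelinescanstatemachine \
      --input '{"id": "test-scan-001"}'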

The sensitive data discovery job should run for about five to ten minutes. Jobs scale linearly based on object size, but there is a constant start-up time per job. If sensitive data is found in the objects uploaded to the <BucketNamePrefix>-data-pipeline-raw S3 bucket, an email is sent to the address provided during the AWS SAM deployment step, notifying the recipient that an approval decision is needed. The recipient approves or denies the next step by selecting the corresponding link in the email, as shown in Figure 6.
 

Figure 6: Sensitive data identified email

When you receive this notification, you can investigate the findings by reviewing the objects in the <BucketNamePrefix>-data-pipeline-manual-review S3 bucket. Based on your review, you can either apply remediation steps to remove any sensitive data or allow the data to proceed to the next step of the data ingestion pipeline. You should define a standard response process for the discovery of sensitive data in the data pipeline. Common remediation steps include reviewing the files for sensitive data, deleting the files that you do not want to progress, and updating the ETL process to redact or tokenize sensitive data when re-ingesting the files into the pipeline. When you re-ingest the files without sensitive data, they will not be flagged by Macie.

The workflow performs the following (a code sketch of the approve path follows this list):

  • If you select Approve, the files are moved to the <BucketNamePrefix>-data-pipeline-scanned-data S3 bucket with an Amazon S3 SensitiveDataFound object tag with a value of true.
  • If you select Deny, the files are deleted from the <BucketNamePrefix>-data-pipeline-manual-review S3 bucket.
  • If no action is taken, the Step Functions workflow execution times out after five days, and the files are automatically deleted from the <BucketNamePrefix>-data-pipeline-manual-review S3 bucket after 10 days.
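To illustrate the approve path, here is a minimal sketch of how an object might be copied to the scanned data bucket with the SensitiveDataFound tag and then removed from the manual review bucket. This is not the exact code from the sample project; bucket names and keys are supplied by the caller.

    import boto3

    s3 = boto3.client("s3")

    def approve_object(manual_review_bucket, scanned_data_bucket, key):
        # Copy the object to the scanned data bucket and apply the SensitiveDataFound tag.
        s3.copy_object(
            Bucket=scanned_data_bucket,
            Key=key,
            CopySource={"Bucket": manual_review_bucket, "Key": key},
            Tagging="SensitiveDataFound=true",
            TaggingDirective="REPLACE",  # apply the new tag set instead of copying the source tags
        )
        # Remove the original from the manual review bucket.
        s3.delete_object(Bucket=manual_review_bucket, Key=key)

Using copy_object with TaggingDirective set to REPLACE applies the tag on the destination object in the same call as the copy.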

Clean up the application

You’ve successfully deployed and tested the sensitive data pipeline scan workflow. To avoid ongoing charges for the resources you created, delete all associated resources by deleting the CloudFormation stack. Before you can delete the CloudFormation stack, you must first empty the S3 buckets created for the application (example CLI commands follow the procedure below).

To delete the application

  1. Empty the S3 buckets created by this application (<BucketNamePrefix>-data-pipeline-raw, <BucketNamePrefix>-data-pipeline-scan-stage, <BucketNamePrefix>-data-pipeline-manual-review, and <BucketNamePrefix>-data-pipeline-scanned-data).
  2. Delete the CloudFormation stack used to deploy the application.
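If you prefer the command line, emptying a bucket and deleting the stack look similar to the following; repeat the first command for each bucket, and use the stack name you chose during sam deploy:

    aws s3 rm s3://<BucketNamePrefix>-data-pipeline-raw --recursive
    aws cloudformation delete-stack --stack-name maciepipelinescan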

Considerations for regular use

Before using this application in a production data pipeline, you will need to consider some practical matters. First, the notification mechanism used when sensitive data is identified is email, and email doesn’t scale well; you should expand this solution to integrate with your ticketing or workflow management system. If you choose to use email, subscribe a mailing list so that the work of reviewing and responding to alerts is shared across a team.

Second, the application runs on a schedule (every 6 hours by default). You should consider starting the workflow when your preliminary validations have completed and the data is ready for a sensitive data scan as part of your pipeline. You can modify the EventBridge rule to run in response to an Amazon EventBridge event instead of on a schedule.
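For example, you can change the schedule with a single CLI call, or replace the schedule expression with an event pattern that matches an event from the preceding pipeline stage. The rule name here is a placeholder; use the name of the rule created by the deployment:

    # Run the scan once per day instead of every 6 hours
    aws events put-rule --name <data-pipeline-scan-schedule-rule> --schedule-expression "rate(1 day)"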

Third, the application currently uses a 60 second Step Functions Wait state when polling for Macie discovery job completion. In real-world scenarios, the discovery scan will take at least 10 minutes and often considerably longer. You should evaluate the typical discovery job duration for your workload and tune the polling period accordingly. This helps reduce costs from running the Lambda functions and storing logs in CloudWatch Logs. The polling period is defined in the Step Functions state machine definition file (macie_pipeline_scan.asl.json) under the pollForCompletionWait state.
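The relevant state in the definition looks similar to the following Amazon States Language fragment. The state names here mirror the steps described earlier and may differ slightly from the actual definition file:

    "pollForCompletionWait": {
      "Type": "Wait",
      "Seconds": 60,
      "Next": "checkMacieStatus"
    }

Increasing Seconds, or adding a longer initial wait before the first status check, reduces the number of polling invocations for long-running jobs.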

Fourth, the application currently doesn’t account for false positives in the sensitive data discovery job results, and it will progress or delete all identified objects based on the reviewer’s decision. You should consider expanding the application to handle false positives through automation rather than manual review and intervention (such as deleting the files from the manual review bucket or removing the sensitive data tags that were applied).

Last, the solution stops the ingestion of a subset of objects into your pipeline. This behavior is similar to other validation and data quality checks that most customers perform as part of the data pipeline. However, you should test to ensure that this does not cause unexpected outcomes, and address any that occur in your downstream application logic.

Conclusion

In this post, I showed you how to integrate sensitive data discovery using Macie as an additional validation step in an automated data pipeline. You’ve reviewed the components of the application, deployed it using the AWS SAM CLI, tested to validate that the application functions as expected, and cleaned up by removing deployed resources.

You now know how to integrate sensitive data scanning into your ETL pipeline. You can use automation and, where required, manual review to help reduce the risk of sensitive data, such as personally identifiable information, being inadvertently ingested into a data lake. You can take this application and customize it to fit your use case and workflows, such as using custom data identifiers as part of your scans, adding additional validation steps, creating Macie suppression rules to archive findings automatically, or requesting manual approvals only for findings that meet certain criteria (such as high severity findings).

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Macie forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Brandon Wu

Brandon is a security solutions architect helping financial services organizations secure their critical workloads on AWS. In his spare time, he enjoys exploring the outdoors and experimenting in the kitchen.

Using Amazon CloudWatch Lambda Insights to Improve Operational Visibility

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/using-amazon-cloudwatch-lambda-insights-to-improve-operational-visibility/

To balance costs while meeting the service levels their business requires, some customers continuously monitor and optimize their AWS Lambda functions. They collect and analyze metrics and logs to monitor performance and to isolate errors for troubleshooting. They also seek to right-size function configurations by measuring function duration, CPU usage, and memory allocation. Using various tools and sources of data to do this can be time-consuming, and some customers go so far as to build their own customized dashboards to surface and analyze this data.

We announced Amazon CloudWatch Lambda Insights as a public preview this past October for customers looking to gain deeper operational oversight and visibility into the behavior of their Lambda functions. Today, I’m pleased to announce that CloudWatch Lambda Insights is now generally available. CloudWatch Lambda Insights provides clearer and simpler operational visibility of your functions by automatically collating and summarizing Lambda performance metrics, errors, and logs in prebuilt dashboards, saving you from time-consuming, manual work.

Once enabled on your functions, CloudWatch Lambda Insights automatically starts collecting and summarizing performance metrics and logs, and, from a convenient dashboard, provides you with a one-click drill-down into metrics and errors for Lambda function requests, simplifying analysis and troubleshooting.

Exploring CloudWatch Lambda Insights
To get started, I need to enable Lambda Insights on my functions. In the Lambda console, I navigate to my list of functions, and then select the function I want to enable for Lambda Insights by clicking on its name. From the function’s configuration view I then scroll to the Monitoring tools panel, click Edit, enable Enhanced monitoring, and click Save. If you want to enable enhanced monitoring for many functions, you may find it more convenient to use the AWS Command Line Interface (AWS CLI), AWS Tools for PowerShell, or AWS CloudFormation instead. Note that once enhanced monitoring has been enabled, it can take a few minutes before data begins to surface in CloudWatch.
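For example, enabling Lambda Insights from the CLI means attaching the Lambda Insights extension layer to the function and granting its execution role the managed policy. The function name, role name, Region, and layer version shown here are placeholders; layer versions vary by Region, so check the Lambda Insights documentation for the current ARN:

    aws lambda update-function-configuration \
      --function-name <my-function> \
      --layers "arn:aws:lambda:us-west-2:580247275435:layer:LambdaInsightsExtension:14"

    aws iam attach-role-policy \
      --role-name <my-function-role> \
      --policy-arn arn:aws:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy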

Screenshot showing enabling of Lambda Insights

In the Amazon CloudWatch Console, I start by selecting Performance monitoring beneath Lambda Insights in the navigation panel. This takes me to the Multi-function view. Metrics for all functions on which I have enabled Lambda Insights are graphed in the view. At the foot of the page there’s also a table listing the functions, summarizing some of the data in the graphs and adding Cold starts. The table gives me the ability to sort the data based on the metric I’m interested in.

Screenshot of metric graphs on the Lambda Insights Multi-function view

Screenshot of the Lambda Insights Multi-function view summary list

An interesting graph on this page, especially if you are trying to balance cost with performance, is Function Cost. This graph shows the direct cost of your functions in terms of megabyte milliseconds (MB-MS), which is how Lambda computes the financial charge of a function’s invocation. Hovering over the graph at a particular point in time shows more details.

Screenshot of function cost graph

Let’s examine my ExpensiveFunction further. Moving to the summary list at the bottom of the page, I click on the function name, which takes me to the Single function view (from here I can switch to my other functions using the controls at the top of the page, without needing to return to the Multi-function view). The graphs show me metrics for invocations and errors, duration, any throttling, and memory, CPU, and network usage for the selected function. To add to the detail available, the most recent 1000 invocations are also listed in a table that I can sort as needed.

Clicking View in the Trace column of a request in the invocations list takes me to the ServiceLens trace view, showing where my function spent its time on that particular invocation request. I could use this to determine whether changes to the business logic of the function might improve performance by reducing function duration, which has a direct effect on cost. If I’m troubleshooting, I can view the Application or Performance logs for the function using the View logs button. Application logs are those that existed before Lambda Insights was enabled on the function, whereas Performance logs are those that Lambda Insights has collated across all my enabled functions. The log views enable me to run queries; in the case of the Performance logs, I can query across all enabled functions in my account, for example to perform a top-N analysis to determine my most expensive functions, or to see how one function compares to another.
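For example, a top-N analysis over the Performance logs might use a CloudWatch Logs Insights query like the following. The field names are assumptions about the Lambda Insights log schema, so adjust them to match the fields you see in your performance log group:

    fields @timestamp, function_name, billed_mb_ms
    | stats sum(billed_mb_ms) as total_mb_ms by function_name
    | sort total_mb_ms desc
    | limit 10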

Here’s how I can use Lambda Insights to check whether I’m ‘moving the needle’ in the correct direction when attempting to right-size a function, by examining the effect of changes to memory allocation on function cost. The starting point for my ExpensiveFunction is 128MB. By moving from 128MB to 512MB, the data shows me that function cost, duration, and concurrency are all reduced; this is shown at (1) in the graphs. Moving from 512MB to 1024MB, (2), has no impact on function cost, but it further reduces duration by 50% and also affects the maximum concurrency. I ran two further experiments: first moving from 1024MB to 2048MB, (3), which resulted in a further reduction in duration, but the function cost started to increase, so the needle was starting to swing in the wrong direction. Finally, moving from 2048MB to 3008MB, (4), significantly increased the cost but had no effect on duration. With the aid of Lambda Insights I can infer that the sweet spot for this function (assuming latency is not a consideration) lies between 1024MB and 2048MB. All these experiments are shown in the graphs below (the concurrency graph lags slightly, as earlier invocations are finishing up as configuration changes are made).

Screenshot of function cost experiments
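To reproduce this kind of experiment, you can change the memory setting between test runs from the CLI:

    aws lambda update-function-configuration --function-name ExpensiveFunction --memory-size 1024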

CloudWatch Lambda Insights gives simple and convenient operational oversight and visibility into the behavior of my AWS Lambda functions, and it is available today in all Regions where AWS Lambda is offered.

Learn more about Amazon CloudWatch Lambda Insights in the documentation and get started today.

— Steve