Tag Archives: serverless

Estimating cost for Amazon SQS message processing using AWS Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/estimating-cost-for-amazon-sqs-message-processing-using-aws-lambda/

This post was written by Sabha Parameswaran, Senior Solutions Architect.

AWS Lambda enables fully managed asynchronous messaging processing through integration with Amazon SQS. This blog post helps estimate the cost and performance benefits when using Lambda to handle millions of messages per day by using a simulated setup.

Overview

Lambda supports asynchronous handling of messages using SQS integration as an event source and can scale for handling millions of messages per day. Customers often ask about the cost of implementing a Lambda-based messaging solution.

There are multiple variables like Lambda function runtime, individual message size, batch size for consuming from SQS, processing latency per message (depending on the backend services invoked), and function memory size settings. These can determine the overall performance and associated cost of a Lambda-based messaging solution.

This post provides cost estimation using these variables, along with guidance around optimization. The estimates focus on consuming from standard queues and not FIFO queues.

SQS event source

The Lambda event source mapping supports integration for SQS. Lambda users specify the SQS queue to consume messages. Lambda internally polls the queue and invokes the function synchronously with an event containing the queue messages.

The configuration controls in Lambda for consuming messages from an SQS queue are:

  • Batch size: The maximum number of records that can be batched as one event delivered to the consuming Lambda function. The maximum batch size is 10,000 records.
  • Batch window: The maximum time (in seconds) to gather records as a single batch. A larger batch window size means waiting longer for a larger SQS batch of messages before passing to the Lambda function.
  • SQS content filtering: Selecting only the messages that match a defined content criteria. This can reduce cost by removing unwanted or irrelevant messages. Lambda now supports content filtering (for SQS, Kinesis, and DynamoDB) and developers can use the filtering capabilities to avoid processing SQS messages, reducing unnecessary invocations and associated cost.

Lambda gathers as many records into a single batch as the batch size allows, provided the batch window has not elapsed and the payload stays under the maximum size of 6 MB. Larger batch sizes mean that a single Lambda invocation can handle more messages, rather than multiple Lambda invocations handling smaller batches (which translates to setting higher concurrency limits).

The cost and time to process might vary based on the actual number of messages in the batch. A larger batch size can imply longer processing but requires lower concurrency (number of concurrent Lambda invocations).
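
As a rough sketch of how these controls fit together, the following example uses the AWS SDK for Python (boto3) to create an event source mapping with an explicit batch size, batch window, and content filter. The queue ARN, function name, and filter pattern are placeholders, not part of the original setup.

import json
import boto3

lambda_client = boto3.client("lambda")

# Map an SQS queue to a Lambda function with an explicit batch size,
# batching window, and a content filter so that only messages whose JSON
# body contains {"type": "order"} invoke the function.
response = lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:my-queue",  # placeholder queue
    FunctionName="my-message-processor",                           # placeholder function
    BatchSize=1000,
    MaximumBatchingWindowInSeconds=30,
    FilterCriteria={
        "Filters": [
            {"Pattern": json.dumps({"body": {"type": ["order"]}})}
        ]
    },
)
print(response["UUID"])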

Lambda configurations

Lambda function costs are calculated based on memory used and time spent (in GB-second) in execution of a function. Aside from the event source configuration, there are several other Lambda function configurations that impact cost and performance:

  • Processor type: Lambda functions provide options to choose between x86 and Arm/Graviton processors. The newer Arm/Graviton processors can yield a higher performance and lower cost compared to x86 based on the workload. Compare the options and run tests before selecting.
  • Memory allotted: This is directly proportional to the CPU allotted to the function and translates to the price of each invocation. Higher memory can lead to faster execution but also higher cost. The optimal memory required for a small batch versus a large batch can vary based on the workload, the size of incoming messages, transformations, and requirements to store intermediate or final results. Optimal tuning of the memory configuration is key to balancing cost and performance. See the AWS Lambda Power Tuning documentation for more details on identifying the optimal memory versus performance for a fixed batch size, and then extrapolate the memory settings for larger batch sizes.
  • Lambda function runtime: Some runtimes have a smaller memory footprint and may be more cost effective than others that are memory intensive. Choosing the runtime affects the memory allocation.
  • Function performance: This can be measured as TPS – the total number of requests completed per second – or, conversely, as the time to complete one request. The time to finish a function execution depends on the event containing the batch of messages (bigger batches mean more time to complete an event) and on the complexity and dependencies of the message processing, such as the performance of the backend services that need to be invoked. The calculations are based on the assumption that the Lambda function and related dependencies have been optimized and tuned to scale linearly with various batch sizes and numbers of invocations.
  • Concurrency: The number of concurrent Lambda function executions. Concurrency is important for scaling Lambda functions, allowing users to delegate capacity planning and scaling to the Lambda service.

The higher the concurrency, the more workloads Lambda can process in a shorter time, giving better performance, but this does not change the overall cost. Concurrency is not equivalent to TPS: it is more of a scaling factor in overall TPS. For example, say a workload comprised of a set of messages takes 20 seconds to complete. Processing 100 such workloads sequentially would take 2,000 seconds. With a concurrency of 10, it takes 200 seconds. With a concurrency of 100, the time drops to 20 seconds, as each of the 100 workloads is handled concurrently. But each function essentially runs for the same duration and memory regardless of concurrency, so the cost remains the same, as it is measured in GB-seconds (memory multiplied by time). Only the performance view differs. For this reason, the cost estimations do not consider the concurrency settings of Lambda functions, as the workloads have to be processed either sequentially or concurrently.
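
The arithmetic behind that statement fits in a few lines of Python. This is only an illustration; the rate below is the public x86 on-demand rate at the time of writing, so check the Lambda pricing page for current values.

PRICE_PER_GB_SECOND = 0.0000166667  # x86 on-demand rate, check the pricing page

def compute_duration_cost(memory_gb: float, duration_s: float, invocations: int) -> float:
    # Billed GB-seconds depend only on memory, duration, and invocation count
    gb_seconds = memory_gb * duration_s * invocations
    return gb_seconds * PRICE_PER_GB_SECOND

# 100 workloads of 20 seconds each at 1 GB cost the same whether they run
# with a concurrency of 1 (2,000 seconds of wall-clock time) or 100 (20 seconds)
print(compute_duration_cost(memory_gb=1, duration_s=20, invocations=100))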

Assumptions

The cost estimation tool presented helps users estimate monthly Lambda function costs for processing SQS standard queue messages based on the following assumptions:

  • The system has reached steady state and has millions of messages available to be consumed per day in standard queues. The number of messages per day remains constant throughout the entire month.
  • Since it’s a steady state, there are no associated Lambda function cold start delays.
  • All SQS messages that need to be processed successfully have already met the filter criteria. There are also no poison messages that have to be retried repeatedly. Messages are not rejected, unacknowledged, or reprocessed.
  • The workload scales linearly in performance versus batch size. All the associated dependencies can scale linearly, so a batch of N messages should take the same time as N single messages plus a fixed overhead per function invocation, irrespective of the batch size. For example, if a function’s overhead is 50 ms irrespective of the batch size and processing a single message takes 20 ms, a batch of 20 messages should take 450 ms (50 + 20*20), versus 150 ms (50 + 5*20) for a batch of 5 messages (a code sketch of this model follows this list).
  • Function memory increases in steps, based on increasing the batch size. For example, the first 100 messages use 256 MB of baseline memory, and every additional 500 messages require an additional 128 MB of memory. A sliding window of memory to batch size:
Batch size     Memory
1–100          256 MB
100–600        384 MB
600–1100       512 MB
1100–1600      640 MB
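
The following Python sketch captures these assumptions as code. The overhead, per-message time, and memory steps mirror the examples above, the free tier is ignored for simplicity, and the rate is the x86 on-demand rate at the time of writing; the estimator tool described below remains the authoritative implementation.

PRICE_PER_GB_SECOND = 0.0000166667  # x86 on-demand rate, check the pricing page

def batch_duration_ms(batch_size: int, overhead_ms: float = 50, per_message_ms: float = 20) -> float:
    # Linear scaling assumption: fixed invocation overhead plus per-message time
    return overhead_ms + per_message_ms * batch_size

def batch_memory_mb(batch_size: int, base_mb: int = 256, step_mb: int = 128, step_size: int = 500) -> int:
    # The first 100 messages fit in the baseline memory; every additional
    # `step_size` messages add `step_mb` of memory (the sliding window above)
    extra_steps = max(0, -(-(batch_size - 100) // step_size))  # ceiling division
    return base_mb + step_mb * extra_steps

def monthly_cost(messages_per_day: int, batch_size: int) -> float:
    invocations = -(-messages_per_day // batch_size) * 30  # invocations per month
    gb_seconds = (batch_memory_mb(batch_size) / 1024) * (batch_duration_ms(batch_size) / 1000) * invocations
    return gb_seconds * PRICE_PER_GB_SECOND

# Example: 10 million messages per day processed in batches of 1,000
print(monthly_cost(messages_per_day=10_000_000, batch_size=1000))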

Lambda uses SQS APIs internally to poll and dequeue the messages. The costs for the polling and dequeue operations using SQS APIs are not included as part of the estimations. The internal SQS dequeue portion is outside the control of the Lambda developer and the cost estimates only cover the message processing using Lambda. Also, the tool does not consider any reprocessing or duplicate processing of messages due to exceptions or errors that can vary the cost.

Using the cost estimation tool

The estimator tool is a Python-based command line program that reads an input properties file specifying the various input parameters and produces Lambda function cost versus performance estimates for various batch sizes, messages per day, and so on. The tool takes into account the eligible monthly free tier for Lambda function executions.

Prerequisites: Running the tool requires Python 3.9 and the Plotly package (5.7+), or alternatively building and running the provided Docker image.

To run the tool:

  1. Clone the repo:
    git clone https://github.com/aws-samples/aws-lambda-sqs-cost-estimator
  2. Install the tool:
    cd aws-lambda-sqs-cost-estimator/code
    pip3 install -r requirements.txt
  3. Edit the input.prop file and run the tool to generate cost estimations:
    python3 LambdaPlotly.py

This shows the cost estimates on a local browser instance. Running the code as a Docker image is also supported. Refer to the GitHub repo for additional instructions.

  1. Clone the repo and build the Docker container:
    git clone https://github.com/aws-samples/aws-lambda-sqs-cost-estimator
    cd aws-lambda-sqs-cost-estimator/code
    docker build -t lambda-dash .
  2. Edit the input.prop file and run the tool to generate cost estimations:
    docker run -it -v `pwd`:/app -p 8080:8080 lambda-dash
  3. Navigate to http://0.0.0.0:8080/app in a browser to view the generated cost estimate plot.

There are various input parameters for the cost estimations specified inside the input.prop file. Tune the input parameters as needed:

  • base_lambda_memory_mb – Baseline memory for the Lambda function, in MB. Sample value: 128
  • warm_latency_ms – Invocation time for the Lambda handler method (warm start), irrespective of the batch size in the incoming event payload, in ms. Sample value: 20
  • process_per_message_ms – Time to process a single message (scales linearly with the number of messages per batch in the event payload), in ms. Sample value: 10
  • max_batch_size – Maximum batch size per event payload processed by a single Lambda instance. Sample value: 1000 (maximum is 10,000)
  • batch_memory_overhead_mb – Additional memory for processing increments in batch size, in MB. Sample value: 128
  • batch_increment – Increment of batch size that requires additional memory. Sample value: 300

The following is sample input.prop file content:

base_lambda_memory_mb=128

# Total process time for N messages in batch = warm_latency_ms + (process_per_message_ms * N)

# Time spent in function initialization/warm-up

warm_latency_ms=20

# Time spent for processing each message in milliseconds

process_per_message_ms=10

# Max batch size

max_batch_size=1000

# Additional lambda memory X mb required for managing/parsing/processing N additional messages processed when using variable batch sizes

#batch_memory_overhead_mb=X

#batch_increment=N

batch_memory_overhead_mb=128

batch_increment=300

The tool generates a page with plot graphs and tables with 3 sections:

Cost example

There is an accompanying interactive legend showing cost and batch size. The top section shows a graph of cost versus message volumes versus batch size:

cost versus message volumes vs Batch size

The second section shows the actual cost variation for different batch sizes for 10 million messages:

actual cost variation for different batch sizes for 10 million messages.

The third section shows the memory and time required to process with different batch sizes:

memory and time required to process with different batch sizes

The various control input parameters used for graph generation are shown at the bottom of the page.

Double-clicking on a specific batch size or line on the right-hand legend displays that specific plot with its pricing details.

specific plot to be displayed with its pricing details

You can modify the input parameters with different settings for memory, batch sizes, memory for increased batches and rerun the program to create different cost estimations. You can also export the generated graphs as PNG image files for reference.

Conclusion

You can use Lambda functions to handle fully managed asynchronous processing of SQS messages. Estimating the cost and optimal setup depends on leveraging the various configurations of SQS and Lambda functions. The cost estimator tool presented in this blog should help you understand these configurations and their impact on the overall cost and performance of the Lambda function-based messaging solutions.

For more serverless learning resources, visit Serverless Land.

Securely retrieving secrets with AWS Lambda

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/securely-retrieving-secrets-with-aws-lambda/

AWS Lambda functions often need to access secrets, such as certificates, API keys, or database passwords. Storing secrets outside the function code in an external secrets manager helps to avoid exposing secrets in application source code. Using a secrets manager also allows you to audit and control access, and can help with secret rotation. Do not store secrets in Lambda environment variables, as these are visible to anyone who has access to view function configuration.

This post highlights some solutions to store secrets securely and retrieve them from within your Lambda functions.

AWS Partner Network (APN) member Hashicorp provides Vault to secure secrets and application data. Vault allows you to control access to your secrets centrally, across applications, systems, and infrastructure. You can store secrets in Vault and access them from a Lambda function to access a database, for example. The Vault Agent for AWS helps you authenticate with Vault, retrieve the database credentials, and then perform the queries. You can also use the Vault AWS Lambda extension to manage connectivity to Vault.

AWS Systems Manager Parameter Store enables you to store configuration data securely, including secrets, as parameter values. For information on Parameter Store pricing, see the documentation.

AWS Secrets Manager allows you to replace hardcoded credentials in your code with an API call to Secrets Manager to retrieve the secret programmatically. You can generate, protect, rotate, manage, and retrieve secrets throughout their lifecycle. By default, Secrets Manager does not write or cache the secret to persistent storage. Secrets Manager supports cross-account access to secrets. For information on Secrets Manager pricing, see the documentation.

Parameter Store integrates directly with Secrets Manager as a pass-through service for references to Secrets Manager secrets. Use this integration if you prefer using Parameter Store as a consistent solution for calling and referencing secrets across your applications. For more information, see “Referencing AWS Secrets Manager secrets from Parameter Store parameters.”

For an example application to show Secrets Manager functionality, deploy the example detailed in “How to securely provide database credentials to Lambda functions by using AWS Secrets Manager”.

When to retrieve secrets

When Lambda first invokes your function, it creates a runtime environment. It runs the function’s initialization (init) code, which is the code outside the main handler. Lambda then runs the function handler code as the invocation. This receives the event payload and processes your business logic. Subsequent invocations can use the same runtime environment.

You can retrieve secrets during each function invocation from within your handler code. This ensures that the secret value is always up to date but can lead to increased function duration and cost, as the function calls the secret manager during each invocation. There may also be additional retrieval costs from Secrets Manager.

Retrieving secret during each invocation

You can reduce costs and improve performance by retrieving the secret during the function init process. During subsequent invocations using the same runtime environment, your handler code can use the same secret.
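
As a minimal sketch in Python with boto3 (the secret name and structure are placeholders), retrieving the secret in the init code looks like this:

import json
import boto3

secrets_client = boto3.client("secretsmanager")

# Runs once per runtime environment, during function initialization
secret = json.loads(
    secrets_client.get_secret_value(SecretId="my-database-secret")["SecretString"]
)

def handler(event, context):
    # Subsequent invocations in the same runtime environment reuse the value
    connection_user = secret["username"]  # placeholder key
    return {"statusCode": 200}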

Retrieving secret during function initialization.

The Serverless Land pattern example shows how to retrieve a secret during the init phase using Node.js and top-level await.

If a secret may change between subsequent invocations, ensure that your handler can check for the secret validity and, if necessary, retrieve the secret again.
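
One way to do this is to cache the secret during init and refresh it from the handler after a time-to-live expires (or after an authentication failure). The TTL and secret name below are illustrative.

import time
import boto3

secrets_client = boto3.client("secretsmanager")
CACHE_TTL_SECONDS = 300  # illustrative refresh interval

_cached_secret = None
_cached_at = 0.0

def get_secret():
    # Re-fetch the secret only when the cached copy is missing or stale
    global _cached_secret, _cached_at
    if _cached_secret is None or time.time() - _cached_at > CACHE_TTL_SECONDS:
        _cached_secret = secrets_client.get_secret_value(
            SecretId="my-database-secret"
        )["SecretString"]
        _cached_at = time.time()
    return _cached_secret

def handler(event, context):
    secret = get_secret()
    # ... use the secret ...
    return {"statusCode": 200}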

Retrieve changed secret during subsequent invocation.

You can also use Lambda extensions to retrieve secrets from Secrets Manager, cache them, and automatically refresh the cache based on a time value. The extension retrieves the secret from Secrets Manager before the init process and makes it available via a local HTTP endpoint. The function then retrieves the secret from the local HTTP endpoint, rather than directly from Secrets Manager, increasing performance. You can also share the extension with multiple functions, which can reduce function code. The extension handles refreshing the cache based on a configurable timeout value. This ensures that the function has the updated value, without handling the refresh in your function code, which increases reliability.
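
For illustration, the sketch below reads a secret through the local endpoint of the AWS Parameters and Secrets Lambda Extension (a different, AWS-provided extension than the sample in the linked post). It assumes the extension layer is attached to the function, listens on its default port 2773, and authenticates requests with the AWS_SESSION_TOKEN environment variable.

import json
import os
import urllib.request

def get_cached_secret(secret_id: str) -> str:
    # The extension proxies and caches the Secrets Manager call locally
    url = f"http://localhost:2773/secretsmanager/get?secretId={secret_id}"
    req = urllib.request.Request(
        url,
        headers={"X-Aws-Parameters-Secrets-Token": os.environ["AWS_SESSION_TOKEN"]},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["SecretString"]

def handler(event, context):
    secret = get_cached_secret("my-database-secret")  # placeholder name
    return {"statusCode": 200}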

Using Lambda extensions to cache and refresh secret.

You can deploy the solution using the steps in Cache secrets using AWS Lambda extensions.

Lambda Powertools

Lambda Powertools provides a suite of utilities for Lambda functions to simplify the adoption of serverless best practices. AWS Lambda Powertools for Python and AWS Lambda Powertools for Java both provide a parameters utility that integrates with Secrets Manager.

In Python:

from aws_lambda_powertools.utilities import parameters

def handler(event, context):
    # Retrieve a single secret
    value = parameters.get_secret("my-secret")

In Java:

import software.amazon.lambda.powertools.parameters.SecretsProvider;
import software.amazon.lambda.powertools.parameters.ParamManager;

public class AppWithSecrets implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
    // Get an instance of the Secrets Provider
    SecretsProvider secretsProvider = ParamManager.getSecretsProvider();

    // Retrieve a single secret
    String value = secretsProvider.get("/my/secret");

    // ... handler implementation
}

Rotating secrets

You should rotate secrets to prevent the misuse of your secrets. This helps you to replace long-term secrets with short-term ones, which reduces the risk of compromise.

Secrets Manager has built-in functionality to rotate secrets on demand or according to a schedule. Secrets Manager has native integrations with Amazon RDS, Amazon DocumentDB, and Amazon Redshift, using a Lambda function to manage the rotation process for you. It deploys an AWS CloudFormation stack and populates the function with the Amazon Resource Name (ARN) of the secret. You specify the permissions to rotate the credentials, and how often you want to rotate the secret. You can view and edit Secrets Manager rotation settings in the Secrets Manager console.
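
The same rotation settings can be applied programmatically. The sketch below uses boto3; the secret ID and rotation function ARN are placeholders.

import boto3

secrets_client = boto3.client("secretsmanager")

secrets_client.rotate_secret(
    SecretId="my-database-secret",  # placeholder secret
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:my-rotation-function",
    RotationRules={"AutomaticallyAfterDays": 30},  # rotate every 30 days
)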

Secrets Manager rotation settings

You can also create your own rotation Lambda function for other services.

Auditing secrets access

You should continually review how applications are using your secrets to ensure that the usage is as you expect. You should also log any changes to them so you can investigate any potential issues, and roll back changes if necessary.

When using Hashicorp Vault, use Audit devices to log all requests and responses to Vault. Audit devices can append logs to a file, write to syslog, or write to a socket.

Secrets Manager supports logging API calls using AWS CloudTrail. CloudTrail monitors and records all API calls for Secrets Manager as events. This includes calls from code calling the Secrets Manager APIs and access via the Secrets Manager console. CloudTrail data is considered sensitive, so you should use AWS KMS encryption to protect it.

The CloudTrail event history shows the requests to secretsmanager.amazonaws.com.

Viewing CloudTrail access to Secrets Manager

You can use Amazon EventBridge to respond to alerts based on specific operations recorded in CloudTrail. These include secret rotation or deleted secrets. You can also generate an alert if someone tries to use a version of a secret while it is pending deletion. This may help identify and alert you when an outdated certificate is used.
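
As a sketch, an EventBridge rule matching Secrets Manager API calls recorded by CloudTrail could look like the following (boto3; the rule name and event names are illustrative, and a target such as an SNS topic still needs to be attached with put_targets):

import json
import boto3

events_client = boto3.client("events")

events_client.put_rule(
    Name="secrets-manager-alerts",  # illustrative rule name
    EventPattern=json.dumps({
        "source": ["aws.secretsmanager"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["secretsmanager.amazonaws.com"],
            "eventName": ["DeleteSecret", "RotateSecret"],  # operations to alert on
        },
    }),
)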

Securing secrets

You must tightly control access to secrets because of their sensitive nature. Create AWS Identity and Access Management (IAM) policies and resource policies to enable minimal access to secrets. You can use role-based, as well as attribute-based, access control. This can prevent credentials from being accidentally used or compromised. For more information, see “Authentication and access control for AWS Secrets Manager”.

Secrets Manager supports encryption at rest using AWS Key Management Service (AWS KMS) using keys that you manage. Secrets are encrypted in transit using TLS by default, which requires request signing.

You can access secrets from inside an Amazon Virtual Private Cloud (Amazon VPC) without requiring internet access. Use AWS PrivateLink and configure a Secrets Manager specific VPC endpoint.

Do not store plaintext secrets in Lambda environment variables. Ensure that you do not embed secrets directly in function code, commit these secrets to code repositories, or log the secret to CloudWatch.

Conclusion

Using a secrets manager to store secrets such as certificates, API keys or database passwords helps to avoid exposing secrets in application source code. This post highlights some AWS and third-party solutions, such as Hashicorp Vault, to store secrets securely and retrieve them from within your Lambda functions.

Secrets Manager is the preferred AWS solution for storing and managing secrets. I explain when to retrieve secrets, including using Lambda extensions to cache secrets, which can reduce cost and improve performance.

You can use the Lambda Powertools parameters utility, which integrates with Secrets Manager. Rotating secrets reduces the risk of compromise and you can audit secrets using CloudTrail and respond to alerts using EventBridge. I also cover security considerations for controlling access to your secrets.

For more serverless learning resources, visit Serverless Land.

Build your next big idea with Cloudflare Pages

Post Syndicated from Nevi Shah original https://blog.cloudflare.com/big-ideas-on-pages/

Have you ever had a surge of inspiration for a project? That feeling when you have a great idea – a big idea — that you just can’t shake? When all you can think about is putting your hands to your keyboard and hacking away? Building a website takes courage, creativity, passion and drive, and with Cloudflare Pages we believe nothing should stand in the way of that vision.

Especially not a price tag.

Big ideas

We built Pages to be at the center of your developer experience – a way for you to get started right away without worrying about the heavy lift of setting up a fullstack app.  A quick commit to your git provider or direct upload to our platform, and your rich and powerful site is deployed to our network of 270+ data centers in seconds. And above all, we built Pages to scale with you as you grow exponentially without getting hit by an unexpected bill.

The limit does not exist

We’re a platform that’s invested in your vision – no matter how wacky and wild (the best ones usually are!). That’s why for many parts of Pages we want your experience to be limitless.

Unlimited requests: As your idea grows, so does your traffic. While thousands and millions of end users flock to your site, Pages is prepared to handle all of your traffic with no extra cost to you – even when millions turn to billions.

Unlimited bandwidth: As your traffic grows, you’ll need more bandwidth – and with Pages, we got you. If your site takes off in popularity one day, the next day shouldn’t be a cause for panic because of a nasty bill. It should be a day of celebration and pride. We’re giving unlimited bandwidth so you can keep your eyes focused on moving up and to the right.

Unlimited free seats: With a rise in demand for your app, you’re going to need more folks working with you. We know from experience that more great ideas don’t just come from one person but a strong team of people. We want to be there every step of the way along with every person you want on this journey with you. Just because your team grows doesn’t mean your bill has to.

Unlimited projects: One great idea means many more to follow. With Pages, you can deploy as many projects as you want – keep them coming! Not every idea is going to be the right one – we know this! Prove out your vision in private org-owned repos for free! Try out a plethora of development frameworks until you’ve found the perfect combination for your idea. Test your changes locally using our Wrangler integration so you can be confident in whatever you choose to put out into the world.

Quick, easy and free integrations

Workers: Take your idea from static to dynamic with Pages’ native integration with Cloudflare Workers, our serverless functions offering. Drop your functions into your functions folder and deploy them alongside your static assets – no extra configuration required! We announced built-in support for Cloudflare Workers back in November and have since announced framework integrations with Remix, Sveltekit and Qwik, and are working on fullstack support for Next.js within the year!

Cloudflare Access: Working with more than just a team of developers? Send your staging changes to product managers and marketing teams with a unique preview URL for every deployment. And what’s more? You can enable protection for your preview links using our native integration with Cloudflare Access at no additional cost. With one click, send around your latest version without fear of getting into the wrong hands.

Custom domains: With every Pages project, get a free pages.dev subdomain to deploy your project under. When you’re ready for the big day, with built in SSL for SaaS, bring that idea to life with a custom domain of its own!

Web Analytics: When launch day comes around, check out just how well it’s going with our deep, privacy-first integration with Cloudflare’s Web Analytics. Track every single one of your sites’ progress and performance, including metrics about your traffic and core web vitals with just one click – completely on us!

Wicked fast performance

And the best part? Our generous free tier never means compromising site performance. Bring your site closer to your users on your first deployment no matter where they are in the world. The Cloudflare network spans across 270+ cities around the globe and your site is distributed to each of them faster than you can say “it’s go time”. There’s also no need to choose regions for your deployments, we want you to have them all and get even more granular, so your big idea can truly go global.

What else?

Building on Pages is just the start of what your idea could grow to become. In the coming months you can expect deep integrations with our new Cloudflare storage offerings like R2, our object storage service with zero egress fees (open beta), and D1 our first SQL database on the edge (private beta).

We’ve talked a lot about building your entire platform on Cloudflare. We’re reimagining this experience to be even easier and even more powerful.

Using just Cloudflare, you’ll be able to build big projects – like an entire store! You can use R2 to host the images, D1 to store product info, inventory data and user details, and Pages to seamlessly build and deploy. A frictionless dev experience for a full stack app that can live and work entirely from the edge. Best of all, don’t worry about getting hung up on cost, we’ll always have a generous free tier so you can get started right away.

At Cloudflare, we believe that every developer deserves to dream big. For the developers who love to build, who are curious, who explore, let us take you there – no surprises! Leave the security and scalability to us, so you can put your fingers to the keyboard and do what you do best!

Give it a go

Learn more about Pages and check out our developer documentation. Be sure to join our active Cloudflare Developer Discord and meet our community of developers building on our platform. You can chat directly with our product and engineering teams and get exclusive access to our offered betas!

Building scheduling system with Workers and Durable Objects

Post Syndicated from Adam Janiš original https://blog.cloudflare.com/building-scheduling-system-with-workers-and-durable-objects/

We rely on technology to help us on a daily basis – if you are not good at keeping track of time, your calendar can remind you when it’s time to prepare for your next meeting. If you made a reservation at a really nice restaurant, you don’t want to miss it! You appreciate the app that reminds you a day in advance about your plans for the next evening.

However, who tells the application when it’s the right time to send you a notification? For this, we generally rely on scheduled events. And when you are relying on them, you really want to make sure that they occur. Turns out, this can get difficult. The scheduler and storage backend need to be designed with scale in mind – otherwise you may hit limitations quickly.

Workers, Durable Objects, and Alarms are actually a perfect match for this type of workload. Thanks to the distributed architecture of Durable Objects and their storage, they are a reliable and scalable option. Each Durable Object has access to its own isolated storage and alarm scheduler, both automatically replicated and able to fail over in case of failures.

There are many use cases where having a reliable scheduler can come in handy: running a webhook service, sending emails to your customers a week after they sign up to keep them engaged, sending invoices reminders, and more!

Today, we’re going to show you how to build a scalable service that will schedule HTTP requests on a specific schedule or as a one-off at a specific time, as a way to guide you through any use case that requires scheduled events.

Quick intro into the application stack

Before we dive in, here are some of the tools we’re going to be using today: Cloudflare Workers, Durable Objects (and their Alarms), Wrangler, and TypeScript.

The application is going to have the following components:

  • Scheduling system API to accept scheduled requests and manage Durable Objects
  • A unique Durable Object per scheduled request, each with:
    • Storage – keeping the request metadata, such as URL, body, or headers.
    • Alarm – a timer (trigger) to wake the Durable Object up.

While we will focus on building the application, the Cloudflare global network will take care of the rest – storing and replicating our data, and making sure to wake our Durable Objects up when the time’s right. Let’s build it!

Initialize new Workers project

Get started by generating a completely new Workers project using the wrangler init command, which makes creating new projects quick & easy.

wrangler init -y durable-objects-requests-scheduler

For more information on how to install, authenticate, or update Wrangler, check out https://developers.cloudflare.com/workers/wrangler/get-started

Preparing TypeScript types

In my experience, having at least a draft of the TypeScript types significantly helps productivity down the road, so let’s prepare and describe our scheduled request in advance. Create a file types.ts in the src directory and paste the following TypeScript definitions.

src/types.ts

export interface Env {
    DO_REQUEST: DurableObjectNamespace
}
 
export interface ScheduledRequest {
  url: string // URL of the request
  triggerEverySeconds?: number // optional, reschedule the request every N seconds (recurring schedule)
  triggerAt?: number // optional, unix timestamp in milliseconds, defaults to `new Date()`
  requestInit?: RequestInit // optional, includes method, headers, body
}

A scheduled request Durable Object class & alarm

Based on our architecture design, each scheduled request will be saved into its own Durable Object, effectively separating storage and alarms from each other and allowing our scheduling system to scale horizontally – there is no limit to the number of Durable Objects we create.

In the end, the Durable Object class is a matter of a couple of lines. The code snippet below accepts and saves the request body to a persistent storage and sets the alarm timer. Workers runtime will wake up the Durable Object and call the alarm() method to process the request.

The alarm method reads the scheduled request data from the storage, then processes the request, and in the end reschedules itself in case it’s configured to be executed on a recurring schedule.

src/request-durable-object.ts

import { ScheduledRequest } from "./types";
 
export class RequestDurableObject {
  id: string|DurableObjectId
  storage: DurableObjectStorage
 
  constructor(state:DurableObjectState) {
    this.storage = state.storage
    this.id = state.id
  }
 
    async fetch(request:Request) {
    // read scheduled request from request body
    const scheduledRequest:ScheduledRequest = await request.json()
     
    // save scheduled request data to Durable Object storage, set the alarm, and return Durable Object id
    this.storage.put("request", scheduledRequest)
    this.storage.setAlarm(scheduledRequest.triggerAt || new Date())
    return new Response(JSON.stringify({
      id: this.id.toString()
    }), {
      headers: {
        "content-type": "application/json"
      }
    })
  }
 
  async alarm() {
    // read the scheduled request from Durable Object storage
    const scheduledRequest:ScheduledRequest|undefined = await this.storage.get("request")

    // call fetch on scheduled request URL with optional requestInit
    if (scheduledRequest) {
      await fetch(scheduledRequest.url, scheduledRequest.requestInit ? scheduledRequest.requestInit : undefined)

      // reschedule the alarm when the request is recurring, otherwise clean up the storage
      if (scheduledRequest.triggerEverySeconds) {
        this.storage.setAlarm(Date.now() + scheduledRequest.triggerEverySeconds * 1000)
      } else {
        this.storage.deleteAll()
      }
    }
  }
}

Wrangler configuration

Once we have the Durable Object class, we need to create a Durable Object binding by instructing Wrangler where to find it and what the exported class name is.

wrangler.toml

name = "durable-objects-request-scheduler"
main = "src/index.ts"
compatibility_date = "2022-08-02"
 
# added Durable Objects configuration
[durable_objects]
bindings = [
  { name = "DO_REQUEST", class_name = "RequestDurableObject" },
]
 
[[migrations]]
tag = "v1"
new_classes = ["RequestDurableObject"]

Scheduling system API

The API Worker will accept POST HTTP methods only, and expects a JSON body with scheduled request data – what URL to call, and optionally what method, headers, or body to send. Any method other than POST returns a 405 – Method Not Allowed HTTP error.

HTTP POST /:scheduledRequestId? will create or override a scheduled request, where :scheduledRequestId is an optional Durable Object ID returned by the scheduling system API earlier.

src/index.ts

import { Env } from "./types"
export { RequestDurableObject } from "./request-durable-object"
 
export default {
    async fetch(
        request: Request,
        env: Env
    ): Promise<Response> {
        if (request.method !== "POST") {
            return new Response("Method Not Allowed", {status: 405})
        }
 
        // parse the URL and get Durable Object ID from the URL
        const url = new URL(request.url)
        const idFromUrl = url.pathname.slice(1)
 
        // construct the Durable Object ID, use the ID from pathname or create a new unique id
        const doId = idFromUrl ? env.DO_REQUEST.idFromString(idFromUrl) : env.DO_REQUEST.newUniqueId()
 
        // get the Durable Object stub for our Durable Object instance
        const stub = env.DO_REQUEST.get(doId)
 
        // pass the request to Durable Object instance
        return stub.fetch(request)
    },
}

It’s good to mention that the script above does not implement any listing of scheduled or processed webhooks. Depending on how the scheduling system would be integrated, you can save each created Durable Object ID to your existing backend, or write your own registry – using one of the Workers storage options.

Starting a local development server and testing our application

We are almost done! Before we publish our scheduler system to the Cloudflare edge, let’s start Wrangler in a completely local mode to run a couple of tests against it and to see it in action – which will work even without an Internet connection!

wrangler dev --local

The development server is listening on localhost:8787, which we will use for scheduling our first request. The JSON request payload should match the TypeScript schema we defined in the beginning – required URL, and optional triggerEverySeconds number or triggerAt unix timestamp. When only the required URL is passed, the request will be dispatched right away.

An example of a request payload that will send a GET request to https://example.com every 30 seconds:

{
  "url": "https://example.com",
  "triggerEverySeconds": 30
}

> curl -X POST -d '{"url": "https://example.com", "triggerEverySeconds": 30}' http://localhost:8787
{"id":"000000018265a5ecaa5d3c0ab6a6997bf5638fdcb1a8364b269bd2169f022b0f"}

From the wrangler logs we can see the scheduled request ID 000000018265a5ecaa5d3c0ab6a6997bf5638fdcb1a8364b269bd2169f022b0f is being triggered in 30s intervals.

Need to double the interval? No problem, just send a new POST request and pass the request ID as a pathname.

> curl -X POST -d '{"url": "https://example.com", "triggerEverySeconds": 60}' http://localhost:8787/000000018265a5ecaa5d3c0ab6a6997bf5638fdcb1a8364b269bd2169f022b0f
{"id":"000000018265a5ecaa5d3c0ab6a6997bf5638fdcb1a8364b269bd2169f022b0f"}

Every scheduled request gets a unique Durable Object ID with its own storage and alarm. As we demonstrated, the ID becomes handy when you need to change the settings of the scheduled request, or to deschedule them completely.

Publishing to the network

The following command bundles our Workers application, exports and binds Durable Objects, and deploys it to our workers.dev subdomain.

wrangler publish

That’s it, we are live! 🎉 The URL of your deployment is shown in the Workers logs. In a reasonably short period of time we managed to write our own scheduling system that is ready to handle requests at scale.

You can check full source code in Workers templates repository, or experiment from your browser without installing any dependencies locally using the StackBlitz template.

What to see or build next

Introducing tiered pricing for AWS Lambda

Post Syndicated from Sam Dengler original https://aws.amazon.com/blogs/compute/introducing-tiered-pricing-for-aws-lambda/

This blog post is written by Heeki Park, Principal Solutions Architect, Serverless.

AWS Lambda charges for on-demand function invocations based on two primary parameters: invocation requests and compute duration, measured in GB-seconds. If you configure additional ephemeral storage for your function, Lambda also charges for ephemeral storage duration, measured in GB-seconds.

AWS continues to find ways to help customers reduce cost for running on Lambda. In February 2020, AWS announced that AWS Lambda would participate in Compute Savings Plans. In December 2020, AWS announced 1 ms billing granularity to help customers save on cost for their Lambda function invocations. With that pricing change, customers whose function duration is less than 100 ms pay less for those function invocations. In September 2021, AWS announced Graviton2 support for running your function on ARM and potential improvements for price performance for compute.

Today, AWS introduces tiered pricing for Lambda. With tiered pricing, customers who run large workloads on Lambda can automatically save on their monthly costs. Tiered pricing is based on compute duration measured in GB-seconds. The tiered pricing breaks down as follows:

Compute duration (GB-seconds)    Architecture    New tiered discount
0 – 6 billion                    x86             Same as today
6 – 15 billion                   x86             10%
Anything over 15 billion         x86             20%
0 – 7.5 billion                  arm64           Same as today
7.5 – 18.75 billion              arm64           10%
Anything over 18.75 billion      arm64           20%

The Lambda pricing page lists the pricing for all Regions and architectures.

Tiered pricing discount example

Consider a financial services provider who provides on-demand stock portfolio analysis. The customers pay per portfolio analyzed and find the service valuable for providing them insight into the performance of those assets. The application is built using Lambda, runs on x86, and is optimized to use 2048 MB (2 GB) of memory, with an average function duration of 60 seconds. The current month resulted in 75 million function invocations.

Without tiered pricing, this workload costs the following:

Monthly request charges: 75M * $0.20/million = $15.00
Monthly compute duration (seconds): 75M * 60 seconds = 4.5B seconds
Monthly compute (GB-seconds): 4.5B seconds * 2 GB = 9B GB-seconds
Monthly compute duration charges: 9B GB-s * $0.0000166667/GB-s = $150,000.30
Total monthly charges = request charges + compute duration charges = $15.00 + $150,000.30 = $150,015.30

With tiered pricing, the portion of compute duration that exceeds 6B GB-seconds receives an automatic discount as follows:

Monthly request charges: 75M * $0.20/million = $15.00
Monthly compute duration (seconds): 75M * 60 seconds = 4.5B seconds
Monthly compute (GB-seconds): 4.5B seconds * 2GB = 9B GB-seconds
Monthly compute duration charge (tier 1): 6B GB-s * $0.0000166667/GB-s = $100,000.20
Monthly compute duration charge (tier 2): 3B GB-s * $0.0000150000/GB-s = $45,000.09
Monthly compute duration charges (post-discount): $100,000.20 + $45,000.09 = $145,000.29.
Total monthly charges = request charges + compute duration charges = $15.00 + $145,000.29 = $145,015.29 ($5,000.01 cost savings)
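
A short Python sketch reproduces this x86 calculation. The tier boundaries and discounts come from the table above, and the base rates are the list prices at the time of writing; check the Lambda pricing page before relying on the exact figures.

BASE_RATE = 0.0000166667          # USD per GB-second, x86 on-demand
REQUEST_RATE = 0.20 / 1_000_000   # USD per request

# (tier width in GB-seconds, rate multiplier): 0-6B full price, 6-15B at 10% off, beyond at 20% off
X86_TIERS = [(6e9, 1.0), (9e9, 0.9), (float("inf"), 0.8)]

def monthly_compute_cost(gb_seconds: float, tiers=X86_TIERS) -> float:
    cost, remaining = 0.0, gb_seconds
    for width, multiplier in tiers:
        portion = min(remaining, width)
        cost += portion * BASE_RATE * multiplier
        remaining -= portion
        if remaining <= 0:
            break
    return cost

invocations, duration_s, memory_gb = 75_000_000, 60, 2
gb_seconds = invocations * duration_s * memory_gb              # 9B GB-seconds
total = invocations * REQUEST_RATE + monthly_compute_cost(gb_seconds)
print(round(total, 2))                                         # ~145,015.29, matching the example above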

Tiered pricing discount example with increased growth

The service is successful and usage in the following month quadruples, resulting in 300 million function invocations.

Without tiered pricing, this workload costs the following:

Monthly request charges: 300M * $0.20/million = $60.00
Monthly compute duration (seconds): 300M * 60 seconds = 18B seconds
Monthly compute (GB-seconds): 18B seconds * 2GB = 36B GB-seconds
Monthly compute duration charges: 36B GB-s * $0.0000166667/GB-s = $600,001.20
Total monthly charges = request charges + compute duration charges = $60.00 + $600,001.20 = $600,061.20

With tiered pricing, the compute duration portion now also exceeds 15B GB-seconds and receives an automatic discount as follows:

Monthly request charges: 300M * $0.20/million = $60.00
Monthly compute duration (seconds): 300M * 60 seconds = 18B seconds
Monthly compute (GB-seconds): 18B seconds * 2GB = 36B GB-seconds
Monthly compute duration charge (tier 1): 6B GB-s * $0.0000166667/GB-s = $100,000.20
Monthly compute duration charge (tier 2): 9B GB-s * $0.0000150000/GB-s = $135,000.27
Monthly compute duration charge (tier 3): 21B GB-s * $0.0000133333/GB-s = $280,000.56
Monthly compute duration charges (post-discount): $100,000.02 + $135,000.27 + $280,000.56 = $515,001.03.
Total monthly charges = request charges + compute duration charges = $60.00 + $515,001.03 = $515,061.03 ($85,000.17 cost savings)

Tiered pricing discount example with decreased growth

Alternatively, customers used the service less frequently than expected. As a result, usage in the following month is one-third the prior month’s usage, resulting in 25 million function invocations.

Without tiered pricing, this workload costs the following:

Monthly request charges: 25M * $0.20/million = $5.00
Monthly compute duration (seconds): 25M * 60 seconds = 1.5B seconds
Monthly compute (GB-seconds): 1.5B seconds * 2GB = 3B GB-seconds
Monthly compute duration charges: 3B GB-s * $0.0000166667/GB-s = $50,000.10
Total monthly charges = request charges + compute duration charges = $5.00 + $50,000.10 = $50,005.10

When considering tiered pricing, the compute duration portion is under 6B GB-s and is priced without any additional pricing discounts. In this case, the financial services provider did not grow the business as expected or take advantage of tiered pricing. However, they did take advantage of Lambda’s pay-as-you-go model, paying only for the compute that this application used.

Summary and other considerations

Tiered pricing for Lambda applies to the compute duration portion of your on-demand function invocations. It is specific to the architecture (x86 or arm64) and is bucketed by the Region. Refer to the previous table for the specific pricing tiers.

For example, consider a function that is using x86 architecture, deployed in both us-east-1 and us-west-2. Usage in us-east-1 is bucketed and priced separately from usage in us-west-2. If there is a function using arm64 architecture in us-east-1 and us-west-2, that function is also in a separate bucket.

The cost for invocation requests remains the same. The discount applies only to on-demand compute duration and does not apply to provisioned concurrency. Customers who also purchase Compute Savings Plans (CSPs) can take advantage of both, where Lambda applies tiered pricing first, followed by CSPs.

Conclusion

With tiered pricing for Lambda, you can save on the compute duration portion of your monthly Lambda bills. This allows you to architect, build, and run large-scale applications on Lambda and take advantage of these tiered prices automatically.

For more information on tiered pricing for Lambda, see: https://aws.amazon.com/lambda/pricing/.

Using certificate-based authentication for iOS applications with Amazon SNS

Post Syndicated from Sam Dengler original https://aws.amazon.com/blogs/compute/using-certificate-based-authentication-for-ios-applications-with-amazon-sns/

This blog post is written by Yashlin Naidoo, Arnav Thakur, Kim Read, Guilherme Silva.

Amazon SNS enables you to send notifications to a mobile push endpoint using a platform application endpoint by dispatching the notification on your application’s behalf. Push notifications for iOS apps are sent using Apple Push Notification Service (APNs).

To send push notifications using SNS with APNs certificate-based authentication, you must provide a set of credentials for connecting to the Apple Push Notification service (see prerequisites for push). SNS supports certificate-based authentication (.p12), in addition to the newer token-based authentication (.p8).

Certificate-based authentication uses a provider certificate to establish a secure connection between your provider and APNs. These certificates are tied to a single application and are used to send notifications to this application. This approach can be useful when you haven’t migrated to the new token-based authentication.

For new applications, we recommend using token-based authentication as it provides improved security. It removes the need for yearly renewal of the certificates and can also be shared amongst multiple applications. To learn about how to use token-based authentication, visit Token-Based authentication for iOS applications with Amazon SNS in the AWS Compute Blog.

This blog shows step-by-step instructions on how to build an iOS application. You learn how to create a new certificate from your Apple developer account, and set up a platform application and endpoint in the SNS console. Next, you will learn how to test your application by sending a push notification via SNS to your device. Finally, you view the push notification delivered to your device.

Setting up your iOS application

This section will go over:

  • Creating an iOS application.
  • Creating a .p12 certificate to upload to SNS.

Prerequisites:

Creating an iOS application

  1. Create a new XCode project. Select iOS as the platform.

    New XCode project

  2. Select your Apple Developer Account team and organization identifier.

    Select your Apple Developer Account team

  3. In your project, go to Signing & Capabilities. Under signing, ensure that “Automatically manage signing” is checked and your team is selected.

    Signing & Capabilities

  4. To add the push notification capability to your application, select “+” and select Push Notifications.
    Add push notification capability

    This step creates resources on your Apple Developer Account (the App ID and adds Push notification capability to it). You can also verify this in your Apple Developer Account.

  5. Add the following code to AppDelegate.swift:
        import UIKit
        import UserNotifications

        @main
        class AppDelegate: UIResponder, UIApplicationDelegate {

            func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
                // Override point for customization after application launch

                // Call to register for push notifications when launched
                registerForPushNotifications()

                return true
            }

            // MARK: UISceneSession Lifecycle

            func application(_ application: UIApplication, configurationForConnecting connectingSceneSession: UISceneSession, options: UIScene.ConnectionOptions) -> UISceneConfiguration {
                // Called when a new scene session is being created.
                // Use this method to select a configuration to create the new scene with.
                return UISceneConfiguration(name: "Default Configuration", sessionRole: connectingSceneSession.role)
            }

            func application(_ application: UIApplication, didDiscardSceneSessions sceneSessions: Set<UISceneSession>) {
                // Called when the user discards a scene session.
                // If any sessions were discarded while the application was not running, this will be called shortly after application:didFinishLaunchingWithOptions.
                // Use this method to release any resources that were specific to the discarded scenes, as they will not return.
            }

            func getNotificationSettings() {
                UNUserNotificationCenter.current().getNotificationSettings { settings in
                    print("Notification settings: \(settings)")

                    guard settings.authorizationStatus == .authorized else { return }
                    DispatchQueue.main.async {
                        UIApplication.shared.registerForRemoteNotifications()
                    }
                }
            }

            func registerForPushNotifications() {
                // 1. This handles all notification-related activities in the app, including push notifications
                UNUserNotificationCenter.current()
                    // 2. This requests authorization to send the types of notifications specified in the options
                    .requestAuthorization(options: [.alert, .sound, .badge]) { [weak self] granted, _ in
                        print("Permission granted: \(granted)")
                        guard granted else { return }
                        self?.getNotificationSettings()
                    }
            }

            func application(
                _ application: UIApplication,
                didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data
            ) {
                let tokenParts = deviceToken.map { data in String(format: "%02.2hhx", data) }
                let token = tokenParts.joined()
                print("Device Token: \(token)")
            }

            func application(
                _ application: UIApplication,
                didFailToRegisterForRemoteNotificationsWithError error: Error
            ) {
                print("Failed to register: \(error)")
            }

        }
  6. Build and run the application on an iPhone. Note that the push notification feature does not work with a simulator.
  7. On your phone, select “Allow” when prompted to allow push notifications.

    Allow push notifications

  8. The debugger prints “Permission granted: true” if successful, along with the Device Token.

    Device token

You have now configured an iOS application that can receive push notifications. Next, use the application to test sending push notifications with SNS using certificate-based authentication.

Creating a .p12 certificate to upload to SNS

After completing the previous step, you need:

  • An app identifier
  • A certificate signing request (CSR)
  • An SSL certificate

Create an identifier

  1. Log in to your Apple Developer Account.
  2. Choose Certificates, Identifiers & Profiles.
  3. In the Identifiers section, choose the Add button (+).
  4. In the Register a new identifier section, choose App IDs and select Continue.
  5. In the Select a type section, choose App, and select Continue.
  6. For Description, type the application description.
  7. For Bundle ID, use the Bundle ID assigned to your application. You can find this ID under Signing & Capabilities of your application in XCode (see step 3 under “Creating an application”).
  8. Under Capabilities, choose Push Notifications.
  9. Select Continue. In the Confirm your App ID panel, check that all values were entered correctly. The identifier should match your app ID and bundle ID.
  10. Select Register to register the new app ID.

Create a certificate signing request (CSR)

  1. Open Keychain Access located in /Applications/Utilities or search for it on Finder.
  2. Once opened, choose Keychain Access in the menu bar (next to the Apple icon), navigate to Certificate Assistant, and choose Request a Certificate from a Certificate Authority.
  3. Enter the Username, Email Address, Common Name and leave CA Email Address empty.
  4. Choose Saved to disk and choose Continue.

Create a certificate

  1. Log in to your Apple Developer Account.
  2. Choose Certificates, Identifiers & Profiles.
  3. In the Certificate section, select Create new certificate.
  4. Under services, choose your certificate: Apple Push Notification service SSL (Sandbox)/Apple Push Notification service SSL (Sandbox & Production).
  5. Keep Platform as iOS and choose App ID (Identifier) created previously.
  6. Upload the Certificate Signing Request created in the previous step and Download your certificate.

Create .p12 certificate to upload to SNS

  1. Once your certificate.cer file is downloaded (for example, “aps_development.cer”), open it to add it to Keychain Access. Find Apple Development iOS Push Services: (Your Identifier Name/App ID Name) and ensure that the certificate is placed in the “Login” keychain.
  2. Right-click the certificate, choose Export, select the .p12 file format, and choose Save. Optionally, set a password.

Creating a new platform application using APNs certificate-based authentication

Prerequisites

To implement APNs certificate-based authentication from SNS, you must have:

  • An Apple Developer Account
  • An iOS mobile application

To create a new SNS platform application, which is used to store the push notification platform credentials and related configurations:

  1. Navigate to the SNS Console. Expand the Mobile menu and choose Create platform application.
  2. For the Application name field, enter an application name such as “myfirstiOSapp”. For Push Notification Platform, select Apple iOS/ VoIP/ macOS.

    Create platform application

  3. Under the Apple Credentials section:
    1. If your application is in development, select the radio button for Used for development in sandbox. If your application is in production, uncheck Used for development in sandbox.
    2. For Push service, choose iOS and for Authentication method, choose Certificate.
    3. Under Certificate, select Choose file to upload the .p12 certificate file.
    4. If you configured a password while creating the certificate, enter this in the Certificate Password field.
    5. Choose Load Credentials from File to extract the Certificate and private key components.
  4. Event Notifications, Delivery Status Logging – Optional: Refer to the guide for enabling Delivery Status logs and the guide to set up Mobile Event related Notifications. More on this step can also be found in the best practices guide.

    Enter Apple credentials

  5. Choose Create Platform Application. This creates a certificate-based authentication APNs Platform Application for iOS.

    Create platform application
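You can also create the platform application programmatically instead of using the console. The following is a minimal boto3 sketch, assuming you have exported the certificate and private key from the .p12 file as PEM files; the file names and application name are placeholders:

import boto3

sns = boto3.client("sns")

# PEM file names are assumptions for illustration; export them from your .p12 file.
with open("aps_certificate.pem") as cert_file, open("aps_private_key.pem") as key_file:
    certificate, private_key = cert_file.read(), key_file.read()

response = sns.create_platform_application(
    Name="myfirstiOSapp",
    Platform="APNS_SANDBOX",  # use "APNS" for a production application
    Attributes={
        "PlatformPrincipal": certificate,   # the APNs SSL certificate
        "PlatformCredential": private_key,  # the certificate's private key
    },
)
print(response["PlatformApplicationArn"])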

Creating a new platform endpoint for APNs certificate-based authentication

To send Push Notifications using SNS, a platform endpoint resource is created to store the destination address of the corresponding iOS application that is associated with the SNS platform application.

A destination address for a user’s device with the iOS application installed is identified by a unique device token, which the app obtains once it has registered successfully with APNs to receive push notifications. SNS uses the device token stored in the platform endpoint resource, together with the configuration in the SNS platform application, to deliver a push notification message.

In the following steps, you create a new platform endpoint for a destination device that has the iOS application installed and can receive push notifications. A programmatic sketch follows the console steps.

  1. Open your Platform Application. Choose Create Application Endpoint.

    Application endpoints list

  2. Locate the device token in the application logs of the iOS app provisioned earlier and enter it in the Device token field.
  3. To store additional arbitrary data for the endpoint, enter it in the User data field.

    Create application endpoint

  4. Choose Create application endpoint. The endpoint details are shown in the console.

    Application endpoint detail
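If you prefer to register the destination device programmatically, the following boto3 sketch creates the platform endpoint; the platform application ARN, device token, and user data values are placeholders:

import boto3

sns = boto3.client("sns")

response = sns.create_platform_endpoint(
    PlatformApplicationArn="arn:aws:sns:us-east-1:123456789012:app/APNS_SANDBOX/myfirstiOSapp",
    Token="<device-token-from-the-app-logs>",
    CustomUserData="optional data about this endpoint",
)
print(response["EndpointArn"])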

Testing a push notification from your device

In this section, you test sending a push notification to your device using the console. A programmatic publishing sketch follows the steps.

  1. From the SNS console, navigate to your platform endpoint and choose Publish message.
  2. Enter a message to send. This example uses a custom payload that allows you to provide additional APNs headers.

    Publish message

  3. Choose Publish message.
  4. The push notification is delivered to your device.

    Notification
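You can also publish the test message programmatically. The following boto3 sketch sends a custom JSON payload containing an APNs alert to the endpoint; the endpoint ARN is a placeholder, and the APNS_SANDBOX key assumes a sandbox platform application (use APNS for production):

import json

import boto3

sns = boto3.client("sns")

apns_payload = {"aps": {"alert": "Hello from Amazon SNS!", "sound": "default"}}

sns.publish(
    TargetArn="arn:aws:sns:us-east-1:123456789012:endpoint/APNS_SANDBOX/myfirstiOSapp/<endpoint-id>",
    MessageStructure="json",
    Message=json.dumps({
        "default": "Hello from Amazon SNS!",
        "APNS_SANDBOX": json.dumps(apns_payload),
    }),
)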

Conclusion

Developers can send mobile push notifications with APNs certificate-based authentication by using a .p12 certificate to authenticate an Apple device endpoint. Certificate-based authentication secures the connection through TLS (Transport Layer Security): the provider (SNS) initiates the request to APNs, and validation by both the provider and APNs is required to complete the secure connection.

Certificates expire annually and must be renewed to ensure that SNS can continue to deliver to the endpoint. In this post, you learn how to create an iOS application for APNs certificate-based authentication and integrate it with SNS to send push notifications to your device using a .p12 certificate to authenticate your application with the mobile endpoint.

To learn more about APNs certificate-based authentication with Amazon SNS, visit the Amazon SNS Developer Guide.

For more serverless learning resources, visit Serverless Land.

Using AWS Lambda to run external transactions on Db2 for IBM i

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/using-aws-lambda-to-run-external-transactions-on-db2-for-ibm-i/

This post is written by Basil Lin, Cloud Application Architect, and Jud Neer, Delivery Practice Manager.

Db2 for IBM i (Db2) is a relational database management system that can pose connectivity challenges with cloud environments because of a lack of native support. However, by using Docker on Amazon ECR and AWS Lambda container images, you can transfer data between the two environments with a serverless architecture.

While mainframe modernization solutions are helping customers migrate from on-premises technologies to agile cloud solutions, a complete migration is not always immediately possible. AWS offers broad modernization support from rehosting to refactoring, and platform augmentation is a common scenario for customers getting started on their cloud journey.

Db2 is a common database in on-premises workloads. One common use case of platform augmentation is maintaining Db2 as the existing system-of-record while rehosting applications in AWS. To ensure Db2 data consistency, a change data capture (CDC) process must be able to capture any database changes as SQL transactions. A mechanism then runs these transactions on the existing Db2 database.

While AWS provides CDC tools for multiple services, converting and running these changes for Db2 requires proprietary IBM drivers. Conventionally, you can implement this by hosting a stream-processing application on a server. However, this approach relies on traditional server architecture, which may be less efficient, incur higher overhead, and may not meet availability requirements.

To avoid these issues, you can build this transaction mechanism using a serverless architecture. This blog post’s approach uses ECR and Lambda to externalize and run serverless, on-demand transactions on Db2 for IBM i databases.

Overview

The solution you deploy relies on a Lambda container image to run SQL queries on Db2. While you provide your own Lambda invocation methods and queries, this solution includes the drivers and connection code required to interface with Db2. The following architecture diagram shows this generic solution with no application-specific triggers:

Architecture diagram

This solution builds a Docker image containerized with Db2 interfacing code. The code consists of a Lambda handler to run the specified database transactions, a base class that helps create database Python functions via Open Database Connectivity (ODBC), and finally a forwarder class to establish encrypted connections with the target database.

Deployment scripts create the Docker image, deploy the image to ECR, and create a Lambda function from the image. Lambda then runs your queries on your target Db2 database. This solution does not include the Lambda invocation trigger, the Amazon VPC, and the AWS Direct Connect connection as part of the deployment, but these components may be necessary depending on your use case. The README in the sample repository shows the complete deployment prerequisites.

To interface with Db2, the Lambda function establishes an ODBC session using a proprietary IBM driver. This enables the use of high-level ODBC functions to manipulate the Db2 database management system.

Even with the proprietary driver, ODBC does not properly support TLS encryption with Db2. During testing, enabling the TLS encryption option can cause issues with database connectivity. To work around this limitation, a forwarding package captures all ODBC traffic and forwards packets using TLS encryption to the database. The forwarder opens a local socket listener on port 8471 for unencrypted loopback connections. Once the Lambda function initializes an unencrypted ODBC connection locally, the forwarding package then captures, encrypts, and forwards all ODBC calls to the target Db2 database. This method allows Lambda to form encrypted connections with your target database while still using ODBC to control transactions.

With secure connectivity in place, you can invoke the Lambda function. The function starts the forwarder and retrieves Db2 access credentials from AWS Secrets Manager, as shown in the following diagram. The function then attempts an ODBC loopback connection to send transactions to the forwarder.

Flow process

If the connection is successful, the Lambda function runs the queries, and the forwarder sends them to the target Db2 database. However, if the connection fails, the function makes a second attempt, restarting both the forwarder module and the loopback connection. If the second attempt also fails, the function errors out.

After the transactions complete, a cleanup process runs and the function exits with a success status, unless an exception occurs during the function invocation. If an exception arises during the transaction, the function exits with a failure status. This is an important consideration when building retry mechanisms. You must review Lambda exit statuses to prevent default AWS retry mechanisms from causing unintended invocations.
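The following Python sketch illustrates this connection flow inside the Lambda handler. It is not the repository’s actual code: the forwarder helpers, secret name, driver name, and event shape are assumptions for illustration only.

import json

import boto3
import pyodbc  # provided by the IBM driver layer packaged in the container image

secrets = boto3.client("secretsmanager")

def start_forwarder():
    """Hypothetical stand-in for the forwarder module listening on localhost:8471."""

def restart_forwarder():
    """Hypothetical stand-in that restarts the forwarder before the second attempt."""

def handler(event, context):
    # Retrieve Db2 connection details from Secrets Manager (secret name is an assumption)
    secret = json.loads(
        secrets.get_secret_value(SecretId="db2-connection-details")["SecretString"]
    )

    start_forwarder()

    # The local loopback connection is unencrypted; the forwarder encrypts and forwards it.
    conn_str = (
        "DRIVER={IBM i Access ODBC Driver};"  # driver name is an assumption
        "SYSTEM=localhost;PORT=8471;"
        f"UID={secret['username']};PWD={secret['password']};"
    )

    try:
        conn = pyodbc.connect(conn_str, timeout=10)
    except pyodbc.Error:
        # Second attempt: restart the forwarder and retry the loopback connection once
        restart_forwarder()
        conn = pyodbc.connect(conn_str, timeout=10)  # raises if this also fails

    cursor = conn.cursor()
    for statement in event.get("sql_statements", []):
        cursor.execute(statement)
    conn.commit()
    conn.close()

    return {"status": "SUCCESS"}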

To simplify deployment, the solution contains scripts you can use. Once you provide AWS credentials, the deployment script deploys a base set of infrastructure into AWS, including the ECR repository for the Docker images and the Secrets Manager secret for the Db2 configuration details.

The deployment script also asks for Db2 configuration details. After you finish entering these, the script sends the information to AWS to configure the previously deployed secret.

Once the secret configuration is complete, the script then builds and pushes a base Docker image to the deployed ECR repository. This base image contains a few basic Python prerequisite libraries necessary for the final code, and also the RPM driver for interfacing with Db2 via ODBC.

Finally, the script builds the solution infrastructure and deploys it into the AWS Cloud. Using the base image in ECR, it creates a Lambda function from a new Docker container image containing the SQL queries and the ODBC transaction code. After deployment, the solution is ready for testing and customization for your use case.

Prerequisites

Before deployment, you must have the following:

  1. The cloned code repository locally.
  2. A local environment configured for deployment.
  3. Amazon VPC and networking configured with Db2 access.

You can find detailed prerequisites and associated instructions in the README file.

Deploying the solution

The deployment creates an ECR repository, a Secrets Manager secret, a Lambda function built from a base container image uploaded to the ECR repo, and associated elastic network interfaces (ENIs) for VPC access.

Because of the complexity of the deployment, a combination of Bash and Python scripts automates the process by automatically deploying infrastructure templates, building and pushing container images, and prompting for input where required. Refer to the README included in the repository for detailed instructions.

To deploy:

  1. Ensure you have met the prerequisites.
  2. Open the README file in the repository and follow the deployment instructions:
    1. Configure your local AWS CLI environment.
    2. Configure the project environment variables file.
    3. Run the deployment scripts.
  3. Test connectivity by invoking the deployed Lambda function.
  4. Change the infrastructure and code for your specific queries and use cases.

Cleanup

To avoid incurring additional charges, ensure that you delete unused resources. The README contains detailed instructions. You may either manually delete the resources provisioned through the AWS Management Console, or use the automated cleanup script in the repository. The deletion of resources may take up to 45 minutes to complete because of the ENIs created for Lambda in your VPC.

Conclusion

In this blog post, you learn how to run external transactions securely on Db2 for IBM i databases using a combination of Amazon ECR and AWS Lambda. By using Docker to package the driver, forwarder, and custom queries, you can execute transactions from Lambda, allowing modern architectures to interface directly with Db2 workloads. Get started by cloning the GitHub repository and following the deployment instructions.

For more serverless learning resources, visit Serverless Land.

Migrating mainframe JCL jobs to serverless using AWS Step Functions

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/migrating-mainframe-jcl-jobs-to-serverless-using-aws-step-functions/

This post is written by Raghuveer Reddy Talakola, Sr. Modernization Architect, Sanjay Rao, Sr. Mainframe Consultant, and Aneel Murari, Solution Architect.

JCL (Job Control Language) is a scripting language used to program batch jobs on mainframe systems. A JCL can contain one or more job control statements. It can be challenging to understand the condition code parameter checking syntax, which determines the order and conditions under which these statements run.

If a JCL fails midway through execution, mainframe programmers have no visual aids to help them understand the flow of the JCL. They must examine text-based execution logs to manually correlate condition codes in the logs with condition check rules attached to JCL statements to understand the root cause of failure.

This post explains how AWS Step Functions can make it easier to maintain batch jobs migrated from mainframes to AWS.

Overview

The sample application shows how to use AWS Step Functions to address typical challenges when maintaining a batch workflow built using JCL. The sample business case validates a feed of new employee information against an existing employee database. It identifies discrepancies between the feed and the database and sends out notifications if it finds any.

The mainframe JCL supplied with this blog has seven steps. Each step applies condition code rules to check codes emitted by previous steps to decide if it must run. The Step Functions example achieves the same result. Using its graphical user interface, you can develop each step as an independent task, and link them visually. This makes it easier to understand how to decouple, reorder, or scale tasks if needed.

Visual tools for workflow analysis

A JCL controls its flow by using condition code checking and/or IF-ELSE statements. A JCL condition code check defines the rules under which its associated JCL step does not run. Developers may code compound rules, double negatives, or three or more negative conditions into the flow.

Example of condition code check in JCL | What it means
//STEPTS2 EXEC PGM=XYZ,COND=(4,GT,STEPTST) | Do not execute PGM XYZ if previous step STEPTST ended execution with a code greater than 4
//STEPTS3 EXEC PGM=XYZ,COND=EVEN | Execute PGM XYZ even if all the previous steps failed
//STEPTS5 EXEC PGM=XYZ,COND=((6,EQ),(8,GT)) | Do not execute PGM XYZ if any of the preceding steps exited with return code 6 or a code greater than 8

The sample JCL illustrates the complexity of setting up a batch workflow using JCL condition code:

  1. The first step of this JCL deletes files from a previous run. If it ends with code 0, the second JCL step extracts employee data from Db2 using a COBOL program and ends with a return code 0 if it is successful or 4 if no records were found.
  2. The next step, coded with condition check (4,LT), runs if all preceding steps ended with codes less than 5. It checks the external extract and emits a condition code of 8 if the external extract is empty.
  3. The next step compares the two files if the extract validation step produced a return code of zero.
  4. If this comparison step detects some records that are missing in the employee Db2 database, it creates a file with missing records. If that file is empty, it sets a return code of 8, which ends the program. If the mismatch file has data, it copies the mismatch file over to another system for processing.

With Step Functions, you define the same workflow more easily by using the Amazon States Language (ASL). The Step Functions console provides a graphical representation of the state machine, so you can visualize the application logic and build it using a drag-and-drop interface.

Step Functions Workflow Studio

  1. The first task fetches the employee file from Amazon S3. It does not need a cleanup task as S3 supports versioning.
  2. If the fetched file is not empty, control passes to the step that runs business logic code inside an AWS Lambda function to validate the employee feed.
  3. The workflow retrieves an environment variable from an external parameter store. This step shows how environment parameters can be externalized in a Step Functions workflow.
  4. It publishes an event to Amazon EventBridge to trigger the external processing needed if discrepancies are found and conditions are met.
  5. The final step is a Succeeded state that marks flow completion.

The following image compares the sample JCL that is converted to a Step Functions workflow:

Sample JCL and Step Functions

Using a graphical interface instead of job control statements

In JCL, you define a batch process with a series of job control statements, which run a program, utility, or a nested procedure in a text editor. There is no visual aid. If a batch process becomes complex, it’s harder to understand the dependencies between the steps.

Step Functions makes it simpler for you to set up tasks, which are the equivalents of steps in JCL. It provides you with a graphical user interface (GUI) that enables you to configure and drag-and-drop steps into a state machine.

Decoupling tasks instead of deleting and commenting of code

To disable or change a step in a JCL, you examine the condition code logic associated with all preceding and succeeding steps of the job. Any mistake in editing these codes can lead to unintended consequences.

With Step Functions, removing or changing a step can be done using the visual editor or by updating the ASL code. This can help improve your agility and make it easier to implement change.

Using Parameter Store instead of editing parameters in code

To make JCL behave differently based on parameters, you must edit dynamic variables known as JCL Symbols inside the JCL or in control cards to affect the behavior change. The following JCL code sample shows a parameter called REGN coded to value DEV. At runtime, this REGN parameter is substituted by DEV in every statement that references this parameter. To reuse this JCL in production, you can change the value assigned to REGN to say PROD.

//   SET REGN=DEV
//    -------
//******************************************************************
//*  RUN  Db2 COBOL Batch Program 
//******************************************************************
//EXTRDB2 EXEC PGM=IKJEFT01,COND=(0,NE)                                
//    -------
//FILEOUT  DD DSN=&REGN..AWS.APG.STEPDB2,                             
//******************************************************************
//*  RUN  VSAM COBOL Batch Program 
//******************************************************************
//    -------
//FILE2    DD DSN=&REGN..AWS.APG.STEPVSM,                             

In Step Functions, configuration parameters can be decoupled from state machine code by managing them in an external data source, such as Amazon DynamoDB or AWS Systems Manager Parameter Store. In the Step Functions workflow, the following step demonstrates retrieving a configuration value from Parameter Store and using it to perform branching logic:

Workflow example
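As an illustration of the same idea in code, the following sketch shows a Lambda task retrieving a parameter with boto3 and branching on it. The parameter name is an assumption, and the sample workflow may instead use a direct SDK integration from Step Functions to Parameter Store:

import boto3

ssm = boto3.client("ssm")

def handler(event, context):
    # Parameter name is a placeholder for illustration
    region = ssm.get_parameter(Name="/employee-validation/REGN")["Parameter"]["Value"]

    # Branch on the externalized configuration instead of a value hard-coded in the workflow
    if region == "PROD":
        return {"notify": True, "region": region}
    return {"notify": False, "region": region}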

Independent scaling of steps versus splitting and cloning JCLs

When a JCL takes a long time to run, mainframe programmers split the job into multiple jobs or steps. Each job is a replica that addresses a different range of data.

With Step Functions, you can run a step or a group of steps concurrently by using a parallel state or map state, without creating multiple jobs that do the same thing. This can help make maintenance easier.

Improved observability and automated retry

If a JCL fails, there are no visual aids to help debug the errors. On the mainframe, you must log into the mainframe and run through several screens of text on SDSF (System Display and Search Facility) to find the cause of the failure.

Step Functions provide visual information on failures, automated retry capabilities, and native integration with AWS services. This can make it easier to understand and recover from failed jobs compared with reading through lengthy logs.

JCL example

Workflow visualization

Benefits for developers

Step Functions provides the following improvements over jobs written in JCL or migrated from JCL.

  • Visual analysis: Step Functions provide a graphical console that shows the status of each task in a visual presentation that developers and support staff can understand and debug more easily than a failed JCL.
  • Decoupling: You can update each component in the workflow independently, unlike in a JCL, where changing a step requires redeployment of the entire batch job to production.
  • Low code: Step Functions are defined with minimal code. The workflow editor can be used to drag and drop different steps and visually edit the workflows.
  • Independent scaling of steps: Step Functions is a serverless solution, and each step can scale independently. This opens up the possibility of scaling up resources for steps that are resource-intensive.
  • Automated retry capabilities: You can configure Step Functions to retry steps and recover from failures. This is much simpler than coding restart conditions in the JCL.
  • Improved logging and visibility: Step Functions can integrate with observability tools like Amazon CloudWatch and AWS X-Ray.

Conclusion

This conversion example shows how Step Functions can help you rewrite complex batch processes written in JCL to serverless workflows. It also shows how such a conversion provides maintenance and monitoring features that make it easier to simplify and scale these batch processes.

To learn more, download the sample JCL and Step Functions workflow from the GitHub repository. To learn more about our AWS Mainframe migration and modernization services, go here.

For more serverless learning resources, visit Serverless Land.

Scaling AWS Lambda permissions with Attribute-Based Access Control (ABAC)

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/scaling-aws-lambda-permissions-with-attribute-based-access-control-abac/

This blog post is written by Chris McPeek, Principal Solutions Architect.

AWS Lambda now supports attribute-based access control (ABAC), allowing you to control access to Lambda functions within AWS Identity and Access Management (IAM) using tags. With ABAC, you can scale an access control strategy by setting granular permissions with tags without requiring permissions updates for every new user or resource as your organization scales.

This blog post shows how to use tags for conditional access to Lambda resources. You can control access to Lambda resources using ABAC by using one or more tags within IAM policy conditions. This can help you scale permissions in rapidly growing environments. To learn more about ABAC, see What is ABAC for AWS, and AWS Services that work with IAM.

Each tag in AWS is a label comprising a user-defined key and value. Customers often use tags with Lambda functions to define keys such as cost center, environment, project, and teams, along with values that map to these keys. This helps with discovery and cost allocation, especially in accounts that may have many Lambda functions. AWS best practices for tagging are included in Tagging AWS resources.

You can now use these same tags, or create new ones, and use them to grant conditional IAM access to Lambda functions more easily. As projects start and finish, employees move to different teams, and applications grow, maintaining access to resources can become cumbersome. ABAC helps developers and security administrators work together to maintain least privilege access to their resources more effectively by using the same tags on IAM roles and Lambda functions. Security administrators can allow or deny access to Lambda API actions when the IAM role tags match the tags on a Lambda function, ensuring least privilege. As developers add additional Lambda functions to the project, they simply apply the same tag when they create a new Lambda function, which grants the same security credentials.

ABAC in Lambda

Using ABAC with Lambda is similar to developing ABAC policies when working with other services. To illustrate how to use ABAC with Lambda, consider a scenario where two new developers join existing projects called Project Falcon and Project Eagle. Project Falcon uses ABAC for authorization using the tag key project-name and value falcon. Project Eagle uses the tag key project-name and value eagle.

Projects Falcon and Eagle tags

The two new developers need access to the Lambda console. The security administrator creates the following policy to allow the developers to list the existing functions that are available using ListFunction. The GetAccountSettings permission allows them to retrieve Lambda-specific information about their account.

{
"Version": "2012-10-17",
"Statement": [
    {
    "Sid": "AllResourcesLambdaNoTags",
    "Effect": "Allow",
    "Action": [
        "lambda:ListFunctions",
        "lambda:GetAccountSettings"
    ],
    "Resource": "*"
    }
]
}

Condition key mappings

The developers then need access to Lambda actions that are part of their projects. The Lambda actions are API calls such as InvokeFunction or PutFunctionConcurrency (see the following table). IAM condition keys are then used to refine the conditions under which an IAM policy statement applies.

Lambda supports the existing global context key:

  • "aws:PrincipalTag/${TagKey}": Control what the IAM principal (the person making the request) is allowed to do based on the tags that are attached to their IAM user or role.

As part of ABAC support, Lambda now supports three additional condition keys:

  • "aws:ResourceTag/${TagKey}": Control access based on the tags that are attached to Lambda functions.
  • "aws:RequestTag/${TagKey}": Require tags to be present in a request, such as when creating a new function.
  • "aws:TagKeys": Control whether specific tag keys can be used in a request.

For more details on these condition context keys, see AWS global condition context keys.

When using condition keys in IAM policies, each Lambda API action supports different tagging condition keys. The following table maps each condition key to its Lambda actions.

Condition keys supported | Description | Lambda actions

aws:ResourceTag/${TagKey}
Set this tag value to allow or deny user actions on resources with specific tags.
Lambda actions: lambda:AddPermission, lambda:CreateAlias, lambda:CreateFunctionUrlConfig, lambda:DeleteAlias, lambda:DeleteFunction, lambda:DeleteFunctionCodeSigningConfig, lambda:DeleteFunctionConcurrency, lambda:DeleteFunctionEventInvokeConfig, lambda:DeleteFunctionUrlConfig, lambda:DeleteProvisionedConcurrencyConfig, lambda:DisableReplication, lambda:EnableReplication, lambda:GetAlias, lambda:GetFunction, lambda:GetFunctionCodeSigningConfig, lambda:GetFunctionConcurrency, lambda:GetFunctionConfiguration, lambda:GetFunctionEventInvokeConfig, lambda:GetFunctionUrlConfig, lambda:GetPolicy, lambda:GetProvisionedConcurrencyConfig, lambda:InvokeFunction, lambda:InvokeFunctionUrl, lambda:ListAliases, lambda:ListFunctionEventInvokeConfigs, lambda:ListFunctionUrlConfigs, lambda:ListProvisionedConcurrencyConfigs, lambda:ListTags, lambda:ListVersionsByFunction, lambda:PublishVersion, lambda:PutFunctionCodeSigningConfig, lambda:PutFunctionConcurrency, lambda:PutFunctionEventInvokeConfig, lambda:PutProvisionedConcurrencyConfig, lambda:RemovePermission, lambda:UpdateAlias, lambda:UpdateFunctionCode, lambda:UpdateFunctionConfiguration, lambda:UpdateFunctionEventInvokeConfig, lambda:UpdateFunctionUrlConfig

aws:ResourceTag/${TagKey}, aws:RequestTag/${TagKey}, aws:TagKeys
Set this tag value to allow or deny user requests to create a Lambda function.
Lambda actions: lambda:CreateFunction

aws:ResourceTag/${TagKey}, aws:RequestTag/${TagKey}, aws:TagKeys
Set this tag value to allow or deny user requests to add or update tags.
Lambda actions: lambda:TagResource

aws:ResourceTag/${TagKey}, aws:TagKeys
Set this tag value to allow or deny user requests to remove tags.
Lambda actions: lambda:UntagResource

Security administrators create conditions that only permit the action if the tag matches between the role and the Lambda function.
In this example, the policy grants access to all Lambda function API calls when a project-name tag exists and matches on both the developer’s IAM role and the Lambda function.

{
"Version": "2012-10-17",
"Statement": [
    {
    "Sid": "AllActionsLambdaSameProject",
    "Effect": "Allow",
    "Action": [
        "lambda:InvokeFunction",
        "lambda:UpdateFunctionConfiguration",
        "lambda:CreateAlias",
        "lambda:DeleteAlias",
        "lambda:DeleteFunction",
        "lambda:DeleteFunctionConcurrency", 
        "lambda:GetAlias",
        "lambda:GetFunction",
        "lambda:GetFunctionConfiguration",
        "lambda:GetPolicy",
        "lambda:ListAliases", 
        "lambda:ListVersionsByFunction",
        "lambda:PublishVersion",
        "lambda:PutFunctionConcurrency",
        "lambda:UpdateAlias",
        "lambda:UpdateFunctionCode"
    ],
    "Resource": "arn:aws:lambda:*:*:function:*",
    "Condition": {
        "StringEquals": {
        "aws:ResourceTag/project-name": "${aws:PrincipalTag/project-name}"
        }
    }
    }
]
}

In this policy, Resource is wild-carded as "*" for all Lambda functions. The condition limits access to only resources that have the same project-name key and value, without having to list each individual Amazon Resource Name (ARN).

The security administrator creates an IAM role for each developer’s project, such as falcon-developer-role or eagle-developer-role. Since the policy references both the function tags and the IAM role tags, she can reuse the previous policy and apply it to both of the project roles. Each role should have the tag key project-name with the value set to the project, such as falcon or eagle. The following shows the tags for Project Falcon:

Tags for Project Falcon

The developers now have access to the existing Lambda functions in their respective projects. The developer for Project Falcon needs to create additional Lambda functions for only their project. Since the project-name tag also authorizes who can access the function, the developer should not be able to create a function without the correct tags. To enforce this, the security administrator applies a new policy to the developer’s role using the RequestTag condition key to specify that a project-name tag exists:

{
"Version": "2012-10-17",
"Statement": [
    {
    "Sid": "AllowLambdaTagOnCreate",
    "Effect": "Allow",
    "Action": [
        "lambda:CreateFunction",
        "lambda:TagResource"
    ],
    "Resource": "arn:aws:lambda:*:*:function:*",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/project-name": "${aws:PrincipalTag/project-name}"
        },
        "ForAllValues:StringEquals": {
            "aws:TagKeys": [
                "project-name"
            ]
        }
    }
    }
]
}

To create the functions, the developer must add the key project-name and value falcon to the tags. Without the tag, the developer cannot create the function.

Project Falcon tags
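For example, a developer creating a function with the AWS SDK passes the tag at creation time, which satisfies the aws:RequestTag and aws:TagKeys conditions in the policy above. The function name, role ARN, and deployment package in this sketch are placeholders:

import boto3

lambda_client = boto3.client("lambda")

with open("function.zip", "rb") as package:
    lambda_client.create_function(
        FunctionName="falcon-data-processor",  # placeholder name
        Runtime="python3.9",
        Role="arn:aws:iam::123456789012:role/falcon-lambda-execution-role",  # placeholder
        Handler="app.handler",
        Code={"ZipFile": package.read()},
        Tags={"project-name": "falcon"},  # required by the ABAC policy above
    )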

Because Project Falcon is using ABAC, by tagging the Lambda functions during creation, they did not need to engage the security administrator to add additional ARNs to the IAM policy. This provides flexibility to the developers to support their projects. This also helps scale the security administrators’ function by no longer needing to coordinate which resources need to be added to IAM policies to maintain least privilege access.

The project then adds a manager who requires read access to the projects, as long as those projects are also in the organization labeled birds and tagged with cost-center : it.

Organization and Cost Center tags

The security administrator creates a new IAM policy called manager-policy with the following statements:

{
"Version": "2012-10-17",
"Statement": [
    {
    "Sid": "AllActionsLambdaManager",
    "Effect": "Allow",
    "Action": [
        "lambda:GetAlias",
        "lambda:GetFunction",
        "lambda:GetFunctionConfiguration",
        "lambda:GetPolicy",
        "lambda:ListAliases",
        "lambda:ListVersionsByFunction"
    ],
    "Resource": "arn:aws:lambda:*:*:function:*",
    "Condition": {
        "StringEquals": {
            "aws:ResourceTag/organization": "${aws:PrincipalTag/organization}",
            "aws:ResourceTag/cost-center": "${aws:PrincipalTag/cost-center}"
        }
    }
    }
]
}

The security administrator attaches the policy to the manager’s role along with the tags organization : birds and cost-center : it. If any of the projects change organization, the manager no longer has access, even if the cost-center tag still matches.

In this policy, the condition ensures that both the cost-center and organization tags exist on the function and that their values equal the tags on the manager’s role. Even if the cost-center tag matches for both the Lambda function and the manager’s role, IAM denies access to the Lambda function if the manager’s organization tag doesn’t match. Tags themselves are only key:value pairs with no relationship to other tags. You can use multiple tags, as in this example, to define Lambda function permissions more granularly.

Conclusion

You can now use attribute-based access control (ABAC) with Lambda to control access to functions using tags. This allows you to scale your access controls by simplifying the management of permissions while still maintaining least privilege security best practices. Security administrators can coordinate with developers on a tagging strategy and create IAM policies with ABAC condition keys. This then gives freedom to developers to grow their applications by adding tags to functions, without needing a security administrator to update individual IAM policies.

Support for attribute-based access control (ABAC) for Lambda functions is also available through many AWS Lambda Partners, such as Lumigo, Pulumi, and Vertical Relevance.

For additional documentation on ABAC with Lambda see Attribute-based access control for Lambda.

Introducing Amazon CodeWhisperer in the AWS Lambda console (In preview)

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/introducing-amazon-codewhisperer-in-the-aws-lambda-console-in-preview/

This blog post is written by Mark Richman, Senior Solutions Architect.

Today, AWS is launching a new capability to integrate the Amazon CodeWhisperer experience with the AWS Lambda console code editor.

Amazon CodeWhisperer is a machine learning (ML)–powered service that helps improve developer productivity. It generates code recommendations based on code comments written in natural language and on the existing code.

CodeWhisperer is available as part of the AWS toolkit extensions for major IDEs, including JetBrains, Visual Studio Code, and AWS Cloud9, currently supporting Python, Java, and JavaScript. In the Lambda console, CodeWhisperer is available as a native code suggestion feature, which is the focus of this blog post.

CodeWhisperer is currently available in preview with a waitlist. This blog post explains how to request access to and activate CodeWhisperer for the Lambda console. Once activated, CodeWhisperer can make code recommendations on-demand in the Lambda code editor as you develop your function. During the preview period, developers can use CodeWhisperer at no cost.

Amazon CodeWhisperer

Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications and only pay for what you use.

With Lambda, you can build your functions directly in the AWS Management Console and take advantage of CodeWhisperer integration. CodeWhisperer in the Lambda console currently supports functions using the Python and Node.js runtimes.

When writing AWS Lambda functions in the console, CodeWhisperer analyzes the code and comments, determines which cloud services and public libraries are best suited for the specified task, and recommends a code snippet directly in the source code editor. The code recommendations provided by CodeWhisperer are based on ML models trained on a variety of data sources, including Amazon and open source code. Developers can accept the recommendation or simply continue to write their own code.

Requesting CodeWhisperer access

CodeWhisperer integration with Lambda is currently available as a preview only in the N. Virginia (us-east-1) Region. To use CodeWhisperer in the Lambda console, you must first sign up to access the service in preview here or request access directly from within the Lambda console.

In the AWS Lambda console, under the Code tab, in the Code source editor, select the Tools menu, and Request Amazon CodeWhisperer Access.

Request CodeWhisperer access in Lambda console

You may also request access from the Preferences pane.

Request CodeWhisperer access in Lambda console preference pane

Selecting either of these options opens the sign-up form.

CodeWhisperer sign up form

Enter your contact information, including your AWS account ID. This is required to enable the AWS Lambda console integration. You will receive a welcome email from the CodeWhisperer team once they approve your request.

Activating Amazon CodeWhisperer in the Lambda console

Once AWS enables your preview access, you must turn on the CodeWhisperer integration in the Lambda console, and configure the required permissions.

From the Tools menu, enable Amazon CodeWhisperer Code Suggestions.

Enable CodeWhisperer code suggestions

You can also enable code suggestions from the Preferences pane:

Enable CodeWhisperer code suggestions from Preferences pane

The first time you activate CodeWhisperer, you see a pop-up containing terms and conditions for using the service.

CodeWhisperer Preview Terms

Read the terms and conditions and choose Accept to continue.

AWS Identity and Access Management (IAM) permissions

For CodeWhisperer to provide recommendations in the Lambda console, you must enable the proper AWS Identity and Access Management (IAM) permissions for either your IAM user or role. In addition to Lambda console editor permissions, you must add the codewhisperer:GenerateRecommendations permission.

Here is a sample IAM policy that grants a user permission to the Lambda console as well as CodeWhisperer:

{
  "Version": "2012-10-17",
  "Statement": [{
      "Sid": "LambdaConsolePermissions",
      "Effect": "Allow",
      "Action": [
        "lambda:AddPermission",
        "lambda:CreateEventSourceMapping",
        "lambda:CreateFunction",
        "lambda:DeleteEventSourceMapping",
        "lambda:GetAccountSettings",
        "lambda:GetEventSourceMapping",
        "lambda:GetFunction",
        "lambda:GetFunctionCodeSigningConfig",
        "lambda:GetFunctionConcurrency",
        "lambda:GetFunctionConfiguration",
        "lambda:InvokeFunction",
        "lambda:ListEventSourceMappings",
        "lambda:ListFunctions",
        "lambda:ListTags",
        "lambda:PutFunctionConcurrency",
        "lambda:UpdateEventSourceMapping",
        "iam:AttachRolePolicy",
        "iam:CreatePolicy",
        "iam:CreateRole",
        "iam:GetRole",
        "iam:GetRolePolicy",
        "iam:ListAttachedRolePolicies",
        "iam:ListRolePolicies",
        "iam:ListRoles",
        "iam:PassRole",
        "iam:SimulatePrincipalPolicy"
      ],
      "Resource": "*"
    },
    {
      "Sid": "CodeWhispererPermissions",
      "Effect": "Allow",
      "Action": ["codewhisperer:GenerateRecommendations"],
      "Resource": "*"
    }
  ]
}

This example is for illustration only. It is best practice to use IAM policies to grant restrictive permissions to IAM principals to meet least privilege standards.

Demo

To activate and work with code suggestions, use the following keyboard shortcuts:

  • Manually fetch a code suggestion: Option+C (macOS), Alt+C (Windows)
  • Accept a suggestion: Tab
  • Reject a suggestion: ESC, Backspace, scroll in any direction, or keep typing and the recommendation automatically disappears.

Currently, the IDE extensions provide automatic suggestions and can show multiple suggestions. The Lambda console integration requires a manual fetch and shows a single suggestion.

Here are some common ways to use CodeWhisperer while authoring Lambda functions.

Single-line code completion

When typing single lines of code, CodeWhisperer suggests how to complete the line.

CodeWhisperer single-line completion

Full function generation

CodeWhisperer can generate an entire function based on your function signature or code comments. In the following example, a developer has written a function signature for reading a file from Amazon S3. CodeWhisperer then suggests a full implementation of the read_from_s3 method.

CodeWhisperer full function generation

CodeWhisperer may include import statements as part of its suggestions, as in the previous example. As a best practice to improve performance, manually move these import statements to outside the function handler.
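For example, a refactored version of a generated S3 reader might initialize the client during function init rather than inside the handler. The bucket and object key below are placeholders:

import boto3

# Initialize the SDK client outside the handler so it is created once during
# function init and reused across warm invocations.
s3 = boto3.client("s3")

def lambda_handler(event, context):
    response = s3.get_object(Bucket="example-bucket", Key="example.txt")
    return response["Body"].read().decode("utf-8")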

Generate code from comments

CodeWhisperer can also generate code from comments. The following example shows how CodeWhisperer generates code to use AWS APIs to upload files to Amazon S3. Write a comment describing the intended functionality and, on the following line, activate the CodeWhisperer suggestions. Given the context from the comment, CodeWhisperer first suggests the function signature code in its recommendation.

CodeWhisperer generate function signature code from comments

After you accept the function signature, CodeWhisperer suggests the rest of the function code.

CodeWhisperer generate function code from comments

When you accept the suggestion, CodeWhisperer completes the entire code block.

CodeWhisperer generates code to write to S3.

CodeWhisperer can help write code that accesses many other AWS services. In the following example, a code comment indicates that a function is sending a notification using Amazon Simple Notification Service (SNS). Based on this comment, CodeWhisperer suggests a function signature.

CodeWhisperer function signature for SNS

If you accept the suggested function signature, CodeWhisperer suggests a complete implementation of the send_notification function.

CodeWhisperer function send notification for SNS

The same procedure works with Amazon DynamoDB. When writing a code comment indicating that the function is to get an item from a DynamoDB table, CodeWhisperer suggests a function signature.

CodeWhisperer DynamoDB function signature

When accepting the suggestion, CodeWhisperer then suggests a full code snippet to complete the implementation.

CodeWhisperer DynamoDB code snippet

After reviewing the suggestion, a common refactoring step in this example would be to manually move the references to the DynamoDB resource and table outside the get_item function.

CodeWhisperer can also recommend complex algorithm implementations, such as Insertion sort.

CodeWhisperer insertion sort.

As a best practice, always test the code recommendation for completeness and correctness.

CodeWhisperer not only provides suggested code snippets when integrating with AWS APIs, but can help you implement common programming idioms, including proper error handling.

Conclusion

CodeWhisperer is a general purpose, machine learning-powered code generator that provides you with code recommendations in real time. When activated in the Lambda console, CodeWhisperer generates suggestions based on your existing code and comments, helping to accelerate your application development on AWS.

To get started, visit https://aws.amazon.com/codewhisperer/. Share your feedback with us at [email protected].

For more serverless learning resources, visit Serverless Land.

Understanding AWS Lambda scaling and throughput

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/understanding-aws-lambda-scaling-and-throughput/

AWS Lambda provides a serverless compute service that can scale from a single request to hundreds of thousands per second. When designing your application, especially for high load, it helps to understand how Lambda handles scaling and throughput. There are two components to consider: concurrency and transactions per second.

Concurrency of a system is the ability to process more than one task simultaneously. You can measure concurrency at a point in time to view how many tasks the system is doing in parallel. The number of transactions a system can process per second is not the same as concurrency, because a transaction can take more or less than a second to process.

This post shows you how concurrency and transactions per second work within the Lambda lifecycle. It also covers ways to measure, control, and optimize them.

The Lambda runtime environment

Lambda invokes your code in a secure and isolated runtime environment. The following shows the lifecycle of requests to a single function.

First request to invoke your function

At the first request to invoke your function, Lambda creates a new runtime environment. It then runs the function’s initialization (init) code, which is the code outside the main handler. Lambda then runs the function handler code as the invocation. This receives the event payload and processes your business logic.

Each runtime environment processes a single request at a time. While a single runtime environment is processing a request, it cannot process other requests.

After Lambda finishes processing the request, the runtime environment is ready to process an additional request for the same function. As the initialization code has already run, for request 2, Lambda runs only the function handler code as the invocation.

Lambda ready to process an additional request for the same function

If additional requests arrive while processing request 1, Lambda creates new runtime environments. In this example, Lambda creates new runtime environments for requests 2, 3, 4, and 5. It runs the function init and the invocation.

Lambda creating new runtime environments

When requests arrive, Lambda reuses available runtime environments, and creates new ones if necessary. The following example shows the behavior for additional requests after 1-5.

Lambda reusing runtime environments

Once request 1 completes, the runtime environment is available to process another request. When request 6 arrives, Lambda reuses request 1’s runtime environment and runs the invocation. This process continues for requests 7 and 8, which reuse the runtime environments from requests 2 and 3. When request 9 arrives, Lambda creates a new runtime environment as there isn’t an existing one available. When request 10 arrives, it reuses the runtime environment freed up after request 4.

The number of runtime environments determines the concurrency. This is the sum of all concurrent requests for currently running functions at a particular point in time. For a single runtime environment, the number of concurrent requests is 1.

Single runtime environment concurrent requests

For the example with requests 1–10, the Lambda function concurrency at particular times is the following:

Lambda concurrency at particular times

time | concurrency
t1 | 3
t2 | 5
t3 | 4
t4 | 6
t5 | 5
t6 | 2

When the number of requests decreases, Lambda stops unused runtime environments to free up scaling capacity for other functions.

Invocation duration, concurrency, and transactions per second

The number of transactions Lambda can process per second is the sum of all invokes for that period. If a function takes 1 second to run, and 10 invocations happen concurrently, Lambda creates 10 runtime environments. In this case, Lambda processes 10 requests per second.

Lambda transactions per second for 1 second invocation

If the function duration is halved to 500 ms, the Lambda concurrency remains 10. However, transactions per second are now 20.

Lambda transactions per second for 500 ms invocation

If the function takes 2 seconds to run, during the initial second, transactions per second is 0. However, averaged over time, transactions per second is 5.

You can view concurrency using Amazon CloudWatch metrics. Use the metric name ConcurrentExecutions to view concurrent invocations for all or individual functions.

CloudWatch metrics ConcurrentExecutions

You can also estimate concurrent requests from the number of requests per unit of time, in this case seconds, and their average duration, using the formula:

RequestsPerSecond x AvgDurationInSeconds = concurrent requests

If a Lambda function takes an average 500 ms to run, at 100 requests per second, the number of concurrent requests is 50:

100 requests/second x 0.5 sec = 50 concurrent requests

If you halve the function duration to 250 ms and the number of requests per second doubles to 200 requests/second, the number of concurrent requests remains the same:

200 requests/second x 0.250 sec = 50 concurrent requests.
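The same estimate can be expressed as a small helper that reproduces the two calculations above:

def concurrent_requests(requests_per_second: float, avg_duration_seconds: float) -> float:
    # concurrent requests = RequestsPerSecond x AvgDurationInSeconds
    return requests_per_second * avg_duration_seconds

print(concurrent_requests(100, 0.5))   # 50.0
print(concurrent_requests(200, 0.25))  # 50.0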

Reducing a function’s duration can increase the transactions per second that a function can process. For more information on reducing function duration, watch this re:Invent video.

Scaling quotas

There are two scaling quotas to consider with concurrency: the account concurrency quota and the burst concurrency quota.

Account concurrency is the maximum concurrency in a particular Region. This is shared across all functions in an account. The default Regional concurrency quota starts at 1,000, which you can increase with a service ticket.

The burst concurrency quota provides an initial burst of traffic for each function, between 500 and 3000 per minute, depending on the Region.

After this initial burst, functions can scale by another 500 concurrent invocations per minute for all Regions. If you reach the maximum number of concurrent requests, further requests are throttled.

For synchronous invocations, Lambda returns a throttling error (429) to the caller, which must retry the request. With asynchronous and event source mapping invokes, Lambda automatically retries the requests. See Error handling and automatic retries in AWS Lambda for more detail.
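For synchronous callers that manage their own retries, a simple backoff loop looks like the following sketch. The AWS SDKs already retry throttled requests by default, so explicit handling like this is only needed for custom behavior; the function name is a placeholder:

import time

import boto3
from botocore.exceptions import ClientError

lambda_client = boto3.client("lambda")

def invoke_with_backoff(function_name, payload, max_attempts=5):
    for attempt in range(max_attempts):
        try:
            return lambda_client.invoke(FunctionName=function_name, Payload=payload)
        except ClientError as err:
            # Throttled synchronous invokes surface as HTTP 429 / TooManyRequestsException
            if err.response["Error"]["Code"] != "TooManyRequestsException":
                raise
            time.sleep(2 ** attempt)  # exponential backoff before retrying
    raise RuntimeError("Invocation still throttled after retries")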

A scaling quota example

The following walks through how account and burst concurrency work for an example application.

Anticipating handling additional load, the application builders have raised account concurrency to 7,000. There are no other Lambda functions running in this account, so this function can use all available account concurrency.

Lambda account and burst concurrency

  1. 08:59: The application already has a steady stream of requests, using 1,000 concurrent runtime environments. Each Lambda invocation takes 250 ms, so transactions per second are 4,000.
  2. 09:00: There is a large spike in traffic at 5,000 sustained requests. 1000 requests use the existing runtime environments. Lambda uses the 3,000 available burst concurrency to create new environments to handle the additional load. 1,000 requests are throttled as there is not enough burst concurrency to handle all 5,000 requests. Transactions per second are 16,000.
  3. 09:01: Lambda scales by another 500 concurrent invocations per minute. 500 requests are still throttled. The application can now handle 4,500 concurrent requests.
  4. 09:02: Lambda scales by another 500 concurrent invocations per minute. No requests are throttled. The application can now handle all 5,000 requests.
  5. 09:03: The application continues to handle the sustained 5000 requests. The burst concurrency quota rises to 500.
  6. 09:04: The application sees another spike in traffic, this time unexpected. 3,000 new sustained requests arrive, for a combined total of 8,000 sustained requests. 5,000 requests use the existing runtime environments. Lambda uses the now available burst concurrency of 1,000 to create new environments to handle the additional load. 1,000 requests are throttled as there is not enough burst concurrency. Another 1,000 requests are throttled as the account concurrency quota has been reached.
  7. 09:05: Lambda scales by another 500 concurrent requests. The application can now handle 6,500 requests. 500 requests are throttled as there is not enough burst concurrency. 1,000 requests are still throttled as the account concurrency quota has been reached.
  8. 09:06: Lambda scales by another 500 concurrent requests. The application can now handle 7,000 requests. 1,000 requests are still throttled as the account concurrency quota has been reached.
  9. 09:07: The application continues to handle the sustained 7,000 requests. 1,000 requests are still throttled as the account concurrency quota has been reached. Transactions per second are 28,000.

Service Quotas is an AWS service that helps you manage your quotas for many AWS services. Along with looking up the values, you can also request a limit increase from the Service Quotas console.

Service Quotas console

Reserved concurrency

You can configure a reserved concurrency setting for your Lambda functions to allocate a maximum concurrency limit for a function. This assigns part of the account concurrency quota to a specific function, protecting the function and ensuring that it is not throttled and can always scale up to the reserved concurrency value.

It can also help protect downstream resources. If a database or external API can only handle 2 concurrent connections, you can ensure Lambda can’t scale beyond 2 concurrent invokes. This ensures Lambda doesn’t overwhelm the downstream service. You can also use reserved concurrency to avoid race conditions to ensure that your function can only run one concurrent invocation at a time.

Reserved concurrency

You can also set the function concurrency to zero, which stops any function invocations and acts as an off-switch. This can be useful to stop Lambda invocations when you have an issue with a downstream resource. It can give you time to fix the issue before removing or increasing the concurrency to allow invocations to continue.
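The following boto3 sketch shows both uses of reserved concurrency: capping a function to protect a downstream resource, and setting the value to zero as an off-switch. The function name is a placeholder:

import boto3

lambda_client = boto3.client("lambda")

# Cap the function at 2 concurrent executions to protect a downstream database
lambda_client.put_function_concurrency(
    FunctionName="order-processor",
    ReservedConcurrentExecutions=2,
)

# Setting the value to zero acts as an off-switch and stops all invocations
lambda_client.put_function_concurrency(
    FunctionName="order-processor",
    ReservedConcurrentExecutions=0,
)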

Provisioned Concurrency

The function initialization process can introduce latency for your applications. You can reduce this latency by configuring Provisioned Concurrency for a function version or alias. This prepares runtime environments in advance, running the function initialization process, so the function is ready to invoke when needed.

This is primarily useful for synchronous requests to ensure you have enough concurrency before an expected traffic spike. You can still burst above this using standard concurrency. The following example shows Provisioned Concurrency configured as 10. Lambda runs the init process for 10 functions, and then when requests arrive, immediately runs the invocation.

Provisioned Concurrency

You can use Application Auto Scaling to adjust Provisioned Concurrency automatically based on Lambda’s utilization metric.
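A minimal boto3 sketch for configuring Provisioned Concurrency looks like the following; the function name and alias are placeholders, and a published version or alias is required:

import boto3

lambda_client = boto3.client("lambda")

# Prepare 10 runtime environments for the "live" alias ahead of an expected traffic spike
lambda_client.put_provisioned_concurrency_config(
    FunctionName="checkout-api",
    Qualifier="live",
    ProvisionedConcurrentExecutions=10,
)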

Operational metrics

There are CloudWatch metrics available to monitor your account and function concurrency to ensure that your applications can scale as expected. Monitor function Invocations and Duration to understand throughput. Throttles show throttled invocations.

ConcurrentExecutions tracks the total number of runtime environments that are processing events. Ensure this doesn’t reach your account concurrency to avoid account throttling. Use the metric for individual functions to see which are using account concurrency, and also ensure reserved concurrency is not too high. For example, a function may have a reserved concurrency of 2000, but is only using 10.

UnreservedConcurrentExecutions show the number of function invocations without reserved concurrency. This is your available account concurrency buffer.

Use ProvisionedConcurrencyUtilization to ensure you are not paying for Provisioned Concurrency that you are not using. The metric shows the percentage of allocated Provisioned Concurrency in use.

ProvisionedConcurrencySpilloverInvocations shows function invocations using standard concurrency, above the configured Provisioned Concurrency value. This may indicate that you need to increase Provisioned Concurrency.
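
If you prefer to check these metrics outside of the console, the following AWS CLI sketch queries CloudWatch directly. The time window and function name are examples.

# Peak account-wide concurrency over an hour, in one-minute buckets
aws cloudwatch get-metric-statistics \
    --namespace AWS/Lambda \
    --metric-name ConcurrentExecutions \
    --statistics Maximum \
    --period 60 \
    --start-time 2022-06-01T09:00:00Z \
    --end-time 2022-06-01T10:00:00Z

# The same metric scoped to a single function
aws cloudwatch get-metric-statistics \
    --namespace AWS/Lambda \
    --metric-name ConcurrentExecutions \
    --dimensions Name=FunctionName,Value=my-api-function \
    --statistics Maximum \
    --period 60 \
    --start-time 2022-06-01T09:00:00Z \
    --end-time 2022-06-01T10:00:00Z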

Conclusion

Lambda provides a highly scalable compute service. Understanding how Lambda scaling and throughput works can help you design your application, especially for high load.

This post explains concurrency and transactions per second. It shows how account and burst concurrency quotas work. You can configure reserved concurrency to ensure that your functions can always scale, and also use it to protect downstream resources. Use Provisioned Concurrency to scale up Lambda in advance of invokes.

For more serverless learning resources, visit Serverless Land.

Creating a serverless Apache Kafka publisher using AWS Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/creating-a-serverless-apache-kafka-publisher-using-aws-lambda/

This post is written by Philipp Klose, Global Solution Architect, and Daniel Wessendorf, Global Solution Architect.

Streaming data and event-driven architectures are becoming more popular for many modern systems. The range of use cases includes web tracking and other logs, industrial IoT, in-game player activity, and the ingestion of data for modern analytics architecture.

One of the most popular technologies in this space is Apache Kafka. This is an open-source distributed event streaming platform used by many customers for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.

Kafka is based on a simple but powerful pattern. The Kafka cluster itself is a highly available broker that receives messages from various producers. The received messages are stored in topics, which are the primary storage abstraction.

Various consumers can subscribe to a Kafka topic and consume messages. In contrast to classic queuing systems, the consumers do not remove the message from the topic but store the individual reading position on the topic. This allows for multiple different patterns for consumption (for example, fan-out or consumer-groups).

Producer and consumer

Producer and consumer libraries for Kafka are available in various programming languages and technologies. This blog post focuses on using serverless and cloud-native technologies for the producer side.

Overview

This example walks you through how to build a serverless real-time stream producer application using Amazon API Gateway and AWS Lambda.

For testing, this blog includes a sample AWS Cloud Development Kit (CDK) application. This creates a demo environment, including an Amazon Managed Streaming for Apache Kafka (MSK) cluster and a bastion host for observing the produced messages on the cluster.

The following diagram shows the architecture of an application that pushes API requests to a Kafka topic in real time, which you build in this blog post:

Architecture overview

  1. An external application calls an Amazon API Gateway endpoint
  2. Amazon API Gateway forwards the request to a Lambda function
  3. AWS Lambda function behaves as a Kafka producer and pushes the message to a Kafka topic
  4. A Kafka “console consumer” on the bastion host then reads the message

The demo shows how to use Lambda Powertools for Java to streamline logging and tracing, and an IAM authenticator to simplify the cluster authentication process. The following sections take you through the steps to deploy, test, and observe the example application.

Prerequisites

The example has the following prerequisites:

Example walkthrough

  1. Clone the project GitHub repository. Change directory to subfolder serverless-kafka-iac:
    git clone https://github.com/aws-samples/serverless-kafka-producer
    cd serverless-kafka-iac
    
  2. Configure environment variables:
    export CDK_DEFAULT_ACCOUNT=$(aws sts get-caller-identity --query 'Account' --output text)
    export CDK_DEFAULT_REGION=$(aws configure get region)
    
  3. Prepare the virtual Python environment:
    python3 -m venv .venv
    source .venv/bin/activate
    pip3 install -r requirements.txt
    
  4. Bootstrap your account for CDK usage:
    cdk bootstrap aws://$CDK_DEFAULT_ACCOUNT/$CDK_DEFAULT_REGION
  5. Run ‘cdk synth’ to build the code and test the requirements:
    cdk synth
  6. Run ‘cdk deploy’ to deploy the code to your AWS account:
    cdk deploy --all

Testing the example

To test the example, log into the bastion host and start a consumer console to observe the messages being added to the topic. You generate messages for the Kafka topics by sending calls via API Gateway from your development machine or AWS Cloud9 environment.

  1. Use AWS Systems Manager to log in to the bastion host. Use the KafkaDemoBackendStack.bastionhostbastion output parameter to connect, or connect via the Systems Manager console.
    aws ssm start-session --target <Bastion Host Instance Id> 
    sudo su ec2-user
    cd /home/ec2-user/kafka_2.13-2.6.3/bin/
    
  2. Create a topic named messages on the MSK cluster:
    ./kafka-topics.sh --bootstrap-server $ZK --command-config client.properties --create --replication-factor 3 --partitions 3 --topic messages
  3. Open a Kafka consumer console on the bastion host to observe incoming messages:
    ./kafka-console-consumer.sh --bootstrap-server $ZK --topic messages --consumer.config client.properties
    
  4. Open another terminal on your development machine to create test requests using the “ServerlessKafkaProducerStack.kafkaproxyapiEndpoint” output parameter of the CDK stack. Append “/event” for the final URL. Use curl to send the API request:
    curl -X POST -d "Hello World" <ServerlessKafkaProducerStack.messagesapiendpointEndpoint>
  5. For load testing the application, it is important to calibrate the parameters. You can use a tool like Artillery to simulate workloads. You can find a sample artillery script in the /load-testing folder from step 1.
  6. Observe the incoming request in the bastion host terminal.

All components in this example integrate with AWS X-Ray. With AWS X-Ray, you can trace the entire application, which is useful to identify bottlenecks when load testing. You can also trace method execution at the Java method level.

Lambda Powertools for Java allows you to accelerate this process by adding the @Tracing annotation to see traces at the method level in X-Ray.

To trace a request end to end:

  1. Navigate to the CloudWatch console.
  2. Open the Service map.
  3. Select a component to investigate (for example, the Lambda function where you deployed the Kafka producer). Choose View traces.
    X-Ray console
  4. Select a single Lambda method invocation and investigate further at the Java method level.
    X-Ray detail

Cleaning up

In the subdirectory “serverless-kafka-iac”, delete the test infrastructure:

cdk destroy --all

Implementation of a Kafka producer in Lambda

Kafka natively supports Java. To stay open, cloud native, and free of third-party dependencies, the producer is written in that language. Currently, the IAM authenticator is only available for Java. In this example, the Lambda handler receives a message from an Amazon API Gateway source and pushes this message to an MSK topic called "messages".

Typically, Kafka producers are long-lived, and pushing a message to a Kafka topic is an asynchronous process. As Lambda execution environments are ephemeral, you must flush any submitted messages before the Lambda function ends by calling producer.flush().

    @Override
    @Tracing
    @Logging(logEvent = true)
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent input, Context context) {
        APIGatewayProxyResponseEvent response = createEmptyResponse();
        try {

            String message = getMessageBody(input);

            KafkaProducer<String, String> producer = createProducer();

            ProducerRecord<String, String> record = new ProducerRecord<String, String>(TOPIC_NAME, context.getAwsRequestId(), message);

            // Send the record and flush so the message leaves the producer buffer
            // before the Lambda invocation ends.
            Future<RecordMetadata> send = producer.send(record);
            producer.flush();

            RecordMetadata metadata = send.get();
            log.info(String.format("Message was sent to partition %s", metadata.partition()));

            return response.withStatusCode(200).withBody("Message successfully pushed to Kafka");
        } catch (Exception e) {
            log.error(e.getMessage(), e);
            return response.withBody(e.getMessage()).withStatusCode(500);
        }
    }

    @Tracing
    private KafkaProducer<String, String> createProducer() {
        if (producer == null) {
            // Create the producer once per execution environment so that warm
            // invocations reuse the existing connection and IAM authentication.
            log.info("Connecting to Kafka cluster");
            producer = new KafkaProducer<String, String>(kafkaProducerProperties.getProducerProperties());
        }
        return producer;
    }

Connect to Amazon MSK using IAM Auth

This example uses IAM authentication to connect to the respective Kafka cluster. See the documentation here, which shows how to configure the producer for connectivity.

Since you configure the cluster via IAM, grant “Connect” and “WriteData” permissions to the producer, so that it can push messages to Kafka.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kafka-cluster:Connect"
            ],
            "Resource": "arn:aws:kafka:region:account-id:cluster/cluster-name/cluster-uuid"
        }
    ]
}


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kafka-cluster:Connect",
                "kafka-cluster:DescribeTopic",
                "kafka-cluster:WriteData"
            ],
            "Resource": "arn:aws:kafka:region:account-id:topic/cluster-name/cluster-uuid/topic-name"
        }
    ]
}

These statements show the Kafka-specific excerpt of the IAM policy that must be applied to the Kafka producer.

When using IAM authentication, be aware of the current limits of IAM Kafka authentication, which affect the number of concurrent connections and IAM requests for a producer. Read https://docs.aws.amazon.com/msk/latest/developerguide/limits.html and follow the recommendation for authentication backoff in the producer client:

        Map<String, String> configuration = Map.of(
                "key.serializer", "org.apache.kafka.common.serialization.StringSerializer",
                "value.serializer", "org.apache.kafka.common.serialization.StringSerializer",
                "bootstrap.servers", getBootstrapServer(),
                "security.protocol", "SASL_SSL",
                "sasl.mechanism", "AWS_MSK_IAM",
                "sasl.jaas.config", "software.amazon.msk.auth.iam.IAMLoginModule required;",
                "sasl.client.callback.handler.class", "software.amazon.msk.auth.iam.IAMClientCallbackHandler",
                "connections.max.idle.ms", "60",
                "reconnect.backoff.ms", "1000"
        );

Elaboration on implementation

Each Kafka broker node can handle a maximum of 20 IAM authentication requests per second. The demo setup has three brokers, which results in 60 requests per second. Therefore, the broker setup limits the number of concurrent Lambda functions to 60.

To reduce IAM authentication requests from the Kafka producer, initialize the producer outside of the handler. For frequent calls, there is a chance that Lambda reuses the previously created class instance and only re-executes the handler.

For bursting workloads with a high number of concurrent API Gateway requests, this can lead to dropped messages. While for some workloads, this might be tolerable, for others this might not be the case.

In these cases, you can extend the architecture with a buffering technology like Amazon SQS or Amazon Kinesis Data Streams between API Gateway and Lambda.

To reduce latency, you can reduce cold start times for Java by changing the tiered compilation level to "1" as described in this blog post. Provisioned Concurrency ensures that Lambda execution environments are initialized and ready before requests arrive.
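
As a sketch, the tiered compilation setting from that post can be applied through the JAVA_TOOL_OPTIONS environment variable. The function name is a placeholder for the producer function deployed by the CDK stack.

# Limit the JVM to tiered compilation level 1 to shorten Java cold starts
aws lambda update-function-configuration \
    --function-name <your-kafka-producer-function-name> \
    --environment '{"Variables":{"JAVA_TOOL_OPTIONS":"-XX:+TieredCompilation -XX:TieredStopAtLevel=1"}}'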

Conclusion

In this post, you learn how to create a serverless integration Lambda function between API Gateway and Amazon Managed Streaming for Apache Kafka (Amazon MSK). We show how to deploy such an integration with the CDK.

The general pattern is suitable for many use cases that need an integration between API Gateway and Apache Kafka. It may have cost benefits over containerized implementations in use cases with sparse, low-volume input streams, and unpredictable or spiky workloads.

For more serverless learning resources, visit Serverless Land.

Simplifying serverless best practices with AWS Lambda Powertools for TypeScript

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/simplifying-serverless-best-practices-with-aws-lambda-powertools-for-typescript/

This blog post is written by Sara Gerion, Senior Solutions Architect.

Development teams must have a shared understanding of the workloads they own and their expected behaviors to deliver business value fast and with confidence. The AWS Well-Architected Framework and its Serverless Lens provide architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the AWS Cloud.

Developers should design and configure their workloads to emit information about their internal state and current status. This allows engineering teams to ask arbitrary questions about the health of their systems at any time. For example, emitting metrics, logs, and traces with useful contextual information enables situational awareness and allows developers to filter and select only what they need.

Following such practices reduces the number of bugs, accelerates remediation, and speeds up the application lifecycle into production. They can help mitigate deployment risks, offer more accurate production-readiness assessments and enable more informed decisions to deploy systems and changes.

AWS Lambda Powertools for TypeScript

AWS Lambda Powertools provides a suite of utilities for AWS Lambda functions to ease the adoption of serverless best practices. The AWS Hero Yan Cui’s initial implementation of DAZN Lambda Powertools inspired this idea.

Following the community’s adoption of AWS Lambda Powertools for Python and AWS Lambda Powertools for Java, we are excited to announce the general availability of the AWS Lambda Powertools for TypeScript.

AWS Lambda Powertools for TypeScript provides a suite of utilities for Node.js runtimes, which you can use in both JavaScript and TypeScript code bases. The library follows a modular approach similar to the AWS SDK v3 for JavaScript. Each utility is installed as a standalone NPM package.

Today, the library is ready for production use with three observability features: distributed tracing (Tracer), structured logging (Logger), and asynchronous business and application metrics (Metrics).

You can instrument your code with Powertools in three different ways:

  • Manually. It provides the most granular control. It’s the most verbose approach, with the added benefit of no additional dependency and no refactoring to TypeScript Classes.
  • Middy middleware. It is the best choice if your existing code base relies on the Middy middleware engine. Powertools offers compatible Middy middleware to make this integration seamless.
  • Method decorator. Use TypeScript method decorators if you prefer writing your business logic using TypeScript Classes. If you aren’t using Classes, this requires the most significant refactoring.

The examples in this blog post use the Middy approach. To follow the examples, ensure that middy is installed:

npm i @middy/core

Logger

Logger provides an opinionated logger with output structured as JSON. Its key features include:

  • Capturing key fields from the Lambda context and cold starts, and structuring logging output as JSON.
  • Logging Lambda invocation events when instructed (disabled by default).
  • Printing all the logs only for a percentage of invocations via log sampling (disabled by default).
  • Appending additional keys to structured logs at any point in time.
  • Providing a custom log formatter (Bring Your Own Formatter) to output logs in a structure compatible with your organization’s Logging RFC.

To install, run:

npm install @aws-lambda-powertools/logger

Usage example:

import { Logger, injectLambdaContext } from '@aws-lambda-powertools/logger';
 import middy from '@middy/core';

 const logger = new Logger({
    logLevel: 'INFO',
    serviceName: 'shopping-cart-api',
});

 const lambdaHandler = async (): Promise<void> => {
     logger.info('This is an INFO log with some context');
 };

 export const handler = middy(lambdaHandler)
     .use(injectLambdaContext(logger));

In Amazon CloudWatch, the structured log emitted by your application looks like:

{
     "cold_start": true,
     "function_arn": "arn:aws:lambda:eu-west-1:123456789012:function:shopping-cart-api-lambda-prod-eu-west-1",
     "function_memory_size": 128,
     "function_request_id": "c6af9ac6-7b61-11e6-9a41-93e812345678",
     "function_name": "shopping-cart-api-lambda-prod-eu-west-1",
     "level": "INFO",
     "message": "This is an INFO log with some context",
     "service": "shopping-cart-api",
     "timestamp": "2021-12-12T21:21:08.921Z",
     "xray_trace_id": "abcdef123456abcdef123456abcdef123456"
 }

Logs generated by Powertools can also be ingested and analyzed by any third-party SaaS vendor that supports JSON.

Tracer

Tracer is an opinionated thin wrapper for AWS X-Ray SDK for Node.js.

Its key features include:

  • Auto-capturing cold start and service name as annotations, and responses or full exceptions as metadata.
  • Automatically tracing HTTP(S) clients and generating segments for each request.
  • Supporting tracing functions via decorators, middleware, and manual instrumentation.
  • Supporting tracing AWS SDK v2 and v3 via AWS X-Ray SDK for Node.js.
  • Auto-disable tracing when not running in the Lambda environment.

To install, run:

npm install @aws-lambda-powertools/tracer

Usage example:

import { Tracer, captureLambdaHandler } from '@aws-lambda-powertools/tracer';
 import middy from '@middy/core'; 

 const tracer = new Tracer({
    serviceName: 'shopping-cart-api'
});

 const lambdaHandler = async (): Promise<void> => {
     /* ... Something happens ... */
 };

 export const handler = middy(lambdaHandler)
     .use(captureLambdaHandler(tracer));

AWS X-Ray segments and subsegments emitted by Powertools

Example service map generated with Powertools

Metrics

Metrics create custom metrics asynchronously by logging metrics to standard output following the Amazon CloudWatch Embedded Metric Format (EMF). These metrics can be visualized through CloudWatch dashboards or used to trigger alerts.

Its key features include:

  • Aggregating up to 100 metrics using a single CloudWatch EMF object (large JSON blob).
  • Validating your metrics against common metric definitions mistakes (for example, metric unit, values, max dimensions, max metrics).
  • Metrics are created asynchronously by the CloudWatch service. You do not need any custom stacks, and there is no impact to Lambda function latency.
  • Creating a one-off metric with different dimensions.

To install, run:

npm install @aws-lambda-powertools/metrics

Usage example:

import { Metrics, MetricUnits, logMetrics } from '@aws-lambda-powertools/metrics';
 import middy from '@middy/core';

 const metrics = new Metrics({
    namespace: 'serverlessAirline', 
    serviceName: 'orders'
});

 const lambdaHandler = async (): Promise<void> => {
     metrics.addMetric('successfulBooking', MetricUnits.Count, 1);
 };

 export const handler = middy(lambdaHandler)
     .use(logMetrics(metrics));

In CloudWatch, the custom metric emitted by your application looks like:

{
    "successfulBooking": 1.0,
    "_aws": {
        "Timestamp": 1592234975665,
        "CloudWatchMetrics": [
            {
                "Namespace": "serverlessAirline",
                "Dimensions": [
                    [
                        "service"
                    ]
                ],
                "Metrics": [
                    {
                        "Name": "successfulBooking",
                        "Unit": "Count"
                    }
                ]
            }
        ]
    },
    "service": "orders"
}

Serverless TypeScript demo application

The Serverless TypeScript Demo shows how to use Lambda Powertools for TypeScript. You can find instructions on how to deploy and load test this application in the repository.

Serverless TypeScript Demo architecture

The code for the Get Products Lambda function shows how to use the utilities. The function is instrumented with Logger, Metrics and Tracer to emit observability data.

// blob/main/src/api/get-products.ts
import { APIGatewayProxyEvent, APIGatewayProxyResult} from "aws-lambda";
import { DynamoDbStore } from "../store/dynamodb/dynamodb-store";
import { ProductStore } from "../store/product-store";
import { logger, tracer, metrics } from "../powertools/utilities"
import middy from "@middy/core";
import { captureLambdaHandler } from '@aws-lambda-powertools/tracer';
import { injectLambdaContext } from '@aws-lambda-powertools/logger';
import { logMetrics, MetricUnits } from '@aws-lambda-powertools/metrics';

const store: ProductStore = new DynamoDbStore();
const lambdaHandler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {

  logger.appendKeys({
    resource_path: event.requestContext.resourcePath
  });

  try {
    const result = await store.getProducts();

    logger.info('Products retrieved', { details: { products: result } });
    metrics.addMetric('productsRetrieved', MetricUnits.Count, 1);

    return {
      statusCode: 200,
      headers: { "content-type": "application/json" },
      body: `{"products":${JSON.stringify(result)}}`,
    };
  } catch (error) {
      logger.error('Unexpected error occurred while trying to retrieve products', error as Error);

      return {
        statusCode: 500,
        headers: { "content-type": "application/json" },
        body: JSON.stringify(error),
      };
  }
};

const handler = middy(lambdaHandler)
    .use(captureLambdaHandler(tracer))
    .use(logMetrics(metrics, { captureColdStartMetric: true }))
    .use(injectLambdaContext(logger, { clearState: true, logEvent: true }));

export {
  handler
};

The Logger utility adds useful context to the application logs. Structuring your logs as JSON allows you to search on your structured data using Amazon CloudWatch Logs Insights. This allows you to filter out the information you don’t need.

For example, use the following query to search for any errors for the serverless-typescript-demo service.

fields resource_path, message, timestamp
| filter service = 'serverless-typescript-demo'
| filter level = 'ERROR'
| sort @timestamp desc
| limit 20

CloudWatch Logs Insights showing errors for the serverless-typescript-demo service.

The Tracer utility adds custom annotations and metadata during the function invocation, which it sends to AWS X-Ray. Annotations allow you to search for and filter traces by business or application contextual information such as product ID, or cold start.

You can see the duration of the putProduct method and the ColdStart and Service annotations attached to the Lambda handler function.

putProduct trace view

The Metrics utility simplifies the creation of complex high-cardinality application data. Including structured data along with your metrics allows you to search or perform additional analysis when needed.

In this example, you can see how many times per second a product is created, deleted, or queried. You could configure alarms based on the metrics.

Metrics view

Code examples

You can use Powertools with many Infrastructure as Code or deployment tools. The project contains source code and supporting files for serverless applications that you can deploy with the AWS Cloud Development Kit (AWS CDK) or AWS Serverless Application Model (AWS SAM).

The AWS CDK lets you build reliable and scalable applications in the cloud with the expressive power of a programming language, including TypeScript. The AWS SAM CLI is a command line tool that makes it easier to create and manage serverless applications.

You can use the sample applications provided in the GitHub repository to understand how to use the library quickly and experiment in your own AWS environment.

Conclusion

AWS Lambda Powertools for TypeScript can help simplify, accelerate, and scale the adoption of serverless best practices within your team and across your organization.

The library implements best practices recommended as part of the AWS Well-Architected Framework, without you needing to write much custom code.

Since the library relieves the operational burden needed to implement these functionalities, you can focus on the features that matter the most, shortening the Software Development Life Cycle and reducing the Time To Market.

The library helps both individual developers and engineering teams to standardize their organizational best practices. Utilities are designed to be incrementally adoptable for customers at any stage of their serverless journey, from startup to enterprise.

To get started with AWS Lambda Powertools for TypeScript, see the official documentation. For more serverless learning resources, visit Serverless Land.

Optimizing Node.js dependencies in AWS Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/optimizing-node-js-dependencies-in-aws-lambda/

This post is written by Richard Davison, Senior Partner Solutions Architect.

AWS Lambda offers support for Node.js versions 12, 14, and the recently announced version 16. Since Node.js parses, optimizes and runs JavaScript on-the-fly, it can provide fast startup and low overhead in a serverless environment.

Node.js reads and parses all dependencies and sources that are required or imported from the entry point. Consequently, it’s important to keep the dependencies to a minimum and optimize the ones in use.

This post shows how to bundle and minify Lambda function code to optimize performance and stay up to date with the latest version of your dependencies.

Understanding Node.js module resolution

When you require or import a resource in your code, Node.js tries to resolve that resource by either the file- or directory name, or in the node_modules directory. Once it finds the resource, it is loaded from disk, parsed and run.

If that file or dependency in turn contains other imports or require statements, the process repeats, which causes disk reads. The more dependencies and files that are imported in a function, the longer it takes to initialize.

This only impacts imported and used code. Including files in a project that are not imported or used has minimal effect on startup performance.

You should also evaluate what’s being imported. Even though modern JavaScript bundlers such as esbuild, Rollup, or webpack use tree shaking and dead code elimination, importing dependencies via wildcard, global, or top-level imports can result in larger bundles.

Use path imports if your library supports it:

//es6
import DynamoDB from "aws-sdk/clients/dynamodb"
//es5
const DynamoDB = require("aws-sdk/clients/dynamodb")

Avoid wildcard imports:

//es6
import * as AWS from "aws-sdk"
//es5
const AWS = require("aws-sdk")

Avoid top-level imports:

//es6
import AWS from "aws-sdk"
//es5
const AWS = require("aws-sdk")

AWS SDK for JavaScript v3

The documentation shows that all Node.js runtimes share the same AWS SDK for JavaScript version. To control the version of the SDK that you depend on, you must provide it yourself. Consider using AWS SDK V3, which uses a modular architecture with a separate package for each service.

This has many benefits, including faster installations and smaller deployment sizes. It also includes many frequently requested features, such as first-class TypeScript support and a new middleware stack. Since there is a separate package for each service, top-level import is not possible, which further increases startup performance.

By providing your own AWS SDK, it can also be bundled and minified during the build process, which can result in cold start reduction.
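
For example, assuming your function only needs Amazon DynamoDB, you would install just that client as a regular project dependency so that your chosen SDK version is bundled with your code:

npm install @aws-sdk/client-dynamodb

The function then imports the DynamoDB client from this package instead of relying on the SDK version provided by the runtime.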

Bundle and minify Node.js Lambda functions

You can bundle and minify Lambda functions by using esbuild. This is one of the fastest JavaScript bundlers available, often 10-100x faster than alternatives like WebPack or Parcel.

To use esbuild:

1. Add esbuild to your dev dependencies using npm or yarn:

  • npm: npm i esbuild --save-dev
  • yarn: yarn add esbuild --dev

2. Create a “build” script in the script section of the package.json file:

 "scripts": {
    "build": "rm -rf dist && esbuild ./src/* --entry-names=[dir]/[name]/index --bundle --minify --sourcemap --platform=node --target=node16.14 --outdir=dist",
 }

This script first removes the dist directory and then runs esbuild with the following command-line arguments:

  • ./src/* First, specify the entry points of the application. esbuild creates one bundle (when the bundle option is enabled) for each entry point provided, containing only the dependencies it uses.
  • --entry-names=[dir]/[name]/index specifies that esbuild should create bundles in the same directory as its entry point and in a directory with the same name as the entry point. The bundle is then named index.js.
  • --bundle indicates that you want to bundle all dependencies and source code in a single file.
  • --minify is used to minify the code.
  • --sourcemap is used to create a source map file, which is essential for debugging minified code. Since the minified code is different from your source code, a source map enables a JavaScript debugger to map the minified code to the original source code. Generating source maps helps debugging but increases the size. Note that source maps must be registered to be applied. To register source maps in a Lambda function, use the NODE_OPTIONS environment variable with the following value: --enable-source-maps (an example command for setting this is shown after this list).
  • --platform=node and --target=node16.14 are used to indicate the ECMAScript version to target. By using a bundler, you can often compile newer JavaScript features and syntaxes to earlier standards. Since Lambda now supports Node.js 16, set the target to node16.14. For reference, use https://node.green/ to compare Node.js versions with ECMAScript features.
  • --outdir=dist indicates that all files should be placed in the dist directory.
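
As noted in the --sourcemap item above, the generated maps only take effect once NODE_OPTIONS is set on the function. A minimal sketch with a placeholder function name:

aws lambda update-function-configuration \
    --function-name <your-bundled-function-name> \
    --environment '{"Variables":{"NODE_OPTIONS":"--enable-source-maps"}}'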

Build

Run the build script by running yarn build or npm run build.

Package and deploy

To package your Lambda functions, navigate to the dist directory and zip the contents of each respective directory. Note that one zip file per function should be created, only containing index.js and index.js.map. You may also clone the sample project.

If you are already using the AWS CDK, consider using the NodejsFunction construct. This construct abstracts away the bundle procedure and internally uses esbuild to bundle the code:

const nodeJsFunction = new lambdaNodejs.NodejsFunction(
  this,
  "NodeJsFunction",
  {
    runtime: lambda.Runtime.NODEJS_16_X,
    handler: "main",
    entry: "../path/to/your/entry.js_or_ts",
  }
);

Build and deploy sample project

Once all the sources are bundled, you may notice that they have small file sizes compared to zipping node_modules and the source files. Your package may be more than 100x smaller, and the functions also initialize faster.

  1. Clone the sample project, install the dependencies, build the project, and package the application by running the following commands:
    npm install
    npm run build
    npm run package
    npm run package:unbundled

    This produces zip artifacts in the dist directory as well as in the project root. Comparing the size difference between dist/ddbHandler.zip and unoptimized.zip, the unbundled artifact is more than ten times larger. When unpacked, the code size with dependencies is more than 19 MB compared to 2.1 MB for the bundled and minified example.

    This is significant in the ddbHandler example because of the AWS SDK DynamoDB dependencies, which contains multiple files and resources.

  2. To deploy the application, run:
    npm run deploy

Comparing and measuring the results

After deployment, you can also see a significant cold start performance improvement. You can load test the Lambda functions using Artillery. Replace the URL with the value from the deployment output:

Load test unbundled

artillery run -t "https://{YOUR_ID_HERE}.execute-api.eu-west-1.amazonaws.com" -v '{ "url": "/x86/v2-top-level-unbundled" }' loadtest.yml

Load test bundled

artillery run -t "https://{YOUR_ID_HERE}.execute-api.eu-west-1.amazonaws.com" -v '{ "url": "/x86/v3" }' loadtest.yml

View results in CloudWatch Insights by selecting the two functions’ log groups and running the following query:

Logs Insights

filter @type = "REPORT"
| parse @log /\d+:\/aws\/lambda\/[\w\d]+-(?<function>[\w\d]+)-[\w\d]+/
| stats
count(*) as invocations,
pct(@duration+greatest(@initDuration,0), 0) as p0,
pct(@duration+greatest(@initDuration,0), 25) as p25,
pct(@duration+greatest(@initDuration,0), 50) as p50,
pct(@duration+greatest(@initDuration,0), 75) as p75,
pct(@duration+greatest(@initDuration,0), 90) as p90,
pct(@duration+greatest(@initDuration,0), 95) as p95,
pct(@duration+greatest(@initDuration,0), 99) as p99,
pct(@duration+greatest(@initDuration,0), 100) as p100
group by function, ispresent(@initDuration) as coldstart
| sort by function, coldstart

The cold start invocations for DdbV3X86 run in 551 ms versus DdbV2TopLevelX86Unbundled, which runs in 945 ms (p90). The minified and bundled v3 version has about 1.7x faster cold starts, while also providing faster performance during warm invocations.

Performance results

Conclusion

In this post, you learn how to improve Node.js cold start performance by up to 70% by bundling and minifying your code. You also learn how to provide your own version of the AWS SDK for JavaScript, and how dependencies and the way they are imported affect the performance of Node.js Lambda functions. To achieve the best performance, use AWS SDK V3, bundle and minify your code, and avoid top-level imports.

For more serverless learning resources, visit Serverless Land.

Amazon Redshift Serverless – Now Generally Available with New Capabilities

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/amazon-redshift-serverless-now-generally-available-with-new-capabilities/

Last year at re:Invent, we introduced the preview of Amazon Redshift Serverless, a serverless option of Amazon Redshift that lets you analyze data at any scale without having to manage data warehouse infrastructure. You just need to load and query your data, and you pay only for what you use. This allows more companies to build a modern data strategy, especially for use cases where analytics workloads are not running 24-7 and the data warehouse is not active all the time. It is also applicable to companies where the use of data expands within the organization and users in new departments want to run analytics without having to take ownership of data warehouse infrastructure.

Today, I am happy to share that Amazon Redshift Serverless is generally available and that we added many new capabilities. We are also reducing Amazon Redshift Serverless compute costs compared to the preview.

You can now create multiple serverless endpoints per AWS account and Region using namespaces and workgroups:

  • A namespace is a collection of database objects and users, such as database name and password, permissions, and encryption configuration. This is where your data is managed and where you can see how much storage is used.
  • A workgroup is a collection of compute resources, including network and security settings. Each workgroup has a serverless endpoint to which you can connect your applications. When configuring a workgroup, you can set up private or publicly accessible endpoints.

Each namespace can have only one workgroup associated with it. Conversely, each workgroup can be associated with only one namespace. You can have a namespace without any workgroup associated with it, for example, to use it only for sharing data with other namespaces in the same or another AWS account or Region.

In your workgroup configuration, you can now use query monitoring rules to help keep your costs under control. Also, the way Amazon Redshift Serverless automatically scales data warehouse capacity is more intelligent to deliver fast performance for demanding and unpredictable workloads.

Let’s see how this works with a quick demo. Then, I’ll show you what you can do with namespaces and workgroups.

Using Amazon Redshift Serverless
In the Amazon Redshift console, I select Redshift serverless in the navigation pane. To get started, I choose Use default settings to configure a namespace and a workgroup with the most common options. For example, I’ll be able to connect using my default VPC and default security group.

Console screenshot.

With the default settings, the only option left to configure is Permissions. Here, I can specify how Amazon Redshift can interact with other services such as S3, Amazon CloudWatch Logs, Amazon SageMaker, and AWS Glue. To load data later, I give Amazon Redshift access to an S3 bucket. I choose Manage IAM roles and then Create IAM role.

Console screenshot.

When creating the IAM role, I select the option to give access to specific S3 buckets and pick an S3 bucket in the same AWS Region. Then, I choose Create IAM role as default to complete the creation of the role and to automatically use it as the default role for the namespace.

Console screenshot.

I choose Save configuration and after a few minutes the database is ready for use. In the Serverless dashboard, I choose Query data to open the Redshift query editor v2. There, I follow the instructions in the Amazon Redshift Database Developer guide to load a sample database. If you want to do a quick test, a few sample databases (including the one I am using here) are already available in the sample_data_dev database. Note also that loading data into Amazon Redshift is not required for running queries. I can use data from an S3 data lake in my queries by creating an external schema and an external table.

The sample database consists of seven tables and tracks sales activity for a fictional “TICKIT” website, where users buy and sell tickets for sporting events, shows, and concerts.

Sample database tables relations

To configure the database schema, I run a few SQL commands to create the users, venue, category, date, event, listing, and sales tables.

Console screenshot.

Then, I download the tickitdb.zip file that contains the sample data for the database tables. I unzip and load the files to a tickit folder in the same S3 bucket I used when configuring the IAM role.

Now, I can use the COPY command to load the data from the S3 bucket into my database. For example, to load data into the users table:

copy users from 's3://MYBUCKET/tickit/allusers_pipe.txt' iam_role default;

The file containing the data for the sales table uses tab-separated values:

copy sales from 's3://MYBUCKET/tickit/sales_tab.txt' iam_role default delimiter '\t' timeformat 'MM/DD/YYYY HH:MI:SS';

After I load data in all tables, I start running some queries. For example, the following query joins five tables to find the top five sellers for events based in California (note that the sample data is for the year 2008):

select sellerid, username, (firstname ||' '|| lastname) as sellername, venuestate, sum(qtysold)
from sales, date, users, event, venue
where sales.sellerid = users.userid
and sales.dateid = date.dateid
and sales.eventid = event.eventid
and event.venueid = venue.venueid
and year = 2008
and venuestate = 'CA'
group by sellerid, username, sellername, venuestate
order by 5 desc
limit 5;

Console screenshot.

Now that my database is ready, let’s see what I can do by configuring Amazon Redshift Serverless namespaces and workgroups.

Using and Configuring Namespaces
Namespaces are collections of database data and their security configurations. In the navigation pane of the Amazon Redshift console, I choose Namespace configuration. In the list, I choose the default namespace that I just created.

In the Data backup tab, I can create or restore a snapshot or restore data from one of the recovery points that are automatically created every 30 minutes and kept for 24 hours. That can be useful to recover data in case of accidental writes or deletes.

Console screenshot.

In the Security and encryption tab, I can update permissions and encryption settings, including the AWS Key Management Service (AWS KMS) key used to encrypt and decrypt my resources. In this tab, I can also enable audit logging and export the user, connection, and user activity logs.

Console screenshot.

In the Datashares tab, I can create a datashare to share data with other namespaces and AWS accounts in the same or different Regions. In this tab, I can also create a database from a share I receive from other namespaces or AWS accounts, and I can see the subscriptions for datashares managed by AWS Data Exchange.

Console screenshot.

When I create a datashare, I can select which objects to include. For example, here I want to share only the date and event tables because they don’t contain sensitive data.

Console screenshot.

Using and Configuring Workgroups
Workgroups are collections of compute resources and their network and security settings. They provide the serverless endpoint for the namespace they are configured for. In the navigation pane of the Amazon Redshift console, I choose Workgroup configuration. In the list, I choose the default workgroup that I just created.

In the Data access tab, I can update the network and security settings (for example, change the VPC, the subnets, or the security group) or make the endpoint publicly accessible. In this tab, I can also enable Enhanced VPC routing to route network traffic between my serverless database and the data repositories I use (for example, the S3 buckets used to load or unload data) through a VPC instead of the internet. To access serverless endpoints that are in another VPC or subnet, I can create a VPC endpoint managed by Amazon Redshift.

Console screenshot.

In the Limits tab, I can configure the base capacity (expressed in Redshift processing units, or RPUs) used to process my queries. Amazon Redshift Serverless scales the capacity to deal with a higher number of users. Here I also have the option to increase the base capacity to speed up my queries or decrease it to reduce costs.

In this tab, I can also set Usage limits to configure daily, weekly, and monthly thresholds to keep my costs predictable. For example, I configured a daily limit of 200 RPU-hours, and a monthly limit of 2,000 RPU-hours for my compute resources. To control the data-transfer costs for cross-Region datashares, I configured a daily limit of 3 TB and a weekly limit of 10 TB. Finally, to limit the resources used by each query, I use Query limits to time out queries running for more than 60 seconds.

Console screenshot.

Availability and Pricing
Amazon Redshift Serverless is generally available today in the US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) AWS Regions.

You can connect to a workgroup endpoint using your favorite client tools via JDBC/ODBC or with the Amazon Redshift query editor v2, a web-based SQL client application available on the Amazon Redshift console. When using web services-based applications (such as AWS Lambda functions or Amazon SageMaker notebooks), you can access your database and perform queries using the built-in Amazon Redshift Data API.

With Amazon Redshift Serverless, you pay only for the compute capacity your database consumes when active. The compute capacity scales up or down automatically based on your workload and shuts down during periods of inactivity to save time and costs. Your data is stored in managed storage, and you pay a GB-month rate.

To give you improved price performance and the flexibility to use Amazon Redshift Serverless for an even broader set of use cases, we are lowering the price from $0.5 to $0.375 per RPU-hour for the US East (N. Virginia) Region. Similarly, we are lowering the price in other Regions by an average of 25 percent from the preview price. For more information, see the Amazon Redshift pricing page.

To help you get practice with your own use cases, we are also providing $300 in AWS credits for 90 days to try Amazon Redshift Serverless. These credits are used to cover your costs for compute, storage, and snapshot usage of Amazon Redshift Serverless only.

Get insights from your data in seconds with Amazon Redshift Serverless.

Danilo

ICYMI: Serverless Q2 2022

Post Syndicated from dboyne original https://aws.amazon.com/blogs/compute/icymi-serverless-q2-2022/

Welcome to the 18th edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

In case you missed our last ICYMI, check out what happened last quarter here.

AWS Lambda

For Node.js developers, AWS Lambda now supports the Node.js 16.x runtime version. This offers new features, including the Stable timers promises API and RegExp match indices. There is also new documentation for TypeScript with Lambda.

Customers are rapidly adopting the new runtime version by updating to Node.js 16.x. To help keep Lambda functions secure, AWS continually updates Node.js 16 with all minor updates released by the Node.js community when using the zip archive format. Read the release blog post to learn more about building Lambda functions with Node.js 16.x.

A new Lambda custom runtime is now available for PowerShell. It makes it even easier to run Lambda functions written in PowerShell. Although Lambda has supported PowerShell since 2018, this new version simplifies the process and reduces the additional steps required during the development process.

To get started, see the GitHub repository which contains the code, examples and installation instructions.

PowerShell code in Lambda console

AWS Lambda Powertools is an open-source library to help customers discover and incorporate serverless best practices more easily. Powertools for Python went GA in July 2020, followed by Java in 2021, TypeScript in 2022, and .NET is coming soon. AWS Lambda Powertools crossed the 10M download milestone and TypeScript support has now moved from beta to a release candidate.

When building with Lambda, it’s important to develop solutions to handle retries and failures when the same event may be received more than once. Lambda Powertools provide a utility to handle idempotency within your functions.

To learn more:

AWS Step Functions

AWS Step Functions launched a new opt-in console experience to help builders analyze, debug, and optimize Step Functions Standard Workflows. This allows you to debug workflow executions and analyze the payload as it passes through each state. To opt in to the new console experience and get started, follow these detailed instructions.

Events tab in Step Functions workflow

Amazon EventBridge

Amazon EventBridge released support for global endpoints in April 2022. Global endpoints provide a reliable way for you to improve availability and reliability of event-driven applications. Using global endpoints, you can fail over event ingestion automatically to another Region during service disruptions.

The new IngestionToInvocationStartLatency metric exposes the time to process events from the point at which they are ingested by EventBridge to the point of the first invocation. Amazon Route 53 uses this information to fail over event ingestion automatically to a secondary Region if the metric exceeds a configured threshold of 30 seconds consecutively for 5 minutes.

To learn more:

Amazon EventBridge global endpoints architecture diagram

Serverless Blog Posts

April

Apr 6 – Getting Started with Event-Driven Architecture

Apr 7 – Introducing global endpoints for Amazon EventBridge

Apr 11 – Building an event-driven application with Amazon EventBridge

Apr 12 – Orchestrating high performance computing with AWS Step Functions and AWS Batch

Apr 14 – Working with events and the Amazon EventBridge schema registry

Apr 20 – Handling Lambda functions idempotency with AWS Lambda Powertools

Apr 26 – Build a custom Java runtime for AWS Lambda

May

May 05 – Amazon EC2 DL1 instances Deep Dive

May 05 – Orchestrating Amazon S3 Glacier Deep Archive object retrieval using AWS Step Functions

May 09 – Benefits of migrating to event-driven architecture

May 09 – Debugging AWS Step Functions executions with the new console experience

May 12 – Node.js 16.x runtime now available in AWS Lambda

May 25 – Introducing the PowerShell custom runtime for AWS Lambda

June

Jun 01 – Testing Amazon EventBridge events using AWS Step Functions

Jun 02 – Optimizing your AWS Lambda costs – Part 1

Jun 02 – Optimizing your AWS Lambda costs – Part 2

Jun 02 – Extending PowerShell on AWS Lambda with other services

Jun 02 – Running AWS Lambda functions on AWS Outposts using AWS IoT Greengrass

Jun 14 – Combining Amazon AppFlow with AWS Step Functions to maximize application integration benefits

Jun 14 – Capturing GPU Telemetry on the Amazon EC2 Accelerated Computing Instances

Serverlesspresso goes global

Serverlesspresso in five countries

Serverlesspresso is a serverless event-driven application that allows you to order coffee from your phone.

Since building Serverlesspresso for re:Invent 2021, the Developer Advocate team has put in approximately 100 additional development hours to make the application a multi-tenant, event-driven serverless app.

This allowed us to run Serverlesspresso concurrently at five separate events across Europe on a single day in June, serving over 5,000 coffees. Each order is orchestrated by a single Step Functions workflow. To read more about how this application is built:

AWS Heroes EMEA Summit in Milan, Italy

AWS Heroes EMEA Summit in Milan, Italy

The AWS Heroes program recognizes talented experts whose enthusiasm for knowledge-sharing has a real impact within the community. The EMEA-based Heroes gathered for a Summit on June 28 to share their thoughts, providing valuable feedback on topics such as containers, serverless and machine learning.

Serverless workflow collection added to Serverless Land

Serverless Land is a website that is maintained by the Serverless Developer Advocate team to help you learn with workshops, patterns, blogs and videos.

The Developer Advocate team has extended Serverless Land and introduced the new AWS Step Functions workflows collection.

Using the new collection you can explore common patterns built with Step Functions and use the 1-click deploy button to deploy straight into your AWS account.

Serverless Workflows Collection on Serverless Land

Videos

Serverless Office Hours – Tues 10AM PT

ServerlessLand YouTube Channel

Weekly live virtual office hours. In each session, we talk about a specific topic or technology related to serverless, then open it up to help you with your real serverless challenges and issues. Ask us anything you want about serverless technologies and applications.

YouTube: youtube.com/serverlessland
Twitch: twitch.tv/aws

April

May

June

FooBar Serverless YouTube channel

FooBar Serverless Channel

Marcia Villalba frequently publishes new videos on her popular serverless YouTube channel. You can view all of Marcia’s videos at https://www.youtube.com/c/FooBar_codes.

April

May

June

Still looking for more?

The Serverless landing page has more information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials.

You can also follow the Serverless Developer Advocacy team on Twitter to see the latest news, follow conversations, and interact with the team.

Introducing the new AWS Step Functions Workflows Collection

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/introducing-the-new-aws-step-functions-workflows-collection/

Today, the AWS Serverless Developer Advocate team introduces the Step Functions Workflows Collection, a fresh experience that makes it easier to discover, deploy, and share Step Functions workflows.

Builders create Step Functions workflows to orchestrate multiple services into business-critical applications with minimal code. Customers were looking for opinionated templates that implement best practices for building serverless applications with Step Functions.

This blog post explains what Step Functions workflows are and what challenges they help solve. It shows how to use the new Step Functions workflows collection to find simple “building blocks”, reusable patterns, and example applications to help build your serverless applications with Step Functions.

Overview

Large serverless applications often comprise multiple decoupled resources, which can make them challenging to observe and makes errors harder to discover. Step Functions is a low-code visual workflow service that helps solve this challenge. It provides instant visual understanding of an application, the services it integrates with, and any errors that might occur during execution.

Step Functions workflows comprise a sequence of steps where the output of one step passes on as input to the next. Step Functions can integrate with over 220 AWS services by using an AWS SDK integration task. This allows users to call AWS SDK actions directly without the need to write additional code.

Getting started with the Step Functions workflows collection

Explore the Step Functions workflows collection to discover new workflows. The collection has three levels of workflows:

  1. Fundamental: A simple, reusable building block.
  2. Pattern: A common reusable component of an application.
  3. Application: A complete serverless application or microservice.

Workflows are also categorized by multiple use-cases, including data processing, SaaS integration, and security automation. Once you find a workflow that you want to use in your application:

  1. Choose View to go to the workflow details page.
  2. Choose Template from the workflow details page to view the infrastructure as code (IaC) deployment template. Here, you can see how to configure resources with AWS best practices.
    The workflows collection currently supports deployable workflow templates defined with the AWS Serverless Application Model (AWS SAM) or the AWS Cloud Development Kit (AWS CDK).

    Structure of an AWS SAM template

    AWS SAM is an open-source framework for building serverless applications. It provides shorthand syntax that makes it easier to build and deploy serverless applications. With only a few lines, you can define each resource using YAML or JSON.

    An AWS SAM template can have serverless-specific resources or standard AWS CloudFormation resources. When you run sam deploy, sam transforms serverless resources into CloudFormation syntax.

    Structure of an AWS CDK template

    The AWS CDK provides another way to define your application resources using common programming languages. The CDK is an open source framework that you can use to model your applications. As with AWS SAM, when you run npx cdk deploy --app 'ts-node .', the CDK transforms the template into AWS CloudFormation syntax and creates the specified resources for you.

  3. Choose Workflow Definition to see the Amazon States Language (ASL) definition that defines the workflow.

    ASL is a JSON-based, structured language for authoring Step Functions workflows. It enables developers to filter and manipulate data at various stages of a workflow state’s execution using paths. A path is a string beginning with $ that lets you identify and filter subsets of JSON text. Learning how to apply these filters helps to build efficient workflows with minimal state transitions.

    The more advanced workflows in the collection show how to use intrinsic functions to manipulate payload data. Intrinsic functions are ASL constructs that help build and convert payloads without creating additional task state transitions. Use intrinsic functions in Task states within the ResultSelector field, or in a Pass state in either the Parameters or ResultSelector field. The Step Functions documentation shows examples of how to:

    1. Construct strings from interpolated values.
    2. Convert a JSON object to a string.
    3. Convert arguments to an array.

    A short sketch showing these intrinsic functions in a Pass state appears after this list. Use the workflow definition to see how to configure each workflow state. This is helpful for understanding how to define task types you are unfamiliar with and how to apply intrinsic functions to help reduce state transitions. Use the data flow simulator to model and refine your input and output path processing.
  4. Follow the Download and Deployment commands to deploy the workflow into your AWS account. Use the Additional resources to read more about the workflow.
  5. Once you have deployed the workflow into your AWS account, continue building in the AWS Management Console with Workflow Studio or locally by editing the downloaded files.

    Continue building with Workflow Studio
    To edit the workflow in Workflow Studio, select the workflow from the Step Functions console and choose Edit > Workflow Studio.
    From here, you can drag-and-drop flow and Task states onto the canvas, then configure states and data transformations using built-in forms. Workflow Studio composes your workflow definition in real time. If you are new to Step Functions, Workflow Studio provides an easy way to continue building your first workflow that delivers business value.

    Continue building in your local IDE
    For developers who prefer to build locally, the AWS Toolkit for VS Code enables you to define, visualize, and create your Step Functions workflows without leaving the VS Code. The toolkit also provides code snippets for seven different ASL state types and additional service integrations to speed up workflow development. To continue building locally with VS Code:

    1. Download the AWS Toolkit for VS Code
    2. Open the statemachine.asl.json definition file, and choose Render graph to visualize the workflow as you build.
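
As referenced earlier, the following is a minimal sketch of a Pass state that uses the three intrinsic functions listed above. The state and field names are hypothetical, and the definition is shown in YAML for readability:

  FormatPayload:
    Type: Pass
    Parameters:
      # Construct a string from interpolated values
      greeting.$: States.Format('Hello, {}!', $.name)
      # Convert a JSON object to a string
      detailAsString.$: States.JsonToString($.detail)
      # Convert arguments to an array
      valuesAsArray.$: States.Array($.first, $.second)
    End: true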

Contributing to the Step Functions Workflows collection

Anyone can contribute a workflow to the Step Functions workflows collection. New workflow files can be hosted in the AWS workflows-collection GitHub repository, or in a pre-existing repository of your own.

To submit a workflow:

  1. Choose Submit a workflow from the navigation section.
  2. Fill out the GitHub issue template.
  3. Clone the repository, and duplicate and rename the example _workflow_model directory.
  4. Add the associated workflow template files, ASL, and workflow image.
  5. Add the required meta information to `example-workflow.json`
  6. Make a Pull Request to the repository with the new workflow files.

Additional guidance can be found in the repository’s PUBLISHING.md file.

Conclusion

Today, the AWS Serverless Developer Advocate team is launching a new Serverless Land experience called “The Step Functions workflows collection”. This helps builders search, deploy, and contribute example Step Functions workflows.

The workflows collection simplifies the Step Functions getting started experience, and also shows more advanced users how to apply best practices to their workflows. These examples consist of fundamental building blocks for workflows, common application patterns implemented as workflows, and end-to-end applications.

All Step Functions builders are invited to contribute to the collection. This is done by submitting a pull request to the Step Functions Workflows Collection GitHub repository. Each submission is reviewed by the Serverless Developer advocate for quality and relevancy before publishing.

You can now learn to use Step Functions with a new workshop called the AWS Step Functions Workshop. This self-paced tutorial teaches you how to use the primary features of Step Functions through a series of interactive modules.

For more information on building applications with Step Functions visit Serverlessland.com.

Building a low-code speech “you know” counter using AWS Step Functions

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/building-a-low-code-speech-you-know-counter-using-aws-step-functions/

This post is written by Doug Toppin, Software Development Engineer, and Kishore Dhamodaran, Solutions Architect.

In public speaking, filler phrases can distract the audience and reduce the value and impact of what you are telling them. Reviewing recordings of presentations can be helpful to determine whether presenters are using filler phrases. Instead of manually reviewing prior recordings, automation can process media files and perform a speech-to-text function. That text can then be processed to report on the use of filler phrases.

This blog explains how to use AWS Step Functions, Amazon EventBridge, Amazon Transcribe and Amazon Athena to report on the use of the common phrase “you know” in media files. These services can automate and reduce the time required to find the use of filler phrases.

Step Functions can automate and chain together multiple activities and other AWS services. Amazon Transcribe is a speech-to-text service that takes media files as input and produces textual transcripts from them. Athena is an interactive query service that makes it easier to analyze data in Amazon S3 using standard SQL.

This blog shows a low-code, configuration-driven approach to implementing this solution. Low-code means writing little or no custom software to perform a function. Instead, you use a configuration-driven approach with service integrations, where state machine tasks call AWS services using existing SDKs, APIs, or interfaces. In this example, the configuration-driven approach uses Step Functions’ Amazon States Language (ASL) to tie actions together rather than writing traditional code. This requires fewer details for data management and error handling, combined with a visual user interface for composing the workflow. Because the actions and logic are clearly defined in the visual workflow, this reduces maintenance.

Solution overview

The following diagram shows the solution architecture.

Solution Overview

  1. You upload a media file to an Amazon S3 Media bucket.
  2. The media file upload to S3 triggers an EventBridge rule.
  3. The EventBridge rule starts the Step Functions state machine execution.
  4. The state machine invokes Amazon Transcribe to process the media file.
  5. The transcription output file is stored in the Amazon S3 Transcript bucket.
  6. The state machine invokes Athena to query the textual transcript for the filler phrase. This uses the AWS Glue table to describe the format of the transcription results file.
  7. The filler phrase count determined by Athena is returned and stored in the Amazon S3 Results bucket.

Prerequisites

  1. An AWS account and an AWS user or role with sufficient permissions to create the necessary resources.
  2. Access to the following AWS services: Step Functions, Amazon Transcribe, Athena, and Amazon S3.
  3. Latest version of the AWS Serverless Application Model (AWS SAM) CLI, which helps developers create and manage serverless applications in the AWS Cloud.
  4. Test media files (for example, the Official AWS Podcast).

Example walkthrough

  1. Clone the GitHub repository to your local machine.
  2. git clone https://github.com/aws-samples/aws-stepfunctions-examples.git
  3. Deploy the resources using AWS SAM. The deploy command processes the AWS SAM template file to create the necessary resources in AWS. Choose you-know as the stack name and the AWS Region that you want to deploy your solution to.
  4. cd aws-stepfunctions-examples/sam/app-low-code-you-know-counter/
    sam deploy --guided

Use the default parameters or replace with different values if necessary. For example, to get counts of a different filler phrase, replace the FillerPhrase parameter.

GlueDatabaseYouKnowP: Name of the AWS Glue database to create.
AthenaTableName: Name of the AWS Glue table that is used by Athena to query the results.
FillerPhrase: The filler phrase to check.
AthenaQueryPreparedStatementName: Name of the Athena prepared statement used to run SQL queries.
AthenaWorkgroup: The Athena workgroup to use.
AthenaDataCatalog: The data source for running the Athena queries.
SAM Deploy

Running the filler phrase counter

  1. Navigate to the Amazon S3 console and upload an mp3 or mp4 podcast recording to the bucket named bucket-{account number}-{Region}-you-know-media.
  2. Navigate to the Step Functions console. Choose the running state machine, and monitor the execution of the transcription state machine.
  3. State Machine Execution

  4. When the execution completes successfully, select the QueryExecutionSuccess task to examine the output and see the filler phrase count.
  5. State Machine Output

  6. Amazon Transcribe produces the transcript text of the media file. You can examine the output in the Results bucket. Using the S3 console, navigate to the bucket, choose the file matching the media file name and use ‘Query with S3 Select’ to view the content.
  7. If the transcription job does not execute, the state machine reports the failure and exits.
  8. State Machine Fail

Exploring the state machine

The state machine orchestrates the transcription processing:

State Machine Explore

The StartTranscriptionJob task starts the transcription job. The Wait state adds a 60-second delay before checking the status of the transcription job. The Choice state continues this loop until the status of the job changes to FAILED or COMPLETED.
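
The Wait state is not shown in the template snippets later in this post, but based on this description it has roughly the following shape (a sketch, assuming the state names used elsewhere in the workflow):

  Wait:
    Type: Wait
    # Pause for 60 seconds before polling the transcription job status again
    Seconds: 60
    Next: GetTranscriptionJob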

When the job successfully completes, the AthenaStartQueryExecutionUsingPreparedStatement task starts the Athena query, and stores the results in the S3 results bucket. The AthenaGetQueryResults task retrieves the count from the resultset.

The TranscribeMediaBucket holds the media files to be uploaded. The configuration sends the upload notification event to EventBridge:

      
   NotificationConfiguration:
     EventBridgeConfiguration:
       EventBridgeEnabled: true
	  

The TranscribeResultsBucket has an associated policy to provide access to Amazon Transcribe. Athena stores the output from the queries performed by the state machine in the AthenaQueryResultsBucket.

When a media upload occurs, an EventBridge rule starts the YouKnowTranscribeStateMachine through Step Functions’ native EventBridge integration. The triggering event object is similar to:

{
  "version": "0",
  "id": "99a0cb40-4b26-7d74-dc59-c837f5346ac6",
  "detail-type": "Object Created",
  "source": "aws.s3",
  "account": "012345678901",
  "time": "2022-05-19T22:21:10Z",
  "region": "us-east-2",
  "resources": [
    "arn:aws:s3:::bucket-012345678901-us-east-2-you-know-media"
  ],
  "detail": {
    "version": "0",
    "bucket": {
      "name": "bucket-012345678901-us-east-2-you-know-media"
    },
    "object": {
      "key": "Podcase_Episode.m4a",
      "size": 202329,
      "etag": "624fce93a981f97d85025e8432e24f48",
      "sequencer": "006286C2D604D7A390"
    },
    "request-id": "B4DA7RD214V1QG3W",
    "requester": "012345678901",
    "source-ip-address": "172.0.0.1",
    "reason": "PutObject"
  }
}

The state machine allows you to prepare parameters and use the direct SDK integrations to start the transcription job by calling the Amazon Transcribe service’s API. This integration means you don’t have to write custom code to perform this function. The event triggering the state machine execution contains the uploaded media file location.


  StartTranscriptionJob:
	Type: Task
	Comment: Start a transcribe job on the provided media file
	Parameters:
	  Media:
		MediaFileUri.$: States.Format('s3://{}/{}', $.detail.bucket.name, $.detail.object.key)
	  TranscriptionJobName.$: "$.detail.object.key"
	  IdentifyLanguage: true
	  OutputBucketName: !Ref TranscribeResultsBucket
	Resource: !Sub 'arn:${AWS::Partition}:states:${AWS::Region}:${AWS::AccountId}:aws-sdk:transcribe:startTranscriptionJob'

The state machine uses the aws-sdk:transcribe:getTranscriptionJob SDK integration to get the status of the job.


  GetTranscriptionJob:
	Type: Task
	Comment: Retrieve the status of an Amazon Transcribe job
	Parameters:
	  TranscriptionJobName.$: "$.TranscriptionJob.TranscriptionJobName"
	Resource: !Sub 'arn:${AWS::Partition}:states:${AWS::Region}:${AWS::AccountId}:aws-sdk:transcribe:getTranscriptionJob'
	Next: TranscriptionJobStatus

The state machine uses a polling loop with a delay to check the status of the transcription job.


  TranscriptionJobStatus:
	Type: Choice
	Choices:
	- Variable: "$.TranscriptionJob.TranscriptionJobStatus"
	  StringEquals: COMPLETED
	  Next: AthenaStartQueryExecutionUsingPreparedStatement
	- Variable: "$.TranscriptionJob.TranscriptionJobStatus"
	  StringEquals: FAILED
	  Next: Failed
	Default: Wait

When the transcription job completes successfully, the filler phrase counting process begins.

An Athena prepared statement performs the query with the transcription job name as a runtime parameter. The state machine starts the query using the optimized athena:startQueryExecution.sync integration, and the execution pauses, waiting for the results to return before progressing to the next state.
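
The exact task definition is in the example repository. A simplified sketch of its shape follows, with a placeholder for the prepared statement name supplied by the AthenaQueryPreparedStatementName parameter:

  AthenaStartQueryExecutionUsingPreparedStatement:
    Type: Task
    Comment: Run the prepared statement and wait for the query to finish (.sync)
    Resource: arn:aws:states:::athena:startQueryExecution.sync
    Parameters:
      WorkGroup: !Ref AthenaWorkgroup
      QueryExecutionContext:
        Database: !Ref GlueDatabaseYouKnow
      # Athena prepared statements run with EXECUTE <name> USING <parameters>;
      # the transcription job name from the state input is formatted into the query string
      QueryString.$: States.Format('EXECUTE {} USING \'{}\'', 'you-know-prepared-statement', $.TranscriptionJob.TranscriptionJobName)
    Next: AthenaGetQueryResults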

When the query completes, Step Functions retrieves the results using the athena:getQueryResults integration.
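
Again as a simplified sketch, assuming the query execution ID returned by the previous state and the state names mentioned above:

  AthenaGetQueryResults:
    Type: Task
    Comment: Read the filler phrase count from the query results
    Resource: arn:aws:states:::athena:getQueryResults
    Parameters:
      # startQueryExecution.sync returns the QueryExecution object, including its ID
      QueryExecutionId.$: $.QueryExecution.QueryExecutionId
    Next: QueryExecutionSuccess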

The template creates an Athena prepared statement that accepts the transcription job name as a parameter for the query execution:

  ResultsQueryPreparedStatement:
    Type: AWS::Athena::PreparedStatement
    Properties:
      Description: Create a statement that allows the use of a parameter for specifying an Amazon Transcribe job name in the Athena query
      QueryStatement: !Sub >-
        select cardinality(regexp_extract_all(results.transcripts[1].transcript, '${FillerPhrase}')) AS item_count from "${GlueDatabaseYouKnow}"."${AthenaTableName}" where jobname like ?
      StatementName: !Ref AthenaQueryPreparedStatementName
      WorkGroup: !Ref AthenaWorkgroup

There are several opportunities to enhance this tool. For example, you could add support for multiple filler phrases, or build a larger application to upload media and retrieve the results. You could also take advantage of Amazon Transcribe’s real-time transcription API to display results while a presentation is in progress, providing immediate feedback to the presenter.

Cleaning up

  1. Navigate to the Amazon Transcribe console. Choose Transcription jobs in the left pane, select the jobs created by this example, and choose Delete.
  2. Cleanup Delete

  3. Navigate to the S3 console. In the Find buckets by name search bar, enter “you-know”. This shows the list of buckets created for this example. Choose each of the radio buttons next to the bucket individually and choose Empty.
  4. Cleanup S3

  5. Use the following command to delete the stack, and confirm the stack deletion.
  6. sam delete

Conclusion

Low-code applications can increase developer efficiency by reducing the amount of custom code required to build solutions. They can also enable non-developer roles to create automation to perform business functions by providing drag-and-drop style user interfaces.

This post shows how a low-code approach can build a tool chain using AWS services. The example processes media files to produce text transcripts and count the use of filler phrases in those transcripts. It shows how to process EventBridge data and how to invoke Amazon Transcribe and Athena using Step Functions state machines.

For more serverless learning resources, visit Serverless Land.

Orchestrating AWS Glue crawlers using AWS Step Functions

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/orchestrating-aws-glue-crawlers-using-aws-step-functions/

This blog post is written by Justin Callison, General Manager, AWS Workflow.

Organizations generate terabytes of data every day in a variety of semistructured formats. AWS Glue and Amazon Athena can give you a simpler and more cost-effective way to analyze this data with no infrastructure to manage. AWS Glue crawlers identify the schema of your data and manage the metadata required to analyze the data in place, without the need to transform this data and load into a data warehouse.

The timing of when your crawlers run and complete is important. You must ensure the crawler runs after your data has updated and before you query it with Athena or analyze with an AWS Glue job. If not, your analysis may experience errors or return incomplete results.

In this blog, you learn how to use AWS Step Functions, a low-code visual workflow service that integrates with over 220 AWS services. You use the service to orchestrate your crawlers: control when they start, confirm their completion, and combine them into end-to-end, serverless data processing workflows.

Using Step Functions to orchestrate multiple AWS Glue crawlers provides a number of benefits compared to implementing a solution directly with code. First, the workflow provides an instant visual understanding of the application and any errors that might occur during execution. Step Functions’ ability to run nested workflows inside a Map state helps to decouple and reuse application components with native array iteration. Finally, the Step Functions Wait state lets the workflow periodically poll the status of the crawl job without incurring additional cost for idle wait time.

Deploying the example

With this example, you create three datasets in Amazon S3, then use Step Functions to orchestrate AWS Glue crawlers to analyze the datasets and make them available to query using Athena.

You deploy the example with AWS CloudFormation using the following steps:

  1. Download the template.yaml file from here.
  2. Log in to the AWS Management Console and go to AWS CloudFormation.
  3. Navigate to Stacks -> Create stack and select With new resources (standard).
  4. Select Template is ready and Upload a template file, then Choose File and select the template.yaml file that you downloaded in Step 1 and choose Next.
  5. Enter a stack name, such as glue-stepfunctions-demo, and choose Next.
  6. Choose Next, check the acknowledgement boxes in the Capabilities and transforms section, then choose Create stack.
  7. After deployment, the status updates to CREATE_COMPLETE.

Create your datasets

Navigate to Step Functions in the AWS Management Console and select the create-dataset state machine from the list. This state machine uses Express Workflows and the Parallel state to build three datasets concurrently in S3. The first two datasets include information by user and location respectively and include files per day over the 5-year period from 2016 to 2020. The third dataset is a simpler, all-time summary of data by location.

To create the datasets, you choose Start execution from the toolbar for the create-dataset state machine, then choose Start execution again in the dialog box. This runs the state machine and creates the datasets in S3.

Navigate to the S3 console and view the glue-demo-databucket created for this example. In this bucket, in a folder named data, there are three subfolders, each containing a dataset.

The all-time-location-summaries folder contains a set of JSON files, one for each location.

The daily-user-summaries and daily-location-summaries folders contain nested folders for each year, month, and date. In addition to making this data easier to navigate via the console, this folder structure provides hints to AWS Glue that it can use to partition this dataset and make it more efficient to query.

Crawling

You now use AWS Glue crawlers to analyze these datasets and make them available to query. Navigate to the AWS Glue console and select Crawlers to see the list of crawlers that you created when you deployed this example. Select the daily-user-summaries crawler to view its details and note that it has tags assigned to indicate metadata such as the datatype of the data and whether the dataset is partitioned.

Now, return to the Step Functions console and view the run-crawlers-with-tags state machine. This state machine uses AWS SDK service integrations to get a list of all crawlers matching the tag criteria you enter. It then uses the Map state and the optimized Step Functions service integration to run the run-crawler state machine for each of the matching crawlers concurrently. The run-crawler state machine starts each crawler and monitors its status until the crawler completes. Once each of the individual crawlers has completed, the run-crawlers-with-tags state machine also completes.
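
The nested execution pattern at the heart of run-crawlers-with-tags looks roughly like the following sketch (in YAML, with a placeholder state machine ARN, and assuming a previous state has already placed the list of matching crawler names at $.crawlers):

  RunCrawlersForEachMatch:
    Type: Map
    ItemsPath: $.crawlers
    MaxConcurrency: 10
    Iterator:
      StartAt: StartNestedRunCrawler
      States:
        StartNestedRunCrawler:
          Type: Task
          # Optimized integration that starts a nested execution and waits for it to complete
          Resource: arn:aws:states:::states:startExecution.sync:2
          Parameters:
            StateMachineArn: arn:aws:states:us-east-1:111122223333:stateMachine:run-crawler  # placeholder ARN
            Input:
              crawler_name.$: $
          End: true
    End: true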

To initiate the crawlers:

  1. Choose Start execution from the top of the page when viewing the run-crawlers-with-tags state machine
  2. Provide the following as Input
    {"tags": {"datatype": "json"}}
  3. Choose Start execution.

After 2-3 minutes, the execution finishes with a Succeeded status once all three crawlers have completed. During this time, you can navigate to the run-crawler state machine to view the individual, nested executions per crawler or to the AWS Glue console to see the status of the crawlers.

Querying the data using Amazon Athena

Now, navigate to the Athena console where you can see the database and tables created by your crawlers. Note that AWS Glue recognized the partitioning scheme and included fields for year, month, and date in addition to user and usage fields for the data contained in the JSON files.

If you have not used Athena in this account before, you see a message instructing you to set a query result location. Choose View settings -> Manage -> Browse S3 and select the athena-results bucket that you created when you deployed the example. Choose Save then return to the Editor to continue.

You can now run queries such as the following, to calculate the total usage for all users over 5 years.

SELECT SUM(usage) all_time_usage FROM "daily_user_summaries"

You can also add filters, as shown in the following example, which limits results to those from 2016.

SELECT SUM(usage) all_time_usage FROM "daily_user_summaries" WHERE year = '2016'

Note this second query scanned only 17% as much data (133 KB vs 797 KB) and completed faster. This is because Athena used the partitioning information to avoid querying the full dataset. While the differences in this example are small, for real-world datasets with terabytes of data, your cost and latency savings from partitioning data can be substantial.

The disadvantage of a partitioning scheme is that new folders are not included in query results until you add new partitions. Re-running your crawler identifies and adds the new partitions, and using Step Functions to orchestrate these crawlers makes that task simpler.

Extending the example

You can use these example state machines as they are in your AWS accounts to manage your existing crawlers. You can use Amazon S3 event notifications with Amazon EventBridge to trigger crawlers based on data changes. With the Optimized service integration for Amazon Athena, you can extend your workflows to execute queries against these crawled datasets. And you can use these examples to integrate crawler execution into your end-to-end data processing workflows, creating reliable, auditable workflows from ingestion through to analysis.

Conclusion

In this blog post, you learn how to use Step Functions to orchestrate AWS Glue crawlers. You deploy an example that generates three datasets, then uses Step Functions to start and coordinate crawler runs that analyze this data and make it available to query using Athena.

To learn more about Step Functions, visit Serverless Land.

Sending Amazon EventBridge events to private endpoints in a VPC

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/sending-amazon-eventbridge-events-to-private-endpoints-in-a-vpc/

This post is written by Emily Shea, Senior GTM Specialist, Event-Driven Architectures.

Building with events can help you accelerate feature velocity and build scalable, fault tolerant applications. You can achieve loose coupling in your application using asynchronous communication via events. Loose coupling allows each development team to build and deploy independently and each component to scale and fail without impacting the others. This approach is referred to as event-driven architecture.

Amazon EventBridge helps you build event-driven architectures. You can publish events to the EventBridge event bus and EventBridge routes those events to targets. You can write rules to filter events and only send them to the interested targets. For example, an order fulfillment service may only be interested in events of type ‘new order created.’
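
As a hypothetical illustration of that kind of filtering (not part of the example application in this post), an EventBridge rule defined in CloudFormation might look like the following, where the bus name, event source, and target are assumed values:

  NewOrderRule:
    Type: AWS::Events::Rule
    Properties:
      EventBusName: orders-bus                       # assumed event bus name
      EventPattern:
        source:
          - com.example.orders                       # assumed event source
        detail-type:
          - new order created
      Targets:
        - Id: order-fulfillment-target
          Arn: !GetAtt OrderFulfillmentFunction.Arn  # hypothetical Lambda target
  # A Lambda permission resource is also required so that EventBridge can invoke the function.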

EventBridge is serverless, so there is no infrastructure to manage and the service scales automatically. EventBridge has native integrations with over 100 AWS services and over 40 SaaS providers.

Amazon EventBridge has a native integration with AWS Lambda, and many AWS customers use events to trigger Lambda functions to process events. You may also want to send events to workloads running on Amazon EC2 or containerized workloads deployed with Amazon ECS or Amazon EKS. These services are deployed into an Amazon Virtual Private Cloud, or VPC.

For some use cases, you may be able to expose public endpoints for your VPC. You can use EventBridge API destinations to send events to any public HTTP endpoint. API destinations include features like OAuth support and rate limiting to control the number of events you are sending per second.

However, some customers are not able to expose public endpoints for security or compliance purposes. This tutorial shows you how to send EventBridge events to a private endpoint in a VPC using a Lambda function to relay events. This solution deploys the Lambda function connected to the VPC and uses IAM permissions to enable EventBridge to invoke the Lambda function. Learn more about Lambda VPC connectivity here.
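
In an AWS SAM template, connecting a Lambda function to private subnets in a VPC is a matter of adding a VpcConfig block. The following is a simplified sketch with placeholder IDs, a hypothetical handler, and a hypothetical secret resource, not the exact resource from the example repository:

  EventRelayFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: event_relay/           # hypothetical code location
      Handler: app.handler            # hypothetical handler
      Runtime: python3.9
      VpcConfig:
        SecurityGroupIds:
          - sg-0123456789abcdef0      # security group shared with the EKS node group
        SubnetIds:
          - subnet-0123456789abcdef0  # private subnets that can reach the ALB
          - subnet-0fedcba9876543210
      Policies:
        # SAM policy template granting read access to the authentication secret
        - AWSSecretsManagerGetSecretValuePolicy:
            SecretArn: !Ref EventRelaySecret  # hypothetical secret resource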

In this blog post, you learn how to send EventBridge events to a private endpoint in a VPC. You set up an example application with an EventBridge event bus, a Lambda function to relay events, a Flask application running in an EKS cluster to receive events behind an Application Load Balancer (ALB), and a secret stored in Secrets Manager for authenticating requests. This application uses EKS and Secrets Manager to demonstrate sending and authenticating requests to a containerized workload, but the same pattern applies for other container orchestration services like ECS and your preferred secret management solution.

Continue reading for the full example application and walkthrough. If you have an existing application in a VPC, you can deploy just the event relay portion and input your VPC details as parameters.

Solution overview

Architecture

  1. An event is sent to the EventBridge bus.
  2. If the event matches a certain pattern (for example, if ‘detail-type’ is ‘inbound-event-sent’), an EventBridge rule uses EventBridge’s input transformer to format the event as an HTTP call (a simplified input transformer sketch follows this list).
  3. The EventBridge rule pushes the event to a Lambda function connected to the VPC and a CloudWatch Logs group for debugging.
  4. The Lambda function calls Secrets Manager and retrieves a secret key. It appends the secret key to the event headers and then makes an HTTP call to the ALB URL endpoint.
  5. ALB routes this HTTP call to a node group in the EKS cluster. The Flask application running on the EKS cluster calls Secrets Manager, confirms that the secret key is valid, and then processes the event.
  6. The Lambda function receives a response from ALB.
    1. If the Flask application fails to process the event for any reason, the Lambda function raises an error. The function’s failure destination is configured to send the event and the error message to an SQS dead letter queue.
    2. If the Flask application successfully processes the event and the ‘return-response-event’ flag in the event was set to ‘true’, then the Lambda function publishes a new ‘outbound-event-sent’ event to the same EventBridge bus.
  7. Another EventBridge rule matches detail-type ‘outbound-event-sent’ events and routes these to the CloudWatch Logs group for debugging.
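
As referenced in step 2, an input transformer on a rule target is configured with an InputPathsMap (which extracts values from the event) and an InputTemplate (which shapes the payload sent to the target). The following is a simplified sketch of the idea in CloudFormation YAML, not the exact configuration from the example repository:

  Targets:
    - Id: event-relay-function
      Arn: !GetAtt EventRelayFunction.Arn   # hypothetical target reference
      InputTransformer:
        InputPathsMap:
          # Extract fields from the matched event into named variables
          detail: $.detail
          detailType: $.detail-type
        InputTemplate: |
          {"detail-type": "<detailType>", "body": <detail>}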

Prerequisites

To run the application, you must install the AWS CLI, Docker CLI, eksctl, kubectl, and AWS SAM CLI.

To clone the repository, run:

git clone https://github.com/aws-samples/eventbridge-events-to-vpc.git

Creating the EKS cluster

  1. In the example-vpc-application directory, use eksctl to create the EKS cluster using the config file.
    cd example-vpc-application
    eksctl create cluster --config-file eksctl_config.yaml

    This takes a few minutes. This step creates an EKS cluster with one node group in us-east-1. The EKS cluster has a service account with IAM permissions to access the Secrets Manager secret you create later.

  2. Use your AWS account’s default Amazon Elastic Container Registry (ECR) private registry to store the container image. First, follow these instructions to authenticate Docker to ECR. Next, run this command to create a new ECR repository. The create-repository command returns a repository URI (for example, 123456789.dkr.ecr.us-east-1.amazonaws.com/events-flask-app).
    aws ecr create-repository --repository-name events-flask-app 

    Use the repository URI in the following commands to build, tag, and push the container image to ECR.

    docker build --tag events-flask-app .
    docker tag events-flask-app:latest {repository-uri}:1
    docker push {repository-uri}:1
  3. In the Kubernetes deployment manifest file (/example-vpc-application/manifests/deployment.yaml), fill in your repository URI and container image version (for example, 123456789.dkr.ecr.us-east-1.amazonaws.com/events-flask-app:1).

Deploy the Flask application and Application Load Balancer

  1. Within the example-vpc-application directory, use kubectl to apply the Kubernetes manifest files. This step deploys the ALB, which takes time to create and you may receive an error message during the deployment (‘no endpoints available for service “aws-load-balancer-webhook-service”‘). Rerun the same command until the ALB is deployed and you no longer receive the error message.
    kubectl apply --kustomize manifests/
  2. Once the deployment is completed, verify that the Flask application is running by retrieving the Kubernetes pod logs. The first command retrieves a pod name to fill in for the second command.
    kubectl get pod --namespace vpc-example-app
    kubectl logs --namespace vpc-example-app {pod-name} --follow

    You should see the Flask application outputting ‘Hello from my container!’ in response to GET request health checks.

    Hello message

Get VPC and ALB details

Next, you retrieve the security group ID, private subnet IDs, and ALB DNS Name to deploy the Lambda function connected to the same VPC and private subnet and send events to the ALB.

  1. In the AWS Management Console, go to the VPC dashboard and find Subnets. Copy the subnet IDs for the two private subnets (for example, subnet name ‘eksctl-events-cluster/SubnetPrivateUSEAST1A’).
    Subnets
  2. In the VPC dashboard, under Security, find the Security Groups tab. Copy the security group ID for ‘eksctl-events-cluster/ClusterSharedNodeSecurityGroup’.
    Security groups
  3. Go to the EC2 dashboard. Under Load Balancing, find the Load Balancer tab. There is a load balancer associated with your VPC ID. Copy the DNS name for the load balancer, adding ‘http://’ as a prefix (for example, http://internal-k8s-vpcexamp-vpcexamp-c005e07d1a-1074647274.us-east-1.elb.amazonaws.com).
    Load balancer

Create the Secrets Manager VPC endpoint

You need a VPC endpoint for your application to call Secrets Manager.

  1. In the VPC dashboard, find the Endpoints tab and choose Create Endpoint. Select Secrets Manager as the service, and then select the VPC, private subnets, and security group that you copied in the previous step. Choose Create.

    VPC endpoint

Deploy the event relay application

Deploy the event relay application using the AWS Serverless Application Model (AWS SAM) CLI:

  1. Open a new terminal window and navigate to the event-relay directory. Run the following AWS SAM CLI commands to build the application and step through a guided deployment.
    cd event-relay
    sam build
    sam deploy --guided

    The guided deployment process prompts for input parameters. Enter ‘event-relay-app’ as the stack name and accept the default Region. For other parameters, submit the ALB and VPC details you copied: Url (ALB DNS name), security group ID, and private subnet IDs. For the Secret parameter, pass any value. The AWS SAM template saves this value as a Secrets Manager secret to authenticate calls to the container application. This is an example of how to pass secrets in the event relay HTTP call. Replace this with your own authentication method in production environments.

  2. Accept the defaults for the remaining options. For ‘Deploy this changeset?’, select ‘y’. Here is an example of the deployment parameters.
    Parameters

Test the event relay application

Both the Flask application in a VPC and the event relay application are now deployed. To test the event relay application, keep the Kubernetes pod logs from a previous step open to monitor requests coming into the Flask application.

  1. You can open a new terminal window and run this AWS CLI command to put an event on the bus, or go to the EventBridge console, find your event bus, and use the Send events UI.
    aws events put-events \
    --entries '[{"EventBusName": "event-relay-bus" ,"Source": "eventProducerApp", "DetailType": "inbound-event-sent", "Detail": "{ \"event-id\": \"123\", \"return-response-event\": true }"}]'

    When the event is relayed to the Flask application, a POST request in the Kubernetes pod logs confirms that the application processed the event.

    Terminal response

  2. Navigate to the CloudWatch Logs console in the AWS Management Console. The ‘/aws/events/event-bus-relay-logs’ log group contains logs for the EventBridge events. In the ‘/aws/lambda/EventRelayFunction’ log group, you can see the Lambda function relay the inbound event and put a new outbound event on the EventBridge bus.
  3. You can test the SQS dead letter queue by creating an error. For example, you can manually change the Lambda function code in the console to pass an incorrect value for the secret. After sending a test event, navigate to the SQS queue in the console and poll for messages. The message shows the error message from the Flask application and the full event that failed to process.

Cleaning up

In the VPC dashboard in the AWS Management Console, find the Endpoints tab and delete the Secrets Manager VPC endpoint. Next, run the following commands to delete the rest of the example application. Be sure to run the commands in this order as some of the resources have dependencies on one another.

sam delete --stack-name event-relay-app
kubectl --namespace vpc-example-app delete ingress vpc-example-app-ingress

From the example-vpc-application directory, run this command.

eksctl delete cluster --config-file eksctl_config.yaml

Conclusion

Event-driven architectures and EventBridge can help you accelerate feature velocity and build scalable, fault tolerant applications. This post demonstrates how to send EventBridge events to a private endpoint in a VPC using a Lambda function to relay events and emit response events.

To learn more, read Getting started with event-driven architectures and visit EventBridge tutorials on Serverless Land.