
Enriching operational events with AWS Serverless

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/enriching-operational-events-with-aws-serverless/

This post was written by Ben Moses, Senior Solutions Architect, Enterprise.

AWS Serverless is a fit for many IT automation and operations use cases, especially for reacting to events. Infrastructure events are a useful way to understand the health of the infrastructure that supports your applications and customers. This blog examines how serverless can help enrich these operational events.

The scenario used in this post shows how an infrastructure event can be intercepted in real-time, enriched with additional information from your AWS environment and workloads, and then sent to a downstream consumer with the added, valuable information.

This example focuses on Amazon EC2 state change events. The concept applies to any type of event, for example those emitted by other AWS services to Amazon CloudWatch Events. These events could also include events produced by AWS Config, and some of AWS CloudTrail’s events, including CloudTrail Insights.

The purpose is to add more valuable information and context to events in real-time. Operators and downstream consumers can then identify emerging patterns in near real-time.

How does this happen today?

It is common for existing solutions to store infrastructure events in whatever format the source system generates, or in a standardized open or proprietary format. Operations staff and systems then analyze these logs to understand patterns and to support root cause analysis. This data must often be enriched by other sources to give it context and meaning. This is done either in a scheduled batch operation by using CSV data from other systems, or by integrating with other enterprise tooling.

The state of your cloud infrastructure changes frequently due to the elasticity and disposability of resources. This can cause an issue with your data quality when using the scheduled batch method. When you come to enrich an infrastructure event, the state may have changed by the time your scheduled batch runs. This leads to gaps or inaccuracies in data, which makes it harder for operators to spot trends and anomalies.

A serverless approach

This example uses serverless services and concepts from event driven architecture (EDA). With this architecture, you only pay when events happen and are enriched. There’s no need for any third-party tooling, and your events are enriched in near real-time.

The EC2 “State Change Event” is enriched by obtaining the instance’s name tag, if it has one. The end-to-end journey looks like this:

Overview

  1. An EC2 instance’s state changes (for example, shutdown or restart).
  2. An Amazon EventBridge rule that matches the event pattern triggers a target action to run an AWS Step Functions state machine.
  3. The state machine transforms inputs, makes a native AWS API SDK call to the EC2 service to find a name tag, and emits a newly enriched event back to EventBridge.
  4. An EventBridge rule matching the enriched event triggers an action to send an email via Amazon SNS to simulate a downstream consumer.

EventBridge is a serverless event bus that can be used with event driven architectures on AWS. An EventBridge rule is defined with a pattern, and if an event matches that pattern, then the rule’s target action is triggered. In this example, the rule is:

{
  "detail-type": ["EC2 Instance State-change Notification"],
  "source": ["aws.ec2"]
}

An EC2 state change event looks like this:

{
  "version": "0",
  "id": "672123fe-53aa-3b22-3b37-1fae26df2aff",
  "detail-type": "EC2 Instance State-change Notification",
  "source": "aws.ec2",
  "account": "1234567890",
  "time": "2022-08-17T18:25:01Z",
  "region": "eu-west-1",
  "resources": [
    "arn:aws:ec2:eu-west-1:1234567890:instance/i-1234567890"
  ],
  "detail": {
    "instance-id": "i-0123456789",
    "state": "running"
  }
}

See the detail-type and source fields in the event. These match the rule, and the entire event payload is passed on to the next component of the architecture: the Step Functions state machine.

Step Functions uses JSONPath to select, transform, and move data through the states within a state machine. This flexibility means that, in this example, no compute resources such as AWS Lambda are required. This can mean less custom code, lower cost, and less complexity.

Step Functions Workflow Studio lets you design workflows visually. These are the key actions that take place when the state machine runs using the EC2 state change event:

Step Functions state machine

1. Remove problem characters from input

Pass states allow us to transform inputs and outputs. In this architecture, a Pass state is used to remove any problem characters from the incoming event that are known to cause issues in future steps, such as API calls to services.

In this example, the parameters for the API call used in Step 2 require the EC2 instance ID. This information is in the detail of the original event, but the API action can’t use anything with a hyphen in it.

To solve this, use a JSONPath Parameter to effectively rewrite this information without the hyphen. This creates a new field named instanceid, which is assigned the value from the original event’s detail.

{
  "instanceid.$": "$.detail.instance-id"
}
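
For context, this is a minimal sketch of how the complete Pass state might look in the Amazon States Language. It assumes the rewritten fields are stored under a $.detail.refined path, which matches the references in later steps; the state name and Next transition are illustrative.

"Remove problem characters from input": {
  "Type": "Pass",
  "Parameters": {
    "instanceid.$": "$.detail.instance-id"
  },
  "ResultPath": "$.detail.refined",
  "Next": "Get instance name from Tag"
}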

2. Get instance name from Tag

The “EC2: DescribeInstances” task in Step Functions is an example of a native SDK integration with an AWS service. This action expects a single parameter to the API, an array of EC2 instance IDs.

{
  "InstanceIds.$": "States.Array($.detail.refined.instanceid)"
}

The States.Array() intrinsic function is used to wrap the instance ID from the re-written field created in step 1. This single-member array is then passed to the EC2 Describe Instances API.

When a response is received from the EC2 Describe Instances API call, it is passed to a Result Selector. The purpose of this is to extract the value of a “Name” tag, if one was returned from the EC2 Describe Instances API.

Step Functions supports the use of JSONPath filter expressions.

{
  "instancename.$": "$..Reservations[0].Instances[0].Tags[?(@.Key==Name)].Value",
  "instanceid.$": "$.Reservations[0].Instances[0].InstanceId"
}

To understand the advanced JSONPath filter expression used in this example, read this blog post.

If an error occurs with the API call, or the filter expression is unable to find a “Name” tag on the EC2 instance, then Step Functions allows you to handle these errors within the workflow.
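
As an illustrative sketch, such error handling could be expressed with a Catch clause on the task state. The SDK resource ARN and transitions shown here are assumptions based on the pattern described in this post, not the exact state machine definition.

"Get instance name from Tag": {
  "Type": "Task",
  "Resource": "arn:aws:states:::aws-sdk:ec2:describeInstances",
  "Parameters": {
    "InstanceIds.$": "States.Array($.detail.refined.instanceid)"
  },
  "Catch": [
    {
      "ErrorEquals": ["States.ALL"],
      "Next": "Get default name from Parameter Store"
    }
  ],
  "Next": "Convert instance name to a string"
}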

3. Convert instance name to a string

The output from the previous state returns an array, but an EC2 instance can only have one unique “Name” tag. A pass state is used again, with a parameter as seen in Step 1. This parameter expression takes the first element from the array and stores it in a new field named instancename.

{
  "instancename.$": "$.detail.refined.instancename[0]",
  "instanceid.$": "$.detail.refined.instanceid"
}

As with previous steps, the instanceid is re-written as part of the output, and both of these values are appended to the state’s output.

4. Get default name from Parameter Store

If the filter expression in the result selector in step 2 fails for any reason, then Step Functions error handling moves here.

Failures can happen for a variety of reasons, and with Step Functions, you can branch out error handling for each different error type. In this example, all errors are handled in the same way, whether the cause is a missing “Name” tag or a permissions issue. In this architecture, a default placeholder value is used in place of the instance name. In your context, a different approach may be more suitable.

The default placeholder name is stored as a static value in AWS Systems Manager Parameter Store. The native Systems Manager: GetParameter action within Step Functions can retrieve this value directly. An advantage of this approach is that the parameter can be updated externally without having to make any changes to the Step Functions state machine itself.
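
For example, the placeholder could be created or updated with the AWS CLI. The parameter name and value here are illustrative only.

aws ssm put-parameter \
    --name "/event-enrichment/default-instance-name" \
    --value "unknown-instance" \
    --type String \
    --overwrite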

5. Add ID back to refined

A Pass state is used to format the response from the Parameter Store API, and a parameter expression then appends the default instance name to the output.

Whether the workflow execution followed the intended execution path, or encountered an error, there is now an enriched event payload with an instance name.

6. Emit enriched event

The EventBridge: PutEvents native SDK action within Step Functions is used to construct and emit the enriched event.

{
  "Entries": [
    {
      "Detail": {
        "Message.$": "$"
      },
      "DetailType": "EnrichedEC2Event",
      "EventBusName": "serverless-event-enrichment-ApplicationEventBus",
      "Source": "custom.enriched.ec2"
    }
  ]
}

The DetailType and Source of the enriched event are custom values, specified in the last step of the state machine. As you consider schemas for your events within your organization, note that the AWS prefix is reserved for AWS service events.

The enriched event payload looks like this:

{
  "version": "0",
  "id": "a80e378b-e9a7-8007-1f18-b947e6d72c4b",
  "detail-type": "EnrichedEC2Event",
  "source": "custom.enriched.ec2",
  "account": "123456789",
  "time": "2022-08-17T18:25:03Z",
  "region": "eu-west-1",
  "resources": [
    "arn:aws:states:eu-west-1:123456789:stateMachine:EventEnrichmentStateMachine-2T5jFlCPOha1",
    "arn:aws:states:eu-west-1:123456789:execution:EventEnrichmentStateMachine-2T5jFlCPOha1:672123fe-53aa-3b22-3b37-1fae26df2aff_90821b68-ba92-2374-5015-8804c8da5769"
  ],
  "detail": {
    "Message": {
      "version": "0",
      "id": "672123fe-53aa-3b22-3b37-1fae26df2aff",
      "detail-type": "EC2 Instance State-change Notification",
      "source": "aws.ec2",
      "account": "123456789",
      "time": "2022-08-17T18:25:01Z",
      "region": "eu-west-1",
      "resources": [
        "arn:aws:ec2:eu-west-1:123456789:instance/i-123456789"
      ],
      "detail": {
        "instance-id": "i-123456789",
        "state": "running",
        "refined": {
          "instancename": "ec2-enrichment-demo-instance",
          "instanceid": "i-123456789"
        }
      }
    }
  }
}

Consuming enriched events

When enriching event data in real-time, the events are only valuable if they’re consumed. To use these enriched events, a consuming service must create and own a new EventBridge rule on the custom application bus. In this architecture, an appropriate rule pattern is:

{
  "detail-type": ["EnrichedEC2Event"],
  "source": ["custom.enriched.ec2"]
}

The target of the rule depends on the use case. For operational events, service management applications or log aggregation services may make the most sense. In this example, the rule has an SNS topic as the target. When SNS receives a message, it is sent to an operator via email. With EventBridge, future consumers can add their own rules to match the enriched events, and add their specific target actions to suit their use case.
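
As an illustration, a consumer could create such a rule and attach an SNS topic as the target using the AWS CLI. The rule name and topic ARN below are placeholders; the bus name matches the one used earlier in this post.

aws events put-rule \
    --name enriched-ec2-email-rule \
    --event-bus-name serverless-event-enrichment-ApplicationEventBus \
    --event-pattern '{"detail-type":["EnrichedEC2Event"],"source":["custom.enriched.ec2"]}'

aws events put-targets \
    --rule enriched-ec2-email-rule \
    --event-bus-name serverless-event-enrichment-ApplicationEventBus \
    --targets "Id"="operator-email","Arn"="arn:aws:sns:eu-west-1:123456789012:operator-notifications"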

Conclusion

This post shows how you can create rules in EventBridge to react to operational events from AWS services. These events are routed to Step Functions, which runs a workflow consisting of steps to enrich the event, handle errors, and emit the enriched event. The example shows how to consume the enriched events, resulting in an operator receiving an email.

This example is available on GitHub as an AWS Serverless Application Model (AWS SAM) template. It contains instructions to deploy, test, and then remove all of the resources when you’ve finished.

For more serverless learning resources, visit Serverless Land.

Server-side rendering micro-frontends – the architecture

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/server-side-rendering-micro-frontends-the-architecture/

This post is written by Luca Mezzalira, Principal Specialist Solutions Architect, Serverless.

Microservices are a common pattern for building distributed systems. As frontend developers have modified their approaches to build architectures at scale, many are building micro-frontends.

This blog series explores how to implement micro-frontends using a server-side rendering (SSR) approach with AWS services. This first article covers the architecture characteristics and building blocks for designing a successful micro-frontends architecture in the AWS Cloud.

What are micro-frontends?

Micro-frontends are the technical representation of a business subdomain. They allow independent teams to work in parallel, reducing external dependencies and increasing delivery throughput. They embody several microservices characteristics such as governance decentralization, design for failure, and evolutionary design.

The main difference between micro-frontends and components is related to the domain ownership present inside a micro-frontend. With components, the domain knowledge is usually delegated to its container, which knows how to use the component’s property based on the context. Owning the domain inside a micro-frontend enables the independence that you expect in a distributed system. This doesn’t mean that micro-frontends cannot communicate or share resources, but the mindset is different compared with components.

If you are using microservices today, you may benefit from micro-frontends for scaling your frontend applications. Before micro-frontends, scaling was based primarily on developers’ expertise. Micro-frontends allow you to modernize frontend applications iteratively like you would with microservices. Every user downloads only the code needed for accomplishing a specific task, increasing the performance and user experience of a web application.

Architecture characteristics

This blog series builds a product details page of an example ecommerce website using micro-frontends with serverless infrastructure.

Page layout

The page is composed of:

  • A template that includes a header. This could include more common parts, but this example uses only the header.
  • A notifications micro-frontend that is client-side rendered. The notifications system must react to user interactions, so it cannot be server-side rendered with the rest of the page.
  • A product details micro-frontend.
  • A reviews micro-frontend.

Every micro-frontend is independent and can be developed by different teams working on the same project. This can reduce external dependencies and potential bugs across the entire application.

The key system characteristics of this project are:

  1. Server-side rendering: The system must be designed with a server-side rendering approach. This provides fast rendering of the page inside modern browsers and reduces the need for client-side JavaScript for rendering the page.
  2. Framework agnostic: The solution must work with a broad variety of JavaScript libraries available and not be bound or optimized to a specific framework.
  3. Use optimization best practices: Optimization is a key feature for server-side rendering applications. Many industries rely on these characteristics for increasing sales. This example encapsulates core web vitals metrics, progressive hydration, and different levels of caches to speed up the response times of the webpages.
  4. Team independence: Every micro-frontend must be developed with minimum external dependencies. Constant coordination across teams can be a sign of design-time coupling that invalidates the purpose behind a distributed system.
  5. Serverless infrastructure for frontend developers: The serverless paradigm helps developers focus on the business logic instead of infrastructure, using a “pay for value” model, which helps to reduce costs. You can cache micro-frontend responses and reduce the traffic on the origin and the need to scale every part of the system in the same way.

High-level architecture design

This is the high-level design to incorporate these architectural characteristics:

Architectural overview

  1. The application entry point is a content delivery network (CDN) that is used for caching, performance, and security reasons.
  2. The server-side rendering approach requires a place to store all the static files to hydrate the JavaScript code in the browser and for styling components.
  3. Page requests require a UI composer that retrieves the micro-frontends and stitches them together to provide the page consumed by a browser. It streams the final HTML page to the browser to enhance the largest contentful paint (LCP) metric from the core web vitals.
  4. Decoupling micro-frontends from the UI composer relies on two mechanisms: A micro-frontends discovery that acts like a service discovery in a microservice architecture, and an HTML template per page that describes where to inject the micro-frontends inside a page. The templates can live in the same repository where the other static files are present.
  5. The notification micro-frontend reacts to user interactions, providing a notification when a user adds a product in the cart.
  6. The product details micro-frontend has highly cacheable data that doesn’t require many changes over time.
  7. The reviews micro-frontend must retrieve user reviews of a specific product.

The key element for avoiding design-time coupling in this architecture is the micro-frontends discovery. The main advantages of this approach are providing discoverability to simplify multi-environment strategies, and reducing the blast radius by using blue/green deployments or canary releases. This topic will be covered in depth in an upcoming post.

From high-level design into implementation

The framework-agnostic approach helps to enable control over system evolution. It achieves this by using HTML-over-the-wire, where every micro-frontend renders an HTML fragment and returns it to the UI composer.

When the UI composer gathers the HTML fragments, it composes the final page to render using transclusion. Every page is represented by a specific template hosted in static files. The UI composer retrieves the template, finds the placeholder references within it, and replaces them with the micro-frontend fragments.
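
As a highly simplified sketch, the transclusion step could look like the following. It assumes each placeholder in the template is a token such as <!--mfe:reviews--> and that fetchFragment resolves a micro-frontend endpoint to its server-rendered HTML; both are assumptions for illustration, not the actual implementation.

// Minimal UI composer sketch: replace template placeholders with HTML fragments
async function composePage(template, fragmentEndpoints, fetchFragment) {
  let html = template;
  for (const [name, endpoint] of Object.entries(fragmentEndpoints)) {
    // Each micro-frontend returns a server-rendered HTML fragment
    const fragment = await fetchFragment(endpoint);
    html = html.replace(`<!--mfe:${name}-->`, fragment);
  }
  return html;
}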

This is the architecture used:

Architecture diagram

  1. Amazon CloudFront provides a unique entry point to the application. The distribution has two origins: the first for static files and the second for the UI composer.
  2. Static files are hosted in an Amazon S3 bucket. They are consumed by the browser and the UI composer for HTML templates.
  3. The UI composer runs on a container cluster on AWS Fargate. Using a containerized solution allows you to use streaming capabilities and multithreading rendering if needed.
  4. AWS Systems Manager Parameter Store is used as a basic micro-frontends discovery system. This service is a key-value store used by the UI composer for retrieving the micro-frontends endpoints to consume (see the sketch after this list).
  5. The notifications micro-frontend stores the optimized JavaScript bundle in the S3 bucket. This renders on the client since it must react to user interactions.
  6. The reviews micro-frontend is composed of an AWS Lambda function with the user reviews stored in Amazon DynamoDB. It’s rendered fully server-side and it outputs an HTML fragment.
  7. The product details micro-frontend is a low-code micro-frontend using AWS Step Functions. The Express Workflow can be invoked synchronously and contains the logic for rendering the HTML fragment and a caching layer. This increases performance due to the native integration with over 200 AWS services.
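
The following is a minimal sketch of how the UI composer might resolve a micro-frontend endpoint from Parameter Store using the AWS SDK for JavaScript v3. The parameter naming convention is an assumption for illustration.

const { SSMClient, GetParameterCommand } = require('@aws-sdk/client-ssm');

const ssm = new SSMClient({});

// Look up the endpoint registered for a given micro-frontend
async function getMicroFrontendEndpoint(name) {
  const result = await ssm.send(
    new GetParameterCommand({ Name: `/micro-frontends/${name}/endpoint` })
  );
  return result.Parameter.Value;
}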

Using this approach, every team developing a micro-frontend can independently build and evolve its business domain. The main touchpoints with other teams are related to the initial integrations and the communication mechanism between micro-frontends present in the same page. When these points are achieved, every team reduces external dependencies and can embrace the evolutionary nature of micro-frontends.

Conclusion

This first post starts the journey into micro-frontends, a distributed architecture for frontend applications. The next post will explore the UI composer and micro-frontends discovery implementations.

If you are interested in learning more about micro-frontends, see the micro-frontends decisions framework, a mental model created for the initial complexity of approaching micro-frontends design. When used as a north star, the decisions framework simplifies the development of micro-frontends applications.

In the AWS reference architectures section, you can find a complete diagram similar to the application described in this blog series with additional details.

For more serverless learning resources, visit Serverless Land.

Serverless and Application Integration sessions at AWS re:Invent 2022

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/serverless-and-application-integration-sessions-at-aws-reinvent-2022/

This post is written by Josh Kahn, Tech Leader, AWS Serverless.

AWS re:Invent 2022 is only a few weeks away, featuring an exciting slate of sessions on Serverless and Application Integration. This post highlights many of the sessions we are hosting, grouped by theme to help you quickly find the sessions most interesting to you.

AWS re:Invent 2022

As in past years, the conference offers a variety of session formats:

  • Breakout sessions: lecture-style presentations delivered by AWS experts, builders, and customers.
  • Builder’s sessions: smaller sessions led by AWS experts during which you will build a project on your own laptop.
  • Chalk talks: interactive sessions led by experts on a variety of topics. Share your own experiences and feedback.
  • Workshops: hands-on learning sessions designed to help you learn about new technologies. Bring your own laptop.

For detailed descriptions and schedule, visit the AWS re:Invent Session Catalog. If you are attending re:Invent, we would love to connect at our AWS Village and Serverlesspresso booths in the Expo or the Modern Applications Zone at the Venetian. You can also reach out to your AWS account team.

Don’t have a ticket yet? Join us in Las Vegas from November 28-December 2, 2022 by registering for re:Invent 2022.

Leadership session (SVS210)

Join Holly Mesrobian, Vice President of Serverless Compute at AWS, to learn how serverless technology empowers organizations to go to market faster while lowering cost across a wide range of applications. Learn about the innovations happening at all layers of the stack, across both serverless functions and serverless containers. Explore newly released innovations that enable more secure, reliable, and performant applications.

Getting started

Are you new to Serverless or taking your first steps? Hear from AWS experts and customers on best practices and strategies for building serverless workloads. Get hands-on with services by building the next great “to do” app or customer experience for a theme park:

We also offer a series of Builder’s Sessions where you can build the same serverless project using three different infrastructure as code frameworks (attend one or more). These sessions are an opportunity to test drive another IaC framework or understand how your framework of choice can be used with serverless:

Event-driven architectures

Event-driven architectures (EDA) are a popular approach to building modern applications. EDA utilizes events (a change in state) to communicate between decoupled services. This architectural approach lends itself well to a wide variety of use cases from ecommerce to order fulfillment, with individual components able to scale (and fail) independently.

Whether you are getting started with EDA, want to get hands-on, or dive into complex architectures, there is a session for you:

Building serverless architectures

Explore the range of tools available to build serverless architectures and cross-cutting concerns, such as security and observability. These sessions cover the brass tacks of building with serverless, going beyond “hello world” to help builders understand how to implement a serverless strategy:

Orchestration

AWS offers several options to orchestrate complex workflows. Whether you need to tightly control data processing workflows or user sign-ups, you can take advantage of these orchestration engines to simplify, become more agile, and modernize your workflows.

Integration patterns

Explore the variety of enterprise integration patterns available using AWS, including Amazon SNS, Amazon SQS, Amazon MQ, and more. These sessions explore the wide variety of patterns available using managed services:

Advanced topics

If you are already familiar with serverless, advanced sessions provide an opportunity to go deeper, including under the hood of the AWS Lambda service. Learn advanced design patterns, best practices, and how to build performant, reliable workloads:

Building serverless applications with Java

New this year, there are several sessions dedicated to building serverless applications with the Java runtime. These sessions dive deep into best practices for building performant Java-based applications:

Other talks

Serverless has become such a popular topic that you will find related sessions in other tracks as well. This list is not exhaustive, but includes talks that you may want to explore:

If you are unable to join us in-person, Breakout Sessions will be available via our YouTube channel after the event. Contact your AWS Account Team if you are interested in learning more about any of these sessions or how to bring our experts to you.

We look forward to seeing you at re:Invent 2022! For more serverless learning resources, visit Serverless Land.

Propagating valid mTLS client certificate identity to downstream services using Amazon API Gateway

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/propagating-valid-mtls-client-certificate-identity-to-downstream-services-using-amazon-api-gateway/

This blog is written by Omkar Deshmane, Senior SA and Anton Aleksandrov, Principal SA, Serverless.

This blog shows how to use Amazon API Gateway with a custom authorizer to process incoming requests, validate the mTLS client certificate, extract the client certificate subject, and propagate it to the downstream application in a base64 encoded HTTP header.

This pattern allows you to terminate mTLS at the edge so downstream applications do not need to perform client certificate validation. With this approach, developers can focus on application logic and offload mTLS certificate management and validation to a dedicated service, such as API Gateway.

Overview

Authentication is one of the core security aspects that you must address when building a cloud application. Successful authentication proves you are who you are claiming to be. There are various common authentication patterns, such as cookie-based authentication, token-based authentication, or the topic of this blog – a certificate-based authentication.

Transport Layer Security (TLS) certificates are at the core of a secure and safe internet. TLS certificates secure the connection between the client and server by encrypting data, ensuring private communication. When using the TLS protocol, the server must prove its identity to the client using a certificate signed by a certificate authority trusted by the client.

Mutual TLS (mTLS) introduces an additional layer of security, in which both the client and server must prove their identities to each other. Developers commonly use mTLS for application-to-application authentication, using digital certificates to represent both client and server apps. We highly recommend decoupling the mTLS implementation from the application business logic so that you do not have to update the application when changing the mTLS configuration. It is a common pattern to implement the mTLS authentication and termination in a network appliance at the edge, such as Amazon API Gateway.

In this solution, we show a pattern of using API Gateway with an authorizer implemented with AWS Lambda to validate the mTLS client certificate, extract the client certificate subject, and propagate it to the downstream application in a base64 encoded HTTP header.

While this blog describes how to implement this pattern for identities extracted from the mTLS client certificates, you can generalize it and apply it to propagating information obtained via any other means of authentication.

mTLS Sample Application

This blog includes a sample application implemented using the AWS Serverless Application Model (AWS SAM). It creates a demo environment containing resources like API Gateway, a Lambda authorizer, and an Amazon EC2 instance, which simulates the backend application.

The EC2 instance is used for the backend application to mimic common customer scenarios. You can use any other type of compute, such as Lambda functions or containerized applications with Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS), as a backend application layer as well.

The following diagram shows the solution architecture:

Example architecture diagram

  1. Store the client certificate in a trust store in an Amazon S3 bucket.
  2. The client makes a request to the API Gateway endpoint, supplying the client certificate to establish the mTLS session.
  3. API Gateway retrieves the trust store from the S3 bucket. It validates the client certificate, matches the trusted authorities, and terminates the mTLS connection.
  4. API Gateway invokes the Lambda authorizer, providing the request context and the client certificate information.
  5. The Lambda authorizer extracts the client certificate subject. It performs any necessary custom validation, and returns the extracted subject to API Gateway as a part of the authorization context.
  6. API Gateway injects the subject extracted in the previous step into the integration request HTTP header and sends the request to a downstream endpoint.
  7. The backend application receives the request, extracts the injected subject, and uses it with custom business logic (a minimal sketch of this follows the list).
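
The following is a minimal Node.js sketch of the backend behavior, shown only for illustration; the sample repository’s backend implementation may differ. It reads the X-Client-Cert-Sub header injected by API Gateway and echoes it back.

const http = require('http');

// Minimal backend sketch: read the subject header injected by API Gateway
http.createServer((req, res) => {
  // Node.js lowercases incoming header names
  const clientCertSub = req.headers['x-client-cert-sub'] || 'unknown';
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  // Custom business logic would use clientCertSub here
  res.end(`Received request for subject: ${clientCertSub}\n`);
}).listen(3000);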

Prerequisites and deployment

Some resources created as part of this sample architecture deployment have associated costs, both when running and idle. This includes resources like Amazon Virtual Private Cloud (Amazon VPC), VPC NAT Gateway, and EC2 instances. We recommend deleting the deployed stack after exploring the solution to avoid unexpected costs. See the Cleaning Up section for details.

Refer to the project code repository for instructions to deploy the solution using AWS SAM. The deployment provisions multiple resources, taking several minutes to complete.

Following the successful deployment, refer to the RestApiEndpoint variable in the Output section to locate the API Gateway endpoint. Note this value for testing later.

AWS CloudFormation output

Key areas in the sample code

There are two key areas in the sample project code.

In src/authorizer/index.js, the Lambda authorizer code extracts the subject from the client certificate. It returns the value as part of the context object to API Gateway. This allows API Gateway to use this value in the subsequent integration request.

const crypto = require('crypto');

exports.handler = async (event) => {
    console.log ('> handler', JSON.stringify(event, null, 4));

    const clientCertPem = event.requestContext.identity.clientCert.clientCertPem;
    const clientCert = new crypto.X509Certificate(clientCertPem);
    const clientCertSub = clientCert.subject.replaceAll('\n', ',');

    const response = {
        principalId: clientCertSub,
        context: { clientCertSub },
        policyDocument: {
            Version: '2012-10-17',
            Statement: [{
                Action: 'execute-api:Invoke',
                Effect: 'Allow',
                Resource: event.methodArn
            }]
        }
    };

    console.log('Authorizer Response', JSON.stringify(response, null, 4));
    return response;
};

In template.yaml, API Gateway injects the client certificate subject previously extracted by the Lambda authorizer into the integration request as the X-Client-Cert-Sub HTTP header. X-Client-Cert-Sub is a custom header name and you can choose any other custom header name instead.

SayHelloGetMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      AuthorizationType: CUSTOM
      AuthorizerId: !Ref CustomAuthorizer
      HttpMethod: GET
      ResourceId: !Ref SayHelloResource
      RestApiId: !Ref RestApi
      Integration:
        Type: HTTP_PROXY
        ConnectionType: VPC_LINK
        ConnectionId: !Ref VpcLink
        IntegrationHttpMethod: GET
        Uri: !Sub 'http://${NetworkLoadBalancer.DNSName}:3000/'
        RequestParameters:
          'integration.request.header.X-Client-Cert-Sub': 'context.authorizer.clientCertSub' 

Testing the example

You create a client key and certificate during the deployment, which are stored in the /certificates directory. Use the curl command to make a request to the REST API endpoint using these files.

curl --cert certificates/client.pem --key certificates/client.key \
<use the RestApiEndpoint found in CloudFormation output>
Example flow diagram

The client request to API Gateway uses mTLS with a client certificate supplied for mutual TLS authentication. API Gateway uses the Lambda authorizer to extract the certificate subject, and inject it into the Integration request.

The HTTP server runs on the EC2 instance, simulating the backend application. It accepts the incoming request and echoes it back, supplying request headers as part of the response body. The HTTP response received from the backend application contains a simple message and a copy of the request headers sent from API Gateway to the backend.

One header is x-client-cert-sub, containing the Common Name value you provided when generating the client certificate. Verify that this value matches the Common Name of your client certificate.

Response example

API Gateway validated the mTLS client certificate, used the Lambda authorizer to extract the subject common name from the certificate, and forwarded it to the downstream application.

Cleaning up

Use the sam delete command in the api-gateway-certificate-propagation directory to delete the sample application infrastructure:

sam delete

You can also refer to the project code repository for the clean-up instructions.

Conclusion

This blog shows how to use the API Gateway with a Lambda authorizer for mTLS client certificate validation, custom field extraction, and downstream propagation to backend systems. This pattern allows you to terminate mTLS at the edge so that downstream applications are not responsible for client certificate validation.

For additional documentation, refer to Using API Gateway with Lambda Authorizer. Download the sample code from the project code repository. For more serverless learning resources, visit Serverless Land.

Simplifying serverless permissions with AWS SAM Connectors

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/simplifying-serverless-permissions-with-aws-sam-connectors/

This post is written by Kurt Tometich, Senior Solutions Architect, AWS.

Developers have been using the AWS Serverless Application Model (AWS SAM) to streamline the development of serverless applications with AWS since late 2018. Besides making it easier to create, build, test, and deploy serverless applications, AWS SAM now further simplifies permission management between serverless components with AWS SAM Connectors.

Connectors allow the builder to focus on the relationships between components without expert knowledge of AWS Identity and Access Management (IAM) or direct creation of custom policies. AWS SAM connector supports AWS Step Functions, Amazon DynamoDB, AWS Lambda, Amazon SQS, Amazon SNS, Amazon API Gateway, Amazon EventBridge and Amazon S3, with more resources planned in the future.

AWS SAM policy templates are an existing feature that helps builders deploy serverless applications with minimally scoped IAM policies. Because there are a finite number of templates, they’re a good fit when a template exists for the services you’re using. Connectors are best for those getting started and who want to focus on modeling the flow of data and events within their applications. Connectors will take the desired relationship model and create the permissions for the relationship to exist and function as intended.

In this blog post, I show you how to speed up serverless development while maintaining secure best practices using AWS SAM connector. Defining a connector in an AWS SAM template requires a source, destination, and a permission (for example, read or write). From this definition, IAM policies with minimal privileges are automatically created by the connector.

Usage

Within an AWS SAM template:

  1. Create serverless resource definitions.
  2. Define a connector.
  3. Add a source and destination ID of the resources to connect.
  4. Define the permissions (read, write) of the connection.

This example creates a Lambda function that requires write access to an Amazon DynamoDB table to keep track of orders created from a website.

AWS Lambda function needing write access to an Amazon DynamoDB table

The AWS SAM connector for the resources looks like the following:

LambdaDynamoDbWriteConnector:
  Type: AWS::Serverless::Connector
  Properties:
    Source:
      Id: CreateOrder
    Destination:
      Id: Orders
    Permissions:
      - Write

“LambdaDynamoDbWriteConnector” is the name of the connector, while the “Type” designates it as an AWS SAM connector. “Properties” contains the source and destination logical ID for our serverless resources found within our template. Finally, the “Permissions” property defines a read or write relationship between the components.

This basic example shows how easy it is to define permissions between components. No specific role or policy names are required, and this syntax is consistent across many other serverless components, enforcing standardization.
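
For instance, granting a Lambda function write access to an SNS topic follows the same shape. The logical IDs in this sketch are hypothetical.

NotifierSnsWriteConnector:
  Type: AWS::Serverless::Connector
  Properties:
    Source:
      Id: NotifierFunction
    Destination:
      Id: AlertsTopic
    Permissions:
      - Write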

Example

AWS SAM connectors save you time as your applications grow and connections between serverless components become more complex. Manual creation and management of permissions become error prone and difficult at scale. To highlight the breadth of support, we’ll use an AWS Step Functions state machine to operate with several other serverless components. AWS Step Functions is a serverless orchestration workflow service that integrates natively with other AWS services.

Solution overview

Architectural overview

This solution implements an image catalog moderation pipeline. Amazon Rekognition checks for inappropriate content, and detects objects and text in an image. It processes valid images and stores metadata in an Amazon DynamoDB table, otherwise emailing a notification for invalid images.

Prerequisites

  1. Git installed
  2. AWS SAM CLI version 1.58.0 or greater installed

Deploying the solution

  1. Clone the repository and navigate to the solution directory:
    git clone https://github.com/aws-samples/step-functions-workflows-collection
    cd step-functions-workflows-collection/moderated-image-catalog
  2. Open the template.yaml file located at step-functions-workflows-collection/moderated-image-catalog and replace the “ImageCatalogStateMachine:” section with the following snippet. Ensure to preserve YAML formatting.
    ImageCatalogStateMachine:
        Type: AWS::Serverless::StateMachine
        Properties:
          Name: moderated-image-catalog-workflow
          DefinitionUri: statemachine/statemachine.asl.json
          DefinitionSubstitutions:
            CatalogTable: !Ref CatalogTable
            ModeratorSNSTopic: !Ref ModeratorSNSTopic
          Policies:
            - RekognitionDetectOnlyPolicy: {}
  3. Within the same template.yaml file, add the following after the ModeratorSNSTopic section and before the Outputs section:
    # Serverless connector permissions
    StepFunctionS3ReadConnector:
      Type: AWS::Serverless::Connector
      Properties:
        Source:
          Id: ImageCatalogStateMachine
        Destination:
          Id: IngestionBucket
        Permissions:
          - Read
    
    StepFunctionDynamoWriteConnector:
      Type: AWS::Serverless::Connector
      Properties:
        Source:
          Id: ImageCatalogStateMachine
        Destination:
          Id: CatalogTable
        Permissions:
          - Write
    
    StepFunctionSNSWriteConnector:
      Type: AWS::Serverless::Connector
      Properties:
        Source:
          Id: ImageCatalogStateMachine
        Destination:
          Id: ModeratorSNSTopic
        Permissions:
          - Write

    You have removed the existing inline policies for the state machine and replaced them with AWS SAM connector definitions, except for the Amazon Rekognition policy. At the time of publishing this blog, connectors do not support Amazon Rekognition. Take some time to review each connector’s syntax.

  4. Deploy the application using the following command:
    sam deploy --guided

    Provide a stack name, Region, and moderators’ email address. You can accept defaults for the remaining prompts.

Verifying permissions

Once the deployment has completed, you can verify the correct role and policies.

  1. Navigate to the Step Functions service page within the AWS Management Console and ensure you have the correct Region selected.
  2. Select State machines from the left menu and then the moderated-image-catalog-workflow state machine.
  3. Select the “IAM role ARN” link, which will take you to the IAM role and policies created.

You should see a list of policies that correspond to the AWS SAM connectors in the template.yaml file with the actions and resources.

Permissions list in console

You didn’t need to supply the specific policy actions: Use Read or Write as the permission and the service handles the rest. This results in improved readability, standardization, and productivity, while retaining security best practices.

Testing

  1. Upload a test image to the Amazon S3 bucket created during the deployment step. To find the name of the bucket, navigate to the AWS CloudFormation console. Select the CloudFormation stack via the name entered as part of “sam deploy --guided.” Select the Outputs tab and note the IngestionBucket name.
  2. After uploading the image, navigate to the AWS Step Functions console and select the “moderated-image-catalog-workflow” workflow.
  3. Select Start Execution and input an event:
    {
        "bucket": "<S3-bucket-name>",
        "key": "<image-name>.jpeg"
    }
  4. Select Start Execution and observe the execution of the workflow.
  5. Depending on the image selected, it will either add to the image catalog, or send a content moderation email to the email address provided. Find out more about content considered inappropriate by Amazon Rekognition.

Cleanup

To delete any images added to the Amazon S3 bucket, and the resources created by this template, use the following commands from the same project directory.

aws s3 rm s3://<bucket_name_here> --recursive
sam delete

Conclusion

This blog post shows how AWS SAM connectors simplify connecting serverless components. View the Developer Guide to find out more about AWS SAM connectors. For further sample serverless workflows like the one used in this blog, see Serverless Land.

Announcing server-side encryption with Amazon Simple Queue Service -managed encryption keys (SSE-SQS) by default

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/announcing-server-side-encryption-with-amazon-simple-queue-service-managed-encryption-keys-sse-sqs-by-default/

This post is written by Sofiya Muzychko (Sr Product Manager), Nipun Chagari (Principal Solutions Architect), and Hardik Vasa (Senior Solutions Architect).

Amazon Simple Queue Service (SQS) now provides server-side encryption (SSE) using SQS-owned encryption (SSE-SQS) by default. This feature further simplifies the security posture to encrypt the message body in SQS queues.

SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Customers are increasingly decoupling their monolithic applications into microservices and moving sensitive workloads to SQS, such as financial and healthcare applications, whose compliance regulations mandate data encryption.

SQS already supports server-side encryption with customer-provided encryption keys using the AWS Key Management Service (SSE-KMS) or using SQS-owned encryption keys (SSE-SQS). Both encryption options greatly reduce the operational burden and complexity involved in protecting data. Additionally, with the SSE-SQS encryption type, you do not need to create, manage, or pay for SQS-managed encryption keys.

Using the default encryption

With this feature, all newly created queues using HTTPS (TLS) and Signature Version 4 endpoints are encrypted using SQS-owned encryption (SSE-SQS) by default, enhancing the protection of your data against unauthorized access. Any new queue created using the non-TLS endpoint will not enable SSE-SQS encryption by default. We hence encourage you to create SQS queues using HTTPS endpoints as a security best practice.

The SSE-SQS default encryption is available for both standard and FIFO queues. You do not need to make any code or application changes to encrypt new queues. This does not affect existing queues. You can, however, change the encryption option for existing queues at any time using the SQS console, AWS Command Line Interface, or API.

Create queue

The preceding image shows the SQS queue creation console wizard with configuration options for encryption. As you can see, server-side encryption is enabled by default, with the Amazon SQS key (SSE-SQS) encryption key type selected.
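
You can also confirm the default on a newly created queue from the AWS CLI, for example (the queue name is a placeholder, and the returned queue URL replaces <queueURL>):

aws sqs create-queue --queue-name MyEncryptedQueue

aws sqs get-queue-attributes \
    --queue-url <queueURL> \
    --attribute-names SqsManagedSseEnabled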

Creating a new SQS queue with SSE-SQS encryption using AWS CloudFormation

Default SSE-SQS encryption is also supported in AWS CloudFormation. To learn more, see this documentation page.

Here is a sample CloudFormation template to create an SQS standard queue with SQS-owned server-side encryption (SSE-SQS) explicitly enabled.

AWSTemplateFormatVersion: "2010-09-09"
Description: SSE-SQS Cloudformation template
Resources:
  SQSEncryptionQueue:
    Type: AWS::SQS::Queue
    Properties: 
      MaximumMessageSize: 262144
      MessageRetentionPeriod: 86400
      QueueName: SSESQSQueue
      SqsManagedSseEnabled: true
      KmsDataKeyReusePeriodSeconds: 900
      VisibilityTimeout: 30

Note that if the SqsManagedSseEnabled: true property is not specified, SSE-SQS is enabled by default.

Configuring SSE-SQS encryption for existing queues via the AWS Management Console

To configure SSE-SQS encryption for an existing queue using the SQS console:

  1. Navigate to the SQS console at https://console.aws.amazon.com/sqs/.
  2. In the navigation pane, choose Queues.
  3. Select a queue, and then choose Edit.
  4. Under the Encryption dialog box, for Server-side encryption, choose Enabled.
  5. Select Amazon SQS key (SSE-SQS).
  6. Choose Save.

Edit standard queue

To configure SSE-SQS encryption for an existing queue using the AWS CLI

To enable SSE-SQS for an existing queue with no encryption, use the following AWS CLI command:

aws sqs set-queue-attributes --queue-url <queueURL> --attributes SqsManagedSseEnabled=true

Replace <queueURL> with the URL of your SQS queue.

To disable SSE-SQS for an existing queue using the AWS CLI, run:

aws sqs set-queue-attributes --queue-url <queueURL> --attributes SqsManagedSseEnabled=false

Testing the queue with the SSE-SQS encryption enabled

To test sending message to the SQS queue with SSE-SQS enabled, run:

aws sqs send-message --queue-url <queueURL> --message-body test-message

Replace <queueURL> with the URL of your SQS queue. You see the following response, which means the message is successfully sent to the queue:

{
    "MD5OfMessageBody": "beaa0032306f083e847cbf86a09ba9b2",
    "MessageId": "6e53de76-7865-4c45-a640-f058c24a619b"
}

Default SSE-SQS encryption key rotation

You can choose how often the keys will be rotated by configuring the KmsDataKeyReusePeriodSeconds queue attribute. The value must be an integer between 60 (1 minute) and 86,400 (24 hours). The default is 300 (5 minutes).

To update the KMS data key reuse period for an existing SQS queue, run:

aws sqs set-queue-attributes --queue-url <queueURL> --attributes KmsDataKeyReusePeriodSeconds=900

This configures the queue with KMS key rotation to every 900 seconds (15 minutes).

Default SSE-SQS and encrypted messages

Encrypting a message makes its contents unavailable to unauthorized or anonymous users. Anonymous requests are requests made to a queue that is open to a public network without any authentication. Note that if you are using anonymous SendMessage and ReceiveMessage requests to newly created queues, the requests are now rejected with SSE-SQS enabled by default.

Making anonymous requests to SQS queues does not follow SQS security best practices. We strongly recommend updating your policy to make signed requests to SQS queues using AWS SDK or AWS CLI and to continue using SSE-SQS enabled by default.

Look at the SQS service response for anonymous messages when SSE-SQS encryption is enabled. For an existing queue, you can change the queue policy to grant all users (anonymous users) SendMessage permission for a queue named EncryptionQueue:

{
  "Version": "2012-10-17",
  "Id": "Queue1_Policy_UUID",
  "Statement": [
    {
      "Sid": "Queue1_SendMessage",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "sqs:SendMessage",
      "Resource": "<queueARN>"
    }
  ]
}

You can then make an anonymous request against the queue:

curl <queueURL> -d 'Action=SendMessage&MessageBody=Hello'

You get an error message similar to the following:

<?xml version="1.0"?>
<ErrorResponse
	xmlns="http://queue.amazonaws.com/doc/2012-11-05/">
	<Error>
		<Type>Sender</Type>
		<Code>AccessDenied</Code>
		<Message>Access to the resource The specified queue does not exist or you do not have access to it. is denied.</Message>
		<Detail/>
	</Error>
	<RequestId> RequestID </RequestId>
</ErrorResponse>

However, if for any reason you want to continue using anonymous requests to newly created queues, you must create or update the queue with SSE-SQS encryption disabled by setting the following attribute:

SqsManagedSseEnabled=false
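
For example, a new queue could be created with default encryption disabled using the AWS CLI (the queue name is a placeholder):

aws sqs create-queue --queue-name MyUnencryptedQueue \
    --attributes SqsManagedSseEnabled=false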

You can also disable SSE-SQS using the Amazon SQS console.

Encrypting SQS queues with your own encryption keys

You can always change the default SSE-SQS encryption and use your own keys. To encrypt SQS queues with your own encryption keys using the AWS Key Management Service (SSE-KMS), you can overwrite the default SSE-SQS encryption with SSE-KMS during the queue creation process or afterwards.

You can update the SQS queue Server-side encryption key type using the Amazon SQS console, AWS Command Line Interface, or API.
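
For example, an existing queue can be switched to SSE-KMS from the AWS CLI by setting a KMS key. The AWS managed key alias is shown here; you can specify your own customer managed key instead.

aws sqs set-queue-attributes --queue-url <queueURL> \
    --attributes KmsMasterKeyId=alias/aws/sqs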

Benefits of SQS owned encryption (SSE-SQS)

There are a number of significant benefits to encrypting your data with SQS owned encryption (SSE-SQS):

  • SSE-SQS lets you transmit data more securely and improves the security posture commonly required for compliance and regulations, with no additional overhead, as you do not need to create and manage encryption keys.
  • Encryption at rest using the default SSE-SQS is provided at no additional charge.
  • The encryption and decryption of your data are handled transparently and continue to deliver the same performance you expect.
  • Data is encrypted using the 256-bit Advanced Encryption Standard (AES-256 GCM algorithm), so that only authorized roles and services can access data.

In addition, customers can enable CloudWatch Alarms to alarm on activities such as authorization failures, AWS Identity and Access Management (IAM) policy changes, or tampering with CloudTrail logs to help detect and stay on top of security incidents in the customer application (to learn more, see Amazon CloudWatch User Guide).

Conclusion

SQS now provides server-side encryption (SSE) using SQS-owned encryption (SSE-SQS) by default. This enhancement makes it easier to create SQS queues, while greatly reducing the operational burden and complexity involved in protecting data.

Encryption at rest using the default SSE-SQS is provided at no additional charge and is supported for both Standard and FIFO SQS queues using HTTPS endpoints. The default SSE-SQS encryption is available now.

To learn more about Amazon Simple Queue Service (SQS), see Getting Started with Amazon SQS and Amazon Simple Queue Service Developer Guide.

For more serverless learning resources, visit Serverless Land.

Introducing message data protection for Amazon SNS

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/introducing-message-data-protection-for-amazon-sns/

This post is written by Otavio Ferreira, Senior Software Development Manager, Marc Pinaud, Senior Product Manager, Usman Nisar, Senior Software Engineer, Hardik Vasa, Senior Solutions Architect, and Mithun Mallick, Senior Specialist Solution Architect.

Today, we are announcing the public preview release of new data protection capabilities for Amazon Simple Notification Service (SNS), message data protection. This is a new way to discover and protect sensitive data in motion at scale, without writing custom code.

SNS is a fully managed serverless messaging service. It provides topics for push-based, many-to-many pub/sub messaging for decoupling distributed systems, microservices, and event-driven serverless applications. As applications grow, so does the amount of data transmitted and the number of systems sending and receiving data. When moving data between different applications, guardrails can help you comply with data privacy regulations that require you to safeguard sensitive personally identifiable information (PII) or protected health information (PHI).

With message data protection for SNS, you can scan messages in real time for PII/PHI data and receive audit reports containing scan results. You can also prevent applications from receiving sensitive data by blocking inbound messages to an SNS topic or outbound messages to an SNS subscription. Message data protection for SNS supports a repository of over 25 unique PII/PHI data identifiers. These include people’s names, addresses, social security numbers, credit card numbers, and prescription drug codes.

These capabilities can help you adhere to a variety of compliance regulations, including HIPAA, FedRAMP, GDPR, and PCI. For more information, including the complete list of supported data identifiers, see message data protection in the SNS Developer Guide.

Overview

SNS topics enable you to integrate distributed applications more easily. As applications become more complex, it can become challenging for topic owners to manage the data flowing through their topics. Developers that publish messages to a topic may inadvertently send sensitive data, increasing regulatory risk. Message data protection enables SNS topic owners to protect sensitive application data with built-in, no-code, scalable capabilities.

To discover and protect data flowing through SNS topics with message data protection, topic owners associate data protection policies to their topics. Within these policies, you can write statements that define which types of sensitive data you want to discover and protect. As part of this, you can define whether you want to act on data flowing inbound to a topic or outbound to a subscription, which AWS accounts or specific AWS Identity and Access Management (AWS IAM) principals the policy is applicable to, and the actions you want to take on the data.

Message data protection provides two actions to help you protect your data. Auditing, to report on the amount of PII/PHI found, and blocking, to prevent the publishing or delivery of payloads that contain PII/PHI data. Once the data protection policy is set, message data protection uses pattern matching and machine learning models to scan your messages in real time for PII/PHI data identifiers and enforce the data protection policy.

For auditing, you can choose to send audit reports to Amazon Simple Storage Service (S3) for archival, Amazon Kinesis Data Firehose for analytics, or Amazon CloudWatch for logging and alarming. Message data protection does not interfere with the topic owner’s ability to use message data encryption at rest, nor with the subscriber’s ability to filter out unwanted messages using message filtering.

Applying message data protection in a use case

Consider an application that processes a variety of transactions for a set of health clinics, an organization that operates in a regulated environment. Compliance frameworks require that the organization take measures to protect both sensitive health records and financial information.

Reference architecture

The application is based on an event-driven serverless architecture. It has a data protection policy attached to the topic to audit for sensitive data and prevent downstream systems from processing certain data types.

The application publishes an event to an SNS topic every time a patient schedules a visit or sees a doctor at a clinic. The SNS topic fans out the event to two subscribed systems, billing and scheduling. Each system stores events in an Amazon SQS queue, which is processed using an AWS Lambda function.

Setting a data protection policy to an SNS topic

You can apply a data protection policy to an SNS topic using the AWS Management Console, the AWS CLI, or the AWS SDKs. You can also use AWS CloudFormation to automate the provisioning of the data protection policy.
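The following is a minimal sketch of attaching a policy with the AWS SDK for Python (Boto3), assuming a put_data_protection_policy call that takes the topic ARN and the policy document as a JSON string. The topic ARN is a placeholder, and policy.json is assumed to contain a document shaped like the DataProtectionPolicy section of the CloudFormation template shown later in this post.

import boto3

sns = boto3.client("sns")

# policy.json is assumed to hold a data protection policy document, similar in
# shape to the DataProtectionPolicy section of the CloudFormation template below
with open("policy.json") as f:
    policy_document = f.read()

sns.put_data_protection_policy(
    ResourceArn="arn:aws:sns:us-east-1:111222333444:SampleClinic",  # placeholder topic ARN
    DataProtectionPolicy=policy_document,
)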

This example uses CloudFormation to provision the infrastructure. You have two options for deploying the resources:

  • Deploy the resources by using the message data protection deploy script within the aws-sns-samples repository in GitHub.
  • Alternatively, use the following four CloudFormation templates in order, allowing time for each stack to complete before deploying the next. The templates create the following resources:

1. Prerequisites template

  • Two IAM roles with a managed policy that allows access to receive messages from the SNS topic: one for the billing system and one for the scheduling system.

2. Topic owner template

  • SNS topic that delivers events to two distinct systems.
  • A data protection policy that defines both auditing and blocking actions for specific types of PII and PHI.
  • S3 bucket to archive audit findings.
  • CloudWatch log group to monitor audit findings.
  • Kinesis Data Firehose to deliver audit findings to other destinations.

3. Scheduling subscriber template

  • SQS queue for the Scheduling system.
  • Lambda function for the Scheduling system.

4. Billing subscriber template

  • SQS queue for the Billing system.
  • Lambda function for the Billing system.

CloudFormation creates the following data protection policy as part of the topic owner template:

  ClinicSNSTopic:
    Type: 'AWS::SNS::Topic'
    Properties:
      TopicName: SampleClinic
      DataProtectionPolicy:
        Name: data-protection-example-policy
        Description: Policy Description
        Version: 2021-06-01
        Statement:
          - Sid: audit
            DataDirection: Inbound
            Principal:
              - '*'
            DataIdentifier:
              - 'arn:aws:dataprotection::aws:data-identifier/Address'
              - 'arn:aws:dataprotection::aws:data-identifier/AwsSecretKey'
              - 'arn:aws:dataprotection::aws:data-identifier/DriversLicense-US'
              - 'arn:aws:dataprotection::aws:data-identifier/EmailAddress'
              - 'arn:aws:dataprotection::aws:data-identifier/IpAddress'
              - 'arn:aws:dataprotection::aws:data-identifier/NationalDrugCode-US'
              - 'arn:aws:dataprotection::aws:data-identifier/PassportNumber-US'
              - 'arn:aws:dataprotection::aws:data-identifier/Ssn-US'
            Operation:
              Audit:
                SampleRate: 99
                FindingsDestination:
                  CloudWatchLogs:
                    LogGroup: !Ref AuditCWLLogs
                  Firehose:
                    DeliveryStream: !Ref AuditFirehose
                NoFindingsDestination:
                  S3:
                    Bucket: !Ref AuditS3Bucket
          - Sid: deny-inbound
            DataDirection: Inbound
            Principal:
              - '*'
            DataIdentifier:
              - 'arn:aws:dataprotection::aws:data-identifier/PassportNumber-US'
              - 'arn:aws:dataprotection::aws:data-identifier/Ssn-US'
            Operation:
              Deny: {}
          - Sid: deny-outbound-billing
            DataDirection: Outbound
            Principal:
              - !ImportValue "BillingRoleExportDataProtectionDemo"
            DataIdentifier:
              - 'arn:aws:dataprotection::aws:data-identifier/NationalDrugCode-US'
            Operation:
              Deny: {}
          - Sid: deny-outbound-scheduling
            DataDirection: Outbound
            Principal:
              - !ImportValue "SchedulingRoleExportDataProtectionDemo"
            DataIdentifier:
              - 'arn:aws:dataprotection::aws:data-identifier/Address'
              - 'arn:aws:dataprotection::aws:data-identifier/CreditCardNumber'
            Operation:
              Deny: {}

This data protection policy defines:

  • Metadata about the data protection policy, for example name, description, version, and statement IDs (sid).
  • The first statement (sid: audit) scans inbound messages from all principals for addresses, social security numbers, driver's license numbers, email addresses, IP addresses, national drug codes, passport numbers, and AWS secret keys.
    • The sampling rate is set to 99% so almost all messages are scanned for the defined PII/PHI.
    • Audit results with findings are delivered to CloudWatch Logs and Kinesis Data Firehose for analytics. Audit results without findings are archived to S3.
  • The second statement (sid: deny-inbound) blocks inbound messages to the topic coming from any principal, if the payload includes either a social security number or passport number.
  • The third statement (sid: deny-outbound-billing) blocks the delivery of messages to subscriptions created by the BillingRole, if the messages include any national drug codes.
  • The fourth statement (sid: deny-outbound-scheduling) blocks the delivery of messages to subscriptions created by the SchedulingRole, if the messages include any credit card numbers or addresses.

Testing the capabilities

Test the message data protection capabilities using the following steps:

  1. Publish a message without PII/PHI data to the Clinic Topic. In the CloudWatch console, navigate to the log streams of the respective Lambda functions to confirm that the message is delivered to both subscribers. It reaches both because the payload contains no sensitive data for the data protection policy to deny. The log message looks as follows:
    "This is a demo! received from queue arn:aws:sqs:us-east-1:111222333444:Scheduling-SchedulingQueue"
  2. Publish a message with a social security number (try ‘SSN: 123-12-1234’) to the Clinic Topic. The request is denied, and an audit log is delivered to your CloudWatch Logs log group and Firehose delivery stream. (A minimal SDK publish sketch for steps 1 and 2 follows this list.)
  3. Navigate to the CloudWatch log console and confirm that the audit log is visible in the /aws/vendedlogs/clinicaudit CloudWatch log group. The following example shows that the data protection policy (sid: deny-inbound) denied the inbound message as the payload contains a US social security number (SSN) between the 5th and the 15th character.
    {
        "messageId": "77ec5f0c-5129-5429-b01d-0457b965c0ac",
        "auditTimestamp": "2022-07-28T01:27:40Z",
        "callerPrincipal": "arn:aws:iam::111222333444:role/Admin",
        "resourceArn": "arn:aws:sns:us-east-1:111222333444:SampleClinic",
        "dataIdentifiers": [
            {
                "name": "Ssn-US",
                "count": 1,
                "detections": [
                    {
                        "start": 5,
                        "end": 15
                    }
                ]
            }
        ]
    }
    
  4. You can use the CloudWatch metrics, MessageWithFindings and MessageWithNoFindings, to track how frequently PII/PHI data is published to an SNS topic. Here’s an example of what the CloudWatch metric graph looks like as the amount of sensitive data published to a topic varies over time:
    CloudWatch metric graph
  5. Publish a message with an address (try ‘410 Terry Ave N, Seattle 98109, WA’). The request is only delivered to the Billing subscription. The data protection policy (sid: deny-outbound-scheduling) denies the outbound message to the Scheduling subscription as the payload contains an address.
  6. Confirm that the message is only delivered to the Billing Lambda function by navigating to the CloudWatch console and inspecting the logs of the two respective Lambda functions. The CloudWatch log of the Billing Lambda function contains the sensitive message that was delivered to it as it was an authorized subscriber. Here’s an example of what the log contains:
    410 Terry Ave N, Seattle 98109, WA received from queue arn:aws:sqs:us-east-1:111222333444:Billing-BillingQueue
  7. Publish a message with a drug code (try ‘NDC: 0777-3105-02’). The request is only delivered to the Scheduling subscription. The data protection policy (sid: deny-outbound-billing) denies the outbound message to the Billing subscription as the payload contains a drug code.
  8. Confirm that the message is only delivered to the Scheduling Lambda function by navigating to the CloudWatch console and inspecting the logs of the two respective Lambda functions. The CloudWatch log of the Scheduling Lambda function contains the sensitive message that was delivered to it as it was an authorized subscriber. Here’s an example of what the log contains:
    NDC: 0777-3105-02 received from queue arn:aws:sqs:us-east-1:111222333444:Scheduling-SchedulingQueue
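
The following is a minimal Boto3 sketch of the publish calls from steps 1 and 2. The topic ARN is a placeholder for the SampleClinic topic created by the topic owner template, and the exact error returned for a blocked publish may differ from what is caught here.

import boto3
from botocore.exceptions import ClientError

sns = boto3.client("sns")
topic_arn = "arn:aws:sns:us-east-1:111222333444:SampleClinic"  # placeholder ARN

# Step 1: no sensitive data, so the message reaches both subscribers
sns.publish(TopicArn=topic_arn, Message="This is a demo!")

# Step 2: contains an SSN, so the deny-inbound statement blocks the publish
try:
    sns.publish(TopicArn=topic_arn, Message="SSN: 123-12-1234")
except ClientError as err:
    # the exact error code for a blocked publish may vary
    print("Publish blocked by the data protection policy:", err)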

Cleaning up

After testing, avoid incurring usage charges by deleting the resources that you created. Navigate to the CloudFormation console and delete the four CloudFormation stacks that you created during the walkthrough. Remember, you must delete all the objects from the S3 bucket before deleting the stack.

Conclusion

This post shows how message data protection enables a topic owner to discover and protect sensitive data that is exchanged through SNS topics. The example shows how to create a data protection policy that generates audit reports for sensitive data and blocks messages from delivery to specific subscribers if the payload contains sensitive data.

Get started with SNS and message data protection by using the AWS Management Console, AWS Command Line Interface (CLI), AWS SDKs, or CloudFormation.

For more details, see message data protection in the SNS Developer Guide. For information on pricing, see SNS pricing.

For more serverless learning resources, visit Serverless Land.

Deploying AWS Lambda functions using AWS Controllers for Kubernetes (ACK)

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/deploying-aws-lambda-functions-using-aws-controllers-for-kubernetes-ack/

This post is written by Rajdeep Saha, Sr. SSA, Containers/Serverless.

AWS Controllers for Kubernetes (ACK) allows you to manage AWS services directly from Kubernetes. With the ACK service controller for AWS Lambda, you can provision and manage Lambda functions with kubectl and custom resources. With ACK, you can have a single consolidated approach to managing container workloads and other AWS services, such as Lambda, directly from Kubernetes without needing additional infrastructure automation tools.

This post walks you through deploying a sample Lambda function from a Kubernetes cluster provided by Amazon EKS.

Use cases

Some of the use cases for provisioning Lambda functions from ACK include:

  • Your organization already has a DevOps process to deploy resources into the Amazon EKS cluster using Kubernetes declarative YAMLs (known as manifest files). With ACK for AWS Lambda, you can now use manifest files to provision Lambda functions without creating separate infrastructure as a code template.
  • Your project has implemented GitOps with Kubernetes. With GitOps, git becomes the single source of truth, and all changes are made via the git repo. In this model, Kubernetes continuously reconciles the git repo (desired state) with the resources running inside the cluster (current state). If any differences are found, the GitOps process automatically applies changes from the git repo to the cluster. Because ACK for AWS Lambda creates the Lambda function using a Kubernetes custom resource, the GitOps model also applies to Lambda.
  • Your organization has established permissions boundaries for different users and groups using role-based access control (RBAC) and IAM roles for service accounts (IRSA). You can reuse this security model for Lambda without having to create new users and policies.

How ACK for AWS Lambda works

  1. The ‘Ops’ team deploys the ACK service controller for Lambda. This controller runs as a pod within the Amazon EKS cluster.
  2. The controller pod needs permission to read the Lambda function code and create the Lambda function. The Lambda function code is stored as a zip file in an S3 bucket for this example. The permissions are granted to the pod using IRSA.
  3. Each AWS service has separate ACK service controllers. This specific controller for AWS Lambda can act on the custom resource type ‘Function’.
  4. The ‘Dev’ team deploys a Kubernetes manifest file with the custom resource type ‘Function’. This manifest file defines the fields required to create the function, such as the S3 bucket name, zip file name, and Lambda function IAM role.
  5. The ACK service controller creates the Lambda function using the values from the manifest file.

Prerequisites

You need a few tools before deploying the sample application. Ensure that you have each of the following in your working environment:

This post uses shell variables to make it easier to substitute the actual names for your deployment. When you see placeholders like NAME=<your xyz name>, substitute in the name for your environment.

Setting up the Amazon EKS cluster

  1. Run the following command to create an Amazon EKS cluster. This single command creates a two-node Amazon EKS cluster with a unique name.
    eksctl create cluster
  2. It may take 15–30 minutes to provision the Amazon EKS cluster. When the cluster is ready, run:
    kubectl get nodes
  3. The output shows the following:
    Output
  4. To get the Amazon EKS cluster name to use throughout the walkthrough, run:
    eksctl get cluster
    
    export EKS_CLUSTER_NAME=<provide the name from the previous command>

Setting up the ACK Controller for Lambda

To set up the ACK Controller for Lambda:

  1. Install an ACK Controller with Helm by following these instructions:
    – Change ‘export SERVICE=s3’ to ‘export SERVICE=lambda’.
    – Change ‘export AWS_REGION=us-west-2’ to reflect your Region appropriately.
  2. To configure IAM permissions for the pod running the Lambda ACK Controller to permit it to create Lambda functions, follow these instructions.
    – Replace ‘SERVICE=”s3”’ with ‘SERVICE=”lambda”’.
  3. Validate that the ACK Lambda controller is running:
    kubectl get pods -n ack-system
  4. The output shows the running ACK Lambda controller pod:
    Output

Provisioning a Lambda function from the Kubernetes cluster

In this section, you write a sample “Hello world” Lambda function, zip up the code, and upload the zip file to an S3 bucket. Finally, you deploy that zip file to a Lambda function using the ACK Controller from the EKS cluster you created earlier. This example uses Python 3.9 as the language runtime.

To provision the Lambda function:

  1. Run the following to create the sample “Hello world” Lambda function code, and then zip it up:
    mkdir my-helloworld-function
    cd my-helloworld-function
    cat << EOF > lambda_function.py 
    import json
    
    def lambda_handler(event, context):
        # TODO implement
        return {
            'statusCode': 200,
            'body': json.dumps('Hello from Lambda!')
        }
    EOF
    zip my-deployment-package.zip lambda_function.py
    
  2. Create an S3 bucket following the instructions here. Alternatively, you can use an existing S3 bucket in the same Region of the Amazon EKS cluster.
  3. Run the following to upload the zip file into the S3 bucket from the previous step:
    export BUCKET_NAME=<provide the bucket name from step 2>
    aws s3 cp  my-deployment-package.zip s3://${BUCKET_NAME}
  4. The output shows:
    upload: ./my-deployment-package.zip to s3://<BUCKET_NAME>/my-deployment-package.zip
  5. Create your Lambda function using the ACK Controller. The full spec with all the available fields is listed here. First, provide a name for the function:
    export FUNCTION_NAME=hello-world-s3-ack
  6. Create and deploy the Kubernetes manifest file. The command at the end, kubectl create -f lambdamanifest.yaml, submits the manifest file, with kind set to ‘Function’. The ACK Controller for Lambda identifies this custom ‘Function’ object and deploys the Lambda function based on the manifest file.
    export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
    export LAMBDA_ROLE="arn:aws:iam::${AWS_ACCOUNT_ID}:role/lambda_basic_execution"
    
    cat << EOF > lambdamanifest.yaml 
    apiVersion: lambda.services.k8s.aws/v1alpha1
    kind: Function
    metadata:
     name: $FUNCTION_NAME
     annotations:
       services.k8s.aws/region: $AWS_REGION
    spec:
     name: $FUNCTION_NAME
     code:
       s3Bucket: $BUCKET_NAME
       s3Key: my-deployment-package.zip
     role: $LAMBDA_ROLE
     runtime: python3.9
     handler: lambda_function.lambda_handler
     description: function created by ACK lambda-controller e2e tests
    EOF
    kubectl create -f lambdamanifest.yaml
    
  7. The output shows:
    function.lambda.services.k8s.aws/<FUNCTION_NAME> created
  8. To retrieve the details of the function using a Kubernetes command, run:
    kubectl describe function/$FUNCTION_NAME
  9. This Lambda function returns a “Hello world” message. To invoke the function, run the following (a Boto3 alternative is sketched after this list):
    aws lambda invoke --function-name $FUNCTION_NAME  response.json
    cat response.json
    
  10. The Lambda function returns the following output:
    {"statusCode": 200, "body": "\"Hello from Lambda!\""}

Congratulations! You created a Lambda function from your Kubernetes cluster.

To learn how to provision the Lambda function using the ACK controller from an OCI container image instead of a zip file in an S3 bucket, follow these instructions.

Cleaning up

This section cleans up all the resources that you have created. To clean up:

  1. Delete the Lambda function:
    kubectl delete function $FUNCTION_NAME
  2. If you have created a new S3 bucket, delete it by running:
    aws s3 rm s3://${BUCKET_NAME} --recursive
    aws s3api delete-bucket --bucket ${BUCKET_NAME}
  3. Delete the EKS cluster:
    eksctl delete cluster --name $EKS_CLUSTER_NAME
  4. Delete the IAM role created for the ACK Controller. Get the IAM role name by running the following command, then delete the role from the IAM console:
    echo $ACK_CONTROLLER_IAM_ROLE

Conclusion

This blog post shows how AWS Controllers for Kubernetes enables you to deploy a Lambda function directly from your Amazon EKS environment. AWS Controllers for Kubernetes provides a convenient way to connect your Kubernetes applications to AWS services directly from Kubernetes.

ACK is open source: you can request new features and report issues on the ACK community GitHub repository.

For more serverless learning resources, visit Serverless Land.

Speeding up incremental changes with AWS SAM Accelerate and nested stacks

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/speeding-up-incremental-changes-with-aws-sam-accelerate-and-nested-stacks/

This blog was written by Jeff Marcinko, Sr. Technical Account Manager, Health Care & Life Sciences, and Brian Zambrano, Sr. Specialist Solutions Architect, Serverless.

Developers and operators have been using the AWS Serverless Application Model (AWS SAM) to author, build, test, and deploy serverless applications in AWS for over three years. Since its inception, the AWS SAM team has focused on developer productivity, simplicity, and best practices.

As good as AWS SAM is at making your serverless development experience easier and faster, building non-trivial cloud applications remains a challenge. Developers and operators want a development experience that provides high-fidelity and fast feedback on incremental changes. With serverless development, local emulation of an application composed of many AWS resources and managed services can be incomplete and inaccurate. We recommend developing serverless applications in the AWS Cloud against live AWS services to increase developer confidence. However, the latency of deploying an entire AWS CloudFormation stack for every code change is a challenge that developers face with this approach.

In this blog post, I show how to increase development velocity by using AWS SAM Accelerate with AWS CloudFormation nested stacks. Nested stacks are an application lifecycle management best practice at AWS. We recommend nested stacks for deploying complex serverless applications, which aligns to the Serverless Application Lens of the AWS Well-Architected Framework. AWS SAM Accelerate speeds up deployment from your local system by bypassing AWS CloudFormation to deploy code and resource updates when possible.

AWS CloudFormation nested stacks and AWS SAM

A nested stack is a CloudFormation resource that is part of another stack, referred to as the parent, or root stack.

Nested stack architecture

The best practice for modeling complex applications is to author a root stack template and declare related resources in their own nested stack templates. This partitioning improves maintainability and encourages reuse of common template patterns. It is easier to reason about the configuration of the AWS resources in the example application because they are described in nested templates for each application component.

With AWS SAM, developers create nested stacks using the AWS::Serverless::Application resource type. The following example shows a snippet from a template.yaml file, which is the root stack for an AWS SAM application.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  DynamoDB:
    Type: AWS::Serverless::Application
    Properties:
      Location: db/template.yaml

  OrderWorkflow:
    Type: AWS::Serverless::Application
    Properties:
      Location: workflow/template.yaml

  ApiIntegrations:
    Type: AWS::Serverless::Application
    Properties:
      Location: api-integrations/template.yaml

  Api:
    Type: AWS::Serverless::Application
    Properties:
      Location: api/template.yaml

Each AWS::Serverless::Application resource type references a child stack, which is an independent AWS SAM template. The Location property tells AWS SAM where to find the stack definition.

Solution overview

The sample application exposes an API via Amazon API Gateway. One API endpoint (#2) forwards POST requests to Amazon SQS. An AWS Lambda function polls (#3) the SQS queue and starts an AWS Step Functions workflow execution (#4) for each message.

Sample application architecture

Prerequisites

  1. AWS SAM CLI, version 1.53.0 or higher
  2. Python 3.9

Deploy the application

To deploy the application:

  1. Clone the repository:
    git clone https://github.com/aws-samples/sam-accelerate-nested-stacks-demo.git
  2. Change to the root directory of the project and run the following AWS SAM CLI commands:
    cd sam-accelerate-nested-stacks-demo
    sam build
    sam deploy --guided --capabilities CAPABILITY_IAM CAPABILITY_AUTO_EXPAND

    You must include the CAPABILITY_IAM and CAPABILITY_AUTO_EXPAND capabilities to support nested stacks and the creation of permissions.

  3. Use orders-app as the stack name during guided deployment. During the deploy process, enter your email for the SubscriptionEmail value. This requires confirmation later. Accept the defaults for the rest of the values.

    SAM deploy example

  4. After the CloudFormation deployment completes, save the API endpoint URL from the outputs.

Confirming the notifications subscription

After the deployment finishes, you receive an Amazon SNS subscription confirmation email at the email address provided during the deployment. Choose the Confirm Subscription link to receive notifications.

You have chosen to subscribe to the topic: 
arn:aws:sns:us-east-1:123456789012:order-topic-xxxxxxxxxxxxxxxxxx

To confirm this subscription, click or visit the link below (If this was in error no action is necessary): 
Confirm subscription

Testing the orders application

To test the application, use the curl command to create a new Order request with the following JSON payload:

{
    "quantity": 1,
    "name": "Pizza",
    "restaurantId": "House of Pizza"
}
curl -s --header "Content-Type: application/json" \
  --request POST \
  --data '{"quantity":1,"name":"Pizza","restaurantId":"House of Pizza"}' \
  https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/Dev/orders  | python -m json.tool

API Gateway responds with the following message, showing it successfully sent the request to the SQS queue:

API Gateway response

The application sends an order notification once the Step Functions workflow completes processing. The workflow intentionally randomizes the SUCCESS or FAILURE status message.

Accelerating development with AWS SAM sync

AWS SAM Accelerate enhances the development experience. It automatically observes local code changes and synchronizes them to AWS without building and deploying every function in your project.

However, when you synchronize code changes directly into the AWS Cloud, it can introduce drift between your CloudFormation stacks and its deployed resources. For this reason, you should only use AWS SAM Accelerate to publish changes in a development stack.

In your terminal, change to the root directory of the project folder and run the sam sync command. This runs in the foreground while you make code changes:

cd sam-accelerate-nested-stacks-demo
sam sync --watch --stack-name orders-app

The --watch option causes AWS SAM to perform an initial CloudFormation deployment. After the deployment is complete, AWS SAM watches for local changes and synchronizes them to AWS. This feature allows you to make rapid, iterative code changes and sync them to the Cloud automatically in seconds.

Making a code change

In the editor, update the Subject argument in the send_order_notification function in workflow/src/complete_order/app.py.

def send_order_notification(message):
    topic_arn = TOPIC_ARN
    response = sns.publish(
        TopicArn=topic_arn,
        Message=json.dumps(message),
        Subject=f'Orders-App: Update for order {message["order_id"]}'
        #Subject='Orders-App: SAM Accelerate for the win!'
    )

On save, AWS SAM notices the local code change, and updates the CompleteOrder Lambda function. AWS SAM does not trigger updates to other AWS resources across the different stacks, since they are unchanged. This can result in increased development velocity.

SAM sync output

Validate the change by sending a new order request and review the notification email subject.

curl -s --header "Content-Type: application/json" \
  --request POST \
  --data '{"quantity":1,"name":"Pizza","restaurantId":"House of Pizza"}' \
  https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/Dev/orders  | python -m json.tool

In this example, AWS SAM Accelerate is 10–15 times faster than the CloudFormation deployment workflow (sam deploy) for single function code changes.

Deployment speed comparison between SAM accelerate and CloudFormation

Deployment times vary based on the size and complexity of your Lambda functions and the number of resources in your project.

Making a configuration change

Next, make an infrastructure change to show how sync –watch handles configuration updates.

Update ReadCapacityUnits and WriteCapacityUnits in the DynamoDB table definition by changing the values from five to six in db/template.yaml.

Resources:
  OrderTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: order-table-test
      AttributeDefinitions:
        - AttributeName: user_id
          AttributeType: S
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: user_id
          KeyType: HASH
        - AttributeName: id
          KeyType: RANGE
      ProvisionedThroughput:
        ReadCapacityUnits: 5
        WriteCapacityUnits: 5

The sam sync --watch command recognizes that the configuration change requires a CloudFormation deployment to update the db nested stack. Nested stacks reflect an UPDATE_COMPLETE status because CloudFormation starts an update on every nested stack to determine whether changes must be applied.

SAM sync infrastructure update

Cleaning up

Delete the nested stack resources to make sure that you don’t continue to incur charges. After stopping the sam sync --watch command, run the following command to delete your resources:

sam delete --stack-name orders-app

You can also delete the CloudFormation root stack from the console by following these steps.

Conclusion

Local emulation of complex serverless applications, built with nested stacks, can be challenging. AWS SAM Accelerate helps builders achieve a high-fidelity development experience by rapidly synchronizing code changes into the AWS Cloud.

This post shows AWS SAM Accelerate features that push code changes in near real time to a development environment in the Cloud. I use a non-trivial sample application to show how developers can push code changes to a live environment in seconds while using CloudFormation nested stacks to achieve the isolation and maintenance benefits.

For more serverless learning resources, visit Serverless Land.

Using custom consumer group ID support for the AWS Lambda event sources for MSK and self-managed Kafka

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/using-custom-consumer-group-id-support-for-the-aws-lambda-event-sources-for-msk-and-self-managed-kafka/

This post is written by Adam Wagner, Principal Serverless Specialist SA.

AWS Lambda already supports Amazon Managed Streaming for Apache Kafka (MSK) and self-managed Apache Kafka clusters as event sources. Today, AWS adds support for specifying a custom consumer group ID for the Lambda event source mappings (ESMs) for MSK and self-managed Kafka event sources.

With this feature, you can create a Lambda ESM that uses a consumer group that has already been created. This enables you to use Lambda as a Kafka consumer for topics that are replicated with MirrorMaker v2 or with consumer groups you create to start consuming at a particular offset or timestamp.

Overview

This blog post shows how to use this feature to enable Lambda to consume a Kafka topic starting at a specific timestamp. This can be useful if you must reprocess some data but don’t want to reprocess all of the data in the topic.

In this example application, a client application writes to a topic on the MSK cluster. It creates a consumer group that points to a specific timestamp within that topic as the starting point for consuming messages. A Lambda ESM is created using that existing consumer group that triggers a Lambda function. This processes and writes the messages to an Amazon DynamoDB table.

Reference architecture

  1. A Kafka client writes messages to a topic in the MSK cluster.
  2. A Kafka consumer group is created with a starting point of a specific timestamp.
  3. The Lambda ESM polls the MSK topic using the existing consumer group and triggers the Lambda function with batches of messages.
  4. The Lambda function writes the messages to DynamoDB (a minimal handler sketch follows this list).
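
The following is a minimal sketch of the consuming Lambda function from step 4. The DynamoDB table name (MskDemoTable) and key attribute (id) are assumptions for illustration; the sample application's actual function may differ. MSK events deliver records grouped by topic-partition, with each record value base64 encoded.

import base64
import json
import boto3

table = boto3.resource("dynamodb").Table("MskDemoTable")  # assumed table name

def lambda_handler(event, context):
    processed = 0
    # records arrive grouped by "topic-partition" keys
    for records in event["records"].values():
        for record in records:
            # each record value is a base64-encoded JSON payload from the producer script
            payload = json.loads(base64.b64decode(record["value"]))
            table.put_item(Item={
                "id": str(payload["id"]),
                "record_timestamp": payload["record_timestamp"],
                "random_number": payload["random_number"],
                "producer_id": payload["producer_id"],
            })
            processed += 1
    return {"processed": processed}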

Step-by-step instructions

To get started, create an MSK cluster and a client Amazon EC2 instance from which to create topics and publish messages. If you don’t already have an MSK cluster, follow this blog on setting up an MSK cluster and using it as an event source for Lambda.

  1. On the client instance, set an environment variable to the MSK cluster bootstrap servers to make it easier to reference them in future commands:
    export MSKBOOTSTRAP='b-1.mskcluster.oy1hqd.c23.kafka.us-east-1.amazonaws.com:9094,b-2.mskcluster.oy1hqd.c23.kafka.us-east-1.amazonaws.com:9094,b-3.mskcluster.oy1hqd.c23.kafka.us-east-1.amazonaws.com:9094'
  2. Create the topic. This example has a three-node MSK cluster so the replication factor is also set to three. The partition count is set to three in this example. In your applications, set this according to throughput and parallelization needs.
    ./bin/kafka-topics.sh --create --bootstrap-server $MSKBOOTSTRAP --replication-factor 3 --partitions 3 --topic demoTopic01
  3. Write messages to the topic using this Python script:
    #!/usr/bin/env python3
    import json
    import time
    from random import randint
    from uuid import uuid4
    from kafka import KafkaProducer
    
    BROKERS = ['b-1.mskcluster.oy1hqd.c23.kafka.us-east-1.amazonaws.com:9094', 
            'b-2.mskcluster.oy1hqd.c23.kafka.us-east-1.amazonaws.com:9094',
            'b-3.mskcluster.oy1hqd.c23.kafka.us-east-1.amazonaws.com:9094']
    TOPIC = 'demoTopic01'
    
    producer = KafkaProducer(bootstrap_servers=BROKERS, security_protocol='SSL',
            value_serializer=lambda x: json.dumps(x).encode('utf-8'))
    
    def create_record(sequence_num):
        number = randint(1000000,10000000)
        record = {"id": sequence_num, "record_timestamp": int(time.time()), "random_number": number, "producer_id": str(uuid4()) }
        print(record)
        return record
    
    def publish_rec(seq):
        data = create_record(seq)
        producer.send(TOPIC, value=data).add_callback(on_send_success).add_errback(on_send_error)
        producer.flush()
    
    def on_send_success(record_metadata):
        print(record_metadata.topic, record_metadata.partition, record_metadata.offset)
    
    def on_send_error(excp):
        print('error writing to kafka:', excp)
    
    for num in range(1,10000000):
        publish_rec(num)
        time.sleep(0.5) 
    
  4. Copy the script into a file on the client instance named producer.py. The script uses the kafka-python library, so first create a virtual environment and install the library.
    python3 -m venv venv
    source venv/bin/activate
    pip3 install kafka-python
    
  5. Start the script. Leave it running for a few minutes to accumulate some messages in the topic.
    Output
  6. Previously, a Lambda function would choose between consuming messages starting at the beginning of the topic or starting with the latest messages. In this example, it starts consuming messages from a few hours earlier, at 16:00 UTC on August 10, 2022. To do this, first create a new consumer group on the client instance:
    ./bin/kafka-consumer-groups.sh --command-config client.properties --bootstrap-server $MSKBOOTSTRAP --topic demoTopic01 --group specificTimeCG --to-datetime 2022-08-10T16:00:00.000 --reset-offsets --execute
  7. In this case, specificTimeCG is the consumer group ID used when creating the Lambda ESM. Listing the consumer groups on the cluster shows the new group:
    ./bin/kafka-consumer-groups.sh --list --command-config client.properties --bootstrap-server $MSKBOOTSTRAP

    Output

  8. With the consumer group created, create the Lambda function along with the Event Source Mapping that uses this new consumer group. In this case, the Lambda function and DynamoDB table are already created. Create the ESM with the following AWS CLI Command:
    aws lambda create-event-source-mapping --region us-east-1 --event-source-arn arn:aws:kafka:us-east-1:0123456789:cluster/demo-us-east-1/78a8d1c1-fa31-4f59-9de3-aacdd77b79bb-23 --function-name msk-consumer-demo-ProcessMSKfunction-IrUhEoDY6X9N --batch-size 3 --amazon-managed-kafka-event-source-config '{"ConsumerGroupId":"specificTimeCG"}' --topics demoTopic01

    The event source in the Lambda console or CLI shows the starting position set to TRIM_HORIZON. However, if you specify a custom consumer group ID that already has existing offsets, those offsets take precedence.

  9. With the event source created, navigate to the DynamoDB console. Locate the DynamoDB table to see the records written by the Lambda function.
    DynamoDB table

Converting the record timestamp of the earliest record in DynamoDB, 1660147212, to a human-readable date shows that the first record was created on 2022-08-10T16:00:12.
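
You can verify that conversion with a short Python snippet:

from datetime import datetime, timezone

# 1660147212 is the record_timestamp of the earliest record in DynamoDB
print(datetime.fromtimestamp(1660147212, tz=timezone.utc).isoformat())
# 2022-08-10T16:00:12+00:00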

In this example, the consumer group is created before the Lambda ESM so that you can specify the timestamp to start from.

If you create an ESM and specify a custom consumer group ID that does not exist, it is created. This is a convenient way to create a new consumer group for an ESM with an ID of your choosing.

Deleting an ESM does not delete the consumer group, regardless of whether it is created before, or during, the ESM creation.

Using the AWS Serverless Application Model (AWS SAM)

To create the event source mapping with a custom consumer group using an AWS Serverless Application Model (AWS SAM) template, use the following snippet:

Events:
  MyMskEvent:
    Type: MSK
    Properties:
      Stream: !Sub arn:aws:kafka:${AWS::Region}:012345678901:cluster/demo-us-east-1/78a8d1c1-fa31-4f59-9de3-aacdd77b79bb-23
      Topics:
        - "demoTopic01"
      ConsumerGroupId: specificTimeCG

Other types of Kafka clusters

This example uses the custom consumer group ID feature when consuming a Kafka topic from an MSK cluster. In addition to MSK clusters, this feature also supports self-managed Kafka clusters. These could be clusters running on EC2 instances or managed Kafka clusters from a partner such as Confluent.

Conclusion

This post shows how to use the new custom consumer group ID feature of the Lambda event source mapping for Amazon MSK and self-managed Kafka. This feature can be used to consume messages with Lambda starting at a specific timestamp or offset within a Kafka topic. It can also be used to consume messages from a consumer group that is replicated from another Kafka cluster using MirrorMaker v2.

For more serverless learning resources, visit Serverless Land.

Introducing bidirectional event integrations with Salesforce and Amazon EventBridge

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/introducing-bidirectional-event-integrations-with-salesforce-and-amazon-eventbridge/

This post is written by Alseny Diallo, Prototype Solutions Architect, and Rohan Mehta, Associate Cloud Application Architect.

AWS now supports Salesforce as a partner event source for Amazon EventBridge, allowing you to send Salesforce events to AWS. You can also configure Salesforce with EventBridge API Destinations and send EventBridge events to Salesforce. These integrations enable you to act on changes to your Salesforce data in real-time and build custom applications with EventBridge and over 100 built-in sources and targets.

In this blog post, you learn how to set up a bidirectional integration between Salesforce and EventBridge and use cases for working with Salesforce events. You see an example application for interacting with Salesforce support case events with automated workflows for detecting sentiment with AWS AI/ML services and enriching support cases with customer order data.

Integration overview

Salesforce is a customer relationship management (CRM) platform that gives companies a single, shared view of customers across their marketing, sales, commerce, and service departments. Salesforce Event Relays for AWS enable bidirectional event flows between Salesforce and AWS through EventBridge.

Amazon EventBridge is a serverless event bus that makes it easier to build event-driven applications at scale using events generated from your applications, integrated software as a service (SaaS) applications, and AWS services. EventBridge partner event source integrations enable customers to receive events from over 30 SaaS applications and ingest them into their AWS applications.

Salesforce as a partner event source for EventBridge makes it easier to build event-driven applications that span customers’ data in Salesforce and applications running on AWS. Customers can send events from Salesforce to EventBridge and vice versa without having to write custom code or manage an integration.

EventBridge joins Amazon AppFlow as a way to integrate Salesforce with AWS. The Salesforce Amazon AppFlow integration is well suited for use cases that require ingesting large volumes of data, like a daily scheduled data transfer sending Salesforce records into an Amazon Redshift data warehouse or an Amazon S3 data lake. The Salesforce EventBridge integration is a good fit for real-time processing of changes to individual Salesforce records.

Use cases

Customers can act on new or modified Salesforce records through integrations with a variety of EventBridge targets, including AWS Lambda, AWS Step Functions, and API Gateway. The integration can enable use cases across industries that must act on customer events in real time.

  • Retailers can automatically unify their Salesforce data with AWS data sources. When a new customer support case is created in Salesforce, enrich the support case with recent order data from that customer retrieved from an orders database running on AWS.
  • Media and entertainment providers can augment their omnichannel experiences with AWS AI/ML services to increase customer engagement. When a new customer account is created in Salesforce, use Amazon Personalize and Amazon Simple Email Service to send a welcome email with personalized media recommendations.
  • Insurers can automate form processing workflows. When a new insurance claim form PDF is uploaded to Salesforce, extract the submitted information with Amazon Textract and orchestrate processing the claim information with AWS Step Functions.

Solution overview

The example application shows how the integration can enhance customer support experiences by unifying support tickets with customer order data, detecting customer sentiment, and automating support case workflows.

Reference architecture

  1. A new case is created in Salesforce and an event is sent to an EventBridge partner event bus.
  2. If the event matches the EventBridge rule, the rule sends the event to both the Enrich Case and Case Processor Workflows in parallel.
  3. The Enrich Case Workflow uses the Customer ID in the event payload to query the Orders table for the customer’s recent order. If this step fails, the event is sent to an Amazon SQS dead letter queue.
  4. The Enrich Case Workflow publishes a new event with the customer’s recent order to an EventBridge custom event bus (see the sketch after this list).
  5. The Case Processor Workflow performs sentiment analysis on the support case content and sends a customized text message to the customer. See the diagram below for details on the workflow.
  6. The Case Processor Workflow publishes a new event with the sentiment analysis results to the custom event bus.
  7. EventBridge rules match the events published to the associated rules: CaseProcessorEventRule and EnrichCaseAppEventRule.
  8. These rules send the events to EventBridge API Destinations. API Destinations sends the events to Salesforce HTTP endpoints to create two Salesforce Platform Events.
  9. Salesforce data is updated with the two Platform Events:
    1. The support case record is updated with the customer’s recent order details and the support case sentiment.
    2. If the support case sentiment is negative, a task is created for an agent to follow up with the customer.
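
The following is a minimal Boto3 sketch of how the Enrich Case Workflow (step 4) might publish its enriched event to the custom event bus. The bus name, source, detail-type, and payload fields are illustrative placeholders, not the values used by the sample application.

import json
import boto3

events = boto3.client("events")

events.put_events(
    Entries=[
        {
            "EventBusName": "salesforce-demo-custom-bus",  # placeholder bus name
            "Source": "enrich-case-workflow",              # placeholder source
            "DetailType": "EnrichedSupportCase",           # placeholder detail-type
            "Detail": json.dumps({
                "caseId": "5003h00000XXXXX",               # placeholder Salesforce case ID
                "customerId": "0013h00000YYYYY",           # placeholder customer ID
                "recentOrder": {"orderId": "ORD-1234", "total": 42.50},
            }),
        }
    ]
)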

The Case Processor workflow uses Step Functions to process the Salesforce events.

Case processor workflow

  1. Detect the sentiment of the customer feedback using Amazon Comprehend. This is positive, negative, or neutral (a minimal sketch of steps 1 and 2 follows this list).
  2. Check if the customer phone number is a mobile number and can receive SMS using Amazon Pinpoint’s mobile number validation endpoint.
  3. If the customer did not provide a mobile number, bypass the SMS steps and put an event with the detected sentiment onto the custom event bus.
  4. If the customer provided a mobile number, send them an SMS with the appropriate message based on the sentiment of their case.
    1. If sentiment is positive or neutral, the message is thanking the customer for their feedback.
    2. If the sentiment is negative, the message offers additional support.
  5. The state machine then puts an event with the sentiment analysis results onto the custom event bus.
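
The following is a minimal plain-Python sketch of the checks in steps 1 and 2, using Amazon Comprehend for sentiment detection and assuming Boto3’s phone_number_validate operation for Amazon Pinpoint. The actual workflow is implemented as Step Functions states, so this is only an illustration of the underlying API calls.

import boto3

comprehend = boto3.client("comprehend")
pinpoint = boto3.client("pinpoint")

def detect_case_sentiment(case_description):
    # returns POSITIVE, NEGATIVE, NEUTRAL, or MIXED
    result = comprehend.detect_sentiment(Text=case_description, LanguageCode="en")
    return result["Sentiment"]

def is_mobile_number(phone_number):
    result = pinpoint.phone_number_validate(
        NumberValidateRequest={"PhoneNumber": phone_number}
    )
    return result["NumberValidateResponse"].get("PhoneType") == "MOBILE"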

Prerequisites

Environment setup

  1. Follow the instructions here to set up your Salesforce Event Relay. Once you have an event bus created with the partner event source, proceed to step 2.
  2. Copy the ARN of the event bus.
  3. Create a Salesforce Connected App. This is used for the API Destinations configuration to send updates back into Salesforce.
  4. You can create a new user within Salesforce with appropriate API permissions to update records. The user name and password are used by the API Destinations configuration.
  5. The example provided by Salesforce uses a Platform Event called “Carbon Comparison”. For this sample app, you create three custom platform events with the following configurations:
    1. Customer Support Case (Salesforce to AWS):
      Customer support case
    2. Processed Support Case (AWS to Salesforce):
      Processed Support case
    3. Enrich Case (AWS to Salesforce):
      Enrich case example
  6. This example application assumes that a custom Sentiment field is added to the Salesforce Case record type. See this link for how to create custom fields in Salesforce.
  7. The example application uses Salesforce Flows to trigger outbound platform events and handle inbound platform events. See this link for how to use Salesforce Flows to build event driven applications on Salesforce.
  8. Clone the AWS SAM template here.
    sam build
    sam deploy --guided

    For the parameter prompts, enter:

  • SalesforceOauthClientId and SalesforceOauthClientSecret: Use the values created with the Connected App in step 3.
  • SalesforceUsername and SalesforcePassword: Use the values created for the new user in step 4.
  • SalesforceOauthUrl: Salesforce URL for OAuth authentication
  • SalesforceCaseProcessorEndpointUrl: Salesforce URL for creating a new Processed Support Case Platform Event object, in this case: https://MyDomainName.my.salesforce.com/services/data/v54.0/sobjects/Processed_Support_Case__e
  • SFEnrichCaseEndpointUrl: Salesforce URL for creating a new Enrich Case Platform Event object, in this case: https://MyDomainName.my.salesforce.com/services/data/v54.0/sobjects/Enrich_Case__e
  • SalesforcePartnerEventBusArn: Use the value from step 2.
  • SalesforcePartnerEventPattern: The detail-type value should be the API name of the custom platform event, in this case: {"detail-type": ["Customer_Support_Case__e"]}

Conclusion

This blog shows how to act on changes to your Salesforce data in real-time using the new Salesforce partner event source integration with EventBridge. The example demonstrated how your Salesforce data can be processed and enriched with custom AWS applications and updates sent back to Salesforce using EventBridge API Destinations.

To learn more about EventBridge partner event sources and API Destinations, see the EventBridge Developer Guide. For more serverless resources, visit Serverless Land.

Estimating cost for Amazon SQS message processing using AWS Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/estimating-cost-for-amazon-sqs-message-processing-using-aws-lambda/

This post was written by Sabha Parameswaran, Senior Solutions Architect.

AWS Lambda enables fully managed asynchronous messaging processing through integration with Amazon SQS. This blog post helps estimate the cost and performance benefits when using Lambda to handle millions of messages per day by using a simulated setup.

Overview

Lambda supports asynchronous handling of messages using SQS integration as an event source and can scale for handling millions of messages per day. Customers often ask about the cost of implementing a Lambda-based messaging solution.

There are multiple variables like Lambda function runtime, individual message size, batch size for consuming from SQS, processing latency per message (depending on the backend services invoked), and function memory size settings. These can determine the overall performance and associated cost of a Lambda-based messaging solution.

This post provides cost estimation using these variables, along with guidance around optimization. The estimates focus on consuming from standard queues and not FIFO queues.

SQS event source

The Lambda event source mapping supports integration for SQS. Lambda users specify the SQS queue to consume messages. Lambda internally polls the queue and invokes the function synchronously with an event containing the queue messages.

The configuration controls in Lambda for consuming messages from an SQS queue are:

  • Batch size: The maximum number of records that can be batched as one event delivered to the consuming Lambda function. The maximum batch size is 10,000 records.
  • Batch window: The maximum time (in seconds) to gather records as a single batch. A larger batch window size means waiting longer for a larger SQS batch of messages before passing to the Lambda function.
  • SQS content filtering: Selecting only the messages that match defined content criteria. This can reduce cost by removing unwanted or irrelevant messages. Lambda now supports content filtering (for SQS, Kinesis, and DynamoDB), and developers can use the filtering capabilities to avoid processing SQS messages, reducing unnecessary invocations and associated cost (a filter criteria sketch follows this list).
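
As an illustration of the content filtering option, the following Boto3 sketch creates an SQS event source mapping with a filter. The queue ARN, function name, and filter pattern are placeholders.

import json
import boto3

lambda_client = boto3.client("lambda")

lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:111222333444:orders-queue",  # placeholder queue
    FunctionName="process-orders",                                     # placeholder function
    BatchSize=100,
    MaximumBatchingWindowInSeconds=10,
    FilterCriteria={
        "Filters": [
            # only invoke the function for messages whose JSON body has type == "order"
            {"Pattern": json.dumps({"body": {"type": ["order"]}})}
        ]
    },
)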

Lambda sends as many records in a single batch as allowed by the batch size, as long as it’s earlier than the batch window value, and smaller than the maximum payload size of 6 MB. Having large batch sizes means that a single Lambda invocation can handle more messages rather than multiple Lambda invocations to handle smaller batches (which translates to setting higher concurrency limits).

The cost and time to process might vary based on the actual number of messages in the batch. A larger batch size can imply longer processing but requires lower concurrency (number of concurrent Lambda invocations).

Lambda configurations

Lambda function costs are calculated based on memory used and time spent (in GB-second) in execution of a function. Aside from the event source configuration, there are several other Lambda function configurations that impact cost and performance:

  • Processor type: Lambda functions provide options to choose between x86 and Arm/Graviton processors. The newer Arm/Graviton processors can yield a higher performance and lower cost compared to x86 based on the workload. Compare the options and run tests before selecting.
  • Memory allotted: This is directly proportional to the CPU allotted to the function and translates to price for each invocation. Higher memory can lead to faster execution but also higher cost. The optimal memory required for a small batch versus large batch can vary based on the workload, size of incoming messages, transformations, requirements to store intermediate, or final results. Optimal tuning of the memory configurations is key to ensuring right cost versus performance. See the AWS Lambda Power Tuning documentation for more details on identifying the optimal memory versus performance for a fixed batch size and then extrapolate the memory settings for larger batch sizes.
  • Lambda function runtime: Some runtimes have a smaller memory footprint and may be more cost effective than others that are memory intensive. Choosing the runtime affects the memory allocation.
  • Function performance: This can be considered as TPS (the total number of requests completed per second) or, conversely, measured as the time to complete one request. The time to finish a function execution depends on the event containing the batch of messages (bigger batches mean more time to complete an event) and on the complexity and dependencies of the message processing, such as the performance of the backend services invoked. The calculations assume that the Lambda function and related dependencies have been optimized and tuned to scale linearly with various batch sizes and numbers of invocations.
  • Concurrency: The number of concurrent Lambda function executions. Concurrency is important for scaling Lambda functions, allowing users to delegate capacity planning and scaling to the Lambda service.

The higher the concurrency, the more workloads it can process in a shorter time, allowing better performance, but this does not change the overall cost. Concurrency is not equivalent to TPS: it is more of a scaling factor in overall TPS. For example, a workload comprised of a set of messages takes 20 seconds to complete. 100 workloads would take 2,000 seconds to complete sequentially. With a concurrency of 10, it takes 200 seconds. With a concurrency of 100, the time drops to 20 seconds, as each of the 100 workloads is handled concurrently. But each function essentially runs for the same duration and memory, regardless of concurrency. So the cost remains the same, as it is measured in GB-seconds (memory multiplied by time), while the performance view differs. For this reason, the cost estimations do not consider the concurrency settings of Lambda functions, since the cost is the same whether the workloads are processed sequentially or concurrently.
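
A small sketch of that reasoning, assuming a 0.5 GB function and 20-second workloads: concurrency changes the wall-clock time, while the GB-seconds (and therefore the cost) stay constant.

import math

def wall_clock_seconds(workloads, seconds_per_workload, concurrency):
    # workloads run in waves of size `concurrency`
    return math.ceil(workloads / concurrency) * seconds_per_workload

def gb_seconds(workloads, seconds_per_workload, memory_gb):
    # billed duration is independent of how many run at once
    return workloads * seconds_per_workload * memory_gb

for concurrency in (1, 10, 100):
    print(concurrency,
          wall_clock_seconds(100, 20, concurrency),
          gb_seconds(100, 20, 0.5))
# 1   -> 2000 s wall clock, 1000.0 GB-seconds
# 10  ->  200 s wall clock, 1000.0 GB-seconds
# 100 ->   20 s wall clock, 1000.0 GB-seconds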

Assumptions

The cost estimation tool presented helps users estimate monthly Lambda function costs for processing SQS standard queue messages based on the following assumptions:

  • The system has reached steady state and has millions of messages available to be consumed per day in standard queues. The number of messages per day remains constant throughout the entire month.
  • Since it’s a steady state, there are no associated Lambda function cold start delays.
  • All SQS messages that need to be processed successfully have already met the filter criteria. There are also no poison messages that have to be retried repeatedly; messages are not rejected, unacknowledged, or reprocessed.
  • The workload scales linearly in performance versus batch size. All the associated dependencies can scale linearly, and a batch of N messages should take the same time as N times a single message plus a fixed overhead per function invocation, irrespective of the batch size. For example, if a function’s overhead is 50 ms irrespective of the batch size and processing a single message takes 20 ms, then a batch of 20 messages should take 450 ms (50 + 20*20) versus 150 ms (50 + 5*20) for a batch of 5 messages (a Python sketch of this estimate follows the table below).
  • Function memory increases in steps, based on increasing the batch size. For example, 100 messages use 256 MB of baseline memory, and every additional 500 messages require an additional 128 MB of memory. A sliding window of memory to batch size:

    Batch size    Memory
    1–100         256 MB
    100–600       384 MB
    600–1100      512 MB
    1100–1600     640 MB
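
The following Python sketch turns these assumptions (50 ms overhead, 20 ms per message, 256 MB baseline plus 128 MB per additional 500 messages) into a rough monthly estimate. It is not the estimator tool itself, and the per-GB-second price is an illustrative assumption; check current Lambda pricing for your Region and architecture, and note that the free tier is ignored here.

import math

PRICE_PER_GB_SECOND = 0.0000166667  # illustrative x86 rate; free tier ignored

def duration_ms(batch_size, overhead_ms=50, per_message_ms=20):
    return overhead_ms + per_message_ms * batch_size

def memory_mb(batch_size, base_mb=256, step_mb=128, step_size=500, base_batch=100):
    if batch_size <= base_batch:
        return base_mb
    extra_steps = math.ceil((batch_size - base_batch) / step_size)
    return base_mb + extra_steps * step_mb

def monthly_cost(messages_per_day, batch_size, days=30):
    invocations = math.ceil(messages_per_day / batch_size) * days
    gb_seconds = invocations * (duration_ms(batch_size) / 1000) * (memory_mb(batch_size) / 1024)
    return gb_seconds * PRICE_PER_GB_SECOND

# estimated monthly cost for 10 million messages per day with a batch size of 1000
print(monthly_cost(10_000_000, batch_size=1000))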

Lambda uses SQS APIs internally to poll and dequeue the messages. The costs for the polling and dequeue operations using SQS APIs are not included as part of the estimations. The internal SQS dequeue portion is outside the control of the Lambda developer and the cost estimates only cover the message processing using Lambda. Also, the tool does not consider any reprocessing or duplicate processing of messages due to exceptions or errors that can vary the cost.

Using the cost estimation tool

The estimator tool is a Python-based command line program. It takes an input properties file that specifies the various input parameters and produces Lambda function cost versus performance estimations for various batch sizes, messages per day, and so on. The tool takes into account the eligible monthly free tier for Lambda function executions.

Prerequisites: Running the tool requires Python 3.9 and the Plotly package (5.7+), or you can build and run it as a Docker image instead.

To run the tool:

  1. Clone the repo:
    git clone https://github.com/aws-samples/aws-lambda-sqs-cost-estimator
  2. Install the tool:
    cd aws-lambda-sqs-cost-estimator/code
    pip3 install -r requirements.txt
  3. Edit the input.prop file and run the tool to generate cost estimations:
    python3 LambdaPlotly.py

This shows the cost estimates on a local browser instance. Running the code as a Docker image is also supported. Refer to the GitHub repo for additional instructions.

  1. Clone the repo and build the Docker container:
    git clone https://github.com/aws-samples/aws-lambda-sqs-cost-estimator
    cd aws-lambda-sqs-cost-estimator/code
    docker build -t lambda-dash .
  2. Edit the input.prop file and run the tool to generate cost estimations:
    docker run -it -v `pwd`:/app -p 8080:8080 lambda-dash
  3. Navigate to http://0.0.0.0:8080/app in a browser to view the generated cost estimate plot.

There are various input parameters for the cost estimations specified inside the input.prop file. Tune the input parameters as needed:

Parameter Description Sample value (units not included)
base_lambda_memory_mb Baseline memory for the Lambda function (in MB) 128
warm_latency_ms Fixed invocation overhead of the Lambda handler method (warm start), irrespective of the batch size in the incoming event payload, in ms 20
process_per_message_ms Time to process a single message (linearly scales with number of messages per batch in event payload) in ms 10
max_batch_size Maximum batch size per event payload processed by a single Lambda instance 1000 (max is 10000)
batch_memory_overhead_mb Additional memory for processing increments in batch size (in MB) 128
batch_increment Increments of batch size for increased memory 300

The following is sample input.prop file content:

base_lambda_memory_mb=128

# Total process time for N messages in batch = warm_latency_ms + (process_per_message_ms * N)

# Time spent in function initialization/warm-up
warm_latency_ms=20

# Time spent for processing each message in milliseconds
process_per_message_ms=10

# Max batch size
max_batch_size=1000

# Additional lambda memory X mb required for managing/parsing/processing N additional messages processed when using variable batch sizes
#batch_memory_overhead_mb=X
#batch_increment=N
batch_memory_overhead_mb=128
batch_increment=300

The tool generates a page with plot graphs and tables with 3 sections:

Cost example

There is an accompanying interactive legend showing cost and batch size. The top section shows a graph of cost versus message volumes versus batch size:

cost versus message volumes vs Batch size

The second section shows the actual cost variation for different batch sizes for 10 million messages:

actual cost variation for different batch sizes for 10 million messages.

The third section shows the memory and time required to process with different batch sizes:

memory and time required to process with different batch sizes

The various control input parameters used for graph generation are shown at the bottom of the page.

Double-clicking on a specific batch size or line on the right-hand legend displays that specific plot with its pricing details.

specific plot to be displayed with its pricing details

You can modify the input parameters with different settings for memory, batch sizes, and memory overhead for larger batches, and rerun the program to create different cost estimations. You can also export the generated graphs as PNG image files for reference.

Conclusion

You can use Lambda functions to handle fully managed asynchronous processing of SQS messages. Estimating the cost and optimal setup depends on leveraging the various configurations of SQS and Lambda functions. The cost estimator tool presented in this blog should help you understand these configurations and their impact on the overall cost and performance of the Lambda function-based messaging solutions.

For more serverless learning resources, visit Serverless Land.

Introducing tiered pricing for AWS Lambda

Post Syndicated from Sam Dengler original https://aws.amazon.com/blogs/compute/introducing-tiered-pricing-for-aws-lambda/

This blog post is written by Heeki Park, Principal Solutions Architect, Serverless.

AWS Lambda charges for on-demand function invocations based on two primary parameters: invocation requests and compute duration, measured in GB-seconds. If you configure additional ephemeral storage for your function, Lambda also charges for ephemeral storage duration, measured in GB-seconds.

AWS continues to find ways to help customers reduce the cost of running on Lambda. In February 2020, AWS announced that AWS Lambda would participate in Compute Savings Plans. In December 2020, AWS announced 1 ms billing granularity to help customers save on cost for their Lambda function invocations. With that pricing change, customers whose function duration is less than 100 ms pay less for those function invocations. In September 2021, AWS announced Graviton2 support for running functions on Arm, with potential price-performance improvements for compute.

Today, AWS introduces tiered pricing for Lambda. With tiered pricing, customers who run large workloads on Lambda can automatically save on their monthly costs. Tiered pricing is based on compute duration measured in GB-seconds. The tiered pricing breaks down as follows:

Compute duration (GB-seconds) Architecture New tiered discount
0 – 6 billion x86 Same as today
6 – 15 billion x86 10%
Anything over 15 billion x86 20%
0 – 7.5 billion arm64 Same as today
7.5 – 18.75 billion arm64 10%
Anything over 18.75 billion arm64 20%

The Lambda pricing page lists the pricing for all Regions and architectures.

Tiered pricing discount example

Consider a financial services provider who provides on-demand stock portfolio analysis. The customers pay per portfolio analyzed and find the service valuable for providing them insight into the performance of those assets. The application is built using Lambda, runs on x86, and is optimized to use 2048 MB (2 GB) of memory with an average function duration of 60 seconds. This current month resulted in 75 million function invocations.

Without tiered pricing, this workload costs the following:

Monthly request charges: 75M * $0.20/million = $15.00
Monthly compute duration (seconds): 75M * 60 seconds = 4.5B seconds
Monthly compute (GB-seconds): 4.5B seconds * 2 GB = 9B GB-seconds
Monthly compute duration charges: 9B GB-s * $0.0000166667/GB-s = $150,000.30
Total monthly charges = request charges + compute duration charges = $15.00 + $150,000.30 = $150,015.30

With tiered pricing, the portion of compute duration that exceeds 6B GB-seconds receives an automatic discount as follows:

Monthly request charges: 75M * $0.20/million = $15.00
Monthly compute duration (seconds): 75M * 60 seconds = 4.5B seconds
Monthly compute (GB-seconds): 4.5B seconds * 2GB = 9B GB-seconds
Monthly compute duration charge (tier 1): 6B GB-s * $0.0000166667/GB-s = $100,000.20
Monthly compute duration charge (tier 2): 3B GB-s * $0.0000150000/GB-s = $45,000.09
Monthly compute duration charges (post-discount): $100,000.20 + $45,000.09 = $145,000.29.
Total monthly charges = request charges + compute duration charges = $15.00 + $145,000.29 = $145,015.29 ($5,000.01 cost savings)
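The following Python sketch reproduces this tiered calculation. The base rate, request rate, and tier boundaries are the x86 values from the table above at the time of writing; treat them as illustrative and check the Lambda pricing page for current figures:

BASE_RATE = 0.0000166667      # USD per GB-second, tier 1 (x86, assumed)
REQUEST_RATE = 0.20 / 1e6     # USD per request (assumed)
TIERS_X86 = [                  # (tier ceiling in GB-seconds, multiplier on the base rate)
    (6_000_000_000, 1.0),      # 0 - 6B: no discount
    (15_000_000_000, 0.9),     # 6B - 15B: 10% discount
    (float("inf"), 0.8),       # above 15B: 20% discount
]

def monthly_cost(invocations: int, avg_duration_s: float, memory_gb: float) -> float:
    gb_seconds = invocations * avg_duration_s * memory_gb
    duration_cost, floor = 0.0, 0.0
    for ceiling, multiplier in TIERS_X86:
        in_tier = min(gb_seconds, ceiling) - floor      # GB-seconds that fall into this tier
        if in_tier <= 0:
            break
        duration_cost += in_tier * BASE_RATE * multiplier
        floor = ceiling
    return invocations * REQUEST_RATE + duration_cost

print(round(monthly_cost(75_000_000, 60, 2.0), 2))   # ~145015.29, matching the example above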

Tiered pricing discount example with increased growth

The service is successful and usage in the following month quadruples, resulting in 300 million function invocations.

Without tiered pricing, this workload costs the following:

Monthly request charges: 300M * $0.20/million = $60.00
Monthly compute duration (seconds): 300M * 60 seconds = 18B seconds
Monthly compute (GB-seconds): 18B seconds * 2GB = 36B GB-seconds
Monthly compute duration charges: 36B GB-s * $0.0000166667/GB-s = $600,001.20
Total monthly charges = request charges + compute duration charges = $60.00 + $600,001.20 = $600,061.20

With tiered pricing, the compute duration portion now also exceeds 15B GB-seconds and receives an automatic discount as follows:

Monthly request charges: 300M * $0.20/million = $60.00
Monthly compute duration (seconds): 300M * 60 seconds = 18B seconds
Monthly compute (GB-seconds): 18B seconds * 2GB = 36B GB-seconds
Monthly compute duration charge (tier 1): 6B GB-s * $0.0000166667/GB-s = $100,000.20
Monthly compute duration charge (tier 2): 9B GB-s * $0.0000150000/GB-s = $135,000.27
Monthly compute duration charge (tier 3): 21B GB-s * $0.0000133333/GB-s = $280,000.56
Monthly compute duration charges (post-discount): $100,000.02 + $135,000.27 + $280,000.56 = $515,001.03.
Total monthly charges = request charges + compute duration charges = $60.00 + $515,001.03 = $515,061.03 ($85,000.17 cost savings)

Tiered pricing discount example with decreased growth

Alternatively, customers used the service less frequently than expected. As a result, usage in the following month is one-third the prior month’s usage, resulting in 25 million function invocations.

Without tiered pricing, this workload costs the following:

Monthly request charges: 25M * $0.20/million = $5.00
Monthly compute duration (seconds): 25M * 60 seconds = 1.5B seconds
Monthly compute (GB-seconds): 1.5B seconds * 2GB = 3B GB-seconds
Monthly compute duration charges: 3B GB-s * $0.0000166667/GB-s = $50,000.10
Total monthly charges = request charges + compute duration charges = $5.00 + $50,000.10 = $50,005.10

When considering tiered pricing, the compute duration portion is under 6B GB-s and is priced without any additional pricing discounts. In this case, the financial services provider did not grow the business as expected or take advantage of tiered pricing. However, they did take advantage of Lambda’s pay-as-you-go model, paying only for the compute that this application used.

Summary and other considerations

Tiered pricing for Lambda applies to the compute duration portion of your on-demand function invocations. It is specific to the architecture (x86 or arm64) and is bucketed by the Region. Refer to the previous table for the specific pricing tiers.

For example, consider a function using the x86 architecture that is deployed in both us-east-1 and us-west-2. Usage in us-east-1 is bucketed and priced separately from usage in us-west-2. If a function using the arm64 architecture is also deployed in us-east-1 and us-west-2, its usage falls into separate buckets as well.

The cost for invocation requests remains the same. The discount applies only to on-demand compute duration and does not apply to provisioned concurrency. Customers who also purchase Compute Savings Plans (CSPs) can take advantage of both, where Lambda applies tiered pricing first, followed by CSPs.

Conclusion

With tiered pricing for Lambda, you can save on the compute duration portion of your monthly Lambda bills. This allows you to architect, build, and run large-scale applications on Lambda and take advantage of these tiered prices automatically.

For more information on tiered pricing for Lambda, see: https://aws.amazon.com/lambda/pricing/.

Using certificate-based authentication for iOS applications with Amazon SNS

Post Syndicated from Sam Dengler original https://aws.amazon.com/blogs/compute/using-certificate-based-authentication-for-ios-applications-with-amazon-sns/

This blog post is written by Yashlin Naidoo, Arnav Thakur, Kim Read, Guilherme Silva.

Amazon SNS enables you to send notifications to a mobile push endpoint using a platform application endpoint by dispatching the notification on your application’s behalf. Push notifications for iOS apps are sent using Apple Push Notification Service (APNs).

To send push notifications through SNS with APNs certificate-based authentication, you must provide a set of credentials for connecting to the Apple Push Notification service (see prerequisites for push). SNS supports certificate-based authentication (.p12) in addition to the newer token-based authentication (.p8).

Certificate-based authentication uses a provider certificate to establish a secure connection between your provider and APNs. These certificates are tied to a single application and are used to send notifications to this application. This approach can be useful when you haven’t migrated to the new token-based authentication.

For new applications, we recommend using token-based authentication as it provides improved security. It removes the need for yearly renewal of the certificates and can also be shared amongst multiple applications. To learn about how to use token-based authentication, visit Token-Based authentication for iOS applications with Amazon SNS in the AWS Compute Blog.

This blog provides step-by-step instructions for building an iOS application, creating a new certificate from your Apple Developer account, and setting up a platform application and endpoint in the SNS console. You then test your application by sending a push notification via SNS and viewing it delivered on your device.

Setting up your iOS application

This section will go over:

  • Creating an iOS application.
  • Creating a .p12 certificate to upload to SNS.

Prerequisites:

Creating an iOS application

  1. Create a new XCode project. Select iOS as the platform.

    New XCode project

  2. Select your Apple Developer Account team and organization identifier.

    Select your Apple Developer Account team

  3. In your project, go to Signing & Capabilities. Under signing, ensure that “Automatically manage signing” is checked and your team is selected.

    Signing & Capabilities

  4. To add the push notification capability to your application, select “+” and select Push Notifications.
    Add push notification capability

    This step creates resources on your Apple Developer Account (the App ID and adds Push notification capability to it). You can also verify this in your Apple Developer Account.

  5. Add the following code to AppDelegate.swift:
        import UIKit
        import UserNotifications

        @main
        class AppDelegate: UIResponder, UIApplicationDelegate {

            func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
                // Override point for customization after application launch

                // Call to register for push notifications when launched
                registerForPushNotifications()

                return true
            }

            // MARK: UISceneSession Lifecycle

            func application(_ application: UIApplication, configurationForConnecting connectingSceneSession: UISceneSession, options: UIScene.ConnectionOptions) -> UISceneConfiguration {
                // Called when a new scene session is being created.
                // Use this method to select a configuration to create the new scene with.
                return UISceneConfiguration(name: "Default Configuration", sessionRole: connectingSceneSession.role)
            }

            func application(_ application: UIApplication, didDiscardSceneSessions sceneSessions: Set<UISceneSession>) {
                // Called when the user discards a scene session.
                // If any sessions were discarded while the application was not running, this will be called shortly after application:didFinishLaunchingWithOptions.
                // Use this method to release any resources that were specific to the discarded scenes, as they will not return.
            }

            func getNotificationSettings() {
                UNUserNotificationCenter.current().getNotificationSettings { settings in
                    print("Notification settings: \(settings)")

                    guard settings.authorizationStatus == .authorized else { return }
                    DispatchQueue.main.async {
                        UIApplication.shared.registerForRemoteNotifications()
                    }
                }
            }

            func registerForPushNotifications() {
                // 1. UNUserNotificationCenter handles all notification-related activities in the app, including push notifications
                UNUserNotificationCenter.current()
                    // 2. Request authorization to send the types of notifications specified in the options
                    .requestAuthorization(
                        options: [.alert, .sound, .badge]) { [weak self] granted, _ in
                            print("Permission granted: \(granted)")
                            guard granted else { return }
                            self?.getNotificationSettings()
                        }
            }

            func application(
                _ application: UIApplication,
                didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data
            ) {
                let tokenParts = deviceToken.map { data in String(format: "%02.2hhx", data) }
                let token = tokenParts.joined()
                print("Device Token: \(token)")
            }

            func application(
                _ application: UIApplication,
                didFailToRegisterForRemoteNotificationsWithError error: Error
            ) {
                print("Failed to register: \(error)")
            }
        }
  6. Build and run the application on an iPhone. Note that the push notification feature does not work with a simulator.
  7. On your phone, select “Allow” when prompted to allow push notifications.

    Allow push notifications

  8. If successful, the debugger prints “Permission granted: true” and the device token.

    Device token

You have now configured an iOS application that can receive push notifications. Next, use the application to test sending push notifications with SNS using certificate-based authentication.

Creating a .p12 certificate to upload to SNS

After completing the previous step, you need:

  • An app identifier
  • A certificate signing request (CSR)
  • An SSL certificate

Create an identifier

  1. Log in to your Apple Developer Account.
  2. Choose Certificates, Identifiers & Profiles.
  3. In the Identifiers section, choose the Add button (+).
  4. In the Register a new identifier section, choose App IDs and select Continue.
  5. In the Select a type section, choose App, and select Continue.
  6. For Description, type the application description.
  7. For Bundle ID, use the Bundle ID assigned to your application. You can find this ID under Signing & Capabilities of your application in XCode (see step 3 under “Creating an iOS application”).
  8. Under Capabilities, choose Push Notifications.
  9. Select Continue. In the Confirm your App ID panel, check that all values were entered correctly. The identifier should match your app ID and bundle ID.
  10. Select Register to register the new app ID.

Create a certificate signing request (CSR)

  1. Open Keychain Access located in /Applications/Utilities or search for it on Finder.
  2. Once opened, choose the Keychain Access menu (next to the Apple icon). Navigate to Certificate Assistant and choose Request a Certificate from a Certificate Authority.
  3. Enter the Username, Email Address, Common Name and leave CA Email Address empty.
  4. Choose Saved to disk and choose Continue.

Create a certificate

  1. Log in to your Apple Developer Account.
  2. Choose Certificates, Identifiers & Profiles.
  3. In the Certificate section, select Create new certificate.
  4. Under services, choose your certificate: Apple Push Notification service SSL (Sandbox)/Apple Push Notification service SSL (Sandbox & Production).
  5. Keep Platform as iOS and choose App ID (Identifier) created previously.
  6. Upload the Certificate Signing Request created in the previous step and Download your certificate.

Create .p12 certificate to upload to SNS

  1. Once your certificate .cer file is downloaded (for example, “aps_development.cer”), open it to add it to Keychain Access. Find Apple Development iOS Push Services: (Your Identifier Name/App ID Name) and ensure that the certificate is placed in the “login” keychain.
  2. Right-click the certificate, choose Export, select the .p12 file format, and choose Save. Optionally, set a password.

Creating a new platform application using APNs certificate-based authentication

Prerequisites

To implement APNs certificate-based authentication from SNS, you must have:

  • An Apple Developer Account
  • An iOS mobile application

To create a new SNS platform application, which stores the push notification platform credentials and related configuration:

  1. Navigate to the SNS Console. Expand the Mobile menu and choose Create platform application.
  2. For the Application name field, enter an application name such as “myfirstiOSapp”. For Push Notification Platform, select Apple iOS/ VoIP/ macOS.

    Create platform application

  3. Under the Apple Credentials section:
    1. If your application is in development, select the radio button for Used for development in sandbox. If your application is in production, uncheck Used for development in sandbox.
    2. For Push service, choose iOS and for Authentication method, choose Certificate.
    3. Under Certificate, select Choose file to upload the .p12 certificate file.
    4. If you configured a password while creating the certificate, enter this in the Certificate Password field.
    5. Choose Load Credentials from File to extract the Certificate and private key components.
  4. Event Notifications, Delivery Status Logging – Optional: Refer to the guide for enabling Delivery Status logs and the guide to set up Mobile Event related Notifications. More on this step can also be found in the best practices guide.

    Enter Apple credentials

  5. Choose Create Platform Application. This creates a certificate-based authentication APNs Platform Application for iOS.

    Create platform application
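If you prefer to script this step, the following is a hedged boto3 sketch of the same platform application creation. With the API you supply the APNs SSL certificate and private key in PEM form (for example, exported from the .p12 with openssl); the file names and application name are placeholders:

import boto3

sns = boto3.client("sns")

with open("apns_certificate.pem") as cert, open("apns_private_key.pem") as key:
    response = sns.create_platform_application(
        Name="myfirstiOSapp",
        Platform="APNS_SANDBOX",              # use "APNS" for production
        Attributes={
            "PlatformPrincipal": cert.read(),   # SSL certificate
            "PlatformCredential": key.read(),   # private key
        },
    )

platform_application_arn = response["PlatformApplicationArn"]
print(platform_application_arn)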

Creating a new platform endpoint using APNs certificate-based authentication

To send Push Notifications using SNS, a platform endpoint resource is created to store the destination address of the corresponding iOS application that is associated with the SNS platform application.

The destination address of a user’s device with the iOS application installed is identified by a unique device token, obtained once the app has registered successfully with APNs to receive push notifications. The service uses the device token captured in the platform endpoint resource together with the configuration in the SNS platform application to deliver a push notification message.

In the following steps, you create a new platform endpoint for a destination device that has the iOS application installed and is capable of receiving push notifications.

  1. Open your Platform Application. Choose Create Application Endpoint.

    Application endpoints list

  2. Locate the Device token in the application logs of the iOS app provisioned earlier. Enter it in the Device Token Field.
  3. To store any additional arbitrary data for the endpoint, you can include it in the User data field.

    Create application endpoint

  4. Choose Create application endpoint and the details are shown on the console.

    Application endpoint detail
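As an alternative to the console, a hedged boto3 sketch of the same endpoint creation follows; the ARN and token values are placeholders:

import boto3

sns = boto3.client("sns")

response = sns.create_platform_endpoint(
    PlatformApplicationArn="<ARN of the platform application created earlier>",
    Token="<device token printed by the iOS app>",
    CustomUserData="optional arbitrary data about this device",
)
endpoint_arn = response["EndpointArn"]
print(endpoint_arn)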

Testing a push notification from your device

In this section, you test sending a push notification to your device.

  1. From the SNS console, navigate to your platform endpoint and choose Publish message.
  2. Enter a message to send. This example uses a custom payload that allows you to provide additional APNs headers.

    Publish message

  3. Choose Publish message.
  4. The push notification is delivered to your device.

    Notification
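For reference, the following is a hedged boto3 sketch of the same publish step using a custom payload; the endpoint ARN and payload contents are placeholders:

import json
import boto3

sns = boto3.client("sns")
endpoint_arn = "<ARN of the platform endpoint created earlier>"   # placeholder

apns_payload = {"aps": {"alert": "Hello from Amazon SNS!", "sound": "default", "badge": 1}}

sns.publish(
    TargetArn=endpoint_arn,
    MessageStructure="json",
    Message=json.dumps({
        "default": "Hello from Amazon SNS!",
        "APNS_SANDBOX": json.dumps(apns_payload),   # use the "APNS" key for production endpoints
    }),
)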

Conclusion

Developers send mobile push notifications for APNs certificate-based authentication by using a .p12 certificate to authenticate an Apple device endpoint. Certificate-based authentication ensures a secure connection through TLS (Transport Layer Security). The provider (SNS) initiates the request to APNs, and validation from both the provider and APNs is required to complete the secure connection.

Certificates expire annually and must be renewed to ensure that SNS can continue to deliver to the endpoint. In this post, you learn how to create an iOS application for APNs certificate-based authentication and integrate it with SNS to send push notifications to your device using a .p12 certificate to authenticate your application with the mobile endpoint.

To learn more about APNs certificate-based authentication with Amazon SNS, visit the Amazon SNS Developer Guide.

For more serverless learning resources, visit Serverless Land.

Using AWS Lambda to run external transactions on Db2 for IBM i

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/using-aws-lambda-to-run-external-transactions-on-db2-for-ibm-i/

This post is written by Basil Lin, Cloud Application Architect, and Jud Neer, Delivery Practice Manager.

Db2 for IBM i (Db2) is a relational database management system that can pose connectivity challenges with cloud environments because of a lack of native support. However, by using Docker on Amazon ECR and AWS Lambda container images, you can transfer data between the two environments with a serverless architecture.

While mainframe modernization solutions are helping customers migrate from on-premises technologies to agile cloud solutions, a complete migration is not always immediately possible. AWS offers broad modernization support from rehosting to refactoring, and platform augmentation is a common scenario for customers getting started on their cloud journey.

Db2 is a common database in on-premises workloads. One common use case of platform augmentation is maintaining Db2 as the existing system-of-record while rehosting applications in AWS. To ensure Db2 data consistency, a change data capture (CDC) process must be able to capture any database changes as SQL transactions. A mechanism then runs these transactions on the existing Db2 database.

While AWS provides CDC tools for multiple services, converting and running these changes for Db2 requires proprietary IBM drivers. Conventionally, you can implement this by hosting a stream-processing application on a server. However, this approach relies on traditional server architecture. This may be less efficient, incur higher overhead, and may not meet availability requirements.

To avoid these issues, you can build this transaction mechanism using a serverless architecture. This blog post’s approach uses ECR and Lambda to externalize and run serverless, on-demand transactions on Db2 for IBM i databases.

Overview

The solution you deploy relies on a Lambda container image to run SQL queries on Db2. While you provide your own Lambda invocation methods and queries, this solution includes the drivers and connection code required to interface with Db2. The following architecture diagram shows this generic solution with no application-specific triggers:

Architecture diagram

This solution builds a Docker image containerized with Db2 interfacing code. The code consists of a Lambda handler to run the specified database transactions, a base class that helps create database Python functions via Open Database Connectivity (ODBC), and finally a forwarder class to establish encrypted connections with the target database.

Deployment scripts create the Docker image, deploy the image to ECR, and create a Lambda function from the image. Lambda then runs your queries on your target Db2 database. This solution does not include the Lambda invocation trigger, the Amazon VPC, and the AWS Direct Connect connection as part of the deployment, but these components may be necessary depending on your use case. The README in the sample repository shows the complete deployment prerequisites.

To interface with Db2, the Lambda function establishes an ODBC session using a proprietary IBM driver. This enables the use of high-level ODBC functions to manipulate the Db2 database management system.

Even with the proprietary driver, ODBC does not properly support TLS encryption with Db2. During testing, enabling the TLS encryption option can cause issues with database connectivity. To work around this limitation, a forwarding package captures all ODBC traffic and forwards packets using TLS encryption to the database. The forwarder opens a local socket listener on port 8471 for unencrypted loopback connections. Once the Lambda function initializes an unencrypted ODBC connection locally, the forwarding package then captures, encrypts, and forwards all ODBC calls to the target Db2 database. This method allows Lambda to form encrypted connections with your target database while still using ODBC to control transactions.

With secure connectivity in place, you can invoke the Lambda function. The function starts the forwarder and retrieves Db2 access credentials from AWS Secrets Manager, as shown in the following diagram. The function then attempts an ODBC loopback connection to send transactions to the forwarder.

Flow process

If the connection is successful, the Lambda function runs the queries, and the forwarder sends the queries to the target Db2. However, if the connection fails, the function makes a second attempt, which consists of restarting both the forwarder module and the loopback connection. If the second attempt also fails, the function errors out.

After the transactions complete, a cleanup process runs and the function exits with a success status, unless an exception occurs during the function invocation. If an exception arises during the transaction, the function exits with a failure status. This is an important consideration when building retry mechanisms. You must review Lambda exit statuses to prevent default AWS retry mechanisms from causing unintended invocations.
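The following Python sketch outlines this invocation flow under stated assumptions: the forwarder wrapper module, secret name, ODBC driver name, and connection keywords are illustrative and are not the repository's actual code:

import json
import boto3
import pyodbc                      # packaged in the container image with the IBM driver
import tls_forwarder               # hypothetical wrapper around the TLS forwarding package

secrets = boto3.client("secretsmanager")

def connect_loopback(creds):
    # Unencrypted loopback connection to the forwarder listening on localhost:8471;
    # the forwarder re-encrypts traffic with TLS before sending it to Db2.
    conn_str = (
        "DRIVER={IBM i Access ODBC Driver};SYSTEM=127.0.0.1;"
        f"UID={creds['username']};PWD={creds['password']};"
    )
    return pyodbc.connect(conn_str, timeout=10)

def handler(event, context):
    creds = json.loads(
        secrets.get_secret_value(SecretId="db2-connection-details")["SecretString"]
    )
    tls_forwarder.start(target=creds["host"])
    try:
        conn = connect_loopback(creds)
    except pyodbc.Error:
        # Second attempt: restart the forwarder and retry the loopback connection.
        tls_forwarder.restart(target=creds["host"])
        conn = connect_loopback(creds)           # a failure here fails the invocation
    cursor = conn.cursor()
    for statement in event.get("statements", []):
        cursor.execute(statement)
    conn.commit()
    conn.close()
    return {"status": "success"}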

To simplify deployment, the solution contains scripts you can use. Once you provide AWS credentials, the deployment script deploys a base set of infrastructure into AWS, including the ECR repository for the Docker images and the Secrets Manager secret for the Db2 configuration details.

The deployment script also asks for Db2 configuration details. After you finish entering these, the script sends the information to AWS to configure the previously deployed secret.

Once the secret configuration is complete, the script then builds and pushes a base Docker image to the deployed ECR repository. This base image contains a few basic Python prerequisite libraries necessary for the final code, and also the RPM driver for interfacing with Db2 via ODBC.

Finally, the script builds the solution infrastructure and deploys it into the AWS Cloud. Using the base image in ECR, it creates a Lambda function from a new Docker container image containing the SQL queries and the ODBC transaction code. After deployment, the solution is ready for testing and customization for your use case.

Prerequisites

Before deployment, you must have the following:

  1. The cloned code repository locally.
  2. A local environment configured for deployment.
  3. Amazon VPC and networking configured with Db2 access.

You can find detailed prerequisites and associated instructions in the README file.

Deploying the solution

The deployment creates an ECR repository, a Secrets Manager secret, a Lambda function built from a base container image uploaded to the ECR repo, and associated elastic network interfaces (ENIs) for VPC access.

Because of the complexity of the deployment, a combination of Bash and Python scripts automates the process by automatically deploying infrastructure templates, building and pushing container images, and prompting for input where required. Refer to the README included in the repository for detailed instructions.

To deploy:

  1. Ensure you have met the prerequisites.
  2. Open the README file in the repository and follow the deployment instructions
    1. Configure your local AWS CLI environment.
    2. Configure the project environment variables file.
    3. Run the deployment scripts.
  3. Test connectivity by invoking the deployed Lambda function
  4. Change infrastructure and code for specific queries and use cases

Cleanup

To avoid incurring additional charges, ensure that you delete unused resources. The README contains detailed instructions. You may either manually delete the resources provisioned through the AWS Management Console, or use the automated cleanup script in the repository. The deletion of resources may take up to 45 minutes to complete because of the ENIs created for Lambda in your VPC.

Conclusion

In this blog post, you learn how to run external transactions securely on Db2 for IBM i databases using a combination of Amazon ECR and AWS Lambda. By using Docker to package the driver, forwarder, and custom queries, you can execute transactions from Lambda, allowing modern architectures to interface directly with Db2 workloads. Get started by cloning the GitHub repository and following the deployment instructions.

For more serverless learning resources, visit Serverless Land.

Migrating mainframe JCL jobs to serverless using AWS Step Functions

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/migrating-mainframe-jcl-jobs-to-serverless-using-aws-step-functions/

This post is written by Raghuveer Reddy Talakola, Sr. Modernization Architect, Sanjay Rao, Sr. Mainframe Consultant, and Aneel Murari, Solution Architect.

JCL (Job Control Language) is a scripting language used to program batch jobs on mainframe systems. A JCL can contain one to many job control statements. It can be challenging to understand the condition code parameter checking syntax, which determines the order and conditions under which these statements are run.

If a JCL fails midway through execution, mainframe programmers have no visual aids to help them understand the flow of the JCL. They must examine text-based execution logs to manually correlate condition codes in the logs with condition check rules attached to JCL statements to understand the root cause of failure.

This post explains how AWS Step Functions can make it easier to maintain batch jobs migrated from mainframes to AWS.

Overview

The sample application shows how to use AWS Step Functions to address typical challenges when maintaining a batch workflow built using JCL. The sample business case validates a feed of new employee information against an existing employee database. It identifies discrepancies between the feed and the database and sends out notifications if it finds any.

The mainframe JCL supplied with this blog has seven steps. Each step applies condition code rules to check codes emitted by previous steps to decide if it must run. The Step Functions example achieves the same result. Using its graphical user interface, you can develop each step as an independent task, and link them visually. This makes it easier to understand how to decouple, reorder, or scale tasks if needed.

Visual tools for workflow analysis

A JCL controls its flow by using condition code checking and/or IF-ELSE statements. A JCL condition code check defines the rules under which its associated JCL step will not run. Developers may code compound rules, double negatives, or triple or more negative conditions into the flow.

Example of condition code check in JCL What it means
//STEPTS2 EXEC PGM=XYZ,COND=(4,GT,STEPTST) Do not execute PGM XYZ if previous step STEPTST ended execution with a code greater than 4
//STEPTS3 EXEC PGM=XYZ,COND=EVEN Execute PGM XYZ even if all the previous steps failed
//STEPTS5 EXEC PGM=XYZ,COND=((6,EQ),(8,GT)) Do not execute PGM XYZ if any of the preceding steps exited with return code 6 or a code greater than 8

The sample JCL illustrates the complexity of setting up a batch workflow using JCL condition code:

  1. The first step of this JCL deletes files from a previous run. If it ends with code 0, the second JCL step extracts employee data from Db2 using a COBOL program and ends with a return code 0 if it is successful or 4 if no records were found.
  2. The next step, coded with condition check (4,LT), runs if all preceding steps ended with codes less than 5. It checks the external extract and emits a condition code of 8 if the external extract is empty.
  3. The next step compares the two files if the extract validation step produced a return code of zero.
  4. If this comparison step detects some records that are missing in the employee Db2 database, it creates a file with missing records. If that file is empty, it sets a return code of 8, which ends the program. If the mismatch file has data, it copies the mismatch file over to another system for processing.

With Step Functions, you define the same workflow more easily by using the Amazon States Language (ASL). The Step Functions console provides a graphical representation of the state machine so that you can visualize the application logic and edit it using a drag-and-drop interface.

Step Functions Workflow Studio

  1. The first task fetches the employee file from Amazon S3. It does not need a cleanup task as S3 supports versioning.
  2. If the fetched file is not empty, control passes to the step that runs business logic code inside an AWS Lambda function to validate the employee feed.
  3. The workflow retrieves an environment variable from an external parameter store. This step shows how environment parameters can be externalized in a Step Functions workflow.
  4. It publishes an event to Amazon EventBridge to trigger the external processing needed if discrepancies are found and conditions are met.
  5. The final step is a Succeeded state that marks flow completion.

The following image compares the sample JCL that is converted to a Step Functions workflow:

Sample JCL and Step Functions

Using a graphical interface instead of job control statements

In JCL, you define a batch process with a series of job control statements, which run a program, utility, or a nested procedure in a text editor. There is no visual aid. If a batch process becomes complex, it’s harder to understand the dependencies between the steps.

Step Functions makes it simpler for you to set up tasks, which are the equivalents of steps in JCL. It provides you with a graphical user interface (GUI) that enables you to configure and drag-and-drop steps into a state machine.

Decoupling tasks instead of deleting and commenting of code

To disable or change a step in a JCL, you examine the condition code logic associated with all preceding and succeeding steps of the job. Any mistake in editing these codes can lead to unintended consequences.

With Step Functions, removing or changing a step can be done using the visual editor or by updating the ASL code. This can help improve your agility and make it easier to implement change.

Using Parameter Store instead of editing parameters in code

To make a JCL behave differently based on parameters, you must edit dynamic variables known as JCL symbols inside the JCL or in control cards. The following JCL code sample shows a parameter called REGN set to the value DEV. At runtime, this REGN parameter is substituted by DEV in every statement that references it. To reuse this JCL in production, you change the value assigned to REGN to, say, PROD.

//   SET REGN=DEV
//    -------
//******************************************************************
//*  RUN  Db2 COBOL Batch Program 
//******************************************************************
//EXTRDB2 EXEC PGM=IKJEFT01,COND=(0,NE)                                
//    -------
//FILEOUT  DD DSN=&REGN..AWS.APG.STEPDB2,                             
//******************************************************************
//*  RUN  VSAM COBOL Batch Program 
//******************************************************************
//    -------
//FILE2    DD DSN=&REGN..AWS.APG.STEPVSM,                             

In Step Functions, configuration parameters can be decoupled from state machine code by managing them in an external data store such as Amazon DynamoDB or AWS Systems Manager Parameter Store. In the Step Functions workflow, the following step demonstrates retrieving a configuration value from Parameter Store and using it to perform branching logic:

Workflow example
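For comparison, if you implemented the same lookup inside a Lambda task instead of a direct Step Functions service integration, a hedged sketch might look like the following; the parameter name and values are placeholders:

import boto3

ssm = boto3.client("ssm")

def handler(event, context):
    # Retrieve the externalized configuration value instead of editing JCL symbols in the job.
    region_config = ssm.get_parameter(Name="/batch/employee-feed/REGN")["Parameter"]["Value"]
    event["qualifier"] = region_config        # e.g. "DEV" or "PROD", used for branching downstream
    return event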

Independent scaling of steps versus splitting and cloning JCLs

When a JCL takes a long time to run, mainframe programmers split the job into multiple jobs or steps. Each job is a replica addressing different ranges of data.

With Step Functions, you can run a step or a group of steps concurrently by using a parallel state or map state, without creating multiple jobs that do the same thing. This can help make maintenance easier.

Improved observability and automated retry

If a JCL fails, there are no visual aids to help debug the errors. On the mainframe, you must log into the mainframe and run through several screens of text on SDSF (System Display and Search Facility) to find the cause of the failure.

Step Functions provides visual information on failures, automated retry capabilities, and native integration with other AWS services. This can make it easier to understand and recover from failed jobs compared with reading through lengthy logs.

JCL example

Workflow visualization
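As a hedged illustration of the automated retry behavior, the following sketch defines a task state with a Retry rule in the Amazon States Language, expressed as a Python dictionary and deployed with boto3; the state machine name, ARNs, and retry values are placeholders:

import json
import boto3

definition = {
    "StartAt": "ValidateEmployeeFeed",
    "States": {
        "ValidateEmployeeFeed": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-feed",
            "Retry": [{
                # Retry the task up to three times with exponential backoff.
                "ErrorEquals": ["States.TaskFailed"],
                "IntervalSeconds": 5,
                "MaxAttempts": 3,
                "BackoffRate": 2.0,
            }],
            "End": True,
        }
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="employee-feed-validation",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
)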

Benefits for developers

Step Functions provides the following improvements over jobs written in JCL or migrated from JCL.

  • Visual analysis: Step Functions provides a graphical console that shows the status of each task in a visual presentation that developers and support staff can understand and debug more easily than a failed JCL.
  • Decoupling: You can update each component in the workflow independently, unlike in a JCL, where changing a step requires redeployment of the entire batch job to production.
  • Low code: Step Functions are defined with minimal code. The workflow editor can be used to drag and drop different steps and visually edit the workflows.
  • Independent scaling of steps: Step Functions is a serverless solution, and each step can scale independently. This opens up the possibility of scaling up resources for steps that are resource-intensive.
  • Automated retry capabilities: You can configure Step Functions to retry steps and recover from failures. This is much simpler than coding restart conditions in the JCL.
  • Improved logging and visibility: Step Functions can integrate with observability tools like Amazon CloudWatch and AWS X-Ray.

Conclusion

This conversion example shows how Step Functions can help you rewrite complex batch processes written in JCL to serverless workflows. It also shows how such a conversion provides maintenance and monitoring features that make it easier to simplify and scale these batch processes.

To learn more, download the sample JCL and Step Functions workflow from the GitHub repository. To learn more about our AWS Mainframe migration and modernization services, go here.

For more serverless learning resources, visit Serverless Land.

Scaling AWS Lambda permissions with Attribute-Based Access Control (ABAC)

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/scaling-aws-lambda-permissions-with-attribute-based-access-control-abac/

This blog post is written by Chris McPeek, Principal Solutions Architect.

AWS Lambda now supports attribute-based access control (ABAC), allowing you to control access to Lambda functions within AWS Identity and Access Management (IAM) using tags. With ABAC, you can scale an access control strategy by setting granular permissions with tags without requiring permissions updates for every new user or resource as your organization scales.

This blog post shows how to use tags for conditional access to Lambda resources. You can control access to Lambda resources using ABAC by using one or more tags within IAM policy conditions. This can help you scale permissions in rapidly growing environments. To learn more about ABAC, see What is ABAC for AWS, and AWS Services that work with IAM.

Each tag in AWS is a label comprising a user-defined key and value. Customers often use tags with Lambda functions to define keys such as cost center, environment, project, and teams, along with values that map to these keys. This helps with discovery and cost allocation, especially in accounts that may have many Lambda functions. AWS best practices for tagging are included in Tagging AWS resources.

You can now use these same tags, or create new ones, and use them to grant conditional IAM access to Lambda functions more easily. As projects start and finish, employees move to different teams, and applications grow, maintaining access to resources can become cumbersome. ABAC helps developers and security administrators work together to maintain least privilege access to their resources more effectively by using the same tags on IAM roles and Lambda functions. Security administrators can allow or deny access to Lambda API actions when the IAM role tags match the tags on a Lambda function, ensuring least privilege. As developers add additional Lambda functions to the project, they simply apply the same tag when they create a new Lambda function, which grants the same security credentials.

ABAC in Lambda

Using ABAC with Lambda is similar to developing ABAC policies when working with other services. To illustrate how to use ABAC with Lambda, consider a scenario where two new developers join existing projects called Project Falcon and Project Eagle. Project Falcon uses ABAC for authorization using the tag key project-name and value falcon. Project Eagle uses the tag key project-name and value eagle.

Projects Falcon and Eagle tags

The two new developers need access to the Lambda console. The security administrator creates the following policy to allow the developers to list the existing functions that are available using ListFunctions. The GetAccountSettings permission allows them to retrieve Lambda-specific information about their account.

{
"Version": "2012-10-17",
"Statement": [
    {
    "Sid": "AllResourcesLambdaNoTags",
    "Effect": "Allow",
    "Action": [
        "lambda:ListFunctions",
        "lambda:GetAccountSettings"
    ],
    "Resource": "*"
    }
]
}

Condition key mappings

The developers then need access to Lambda actions that are part of their projects. The Lambda actions are API calls such as InvokeFunction or PutFunctionConcurrency (see the following table). IAM condition keys are then used to refine the conditions under which an IAM policy statement applies.

Lambda supports the existing global context key:

  • "aws:PrincipalTag/${TagKey}": Control what the IAM principal (the person making the request) is allowed to do based on the tags that are attached to their IAM user or role.

As part of ABAC support, Lambda now supports three additional condition keys:

  • "aws:ResourceTag/${TagKey}": Control access based on the tags that are attached to Lambda functions.
  • "aws:RequestTag/${TagKey}": Require tags to be present in a request, such as when creating a new function.
  • "aws:TagKeys": Control whether specific tag keys can be used in a request.

For more details on these condition context keys, see AWS global condition context keys.

When using condition keys in IAM policies, each Lambda API action supports different tagging condition keys. The following table maps each condition key to its Lambda actions.

Condition keys supported Description Lambda actions
aws:ResourceTag/${TagKey} Set this tag value to allow or deny user actions on resources with specific tags.
lambda:AddPermission
lambda:CreateAlias
lambda:CreateFunctionUrlConfig
lambda:DeleteAlias
lambda:DeleteFunction
lambda:DeleteFunctionCodeSigningConfig
lambda:DeleteFunctionConcurrency
lambda:DeleteFunctionEventInvokeConfig
lambda:DeleteFunctionUrlConfig
lambda:DeleteProvisionedConcurrencyConfig
lambda:DisableReplication
lambda:EnableReplication
lambda:GetAlias
lambda:GetFunction
lambda:GetFunctionCodeSigningConfig
lambda:GetFunctionConcurrency
lambda:GetFunctionConfiguration
lambda:GetFunctionEventInvokeConfig
lambda:GetFunctionUrlConfig
lambda:GetPolicy
lambda:GetProvisionedConcurrencyConfig
lambda:InvokeFunction
lambda:InvokeFunctionUrl
lambda:ListAliases
lambda:ListFunctionEventInvokeConfigs
lambda:ListFunctionUrlConfigs
lambda:ListProvisionedConcurrencyConfigs
lambda:ListTags
lambda:ListVersionsByFunction
lambda:PublishVersion
lambda:PutFunctionCodeSigningConfig
lambda:PutFunctionConcurrency
lambda:PutFunctionEventInvokeConfig
lambda:PutProvisionedConcurrencyConfig
lambda:RemovePermission
lambda:UpdateAlias
lambda:UpdateFunctionCode
lambda:UpdateFunctionConfiguration
lambda:UpdateFunctionEventInvokeConfig
lambda:UpdateFunctionUrlConfig

aws:ResourceTag/${TagKey}
aws:RequestTag/${TagKey}

aws:TagKeys
Set this tag value to allow or deny user requests to create a Lambda function. lambda:CreateFunction
aws:ResourceTag/${TagKey}
aws:RequestTag/${TagKey}

aws:TagKeys
Set this tag value to allow or deny user requests to add or update tags. lambda:TagResource
aws:ResourceTag/${TagKey}
aws:TagKeys
Set this tag value to allow or deny user requests to remove tags. lambda:UntagResource

Security administrators create conditions that only permit the action if the tag matches between the role and the Lambda function.
In this example, the policy grants access to all Lambda function API calls when a project-name tag exists and matches on both the developer’s IAM role and the Lambda function.

{
"Version": "2012-10-17",
"Statement": [
    {
    "Sid": "AllActionsLambdaSameProject",
    "Effect": "Allow",
    "Action": [
        "lambda:InvokeFunction",
        "lambda:UpdateFunctionConfiguration",
        "lambda:CreateAlias",
        "lambda:DeleteAlias",
        "lambda:DeleteFunction",
        "lambda:DeleteFunctionConcurrency", 
        "lambda:GetAlias",
        "lambda:GetFunction",
        "lambda:GetFunctionConfiguration",
        "lambda:GetPolicy",
        "lambda:ListAliases", 
        "lambda:ListVersionsByFunction",
        "lambda:PublishVersion",
        "lambda:PutFunctionConcurrency",
        "lambda:UpdateAlias",
        "lambda:UpdateFunctionCode"
    ],
    "Resource": "arn:aws:lambda:*:*:function:*",
    "Condition": {
        "StringEquals": {
        "aws:ResourceTag/project-name": "${aws:PrincipalTag/project-name}"
        }
    }
    }
]
}

In this policy, the Resource is wildcarded as "arn:aws:lambda:*:*:function:*" to cover all Lambda functions. The condition limits access to only resources that have the same project-name key and value, without having to list each individual Amazon Resource Name (ARN).

The security administrator creates an IAM role for each developer’s project, such as falcon-developer-role or eagle-developer-role. Since the policy references both the function tags and the IAM role tags, she can reuse the previous policy and apply it to both of the project roles. Each role should have the tag key project-name with the value set to the project, such as falcon or eagle. The following shows the tags for Project Falcon:

Tags for Project Falcon

The developers now have access to the existing Lambda functions in their respective projects. The developer for Project Falcon needs to create additional Lambda functions for only their project. Since the project-name tag also authorizes who can access the function, the developer should not be able to create a function without the correct tags. To enforce this, the security administrator applies a new policy to the developer’s role using the RequestTag condition key to specify that a project-name tag exists:

{
"Version": "2012-10-17",
"Statement": [
    {
    "Sid": "AllowLambdaTagOnCreate",
    "Effect": "Allow",
    "Action": [
        "lambda:CreateFunction",
        "lambda:TagResource"
    ],
    "Resource": "arn:aws:lambda:*:*:function:*",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/project-name": "${aws:PrincipalTag/project-name}"
        },
        "ForAllValues:StringEquals": {
            "aws:TagKeys": [
                "project-name"
            ]
        }
    }
    }
]
}

To create the functions, the developer must add the key project-name and value falcon to the tags. Without the tag, the developer cannot create the function.

Project Falcon tags
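For example, a hedged boto3 sketch of creating a function with the required tag applied at creation time might look like the following; the function name, role ARN, and code location are placeholders:

import boto3

lambda_client = boto3.client("lambda")

lambda_client.create_function(
    FunctionName="falcon-data-processor",
    Runtime="python3.9",
    Role="arn:aws:iam::123456789012:role/falcon-lambda-execution-role",
    Handler="app.handler",
    Code={"S3Bucket": "falcon-artifacts", "S3Key": "falcon-data-processor.zip"},
    Tags={"project-name": "falcon"},   # satisfies the aws:RequestTag condition in the policy
)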

Because Project Falcon is using ABAC, by tagging the Lambda functions during creation, they did not need to engage the security administrator to add additional ARNs to the IAM policy. This provides flexibility to the developers to support their projects. This also helps scale the security administrators’ function by no longer needing to coordinate which resources need to be added to IAM policies to maintain least privilege access.

The project then adds a manager who requires read access to the projects, as long as those projects are also in the organization labeled birds and have the cost-center tag it.

Organization and Cost Center tags

The security administrator creates a new IAM policy called manager-policy with the following statements:

{
"Version": "2012-10-17",
"Statement": [
    {
    "Sid": "AllActionsLambdaManager",
    "Effect": "Allow",
    "Action": [
        "lambda:GetAlias",
        "lambda:GetFunction",
        "lambda:GetFunctionConfiguration",
        "lambda:GetPolicy",
        "lambda:ListAliases",
        "lambda:ListVersionsByFunction"
    ],
    "Resource": "arn:aws:lambda:*:*:function:*",
    "Condition": {
        "StringEquals": {
            "aws:ResourceTag/organization": "${aws:PrincipalTag/organization}",
            "aws:ResourceTag/cost-center": "${aws:PrincipalTag/cost-center}"
        }
    }
    }
]
}

The security administrator attaches the policy to the manager’s role and tags the role with organization:birds and cost-center:it. If any of the projects change organization, the manager no longer has access, even if the cost-center remains it.

In this policy, the condition ensures that both the cost-center and organization tags exist on the function and that their values are equal to the tags on the manager’s role. If the cost-center tag matches for both the Lambda function and the manager’s role but the manager’s organization tag does not match, IAM denies access to the Lambda function. Tags themselves are only a key:value pair with no relationship to other tags. You can use multiple tags, as in this example, to more granularly define Lambda function permissions.

Conclusion

You can now use attribute-based access control (ABAC) with Lambda to control access to functions using tags. This allows you to scale your access controls by simplifying the management of permissions while still maintaining least privilege security best practices. Security administrators can coordinate with developers on a tagging strategy and create IAM policies with ABAC condition keys. This then gives freedom to developers to grow their applications by adding tags to functions, without needing a security administrator to update individual IAM policies.

Support for attribute-based access control (ABAC) with Lambda functions is also available through many AWS Lambda Partners, such as Lumigo, Pulumi, and Vertical Relevance.

For additional documentation on ABAC with Lambda see Attribute-based access control for Lambda.

Introducing Amazon CodeWhisperer in the AWS Lambda console (In preview)

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/introducing-amazon-codewhisperer-in-the-aws-lambda-console-in-preview/

This blog post is written by Mark Richman, Senior Solutions Architect.

Today, AWS is launching a new capability to integrate the Amazon CodeWhisperer experience with the AWS Lambda console code editor.

Amazon CodeWhisperer is a machine learning (ML)–powered service that helps improve developer productivity. It generates code recommendations based on developers’ code and comments written in natural language.

CodeWhisperer is available as part of the AWS toolkit extensions for major IDEs, including JetBrains, Visual Studio Code, and AWS Cloud9, currently supporting Python, Java, and JavaScript. In the Lambda console, CodeWhisperer is available as a native code suggestion feature, which is the focus of this blog post.

CodeWhisperer is currently available in preview with a waitlist. This blog post explains how to request access to and activate CodeWhisperer for the Lambda console. Once activated, CodeWhisperer can make code recommendations on-demand in the Lambda code editor as you develop your function. During the preview period, developers can use CodeWhisperer at no cost.

Amazon CodeWhisperer

Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications and only pay for what you use.

With Lambda, you can build your functions directly in the AWS Management Console and take advantage of CodeWhisperer integration. CodeWhisperer in the Lambda console currently supports functions using the Python and Node.js runtimes.

When writing AWS Lambda functions in the console, CodeWhisperer analyzes the code and comments, determines which cloud services and public libraries are best suited for the specified task, and recommends a code snippet directly in the source code editor. The code recommendations provided by CodeWhisperer are based on ML models trained on a variety of data sources, including Amazon and open source code. Developers can accept the recommendation or simply continue to write their own code.

Requesting CodeWhisperer access

CodeWhisperer integration with Lambda is currently available as a preview only in the N. Virginia (us-east-1) Region. To use CodeWhisperer in the Lambda console, you must first sign up to access the service in preview here or request access directly from within the Lambda console.

In the AWS Lambda console, under the Code tab, in the Code source editor, select the Tools menu, and choose Request Amazon CodeWhisperer Access.

Request CodeWhisperer access in Lambda console

You may also request access from the Preferences pane.

Request CodeWhisperer access in Lambda console preference pane

Selecting either of these options opens the sign-up form.

CodeWhisperer sign up form

Enter your contact information, including your AWS account ID. This is required to enable the AWS Lambda console integration. You will receive a welcome email from the CodeWhisperer team once they approve your request.

Activating Amazon CodeWhisperer in the Lambda console

Once AWS enables your preview access, you must turn on the CodeWhisperer integration in the Lambda console, and configure the required permissions.

From the Tools menu, enable Amazon CodeWhisperer Code Suggestions.

Enable CodeWhisperer code suggestions

You can also enable code suggestions from the Preferences pane:

Enable CodeWhisperer code suggestions from Preferences pane

The first time you activate CodeWhisperer, you see a pop-up containing terms and conditions for using the service.

CodeWhisperer Preview Terms

Read the terms and conditions and choose Accept to continue.

AWS Identity and Access Management (IAM) permissions

For CodeWhisperer to provide recommendations in the Lambda console, you must enable the proper AWS Identity and Access Management (IAM) permissions for either your IAM user or role. In addition to Lambda console editor permissions, you must add the codewhisperer:GenerateRecommendations permission.

Here is a sample IAM policy that grants a user permission to the Lambda console as well as CodeWhisperer:

{
  "Version": "2012-10-17",
  "Statement": [{
      "Sid": "LambdaConsolePermissions",
      "Effect": "Allow",
      "Action": [
        "lambda:AddPermission",
        "lambda:CreateEventSourceMapping",
        "lambda:CreateFunction",
        "lambda:DeleteEventSourceMapping",
        "lambda:GetAccountSettings",
        "lambda:GetEventSourceMapping",
        "lambda:GetFunction",
        "lambda:GetFunctionCodeSigningConfig",
        "lambda:GetFunctionConcurrency",
        "lambda:GetFunctionConfiguration",
        "lambda:InvokeFunction",
        "lambda:ListEventSourceMappings",
        "lambda:ListFunctions",
        "lambda:ListTags",
        "lambda:PutFunctionConcurrency",
        "lambda:UpdateEventSourceMapping",
        "iam:AttachRolePolicy",
        "iam:CreatePolicy",
        "iam:CreateRole",
        "iam:GetRole",
        "iam:GetRolePolicy",
        "iam:ListAttachedRolePolicies",
        "iam:ListRolePolicies",
        "iam:ListRoles",
        "iam:PassRole",
        "iam:SimulatePrincipalPolicy"
      ],
      "Resource": "*"
    },
    {
      "Sid": "CodeWhispererPermissions",
      "Effect": "Allow",
      "Action": ["codewhisperer:GenerateRecommendations"],
      "Resource": "*"
    }
  ]
}

This example is for illustration only. It is best practice to use IAM policies to grant restrictive permissions to IAM principals to meet least privilege standards.

Demo

To activate and work with code suggestions, use the following keyboard shortcuts:

  • Manually fetch a code suggestion: Option+C (macOS), Alt+C (Windows)
  • Accept a suggestion: Tab
  • Reject a suggestion: ESC, Backspace, scroll in any direction, or keep typing and the recommendation automatically disappears.

Currently, the IDE extensions provide automatic suggestions and can show multiple suggestions. The Lambda console integration requires a manual fetch and shows a single suggestion.

Here are some common ways to use CodeWhisperer while authoring Lambda functions.

Single-line code completion

When typing single lines of code, CodeWhisperer suggests how to complete the line.

CodeWhisperer single-line completion

Full function generation

CodeWhisperer can generate an entire function based on your function signature or code comments. In the following example, a developer has written a function signature for reading a file from Amazon S3. CodeWhisperer then suggests a full implementation of the read_from_s3 method.

CodeWhisperer full function generation

CodeWhisperer may include import statements as part of its suggestions, as in the previous example. As a best practice to improve performance, manually move these import statements to outside the function handler.
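
For example, after that refactoring, the suggested read_from_s3 implementation might resemble the following sketch, with boto3 imported and the client created at module level; the bucket and key parameters are illustrative.

import boto3

# Created once per execution environment, outside the handler,
# so the client is reused across invocations.
s3_client = boto3.client("s3")

def read_from_s3(bucket, key):
    # Retrieve the object and return its body as a string.
    response = s3_client.get_object(Bucket=bucket, Key=key)
    return response["Body"].read().decode("utf-8")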

Generate code from comments

CodeWhisperer can also generate code from comments. The following example shows how CodeWhisperer generates code to use AWS APIs to upload files to Amazon S3. Write a comment describing the intended functionality and, on the following line, activate the CodeWhisperer suggestions. Given the context from the comment, CodeWhisperer first suggests the function signature code in its recommendation.

CodeWhisperer generate function signature code from comments

After you accept the function signature, CodeWhisperer suggests the rest of the function code.

CodeWhisperer generate function code from comments

When you accept the suggestion, CodeWhisperer completes the entire code block.

CodeWhisperer generates code to write to S3.

CodeWhisperer can help write code that accesses many other AWS services. In the following example, a code comment indicates that a function is sending a notification using Amazon Simple Notification Service (SNS). Based on this comment, CodeWhisperer suggests a function signature.

CodeWhisperer function signature for SNS

If you accept the suggested function signature, CodeWhisperer suggests a complete implementation of the send_notification function.

CodeWhisperer function send notification for SNS

The same procedure works with Amazon DynamoDB. When writing a code comment indicating that the function is to get an item from a DynamoDB table, CodeWhisperer suggests a function signature.

CodeWhisperer DynamoDB function signature

When you accept the suggestion, CodeWhisperer suggests a full code snippet to complete the implementation.

CodeWhisperer DynamoDB code snippet

After reviewing the suggestion, a common refactoring step in this example would be to manually move the references to the DynamoDB resource and table outside the get_item function, as shown in the following sketch.
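
A sketch of that refactoring might look like the following, with the DynamoDB resource and table created at module level so they are reused across invocations; the table name and key attribute are placeholders.

import boto3

# Created once, outside the function, so the connection and table
# reference are reused instead of being recreated on every call.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("products")  # placeholder table name

def get_item(product_id):
    # Look up a single item by its partition key.
    response = table.get_item(Key={"id": product_id})
    return response.get("Item")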

CodeWhisperer can also recommend complex algorithm implementations, such as Insertion sort.

CodeWhisperer insertion sort.

As a best practice, always test the code recommendation for completeness and correctness.

CodeWhisperer not only provides suggested code snippets when integrating with AWS APIs, but can help you implement common programming idioms, including proper error handling.
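
As an illustration, a suggestion for this kind of idiom might resemble the following sketch, which wraps an AWS SDK call and returns a structured error response; the bucket name and event fields are placeholders.

import boto3
from botocore.exceptions import ClientError

s3_client = boto3.client("s3")

def handler(event, context):
    try:
        # Store the request payload in S3; bucket and keys are illustrative.
        s3_client.put_object(
            Bucket="example-bucket",
            Key=event["key"],
            Body=event["body"]
        )
        return {"statusCode": 200, "body": "Object stored"}
    except ClientError as error:
        # AWS service errors carry a structured error response.
        return {"statusCode": 500, "body": error.response["Error"]["Message"]}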

Conclusion

CodeWhisperer is a general purpose, machine learning-powered code generator that provides you with code recommendations in real time. When activated in the Lambda console, CodeWhisperer generates suggestions based on your existing code and comments, helping to accelerate your application development on AWS.

To get started, visit https://aws.amazon.com/codewhisperer/. Share your feedback with us at [email protected].

For more serverless learning resources, visit Serverless Land.

Creating a serverless Apache Kafka publisher using AWS Lambda

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/creating-a-serverless-apache-kafka-publisher-using-aws-lambda/

This post is written by Philipp Klose, Global Solution Architect, and Daniel Wessendorf, Global Solution Architect.

Streaming data and event-driven architectures are becoming more popular for many modern systems. The range of use cases includes web tracking and other logs, industrial IoT, in-game player activity, and the ingestion of data for modern analytics architecture.

One of the most popular technologies in this space is Apache Kafka. This is an open-source distributed event streaming platform used by many customers for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.

Kafka is based on a simple but powerful pattern. The Kafka cluster itself is a highly available broker that receives messages from various producers. The received messages are stored in topics, which are the primary storage abstraction.

Various consumers can subscribe to a Kafka topic and consume messages. In contrast to classic queuing systems, the consumers do not remove the message from the topic but store the individual reading position on the topic. This allows for multiple different patterns for consumption (for example, fan-out or consumer-groups).

Producer and consumer

Producer and consumer libraries for Kafka are available in various programming languages and technologies. This blog post focuses on using serverless and cloud-native technologies for the producer side.

Overview

This example walks you through how to build a serverless real-time stream producer application using Amazon API Gateway and AWS Lambda.

For testing, this blog includes a sample AWS Cloud Development Kit (CDK) application. This creates a demo environment, including an Amazon Managed Streaming for Apache Kafka (MSK) cluster and a bastion host for observing the produced messages on the cluster.

The following diagram shows the architecture of an application that pushes API requests to a Kafka topic in real time, which you build in this blog post:

Architecture overview

  1. An external application calls an Amazon API Gateway endpoint
  2. Amazon API Gateway forwards the request to a Lambda function
  3. AWS Lambda function behaves as a Kafka producer and pushes the message to a Kafka topic
  4. A Kafka “console consumer” on the bastion host then reads the message

The demo shows how to use Lambda Powertools for Java to streamline logging and tracing, and an IAM authenticator to simplify the cluster authentication process. The following sections take you through the steps to deploy, test, and observe the example application.

Prerequisites

The example has the following prerequisites:

Example walkthrough

  1. Clone the project GitHub repository. Change directory to subfolder serverless-kafka-iac:
    git clone https://github.com/aws-samples/serverless-kafka-producer
    cd serverless-kafka-iac
    
  2. Configure environment variables:
    export CDK_DEFAULT_ACCOUNT=$(aws sts get-caller-identity --query 'Account' --output text)
    export CDK_DEFAULT_REGION=$(aws configure get region)
    
  3. Prepare the virtual Python environment:
    python3 -m venv .venv
    source .venv/bin/activate
    pip3 install -r requirements.txt
    
  4. Bootstrap your account for CDK usage:
    cdk bootstrap aws://$CDK_DEFAULT_ACCOUNT/$CDK_DEFAULT_REGION
  5. Run ‘cdk synth’ to build the code and test the requirements:
    cdk synth
  6. Run ‘cdk deploy’ to deploy the code to your AWS account:
    cdk deploy --all

Testing the example

To test the example, log into the bastion host and start a consumer console to observe the messages being added to the topic. You generate messages for the Kafka topics by sending calls via API Gateway from your development machine or AWS Cloud9 environment.

  1. Use AWS Systems Manager to log in to the bastion host. Use the KafkaDemoBackendStack.bastionhostbastion output parameter to connect, or connect through the Systems Manager console.
    aws ssm start-session --target <Bastion Host Instance Id> 
    sudo su ec2-user
    cd /home/ec2-user/kafka_2.13-2.6.3/bin/
    
  2. Create a topic named messages on the MSK cluster:
    ./kafka-topics.sh --bootstrap-server $ZK --command-config client.properties --create --replication-factor 3 --partitions 3 --topic messages
  3. Open a Kafka consumer console on the bastion host to observe incoming messages:
    ./kafka-console-consumer.sh --bootstrap-server $ZK --topic messages --consumer.config client.properties
    
  4. Open another terminal on your development machine to create test requests using the “ServerlessKafkaProducerStack.kafkaproxyapiEndpoint” output parameter of the CDK stack. Append “/event” for the final URL. Use curl to send the API request:
    curl -X POST -d "Hello World" <ServerlessKafkaProducerStack.messagesapiendpointEndpoint>
  5. For load testing the application, it is important to calibrate the parameters. You can use a tool like Artillery to simulate workloads. You can find a sample Artillery script in the /load-testing folder of the repository cloned in step 1.
  6. Observe the incoming request in the bastion host terminal.

All components in this example integrate with AWS X-Ray. With AWS X-Ray, you can trace the entire application, which is useful to identify bottlenecks when load testing. You can also trace method execution at the Java method level.

Lambda Powertools for Java allows you to accelerate this process by adding the @Tracing annotation to see traces at the method level in X-Ray.

To trace a request end to end:

  1. Navigate to the CloudWatch console.
  2. Open the Service map.
  3. Select a component to investigate (for example, the Lambda function where you deployed the Kafka producer). Choose View traces.
    X-Ray console
  4. Select a single Lambda method invocation and investigate further at the Java method level.
    X-Ray detail

Cleaning up

In the subdirectory “serverless-kafka-iac”, delete the test infrastructure:

cdk destroy --all

Implementation of a Kafka producer in Lambda

Kafka natively supports Java. To stay open and cloud native, and to avoid third-party dependencies, the producer is written in that language. Currently, the IAM authenticator is only available for Java. In this example, the Lambda handler receives a message from an Amazon API Gateway source and pushes this message to an MSK topic called "messages".

Typically, Kafka producers are long-lived, and pushing a message to a Kafka topic is an asynchronous process. Because Lambda execution environments are ephemeral, you must ensure that a submitted message is fully flushed before the Lambda function ends by calling producer.flush().

    @Override
    @Tracing
    @Logging(logEvent = true)
    public APIGatewayProxyResponseEvent 
    handleRequest(APIGatewayProxyRequestEvent input, Context context) {
        APIGatewayProxyResponseEvent response = createEmptyResponse();
        try {

            String message = getMessageBody(input);

            KafkaProducer<String, String> producer = createProducer();

            ProducerRecord<String, String> record = new ProducerRecord<String, String>(TOPIC_NAME, context.getAwsRequestId(), message);

            Future<RecordMetadata> send = producer.send(record);
            producer.flush();

            RecordMetadata metadata = send.get();
            log.info(String.format("Message was sent to partition %s", metadata.partition()));

            return response.withStatusCode(200).withBody("Message successfully pushed to Kafka");
        } catch (Exception e) {
            log.error(e.getMessage(), e);
            return response.withBody(e.getMessage()).withStatusCode(500);
        }
    }

    @Tracing
    private KafkaProducer<String, String> createProducer() {
        if (producer == null) {
            log.info("Connecting to Kafka cluster");
            producer = new KafkaProducer<String, String>(kafkaProducerProperties.getProducerProperties());
        }
        return producer;
    }

Connect to Amazon MSK using IAM Auth

This example uses IAM authentication to connect to the respective Kafka cluster. See the documentation here, which shows how to configure the producer for connectivity.

Since you configure the cluster via IAM, grant “Connect” and “WriteData” permissions to the producer, so that it can push messages to Kafka.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kafka-cluster:Connect"
            ],
            "Resource": "arn:aws:kafka:region:account-id:cluster/cluster-name/cluster-uuid"
        }
    ]
}


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kafka-cluster:Connect",
                "kafka-cluster:DescribeTopic",
                "kafka-cluster:WriteData"
            ],
            "Resource": "arn:aws:kafka:region:account-id:topic/cluster-name/cluster-uuid/topic-name"
        }
    ]
}

This shows the Kafka excerpt of the IAM policy, which must be applied to the Kafka producer.

When using IAM authentication, be aware of the current limits of IAM Kafka authentication, which affect the number of concurrent connections and IAM requests for a producer. Read https://docs.aws.amazon.com/msk/latest/developerguide/limits.html and follow the recommendation for authentication backoff in the producer client:

        Map<String, String> configuration = Map.of(
                "key.serializer", "org.apache.kafka.common.serialization.StringSerializer",
                "value.serializer", "org.apache.kafka.common.serialization.StringSerializer",
                "bootstrap.servers", getBootstrapServer(),
                "security.protocol", "SASL_SSL",
                "sasl.mechanism", "AWS_MSK_IAM",
                "sasl.jaas.config", "software.amazon.msk.auth.iam.IAMLoginModule required;",
                "sasl.client.callback.handler.class", "software.amazon.msk.auth.iam.IAMClientCallbackHandler",
                "connections.max.idle.ms", "60",
                "reconnect.backoff.ms", "1000"
        );

Elaboration on implementation

Each Kafka broker node can handle a maximum of 20 IAM authentication requests per second. The demo setup has three brokers, which results in 60 requests per second. Therefore, the broker setup limits the number of concurrent Lambda functions to 60.

To reduce IAM authentication requests from the Kafka producer, place it outside of the handler. For frequent calls, there is a chance that Lambda reuses the previously created class instance and only re-executes the handler.

For bursting workloads with a high number of concurrent API Gateway requests, this can lead to dropped messages. While for some workloads, this might be tolerable, for others this might not be the case.

In these cases, you can extend the architecture with a buffering technology like Amazon SQS or Amazon Kinesis Data Streams between API Gateway and Lambda.

To reduce latency, you can reduce cold start times for Java by changing the tiered compilation level to "1" as described in this blog post. Provisioned Concurrency ensures that Lambda execution environments are initialized and ready before requests arrive.

Conclusion

In this post, you learn how to create a serverless integration Lambda function between API Gateway and Amazon Managed Streaming for Apache Kafka (Amazon MSK). We show how to deploy such an integration with the CDK.

The general pattern is suitable for many use cases that need an integration between API Gateway and Apache Kafka. It may have cost benefits over containerized implementations in use cases with sparse, low-volume input streams, and unpredictable or spiky workloads.

For more serverless learning resources, visit Serverless Land.

Simplifying serverless best practices with AWS Lambda Powertools for TypeScript

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/simplifying-serverless-best-practices-with-aws-lambda-powertools-for-typescript/

This blog post is written by Sara Gerion, Senior Solutions Architect.

Development teams must have a shared understanding of the workloads they own and their expected behaviors to deliver business value fast and with confidence. The AWS Well-Architected Framework and its Serverless Lens provide architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the AWS Cloud.

Developers should design and configure their workloads to emit information about their internal state and current status. This allows engineering teams to ask arbitrary questions about the health of their systems at any time. For example, emitting metrics, logs, and traces with useful contextual information enables situational awareness and allows developers to filter and select only what they need.

Following such practices reduces the number of bugs, accelerates remediation, and speeds up the application lifecycle into production. They can help mitigate deployment risks, offer more accurate production-readiness assessments and enable more informed decisions to deploy systems and changes.

AWS Lambda Powertools for TypeScript

AWS Lambda Powertools provides a suite of utilities for AWS Lambda functions to ease the adoption of serverless best practices. The AWS Hero Yan Cui’s initial implementation of DAZN Lambda Powertools inspired this idea.

Following the community’s adoption of AWS Lambda Powertools for Python and AWS Lambda Powertools for Java, we are excited to announce the general availability of the AWS Lambda Powertools for TypeScript.

AWS Lambda Powertools for TypeScript provides a suite of utilities for Node.js runtimes, which you can use in both JavaScript and TypeScript code bases. The library follows a modular approach similar to the AWS SDK v3 for JavaScript. Each utility is installed as a standalone npm package.

Today, the library is ready for production use with three observability features: distributed tracing (Tracer), structured logging (Logger), and asynchronous business and application metrics (Metrics).

You can instrument your code with Powertools in three different ways:

  • Manually. It provides the most granular control. It’s the most verbose approach, with the added benefit of no additional dependency and no refactoring to TypeScript Classes.
  • Middy middleware. It is the best choice if your existing code base relies on the Middy middleware engine. Powertools offers compatible Middy middleware to make this integration seamless.
  • Method decorator. Use TypeScript method decorators if you prefer writing your business logic using TypeScript Classes. If you aren’t using Classes, this requires the most significant refactoring.

The examples in this blog post use the Middy approach. To follow the examples, ensure that middy is installed:

npm i @middy/core

Logger

Logger provides an opinionated logger with output structured as JSON. Its key features include:

  • Capturing key fields from the Lambda context and cold starts, and structuring logging output as JSON.
  • Logging Lambda invocation events when instructed (disabled by default).
  • Printing all the logs only for a percentage of invocations via log sampling (disabled by default).
  • Appending additional keys to structured logs at any point in time.
  • Providing a custom log formatter (Bring Your Own Formatter) to output logs in a structure compatible with your organization’s Logging RFC.

To install, run:

npm install @aws-lambda-powertools/logger

Usage example:

import { Logger, injectLambdaContext } from '@aws-lambda-powertools/logger';
 import middy from '@middy/core';

 const logger = new Logger({
    logLevel: 'INFO',
    serviceName: 'shopping-cart-api',
});

 const lambdaHandler = async (): Promise<void> => {
     logger.info('This is an INFO log with some context');
 };

 export const handler = middy(lambdaHandler)
     .use(injectLambdaContext(logger));

In Amazon CloudWatch, the structured log emitted by your application looks like:

{
     "cold_start": true,
     "function_arn": "arn:aws:lambda:eu-west-1:123456789012:function:shopping-cart-api-lambda-prod-eu-west-1",
     "function_memory_size": 128,
     "function_request_id": "c6af9ac6-7b61-11e6-9a41-93e812345678",
     "function_name": "shopping-cart-api-lambda-prod-eu-west-1",
     "level": "INFO",
     "message": "This is an INFO log with some context",
     "service": "shopping-cart-api",
     "timestamp": "2021-12-12T21:21:08.921Z",
     "xray_trace_id": "abcdef123456abcdef123456abcdef123456"
 }

Logs generated by Powertools can also be ingested and analyzed by any third-party SaaS vendor that supports JSON.

Tracer

Tracer is an opinionated thin wrapper for AWS X-Ray SDK for Node.js.

Its key features include:

  • Auto-capturing cold start and service name as annotations, and responses or full exceptions as metadata.
  • Automatically tracing HTTP(S) clients and generating segments for each request.
  • Supporting tracing functions via decorators, middleware, and manual instrumentation.
  • Supporting tracing AWS SDK v2 and v3 via AWS X-Ray SDK for Node.js.
  • Auto-disable tracing when not running in the Lambda environment.

To install, run:

npm install @aws-lambda-powertools/tracer

Usage example:

import { Tracer, captureLambdaHandler } from '@aws-lambda-powertools/tracer';
 import middy from '@middy/core'; 

 const tracer = new Tracer({
    serviceName: 'shopping-cart-api'
});

 const lambdaHandler = async (): Promise<void> => {
     /* ... Something happens ... */
 };

 export const handler = middy(lambdaHandler)
     .use(captureLambdaHandler(tracer));

AWS X-Ray segments and subsegments emitted by Powertools

Example service map generated with Powertools

Metrics

Metrics create custom metrics asynchronously by logging metrics to standard output following the Amazon CloudWatch Embedded Metric Format (EMF). These metrics can be visualized through CloudWatch dashboards or used to trigger alerts.

Its key features include:

  • Aggregating up to 100 metrics using a single CloudWatch EMF object (large JSON blob).
  • Validating your metrics against common metric definitions mistakes (for example, metric unit, values, max dimensions, max metrics).
  • Metrics are created asynchronously by the CloudWatch service. You do not need any custom stacks, and there is no impact to Lambda function latency.
  • Creating a one-off metric with different dimensions.

To install, run:

npm install @aws-lambda-powertools/metrics

Usage example:

import { Metrics, MetricUnits, logMetrics } from '@aws-lambda-powertools/metrics';
 import middy from '@middy/core';

 const metrics = new Metrics({
    namespace: 'serverlessAirline', 
    serviceName: 'orders'
});

 const lambdaHandler = async (): Promise<void> => {
     metrics.addMetric('successfulBooking', MetricUnits.Count, 1);
 };

 export const handler = middy(lambdaHandler)
     .use(logMetrics(metrics));

In CloudWatch, the custom metric emitted by your application looks like:

{
    "successfulBooking": 1.0,
    "_aws": {
        "Timestamp": 1592234975665,
        "CloudWatchMetrics": [
            {
                "Namespace": "serverlessAirline",
                "Dimensions": [
                    [
                        "service"
                    ]
                ],
                "Metrics": [
                    {
                        "Name": "successfulBooking",
                        "Unit": "Count"
                    }
                ]
            }
        ]
    },
    "service": "orders"
}

Serverless TypeScript demo application

The Serverless TypeScript Demo shows how to use Lambda Powertools for TypeScript. You can find instructions on how to deploy and load test this application in the repository.

Serverless TypeScript Demo architecture

The code for the Get Products Lambda function shows how to use the utilities. The function is instrumented with Logger, Metrics and Tracer to emit observability data.

// blob/main/src/api/get-products.ts
import { APIGatewayProxyEvent, APIGatewayProxyResult} from "aws-lambda";
import { DynamoDbStore } from "../store/dynamodb/dynamodb-store";
import { ProductStore } from "../store/product-store";
import { logger, tracer, metrics } from "../powertools/utilities"
import middy from "@middy/core";
import { captureLambdaHandler } from '@aws-lambda-powertools/tracer';
import { injectLambdaContext } from '@aws-lambda-powertools/logger';
import { logMetrics, MetricUnits } from '@aws-lambda-powertools/metrics';

const store: ProductStore = new DynamoDbStore();
const lambdaHandler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {

  logger.appendKeys({
    resource_path: event.requestContext.resourcePath
  });

  try {
    const result = await store.getProducts();

    logger.info('Products retrieved', { details: { products: result } });
    metrics.addMetric('productsRetrieved', MetricUnits.Count, 1);

    return {
      statusCode: 200,
      headers: { "content-type": "application/json" },
      body: `{"products":${JSON.stringify(result)}}`,
    };
  } catch (error) {
      logger.error('Unexpected error occurred while trying to retrieve products', error as Error);

      return {
        statusCode: 500,
        headers: { "content-type": "application/json" },
        body: JSON.stringify(error),
      };
  }
};

const handler = middy(lambdaHandler)
    .use(captureLambdaHandler(tracer))
    .use(logMetrics(metrics, { captureColdStartMetric: true }))
    .use(injectLambdaContext(logger, { clearState: true, logEvent: true }));

export {
  handler
};

The Logger utility adds useful context to the application logs. Structuring your logs as JSON allows you to search on your structured data using Amazon CloudWatch Logs Insights. This allows you to filter out the information you don’t need.

For example, use the following query to search for any errors for the serverless-typescript-demo service.

fields resource_path, message, timestamp
| filter service = 'serverless-typescript-demo'
| filter level = 'ERROR'
| sort @timestamp desc
| limit 20

CloudWatch Logs Insights showing errors for the serverless-typescript-demo service.

The Tracer utility adds custom annotations and metadata during the function invocation, which it sends to AWS X-Ray. Annotations allow you to search for and filter traces by business or application contextual information such as product ID, or cold start.

You can see the duration of the putProduct method and the ColdStart and Service annotations attached to the Lambda handler function.

putProduct trace view

The Metrics utility simplifies the creation of complex high-cardinality application data. Including structured data along with your metrics allows you to search or perform additional analysis when needed.

In this example, you can see how many times per second a product is created, deleted, or queried. You could configure alarms based on the metrics.

Metrics view

Code examples

You can use Powertools with many Infrastructure as Code or deployment tools. The project contains source code and supporting files for serverless applications that you can deploy with the AWS Cloud Development Kit (AWS CDK) or AWS Serverless Application Model (AWS SAM).

The AWS CDK lets you build reliable and scalable applications in the cloud with the expressive power of a programming language, including TypeScript. The AWS SAM CLI is a command line tool that makes it easier to create and manage serverless applications.

You can use the sample applications provided in the GitHub repository to understand how to use the library quickly and experiment in your own AWS environment.

Conclusion

AWS Lambda Powertools for TypeScript can help simplify, accelerate, and scale the adoption of serverless best practices within your team and across your organization.

The library implements best practices recommended as part of the AWS Well-Architected Framework, without you needing to write much custom code.

Since the library relieves the operational burden needed to implement these functionalities, you can focus on the features that matter the most, shortening the Software Development Life Cycle and reducing the Time To Market.

The library helps both individual developers and engineering teams to standardize their organizational best practices. Utilities are designed to be incrementally adoptable for customers at any stage of their serverless journey, from startup to enterprise.

To get started with AWS Lambda Powertools for TypeScript, see the official documentation. For more serverless learning resources, visit Serverless Land.